{"id":1541172,"date":"2025-06-11T17:40:00","date_gmt":"2025-06-11T21:40:00","guid":{"rendered":"https:\/\/bugaluu.com\/news\/?p=1541172"},"modified":"2025-06-11T17:40:00","modified_gmt":"2025-06-11T21:40:00","slug":"do-ai-models-think","status":"publish","type":"post","link":"https:\/\/bugaluu.com\/news\/do-ai-models-think\/1541172\/","title":{"rendered":"Do AI Models Think?"},"content":{"rendered":"<p><span class=\"field field--name-title field--type-string field--label-hidden\">Do AI Models Think?<\/span><\/p>\n<div class=\"clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item\">\n<p><a href=\"https:\/\/neuburger.substack.com\/p\/do-ai-models-think\"><em>Authored by Thomas Neuburger via &#8220;God&#8217;s Spies&#8221; Substack,<\/em><\/a><\/p>\n<p><em>AI can\u2019t solve a problem that hasn\u2019t been previously solved by a human.<\/em><\/p>\n<p>&#8211; Arnaud Bertrand<\/p>\n<p>A lot can be said about AI, but there are few bottom lines. Consider these my last words on the subject itself. (About its misuse by the national security state, I\u2019ll say more later.)<\/p>\n<h2>The Monster AI<\/h2>\n<p>AI will bring nothing but harm. As I said earlier, AI is not just a disaster for our political health, though yes,\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=vG7CvbccdVM\">it will be that<\/a>\u00a0(look for Cadwalladr\u2019s line \u201cbuilding a techno-authoritarian surveillance state\u201d). But AI is also a disaster for the climate. It will hasten the collapse\u00a0<strong>by decades<\/strong>\u00a0as usage expands.<\/p>\n<p><em>(See the\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=LPZh9BOjkQs\">video below<\/a>\u00a0for why AI models are massive energy hogs. See\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=aircAruvnKk\">this video<\/a>\u00a0to understand \u201cneural networks\u201d themselves.)<\/em><\/p>\n<p>Why won\u2019t AI be stopped? 
Because the race for AI is not really a race for tech. It&#8217;s a greed-driven race for money, a lot of it. Our lives are already run by those who seek money, especially those who already have too much. They&#8217;ve now found a way to feed themselves even faster: by convincing people to do\u00a0<a href=\"https:\/\/www.reddit.com\/r\/aipromptprogramming\/comments\/1212kmm\/according_to_chatgpt_a_single_gpt_query_consumes\/\">simple searches with AI<\/a>, a\u00a0<a href=\"https:\/\/www.energy.gov\/articles\/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers\">gas-guzzling death machine<\/a>.<\/p>\n<p>For both of these reasons \u2014 mass surveillance and climate disaster \u2014\u00a0<strong>no good will come from AI. Not one ounce.<\/strong><\/p>\n<h2>An Orphan Robot, Abandoned to Raise Itself<\/h2>\n<p>Why does AI persist in making mistakes? I offer\u00a0<a href=\"https:\/\/x.com\/ThomasNeuburger\/status\/1931964293441855594\">one answer below<\/a>.<\/p>\n<p>AI doesn\u2019t think. It does something else instead. For a full explanation, read on.<\/p>\n<h2>Arnaud Bertrand on AI<\/h2>\n<p>Arnaud Bertrand has the best explanation of what AI is at its core. It\u2019s not a thinking machine, and its output is not thought. It\u2019s actually the opposite of thought \u2014 it\u2019s what you get from a freshman who hasn\u2019t studied but has learned a few impressive words and uses them to sound smart. If the student succeeds, you don\u2019t call it thought, just a good emulation.<\/p>\n<p>Since Bertrand has put\u00a0<a href=\"https:\/\/x.com\/RnaudBertrand\/status\/1931932881162760524\">the following text on Twitter<\/a>, I\u2019ll print it in full. The expanded version is a\u00a0<a href=\"https:\/\/arnaudbertrand.substack.com\/p\/apple-just-killed-the-agi-myth\">paid post<\/a>\u00a0at his Substack site. 
Bottom line: He\u2019s exactly right. (In the title below, AGI means\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence\">Artificial General Intelligence<\/a>, the next step up from AI.)<\/p>\n<h2><strong>Apple just killed the AGI myth<\/strong><\/h2>\n<p><em>The hidden costs of humanity&#8217;s most expensive delusion<\/em><br \/>\nby Arnaud Bertrand<\/p>\n<p>About 2 months ago I was having an argument on Twitter with someone telling me they were \u201creally disappointed with my take\u201d and that I was \u201ccompletely wrong\u201d for saying that AI was \u201cjust an extremely gifted parrot that repeats what it&#8217;s been trained on\u201d and that this wasn\u2019t remotely intelligence.<\/p>\n<p>Fast forward to today and the argument is now authoritatively settled: I was right, yeah! \ud83c\udf89<\/p>\n<p>How so? It was settled by none other than Apple, specifically their Machine Learning Research department, in a seminal research paper entitled \u201cThe Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity\u201d that you can find here (<a href=\"https:\/\/ml-site.cdn-apple.com\/papers\/the-illusion-of-thinking.pdf\">https:\/\/ml-site.cdn-apple.com\/papers\/the-illusion-of-thinking.pdf<\/a>).<\/p>\n<p><em><strong>\u201cCan \u2018reasoning\u2019 models reason? Can they solve problems they haven\u2019t been trained on? No.\u201d<\/strong><\/em><\/p>\n<p>What does the paper say? 
Exactly what I was arguing: AI models, even the most cutting-edge Large Reasoning Models (LRMs), are no more than very gifted parrots with basically no actual reasoning capability.<\/p>\n<p><strong>They\u2019re not \u201cintelligent\u201d in the slightest, at least not if you understand intelligence as involving genuine problem-solving instead of simply parroting what you\u2019ve been told before without comprehending it.<\/strong><\/p>\n<p>That\u2019s exactly what the Apple paper was trying to understand: can \u201creasoning\u201d models actually reason? Can they solve problems that they haven\u2019t been trained on but would normally be easily solvable with their \u201cknowledge\u201d? The answer, it turns out, is an unequivocal \u201cno\u201d.<\/p>\n<p>A particularly damning example from the paper was this river crossing puzzle: <em>imagine 3 people and their 3 agents need to cross a river using a small boat that can only carry 2 people at a time. The catch? A person can never be left alone with someone else&#8217;s agent, and the boat can&#8217;t cross empty &#8211; someone always has to row it back.<\/em><\/p>\n<p>This is the kind of logic puzzle you might find in a children\u2019s brain-teaser book &#8211; figure out the right sequence of trips to get everyone across the river. The solution only requires 11 steps.<\/p>\n<p>Turns out this simple brain teaser was impossible for Claude 3.7 Sonnet, one of the most advanced &#8220;reasoning&#8221; AIs, to solve. It couldn&#8217;t even get past the 4th move before making illegal moves and breaking the rules.<\/p>\n<p>Yet the exact same AI could flawlessly solve the Tower of Hanoi puzzle with 5 disks &#8211; a much more complex challenge requiring 31 perfect moves in sequence.<\/p>\n<p>Why the massive difference? The Apple researchers figured it out: Tower of Hanoi is a classic computer science puzzle that appears all over the internet, so the AI had memorized thousands of examples during training. 
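Both quoted move counts are easy to check mechanically. The sketch below is a hypothetical Python check, not code from the Apple paper: a brute-force breadth-first search over river-crossing states (assuming the constraint that an actor may be with another actor's agent only when their own agent is also present, applied on both banks and in the boat; the actor/agent labels are invented), plus the classic Tower of Hanoi recursion.

```python
from collections import deque
from itertools import combinations

# Hypothetical labels (not from the paper): actors A1-A3, agents a1-a3,
# where agent "a1" belongs to actor "A1", and so on.
ACTORS = frozenset({"A1", "A2", "A3"})
AGENTS = frozenset({"a1", "a2", "a3"})
EVERYONE = ACTORS | AGENTS

def safe(group):
    """A bank (or boat) is safe if no actor is with another actor's
    agent while their own agent is absent."""
    for actor in group & ACTORS:
        own_agent = actor.lower()                 # "A2" -> "a2"
        rival_agents = (group & AGENTS) - {own_agent}
        if rival_agents and own_agent not in group:
            return False
    return True

def river_crossing_trips():
    """Breadth-first search for the fewest one-way boat trips."""
    start = (EVERYONE, "L")        # (people on the left bank, boat side)
    goal = (frozenset(), "R")
    parents = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            trips = 0
            while parents[state] is not None:     # walk back to the start
                state = parents[state]
                trips += 1
            return trips
        left, side = state
        bank = left if side == "L" else EVERYONE - left
        # The boat holds 1 or 2 people and never crosses empty.
        for n in (1, 2):
            for group in map(set, combinations(bank, n)):
                if not safe(group):               # constraint holds in the boat too
                    continue
                new_left = left - group if side == "L" else left | group
                if not (safe(new_left) and safe(EVERYONE - new_left)):
                    continue
                nxt = (frozenset(new_left), "R" if side == "L" else "L")
                if nxt not in parents:
                    parents[nxt] = state
                    queue.append(nxt)
    return None

def hanoi(n, src, aux, dst):
    """Classic recursion: full move list for n disks from src to dst."""
    if n == 0:
        return []
    return hanoi(n - 1, src, dst, aux) + [(src, dst)] + hanoi(n - 1, aux, src, dst)

print(river_crossing_trips())        # fewest crossings for the 3+3 puzzle
print(len(hanoi(5, "A", "B", "C")))  # 2**5 - 1 = 31 moves
```

Under these assumptions the search finds a shortest solution of 11 one-way trips, matching the figure quoted above, and the Hanoi recursion produces 2**5 - 1 = 31 moves. The exhaustive search also shows how little "reasoning" the puzzle actually demands: the state space is tiny.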
But a river crossing puzzle with 3 people? Apparently too rare online for the AI to have memorized the patterns.<\/p>\n<p><strong>This is all evidence that these models aren&#8217;t reasoning at all. <\/strong>A truly reasoning system would recognize that both puzzles involve the same type of logical thinking (following rules and constraints), just with different scenarios. But since the AI never learned the river crossing pattern by heart, it was completely lost.<\/p>\n<p>This wasn\u2019t a question of compute either: the researchers gave the AI models unlimited token budgets to work with. But the really bizarre part is that for puzzles or questions they couldn\u2019t solve &#8211; like the river crossing puzzle &#8211; the models actually started thinking less, not more; they used fewer tokens and gave up faster.<\/p>\n<p>A human facing a tougher puzzle would typically spend more time thinking it through, but these &#8216;reasoning&#8217; models did the opposite: they basically \u201cunderstood\u201d they had nothing to parrot so they just gave up &#8211; the opposite of what you&#8217;d expect from genuine reasoning.<\/p>\n<p>Conclusion: they\u2019re indeed just gifted parrots, or incredibly sophisticated copy-paste machines, if you will.<\/p>\n<p>This has profound implications for the AI future we\u2019re all sold. Some good, some more worrying.<\/p>\n<p><strong>The first one being: no, AGI isn\u2019t around the corner. This is all hype. 
In truth we\u2019re still light-years away.<\/strong><\/p>\n<p>The good news about that is that we don\u2019t need to worry about having &#8220;AI overlords&#8221; anytime soon.<\/p>\n<p><strong>The bad news is that we may have trillions in misallocated capital.<\/strong><\/p>\n<\/div>\n<p>      <span class=\"field field--name-uid field--type-entity-reference field--label-hidden\"><a title=\"View user profile.\" href=\"https:\/\/cms.zerohedge.com\/users\/tyler-durden\" class=\"username\">Tyler Durden<\/a><\/span><br \/>\n<span class=\"field field--name-created field--type-created field--label-hidden\">Wed, 06\/11\/2025 &#8211; 13:40<\/span><\/p>\n<p><a href=\"https:\/\/www.zerohedge.com\/ai\/do-ai-models-think\" target=\"_blank\" class=\"\">https:\/\/www.zerohedge.com\/ai\/do-ai-models-think<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Do AI Models Think? Authored by Thomas Neuburger via &#8220;God&#8217;s Spies&#8221; Substack, AI can\u2019t solve a problem that hasn\u2019t been previously solved by a 
human&#8230;.<\/p>\n","protected":false},"author":0,"featured_media":1541173,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1541172","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","wpcat-1-id"],"_links":{"self":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts\/1541172","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/comments?post=1541172"}],"version-history":[{"count":0,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts\/1541172\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/media\/1541173"}],"wp:attachment":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/media?parent=1541172"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/categories?post=1541172"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/tags?post=1541172"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}