AI "reasoning" models don't reason at all

(twitter.com)

3 points | by doener a month ago

3 comments

  • aurizon a month ago

    AI 'reasoning' is allied to password cracking, where every combination of the allowed symbol set is stepped through until the locked door opens. Analogously, the AI creates and tests code until a version works, and likewise hacks code with a blizzard of variants until an entry point (a bug) is found, records it, and continues. The speed and parallelism do not make this a reasoned path; it is an exhaustive path that, hopefully, finds the hole. It also produces 'hallucinations', which a reasoning mind recognises but these early-stage AIs often do not.
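
    For reference, the password-cracking half of the analogy is easy to sketch. Below is a minimal Python version of exhaustive enumeration; check_password is a hypothetical oracle standing in for the locked door, not any real cracking API:

        import itertools

        SYMBOLS = "abcdefghijklmnopqrstuvwxyz0123456789"

        def brute_force(check_password, max_len=4):
            # Exhaustive search: try every string over SYMBOLS,
            # shortest first, until the oracle accepts one.
            for length in range(1, max_len + 1):
                for combo in itertools.product(SYMBOLS, repeat=length):
                    guess = "".join(combo)
                    if check_password(guess):
                        return guess
            return None

        # Toy usage: crack a 3-character password.
        print(brute_force(lambda g: g == "ab1"))  # -> "ab1"

    Nothing in this loop prunes the space; every candidate is generated and tested, which is the sense of 'exhaustive path' used above.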

    • JPLeRouzic a month ago

      LLMs are not based on brute-force algorithms. They are based on finding the most probable sequence of tokens (for text-based LLMs) to continue a prompt.

      To illustrate that, you can watch how DeepSeek "thinks": its traces are full of "Given the instructions, if we", "Alternatively, we can", "Also, if", "wait", "but", etc.
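
      For contrast with brute force, the core of LLM decoding is a loop that scores the whole vocabulary once per step and keeps a single token, rather than enumerating candidate strings. In this toy Python sketch, VOCAB and toy_logits are made-up stand-ins for a real model's vocabulary and forward pass:

          import math, random

          VOCAB = ["Given", "the", "instructions", ",", "wait", "but", "<eos>"]

          def toy_logits(context):
              # Hypothetical stand-in: a real model scores every
              # vocabulary token conditioned on the context.
              random.seed(len(context))
              return [random.uniform(-2.0, 2.0) for _ in VOCAB]

          def softmax(logits):
              exps = [math.exp(x) for x in logits]
              total = sum(exps)
              return [e / total for e in exps]

          def generate(prompt, max_tokens=10):
              context = list(prompt)
              for _ in range(max_tokens):
                  probs = softmax(toy_logits(context))
                  # Greedy decoding: keep the single most probable token;
                  # no enumeration over all possible token strings.
                  token = VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)]
                  if token == "<eos>":
                      break
                  context.append(token)
              return " ".join(context)

          print(generate(["Explain", "reasoning"]))

      Each step costs one pass over the vocabulary, so generation is linear in output length; nothing here resembles the exponential enumeration of a password cracker.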

      • aurizon a month ago

        Brute force is also comprehensive parametric exploration, something a human programmer often shortens through analysis, partly to save time. The AI can use massive parallelism to run down these alleys rather than exploring them sequentially, although a password 'guesser' can run 10,000 strings in parallel as well. It is not quite the same as exhaustively testing every character and length, but it is related.
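
        The parallel-guesser point can be made concrete by fanning the earlier exhaustive sketch out across workers. check_password is again a hypothetical oracle, and threads stand in for whatever parallelism a real cracker would use (processes for CPU-bound guessing):

            import itertools
            from concurrent.futures import ThreadPoolExecutor

            SYMBOLS = "abcdefghijklmnopqrstuvwxyz0123456789"

            def check_password(guess):
                # Hypothetical oracle standing in for the real lock.
                return guess == "zz9"

            def check_chunk(chunk):
                # Each worker exhaustively tests its own slice of the keyspace.
                for guess in chunk:
                    if check_password(guess):
                        return guess
                return None

            def parallel_brute_force(length=3, workers=8, chunk_size=10_000):
                candidates = ("".join(c) for c in
                              itertools.product(SYMBOLS, repeat=length))
                with ThreadPoolExecutor(max_workers=workers) as pool:
                    while True:
                        # Hand each worker a chunk; stop when the space is spent.
                        chunks = [list(itertools.islice(candidates, chunk_size))
                                  for _ in range(workers)]
                        chunks = [c for c in chunks if c]
                        if not chunks:
                            return None
                        for hit in pool.map(check_chunk, chunks):
                            if hit:
                                return hit

            print(parallel_brute_force())  # -> "zz9"

        The exploration is still exhaustive; parallelism only changes how fast the same space is covered, which is the distinction being drawn.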