• Niclas Nilsson
  • The philosopher's stone of AI

    The theory that foundational LLMs currently are, or soon will be, strong enough to achieve AGI is the trope of our time. One common narrative claims that foundational models are already strong enough and don't need much refinement.

    All that's needed is a substantial amount of creative engineering and elbow grease - finding just the right way to assemble agents through supportive tooling and well-shaped, large-scope contexts around core models. AGI is around the corner.

    LLMs are powerful and useful today, but the gap between "powerful" and "AGI" is chasmic. I think it's fair to say that this theory is founded mostly in faith.

    Absurd theories can be right, though they rarely are.

    That foundational models possess universally useful properties, of infinite proportions, in a black-box format is a modern, digital analog to the philosopher's stone of medieval alchemy.

    LLMs obviously exist and the philosopher's stone doesn't. LLMs obviously create value today and the philosopher's stone never could. However...

    As long as one possesses the philosopher's stone, and knows exactly how to manipulate it, one can produce vast amounts of gold and the elixir of life, immortality.

    As long as one possesses the right foundational model, and pokes and prods the black-box that it is in just the right way, one can create infinite intelligence and productivity gains.

    It's an attractive prospect and some of us have very strong incentives to have faith in it.