
Why and How I Believe We’ll Attain AGI by 2025-2028


I have a strong intuition about how we’ll achieve both AGI and consciousness
in machines.

Keep in mind: it’s just an intuition. And I’m not a triple Ph.D. in AI or
anything. But I don’t think I—nor anyone else—has anything solid to stand on
with this stuff, so intuition / hypothesis is what you’ll get here. So with
that throat-clearing out of the way, let’s get into it.

First, a definition

To start, here’s how I’m defining AGI.

An AGI is a synthetic, autonomous intelligence system that:

1. can process information and apply it to new situations
2. can do this for any intelligence-based task
3. can do this as well as or better than an average human professional
   in a given field

The key is disparate components talking to each other

This is my hypothesis for how we’ve developed our present human
capabilities. Our cognitive evolution comprises:

  • Goal-driven creativity

  • Personal experiences

  • The perception of control or agency

I think these attributes stem from different regions of our brains (and
minds) that are:

a) Distinct from one another, and
b) Interlinked in numerous ways.

I got this idea in 2014 after reading Sam Harris’ Waking Up. In that book,
Sam talks about how you can lose major components of your humanity by
disrupting either the components themselves (like having a tumor present),
or the connections between components (like from a penetrating head
injury).

The best example is the split-brain experiments, where the two hemispheres
are surgically separated (by severing the corpus callosum) and one side
does things that the other side then explains that it did, when we know
that it didn’t. You can prompt the disconnected right hemisphere to do
something (say, knock over a glass), and the left, speaking hemisphere
will claim “the wind blew it over”, an explanation it just made up.

Then there are multiple examples where someone gets a tumor in their brain
and suddenly loses the ability to control impulses. Like the serial killer
who didn’t know why he wanted to kill people, only to find out he had a
massive tumor in the spot that can cause this.

❝  

If you don’t believe we’re a functional illusion, try to think of something
creative and ask yourself whose voice you’re hearing. You have no idea how
that’s getting to you. It might as well be an alien sending you signals.

In other words, we’re not seeing perfection. What we have going on in our
minds is a very precarious balancing act where things just barely work, and
if you harm any of the components or the communication paths, things go
wonky in very strange ways. But the result of them working well is a fully
functioning human. And a miracle.

Evolution had hundreds of millions of years to strike this precarious
balance. It had all that time to tweak the components themselves (having
enough neurons, synapses, etc.), and to tweak the connections between these
components in precisely the right way. I.e., sending precisely the right
signals, between the right components, at the right time, to generate the
functional miracle/illusion that we live every day.

Looking at the components in detail


So now let’s pull that hack-a-mind model into AI.

What are the components, communication paths, and knobs and switches we can
play with?

Luckily, we have some knowledge of that from public information shared by
OpenAI, Google, Meta, and others. Here are some of the major knobs they’re
tweaking in their training to get to the current state of the art in 2023.

  • The size of the neural net (number of nodes and layers)

  • The amount of data trained on

  • The diversity and quality of the training data

  • The amount of compute used to train

  • Time spent training

  • The use of multiple models interacting with each other

There is tons of complexity in all of those components. And infinite
combinations of size, what connects to what, what gets shared with what, in
what order, etc.
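
To make that search space concrete, here’s a minimal sketch of those knobs
as a single training configuration. All the names are hypothetical, and
the numbers are rough GPT-3-scale illustrations from the published paper,
not any lab’s actual code:

    # A hypothetical config capturing the knobs above. None of these
    # names come from OpenAI, Google, or Meta code.
    from dataclasses import dataclass

    @dataclass
    class TrainingConfig:
        num_layers: int          # size of the neural net (depth)
        hidden_size: int         # size of the neural net (width)
        training_tokens: float   # amount of data trained on
        data_sources: list       # diversity and quality of the data
        training_flops: float    # compute used to train
        training_days: int       # time spent training
        interacting_models: int  # models talking to each other

    # Iterating doesn't require inventing a new architecture, just
    # turning these dials and rearranging what talks to what.
    config = TrainingConfig(
        num_layers=96,
        hidden_size=12288,
        training_tokens=3e11,
        data_sources=["web", "books", "code"],
        training_flops=3e23,
        training_days=30,
        interacting_models=1,
    )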

But we don’t have to wait millions of years to make our iterations.

OpenAI didn’t invent a new neural net to go from GPT-3 to GPT-4. It’s the
same stuff in the list above, just more of this, less of that, put together
in different configurations.
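
The last knob in that list, multiple models interacting with each other,
is worth a sketch of its own. Here’s a toy, entirely hypothetical
propose-and-critique loop; no lab wires models together exactly like this,
but it shows the shape of “disparate components talking to each other”:

    # Toy stand-ins for two separate models. In a real system these
    # would be calls to actual trained models; stubs keep it runnable.
    def propose(prompt: str) -> str:
        return f"draft answer to: {prompt}"

    def critique(draft: str) -> str:
        return f"weaknesses found in: {draft}"

    def interact(prompt: str, rounds: int = 2) -> str:
        # One component generates, another reviews, and the feedback
        # is routed back: distinct components, interlinked.
        draft = propose(prompt)
        for _ in range(rounds):
            feedback = critique(draft)
            draft = propose(f"{prompt} (revised given: {feedback})")
        return draft

    print(interact("summarize the case for scaling laws"))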

The intuition

The core intuition—and it really is just that—is that the whole thing is a
massive hack. Intelligence. Consciousness. All of it.

❝  

Real magic, in other words, refers to the magic that is not real, while the
magic that is real, that can actually be done, is not real magic.

 Daniel C. Dennett

The brain is mystical because we don’t understand it. But we can see
physical differences in the brains of the smartest people and average
people, and we know its magic allure predictably breaks when we interrupt
specific pieces or communication paths. And we seem to be playing
with very similar analogs right now in our current tech.

But we don’t need to understand the human brain. It doesn’t need to be the
hack we end up with. That was just how evolution did it. I really don’t see
us coming up with a replication of the brain—just as evolution would
probably do it differently if you ran the experiment 1,000 more times.

But given the tens of billions that will likely be spent on this over the
next few years, I see too many chances for one of those hacks to match,
and then exceed, the information processing and generalization of our
biological version.

Think of all the advantages we have:

  • We’re just starting to understand how GPT-4 did what it did

  • We’re just starting to optimize TPUs

  • We will soon be able to train on far more and better data

  • We’ll soon be able to train on thousands of times more compute

  • We’ll soon be able to train for thousands of times as long

  • We have multiple teams and companies working on combinations of all
    these knobs and levers to find hacks that multiply performance

  • The most powerful companies in the world are pivoting to spend billions
    on this

Now combine that with the (likely) fact that the human brain is not some
paragon of performance, but rather just the result of one system’s very slow
hack. And we can barely remember where our car keys are or stop ourselves
from ending civilization due to petty warfare.

I think human intellectual performance will soon be shown for what it is,
which is an arbitrary tick on a scale that goes very much higher than we can
imagine. And I don’t see how our manipulation of all these variables
doesn’t propel us way above it very soon.

What about consciousness?

Right, and then there is consciousness. Dennett said it best when he said
it was basically a series of hacks. I think it comes down to disparate
parts of our brain’s processing not knowing about each other, and
basically doing a handoff.

And I think we get free will the same way—or the sensation of it anyway.
Basically, I think groups of humans who believe in agency and moral
responsibility have far better survivability than those that don’t. So we
evolved to have that sensation of ownership of our actions.

I don’t see how we won’t either 1) engineer that into an AI, or 2) watch it
emerge naturally as an adaptive advantage for the reason above. Again, it’s
just another hack.

Summary

I know this all sounds really down on humans. Like, really? We’re just a
bunch of biological meat hacks with no specialness?

Yes, but I didn’t say we weren’t wonderful and special. We are. That’s not
mutually exclusive with being a collection of evolution-created meat hacks.

I’m only saying that I think we’re going to be able to surpass ourselves.
And sooner rather than later.

NOTES

  1. I am only predicting AGI for 2025-2028. While I think consciousness
    is quite possible, it’s not nearly as economically interesting. Nor is
    it required for AGI.

May 23, 2025
