Thread by Balaji (@balajis), almost 3 years ago

Good thread. I agree with much of this. Though I do think there are many areas of the economy where AI-aided is way better than human-only. We’re already seeing improvements in search, coding, writing, art, and design. t.co/NX1DLxE88a

ABCDs of apocalypse:

AGI: AI may kill us, so halt progress
Bitcoin: hyperinflation may ruin us, so end all states
Climate: warming may harm us, so halt growth
Democracy: authoritarians may control us, so risk nuclear war

The risks are uncertain. The “cures” are certain. And bad. t.co/974iuhYpEC

There are so many technically possible secular apocalypses: Carrington Events, nuclear exchanges, designer viruses, even quantum computing. But those working on preventing them sometimes help bring them about. The COVID lab leak may be an example. t.co/UVUgx5EykO

One observation: the AGI instant death scenario is a secular match to Pascal’s Wager. Yes, if an omnipotent being existed, it’d be dumb not to fear it and devote your life to it. Even if you think the probability is low, the downside is infinite, right? t.co/974iuhYpEC
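The infinite-downside structure of the wager can be made concrete with a little arithmetic. A minimal sketch, where the probability and loss figures are purely illustrative assumptions:

```python
import math

# Sketch of the Pascal's-Wager structure: with an infinite downside, the
# expected loss is infinite for ANY nonzero probability, so the actual
# probability estimate stops mattering. All numbers are illustrative.
p_doom = 1e-9                 # arbitrarily small, but nonzero (assumption)
finite_loss = 8e9             # a finite catastrophe, e.g. billions of lives
infinite_loss = math.inf      # the "everyone dies, forever" framing

print(p_doom * finite_loss)   # ≈ 8: finite, scales with p, can be weighed
print(p_doom * infinite_loss) # inf: swamps everything, regardless of p
```

This is why the wager framing is hard to argue with on its own terms: once the loss is coded as infinite, shrinking the probability changes nothing.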

The counterargument is: we can see developments in AI accumulating in a way we don’t for God. The counter-counter: yes, but many other apocalypses have accumulating evidence too. And many religious people believe all is evidence of God’s will, e.g. physics is evidence of a designer.

I may write more on this. But in short: the hard takeoff + instantly invent nanotech + kill everyone scenario has many unproven steps. Like — nanotech doesn’t exist. And — superintelligence isn’t omnipotence. Adversarial AI exists. And — embodiment isn’t trivial.

In many ways, the AGI debate presumes the consequent. If you *assume* an omnipotent being, of course you can’t outsmart it. Then you let it take every sci-fi tech with some calculations around it and instantly push it to production. Why doesn’t AGI instantly invent time travel? t.co/7eAHD0ceWL

Another issue is equating “big” with “infinite”. It’s superintelligent…thus omnipotent. It’ll get smart fast…infinitely fast. It outsmarts one human at one thing…so it’ll outsmart all humans at everything. Omnipotence can’t be beat. Finite systems often can, though. t.co/Iz5LWPUhij

Yet another is equating the gray goo scenario with the “persuade some humans” scenario. In the scenario where it controls robots to invent as-yet-uninvented nanotech, the AGI can act alone. Once you have a few cultists listening to AI whispers, it’s more like Aum Shinrikyo. t.co/NuUBHhiWP4

There are many examples through history where people listening to whispers from their god executed terrorist attacks. The AI-persuades-humans scenario looks like secular Al Qaeda. The problem then isn’t an omnipotent AI (it’s just an LLM) but an impressionable, fanatical human. t.co/7WfudE5QXY

Another failure mode for AGI-instantly-wipes-us-out is like the Drake equation. You can get wildly different results from reasonable-seeming assumptions. But the observed answer so far is zero aliens. Point: could a seemingly plausible scenario actually be zero probability? t.co/kWRqQCImKT
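The Drake-equation point can be illustrated with a quick calculation. Both parameter sets below are made-up assumptions chosen only to show how individually defensible-looking inputs produce answers many orders of magnitude apart:

```python
# Illustrative Drake-equation sketch: the same formula with two
# "reasonable-seeming" parameter sets yields wildly different answers.
# All parameter values are assumptions for illustration only.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L (expected detectable civilizations)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Optimistic-but-plausible inputs
optimistic = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.2, L=1e9)

# Pessimistic-but-plausible inputs
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.001, f_i=0.001, f_c=0.01, L=100)

print(optimistic)   # ≈ 1.2e8 civilizations
print(pessimistic)  # ≈ 2e-8: effectively zero
```

Sixteen orders of magnitude of spread from plausible inputs, while the observed count so far is zero. This is the sense in which a chain of individually reasonable-sounding steps can still multiply out to nothing.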

With all that said, consider this 3-point scale:

1) AI is a stochastic parrot
2) AI will create billions in value
3) AI may kill us all

I’d be a 2.5, but not a 3. I don’t think hard takeoff + gray goo is likely for reasons outlined above. But I do think robot war may happen. t.co/mX8gUaS8CO

Political backlash to AI is underrated. Will you *actually* be able to make billions disrupting the Democrat base from within the bluest city in the bluest state in the union, when SF and CA are both already facing enormous deficits…or will they tax and regulate it to death? t.co/8e5jNNRo93

Just for perspective, California is reporting a $20B+ budget deficit and says that might even triple. And 500k people have left California for places like Texas. So where will they find $20B? Ah. Those evil AI guys in SF with billions, taking all the jobs. The obvious target. t.co/44Q3Ghya2X t.co/XFhsZP3sEN
