For context, I watched this video first, and then this one, by two experts on AI. One is known as the “Godfather of AI,” and the other coined the term “AI safety.” That might not seem like a big deal, but it was 2011, and we were laughably far from anything like what we have now.
Then there’s this, put together by a team of researchers and led by someone who worked on governance at OpenAI and then left because the company was “recklessly racing to be the first” to reach AGI (Artificial General Intelligence: a computer that can do anything a human can). He and a group of other current and former employees of OpenAI, Google DeepMind, and Anthropic (the makers of ChatGPT, Gemini, and Claude) signed an open letter calling for better safety practices and regulation in a field that has almost none.
I went from having a comfortable evening to a full-blown existential crisis in about 40 minutes.
I began writing a heavily cited article pointing to all the things that were mind-blowing, but I believe the works speak for themselves. So, instead, I’ll talk about how it made me feel.
I didn’t think we were (possibly) five years from UBI (Universal Basic Income). That always seemed like “Star Trek” fantasy to me: the utopian society where everyone has everything they need.
(Though to be fair, that’s because their entire society has undergone a total psychic change and now works collaboratively to support the species as a whole. This is more like the beginning of Avatar, really.)
But it tracks. If AGI is around the corner (years instead of decades), then it becomes cheaper and cheaper to have autonomous workers do the things humans used to. So unemployment rates will skyrocket. Not like 15% (the peak during Covid). More like 50-80%. Which means HUGE swaths of the population won’t be making money. So we’ll end up with UBI, or a lot of dead people. Maybe both.
What does that look like for my kids? I mean, sure, they could be plumbers. Until robots do as good a job and even that’s not safe. Then what?
That’s what I’m talking about. I had a feeling about what the world would look like for a while. And now I really don’t.
Even if these guys – the speakers, the researchers, Elon Musk – are wrong about the dangers, are they going to be wrong about the timeline? There’s a common phrase making the rounds on Substack: “You have 36 months.” That’s how long it’ll be before AI’s capabilities create what’s called a “permanent underclass,” leaving only those above that line and those below it.
Even if they’re wrong about the timeline, what’s it going to look like when my kids are old enough to have kids? How will society have changed?
I feel like I’m staring down the barrel of something as large as the Industrial Revolution, with the benefit of hindsight in advance, and I’m a farrier. And everyone, on both sides of the fence, agrees that the AI Revolution will dwarf the Industrial one.
What do I do with that? How do you prepare your kids for a world you can’t imagine?



Thanks for the links!