Farzad
Chapter 9

The Transition Nobody's Preparing For

Part II: The Stakes

Consider a typical tax accountant - twenty years of experience, respected in his community, kids in good schools. He started using AI tools that do 80 percent of his routine work. He's more productive than ever. But he also sees where this is heading. He doesn't know what to tell his son, who's considering following him into the profession. The career that built his life may not exist in the form his son would inherit. That's not some hypothetical future in la-la land.

A lawyer who spent seven years building expertise in contract analysis. A junior developer who assumed grinding through entry-level work would teach her the skills to advance. A financial analyst whose entire career is built on the kind of pattern recognition AI now does in seconds. They all face the same question: If the work that defines me can be automated, then who am I?

Efficiency gains and productivity improvements are not the key question. This is about identity. And almost nobody is prepared for it.

The Agent Revolution Is Here

Let me show you something to drive home how disruptive this whole thing is. This book you're reading was written with AI agent assistance. Without it, this book would never have existed. Roughly 95%+ of the words in this book were not physically written by me.

How did that happen? The AI agent read through my entire YouTube transcript library, which I've curated over 4 years. I've talked in great depth about the coming disruption for a while - over 1,700 videos. What the AI agent discovered was 3-5 overarching themes that would make for an excellent book topic. One that I would be specifically well-positioned to deliver on. Then it went to my X account - which has over 350,000 followers and way too many posts - and studied how I write. My voice. My style. My cadence. Then it merged these two things together and whipped up another set of research agents to help itself fact-check and highlight potential gaps as it wrote. It came up with an overarching structure. It figured out what each chapter should be about. How it should be broken up into parts. What themes it should follow. And then it wrote the book.

Then I took that book, proofread it, changed lines and sections that I thought needed to be modified, and went back to the AI agent to help me do a final edit, proofread, and format pass for Amazon's book publishing standards. And now this book is in front of your face in either Kindle format, paperback, or hardcover. (Thanks for your support!)

This AI agent was Claude Code. You might be asking yourself how a coding tool can write a book. As it turns out, if you do stuff on a computer, everything is code. Using MS Word is fundamentally using code. Researching on a computer is fundamentally using code.

This book would never have existed without Claude Code. The thought of writing a book - something I've always wanted to do - has always been far too daunting. But with the help of AI agents, that friction point has been completely removed. I was fortunate enough to have already done the work by posting almost 2,000 videos. The AI agent helped me transform that into a format that wouldn't have existed otherwise. Yes - I could've hired a team to write a book on my behalf. But that would've cost me thousands - maybe more. This entire process on Claude Code cost maybe… $50? Probably less.

This type of insanely disruptive technology - one that can take a seemingly impossible task for an individual and make it not only achievable but very enjoyable - will spread like wildfire through the economy. And it isn't limited to writing books. I've been using AI agents to completely transform my research and writing stack for my YouTube videos. My views are up as much as 10x because of this. That has a direct impact on the AI's ROI - it generates more revenue than it costs to operate. It just so happens AI is a far better researcher and writer than I ever could be. What a shocking development (not really).

The kicker is that this system is fully autonomous. It has found 10+ other book topics I can write about in my style, voicing, and framing. All I have to do now is hit enter to get it started - and over time, it'll become easier and easier to automate and refine each sub-step in the process. If you see 10 books published by me in less than a year, you know what happened.

But let me explain what AI agents are. If you're reading this in early 2026, you're way ahead of a trend that's about to take the world by storm. If you're reading this in 2027, you've probably already caught up. ChatGPT, Grok, Gemini, and Claude as chatbots are version zero. You ask a question, you get an answer. Useful, but limited. You're still the one doing the work, with AI as a tool. AI agents are different.
An agent can:

- Receive a complex goal ("edit this book based on reader feedback")

- Break it into subtasks autonomously

- Execute each subtask (read files, make changes, run commands)

- Verify its own work

- Iterate when something fails

- Complete the entire workflow without constant human oversight

This is "AI as a worker."
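The loop behind that list is simple enough to sketch in a few lines of code. This is purely illustrative - the function names and the three-step plan below are invented for the example, and no real product (Claude Code included) works exactly this way - but it captures the plan / execute / verify / retry cycle that separates an agent from a chatbot.

```python
# Conceptual sketch of an agent loop. All functions here are hypothetical
# stand-ins for what would, in a real agent, be calls to an AI model and
# to tools like a file system or a shell.

def plan_subtasks(goal):
    """Break a complex goal into subtasks (a model does this in practice)."""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def execute(subtask):
    """Carry out one subtask (read files, make changes, run commands)."""
    return f"result of {subtask}"

def verify(result):
    """Self-check the work; a real agent might run tests or re-read output."""
    return result.startswith("result of")

def run_agent(goal, max_retries=3):
    """The core loop: plan, execute, verify, retry on failure, finish."""
    completed = []
    for subtask in plan_subtasks(goal):
        for _attempt in range(max_retries):
            result = execute(subtask)
            if verify(result):          # only move on once the check passes
                completed.append(result)
                break
        else:
            raise RuntimeError(f"Could not complete: {subtask}")
    return completed

results = run_agent("edit this book based on reader feedback")
print(len(results))  # prints 3
```

The key design point is the inner retry loop: a chatbot hands you its first answer, while an agent checks its own work and tries again before it ever reports back to you.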

Let me give you concrete examples from my actual workflow. The Skills System: I've built a library of "skills" that Claude Code can execute. MEMORY loads everything Claude needs to know about me - my projects, preferences, writing style, positions on key topics. DAILY-ENGINE runs my entire content production pipeline: scans news, generates topic ideas scored against a 49-point framework, writes scripts, fact-checks, prepares for publication. TOPIC-FINDER discovers high-impact video topics using validated criteria. SCRIPT-PIPELINE takes a topic from research through a full script with hooks and story structure. I wrote precisely zero code to do this. I just talked to the AI agent about what I needed - and it built it. Each of these used to require hours of my time. Now I say "let's go" and the agent orchestrates the entire workflow.

If you're on X right now, you're seeing this everywhere. Developers shipping entire features without writing code themselves. Writers producing content at 10x their previous rate. Researchers synthesizing information across hundreds of sources in minutes. Everyone is losing their minds because the capability jump happened so fast.

So what does this mean for the timeline? It really started in 2024 - not 2027. We're in it now - in the thick of it, and accelerating. These tools are already largely deployed. They're being used. By people like me, to produce content like this book, at speeds that weren't possible two years ago.

But what happens to the junior developer who thought they had years to build expertise? AI agents can already do significant portions of their job - likely all of it, to be honest. What about the content writer who assumed creativity was their moat? AI agents can research, outline, draft, and edit at scale - and soon they'll be able to do this in video as well, not just in writing. The analyst who spent years mastering Excel and SQL? AI agents can query databases, build models, and generate reports from natural language descriptions 100x faster, with far better accuracy.

The Agent Opportunity (For Those Who Adapt)

But this is also the biggest opportunity in a generation - if you adapt. The people who learn to orchestrate AI agents won't be displaced. They'll be amplified. A single person with the right agent setup can now produce what used to require a team. Not slightly more productive - an order of magnitude more productive. I'm one person. With Claude Code and the skills I've built, I am 10x better. None of this replaces my creativity or judgment. It amplifies them. I still decide what topics matter, what positions to take, what voice to use. But all the execution overhead - the research, the drafting, the editing, the formatting - gets compressed. And eventually, the agent will learn what I want to amplify. What topics matter to me. What positions are important to me. What my voice is.

Remember - I can point these AI agents at anything. I can have them study my watch history on YouTube. My liked posts on X. My news feed. My conversations. They will learn who I am down to the smallest detail, and eventually they will be able to essentially 'clone' me in the digital domain. All we need are smarter AI models, and they are coming faster than anyone can keep up with. As I make the last pass of this book on February 6th, 2026, Anthropic and OpenAI have both dropped new frontier models - Opus 4.6 and ChatGPT 5.3. Grok will be coming out with theirs in about 2 months. Gemini likely soon after that.

The question isn't whether AI agents will transform work. They already have. The question is whether you're learning to ride this wave or getting swept away by it.

The Identity Crisis Nobody's Talking About

The benefits are obvious to anyone paying attention. AI massively empowers builders, risk-takers, and the curious. If you're someone who creates things, who takes calculated risks, who learns continuously - you're going to be incredibly well off. AI is the ultimate force multiplier for people who already know how to generate value.

But I'm deeply distressed about those who will be blindsided or can't adapt. Not everyone has the resources, knowledge, or circumstances to position themselves for this transition. And for those people, the next few years could be devastating. For a lot of people, work is how they understand themselves - the paycheck is almost secondary. Their identity is wrapped up in their profession. Their social connections form through workplace relationships. Their sense of purpose and contribution comes from their job.

What happens when that gets disrupted?

Our tax accountant isn't worried about money. He's worried about meaning. Twenty years of expertise. Twenty years of being the person clients call when things get complicated. Twenty years of identity built around being good at something that matters. And now he watches an AI do most of it in minutes. Some will embrace the change and supercharge their abilities. Those who do will put everyone in their space who doesn't out of business. What happens to all the tax accountants who don't embrace AI?

I think we're going to see a psychological crisis alongside the economic one. People who defined themselves by their careers suddenly facing an existential question: If I'm not a lawyer, analyst, developer, writer - then who am I? Previous technological transitions didn't hit this issue as hard because they displaced physical labor while creating cognitive labor. People could shift their identity to new types of work. But when AI displaces cognitive labor, what's left? What does human work even mean in a world where AI can do most knowledge work?

As I'm sure you've noticed if you live in America, the areas of the country that were massively impacted by manufacturing moving overseas didn't suddenly become hubs for cognitive work. They've been ravaged by poverty, drug abuse, depression, and anxiety. What happens when the same dynamic starts hitting cognitive work? Are these folks going to suddenly all take on blue-collar jobs? And even if they do, what happens when robots come in and start doing those jobs far better than any human could - just like in the digital world?

I don't have a clean answer. No one does, really. But I think the psychological dimension of this transition is massively underestimated. The anger and confusion people will feel won't just be about money. It will be about meaning. And angry, confused people who feel their identity has been stolen are not a stable social foundation.

The Potential for Massive Social Unrest

I'll say it plainly: I think there's real potential for massive social unrest during this transition, and I think the odds of that unrest are far larger than anyone admits. I don’t think it’s 50/50. I think it’s 70/30. Perhaps 80/20. When large portions of the population find themselves economically marginalized while watching a small group thrive, the result is predictable. History has shown us what happens. It looks like collapse - social fabric tearing apart, institutions losing legitimacy, extremism filling the void left by broken social contracts.

I've said before that it's going to be inevitable for something like a universal basic income to be implemented in the United States. Not because policymakers suddenly become enlightened about redistribution. Because millions of people will be borderline about to turn violent, and the government will have no choice but to respond. A true UBI will only be a response to that pressure. The government will say, "Okay, fine, here's money" because the alternative - actual upheaval - is worse. That's the cynical but realistic view of how this plays out. And even with some form of UBI, the transition period is going to be messy. Money doesn't solve the identity crisis. Money doesn't restore the sense of purpose that came from work. Money keeps people fed and housed, which is essential, but it doesn't address the deeper disruption.

The Window Is Now

The window for preparing is now. Not when the disruption is obvious to everyone. Now. Developing new skills takes time. Building alternative income streams takes time. Shifting from labor income to capital income takes time. Accumulating the resources to weather a disruption takes time. Starting that process when the disruption is already underway is too late. The people who will navigate this successfully are the ones who see it coming and move early.

This is uncomfortable advice. I'm telling you to potentially make significant life changes based on predictions about the future. And predictions can be wrong. But the cost of being wrong in the direction I'm suggesting - developing additional skills that end up not being necessary - is low. The cost of being wrong in the other direction - dismissing the transition and being caught unprepared - is potentially catastrophic. I'd rather you prepare for something that doesn't hit as hard as expected than be blindsided by something you didn't see coming.

What the Unprepared Will Face

For those who don't see this coming, or can't reposition for reasons beyond their control, the next few years are going to be difficult. Jobs that seemed stable will disappear faster than new ones appear because of the sheer rate of change of the technology. Entire industries will contract while the new economy is still being built - much of it powered solely by AI systems. The gap between the disruption and the response will be filled with uncertainty and stress.

Income will become unreliable. The steady paycheck that middle-class life is built around will become even harder to maintain. Gig work and contract arrangements - already prevalent - will become even more dominant, with all the insecurity that entails. Social support systems will be overwhelmed. Unemployment insurance wasn't designed for this kind of displacement. Retraining programs won't be able to move people fast enough. Family and community networks will strain under the pressure.

And through all of it, there will be a psychological dimension. The uncertainty, the loss of identity, the feeling of being left behind while others thrive - these take a toll that shows up in mental health, relationships, and social cohesion. I'm not saying this to be alarmist. I'm saying it because sugarcoating the transition does a disservice. The people who understand what's coming can prepare. The ones who are told "it'll be fine, we'll adapt" are the ones who will be blindsided.

The Case Against My Thesis

I hope I've laid out a strong case for why AI disruption is happening faster than most people expect. Now let me steelman the other side. These are the strongest arguments against my thesis, presented as fairly as I can manage. If you're going to bet on my worldview, you should understand how it might be wrong.

Argument 1: AI Progress Could Plateau

The scaling laws that have driven AI improvement might hit fundamental limits. We've been picking the low-hanging fruit of AI capability - throwing more compute and data at transformer architectures and watching them get smarter. But there's no guarantee that trend continues. Maybe we've exhausted the easy gains. Maybe the next leap requires architectural breakthroughs that could take decades, delaying the disruption I'm describing. Maybe current AI is reaching the ceiling of what statistical pattern-matching can achieve, and true intelligence requires something we don't yet understand.

This is a serious argument. Anyone telling you they know for certain that AI progress will continue exponentially is likely wrong. We've been surprised by capability jumps, but we could equally be surprised by a plateau. The history of AI includes multiple "winters" where progress stalled for years. I still believe we're in a sustained capability explosion, and that it's all just getting started, but I hold that belief with humility. If GPT-7 is only marginally better than GPT-5, I'll reassess.

Argument 2: Regulatory Capture Could Slow Deployment By a Decade

Even if AI keeps improving, its deployment into the real economy could be dramatically slowed by regulatory friction. Unions fighting automation. Safety requirements that make deployment uneconomical. Liability frameworks that create so much legal risk that companies don't bother. Look at autonomous vehicles. The technology has been "ready" for deployment for years, depending on how you define ready. But regulatory uncertainty, liability questions, and political pressure from affected workers have kept full-scale deployment in limbo.

Multiply that across every industry. Healthcare AI faces FDA approval processes. Legal AI faces bar association resistance. Financial AI faces regulatory scrutiny. Every sector has incumbents with political influence who don't want to be disrupted. This argument isn't about whether AI can do the work - it's about whether AI will be allowed to do it. If the deployment timeline stretches from 5 years to 15 years, my urgent "prepare now" message looks alarmist. I think the economic pressure eventually overwhelms the political resistance, but I could be wrong about the timeline by a factor of two or three.

Argument 3: Economic Disruption Could Trigger Protectionist Backlash

If AI starts displacing workers at the scale I'm predicting, the political response might not be UBI and adaptation. It might be AI bans, robot taxes, and "human work" mandates. We've seen this before. The Luddite movement wasn't irrational - it was workers correctly perceiving that machines would destroy their livelihoods, especially in the short to medium term. They weren't prepared to adjust and move with the times. They lost that fight, but the political dynamics today are different. Modern democracies are more responsive to mass unemployment. Social media amplifies grievances. Politicians need votes from the disrupted.

Imagine a world where the US passes laws requiring human labor for certain job categories. Where "Made by Humans" becomes a premium certification. Where companies face penalties for AI-driven layoffs. This would dramatically slow the transition and change who benefits from it. I'm not endorsing this position - just highlighting it as a potential outcome. I think this outcome is less likely than messy adaptation, but it's not impossible. And if it happens, my investment thesis around AI-leading companies would need revision.

Argument 4: China Could Leapfrog the US in Ways I'm Not Seeing

I wrote about China's structural advantages in Chapter 7. But let me extend that into a real bear case. What if China isn't just catching up - what if they're pulling ahead? They're already vertically integrated in ways the US isn't. They control more of their supply chain. They have access to state support that Western companies can't match. What if Chinese AI development, despite current limitations, takes a different path that proves superior? Their approach to data collection, their willingness to deploy at scale, their state coordination - these are genuine advantages. What if the narrative of American technological superiority is a comforting story we tell ourselves while the center of gravity shifts East?

I don't think this is the most likely outcome. American dynamism, capital markets, and innovation culture are real advantages. But the confident assumption that the US will lead the AI era is exactly the kind of assumption worth questioning.

Argument 5: I Could Just Be Wrong About Everything

What if AI turns out to be a massive net positive for almost everyone? What if people adapt far better than I expect? What if the transition is smoother than any historical precedent suggests? Humans have repeatedly demonstrated remarkable adaptability. Every major technological transition has produced hand-wringing predictions of mass unemployment and social collapse. The Luddites. The automation scares of the 1960s. The "jobless recovery" fears of the 2000s. Each time, the doom-and-gloom predictions proved overblown. New jobs emerged. People retrained. Society adapted. What if AI follows this pattern? What if the "middle 60% gets crushed" prediction is just the latest iteration of a fear that never quite materializes?

What if the benefits of AI distribute more broadly than I expect, because that's what the benefits of technology have always done? I've built my thesis on the assumption that "this time is different" because AI can do cognitive work, not just physical work. But maybe that distinction doesn't matter as much as I think. Maybe humans will find new ways to create value that I can't imagine, just as they always have. Maybe the jobs of 2035 are as unimaginable to me today as "social media manager" was in 1995. I'm sure that will be the case.

And the uncomfortable truth: I might have skin-in-the-game bias working in reverse. My investment thesis depends on disruption being real and concentrated. If AI benefits distribute broadly and smoothly, my concentrated position in Tesla looks less genius and more lucky. Maybe I'm subconsciously emphasizing disruption because it validates my investment choices.

I still believe my thesis. I think this time really is different. But I hold that belief with genuine uncertainty. If in 2030 we look back and see smooth adaptation, broad benefit distribution, and minimal social disruption, I will have been wrong about the most important prediction in this book. And I think there's a real chance - maybe 20-30% - that I am.

Why I Still Believe My Thesis

Having laid out these bear cases fairly, let me explain why I still believe what I believe.

On AI progress: The underlying drivers - compute, data, algorithms, and investment - are all still accelerating. Even if we hit diminishing returns on current architectures, the economic incentive to find new approaches is overwhelming. Tens of billions of dollars are flowing into AI. The smartest people in the world are working on this problem. The biggest AI companies are going public so they can raise massive capital from the public markets to fund their growth. This train ain't stopping. Not when AI agents, self-driving cars, and humanoid robots are unbelievably useful, cheaper, and available at scale.

On regulatory friction: Economic pressure eventually wins. Companies that can do more with less will outcompete those that can't. Countries that embrace AI will outcompete those that don't. The friction is real but not permanent.

On protectionist backlash: History shows technology wins in the long run. The Luddites lost. Agricultural automation proceeded despite resistance. The question is timeline, not outcome.

On China: Genuine competition is good for the world. Even if China advances faster than I expect, that doesn't invalidate the thesis that this technological revolution is happening. It changes who captures the value, not whether the value exists.

I've presented these bear cases because intellectual honesty requires it. Conviction without doubt is usually a red flag. I want you to understand both why I believe what I believe and why reasonable people might disagree.

What's At Stake

If you've read this far, you're probably not the person I'm most worried about. The people who will be blindsided are the ones who won't read books about AI disruption. They're living their lives assuming continuity. But knowing what's coming creates responsibility. The family members, friends, and colleagues who trust your judgment - help them see what's coming. Not to create panic, but to create action. Maybe give them this book. Or buy them their own copy. I sure wouldn't mind the latter (or the former). This is what's at stake. Part III is about what to do about it.

PART III: WHAT TO DO
