Farzad
Chapter 8

What Government Must Do (And Probably Won't)

Part II: The Stakes

I want to start this chapter by saying something that might surprise people who follow my content. I'm pro-regulation. And I mean that genuinely. Most people assume that if you're bullish on AI and technology, you must be anti-government and anti-regulation. That's a false dichotomy. I think governments should be massively involved in the AI transition. The question is how they should be involved. And that's where I differ from most of the conventional wisdom. I'm pro-regulation but anti-slowdown. Most people think these are the same thing. They're wrong.

The Core Job Nobody Wants to Talk About

So what should government actually do? The way I see it, there's one job that matters more than anything else: ensure that AI's benefits don't concentrate around a small group of people. That's it. That's the whole mandate.

Because without deliberate intervention, the trajectory is predictable. AI capability will concentrate around those with the capital to deploy it. The more capital you have, the easier it is to employ AIs to do your bidding. The returns from that deployment generate more capital. That capital buys more AI capability. The flywheel spins.

And within a decade, you'd have a world where a small fraction of the population controls an ever-expanding share of economic output while everyone else watches from the sidelines. I'm not being hyperbolic. This is the default trajectory if nothing changes. The math is straightforward. AI systems can work 24/7. They scale horizontally without friction. They improve continuously through iteration. And they're getting better at tasks that require judgment, not just execution. When you combine those properties with the existing distribution of capital, you get a self-reinforcing concentration mechanism.

Government's job is to break that concentration. To ensure the benefits get distributed while keeping the system fair. And I'll be honest - that's an extremely hard job. But it's necessary.

Pro-Regulation, Anti-Slowdown

Now, when I say pro-regulation, I don't mean what most politicians mean when they talk about AI regulation. Most of the current regulatory conversation is about slowing things down. Adding friction. Creating approval processes. Requiring impact assessments. Building bureaucratic checkpoints that AI companies have to navigate before they can deploy new capabilities.

That approach is counterproductive for a few reasons.

First, it doesn't actually address the concentration problem. Slowing down AI development doesn't change who benefits when the technology eventually arrives. If anything, it advantages incumbent players who can navigate regulatory complexity over scrappy startups that can't afford compliance departments.

Second, it hands China an advantage. While American companies are filling out paperwork and waiting for approvals, Chinese labs are iterating. And as I discussed in the previous chapter, losing the AI race to China has profound implications for how AI gets deployed globally.

Third, and this is the part that frustrates me most, it conflates the wrong risks. The real danger isn't that AI will develop too fast. The real danger is that its benefits will concentrate too narrowly. Slowing development doesn't address that. It just delays the reckoning while making the concentration problem worse in the meantime.

So what does productive regulation look like? It looks like ensuring broad access to AI capabilities. It looks like preventing any single entity from cornering critical AI infrastructure. It looks like aggressive antitrust enforcement when companies try to vertically integrate in ways that exclude competition. It looks like data portability requirements so that switching costs don't lock users into specific platforms. It looks like public investment in open-source AI research so that there's always a competitive alternative to private offerings. These are regulatory interventions that preserve competition and broaden access. They're the opposite of the friction-based approach most regulators seem to favor. The simplest way to put it: create a landscape where AI is as accessible and commoditized as bottled water. Everywhere. Cheap. Necessary.

UBI Is Inevitable

I think Universal Basic Income - or something like it - is coming. Not because I'm ideologically committed to it. Because the math demands it. When AI systems can perform a significant fraction of economically valuable cognitive work, you face a choice. Either you find a mechanism to distribute the economic output those systems generate, or you watch a large portion of your population fall out of the economic system entirely. Those are the options. There isn't a third door.

I know UBI is politically controversial. Some people hear it and think welfare state expansion. Others hear it and think government dependency. But I think both of those framings miss the point. UBI in an AI-abundant economy is fundamentally about redistributing the gains from a transformed production system. The framing around "supporting people who can't find work" misses what's actually happening here.

The transition I described in Chapter 5 - where the middle 60 percent gets crushed - doesn't resolve itself. The people displaced from cognitive work don't magically find new jobs that AI systems can't do. Not at the scale we're talking about. Some will, obviously. But not enough to absorb the disruption without deliberate intervention (in my opinion).

So governments are going to face pressure. Massive pressure. When large portions of the population find themselves economically marginalized by AI displacement, they're going to demand a response. And UBI - some form of direct cash transfer - is the simplest mechanism to provide that response. It might be called something else. It might be structured differently in different countries. But the basic idea - government provides a baseline income to ensure everyone can participate in the economy - is where this is heading.

What form should it take? This gets complicated fast. Options include:

Direct cash transfers - the purest form of UBI. Everyone gets X dollars per month, no strings attached. Simple to administer, respects individual choice, but critics worry about inflation and people spending it "wrong."

Negative income tax - Milton Friedman's version. Below a certain income threshold, you receive money from the government instead of paying taxes. Preserves work incentives better than flat UBI, but more complex to administer.

Universal basic services - instead of cash, government provides free housing, healthcare, education, transportation, and free credits toward humanoid robots and self-driving cars. Ensures needs are met but removes individual choice and requires massive government competence in service delivery.

Sovereign wealth fund dividends - the Alaska Permanent Fund model. A portion of AI-generated wealth goes into a national fund that pays dividends to all citizens. Ties distribution to actual economic output.

My guess is we'll end up with some hybrid approach - a basic cash floor plus enhanced public services - arrived at through messy political compromise rather than elegant design.

The political obstacles are severe. The left sees UBI as insufficient - they want structural change, not a buyoff. The right sees it as socialism that destroys work ethic. Both sides worry (correctly) about inflation if you just print money without corresponding production. And the people who would benefit most from UBI - displaced workers - typically have less political power than the corporations and wealthy individuals who would fund it.

We already have small-scale experiments to learn from. The Alaska Permanent Fund has paid dividends to residents for decades without destroying their work ethic. Pilot programs in Stockton, Kenya, and Finland have shown promising results. But scaling from pilot to national policy is a different challenge entirely.

My skepticism centers on whether governments will implement it competently. The necessity is clear - the execution is where I have doubts. And that brings me to the bigger problem.
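To make the difference between a flat UBI and Friedman's negative income tax concrete, here's a toy calculation. The grant size, income threshold, and phase-out rate are invented round numbers for illustration, not policy proposals:

```python
def flat_ubi(income: float, grant: float = 12_000.0) -> float:
    """Flat UBI: everyone receives the same grant, regardless of income."""
    return grant

def negative_income_tax(income: float,
                        threshold: float = 40_000.0,
                        rate: float = 0.5) -> float:
    """Friedman-style negative income tax: below the threshold you receive
    a payment equal to `rate` times the shortfall; above it, nothing
    (you pay ordinary taxes instead). All parameters are illustrative."""
    return max(0.0, rate * (threshold - income))

for income in (0, 20_000, 40_000, 60_000):
    print(income, flat_ubi(income), negative_income_tax(income))
```

Under these made-up numbers, someone with no income receives $20,000, someone earning $20,000 receives $10,000, and the payment hits zero at the $40,000 threshold. That gradual phase-out, rather than a uniform grant, is what's meant by "preserves work incentives better than flat UBI."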

How Do You Pay For It?

The obvious question is funding. Where does the money for UBI come from when AI is simultaneously displacing workers and concentrating wealth? I think the answer involves rethinking taxation entirely. The current tax system is built for a world where most economic value is created by human labor. You tax wages. You tax corporate profits. You tax transactions. But in an AI-abundant economy, a larger share of value creation happens without human labor. The traditional tax base shrinks even as the need for distribution grows.

So you need new mechanisms. Some possibilities:

AI output taxes - tax the economic value produced by AI systems directly. When an AI performs work that would have been done by a human, capture some fraction of that value for redistribution. The challenge is measurement, but it's not conceptually impossible. It could be as simple as taxing tokens at the chip layer.

Compute taxes - tax the computational resources used for AI inference and training. This has the advantage of being measurable and hard to evade. The more AI work you're doing, the more compute you're using, the more you pay.

Data value taxes - the training data that makes AI systems valuable was often created by regular people who never got compensated. A data value tax could capture some of that downstream value and redistribute it.

Robot taxes - this one gets discussed frequently. If a robot or AI system displaces a human worker, the company pays a tax equivalent to what it would have paid in payroll taxes for that human. It's a straightforward concept, though enforcement gets complicated.

None of these are perfect. All of them have implementation challenges. But the point is that we need to be thinking about how taxation evolves for an AI economy, not just whether UBI is philosophically justified.

The corporations benefiting most from AI are not going to volunteer to pay more taxes. They're going to lobby aggressively against any of these mechanisms. They're going to find loopholes. They're going to shift operations to jurisdictions with lower AI taxes. They're going to massively reinvest the cash flows and profits from their AI systems back into building even bigger AI systems. That's what corporations do. That's the incentive. Expecting anything else is naive.

Which means government has to be aggressive and coordinated - but most importantly, effective. International coordination on AI taxation would be ideal, because it prevents a race to the bottom. But international coordination is even harder than domestic policy. So we're probably going to see a patchwork approach that creates massive inefficiencies and opportunities for arbitrage. Again, this is why I'm skeptical. The funding mechanisms for AI-era distribution require exactly the kind of competent, coordinated, aggressive government action that governments have historically struggled to deliver.
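As a rough sketch of why taxing tokens or compute is attractive - the base is trivially measurable at the point of generation - here's a toy per-token levy. The rate and volume are invented for illustration, not proposed figures:

```python
def token_levy(tokens_processed: int,
               levy_per_million_tokens: float = 0.10) -> float:
    """Tax owed on a volume of AI inference at a flat per-token rate.
    Metering happens where the tokens are generated (the chip layer),
    leaving little room to hide the tax base. Rates are illustrative."""
    return tokens_processed / 1_000_000 * levy_per_million_tokens

# A provider serving one trillion tokens a month at a $0.10-per-million levy:
print(f"${token_levy(1_000_000_000_000):,.0f}/month")  # → $100,000/month
```

The hard policy questions - what the rate should be, who remits it, how training compute is treated versus inference - are exactly where the lobbying and arbitrage described above would concentrate.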

The Corporate Incentive Problem

Corporations have strong incentives to keep the current education system exactly as it is. And they have strong incentives to oppose most forms of productive AI regulation.

Think about it from a corporate perspective. You want workers who are trained to follow instructions, not question authority, and show up reliably. You want workers who derive their identity from employment, so they're loyal and committed. You don't want workers who think like entrepreneurs, because entrepreneurs leave to start their own companies. Good companies do the right thing, but those are few and far between today - especially in very large, bloated corporations that are dominated by office politics.

The education system that produces corporate drones serves corporate interests perfectly. Any reform that shifted education toward entrepreneurship would threaten the labor supply model that corporations depend on. Similarly, corporations benefit from AI concentration. If AI capabilities are concentrated in a few large companies, those companies can extract rents from everyone who needs AI tools. If AI capabilities are broadly distributed through aggressive antitrust and public investment, corporate margins get compressed by competition.

So the entities with the most resources to lobby government - large corporations - have direct financial incentives to oppose exactly the reforms that would help with the AI transition. They'll advocate for regulatory frameworks that appear protective but actually entrench their positions. They'll fund politicians who support the status quo. They'll deploy armies of lobbyists to shape legislation in their favor.

I'm not saying this to demonize corporations. They're responding rationally to their incentive structure. But recognizing this dynamic is important for understanding why reform is so hard. The people with money want one thing. The people without money need something else. And in a system where money buys political influence, guess who usually wins?

The Education Problem Nobody Wants to Fix

When I think about why wealth inequality is a problem in America, most people reach for the wrong explanations. They blame wealthy entrepreneurs. They blame corporations. They blame the system being rigged. I blame education.

America's education system is piss-poor. And I don't mean that test scores are low or that classrooms are underfunded, though both of those are true. I mean that the entire orientation of American education is wrong.

Most people graduate from school without any fundamental understanding of how capitalism works. Ask the average American what a stock price represents and you'll get a blank stare. It's market capitalization divided by shares outstanding. That's it. But most people don't know that. They see that Apple stock is $200 and Tesla stock is $400, and they'll say Tesla is twice as expensive - when Apple's market cap is roughly 3x Tesla's. Apple just happens to have "cut up" the company into smaller chunks. These folks have been through 12 to 16 years of formal education and emerged without the basic financial literacy required to participate in wealth creation.

Why? Because American education was designed to create corporate drones. Go back to the history of public education in this country. It was built to serve the industrial economy. Show up on time. Follow instructions. Don't question authority. Complete assigned tasks. Those are the skills that made good factory workers. And the basic structure of schooling - bells, periods, grades, standardized testing - all of it reflects that industrial origin. It started with amazing intentions. It devolved into a national embarrassment.

But we're not in an industrial economy anymore. And we're definitely not going to be in an industrial economy in the AI era. The skills that matter now are entrepreneurial. Identifying opportunities. Taking calculated risks. Building things. Understanding capital allocation. Creating value rather than just executing tasks assigned by others.

If education focused on entrepreneurship instead of corporate employment, we'd be in a fundamentally different position. Instead of graduating people who know how to be employees, we'd graduate people who know how to be builders. People who understand that owning equity is different from earning wages. People who can evaluate business models and allocate capital and create enterprises. But that's not what the system produces. And as AI makes employee skills less valuable, the gap between those who can build and those who can only work for others is going to explode. AI accelerates this divide times a million.
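The share-price point is pure arithmetic. A minimal sketch, using round illustrative figures rather than live market data:

```python
def share_price(market_cap: float, shares_outstanding: float) -> float:
    """A stock price is just market capitalization divided by share count."""
    return market_cap / shares_outstanding

# Round illustrative figures, not live market data.
apple_cap, apple_shares = 3_000e9, 15e9    # ~$3T "cut up" into 15B shares
tesla_cap, tesla_shares = 1_000e9, 2.5e9   # ~$1T cut into 2.5B shares

print(share_price(apple_cap, apple_shares))   # 200.0 per share
print(share_price(tesla_cap, tesla_shares))   # 400.0 per share
```

The "more expensive" $400 ticker belongs to the company worth a third as much - the per-share price tells you nothing until you know how many slices the company was cut into.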

Historical Parallels and Why This Time Is Different

Previous technological transitions displaced specific types of work. Farming jobs declined with mechanization. Manufacturing jobs declined with automation. But cognitive work - the stuff that requires thinking, judgment, creativity - kept expanding. The knowledge economy absorbed displaced workers because the technology couldn't do what human brains could do.

AI changes that equation. For the first time, we have technology that competes directly with human cognitive capability. Not just routine cognitive tasks, but increasingly sophisticated ones. Writing. Analysis. Coding. Design. Strategy. The stuff that was supposed to be safe because it required human judgment.

That's a fundamentally different situation. When technology displaces physical labor, people can shift to cognitive labor. When technology displaces cognitive labor, where do people go? The answer isn't obvious.

Previous transitions also happened over decades. The shift from an agricultural to an industrial economy played out over multiple generations. People had time to adapt. Education systems had time to evolve. Social structures had time to adjust.

AI is moving faster. Much faster. The capabilities I'm seeing now didn't exist a year ago. The capabilities five years from now will make current systems look like a complete and total joke. We're looking at a multi-year timeline here. Maybe shorter. The multi-decade transition period people imagine simply doesn't match the pace of change.

So the historical parallels that comfort people - we adapted before, we'll adapt again - don't fully apply. The nature of what's being displaced is different. The timeline is different. The degree of government competence required is different. I wish the historical parallels held perfectly. It would be reassuring. But I think intellectual honesty requires acknowledging that this transition is different in ways that matter. It will be like the car displacing the horse. But this time, we are the horses. (Thank you, RethinkX.)

What Other Countries Are Doing

It's worth looking at how other governments are approaching AI. Because the US doesn't exist in a vacuum, and international comparison is instructive.

The European Union is pursuing its usual approach - heavy regulation, lots of bureaucracy, focus on risk management. The AI Act categorizes AI systems by risk level and imposes requirements accordingly. It's comprehensive and well-intentioned. It's also almost certainly going to slow European AI development while doing little to address the concentration problem. The European bureaucratic reflex is to regulate first and innovate second.

China is taking a different approach entirely. The government is heavily involved, but as an accelerator rather than a brake. Massive investment in AI research and development. State direction of resources toward strategic technologies. Less concern about individual privacy or civil liberties if they conflict with AI advancement. If you're worried about AI concentration, China is an example of intentional concentration - concentrating AI power in the hands of the state.

The UK has positioned itself as trying to find a middle ground - "pro-innovation" regulation that promotes development while managing risks. It's early days on how that works in practice.

Singapore, UAE, and other smaller countries are essentially trying to become AI hubs through favorable regulatory environments and strategic investment. They're betting that being AI-friendly will attract talent and capital. So far, it's working.

What's notable is that almost nobody is doing what I think should be done - regulating to ensure broad distribution while keeping development fast. The EU is slowing things down. China is accelerating but concentrating. The small countries are trying to compete but lack the scale to matter globally. The US is in limbo, with no coherent strategy. Perhaps by design.

This international landscape matters because AI regulation will increasingly require coordination. As we talked about before, if one country cracks down while others accelerate, companies just move. If there's no international framework for AI taxation, companies will arbitrage the differences. The collective action problems are massive.

What Government Should Actually Prioritize

If I were advising a government on AI policy - and to be clear, nobody's asking - I would focus on five priorities.

First, education reform at a fundamental level. Stop producing corporate drones and start producing entrepreneurs. Require financial literacy. Teach how businesses actually work. Make equity ownership and capital allocation core parts of the curriculum. This is a 20-year project, but without it, you're just managing decline.

Second, aggressive antitrust enforcement in AI. Don't let any single company corner the market on foundation models, compute infrastructure, or training data. Keep the market competitive. This is the single most important lever for preventing concentration.

Third, public investment in open-source AI. Fund research that produces capabilities anyone can use. This creates a floor on what's available to builders who don't have billions in capital. It keeps the proprietary players honest. And it ensures that AI capability doesn't become a toll road controlled by gatekeepers. OpenClaw is the best example of this. An unbelievably powerful tool in the age of AI, fully open source.

Fourth, prepare for UBI. I know it's politically toxic right now. But the math doesn't care about political convenience. When AI displacement hits critical mass - and it will - governments are going to need distribution mechanisms ready to deploy. Start designing them now. Figure out funding mechanisms. Work out implementation details. So that when the moment arrives, you're not scrambling.

Fifth, modernize government operations. Use AI internally to audit spending, detect fraud, streamline processes, and improve service delivery. Be an early adopter rather than a laggard. The irony of government trying to regulate AI while being unable to use AI to improve its own operations is not lost on me.

The Skeptic's View

Now, will any of this happen? As you can tell, I’m not optimistic.

The vested interests are too powerful. The political incentives are too misaligned. The bureaucratic inertia is too strong. And the timeline is too short.

Government moves slowly by design. Deliberation, debate, coalition-building, compromise - these processes take years. Decades for major reform. But AI is moving fast. The disruption I'm describing isn't a 30-year story. It's a 5-year story. Maybe less. By the time government figures out what to do, the transition will be well underway.

And look at the actual political conversation about AI. It's dominated by people whose expertise developed in other domains - which is natural, because this technology barely existed a few years ago. But you see senators asking questions in hearings that reveal they haven't yet had time to develop deep technical understanding. Regulators proposing rules that reflect outdated mental models. Pundits focusing on speculative risks while the more immediate structural challenges get less attention.

The people who understand AI deeply are mostly in the private sector, building. They're not in government, regulating. And that knowledge gap creates a fundamental problem. How can government craft good policy for technology that's evolving faster than any institution can track?

But there will come a point where the existential nature of this race becomes impossible to ignore. When it becomes obvious that whoever wins AI wins everything, political resistance to change will evaporate overnight. We've seen this before: the Manhattan Project, the Apollo program, the COVID vaccine development. When survival is clearly at stake, governments can move with shocking speed.

I expect that moment is coming. When China's AI capabilities become undeniably threatening, when job displacement hits critical mass, when the economic implications become viscerally real to voters - the same politicians who blocked reform will suddenly discover urgency. Regulatory hurdles that seemed immovable will disappear. Permitting that took years will happen in months.

This is both good and bad. Good because it means action will eventually happen. Bad because moving fast comes with its own set of problems. Rushed decisions. Reduced safety review. Policies designed in crisis mode rather than thoughtfully. The cure can be almost as dangerous as the disease when you're scrambling. So while I'm skeptical about government action happening in time, I'm not skeptical that it will eventually happen. The question is whether "eventually" is soon enough to matter.

So where does this leave us?

I believe that navigating the AI transition successfully requires herculean, competent government intervention. This is the key variable between abundance and collapse. Without effective government action, the benefits concentrate, the middle gets crushed, and social instability spirals into something much darker. We're not talking about recession or hardship. We're talking about societal collapse - the kind where institutions fail, social contracts break, and the gains from AI become meaningless because there's no stable society left to enjoy them. There's no need to dedicate an entire section of the book to it. It's painfully obvious and brutal. Dystopian.

But I doubt governments will deliver that intervention. Not because the people in government are malicious. Because the structural incentives don't support it. Because the political timeline doesn't match the technological timeline. Because the knowledge gap is too wide. Because the entrenched interests are too powerful.

This creates an uncomfortable situation. The transition is coming regardless of whether government handles it well. AI capability is improving independent of policy choices. The disruption will unfold whether there's a safety net in place or not.

For individuals, this means you can't wait for government to save you. I'm being realistic here. Build your own positioning. Develop your own capabilities. Take responsibility for your own trajectory. Because counting on government to execute well is a bet I wouldn't make.

For the system as a whole, though, I genuinely worry. Not everyone can position themselves optimally. Not everyone has the resources or knowledge or circumstances to build their own safety net. And a society where large portions of the population are economically marginalized is not a stable society. The historical precedents for what happens in that situation are not encouraging.

Maybe I'm wrong. Maybe government will surprise me. Maybe the pressure of AI disruption will force reforms that seemed impossible before. Maybe new leaders will emerge who actually understand the technology and have the political skill to navigate the institutional obstacles. Maybe the immune response can be overcome.

I hope so. Because the alternative - AI transformation with incompetent government response - is collapse. Real collapse. The kind where economic displacement triggers political extremism, where institutions lose legitimacy, where the social fabric tears apart. The technology can deliver abundance. But without effective governance, it delivers chaos instead.

The Path Forward

What I want you to take from this chapter is a clear-eyed view of government's role in the AI transition. It's essential. And it's probably going to be botched. I hope I'm wrong.

Government should be regulating to ensure broad distribution of AI benefits. Instead, it will probably regulate to slow things down while the concentration continues.

Government should be reforming education to produce builders. Instead, it will probably continue producing corporate drones for jobs that won't exist.

Government should be modernizing its own operations with AI. Instead, it will probably continue running on systems that were outdated decades ago.

Government should be preparing distribution mechanisms for AI-displaced workers. Instead, it will probably scramble reactively when the crisis hits.

Pro-regulation, anti-slowdown. That's my position. Governments have a role to play. But the role they should play and the role they will play are probably different. Don't assume government will handle this well. Position yourself accordingly.
