Part 1: The New Scarcity
What becomes scarce and valuable when intelligence is abundant and cheap?
I was raised in a rural area of New England that was shaped by furniture factories, textile mills, and machine shops for generations. Between 1980 and 2010, the United States lost approximately 7.5 million manufacturing jobs. Some went to China. Most went to machines. The people who lost those jobs were not unskilled workers. They were highly skilled at their respective crafts. Entire communities were built around their expertise, but their work was soon automated by machines that were faster, cheaper, and better. The value of “physical intelligence” as a capability collapsed. The factories closed, one by one. The jobs never came back, and I experienced the fallout firsthand: unemployment, addiction, and hollowed-out communities were the backdrop of my childhood. This displacement has been the defining undercurrent of the last decade. It's reshaped where people live, how they vote, what they're angry about. And until recently, most knowledge workers paid little attention.
Now, millions of knowledge workers are suddenly faced with the same question: what happens to me when the machine is better than me at my job? Technology stocks are in free fall. Layoffs continue in every sector. What was once considered “skilled” labor is becoming another form of “unskilled” labor as artificial intelligence commoditizes high-intellect work. AI is faster, cheaper, and better than humans at almost every conceivable knowledge-based task. Right now, today. For many, it threatens not only a career, but a sense of self. People are afraid, and understandably so.
But I do not think it will play out the same way, and I am experiencing the reason in real time. My business partner and I recently bootstrapped an advisory firm to substantial annualized revenue with just two people in about a year. This violates every narrative I’ve heard for the last few years. Ours is exactly the kind of firm AI should be replacing any day: one built on specialized expertise, where the quality bar for our work rises substantially by the day. We are getting far better at our jobs, and the value we can deliver is accelerating rapidly, in no small part due to AI. We are also seeing all of the ways companies are successfully and unsuccessfully using these new tools. We use AI in virtually every part of our workflow: communications, internal tools, and work output generation. Some of our most valuable work is now delivered to clients in Claude Code.
We are at the very beginning of learning just how much of our own system and expertise can be replicated and scaled so that we can deliver more value to our clients faster. These same tools are available to everyone, everywhere, at a very low cost. Traditional training and education can help you learn these tools faster, but the truth is that using them day in and day out is the best training available. It’s a fundamentally new way of working. Previous periods of economic displacement required relocation, expensive degrees, professional training, and years of apprenticeship. It was a long and painful transition. This intelligence revolution requires a computer, an internet connection, an affordable subscription, and limitless curiosity. The future is more distributed and accessible than it has ever been.
Fear is understandable, but it’s still the mind killer. What’s needed is humility, curiosity, and clarity about where human value actually exists when intelligence is abundant.
History doesn’t repeat, but it rhymes
History suggests two things can be true at once. Technological disruption grows the economy as a whole, and AI will likely follow that same pattern. But there will also be winners and losers. Many individuals will be forced to retrain, reskill, or relocate, and that transition will be painful in the short run. Factory jobs disappeared and did not come back. New work emerged elsewhere, requiring different skills, different education, and relocation. New generations went where the opportunity was rather than staying in legacy factory towns. The communities left behind rarely fully recovered.
Similarly, AI is likely to be Kaldor–Hicks efficient, and the aggregate gains will exceed the aggregate losses. Productivity will rise holistically. Costs will fall. New industries will form, and demand and GDP will expand. In total, society will become wealthier. Unfortunately, I also expect many individuals will not be better off in the short run. Kaldor–Hicks does not require that no one is harmed. It only requires that the total gains exceed the total losses. The compensation is theoretical. The damage is real. Layoffs are real. Upskilling, finding new problems to work on, and providing new value are not easy tasks, especially when there is so much uncertainty about what to learn or where to go. None of this is painless. I reached out to some of my former interns to ask them about their perspectives on Matt Shumer’s post as well as their current college experience as they prepare for the workforce. Here is one response that I loved:
It definitely seems like the path is eroding. Simultaneously, there is a lot of worry about AI taking jobs and… the rat race to get to the other side before it’s too late. I definitely had a period where I went down that rabbit hole. But I think if you zoom out, it’s actually potentially a good thing. Without a clear path, people will likely explore what truly interests them and will end up happier in their careers. It might democratize financial freedom more as well given Matt’s points about agency.
Needless to say, I’m quite proud of my interns.
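For readers who want the Kaldor–Hicks point reduced to arithmetic, here is a toy sketch in Python. Every number below is invented purely for illustration; the only claim is the structure of the argument:

```python
# Toy illustration of Kaldor–Hicks efficiency. All figures are hypothetical.
# A change is Kaldor–Hicks efficient when total gains exceed total losses,
# so the winners *could* compensate the losers, even though no compensation
# actually has to occur.
gains = {"new_ai_firms": 120, "consumers": 80}            # invented gains
losses = {"displaced_workers": -90, "legacy_firms": -40}  # invented losses

net = sum(gains.values()) + sum(losses.values())
print(f"Net change to society: {net}")  # positive, so the change passes the test
print(f"Displaced workers are still down {abs(losses['displaced_workers'])}")
```

The net is positive, so society as a whole is wealthier, yet the displaced workers in this toy model are strictly worse off. That is exactly the gap between theoretical compensation and real damage.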
But this time is also different in ways that are more challenging than previous economic revolutions. Last time it was one sector in more concentrated geographies; this time it is every knowledge sector simultaneously, everywhere. Last time it took decades; this time the capabilities of the machines are doubling every seven months, with nearly instantaneous distribution into the market. Last time a factory worker could retrain as a knowledge worker or services provider; this time there is no adjacent sector to retrain into that isn’t also being disrupted. And last time, the people it happened to were largely ignored. This time, it’s happening to the people who were doing the ignoring.
We are entering a new epistemic era, and it’s happening fast.
The rate of change is accelerating
Society is experiencing the part of the exponential curve that no longer fits neatly on the graph, but the human mind and body are not accustomed to adapting at this accelerating rate of change. We cannot log scale our experience of reality, so we are just hanging on for the ride.
Matt Shumer’s post describes this acceleration well. He told an AI what he wanted built, walked away from his computer for four hours, and came back to find it done. Not a rough draft. The finished idea, completed correctly and without errors. The AI had opened the app itself, clicked through every button, tested every feature, decided what it didn’t like, fixed it, and came back when it decided the work met its own standards.⁵ Shumer’s conclusion cuts through: the experience that tech workers have had over the past year, watching AI go from helpful tool to doing the job better than they do, is the experience everyone else is about to have.
An organization called METR measures this effect with data. They track the length of real-world tasks (measured by how long they take a human expert) that an AI model can complete successfully, end-to-end, without human help. A year ago, the answer was roughly ten minutes. Then an hour. Then several hours. The most recent measurement showed AI completing tasks that take a human expert nearly five hours, and that number is doubling approximately every seven months, with recent data suggesting it may be accelerating.⁶ The accelerating rate of change has been happening for years:
2020: AI writes an essay. I got early access to the GPT playground ahead of the GPT-3 release, and I use it to write a “good” essay in response to the admissions question I had to answer to get into graduate school. I felt in that moment that the world had profoundly changed, but I vastly underestimated how quickly and intensely it would.
2021: AI generates images from text. OpenAI releases DALL-E. Anyone can type a description, get an image. For the first time, creative work doesn’t feel safe either.
2022: AI passes (with low marks) a college entrance exam. An early version of GPT scores well enough on the SAT to gain admission into many universities in the United States.
2023: AI passes the Bar Exam with flying colors. GPT-4 takes the Bar Exam and scores in the 90th percentile, higher than most practicing attorneys and good enough to qualify it to practice law in most US states.
2024: AI wins the Nobel Prize in Chemistry. Google DeepMind’s AlphaFold predicts the 3D structure of virtually every protein known to science, a problem that had resisted fifty years of effort by the world’s best biologists.
2025: AI wins a gold medal at the International Mathematical Olympiad. Google’s Gemini solves competition problems at gold medal level at the IMO, the most prestigious mathematics competition in the world.
2026: AI solves a problem physicists had given up on. Researchers at the University of Chicago publish a paper in Physical Review Letters showing that an AI system solved turbulence modeling, a problem considered intractable for decades. Recently, physicists from Harvard, Cambridge, the Institute for Advanced Study, and Vanderbilt co-authored a paper with OpenAI showing that GPT-5.2 disproved a forty-year assumption about gluons in particle physics.
Today, AI is now writing much of the code at Anthropic, meaningfully accelerating the rate of progress in building the next generation of AI systems. Dario Amodei says this feedback loop is “gathering steam month by month” and that we may be “only one to two years away from a point where the current generation of AI autonomously builds the next.”⁷ Each generation helps build the next, which is smarter, which builds the next faster. The people who would know believe the process has already started.
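METR’s seven-month doubling figure compounds faster than intuition suggests. Here is a back-of-the-envelope projection, assuming as a rough starting point the ~5-hour horizon cited above and a constant 7-month doubling time (neither assumption is guaranteed to hold):

```python
# Back-of-the-envelope projection of the METR task-horizon trend.
# Assumptions (mine, for illustration): a ~5-hour current horizon
# and a constant 7-month doubling time.
current_horizon_hours = 5.0
doubling_months = 7.0

for months_ahead in (0, 12, 24, 36):
    horizon = current_horizon_hours * 2 ** (months_ahead / doubling_months)
    print(f"{months_ahead:2d} months out: ~{horizon:.0f} hours of expert work")
```

Under these assumptions, the horizon passes a full work week (~40 hours) in roughly 21 months and exceeds a month of expert effort within three years. The point is not the precise numbers; it is how unintuitive a constant doubling time feels.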
We have built self-improving intelligence on tap, and it will only accelerate from here. This reality raises the question: in a world of limitless intelligence, what becomes scarce and valuable?
The scarcity shift
When physical intelligence became abundant, scarcity shifted to cognitive intelligence. The demand shifted from physical labor to cognitive labor as the challenge moved from the physical difficulty of making something to knowing how and what to make, and for whom, at scale. When cognitive intelligence becomes abundant, the scarcity shifts once more.
But where does it go? I believe most scarcity shifts further toward resources that uniquely provide a right to win in an increasingly competitive market.
The first scarce resource is trust and accountability. The “who” and the people behind the idea matter more and more, not less. Not credentials, but the credibility, often rooted in authenticity, that accumulates through a track record of consistent action, demonstrated judgment, and refined skill development over time. Credibility has always been valuable, but its value is growing rapidly, especially to those who are unafraid to embrace new modalities of intelligence and automation. In practice, it compounds as brand and reputation.
Jensen Huang knew in 2006 what nobody else yet believed: that GPU architecture was what AI would eventually need at scale. He invested in CUDA before there was demand for it and spent nearly two decades being right before most of the world caught up. When the moment arrived, the trust and credibility were already banked. Customers followed because of a twenty-year track record of consistent commitment to a vision. NVIDIA’s brand and reputation, built through years of investing in the right technology ahead of others, compounded magnificently.
The second scarce resource is persuasive storytelling. The “why” matters more. In a world of limitless intelligence, there are limitless problems potentially worth solving. Potentially. The scarce skill is choosing which problems matter and convincing capital, both human and financial, to organize around that priority. To set a vision. To tell a compelling story about the future. To craft a narrative that employees, customers, and investors truly believe. It manifests as strong positioning and marketing.
Xerox PARC invented the graphical user interface, the mouse, and the laser printer. Steve Jobs visited once, understood what it meant for human beings, and built a company worth trillions around it. The raw intelligence was at Xerox. The vision and the ability to make others care were at Apple. The ability to make others care about what you care about and act alongside you toward a vision is already scarce today, but it will become more valuable and scarce with time.
The third scarce resource is a bias toward action and the courage to act. Trusted accountability and persuasive storytelling cannot be sustained without a willingness to act. Most companies and institutions face the same problem: they lack the immediate incentives, awareness, and skills to act on the abundant intelligence now available to them. Their existing cultures and business models perceive too much risk for them to move quickly; it is disruption theory at its finest. For example, it is unclear that traditional corporate org structures make sense in a world where abundant, agentic intelligence exists. It’s very possible companies with large knowledge-based workforces can now be run with 50-95% fewer people. And yet, there are likely millions of new companies that can be formed where productivity and revenue per employee will be greater than anything we’ve ever experienced before. The bottleneck has never been possessing the knowledge. The scarcity is possessing the courage, focus, and follow-through to build something valuable.
In 2000, Reed Hastings offered to sell Netflix to Blockbuster for $50 million. Blockbuster passed. They knew streaming was coming. They had the stores, the brand, the customers. But they couldn't act against their own business model. Netflix is now worth $400 billion. Blockbuster has one anachronistic store left in Bend, Oregon.
Working fearlessly
The NGMI crowd, the doomers, the permabears: they want to scare everyone because they themselves are scared. But it’s not a useful form of fear. Instead, it’s a compulsive, spiraling, and corrosive pessimism. Every generation of novel technology produces people whose entire identity becomes predicting its failure or the end of economic opportunity as we know it. I understand the fear, and I’ve felt it. But the evidence doesn’t support it. It shows us the opposite: fear and paralysis cause more economic displacement than the technology itself. We are almost certainly in a bubble, and it will pop. People will lose their jobs, and it will be messy. But there is no turning back from it, and any attempt to do so is foolish. The value of clear thinking and the courage to act has never been higher.
So there’s a big opportunity: Build trust by doing the work and doing it well. Tell a compelling story about why it matters. And act before you feel ready, while most people are still frozen and afraid. In the past, when the factories closed and the opportunity left, people often had to start over, get new education and training, and sometimes relocate. It was a multigenerational reset. Yet, the same intelligence that threatens traditional knowledge work also provides anyone with a laptop the leverage needed to be immensely successful. Work that would take me 5 hours to complete just a few years ago now takes me 5 minutes. We have more tools to leverage in knowledge and creative work than ever before, and they are accessible to everyone. You don’t need to be Andrew Carnegie to take advantage of the current moment.
So maybe the paths are eroding. And maybe, as a former intern suggested, that’s not a bad thing. Still, it reminds me of a story that Jerry Seinfeld told a few years ago on Tim Ferriss’ Podcast:
“Back in the ’80s, I had a friend who was teaching a comedy course at The Improv on Melrose in L.A., and he asked me if I would come in and talk to the class. And I went in and there was maybe 20 people in the class. I went up on stage, and I said, “The fact that you have even signed up for this class is a very bad sign for what you’re trying to do. The fact that you think anyone can help you or there’s anything that you need to learn, you have gone off on a bad track because nobody knows anything about any of this… What I really should do is I should have a giant flag behind me that I would pull a string and it would roll down, and on the flag would just say two words: just work.” - Jerry
I keep coming back to that story. I do not believe in giving advice, so I’ll share what I’m doing in the current moment: I’m ignoring the fearmongers, but I am paying close attention to and engaging with the intelligence wave. I’m following my curiosity and tinkering a lot, and I’m finding ways to put that knowledge to productive use.
And I’m getting to work.


