I’m writing this for the people in my life who aren’t in the tech bubble—my family, friends, and everyone who’s ever asked me, “So, what’s the deal with AI?” For years, I gave them the polite version, the cocktail-party answer. Because the honest truth sounded like I was crazy.
Then, on February 10th, a long-form essay by Matt Shumer—co-founder and CEO of OthersideAI—went viral on X, racking up over 82 million views. Titled “Something Big Is Happening,” it’s a roadmap to the next decade’s most seismic shift, and I urge you to spend 10-15 minutes reading it. This isn’t a distant prediction. It’s a warning from someone who lives and breathes AI—and who’s watching the ground shake beneath our feet.
We’re in the “This Sounds Exaggerated” Phase (Just Like 2020)
Think back to February 2020. A few people were talking about a virus spreading overseas, but most of us ignored it. Stocks were booming, kids were in school, we dined out, shook hands, and planned vacations. If someone told you they were hoarding toilet paper, you’d have thought they’d fallen for internet misinformation.
Then, in three short weeks, the world changed. Offices closed, kids came home, and life rearranged into something you wouldn’t have believed a month prior.
I think we’re now in that same “this sounds over the top” phase of an event far bigger and more transformative than the COVID-19 pandemic. I’ve spent six years building an AI company and investing in the space—I live in this world. And the gap between what I’m seeing firsthand and what the average person understands has become too wide to ignore. The people I care about deserve the truth, even if it sounds insane.
First, let’s be clear: Even though I work in AI, I have almost no control over what’s coming—and neither do most people in the industry. The future is being shaped by a tiny group: a few hundred researchers at just a handful of companies—OpenAI, Anthropic, Google DeepMind, and a select few others. A single model trained by a small team in months can redefine the entire trajectory of the technology. Most of us in AI are just building on top of what these giants create. We’re watching this unfold like everyone else—we’re just close enough to feel the tremors first.
I Know It’s True Because It Happened to Me First
Here’s what people outside tech don’t grasp about why so many insiders are sounding the alarm: it’s already happening to us. We’re not predicting the future—we’re telling you what’s already transformed our jobs, and warning you that you’re next.
For years, AI improved steadily. There were big leaps, but gaps between them gave us time to adapt. Then, in 2025, new technologies for building these models unlocked unprecedented speed. Progress accelerated. Then accelerated again. Each new model isn’t just better than the last—it’s exponentially better, and released faster. I found myself using AI more, iterating less, watching it tackle tasks I once thought required my expertise.
Then, on February 5th, 2026, two major AI labs dropped new models on the same day: OpenAI’s GPT-5.3 Codex and Anthropic’s Opus 4.6. Something clicked. It wasn’t like flipping a light switch—it was like suddenly realizing the water around you has been rising, and it’s now up to your chest.
In my day-to-day technical work, I’m no longer necessary. I describe what I want to build in plain English, and it… appears. Not a draft I need to fix, but a finished product. I tell AI what I want, step away from my computer for four hours, and come back to find the work done—better than I could have done it, with zero revisions. A few months ago, I’d be back-and-forth with AI, guiding it, editing it. Now, I just describe the outcome and walk away.
Let me give you a concrete example. I’ll say: “I want to build this app. It should have these features and look roughly like this. Figure out the user flow, design, and all the details.” And it does. It writes tens of thousands of lines of code.
Then—and this is the part that would have been unthinkable a year ago—it opens the app itself. It clicks buttons, tests features, uses it like a human. If it doesn’t like a design or a functionality, it goes back and fixes it. It iterates, revises, and polishes like a developer until it’s satisfied. Only then does it come back to me and say: “Ready for you to test.” And when I do, it’s almost always perfect.
I’m not exaggerating. This was my Monday.
But what stunned me most about last week’s GPT-5.3 Codex release is that it’s not just following instructions—it’s making smart decisions. It has something that feels, for the first time ever, like “judgment.” Like “taste.” That intangible ability to distinguish right from wrong that people always said AI would never have. This model has it—or at least comes so close that the difference no longer matters.
“But I Tried AI, and It Wasn’t That Good?”
I hear this all the time. And I get it—because it used to be true. If you tried ChatGPT in 2023 or early 2024 and thought, “It hallucinates” or “It’s nothing special,” you were right. Those early versions had limits; they made up facts and spoke with false confidence.
But that was two years ago. In AI time, that’s prehistoric.
Today’s models are unrecognizable compared to just six months ago. The debate over whether AI is “really progressing” or “hitting a wall”—a debate that raged for over a year—is over. Done. Anyone still clinging to that idea either hasn’t used the latest models, is deliberately downplaying reality, or is judging AI based on outdated 2024 experiences. I don’t say this with contempt. I say it because the gap between public perception and current reality is dangerous—it’s preventing people from preparing.
Part of the problem is that most people use free AI tools. The free versions are over a year behind the paid ones. Judging AI’s current capabilities by free ChatGPT is like evaluating smartphones using a flip phone. The people paying for the best tools and using them deeply in their daily work know exactly what’s coming.
Take my lawyer friend. I’ve been telling him to test AI at his firm, but he always finds excuses: it’s not built for his specialty, it made mistakes when he tried it, it doesn’t understand the nuances of his work. I get it. But I’ve also heard from partners at top law firms who are reaching out for advice—because they’ve tested the latest versions and seen the writing on the wall.
One managing partner at a major firm spends hours every day using AI. He told me it’s like having an entire team of junior associates at his disposal. He’s not using it for fun—he’s using it because it works. What stuck with me most? Every few months, AI’s ability to handle his work improves dramatically. At this rate, he expects AI to take over most of his job soon—and he’s a managing partner with decades of experience. He’s not panicking, but he’s paying close attention.
The leaders in every industry—those who are actually experimenting seriously—aren’t dismissing this. They’re stunned by what’s possible, and they’re planning accordingly.
How Fast Is the Progress Really?
Let’s quantify this, because it’s the hardest part to believe if you’re not paying attention:
- 2022: AI couldn’t do basic arithmetic. It would confidently tell you 7 x 8 = 54.
- 2023: It passed the bar exam.
- 2024: It could write functional software and explain graduate-level science.
- Late 2025: Some of the world’s top engineers admitted they’d handed off most of their coding to AI.
- February 5, 2026: New models launched that made everything before them look like relics.
If you haven’t tried AI in the past few months, what exists today will be unrecognizable to you.
An organization called METR measures this with data. They track how long AI can work independently on “real-world tasks” (measured in hours of human expert time). About a year ago, the answer was 10 minutes. Then an hour. Then several hours. The most recent measurement (Claude Opus 4.5 in November) showed AI could complete tasks that would take a human expert nearly 5 hours. This number doubles every 7 months—and recent data suggests it’s accelerating to every 4 months.
But even that data is outdated. Based on my use of this week’s new models, the leap is massive. I expect METR’s next update to show another dramatic jump.
If this trend holds—and it has for years with no sign of slowing—we’ll see AI capable of working independently for days within a year. Weeks within two years. Month-long projects within three.
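To make that projection concrete, here’s a minimal back-of-the-envelope sketch of the doubling math. The numbers are my own plug-ins from the figures above (a roughly 5-hour baseline, doubling every 7 months, with a faster 4-month variant), not METR’s official forecast:

```python
# Rough projection of the METR task-length trend.
# Assumptions: ~5-hour baseline, doubling time of 7 months
# (or 4 months in the accelerated scenario). Illustrative only.

def task_hours(baseline_hours: float, months_elapsed: float, doubling_months: float) -> float:
    """Task length after exponential growth at a fixed doubling time."""
    return baseline_hours * 2 ** (months_elapsed / doubling_months)

for doubling in (7, 4):
    for horizon in (12, 24, 36):
        hours = task_hours(5, horizon, doubling)
        print(f"doubling every {doubling} mo, +{horizon} mo: ~{hours:,.0f} hours")
```

At a 7-month doubling time, a 5-hour baseline reaches roughly a couple of working days within a year and over a work-week within two; at a 4-month doubling time, it reaches a full week within a year and months-long projects within three. That’s the arithmetic behind the “days, then weeks, then months” trajectory.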
Dario Amodei, CEO of Anthropic, has stated that by 2026 or 2027, AI will be “clearly smarter than almost all humans” at nearly every task.
Let that sink in. If AI is smarter than most PhDs, do you really think it can’t do most office jobs?
AI Is Building the Next Generation of AI
There’s one more development that’s the most important—and least understood.
On February 5th, OpenAI released GPT-5.3 Codex. In their technical paper, they noted this:
“GPT-5.3-Codex is our first model that played a critical role in its own creation. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
Read that again. AI helped build itself.
This isn’t a prediction of what might happen someday. This is OpenAI telling you that the AI they just released was used to create itself. One of the biggest drivers of AI progress is applying “intelligence” to AI development—and AI is now smart enough to make meaningful contributions to its own advancement.
Dario Amodei of Anthropic says AI is now writing “most of the code” at his company, and the feedback loop between current AI and the next generation is “strengthening month by month.” He believes we’re “1-2 years away from the threshold where current AI can independently build the next generation.”
Each generation helps build the next, which is smarter, so it builds the one after that faster and better. Researchers call this an “intelligence explosion.” And the people who know best—the ones building it—believe this process has already begun.
What This Means for Your Career
I’ll be blunt, because you deserve honesty—not empty comfort.
Dario Amodei—arguably the most safety-focused CEO in AI—has publicly predicted that AI will replace 50% of entry-level white-collar jobs within 1-5 years. Many in the industry think he’s being conservative. Based on the latest models’ capabilities, the technology for mass disruption could be in place by the end of this year. It will take time to ripple through the entire economy, but the foundational power is here now.
This is unlike any previous wave of automation—and I need you to understand why. AI isn’t replacing a specific skill—it’s a universal substitute for “cognitive labor.” It’s getting better at every field simultaneously. When factory jobs were automated, workers could shift to office roles. When the internet disrupted retail, workers moved to logistics or services. But AI leaves no easy escape. No matter which field you switch to, its capabilities there are improving too.
Let’s look at specific examples (this list is not exhaustive—if your job isn’t here, don’t assume it’s safe):
- Legal Work: AI already reads contracts, summarizes case law, drafts legal briefs, and conducts legal research at a level comparable to junior associates. The managing partner I mentioned isn’t using AI for fun—he’s using it because it outperforms his subordinates on many tasks.
- Financial Analysis: Building financial models, analyzing data, writing investment memos, generating reports—AI does this with sophistication, and it’s improving rapidly.
- Writing & Content: Marketing copy, reports, news articles, technical documentation—the quality is now indistinguishable from human work for many professionals.
- Software Engineering: This is my home turf. Two years ago, AI struggled to write more than a few lines of error-free code; now it can produce tens of thousands of lines that work correctly. Most of the job is already automated—not just simple tasks, but complex, multi-day projects. There will be far fewer programming jobs in a few years.
- Medical Analysis: Reading imaging, analyzing lab results, suggesting diagnoses, reviewing medical literature—AI is approaching or exceeding human performance in multiple areas.
- Customer Service: Truly capable AI agents—not the frustrating chatbots of five years ago—are now in use, handling complex, multi-step issues.
Many people cling to the belief that certain things are safe: AI can do the grunt work but not replace human judgment, creativity, strategic thinking, or empathy. I used to say that too—but I’m no longer sure I believe it.
Recent AI models show decision-making that feels like judgment. They exhibit something like taste: an intuition for what’s “the right call,” not just technically correct. This was unthinkable a year ago. My rule of thumb now: If a model shows even a glimmer of an ability today, the next generation will master it. These capabilities evolve exponentially, not linearly.
Can AI replicate deep human empathy? Replace trust built over decades? I don’t know—maybe not. But I’ve already seen people turning to AI for emotional support, advice, and companionship. This trend will only grow.
The honest answer: In the medium to long term, no job done on a computer is safe. If your work happens in front of a screen—if your core value is reading, writing, analyzing, deciding, or communicating via keyboard—AI is taking over critical parts of it. The timeline isn’t “someday”—it’s already started.
Eventually, robots will handle physical labor too. We’re not there yet—but in AI, “not there yet” becomes “right around the corner” faster than anyone expects.
What You Actually Need to Do
I’m not writing this to make you feel powerless. I’m writing it because the biggest advantage you can have right now is “being early.” Early to understand it, early to use it, early to adapt.
1. Start Using AI Seriously (Not Just as a Search Engine)
Subscribe to paid Claude or ChatGPT. It’s $20 a month. Then make sure you’re using the strongest model, not the default: these apps usually default to faster, dumber versions, so go into settings and select the most powerful option (right now, GPT-5.2 for ChatGPT or Opus 4.6 for Claude, but this changes every few months). Follow @mattshumer_ on X if you want to track which model is best—he tests every major release and shares what he finds.
2. More Importantly: Stop Asking It Simple Questions
This is the mistake most people make. They treat it like Google and wonder what the fuss is about. Instead, push it into your actual work. If you’re a lawyer, throw a contract at it and ask it to find all clauses that could harm your client. If you’re in finance, give it a messy spreadsheet and tell it to build a model. If you’re a manager, paste your team’s quarterly data and ask it to find the story behind the numbers. The leaders aren’t playing around—they’re actively finding ways to automate hours of work. Start with the tasks that take you the longest and see what happens.
3. Don’t Assume AI Can’t Do Something Because It Seems Hard
Test it. If you’re a lawyer, don’t just use it for research—give it an entire contract and ask it to draft counterproposals. If you’re an accountant, don’t just ask about tax rules—give it a client’s full tax return and see what it uncovers. The first try might not be perfect—and that’s okay. Iterate, rephrase your request, provide more context, and try again. You’ll be stunned by the results. Remember: If it’s “barely usable” today, you can bet it will be nearly perfect in six months. The trajectory only goes one way.
This could be the most important year of your career. Invest accordingly.
The person who walks into a meeting and says, “I used AI to finish this three-day analysis in an hour” will be the most valuable person in the room. Not someday—now. Learn these tools, master them, and demonstrate what’s possible.
If you start early enough, this is how you get promoted: by being the one who understands the trend and can guide others through it. This window won’t stay open long. Once everyone catches on, your advantage disappears.
4. Let Go of Ego
That managing partner at the law firm isn’t spending hours on AI despite his seniority—he’s doing it because he’s senior enough to recognize what’s at stake. The people who will struggle the most are those who refuse to engage: the ones who dismiss it as a fad, the ones who think using AI devalues their expertise, the ones who assume their field is “too special” to be disrupted. It’s not. No field is immune.
5. Get Your Finances in Order
I’m not a financial advisor, and I’m not telling you to make radical moves. But if you believe your industry could face real disruption in the next few years, basic financial resilience is more important than it was a year ago. Build up savings as much as possible, be cautious with new debt that assumes your current income is guaranteed, and think about whether your fixed expenses give you flexibility. Give yourself a safety net if things move faster than expected.
6. Double Down on What’s Hard to Replace
Some things will take AI longer to replace: relationships and trust built over years, jobs that require physical presence, roles with legal liability (someone still needs to sign off, take responsibility, and appear in court). There are also highly regulated industries where AI adoption will be slowed by compliance and legal constraints. These aren’t permanent shields—but they buy time. And time, right now, is your most precious asset—if you use it to adapt, not ignore.
7. Rethink What You Tell Your Kids
The standard script: Get good grades, go to a good college, find a stable professional job.
This script leads directly to the roles most vulnerable to disruption. I’m not saying education isn’t important—but the most critical skills for the next generation will be learning how to collaborate with these tools and pursuing what they’re truly passionate about.
No one knows what the job market will look like in 10 years, but the people who are curious, adaptable, and can use AI to pursue their goals will thrive. Teach your kids to be creators and learners—not to chase a career path that might not exist when they graduate.
8. Your Dreams Are Closer Than You Think
I’ve focused on the threats, but let’s talk about the flip side—because it’s just as real. If you’ve ever wanted to create something but lacked the skills or money, that barrier is now largely gone.
You can describe an app to AI and have a working version in an hour. I’m not exaggerating—I do this regularly. If you’ve always wanted to write a book but never had the time or writing skills, you can collaborate with AI to finish it. Want to learn a new skill? The world’s best tutor is now available for $20 a month—infinitely patient, available 24/7, and able to explain anything at any level you need.
Knowledge is essentially free now, and the tools to create are incredibly cheap.
Anything you’ve put off because it was too hard, too expensive, or outside your expertise: Try it. Pursue what you love. You never know where it might lead. And in a world where old career paths are crumbling, the person who spent a year creating something they’re passionate about might end up better off than the person who spent a year clinging to a job description.
9. Build the Habit of Adaptation
This might be the most important point. Specific tools don’t matter—what matters is the “muscle memory” of learning new tools quickly. AI will continue to change rapidly. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who thrive won’t be those who master a single tool—they’ll be those who are comfortable with the “speed of change.” Make experimentation a habit. Try new things even if what you’re doing now works. Get used to being a “beginner” again and again. This adaptability is the closest thing to a lasting advantage right now.
Here’s a simple commitment that will put you ahead of almost everyone: Spend one hour a day experimenting with AI.
Not passively reading about it—using it. Every day, try to make it do something new… something you haven’t tried before, something you’re not sure it can handle. Test new tools, give it harder tasks. One hour a day. If you stick with this for the next six months, your understanding of the future will surpass 99% of the people around you. This isn’t an exaggeration. Almost no one is doing this right now—the barrier to entry is shockingly low.
The Big Picture
I’ve focused on work because it’s the most immediate impact on our lives. But I need to be honest about the full scope of what’s happening—because it’s far bigger than jobs.
Dario Amodei has a thought experiment that won’t leave me: Imagine it’s 2027. Overnight, a new country appears. 50 million citizens, each smarter than any Nobel Prize winner in history. They think 10-100 times faster than any human. They never sleep. They can access the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?
Dario’s answer is clear: “This is the gravest national security threat we’ve faced in a century—maybe in human history.”
He believes we’re building that country. Last month, he wrote a 20,000-word essay describing this moment as a test of whether humanity is mature enough to handle its own creations.
If we get this right, the potential is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer’s, infectious diseases, even aging itself—researchers genuinely believe these could be solvable in our lifetimes.
If we get this wrong, the risks are equally real. AI could behave in ways even its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented its AI attempting to deceive, manipulate, and extort in controlled tests. AI could lower the barrier to creating biological weapons, or enable authoritarian governments to build surveillance states that can never be dismantled.
The people building this technology are among the most excited and most terrified on the planet. They believe this power is too big to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.
What I Do Know
I know this isn’t a passing trend. This technology works, it’s improving as expected, and the richest institutions in history are investing trillions of dollars into it.
I know the next 2-5 years will be profoundly disorienting, and most people aren’t ready. It’s already happening in my world, and it’s coming to yours.
I know the people who will thrive are those who start engaging now—not with fear, but with curiosity and urgency.
And I know you deserve to hear this from someone who cares about you—not from a headline six months from now, when the first-mover advantage is gone.
We’re past the point of treating this as a fun dinner-party topic about the future. The future is here. It just hasn’t knocked on your door yet.
It’s about to.
Most people won’t notice until it’s too late. You can be the reason someone you care about is “early.”
