Just Start: Can you actually be "antifragile" in 2025?
Or is the world just changing far too fast?
In his 2012 book Antifragile, Nassim Nicholas Taleb argues that it’s better to be a taxi driver than a clerk in a large company, because you’re less ‘fragile’. A clerk, with only one source of income (his job), can lose it in an instant, while a taxi driver, though he’s vulnerable to downturns in demand, can shift in response to feedback and benefit from the occasional upswing.
Obviously this wasn’t a great choice of example, because something else was happening in 2012: Uber, which launched in 2009, was expanding to several new territories, radically undercutting the taxi drivers who’d spent years of effort or mortgage-sized amounts of money on their licenses to operate. Uber’s appearance, in fact, was basically a perfect example of a ‘black swan’ event: one that’s essentially unforeseeable but has enormous (and often bad) consequences for a lot of people. Taleb, who wrote a book about black swans and claims to have made a lot of money betting on them (not predicting what they would be, but just assuming that large-scale disruptive events will occasionally occur), doesn’t seem to have ever addressed this, but let’s hope that none of his readers bought cab medallions in 2013, when their value peaked at over $1,000,000 — they’re worth less than a fifth of that now.
With that in mind, let’s get to the point of this post: is there any way to be ‘antifragile’ in 2025? The world is changing perhaps faster than it ever has, not because of a single invention (the mechanised loom, the printing press, the car), but because of an accumulation of them, and what comes next seems impossible to predict. A decade ago, ‘learn to code’ was the mainstream mantra, accompanied by a vague sense that coding might one day be as essential as learning to read or write. Five years ago, being an artist or graphic designer or copywriter seemed pretty safe, with early iterations of AI missing the creativity, understanding, and nuance to match what even an entry-level person could do.
Now, AI is taking jobs from coders and creatives from the bottom up: junior-level work is being insta-generated by machines that do it for free, meaning fewer positions for people trying to climb the ladder. And it’s constantly getting better, sweeping through entire industries and threatening more at what feels like ever-increasing speed. When Antifragile came out, its principles seemed clear, even if some of them look wrong in hindsight. Now…what works?
Here’s what I think might work.
Trade in ideas
Here’s an interesting thing about AI art: if you’re a photorealistic artist, or you’ve spent years honing your ability to make crisp, clean digital fantasy art, you’re probably feeling quite threatened right now. But if you’re making the simplest, most basic, three-colour stuff possible, you might be fine, as long as you have good ideas. Nobody seems to have worked out how to replicate Alex Norris’ Webcomic Name yet, for instance.
Partly this is because a simple, clean style like this is easier to ruin with the single errant detail that AIs are so bad at avoiding, but partly it’s because what sells it is the combination of the art and the idea: something that, right now, AI is terrible at generating. AIs are good at replicating or (badly) riffing on things that have been done before, but terrible at coming up with unique insights or expressing them in surprising ways. One thing we haven’t lost yet, and probably won’t for a while, is the ability to come up with ideas like that. Which leads us to the next thing…
Build overlapping areas of expertise
When I first read Nick Bostrom’s Superintelligence (I’ve checked, it was 2017), I genuinely thought that AGI might fix the world if it didn’t turn us all into paperclips first: one of AI’s big promises seemed to be that it could synthesise vast amounts of literature and generate novel ideas about how to make incredibly efficient fuels, cure cancer, fix climate change, and so on. Unless I’m misunderstanding something, though, we are not even close to building that sort of AGI, and it’s far from clear that it’s even possible: what we have now are large language models that generate output based on the material they’ve already been trained on, aiming for the result they think you’re expecting. As explained in depth in this excellent video by Angela Collier, that means they can’t combine ideas to come up with new insights, or at least not in a helpful way. A recent article titled Can Google's new research assistant AI give scientists 'superpowers'? explains that Gemini was tasked with finding “new” ways of potentially treating liver fibrosis, but goes on to point out that ‘the drugs proposed by the AI have previously been studied for this purpose.’
AI, in other words, is great at replicating things that already exist, especially if those things aren’t particularly original or good: it can’t combine them to come up with genuinely new insights. You probably already know this if you’ve used it a fair bit: recently, ChatGPT struggled even to suggest a good name for a criminal mastermind rabbit.
These are all terrible.
What this means, I think, is that it’s more important than ever to be a T-shaped generalist, with one or two areas of deep expertise and a bunch of other interests that allow you to riff off that expertise in interesting ways. I recommend David Epstein’s Range all the time, but really — you ought to read it.
Have an identifiable ‘thing’
Here’s a thing about Ms Collier, mentioned above: her YouTube channel isn’t super-polished, doesn’t include lots of stunts or retention editing, and often just consists of her talking to the camera for an hour with jarring cuts and the occasional graphic over the top — but it’s compulsive viewing, and she’s currently averaging two hundred thousand views a video, sometimes for stuff with titles like Physics as Resistance: Bose-Einstein Condensates. She also has a Patreon with 2,000+ members, all of them paying at least $5 a month.
Why? Because she’s doing something that basically nobody else is doing: explaining quite complex science-related stuff, and how it relates to the real world, in a way that’s digestible and bingeable and very funny. When I finished listening to all of her freely available stuff, I wanted more, and her channel is the only way to get it. Obviously, building up an identifiable ‘thing’ like this isn’t easy — she’s got a PhD! — but in a world of AI slop, I think it’s going to become more and more essential.
Become a dry stone waller
It doesn’t feel like we’re very far away from robots that can take over traditional bricklaying, but you know what they’re absolutely terrible at? Dry stone walling: the art of building a wall without mortar, just by carefully stacking and fitting stones, which relies on years of practice and carefully built-up intuition. There’s probably a lesson there.
Look, none of this is easy. In a way, it’s a terrifying time, and I’m not convinced that any of this stuff will work. Maybe in ten or twenty years, we’ll be forced into some sort of UBI situation because AI has destroyed so many traditional career paths, or we’ll all be living in Waterworld. But right now, as we all play career Frogger, our best collective bet seems to be thinking creatively, combining ideas, and leaning into the stuff that AI doesn’t seem to be getting any better at. I don’t know if that’s enough to make you Antifragile, but it’s better than spending a million dollars on a taxi medallion.
Have a great weekend!
Joel x
Stuff I’ve done
11 things you should know about using the gym
Me for the Guardian with a quick primer on how to use the gym properly, including the thing I see about 20% of squat rack users doing wrong.
Science books I wish I’d read earlier
Self-explanatory title: I’ve read a lot of popular science books, but there are a few I’d have got much more from if I’d read them earlier. These are them.
Stuff I like
📝 Article - The David Foster Wallace Disease
This seems to really get to the heart of something I like about DFW: what writer Sasha Chapin calls “a way of generating luminosity by perfectly capturing tiny pieces of sensation spliced out from even the most banal moment of consciousness.”
🎥 Video - Starships with Starships
As someone in the comments says, this video feels like the old days of YouTube, when people just made cool stuff because they wanted to, without any expectation of making money from it. It’s basically Nicki Minaj’s Starships with a lot of sci-fi footage cut to the vocals, but much more joyful than that sounds.
Like this newsletter?
If you’re keen to support this newsletter and the other stuff I do, I’d love it if you became a paid subscriber: you don’t get anything extra (I don’t paywall anything), but it helps me write longer and more original posts, as well as feeding my family. I really do appreciate it.
Also, if you’ve got a book or an article you think I should read, or something you think I should watch or try, please send a suggestion my way.
And finally, if you haven’t already, please check out my YouTube channel, where I deep-dive into stuff like productivity, lifelong learning, piano and Brazilian jiu-jitsu.