AI existentialism, the risk that superintelligent AI could threaten the very existence of humanity, hasn’t quite hit the mainstream, but it’s the most important debate happening in technology. As more and more people use AI in their day-to-day lives, I suspect “x-risk” will be right up there with nuclear warfare and the preservation of democracy as a mainstream concern.
My intuition tells me a) the cat’s already out of the bag in terms of AI proliferation and b) some combination of the public and private markets needs to incentivize defensive technology as strongly as the private market incentivizes offense (in this case, ever-improving models). In other words, we need the “antivirus of AI” companies to come online soon.
But I also want to put forth a hypothesis, which is that we as a civilization already have some level of immunity against potentially existential technology.
This immunity has to do with gross domestic product (GDP), and the speed of light (kinda).
GDP, flawed as it may be, is incredibly useful for comparing different parts of the world and measuring progress over time. The formula for calculating it is straightforward:
GDP = C + G + I + Nx
where C is consumption, G is government spending, I is investment and Nx is net exports.
When people say they’re going to “stimulate the economy” as an excuse for their excessive shopping, they are being more literal than euphemistic. Consumption is the sum of all the dollars all of us spend on goods and services – massages, music, mountain bikes and everything in between.
In the US, C carries the economy, accounting for ~70% of GDP. The other ~30% is largely investment and government spending, both of which ultimately translate into consumption in future years.
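As a rough sketch, here’s what that breakdown looks like in code. The dollar figures are ballpark, illustrative numbers (roughly in line with recent US data), not official statistics:

```python
# Back-of-the-envelope GDP breakdown (illustrative figures, trillions of USD)
# GDP = C + G + I + Nx

components = {
    "C (consumption)": 19.0,          # household spending on goods and services
    "G (government spending)": 4.8,
    "I (investment)": 4.9,
    "Nx (net exports)": -0.9,         # the US imports more than it exports
}

gdp = sum(components.values())
print(f"GDP ~= ${gdp:.1f}T")

for name, value in components.items():
    print(f"{name}: {value / gdp:.0%} of GDP")
```

Run it and C lands at roughly two-thirds of the total, which is the whole point of the next line.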
In summary, C drives GDP.
GDP is the most macro of all macroeconomic metrics. It’s helpful, though, to think about C through a microeconomic lens. I’ll use technology companies as examples, given that it’s the space I know best and the one most thematically relevant to AI.
One of the most common ways to segment businesses is through a string of rhyming characters: B2B vs B2C. While “business to business” and “business to consumer” apply to any industry, the distinction is particularly useful in technology because B2B tech is by far the most abstract quadrant in the matrix below:
The average person can tangibly understand “we make the little bolts and screws that go into dishwashers” better than they can understand “we use proprietary software and hardware to design graphics processing units.”
If you work in B2B software, your grandma can’t explain to her friends what exactly you do, at least not in the same way she could if you worked in consumer tech (“cell phones!”) or anything outside of tech.
Most of the super well known tech companies are B2C. This checks out with the chart below, a top 25 of US tech companies, by market cap. (Note: I’ve grouped them based on who their core products are for, not how they make money. For example, Meta is B2C because its flagship products are Facebook, Instagram and WhatsApp, despite generating revenue primarily from advertising.)
The top 10 is loaded with the familiar names of consumer tech, including five of the Magnificent 7. Network effects are most powerful in consumer tech, creating winner-take-all dynamics.
By the same token, there are far more big B2B tech companies than B2C. Calling them “B2B” is a bit misleading, though, because businesses (the second B) don’t simply buy stuff for the sake of buying stuff. They buy stuff as inputs to their business, which exists to sell something else to someone else. That “someone else” can be another business or an individual consumer; in other words, the original company in question can be thought of as a B2B2B or B2B2C.
And that original company probably doesn’t produce all of its raw goods, so it buys stuff from other businesses. And to complicate matters further, sometimes companies buy things from each other (e.g., Amazon uses Salesforce for pipeline management and reporting…and Salesforce itself is built on Amazon Web Services’ cloud infrastructure).
Why do we care?
Remember, the “C” in GDP comes from consumers, aka humans. In capitalist systems, everything is driven by the C. C can be a single human consuming a hot dog or it can be 8 billion of them trying to get a WiFi connection.[1]
Every business – in fact, every organization – ultimately exists upstream of serving Cs. It can be direct (B2C) or five steps removed (B2B2B2B2C).
All the B2B2…B spending, plus government spending (roughly speaking, the G + I in GDP) pales in comparison to the C.
The Dutch government has a 5B tentacle here, via its subsidies to ASML, the country’s largest and most important technology company. ASML supplies extreme ultraviolet (EUV) lithography machines to TSMC; TSMC manufactures semiconductor chips for NVIDIA; NVIDIA’s GPUs power Meta’s high-end AI infrastructure. And it all shows up as a targeted ad of the cutest puppy Halloween outfit ever, right in your Instagram feed.
This is an overly simplified rendering of a complex value chain with several hundred suppliers – and I’m taking liberties mixing and mashing the US benchmark with a very international set of actors – but the point is, the C is a constant. It is fundamentally a part of every single value chain.
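If it helps, here’s a toy sketch of that same chain as a data structure – the names are just the examples above, and the only point is that the last link is always a C:

```python
# Toy model of the value chain sketched above: a sequence of links, each either
# government (G), business (B) or consumer (C). However long it gets, it ends in C.

value_chain = [
    ("Dutch government", "G"),   # subsidies
    ("ASML", "B"),               # EUV lithography machines
    ("TSMC", "B"),               # chip manufacturing
    ("NVIDIA", "B"),             # GPUs
    ("Meta", "B"),               # AI-powered ad targeting
    ("Instagram user", "C"),     # the puppy-costume ad's final stop
]

assert value_chain[-1][1] == "C"  # every chain terminates at a consumer

print("2".join(kind for _, kind in value_chain))  # G2B2B2B2B2C
```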
Earlier I said the speed of light played a role in this. Unfortunately, that was mostly a lie, but it may help as a mnemonic device to remember the key takeaway of this essay: “C” is a constant, just like the speed of light, which is often denoted by the variable c (as in E = mc^2). Sorry, and de nada!
While the existence of the consumer, or C, is fundamental, the makeup of C – our tastes, ambitions and ways of living – is constantly evolving. What we desire, value and connect with shifts over time, so our consumption diets move in tandem.
Music, to use a universal example, is about as C-y as it gets. We love music for a million reasons, yet we can’t really explain why we love it without using words like “feeling” or “inspiration” or “soul.”
However, music can be distilled into an algorithm – most of what we Westerners like is built on the very simple major scale. It’s uncomfortable to admit this. But I ask you: if you’re no longer in your music-formative years – and if you’re reading this, sorry, you’re way past them – doesn’t pretty much everything in the Top-40 feel incredibly algorithmic?
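To see how little “algorithm” that actually takes, here’s a minimal sketch that spells out a major scale from its interval pattern (the note-naming is simplified – sharps only – but the step pattern is the whole trick):

```python
# The major scale is a fixed pattern of whole (2 semitones) and half (1 semitone) steps.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole, whole, half, whole, whole, whole, half

def major_scale(root: str) -> list[str]:
    """Build a major scale starting from `root` by walking the step pattern."""
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        i = (i + step) % len(NOTES)
        scale.append(NOTES[i])
    return scale

print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
```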
We also know that:
Throwing more and more data at algorithms gives reliably better results;[2]
Current instantiations of generative AI are good enough to do mind-bending things;
We’re only a couple of years into this epoch.
So what happens when literally the entire Top-40 is AI-generated? Will people still deeply love these songs? Will anyone remember them?
Or does music itself become a museum, where we forever appreciate the richness of pieces produced in a certain era, timestamped with the acknowledgment that their creators were limited by mere imagination and the most primitive of tools? (Picture the vinyl cover of Sgt. Pepper’s Lonely Hearts Club Band atop a mantel, like the bust of an Athenian philosopher, cold and frozen…)
Whoa whoa whoa, we’re making a crazy leap in assumptions, aren’t we? It’s one thing for a catchy AI-generated song to get air time; it’s another for music as we know it to go away. What about concerts, merchandise, videos, licensing?
My answer is simply: I don’t know. At some point, every artist whose work I love will die, and I can’t personally imagine loving AI-generated music, even if it somehow overcomes the bias of not being “man-made.” So I’ll probably just keep listening to the Beatles, not unlike how my father still plays Hearts on a Windows machine.[3]
That “if” is pretty important. IF AI-generated art, music and content take over, it will be because they have aligned with our human needs and tastes.
Once upon a time, our highest form of entertainment was watching people fight each other, plus lions, to the death. So it’s hard to say how our preferences might evolve. But in the abstract, we will allocate our entertainment budgets differently. This could be something digestible, like more stand-up comedy and live music, or something more intellectually apocalyptic, like all of us morphing into the perma-tourists in WALL-E.
Given the current and foreseeable cost structures of generative AI, as well as the built-in immune response of C, the truly good stuff is safe for now. We’re at least a few years (hopefully decades) away from AI making Oscar-worthy films. But lower-end filler content – cheap reality shows, aggregator news articles and, in my case, anything in the Top-40 – will start to look a lot different.
Whatever happens, it will be market-driven – i.e., artists and publishers will have to make the economics work, and consumers will have to like it.
This essay is not meant to suggest we have inherent, fail-safe protection against AI existentialism. That’s hardly the case – as with any new technology, we will need culture, norms and institutions to do the hard work of balancing progress and self-preservation.
I framed this as a hypothesis, and I’ve only really tested that hypothesis against technology as a stand-in for all commerce, and music as a stand-in for art and consumption. There’s a whole lot more to unpack, and this post would turn into a book if we did the topic justice. Frankly, we need many books, many theories and lots of testing because, well, existential topics should be taken seriously.
I hope to provoke thought and serve as a reminder that the incentives propping up civilization are ultimately pro-human forces. We are resilient, and we self-correct over time toward what is fundamentally good for us, even when it appears contradictory in the moment. The C is constant, for better or worse.
[1] Btw, why didn’t anyone tell us we’re past 8 billion people?? https://www.census.gov/popclock/
[2] “The Bitter Lesson” is a must-read on this: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[3] A lonely hearts club, if you will.