Some Thoughts on the New World
During the mid-2000s, I can confirm that in US elementary schools we were still being told a romanticized version of the European colonial escapades: "in 1492, Columbus sailed the ocean blue" and so on. Moral misgivings aside, there is something kind of magical about imagining sailing to the proverbial, or perhaps even literal, edge of the world, half expecting the ship to teeter over the edge into the abyss.
The people on those ships selected into what, even flat Earth theories aside, was an incredibly risky endeavor. In a pure resource-creation and innovation sense, it's good that there were people who, for whatever reason, were driven enough to do it.
In 2025, we find ourselves sailing into the unknown once more (ChatGPT came up with "in 2025, the world trusted AI to lead and drive", which feels a bit premature but isn't half bad). But the difference is we're all in the same boat. There's only one basket for our eggs to go in.
It's a blessing and a curse to live on the precipice of history. My fear is that humans aren't smart, selfless, or organized enough to make this voyage properly, and I'm worried I will play a part in screwing it up by not seeing something ex post obvious, as all the best economics is.
You can compare this to the Industrial Revolution, sure, but given the scale and the sheer prevalence of information, the level of social and cultural awareness is inherently very different, even if you buy that the technological analogy is sound. That transformation took several decades to seep into the collective consciousness even at a purely visibility level; here we will be reflecting in real time. The real-time-ness of it all will make this unlike any history before.
I have had so many thoughts about what we are about to (potentially) go through, but the task of formalizing something that at present is so intangible feels unnatural. So perhaps it's best to just write down some main ideas that seem essential, and in some cases underappreciated, for gauging the path ahead. They are ordered by vanilla-ness.
Some Good News: Fewer "But Actually"s about Economic Theory
One of the primary things that got me hooked on economics was thinking that the models in Econ 101 had obvious flaws, and I wanted to climb the intellectual ladder and learn/improve them. Of course, further up the ladder, you can really appreciate the beauty and necessity of these assumptions for communicating central ideas. Still, it seems to me a lot of the quibbles with standard frameworks are going to have even less ground to stand on. A common complaint about economics from non-economists is that humans aren't computers and have more complicated behavioral and heterogeneous tendencies that are essential to account for. We are of course well aware this can be the case, and have devoted a lot of cognitive horsepower to figuring out exactly when the basic model is and isn't helpful for getting the right answers. But if we just have a bunch of server farms humming away, literally maximizing a utility function, with so many economic agents out there that the environment truly resembles a continuum of consumers, firms, and participants in financial markets, then maybe all of the work we've done with standard paradigms will be even more useful.
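To make the "literally maximizing a utility function" picture concrete, here's a minimal sketch of the textbook consumer problem, with Cobb-Douglas preferences and made-up prices and income (all the numbers are purely illustrative, not drawn from anything above):

```python
# Textbook consumer problem: maximize Cobb-Douglas utility
#   u(x1, x2) = alpha * ln(x1) + (1 - alpha) * ln(x2)
# subject to the budget constraint p1 * x1 + p2 * x2 = m.
# The closed-form demands are x1 = alpha * m / p1 and x2 = (1 - alpha) * m / p2.
# All parameter values below are hypothetical.

def cobb_douglas_demand(alpha: float, m: float, p1: float, p2: float):
    """Return the utility-maximizing bundle (x1, x2)."""
    x1 = alpha * m / p1
    x2 = (1 - alpha) * m / p2
    return x1, x2

x1, x2 = cobb_douglas_demand(alpha=0.3, m=100.0, p1=2.0, p2=5.0)
print(f"x1 = {x1:.2f}, x2 = {x2:.2f}")  # x1 = 15.00, x2 = 14.00
```

An AI agent that actually executes this kind of problem just is the Econ 101 consumer, no behavioral caveats required; that's the sense in which the standard paradigm could become more, not less, literal.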
The flipside, of course, is that AIs as economic agents may have other nuances that were never previously relevant, which will change the tradeoffs.
Learning from Malthus
While in some ways it arguably makes more sense to appeal to results from, e.g., full-information, rational-expectations models, there's something that really bothers me about a lot of the AI-based analysis I read, and it has made me apprehensive about writing my own out of fear of the same pitfall.
"We observe X in the data, so projecting out X + AGI, we will see Y." I do agree with the above point that AI agents will behave in ways that make a lot of past economics useful for thinking about the world, but the environments in which these agents interact, and their incentives and considerations, may be quite different.
Modeling AI as a pure productivity shock or as a labor-augmenting/replacing technology is an obvious starting point, but there will be fundamental changes to the structure of our economy that will render some economic laws of gravity obsolete. It is hard to know which ones, and what is important to focus on. To me, the optimal strategy for AI research at the start of this stream is to pick one way (besides the obvious tech boom) in which structural change will happen, and focus completely on that. That will essentially give us a large matrix of partial derivatives. If we estimate those partial-derivative worlds well, then it's a matter of assessing their likelihood and, for the most pressing worlds, how they might interact. But I think the most basic thing we have to focus on is really trying to think of specific ways in which our old models will fail, because that is the world we are obviously the least prepared for.
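One way to read the "matrix of partial derivatives" strategy, in notation I'm introducing purely for illustration: let y be the outcomes we care about and s the candidate structural changes, and think of each narrowly focused study as pinning down one entry of the Jacobian.

```latex
% y = (y_1, ..., y_m): outcomes of interest (wages, asset prices, ...)
% s = (s_1, ..., s_n): candidate structural changes AI could induce
% Each focused study estimates one entry of the Jacobian, holding the
% other channels fixed:
\[
J_{ij} \;=\; \frac{\partial y_i}{\partial s_j}.
\]
% Assessing the path ahead then means weighting the columns by how likely
% each s_j is, and worrying separately about the interaction terms
% \partial^2 y_i / (\partial s_j \, \partial s_k) for the most pressing pairs.
```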
The other factor, discussed a bit more later, is that we will potentially have actors manipulating the objectives of AIs, and in this sense they will be exploring areas of the "preference space" that we have never considered on a large scale.
"Robot Umpires" and Self-Driving Cars
A paradox of the integration of new technology is that it is not "good enough" if it's merely better. That is, it has to be so good that it overcomes a status quo bias. It also becomes easy to scapegoat. Say, for example, robot umpires in baseball call out 1 in 1,000 batters who were in fact safe. None of the 999 correct calls will get any praise, but the 1 incorrect one, which clearly would have been close enough that a human umpire would fare no better, will create a firestorm. Additionally, the amount of booing that occurs in away stadiums even when the robot ump makes the correct call lets you know that "improvement" and fairness are only relative terms. During large changes, counterfactual thinking tends to be biased towards the specific counterfactual of nothing changing. Granted, sports fandom is not exactly a venue where even the most ardent rational expectations advocate would insist against a behavioral explanation. But this extends more broadly, even if we don't have enough examples yet to really see it. Anytime a self-driving car gets into an accident, people are outraged without even looking at the context. ChatGPT output with hallucinations goes viral, but it's not obvious in many cases that the expected number of human errors would be any lower. People will quip, "See, you can't trust these tools!" Even though an unemotional bar might be whether we can place weakly more trust in these tools than in the modal human substitute, at least at the outset that won't be the standard applied. I see this as a major friction to diffusion. In the words of Tyler Cowen, humans are the bottleneck.
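As a toy illustration of the "expected number of human errors" point, with entirely made-up error rates and call counts:

```python
# Toy comparison (all numbers hypothetical): even when the automated system
# is far more accurate, it still generates a steady stream of salient
# mistakes to point at, while the avoided human errors stay invisible.
close_calls_per_season = 100_000
human_error_rate = 0.02    # assumed: humans miss 2% of close calls
robot_error_rate = 0.001   # assumed: the automated system misses 0.1%

expected_human_errors = close_calls_per_season * human_error_rate
expected_robot_errors = close_calls_per_season * robot_error_rate

print(f"expected human errors: {expected_human_errors:,.0f}")  # 2,000
print(f"expected robot errors: {expected_robot_errors:,.0f}")  # 100
# The robot is 20x better here, yet those 100 mistakes are the only ones
# anyone will ever talk about.
```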
There's also something more subtle to consider along these lines. Suppose we entrust an AI with some non-trivial decision. Suppose further it is optimized so that it gives the "right" decision. But what would that entail? The AI would have to internalize that it is in fact a tool that we, humans capable of error, have built. So if, especially in these early transition stages, it outputs something that's far from our prior belief about the "right thing to do," it will have to know that there may be doubt about its output. So optimal design will need, at a minimum, to include some level of reasoning traceback (e.g., DeepSeek's stream of consciousness), and moreover the AI will need to enter into a sort of game-theoretic environment with its operators, which seems like it could be a problematic arrangement. It's one thing when you work at a company and know your bosses may not be receptive to the new face with all the bright ideas. It's another when the differences in cognitive capacity are orders of magnitude. It seems difficult to program intelligent agents to incorporate this sort of equilibrium feedback (necessary to give actionable advice) without also necessitating a level of duplicity that we obviously want to avoid for alignment purposes. This seems like a very easy road to a paperclip problem, at least on a less dramatic scale.
Gov 3.4
An immediate concern is that there will be imbalances because of access to AI tools (and, implicitly, performance heterogeneity across tools). One easy way to think about it is through a level-k paradigm (in the simplest case, level-0 agents behave randomly, level-1 agents believe everyone else is level 0, and so on). Effective imbalances in the cognitive playing field could lead to unraveling. As discussed above, it already seems like AI agents will need to be able to operate with at least a mildly antagonistic flavor in their processing. Now consider geopolitics. It's very easy to worry about the implications of AI in the "wrong hands" in terms of a hacker or dictator, but even in a world with solely rational actors, things could become unwieldy quickly. Can global cooperation be sustained in a repeated game? Well, if one country has an absolute advantage in higher-order thinking, then some sort of "winner take all" scenario could emerge.
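For anyone who hasn't seen it, here's the level-k idea in its most standard setting, the "guess 2/3 of the average" game (the parameters are the usual textbook ones, nothing specific to AI):

```python
# Level-k reasoning in the p-beauty contest ("guess 2/3 of the average").
# A level-0 agent guesses at random (mean 50 on [0, 100]); a level-k agent
# best-responds to a population it believes is entirely level k-1.

def level_k_guess(k: int, p: float = 2 / 3, level0_mean: float = 50.0) -> float:
    guess = level0_mean
    for _ in range(k):
        guess *= p  # best response to opponents one level below
    return guess

for k in range(5):
    print(f"level {k}: guess {level_k_guess(k):.1f}")
# level 0: 50.0, level 1: 33.3, level 2: 22.2, level 3: 14.8, level 4: 9.9
```

Whoever sits one level deeper in the hierarchy systematically wins, which is the uneven-cognitive-playing-field worry in miniature.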
It's almost like raising a child. You want them to see the world as this rosy place. But you know that seeing it that way will actually be harmful. So their mind has to be "poisoned" with thinking through how other people may be thinking adversarially. In a limit case where public policy is completely automated, we will essentially have a bunch of agents trying to outsmart each other for marginal gain. This obviously already exists; the evolution of China's posturing toward the US from détente to now is a great example. But to formally encode geopolitical objectives would mean that there really are no guardrails. Of course we can try to manually incentivize cooperative preferences, but there's this higher-order anxiety of wondering if everyone else is properly incentivized. And again, we could have some gradual breakaway scenario under an uneven playing field.
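To make "can cooperation be sustained in a repeated game" concrete, here is the textbook grim-trigger condition in a repeated prisoner's dilemma (standard notation, not a claim about any particular geopolitical setup):

```latex
% Per-period payoffs: C from mutual cooperation, D from defecting while the
% other side cooperates, P after cooperation permanently breaks down
% (D > C > P), with discount factor \delta. Grim trigger sustains
% cooperation iff cooperating forever beats defecting once and being
% punished forever after:
\[
\frac{C}{1-\delta} \;\ge\; D + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{D - C}{D - P}.
\]
```

The breakaway worry maps onto this cleanly: a large enough technological edge raises the leading country's deviation payoff D and softens its punishment P, pushing the required discount factor toward (and eventually past) 1, at which point no amount of patience sustains cooperation.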
It's difficult to represent this idea without wandering into a realm that seems a bit paranoid, but the basic point is that the new arms race will be over an operating system (a country-level operating system). Given the lack of parity in technological access, a gulf between winners and losers seems inevitable. Perhaps the best solution will be to resign ourselves to a toxic relationship with other parties, where we acknowledge some level of double-crossing and don't keep tying the knot.
What's this all for, anyway?
Now let's consider some ostensibly best-case, closed-economy scenario. Superintelligence arrives, diffusion is seamless, economic output booms. We're immediately way better off, right?
Economic diffusion is an asterisk just as important as the technological kind. How would these new gains be shared? Let's suppose the government gets a share of the surplus large enough to fund a massive UBI, one that would presumably be increasing with time in this exponential-growth scenario. It doesn't take too much thinking to see all the problems immediate wealth for all citizens would create. We simply do not have the social infrastructure to handle that transition. There's a common wisdom that financial insecurity is the root of a lot of systemic problems like crime and drug addiction, but the opposite end of the spectrum arguably has even more amplified versions of those problems, driven by hedonism (at least following what is essentially a helicopter drop without the requisite structural or cultural changes).
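Just to pin down the "increasing with time" arithmetic, a back-of-the-envelope version with symbols I'm making up for illustration:

```latex
% \tau: share of output captured by the government, Y_0: initial output,
% g: growth rate of output, N: population. The per-person transfer in year t:
\[
b_t \;=\; \frac{\tau\, Y_0 (1+g)^t}{N},
\]
% which grows without bound as long as g > 0 and \tau and N stay roughly
% fixed; the helicopter drop isn't a one-off, it keeps getting bigger.
```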
Okay, fine, maybe blindly doling out money isn't the best response to transformative growth. So let's instead have some sort of safety net to take care of basic needs and then invest the rest in public goods that can't lead to an overindulgence problem. Even in this benevolent-government scenario, there will likely still be some social unrest over the gains that could, in principle, be going directly into people's pockets. Presumably there will still be some sort of income distribution based on ownership of certain firms, and there will be lots of cries of unfairness. But let's leave that aside; perhaps public investment will be so robust that people will be happy with their overall quality of life.
I think what turns my stomach the most about imagining a post-transformational world is thinking about the existential questions, which actually come from pure economic realities. In this scenario where there's a UBI large enough to make everyone comfortable but not rich, people essentially become economically irrelevant. We will consumption-smooth by construction -- confined to do so, not because we are solving a problem. All interesting dynamics in the economy will be driven purely by firms and AI agents; many goods, factor, and asset prices will be driven largely by something other than household decisions. Again, I see the optimal strategy for economic research about transformational AI as being to focus on one thing that can change dramatically, and this is what I personally would choose to focus on. A world where "consumers" are equivalent to government spending in a first-year Macro class; it's essentially just goods that get thrown into the ocean. Instead, the utility component that matters is purely externality-based; humans need not be considered optimizing agents.
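Here's what "consumption smoothing by construction" means in the standard language, using textbook notation (CRRA utility, discount factor beta, gross return R), not any particular model of the scenario above:

```latex
% Standard Euler equation with CRRA utility u(c) = c^{1-\gamma}/(1-\gamma):
\[
c_t^{-\gamma} \;=\; \beta R \,\mathbb{E}_t\!\left[c_{t+1}^{-\gamma}\right].
\]
% With a fixed transfer b each period, no income risk, no assets to speak
% of, and \beta R = 1, the solution collapses to c_t = b for all t:
% consumption is flat not because households solved an interesting
% intertemporal problem, but because the environment leaves nothing to decide.
```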
And again this leads me back to an odd proposition to consider. You often hear people talk of poorly run cities as "open air prisons". If we are suddenly thrust into a world where most everyone is symbolically hand-to-mouth, how will we deal with that? Everyone will take up knitting? What would happen if 8 billion people were suddenly given permission to stop?
AI is perhaps very well timed for the current population declines and the potential shortage of labor and funding for pensions. But eventually, it seems like one problem is created by solving another. If most resources have to be used for investment (to create more resources), then this exponential growth is just being used to create more growth for the sake of creating growth. Ending a lot of preventable suffering is nothing to sneeze at, but if economic struggles are removed, other struggles will fill the void, one in which our biological propulsion through time becomes a nuisance. This angst will first be felt by white-collar workers made redundant, but soon it will be an inescapable cloud. Yes, maybe buildings and transit can be made better than ever. But for what? Limiting the growth of transformational technology to give us more time to prepare feels like a fool's errand. So by extension, whether we have to confront this spiritual crisis of confidence seems to come down to a matter of luck.