When ChatGPT made its debut on November 30, 2022, it unleashed a wave of AI hype, and in the three years since, AI has taken on an outsized role not just in markets, but also in our lives. For much of that time, the AI story has been told by its advocates and its salespeople, and the companies in the AI ecosystem have benefited. Not surprisingly, given that its narrators benefit from this growth, that story has emphasized the positive, with dazzling AI use cases and optimistic extrapolation of the productivity gains from its adoption. In the last few months, we have seen cracks emerge in the AI story, with investors wondering when, and in what form, the immense investments in AI architecture will pay off, and how, if they do pay off, the businesses that they disrupt will fare. That disquiet has played out as negative market reactions to new AI investments at Meta and Amazon, a markdown in software company market capitalizations and a sell-off last week, in response, at least partially, to an AI scenario assessment from Citrini Research, a publisher of macro and stock research. Given that I know very little about the technology of AI, and that my macroeconomic knowhow is pedestrian, my intent in this post is less about promoting my favored AI scenario, and more about providing a framework for you to develop your own.
The Citrini AI Assessment - Report and Responses
The Citrini AI assessment came out on February 22, 2026, and it starts with a preface stating that it is presenting a scenario, not a prediction. I do have issues with that opening, and I will come back to them later, but the report itself laid out a story for AI that unfolds with a dark end game for the economy, where by June 30, 2028, the AI disruption has unsettled businesses and displaced workers, with unemployment rates rising above 10% and the market down almost 40% in response. There have been other AI doomsayers, but many of those doomsday scenarios are built around the storyline that AI will not live up to its promise, and the pain comes from having over-invested trillions of dollars in building its architecture. In contrast, the Citrini AI story is built on the expectation that not only does AI work well at doing tasks currently performed by white collar professionals, across a range of firms, but that its adoption happens very quickly. The pain in the Citrini story comes from that disruption creating substantial job losses, especially among higher-earning workers, and the resulting loss of income driving these job losers to cut back on consumption. The ripple effects play out across businesses, with default risks and spreads rising, private credit collapsing and the market and economy pricing in the pain.
I do think that there are major flaws in the steps leading to the economic implosion in the Citrini assessment, but credit should be given where it is due. I have always been troubled by how much we have worshiped at the altar of disruption in this century, putting the founders of disruptors on pedestals and preaching disruption's virtue. In keeping with Joseph Schumpeter's description of capitalism as built around creative destruction, I do believe that a vibrant and dynamic economy needs a shake-up and challenging of the status quo, but disruption comes with costs to the businesses that are disrupted, and to the people who work in them. There is much to celebrate, as consumers, in terms of choice and price from the growth of online retail, but that does not take away from the devastation that has been wreaked on brick-and-mortar retail and its constituent parts. Ride sharing has brought car service from its nineteenth century ways into the twenty first century, but at the expense of yellow cabs and conventional car service businesses. The reason that many AI advocates took issue with the Citrini report was precisely because it bought into their sales pitch of how AI bots can not only do what lawyers, bankers, software engineers and consultants do, but also do them better, and then asked the question of "what then?"
The Citrini AI scenario must have hit some targets, because in the days since, we have been flooded with scenarios countering Citrini and arriving at different outcomes. While I was not surprised to see Goldman Sachs, Moody's and JP Morgan jump in with their AI scenarios, with more benign outcomes for the economy, where the job loss and income effects from AI are modest and temporary, I was surprised to see Citadel wade into the argument, with a direct rebuttal to Citrini, which sees a much more positive end game from AI disruption, and is built around three pillars. The first is the current data on jobs and layoffs in the businesses most directly targeted by AI, such as software, where they note that while jobs have been shed, the job losses have been modest, and AI adoption trends don’t show breakouts consistent with the speedy disruption predicted by Citrini. The second is history, where they look at disruptions in the past (PCs, the internet) and note that none of them have been speedy or have created the job losses or economic collapses predicted in the doomsday scenario. The third is grounded in macroeconomics, where they point to the inconsistency of assuming that a large positive productivity shock, from AI’s success, will play out as a large negative shock to the economy and market in which it happens.
Completing the AI story
The problem with all of these AI scenarios is that they are rooted in the weakest of responses to uncertainty, which is either to pick a scenario and describe it in detail, without establishing, at least in qualitative terms, how likely that scenario is in the first place, or to list out a whole host of scenarios, without making judgments on the likelihood of any of them. It is entirely possible that what Citrini was presenting was a "worst-case" scenario (I read through the report and could not get a sense of whether this was so, and the subsequent responses from Citrini have only muddied the waters), a "low likelihood" scenario or the "likely scenario" of how AI will unfold. If it is a likely scenario, and you buy into the pitch, the investment and personal consequences will be dramatic, since it is entirely possible that, if you are a white-collar worker, you may have lost your job by June 2028, and your savings, if invested in stocks, would have taken a beating. If it is a "low likelihood" scenario, and you are exposed, because of your job, age and portfolio composition, you should consider buying protection, but if it is a worst-case scenario, it is almost entirely useless, except for shock value.
Point Estimates and Probabilities
For much of its history, financial analysis has been built around point estimates, where you identify key drivers, estimate the effects on your bottom line (earnings, cash flows) and make your best judgments. Thus, when valuing a company, you estimate the earnings growth on base year earnings, how much of those earnings you will reinvest to deliver that growth to get to cash flows, and discount those cash flows back at a risk-adjusted rate to get to value. The problem with point estimates, when almost everything is uncertain, is that you will be wrong 100% of the time, though you may still make money, if you are wrong in the right direction.
Financial analysts and economists have been slow in adopting and using probabilistic approaches, where point estimates are replaced by distributions, and a single judgment on an outcome by a distribution of outcomes. One reason, at least early on, was that economists and financial analysts often did not have rich enough data or powerful enough tools to use decision trees, simulations or scenario analysis in making their macroeconomic and investment judgments, but that is no longer true. Another reason may be that many in this group are uncomfortable with statistical distributions or probability estimates and, because of that discomfort, stay away from using them. The third reason, at least for a subset of analysts, is a concern that being open about estimates and the errors in those estimates, which is visible to all in probabilistic approaches, will be viewed as a sign of weakness or lack of conviction on their part. I have a short paper on using probabilistic approaches, where I look not only at when you may want to use which approach (I look at decision trees, simulations and scenario analysis) but also have a short review of statistical distributions, if you are interested.
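To make the contrast concrete, here is a minimal sketch, with entirely hypothetical numbers, of the difference between a point-estimate valuation and a probabilistic one, where the growth and discount rates become distributions and a simulation produces a distribution of values rather than a single number:

```python
import random

def dcf_value(base_earnings, growth, payout, discount_rate,
              years=5, terminal_growth=0.02):
    """A bare-bones valuation: grow earnings, treat the portion not
    reinvested as cash flow, discount it back, and add a terminal value."""
    value, earnings = 0.0, base_earnings
    for t in range(1, years + 1):
        earnings *= (1 + growth)
        value += (earnings * payout) / (1 + discount_rate) ** t
    terminal = earnings * payout * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return value + terminal / (1 + discount_rate) ** years

# Point estimate: one judgment per driver, one value out
point_value = dcf_value(100, growth=0.10, payout=0.60, discount_rate=0.09)

# Probabilistic: the drivers become distributions, and the output is a
# distribution of values that makes the estimation error visible
random.seed(7)
values = sorted(
    dcf_value(100,
              growth=random.gauss(0.10, 0.04),           # uncertain growth
              payout=0.60,
              discount_rate=random.uniform(0.07, 0.11))  # uncertain discount rate
    for _ in range(10_000)
)
median_value = values[len(values) // 2]
downside_value = values[len(values) // 10]  # 10th percentile outcome
```

The point estimate and the simulation share the same model; the simulation simply admits, through the spread between the median and the downside value, how much the answer depends on inputs you cannot know.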
Since Citrini specifically titled their AI thought piece as a scenario, I will stick with scenario analysis in this post. In its sloppiest form, and one that has been around for decades, scenario analysis has taken the form of best case - base case - worst case scenarios, an almost useless exercise, since there are almost no risky investments that are going to pass muster under the worst case scenario, no matter how good they are, or are going to fail under the best case scenario, no matter how bad they are. A scenario analysis, done right, should look at scenarios that cover all possible outcomes on an investment or decision, and, for completion, needs probabilities attached to these scenarios, which can then be used by a decision maker to estimate expected values. That will be almost impossible to do if you are trying to work out future pathways to AI, since it is so early in the process and so little is known about outcomes.
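As an illustration of what a complete scenario analysis would deliver, here is a small sketch, with made-up scenarios and subjective probabilities (none of them drawn from the Citrini report or its rebuttals), of how probability-weighted scenarios roll up into an expected value:

```python
# Hypothetical AI scenarios for a broad equity portfolio: each pairs a
# market outcome with a subjective probability. For the analysis to be
# complete, the scenarios must cover all outcomes (probabilities sum to 1).
scenarios = {
    "speedy, broad disruption":        {"prob": 0.10, "market_return": -0.40},
    "disruption in a subset of firms": {"prob": 0.55, "market_return": 0.05},
    "productivity tools only":         {"prob": 0.25, "market_return": 0.10},
    "AI fizzle":                       {"prob": 0.10, "market_return": -0.10},
}

total_prob = sum(s["prob"] for s in scenarios.values())
assert abs(total_prob - 1.0) < 1e-9, "scenarios must cover all outcomes"

# The probability-weighted outcome a decision maker can act on
expected_return = sum(s["prob"] * s["market_return"]
                      for s in scenarios.values())
```

The point of the exercise is not the expected value itself, but that attaching probabilities forces the analyst to reveal how much weight the scary scenario actually carries.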
There is an alternate path for scenario analysis that is less information-intensive and thus more feasible, and it draws on the 3P test that I use when valuing companies, where my company valuation narrative has to pass the possible test (it can happen), move on to the plausible test (which requires more backing) and then to the probable test (where you can estimate a likelihood). In the context of scenario analysis, this would require that you categorize scenarios into three groupings: possible, plausible and probable.
The discussion around where AI is going would become much healthier if scenario proponents were required to state where their proposed scenarios fall in this spectrum. Citrini, for instance, could have saved itself some of the backlash if the writer of the AI doomsday report had specified that it was a possible, but not quite plausible, scenario.
The AI Disruption - Gaming the Outcomes
In the last week, I have seen at least a dozen scenarios touted by individuals and entities, many of whom I respect, and I must confess that I am whipsawed. If, like me, you are drowning in these scenarios, with very different results and outcomes, the only way to retain your sanity and to take ownership of this process is for you to develop a framework where you can not only put each of these scenarios to the 3P test, but also to develop your own assessment of how AI will play out for businesses, investors and the economy.
1. The Disruption - Form and Speed
The first set of questions that you need to address in the AI story relate to how the AI disruption will evolve, both in form and timing, and to then trace out the aftereffects.
- AI Disruption Magnitude - Worker Displacement versus Productivity-enhancing Tools: If you listen to some of AI’s lead players, AI will have the capacity to replace workers across multiple businesses, as it develops strengths that go beyond the purely mechanical. One reason that the AI effect on unemployment is so large in the Citrini doomsday scenario is because AI’s reach in the scenario is not just restricted to replacing programmers in software but extends to replacing white collar workers in other technology businesses, financial intermediaries, banking and consulting. In contrast, Citadel’s more benign AI reading comes from AI displacing workers in a smaller subset of businesses, while providing tools in others. At the other end of the spectrum, there are still some who believe that when all is said and done, AI will provide tools to workers that may save them time, but will not be powerful or dependable enough to replace them.
- AI Disruption Speed: Here again, there is disagreement, with some AI optimists believing that its disruption of regular businesses is imminent, whether displacing workers or in giving them tools. Others believe that AI adoption will take time, partly because the tools need work and partly because businesses and workers are slow to adapt to change. The Federal Reserve in St. Louis has created a tracker of AI adoption rates across users, and while it does not capture the depth of AI adoption, it does provide a measure of how much familiarity and comfort users are acquiring with AI tools.
With the caveats about survey data in place, there are interesting trends in these surveys. First, the use of Gen AI tools in non-work settings has grown more than their usage at work, an indication perhaps of how personal devices (phones, in particular) have changed technology adoption rates. Second, the time that AI has saved people, at least so far, has been modest, ranging from less than 1% in the accommodation and food businesses to about 4% in information and management of companies. Overall, these surveys suggest that AI usage is neither as explosively fast growing nor as much of a time-saver as its proponents suggest. The pushback, though, is that these are surveys of the general population, and that there are data points indicating that the disruption effects are more substantial, including the substantial write down in market capitalizations of software companies and layoffs at tech companies. The announcement by Block, the fintech company founded by Jack Dorsey, that it would be letting go of almost 40% of its workforce, for instance, and blaming AI's rise for the action, was viewed as an indicator of AI's disruption potential. That is a noisy signal, though, since many tech companies have bloated workforces, and AI gives them easy cover for correcting past mistakes.
It is true that there is no crystal ball that you can use to gauge the magnitude and speed of AI disruption, but every AI scenario that you see starts with a judgment on one or both.
2. The Disruption Aftershocks
Disruptions create aftershocks, some positive and some negative, and while we often avert our gaze and attention from the latter, a full assessment requires considering both. With AI, the positive effects take the form of higher productivity, as it either allows people to do their jobs more efficiently (with AI tools) or actually replaces people and does their jobs instead, in effect allowing for more output with less labor. Relating back to the different pathways that AI disruption can take, both in form and speed, I would hypothesize that these disruption benefits will be a function of how AI disruption plays out.
Proposition 1: The disruption benefits from AI disruption will be greater from people displacement than from AI productivity tools
Proposition 2: The productivity effects from AI disruption will decrease, at least in economic value terms, the longer it takes for the AI disruption to unfold.
The negative effects of AI, in economic terms, will come from the immediate displacement of people, if AI replaces labor, or from the decrease in employees needed to get tasks done, if AI tools make existing employees more efficient. Here again, I would hypothesize that these disruption costs will be a function of how the disruption plays out.
Proposition 3: The disruption costs from AI disruption will be greater from people displacement than from tools, as those laid off lose income and spending power.
Proposition 4: The productivity costs from AI disruption will decrease, at least in economic value terms, the longer it takes for the AI disruption to unfold, since time will allow new entrants into labor markets to adjust to a disrupted business world.
Intuitively, the longer it takes AI to find roots in business, the more time it gives workers to adjust, retrain or move on. As you can see, the scenarios where AI displaces existing employees, and does so quickly, are the ones with the biggest benefits and the biggest costs, and the scenarios where AI supplies tools to existing employees, and unfolds slowly, have the smallest benefits and costs. Building on this theme, I see the net effect of AI disruption playing out as follows:
If AI disruption displaces existing workforces, across many businesses, and happens quickly, the net effect is likely to be negative, at least in the near term, since the economy will not only have to absorb major layoffs quickly, but also because those laid off will be higher-earning white collar workers. While that maps on to the Citrini doomsday scenario, there is still much to debate about which industries will see the most job displacement and how quickly these workers will find other jobs. There is also a discussion that should follow, even in this negative net-benefit scenario, of how quickly the economy (and workers) will adapt, and whether net benefits will turn positive in the long term. If AI job displacement is on a limited scale, and/or takes time to unfold, both the benefits and the costs of the AI disruption become smaller, but the net benefit is more likely to be positive, in the short and long term. Finally, if the AI disruption takes the form of tools that make workers more efficient, but not efficient enough to reduce workforces, both the benefits and costs of AI become much smaller. In fact, if these tools take a long time to craft and displace little or no labor, you get the AI disruption fizzle, with very small benefits and costs.
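The four propositions above can be compressed into a stylized two-way grid; the entries below are my paraphrase of the text, qualitative rankings rather than estimates:

```python
# Net effect of AI disruption by form (displacement vs. tools) and
# speed (fast vs. slow), encoding Propositions 1-4 qualitatively.
NET_EFFECT = {
    ("displacement", "fast"): "biggest benefits, biggest costs; net likely negative near term",
    ("displacement", "slow"): "smaller benefits and costs; net more likely positive",
    ("tools", "fast"):        "smaller benefits, modest costs; net more likely positive",
    ("tools", "slow"):        "smallest benefits and costs; the AI disruption fizzle",
}

def describe(form: str, speed: str) -> str:
    """Look up the qualitative net effect for a (form, speed) pairing."""
    return NET_EFFECT[(form, speed)]
```

Every published AI scenario, doomsday or benign, is effectively a claim about which cell of this grid we are in.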
3. The 3P Test
Staying true to my earlier assertion that scenarios without probability estimates are not useful, I will try to put the various AI scenarios that I mapped out in the last section on the 3P continuum.
Let me start with the two possible, but not quite plausible, scenarios. The first is a speedy, massive AI disruption, where AI displaces workers across most businesses, and does so quickly, as visualized by Citrini. It can happen, but given the history of disruption, the limits of AI technology and inertia in the process, it is implausible. At the other extreme, it is possible that AI provides tools to workers that improve productivity marginally, with many ending up being more distractions than tools for productivity, effectively blunting its disruptive potential, but that too strikes me as implausible, given what we are seeing in terms of AI capabilities. The most plausible scenarios are ones where AI displaces workers in some industries, such as software and some financial intermediaries, and provides tools that help workers to varying degrees in other businesses. As for probable, I think that disruption will reduce workforces in a subset of businesses, that its tools will include some game changers and that it will take longer to unfold, at least when it comes to monetization, than its advocates think.
My justification for why AI disruption will take time is based on a mix of factors. The first is that my (limited) knowledge and experience with AI products is that while they sometimes work magically well and quickly, they do have kinks, coming partly from being unable to separate good data from bad, and partly from their imperfect attempts to imitate humans. The second is history, where no disruption has ever unfolded without delays and drawbacks; remember that the dot com disruption almost lost its moorings during the market bust in 2001. The third is human nature, where much as employees and managers claim to want to move on to new and better options, they remain attached to old technology and products; typewriters and mimeographs took a while to disappear after PCs stormed the workplace, and flip phones persisted well into the smartphone era.
There are two reasons why I do think that AI disruption is still going to be significant, in the long term. The first is that some of those making the argument that AI will not displace jobs in the long term are assuming that AI in its more advanced form will look like ChatGPT on steroids or be primarily mechanical in its applications. Even my limited exposure to AI's advanced tools suggests that they have far greater capabilities, and their capacity to mimic human intuition and thought processes is unsettling. The second is the blanket assumption that workers in most white collar jobs will not be easily replaced because they bring training, brainpower and experience into those jobs that will be difficult to replicate. Many white collar workers are bright people with specialized knowledge, but the businesses that hire them put them in straitjackets, pushing mechanics over intuition and rule-driven thinking over principle-driven assessments. In short, it is the nature of the jobs that we have created in many white collar settings that makes them vulnerable to disruption, not the intelligence or training of the people holding those jobs.
It is worth noting that in my probable scenario, AI will unfold at different rates in different businesses, and if I were pushed to distinguish between the businesses that will be targeted most (and soonest) from the businesses where it will take more time, and have less impact, I would look at four factors:
4. Cui Bono?
Most of the AI scenarios yield net benefits, and even in the most damaging scenarios, where the AI disruption benefits are overwhelmed by its costs, at least in the short term, you could argue for net positive benefits in the long term. That is good news, but it should be taken with a grain of salt, since the distribution of these net benefits across businesses and society will be unequal, and it is possible that the net benefits accrue to a few businesses (and individuals), leaving the rest (businesses and individuals) with net costs.
- The interests of the AI companies and the rest of the economy/market will diverge on AI disruption, with the former benefiting if the disruption is across many businesses and happens quickly, and the latter benefiting from a slower disruption restricted to a few businesses. This will be the case even if AI tools add to productivity, since the lower costs that companies acquiring these tools will enjoy as a consequence may not translate into higher profits, especially if their competitors can pay for and acquire the same tools.
- The last few major disruptions, starting with the internet, moving on to China and then the smartphone, have all tilted the playing field in many businesses towards larger companies, making businesses more winner-take-all. It is likely that the AI disruption will play out in similar ways, with the winners winning big, and lots of companies losing out.
- At the individual level, it is not just plausible, but also likely, that a strong AI disruption will make wealth and income inequality worse, with founders of AI businesses joining the ranks of the deca-billionaires and centi-billionaires.
The AI Personal Threat
If you are looking at these side costs and threats to jobs that will come from the AI disruption, and wondering whether we should opt out, by regulating or restricting its reach, I am afraid that the choice is out of our hands. The genie is out of the bottle, and the only pathway that you have, if you operate in a space where AI is ubiquitous, is to prepare for a reality where AI tools can automate and do much of what you do on a daily basis, and where you have to create a niche or moat that still makes you necessary.
Just about two years ago, I wrote about an AI entity called the Damodaran Bot, that was being developed by Vasant Dhar, my colleague at NYU, and noted that having made all the material that I have developed in my lifetime (classes, books, writing, models, videos) publicly available, I was completely exposed to AI disruption. I have watched that bot develop, with quirks and occasional hiccups, to a point where it can replicate much of what I do almost effortlessly. At the time, though, I did write about what I could do to keep the bot at bay, including the following:
- Generalists vs Specialists: I am a dabbler, an expert in nothing and interested in lots of different things, and I do think that gives me an advantage over a bot that is trained to focus on a topic and drill down. The specialist advantages stem from mastering the vast content in a discipline, but those advantages are diluted by AI entities that can also access that content, while the generalist advantage of using multi-disciplinary thinking will be more difficult for AI to replicate.
- Left and Right Brain: I value companies, and early in my valuation life, I decided that financial modeling was not the right path to value businesses, and that good valuations bridge stories and numbers. If the legend of the right and left brains holds, where the left brain controls logic and numbers and the right brain drives your imagination, a bot will have a tougher time replicating what you do, if you use both sides. That said, I have seen the Damodaran Bot get much better at storytelling in the two years that I have watched it, and I need to up my game.
- Reasoning muscle: When faced with questions in the days before the internet, you often had no choice but to reason your way to answers. That may have been time consuming, and your answers might even have been wrong, but each time you did this, you strengthened your reasoning muscles. As we move into a period where the answer to every question is online, on Google Search and ChatGPT, we are losing the need to exercise those reasoning muscles, and exposing ourselves to being outsourced to our bots.
- An idle mind: I am not a voracious reader nor a listener to podcasts, and since I don't have much real work to occupy me, I also have plenty of vacant time, with nothing to do. I use that time to daydream and ponder questions that capture my imagination, including why someone would pay billions of dollars for a sports franchise (like the Washington Commanders), how to deal with the risk of lava from a volcano hitting a spa and ruining its valuation, and how streaming has broken the entertainment business. None of these posts include deep insights, but my guess is that the Damodaran Bot would have trouble keeping up with my wandering mind.
With the admission that this may not be enough, and that my bot may soon be able to write my books and posts, teach my classes and analyze/present data better than I can, I think that you should all be acting as if a bot with your name is looking over your shoulder and trying to learn what you do, and thinking about what you can do to keep that bot at bay.
There is always the possibility that you are arming yourself for a disruption that fizzles, but I will draw on Pascal's wager to explain why you should prepare for an AI imitator or bot, even if you don't believe that it is imminent:
Pascal, a French mathematician, used the wager to explain why he believed in God, even though he was doubtful of a heavenly presence, because the expected value from believing in God exceeded the expected cost from not believing. In the context of AI, acting as if an AI presence and competitor is present will make you better at whatever you do, as a teacher, banker, consultant or software engineer, and that will persist, no matter what AI's ultimate impact turns out to be. Good luck!
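The wager translates into a simple expected-value comparison; the payoffs below are hypothetical "career value" numbers I made up purely to show the structure of the argument:

```python
def expected_payoff(prepare: bool, p_disruption: float) -> float:
    """Expected career value, given a subjective probability that
    serious AI disruption arrives (all payoffs hypothetical)."""
    if prepare:
        if_disrupted = 0.80  # you built a moat, so you keep most of your value
        if_fizzle = 0.95     # small preparation cost, but the skills still help
    else:
        if_disrupted = 0.20  # exposed, with little to fall back on
        if_fizzle = 1.00     # nothing spent, nothing lost
    return p_disruption * if_disrupted + (1 - p_disruption) * if_fizzle

# With these payoffs, preparing wins whenever the disruption probability
# exceeds roughly 8% (the break-even is 0.05 / 0.65)
gap_at_10pct = expected_payoff(True, 0.10) - expected_payoff(False, 0.10)
```

As with Pascal's original, the asymmetry does the work: the cost of preparing for a disruption that fizzles is small, while the cost of being unprepared for one that arrives is large.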
YouTube Video