Nadav Spiegelman

Will A.I. Become the New McKinsey?

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
Author
Ted Chiang
My last highlight
2023-07-10
Number of highlights
8

My Highlights

Whenever anyone accuses anyone else of being a Luddite, it’s worth asking, is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people’s lives? Or are they just trying to increase the private accumulation of capital?
A former McKinsey employee has described the company as “[capital’s willing executioners](https://www.currentaffairs.org/2019/02/mckinsey-company-capitals-willing-executioners)”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between [McKinsey](https://www.newyorker.com/magazine/1999/10/18/the-kids-in-the-conference-room)—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, [Purdue Pharma](https://www.newyorker.com/magazine/2017/10/30/the-family-that-built-an-empire-of-pain) used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
Many people think that A.I. will create more unemployment, and bring up [universal basic income](https://www.newyorker.com/magazine/2018/07/09/who-really-stands-to-win-from-universal-basic-income), or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I’ve become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don’t, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.
Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I’m not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I’m talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I’m not criticizing the idea of selling things; I’m criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I’m criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.
People who criticize new technologies are sometimes called Luddites, but it’s helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners’ profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine’s owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners’ attention. The fact that the word “Luddite” is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.
Of course, there is the argument that new technology improves our standard of living in the long term, which makes up for the unemployment that it creates in the short term. This argument carried weight for much of the post-Industrial Revolution period, but it has lost its force in the past half century. In the United States, per-capita G.D.P. has almost doubled since 1980, while the median household income has lagged far behind. That period covers the information-technology revolution. This means that the economic value created by the personal computer and the Internet has mostly served to increase the wealth of the top one per cent of the top one per cent, instead of raising the standard of living for U.S. citizens as a whole.
Of course, we all have the Internet now, and the Internet is amazing. But real-estate prices, college tuition, and health-care costs have all risen faster than inflation. In 1980, it was common to support a family on a single income; now it’s rare. So, how much progress have we really made in the past forty years? Sure, shopping online is fast and easy, and streaming movies at home is cool, but I think a lot of people would willingly trade those conveniences for the ability to own their own homes, send their kids to college without running up lifelong debt, and go to the hospital without falling into bankruptcy. It’s not technology’s fault that the median income hasn’t kept pace with per-capita G.D.P.; it’s mostly the fault of Ronald Reagan and Milton Friedman. But some responsibility also falls on the management policies of C.E.O.s like [Jack Welch](https://www.newyorker.com/magazine/2022/11/07/was-jack-welch-the-greatest-ceo-of-his-day-or-the-worst), who ran General Electric between 1981 and 2001, as well as on consulting firms like McKinsey. I’m not blaming the personal computer for the rise in wealth inequality—I’m just saying that the claim that better technology will necessarily improve people’s standard of living is no longer credible.