Artificial intelligence and the death of decision-making

Artificial intelligence could weaken our ability to make decisions, to evaluate alternatives, and even to act.

Simon Chandler

Posted on May 2, 2019. Updated on May 20, 2021, 1:31 pm CDT

AI-based automation has understandably scared many, many people. They fear for their jobs, they fear that discrimination will creep into automated decision-making, and they fear that we may even face an “existential threat” as a species (at least if Elon Musk is to be believed). However, there is one serious threat that has been largely neglected until now: the danger AI poses to human agency and autonomy.

Yes, the ability to have AI choose on our behalf may sound wonderfully convenient at first, but it has the potential to disrupt how we function as human beings. By predicting what we might like, by confronting us with suggested products to purchase, and by planning how we organize our time, artificial intelligence could weaken our ability to make decisions, to evaluate alternatives, and even to act. And if we allow it to do this to us, we all risk giving up a big chunk not only of our personal freedom, but of our humanness.

Loss of agency

Such concerns may seem far-fetched, but they’re shared by a host of researchers working in the areas of AI, media, and ethics. This was brought out most starkly by a landmark study published in December 2018, in which the Pew Research Center sought the AI-related views and predictions of nearly 1,000 relevant experts. Aside from 37% of its respondents stating that “most people will not be better off” in a future where AI has become prevalent, the study also produced a number of important testimonies about how an increased dependency on AI could erode human agency.

“The most-feared reversal in human fortune of the AI age is loss of agency,” said one respondent to the Pew study who chose to remain anonymous. “The trade-off for the near-instant, low-friction convenience of digital life is the loss of context about and control over its processes. People’s blind dependence on digital tools is deepening as automated systems become more complex and ownership of those systems is by the elite.”

Nathaniel Borenstein, chief scientist at Mimecast, offered Pew a similarly bleak prognosis: “I foresee a world in which IT and so-called AI produce an ever-increasing set of minor benefits, while simultaneously eroding human agency and privacy and supporting authoritarian forms of governance.”

Borenstein’s was only one of several views underlining the dangerous trade-off involved in delegating everyday tasks to artificial intelligence. But while he and the other 978 experts contacted by Pew were specifically forecasting what the world might look like in 2030, there are plenty of signs that we’re handing over autonomy to artificial intelligence already.

These signs don’t come only in the form of the “Recommended for You” suggestions YouTube, Facebook, and Amazon bombard us with on a daily basis. They also come in the form of auto-complete writing applications, as well as AI-based schedule management apps, apps that recommend suitable quotes for social media posts, technology that decides how to manage and prioritize your inbox, and apps that can even plan the itinerary of your next vacation.

And while such tools focus mostly on the consumer, there are also many examples in the professional sphere of people delegating decision-making to AI, as well as examples of the hazards that can result. “Some spectacular disasters have resulted from human beings confusing judgment with the sorts of mechanical/algorithmic/AI ‘decision-making’ that computational devices are capable of,” explains Charles Ess, a professor at the University of Oslo’s Department of Media and Communication. “That is, it is relatively easy—and critically important—to point to various catastrophes resulting from replacing such human judgment with AI techniques of decision-making, e.g., the financial crisis of 2007 and 2008, up to the recent plane crashes of the Boeing 737 MAX planes.”

Phronesis and corporate profit

That algorithms played a part in the financial crash of 2007, for instance, is well documented. In 2006, around 40% of all trades conducted on the London Stock Exchange were executed by computers, with this figure reaching 80% in some U.S. equity markets. For many economists and experts, the fact that transactions were made by “algos” written by quantitative analysts (or “quants” for short) was one of the main reasons why global markets built up so much risk prior to the collapse. As Richard Dooling, the author of Rapture for the Geeks: When AI Outsmarts IQ, wrote for the New York Times in 2008, “Somehow the genius quants—the best and brightest geeks Wall Street firms could buy—fed $1 trillion in subprime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and—poof!—created $62 trillion in imaginary wealth.”

Even now, Deutsche Bank’s chief international economist, Torsten Slok, ranks an “algo-driven” fire sale of stocks as the number one risk currently facing the global financial system. One big reason such automated decision-making is dangerous? Artificial intelligence isn’t capable of the kind of intuitive, value-based judgment that often characterizes human decision-making. “This kind of judgment is called reflective judgment, often referred to by the Greek term ‘phronesis,’” explains Ess. “This is the kind of judgment that comes into play precisely when we don’t have a clear sense of a straightforward ethical response.”

Ess adds that many moral and practical dilemmas arise from a conflict between contradictory or competing ethical principles. Importantly, “there is no ‘über-algorithm’ that we could train a machine to make use of” when choosing between competing norms, since the choice often depends upon not only circumstances and culture, but on the implicit “experience and knowledge that is encoded in our bodies in tacit ways that’s difficult to deploy in an algorithm.”

The implication is that as AI-based apps, platforms, tools, and technologies colonize our lives, we could be increasingly separated from this kind of decision-making. And as AI expert and Deloitte analyst David Schatsky argues, artificial intelligence and machine learning represent such a jump in the nature of technology that it may very well result in a jump in the nature of humanity.

“Compared to prior generations of technology, AI may seem just like a difference in degree—more things can be automated,” he says. “On the other hand, AI can be so powerful that a difference of degree becomes a difference of kind—systems can interpret and simulate emotions, can adapt to individual attitudes and behaviors, and become ever more effective at shaping human desires and behaviors—in ways people may even be unaware of.”

The danger of such technological evolution is that as AI becomes better at choosing on our behalf, our decision-making skills will worsen in parallel. “As AIs become more proficient at decision-making, humans are likely to defer to them for the convenience of not having to make a decision ourselves,” says Peter B. Reiner, a professor of neuroethics at the University of British Columbia. “One can easily see how this might reduce both the frequency of autonomous decision-making, and over time, agency. For decision-making is a skill like any other, and if we do not regularly practice making decisions in a given domain of life, our capacity to make such decisions may also diminish.”

The seemingly limitless delegation of control to artificial intelligence also raises another thorny question. If AI deprives us of autonomy, just where is this autonomy going, and whose ends are being served by its transfer? There’s a simple answer: the corporations that build and use AI, which benefit from having AI-based tools direct us toward the services and products they offer.

“The issue of corporate interests is one that is fundamental,” says Reiner. “I view our smartphones—and increasingly the entire ecosystem of algorithmic devices that we use day in and day out—as extensions of our minds. If this is true, then corporate interests have the potential to invade our minds, and most importantly are involved in helping us make decisions in ways that users generally don’t understand.”

Reiner, too, predicts that this corporate-driven phenomenon of delegated agency will “only grow as AIs become more pervasive.” And while it may be difficult to envisage such a world now, a glimpse of it is available via the various patents the FAANG giants (Facebook, Apple, Amazon, Netflix, and Google) have filed for “emotional AI” technology in recent months and years. In October 2018, for instance, Amazon patented a version of its Alexa assistant capable of detecting a user’s mood and then recommending an appropriate activity or item to purchase.

This version could even tell when the user is ill and advise them to make chicken soup. And while this is only one patent (there are others), it provides further evidence of corporations’ profit-driven tendency to harness digital platforms to promote their services and products. In turn, it hints at how AI-powered tools will increasingly be used to hack into our decision-making processes, all in a bid to promote the wares and interests of the companies that deploy them.

Digital resignation and technological diets

There’s little doubt that individual agency will be eroded at least to some degree by the growth of consumer AI, but the question remains as to what we can do to protect ourselves. AI-focused researchers are somewhat split on this issue, with some predicting that the majority of people will happily surrender autonomy and choice in favor of greater convenience and ease.

“There is a recent paper that addresses this issue directly—and labels the phenomenon quite properly as digital resignation (also suggesting that corporations know this and capitalize upon it),” Reiner says. “I believe that people already have shown themselves to be willing to make such trade-offs with respect to their privacy, and that autonomy is not far behind—especially if my Netflix and Spotify feeds are any indication.”

However, while other experts broadly agree with such views, they also point out that this isn’t unique to AI, and that there are indeed a number of things that can be done to reduce any potential loss of personal agency. “There is no question that faced with technologies that offer convenience and efficiency in exchange for autonomy, people frequently choose the technology,” says Deloitte’s Schatsky. “Few people do arithmetic by hand when a calculator is available. Such technologies do seem to make people a bit less clever sometimes, and reliant on technology that their parents never needed to get something accomplished.”

Still, Schatsky argues that many people even today “still reject efficiency and convenience for the satisfaction of doing things themselves,” whether they’re cooking, navigating, performing music, writing, or playing sports. Furthermore, he expects that any significant increase in AI-based tools will be accompanied by the growth of a technological equivalent of diet culture, something already in evidence today in the Digital Wellbeing tools on Google Pixel phones and the Screen Time and Do Not Disturb features on iPhones.

“In primitive societies of the past, it would have been rare for people to confront the danger of consuming too many calories or getting too little exercise,” he explains. “Those dangers are very real in rich societies today. People mitigate them by attention to diet and exercise. An AI-infused future will present different threats to our well-being. We will need to work individually and as a society to mitigate those threats, to maintain our well-being, and to make the most of our humanity.”

This approach will also have to be accompanied by a conscious effort to build the “right” kind of AI-based tools. According to Ess, instead of passively directing us to the options their algorithms have determined we’ll like the “best,” AI technologies should be designed to act in terms of what’s best for people in a deeper sense. “In order to counter such dangers, there is the need to insert ethics into the very design processes of AI from the outset,” he says. “Specifically what is called deontological ethics, which foregrounds human autonomy and freedom, and virtue ethics, which foregrounds precisely these central capacities for reflective judgment, etc.”

More specifically, Bart Knijnenburg—an assistant professor of human-centered computing at Clemson University—believes that AI should be designed not just to recommend things to people, but to encourage them to experiment and choose for themselves, as well as to try things beyond their comfort zones. “I think we should try to use AI not just for recommendation but for exploration as well,” he says, before suggesting that AIs, for example, could show users the items that the algorithm is very uncertain about or believes the user wouldn’t ordinarily like, rather than the ones it thinks are “best.”
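
To make Knijnenburg’s suggestion concrete, here is a minimal sketch, in Python, of what an “exploration” recommender could look like. It assumes, purely for illustration, that predicted ratings come from a small ensemble of models and that the ensemble’s disagreement can stand in for the algorithm’s uncertainty; the data and function names are hypothetical, not anything Knijnenburg has built or published.

```python
import numpy as np

def recommend(pred_matrix, k=5, mode="exploit"):
    """Pick k items from an ensemble of predicted ratings.

    pred_matrix has shape (n_models, n_items): each row holds one
    model's predicted rating for every candidate item.
    mode="exploit" returns the items with the highest average
    prediction (the usual "recommended for you" list);
    mode="explore" returns the items the models disagree about most,
    i.e. the ones the system is least sure the user will like.
    """
    consensus = pred_matrix.mean(axis=0)    # average rating per item
    disagreement = pred_matrix.std(axis=0)  # spread across models per item
    score = consensus if mode == "exploit" else disagreement
    return np.argsort(score)[::-1][:k]      # indices of the top-k items

# Hypothetical example: three models scoring eight items on a 0-5 scale.
rng = np.random.default_rng(0)
preds = rng.uniform(0, 5, size=(3, 8))

print("exploit:", recommend(preds, k=3))                  # safest bets
print("explore:", recommend(preds, k=3, mode="explore"))  # uncertain picks
```

Ranking by disagreement rather than consensus turns the recommender into a prompt for exploration: instead of being handed the system’s safest bets, the user is shown the items the algorithm genuinely can’t call, and has to judge them for themselves.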

Such proposals are a long way from being realized, but they outline a path that might have to be taken if artificial intelligence ends up playing a role in nearly everything we do, online and off. Of course, that’s not to say that the growth of AI won’t have its advantages. Nonetheless, as with almost every technological innovation, there will be drawbacks and trade-offs involved, and our personal agency will thank us if we manage to keep sight of them as we enter an AI-saturated future.
