What's Fueling the Spike in American Concerns Around AI?
Recently, the personal care brand Dove (Unilever) released a new ad celebrating the 20th anniversary of its famous Campaign for Real Beauty, and it's getting a lot of attention. The company isn't sheepish about its stand against the marketing of "fake" beauty products, photoshopped fashion models, and social media images that sell impossible beauty standards to young girls. With this latest ad, entitled "The Code", Dove has doubled down on its efforts and taken on the explosive market of AI-generated images, which it pledges to never use to "create or distort women's images".
The ad is flooding social media platforms, giving fresh oxygen and awareness to the hotly debated topics of social media's impact on young people, impossible standards of beauty, and rising depression and anxiety levels among teenage girls. The ad also throws down the marketing gauntlet to companies around the world, implicitly asking, "Will you follow 'The Code' of conduct we're committing to?"
However, the ad's strong message and stance against AI come on the heels of a recent spike in Americans' concerns about AI and its influence on our daily lives. The fear of AI, or algorithmophobia, is a topic I covered in an earlier piece, where I argue that our industry's marketing strategy needs to better understand the causes of algorithmophobia so it can adapt its messaging to overcome common consumer fears.
In the article, I explored how popular culture often polarizes our view of AI, depicting it as either entirely benevolent or entirely malevolent. I argued that such extreme views could heighten anxiety about adopting new technologies. In this piece, however, I aim to delve deeper into the psychological factors that shape our attitudes towards technology and to examine the recent surge in concerns and their underlying causes.
In doing so, I do not wish to dismiss legitimate concerns about AI or AI-generated content and its potential impact on body image and mental health. There are valid reasons for caution, given AI's recent emergence and unpredictable effects. Instead, my focus is on exploring the deeper-rooted anxieties that arise with the advent of technologies like AI: fears about the very concept of artificial intelligence and the apprehensions that typically accompany such disruptive innovations. Addressing these fears is crucial if we are to fully leverage the benefits AI offers.
Concerns Around AI Among Americans Are Rising Sharply
A 2023 Pew Research Center survey looked at how Americans feel about the role of artificial intelligence in their daily lives. It showed a sharp increase in concern over the two previous years: when asked how the increased use of AI in daily life made them feel, 52% of respondents said they were more concerned than excited, up from 38% in 2022 and 37% in 2021. So, what's contributing to this bump in concern?
One factor is the explosive growth in AI's development and deployment, along with the resulting increase in media coverage and debate. We hear, see, and read about AI everywhere now. The public's awareness is rising and likely outpacing a fuller understanding of AI.
In fact, Pew reports that while most Americans (90%) are aware of AI, the majority are limited in their ability to identify many common uses of the tech (e.g., email spam detection or music playlist recommendations). Awareness without understanding promotes anxiety, and that anxiety is functional: our ancestors (the ones who survived to procreate) tended to focus on a general awareness of potential threats rather than on gaining a true understanding of whatever was making noise in the bushes over there!
To that point, the survey showed that education plays a role in attitudes. Adults with a college or postgraduate degree are more likely to be familiar with AI than those with less education. Also, Americans with higher levels of education tend to be more positive about AI's impact on their lives.
However, it's also important to note that more educated Americans are more likely to work with AI. In 2022, 19% of American workers were in jobs most exposed to AI, and those jobs tend to be in higher-paying fields where higher education is beneficial. Working with AI on a practical level helps explain the more positive attitudes in this population. Education and experience often allay fears.
Two competing perspectives on artificial intelligence dominate today: (1) AI as the supplanter, leaving humanity to wither toward extinction (left); (2) AI as an assistant that unleashes unlimited potential (right).
The Function of the "Human Element" in Our Attitudes
One of the more interesting takeaways from the Pew surveys was the shifting level of trust in AI depending on the absence or presence of the "human element": those parts of ourselves we tend to value (creativity, empathy, analytical thinking) and those we tend to avoid (bias, destructive impulses, impulsiveness).
However, it's this ambivalence toward the human side of AI that provides the more valuable insight into how we form our attitudes. Consider these Pew survey findings on health care:
65% of Americans say they would want AI to be used in their own skin cancer screening.
67% say they would not want AI to help determine the amount of pain meds they get.
Now, consider how U.S. teens responded to a Pew survey asking them about using ChatGPT for schoolwork:
39% said it was "acceptable" to use ChatGPT to solve math problems.
Only 20% said it was "acceptable" to use ChatGPT to write essays.
What explains the apparent contradiction here? What's the difference between skin cancer screening and pain med management? Why did roughly twice as many respondents find it acceptable to use AI to solve math problems as to write essays?
Part of the answer is that we assign varying levels of subjectivity or "human-ness" to different tasks (even though humans do them all). In other words, it's more "human" to compose an essay because writing is about expressing yourself, while math is about following strict rules. The same attitude usually applies even when you're tasked with writing an objective, unbiased, and factual piece like a news article. Yet aren't objectivity and rule-following exactly the strategies you need to solve math problems?
To diagnose skin cancer successfully, it helps to be experienced, objective, and unbiased. AI skin cancer detection systems are trained on thousands of high-res images of skin lesions. These systems can remember and compare them all, dispassionately and efficiently. No ambiguity. No corrective lenses. No hangover from last night's bar hopping.
On the other hand, accurately gauging someone's pain level requires a person who feels pain, a human who has shared the experience of pain, who has empathy and compassion. It's likely this lack of human connection that engenders distrust in AI on this issue.
Our Attitudes Are Contextual
It's also important to recognize that our attitudes align with our motives. While most of us are capable of compassion and empathy, these faculties also make us easy to manipulate. In most instances, it's easier to bullshit your way through a book report than through a calculus problem. Likewise, most people would have more success convincing a human doctor to over-prescribe pain meds than an AI physician. When we need moral nuance or ethical wiggle room, we distrust AI to accept such gray areas. I may be able to convince you that 2 + 2 = 5, but I can't "convince" my calculator.
What these surveys show is that our attitudes toward and trust in AI are highly contextual. They change based on the stakes of the potential outcomes and on the need for objectivity or subjectivity (perceived or actual). If the potential negative outcomes are low (a robot cleaning our house), we tend to trust more. However, we wouldn't want a robot to make life-altering decisions about our health, unless that decision required high levels of objectivity and unbiased judgment… but only to a point. This back-and-forth "dance" with AI is our ambivalence at work.
Subjectivity and objectivity are both part of the human experience, yet we prioritize subjective faculties like compassion, empathy, trust, expression, and creativity as "truly human" qualities. The crux of the issue is that, while these aspects of ourselves are indeed some of our best features, they are also the source of our distrust of ourselves and, by extension, AI. Because we are compassionate, we can be manipulated. Because we are emotional, we can be biased. Because we are trusting, we can be deceived. This double-edged sword pierces the heart of our shifting attitudes towards AI.
Mechanisms That Shift Attitudes Towards Technology
The Pew surveys show that Americans' trust in AI runs along a spectrum. Just as the temperature in a building is regulated by a thermostat, our individual and collective trust in AI is influenced by an array of "control mechanisms" (our education levels, occupations, media portrayals) that continuously modulate our comfort levels and perceptions of safety with the technology. So, what else adjusts the thermostat? I asked clinical psychologist Dr. Caleb Lack to help me identify some common factors that could influence public sentiment on AI.
Habituation
One common mechanism to calm nerves around technology is the simple passage of time. All big technologies, like automobiles, television, and the internet, follow a similar journey from resistance to full adoption. Given enough time, any new tech will go from being "magical" or "threatening" to being commonplace.
"I think what often happens with tech is people habituate to it," explains Lack. "As with any new technology, you just become used to it. Rule changes in sports are a good example. People will often complain about them at first. Two seasons later, no one is talking about it. It's how it is now, so that's how it's always been. Humans have very short memories and most of us don't reflect much on the past."
Social Media & Mental Health
We often hear mental health and technology experts sounding the alarm about the negative effects of smartphones and social media, and with good reason. Studies continue to document the harms that smartphones, social media, and algorithms inflict on our mental health, especially in adolescents.
One 2018 study showed that participants who limited their social media usage reported reductions in loneliness and depression compared to those who didn't (Hunt et al.). Another found significant increases in depression, suicide rates, and suicidal ideation among American adolescents between 2010 and 2015, a period that coincides with increased smartphone and social media use (Twenge et al.).
Few of us have failed to notice the irony that "social" media seems to be making us feel more alone. Many are asking whether AI will supercharge social media, creating something that further alienates young people and damages their mental wellbeing. The Dove ad is both a product and a reflection of this collective anxiety.
The "Black Box" of Technology
The term "black box" refers to any kind of system or device where the inputs and outputs are visible, but the processes or operations inside are hidden from the user. For most of us, the world is one big black box, technologically speaking.
"Technology of all kinds has become impenetrable for most of the population," says Lack. "I think that's where we're seeing a lot of concern around AI. To someone completely unfamiliar, DALL·E or ChatGPT looks like magic. It's completely incomprehensible. But, then again, how many of those people know how their car works, or their phone, or even the internet? We eventually habituate and just accept them as normal."
Even if we normalize new technology and forget our previous concerns, an opaque world is still a strange one, and we don't do well with strange for extended periods of time. We need resolution. We either find out what's really making noise in the bushes over there, or we run away. Either way, we're satisfied with our newly acquired knowledge of the real squirrel we discovered or the imagined snake we avoided. Yet how do we run away from an inscrutable technology that's increasingly integral to our lives? Enter anxiety…
Skills Specialization
The jobs and tasks we do are getting more and more specialized; it's a consequence of technology itself. Obviously, we can't know everything, and specialization can increase efficiency, bolster innovation, and raise wages for workers. However, these benefits have social costs, according to Lack, who offers an insightful perspective on the connection between specialization and social isolation:
"Our knowledge and skills have become so siloed we don't know what anyone else is doing. You only know your own skills and knowledge. These knowledge silos are keeping us from connecting with others. At the same time, we've also become more physically disconnected from people, mostly because of technology. The resulting social isolation is a major source of anxiety for humans. If you socially isolate someone, the same areas of their brain become active as if you're actively torturing them," he cautions.
The Great Replacement Theory
As we've seen, algorithmophobia draws on many common elements of general tech phobia. However, it also seems to carry one unique feature not seen with other technologies. Lack refers to it as the "fear of supplantation": the idea that AI could replace humanity altogether. The notion goes well beyond AI taking our jobs; it is about AI making humanity obsolete, a vestigial species.
"When cars came along," he explains, "people weren't worried Model Ts would become sentient, take over, and kill everyone. Today, that fear does exist. So, it's not just worries around the safety of self-driving cars. It's about safety, and the fear that humanity may not survive, that life will become Christine meets Terminator."
Although supplantation is a unique fear with respect to technology and AI, it's an anxiety as old as humanity itself. "We see fear of supplantation in racist tropes also," Lack explains. "They're coming to take my job. They're coming to marry into my family. They're coming to replace me. Humans are really good at creating Us vs. Them mentalities."
"Artificial Intelligence" as Competitor to Humanity
Previously, I've argued that how we market "artificial intelligence" shapes how consumers respond to it. To quell consumer anxieties, I've called for "dehumanizing" AI by minimizing its human side and "promoting it as a tool first and foremost." There is a balance to be struck: imbuing AI with too much "humanity" risks stoking fears of replacement, while too little may undersell the technology's advantages in automating our thinking.
Lack agrees that what we call AI has important psychological implications. In fact, he argues that the moniker "artificial intelligence" creates an antagonistic relationship by making it a direct competitor for resources.
"You're setting AI up essentially as another species," he explains. "Competition over resources has been one of the biggest drivers of conflict for our species. We have a hard enough time getting along with even slightly different members of our own."
Conclusion
As AI continues to permeate every facet of our lives, it is essential to understand and address the multifaceted anxieties it generates among the public. The 2023 Pew Research Center survey serves as a bellwether for rising concerns, indicating a need for greater education and transparency about AI's role and capabilities. By exploring the psychological underpinnings and societal impacts of AI integration, we can better navigate the complex dynamics of technology adoption.
Efforts to demystify AI through clearer communication, coupled with practical exposure, can alleviate fears and foster a more informed and balanced perspective. Ultimately, addressing the human element in technological development and ensuring ethical standards in AI applications will be pivotal in harmonizing our relationship with these advanced systems, ensuring they enhance rather than detract from societal well-being.
Works Cited
Hunt, Melissa G., et al. "No More FOMO: Limiting Social Media Decreases Loneliness and Depression." Journal of Social and Clinical Psychology, vol. 37, no. 10, 2018, pp. 751-768. DOI: 10.1521/jscp.2018.37.10.751.
Twenge, Jean M., et al. "Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time." Journal of Abnormal Psychology, vol. 127, no. 1, 2018, pp. 6-17. DOI: 10.1037/abn0000336.
Pew Research Center Articles
What the data says about Americans' views of artificial intelligence
60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care
Which U.S. Workers Are More Exposed to AI on Their Jobs?
Public Awareness of Artificial Intelligence in Everyday Activities
Growing public concern about the role of artificial intelligence in daily life
Caleb W. Lack, Ph.D. is a Professor of Psychology at the University of Central Oklahoma.