Wednesday, December 31, 2025

Study Results Identify Perils of AI


My recent book, Disconnection: Identity Development in a Digital Age, devotes an entire chapter to the perils of AI. One of the arguments I make there is that the technological innovations Big Tech is flooding the market with are advancing far more rapidly than our ability as social scientists to study them.

Unfortunately, the rigor the scientific method demands means that research into the impacts of these technologies on users takes far longer than the pace at which corporations expose their products to consumers. By the time findings demonstrate negative impacts on users, consumers have often already embraced the technologies, making it that much harder to persuade individuals to modify their consumption. I wish the results of a recent set of studies had been available when I was researching my book, as they substantiate many of the warnings I raised in that chapter.

Specifically, two studies, one published in Nature and the other in Science, demonstrated that participants’ preferences for actual political candidates could shift significantly after an exchange with a chatbot programmed to “persuade” the user toward a specific candidate (as reported by Kozlov, 2025). In the study published in Nature, Lin et al. (2025) demonstrated “significant treatment effects on candidate preferences” from exchanges with a chatbot, effects exceeding the persuasive impact of traditional advertising.

Not only were the political opinions of consumers susceptible to being swayed, but, perhaps just as disturbingly, the information shared by the chatbots that contributed to the shift in opinion was not always accurate (Lin et al., 2025). Consistent with other scholarship (e.g., Aalai, 2025), the study further observed greater misinformation from the chatbots when they advocated for right-leaning political candidates. The study published in Science, meanwhile, found that the same underlying mechanisms that make AI persuasive also contribute to its potential for spreading false or inaccurate information (Hackenburg et al., 2025).

These studies also demonstrated that chatbots were more persuasive when they provided users with more information, yet they were likewise more likely to produce false statements as they engaged with users (Lin et al., 2025). Such findings are consistent with arguments I make in Disconnection: AI is exploding across the globe right now with virtually no consideration from its creators of the impact their products will have on users and society as a whole.

Indeed, earlier this month it was reported that AI is booming so rapidly across Silicon Valley that funding for AI-related start-ups has surpassed all other categories, making up nearly 65 percent of venture funding and totaling $161 billion in investment (Griffith, 2025). The rapid pace of investment reflects a trade of discipline for speed, as investors fear being “left out” of potentially lucrative opportunities. Some investors have even remarked that waiting for companies to develop ideas around their innovations would be waiting too long (as reported by Griffith, 2025), raising the specter that the last concern of those driving this boom and seeking profit is the potential psychological effect these innovations will have on users.

One of the themes I grapple with in my book is the conflict technology consumers currently confront: with the rapid normalization of these tools, many users feel compelled to adopt them without fully reflecting on whether they are necessary or how they fit with the kind of life they want to live. I caution readers to slow down their technology consumption and to step back and consider whether their underlying values align with their use of specific types of technology.

Studies like the ones recently reported in both Nature and Science demonstrate that users who regularly engage with chatbots are vulnerable to being manipulated (Lin et al., 2025; Hackenburg et al., 2025). As Hackenburg et al. (2025) identify, users are often more vulnerable when engaging with AI than with other technological tools because they are less attuned to the persuasive potential of these interactions. Engagement via chatbots often feels more personal or intimate, lowering users’ defenses.

Chatbots serve as an additional pipeline for mis- and disinformation, both of which are already running rampant in digital spaces. The scope and scale with which users have adopted chatbots suggest significant perils to the embrace of these tools, particularly the way they are currently being rolled out in our culture, with virtually no guardrails in place to protect users.

In addition to the likelihood of spreading more false content, the unhealthy attachments users can develop with chatbot companions are deeply problematic (Aalai, 2025). I encourage readers interested in learning more about the effects of AI and other technological innovations on mental health and wellness to read my new book, Disconnection: Identity Development in a Digital Age.

The best way consumers can combat the ever-deeper penetration of these technologies into their daily lives is to be as well informed as possible about their likely effects. That way, consumers can be more vigilant when deciding how, or whether, to engage with them. In the case of chatbots, the evidence increasingly suggests the costs may outweigh the potential rewards.

Don’t say I didn’t warn you.


References

Aalai, A. (2025). Disconnection: The Search for Identity in a Digital Age. Innovative Ink: Kendall Hunt. ISBN 979-8-3197-0501-3

Griffith, E. (2025, December 4). AI Dealmaking is Getting Faster and Faster. The New York Times: Business. https://www.nytimes.com/2025/12/04/technology/ai-deal-making-faster.html

Lee Myers, S., & Rosenbluth, T. (2025, December 22). Chatbots Can Shift Opinions on Politics, Two Studies Find. The New York Times: Business, B5.

Psychology Today