Chatbots for Persuasive Arguments

New research shows how GenAI sways political opinions. Students can discuss the strategies AI tools use and apply those insights to their own persuasive arguments, while considering ethical issues and potential misinformation.

Research published in Science concludes,

When AI systems are optimized for persuasion, they may increasingly deploy misleading or false information. This research provides an empirical foundation for policy-makers and technologists to anticipate and address the challenges of AI-driven persuasion, and it highlights the need for safeguards that balance AI’s legitimate uses in political discourse with protections against manipulation and misinformation.

In an article in Nature, the authors write,

Examining the persuasion strategies used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims.

Students might discuss how they can use a chatbot for their business arguments. For example, in what circumstances might encouraging people to search for their own answers through an AI tool be more effective or more practical than presenting an argument directly? These studies recognize the value of a massive amount of evidence that students might not have at their fingertips. In addition, inviting people to converse with a chatbot allows them to ask unfiltered questions without fear of judgment.

An interesting class activity could involve a short student presentation followed by individual chatbot time, either replacing or preceding a Q&A session. Afterward, potential misinformation can be explored as a class.

