AI and Election Tricks: The New Face of Political Deception
United Kingdom, Tuesday, March 18, 2025
In another set of experiments, researchers tested whether people could distinguish human-written disinformation from disinformation generated by large language models (LLMs). They found that content from most LLMs released since 2022 fooled human evaluators more than 50% of the time, and some models produced disinformation more convincing than the human-written examples.
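To make that figure concrete, the sketch below shows one way such a "fool rate" might be computed from evaluator judgments. The Judgment structure, its field names, and the sample data are illustrative assumptions, not the researchers' actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    item_id: str
    source: str         # true origin of the text: "llm" or "human" (assumed labels)
    judged_human: bool  # evaluator believed the text was human-written

def fool_rate(judgments: list[Judgment]) -> float:
    """Fraction of LLM-written items that evaluators mistook for human writing."""
    llm_items = [j for j in judgments if j.source == "llm"]
    if not llm_items:
        return 0.0
    fooled = sum(j.judged_human for j in llm_items)
    return fooled / len(llm_items)

# Hypothetical example: 3 of 4 LLM-written items pass as human,
# giving a fool rate of 75%, above the 50% mark the study reports.
sample = [
    Judgment("a", "llm", True),
    Judgment("b", "llm", True),
    Judgment("c", "llm", False),
    Judgment("d", "llm", True),
]
print(f"fool rate: {fool_rate(sample):.0%}")  # fool rate: 75%
```

A rate above 50% means evaluators did worse than chance at flagging the machine-written items, which is what makes the reported result alarming.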
This research highlights a significant problem: current LLMs can generate high-quality election disinformation, even for highly specific local scenarios, at a far lower cost than traditional methods. That lowers the barrier for bad actors to spread misleading information at scale. The work also provides a benchmark for measuring and evaluating these capabilities in future models.
The findings are a wake-up call for researchers and policymakers, who need reliable ways to detect and counteract this type of disinformation. As generation technology advances, so do the methods of deception, and staying a step ahead of them is crucial to protecting the integrity of elections and democracy.