
AI-generated election misinformation just can't reach virality


We’re only weeks away from the U.S. presidential election, and people are getting ready to head to the polls. As anxiety ramps up for this election, so does misinformation. OpenAI published a paper stating that, while plenty of people are using ChatGPT to spread misinformation, the AI-generated content just isn’t reaching virality.

If a tree falls in the forest and no one hears it, does it make a sound? Likewise, if someone makes a social media post and no one engages with it, does it even matter? That’s an important question. The amount of misinformation in circulation is on the rise, as thousands of people use AI tools like ChatGPT to generate social media posts, blog posts, images, deepfaked videos, and more. The internet is being inundated with this content, but the lifeblood of any misinformation campaign is engagement. If no one sees the posts the fraudsters publish, they won’t have any sway.

Well, OpenAI stated that AI-generated misinformation isn’t reaching virality

It seems that most of these AI-generated posts just aren’t getting many views or clicks. OpenAI, the company behind ChatGPT and DALL-E, released a 57-page paper detailing some of its findings. The report focused mostly on elections in the U.S., Rwanda, and India.

Because of the influx of AI-generated misinformation in Rwanda, the country actually banned ChatGPT altogether. It’s unclear whether the ban will last only until the elections are over or is indefinite; we’ll have to see whether misinformation actually declines as a result. Over in India, a major election is underway, which makes the country a prime target for misinformation. According to the paper, an Israeli group released a flood of social media posts targeting the Indian election back in May.

Europe wasn’t safe either: people were generating social media comments about the European Parliament, the French election, and politics in the U.S., Germany, Poland, and Italy. Lastly, we can’t forget about the U.S. election itself. Back in August, a group of Iranian individuals released “long-form articles” spreading lies about the U.S. election. With less than a month until the elections, the amount of misinformation is only going to rise.

Silver lining: they weren’t effective

OpenAI stated that it was able to identify all of these cases. Not only that, but the posts failed to attract much attention. While some people did engage with them, none reached viral levels of engagement.

The fewer people who interact with these posts, the less traction they get in the algorithms that surface them. That’s good news, but we still need to find ways to minimize AI-generated misinformation.
