As Russia and China ‘Seed Chatbots With Lies’, Any Bad Actor Could Game AI the Same Way

by oqtey

“Russia is automating the spread of false information to fool AI chatbots,” reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)

The Post argues that this tactic offers “a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform,” and calls it “a fundamental weakness of the AI industry.”

Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor. “Most chatbots struggle with disinformation,” said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. “They have basic safeguards against harmful content but can’t reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information.”
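
Pistilli's point about search-augmented systems is mechanical rather than mysterious. Below is a minimal, hypothetical sketch (the scoring weights, names, and documents are invented for illustration, not any vendor's actual pipeline) of how a retriever that blends topical relevance with a recency bonus can let a flood of fresh, near-identical pages crowd out one older, reliable source:

```python
# Toy retrieval scorer: topical relevance plus a recency bonus.
# All weights, names, and documents are invented for illustration.
from dataclasses import dataclass

@dataclass
class Page:
    source: str
    text: str
    age_days: int  # days since publication

def score(page: Page, query_terms: set[str], recency_weight: float = 0.5) -> float:
    """Rank by keyword overlap, with a bonus for newer pages."""
    terms = set(page.text.lower().split())
    relevance = len(terms & query_terms) / max(len(query_terms), 1)
    recency = 1.0 / (1.0 + page.age_days)  # newer -> closer to 1.0
    return relevance + recency_weight * recency

query = set("report on the topic".split())
corpus = [Page("reliable-outlet", "report on the topic with verified facts", age_days=400)]
# Hundreds of freshly published pages repeating the same false claim:
corpus += [Page(f"seeded-site-{i}", "report on the topic repeating a false claim", age_days=2)
           for i in range(300)]

top_five = sorted(corpus, key=lambda p: score(p, query), reverse=True)[:5]
print([p.source for p in top_five])  # every slot goes to a seeded page
```

Under these toy weights, all five top-ranked pages come from the seeded network, which is the failure mode the researchers describe.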

Early commercial attempts to manipulate chat results are also gathering steam, with some of the same digital marketers who once offered search engine optimization — or SEO — for higher Google rankings now trying to pump up mentions by AI chatbots through “generative engine optimization” — or GEO.

Our current situation “plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda.”

Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations… In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models. While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing…
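
To see why “hundreds of pages ... saying much the same thing” matters, consider a hedged sketch of a naive corroboration check: if a system counts distinct domains as independent confirmation, one operator publishing the same claim across many cheap domains manufactures false consensus. The domain names and logic here are purely illustrative assumptions:

```python
# Naive corroboration signal: count distinct domains asserting a claim.
# Domain names are hypothetical; no real system's logic is shown here.
from urllib.parse import urlparse

def corroboration(urls_asserting_claim: list[str]) -> int:
    """Treat each distinct domain as one 'independent' confirmation."""
    return len({urlparse(u).netloc for u in urls_asserting_claim})

genuine = ["https://reliable-outlet.example/report"]
# One operator, many cheap look-alike domains, identical claim:
seeded = [f"https://pravda-clone-{i}.example/story" for i in range(150)]

print(corroboration(genuine))  # 1
print(corroboration(seeded))   # 150 -- false consensus, manufactured at scale
```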

The gambit is even more effective because the Russian operation managed to get links to the Pravda network's stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50 percent in just over a year, mostly because of AI crawlers….)

Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.

“They do even better in such places as China,” the article points out, “where traditional media is more tightly controlled and there are fewer sources for the bots.” (The nonprofit American Sunlight Project calls the process “LLM grooming”.)
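
Check First's methodology is not detailed in the article, but an audit of this kind can be approximated: scan page markup for external links and flag any whose domain appears on a watchlist. The wikitext sample, domain names, and regex below are illustrative assumptions, not Check First's actual tooling or data:

```python
# Sketch of a link audit: flag external URLs whose domain is on a watchlist.
# The wikitext sample, domains, and regex are illustrative assumptions.
import re
from urllib.parse import urlparse

WATCHLIST = {"news-pravda.example", "pravda-fr.example"}  # hypothetical domains

def flag_links(wikitext: str) -> list[str]:
    """Return external URLs in page markup whose domain is on the watchlist."""
    urls = re.findall(r"https?://[^\s\]|<>]+", wikitext)
    return [u for u in urls if urlparse(u).netloc.lower() in WATCHLIST]

sample = """Background on the conflict.<ref>
[https://news-pravda.example/article-123 Coverage of events]</ref>
See also [https://archive.example/page an archived source]."""

print(flag_links(sample))  # ['https://news-pravda.example/article-123']
```

Run over full page dumps in many languages, a scan like this would surface exactly the kind of hyperlink inventory the researchers report.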

The article quotes a top Kremlin propagandist as bragging in January that “we can actually change worldwide AI.”
