The Russian invasion of Ukraine has given new impetus to online fake news, to the extent that anti-misinformation firm NewsGuard is currently tracking 201 sites that it claims are spreading myths about the war. It's not a new problem. War, of course, only accelerates it, and the tactic is as old as the hills – the British famously set up a transmitter called Aspidistra in 1942 to broadcast programmes intended to convince the German people that the war was going badly for their country.
The problem is that misinformation appears to be growing, and it's not all about Russia and Ukraine. There was a spike during the height of the Covid-19 pandemic, as well as during recent elections and referendums in the US and Europe. Wherever there's an opportunity to polarise opinion, or to steal data, it seems fake news will emerge.
The worrying development, though, is that AI is now being used to disseminate fake news. Julia Ebner, senior research fellow at the Institute for Strategic Dialogue, recently detailed how Russian AI was used to identify, amplify and exploit grievances online, in order to undermine peace in Western nations. Speaking at a recent Cityforum webinar, she said that, in effect, Russia was using AI to weaponise extremist ideologies and conspiracies.
AI, though, has huge potential in fighting it too. We need it, because we have to remember that we are all prone to being duped. As the World Economic Forum warned in its recent report 'The Ability to Distil the Truth', it's not just those "with poor science knowledge, low cognitive abilities and a tendency to be accepting of weak claims" who believe false stories. No one is immune, which is why a technology solution is the best way to go.
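To make the idea of a technology solution slightly more concrete: one of the simplest approaches used in automated misinformation detection is a text classifier trained on examples of genuine and fake stories. The sketch below is a minimal, purely illustrative naive Bayes classifier on a handful of made-up headlines – the training data, labels and wording are all hypothetical, and real systems use far larger datasets and far more sophisticated models.

```python
from collections import Counter
import math

# Toy training data -- purely illustrative, not drawn from any real dataset.
TRAIN = [
    ("miracle cure doctors hate this secret", "fake"),
    ("shocking truth they dont want you to know", "fake"),
    ("study finds modest link between diet and health", "real"),
    ("officials confirm election results after audit", "real"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Score each label with add-one smoothing; return the likelier one."""
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.split():
            score += math.log((counts[label][w] + 1) /
                              (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals, vocab = train(TRAIN)
print(classify("shocking secret cure they dont want known",
               counts, totals, vocab))  # -> fake
```

Production fake-news detectors layer many more signals on top of this – source reputation, network spread patterns, image provenance – but word-level classification of this kind is where much of the field started.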