Fake news – what teachers need to know



We now live in a world in which the information we receive cannot always be trusted, and the interaction between the public, social media and the news media is constantly evolving. There have always been competing versions of events and perspectives on the world and how it works, communicated through print and other media, often through a political or ideological lens. However, those perspectives were more often than not rooted in an agreed, universal basis of fact: publications such as Weekly World News remained on the fringe and were treated as satire, and the information people received could generally be trusted. The problem now is that this shared foundation can no longer be taken for granted.

This week, we’re taking a look at the problem of fake news and misinformation, using articles that we’ve come across that highlight the issue. We’ll examine the scale and trends of the problem, look at what misinformation looks like in practice, and discuss potential solutions and how people and organisations are grappling with it.

Let’s start with a question – can you tell what’s real or fake? Take a look at https://blurrd.ai/realorfake/ and see if you can pick between real and AI-generated images. Good luck!

Here’s a transcript of an excellent presentation from Data & Society Founder and President Danah Boyd, arguing that social media fosters the ‘strategic and purposeful production of ignorance’, and that modern media is being structurally manipulated. It’s an excellent read: thought-provoking and sobering, and it concludes by pointing out that “You will not achieve an informed public simply by making sure that high quality content is publicly available and presuming that credibility is enough while you wait for people to come find it.”

Have you ever wondered how spreaders of misinformation acquire online influence? This is an excellent article from Danil Mikhailov about why the online environment is a uniquely permissive system, and how an individual with a lot of time can compete against wealthy institutions on near-equal terms. Basically, this person creates content and spends time acquiring social capital (likes), which is picked up by algorithms (algorithmic capital), which accelerates a positive feedback loop that builds a community. From there, the community can generate economic capital (Patreon donations etc.), which accelerates growth further and increases the size of the community. Voila! Lots of online misinformation. There’s a lot more in the article, and again it’s a fascinating read, but it’s also very important to understand if we are to develop a strategy to counter misinformation. A rough sketch of the feedback loop is included below.
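
Purely as an illustration of how that compounding works, here is a toy Python sketch of the loop. This is our own simplified model, not anything from Mikhailov’s article; every variable name and growth rate in it is invented.

  # Toy simulation of the feedback loop described above. Every variable
  # name and growth rate here is invented for illustration only – none
  # of them come from Mikhailov's article or any real platform.
  def simulate_influence(weeks=10, posts_per_week=5.0):
      social_capital = 0.0    # accumulated likes and shares
      community = 10.0        # followers
      economic_capital = 0.0  # e.g. Patreon donations
      for week in range(1, weeks + 1):
          # Posting converts free time into social capital.
          social_capital += posts_per_week * 2
          # Engagement is picked up by recommendation algorithms,
          # which expose the content to a wider audience.
          community += 0.1 * social_capital
          # A larger community donates more money...
          economic_capital += 0.05 * community
          # ...which funds more content, closing the loop.
          posts_per_week += 0.01 * economic_capital
          print(f"week {week:2d}: community={community:7.1f}, "
                f"posts per week={posts_per_week:4.1f}")

  simulate_influence()

Even with these arbitrary numbers, the community and the posting rate keep compounding rather than levelling off, which is the dynamic the article describes.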

Continuing in the same vein, this article explores why pro-vaccination advocates are struggling to be heard online. A large and powerful non-profit in the United States named Vaccinate Your Family (VYF) had to stop posting videos featuring doctors and health advocates on YouTube because the algorithm would direct viewers away from VYF’s content and towards anti-vaccination videos. Again, the algorithms have been manipulated by extremely active anti-vaccination groups, and providers such as YouTube are struggling to keep up. The result is that VYF is outmatched online, with the overwhelming majority of vaccine-related content coming from anti-vaccination groups.

Further to our anti-vaccination exploration, the World Economic Forum weighs in, discussing the ‘perils of a post-truth world’ and looking to behavioural science as a possible solution to the problem, alongside the technical approaches discussed above.

Certain groups may share more misinformation and fake news on social media than others. This article, which references research from Princeton, identifies the following about news sharing on Facebook:

  • 8.5 percent of users in the study shared at least one link from a fake news site.
  • 18 percent of those who identified as Republicans shared links to fake news sites, compared to less than 4 percent of Democrats.
  • 11 percent of users older than 65 shared a hoax, while just 3 percent of users 18 to 29 did.
  • Facebook users older than 65 shared nearly seven times as many fake news articles as the youngest age group (18 to 29).

The study did not explore why older users shared more misinformation online, but it may have something to do with information literacy, or the fact that all sources look much the same in the Facebook news feed, making credibility hard to judge at a glance.

What Fake News Looks Like

The true story of fake headlines – how they grab our attention, how they mislead, and why mainstream ‘prestige’ media is falling into the trap of clickbait headlines. One fun fact from this article is that most readers spend far more time on headlines than on the articles themselves.

Next, we have two pieces of research involving Twitter. The first shows that false news travels more quickly on the platform than real news does, although that’s hardly surprising given what we know about how lies spread online. The second shows that Twitter can be manipulated by coordinated groups using automated accounts (bots) that push keywords and hashtags to generate very high volumes of traffic and thereby influence online debate. It’s a long read, but fascinating.

Have you ever looked through product reviews on Amazon while pondering a purchase online? It’s possible that the reviews are fake, written by third-party companies paid by the seller. In this example, a company selling a weight-loss product has been found guilty of using fake reviews to sell its (potentially dangerous) product, but this appears to be the tip of the iceberg. I have personally taught students whose part-time job is to write fake reviews for beauty products online, and can only imagine how far this goes.

It’s not happening yet, but it’s coming soon: AI-generated fake news flooding search engines and gaming Google search results. Artificial intelligence is becoming ever better at generating factual-sounding text, and the concern is that this may start happening in response to keywords or questions entered into a search engine. It’s a scary read, and the potential for an even greater deluge of false information is very real.

AI again, and this time we’re looking at deepfakes – synthetic media in which a person in a video is replaced with, or generated from, someone else’s likeness. We’ve written about them before, but here are two more to frighten and entertain you – there was a third of Frank Sinatra singing Toxic by Britney Spears, but it’s been removed from YouTube.

President Trump explains why Jeffrey Epstein didn’t kill himself.

Jeff Bezos and Elon Musk appear in Star Trek.

The potential for misuse of this technology is significant, and it won’t be long before we’re trying (and failing) to tell the difference between AI-generated and real videos, not just images.

Solutions to Fake News

So what are some possible solutions? We may have to accept that deepfakes and misinformation are our ‘new normal’ and make peace with the idea that there is no solution to deepfakes, but there is work being done to offer a way forward in which we benefit from our technology and are not misled by it.

Finland has been running an anti-fake-news initiative since 2014, teaching journalists, students, residents and politicians how to identify and counter false information that’s designed to sow division. The first line of defence is the kindergarten teacher, and the approach continues through schooling into a formal critical thinking curriculum in which students learn to separate fact from fiction. Participants learn how to identify sources and bias, and to approach what they read with scepticism, not cynicism. One advantage Finland has that many other nations don’t is a deliberately developed, strong national narrative based on human rights and the rule of law, which means that debunking false claims is often not even necessary.

Other initiatives that are not quite as comprehensive involve banning technologies such as deepfakes, which China and Facebook have both sought to do recently. Both are well positioned to succeed, thanks to China’s control over its Internet and Facebook’s control over its platform. However, one article shared earlier argues that any ban on deepfakes is a bandage at best, and that a richer discussion around topics such as consent and media literacy needs to take place. It argues that if a ban is put in place, someone will find a technological way around it, making the ban little more than an attempt to mask deeper problems.

Perhaps we need to look at how we redefine trust. Trust underpins democratic institutions and all human interactions, and this article speaks to the damage that a lack of it can cause and is currently causing, especially among young people. It examines the historical elements of trust in modern societies and links the development of trust to technology, then calls for ‘Trust 4.0’, “that builds a bridge across tribes, cultures and systems; from peer-to-peer networks to top-down structures. We need to make room for the interpersonal, for institutions and individuals – we need to build interdependent trust, creating a multi-dimensional relationship across different stakeholders. Trust 4.0 enables existing and emerging systems to work together.” That sounds great, but what’s missing is the how, and that might present one of our next great challenges.

We finish with a letter from Aaron Sorkin to Mark Zuckerberg. It’s scathing, asking him how he can allow Facebook to run ads that “claim Kamala Harris ran dog fights out of the basement of a pizza place while Elizabeth Warren destroyed evidence that climate change is a hoax and the deep state sold meth to Rashida Tlaib and Colin Kaepernick.” Sorkin challenges Zuckerberg to address these lies and misrepresentations of fact (Facebook has also been called out by Sacha Baron Cohen), and suggests that perhaps Facebook should have been started by the Winklevoss twins rather than Zuckerberg. Ouch.

The research conducted and insights gained during the writing of this article have inspired the Indigo Schools Framework, the details of which can be found in the Primer on our Resources Page. Send us an email at info@indigoschools.net or complete the form below if you’d like to learn more about how the Indigo Schools Framework can be successfully applied within your school. Also be sure to follow us on Facebook and LinkedIn for our latest updates.

Interested in transforming your school? Let’s start a conversation.
