Today we’re looking at privacy, how everyday technology affects our privacy, and what we need to be mindful of when we connect. One example of an area to think about is the Internet of Things, which steadily continues to extend its tentacles throughout our everyday lives, constantly generating, sharing and tracking data.
So what happens to all of this information? How is it used and who has access to it? Who owns our data and how can we protect it? Does the everyday, average person even need to worry about privacy and what happens to their personal data? Is privacy dead?
The articles below give us a brief glimpse of what's happening with our personal information, and a pretty good idea of how technology, the use of data and the protection of privacy are trending and what we need to think about.
The internet of everything has long been promised, but it appears that Apple and Google are making significant progress towards connecting and tracking every wireless-capable device, both within the home and outside it. Technical challenges remain, but with plant water sensors, personal wearables, mailbox sensors and weather sensors joining smart home technology and personal assistants, we can expect these companies (and Amazon) to fully integrate these services across platforms and connections over the coming decade, essentially making most aspects of our lives traceable and trackable.
This is a small step towards protecting privacy: it's now possible to tell Alexa to delete your voice recordings. Amazon appears to be responding to a broad range of concerns regarding the privacy of its users and has designed a privacy hub to help make protecting one's personal data a little easier.
Staying with Alexa, Amazon's AI is able to accurately detect a person's emotional state from the sound of their voice. The AI has been trained on tens of thousands of voices, the training takes place in three phases, and the process appears to be highly rigorous and detailed. The idea is to make Alexa more engaging, more responsive and better able to correct mistakes, and Alexa may eventually be able to monitor and detect indicators of a person's health from their voice.
Amazon is the dominant player in online retail and smart home systems, and is entering the market in almost every known domain. So, can we trust Amazon? With Amazon devices such as cameras becoming ubiquitous, the author of this article imagines a scenario where Amazon products have fully penetrated the market: its cameras and microphones are recording, and its AI is analysing our homes, lives, interactions, workplaces and shared spaces, sending alerts and prompts to us throughout the day. Perhaps Amazon's relentless drive towards surveillance and efficiency needs to be watched a little more closely as the company grows and expands its services.
But it's not just Amazon: Google is also listening to what's happening at home. Its engineers are gathering sound recordings and data through Google's smart speakers and assistants, and it's becoming clear that although the company isn't technically 'eavesdropping', its human engineers are listening to recordings of people's voices, and devices are recording and storing conversations when users think they are switched off.
Google does need data to improve its speech-to-command products, and it converts the sound recordings into text that trains its AI. The AI needs to be able to distinguish between pauses, coughs, sneezes, mumbles and the myriad other sounds a person makes while speaking and maintaining a flow of communication. The problem is that when a person utters something that may or may not sound like 'OK Google', the device records everything from that point: private conversations, sex, arguments, domestic violence – the list goes on.
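To see why near-misses get swept up, here's a purely illustrative sketch of an over-sensitive wake-word check. It is a hypothetical, text-level stand-in for the acoustic models these assistants actually use; the phrases, threshold and matching method are all assumptions made for the sake of the example.

```python
# Illustrative sketch only -- not Google's implementation. It shows why a
# fuzzy wake-word matcher can start recording on phrases that merely sound
# a bit like the trigger phrase.
from difflib import SequenceMatcher

WAKE_PHRASE = "ok google"
THRESHOLD = 0.6  # hypothetical sensitivity: lower = more false triggers

def sounds_like_wake_word(heard: str) -> bool:
    """Crude text-level stand-in for an acoustic wake-word model."""
    prefix = heard.lower()[: len(WAKE_PHRASE)]
    return SequenceMatcher(None, WAKE_PHRASE, prefix).ratio() >= THRESHOLD

overheard = [
    "ok google what's the weather",
    "okay, good, let's go",        # near miss -- still triggers
    "pass the salt please",
]

for phrase in overheard:
    if sounds_like_wake_word(phrase):
        print(f"RECORDING STARTS after hearing: {phrase!r}")
    else:
        print(f"ignored: {phrase!r}")
```

Anything said after one of those false triggers ends up in the recording, which is exactly the problem described above.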
Google admits that it collects this data, but of course it needs to in order to improve its products. The company's language experts listen to only 0.2% of audio fragments, which, Google says, contain no personal or identifiable information. Interesting, and one to watch.
So what does the future of privacy look like? It appears to start with becoming informed. An interesting study from Princeton University tracked smart TV devices connected to streaming services and found that the data they shared was intercepted by adsystem, doubleclick and other trackers, which collected device information, IP addresses and information about likes, dislikes and viewing habits. Researchers at Princeton have also developed an app called IoT Inspector that reveals how active internet-capable devices are even when we are not using them. The app shows you which devices are sharing information and exactly what that information is, even when you think they are switched off: Amazon devices such as Alexa continue to connect to Amazon servers even when the microphone and the device are turned off.
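For a sense of what tools like IoT Inspector are observing, here's a minimal sketch that watches DNS lookups on your own network to see which servers each device contacts while it sits 'idle'. It is not the IoT Inspector app itself, and it assumes the scapy library is installed and that you run it with capture privileges on your home network.

```python
# A minimal sketch of the idea behind tools like IoT Inspector: watch DNS
# lookups on the local network to see which servers each device contacts,
# even when the device appears to be doing nothing.
# Assumes scapy is installed (pip install scapy) and capture privileges.
from collections import defaultdict
from scapy.all import sniff, DNSQR, IP

lookups = defaultdict(set)  # device IP -> set of domains it has resolved

def record_dns(packet):
    """Print each new domain a device on the network looks up."""
    if packet.haslayer(DNSQR) and packet.haslayer(IP):
        device = packet[IP].src
        domain = packet[DNSQR].qname.decode(errors="replace").rstrip(".")
        if domain not in lookups[device]:
            lookups[device].add(domain)
            print(f"{device} looked up {domain}")

# Capture DNS queries on the local network for 60 seconds.
sniff(filter="udp port 53", prn=record_dns, timeout=60)
```

Even a short capture like this tends to show smart devices quietly phoning home to cloud and advertising servers throughout the day.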
The Global Government Biometrics Data Report for 2017-2027 is an interesting read. Briefly, the biometrics market is established and growing, especially in emerging economies, with technologies such as fingerprint, facial, voice and even vein recognition all gaining in importance with governments (this is concerning, and we've written about it before). The report goes into significant detail, projecting market size and share by technology and region, the channels that are driving the market, and future opportunities.
“In a world that runs on data, everyday uses of technology can suddenly put people in danger when circumstances change. This also means the opposite: Most of the time, most people won’t feel the cost of having their data exploited.”
This is a quote from an article discussing what we can learn from the Hong Kong protests about the future of privacy. It resonated with me, and I believe it encapsulates what today's post is about. As one example, the moment a society protests against its government, its ultra-efficient public transport system becomes an excellent tool for surveillance and crowd control. The article points to the impossibility of anonymity in modern cities due to constant surveillance, inequalities of access to privacy, and high AI error rates for groups that are not white men. There is hope: technological totalitarianism is not inevitable. But democratic institutions are under threat globally, and the protection of privacy will be crucial.
Here's an article looking at how the Secretary of State for Health and Social Care in the UK is calling for increased automation of NHS services in collaboration with private sector businesses. This would involve giving private companies access to NHS datasets containing a huge amount of public information that could be misused. The article is optimistic about the potential of new technologies driven by big data to make our lives healthier and improve wellbeing, but argues that public datasets must be treated as public assets, with private industry understanding what actions are appropriate in terms of access to and use of data in a democracy.
The power that we are starting to see from technological innovation has serious implications for the safety of our children and students. We’re going to look at a few examples of where things are going below, but the challenges we face in supporting students and keeping them safe even now are significant: online predators (warning: an unsettling read), sexting, bullying, porn, gossip, the list goes on.
The examples below will (hopefully) be thought provoking and may serve as a provocation for discussion. It does not take much stretching of the imagination to see how these emerging digital technologies, or versions of them, might be misused.
Realistic fake human images are being created by something called a Generative Adversarial Network, which produces images so life-like they can't be distinguished from photos of real people. The paper is a heavy read; here's an example:
“The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture.”
Take a look at https://thispersondoesnotexist.com/ to see what the result of this complicated language looks like. It's becoming harder to tell what's real and what's not.
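To make the adversarial idea a little more concrete, here's a heavily simplified sketch of a GAN training loop, assuming PyTorch is installed. It teaches a tiny generator to mimic a simple number distribution rather than faces; systems like the one quoted above apply the same generator-versus-discriminator game to high-resolution images with vastly larger networks.

```python
# A heavily simplified GAN training loop (PyTorch assumed). A generator
# learns to mimic a simple 1-D distribution; a discriminator learns to
# tell real samples from generated ones. Face generators work the same
# way, just at enormously larger scale.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: mean 4, spread 1.5
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator: push real samples towards 1 and fakes towards 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to fool the discriminator into calling fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator's output mean should have drifted towards the real mean of 4.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```

After enough rounds of this game, the generator's output becomes hard for the discriminator to tell apart from the real thing, which is exactly what makes the faces at the site above so convincing.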
This technology is starting to extend to videos in which one person’s face is superimposed onto another. These are called ‘Deep Fakes’, and the technology is advancing so rapidly that it may soon be impossible to distinguish between real and fake videos. Have a look at this Jennifer Lawrence / Steve Buscemi Deepfake, and then think of the implications for pornographic videos and bullying among students.
We're not only generating digital images expertly, we're also getting extremely good at identifying them. It's another complicated read, but briefly, technology called a neural network looks at the information contained in images and identifies patterns. After some initial training, a neural network can teach itself and learn more about images and objects the more it 'sees', i.e., the more data it has to work with. With billions of images now online in the public domain to learn from, these networks have become very good at identifying images, including human faces.
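As a small illustration of how accessible this has become, here's a sketch that asks a pretrained convolutional network to label a photo. It assumes a recent version of torchvision (0.13 or later) and a local image file, here called photo.jpg; both are assumptions for the example rather than anything from the article.

```python
# Sketch of image recognition with a convolutional network that has already
# been trained on over a million labelled photos (ImageNet).
# Assumes torchvision >= 0.13 and a local file "photo.jpg".
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()   # downloads pretrained weights
preprocess = weights.transforms()          # resize, crop, normalise

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # shape: (1, 3, 224, 224)

with torch.no_grad():
    scores = model(batch).softmax(dim=1)[0]

# Print the five most likely labels and their confidence.
top5 = scores.topk(5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {score.item():.1%}")
```

A few lines of code and a downloaded model are enough to label everyday photos, which is why face recognition at scale is no longer confined to specialist labs.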
Data privacy – get used to hearing this term, because it's going to become even more important as more and more of our lives are lived and stored online. Data is the new global currency, and companies are making billions by storing and mining massive amounts of it. Data is already being misused to influence our elections, and Facebook has allowed malicious third-party apps to gather and share personal data, operating with little transparency or oversight into how the data was (and still is) being stored and used.
If our data is being treated carelessly by large companies on the one hand, it's being actively hacked on the other. As more and more government and private services move online, the risk of hackers accessing the data for identity theft and other nefarious purposes increases. Estonia and South Korea have faced challenges with their online ID systems: Estonia's has serious vulnerabilities, South Korea has already had the personal details and ID card numbers of 80% of its population stolen, and in Singapore a major healthcare provider that serves millions of people was breached.
So can we just refuse to participate? Maybe, maybe not. In this example, a man was fined when he protested against being filmed by facial recognition cameras during a trial in the UK. The police claimed the fine was for swearing, not for refusing to be filmed, but people do appear to be getting stopped regardless.
Data combined with powerful algorithms is also being used to control, hide or expose people. In Saudi Arabia, data is used to prevent women from travelling and to enforce traditional guardianship laws, and in China a person can receive an alert when they are in close proximity to someone in debt. In the interests of keeping people safe, it's essential that they understand how data and digital technology can be manipulated and misused, so they can recognise when they may be exposed and take action to protect themselves.
A possible solution may be found here: reducing harm in social media and online through a duty of care. Under a duty of care, any developer of a digital product must take care in how that product relates to people or things. A duty of care can describe the types of behaviour or effects of technology to avoid, and can also be enshrined in law. Duties of care work well in workplaces and public spaces, so why not online as well?
In this article's example, there is a series of 'key harms' to be avoided, including:
1. Harmful threats to people or things, such as pain, injury or damage.
2. Harms to national security such as violent extremism.
3. Emotional harm, for example encouraging others to commit suicide.
4. Harm to young people – exposure to harmful content such as bullying and grooming.
5. Harm to justice and democracy including protecting the integrity of the criminal trial process.
Anyone who is exposed to harm through a digital product would be able to sue the developer for failing to meet its duty of care, but only if the product was shown to fail at a systemic level. This also benefits the market, because the company, rather than the people it harms, bears the cost of its actions.
The challenges for the misuse of data and AI are growing, and the risks are huge, not only for young people but for schools as well.
Think about the apps that schools use for digital portfolios – who owns the data that's stored, and how is it used? The app developer's goals for the use of its product and data don't always align with those of schools and families, and that's a problem.
How can we keep our young people safe in an era of Generative Adversarial Networks and Deepfakes? What actions can individuals take to protect their data, and how can we keep companies accountable for how their products are used?
The research conducted and insights gained during the writing of this article have inspired the Indigo Schools Framework, the details of which can be found in the Primer on our Resources Page. Send us an email at info@indigoschools.net or complete the form below if you'd like to learn more about how the Indigo Schools Framework can be successfully applied within your school. Also be sure to follow us on Facebook and LinkedIn for our latest updates.