Exponential technology crash course

Artificial Intelligence (AI) is a good example of an exponential technology. For reference, ‘exponential’ means that the power or performance of the technology doubles within a roughly fixed period of time, or its cost halves over that same period, or both.
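
To make the doubling rule concrete, here is a minimal sketch – our own illustration, with made-up starting values – of what ten doubling periods do to performance and cost:

```python
# A toy illustration of exponential improvement: performance doubles and
# cost halves once per fixed period. Starting values are invented.
performance, cost = 1.0, 100.0
for period in range(1, 11):
    performance *= 2
    cost /= 2
    print(f"after period {period:2d}: performance x{performance:6.0f}, cost {cost:6.2f}")
```

After just ten periods, performance is 1,024 times better and the cost has fallen below 0.1% of its starting point – which is why exponential change tends to catch people by surprise.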

Exponential technology is already everywhere, and we’ve written about it before. AI decides what we see on YouTube, Netflix and social media. Amazon’s supply chain can’t run without it, and we need to get used to the fact that it’s here to stay. But, as with any emerging digital technology, AI presents enormous opportunities to benefit us, as well as potential problems. Today we’re looking at what some of the challenges might be, along with what young people need to be aware of and talking about, if they’re not already.

The AI boom is accelerating quickly. For example, the time needed to train an image recognition network on ImageNet, a popular benchmark dataset, has dropped from 60 minutes to just 4 minutes in about 18 months – a 15x jump in training speed. The 2018 AI Index report examines the growing investment in, and progress of, AI globally, breaking down each region’s areas of focus. It’s fascinating reading, and well worth a look.
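
As a quick back-of-the-envelope check (our own arithmetic, not a figure from the report), that speedup implies training speed was doubling in well under five months:

```python
import math

# Back-of-the-envelope: a 15x speedup over 18 months implies training
# speed doubled roughly every 18 / log2(15) ≈ 4.6 months.
speedup = 60 / 4            # 15x faster
months = 18
doubling_time = months / math.log2(speedup)
print(f"implied doubling time: {doubling_time:.1f} months")
```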

The most-funded AI startup in the world right now is a Chinese company called SenseTime, which specialises in facial recognition for China’s surveillance network. Western companies are flocking to invest, and SenseTime is looking for another $2 billion – good business, but one that raises some ethical questions. It’s worth discussing with young people to see what they think.

It’s not just private groups investing in AI; governments are funding it too. One example is a major drive by the UK Government to provide postgraduate AI training as part of its modern industrial strategy. The government is partnering with industry to provide these opportunities, and it’s a good example of the new work emerging from the advances currently being made. The catch, of course, is that these opportunities will only be available to a small, highly qualified and highly motivated group – which excludes most of us. Again, this is worth discussing, especially if initiatives like this can help address the gender gap in the AI workforce.

AI is starting to do some pretty interesting things beyond managing supply chains. MuseNet is a neural network that can generate musical compositions, combining instruments and styles to create completely original pieces. The AI was not explicitly programmed with rules of music theory, “… but instead discovered patterns of harmony, rhythm, and style by learning to predict …” what comes next. In short, the AI taught itself to make music by analysing hundreds of thousands of existing pieces.
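
To give a feel for what ‘learning to predict’ means, here is a deliberately tiny sketch of the same idea – a first-order model of our own invention, nothing like MuseNet’s large neural network – that learns which note tends to follow which, then generates a new melody:

```python
from collections import Counter, defaultdict
import random

# Toy "next-note" model trained on hypothetical melodies: count which
# note follows which, then generate by sampling likely continuations.
melodies = [
    ["C", "E", "G", "C"],
    ["C", "E", "G", "E", "C"],
    ["G", "E", "C"],
]
model = defaultdict(Counter)
for melody in melodies:
    for prev, nxt in zip(melody, melody[1:]):
        model[prev][nxt] += 1

def generate(start="C", length=8):
    note, output = start, [start]
    for _ in range(length - 1):
        followers = model.get(note)
        if not followers:          # no known continuation: stop early
            break
        notes, counts = zip(*followers.items())
        note = random.choices(notes, weights=counts)[0]
        output.append(note)
    return output

print(" ".join(generate()))
```

The principle scales: replace the note counts with a large neural network and the three melodies with a vast training set, and prediction starts to look like composition.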

AI is also teaching itself to write – too well. OpenAI, a research company co-founded by Elon Musk, has built an AI model that generates text so convincing that the company initially declined to release the full model over fears of misuse. The AI is fed a sentence to get started, and then makes predictions about what should come next, based on patterns it learned from trawling the Internet and reading about 10 million articles online. The text it produces is plausible yet entirely fictional, which is particularly dangerous when it is writing ‘news’ reports.
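
For readers who want to experiment, smaller versions of the model (known as GPT-2) were released publicly. Here is a minimal sketch of the ‘feed it a sentence, let it predict the rest’ loop, assuming the open-source Hugging Face transformers library is installed – the library and the prompt are our choices, not something specified in the reporting:

```python
# A minimal sketch of prompted text generation with the publicly
# released small GPT-2 model, via the Hugging Face `transformers` library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Scientists announced today that"  # hypothetical starter sentence
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])  # plausible-sounding, entirely made up
```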

It continues. AI is now conducting behaviour analysis in stores to identify potential shoplifters. The technology appears to work well, and the goal is prevention: staff are alerted to unusual behaviour and then approach the customer to ask whether they need any assistance. The company that developed it looks set to grow rapidly, and is attracting interest from security companies worldwide.

More benefits and downsides. AI has the potential to worsen global inequality, and AI bias is real and hard to fix: it is difficult to ensure that an AI is actually solving the problem it is meant to solve, and bias creeps in through how data is collected and prepared. One example is a recruitment AI that Amazon used to identify promising candidates. Because more men than women had been hired previously, the AI learned to continue that pattern, downgrading applicants simply because they were female.
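
A stripped-down illustration (our own toy numbers, not Amazon’s actual system) shows how easily this happens – a ‘model’ that simply learns the historical hire rate for each group will faithfully reproduce the bias in its training data:

```python
from collections import defaultdict

# Hypothetical history: 80% of male applicants were hired, but only
# 20% of female applicants. The "model" below just learns those rates.
history = (
    [("male", 1)] * 80 + [("male", 0)] * 20
    + [("female", 1)] * 20 + [("female", 0)] * 80
)

totals, hires = defaultdict(int), defaultdict(int)
for gender, hired in history:
    totals[gender] += 1
    hires[gender] += hired

def recommend(gender):
    # Recommend whenever the historical hire rate for the group exceeds 50%.
    return hires[gender] / totals[gender] > 0.5

print(recommend("male"))    # True  -- the pattern in the data, not merit
print(recommend("female"))  # False
```

The model looks objective – it is ‘just following the data’ – but the data encodes the old bias, which is essentially what happened in the Amazon case.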

Because AI technology is developing so quickly, the rules are struggling to keep up. For example, some police departments in the United States are using facial recognition in dubious ways, with no guidelines in place. A case in point:

“… a suspect was caught on camera allegedly stealing beer from a CVS in New York City. When the pixelated surveillance footage produced zero hits on the New York Police Department’s (NYPD) facial recognition system, a detective swapped the image for one of actor Woody Harrelson, claiming the suspect resembled the movie star, in order to generate hits.”

So what’s the solution? There are lots of discussions happening around AI and ethics, with this example emphasising human wellbeing and transparency as key to responsible AI development and use. Given the complexity of AI and its potential for misuse, some are wondering whether ethical AI is even possible. One example in this article is a company whose stated goal is to ‘accelerate the progress of humanity through AI’. All fine so far, until staff became concerned about what their work might actually be used for, and the CEO responded by stating that ethics officers were not necessary and that the company’s technology would be used for autonomous weapons.

Other responses involve banning the technology outright. San Francisco is the first US city to ban the use of facial recognition by police and city agencies, citing concerns about privacy, misuse of data and the fact that the AI is prone to bias. Critics argue that the city has lost a potentially powerful crime-fighting tool, but they appear to be a small minority; most seem to favour the city’s pragmatic approach.

Finland is educating its population about AI, building a workforce that can harness its potential benefits and creating a ‘grassroots’ AI movement. There appear to be some real positives to this approach: not only is the country re-skilling people for new kinds of work, but through their learning they presumably also become aware of AI’s pitfalls as well as its opportunities.

We think that’s where we need to start: by making people aware of the emerging power of AI, its potential for misuse, and how individual rights and safety might be compromised if no action is taken to regulate how the technology is developed and deployed. Conversations and provocations in classrooms and homes would be a good place to begin – perhaps based on plausible scenarios, perhaps framed as a debate.

The research conducted and insights gained during the writing of this article have inspired the Indigo Schools Framework, the details of which can be found in the Primer on our Resources Page. Send us an email at info@indigoschools.net or complete the form below if you’d like to learn more about how the Indigo Schools Framework can be successfully applied within your school. Also be sure to follow us on Facebook and LinkedIn for our latest updates.

