We’ve written before about the emerging power of Machine Learning (or Artificial Intelligence) and its implications for learning and the future, often focusing on its potential challenges and pitfalls. However, like any human endeavour, Machine Learning has the potential to do enormous good, and today we’re looking at some of the benefits it’s starting to bring. With these benefits come new opportunities and new questions, and we’re keeping a firm eye on learning for the future as we share examples of how Machine Learning can be a powerful force for good.
The first article looks at how AI is augmenting human capability, for the better. It starts with a commentary on how articles about technology are commonly presented in an alarmist, click-baiting manner that’s designed to create fear rather than inform. Instead, Frank Chen draws our attention to the lesser headlines, in which technology is doing amazing things to help make our world a better place. Specifically, he sees AI providing five key benefits:
1. We can automate routine tasks and become more creative. Mundane, soul-destroying tasks such as transcribing thousands of legal documents and data entry are fast disappearing, and people are better off for it.
2. Machine learning can give us physical superpowers. We can hear better to prevent fraud, see better to explore the world, monitor crop growth, work alongside machines more safely and design safer cars.
3. AI can help us make better decisions. It can help us write effective resumes, make better sales decisions, identify places to explore for resources, make better money lending decisions, and help doctors diagnose disease.
4. Automating dangerous jobs makes us safer. Think of drones performing ocean rescues, helping soldiers check that buildings are safe, delivering medical supplies in war zones, and doing other jobs that cause humans harm.
5. AI can help us understand people better. AI can help children with autism identify other people’s emotions by using emojis, translate languages in real time during conversations, and identify people having a mental health crisis through their use of text and emojis.
AI is being used to combat illegal fishing. A company named OceanMind has developed machine learning algorithms to track fishing boats at sea and identify suspicious or potentially illegal behaviour, then alert authorities. OceanMind analyses huge amounts of data from sources such as boat transponders, GPS units, satellite image data and cell and satellite phone signals then makes recommendations to relevant authorities, who can then direct resources as necessary. The technology also protects marine protected areas, and helps ensure that fishing quotas are not exceeded and sustainable fishing practices are followed.
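OceanMind’s actual models are proprietary, but the general idea of flagging vessel behaviour from positional data can be sketched. The minimal example below, with invented vessel IDs, coordinates, speed thresholds and protected-area boundaries, flags vessels whose transponder pings show fishing-like speeds inside a marine protected area:

```python
# Hypothetical sketch of behaviour flagging from vessel transponder data.
# All values (area boundary, speed band, vessel IDs) are invented for
# illustration; real systems fuse many more data sources.

from dataclasses import dataclass

@dataclass
class Ping:
    vessel_id: str
    lat: float
    lon: float
    speed_knots: float  # speed over ground reported by the transponder

# A toy rectangular marine protected area (not a real boundary).
MPA = {"lat_min": -1.0, "lat_max": 1.0, "lon_min": 10.0, "lon_max": 12.0}

def in_protected_area(p: Ping) -> bool:
    return (MPA["lat_min"] <= p.lat <= MPA["lat_max"]
            and MPA["lon_min"] <= p.lon <= MPA["lon_max"])

def looks_like_fishing(p: Ping) -> bool:
    # Trawling typically happens at low speed; 1-5 knots is a common heuristic.
    return 1.0 <= p.speed_knots <= 5.0

def flag_suspicious(pings: list) -> set:
    """Return vessel IDs showing fishing-like behaviour inside the MPA."""
    return {p.vessel_id for p in pings
            if in_protected_area(p) and looks_like_fishing(p)}

pings = [
    Ping("A", 0.5, 11.0, 3.2),   # slow, inside MPA: suspicious
    Ping("B", 0.2, 11.5, 14.0),  # fast transit through MPA: likely passage
    Ping("C", 5.0, 20.0, 2.5),   # slow, but well outside the MPA
]
print(flag_suspicious(pings))  # {'A'}
```

A production system would replace the hand-set speed band with a learned model, but the flag-and-alert structure is the same: analysis produces a shortlist, and human authorities decide how to respond.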
Japanese industry and manufacturing are facing a huge labour shortage as the population ages, and are turning to AI as a potential job saver. The country is examining work optimisation and automation extremely closely and investing heavily in technology that will sustain Japanese productivity and heavy industry despite a declining workforce. In this environment, work augmentation and replacement by AI is being treated as good news, and we can expect to see significant new technologies emerge from Japan over the coming decade.
Weather prediction is notoriously difficult, but our ability to predict continues to improve: a modern 72 hour hurricane track prediction is more accurate than a 24 hour prediction was 40 years ago, and a modern five day forecast is as accurate as a 24 hour forecast in 1980. But because it’s not possible to gather all available atmospheric data, there will always be gaps in our knowledge of atmospheric conditions. In addition, interactions between land, sea and air are not yet fully understood, although more data from each is being generated and made ready for analysis. For this, new and more powerful algorithms are needed, with mathematical, physical and computational sciences and their integration being key.
Medical science is an area that’s seeing huge investment in AI. Our first example uses a machine learning algorithm to analyse electroencephalogram (EEG) scans to determine the odds of a brain-injured patient ever regaining consciousness. This helps take the guesswork out of searching for consciousness, and may help doctors and families make better decisions about patient care. EEG data is complex, and AI is well suited to classifying such data consistently, without the potential bias of a human observer, thereby complementing doctors’ observations. There’s still work to do, of course, and using machines to determine consciousness raises ethical questions when experts can’t agree on what actually constitutes consciousness, but it’s a good start.
Machine learning algorithms are starting to be used to identify Alzheimer’s Disease in patients up to six years before a clinical diagnosis is made. The problem is that by the time a clinical diagnosis is made, the brain is already irreversibly damaged, so identifying potential problems while treatment is still possible could make these scans part of a routine checkup. The AI was trained using thousands of Positron Emission Tomography (PET) scan images, and learned to identify microscopic changes to the brain’s structure that are invisible to the human eye. The next step is to trial the algorithm using different data sets from different hospitals globally, so watch this space for further developments.
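The classification step at the heart of work like this can be illustrated with a deliberately tiny sketch. The real models are deep networks trained on thousands of scans; the example below stands in for them with a nearest-centroid classifier over two invented features (the feature names, labels and values are all illustrative assumptions, not drawn from the study):

```python
# Toy nearest-centroid classifier, standing in for the far more complex
# models used on real PET or EEG data. Features and labels are invented.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled):
    """labelled: {class_label: [feature_vector, ...]} -> class centroids."""
    return {label: centroid(vecs) for label, vecs in labelled.items()}

def predict(centroids, vector):
    """Assign the label whose centroid is nearest to the input vector."""
    return min(centroids, key=lambda lbl: distance_sq(centroids[lbl], vector))

# Invented two-feature "scans": (regional uptake, asymmetry index).
training = {
    "typical":   [[0.9, 0.1], [0.85, 0.15], [0.95, 0.05]],
    "prodromal": [[0.6, 0.4], [0.55, 0.45], [0.65, 0.35]],
}
model = train(training)
print(predict(model, [0.58, 0.42]))  # prodromal
```

The point of the sketch is the workflow, not the model: learn a summary of each class from labelled examples, then assign new cases to the closest class, with clinicians interpreting the result.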
Early work is being done to turn brain activity into speech. Researchers are developing a brain-computer interface to monitor when neurons turn off and on, and infer the intended speech sounds. This work is still in its infancy, limited by the fact that researchers need to gather data from electrodes attached to an exposed brain, which restricts opportunities to collect it. However, its potential is exciting, with decoding imagined speech being the ultimate achievement, although that will require a ‘huge jump’.
All of the work described above speaks to how doctors and AI are converging in high performance medicine. The potential benefits of AI in medicine are enormous, but whether these will be realised depends largely on whether the technology actually benefits the patient/doctor relationship or undermines it. This article cautions against the ‘AI hype’ currently being observed, which vastly exceeds actual AI science and our ability to deploy it. It also raises concerns about AI bias and its potential to magnify inequalities. The article notes that we are at the very beginning of the medicine and AI journey, which is currently high on promise and low on proof. Watch this space.
Here is an interview with a researcher who’s doing some interesting work with an AI that’s designed to make judges less biased. The evidence appears to be clear that judges’ reasoning can often be based on factors that are not relevant to a case, such as decisions made in previous cases and political bias. In this interview, Daniel Chen argues that AI can be a useful tool in alerting judges to instances of potential bias, and that while the AI itself may also exhibit bias, it might be slightly fairer than a human being. Interesting.
Sex trafficking can be difficult to trace, track and prosecute. The reasons are many, with the illegal sex and human trafficking trade largely hidden from public view, the coercion of victims through fear, and a resulting reluctance to testify against traffickers and abusers. Enter Traffic Jam, an AI designed to analyse huge amounts of data to find and identify human traffickers and their victims. This article provides a good insight into how a typical criminal investigation might use Traffic Jam, with analysis involving phone location data, facial recognition and website use to build a picture that snares abusers and rescues victims. Excellent.
AI can also be used to identify inequality and poverty, in this example access to healthy toilets and safe sanitation conditions. An AI was trained to recognise toilets using hundreds of Dollar Street images, and the results were then aligned with each family’s income group. The results indicate that the homes in which the AI could not identify a toilet belong exclusively to families identified as living in poverty. This aligns with UN data, which indicates that very low income groups (about 2.2 billion people) do not have access to a distinguishable toilet. The use of AI in this case helps triangulate data from other research and humanises the plight of those living in poverty.
A neural network has designed a new type of sports game, having been trained using data from over 400 different existing sports. Using this information, the AI then generated concepts for possible new sports, some of which are feasible and some of which are not (e.g. underwater parkour and exploding frisbee). The new game is called Speedgate, and the AI not only designed the game itself (a combination of rugby, soccer, ultimate frisbee and croquet involving two teams of six players), but also came up with the logo and motto: “Face the ball to be the ball to be above the ball.” See the video here; it looks like quite a cool game to play with students.
PredPol is a predictive policing company that uses an AI to determine where crime is most likely to occur. The company’s machine learning algorithm breaks a city down into 500-foot by 500-foot blocks, and uses crime data gathered over time to determine likely ‘hotspots’ that need to be patrolled more frequently. The algorithm also uses environmental and other data to form better predictions, and recommends how policing resources might be deployed. Criminologists appear to be unconvinced that this method works well, and there are the usual concerns about data privacy and structural bias, with Black neighbourhoods most often targeted. Despite this, the technology is proving popular, and appears to be here to stay in resource-constrained police departments.
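The grid-and-count idea behind this kind of hotspot mapping can be sketched in a few lines. PredPol’s real model is more sophisticated (and proprietary), so the example below, with invented coordinates, only illustrates the basic mechanism of binning incidents into 500-foot cells and ranking the cells:

```python
# Toy grid-based hotspot ranking: bin incident locations into fixed-size
# cells and rank cells by historical count. Coordinates are invented and
# measured in feet from an arbitrary origin.

from collections import Counter

CELL = 500.0  # cell edge in feet, matching the block size in the article

def cell_of(x_ft, y_ft):
    """Map a coordinate to the (column, row) index of its grid cell."""
    return (int(x_ft // CELL), int(y_ft // CELL))

def hotspots(incidents, top_n=3):
    """Rank cells by incident count; incidents is a list of (x, y) points."""
    counts = Counter(cell_of(x, y) for x, y in incidents)
    return counts.most_common(top_n)

incidents = [(120, 80), (450, 300), (130, 90), (900, 900), (140, 60)]
print(hotspots(incidents, top_n=2))
# Cell (0, 0) dominates: four incidents fall within that 500x500 block.
```

Note that this sketch also makes the bias concern concrete: the ranking is driven entirely by where past incidents were recorded, so any skew in historical policing data is reproduced in the predictions.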
Machine Learning clearly has the potential to do enormous good, but as its use becomes more a part of how we live and work the technology will need to successfully integrate and align with human values. This is where it gets tricky, because of the uncertainties associated with human psychology. This article suggests that if we want AI to do what humans want, we first need to study and better understand human behaviour.
Humans have biases, lack knowledge, are often not that smart, and what is considered ‘correct’ is often couched in local customs, traditions and mores. The scale of the challenge is revealed through the article, and a huge amount of research is necessary, much of which may not provide the solutions we are seeking. For now, the researchers recommend that we use people instead of AI to make complex decisions (despite the problems), and consult AI safety researchers for a way forward, given that AI is set to get ever more powerful.
The research conducted and insights gained during the writing of this article have inspired the Indigo Schools Framework, the details of which can be found in the Primer on our Resources Page. Send us an email at email@example.com or complete the form below if you’d like to learn more about how the Indigo Schools Framework can be successfully applied within your school. Also be sure to follow us on Facebook and LinkedIn for our latest updates.