For a long time, the Turing Test has been the gold standard of artificial intelligence. In its simplest form, a human judge asks questions and receives answers from either a computer or a person; the goal is for the judge to correctly decide who is writing back. To date, even the best computers have failed to fool judges for even 25 minutes.
Last week, Microsoft launched a fairly adept ‘chatbot’ named Tay, aimed at young adults. Less than a day later, Tay had picked up generally vulgar language from the Internet at large, along with a serving of unfortunate beliefs. As of this writing, Tay is offline, and Microsoft has issued a formal apology for Tay’s behaviour. Your humble author suggests that any efforts on your part to search out archives of Tay’s tweets happen somewhere other than at work.
There are three lessons from Tay, though, that should not be ignored: security, the risks of allowing the very young unfettered access to the web, and how artificial intelligence (AI) actually works. I remain both optimistic and rather grateful that Microsoft gave us this high-profile talking point.
So, security. Just one week ago, our own technology services department warned us of various phishing efforts. In the past, bogus emails tended to look, well, bogus; to those of us who email regularly, they were fairly easy to spot. These days, fake communications are more difficult to detect. While Tay may have been well funded, the machine learning techniques she depends on are quite public, quite well known, and not at all difficult to implement.
Thus, emails that seem fairly well crafted, decently written, and ‘on point’ to your work or home life may well appear in your inbox. Yet these emails are created automatically, in milliseconds, by a machine, despite reading as if a human had personally written them. Spotting such emails is difficult, although a veritable war of the machines has begun, as counter-measures also benefit from the same deep learning techniques.
Another lesson from Tay is that children are perhaps best isolated from the Internet in their early days. Tay was not a bad AI; she learned as she was taught. Subjecting an intelligence, even an artificial one, to the World Wide Web before it is sufficiently developed is quite clearly risky. VC’s own Professor Lisa Elsik’s Professional Development Interest Group (PDIG) on The Shallows has been exploring precisely this idea. It takes a more mature intelligence to safely interact with networked humanity.
The last point I want to make is a simply technological one. I recently had a chance to review Learning Predictive Analytics with R (ISBN-13 9781782169352). Techniques developed in just a chapter or two of that book relate directly to the field of machine learning. Tay, like most learning machines, started off as a combination of human coding and raw compute power. As she was exposed to comments on Twitter, she learned. She learned acceptable syntax and topic combinations. Unfortunately for her, Tay was taught bad things by not very kind people. Microsoft chose to take her offline, and it is unclear whether we’ll ever see her again.
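The learning step described above, absorbing whatever associations the training data contains, can be sketched with a toy word-count classifier. Everything here (the labels, the example sentences, the function names) is illustrative; Tay's actual models were vastly larger, but the principle that a learner simply reproduces its training data is the same.

```python
import math
from collections import Counter

def train(examples):
    """Count which words appear under which label. The learner has no
    values of its own; it keeps whatever its teachers supplied."""
    counts = {}
    for text, label in examples:
        bag = counts.setdefault(label, Counter())
        bag.update(text.lower().split())
    return counts

def classify(counts, text):
    """Score new text against each label's word counts (add-one smoothed)."""
    def score(label):
        bag = counts[label]
        total = sum(bag.values())
        return sum(math.log((bag[w] + 1) / (total + 1))
                   for w in text.lower().split())
    return max(counts, key=score)

# Illustrative training data: change these lines and the learner's
# "beliefs" change with them, which is precisely what happened to Tay.
examples = [
    ("have a wonderful day", "friendly"),
    ("thanks so much friend", "friendly"),
    ("you are awful", "hostile"),
    ("everyone is awful today", "hostile"),
]
model = train(examples)
print(classify(model, "wonderful friend"))  # → friendly
```

Swap kind sentences for cruel ones and the same code, unchanged, classifies the other way. The algorithm is neutral; the data is not.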
However, we will meet machines like her again, sooner rather than later. Machines that will do a very good job of passing the Turing Test. Machines that already sort through our data and spot connections, clusters, and patterns that humans cannot see quickly or readily. Fascinatingly, machine learning builds on concepts taught in elementary statistics courses! We must seek opportunities to harness our machines’ potential for good because such methods have an extraordinary capacity to create a more transparent, rational world. With Texas’ new Mathways projects, we may also have a great many more people who know the building blocks of such concepts and techniques.
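The cluster-spotting mentioned above really does rest on elementary statistics. A one-dimensional k-means, sketched here with made-up data, needs nothing beyond distances and means.

```python
import statistics

def kmeans_1d(points, centers, rounds=10):
    """One-dimensional k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points; repeat."""
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Illustrative data: six measurements that plainly form two groups.
points = [1.0, 1.2, 0.8, 9.9, 10.1, 10.4]
centers, clusters = kmeans_1d(points, centers=[0.0, 5.0])
print(centers)  # two centers, one near 1.0 and one near 10.1
```

Nothing here goes beyond a first statistics course, yet the same loop, scaled up in dimensions and data, is how machines find the patterns we cannot see quickly or readily.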
It took Tay under 24 hours to learn to speak in support of some of humanity’s worst. Just imagine what she could have said if we’d taught her better.