We already expect humans to exhibit flashes of brilliance. It might not happen all the time, but the act itself is welcomed and not altogether disturbing when it occurs.
What about when Artificial Intelligence (AI) seems to display an act of novelty? Any such instance is bound to get our attention; questions arise right away.
How did the AI come up with the apparently out-of-the-blue insight or novel result? Was it a mistake, or did it fit within the parameters of what the AI was expected to produce? There is also the immediate consideration of whether the AI is somehow slipping toward the precipice of becoming sentient.
Please be aware that no AI system in existence is anywhere close to reaching sentience, despite the claims and falsehoods tossed around in the media. As such, if today’s AI does something that appears to be a novel act, you should not leap to the conclusion that this is a sign of human insight within the technology or the emergence of human ingenuity within AI.
That’s an anthropomorphic bridge too far.
The reality is that any such AI “insightful” novelties are based on various concrete computational algorithms and tangible data-based pattern matching.
In today’s column, we’ll take a close look at an example of an AI-powered novel act, illustrated via the game of Go, and relate these facets to the advent of AI-based true self-driving cars as a means of understanding the AI-versus-human ramifications.
Realize that when an AI system spots or suggests a novelty, it does so methodically, while, in contrast, no one can say for sure how humans devise novel thoughts or intuitions.
Perhaps we too are bound by some internal mechanistic-like facets, or maybe there is something else going on. Someday, hopefully, we will crack open the secret inner workings of the mind and finally know how we think. I suppose that might undercut the mystery and magical aura that oftentimes goes along with those of us who have moments of outside-the-box visions, though I’d trade that enigma to know how the cups-and-balls trickery truly functions (going behind the curtain, as it were).
Speaking of novelty, a famous game match involving the playing of Go can provide useful illumination on this overall topic.
Go is a popular board game in the same complexity category as chess. Arguments are made about which is tougher, chess or Go, but I’m not going to get mired in that morass. For the sake of civil discussion, the key point is that Go is highly complex and requires intense mental concentration, especially at the tournament level.
Generally, Go involves trying to capture territory on a standard Go board, a 19-by-19 grid of intersecting lines. For those of you who have never tried playing Go, the closest comparable game might be the connect-the-dots game you played in childhood, which also involves grabbing territory, though Go is orders of magnitude more involved.
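To give a rough sense of the scale involved, here is a minimal, purely illustrative sketch (written in Python, and emphatically not anything drawn from AlphaGo) of what a bare 19-by-19 board representation might look like; the names and simplifications are my own, and it omits all the real rules such as captures and scoring.

```python
# A minimal sketch (not a Go engine) of a 19x19 board, just to convey the scale.

BOARD_SIZE = 19  # standard tournament board: 19 x 19 intersections

EMPTY, BLACK, WHITE = ".", "B", "W"

# The board is simply a grid of intersections, each empty or holding a stone.
board = [[EMPTY for _ in range(BOARD_SIZE)] for _ in range(BOARD_SIZE)]

def place_stone(row, col, color):
    """Place a stone on an empty intersection (no capture or scoring logic here)."""
    if board[row][col] != EMPTY:
        raise ValueError("Intersection already occupied")
    board[row][col] = color

# Example: a few opening moves near the corner star points.
place_stone(3, 3, BLACK)
place_stone(15, 15, WHITE)
place_stone(3, 15, BLACK)

# Even this bare grid has 361 intersections, which hints at why the game tree
# of Go dwarfs that of chess.
print(sum(row.count(EMPTY) for row in board), "empty intersections remain")
```

Again, this is only a toy illustration of the board’s size; the actual complexity of Go comes from the rules and strategy layered on top of that grid.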
There is no need for you to know anything in particular about Go to get the gist of what will be discussed next regarding the act of human novelty and the act of AI novelty.
A famous Go competition took place about four years ago that pitted one of the world’s top professional Go players, Lee Sedol, against an AI program crafted to play Go, dubbed AlphaGo. There is a riveting documentary about the contest, along with plenty of write-ups and online videos that cover the match in detail, including post-game analysis.
Put yourself back in time to 2016 and relive what happened.
Most AI developers did not anticipate that the AI of that time would be proficient enough to beat a top Go player. Sure, AI had already been able to best some top chess players, offering a glimmer of expectation that Go would eventually be similarly conquered, but no Go programs had yet been able to compete at the pinnacle levels of human Go players. Most expected that it would probably be around the year 2020 or so before the capabilities of AI would be sufficient to compete in world-class Go tournaments.
DeepMind Created AlphaGo Using Deep Learning, Machine Learning
A small tech company named DeepMind Technologies devised the AlphaGo playing system (the firm was later acquired by Google). Built using Machine Learning and Deep Learning techniques, the AlphaGo program was being revamped and adjusted right up to the actual tournament, the kind of last-ditch developer contortions that many of us have gone through when trying to squeeze a final bit of edge into something that is about to be demonstrated.
This was a monumental competition that had garnered global interest.
Human players of Go were doubtful that the AlphaGo program would win. Many AI techies were doubtful that AlphaGo would win. Even the AlphaGo developers were unsure of how well the program would do, harboring stay-awake-at-night fears that the program would hit a bug or go into a kind of delusional mode, making outright mistakes and playing foolishly.
A million dollars in prize money was put into the pot for the competition. There would be five Go games played, one per day, along with associated rules about taking breaks and the like. Some predicted that Sedol would handily win all five games without breaking a sweat. AI pundits were clinging to the hope that AlphaGo would win at least one of the five games and otherwise present itself as a respectably capable Go player throughout the contest.