We’re only scratching the surface of what’s possible with automation and robotics, but the hype bandwagon is already far ahead of us, making huge promises about the capabilities of “machine learning” and artificial intelligence.
As exciting as this might be, I wonder if we’re overlooking an important flaw in our thinking.
To explain why artificial intelligence might not be quite the panacea that technology commentators would have us believe, perhaps it’s useful to look at the development of human intelligence.
Humans demonstrate behaviour which suggests that we are the most intelligent of any species. Pinning down exactly what constitutes “intelligence” is not easy but, for the sake of this argument, let’s use the IQ tests which have been in the public domain for about a century and are based on pattern recognition and prediction.
Average IQ scores have been steadily increasing across most populations, with few exceptions (hello North Korea!), for years. How much more intelligent is the 2017 AD model human compared with the 2017 BCE version, and how much of the difference can be explained by nutrition and environment? Impossible to say but, based on our modern measurements of IQ, there would be a difference.
Why did humans develop intelligence, and why does it continue to increase? What’s the evolutionary advantage? Put simply, the ability to observe and solve problems quite often trumps brute strength, speed, agility or other physical attributes. The knowledge of how to craft and throw a spear beats the deer’s ability to run faster than the spear-thrower.
Intelligence has increased in increments under the critical eye of evolution. Where an improvement in cognitive ability has resulted in an advantage to the genes of the owner, the improvement has been passed on to the next generation.
What does the development of human intelligence tell us about the prospects for artificial intelligence?
Intelligence has increased as a result of rewarding success and punishing failure across billions of individuals and millions of generations. How can that model be recreated and fast-tracked for computer-based intelligence?
Sure, we can code certain obvious “guide rails” within which the programme can operate and learn, but how and where to define those parameters is still a human decision. There will be limits to how the artificial intelligence can develop and grow.
Natural intelligence, on the other hand, has developed within a massive laboratory over millions of generations. Unless we can set off similar numbers of individual software programmes over an equivalent number of procreating generations, the outcome will be severely limited by human imagination.
It is plausible that we might initiate a vast number of self-learning programmes across a huge array of computers, but the guiding principles will still be set, and therefore limited, by humans. Meanwhile, human intelligence had only one guiding principle: to give the host organism enough of an advantage to reproduce.
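To make that concrete, here is a minimal, purely illustrative sketch in Python of the kind of “reward success, punish failure” loop described above. The fitness function, the population size, the number of generations and the bounds on what a candidate solution can be are all hypothetical values chosen for the example; the point is that every one of them is a human decision, a guide rail the programme cannot step outside.

```python
import random

# Toy evolutionary loop. The fitness function and the mutation bounds
# (the "guide rails") are hypothetical choices made by the human who
# wrote the script, not something the programme discovers for itself.
POPULATION_SIZE = 100
GENERATIONS = 200
LOWER, UPPER = -10.0, 10.0   # human-chosen limits on what a genome can be

def fitness(genome: float) -> float:
    # Human-defined notion of "success": the closer to 3.0, the better.
    return -abs(genome - 3.0)

population = [random.uniform(LOWER, UPPER) for _ in range(POPULATION_SIZE)]

for _ in range(GENERATIONS):
    # Reward success: the fittest half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    # Punish failure: the rest are replaced by mutated offspring of survivors,
    # clamped to the human-chosen bounds.
    offspring = [
        min(UPPER, max(LOWER, parent + random.gauss(0, 0.5)))
        for parent in random.choices(survivors, k=POPULATION_SIZE - len(survivors))
    ]
    population = survivors + offspring

print(f"Best genome after {GENERATIONS} generations: {max(population, key=fitness):.3f}")
```

However long that loop runs, it can only ever get better at the single goal a human wrote into the fitness function.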
Artificial intelligence is, by design, never going to be equivalent to human intelligence unless it is given the same goal and operating parameters, and there seems to be no real point in doing that other than intellectual curiosity.