Is there a flaw in our hopes for Artificial Intelligence?

We’re only scratching the surface of what is possible with automation and robotics, but the hype bandwagon is already far ahead of us, making huge promises about the capabilities of “machine learning” and artificial intelligence.

As exciting as this might be, I wonder whether we’re missing an important flaw in our thinking.

To explain why artificial intelligence might not be quite the panacea that technology commentators would have us believe, perhaps it’s useful to look at the development of human intelligence.

Humans demonstrate behaviour which suggests that we are more intelligent than any other species. Pinning down exactly what constitutes “intelligence” is not easy but, for the sake of this argument, let’s use the IQ tests which have been in the public domain for about a century, based on pattern recognition and prediction.

Average IQ scores have been steadily increasing across most populations, with few exceptions (hello, North Korea!), for years. How much more intelligent is the 2017 AD model human compared with the 2017 BCE version, and how much of the difference can be explained by nutrition and environment? Impossible to say but, based on our modern measurements of IQ, there would be a difference.

Why did humans develop intelligence, and why does it continue to increase? What’s the evolutionary advantage? Put simply, the ability to observe and solve problems quite often trumps brute strength, speed, agility or other physical attributes. The knowledge of how to craft and throw a spear beats the deer’s ability to run faster than the spear-chucker.

Intelligence has increased in increments under the critical eye of evolution. Where an improvement in cognitive ability has resulted in an advantage to the genes of the owner, the improvement has been passed on to the next generation.

What does the development of human intelligence tell us about the prospects for artificial intelligence?

Intelligence has increased as a result of rewarding success and punishing failure across billions of individuals and millions of generations. How is that model recreated and fast-tracked for computer-based intelligence?

Sure, we can code certain obvious “guide rails” within which the programme can operate and learn, but how and where to define those parameters is still a human decision. There will be limits to how the artificial intelligence can develop and grow.

Natural intelligence, on the other hand, has developed within a massive laboratory over millions of generations. Unless we can set off similar numbers of individual software programmes over an equivalent number of procreating generations, the outcome will be severely limited by human imagination.

It is plausible that we might initiate a vast number of self-learning programmes in a vast array of computers, but the guiding principles will still be set, and therefore limited, by humans. Meanwhile, human intelligence has had only one guiding principle: to give the host organism enough of an advantage to reproduce.
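The selection loop described above — reward success, punish failure, repeat over many generations — has a software counterpart in the genetic algorithm. Here is a minimal sketch (the target string, mutation rate and population size are purely illustrative assumptions, not taken from any real AI system) of evolving a random string toward a target by nothing more than mutation and fitness-based selection:

```python
import random

TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # "Reward": the number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.1) -> str:
    # Each character may randomly change, mimicking genetic mutation.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

def evolve(pop_size: int = 100, generations: int = 1000, seed: int = 0) -> str:
    random.seed(seed)
    # Start from entirely random "organisms".
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Survival of the fittest: only the top tenth breeds the next generation.
        parents = population[: pop_size // 10]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
```

Note, though, that the fitness function, the mutation rate and the population size are all chosen by the programmer — exactly the human-defined “guide rails” described above.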

Bill’s Opinion

Artificial intelligence, by design, is never going to be equivalent to human intelligence unless it is given the same goal and operating parameters, and there seems to be no real point in doing that other than intellectual curiosity.

3 Replies to “Is there a flaw in our hopes for Artificial Intelligence?”

  1. To keep the show rolling, I’ll add a thought on this. It is a naming problem. The examples I have seen would be better called “anticipatory computing”. It learns from the user, and repeats what you did last time if it worked, or learns that it didn’t work and doesn’t repeat it (ideally). I can see the application, but I can’t have a beer at the pub with it and swap stories of stupidity or misadventure accumulated over years of being a human.

    It’s a revolution generated by vast stores of data, enabled by cheap storage of digital information that can be connected over time and space.

    It feels like artificial intelligence, and can be marketed as a replacement for actual intelligence, but only by those people who have something to sell you. It can be mistaken for actual human intelligence, but mainly by those people who don’t interact with normal humans acting normally on a regular basis.

    1. Yes, “machine learning” seems a reasonable term for it. Intelligence implies something I doubt we will ever truly witness.

  2. Yes, the idea that AI will take over from humans and, secondly, deprive us of jobs is bunkum.

    The other myth, that the digital age will somehow change the economics of land demand, is completely false.

    Something I wrote somewhere else about this:

    It’s nearly a hundred years since Keynes made his two bold predictions for 2030 in his essay “Economic Possibilities for our Grandchildren”. The first was that we would all be eight times better off in economic terms; the second was the three-hour work day, or the fifteen-hour week.

    His economic growth prediction has been fairly accurate and, if anything, will probably prove a slight underestimate by 2030. Despite this prosperity, though, his shorter work week remains elusive to this day and will continue to remain so. Why? Because the fruits of this economic growth will always flow into increased land prices, which will continue to outstrip wage increases. Why is it that economists back then, and right up until this very day, remain bamboozled by this economic fact? Don’t they teach this at yooni?

    Here is my prediction for the next one hundred years. As we continue to make progress in medicine, health and life expectancy, so will our ability to work longer increase, and, measured over our lifetimes, our actual working hours (including a standard wife) will increase further as the retirement age is extended.
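    A back-of-the-envelope check of the “eight times better off” claim (my own arithmetic sketch, not from Keynes’s essay): for an economy to grow eightfold over the roughly 100 years from 1930 to 2030, the implied constant annual growth rate is about 2.1%.

    ```python
    # Solve growth_factor = (1 + r) ** years for the annual rate r.
    growth_factor = 8
    years = 100
    annual_rate = growth_factor ** (1 / years) - 1
    print(f"{annual_rate:.2%}")  # roughly 2.10% per year
    ```

    A very achievable long-run average, which is why the growth half of the prediction has held up.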
