Cheapskate's Guide


Alchemy, the Hundredth Anniversary of the Proton, and the Singularity Point

2-11-19



The singularity point (also known as the infinity point) is the point at which computers become more intelligent than human beings. Some say this will happen by the year 2045. Some say it will never happen. Elon Musk and Stephen Hawking think it will bring about the end of mankind. I don't know what will ultimately happen to mankind, or when, but I do know this isn't the first artificial intelligence "revolution" that we've been through.

The term "artificial intelligence" (A.I.) was coined in 1956 by John McCarthy, an American computer and cognitive scientist and the inventor of the Lisp programming language. In 1948, Norbert Weiner (sometimes called the father of radar) coined the term cybernetics, which is a field of study dedicated to understanding how mechanical, biological, social, and other systems react to feedback from their current states and actions to approach a desired state. In other words, cybernetics is the study of how something in the real word affects a machine, organism, society, etc., and using that understanding to get as close as possible to a desired result. Fields to which cybernetics apply are learning, cognition, control systems, communications, social control, and several others.

Back in the 1950's, computer scientists, engineers, and mathematicians were trying to apply cybernetic theory to the problem of artificial intelligence. Many were predicting that a machine as intelligent as a human could be built within a generation. It didn't happen, and research largely stopped. The researchers said it didn't happen because they didn't have fast enough computers.

When I was in college in the early 1980's, the engineering journals were again filled with talk about how artificial intelligence was only a few years away. Then the talk mostly went away for close to twenty years. The field of artificial intelligence was all but abandoned. They had failed to achieve it, again. The researchers said it was because they didn't have fast enough computers.

Now, people are once again predicting that artificial intelligence is just a few years away. All we need is fast enough computers, they say. It seems like the beginning of a pattern. Will each new generation get caught up in the artificial intelligence frenzy?

It seems that with each new generation, the general public gets drawn into the A.I. frenzy a few years after the engineers, computer scientists, and mathematicians. Perhaps the popular A.I. frenzy begins when Hollywood, inspired by new talk from the most recent crop of researchers, puts out an especially spectacular movie. Back in the early 1950's it was "The Day the Earth Stood Still". In the 1980's it was "The Terminator". This time, Hollywood has put out movies like "Ex Machina", "The Machine", and "Transcendence". Although I thought "Ex Machina" was a little perverted and sad, I can forgive that, because I love good science fiction. I can even forgive the raging political correctness built into the latest crop of movies that revolve around the general theme of female-looking machines beating up straight, white guys. To Hollywood's credit, it keeps generating movies that keep the idea of artificial intelligence alive, even at times when it's out of favor with the mathematicians and engineers. The "Star Wars" franchise began in the late 1970's, before the A.I. fad of the 1980's, and kept producing movie after movie all through the A.I. lull of the 1990's and early 2000's. But regardless of the many great movies, realistically, I believe we are much further from the kind of artificial intelligence depicted in the movies than people like Elon Musk are predicting.

Part of the problem with the term "artificial intelligence" is that people don't usually explain (or perhaps have not even thought about) what they mean by it. Most of us probably envision artificial intelligence as some kind of computer that can think the way we do. However, the type of artificial intelligence that we currently see in Amazon's Alexa, Google's Assistant, and Apple's Siri is not that type. There are two types of artificial intelligence: general and narrow. General artificial intelligence is able to learn about anything, as our brains do. Narrow artificial intelligence specializes in only one area of learning, only what is required to perform a specific task, and that is the only task it can perform. Siri, Alexa, and the Google Assistant are examples of narrow A.I. (also known as "applied A.I."). Narrow A.I. is made possible by "machine learning", which is in turn made possible by "neural networks". Neural network theory is a product of the field of control systems. A neural network is a mathematical technique that is as close as we have currently managed to get to something that we think functions sort of like the human brain. The words "that we think" are key.
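
To make the distinction concrete, here is a deliberately tiny sketch in Python of the sort of thing machine learning does at bottom: a single artificial neuron trained, by trial and error, to approximate one fixed task (the logical OR of two inputs) and nothing else. The task, the learning rate, and everything else here are illustrative assumptions on my part; this is not how Siri or Alexa are actually built, just the narrowest possible example of "narrow".

    # A toy illustration of "narrow" machine learning: a single artificial neuron
    # trained to do exactly one thing (the logical OR of two inputs) and nothing
    # else. The task and all parameter choices are illustrative assumptions.
    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Training data for the one narrow task this "network" will ever perform.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    random.seed(0)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0
    lr = 1.0  # learning rate

    for epoch in range(2000):
        for (x1, x2), target in data:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = out - target
            grad = err * out * (1 - out)  # gradient of squared error through the sigmoid
            w[0] -= lr * grad * x1
            w[1] -= lr * grad * x2
            b -= lr * grad

    # After training, the neuron reproduces OR reasonably well.
    for (x1, x2), target in data:
        print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target:", target)

After a couple of thousand passes over those four examples, the neuron reproduces OR well, but ask it anything else and it is useless. That is the sense in which narrow A.I. is narrow.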

The problem is that, even though we are learning more every day, we do not have a good understanding of how the human brain (or any brain) actually works. This is most likely knowledge that we must possess before we will be able to build a true, general artificial intelligence. And I doubt very much that anyone has an accurate prediction of when that understanding will arrive. The study of the brain and general artificial intelligence reminds me of alchemy. Alchemy was popular in the Middle Ages and has its origins all the way back in the 4th century B.C. It eventually evolved into what we know today as the science of chemistry. Although some have said that the higher forms of alchemy involved philosophical and spiritual dimensions, the basic stated goal of alchemy was to find a way to turn lead into gold. The problem was that alchemists in the 4th century B.C. didn't understand what made lead, lead, or what made gold, gold. They didn't have a clue. And they didn't have a clue that they didn't have a clue. It wasn't until after Ernest Rutherford discovered the proton in 1919 that anyone began to get an inkling of what it would take to actually succeed at turning lead into gold. At that point, they realized that mining gold would be far cheaper than turning lead into it.

Now, exactly a hundred years after Rutherford's discovery, we laugh at the idea and think how silly those alchemists were. But we don't perceive the same silliness in our own pursuit of general artificial intelligence, of turning a machine into a "man". And maybe it's possible, I don't know. But I think, before we waste more of our time trying to create general artificial intelligence, we should first understand just what intelligence actually is. This reminds me of the old joke, "If an alien visited the earth, would he find intelligent life here?"



Related Articles:

Is Technoaddiction Real?

The High Cost of Technological Illiteracy in Our Society

What I Learned about the Internet by Creating My Own Website

What's the Point of Cryptocurrencies?

