Article as it appeared in Inc Magazine /January 27, 2017
Greetings from 2017.
This is the year of self-driving cars and drones that deliver packages to your door. Someday, perhaps in the not-so-distant future, a machine may be able to do anything you can think of, from filing patents and copyrights to generating genuinely new ideas. Ray Kurzweil, one of Google's great minds and a director of engineering, has been quoted as saying that transferring the human mind to a computer could be possible within the next four decades. In the same vein, Ian Pearson, the head of British Telecom's futurology unit, has suggested that rapid advances in computing power could make cyber immortality a reality by 2050.
To some, such a future would mark a zenith with nowhere to go but down. To others, it signals a need to invent at spectacular speed, so that a product is not out of date before it is finished. A revolution of this kind in human affairs is certainly not impossible, and it is precisely such changes that make us human: we invent our way into the future.
The next Industrial Revolution is the Artificial Intelligence Revolution, but what are the risks?
Before we can consider the topic of artificial intelligence, we need to agree on a definition of exactly what it is. Artificial intelligence is simply the ability of computer systems to accomplish the kinds of tasks that currently require human intelligence. And while the brain is a remarkable creation whose immense capabilities no computer can yet duplicate, the field of AI is advancing quickly.
A Case for…
Imagine businesses with formalized decision-making procedures that could outsource those decisions to a computer with human-like intelligence, making the best possible choices in a fraction of the time it takes a typical executive team. While it might be some time before the most complex decisions can be entrusted to AI, routine decisions would be a piece of cake.
A Case for Doubt…
As computers get smarter, more and more of what they do is driven by intent. If your dog chews and rips apart your favorite slippers, it is “okay,” because that is what dogs do. But what if an adult human were to destroy your favorite pair of slippers? Because we are talking about the actions of a human adult, we would hold them accountable. AI has the potential to surpass humans, but how would we hold a computer accountable for its actions?
Are the Risks Realistic?
Some people take the attitude that the human race should not be surpassed, no matter how good the new AI technology might be: the risks are simply too great to bear. But setting aside our fears for the moment, and grounding our argument in an understanding of AI, let's assess those risks.
An intelligent computer system does not have needs or emotions, and it is not formed by a shared language or a set of beliefs. If a machine is to take over, would it not first have to interact with us on a level that grants it an understanding of human life? What we understand simply by virtue of being human, that an insult might make us angry or that a breakup might leave us hurt, has to be programmed somehow into a computer. In a sense, a computer must be given a belief system.
In his Scientific American article on AI, Douglas Lenat reports this difficulty:
Ideally, an entire encyclopedia would somehow be stored in computer-accessible form, not as a text but as a collection of thousands of structured indexed units. Preliminary work toward this goal by a few investigators has revealed that it is even more elusive than it sounds: the understanding of encyclopedia articles itself requires a large body of common sense knowledge not yet shared by computer software.
Lenat clearly highlights what some of us might have intuitively known all along: some of what makes us human is simply out of reach. The knowledge of cultural practices that lets us recognize specific situations is gained through imitation, trial and error, and training. No one, neither a machine nor another human, can make explicit, in terms of facts, what you have learned from the pain of a setback.
From High Hopes to Sober Reality
AI has been the subject of recent cover stories in Business Week, Newsweek, and the New York Times. The heightened interest in machine intelligence is attributable not only to AI's new accomplishments, but also to the much-publicized competition with Japan to build a new generation of computers with some sort of expertise.
So, stepping aside for a minute from the arguments for and against: is AI something that we want to cultivate into the future? Is AI the kind of knowledge rooted in intuitive capacities fostered by experience and trial and error, or is it a fact-based approach that could slide us further back in our human development?
Some argue that AI could lead to an increase in unemployment as machine-learning algorithms use past information to predict future outcomes, as is already the case with mobile devices and office software. These days, businesses rely more and more on AI, not just as an adjunct to workflow but, in some cases, as a replacement for human labor. The friction, however, lies in the assumption that humans will not be needed to manage these machines, regulate their inputs and outputs, or make sure they are functioning properly.
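To make the idea of "using past information to predict future outcomes" concrete, here is a deliberately minimal sketch in Python: it fits a least-squares trend line to a few months of hypothetical sales figures and extrapolates one month ahead. The data and function names are invented for illustration; real business systems use far richer models than a straight line.

```python
# Toy illustration: predict the next value in a series by fitting
# a least-squares line y = a*x + b to past observations.
# (Hypothetical figures; not any specific company's system.)

def fit_trend(values):
    """Return slope a and intercept b of the least-squares line."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_next(values):
    """Extrapolate the fitted trend one step past the data."""
    a, b = fit_trend(values)
    return a * len(values) + b

past_sales = [100, 110, 120, 130]  # steadily rising toy figures
print(predict_next(past_sales))    # prints 140.0 for these figures
```

Even this trivial forecaster illustrates the point being debated: the prediction is only as good as the past data it was fitted on, and someone still has to decide which inputs it sees and whether its outputs make sense.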
Then there is, of course, the entire business of interfacing with customers, suppliers, and policymakers, and that is not something machines can do. In that sense, there will always be a need for humans.
To rate common sense, wisdom, and judgment as less worthy than automated, fact-based knowledge is dangerous, because it is precisely out of this experiential knowledge, the art of good guessing, that we as people have learned more about ourselves, our beliefs, and our connection with others. It has allowed us to flourish in the way that we have.
In the words of Andy Clark: “I think of the biological brain as something like the boot program of human intelligence; it gets the thing going, but its job is to pull in all this other structure, to load up all this other stuff, and that's when we really become fully human.”