Three Scientific Revolutions: How They Transformed Our Conceptions of Reality

Authors: Richard H. Schlagel

The technological developments most instrumental in creating the computer revolution were apparently the following: (1) by relying on electrical circuits, computers can transmit signals at nearly the speed of light, permitting almost instantaneous communication with the rest of the world; (2) these electrical connections were further enhanced by the development of miniaturized transistors, or switches; and (3) the creation of the computer chip, a silicon wafer the size of one's fingernail that can be etched with millions of tiny transistors to form integrated circuits, made it possible to carry out almost instantly enormously intricate calculations that would otherwise have taken years, decades, or even centuries.

Turning now to Kaku's account of the various conceptions and predictions of the future developments that will be brought about by the computer revolution, the one I find the most startling and threatening is based on computerized artificial intelligence and the creation of robots that in the most extreme case could, it is predicted, replace or convert human beings into computerized robots, as indicated in the initial section of chapter 2 of his book, “The End of Humanity?” (p. 75).

As of now the most advanced robot is ASIMO, created by the Japanese, “that can walk, run, climb stairs, dance, and even serve coffee” and “is so lifelike that when it talked, I half expected the robot to take off its helmet and reveal the boy who was cleverly hidden inside” (p. 77). In addition, there “are also robot security guards patrolling buildings at night, robot guides, and robot factory workers. In 2006, it was estimated that there were 950,000 industrial robots and 3,540,000 service robots working in homes and buildings” (pp. 87–88). But while these are remarkable achievements, they are not indications that the robot has attained the slightest control over its behavior or initiates any of it. Everything ASIMO does has been preprogrammed, so its actions are entirely beyond its control. It of course has no conscious awareness of its surroundings or any feelings, since every action it performs is computerized. In some cases it is controlled by a person who directs its actions from the images on a computer thousands of miles away, similar to controlling a drone.

More remarkable was the event in 1997 when “IBM's Deep Blue accomplished a historic breakthrough by decisively beating world chess champion Garry Kasparov. Deep Blue was an engineering marvel, computing 11 billion operations per second” (p. 80). Nonetheless, Deep Blue cannot take credit for the achievement, which has to be attributed to the intelligence of the gifted programmers who devised all the correct moves to beat Kasparov.

This fact was not lost on the artificial intelligence (AI) researchers, who then began attempting to “simulate” conscious awareness by installing object recognition, the expression of inner emotional states and feelings through facial expressions, and the initiation of intelligent actions. Thus, instead of the top-down approach of treating robots like digital computers with all the rules of intelligence preprogrammed from the very beginning, they began imitating the brain's bottom-up approach. They tried to create an artificial neural network with the capacity to learn from experience, which would require conscious awareness of the environment, along with the emotions and affective feelings that are the source of value judgments, such as whether things are beneficial or harmful.
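The contrast between the two approaches can be made concrete with a minimal sketch of my own (not from Schlagel or Kaku): a single artificial neuron that, rather than having its responses preprogrammed, adjusts its connection weights whenever it makes a mistake, so that correct behavior emerges from experience with labeled examples.

```python
# A minimal illustration of "bottom-up" learning: a single artificial
# neuron (perceptron) that learns from labeled examples by nudging its
# weights after each error, instead of following preprogrammed rules.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias for a binary classifier from examples."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Fire (output 1) if the weighted sum exceeds the threshold.
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            # Learning step: shift each weight in proportion to the error.
            error = target - output
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Learn the logical AND function purely from examples, with no rule given.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # prints [0, 0, 0, 1]
```

Nothing here approaches conscious awareness, of course; the sketch only shows what "learning from experience" means mechanically, which is precisely the gap between adaptive weight adjustment and the awareness Schlagel argues genuine intelligence requires.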

In addition to attempting to replicate the learning process of human beings, they would have had to install such mental capacities as memory, conceptualizing, imagining, speaking, learning languages, and reasoning, all of which exceed merely following electronic rules. Given that the brain is an organ with unique neuronal and synaptic connections, composed of biomolecular components directed by numerous chemicals that produce a great deal of flexibility, the challenge of trying to duplicate it with just an electrical, digital network proved formidable.

Unlike a computer program, the brain has evolved into various areas representing evolutionary transitions responsible for less or more advanced anatomical structures and functions. These include the reptilian area near the base of the brain that is the source of basic instincts, automatic bodily processes, and behavioral functions; the limbic system, or midbrain, comprising the amygdala, hippocampus, and hypothalamus, which together are responsible for memory, emotions, and learning, including much of the hormonal activity of more highly socialized mammals and primates; and the newest and most important convoluted gray matter, called the cerebral cortex or cerebrum, divided into the frontal, parietal, and occipital lobes, which produces such human capacities as language acquisition, learning, reasoning, and creativity.

That Kaku is aware of these differences between computers and human capabilities is indicated in the following statement.

Given the glaring limitations of computers compared to the human brain, one can appreciate why computers have not been able to accomplish two key tasks that humans perform effortlessly: pattern recognition and common sense. These two problems have defied solution for the past half century. This is the main reason why we do not have robot maids, butlers, and secretaries. (pp. 82–83)

But, as he adds, programmers have been able to overcome these obstacles to some extent. One robot developed at MIT scored higher on object recognition tests than humans, even performing equal to or better than Kaku himself. Another robot, named STAIR, developed at Stanford University and still relying on the top-down approach, was able to pick out different kinds of fruit, such as an orange, from a mixed assortment, a task that seems simple enough to us yet is very difficult for robots because of its dependence on object recognition. Yet the best result was achieved at New York University, where a robot named LAGR was programmed to follow the human bottom-up approach, enabling it to identify objects in its path and gradually “learn” to avoid them with increased skill (cf. p. 86).

Furthermore, an MIT robot named KISMET was programmed to respond to people in a lifelike manner with facial expressions that mimicked a variety of emotions (which have now been programmed into dolls), yet “scientists have no illusion that the robot actually feels emotions” (p. 98). While programmers are striving to overcome these differences, they still have a long way to go, as Kaku indicates.

On one hand, I was impressed by the enthusiasm and energy of these researchers. In their hearts, they believe that they are laying the foundation for artificial intelligence, and that their work will one day impact society in ways we can only begin to understand. But from a distance, I could also appreciate how far they have to go. Even cockroaches can identify objects and learn to go around them. We are still at the stage where Mother Nature's lowliest creatures can outsmart our most intelligent robots. (p. 87)

Apparently there are two major approaches to resolving this problem. As indicated previously, Kaku identified two crucial capacities that robots lack that prevent their simulating human behavior: pattern recognition and common sense, both of which require the conscious awareness that humans possess and computers and robots entirely lack. One way of solving the problem is to try to endow a computer or robot with consciousness, using a method called “reverse engineering of the human brain.” Instead of attempting to “simulate” the function of the brain with an artificial intelligence, it involves trying to reproduce human intelligence by replicating the neuronal structure of the brain neuron by neuron and then installing the result in a robot.

This new method, “called optogenetics, combines optics and genetics to unravel specific neural pathways in animals” (p. 101). Determining by optical means the neural pathways in the human brain presumably would enable optogeneticists not only to detect which neural pathways determine specific bodily and mental functions, but also to duplicate them. At Oxford University, Gero Miesenböck and his colleagues

have been able to identify the neural mechanisms of animals in this way. They can study not only the pathways for the escape reflex in fruit flies but also the reflexes involved in smelling odors. They have studied the pathways governing food-seeking in roundworms. They have studied the neurons involved in decision making in mice. They found that while as few as two neurons were involved in triggering behaviors in fruit flies, almost 300 neurons were activated in mice for decision making. (p. 102)

But the problem is that identifying the neuron's function is not the same as reproducing it. The intended purpose was to model the entire human brain using two different approaches. The first approach was to “simulate” the vast number of neurons and their interconnections in the brain of a mouse with a supercomputer named Blue Gene, constructed by IBM. Computing “at the blinding speed of 500 trillion operations per second . . . Blue Gene was simulating the thinking process of a mouse brain, which has about 2 million neurons (compared to the 100 billion neurons that we have)” (p. 104). But the question is whether simulating is equivalent to reproducing.

This success was rivaled by another group in Livermore, California, who built a more powerful model of Blue Gene called “Dawn.” At first in “2006 it was able to simulate 40 percent of a mouse's brain. In 2007, it could simulate 100 percent of a rat's brain (which contains 55 million neurons, much more than the mouse brain)” (p. 105). Then, progressing very rapidly, in 2009 it “succeeded in simulating 1 percent of the human cerebral cortex . . . containing 1.6 billion neurons with 9 trillion connections” (p. 105).

Although this convinced optogeneticists that simulating the human brain was not only possible but inevitable, again the crucial question is whether “simulating” is equivalent to “reconstructing” or “reproducing”; it seems to me that the distinction has been overlooked and the two assumed to be the same. Significantly, in addition to meaning “imitating,” the term “simulate” has the adverse connotations of feigning, pretending, and faking.

The second approach, perhaps to avoid the above problem, is called “reverse engineering of the brain,” and it confronted problems of even greater magnitude, since it consisted of dissecting the entire system of neurons in the brain into minuscule slices no more than 50 nanometers wide (a nanometer is 1 billionth of a meter) in order to examine each of them under an electron microscope to “reconstruct” their function. Illustrating the enormity of the task, after producing a million slices

a scanning electron microscope takes a photograph of each, with a speed and resolution approaching a billion pixels per second. The amount of data spewing from the electron microscope is staggering, about 1,000 trillion bytes of data, enough to fill a storage room just for a single fruit fly brain. Processing this data, by tediously reconstructing the 3-D wiring of every single neuron of the fly brain, would take about five years. To get a more accurate picture of the fly brain, you then have to slice many more fly brains. (p. 107; italics added)

Although “the human brain has 1 billion neurons more than the fruit fly,” it was nevertheless assumed

that sometime by mid-century, we will have both the computer power to simulate the human brain and also crude maps of the brain's neural architecture. But it may take until late in this century before we fully understand human thought or can create a machine that can duplicate the function of the human brain. (p. 108; italics added)

Here the distinction between ‘simulate' and ‘duplicate' seems to be recognized but not considered. Moreover, since we still do not “understand” how the chemical-electrical neural processes of the human brain produce human awareness, perception, memory, emotions, thought, and so on, it is questionable whether “constructing” the brain with a wholly electronic computer would actually create a “duplicate” of the brain that could function as the original brain.

Among those creating robots there is a consensus that, although in an entirely different way, they can be programmed to “exceed us in intelligence.” There is considerable disagreement as to how long this will take, but not as to whether it can be done. According to Kaku, a “large part of the problem with these scenarios is that there is no universal consensus as to the meaning of consciousness. . . . Nowhere in science have so many devoted so much to create so little” (pp. 110–11). He then offers what he believes are the three capacities essential for being conscious (p. 111):

1. sensing and recognizing the environment

2. self-awareness

3. planning for the future by setting goals and plans, that is, simulating the future and plotting strategy.

I would agree that these are essential aspects, but I do not see what the difficulty has been in attaining a consensus as to the nature of consciousness. When one considers the difference between being awake and being in a dreamless sleep, or being conscious and then made unconscious by a sedative, a blow, or death, we have distinct examples of being conscious and unconscious: in the former cases one is completely aware, while in the latter one is entirely unaware and unconscious.

There is a difference between the awareness (as minimal as it is) of a fruit fly or a worm and the absence of any awareness in a rose or a rock, in that the former involves a sensory content while the latter does not. Of course there are degrees of consciousness, but if one is just sensing, smelling, or feeling, one is in a state of minimal awareness. Even dreaming is a kind of pseudoconsciousness in which one is aware that the dream is frightening or pleasant, but one has no control over it because it is entirely a product of the brain disconnected from one's normal self-awareness and behavioral responses.

That an automaton can be programmed to respond as if it had feelings and conscious awareness, or to simulate either, is not sufficient to consider it as actually having either. That Deep Blue could defeat Garry Kasparov in a chess match was not an indication that Deep Blue was conscious of the moves it took to defeat the world champion, as Kaku acknowledges. I am not sure whether “self-awareness is easier to achieve” as a condition of consciousness than his other two criteria, as he claims, but I certainly agree with his assessment of the current state of robotics.
