Clearly, carrying out these procedures manually is not a practical proposition, but for a computer it is a simple matter. Furthermore, the task remains efficient even if we need to check a map with many more than 3,000 counties or a sudoku much larger than 9 × 9. That is not the case if you want to calculate how to color the map with only three colors under the given constraints or how to solve a huge sudoku. In addition, it is not known whether the three-color problem is in the class P; we do know that it is NP-complete. In particular, as with the sudoku problem, if you manage to find a polynomial algorithm that determines whether a map can be colored as required with three colors, you will have answered the question of whether P equals NP, and if you are the first, you will have won a million dollars.
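As a small illustration (the map, its borders, and the colors below are invented for the example), checking a proposed coloring is a single pass over the list of borders, no matter how large the map; it is finding such a coloring with only three colors that is hard.

```python
# Verifying a proposed coloring: one pass over the list of borders.
def is_valid_coloring(borders, coloring):
    """True if no two neighboring counties received the same color."""
    return all(coloring[a] != coloring[b] for a, b in borders)

# A toy map with four counties; finding a valid three-coloring of a huge map
# is the hard direction, while this check stays fast at any size.
borders = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
coloring = {"A": "red", "B": "green", "C": "blue", "D": "red"}
print(is_valid_coloring(borders, coloring))  # True
```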
55. ENCODING
Encoding information, or cryptology, has always fascinated mankind. We have mentioned the German code, Enigma, used in World War II; it was broken with the use of electromechanical calculators. The mathematics of computation enables practical methods of encryption to be used which, as far as we know today, cannot be broken even by the fastest computers. We will present the basic idea, which is known as a one-way function.
Take for example the multiplication and factorization of two prime numbers. When given two prime numbers p and q, finding the product pq = n is a very simple task, even if the numbers are big, and certainly for a computer. If we are given only the product, that is, n, to find p and q is today an impractical task, even for the fastest computers. An exponential algorithm can be found to solve the problem, but it is not known whether a polynomial algorithm exists. Such a relation, which can be calculated easily in one direction but is difficult to calculate in the reverse direction, is called a one-way function. Thus, in the framework of the state of mathematical knowledge today, the relation of the product of two prime numbers to the factorizing of the product is a one-way function. If you find a polynomial algorithm for the factorizing of this product, the function will no longer be called one-way and, moreover, your discovery will have far-reaching implications. Some other one-way functions, not many it must be said, have been put forward in the professional literature. The product of prime numbers and factorizing the result is a function that has found its way into day-to-day use.
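A small sketch may make the asymmetry tangible. The primes below are tiny by cryptographic standards, and the trial-division routine is meant only to show how quickly the naive reverse direction becomes hopeless.

```python
# Multiplying two primes is immediate; recovering them from the product by
# trial division already strains a computer at modest sizes and is utterly
# infeasible at the sizes used in practice (hundreds of digits).
def factor(n):
    """Naive trial division: exponential in the number of digits of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

p, q = 15485863, 15485867   # two eight-digit primes (tiny by cryptographic standards)
n = p * q                   # the easy direction: instantaneous
print(n)
print(factor(n))            # the hard direction: noticeably slow even at this size
```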
The idea of using a one-way function for encoding was presented by three American scientists. Two of them, Martin Hellman and his colleague Whitfield Diffie, published the idea of encryption using the so-called public-key cryptology in 1976, following a paper written by the third, Ralph Merkle, on a similar topic. The three are generally acknowledged as being the pioneers of the method. The idea was translated into a practical system by three scientists then at MIT: Ron Rivest, who is still there; Adi Shamir of the Weizmann Institute; and Leonard Adleman, currently at the University of Southern California. The three were joint recipients of the Turing Award in 2002, and their initials, RSA, provide the name by which the system (currently the most widely used encryption system) is known today. The method is based on the fact that the product of two prime numbers is a one-way function. Moreover, not even an efficient probabilistic algorithm is known today for finding the prime factors of a given number. If tomorrow you find an efficient algorithm for doing that, you will have neutralized the primary instrument for encoding currently in use. The mathematics underlying the use of a one-way function is as follows.
I want to convince you that I know the result of tomorrow's drawing in the national lottery without actually telling you what the winning numbers are. I choose two large prime numbers and insert the results of the next day's draw into the smaller of the two; for example, I place the six winning numbers at the beginning of the smaller prime. I multiply the two prime numbers, show you the product, and tell you how the winning numbers appear in the smaller prime factor without showing you the number itself. If you manage to factorize the product, you will know the numbers that I encoded. Multiplying the two numbers is a simple matter, as we have said. In contrast, factorizing the product is extremely difficult. Even if you were to work all night and all the next day, using the fastest computers, you would be unable to factorize the product and discover the winning numbers of the draw. After the draw and the announcement of the winning numbers, I reveal to you the factorization and the winning numbers written there. I cannot cheat: the product is known to you, and it is easy to check that it is in fact the product of the two prime numbers.
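The following sketch shows how such a commitment could be set up; the winning numbers and the sizes of the primes are invented for the example, and the sympy library is assumed for the primality test.

```python
# A sketch of the lottery commitment described above: hide the winning numbers
# at the start of one prime, publish only the product, reveal the factors later.
import random
from sympy import isprime  # assumed available; any primality test would do

def prime_with_prefix(prefix, extra_digits=60):
    """Search for a prime whose decimal expansion begins with the given digits."""
    while True:
        tail = "".join(random.choice("0123456789") for _ in range(extra_digits))
        candidate = int(prefix + tail)
        if isprime(candidate):
            return candidate

winning = "172229313845"          # six invented two-digit winning numbers, concatenated
p = prime_with_prefix(winning)    # this prime hides the numbers
q = prime_with_prefix("3")        # a second random prime
n = p * q                         # published before the draw

# After the draw: reveal p and q. Anyone can verify the commitment instantly.
assert p * q == n and str(p).startswith(winning)
```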
The method can be used to send encrypted messages. I provide you beforehand with a large prime number. When I want to send you a message, I embed it in a new prime number and send you the product of that prime and the one you already have. You can decipher the message very easily. The product of the two numbers can be safely delivered over the telephone or via e-mail, and there is no concern that anyone who might intercept the e-mail message or eavesdrop on the phone call would be able to understand the message. Even if the method is widely publicized (this is the source of the term public key), no one but you will manage to decipher the code, that is, to calculate the prime factors of the number. This example illustrates the mathematical basis of the system. Its practical application requires several adjustments, which we have not described here, particularly if we wish to send encoded messages to many customers and to enable each one to open the message. The underlying idea of the system in this case is to give each customer one of the prime numbers and to send him the product of the encoded number and the prime number he knows. It is a simple matter for him to divide the product by the prime number he has been given, which enables him to decipher the message easily, while the message remains coded and inaccessible to anyone who does not have one of the prime numbers.
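Purely as an illustration of this simplified "shared prime" idea (it is not the actual RSA algorithm), sending and deciphering such a message might look as follows; sympy is again assumed for the primality test, and the padding scheme is an invention of the example.

```python
# Simplified "shared prime" scheme from the text (not real RSA): embed the
# message in a fresh prime, multiply by the prime the recipient already holds,
# and let the recipient recover it by division.
import random
from sympy import isprime  # assumed tooling

PAD = 6  # number of random digits appended when embedding a message in a prime

def prime_embedding(digits):
    """A prime whose decimal form is `digits` followed by PAD random digits."""
    while True:
        candidate = int(digits + "".join(random.choice("0123456789") for _ in range(PAD)))
        if isprime(candidate):
            return candidate

def encrypt(message, shared_prime):
    m = int.from_bytes(message.encode(), "big")    # the message as an integer
    return prime_embedding(str(m)) * shared_prime  # only this product is sent

def decrypt(product, shared_prime):
    p = product // shared_prime                    # divide out the prime you were given
    m = int(str(p)[:-PAD])                         # strip the padding digits
    return m.to_bytes((m.bit_length() + 7) // 8, "big").decode()

k = prime_embedding("31415926535897932384626433832795")  # the prime handed over beforehand
print(decrypt(encrypt("meet at noon", k), k))             # -> meet at noon
```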
56. WHAT NEXT?
The ideas presented in the last two sections are specific to the era of very fast computers. Yet even the fastest computers today do not fulfill all our desires for rapid computation. Moreover, as computers become faster, our desires and aspirations for calculation become more intense, and we will never reach a situation in which the computational power available satisfies us. In addition, as we have seen, there are problems that theory indicates cannot be solved efficiently. On the other hand, some daily computational tasks are solved satisfactorily by the human brain, tasks that the fastest computers using the best programs known are incapable of handling. In like fashion, processes can be seen in nature that are essentially computational and whose implementation is incomparably superior to that of the fastest computers. Understanding nature's processes is therefore likely to provide a clue as to how to arrive at more efficient computational processes. Thus nature not only serves as the main source of objectives; it also provides inspiration for the developments on which mathematicians and computer scientists are working in their efforts to advance computational ability. In this section we will describe several directions in which mathematics and technology are developing, with the intention of extending computational possibilities and the possible uses of computers.
The first objective is related to the human brain, which is capable of performing tasks that are beyond the reach of the fastest computers.
Take facial recognition, for example. Sometimes a quick glance at a large, tightly crowded group of people is sufficient for you to recognize a school friend whom you have not seen for some years. There is no computer program that comes anywhere remotely close to being able to do that. That gap is exploited, for instance, in entry to certain websites on the Internet, when you are asked to identify and copy a random alphanumeric series written in a distorted way. The human brain can identify the numbers or letters, while computer programs cannot. A second example is understanding a language and translating from one language to another. Even if you hear just part of a badly structured sentence with an unusual accent or pronunciation, you may well be able to understand what is being said. A computer will understand and react correctly only if the sentence falls within the framework that it has been programmed to recognize. There are programs for translating from one language to another, but their quality is acceptable only if the material is of a technical nature and is in a predetermined structure. Grasping the meaning of what is written, particularly reading between the lines and translating accordingly, is a straightforward task for a human translator but beyond the reach of any computer today. To go further, people sometimes mean the opposite of what they write or say. For instance, someone might want to say that every head injury, however serious, should be treated, and might write, “There is no head injury too serious for you to give up hope” (compare with the example in section 50). Someone else will correctly realize the writer's meaning and will comment and act accordingly, but not a computer. A computer would understand from the written text that one must give up on all head injuries, which is the correct grammatical meaning. No current translation program can correctly analyze the intention of what is written, which humans do relatively simply. The research area that develops computer programs that can imitate human abilities is called artificial intelligence.
We have spoken of Alan Turing as the man who laid the foundations of the mathematics of computation. He also recognized the gulf between the computational abilities of the human brain and those of the computer, and in an article in 1950 he presented a challenge to the science of artificial intelligence. It is known as Turing's test and is the following. When will we be able to say that a machine has human intelligence? A person can talk to another person or to a machine even when he cannot see his interlocutor. When the speaker cannot decide whether his interlocutor is a person or a machine, the machine has reached the threshold of human intelligence. The test highlights the great difference between human thinking and the computational procedure. Turing's test does not relate to the extent of knowledge or the speed of reaction; in these, every computer is vastly superior to humans. The test examines the extent to which the computer can imitate human thinking. We have stressed more than once that evolution has taught us to think intuitively, in ways that are incompatible with mathematical logic. Our judgment is based on our own life experience and also on experience gained through millions of years of development, experience coded into our genes and transmitted by heredity.
Can the computer emulate that experience? The answer is unclear. Artificial intelligence has many achievements to its credit and has made impressive progress, but only time will tell whether programs based on logic can reach the achievements of thinking produced by evolution.
One of the ways of advancing a machine with artificial intelligence is to copy the way the brain itself carries out its thinking tasks. We do not know or understand all the elements of thought in the human brain, but we do have an idea of the brain's anatomy. Specifically, the brain consists of a huge number of neurons, forming a network in which each neuron is connected to several of its neighbors and can transmit an electrical signal to each one of them, according to the condition of the neuron and the environmental conditions. The thought process after receiving any input takes place by means of an inconceivable number of transfers of these electrical signals, occurring simultaneously between different clusters of neurons. At the end of the thought process the brain is in a new state, where the output might be an instruction to one of the organs of the body to react in the appropriate way, or the preservation of this fact (the input) or another in the memory. One direction of the effort to construct a machine with the same capabilities as the brain focuses on attempts to copy this structure, in what are called neural networks.
That is an attempt to build a mathematical model that takes advantage of the structure created by nature to perform efficient computations. At this stage the target is still far away, partly because the implementation of the mathematical model uses those logical computers that lack human intuition. If indeed intuition and the exercise of human discretion are just the outcome of a huge number of neurons in the brain, computers and computer programs that imitate neuron networks have a chance of passing Turing's test. It may well be, however, that within human thought there is another structure as yet unknown to us.
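As a purely illustrative sketch of such a model (the sizes, the random weights, and the numpy dependency are all assumptions of the example), a toy "network of neurons" can be written in a few lines: each artificial neuron sums the weighted signals arriving from the previous layer and responds through a threshold-like function.

```python
# A toy feedforward neural network: layers of artificial "neurons", each
# summing weighted incoming signals and firing through a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def layer(signals, weights, biases):
    """One layer: weighted sum of the incoming signals, then a smooth threshold."""
    return np.tanh(signals @ weights + biases)

# 4 input signals -> 8 hidden neurons -> 1 output neuron (arbitrary sizes)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))              # some input pattern
output = layer(layer(x, w1, b1), w2, b2)
print(output)                            # the network's "reaction" to the input
```

In real systems the weights are not left random but are adjusted by training on many examples; it is that adjustment, not the architecture alone, that gives such networks their power.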
There is another impressively effective computational process in nature, demonstrated by evolution. The debate between the proponents and opponents of the theory of evolution, with the latter favoring the concept of intelligent design, is not purely science based. The opponents of the theory of evolution do, however, put forward a scientific argument: that such an ordered and successful structure cannot be the result of a random, unprogrammed process. It should be noted that although randomness does play a role in evolution, the process itself is defined deterministically. Among the random mutations, the one to survive will be the one best adapted to its given environment. The mutations occur in the process of gene reproduction. It is difficult to grasp how efficient the process is. It is hard to imagine the development via mutation and selection from one protein molecule to the vast array of animal life on Earth. That very success serves the opponents of the theory of evolution, who claim that the outcome does not stand to reason. Yet the evidence that this is indeed the way all living things developed keeps accumulating. Such evidence includes computer simulations of the process of evolution, which show that the process is feasible. Over and above computers’ ability to simulate evolution itself, they can imitate the principles of the process of evolution; that is, algorithms based on mutation and selection can be constructed to perform other computational tasks efficiently.
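As a sketch of that last idea, the following toy program (the target word, population size, and mutation rate are arbitrary choices for the example) lets random mutation and selection converge on a target string far faster than blind search would.

```python
# A sketch of "mutation and selection" as a computational recipe: evolve random
# strings toward a target by keeping the fittest and mutating their copies.
import random

TARGET = "MATHEMATICS"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Copy the candidate, changing each letter with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    if fitness(max(population, key=fitness)) == len(TARGET):
        break
    survivors = sorted(population, key=fitness, reverse=True)[:50]        # selection
    population = [mutate(random.choice(survivors)) for _ in range(200)]   # reproduction

# usually reaches "MATHEMATICS" well before the generation limit
print(generation, max(population, key=fitness))
```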