Understanding Computation

Tom Stuart

[43] This is called currying, and we can use Proc#curry to do this transformation automatically.
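For example, in plain Ruby:

# A two-argument proc...
add = -> (x, y) { x + y }

# ...becomes a chain of single-argument procs:
curried = add.curry
curried[1][2]  # => 3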

[44] We could certainly model printing to the console by introducing a proc to represent standard output and devising a convention for how to send text to it, but that would complicate the exercise in an uninteresting way. FizzBuzz isn’t about printing, it’s about arithmetic and control flow.

[45] To be more specific, what we want to implement here are the nonnegative integers: zero, one, two, three, and so on.

[46] This is called “iterating the function.”

[47] Actually, “takes two arguments” is inaccurate, because we’re restricting ourselves to single-argument procs (see Arguments). To be technically correct, we should say “takes one argument and returns a new proc that takes another argument,” but that’s too long-winded, so we’ll stick with the shorthand and just remember what we really mean.

[48] You might protest that 3 - 5 = 0 isn’t called “subtraction” where you come from, and you’d be right: the technical name for this operation is “monus,” because the nonnegative integers under addition form a commutative monoid instead of a proper abelian group.

[49] Most people associate it with the differential and integral calculus, a system concerned with rates of change and accumulation of quantities in mathematical functions.

[50] The correct behavior is to automatically rename the function’s parameter so that it doesn’t clash with any free variables: rewrite -> x { x[y] } as the equivalent expression -> w { w[y] }, say, and then safely perform the replacement to get -> w { w[z[x]] }, leaving x free.

[51] We could fix this by reimplementing #reduce to use a more aggressive evaluation strategy (like applicative order or normal order evaluation) that performs reduction on the bodies of functions, but a function body taken in isolation usually contains free variables, so that would require a more robust implementation of #replace.

[52] We’re taking a risk by evaluating an expression containing the free variables inc and zero, but fortunately, none of the functions in the expression have arguments with those names, so in this specific case, there’s no danger of either variable being accidentally captured.

Chapter 7. Universality Is Everywhere

Most of the complexity we see in the world comes from complicated systems—mammals, microprocessors, the economy, the weather—so it’s natural to assume that a simple system can only do simple things. But in this book, we’ve seen that simple systems can have impressive capabilities: Chapter 6 showed that even a very minimal programming language has enough power to do useful work, and Chapter 5 sketched the design of a universal Turing machine that can read an encoded description of another machine and then simulate its execution.

The existence of the universal Turing machine is extremely significant. Even though any individual Turing machine has a hardcoded rulebook, the universal Turing machine demonstrates that it’s possible to design a device that can adapt to arbitrary tasks by reading instructions from a tape. These instructions are effectively a piece of software that controls the operation of the machine’s hardware, just like in the general-purpose programmable computers we use every day.[53]

[53] Finite and pushdown automata are slightly too simple to support this kind of full-blown programmability, but a Turing machine has just enough complexity to make it work.

In this chapter, we’ll take a tour of several simple systems and see that they’re all
universal—all capable of simulating a Turing machine, and therefore all capable of executing an
arbitrary program provided as input instead of hardcoded into the rules of the system—which
suggests that universality is a lot more common than we might expect.

Lambda Calculus

We’ve seen that the lambda calculus is a usable programming language, but we haven’t yet explored whether it’s as powerful as a Turing machine. In fact, the lambda calculus must be at least that powerful, because it turns out to be capable of simulating any Turing machine, including (of course) a universal Turing machine.

Let’s get a taste of how that works by quickly implementing part of
a Turing machine—the tape—in the lambda calculus.

Note

As in Chapter 6, we’re going to take the convenient shortcut of representing lambda calculus expressions as Ruby code, as long as that code does nothing except make procs, call procs, and use constants as abbreviations.

It’s a little risky to bring Ruby into play when it’s not the language we’re supposed to
be investigating, but in exchange, we get a familiar syntax for expressions and an easy way
to evaluate them, and our discoveries will still be valid as long as we stay within the
constraints.

A Turing machine tape has four attributes: the list of characters
appearing on the left of the tape, the character in the middle of the tape
(where the machine’s read/write head is), the list of characters on the
right, and the character to be treated as a blank. We can represent those
four values as a pair of pairs:

TAPE        = -> l { -> m { -> r { -> b { PAIR[PAIR[l][m]][PAIR[r][b]] } } } }
TAPE_LEFT   = -> t { LEFT[LEFT[t]] }
TAPE_MIDDLE = -> t { RIGHT[LEFT[t]] }
TAPE_RIGHT  = -> t { LEFT[RIGHT[t]] }
TAPE_BLANK  = -> t { RIGHT[RIGHT[t]] }

TAPE acts as a constructor that takes the four tape attributes as arguments and returns a proc representing a tape, and TAPE_LEFT, TAPE_MIDDLE, TAPE_RIGHT, and TAPE_BLANK are the accessors that can take one of those tape representations and pull the corresponding attribute out again.
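These definitions lean on the proc-based pairs from Chapter 6. As a reminder, the standard encoding used there looks like this:

PAIR  = -> x { -> y { -> f { f[x][y] } } } # hold x and y, hand them to f
LEFT  = -> p { p[-> x { -> y { x } }] }    # choose the first element
RIGHT = -> p { p[-> x { -> y { y } }] }    # choose the second element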

Once we have this data structure, we can implement TAPE_WRITE, which takes a tape and a character and returns a new tape with that character written in the middle position:

TAPE_WRITE = -> t { -> c { TAPE[TAPE_LEFT[t]][c][TAPE_RIGHT[t]][TAPE_BLANK[t]] } }
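As a quick sanity check (assuming the Chapter 6 numerals ZERO and TWO, the empty list EMPTY, and the to_integer conversion helper are in scope), writing a character really does replace the middle of the tape:

>> to_integer(TAPE_MIDDLE[TAPE_WRITE[TAPE[EMPTY][ZERO][EMPTY][ZERO]][TWO]])
=> 2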

We can also define operations to move the tape head. Here’s a TAPE_MOVE_HEAD_RIGHT proc for moving the head one square to the right, converted directly from the unrestricted Ruby implementation of Tape#move_head_right in Simulation:[54]

TAPE_MOVE_HEAD_RIGHT =
  -> t {
    TAPE[
      PUSH[TAPE_LEFT[t]][TAPE_MIDDLE[t]]
    ][
      IF[IS_EMPTY[TAPE_RIGHT[t]]][
        TAPE_BLANK[t]
      ][
        FIRST[TAPE_RIGHT[t]]
      ]
    ][
      IF[IS_EMPTY[TAPE_RIGHT[t]]][
        EMPTY
      ][
        REST[TAPE_RIGHT[t]]
      ]
    ][
      TAPE_BLANK[t]
    ]
  }
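For comparison, the unrestricted Ruby method it was converted from looks roughly like this sketch, assuming the Struct-based Tape class from Simulation:

class Tape < Struct.new(:left, :middle, :right, :blank)
  def move_head_right
    # Append the current middle character to the left-hand list, and pull
    # the new middle character from the front of the right-hand list,
    # falling back to the blank character when the right-hand list is empty.
    Tape.new(left + [middle], right.first || blank, right.drop(1), blank)
  end
end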

Taken together, these operations give us everything we need to
create a tape, read from it, write onto it, and move its head around. For
example, we can start with a blank tape and write a sequence of numbers
into consecutive squares:

>> current_tape = TAPE[EMPTY][ZERO][EMPTY][ZERO]
=> #<Proc (lambda)>
>> current_tape = TAPE_WRITE[current_tape][ONE]
=> #<Proc (lambda)>
>> current_tape = TAPE_MOVE_HEAD_RIGHT[current_tape]
=> #<Proc (lambda)>
>> current_tape = TAPE_WRITE[current_tape][TWO]
=> #<Proc (lambda)>
>> current_tape = TAPE_MOVE_HEAD_RIGHT[current_tape]
=> #<Proc (lambda)>
>> current_tape = TAPE_WRITE[current_tape][THREE]
=> #<Proc (lambda)>
>> current_tape = TAPE_MOVE_HEAD_RIGHT[current_tape]
=> #<Proc (lambda)>
>> to_array(TAPE_LEFT[current_tape]).map { |p| to_integer(p) }
=> [1, 2, 3]
>> to_integer(TAPE_MIDDLE[current_tape])
=> 0
>> to_array(TAPE_RIGHT[current_tape]).map { |p| to_integer(p) }
=> []
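The to_array and to_integer helpers used here are the conversions from Chapter 6, which turn proc-encoded lists and numbers back into ordinary Ruby values so we can inspect them. As a reminder, they look roughly like this (assuming the list operations IS_EMPTY, FIRST, and REST are in scope):

def to_integer(proc)
  # Call the proc-encoded number with a real increment operation and zero.
  proc[-> n { n + 1 }][0]
end

def to_boolean(proc)
  # A proc-encoded boolean chooses between its two arguments.
  proc[true][false]
end

def to_array(proc)
  array = []

  until to_boolean(IS_EMPTY[proc])
    array.push(FIRST[proc])
    proc = REST[proc]
  end

  array
end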

We’ll skip over the rest of the details, but it’s not difficult to continue like this, building proc-based representations of states, configurations, rules, and rulebooks. Once we have all those pieces, we can write proc-only implementations of DTM#step and DTM#run: STEP simulates a single step of a Turing machine by applying a rulebook to one configuration to produce another, and RUN simulates a machine’s full execution by using the Z combinator to repeatedly call STEP until no rule applies or the machine reaches a halting state.

In other words, RUN is a lambda calculus program that can simulate any Turing machine.[55] It turns out that the reverse is also possible: a Turing machine can act as an interpreter for the lambda calculus by storing a representation of a lambda calculus expression on the tape and repeatedly updating it according to a set of reduction rules, just like the operational semantics from Semantics.

Note

Since every Turing machine can be simulated by a lambda calculus program, and every
lambda calculus program can be simulated by a Turing machine, the two systems are exactly
equivalent in power. That’s a surprising result, because Turing machines and lambda calculus
programs work in completely different ways and there’s no prior reason to expect them to
have identical capabilities.

This means there’s at least one way to simulate the lambda calculus
in itself: first implement a Turing machine in the lambda calculus, then
use that simulated machine to run a lambda calculus interpreter. This
simulation-inside-a-simulation is a very inefficient way of doing things,
and we can get the same result more elegantly by designing data structures
to represent lambda calculus expressions and then implementing an
operational semantics directly, but it does show that the lambda calculus
must be universal without having to build anything new. A self-interpreter
is the lambda calculus version of the universal Turing machine: even
though the underlying interpreter program is fixed, we can make it do any
job by supplying a suitable lambda calculus expression as input.

As we’ve seen, the real benefit of a universal system is that it can be programmed to
perform different tasks, rather than always being hardcoded to perform a single one. In
particular, a universal system can be programmed to simulate any other universal system; a
universal Turing machine can evaluate lambda calculus expressions, and a lambda calculus
interpreter can simulate the execution of a
Turing machine.

Partial Recursive Functions

In much the same way that lambda calculus expressions consist entirely of creating and calling procs, partial recursive functions are programs that are constructed from four fundamental building blocks in different combinations. The first two building blocks are called zero and increment, and we can implement them here as Ruby methods:

def zero
  0
end

def increment(n)
  n + 1
end

These are straightforward methods that return the number zero and
add one to a number respectively:

>> zero
=> 0
>> increment(zero)
=> 1
>> increment(increment(zero))
=> 2

We can use #zero and #increment to define some new methods, albeit not very interesting ones:

>> def two
     increment(increment(zero))
   end
=> nil
>> two
=> 2
>> def three
     increment(two)
   end
=> nil
>> three
=> 3
>> def add_three(x)
     increment(increment(increment(x)))
   end
=> nil
>> add_three(two)
=> 5

The third building block, #recurse, is more complicated:

def recurse(f, g, *values)
  *other_values, last_value = values

  if last_value.zero?
    send(f, *other_values)
  else
    easier_last_value = last_value - 1
    easier_values     = other_values + [easier_last_value]

    easier_result = recurse(f, g, *easier_values)
    send(g, *easier_values, easier_result)
  end
end

#recurse takes two method names as arguments, f and g, and uses them to perform a recursive calculation on some input values. The immediate result of a call to #recurse is computed by delegating to either f or g depending on what the last input value is:

  • If the last input value is zero, #recurse calls the method named by f, passing the rest of the values as arguments.

  • If the last input value is not zero, #recurse decrements it, calls itself with the updated input values, and then calls the method named by g with those same values and the result of the recursive call.

This sounds more complicated than it is; #recurse is just a template for defining a certain kind of recursive function. For example, we can use it to define a method called #add that takes two arguments, x and y, and adds them together. To build this method with #recurse, we need to implement two other methods that answer these questions:

  • Given the value of x, what is the value of add(x, 0)?

  • Given the values of x, y - 1, and add(x, y - 1), what is the value of add(x, y)?

The first question is easy: adding zero to a number doesn’t change it, so if we know the value of x, the value of add(x, 0) will be identical. We can implement this as a method called #add_zero_to_x that simply returns its argument:

def add_zero_to_x(x)
  x
end

The second question is slightly harder, but still simple enough to answer: if we already have the value of add(x, y - 1), we just need to increment it to get the value of add(x, y).[56] This means we need a method that increments its third argument (#recurse calls it with x, y - 1, and add(x, y - 1)). Let’s call it #increment_easier_result:

def increment_easier_result(x, easier_y, easier_result)
  increment(easier_result)
end

Putting these together gives us a definition of #add built out of #recurse and #increment:

def add(x, y)
  recurse(:add_zero_to_x, :increment_easier_result, x, y)
end
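To see the template in action, here’s a sketch of how add(2, 3) unfolds (the repeated method-name arguments are abbreviated with …):

# add(2, 3)
# = recurse(:add_zero_to_x, :increment_easier_result, 2, 3)
# = increment_easier_result(2, 2, recurse(…, 2, 2))   # 3 is nonzero, so delegate to g
# = increment(recurse(…, 2, 2))
# = increment(increment(recurse(…, 2, 1)))
# = increment(increment(increment(recurse(…, 2, 0))))
# = increment(increment(increment(add_zero_to_x(2)))) # 0 reached, so delegate to f
# = increment(increment(increment(2)))
# = 5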
Note

The same spirit applies here as in Chapter 6: we may only use method definitions to give convenient names to expressions, not to sneak recursion into them.[57] If we want to write a recursive method, we have to use #recurse.

Let’s check that #add does what it’s supposed to:

>> add(two, three)
=> 5

Looks good. We can use the same strategy to implement other familiar examples, like #multiply…

def multiply_x_by_zero(x)
  zero
end

def add_x_to_easier_result(x, easier_y, easier_result)
  add(x, easier_result)
end

def multiply(x, y)
  recurse(:multiply_x_by_zero, :add_x_to_easier_result, x, y)
end

…and #decrement…

def easier_x(easier_x, easier_result)
  easier_x
end

def decrement(x)
  recurse(:zero, :easier_x, x)
end

…and #subtract:

def subtract_zero_from_x(x)
  x
end

def decrement_easier_result(x, easier_y, easier_result)
  decrement(easier_result)
end

def subtract(x, y)
  recurse(:subtract_zero_from_x, :decrement_easier_result, x, y)
end

These implementations all work as expected:

>> multiply(two, three)
=> 6
>> def six
     multiply(two, three)
   end
=> nil
>> decrement(six)
=> 5
>> subtract(six, two)
=> 4
>> subtract(two, six)
=> 0

The programs that we can assemble out of #zero, #increment, and #recurse are called the primitive recursive functions.

All primitive recursive functions are total: regardless of their inputs, they always halt and return an answer. This is because #recurse is the only legitimate way to define a recursive method, and #recurse always halts: each recursive call makes the last argument closer to zero, and when it inevitably reaches zero, the recursion will stop.

#zero, #increment, and #recurse are enough to construct many useful functions, including all the operations needed to perform a single step of a Turing machine: the contents of a Turing machine tape can be represented as a large number, and primitive recursive functions can be used to read the character at the tape head’s current position, write a new character onto the tape, and move the tape head left or right. However, we can’t simulate the full execution of an arbitrary Turing machine with primitive recursive functions, because some Turing machines loop forever, so primitive recursive functions aren’t universal.

To get a truly universal system we have to add a fourth fundamental operation, #minimize:

def minimize
  n = 0
  n = n + 1 until yield(n).zero?
  n
end

#minimize takes a block and calls it repeatedly with a single numeric argument. For the first call, it provides 0 as the argument, then 1, then 2, and keeps calling the block with larger and larger numbers until it returns zero.
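In other words, #minimize returns the smallest nonnegative argument for which the block’s answer is zero. For instance, reusing the methods defined above (a toy example: the block is zero exactly when 2n has reached 6):

>> minimize { |n| subtract(six, multiply(two, n)) }
=> 3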

By adding #minimize to #zero, #increment, and #recurse, we can build many more functions—all the partial recursive functions—including ones that don’t always halt. For example, #minimize gives us an easy way to implement #divide:

def divide(x, y)
  minimize { |n| subtract(increment(x), multiply(y, increment(n))) }
end
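The block is zero just when y × (n + 1) exceeds x, and the smallest such n is the integer quotient of x and y. A quick check, reusing six from above:

>> divide(six, two)
=> 3

Note that when y is zero the block never returns zero, so divide(six, zero) would loop forever; #divide is a partial function, defined only for some of its inputs.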
