Philosophy of Robotics



________________________________________________________________________________________[Top]

What is the Philosophy of Robotics?

Before we consider this question as a whole, I think it a good idea to first separate it and consider its parts. That is, instead of asking the question above, we'll first ask 'What is Philosophy?' and 'What is Robotics?' Then at the end we will recombine them in the hope of forming a complete definition.

What is Robotics?

I'll take the easier of the two questions first. I will consider Robotics to be the study of robots and all the research and creation that goes into making robots.

Well then, one might ask, what is a robot? For the purposes of this discussion, I will consider a robot to be an artificial device which usually acts independently of human interaction, which can receive information digitally, and which can perform tasks mechanically. A robot also always operates using some level of autonomous decision making. As an example of a machine that is not a robot, consider a mechanical arm that can be used to pick up and drop objects using a series of purely mechanical controls. This simple arm does not use electronics or pneumatics or anything similar: it operates purely through the operator's manipulation of levers. It is a machine, but it is not a robot, because it does not involve any autonomous decision making.

Instead, take the arm just mentioned, add a pneumatic system, and control it with electronic buttons and a joystick, all run by software; now this arm, specifically the software running the arm, must 'decide' what it should do with each input. When it receives the signal to drop whatever is in its arm, what operations should it carry out? These are rudimentary decisions, but they are decisions nonetheless. This arm would meet our definition of a robot: it makes decisions autonomously and it performs mechanical tasks. Some might object that this definition is too broad, arguing that while this arm falls within the science of robotics, it should not be considered a robot because it still needs direct human control to operate. However, for the purposes of this discussion, I think it helpful to keep the definition of a robot as wide as possible. So a robot could be a mechanical arm, as we've discussed, or an autonomous vacuum cleaner, or Rosie the robot maid, or any device that operates mechanically and can make autonomous decisions.

It might be argued, then, that a device such as a computer, which has autonomous decision-making capability, would qualify as a robot. However, it does not meet one of the qualifications of a robot: computers can receive and process information digitally, but they cannot perform mechanical tasks. It might be said that a printer or a monitor is a mechanical output, but this should be considered digital output, because these devices are mainly a mode of outputting digital information.

So computers are not robots. However, robots need some form of computer to operate. It is the computers within robots that make the autonomous 'decisions' and send commands to the mechanical parts. The designing of the computers and software falls more within the realm of computer science, though the line between robotics and computer science is sometimes quite fuzzy. At any rate, keep in mind that a computer does not meet our definition of a robot.

Robotics, then, is the study and creation of robots. Roboticists are scientists who work in the field of robotics. How can we make a mechanical arm operate like a human arm? What is the best way for this machine to move from one place to another? How should we program this robot to conduct this certain task? These are all questions a roboticist might ask and study.

What is Philosophy?

This question will tax us a little more, and if we’re not careful, we could become bogged down in trying to answer it. Still, this site is the Philosophy of Robotics, so we should at least attempt to come to some understanding about what philosophy is.

In my view, there are several ways we could attempt to arrive at a proper definition of a word; not just 'philosophy,' but any word. One approach is to look at its common usage, by which I mean its common usage among philosophers. This could also be considered its dictionary definition. The Merriam-Webster dictionary defines philosophy, among other ways, as:

a search for a general understanding of values and reality by chiefly speculative rather than observational means

I see philosophy as a method for inquiring about 'values and reality', and I think it most important to focus on the fact that there is a method. Borrowing from the Webster definition, this method is chiefly speculative. As an example, consider the branch of philosophy known as ethics. Moral philosophers, those philosophers who think about ethics and morality, think about what we can consider moral and whether there is a method for knowing if some act can be considered moral. They do not, for the most part, go into society and conduct opinion polls or study human interaction or habits,1 though they might use these to motivate some issue. Instead, they think about morality in a chiefly speculative way. Of course, what they speculate about must have some relevance to human interaction and habits; but the point is that they do not learn about morality by studying human interaction and habits.

Next, I think it is always a good idea when attempting to define a word to look at its etymological roots. This is simple in this case. Philosophy comes from the Greek words 'philos,' which can be interpreted as 'love,' and 'sophia,' which can be interpreted as 'wisdom.' 'Philosophy' as a word then roughly means 'love of wisdom.'

A third tactic is to try to define a word negatively; that is, to define what it is not. When we speak of philosophy here, we are not speaking in the sense of having a philosophy about some particular course of action. For example, one can have a cooking philosophy of only using olive oil as a fat, or a philosophy of building robots a certain way, such as the BEAM philosophy of robot building. This is not the use of the word 'philosophy' that we are searching for.

My favorite definition of philosophy, then, though it has problems of its own, can be summed up nicely in two words: Rational Inquiry. When I say 'rational' I mean that all arguments within philosophy should be clearly and solidly based upon sound justifications, without appeals to emotion. Rational inquiry, then, means inquiring into things, rationally defining all the relevant questions, and only settling upon answers to those questions when there is sound, justifiable evidence. It might be argued that rational inquiry is the method of science as well, but I would simply reply "All the better!" Philosophers merely inquire into a broader range of subjects than scientists, including science itself.

I did mention, however, that this definition has some problems, and these arise from my definition of 'rational.' I won't take any more time looking at these issues, but only ask that, for the purposes of this discussion, you accept my definition of 'rational,' and therefore my definition of 'philosophy.'

Subbranches of Philosophy

As I noted above, there are subbranches of philosophy, and we will have to be aware of the distinctions. Philosophy is commonly separated into four major parts:2 Epistemology, Metaphysics, Logic, and Ethics. I will deal with each of these in turn, but do note that each of the four will play (at least a small) role, directly or indirectly, in our discussion of the Philosophy of Robotics.

Epistemology

Epistemology is defined as the study of knowledge. An epistemological question asks how we know about the things we claim to know about, or whether we can know them at all. Asking how we know that the earth revolves around the sun is asking an epistemological question. When we look to the sky, it seems as if the sun is revolving around us, 'rising' in the morning and 'setting' in the evening. But astronomers have shown that in fact the earth, and all the other planets, revolve around the sun. Following that line of reasoning, you might then be tempted to ask (as you should): how do we know that what the astronomer tells us is true? That is, how does the astronomer know the things that he or she tells us? If he or she uses a telescope to determine this, how do we know the telescope works in the way advertised? We might also ask whether what the telescope shows astronomers is what is actually out there. Is a telescope a reliable instrument for reporting about nature? These are all epistemological questions.

Metaphysics

I rather hesitate to use this term, as I could use the similar word 'ontology' and avoid certain negative connotations associated with the word 'metaphysics.' However, metaphysics is the word most commonly used, so I will stick with it. Metaphysics is the study of what reality is really like, without asking how we know what it is like. Here is where we can quickly get into trouble if we are not careful. I mentioned the negative connotations of the word because 'metaphysics' is often associated with the word 'new-age.' When researching metaphysics, it is usually a very good idea to pair metaphysical study with epistemological study, but it is not strictly necessary. Hence, many New-Age and metaphysical books attempt to tell us what reality is like without the epistemological concerns, often with bizarre results. I need not go into details; just be aware that when we ask a metaphysical question, we want to be well aware of the dangers of doing so without considering how we could know such information.

A nice little excerpt from a speech by Dennis Kucinich sums up the dangers of ‘irresponsible metaphysics.’ This is from a speech titled ‘Spirit and Stardust‘:

Spirit merges with matter to sanctify the universe. Matter transcends to return to spirit. The interchangeability of matter and spirit means the starlit magic of the outermost life of our universe becomes the soul-light magic of the innermost life of our self. The energy of the stars becomes us. We become the energy of the stars. Stardust and spirit unite and we begin: One with the universe. Whole and holy. From one source, endless creative energy, bursting forth, kinetic, elemental. We, the earth, air, water and fire-source of nearly fifteen billion years of cosmic spiraling.

This is the kind of language we want to avoid.

Logic

Logic is quite different from the previous two subbranches. It is the science of argument construction and validation, and in some respects is closer to a mathematical study than a philosophical one, though in other respects it is closer to philosophy. Logic will be important to our study of the Philosophy of Robotics for a couple of reasons. First, it will be indispensable for analyzing the arguments that others have put forward to justify the various positions we will be looking at, and for constructing valid arguments of our own. Second, logic itself is essential to the operation of robots: computer science was founded upon logic, and circuit boards operate using logic gates.

Ethics

Ethics is again different from the other three. It concerns itself with the question 'How are we to live our lives?' There are three main sub-sections of ethics, but we will really only be concerned with one of them. The sub-sections are normative ethics, meta-ethics, and applied ethics. Normative ethics deals with ethical theory. 'What is Utilitarianism?' or 'What is Kantian Ethics?' are questions of normative ethics. It looks to systems of ethics and attempts to flesh them out and analyze them. We will be dealing with normative ethics only briefly in a couple of sections later on. The second sub-section is meta-ethics. Meta-ethics could also be considered the philosophy of ethics, if that helps any. It deals with questions concerning the reality of ethics and the meaning of ethical statements. We will not be dealing with it at all in this discussion, so I leave it at that. What we will be dealing with in some detail is applied ethics. This sub-branch involves, not surprisingly, applying ethical theories to common moral problems, as well as examining moral problems and deciding the best course of action without appealing to ethical theories as justification. For example, within medical ethics (a branch of applied ethics) there is debate over whether it is ethical to remove a person from life support and then use his or her organs to help save others' lives. We could look to ethical theory to help us resolve this (or at least to provide arguments in favor of the practice). A Utilitarian might say that if in the end more people live if we use the organs, and the person was never going to recover anyway, then it is ethical to use those organs. However, applied ethics need not use normative theories. Someone else might respond to this argument, using no moral theory as justification, by arguing instead that taking anyone off of life support and not letting them die naturally is unethical, and perhaps that 'surely using a human's organs is obscene.'

The Philosophy of Robotics

So now we return to the original question: What is the Philosophy of Robotics? Literally, if we take our definition of robotics (the study of robots and all the research and creation that goes into making robots) and our definition of philosophy (rational inquiry) and combine them, we get that the philosophy of robotics is:

The rational inquiry into the study of robots and all the research and creation that goes into making robots.

This definition will get us quite far. But how then do I go about inquiring into robotics?

I will be taking two different branches of philosophy, along with some smaller contributions from a couple of others, and combining them to form the Philosophy of Robotics. The first branch is the Philosophy of Artificial Intelligence (AI from now on). It originates in, and is usually classed under, the philosophical branch known as the Philosophy of Mind. We would be in error to use the Philosophy of AI without remembering where it originates, as we will not be able to avoid larger questions in the Philosophy of Mind in exploring the Philosophy of AI. A question like 'What is consciousness?' comes from the Philosophy of Mind, and it will be important to the discussion of the Philosophy of AI.

The second branch of philosophy is called Roboethics, a large, sweeping, and relatively new branch of applied ethics that seeks to explore the ethical issues implicit in robotics and the future of robotics. We will not explore all the issues in Roboethics or all the issues in the Philosophy of AI; only the ones pertinent to the discussion. This is our agenda, and we will begin with the Philosophy of AI.

Next: Philosophy of AI

Notes and Sources:
1. There is a relatively new area of philosophy called ‘experimental philosophy’ that actually does this. We need not worry about this though.
2. See, for example, wikipedia: http://en.wikipedia.org/wiki/Philosophy

________________________________________________________________________________________[Top]

Philosophy of AI

This section is separated into several pages.

________________________________________________________________________________________[Top]

Preliminary Considerations

As with the opening page, before we get into the philosophy, we must define our terms and understand certain concepts. In this case, we have to define what we mean when we say ‘Artificial Intelligence.’ However, before we even get to that, we need to look at a couple more basic terms and concepts that will be essential to the discussion of Artificial Intelligence. The next couple of pages delve into these details, and finish with a working definition of Artificial Intelligence.

Next: The Digital Computer

________________________________________________________________________________________[Top]

The Digital Computer

The actual meaning of the word 'digital' is quite different from its common usage. We always hear about the 'Digital Revolution' or the 'Digital Age,' almost as if the word were a synonym for 'electronic.' Actually, its meaning has nothing to do with 'electronic' or 'technology.'

Digital, in the sense used when we speak of a digital computer, means that the computer uses discrete values (numbers, or digits) to operate, represent, and store everything that it does.1 This is opposed to an analog computer, which uses continuous values to represent information.2 What does that mean? Well, think of a mercury thermometer. It has one function: to display the temperature of the room, or of whatever it is inserted into. It is not limited to discrete temperature values, like 1 degree, 2 degrees, and so on; it can display any value in the continuous range between 1 and 2 degrees, as well as values below 1 degree or above 2 degrees. Another way to think of it is as a graph. If we graph the function y=2x, we get an output for y regardless of what value x takes, for example x=2 (y=4) or x=2.56 (y=5.12). For a contrasting, discrete example, consider the fingers, or digits, on our hands: our digits certainly occur in discrete values.

A digital computer, as mentioned, uses discrete values. Our numerical system is set up as a base-10 system, meaning that there are 10 characters at its base, 0-9. Digital computers normally use a base-2 system, though this is not necessary; it is just far easier to design for when there are only two states. Most people are familiar with binary code, that is, using 0's and 1's to represent information. It is perhaps better to think of the 0's as 'off' and the 1's as 'on.' We shall have much more to say about this later (in the section on the Symbol System Hypothesis), but for now it is sufficient to say that a digital computer uses discrete values in a two-symbol system, and that these two symbols are represented by 'on' and 'off.'
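To make the base-10/base-2 contrast concrete, here is a minimal sketch in Perl (the language used for code examples later on this site); the choice of the number 7 is arbitrary:

use strict;
use warnings;

# The same quantity, written in our familiar base-10 notation and in the
# base-2 notation a digital computer uses internally.
my $n = 7;
printf "decimal: %d\n", $n;   # prints: decimal: 7
printf "binary:  %b\n", $n;   # prints: binary:  111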

One of the many interesting things about digital computers was shown before any such computer even existed, in the 1930's: it has essentially been proven that a digital computer can be made of almost anything, say toilet paper, and that this toilet paper computer can run a program of any size, given enough memory and speed.

What I have just stated is a variation of the Church-Turing Thesis, proven by Alonzo Church and Alan Turing, using different but equivalent methods, in the 1930's. I introduce this here because I think it is very important to what I wish to talk about. First, however, we must understand what the thesis actually says. To do this, I will ignore Alonzo Church's proof (because it is proven mathematically and, at least for me, is not easily understood) and instead focus on Turing's. In order to understand the proof (and for many other reasons) we must introduce the concept of a Turing machine.

Next: The Turing Machine and Thesis

Notes and Sources:

1. http://pespmc1.vub.ac.be/ASC/DIGITA_COMPU.html

2. http://dcoward.best.vwh.net/analog/

________________________________________________________________________________________[Top]

The Turing Machine and Thesis

A Turing machine, originally known to Turing as a "Computing Machine" and later as a "Logical Computing Machine," or LCM, is a simple machine that can be realized in many ways; I give but a few: in the mind with a paper and pencil; with a roll of toilet paper and some sort of marking instrument; or with a tape and a reading/recording device. Originally, Turing specified that the medium be a "tape" (the analogue of paper), which is to be divided into sections (called "squares"), each capable of bearing a "symbol."1 We imagine it then to have a reading/recording device to read, erase, and write symbols on the medium, and finally a block of instructions for what the reading/recording device is to do when it encounters certain squares of the medium. For example, if the machine comes to a particular square on the tape, what function(s) should it perform? The instructions, or program, contain this information. The tape (or the reading/recording device) is moved to the left or the right, one square at a time, based upon these instructions. In this way, the machine is able to compute digitally; that is, it will compute based upon discrete symbols.

Turing uses the binary convention of '0' and '1,' as shall we. It is easy to see why: the state of each square of the tape can represent anything we want it to, but if we restrict it to two possible states, '0' and '1', then we have greatly simplified the machine.

So let us build a Turing Machine and have it calculate the sum of 5+2. To facilitate the description of the computer, the following Turing Machine will be capable only of adding. The reason for this is that to construct the adding machine we will need only a very small number of commands. However, even a full Turing Machine can be built with only a relatively small number of commands, and that ‘computer’ will be able to theoretically replicate any computer.

As mentioned, we need three things: a set of instructions (a machine table from here on), a reading/recording device (in this example, us), and a tape. The first thing we do, and really the most important step, is to specify the machine table that the reading/recording device is to follow. Once we have specified the table, everything else is straightforward; the machine (in this case us) simply follows the instructions of that table. This is the table I will use:2

Symbols:   Move:      State:   Read:   Write:   Move:   Goto State:
0          R(ight)    A        0       0        H       D
1          L(eft)              1       1        R
+          H(old)              +       1        L       B
                      B        0       0        R       C
                               1       1        L
                               +       +        H       D
                      C        0       0        L       D
                               1       0        L       D
                               +       +        L       D
                      D        0       0        H
                               1       1        H
                               +       +        H

The first two columns simply define the terms we will use. We will be using (0), (1), and (+), and our movement commands will be (R), (L), and (H), which are, respectively, Right, Left, and Hold. The rest of the table tells the reading/recording device what to do. It reads the symbol at its current position, then consults the table to determine what to write, how to move, and which state to change to, if a change of state is needed. If a change of state is not needed (a blank in the 'Goto State' column), the machine remains in its current state.

As I mentioned in note 2, I hoped to combine the two expositions of adding machines to make my table as compact yet easy to read as possible. At first glance, the above looks neither compact nor easy, but I believe it is better than either Weizenbaum's or Putnam's, and by better I mean easier to follow (though the spirit of my table is most similar to Putnam's). That is up to the reader to decide.

So let us work through this program. Now we simply need to give the machine a tape, or a program, and let it run. (Note that the following is not necessarily how a tape would look going through a reading/recording device. I use the following, annotating the state and position, for clarity.) We wanted to add 5+2, so our tape will look like this:

State: A
Program:  0 1 1 1 1 1 + 1 1 0
Position:   X

The reading/recording device is in State A, and the X marks its position on the program. This is how we begin. The reading/recording device then consults its instructions, and since it currently reads 1, it prints 1, moves right, and remains in state A. The situation now looks like this:

State: A
Program:  0 1 1 1 1 1 + 1 1 0
Position:     X

I will not annotate each change, but only provide the subsequent diagrams; follow the X to see the position, and watch for the squares of the tape that change:

State: A
Program:  0 1 1 1 1 1 + 1 1 0
Position:       X

State: A
Program:  0 1 1 1 1 1 + 1 1 0
Position:         X

State: A
Program:  0 1 1 1 1 1 + 1 1 0
Position:           X

State: A
Program:  0 1 1 1 1 1 + 1 1 0
Position:             X

State: B
Program:  0 1 1 1 1 1 1 1 1 0
Position:           X

State: B
Program:  0 1 1 1 1 1 1 1 1 0
Position:         X

State: B
Program:  0 1 1 1 1 1 1 1 1 0
Position:       X

State: B
Program:  0 1 1 1 1 1 1 1 1 0
Position:     X

State: B
Program:  0 1 1 1 1 1 1 1 1 0
Position:   X

State: B
Program:  0 1 1 1 1 1 1 1 1 0
Position: X

State: C
Program:  0 1 1 1 1 1 1 1 1 0
Position:   X

State: D
Program:  0 0 1 1 1 1 1 1 1 0
Position: X

State: D
Program:  0 0 1 1 1 1 1 1 1 0
Position: X

After all that, all this program really did was move to the (+), change it to a 1, and move back and change the first (1) to a (0). Still, this machine computed the equation 11111+11=1111111, or 5+2=7.
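If you would rather let a real computer grind through those steps, here is a minimal sketch in Perl (my own illustration, not from Weizenbaum or Putnam) that encodes the machine table above and prints each configuration. Blank 'Goto State' entries are encoded as the current state, and the machine halts when told to hold without changing state:

use strict;
use warnings;

# The machine table above: state => { symbol read => [write, move, goto] }.
my %table = (
    A => { '0' => ['0','H','D'], '1' => ['1','R','A'], '+' => ['1','L','B'] },
    B => { '0' => ['0','R','C'], '1' => ['1','L','B'], '+' => ['+','H','D'] },
    C => { '0' => ['0','L','D'], '1' => ['0','L','D'], '+' => ['+','L','D'] },
    D => { '0' => ['0','H','D'], '1' => ['1','H','D'], '+' => ['+','H','D'] },
);

my @tape  = qw(0 1 1 1 1 1 + 1 1 0);   # 5 + 2 in unary
my $pos   = 1;                          # start on the first 1
my $state = 'A';

while (1) {
    print "State: $state   Tape: @tape   Position: $pos\n";
    my ($write, $move, $next) = @{ $table{$state}{ $tape[$pos] } };
    $tape[$pos] = $write;
    last if $move eq 'H' && $next eq $state;   # holding in place: halted
    $pos++ if $move eq 'R';
    $pos-- if $move eq 'L';
    $state = $next;
}
print "Final tape: @tape\n";   # 0 0 1 1 1 1 1 1 1 0, or unary 7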

I mentioned previously that the LCM could be made out of toilet paper, and it's not hard to see how.3 Toilet paper could actually make a decent tape (if the program only has to be run once or a very small number of times) because the 'squares' of the 'tape' are already demarcated. In this experiment, the toilet paper will act as the tape, while the read/record device and machine table remain the same.

It could be objected here that I would actually be doing the computing, and not the toilet paper. However, assume that I do not know the answer to 5+2 but only follow the block of instructions; I would nevertheless obtain the answer I am looking for. Another possible objection is that the language I am using is misleading. Such an objection might run: "Sure, using the toilet paper and a set of instructions you were able to compute 5+2, but that is just the issue; you computed 5+2, not the toilet paper." I would respond that I was only acting as the read/record device. I easily could have substituted a machine (well, not easily, but I can easily think of such a machine), run by (for example) a pneumatic system, which followed the instructions and acted as the read/record device. Then the toilet paper computer can be said to have computed the sum. "But then it would be a pneumatic computer, not a toilet paper computer." OK, so consider this thought experiment (adapted from Weizenbaum, who uses it for a different purpose):

Take a young boy who is not old enough to know how to add, give him stones, a roll of toilet paper, and the machine table illustrated above. If we were in a separate room from the child, communicating through an audio system, and we told the child step by step what to do next, who or what is doing the computing? The child who can't add? Me, standing in a different room? The stones? The toilet paper? I don't know if there is a good answer to that, so labeling the above a toilet-paper computer is, in my opinion, as good a label as any. Perhaps the best answer would be that the system of me, the child, the stones, and the toilet paper computed the sum, but I'm not sure how I would label that.

So for lack of a better way to describe this computer, I describe it as a toilet paper computer.

The Turing Thesis

So now we return to the Turing thesis. We already have everything we need to state it: a Logical Computing Machine, or Turing Machine, given enough tape and a sufficient set of instructions, can replicate any other LCM. The computer on which you are reading this text is an LCM, so the basic LCM with a proper set of instructions can imitate any desktop computer. Therefore, our system above of me, the child, stones, toilet paper, and a full machine table can theoretically imitate the computer on which you are reading this text. That is the Turing Thesis, and this is what Turing proved in 1936.

Next: Symbol Systems and the Definition of AI

Notes and Sources:

1. The description of the LCM and everything else mentioned comes from Alan Turing’s paper “On Computable Numbers, With an Application to the Entscheidungsproblem” http://www.thocp.net/biographies/papers/turing_oncomputablenumbers_1936.pdf

2. I am adapting this from Joseph Weizenbaum’s exposition of his machine in his out of print book ‘Computer Power and Human Reason,’ 1976, pages 51-56, and also from Hilary Putnam’s book Mind, Language and Reality. Weizenbaum goes for simplicity, but his ‘rule table’ is rather large for what it needs to do, and Putnam goes for strict formality, but his ‘machine table’ is rather hard to follow. I hope to combine these two approaches for the best results, though the spirit of my table is most similar to Putnam.

3. The idea of a toilet paper computer was first, to the best of my knowledge, posited by Joseph Weizenbaum in his out of print book ‘Computer Power and Human Reason,’ 1976, pages 51-56. John Searle mentions it in his paper ‘Minds, Brains, and Programs,’ (1980 ‘The Behavioral and Brain Sciences, vol.3, pages 400-401.)

________________________________________________________________________________________[Top]

Symbol Systems and the Definition of AI

In this section we change gears slightly and look harder at programs, including the program outlined above.

All programs are written in a language that the computer can understand, and then run. The computer reads these commands as symbols, and has an underlying language that determines what to do with those symbols. The program on the previous page, the one that added 5+2, was run using a machine table written in a simplified manner so that a human could follow the steps of the language, and hence easily carry out the program. But even with that machine table, we were using symbols. When we saw the letter (R) in the 'move' column, the first thing we had to do was understand what the 'R' stood for, so we consulted our table and found that (R) was a symbol for 'right.' However, we could make it more formal, so that English is not required at all to understand the commands. For example, the first row in the B-state could be written 00RC. Then we could take the hypothetical pneumatic system I mentioned earlier and set it up in such a way that when it reads (00RC) from the machine table it executes the following:

1. Reads the current square, and reports it.

2. Writes (0).

3. Rotates the wheels holding the tape in a clockwise direction.

4. Moves to state-D.

Of course these commands wouldn't be written in English; the machine would simply be set up so that the command 'R' automatically rotates the wheels clockwise. None of this needs to be physical, of course: we did it using pencil and paper, and the computer sitting under your desk could run the program if it were written in a language that it could execute. If we want computers to be able to do more than add small numbers, though, we have to expand what the programming language can do.
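To see how little machinery such an encoding needs, here is a minimal sketch in Perl (my own hypothetical format, matching the 00RC example above) that unpacks one compact instruction into its four parts:

use strict;
use warnings;

# The four characters are: symbol read, symbol to write, movement, next state.
my %movement = (
    R => 'rotate the tape wheels clockwise (move right)',
    L => 'rotate the tape wheels counter-clockwise (move left)',
    H => 'hold the tape in place',
);

my ($read, $write, $move, $next) = split //, '00RC';
print "on reading $read: write $write, $movement{$move}, go to state $next\n";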

Modern computer languages have thousands of different commands, and indeed let programmers write new commands on the fly for inclusion in a specific program. So instead of using just (R) and (L) as commands, we can have (still basic) commands like 'print' and '<STDIN>'. The computer doesn't understand these in English (much as we do not immediately understand '<STDIN>'); indeed, it doesn't look at every letter to decide what the command says and then decide what it should do. Instead it looks at each of those words, not as words, but as symbols. The following lines are commands from the Perl programming language.

print "What size is your shoe?";

$size = <STDIN>;

chomp $size;

print "Your shoe size is $size, or so you say.\n";

This is a simple program,1 which really does nothing, but it is instructive for our purposes. In the first line, the command (symbol) 'print' tells the computer to print the string "What size is your shoe?" to the monitor. In the second line, the $-sign creates a scalar that we've named size, and the line fills the scalar with whatever the user of the program enters at the keyboard (hence the command <STDIN>, for Standard Input). The third line, with the command chomp, cleans up the scalar variable $size (removing the trailing newline from the input), and then finally in the fourth line we see the print command again, telling the computer to print the string "Your shoe size is $size, or so you say." to the monitor. Notice at the end yet another command, \n, which represents 'newline.' The computer reads none of this as we would, but will run this program flawlessly because it can handle commands such as print, chomp, and \n.

This is but one example in one language, but I think the point has been made. Computers run as symbol systems, that is, they follow their programming based upon symbols. Also, I gave an example in Perl, but any programming language could be used. They all operate on the same concept, that is, the use of symbols to carry out operations, whether as simple as 5+2 or as complex as running a word processor.

One final note of interest: differing machines (for example, our toilet paper computer above and the computer sitting under your desk) that can perform the same functions are said to be equivalent.2 To take a less dramatic example, take two computers, one with a word processor built with Perl3 and the other with Microsoft Word, written in a variety of the C programming language (a completely different language). The underlying programming for each of these programs is worlds apart, but if they both can carry out the same operations (and we're assuming they can), then we say they are equivalent. And, as should be obvious, all LCM's are equivalent.
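Here is a minimal sketch of the idea in Perl (a toy example of my own, not from Copeland): two procedures whose inner workings are quite different, but which produce the same output for the same input, and so count as equivalent in this sense:

use strict;
use warnings;

# Two very different procedures for the same function.
sub add_native { my ($x, $y) = @_; return $x + $y; }   # built-in arithmetic

sub add_unary {                                # counting unary marks, much as
    my ($x, $y) = @_;                          # our tape machine did
    my $tape = ('1' x $x) . ('1' x $y);        # join the two runs of 1's
    return length $tape;                       # the answer is the number of marks
}

print add_native(5, 2), "\n";   # prints 7
print add_unary(5, 2),  "\n";   # prints 7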

We will have more to say about physical symbol systems later on when we talk of the symbol system hypothesis.

The Definition of Artificial Intelligence

One final task needs to be accomplished before we can move on to the philosophy: we need to provide a definition (or at least a working definition) of artificial intelligence if we are to examine the philosophy of whatever this artificial intelligence is. A good place to start might be Wikipedia, which says "John McCarthy, who coined the term in 1956, defines it as 'the science and engineering of making intelligent machines.' "4 That definition, by the man who coined5 the term, will work nicely, I think, though I would broaden it slightly and say that

Artificial Intelligence is the science and engineering of making intelligence.

The distinction is that what I want to examine is the mere possibility of artificial intelligence, regardless of whether it can or cannot (currently) be run on a machine.

Conclusion

You might be thinking at this point, "What is the point?" You may understand what was explained above and on the previous page, but not what it is supposed to relate to, how it all fits together, or, most importantly, how it relates to our question. We get to that now, and we can finally start with the philosophy. If you've made it this far, great. Getting through the previous two sections only sets up the fun stuff to begin, starting with 'Can Machines Think?'

Next: 'Can Machines Think?'

Notes and Sources:

1. This program is taken directly from ‘Sams Teach Yourself Perl in 24 Hours‘, by Clinton Pierce, p.32.

2. The notion of computer programs being equivalent comes from Jack Copeland’s ‘Artificial Intelligence: a Philosophical Introduction‘, 1993, p. 79.

3. Clinton Pierce mentions on page 6 of ‘Sams Teach Yourself Perl in 24 Hours‘ that “You probably wouldn’t want to write a word processor in Perl-although you could-because good word processors are already available.”

4. http://en.wikipedia.org/wiki/Artificial_intelligence

5. A note on Wikipedia says that the assertion that McCarthy coined the term is somewhat controversial, but we need not be concerned with that. Our concern is the definition.

________________________________________________________________________________________[Top]

‘Can Machines Think?’

Introduction

The Philosophy of AI really gets its beginnings in 1950 with the publication of Alan Turing’s ‘Computing Machinery and Intelligence,’ so this is where we will begin. We will spend the bulk of our time with the question he posed, as I think it the most important in the Philosophy of AI. With this paper Turing asks the famous and fundamental question, ‘Can Machines Think?,’ and he attempts to define and provide a framework for answering that question.

The core of the Philosophy of AI is really in that question: 'Can Machines Think?' I take it to be the ultimate problem within the discipline. Indeed, most other questions and problems in the Philosophy of AI are in some way related to that simple question. As an example, another question is whether it is possible for a machine to have consciousness; but this question means nothing if the first is answered in the negative. As such, most of the time we spend studying the Philosophy of AI (in hopes of answering the question for ourselves) will deal with this question of a thinking machine.

Computing Machinery and Intelligence

So we begin with Turing’s paper, ‘Computing Machinery and Intelligence.’ Much of the information contained on this page and the pages that follow will closely follow this paper and will attempt to clarify, expand, and respond to various arguments and assertions within it. It might be helpful to follow along for yourself while reading my treatment of it. It is by no means necessary, but if you were to read one primary source from all the pages on this site, this is the one you want to read. A .pdf of the article is located here.

I will often refer to page numbers when referring to Turing’s article; the page numbers refer to the page numbers in the paper just linked.

Much as we began the discussion by attempting to define the terms 'philosophy' and 'robotics,' Turing begins by noting that we should attempt to define 'machine' and 'think.' However, he says that it would be misleading to attempt to define them through popular definition (that is, in the way most people use them), because then the question itself, 'Can Machines Think?', could be assumed to be answerable through popular definition, an assertion Turing thinks absurd (due to the inherent difficulty of the question). So instead, he says he will replace the question with a different one, which he thinks is very similar but avoids ambiguous words.

It might seem at first glance that there should be nothing wrong with using popular definition for these words, because what are words if not popular definition? However, if we want to answer a specific and difficult question (as ours is), we must use specific, unambiguous words and terms to formulate it; or, at least, we must attempt to give specific, unambiguous meanings to the words we use. Of the two words he considers ambiguous, 'machine' and 'think,' really only the second gives us major problems when we are trying to be specific and unambiguous. As regards the first, its ambiguity is not very consequential to the outcome of the question: you can think of any definition of machine that you wish, and then ask, "Is the thing you currently have an image of in your head capable of thought?"

As mentioned, the second word, 'think,' gives us far more problems when we try to define it unambiguously. As an example, let's replace the word 'machine' in the question with something biological. "Can Gorillas Think?" This is a perfectly fine question, and I would think that a majority of people would respond affirmatively. "Can Dogs Think?" I think again that a majority of people (and a very similarly sized majority, I would guess) would also respond affirmatively. "Can Worms Think? And, if they can, is it the same kind of thought that gorillas have?" Suddenly we seem to have a problem. Gorillas display a large amount of reasoning power; that is, they can solve problems. Dogs have somewhat less reasoning power; that is, they cannot solve some of the more abstract problems gorillas can, but still seem to have ways of solving some problems related to their survival. What reasoning powers do worms have? Isn't it probably all instinct? Some might say that this is the most basic reasoning power, but by reasoning power I mean the ability to solve abstract problems. Earthworms come to the surface when it rains, but then dry up and die when the sun comes out. It doesn't seem as if they are solving the problem of 'When it rains and I get flooded I should go to the surface, but then what should I do?' But then, if worms have only instinct, should we define that as thought, since surely the instinct is in their brains?

We run into these problems because we haven't defined what 'thought' is. However, as soon as we try to define it, we run into problems of the type above. Should we not determine and differentiate the different types of thought that biological creatures have? Surely the type of thought that humans have, the type most important to the discussion of thinking machines, is not the same type that dogs have, much less worms. Or, if that doesn't seem right, certainly dogs don't have the same capacity for thought that humans have. So if we cannot even differentiate between the types of thought of biological creatures, how can we define what type of thought (whatever that is) a non-biological machine could or does have?

So it seems that we cannot use a popular definition of 'think,' because many different people will have many different ideas of what that word means when applying it to many different things, biological or non-biological. So Turing intends to eliminate this word from his question while still getting at the meaning he wants. The replacement 'question,' as it were, is actually a question about the success rate of a certain player in a game. So Turing first describes that game, which he calls the 'imitation game.' It is played with three participants.

The Turing Test

To provide a proof of concept, he begins with two people, one of each sex, and a so-called 'interrogator.' The goal of the interrogator is to determine which of the other two participants is male, and thus which one is female. Say, for example, that the interrogator is attempting to determine which one is male. Then we place a restriction on the male participant that he must tell the truth. The goal of the female participant, then, is to attempt to force an incorrect answer, by saying whatever she needs to in order to convince the interrogator that she is male. This would obviously be done in separate rooms, as the interrogator should have no clues as to the sex of the participants other than the details of their replies. Today it could be done across separate parts of the globe using nothing but MSN Messenger or a similar IM program. Now assume this game is played out many times. What should be the success rate of the interrogator? We might assume that the average success rate of the interrogator should be about 50%; that is, the interrogator would be just as successful if he or she had guessed without asking any questions whatsoever. Thus, there should be no way for the interrogator to determine which participant is male and which is female by means of interrogation.

Now we alter the game a little. Let a machine (say, the computer on which you read this) take the part of the female trying to force a false answer. The goal of the interrogator is then to decide which participant is human (and thus which participant is machine). We can then ask the same question as above: what should be the success rate of the interrogator? This is the question with which Turing wants to replace the original question. If the game is played many times with many different interrogators, and the result always turns into a statistical guess, then according to Turing the machine playing the game should be thought of as 'thinking.' Given the direct question, 'Can this machine think?', we would be justified in responding affirmatively.

So Turing has not yet addressed what he considers a machine (though as we said it theoretically does not matter,) but he has given an exposition of a game that he argues should replace the definition of ‘think’ when we refer to machines. In essence, then, he considers the machine that is best able to act human, and best able to deceive humans in order to make them think that it is in fact human, capable of thought.

This, then, is the Turing Test. According to this test, any machine able to beat the interrogator, that is, to force the interrogator's accuracy down to the 50% of a statistical guess, should be said to be thinking.
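To make 'forcing a statistical guess' concrete, here is a minimal simulation sketch in Perl (my own illustration, not from Turing's paper) of an interrogator who has learned nothing from the questioning and so can only guess; over many games the success rate settles near 50%:

use strict;
use warnings;

# An interrogator who cannot tell the players apart can only guess which
# one is the machine; over many games this succeeds about half the time.
my $games   = 100_000;
my $correct = 0;
for (1 .. $games) {
    my $machine_is_first = int rand 2;   # which player the machine really is
    my $guess            = int rand 2;   # a blind guess
    $correct++ if $guess == $machine_is_first;
}
printf "success rate: %.3f\n", $correct / $games;   # roughly 0.500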

Next: The Definition of ‘Machine’

________________________________________________________________________________________[Top]

Definition of ‘Machine’

If you are following along in the actual paper, I will be skipping section 2 for now and will very briefly address sections 3, 4, and 5 on this page. The section I’ve called ‘Attacks against the Turing Test,’ which covers some of section 2 and all of section 6, will be addressed starting on the next page. For now, we want to briefly consider Turing’s definition of ‘machine,’ which is more formal than what I allowed when discussing the imitation game.

Actually, much of this will seem familiar if you read the Preliminary Considerations page. I do want to expand and elaborate, however, on what I wrote there to bring it in line with Turing’s paper.

Turing begins section 3 by discussing what type of machine we should like to use in the experiment. He gives three criteria that the 'machine' should satisfy:

1. The machine may be built using any possible technique of engineering.
2. The manner of operation of this machine need not necessarily be understood.
3. This machine should not include human machines, or people “born in the usual manner” (p. 435.)

In stating the first criterion (1), he notes that this is a natural desire and should not be discounted; also, if humans can create a thinking machine, we should not really care how it was created. The second (2) seems a rather strange criterion. If we build a thinking machine, shouldn't we know how it operates? However, an obvious example of a machine that meets the second criterion is the human being. We are in many respects machines, and we consider ourselves capable of thought, yet we really have no idea how the brain does what it does. We have an idea of how smaller, individual parts of the brain work, but overall we don't have a good idea at all. So if we were to exclude machines which operate in a way not completely understood, we would be excluding machines that might operate like us. In fact, we would be specifically excluding humans; we're not even sure the human brain really isn't just a machine.

Finally, the third criterion (3) excludes humans. While (2) allows for machines whose operation is not understood, like humans, we do not want to allow humans themselves into consideration. If humans were not excluded, we could say that we create 'thinking machines' all the time; it just takes a nine-month incubation period.

Turing finally says that clones should be excluded. If (and when) we are able to clone human beings, it could be said that we are creating 'thinking machines,' but this is not the type of thinking machine that we wish to talk about.

Next, Turing notes that when we want to know if a machine can think, or want to consider the possibility of a thinking machine, we naturally think of a machine as a contemporary computer. Using this suggestion, he narrows his criteria and states that only digital computers will be allowed to take the place of 'machine' in the original question.

The final few sentences of the section make clear that Turing is not really interested in whether contemporary computers can do well in the imitation game, but "whether there are imaginable computers which would do well." That is, the key question is whether it is feasible that any digital computer could pass the Turing Test.

Sections 4 and 5 mostly address the considerations put forth in our discussion of Turing Machines on the previous page. One thing that I do want to emphasize, though, is the notion of a universal machine; when I referred to the Turing thesis, I was referring to what Turing is now speaking of. An important and interesting consequence that Turing mentions is that we do not need different computers for different tasks, only different programs for different tasks. We do not need one computer to add and another to photoshop your head onto a supermodel's body; any digital computer can replicate any other digital computer, given the right program. Why is this fact important to this discussion? Because if any digital computer can imitate any other digital computer, then any robot can imitate any other robot, given the right hardware and software.
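A minimal sketch of this point in Perl (my own illustration): the same interpreter, standing in for the universal machine, runs two quite different 'programs' that are held as mere data; we swap programs, not computers:

use strict;
use warnings;

# One machine, two different 'programs': the same Perl interpreter runs both.
my @programs = (
    'print 5 + 2, "\n";',                      # an adding program
    'print scalar reverse("robot"), "\n";',    # a quite different task
);
eval $_ for @programs;   # prints 7, then 'tobor'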

________________________________________________________________________________________[Top]

Classical Attacks Against the Turing Test

At this point Turing addresses attacks against the imitation game, as shall we. It should also be noted, however, that some of the attacks are not necessarily against the imitation game by itself, but against the whole notion that machines are capable of thought. So we will consider these, along with attacks against the imitation game itself. We will not only consider the attacks that Turing addresses, however; after addressing the classical attacks in Turing’s paper, we will address some more modern ones as well. Here I define ‘classical attacks’ as only those attacks mentioned in Turing’s paper, and ‘modern attacks’ as those not listed, though some could have been made around the same time the paper was released.

This page is more or less an introduction to the Classical Attacks. I’ll begin with the actual attacks on the next page, when I consider the Theological Objection.

Virtues of the Imitation Game (or Vices?)

At first it might be hard to accept the replacement of a seemingly simple yes/no question with something as complex as the imitation game. But if that seemingly simple question ('Can Machines Think?') is very hard, or impossible, to answer coherently, we must become more complex in addressing the idea that the question attempts to convey, and this is what the imitation game attempts to do.

In section 2 Turing states some of what he thinks are the virtues of the imitation game. Some people may think these are vices, not virtues, but Turing argues for why they should be considered virtues. The first virtue is that the imitation game does not require the machine in question to imitate a human in any physical way. Someday, perhaps, a robot will be indistinguishable from a human, but Turing argues that this should not bias us in any way regarding whether or not a machine can think. The mere fact that one of the players on the other end of the line is a box (or whatever), and not a humanoid, should not distract us if in fact this box can act like a human.

A second virtue of the imitation game is the question and answer method, which in some ways relates to the first virtue. As Turing notes, this method "seems to be suitable for almost any one of the fields of human endeavor that we wish to include" (p. 435). We can ask one of the players if it is an aerospace engineer, and if it claims the affirmative, we could ask it related questions. We cannot, however, ask it to build an airplane for us, because this would bias the game toward the human and against the machine's physical capabilities, if that machine is simply a box. (It should be noted, however, that if the machine is a sufficiently advanced robot, this challenge could bias the game against the human's physical capabilities as well; that is, the robot could certainly build a better airplane than a person. Then again, the machine might not, and probably would not, build the best airplane it is capable of building, in an attempt to fool the interrogator. But this need not concern us for now.)

Turing then briefly addresses an objection we will consider later with the more modern attacks. The objection is that the imitation game is weighted too strongly against the machine; that is, that it requires too much. I'll leave this for later.

Finally, Turing finishes out the second section by addressing an objection to the game's design. It could be objected that the best strategy for a computer trying to beat the interrogator is not to act as human; that is, the computer might not try to imitate a human, and might try to force a wrong answer some other way. Turing dismisses this because he is not concerned with the theory of the game at this point, though he suspects that it would not have "any great effect" (p. 435). I would tend to agree. This seems to me an empirical question, one that really cannot be answered until actual machines attempt the test. Of course, machines have attempted to pass the test for decades, but I am unaware of any research into whether this would be a good strategy. Even if it were, I'm not sure the imitation game would thereby become an invalid replacement for the original question, because if a computer is able to plan out a good strategy, whatever that strategy may be, that might be an argument that it thinks as well.

The Classical Objections

Section 6 contains what are commonly known as the 'Classical Attacks,' or the 'Classical Objections.' Turing includes nine of these in this section, and responds to each in turn. One of these, as we will see, encompasses many additional possible objections. I will more or less follow the order in which Turing presents them, giving a concise quote from the actual paper to sum up each attack, then analyzing and expounding on that objection.

Turing begins the section by noting that at this point one cannot fully get rid of the original question, ‘Can Machines Think?’ I think this is for a couple of reasons. First, the asking of the original question is the more natural way to speak. And I think Turing would agree that asking the original question is fine, if by the question one is thinking of the imitation game. It is certainly easier to ask ‘Can a Machine Think?’ Today, of course, we may just (and usually do) ask the question ‘Can a Machine pass the Turing Test?’

The second reason, the reason Turing mentions, is that we cannot get rid of the original question because some people will reject that the imitation game is an apt substitute. We will consider objections of this kind.

Classes of Objections

There are two classes of objections that Turing presents: objections against the notion that machines are capable of thought in the abstract (Class 1); and objections against the imitation game itself, that is, objections that the original question is not correctly replaced by the imitation game (Class 2). Using these classes, I will refer to the objections on the following pages as Class 1 objections or Class 2 objections. The second class of objections is what Turing is alluding to when he states that we can't fully get rid of the original question, because some may reject the imitation game as a replacement.

Turing’s Predictions

Finally, before getting to the actual objections, Turing makes a few predictions. As usually happens when some scientist or mathematician or whoever makes predictions about the future, they end up sounding rather silly, and in some sense don’t even make sense. In a couple of the predictions Turing makes, this is the case.

First, Turing predicts that in 50 years' time (2001) computers will have a storage capacity of 10^9. I'm not sure we can even make sense of this prediction, because he doesn't explain what he means by storage capacity. However, whatever he meant, I think we have certainly passed that mark. If he meant bits (which usually are not used to measure memory capacity, but network speed),1 then we passed this long ago: 10^9 bits is roughly equivalent to 125 megabytes, and in 2001 that amount of memory was easily available. However, as noted, memory is usually annotated in bytes, and 10^9 bytes is 1 gigabyte, which again was easily available on the mass market in 2001. Of course, this much memory was possible and in development much earlier.
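For the record, here is the arithmetic behind those conversions as a quick Perl check (using 8 bits to the byte and the decimal mega/giga prefixes):

use strict;
use warnings;

printf "%.0f megabytes\n", 1e9 / 8 / 1e6;   # 10^9 bits  -> 125 megabytes
printf "%.0f gigabyte\n",  1e9 / 1e9;       # 10^9 bytes -> 1 gigabyte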

Turing uses the 10^9 figure to predict that this will be enough memory to enable a computer to play the imitation game. A related prediction is that by 2001 an interrogator playing the imitation game would have no better than a 70% chance of guessing which player is the machine after 5 minutes. This, simply, has not come close to occurring. It is interesting to note that it is not for lack of storage capacity (whatever he may have been referring to) that no one has succeeded in developing a machine that can pass the test; the problem is filling a machine's storage so that it does indeed act human. This has been the most formidable challenge to passing the Turing Test (and to creating A.I.) to date. We will see more of this later.

His final prediction is that people “will be able to speak of machines thinking without expecting to be contradicted.” Certainly this is true, at least for younger generations. If the computer on which we are writing a love note crashes, we might say, ‘The computer doesn’t want me to write this letter.’ The remark is certainly tongue-in-cheek, but literally interpreted, we are ascribing personal wants to a computer, and wanting something entails thinking about it. This kind of language has only appeared as computers have become ubiquitous.

Turing then gives his own view on the intelligibility of the question ‘Can Machines Think?’ He thinks it too meaningless to deserve discussion, and (quite obviously) thinks the imitation game an apt substitute. Now on to the attacks; I begin with the Theological Objection.

Next: The Theological Objection.

Notes and Sources:

1. Talk of bits and bytes and speed and memory gets really confusing, and I hope I got it right. If not, let me know. For a guide on this and more see this.

________________________________________________________________________________________[Top]

The Theological Objection

“Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.” (p. 443)

This is a Class 1 objection, that is, an objection against the notion that machines are capable of thought in the abstract, and it descends directly from René Descartes’ Cartesian Dualism.1 There is no way I could do justice to the concept of Dualism in such a short space, but I think it is important to understand the outlines of the position in order to fully understand the theological objection. It will also be helpful later, as it lays a framework for other objections.

Descartes’ view of the connection between the soul, mind, and body (henceforth ‘Cartesian Dualism’) is that the mind (that is, your character, internal thoughts, etc.) is the soul, which is quite separate from the body. The mind is of a different, immaterial substance than the material brain; hence the term Dualism. Cartesian Dualism seemingly has many inescapable problems, and though many would perhaps disagree, I consider it widely rejected today. Even so, Dualism (with or without the ‘Cartesian’) could be considered the default position of most modern people. When we refer to a thought we may have had, we say that it happened in our minds. Informally, this is probably fine. Formally, however, saying something happened in our minds raises the question: What is a mind? If I had a thought in my mind, where is my mind? The Cartesian would say that the mind is the soul, and because the soul is immaterial, it has no definable location in the body (or head).

The opposite position from Dualism is Materialism: the view that the body is all there is, and that when we speak of the mind all we are really talking about are brain processes. Of course, these two positions are not exhaustive; people hold widely varying views between these poles, but these are what I take to be the extremes.

Materialism is not, however, without philosophical problems of its own. (For example, if my mind is my brain, and I have a thought, then where did that thought ‘occur’ in my brain?) For brevity, however, we will ignore these types of problems.

In answer to the questions raised two paragraphs above, a materialist would say that the mind is merely the brain, and that strictly speaking the ‘mind’ is simply a construct of our brains. In other words, the mind is the brain. In any case, it is not the goal of this site to explore these problems, which constitute the discipline of the Philosophy of Mind (do note, however, that the Philosophy of AI is a sub-discipline of the Philosophy of Mind). These concerns about what a mind is, and whether one is needed for thought, will reappear later, and it should not be hard to see why: they get at the very heart of ‘Can Machines Think?’

It is tempting to get more involved in this debate because, as noted, it gets to the heart of the question ‘Can Machines Think?’ Yet I consider such a discussion outside the scope of what I’m attempting to convey, and I can only hope that the reader falls into one of the following camps: he or she already knows the basic debate and the various positions; he or she is willing to read more elsewhere before proceeding (in which case see note 1); or he or she simply takes no interest in the larger matter of ‘What is a Mind?’

At any rate, I’ll press on with the matter at hand. I restate the original objection:

“Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.”

It should be noted that there are certain forms of dualism that do not make use of the soul. However, this objection references the soul, and because the soul was the crucial part of Descartes’ dualism, I shall call the version of dualism inherent in the objection ‘Cartesian Dualism.’ In my view, it is a more extreme form of Dualism in that it introduces this ‘soul’ thing. Descartes held that animals do not have souls, and hence that they have no minds. He then infamously argued that animals cannot feel pain: any appearance of an animal in pain is just a ‘mechanistic action’ by that animal as if it were in pain, though it really is not.2 So, for example, Descartes could take a dog and conduct painful experiments on it, and when the dog whimpered or cried in pain, he would claim that the dog wasn’t really in pain, but only appeared to be. It is as if we designed a robot dog that, when hit, would act as if it were in pain; Descartes would argue that this is how a real dog ‘works.’

Now, in order for this objection to hold, we would have to accept Cartesian Dualism. However, in order to accept that, we must accept a whole slew of questionable assumptions. For example, we might question whether there is a God, or specifically, the God of the Bible; we might question the theology that God grants a soul to human beings; and we might question the theology that He does not grant a soul to animals. But let us grant all this: does it still follow that animals or machines cannot think? I would argue no, because on top of all this it would have to be argued that a soul (whatever that is, and whatever it is that has one) is necessary for thought. Outside of dogma, in which we should be completely uninterested, there is no way to show this. So even if it were argued and proven that there are such things as souls, and that animals lack them, it would not follow that animals cannot think.

In modern times it is indeed the default position that animals do think, in some sense of that term; as mentioned on the page ‘Can Machines Think,’ most people would agree with the assertions that gorillas and dogs can think (again, in some sense of the term). This is not to say it is the correct position, but there would have to be strong arguments to show that it is not correct, and to my knowledge none exist. An argument for the assertion that animals can think could simply be that certain animals act in ways similar to the ways humans act in similar situations, and that in those situations we would say the human is thinking. Hence, even if there is a God, and He grants souls, and animals lack them, this does not show that animals do not think or are not capable of thought. Therefore, it does not show that machines are incapable of thought.

Turing, however, with the goal of charity, attempts to respond theologically. That is, he attacks on theological grounds some of the things I granted above for the sake of argument. He claims that the argument might work better if “animals were classed with men” (which would defuse my point above by eliminating the conclusion that animals cannot think), and he also notes that different religions assert that different things have souls. As an example, he asks: “How do Christians regard the Moslem view that women have no souls?” (p. 443)

But his main theological point is that the objection seems to him to imply that God is not omnipotent; that is, in making the objection, the objector is also bound to say that there are things that God cannot do. Turing claims that it should certainly be within the power of God to give a soul to an animal, should it have some mutation that gave it a more competent brain. Likewise, if humans were to design a machine capable of the same amount of thought as humans, it should certainly be within the power of God to give this machine a soul. Therefore, if we were to design a machine that we deemed capable of thought, we should assume that God could give this machine a soul, just as He could give one to an animal.

I’m not certain that Turing’s argument works, that is, that in making the theological objection the objector is also forced to reject the omnipotence of God. In my view, it probably does not, because I don’t see how the objector is denying God’s omnipotence; the objector is only describing what God actually does, not what He is capable of doing. But I don’t wish to take any more time discussing this, as it has no real relevance to the broader discussion. What we should take from the theological objection is that the question of the existence or nature of souls should have no bearing on the question, ‘Can Machines Think?’

Next: Two More Objections

Notes and Sources:

1. This Wikipedia article is not a bad overview. Note that there are several forms of dualism; we will be concerned only with Cartesian Dualism, or alternatively, Substance Dualism.

2. See this article for more on Descartes and mechanistic views of the natural world.

________________________________________________________________________________________[Top]

Two More Objections

The ‘Heads in the Sand’ Objection

“The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.” (p.444)

This Class 1 objection may induce a snicker or two. However, due to its continued prevalence, that snicker should be held in check for the moment. While it is never phrased quite this bluntly, surely many people feel some form of this sentiment even today. I would bet that in a good chunk of the people watching the Terminator movies, or the Matrix movies, and so on, a thought somewhat like the one above crossed their minds. Even if these people (myself included) didn’t entertain it seriously, or dismissed it quickly, the objection nevertheless maintains its relevancy today.

What did Turing have to say about it? Roughly the same as in his response to the Theological Objection. He notes that it would be best for us if it could be shown that humans are necessarily superior (not merely superior, but superior as a matter of logical necessity) to other lifeforms (and, of course, robots), for then we would be in no danger of losing our commanding position. Yet, of course, there is no reason (outside the aforementioned theological reasons) to think this is so. Interestingly, he also thinks this objection affects intellectual people more, simply because they value thinking more; if a machine can think too, how special is thinking?

The objection is humorous in itself, but Turing replies with an even more humorous rejoinder:

“I do not consider that this argument is sufficiently substantial to require refutation. Consolation would be more appropriate…”

I’ll leave this objection at that.

The Mathematical Objection

“There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines. The best known of these results is known as Gödel’s theorem, and shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent.” (p. 444)

Now we begin to see more serious objections, if not in strength, then at least in substance. Like the Theological Objection, this Class 1 objection requires a fair bit of background information.

In 1931, Kurt Gödel published what are known as the two incompleteness theorems1. The two are listed here:

1. For any consistent formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed. That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete.

2. For any formal recursively enumerable (i.e. effectively generated) theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.

We need not worry about trying to understand the proofs of these. What is important is the gist of the two theorems: it is impossible to set up a sufficiently powerful mathematical system that is both complete and consistent. This is stated nicely by Turing:

…in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent.

In order to better understand this, as Turing notes, we can conveniently frame it in terms of a Turing Machine. Say we set up a Turing Machine programmed to ‘know’ the answer to every yes/no question that could possibly be asked; that is, the machine (logical system) is complete. Then, as a consequence of Gödel’s theorem, some of those questions will be answered incorrectly, so the machine is inconsistent. Conversely, if every question the machine does answer is answered correctly (the machine is consistent), then there must be some questions it cannot answer at all, and the machine is incomplete.

What is going on here? Return to Gödel’s theorem. It shows that we cannot set up a sufficiently powerful mathematical system that is both complete and consistent. And because Turing machines are in essence mathematical systems (recall that Turing originally called them Logical Computing Machines), they can be either complete or consistent, but not both.
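Turing’s own limitative result, the halting problem, is the machine-shaped version of this, and a small sketch shows the diagonal trick at its core. This is my illustration, not anything from the 1950 paper; `halts` stands for a hypothetical oracle that answers the yes/no question ‘does this program halt on this input?’:

```python
def make_contrary(halts):
    # Given any candidate oracle halts(prog, arg), build the program that
    # does the opposite of whatever the oracle predicts about it.
    def contrary(prog):
        if halts(prog, prog):    # oracle says contrary halts on itself...
            while True:          # ...so loop forever instead
                pass
        return "halted"          # oracle says it loops, so halt at once
    return contrary

# Any concrete oracle is defeated by its own contrary program:
def naive_oracle(prog, arg):
    return True                  # claims that everything halts

contrary = make_contrary(naive_oracle)
# contrary(contrary) would now loop forever, so the oracle's answer of True
# was a wrong answer to a yes/no question. No oracle escapes this trap.
```

So a machine that ventures an answer to every such question is bound to answer some of them wrongly, which is the complete-but-inconsistent horn of the dilemma just described.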

Now to the objection. If a machine cannot be both complete and consistent, it seems to follow that machines cannot fully replace a human mind, because the human mind is supposedly not subject to these limitations. There are two issues here. The first is that this objection assumes that human minds aren’t computers; the second is the question of whether it matters that a machine cannot be both complete and consistent. The first is something I’ll address in more detail on a later page, but because it is important to this objection, I’ll anticipate the treatment here.

In brief, there are two possibilities in which the human mind could be construed as a computer:

1. The human mind was programmed, much like we program computers today, by something intelligent.

Ignoring the question of what this other intelligent thing might be, it is within the realm of possibility that this is true, and if so we would certainly be subject to Gödel’s theorem. However, barring any testable evidence that we were intelligently programmed as logical systems, we are just as well off ignoring this notion.

2. The human mind was ‘programmed’ by evolution, and the human mind is a logical system by nature.

Assume for the moment that there exist no yes/no questions that humans must necessarily answer wrongly or fail to answer. It would seem, following Gödel’s theorem, that humans are then either complete or consistent, but not both. Of course, we have no idea what it would mean for the human mind to be complete in the logical sense of the term; it is at least a practical impossibility for a human to answer every yes/no question, though what matters is that it is not a logical impossibility. If a human mind could be logically complete, then (if the human mind is a logical system) human minds must be inconsistent. To this, I can only reply: but of course. We all hold inconsistent beliefs in some sense. But then we must ask whether beliefs in the logical system of the human mind (if indeed it is one) correspond to statements in logical systems as we ordinarily think of them, and we’ve returned to our original question: is the human mind itself a computer?

To this question, as noted, we have no good answer. And the assumption I made above, that there exist no yes/no questions that humans must necessarily answer wrongly, is dubious at best; there is no way to prove it either way. So in the end, the claim that the human mind is a computer doesn’t really answer the objection. It might be a computer, and then again, it might not.

The second issue is whether it matters that a machine cannot be both complete and consistent. That is, assume that computers are logical systems that cannot be both complete and consistent, and that human minds are not logical systems and do not suffer this supposed fault. I would argue that this still does not prove that computers cannot think, and more importantly, it does not prove that humans are necessarily superior to computers. Different computers (logical systems) will have different internal limitations, and the fact that one computer cannot answer one specific question does not mean that all computers cannot answer that question. So perhaps a computer cannot answer some question, and we feel somehow superior to it. This does not mean that we should feel superior to all computers, and it certainly doesn’t mean that we are logically superior to any given computer. Finally, nothing here shows that these limitations are substantial enough to establish that machines cannot think.

Next: The Argument from Consciousness

Notes and Sources:

1. The information here and immediately following is from Wikipedia, here.

________________________________________________________________________________________[Top]

The Argument from Consciousness

Turing quotes Geoffrey Jefferson, who delivered the 1949 Lister Oration, titled ‘The Mind of Mechanical Man,’ for the substance of this objection:

“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.” (p. 445-446)

Turing splits Jefferson’s argument into two forms: the extreme and the modest (the first term is his, the second mine). Both forms are Class 2 objections, that is, objections against the imitation game.

The extreme view states that in order to know whether a machine is thinking, or whether it is conscious of itself, we must be the machine. This is a Class 2 objection because it states that we cannot really know whether a machine is thinking, which is precisely what the Turing Test purports to show. As Turing notes, this is the solipsist position.

The position of Solipsism is nicely summarized by Gilbert Ryle1:

Contemporary philosophers…have found it impossible to discover any logically satisfactory evidence warranting one person in believing that there exist minds other than his own. I can witness what your body does, but I cannot witness what your mind does, and my pretensions to infer from what your body does to what your mind does all collapse, since the premises for such inferences are either inadequate or unknowable.

So solipsism is the position that the only mind we can be sure exists is our own. A solipsist might say, “I am the only mind which exists.”2 It might be objected: “Well, it is obvious other people exist! I see them right now; I just talked to one. Why would anyone take this position?” Because it is in fact not possible to absolutely prove that other people actually exist. I cannot absolutely prove that you exist, and are not just a figment of my imagination, or some sort of machine programmed to behave as I do. The only way I could prove that you exist would be to become you and go into your mind; but then I’d be you, and not me, and I (you) couldn’t prove that I exist. Likewise, a solipsist would say that the only way we can know a machine is thinking is to be the machine. But then we’d be the machine, and not ourselves. So just as we can’t prove other humans have minds and can think, we cannot prove machines can think.

As should be clear, this cannot be a serious objection, because it would require the rejection not only of thinking machines, but of other thinking humans. That is, while there may be no absolutely conclusive argument against the solipsist view, the holder of it has bigger things to worry about than whether a machine can think. Turing says that we should accept the “polite convention” and assume “that everyone thinks,” and I agree.

So what of the modest view? This view, I think, dissolves into a form of incredulity: Jefferson cannot imagine that a computer could do any of these things while understanding what it was doing. This is a Class 2 objection because it maintains that even if a machine appeared to understand that it had (for example) written a sonnet, it would not be truly conscious of this action.

On page 446, Turing gives an example of how we could test a machine’s understanding of an action by using the Turing Test. In his example, we imagine that the machine has written a sonnet, and we question the machine to determine whether it understands what it wrote. He states that if a machine were to justify lines of the sonnet exactly as a human might, there would seem to be no reason for denying that it understands what it did. Faced with this kind of evidence, Turing thinks the objector must either agree that the machine is conscious of its actions, or be “forced into the solipsist position” (p. 447).

I’m not going to work through Turing’s sonnet example, because I want to use a different example from mathematics; the point remains the same. Imagine that someone has been taught to calculate answers to algebra problems not through any underlying knowledge of the algebraic process, but purely by memorization. For example, whenever they see ‘2x = 4’, they know that the answer is 2, simply because they have memorized this equation. They have no idea what the x signifies; they only know the answer is 2 from memorization. Now suppose we test this person’s understanding by giving them the following problem and asking them to solve for x:

3x = 12

They probably wouldn’t be able to get the answer; they might guess 2, for all we know. They don’t know that they are supposed to get the x on one side of the equation and the rest on the other:

x = 12/3

x = 4

Turing calls this learning something in a ‘parrot fashion.’ If the person could not show how he or she got x = 2 in the original equation, we would assume he or she had no understanding; but if they took us through the process, we could assume they understood the algebra. The same applies to a machine. If a machine can show us the process by which it arrived at something, then we should assume it understands, and is conscious of, the action. If it says that it feels depressed, and can take us through its reasoning as to why it is depressed, we should take it as being conscious. If you deny this, Turing says, then you are probably forced into the solipsist position.
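The contrast is easy to see in code. Here is a small sketch (my illustration, not Turing’s) of a ‘parrot’ that has only memorized answers, next to a procedure that can show its work:

```python
# Parrot fashion: a lookup table of memorized question/answer pairs.
memorized = {"2x = 4": 2}

def parrot_solve(equation):
    # Fails on anything it has not literally seen before.
    return memorized.get(equation, "no idea")

def solve_for_x(a, b):
    # Understands the procedure (isolate x by dividing both sides by a)
    # and can report each step of its reasoning on request.
    steps = [f"{a}x = {b}", f"x = {b}/{a}", f"x = {b / a:g}"]
    return b / a, steps

print(parrot_solve("3x = 12"))   # -> 'no idea'
print(solve_for_x(3, 12))        # -> (4.0, ['3x = 12', 'x = 12/3', 'x = 4'])
```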

Next: The Argument from Various Disabilities

Notes and Sources:

1. From page 60 of the book ‘The Concept of Mind’ by Gilbert Ryle, 1949.

2. From the Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/s/solipsis.htm

________________________________________________________________________________________[Top]

The Argument from Various Disabilities

Turing next addresses a series of objections, which he calls the ‘Argument from Various Disabilities.’ By ‘various disabilities’ he means that people argue against the notion that machines can think by claiming that machines lack something vital; hence the term disabilities. Turing lists examples of the purported disabilities under objection (5) of his paper; a few are ‘have initiative,’ ‘fall in love,’ and ‘make mistakes.’ He gives the objection the following form:

“I grant you that you can make machines do all the things you have mentioned, but you will never be able to make one do X” (p. 447).

Where X is something like the examples I listed above.

The disabilities are all forms of Class 1 objections. They can, however, be read in two ways, one of which doesn’t fall into either of my classes. We can read the objections as saying, for example, (1) “Machines cannot think because they cannot enjoy strawberries and cream.” This clearly fits a Class 1 objection.

However, the objectors may not be saying this. They might be saying simply (2) “Machines cannot enjoy strawberries and cream,” meaning that this is a human characteristic which machines cannot have. I’ll comment on this below, but note that this reading of the objection does not deny that machines can think, only that they have certain disabilities. As such, this reading does not fall neatly into either class.

Turing begins by noting that most of these assertions are made by people who think of contemporary (for his time) computers, and then argue that these computers could not possibly ‘have initiative,’ ‘fall in love,’ etc. But just as no contemporary computer can pass the Turing Test, no contemporary computer can ‘fall in love.’ The fact that contemporary computers cannot fall in love does not mean that computers will never be able to fall in love, or have initiative, or whatever the objection may be. So the objection fails in this broad sense, and at this point it would probably be fine to forget about the individual examples (the narrow sense). But for completeness, I’ll go over a couple in detail.

Strawberries and Cream

The first objection Turing considers is that a machine will never be able to “enjoy strawberries and cream.” He claims that though he sees no reason why a machine could not be made to enjoy strawberries and cream, to attempt to make one do so “would be idiotic” (p. 448). Turing then appeals to what he considers a distinction: the friendship between a white man and a black man will not be the same type of friendship as that between a black man and another black man. Much like this (to Turing, apparent) distinction in friendship, there will be a distinction between what humans and machines enjoy. So Turing wants the analogy: just as friendship between races is different, enjoyment between humans and machines will be different.

However, I object whole-heartedly to this, because I can only interpret Turing’s analogy as a racist remark. Throughout the paper Turing argues about what is theoretically possible for a machine to do, and that nothing proves machines necessarily cannot think. But here he makes an argument that friendship between a white man and a black man must necessarily be different. This simply doesn’t follow; I see no reason why friendships between races must necessarily differ from friendships within races. Perhaps Turing was not making this argument, and was only appealing to a distinction as perceived in the 1950s. But even then, in my view, the analogy is very weak: the supposed difficulties in friendships between races would be nothing like the differences between things as diverse as man and machine. Or perhaps there will be no differences; either way, this is not a good analogy, and I will not consider it further.

Let’s return briefly to the objection. While it may be possible to make a machine that enjoys strawberries and cream: 1.) enjoying strawberries and cream doesn’t have anything to do with the possibility of a thinking machine, and 2.) it would be idiotic to make such a machine, because the enjoyment of strawberries and cream is a human attribute. We are trying to distinguish thinking machines, not machines that are human in every conceivable way. It would be like a machine concluding that a human cannot think because the human doesn’t enjoy fresh oil every few months. In short, this objection doesn’t work.

Make Mistakes

Here the claim is that “machines cannot make mistakes” (p. 448). Turing is tempted to retort, “Are they any the worse for that?” However, he does consider the argument, and sees no reason why machines necessarily cannot make mistakes. There are two categories of mistakes a computer could make: mathematical and non-mathematical. Perhaps, as far as mathematics is concerned, machines in general will not make the types of errors humans are prone to committing, but this does not mean that they cannot make them. An error in programming, for instance, might lead to a systematic error in which a computer always calculates some function incorrectly.

As far as non-mathematical mistakes are concerned, I see no reason why computers would be much less susceptible to these errors than humans. Consider Turing’s example of scientific induction. If a computer is programmed to draw conclusions by scientific induction (in short, basing future expectations on what has occurred in the past) and to make decisions upon them, the computer will make mistakes in certain situations. Yet this type of reasoning is essential to common everyday occurrences, at least for humans. When there is snow on the ground, we put on a jacket, because every other time there has been snow on the ground it has been cold out. The snow doesn’t guarantee that it is cold outside, but this is scientific induction. I’m not sure how a machine could operate in any world without at least some form of it, so machines will make mistakes of this sort.
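A toy sketch (my example, not from Turing’s paper) of a machine reasoning by this kind of induction, snow and all:

```python
# Scientific induction: project forward whatever outcome has always
# accompanied a condition in past observations. Useful, but fallible.
past = [("snow", "cold")] * 20 + [("rain", "mild")] * 5

def predict(condition):
    outcomes = {o for c, o in past if c == condition}
    # If every past observation agrees, predict that outcome again.
    return outcomes.pop() if len(outcomes) == 1 else "unknown"

print(predict("snow"))  # -> 'cold': usually right, but one warm snowy day
                        #    would make this prediction a mistake
```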

One further thing to note is that, like the strawberries and cream example, this example seems to argue that machines must be human-like, or that we will never be able to make machines human-like. Again, a human complaining that ‘because a machine cannot make mistakes it cannot think’ would be like a machine arguing that ‘because humans make mistakes they cannot think.’ It doesn’t work.

Be the Subject of its Own Thought

All Turing wants to say here is that if a machine were to pass the Turing Test, and if it were calculating something at a given time, we should say that whatever the machine is calculating is the subject of its thought. Then, if the machine were calculating something relating to itself (say, reprogramming one of its own functions), we should say that at that point the machine is the subject of its own thought. I would tend to agree.

Fall in love

Turing does not expand on this in his paper, but I want to note that a book recently came out which argues that in the future humans and machines will fall in love with each other. Called Love and Sex With Robots, it argues that humans will love robots (in a limited sense, some already do1), and that eventually this love will be reciprocal. I have not read the book, so I cannot speak for its arguments; I only note that this particular ‘disability’ has been argued against at book length (Turing himself doesn’t discuss the objection in detail).

Conclusion

As Turing notes, many of these objections are just veiled Arguments from Consciousness, which we’ve already seen. Suffice it to say, it would be an ill-advised prediction to say that a robot will never be able to do X (whatever X may be). So this objection, in both the broad and narrow senses, does not work.

Next: The Remaining Classical Attacks

Notes and Sources:

1. See my post Snow Eating Robots and Anthropomorphism for an example.

________________________________________________________________________________________[Top]

Roboethics

This section, much like the last, is separated into several pages.

  • Preliminary Considerations.

________________________________________________________________________________________[Top]