Are humans Turing Machines?
Jun. 25th, 2003 05:03 pm

I think so. It's almost required of believers in strong AI.
This afternoon, I had a colleague tell me that humans are superior to computers because we are not Turing machines, so we aren't subject to the halting problem. I told him his arguments sounded a lot like Penrose's fallacious anti-strong-AI arguments.
We argued for about 10 minutes, and then I gave up trying to convince him.
Crispin Cowan on the metaphysical implications of humans being Turing machines.
Boyer and Moore have a machine prove the unsolvability of the Halting Problem
(no subject)
Date: 2003-06-25 02:20 pm (UTC)

hmm
Date: 2003-06-25 11:17 pm (UTC)

Now, if you count pieces of paper and RAM chips as being acceptable parts of humans (we did create them, after all), that boosts our memory a lot. Then I'd put humans in the category of "Deterministic Automata with so much available memory that they look like Turing Machines except on really large, complex problems".
musings slightly more applied than philosophical
Date: 2003-06-26 04:04 am (UTC)

The two articles below are related (and not premium content):
"Spam wars"
"E-mail as we know it is dying"
Re: hmm
Date: 2003-06-26 08:44 am (UTC)

There is a hierarchy:
Deterministic/Non-deterministic Finite Automata (DFAs/NFAs): decide regular languages, i.e., whether a string fits a regular expression.
Pushdown Automata (PAs): decide context-free languages.
Turing Machines (TMs): decide computable ("decidable") problems.
So DFAs/NFAs have no memory by definition, just states; they can't decide whether a string fits 0^m 1^m.
PAs have a partially-accessible memory in the form of an unlimited stack. They can decide whether a string fits 0^m 1^m, but not whether it fits 0^m 1^m 0^m, since that language isn't context-free (a sketch of the 0^m 1^m case follows below).
TMs have an infinite tape.
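To make the 0^m 1^m example concrete, here's a minimal Python sketch (an illustration of the idea, nothing from the links above). A single stack, used as nothing more than a counter, is exactly the extra power a PA has over a DFA:

```python
def accepts_0m1m(s: str) -> bool:
    """Pushdown-style recognizer for the language { 0^m 1^m : m >= 0 }.

    One stack suffices: push a marker for each leading '0', pop one for
    each '1', and accept iff the stack empties exactly at the end of the
    input. A DFA can't do this, since it would need to distinguish
    unboundedly many counts of zeros with a fixed number of states.
    """
    stack = []
    seen_one = False
    for ch in s:
        if ch == '0':
            if seen_one:         # a '0' after a '1' can never match
                return False
            stack.append(ch)     # push a marker for this '0'
        elif ch == '1':
            seen_one = True
            if not stack:        # more '1's than '0's
                return False
            stack.pop()          # match this '1' against one '0'
        else:
            return False         # character outside the alphabet
    return not stack             # accept iff the counts balanced

assert accepts_0m1m("")            # m = 0
assert accepts_0m1m("000111")      # m = 3
assert not accepts_0m1m("00111")
assert not accepts_0m1m("0101")
```

Deciding 0^m 1^m 0^m the same way would require the count to survive being consumed by the first matching pass, which a single stack can't provide; that's the jump to the next level of the hierarchy.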
Re: hmm
Date: 2003-06-26 05:33 pm (UTC)

You could transform a computer program into a DFA as long as it only used a finite amount of memory known in advance, by letting each state in the DFA represent a complete description of the CPU registers, memory space of the program, current location of the PC, etc. The entire transition table (mammoth though it would be) could be precomputed and the DFA generated.
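A toy version of that construction, just to show its shape (the machine and "program" here are made up purely for illustration):

```python
from itertools import product

# Hypothetical 4-instruction program over a single 2-bit register.
# The complete machine state is the pair (pc, reg); both components
# are bounded, so every configuration can be enumerated and the whole
# transition table precomputed, exactly as described above.
PROGRAM = [
    ("INC",),     # reg = (reg + 1) mod 4
    ("JZ", 3),    # if reg == 0, jump to instruction 3
    ("JMP", 0),   # jump to instruction 0
    ("HALT",),
]

def step(pc, reg):
    """One step of the toy machine: configuration in, configuration out."""
    op = PROGRAM[pc]
    if op[0] == "INC":
        return pc + 1, (reg + 1) % 4
    if op[0] == "JZ":
        return (op[1], reg) if reg == 0 else (pc + 1, reg)
    if op[0] == "JMP":
        return op[1], reg
    return pc, reg    # HALT: the configuration is a fixed point

# The "DFA": one state per configuration, transitions precomputed.
table = {(pc, reg): step(pc, reg)
         for pc, reg in product(range(len(PROGRAM)), range(4))}
print(len(table), "states")   # 16 -- mammoth in miniature
```

Every real computer is a "DFA" in exactly this sense; the table is just astronomically large.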
A DFA can't decide if a string fits 0^m 1^m because that requires log_2(m) bits of memory. If m can be arbitrarily large, then we just make m bigger than the number of states in the DFA, and the DFA can't keep track of the count of incoming 0's. Being forced to cycle in the state graph can be thought of as "running out of memory".
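That pigeonhole argument can even be run as code. A sketch (the dict encoding of the DFA is hypothetical, just for the demo): feed any DFA longer and longer runs of '0's; within n_states steps, two prefixes 0^i and 0^j must land in the same state, and from that point on the DFA literally cannot tell them apart.

```python
def same_fate_pair(delta, start, n_states):
    """Find two strings any given DFA must classify identically,
    even though exactly one of them is in { 0^m 1^m }.

    Reading 0, 00, 000, ... must revisit a state within n_states
    steps (pigeonhole). If 0^i and 0^j end in the same state, then
    0^i 1^i and 0^j 1^i do too, so the DFA answers both the same way
    and is wrong about one of them.
    """
    state, seen = start, {start: 0}
    for k in range(1, n_states + 1):
        state = delta[(state, '0')]
        if state in seen:
            i = seen[state]
            return '0' * i + '1' * i, '0' * k + '1' * i
        seen[state] = k

# Example: a 3-state DFA that counts 0's mod 3 and ignores 1's.
delta = {(s, c): (s + 1) % 3 if c == '0' else s
         for s in range(3) for c in '01'}
print(same_fate_pair(delta, start=0, n_states=3))   # ('', '000')
```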
A Turing Machine with finite tape (or a PA with a finite stack) is just a DFA. It only has finite memory, and just like the computer program, we can "simulate" it with a DFA, each of whose states represents the complete state of the tape and the position of the head.
Since humans have only a finite amount of memory, how can we be Turing Machines? A TM can solve problems that take more memory than there are particles in the universe. We can't. We're just large DFAs...
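One more consequence worth spelling out: for a finite-memory machine, the halting problem itself is decidable by brute force, because a run that never halts must eventually revisit a configuration. A minimal sketch (the step function is a stand-in for any finite machine's transition function):

```python
def halts(step, start):
    """Decide halting for a machine with finitely many configurations.

    `step` maps a configuration to its successor; a fixed point counts
    as halting. The configuration space is finite, so the run either
    halts or revisits a configuration, and a revisit means it loops
    forever. Either way this procedure terminates with the answer,
    which is impossible once the tape is genuinely unbounded.
    """
    seen, cfg = set(), start
    while cfg not in seen:
        seen.add(cfg)
        nxt = step(cfg)
        if nxt == cfg:       # fixed point: the machine has halted
            return True
        cfg = nxt
    return False             # revisited a configuration: infinite loop

# Toy machine: halt on 3, otherwise add 2 mod 4.
toy_step = lambda n: n if n == 3 else (n + 2) % 4
print(halts(toy_step, 1))   # True:  1 -> 3 and stops
print(halts(toy_step, 0))   # False: 0 -> 2 -> 0 -> ... forever
```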
Re: hmm
Date: 2003-06-26 11:05 pm (UTC)

It was counterintuitive for me that we can't decide 0^m 1^m.
A Turing Machine with finite tape (or a PA with a finite stack) is just a DFA. It only has finite memory, and just like the computer program, we can "simulate" it with a DFA, each of whose states represents the complete state of the tape and the position of the head.
That's an insight I should have had back when I studied Theory of Computation, but for some reason I didn't (I don't think the prof showed it either). I guess I was just sick of the whole thing.
(no subject)
Date: 2003-06-29 12:28 am (UTC)

To toss my own two bits in: I don't believe that human brains are Turing Machines. Besides the obvious point about memory, which was already addressed in earlier posts, there is a further problem. Turing Machines are exact and deterministic: given the same input, they will always produce the same output, if indeed there is any output to be produced.
Humans, on the other hand, are inexact and nondeterministic. I don't mean nondeterministic in any fundamental physical sense, but simply that if you gave a large enough sample of reasonable people a complicated Turing Machine program and asked them to follow it through, you would receive different answers.
I would even go so far as to say that if you gave the same person the same program multiple times, but somehow messed with his memory so that he could not remember the answer (perhaps by giving him a large number of other TM problems to solve in between), he would likely give different answers. The usual philosophical questions about whether that's really the same person apply, but they're irrelevant to the point at hand, because he still has a human brain, and if human brains are Turing Machines, or a finitistic variation thereon, he'd have to give the same answers.
The point of all my rambling (sorry, it's late, and I don't think clearly on the best of days) is that human brains seem merely to approximate Turing Machines. This isn't an original thought, but I can't remember for the life of me where I read it. At any rate, this poses an obstacle to most attempts to get theorems of formal logic and computation to apply to human cognition. For example, Goedel's famous Incompleteness Theorem applies only to sound deduction systems. Human reasoning, in its fallibility, is definitely not sound.