[personal profile] gusl
I think so. It's almost required of believers in strong AI.

This afternoon, a colleague told me that humans are superior to computers because we are not Turing machines, and so are not subject to the halting problem. I told him his arguments sounded a lot like Penrose's fallacious anti-strong-AI arguments.
We argued for about 10 minutes, and then I gave up trying to convince him.

Crispin Cowan on the metaphysical implications of humans being Turing machines.

Boyer and Moore have a machine prove the unsolvability of the Halting Problem

(no subject)

Date: 2003-06-25 02:20 pm (UTC)
From: [identity profile] easwaran.livejournal.com
I was just having a similar discussion with some of the philosophy students a couple of days ago. Of course, being philosophers, they were less set in any point of view. Penrose definitely seems off his rocker on that argument, to me. Searle's Chinese Room, I suppose, is interesting only in that it phrases the naive view in a way that makes it easy to argue against. (That's most of what we were discussing, since I was wondering what people in the Berkeley philosophy department personally think of Searle.)

hmm

Date: 2003-06-25 11:17 pm (UTC)
From: [identity profile] patrissimo.livejournal.com
I don't think we're Turing machines, because TMs have infinite memory. I think we're just finite automata.

Now, if you count pieces of paper and RAM chips as being acceptable parts of humans (we did create them, after all), that boosts our memory a lot. Then I'd put humans in the category of "Deterministic Automata with so much available memory that they look like Turing Machines except on really large, complex problems".

Re: hmm

Date: 2003-06-26 08:44 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Hm... I think there was a linguistic confusion here: by "deterministic automata", I understood you to mean either DFAs or NFAs, but you obviously didn't mean those, since they have no memory. If we are thinking of the same thing, I'm guessing you meant "Turing machine with a finite tape", tweaking the definition of a TM if necessary.


There is a hierarchy:

Deterministic/Nondeterministic Finite Automata (DFAs/NFAs): decide regular languages, i.e. whether a string matches a regular expression.
Pushdown Automata (PAs): decide context-free languages.
Turing Machines (TMs): decide computable ("decidable") problems.

So DFAs/NFAs have no memory by definition, just states: they can't decide whether a string fits 0^m 1^m.

PAs have a partially accessible memory in the form of an unlimited stack: they can decide whether a string fits 0^m 1^m, but not whether it fits 0^m 1^m 0^m (a stack-based recognizer for the former is sketched below).

TMs have an infinite tape.
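
To make the middle rung concrete, here's a rough Python sketch (the function name is mine, purely for illustration) of a stack-based, PDA-style recognizer for 0^m 1^m; the single unbounded stack is exactly the memory a finite automaton lacks:

```python
def accepts_0m1m(s: str) -> bool:
    """Stack-based (PDA-style) recognizer for { 0^m 1^m : m >= 0 }."""
    stack = []
    i = 0
    # Phase 1: push one marker per leading 0.
    while i < len(s) and s[i] == "0":
        stack.append("X")
        i += 1
    # Phase 2: pop one marker per 1.
    while i < len(s) and s[i] == "1":
        if not stack:
            return False  # more 1s than 0s
        stack.pop()
        i += 1
    # Accept iff the whole string was consumed and every 0 was matched.
    return i == len(s) and not stack

assert accepts_0m1m("") and accepts_0m1m("000111")
assert not accepts_0m1m("00111") and not accepts_0m1m("0110")
```

Deciding 0^m 1^m 0^m would require consulting the count of 0s a second time, but popping the stack while matching the 1s destroys that record, which is essentially why a single stack isn't enough.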

Re: hmm

Date: 2003-06-26 05:33 pm (UTC)
From: [identity profile] patrissimo.livejournal.com
There was no miscommunication about "deterministic automata". A DFA does indeed have memory, because it knows which of its states it is in. After all, what is memory, if not a large set of possible "states" to be in? A DFA has log_2(N) bits of memory, where N is the number of states.

You could transform a computer program into a DFA, as long as it only used a finite amount of memory known in advance, by letting each state in the DFA represent a complete description of the CPU registers, the program's memory space, the current location of the program counter, etc. The entire transition table (mammoth though it would be) could be precomputed and the DFA generated.

A DFA can't decide whether a string fits 0^m 1^m, because that requires log_2(m) bits of memory. If m can be arbitrarily large, then we just make m bigger than the number of states in the DFA, and the DFA can't keep track of the count of incoming 0s. Being forced to cycle in the state graph can be thought of as "running out of memory".
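
To see "running out of memory" concretely, here's a rough Python sketch (entirely my own illustration; the names are made up). It builds a DFA that tries to decide 0^m 1^m by counting 0s, but whose counter saturates once it runs out of states, at which point it starts misclassifying strings:

```python
def run(dfa, s):
    """Run a DFA, given as (transitions, start, accepting), on the
    string s, and report whether it accepts."""
    transitions, start, accepting = dfa
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

def make_counting_dfa(K):
    """A DFA that tries to decide { 0^m 1^m } but can only count to K:
    its zero-counter saturates, which is the "forced to cycle in the
    state graph" failure described above."""
    transitions = {("dead", "0"): "dead", ("dead", "1"): "dead"}
    for c in range(K + 1):
        # While reading 0s: bump the counter, saturating at K.
        transitions[(("zeros", c), "0")] = ("zeros", min(c + 1, K))
        # Each 1 must consume one counted 0; a 1 with no 0s left is fatal.
        transitions[(("zeros", c), "1")] = ("ones", c - 1) if c > 0 else "dead"
        transitions[(("ones", c), "1")] = ("ones", c - 1) if c > 0 else "dead"
        # A 0 after a 1 is never legal.
        transitions[(("ones", c), "0")] = "dead"
    accepting = {("zeros", 0), ("ones", 0)}  # every 0 matched by a 1
    return transitions, ("zeros", 0), accepting

dfa = make_counting_dfa(3)
print(run(dfa, "000111"))      # True:  correct while m <= 3
print(run(dfa, "00000111"))    # True:  wrongly accepts 0^5 1^3
print(run(dfa, "0000011111"))  # False: wrongly rejects 0^5 1^5
```

Once m exceeds K, the prefixes 0^4, 0^5, ... all land in the same saturated state, so the machine literally cannot tell them apart afterwards.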

A Turing Machine with finite tape (or a PA with a finite stack) is just a DFA. It only has finite memory, and just like the computer program, we can "simulate" it with a DFA, each of whose states represents the complete state of the tape and the position of the head.

Since humans have only a finite amount of memory, how can we be Turing Machines? A TM can solve problems that take more memory than there are particles in the universe. We can't. We're just large DFAs...
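
As a back-of-the-envelope check (again my own sketch, with made-up names), here's the counting argument behind that simulation: each configuration of a finite-tape machine, i.e. each (control state, head position, tape contents) triple, becomes one state of the equivalent DFA, and there are only finitely many of them:

```python
from itertools import product

def configurations(control_states, tape_alphabet, tape_len):
    """Enumerate every configuration of a Turing machine whose tape is
    capped at tape_len cells.  Each (control state, head position,
    tape contents) triple is one state of the equivalent DFA, so there
    are exactly |Q| * tape_len * |Gamma| ** tape_len of them."""
    return product(control_states,
                   range(tape_len),
                   product(tape_alphabet, repeat=tape_len))

# A 2-state machine over {0, 1} with a 4-cell tape collapses into a
# DFA with 2 * 4 * 2**4 = 128 states: astronomical for realistic
# tapes, but finite, which is all the argument needs.
print(sum(1 for _ in configurations(["q0", "q1"], "01", 4)))  # 128
```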

Re: hmm

Date: 2003-06-26 11:05 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I agree with you now. Thanks for refreshing my knowledge, and adding to my understanding.
It was counterintuitive to me that a DFA can't decide 0^m 1^m.

> A Turing Machine with finite tape (or a PA with a finite stack) is just a DFA. It only has finite memory, and just like the computer program, we can "simulate" it with a DFA, each of whose states represents the complete state of the tape and the position of the head.

That's an insight I should have had back when I studied Theory of Computation, but for some reason I didn't (I don't think the prof showed it either). I guess I was just sick of the whole thing.

musings slightly more applied than philosophical

Date: 2003-06-26 04:04 am (UTC)
From: [identity profile] akaliko.livejournal.com
I recently read an interesting article, "Excuse me, are you human?", in the MIT magazine Technology Review; unfortunately it's premium content, but maybe you have access somehow? It is about the increasing use of anti-spam reverse Turing tests, a.k.a. CAPTCHAs, and how the author fears that in the future we may have to spend most of our days persuading machines that we are not machines ourselves.

The two articles below are related (and not premium content):

"Spam wars"
"E-mail as we know it is dying"

(no subject)

Date: 2003-06-29 12:28 am (UTC)
From: [identity profile] xuande.livejournal.com
Howdy! I came across your livejournal from my info page, which informed me you had added me as a friend. You are the first stranger I've met through Livejournal. =) For some definition of "met," anyway.

To toss in my own two bits: I don't believe that human brains are Turing Machines. Besides the obvious point about memory, which was already addressed in earlier posts, there is a further problem: Turing Machines are exact and deterministic. Given the same input, they will always produce the same output, if indeed there is any output to be produced.

Humans, on the other hand, are inexact and nondeterministic. I don't mean nondeterministic in any fundamental physical sense, but simply that if you gave a large enough sample of reasonable people a complicated Turing Machine program and asked them to follow it through, you would receive different answers.

I would even go so far as to say that if you gave the same person the same program multiple times, but somehow messed with his memory so that he could not remember the answer (perhaps by giving him a large number of other TM problems to solve in between), he would be likely to give different answers. The usual philosophical questions about whether that's really the same person apply, but they're irrelevant to the point at hand, because he still has a human brain, and if human brains are Turing Machines, or a finitistic variation thereon, he'd have to give the same answers.

The point of all my rambling (sorry, it's late, and I don't think clearly on the best of days) is that human brains seem merely to approximate Turing Machines. This isn't an original thought, but for the life of me I can't remember where I read it. At any rate, it poses an obstacle to most attempts to apply theorems of formal logic and computation to human cognition. For example, Gödel's famous Incompleteness Theorem applies only to sound deduction systems, and human reasoning, in its fallibility, is definitely not sound.
