[personal profile] gusl
Bringsjord has worked out my Penrose-like idea, formalizing his argument in quantified modal logic.

So the Halting Problem (HP) becomes: for all TMs x, there exists a TM y such that x does not decide y.

He shows that the assumptions:

(1) I!: There exists a Turing Machine M such that no Turing Machine can decide whether it halts; and *necessarily* so, since it's a mathematical theorem. (This seems wrong to me!)
(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.
(3) All people are machines.

lead to a contradiction. (The contradiction is trivial, but he goes through the formal steps of his modal logic.)
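Writing Mx for "x is a (Turing) machine", Ps for "s is a person", and Dyx for "y decides x" (the notation used in the comments; this is my reconstruction, so the details may differ from Bringsjord's own formalization), the three assumptions read roughly:

(1) []∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))
(2) ∀x.(Mx ⇒ ∃s.(Ps ∧ ◊Dsx))
(3) ∀s.(Ps ⇒ Ms)

By (2) and (3), some machine (namely a person) possibly decides the witness of (1); getting from (1) to "no machine can even *possibly* decide that witness", which yields the contradiction, takes further modal steps, and those steps are exactly what the discussion below picks apart.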


Of course, (2) is controversial. If I am a computer (which I believe I am), I would like to see my "Goedel sentence": which Turing Machine can I in principle not decide?

The lesson from Goedel's incompleteness theorem is that you always need to pick more axioms.

Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way of thinking about this stuff, thanks to [livejournal.com profile] r6), you always need to change your Universal Turing Machine to one that decides more TMs.

Then the question becomes: "But how?" To me, the paradox remains, because if you have a systematic way of changing your Turing Machine simulator, then you're just using a meta-Turing Machine, which is just as susceptible to undecidability: you'll never be able to decide whether the halting construction for the meta-TM halts.
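The regress can be made concrete with a toy sketch (all names here are invented for illustration): fold any "upgrade tower" of deciders into one procedure, and notice that the folded result is itself just another computable, total decider, so the usual diagonal argument applies to it wholesale.

```python
def combined(deciders):
    """Fold a (finite, for illustration) tower of partial deciders into one.

    Each decider maps (prog, inp) to True/False when it claims an answer,
    or None when that case is beyond its strength.
    """
    def decide(prog, inp):
        for d in deciders:          # consult ever-stronger levels in turn
            verdict = d(prog, inp)
            if verdict is not None:
                return verdict
        return False                # fall-through guess for uncovered cases
    return decide

# The folded procedure is total and computable, so the standard diagonal
# machine built against combined(tower) defeats the whole tower at once:
# each upgrade decides more machines, but never all of them.
d0 = lambda prog, inp: True if prog == "halts_fast" else None
d1 = lambda prog, inp: False if prog == "loops_forever" else None
tower = combined([d0, d1])
print(tower("halts_fast", 0), tower("loops_forever", 0))  # True False
```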

See Bringsjord - A Modal Disproof of "Strong" Artificial Intelligence (page 8)

(no subject)

Date: 2005-04-08 04:24 am (UTC)
From: [identity profile] jcreed.livejournal.com
If I understand I! right, it's actually pretty reasonable.

Let S be the set {x | turing machine with godel number x halts on input x}.
Claim: no turing machine is a decision procedure for the set S.
Proof [a totally standard adaptation of the halting-problem impossibility proof]: Let such a turing machine M be given; by assumption, on any input x it halts with answer "yes" or "no". Construct a turing machine M' from M so that M' goes into a halting state whenever M says "no", and goes into an infinite loop whenever M says "yes". Take the godel number of M', call it n. Observe that n ∈ S iff n ∉ S, a contradiction.
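The construction of M' from M can be sketched in Python (a sketch with hypothetical names: `decides_halt` stands in for the assumed decider M, and programs are modeled as plain Python callables rather than godel numbers):

```python
def make_diag(decides_halt):
    """Build M' from an assumed total halting decider (= decides_halt).

    decides_halt(prog, inp) is HYPOTHETICAL: it is supposed to return True
    iff prog halts on input inp. No such total function exists; the point
    is that the constructed M' refutes any candidate you plug in.
    """
    def diag(prog):
        if decides_halt(prog, prog):   # M says "prog halts on itself"...
            while True:                # ...so M' loops forever
                pass
        else:
            return "halted"            # M says "loops", so M' halts
    return diag

# Whatever total decider you plug in is wrong about diag run on itself.
# E.g. a decider that always answers "loops" is refuted immediately:
always_loops = lambda prog, inp: False
diag = make_diag(always_loops)
print(diag(diag))  # "halted", contradicting the decider's verdict
```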

However, S is of course semi-decidable, so I can easily rig up a TM (based on a UTM) that halts on any input in S, and does not halt otherwise. This is the machine that no other machine can decide. So if I call this machine N, I can special-case
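A toy version of this semi-decider, simplified so that machines are modeled as zero-argument Python generator functions that yield once per computation step (my simplification, replacing godel numbers and self-application with direct simulation):

```python
def N(machine):
    """Semi-decide halting: return True iff the simulated machine halts.

    Machines are modeled as zero-argument generator functions yielding
    once per step; "running forever" means yielding forever.
    """
    for _ in machine():   # step the simulated machine to completion
        pass
    return True           # reached only if the simulation halted

def halts_after_three_steps():
    for _ in range(3):
        yield

print(N(halts_after_three_steps))  # True
# On a machine that never stops yielding, N simply never returns: that
# asymmetry (halting is detectable, non-halting is not) is exactly
# semi-decidability.
```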

∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))

to

∀y.(My ⇒ ¬DyN)

So now, by the reasoning that this is a "purely mathematical statement" (this is where my confidence in my intuition starts getting shaky), we have

[]∀y.(My ⇒ ¬DyN)

and, I guess, by the converse of Barcan's formula, (which perhaps is supposed to be even more evidently true?)

∀y. [](My ⇒ ¬DyN)

and so if we have ∀y.(My ⇒ []My) then, by K (distributing the box over the implication), we get the desired

∀y. (My ⇒ ¬◊DyN)

so that any entity which can even possibly decide the machine N (and I'm totally willing to believe in the logical possibility of the physical universe, and consequently the human brain (however unlikely I may think it to actually be the case) including halting-oracle-powerful mechanisms) is not a "machine" in the sense of satisfying the predicate M.

But I'm very concerned about this apparent hidden assumption ∀y.(My ⇒ []My). I might expect Bringsjord to argue that "oh! well! the notion of "machine" is purely mathematical, and so anything that is a machine in this world is also a machine in any possible world" but I think that's begging the question. This is because the objects under discussion are actually objects considered only for their computational power. The C=C thesis is not saying that every person is actually a finite-state read-write head shuffling around an infinite tape, but that every person has the abilities of a turing machine. Hence I think denying ∀y.(My ⇒ []My) is reasonable if you believe C=C, meaning the argument in this paper has gotten us nowhere. In fact, if you take the contrapositive of that implication, you get

∀y.(◊¬My ⇒ ¬My)
i.e. "for any entity y, if y is possibly super-turing, then y is indeed super-turing"

which, now that I come to it, I think is exactly the deadly gap in his argument. He hides this assumption under talking about "since I! is a mathematical truth", as if that licenses him to insert a modal operator into the proposition wherever he likes.

(no subject)

Date: 2005-04-08 08:51 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
It's not so clear to me what it means to be "possibly a machine". It's not always clear to me what philosophers mean when they say "necessarily".

Can I give the modality an epistemic reading? When I do, I agree with you that ∀y.(My ⇒ []My) is unreasonable.

I also dislike his use of equality, "=", to say that every person is a machine. I would be happier with a "simulates" relation. But since I believe that people are machines (though not *Turing* machines), it's fair enough.


I'm still suspicious of (2), or at least of his arguments for it. I believe that all his arguments for what people can do should also work as arguments for what machines can do (though his reasoning isn't clear enough for me to verify this). I sense that he's twisting the meaning of "possibly" somewhere.


But in any case, if (1) doesn't hold, then we no longer have the essential difference between people and machines, and I have no motivation to argue.

(no subject)

Date: 2005-04-10 03:42 am (UTC)
From: [identity profile] jcreed.livejournal.com
"being equal to a machine" here I think really means "being at most turing-powerful in terms of computations".

His notion of equality seems to be the result of "quotienting out" all entities in the domain we're considering by their computational power. So to be "equal to some machine" just means that you can compute as richly as some turing machine, which would include the possibility of universal turing machines.

(no subject)

Date: 2005-04-10 03:48 am (UTC)
From: [identity profile] jcreed.livejournal.com
I think the intended alethic ("necessary", "possibly") reading is the safest bet.

∀y.(My ⇒ []My) in this case means, "for any entity y, if it so happens in the world we live in that y's behavior can be simulated by a turing machine, then it is logically impossible that y's behavior could be super-turing; in other words, it is necessarily true that y's behavior is turing-simulatable."

This is objectionable to me as an assumption, because it rules out the conjunction of two things I find separately and jointly plausible:

(1) human beings are no better than turing machines at computing (i.e. are turing-simulatable)
(2) it's logically possible that humans are better than turing machines at computing (i.e. are super-turing)

My (2) here is coincidentally equivalent to Selmer's (2), which I think is probably true. I'm willing to imagine that it might be false, but I'm not sure how you'd show that in no universe could humans out-do turing machines, in the same sense that in no universe does 1=2.
