Bringsjord has worked out my Penrose-like idea, formalizing his argument in quantified modal logic.
So the Halting Problem (HP) becomes: for all TMs x, there exists a TM y such that x does not decide y.
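For concreteness, here's the standard diagonal construction behind HP, sketched in Python. The names are mine, not Bringsjord's: `halts` is a toy stand-in for any claimed total decider (this one just guesses True), and a sentinel object stands in for "loop forever". Whatever `halts` predicts about `diagonal`, `diagonal` does the opposite, so no such decider can be correct:

```python
# A toy candidate decider: any total decider must answer True ("halts")
# or False ("loops") for every program; this one just guesses True.
def halts(program):
    return True

# Sentinel standing in for "loop forever" (we obviously can't loop
# forever in a demo).
LOOPS_FOREVER = object()

def diagonal():
    # Do the opposite of whatever `halts` predicts about *this* program.
    if halts(diagonal):
        return LOOPS_FOREVER   # predicted to halt -> "loop" instead
    return "halted"            # predicted to loop -> halt instead

# halts() predicted that diagonal halts, but diagonal "loops"; flipping
# the guess to False just makes it wrong the other way around.
print(diagonal() is LOOPS_FOREVER)  # -> True
```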
He shows that the assumptions:
(1) I!: There exists a Turing Machine M such that no Turing machine can decide whether it halts. And *necessarily* so, since it's a mathematical theorem. (This seems wrong to me!)
(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.
(3) All people are machines.
lead to a contradiction. (The contradiction is trivial, but he goes through the formal steps of his modal logic.)
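For what it's worth, here's a loose sketch of how (1)-(3) clash. This is my reconstruction, not Bringsjord's exact formalization (Tx = "x is a TM", Px = "x is a person", D(x,y) = "x decides y"); the modal bookkeeping, e.g. keeping the witness m fixed across worlds, is exactly where the argument can be contested:

```latex
\begin{align*}
\text{(1)}\quad & \Box\,\exists m\,\bigl(Tm \wedge \forall n\,(Tn \to \neg D(n,m))\bigr)\\
\text{(2)}\quad & \forall m\,\bigl(Tm \to \exists s\,(Ps \wedge \Diamond D(s,m))\bigr)\\
\text{(3)}\quad & \forall s\,(Ps \to Ts)
\end{align*}
```

Fix the undecidable m from (1). By (2) some person s has ◇D(s,m); by (3) s is a TM, so ◇∃n(Tn ∧ D(n,m)). But (1) gives □¬∃n(Tn ∧ D(n,m)), i.e. ¬◇∃n(Tn ∧ D(n,m)): contradiction.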
Of course, (2) is controversial. If I am a computer (which I believe I am), I would like to see my "Goedel sentence": which Turing Machine can I in principle not decide?
The lesson from Goedel's incompleteness theorem is that you always need to pick more axioms.
Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way of thinking about this stuff, thanks to r6), you always need to change your Universal Turing Machine to one that decides more TMs.
Then the question becomes: but how? To me, the paradox remains, because if you have a systematic way of changing your Turing Machine simulator, then you're just using a meta-Turing Machine, which is just as susceptible to undecidability: you'll never be able to decide whether the halting construction for the meta-TM halts.
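The regress can be made concrete with a sketch (all names mine, purely illustrative): even if you have a systematic `improve` step that upgrades your decider, the whole improve-and-retry loop is itself one fixed program, so the same diagonal move applies to *it*:

```python
# Hypothetical "pick more axioms" step: wraps a decider in a stronger one.
# Here it is a toy that resolves any "don't know" (None) to True.
def improve(decider):
    def stronger(program):
        verdict = decider(program)
        return True if verdict is None else verdict
    return stronger

# The meta-decider: keep improving until the current decider commits.
# Crucially, this whole loop is still just ONE computable procedure.
def meta_decide(program, decider, rounds=10):
    for _ in range(rounds):
        verdict = decider(program)
        if verdict is not None:
            return verdict
        decider = improve(decider)
    return None

LOOPS_FOREVER = object()  # stands in for "loop forever"

def base(program):
    return None  # the base decider knows nothing

def diagonal():
    # Diagonalize against the *meta* decider, not the base one.
    if meta_decide(diagonal, base):
        return LOOPS_FOREVER   # predicted to halt -> "loop" instead
    return "halted"            # predicted to loop -> halt instead

# meta_decide committed to True, and diagonal promptly refutes it.
print(diagonal() is LOOPS_FOREVER)  # -> True
```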
See Bringsjord - A Modal Disproof of "Strong" Artificial Intelligence (page 8)
(no subject)
Date: 2005-04-08 08:51 am (UTC)
Can I give the modality an epistemic reading? When I do, I agree with you that ∀y.(My ⇒ []My) is unreasonable.
I also dislike his use of equality, "=", to say that every person is a machine. I would be happier with a "simulates" relation. But since I believe that people are machines (though not *Turing* machines), it's fair enough.
I'm still suspicious of (2), or at least his arguments for it. I believe that all his arguments for what people can do should also be arguments for what machines can do (though his reasoning isn't clear enough to verify this). I sense that he's twisting the meaning of "possibly" somewhere.
But in any case, if (1) doesn't hold then we no longer have the essential difference between people and machines, and I have no motivation to argue.
(no subject)
Date: 2005-04-10 03:42 am (UTC)
His notion of equality seems to be the result of "quotienting out" all entities in the domain we're considering by their computational power. So to be "equal to some machine" just means that you can compute as richly as some Turing machine, which would include the possibility of universal Turing machines.
(no subject)
Date: 2005-04-10 03:48 am (UTC)
∀y.(My ⇒ []My) in this case means: "for any entity y, if it so happens in the world we live in that y's behavior can be simulated by a Turing machine, then it is logically impossible that y's behavior could be super-Turing; in other words, it is necessarily true that y's behavior is Turing-simulatable."
This is objectionable to me as an assumption, because it rules out the conjunction of two things I find separately and jointly plausible:
(1) human beings are no better than Turing machines at computing (i.e. are Turing-simulatable)
(2) it's logically possible that humans are better than Turing machines at computing (i.e. are super-Turing)
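Spelled out in the thread's notation (My = "y is Turing-simulatable", with h a human), the conjunction being ruled out is:

```latex
% The commenter's (1) and (2), taken together for a human h:
Mh \;\wedge\; \Diamond\neg Mh
% while the disputed assumption, instantiated at h, gives:
Mh \to \Box Mh, \qquad \text{hence } \Box Mh \equiv \neg\Diamond\neg Mh
```

so granting Mh forces ¬◇¬Mh, which directly contradicts ◇¬Mh: the assumption leaves no room to hold (1) and (2) at once.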
My (2) here is coincidentally equivalent to Selmer's (2), which I think is probably true. I'm willing to imagine that it might be false, but I'm not sure how you'd show that in no possible universe humans can out-do Turing machines, in the same sense that in no universe, say, 1 = 2.