Bringsjord has worked out my Penrose-like idea, formalizing his argument in quantified modal logic.
So the Halting Problem (HP) becomes: for all TMs x, there exists a TM y such that x does not decide y.
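In symbols (my rendering, not necessarily Bringsjord's exact notation; D(x, y) abbreviates "x decides the halting behavior of y"):

    \forall x\, [TM(x) \rightarrow \exists y\, (TM(y) \wedge \neg D(x, y))]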
He shows that the assumptions:
(1) I!: There exists a Turing Machine M such that no Turing Machine can decide whether M halts. And *necessarily* so, since it's a mathematical theorem. (This seems wrong to me!)
(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.
(3) All people are machines.
lead to a contradiction. (The contradiction is trivial, but he goes through the formal steps of his modal logic.)
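Roughly, the clash goes like this (my own sketch in the notation above, with Person(s) for "s is a person" and the modal box/diamond; the paper does this formally):

    (1)  \exists m\, [TM(m) \wedge \Box\, \forall x\, (TM(x) \rightarrow \neg D(x, m))]
    (2)  \forall m\, [TM(m) \rightarrow \exists s\, (Person(s) \wedge \Diamond D(s, m))]
    (3)  \forall s\, [Person(s) \rightarrow TM(s)]

Fix the machine m* promised by (1). By (2) there is a person s with \Diamond D(s, m*). By (3), s is itself a TM, so (1) gives \Box \neg D(s, m*), i.e. \neg \Diamond D(s, m*). Contradiction.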
Of course, (2) is controversial. If I am a computer (which I believe I am), I would like to see my "Goedel sentence": which Turing Machine can I in principle not decide?
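There is at least a schematic answer: given any concrete decider, diagonalization manufactures the machine it gets wrong. A minimal Python sketch (my illustration, not from the paper; my_decider is a hypothetical stand-in for "me, viewed as a halting decider"):

    def diagonal(my_decider):
        # my_decider(prog) is assumed to return True iff prog() halts.
        def g():
            if my_decider(g):
                # The decider claims g halts, so loop forever and falsify it.
                while True:
                    pass
            # The decider claims g loops, so halt immediately and falsify it.
        return g

So in principle my "Goedel machine" is diagonal(me); the catch is that nobody can hand me "me" as a concrete transition table, which is part of why (2) stays controversial.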
The lesson from Goedel's incompleteness theorem is that you always need to pick more axioms.
Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way of thinking about this stuff, thanks to r6), you always need to change your Universal Turing Machine to one that decides more TMs.
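Standard computability theory has a precise version of this ladder (textbook material, not from the paper): a machine with an oracle for the ordinary halting problem decides strictly more TMs, but has an undecidable halting problem of its own, and so on up through the Turing jumps:

    \emptyset <_T \emptyset' <_T \emptyset'' <_T \cdots

Each level decides halting for the level below it, and never for itself.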
Then the question becomes: "But how?" To me, the paradox remains: if you have a systematic way of changing your Turing Machine simulator, then you're just running a meta-Turing Machine, which is just as susceptible to undecidability. You'll never be able to decide whether the halting construction for the meta-TM halts.
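To make the "systematic way" point concrete, a toy Python sketch (my illustration; improve is a hypothetical computable procedure that upgrades a decider):

    def tower_decider(prog, initial_decider, improve):
        # Ask the current decider; whenever it gives up (returns None),
        # let 'improve' hand us a stronger decider and retry.
        decider = initial_decider
        while True:
            verdict = decider(prog)
            if verdict is not None:
                return verdict
            decider = improve(decider)

If improve is computable, tower_decider is itself a single Turing Machine, and the diagonal construction above applies to it wholesale: diagonal(lambda p: tower_decider(p, h0, improve)), for any hypothetical base decider h0, is a program the whole tower still gets wrong.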
See Bringsjord, "A Modal Disproof of 'Strong' Artificial Intelligence", page 8.
Date: 2005-04-08 09:56 am (UTC)
So R would have to, for all inputs u, decide whether M halts on u. That seems awfully hard, don't you think? M itself can't decide whether it halts on u (in particular for the ones where it doesn't halt).
Date: 2005-04-08 10:26 am (UTC)
I haven't read the paper; I just assumed that you meant the usual halting problem for TMs in step (1).