Bringsjord has worked out my Penrose-like idea, formalizing his argument in quantified modal logic.
So the Halting Problem (HP) becomes: for all TMs x, there exists a TM y such that x does not decide y.
He shows that the assumptions:
(1) I!: There exists a Turing Machine M, such that no Turing machine can decide whether it halts. And *necessarily* so, since it's a mathematical theorem. (this seems wrong to me!)
(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.
(3) All people are machines.
lead to a contradiction. (the contradiction is trivial, but he goes through the formal steps of his modal logic)
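Roughly, in the notation that comes up in the comments below (M = "is a machine", P = "is a person", Dxy = "x decides y"; this is my paraphrase of the paper's formalization, not a quote), the shape of the argument is:

(1) []∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))
(2) ∀x.(Mx ⇒ ∃s.(Ps ∧ ◊Dsx))
(3) ∀s.(Ps ⇒ Ms)

Let N be the machine promised by (1) (how the box and the quantifiers interact here is one of the sticking points discussed below). By (2) there is a person S with ◊DSN; by (3), MS. But from (1), after some modal shuffling, one gets ∀y.(My ⇒ ¬◊DyN), hence ¬◊DSN, a contradiction.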
Of course, (2) is controversial. If I am a computer (which I believe I am), I would like to see my "Goedel sentence": which Turing Machine can I in principle not decide?
The lesson from Goedel's incompleteness theorem is that you always need to pick more axioms.
Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way for thinking about this stuff, thanks to r6), you always need to change your Universal Turing Machine to one that decides more TMs.
Then the question becomes: "But how?". To me, the paradox remains, because if you have a systematic way of changing your Turing Machine simulator, then you're just using a meta-Turing Machine which is just as susceptible to undecidability: you'll never be able to decide whether the halting construction for meta-TM halts.
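A toy sketch of that regress (my own illustration, nothing from the paper): suppose base_decider is your current partial halting-decider and upgrade is the systematic rule for producing the next one; both names are hypothetical. Their composition is just one more program, so the usual diagonal argument applies to the composite exactly as it applied to the original.

    # Hypothetical interface: base_decider(k, machine) returns True ("halts"),
    # False ("doesn't halt") or None ("don't know") for the k-th decider in the
    # series, and upgrade(k) is the systematic rule that moves to the next one.
    def meta_decider(machine, base_decider, upgrade, max_upgrades=1000):
        k = 0
        for _ in range(max_upgrades):
            verdict = base_decider(k, machine)
            if verdict is not None:   # the current decider knows the answer
                return verdict
            k = upgrade(k)            # "change your UTM" systematically
        return None                   # still undecided after all the upgrades

    # meta_decider is itself one (partial) decider, so the halting problem's
    # diagonal argument applies to it just as it did to the original decider.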
See Bringsjord - A Modal Disproof of "Strong" Artificial Intelligence (page 8)
For the record, my comment at JohnnyLogic's
Date: 2005-04-07 09:33 pm (UTC)
In response to his linking to Jordan Ellenberg's article "Does Gödel Matter?":
I wonder why Goedel's incompleteness theorem seems so significant to people. I used to be fascinated by it myself (although, to be honest, it still puzzles me sometimes that no TM can prove the (objective) truths about all TMs (i.e. whether they halt), although humans can seemingly do it. Am I starting to sound like Penrose? I hope not, since I "believe" that humans' computational powers are no different than computers).
My guess is that people have a straw man of the formalist program, i.e. they think that some people would believe that all mathematical truths can be decided by one axiomatic system. And when this is shown to be impossible, they consider logicism and all similar ideas to be doomed. Maybe it's a case of "associative thinking".
Posted by Gustavo Lacerda at March 22, 2005 07:38 PM
Re: For the record, my comment at JohnnyLogic's
Date: 2005-04-07 11:32 pm (UTC)
It's a clue, not a limitation. Of course, I can't prove it, and it assumed rather a lot, but theoretically, it should be possible.
Re: For the record, my comment at JohnnyLogic's
Date: 2005-04-08 12:00 am (UTC)
The contradiction falls apart when you eliminate the hubris of point 2.
(no subject)
Date: 2005-04-07 11:23 pm (UTC)
I'm trying to give him the benefit of doubt that he's making a real point through all this muddle, but I'm having a very hard time doing it. His assumption (2) I worried about at first, but I'm quite willing to agree with it, because of the modal operator. Surely he thinks he's getting more mileage out of it than I think he is, because all I see that follows is that it's logically possible that strong AI fails.
(no subject)
Date: 2005-04-07 11:50 pm (UTC)
I agree, interpreting the parentheses conjunctively. And if you omit the stuff in the parentheses, the relation is "semi-decides" (but that's not very strong (just simulate it forever, if that may be the case)).
If you believe in classical logic (and I think Selmer clearly does from his definition of necessity as not-possibility-not, though I seem to recall that you yourself don't(?))
Heh. Modal logic is not normally considered "classical logic".
But I'm definitely not an intuitionist if that's what you mean (far from it!).
how can "for all M* and w there is no TM that decides whether M* halts on w" possibly be true? If you fix both the machine and the input, isn't the answer just a single "yes" or "no" answer?
...
then the turing machine definitely either halts or not, and consequently either the constantly-YES or constantly-NO function "decides" this fact.
Yes! this is why I said assumption 1 seems wrong.
Do you believe I!, btw? It seems weird, and I'd like to see a construction.
(no subject)
Date: 2005-04-08 04:24 am (UTC)
Let S be the set {x | turing machine with godel number x halts on input x}.
Claim: no turing machine is a decision procedure for the set S.
Proof [totally standard adaptation of the halting-problem impossibility proof]: Let such a turing machine M be given; by assumption, on any input x it halts with answer "yes" or "no". Construct a turing machine M' from M so that M' goes into a halting state whenever M says "no", and goes into an infinite loop whenever M says "yes". Take the godel number of M', call it n. Observe that n is in S iff n is not in S, a contradiction.
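The same construction in Python-flavored form (a sketch only; halts stands in for the assumed decider M, a total function that cannot actually exist, which is the point):

    # `halts(code, x)` plays the role of M: it supposedly answers whether the
    # machine with godel number `code` halts on input x.
    def make_m_prime(halts):
        def m_prime(x):
            if halts(x, x):       # M says "yes": machine x halts on input x
                while True:       # ... so M' loops forever
                    pass
            else:                 # M says "no"
                return "halted"   # ... so M' halts
        return m_prime

    # Feeding m_prime its own godel number n yields: n is in S iff n is not in S.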
However, S is of course semi-decidable, so I can easily rig up a TM (based on a UTM) that has the behavior of halting on any input that is in S, and not halting otherwise. This is the machine that no other machine can decide. So if I call this machine N, I can special case
∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))
to
∀y.(My ⇒ ¬DyN)
So now, by the reasoning that this is a "purely mathematical statement", at which point my confidence in my intuition starts getting shaky, we have
[]∀y.(My ⇒ ¬DyN)
and, I guess, by the converse of Barcan's formula, (which perhaps is supposed to be even more evidently true?)
∀y.[](My ⇒ ¬DyN)
and so if we have ∀y.(My ⇒ []My) then we get by K the desired
∀y.(My ⇒ ¬◊DyN)
so that any entity which can even possibly decide the machine N (and I'm totally willing to believe in the logical possibility of the physical universe, and consequently the human brain (however unlikely I may think it to actually be the case) including halting-oracle-powerful mechanisms) is not a "machine" in the sense of satisfying the predicate M.
But I'm very concerned about this apparent hidden assumption ∀y.(My ⇒ []My). I might expect Bringsjord to argue that "oh! well! the notion of "machine" is purely mathematical, and so anything that is a machine in this world is also a machine in any possible world" but I think that's begging the question. This is because the objects under discussion are actually objects considered only for their computational power. The C=C thesis is not saying that every person is actually a finite-state read-write head shuffling around an infinite tape, but that every person has the abilities of a turing machine. Hence I think denying ∀y.(My ⇒ []My) is reasonable if you believe C=C, meaning the argument in this paper has gotten us nowhere. In fact, if you take the contrapositive of that implication, you get
∀y.(◊¬My ⇒ ¬My)
i.e. "for any entity y, if y is possibly super-turing, then y is indeed super-turing"
which, now that I come to it, I think is exactly the deadly gap in his argument. He hides this assumption under talking about "since I! is a mathematical truth", as if that licenses him to insert a modal operator into the proposition wherever he likes.
(no subject)
Date: 2005-04-08 08:51 am (UTC)
Can I give the modality an epistemic reading? When I do, I agree with you that ∀y.(My ⇒ []My) is unreasonable.
I also dislike his use of equality, "=" to say that every person is a machine. I would be happier with a "simulates" relation. But since I believe that people are machines (though not *Turing* machines), it's fair enough.
I'm still suspicious of (2), or at least his arguments for it. I believe that all his arguments for what people can do should also be arguments for what machines can do (though his reasoning isn't clear enough to verify this). I sense that he's twisting the meaning of "possibly" somewhere.
But in any case, if (1) doesn't hold then we no longer have the essential difference between people and machines, and I have no motivation to argue.
(no subject)
Date: 2005-04-10 03:42 am (UTC)
His notion of equality seems to be the result of "quotienting out" all entities in the domain we're considering by their computational power. So to be "equal to some machine" just means that you can compute as richly as some turing machine, which would include the possibility of universal turing machines.
(no subject)
Date: 2005-04-10 03:48 am (UTC)
∀y.(My ⇒ []My) in this case means, "for any entity y, if it so happens in the world we live in that y's behavior can be simulated by a turing machine, then it is logically impossible that y's behavior could be super-turing; in other words, it is necessarily true that y's behavior is turing-simulatable."
This is objectionable to me as an assumption, because it rules out the conjunction of two things I find separately and jointly plausible:
(1) human beings are no better than turing machines at computing (i.e. are turing-simulatable)
(2) it's logically possible that humans are better than turing machines at computing (i.e. are super-turing)
My (2) here is coincidentally equivalent to Selmer's (2), which I think is probably true. I'm willing to imagine that it might be false, though, but I'm not sure how you'd show that in no universe it could be possible that humans can out-do turing machines in the same sense that in no universe, say, 1=2.
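A toy counter-model to ∀y.(My ⇒ []My), just to illustrate that denying it is coherent (my own example): take two worlds, w (the actual one) and v, with v accessible from w, and a single entity a that is turing-simulatable in w but super-turing in v.

At w: Ma.    At v: ¬Ma.

Then at w we have Ma ∧ ◊¬Ma, so My ⇒ []My fails for y = a, while (1) and (2) above both hold at w.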
(no subject)
Date: 2005-04-08 08:24 am (UTC)
Can you quantify the "w"? Did you mean "u"?
(no subject)
Date: 2005-04-07 11:57 pm (UTC)
To me, given machines x,y, []¬Dxy seems equivalent to ¬Dxy.
After all, they are fixed mathematical objects.
So assuming I!, he could de-necessitate, and then use the above equivalence (if true) to prove (1).
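Spelled out, the suggested route would be (my reading):

[]∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))      (I!)
∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))        (de-necessitate, i.e. []p ⇒ p)
∃x.(Mx ∧ ∀y.(My ⇒ []¬Dyx))      (the claimed equivalence ¬Dyx iff []¬Dyx)
∃x.(Mx ∧ ∀y.(My ⇒ ¬◊Dyx))       ([]¬ is ¬◊)

which is the ¬◊ form the rest of the argument needs.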
(no subject)
Date: 2005-04-08 12:05 am (UTC)
Of course, you could question whether this logic (L_QML) models correct reasoning.
(no subject)
Date: 2005-04-08 03:59 am (UTC)
I'm also not convinced Barcan gives you
[]∃x.Px ⇒ ∃x.[]Px
but I haven't thought about it too hard. But semantically, it seems plausible to my sleep-deprived brain that a witness for the existential should exist in all accessible worlds, but a different witness in each world, so that it's not the case that there exists a single x such that, in all accessible worlds, P holds of that x.
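A concrete two-world version of that intuition (my own toy model): let w see v1 and v2, with P holding only of a at v1 and only of b at v2. Then ∃x.Px holds at every world w sees, so []∃x.Px holds at w; but []Pa fails (Pa is false at v2) and []Pb fails (Pb is false at v1), so ∃x.[]Px fails at w.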
(no subject)
Date: 2005-04-08 08:10 am (UTC)
I am skeptical of any a priori argument which purports to prove something like "whether people are computers", because this question is about the real world, and therefore contingent.
It's like using mathematics to prove that "time is money":
http://ukrobotics.com/html/humour.htm
Yet, I believe such logical arguments sometimes give good insights... We often don't realize how many interesting consequences our beliefs can have, if we take them together and make logical deductions from them.
(no subject)
Date: 2005-04-08 08:28 am (UTC)
Correction: "where is Bringsjord getting that from?" In the paper he makes a 5-point argument for this.
Hmmm... I just took a look at his 5 points; I don't understand some of them... but some of them definitely look like bs to me. Nothing looks particularly compelling.
#1 and #4 in particular remind me of your "time is money" example: If some TM is a non-halter, there must be a specific reason why this is so; and that reason can, at least in principle, be divined by a person.
Aside from the fact that this is complete hand-waving with nothing backed up, couldn't he just substitute the word "person" for TM to have a statement which appears equally valid for Turing machines? (which contradicts everything else in his "proof")
I don't see that there's really any question that humans are computers... the only question is whether they can be modeled with classical or quantum Turing machines. From what I hear, the answer is almost certainly that they are classical Turing machines. And as far as I know, even quantum Turing machines can't get around the halting problem so that wouldn't help either. What other possibilities are there? That humans are some kind of supernatural force that processes information without adhering to any laws of logic?
(no subject)
Date: 2005-04-09 03:18 am (UTC)
Aside from the fact that this is complete hand-waving with nothing backed up, couldn't he just substitute the word "person" for TM to have a statement which appears equally valid for Turing machines? (which contradicts everything else in his "proof")
I intended to write 'substitute TM for the word "person"'... but my dyslexia got the better of me. In other words, I was saying he could have just as easily written "there must be a specific reason why this is so, and that reason can in principle be derived by a TM"
And girls are evil
Date: 2005-04-08 08:48 am (UTC)
girls = time x money
But time is money:
time = money
Money is the root of all evil:
money = sqrt(evil).
Therefore girls are evil:
girls = time x money = money x money = money^2 = sqrt(evil)^2 = evil
QED
--Sebastian
(no subject)
Date: 2005-04-08 09:12 am (UTC)
Now construct a very simple Turing machine R which accepts a program as input and, if that program is M, outputs "THIS PROGRAM HALTS" and then halts itself. Otherwise it just goes into an infinite loop.
Clearly the program for M is finite and so R exists. Furthermore, R will correctly decide if the Turing machine M halts. In other cases it simply doesn't know and so we have no contradiction with the halting problem.
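The same idea in code (a sketch; M_SOURCE is a placeholder for the finite program text of M):

    M_SOURCE = "...the finite program text of M..."   # placeholder

    def R(program):
        if program == M_SOURCE:
            return "THIS PROGRAM HALTS"   # special case: the one answer R knows
        while True:                       # any other input: loop forever
            pass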
(no subject)
Date: 2005-04-08 09:56 am (UTC)
So R would have to, for all inputs u, decide whether M halts on u. That seems awfully hard, don't you think? M itself can't decide whether it halts on u (in particular for the ones where it doesn't halt).
(no subject)
Date: 2005-04-08 10:26 am (UTC)
I haven't read the paper, I just assumed that you meant the usual halting problem for TMs in step (1).
(no subject)
Date: 2005-04-08 06:06 pm (UTC)
I'm not sure what it means to "decide if a turing machine halts", but I will comment anyways.
This is (necessarily) false. Pick any Turing Machine M. M either halts or does not halt.
QED
If a Turing Machine M does not halt then it is impossible for a person to know that it does not halt. The best they can do is say that M does not halt assuming some given logical system is sound. (2) is either true or false depending on what "decide" means.
This is true.
The statement is not correct. Axiom systems are not Universal Turing Machines. Axiom systems produce recursively enumerable lists of Turing Machines that (hopefully) halt on all inputs. It is impossible to know if any item in this list is correct or not.
(no subject)
Date: 2005-04-08 11:50 pm (UTC)
1. Suppose M halts. Then the turing machine that prints "M halts" correctly decides that M halts.
2. Suppose M does not halt. Then the turing machine that prints, “M does not halt” correctly decides that M does not halt.
QED
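In code, the two trivial candidates look like this (a sketch; exactly one of them is the correct decider for M, even though we may have no way to tell which):

    def says_halts(m):
        return "M halts"              # the constantly-YES machine

    def says_does_not_halt(m):
        return "M does not halt"      # the constantly-NO machine

    # One of these two machines correctly decides whether M halts; the halting
    # problem only rules out a single uniform machine that is right for every M.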
As I was saying to
(no subject)
Date: 2005-04-09 09:18 am (UTC)
Does "N decides if a turing machine M halts" mean N is a machine that outputs 0 or 1 on input x depending on whether M halts or not? In that case (1) is true, and (2) is false. It is not possible for people to decide when a UTM will not halt.
Actually I am in about the same position as Bringsjord and Arkoudas. (1), (2) and (3) are clearly incompatible. We all believe (1). They think (2) is more likely than (3), and I think (3) is more likely than (2). Neither of us can prove our case yet. Perhaps someday we will have a model of physics that will entail (3). Or (unlikely) perhaps someday we will have an example of (2), or rather a device that can solve the halting problem. In this case we will just redefine computation to include this device, and be in the same position as we are now.
(no subject)
Date: 2005-04-09 09:30 am (UTC)
Apparently, there's a mistake in pushing the [] through the quantifiers and implications.
(no subject)
Date: 2005-04-09 12:01 am (UTC)
Can you correct the statement so that it says something meaningful?
Whenever our UTM runs into trouble proving whether a certain TM halts, we could create a new UTM with the true value of that special case added (which is analogous to adding independent axioms to an axiom system when an incompleteness is found, except that with axiom systems there is no "true value")
Axiom systems produce recursively enumerable lists of Turing Machines that (hopefully) halt on all inputs.
Axiom systems produce an r.e. list of theorems. Do theorems somehow correspond to TMs?
It is impossible to know if any item in this list is correct or not.
I don't know what you mean by this.
(no subject)
Date: 2005-04-09 09:35 am (UTC)
You probably want to say, "If you're a Turing Machine whose mission in life is to decide whether Turing Machines are total (i.e., halt on all inputs), and all you have is a program that halts on input x if the xth TM is (hopefully) total, then you always need to change your program to one that halts on more total TMs."
As an aside, Gödel sentences are always Π1, but the more interesting problems are Π2. So not only is adding more and more Gödel sentences insufficient to complete your axiom system, you probably don't make much progress in proving more Π2 sentences. Adding inaccessible cardinal axioms does give you more Π2 sentences.
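For reference, the standard forms (in the usual arithmetical-hierarchy notation):

Π1: ∀x φ(x), with φ decidable            (e.g. a Gödel/consistency sentence)
Π2: ∀x ∃y ψ(x, y), with ψ decidable      (e.g. "the e-th TM halts on every input")

So "this TM is total" sits one quantifier alternation higher than a Gödel sentence.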