[personal profile] gusl
Bringsjord has worked out my Penrose-like idea, formalizing his argument in quantified modal logic.

So the Halting Problem (HP) becomes: for all TMs x, there exists a TM y such that x does not decide y.
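In the predicate notation used later in this thread (M for "is a machine", D for "decides"), that would read something like:

∀x.(Mx ⇒ ∃y.(My ∧ ¬Dxy))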

He shows that the assumptions:

(1) I!: There exists a Turing Machine M, such that no Turing machine can decide whether it halts. And *necessarily* so, since it's a mathematical theorem. (this seems wrong to me!)
(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.
(3) All people are machines.

lead to a contradiction. (the contradiction is trivial, but he goes through the formal steps of his modal logic)


Of course, (2) is controversial. If I am a computer (which I believe I am), I would like to see my "Goedel sentence": which Turing Machine can I in principle not decide?

The lesson from Goedel's incompleteness theorem is that you always need to pick more axioms.

Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way for thinking about this stuff, thanks to [livejournal.com profile] r6), you always need to change your Universal Turing Machine to one that decides more TMs.

Then the question becomes: "But how?" To me, the paradox remains, because if you have a systematic way of changing your Turing Machine simulator, then you're just using a meta-Turing Machine, which is just as susceptible to undecidability: you'll never be able to decide whether the halting construction for the meta-TM halts.
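A toy sketch of the regress in Python, with base_decider and improve as hypothetical stand-ins (neither can actually exist in a complete form):

    def base_decider(program, tape):
        return None  # stand-in: an incomplete halting analysis ("don't know")

    def improve(decider):
        return decider  # stand-in: some systematic improvement scheme

    def meta_decider(program, tape):
        # The whole loop below is itself just one Turing Machine,
        # so the usual diagonalization applies to it as well.
        d = base_decider
        while True:
            ans = d(program, tape)
            if ans is not None:
                return ans
            d = improve(d)  # with these stubs, this spins forever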

See Bringsjord - A Modal Disproof of "Strong" Artificial Intelligence (page 8)

For the record, my comment at JohnnyLogic's

Date: 2005-04-07 09:33 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
http://crumpled.com/cgi-bin/mt/mt-comments.cgi?entry_id=707

in response to his linking to Jordan Ellenberg's article "Does Gödel Matter?"

I wonder why Goedel's incompleteness theorem seems so significant to people. I used to be fascinated by it myself (although, to be honest, it still puzzles me sometimes that no TM can prove the (objective) truths about all TMs (i.e. whether they halt), while humans can seemingly do it. Am I starting to sound like Penrose? I hope not, since I "believe" that humans' computational powers are no different from those of computers).

My guess is that people have a straw man of the formalist program, i.e. they think that some people would believe that all mathematical truths can be decided by one axiomatic system. And when this is shown to be impossible, they consider logicism and all similar ideas to be doomed. Maybe it's a case of "associative thinking".
Posted by Gustavo Lacerda at March 22, 2005 07:38 PM

(no subject)

Date: 2005-04-07 11:23 pm (UTC)
From: [identity profile] jcreed.livejournal.com
I am completely baffled by Bringsjord's use of the predicate "decides". I thought that "machine X decides machine Y" meant "for all input u, machine X on input w says yes (respectively says no) if machine Y on input u halts (respectively does not halt)" but this seems possibly at odds with the lines at the top of page 5, which surprisingly includes no u whatsoever on the left side of the "iff"s. But I'm completely lost in footnote 4 at the bottom of page 5: how can "for all M* and w there is no TM that decides whether M* halts on w" possibly be true? If you fix both the machine and the input, isn't the answer just a single "yes" or "no" answer? If you believe in classical logic (and I think Selmer clearly does from his definition of necessity as not-possibility-not, though I seem to recall that you yourself don't(?)) then the turing machine definitely either halts or not, and consequently either the constantly-YES or constantly-NO function "decides" this fact.

I'm trying to give him the benefit of the doubt that he's making a real point through all this muddle, but I'm having a very hard time doing it. His assumption (2) I worried about at first, but I'm quite willing to agree with it, because of the modal operator. Surely he thinks he's getting more mileage out of it than I think he is, because all I see that follows is that it's logically possible that strong AI fails.

(no subject)

Date: 2005-04-07 11:31 pm (UTC)
From: [identity profile] jcreed.livejournal.com
On second reading, I think the sleight-of-hand occurs on formula (1) on page 8. I don't see how he can commute box with exists, much less push it all the way inside the implication there. Am I crazy, or does I! not entail (1) at all?

Re: For the record, my comment at JohnnyLogic's

Date: 2005-04-07 11:32 pm (UTC)
From: [identity profile] hermesbrain.livejournal.com
People also used to think that the world could be completely defined by Euclid's version of geometry, and pre-Pascal/Fermat, that all events are random or decided by the gods/winds/whathaveyou and could never be predicted to any degree -- Goedel's theorem likely applies because mathematics is missing a key concept, of some kind, IMHO.

It's a clue, not a limitation. Of course, I can't prove it, and it assumes rather a lot, but theoretically, it should be possible.

(no subject)

Date: 2005-04-07 11:50 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I thought that "machine X decides machine Y" meant "for all input u, machine X on input w says yes (respectively says no) if machine Y on input u halts (respectively does not halt)"

I agree, interpreting the parentheses conjunctively. And if you omit the stuff in the parentheses, the relation is "semi-decides" (but that's not very strong: just simulate it, running forever if that's what it does).


If you believe in classical logic (and I think Selmer clearly does from his definition of necessity as not-possibility-not, though I seem to recall that you yourself don't(?))

Heh. Modal logic is not normally considered "classical logic".
But I'm definitely not an intuitionist if that's what you mean (far from it!).


how can "for all M* and w there is no TM that decides whether M* halts on w" possibly be true? If you fix both the machine and the input, isn't the answer just a single "yes" or "no" answer?
...
then the turing machine definitely either halts or not, and consequently either the constantly-YES or constantly-NO function "decides" this fact.

Yes! This is why I said assumption (1) seems wrong.



Do you believe I!, btw? It seems weird, and I'd like to see a construction.

(no subject)

Date: 2005-04-07 11:57 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
It doesn't seem valid to push the box inside the implication, yet...

To me, given machines x, y, []¬Dxy seems equivalent to ¬Dxy.

After all, they are fixed mathematical objects.

So assuming I!, he could de-necessitate, and then use the above equivalence (if true) to prove (1).
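Schematically, guessing at the exact form of I!, the route would be:

[]∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx)) (I!)

∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx)) (de-necessitate, by the T axiom []φ ⇒ φ)

∃x.(Mx ∧ ∀y.(My ⇒ []¬Dyx)) (re-box ¬Dyx, by the claimed equivalence)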

Re: For the record, my comment at JohnnyLogic's

Date: 2005-04-08 12:00 am (UTC)
From: [identity profile] ernunnos.livejournal.com
I don't think humans can do it, seemingly or otherwise.

The contradiction falls apart when you eliminate the hubris of point 2.

(no subject)

Date: 2005-04-08 12:05 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
by the way, the K axiom and the Barcan formula allow you to push the box [] through implications and quantifiers respectively (pages 7 and 8).

Of course, you could question whether this logic (L_QML) models correct reasoning.

Re: For the record, my comment at JohnnyLogic's

Date: 2005-04-08 02:02 am (UTC)
From: [identity profile] hermesbrain.livejournal.com
That's another possible point of validity I'd consider. Where's the logical proof that humans can do it? Certainly not impossible according to Gödel's theorem, as logic isn't a means to express the natural numbers...

(no subject)

Date: 2005-04-08 02:07 am (UTC)
From: [identity profile] spoonless.livejournal.com
People can decide whether all Turing machines will halt? Where are you getting that from? If it's not possible for a Turing machine, then it's obviously not possible for a person.

(no subject)

Date: 2005-04-08 03:59 am (UTC)
From: [identity profile] jcreed.livejournal.com
K does not allow you to go from [](A ⇒ B) to A ⇒ []B!
I'm also not convinced Barcan gives you
[]∃x.Px ⇒ ∃x.[]Px
but I haven't thought about it too hard. But semantically, it seems plausible to my sleep-deprived brain that a witness for the existential should exist in all accessible worlds, but a different witness in each world, so that it's not the case that there exists a single x such that, in all accessible worlds, P holds of that x.
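For reference, the standard statements as I understand them:

K: [](A ⇒ B) ⇒ ([]A ⇒ []B)

Barcan: ∀x.[]Px ⇒ []∀x.Px

Converse Barcan: []∀x.Px ⇒ ∀x.[]Px

So K only distributes a box over an implication when the antecedent is itself boxed, it's the converse Barcan formula that pushes a box inward past a universal quantifier, and []∃x.Px ⇒ ∃x.[]Px is yet another principle (Barcan is equivalent to the diamond version, ◊∃x.Px ⇒ ∃x.◊Px).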

(no subject)

Date: 2005-04-08 04:24 am (UTC)
From: [identity profile] jcreed.livejournal.com
If I understand I! right, it's actually pretty reasonable.

Let S be the set {x | turing machine with godel number x halts on input x}.
Claim: no turing machine is a decision procedure for the set S.
Proof [totally standard adaptation of halting-problem impossibility proof]: Let such a turing machine M be given; by assumption, on any input x it halts with answer "yes" or "no". Construct a turing machine M' from M so that M' goes into a halting state whenever M says "no", and goes into an infinite loop whenever M says "yes". Take the godel number of M', call it n. Observe that n in S iff n not in S, a contradiction.
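A quick Python rendering of that construction; decide_S stands in for the supposed decider M, which of course cannot exist as a total function:

    def make_M_prime(decide_S):
        # decide_S(x) is supposed to return True iff the TM with
        # godel number x halts on input x (i.e. iff x is in S).
        def M_prime(x):
            if decide_S(x):    # M says "yes": M' loops forever
                while True:
                    pass
            else:              # M says "no": M' halts
                return
        return M_prime

    # Feeding M' its own godel number n yields: n in S iff n not in S.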

However, S is of course semi-decidable, so I can easily rig up a TM (based on a UTM) that has the behavior of halting on any input that is in S, and not halting otherwise. This is the machine that no other machine can decide. So if I call this machine N, I can special case

∃x.(Mx ∧ ∀y.(My ⇒ ¬Dyx))

to

∀y.(My ⇒ ¬DyN)

So now, by the reasoning that this is a "purely mathematical statement", at which point my confidence in my intuition starts getting shaky, we have

[]∀y.(My ⇒ ¬DyN)

and, I guess, by the converse of Barcan's formula, (which perhaps is supposed to be even more evidently true?)

∀y.[](My ⇒ ¬DyN)

and so if we have ∀y.(My ⇒ []My) then we get by K the desired

∀y.(My ⇒ ¬◊DyN)

so that any entity which can even possibly decide the machine N (and I'm totally willing to believe in the logical possibility of the physical universe, and consequently the human brain, including halting-oracle-powerful mechanisms, however unlikely I think that actually is) is not a "machine" in the sense of satisfying the predicate M.

But I'm very concerned about this apparent hidden assumption ∀y.(My ⇒ []My). I might expect Bringsjord to argue that "oh! well! the notion of "machine" is purely mathematical, and so anything that is a machine in this world is also a machine in any possible world" but I think that's begging the question. This is because the objects under discussion are actually objects considered only for their computational power. The C=C thesis is not saying that every person is actually a finite-state read-write head shuffling around an infinite tape, but that every person has the abilities of a turing machine. Hence I think denying ∀y.(My ⇒ []My) is reasonable if you believe C=C, meaning the argument in this paper has gotten us nowhere. In fact, if you take the contrapositive of that implication, you get

∀y.(◊¬My ⇒ ¬My)
i.e. "for any entity y, if y is possibly super-turing, then y is indeed super-turing"

which, now that I come to it, I think is exactly the deadly gap in his argument. He hides this assumption under talking about "since I! is a mathematical truth", as if that licenses him to insert a modal operator into the proposition wherever he likes.

(no subject)

Date: 2005-04-08 08:10 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Correction: "where is Bringsjord getting that from?" In the paper he makes a 5-point argument for this.

I am skeptical of any a priori argument which purports to prove something like "whether people are computers", because this question is about the real world, and therefore contingent.
It's like using mathematics to prove that "time is money":

http://ukrobotics.com/html/humour.htm

Postulate 1: Knowledge is Power.
Postulate 2: Time is Money.
As every engineer knows: Power = Work / Time.

Since: Knowledge = Power,
then Knowledge = Work / Time,
and Time = Money,
then Knowledge = Work / Money.

Solving for Money, we get: Money = Work / Knowledge.



Yet, I believe such logical arguments sometimes give good insights... We often don't realize how many interesting consequences our beliefs can have, if we take them together and make logical deductions from them.

(no subject)

Date: 2005-04-08 08:24 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I thought that "machine X decides machine Y" meant "for all input u, machine X on input w says yes (respectively says no) if machine Y on input u halts (respectively does not halt)"

can you quantify the "w"? Did you mean "u"?

(no subject)

Date: 2005-04-08 08:28 am (UTC)
From: [identity profile] spoonless.livejournal.com

Correction: "where is Bringsjord getting that from?" In the paper he makes a 5-point argument for this.

Hmmm... I just took a look at his 5 points; I don't understand some of them... but some of them definitely look like bs to me. Nothing looks particularly compelling.

#1 and #4 in particular remind me of your "time is money" example: If some TM is a non-halter, there must be a specific reason why this is so; and that reason can, at least in principle, be divined by a person.

Aside from the fact that this is complete hand-waving with nothing backed up, couldn't he just substitute the word "person" for TM to have a statement which appears equally valid for Turing machines? (which contradicts everything else in his "proof")

I don't see that there's really any question that humans are computers... the only question is whether they can be modeled with classical or quantum Turing machines. From what I hear, the answer is almost certainly that they are classical Turing machines. And as far as I know, even quantum Turing machines can't get around the halting problem so that wouldn't help either. What other possibilities are there? That humans are some kind of supernatural force that processes information without adhering to any laws of logic?

And girls are evil

Date: 2005-04-08 08:48 am (UTC)
From: (Anonymous)
Girls cost time and money:
girls = time x money

But time is money:
time = money

Money is the root of all evil:
money = sqrt(evil).

Therefore girls are evil:
girls = time x money = money x money = money^2 = sqrt(evil)^2 = evil
QED

--Sebastian

(no subject)

Date: 2005-04-08 08:51 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
It's not so clear to me what it means to be "possibly a machine". It's not always clear to me what philosophers mean when they say "necessarily".

Can I give the modality an epistemic reading? When I do, I agree with you that ∀y.(My ⇒ []My) is unreasonable.

I also dislike his use of equality, "=", to say that every person is a machine. I would be happier with a "simulates" relation. But since I believe that people are machines (though not *Turing* machines), it's fair enough.


I'm still suspicious of (2), or at least his arguments for it. I believe that all his arguments for what people can do should also be arguments for what machines can do (though his reasoning isn't clear enough to verify this). I sense that he's twisting the meaning of "possibly" somewhere.


But in any case, if (1) doesn't hold then we no longer have the essential difference between people and machines, and I have no motivation to argue.

(no subject)

Date: 2005-04-08 09:12 am (UTC)
From: [identity profile] mathemajician.livejournal.com
I don't understand (1). Let's assume that we have this Turing machine M which halts.

Now construct a very simple Turing machine R which accepts a program as input, and if that program is M, it outputs "THIS PROGRAM HALTS" and then halts itself. Otherwise it just goes into an infinite loop.

Clearly the program for M is finite, and so R exists. Furthermore, R will correctly decide whether the Turing machine M halts. In other cases it simply doesn't know, and so we have no contradiction with the halting problem.
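In Python, roughly, with M_SOURCE as a placeholder for the finite program text of M:

    M_SOURCE = "..."  # placeholder: the (finite) program text of M

    def R(program):
        if program == M_SOURCE:
            print("THIS PROGRAM HALTS")
            return     # halt with the answer
        while True:    # any other input: loop forever
            pass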

(no subject)

Date: 2005-04-08 09:26 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
So R *semi-decides* M, since it doesn't give back an answer when M doesn't halt.

[livejournal.com profile] jcreed gave a construction above of a TM that no TM can decide (not even itself!).

(no subject)

Date: 2005-04-08 09:34 am (UTC)
From: [identity profile] mathemajician.livejournal.com
But M always halts. This is our assumption, right? That M halts.

(no subject)

Date: 2005-04-08 09:36 am (UTC)
From: [identity profile] mathemajician.livejournal.com
Or if M doesn't halt, then just use R' which always outputs "THIS PROGRAM DOESN'T HALT" if the input is M.

(no subject)

Date: 2005-04-08 09:36 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Why are you making this assumption? That is not what (1) says.

(no subject)

Date: 2005-04-08 09:53 am (UTC)
From: [identity profile] mathemajician.livejournal.com
Ok, either it halts or it doesn't. Either way, we have R or R' that will give you the right answer.

(no subject)

Date: 2005-04-08 09:56 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
M is not an ordinary TM. It takes input.

So R would have to, for all inputs u, decide whether M halts on u. That seems awfully hard, don't you think? M itself can't decide whether it halts on u (in particular for the ones where it doesn't halt).

(no subject)

Date: 2005-04-08 10:26 am (UTC)
From: [identity profile] mathemajician.livejournal.com
Ok, if you're not just deciding for a normal TM, but rather for one that accepts input, then that's a completely different problem.

I haven't read the paper, I just assumed that you meant the usual halting problem for TMs in step (1).

(no subject)

Date: 2005-04-08 06:06 pm (UTC)
From: [identity profile] r6.livejournal.com

I’m not sure what it means to “decide if a turing machine halts”, but I will comment anyways.

(1) I!: There exists a Turing Machine M, such that no Turing machine can decide whether it halts. And *necessarily* so, since it's a mathematical theorem. (this seems wrong to me!)

This is (necessarily) false. Pick any Turing Machine M. M either halts or does not halt.

  1. Suppose M halts. Then the turing machine that prints, “M halts” correctly decides that M halts.
  2. Suppose M does not halt. Then the turing machine that prints, “M does not halt” correctly decides that M does not halt.

QED

(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.

If a Turing Machine M does not halt, then it is impossible for a person to know that it does not halt. The best they can do is say that M does not halt assuming some given logical system is sound. (2) is either true or false depending on what "logically possible for S to decide M" means.

(3) All people are machines.

This is true.

Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way for thinking about this stuff, thanks to [livejournal.com profile] r6), you always need to change your Universal Turing Machine to one that decides more TMs.

The statement "you always need to change your Universal Turing Machine" is not correct. Axiom systems are not Universal Turing Machines. Axiom systems produce recursively enumerable lists of Turing Machines that (hopefully) halt on all inputs. It is impossible to know if any item in this list is correct or not.

(no subject)

Date: 2005-04-08 11:50 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
This is (necessarily) false. Pick any Turing Machine M. M either halts or does not halt.

1. Suppose M halts. Then the turing machine that prints, “M halts” correctly decides that M halts.
2. Suppose M does not halt. Then the turing machine that prints, “M does not halt” correctly decides that M does not halt.

QED

As I was saying to [livejournal.com profile] mathemajician, these are not ordinary Turing machines, since they take input.

(no subject)

Date: 2005-04-09 12:01 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
The statement "you always need to change your Universal Turing Machine" is not correct. Axiom systems are not Universal Turing Machines.

Can you correct the statement so that it says something meaningful?
Whenever our UTM runs into trouble proving whether a certain TM halts, we could create a new UTM with the true value of that special case added (which is analogous to adding independent axioms to an axiom system when an incompleteness is found, except that with axiom systems there is no "true value")
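A rough sketch of what I mean, in Python; try_analyze is a hypothetical stand-in for whatever incomplete analysis the old UTM-based decider did:

    # Special cases settled "by hand", playing the role of new axioms:
    KNOWN_ANSWERS = {}  # maps (program, input) -> True/False

    def try_analyze(program, tape):
        return None  # stand-in for an incomplete analysis: "don't know"

    def try_decide_halts(program, tape):
        if (program, tape) in KNOWN_ANSWERS:
            return KNOWN_ANSWERS[(program, tape)]
        return try_analyze(program, tape)  # may fail to answer

Each time we settle a stubborn case by hand, we add it to KNOWN_ANSWERS and get a strictly better (but still incomplete) decider.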


Axiom systems produce recursively enumerable lists of Turing Machines that (hopefully) halt on all inputs.

Axiom systems produce an R.E. list of theorems. Do theorems somehow correspond to TMs?


It is impossible to know if any item in this list is correct or not.

I don't know what you mean by this.

(no subject)

Date: 2005-04-09 03:18 am (UTC)
From: [identity profile] spoonless.livejournal.com

Aside from the fact that this is complete hand-waving with nothing backed up, couldn't he just substitute the word "person" for TM to have a statement which appears equally valid for Turing machines? (which contradicts everything else in his "proof")

I intended to write 'substitute TM for the word "person"'... but my dyslexia got the better of me. In other words, I was saying he could have just as easily written "there must be a specific reason why this is so, and that reason can in principle be derived by a TM"

(no subject)

Date: 2005-04-09 09:18 am (UTC)
From: [identity profile] r6.livejournal.com

Does “N decides if a turing machine M halts” mean N is a machine that outputs 0 or 1 on input x depending on whether M halts or not? In that case (1) is true, and (2) is false. It is not possible for people to decide when a UTM will not halt.

Actually I am in about the same position as Bringsjord and Arkoudas. (1), (2) and (3) are clearly incompatible. We all believe (1). They think (2) is more likely than (3), and I think (3) is more likely than (2). Neither of us can prove our case yet. Perhaps someday we will have a model of physics that will entail (3). Or (unlikely) perhaps someday we will have an example of (2), or rather a device that can solve the halting problem. In this case we will just redefine computation to include this device, and be in the same position as we are now.

(no subject)

Date: 2005-04-09 09:30 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Well, contrary to what I have written, I! is not the same as (1) (you'd have to look at the paper, page 8).

[livejournal.com profile] jcreed doesn't believe in (1), and I am also skeptical.

Apparently, there's a mistake in pushing the [] through the quantifiers and implications.

(no subject)

Date: 2005-04-09 09:35 am (UTC)
From: [identity profile] r6.livejournal.com

You probably want to say, “If you're a Turing Machine whose mission in life is to decide whether Turing Machines are total—halt on all inputs—and all you have is a program that halts on input x if the xth TM is (hopefully) total, then you always need to change your program to one that halts on more total TMs.”

As an aside, Gödel sentences are always Π1, but the more interesting problems are Π2. So not only is adding more and more Gödel sentences insufficient to complete your axiom system, it probably doesn't make much progress in proving more Π2 sentences. Adding inaccessible cardinal axioms does give you more Π2 sentences.
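(Roughly: a Π1 sentence has the form ∀x.φ(x) with φ decidable, e.g. "M does not halt on w"; a Π2 sentence has the form ∀x.∃y.φ(x,y), e.g. "M halts on every input", i.e. totality.)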

(no subject)

Date: 2005-04-10 02:15 am (UTC)
From: [identity profile] jcreed.livejournal.com
Yes, that was a typo, sorry.

(no subject)

Date: 2005-04-10 03:42 am (UTC)
From: [identity profile] jcreed.livejournal.com
"being equal to a machine" here I think really means "being at most turing-powerful in terms of computations".

His notion of equality seems to be the result of "quotienting out" all entities in the domain we're considering by their computational power. So to be "equal to some machine" just means that you can compute as richly as some turing machine, which would include the possibility of universal turing machines.

(no subject)

Date: 2005-04-10 03:48 am (UTC)
From: [identity profile] jcreed.livejournal.com
I think the intended alethic ("necessary", "possibly") reading is the safest bet.

∀y.(My ⇒ []My) in this case means, "for any entity y, if it so happens in the world we live in that y's behavior can be simulated by a turing machine, then it is logically impossible that y's behavior could be super-turing; in other words, it is necessarily true that y's behavior is turing-simulatable."

This is objectionable to me as an assumption, because it rules out the conjunction of two things I find separately and jointly plausible:

(1) human beings are no better than turing machines at computing (i.e. are turing-simulatable)
(2) it's logically possible that humans are better than turing machines at computing (i.e. are super-turing)

My (2) here is coincidentally equivalent to Selmer's (2), which I think is probably true. I'm willing to imagine that it might be false, but I'm not sure how you'd show that in no universe could it be possible for humans to out-do turing machines, in the same sense that in no universe, say, 1=2.
