Apr. 7th, 2005

I am interested in formally reconstructing expert reasoning in order to expose biases. Apparently Keith Devlin, Selmer Bringsjord, and many other people have been working in this area since Sep 11. Work in the area still seems scarce, though.

The CIA has published an online book on Intelligence Analysis:

Richards J. Heuer, Jr. - Psychology of Intelligence Analysis

Here's an excerpt from the chapter on cognitive biases:

The impact of information on the human mind is only imperfectly related to its true value as evidence. Specifically, information that is vivid, concrete, and personal has a greater impact on our thinking than pallid, abstract information that may actually have substantially greater value as evidence. For example:

* Information that people perceive directly, that they hear with their own ears or see with their own eyes, is likely to have greater impact than information received secondhand that may have greater evidential value.

* Case histories and anecdotes will have greater impact than more informative but abstract aggregate or statistical data.


In other words, people are not Bayesians! And it's like they don't even try! Even when we believe others should know better than us, and have no reason to distrust them, we still believe ourselves more... which means that our probability judgements won't converge. I should make a link here from People Disagree Too Much.

Bayesians, as we know, cannot agree to disagree. Much of my struggle with humanity (and with myself!) has to do with accepting such irrationality.
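
A toy illustration (a Python sketch of my own; the agents, the numbers, and the simple Beta-Bernoulli model are all made up for the example): two Bayesians who share a common prior and pool all their evidence end up with identical posteriors, no matter who updates first. This isn't Aumann's agreement theorem in its full common-knowledge form, but it shows the mechanism that persistent disagreement has to break.

```python
from fractions import Fraction

# Beta(a, b) prior over a coin's bias, updated by conjugacy:
# each heads increments a, each tails increments b.
def update(prior, flips):
    a, b = prior
    heads = sum(flips)
    return (a + heads, b + (len(flips) - heads))

common_prior = (1, 1)            # shared uniform prior
alice_data = [1, 1, 0, 1]        # Alice's private observations (1 = heads)
bob_data   = [0, 1, 0, 0, 1]     # Bob's private observations

# Once all the evidence is pooled, update order is irrelevant:
alice_view = update(update(common_prior, alice_data), bob_data)
bob_view   = update(update(common_prior, bob_data), alice_data)

assert alice_view == bob_view    # identical posteriors: no room to disagree
print(Fraction(alice_view[0], sum(alice_view)))  # posterior mean: 6/11
```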
Bringsjord has worked out my Penrose-like idea, formalizing his argument in quantified modal logic.

So the Halting Problem (HP) becomes: for all TMs x, there exists a TM y such that x does not decide y.
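
Spelled out (the predicate D is my own shorthand: D(x, y) for "TM x correctly decides whether TM y halts"):

\[
\mathrm{HP}:\quad \forall x\, \exists y\; \neg D(x, y)
\]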

He shows that the assumptions:

(1) There exists a Turing Machine M such that no Turing Machine can decide whether it halts. And *necessarily* so, since it's a mathematical theorem. (This seems wrong to me!)
(2) For every Turing Machine M, there is a person S such that it's logically possible for S to decide M.
(3) All people are machines.

lead to a contradiction. (the contradiction is trivial, but he goes through the formal steps of his modal logic)
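
Roughly, in quantified modal logic (the predicate names Mach, Per, Dec are my shorthand, not necessarily Bringsjord's):

\[
\begin{aligned}
(1)\;& \exists m\, \Box\, \forall x \bigl(\mathrm{Mach}(x) \rightarrow \neg\mathrm{Dec}(x, m)\bigr)\\
(2)\;& \forall m\, \exists s \bigl(\mathrm{Per}(s) \wedge \Diamond\, \mathrm{Dec}(s, m)\bigr)\\
(3)\;& \forall s \bigl(\mathrm{Per}(s) \rightarrow \mathrm{Mach}(s)\bigr)
\end{aligned}
\]

Instantiate (2) at the witness m* of (1): some person s has ◇Dec(s, m*); by (3), s is a machine; but (1) gives □¬Dec(s, m*), i.e. ¬◇Dec(s, m*). Contradiction.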


Of course, (2) is controversial. If I am a computer (which I believe I am), I would like to see my "Goedel sentence": which Turing Machine can I in principle not decide?

The lesson from Goedel's incompleteness theorem is that you always need to pick more axioms.

Analogously, if you're a Turing Machine whose mission in life is to decide whether Turing Machines halt (my new favorite way of thinking about this stuff, thanks to r6), you always need to change your Universal Turing Machine to one that decides more TMs.

Then the question becomes: "But how?" To me, the paradox remains: if you have a systematic way of changing your Turing Machine simulator, then you're just using a meta-Turing Machine which is just as susceptible to undecidability: you'll never be able to decide whether the halting construction for the meta-TM halts.
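
A conceptual sketch in Python (the `decider` argument is hypothetical; no such function can exist, which is exactly the point): whatever candidate decider you supply, the construction below builds a program it answers wrongly about, and the same move applies one level up to any meta-procedure that hands out upgraded deciders.

```python
# `decider` is any function claimed to return True iff its argument halts.
def make_stumper(decider):
    """Build a program that does the opposite of whatever `decider` predicts."""
    def stumper():
        if decider(stumper):   # decider predicts: stumper halts...
            while True:        # ...so loop forever instead,
                pass
        return                 # otherwise, halt immediately.
    return stumper

# Whatever `decider` answers about make_stumper(decider), it is wrong.
# Swapping in a "better" decider just yields a new stumper one level up,
# so a systematic upgrade procedure is diagonalized the same way.
```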

See Bringsjord - A Modal Disproof of "Strong" Artificial Intelligence (page 8)
