Mar. 15th, 2005

Damn... John McCarthy took my idea for a title: "The AI of Philosophy"

My idea was about simulated intelligences, agents in a machine if you will. Under what conditions would agents start asking philosophical questions?

I suspect that only scruffy creatures like us, who have trouble communicating with each other and even with ourselves, would have the need or the inspiration to do philosophy. (If this isn't true, then which areas of philosophy are intelligence-universal and which are human-specific?)

Transparent intelligences, for example, would need no philosophy of language. If humans suddenly acquired the ability to have direct mind-to-mind communication, I suspect a lot of philosophy would seem pointless.

So, in other words, my research topic would be about human quirkiness (as reflected in language, for example).

Other possibly-relevant human quirks (those "irrational" things about being human):
* unconscious knowledge and emotions: the fact that we may not be aware of our own emotions / knowledge.
* natural languages seem unnecessarily complex
* the placebo effect, and its corresponding paradox: if we can be cured by lying to ourselves, why can't we do this consciously?

These quirks should also explain why it's hard to formalize context.
What is it about the evolution of humanity that produced these quirks? Is this just what you get when intelligence evolves under our constraints? Would it really be that difficult for us to hold more than 7 chunks in working memory? Is this an artifact of a quirky computational substrate? Why are we so bad at calculations and yet so good at language? Why is so much of our processing unconscious, and why are we unable to tap into this powerful computer we have in our brains (except possibly for autistic savants)?

I'd like to play with simple intelligences. To such a being, a simple ontology refinement could seem like a deep philosophical insight.
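
As an illustrative sketch (the concepts, properties, and helper function are entirely my own hypothetical toy, not an existing system), an "ontology refinement" for such a simple agent might amount to splitting one concept into subconcepts once a counterexample shows up:

```python
# Hypothetical toy ontology: concept -> assumed properties.
ontology = {"bird": {"flies": True}}

def classify(flies):
    """Return the first concept consistent with the observation, or None."""
    for concept, props in ontology.items():
        if props["flies"] == flies:
            return concept
    return None

# A penguin contradicts the ontology: no concept fits a non-flying bird.
assert classify(flies=False) is None

# The refinement: split "bird" into two subconcepts. To a simple enough
# agent, this small change could resolve what looked like a deep puzzle.
ontology = {"flying_bird": {"flies": True}, "flightless_bird": {"flies": False}}
assert classify(flies=False) == "flightless_bird"
```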

Tangentially, there is a journal on metaphilosophy, and Peter Suber also has a page about metaphilosophy.

SEE: my LJ entry where I talk about John Sowa's "Representing Knowledge Soup in Language and Logic".
This Friday, I went to the Philosophy of Information lectures. The lectures themselves were for a popular audience.

I met and spoke with:

Keith Devlin, who happens to be working in intelligence analysis. I told him about Bringsjord's reasoning assistant Slate, and he said he would look it up. Intelligence analysis seems to be all about "debiasing" expert opinions.

John McCarthy, whose goals are very close to mine. We hold on to the Leibnizian ideal. (I would actually like to debate someone who is "against" logical AI.) I asked him if he knew of an interface / language / system for supporting the process of formalizing arbitrary texts: a sort of semi-structured language to support gradual formalization, along with revisions, ontology changes, etc. I told him about the people doing Mathematical Knowledge Management, and said I would like to create a system to help formalize general arguments. He said he was interested and told me to contact him if I did anything in this direction. He then gave me his business card, with cell phone and all.

Happy that I got the interest of the bigshot, I started joking around with my colleagues. I saw a man who seemed to be having an interesting argument about Occam's razor: why should the universe be simple? I jumped in with Schmidhuber's argument that "by the anthropic principle, we are more likely to be living in a quickly-computable universe".

He was amused, so he introduced himself.
- Hi, I'm Kevin Kelly
- You are Kevin Kelly???

We walked to Oorlam for a few drinks, where I sat next to Henrik, who picked things up faster than I did, partly thanks to his knowledge of recursion theory. Kevin Kelly blew our minds with:

* Bayesianism as a meta-method is wrong/bad.
* The application of Kolmogorov complexity to the problem of induction is philosophically bad (he cited the arbitrary choice of universal machine); there are many paths to the truth. People like Solomonoff induction because it's tempting, not because it's justified. He also said that he got one of the KC bigshots to admit that MDL is not about finding the true theory (thereby taking the disagreement off the table).
* The problem of induction is, at bottom, the undecidability of the halting problem.
* Topology and recursion theory are the right tools for thinking about induction. Verifiable propositions are to be interpreted as open sets.
* The cases where induction is needed are the boundary points: they can't be conclusively verified, since any open set around them intersects the complement (the semantics isn't entirely clear to me).
* Topological structures are invariant under Goodman's "grue" automorphisms, whereas Bayesian structures aren't. (I remarked that it's well-known that the uniform prior is sensitive to the choice of representation.)
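
The open-set picture can be made concrete with a toy example in Cantor space (infinite binary streams); the hypothesis and helper below are my own illustration, not Kelly's formulation:

```python
def verifies(prefix):
    """A finite prefix verifies 'a 1 eventually appears' iff it contains a 1."""
    return 1 in prefix

# The hypothesis H = {streams containing a 1} behaves like an open set:
# once a 1 shows up, no continuation of the stream can retract the verdict.
assert verifies([0, 0, 1])

# The all-zeros stream lies on the boundary of H: every finite prefix of it
# extends both to streams inside H and to the all-zeros stream outside H,
# so no finite amount of evidence settles "only 0s forever" -- that verdict
# is exactly the kind that requires induction.
for n in range(5):
    prefix = [0] * n
    assert not verifies(prefix)    # evidence so far is consistent with "all zeros"
    assert verifies(prefix + [1])  # ...and also with a stream inside H
```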

That day I also ran into a few other people, one of whom was Breanndán, the expert Lisper whom I've been meaning to talk to.
