Damn... John McCarthy took my idea for a title: "The AI of Philosophy"
My idea was about simulated intelligences, agents in a machine, if you will. Under what conditions would agents start asking philosophical questions?
I suspect that only scruffy creatures like us, which have trouble communicating with each other and even with ourselves, would have the need or inspiration to do philosophy. (If this isn't true, then which areas of philosophy are intelligence-universal and which ones are human-specific?)
Transparent intelligences, for example, would need no philosophy of language. If humans suddenly acquired the ability to have direct mind-to-mind communication, I suspect a lot of philosophy would seem pointless.
So, in other words, my research topic would be about human quirkiness (as reflected in language, for example).
Other possibly-relevant human quirks (those "irrational" things about being human):
* unconscious knowledge and emotions: we may not be aware of our own knowledge or emotions.
* natural languages seem unnecessarily complex
* the placebo effect, and its corresponding paradox: if we can be cured by lying to ourselves, why can't we do this consciously?
These quirks should also explain why it's hard to formalize context.
What is it about how humanity evolved? Is this what you get when intelligence evolves under our constraints? Would it really be that difficult for us to hold much more than 7 chunks in working memory? Is this an artifact of a quirky computational substrate? Why are we so bad at calculation and yet so good at language? Why is so much of our processing unconscious, and why can't we tap into this powerful computer we have in our brains (except possibly for autistic savants)?
I'd like to play with simple intelligences. To such a being, a simple ontology refinement could seem like a deep philosophical insight.
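To make that concrete, here is a toy sketch of what a "simple ontology refinement" might look like for such an agent. Everything here is made up for illustration (ToyAgent, the features, the splitting rule); it isn't any existing system, just the smallest thing I can imagine calling an ontology revision: the agent starts with one naive concept and splits it when a labelled observation contradicts it.

```python
# Toy sketch (purely hypothetical; ToyAgent and its features are invented):
# the agent's "ontology" is a dict of concepts, each defined by the features
# it requires. When a labelled observation contradicts a concept, the agent
# refines its ontology by splitting the concept in two. From the agent's
# point of view, that tiny revision is its "philosophical insight".

class ToyAgent:
    def __init__(self):
        # One naive concept to start with: everything that flies is a "bird".
        self.ontology = {"bird": {"flies": True}}

    def classify(self, thing):
        """Return the first concept whose required features all match."""
        for name, features in self.ontology.items():
            if all(thing.get(k) == v for k, v in features.items()):
                return name
        return None

    def observe(self, thing, label):
        """Refine the ontology when a labelled observation contradicts it."""
        concept = self.classify(thing)
        if concept is None or concept == label:
            return
        # Pick a feature the old concept says nothing about and use it to
        # tell the two concepts apart (naive, but enough for the toy).
        diff = next((k for k in thing if k not in self.ontology[concept]), None)
        if diff is None:
            return
        self.ontology[label] = dict(self.ontology[concept], **{diff: thing[diff]})
        self.ontology[concept][diff] = not thing[diff]


agent = ToyAgent()
print(agent.classify({"flies": True, "has_rotor": True}))      # 'bird' (naive)
agent.observe({"flies": True, "has_rotor": True}, "helicopter")
print(agent.classify({"flies": True, "has_rotor": True}))      # 'helicopter'
print(agent.ontology)  # 'bird' now also requires has_rotor == False
```

Of course this is nowhere near philosophy, but it gives a concrete handle on "the agent revises its own categories," which is the kind of primitive event I'd want to study.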
Tangentially, there is a journal on metaphilosophy, and Peter Suber also has a page about metaphilosophy.
SEE: LJ entry, where I talk about John Sowa's "Representing Knowledge Soup in Language and Logic".
(no subject)
Date: 2005-03-15 08:07 pm (UTC)
That is to say, a robot has a better chance of becoming conscious than pure code has. There has to be a "there" there before that "there" can ask questions about itself.
It's not logically impossible for pure code to bypass robots and think independently on its own, but to me it is exceptionally unlikely. Before you get to "I think therefore I am" you must first have an "I".
The philosophy of grammar
Date: 2005-03-15 08:39 pm (UTC)

Re: The philosophy of grammar
Date: 2005-03-16 12:32 am (UTC)
I was presuming "philosophy" meant something beyond mere logical deduction. A philosophy would make statements about their condition and the condition of the world around them.