Last week, I had an excellent chat with
Terry Stewart over AIM. We covered:
* cognitive modeling
* "neural compilers"
* philosophy of science (empirical vs. theoretical models in physics and cogsci, unification/integration, prediction vs. explanation, reduction). See his excellent paper here.
* automatic model induction
* shallow vs deep approaches to AI
* automated scientists

It went on for more than two hours. It was the first time ever that I argued against a pro-formalization position... which is odd, because I normally sympathize very strongly with formalization efforts.
Our conversation:
Terry had to run off, but if I interpret his point correctly, it sounds like he's saying that 99% of the research produced in universities (including most math and CS) doesn't qualify as theory, because it is too vague and/or ambiguous and falls short of this standard. So I must be misinterpreting him.
---
I like the idea of the following proof-checking game:
* Proposer defends a theorem, for which he/she may or may not actually have a proof.
* Skeptic tries to make Proposer contradict himself, or state a falsity.
Since formal proofs are typically too long (and unpleasant) to read in one go (see the de Bruijn factor), this method only forces Proposer to formalize one branch of the proof tree. Since Skeptic chooses which branch that is, Skeptic should end up convinced that Proposer really has a proof (even if it is never fully formalized).
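The game loop above can be sketched in a few lines. This is a minimal toy, not a real proof checker: the proof is assumed to be a tree of claims whose children are their premises, and all the names and the example theorem here are hypothetical. The point it illustrates is that Skeptic's choices trace out a single root-to-leaf branch, which is the only part Proposer ever has to formalize.

```python
import random

# Hypothetical proof tree: each node is a claim; its "premises" are the
# sub-claims it rests on. A leaf is something directly checkable.
PROOF_TREE = {
    "claim": "n*(n+1) is even for all natural n",
    "premises": [
        {"claim": "if n is even, then n*(n+1) is even", "premises": []},
        {"claim": "if n is odd, then n+1 is even, so n*(n+1) is even", "premises": []},
        {"claim": "every natural n is even or odd", "premises": []},
    ],
}

def play(node, skeptic_pick):
    """Skeptic descends one branch; Proposer must justify each node visited.
    Only this branch is ever formalized -- the rest of the tree is untouched."""
    visited = [node["claim"]]
    while node["premises"]:
        node = skeptic_pick(node["premises"])  # Skeptic chooses the premise to attack
        visited.append(node["claim"])
    return visited  # the single branch Proposer had to defend in full

branch = play(PROOF_TREE, skeptic_pick=random.choice)
print(len(branch))  # length of the challenged branch, not the size of the tree
```

In a real version, `skeptic_pick` would be an adversarial strategy rather than `random.choice`, and each visited node would be checked against a formal system; the cost to Proposer then scales with tree depth instead of total proof size.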
---
Yesterday, while thinking about Leitgeb's talk, I came to consider the possibility that mathematical AI might not need formal theories. In fact, a cognitively faithful AI mathematician would not rely on formalized theories of math.
Ideally, of course, we would integrate cognitive representations of mathematical concepts with their formal counterparts.
Losing formality opens the door to disagreement, subjectivity, and so on. But real human mathematical behavior is like that. Could a machine learning system learn to mimic this behavior, by discovering the cognitive representations that humans use? And how would we evaluate a mathematical AI? Give it math tests? Tutoring tasks?