Last week, I had an excellent chat with Terry Stewart over AIM. We covered:
* cognitive modeling
* "neural compilers"
* philosophy of science (empirical vs theoretical models in physics and cogsci, unification/integration, predictions vs explanations, reduction). See his excellent paper here.
* automatic model induction
* shallow vs deep approaches to AI
* automated scientists
It went longer than 2 hours. It was the first time ever that I argued against a pro-formalization position... because I normally sympathize very strongly with formalization efforts.
Our conversation:
(00:12:37) GusLacerda: I know of linguists who shy away from a computational approach
(00:13:52) GusLacerda: the point is that they know structures that should feature in these explanations, but the work of formalizing the ideas at a computational level (i.e. bottom level) is too overwhelming.
(00:14:22) GusLacerda: I would say that they have partial explanations, at best. This kind of work is hard to evaluate objectively.
(00:15:54) TerryStewart123: Hmm.. not sure why the computational level has to be the bottom level..... My feeling on the matter is that until you can specify your theory well enough to make a computational implementation, you don't actually have a theory. You just have some ideas floating around. But that's a whole separate discussion. :)
(00:16:18) GusLacerda: I meant: bottom level of formality
(00:17:53) GusLacerda: The similarity is that these linguists can claim that they are doing explanation, but not prediction. They will say that you can't predict what happens in novel situations because each situation has its own particularities. This means that their explanations are using tacit knowledge about the examples they use.
(00:18:57) TerryStewart123: At which point I start really wanting a definition of what counts as an explanation and how we can know we're making progress at getting better explanations.... But at the same time, I recognize that this may not yet be appropriate for that field. Which is another reason why I need to stay out of it for now. :)
(00:19:09) GusLacerda: "My feeling on the matter is that until you can specify your theory well enough to make a computational implementation, you don't actually have a theory."
I very much sympathize with this feeling, but can you honestly say that?
(00:19:58) TerryStewart123: Yes. That I'm quite willing to back up and defend. Otherwise there's so much ambiguity and interpretation going on that I don't know how useful it is to call it a theory.
(00:20:24) GusLacerda: OMG, this is the first time ever that I'm arguing against formality
(00:20:34) TerryStewart123: :) :)
(00:20:45) GusLacerda: my point is that we often understand things using our implicit knowledge
(00:21:24) GusLacerda: this knowledge could in principle be formalized and put onto the computer
(00:21:46) GusLacerda: but even before you do that, you know that you CAN do it
(00:22:21) TerryStewart123: Hmm... you can certainly _believe_ that you can do it, but I'm not sure how willing I am to trust that belief
(00:22:21) GusLacerda: convincing other people is a different matter
(00:22:42) GusLacerda: formality is a great protocol
(00:23:08) GusLacerda: but maybe there are better protocols, when the two parties have enough common ground
(00:23:55) GusLacerda: of course, we could both be fooled this way
Terry had to run off, but if I interpret his point correctly, it sounds like he's saying that 99% of the research produced in universities (including most math and CS) doesn't qualify as theory, because it is too vague and/or ambiguous to meet this standard. So I must be misinterpreting him.
---
I like the idea of the following proof-checking game:
* Proposer needs to defend a theorem that he/she does or does not have a proof of.
* Skeptic tries to make Proposer contradict himself, or state a falsity.
Since formal proofs are typically too long (and unpleasant) to read in one go (see the de Bruijn factor), this method only forces Proposer to formalize one branch of the proof tree. Because Skeptic can choose which branch this is, he should end up convinced that Proposer really has a proof (even if it is never fully formalized).
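To make this concrete, here is a minimal Python sketch under my own assumptions: a proof is modeled as a tree of claims, the Skeptic picks one sub-claim to challenge at each node, and only the leaves of the chosen branch need to pass a formal check. The names `ProofNode`, `skeptic_challenge`, and `formally_checked` are purely hypothetical, for illustration.

```python
# Toy sketch of the Proposer/Skeptic game (names and structure are assumptions).
from dataclasses import dataclass, field
from typing import List
import random


@dataclass
class ProofNode:
    """A claim plus the sub-claims the Proposer says it follows from."""
    claim: str
    children: List["ProofNode"] = field(default_factory=list)
    formally_checked: bool = False  # only leaves on the challenged branch need this


def skeptic_challenge(root: ProofNode, choose=random.choice) -> bool:
    """Play one round of the game: walk a single root-to-leaf branch.

    At each node the Skeptic (modeled by `choose`) picks which sub-claim to
    challenge, so the Proposer only has to formalize that one branch.
    Returns True if the branch bottoms out in a formally checked step.
    """
    node = root
    while node.children:
        node = choose(node.children)  # Skeptic selects the branch to descend
    return node.formally_checked


# Usage: whichever lemma the Skeptic challenges, this branch checks out.
theorem = ProofNode(
    "main theorem",
    children=[
        ProofNode("lemma 1", formally_checked=True),
        ProofNode(
            "lemma 2",
            children=[ProofNode("sub-lemma 2a", formally_checked=True)],
        ),
    ],
)
print(skeptic_challenge(theorem))  # -> True
```

Repeating the challenge over several rounds (or over adversarially chosen branches) is what gives the Skeptic confidence without anyone writing out the whole formal proof.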
---
Yesterday, while thinking about Leitgeb's talk, I came to consider the possibility that mathematical AI might not need formal theories. In fact, cognitively-faithful AI mathematicians would not rely on formalized theories of math.
Ideally, of course, we would integrate cognitive representations of mathematical concepts with their formal counterparts.
Losing formality opens the door to disagreement, subjectivity, and so on. But real human mathematical behavior is like that. Can a machine learning system learn to mimic this behavior (by discovering the cognitive representations that humans use)? How do we evaluate mathematical AI? Do we give it math tests? Tutoring tasks?
Defending myself (slightly)
Date: 2007-03-09 07:26 am (UTC)
To a certain degree, yes, I do think that a lot of what is produced by universities doesn't count as a theory. However, that doesn't mean that I think those non-theory parts of research are useless. Instead, I'd say more that they are the necessary foundations on which a theory can be built.
But my main point is that, until you can specify a theory in such a way that it has non-ambiguous uses, something is missing. This may mean specifying it mathematically, or via logic, or via a computer program -- I don't care which (but as things get more complicated, I think computer programs are the way to go). Theories based on boxes and arrows, or on verbal descriptions that don't define their underlying terms, are useful in that they are intermediate stages, but I think the goal should be something more concrete. And once they are concrete, I think we get huge advantages (think of the change in physics once it was placed on a mathematical foundation).
Re: Defending myself (slightly)
Date: 2007-03-09 07:34 am (UTC)
(no subject)
Date: 2007-03-09 09:44 pm (UTC)
I like the idea of the following proof-checking game:
* Proposer needs to defend a theorem that he/she does or does not have a proof of.
* Skeptic tries to make Proposer contradict himself, or state a falsity.
Since formal proofs are typically too long (and unpleasant) to read in one go (see the de Bruijn factor), this method only forces Proposer to formalize one branch of the proof tree. Because Skeptic can choose which branch this is, he should end up convinced that Proposer really has a proof (even if it is never fully formalized).
What you're talking about sounds a lot like an interactive proof system, which is a well studied branch of complexity theory.
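For what it's worth, here is a toy sketch of the kind of example textbooks use to introduce interactive proof systems: a Prover claims two hidden items have different colors, a Verifier who cannot see them secretly swaps them (or not) and asks which happened, and a bluffing Prover can only guess, so repeated rounds expose a false claim with high probability. The scenario and all names here are just the standard illustration, not anything from the conversation above.

```python
# Toy interactive proof: Prover claims colors[0] != colors[1]; Verifier checks.
import random


def interactive_proof(colors, rounds=20):
    """Return True if the Verifier accepts the Prover's claim of distinctness.

    Each round the Verifier secretly swaps the items (or not) and asks the
    Prover what happened. An honest Prover (distinct colors) always knows;
    a bluffing Prover (identical colors) must guess and is caught with
    probability 1 - 2**-rounds.
    """
    for _ in range(rounds):
        swapped = random.random() < 0.5            # Verifier's private coin flip
        if colors[0] != colors[1]:
            prover_answer = swapped                # honest Prover can tell
        else:
            prover_answer = random.random() < 0.5  # bluffing Prover must guess
        if prover_answer != swapped:
            return False                           # Verifier rejects the claim
    return True                                    # Verifier accepts


print(interactive_proof(["red", "blue"]))  # honest claim: accepted
print(interactive_proof(["red", "red"]))   # false claim: rejected w.h.p.
```

The parallel to the proof-checking game is that the Verifier, like the Skeptic, only ever inspects a small, randomly chosen slice of the Prover's claim, yet ends up with high confidence in the whole thing.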