Yesterday, I met "the generalists" at Kiva Han, voted on films, and even got to be dictator for a round. They are a rather interesting set of geeks. Not too surprisingly,
jcreed was there. (We always run into each other.)
When we were done, it was about 11pm, and I was ready to go to bed.
On my way back to the bike, I ran into
jcreed and
simrob (whom I hadn't seen in a while) at the 4th-floor whiteboard, and saw the former's very cool statement of Arrow's theorem in the language of category theory (no pun intended). In this instance, though, category theory seems unnecessarily abstract: the categories involved could just as easily be described as plain sets.
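For reference, here's my own set-level paraphrase (not jcreed's categorical statement): with n voters, a set A of at least three alternatives, and L(A) the linear orders on A,

F : L(A)^n \to L(A) \ \text{unanimous and IIA} \;\Longrightarrow\; \exists\, i.\; F = \pi_i

where \pi_i is the i-th projection: some voter i is a dictator.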
As my sleepiness turned into dreaminess, the topic drifted to my Herbert-Simon-ish ideas about building AI mathematicians. We seem to have opposite inclinations here: he is skeptical and believes it's too hard.
My main belief, that "mathematical reasoning is easy to automate", is grounded in:
* serial reasoning is easy to formalize
* most mathematical and scientific reasoning is serial
* the current state of the art is poor because of brittleness, which stems from a lack of multiple representations (see
this) and a lack of integration.
His main objection seemed to be that, in the context of trying to prove things, figuring out what to do next is not easy: mathematicians have intuitions that are hard to encode in theorem provers. That is, these search heuristics are not easy to formalize.
My response: intuition comes from experience (i.e., we just need a corpus to learn from).
Other interesting thoughts:
* the intelligence required to perform any particular set of tasks, even for "self-modifying" programs, has some Kolmogorov complexity (i.e., a minimum program size; see below), and those bits have to come either from the programmer (via programming) or from the world (via machine learning).
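For concreteness, the textbook definition of Kolmogorov complexity relative to a fixed universal machine U (standard notation, not anything we wrote down that night):

K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}

In these terms, a "self-modifying" program is still a single string p, so no amount of self-rewriting creates bits that weren't already in p (from the programmer) or in its input (from the world).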
Amazingly (or not, given GMTA), he and I quickly agreed on a plan for building human-level mathematical AI (after something like 15 minutes):
* construct good representations with which to learn (we may need to reverse-engineer representations that are hard-coded in people, like our innate ability to do physical reasoning, which is useful in understanding geometry)
* give it a corpus to learn from (i.e. a world to live in)
He also wrote down an axiomatization of group theory in Twelf (my transcription is below), which I believe is complete in the sense that every statement that is expressible and true of every group (i.e., every model of the axioms) is a consequence of the axioms. Logicians, can you confirm this?
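For the record, here is a minimal sketch of such an axiomatization, transcribed into Lean 4 rather than Twelf (my reconstruction; jcreed's actual signature surely differed):

```lean
-- First-order group axioms as a Lean 4 type class.
-- My reconstruction, not jcreed's Twelf code.
class Grp (G : Type) where
  op       : G → G → G                                   -- group operation
  e        : G                                           -- identity element
  inv      : G → G                                       -- inverses
  assoc    : ∀ a b c : G, op (op a b) c = op a (op b c)  -- associativity
  id_left  : ∀ a : G, op e a = a                         -- left identity
  inv_left : ∀ a : G, op (inv a) a = e                   -- left inverse
```

Two senses of "complete" are worth separating here: by Gödel's completeness theorem, every first-order sentence true in all models of these axioms is derivable from them; but the theory itself is not complete in the sense of deciding every sentence (it proves neither commutativity nor its negation, since both abelian and non-abelian groups exist).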
Finally, we talked about desirable features of future math books, like "expand definition", "generate examples and non-examples", etc. The first of these should be easy: all I'm asking for is beta reduction (see the toy example below)... unlike those ambitious proof-mining people, who want automatic ways of making theorems more general.
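As a toy illustration in Lean 4 (my own hypothetical example, not from the conversation): "expand definition" is essentially delta/beta reduction, which the proof checker can already do.

```lean
-- `double 3` unfolds to `(fun n => n + n) 3`, which beta-reduces to
-- `3 + 3`, so both sides are definitionally equal and `rfl` suffices.
def double (n : Nat) : Nat := n + n

example : double 3 = 3 + 3 := rfl
```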
---
What do you think of the argument?

A: "X should be easy!"
B: "Instead of saying it's easy, you should be doing it."