Yesterday, I met "the generalists" at Kiva Han, voted on films, and even got to be dictator for a round. They are a rather interesting set of geeks. Not-too-surprisingly, jcreed was there. (We always run into each other.)
When we were done, it was about 11pm, and I was ready to go to bed.
On my way back to the bike, I ran into jcreed and simrob (who I hadn't seen in a while) at the 4th floor whiteboard, and saw the former's very cool statement of Arrow's theorem in the language of category theory (no pun intended). In this instance, category theory seems unnecessarily abstract, since we could just as easily talk about the objects involved as plain sets.
As my sleepiness turned into dreaminess, the topic drifted into my Herbert-Simon-ish ideas of making AI mathematicians. We seem to have opposite inclinations on this: he is skeptical and believes it's too hard.
My main belief, that "mathematical reasoning is easy to automate", is grounded in the following:
* serial reasoning is easy to formalize
* most of mathematical and scientific reasoning is serial
* the current state of the art is poor because of brittleness, which comes from a lack of multiple representations (see this) and a lack of integration.
His main objection seemed to be that, in the context of trying to prove things, figuring out what to do next is not easy: mathematicians have intuitions that are hard to put into theorem provers; i.e., these search heuristics are not easy to formalize.
My response: intuition comes from experience. (i.e. we just need a corpus to learn from)
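To make "learn from a corpus" slightly less hand-wavy, here is a toy sketch of a corpus-trained search heuristic: rank previously proved lemmas by how similar their symbols are to the current goal, a crude stand-in for "this reminds me of something I've seen before." Everything here (the string representation, the similarity score, the example statements) is my own illustration, not any real prover's interface.

```python
# Toy sketch of a corpus-trained "intuition" for which lemma to try next.
# Purely illustrative: statements are plain strings, and the "learning"
# is just bag-of-symbols similarity against past proofs.
from collections import Counter

def symbols(statement: str) -> Counter:
    """Bag-of-symbols representation of a statement."""
    return Counter(statement.replace("(", " ").replace(")", " ").split())

def similarity(a: Counter, b: Counter) -> float:
    """Dice-style overlap between two bags of symbols."""
    shared = sum(min(a[s], b[s]) for s in a)
    total = sum(a.values()) + sum(b.values())
    return 2 * shared / total if total else 0.0

def rank_lemmas(goal: str, corpus: list[str]) -> list[str]:
    """Order known lemmas by similarity to the goal, most promising first."""
    g = symbols(goal)
    return sorted(corpus, key=lambda lemma: -similarity(g, symbols(lemma)))

past_lemmas = [
    "forall a b (mul a b = mul b a)",
    "forall a (mul a e = a)",
    "forall x y z (add x (add y z) = add (add x y) z)",
]
print(rank_lemmas("forall a (mul e a = a)", past_lemmas))
# The multiplicative lemmas rank above the additive one.
```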
Other interesting thoughts:
* the intelligence required to perform any particular set of tasks, even for "self-modifying" programs, has some Kolmogorov complexity (i.e. a minimum program size), and that information has to come either from the programmer (via programming) or from the world (via machine learning); a rough bound is sketched below.
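One rough way to state that bound (my notation, not something we wrote down that night): if the system's eventual behaviour $f$ is whatever you get by running hand-written code $p$ on training data $D$, then by the invariance theorem

$K(f) \le |p| + |D| + O(\log(|p| + |D|))$

so any behaviour whose Kolmogorov complexity exceeds the bits the programmer wrote plus the bits the data contains simply cannot show up; the missing information has to be supplied by one source or the other.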
Amazingly (or not, given GMTA), he and I quickly agreed on a plan for building human-level mathematical AI (like, after 15 minutes):
* construct good representations with which to learn (we may need to reverse-engineer representations that are hard-coded in people, like our innate ability to do physical reasoning, which is useful in understanding geometry)
* give it a corpus to learn from (i.e. a world to live in)
He also wrote down an axiomatization of group theory in Twelf, which I believe is complete in the sense of "every statement that is expressible and true of every group (i.e. every model) is a consequence of the axioms of group theory". Logicians, can you confirm this?
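For concreteness, here is roughly the shape of such an axiomatization, written as a Lean sketch rather than Twelf; the names and the exact choice of axioms are my own illustration, not what was on the whiteboard. (If the axiomatization is first-order, the property I'm asking about is exactly what Gödel's completeness theorem gives: any first-order sentence true in every model of the axioms is derivable from them.)

```lean
-- A minimal sketch (Lean standing in for Twelf) of a first-order
-- axiomatization of groups: a carrier, an operation, an identity,
-- inverses, and the usual equations. Names are illustrative.
class GroupAx (G : Type) where
  op     : G → G → G                                   -- group operation
  one    : G                                           -- identity element
  inv    : G → G                                       -- inverse
  assoc  : ∀ a b c : G, op (op a b) c = op a (op b c)  -- associativity
  one_op : ∀ a : G, op one a = a                       -- left identity
  inv_op : ∀ a : G, op (inv a) a = one                 -- left inverse
```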
Finally, we talked about desirable features of future math books, like "expand definition", "generate examples and non-examples", etc. This should be easy: all I'm asking for is beta reduction... unlike those ambitious proof-mining people who want automatic ways of making theorems more general.
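As a sketch of how little machinery "expand definition" needs, here is a toy beta reducer over a made-up lambda-calculus AST; none of the names here come from any existing tool, and substitution is deliberately not capture-avoiding.

```python
# Toy sketch: "expand definition" as one step of beta reduction.
# The AST and names are made up for illustration; a real implementation
# would need capture-avoiding substitution.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` with `value`."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:  # `name` is rebound here; stop.
            return term
        return Lam(term.param, substitute(term.body, name, value))
    if isinstance(term, App):
        return App(substitute(term.fn, name, value),
                   substitute(term.arg, name, value))
    raise TypeError(f"unknown term: {term!r}")

def beta_reduce(term):
    """One step: (\\x. body) arg  ->  body[x := arg]."""
    if isinstance(term, App) and isinstance(term.fn, Lam):
        return substitute(term.fn.body, term.fn.param, term.arg)
    return term

# "Expand definition": unfold `square n` into `mul n n`.
square = Lam("x", App(App(Var("mul"), Var("x")), Var("x")))
print(beta_reduce(App(square, Var("n"))))
# App(fn=App(fn=Var(name='mul'), arg=Var(name='n')), arg=Var(name='n'))
```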
---
What do you think of the argument:
A - X should be easy!
B - instead of saying it's easy, you should be doing it.
?
(no subject)
Date: 2006-10-19 02:25 am (UTC)
Don't forget to use the words "abstract nonsense" whenever you mention category theory. That is how mathematicians tell each other they are cool.
(no subject)
Date: 2006-10-19 02:54 am (UTC)
Being the B in this situation, I think I can clarify my position in the following way:
A - X should be easy!
B - Well, there have been many smart people who tried to do X, and yet we don't have X. It may be that there are new techniques only recently available that are applicable to doing X, or that I'm wrong in judging all these people to be smart, or that I'm wrong in judging them to be many (because maybe people have been scared off by precisely the impression that I have that it's a very hard problem!).
If indeed there is an easy road towards progress in X, and if you are failing to convince me of this by general argumentation, then, by your own claim, it should not be difficult to exhibit progress in doing X. Since I would like to have X as much as you, and since you claim to understand how to accomplish it, by all means, go for it, and get back to me once you're done.
I would treat very similarly claims of the form, "well, why is P ?= NP such a hard problem? Let's just, you know, reason about what kind of things poly-time programs can do, and try to show that they don't include solving 3SAT."
(no subject)
Date: 2006-10-19 03:07 am (UTC)
Second, a more practical objection: even supposing that you're correct about simply being able to train with a corpus of proofs, I'm not sure that there's enough desire in the community to either form such a corpus or formalize the higher-order mathematics that would need to be formalized in a machine-checkable and machine-creatable manner. Most actual mathematicians are happy with the current way of checking proofs (i.e. peer review), and as non-computer scientists they don't have a ton of motivation to actually write their proofs in a machine-checkable way. And most actual computer scientists don't necessarily care about this stuff either -- theorists are like mathematicians, and non-theorists don't really care about proofs. It's really just this tiny subset of PL-like people that care. This subset is large at CMU, but small in the general CS community. And they also tend to be interested only in certain kinds of mathematics, e.g. the kinds that are applied to PL and logic. Who's going to put in the super fancy and complicated theorems from analysis, or from differential geometry, that you would want to have so you could then try to build on them?
Lastly, a complexity-theoretic argument. Suppose that you could automatically generate proofs for true theorems, even if only for theorems with "small" proofs (polynomial in the size of the statement of the theorem). Then you could solve SAT, since each instance of SAT is essentially a very specific theorem with a short proof. Since SAT is NP-hard, this means that you shouldn't expect to be able to generate such proofs automatically, at least not efficiently.
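(Spelling the reduction out, just as a sketch: given a CNF formula $\varphi(x_1,\dots,x_n)$, the statement $\exists x_1 \cdots \exists x_n.\ \varphi(x_1,\dots,x_n)$ is a theorem exactly when $\varphi$ is satisfiable, and a satisfying assignment checked clause by clause is a proof of size polynomial in $|\varphi|$. So a proof generator that always finds such small proofs in time polynomial in the size of the statement would decide SAT in polynomial time.)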
I get into a similar argument with Ruy every once in a while, but about machine checkable proofs rather than automated theorem proving. I'm pretty pessimistic about even being able to have machine checkable proofs, mainly for practical reasons.
(no subject)
Date: 2006-10-19 03:12 am (UTC)
All of this applies to humans too.
(no subject)
Date: 2006-10-19 03:18 am (UTC)
But to try to explain it better, I mean that I feel AI-ish problems and deciding P=NP are hard problems for somewhat the same reason: they're things that a lot of smart people have worked on without yet succeeding.
If someone tried to convince you P=NP wasn't all that hard, how would you convince them that there was a good reason to think it was hard?
(no subject)
Date: 2006-10-19 03:22 am (UTC)
I think you're right, as of right now, but I have a hunch this can and will change. Somewhere between the mid-19th century and the early part of the 20th century there was a pretty significant shift in what it meant for a proof to be correct, concomitant with a vastly better understanding of formal methods. I think we are seeing, or will see, another such change, with another increase in formality.
(no subject)
Date: 2006-10-19 04:44 am (UTC)
My point is that I can say something is easy to do, but saying is not the same as doing. When arguing that something is easy, I should clearly state what needs to be done, argue why these pieces amount to the whole, and provide a reasonable argument for my time estimates. I can say something is easy even though it takes a year's worth of hard work, doing routine tasks like retrieving information, planning, etc. Because of this, I sometimes think of "work" as being sort of like friction.