Zeilberger on two pedagogical principles, via
rweba << The first one, due to my colleague and hero Israel Gelfand, I will call the Gelfand Principle, which asserts that whenever you state a new concept, definition, or theorem, (and better still, right before you do) give the SIMPLEST possible non-trivial example. For example, suppose you want to teach the commutative rule for addition, then 0+4=4+0 is a bad example, since it also illustrates another rule (that 0 is neutral). Also 1+1=1+1 is a bad example, since it illustrates the rule a=a. But 2+1=1+2 is an excellent example, since you can actually prove it: 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2. It is a much better example than 987+1989=1989+987. >>
I should totally write an example generator embodying these principles (although I think 987+1989=1989+987 is an OK example: thanks to its (presumably) high Kolmogorov complexity, it's probably very difficult to find a competing rule of which it could be an example). Example generation, of course, is the inverse of concept inference, the idea behind programming by demonstration, which is one of the focuses of my upcoming job at CMU.
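Here is a toy sketch of what such a generator might look like (my own hypothetical code, not anything Zeilberger proposes): enumerate instances of a rule from simplest to more complex, and reject any instance that also illustrates a competing rule, exactly as in the 2+1=1+2 discussion above.

```python
def commutativity_instances(limit):
    """Yield (a, b) candidates for a + b = b + a, simplest first."""
    for total in range(2 * limit + 1):
        for a in range(total + 1):
            b = total - a
            if b <= limit:
                yield a, b

def is_degenerate(a, b):
    """Reject instances that also illustrate a competing rule."""
    if a == 0 or b == 0:   # also illustrates "0 is neutral"
        return True
    if a == b:             # also illustrates "a = a"
        return True
    return False

def simplest_good_example(limit=10):
    """Return the simplest non-degenerate instance, Gelfand-style."""
    for a, b in commutativity_instances(limit):
        if not is_degenerate(a, b):
            return f"{a}+{b}={b}+{a}"

print(simplest_good_example())  # → 1+2=2+1
```

The filter is the interesting part: a "good" example is one that is evidence for the intended rule and for nothing else, so the generator needs a list of competing rules to screen against.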
<< Buchberger introduced the White-Box Black-Box Principle, asserting, like Solomon, that there is time for everything under Heaven. There is time to see the details, and work out, with full details, using pencil and paper, a simple non-trivial example. But there is also time to not-see-the-details.>>
This sounds just like a common interactive-textbook idea: only show details on demand, which forces the reader to ask questions and keeps the book from getting boring, since everything it ever says is an answer to a question the reader asked. Our conscious mind is like a spotlight that can only focus on one thing at a time*, so we have to choose what to focus on at any given time.
<< Seeing all the details, (that nowadays can (and should!) be easily relegated to the computer), even if they are extremely hairy, is a hang-up that traditional mathematicians should learn to wean themselves from. A case in point is the excellent but unnecessarily long-winded recent article (Adv. Appl. Math. 34 (2005) 709-739), by George Andrews, Peter Paule, and Carsten Schneider. It is a new, computer-assisted proof, of John Stembridge's celebrated TSPP theorem. It is so long because they insisted on showing explicitly all the hairy details, and easily-reproducible-by-the-reader "proof certificates". It would have been much better if they would have first applied their method to a much simpler case, that the reader can easily follow, that would take one page, and then state that the same method was applied to the complicated case of Stembridge's theorem and the result was TRUE. >>
I agree that the proof is probably unnecessarily long (I haven't seen it!), but I am also referring to the formal proof, not just its exposition. If one page suffices to prove something close to it, and the same idea applies, then these mathematicians are not using a rich enough formalism: they should have used one with a meta-language that tells the readers how to unfold / generalize the simpler case, or the "proof idea", into the full proof. This meta-proof, i.e. the proof-generating program together with its inputs, can be much shorter than 30 pages. After all, it is "easily-reproducible-by-the-reader", as Zeilberger says. (Note: this only follows if you accept computationalism, i.e. that all human (mathematical) cognition can be simulated by computations of the kind that can be run on an ordinary computer, requiring only as much time and space as cognition requires of the brain. So Penrose probably wouldn't believe this inference.)
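To make the "meta-proof" point concrete, here is a toy illustration (my sketch, nothing to do with the actual TSPP proof): a fixed, short proof-generating program whose output grows with its input. It unfolds n+1 = 1+n into a chain of rewrites through a sum of ones, generalizing the 2+1 proof quoted at the top; the program plus the input "n" is far shorter than the generated proof for large n.

```python
def ones(n):
    """Render n as a sum of ones: 3 -> '1+1+1'."""
    return "+".join(["1"] * n)

def commutes_with_one(n):
    """Generate the full rewrite chain proving n+1 = 1+n."""
    steps = [f"{n}+1",
             f"({ones(n)})+1",   # unfold n into ones
             f"{ones(n + 1)}",   # drop parentheses (associativity)
             f"1+({ones(n)})",   # regroup on the right
             f"1+{n}"]           # fold n back up
    return "=".join(steps)

print(commutes_with_one(2))
# → 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2
```

For n=2 the output is exactly Zeilberger's one-line proof; for n=1989 the same five-line program emits a proof thousands of characters long. That gap between the generator and its unfolding is the sense in which a meta-proof can be much shorter than the proof.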
In short, the authors' fault lies not in "showing explicitly all the hairy details", but in using a formal language so poor that the only way to express this proof in it is with lots of "hairy details". Of course, one could reply that the authors are mathematicians, not proof-language designers. But I strongly believe that when a mathematician (or a programmer!) is forced to create something so ugly and unnatural (i.e. a "hack") because of the limitations of the language, they should at least send a "bug report" to the language developers.
Also, some interesting discussions at jcreed's.
(*) I stole this metaphor from GTD.