gusl: (Default)
Let N ~ Poisson with mean mu.
Let X|N=n ~ Binomial(n,p).

Show that X ~ Poisson with mean p*mu.

source: STAT560 - Assignment 2, Problem 2

What most people call a "formal" proof (and what appears in the solution linked) involves writing down the relevant pmfs and working out infinite sums.
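
(For reference, the standard sum computation runs along these lines:

P(X=k) = sum_{n>=k} P(N=n) * P(X=k | N=n)
       = sum_{n>=k} [e^{-mu} mu^n / n!] * C(n,k) p^k (1-p)^{n-k}
       = [e^{-mu} (p*mu)^k / k!] * sum_{n>=k} ((1-p)*mu)^{n-k} / (n-k)!
       = [e^{-mu} (p*mu)^k / k!] * e^{(1-p)*mu}
       = e^{-p*mu} (p*mu)^k / k!,

which is exactly the Poisson pmf with mean p*mu.)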

I prefer to prove this by simple intuitive logic: Read more... )
gusl: (Default)
Last week, I had an excellent chat with Terry Stewart over AIM. We covered:
* cognitive modeling
* "neural compilers"
* philosophy of science (empirical vs theoretical models in physics and cogsci, unification/integration, predictions vs explanations, reduction). See his excellent paper here.
* automatic model induction
* shallow vs deep approaches to AI
* automated scientists

It went on for more than 2 hours. It was the first time ever that I argued against a pro-formalization position, which is odd, because I normally sympathize very strongly with formalization efforts.

Our conversation:
Read more... )

Terry had to run off, but if I interpret his point correctly, he seems to be saying that 99% of the research produced in universities (including most math and CS) doesn't qualify as theory, because it is too vague and/or ambiguous to meet this standard. That seems too strong, so I must be misinterpreting him.

---

I like the idea of the following proof-checking game:
* Proposer needs to defend a theorem that he/she does or does not have a proof of.
* Skeptic tries to make Proposer contradict himself, or state a falsity.

Since formal proofs are typically too long (and unpleasant) to read in one go (see de Bruijn factor), this method only forces Proposer to formalize one branch of the proof tree. Since Skeptic can choose what branch this is, he should be convinced that Proposer really has a proof (even if it's not fully formalized).
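
To make the game concrete, here is a minimal Lisp sketch (all names made up): a proof is a nested list (claim subproof-1 ... subproof-k), and Skeptic's strategy decides which subproof to challenge, so one round of the game walks a single root-to-leaf branch.

(defun play-proof-game (node pick-branch)
  "Walk one root-to-leaf branch of the proof tree NODE. PICK-BRANCH is
Skeptic's strategy: given the list of subproofs, it returns the one to
challenge next."
  (let ((claim (first node))
        (subproofs (rest node)))
    (format t "Proposer asserts: ~a~%" claim)
    (if (null subproofs)
        (format t "Leaf reached; Skeptic checks this step directly.~%")
        (play-proof-game (funcall pick-branch subproofs) pick-branch))))

;; One round, with a Skeptic who always challenges the first step:
;; (play-proof-game '("sqrt(2) is irrational"
;;                    ("suppose sqrt(2) = a/b in lowest terms"
;;                     ("then a^2 = 2*b^2, so a is even"))
;;                    ("then b is even too, contradicting lowest terms"))
;;                  #'first)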

---

Yesterday, while thinking about Leitgeb's talk, I came to consider the possibility that mathematical AI might not need formal theories. In fact, cognitively-faithful AI mathematicians would not rely on formalized theories of math.

Of course, we would ideally integrate cognitive representations of mathematical concepts with their formal counterparts.

Of course, losing formality opens the door to disagreements, subjectivity, etc. But real human mathematical behavior is like that. Can a machine learning system learn to mimic this behavior (by discovering the cognitive representations that humans use)? How do we evaluate mathematical AI? Do we give it math tests? Tutoring tasks?
gusl: (Default)
I just found out that there is a journal about Hybrid Intelligent Systems, i.e. systems that use multiple representations, multiple methodologies, etc. It reminds me of Minsky's speech about multiple representations being important, which I very much agree with.

While I consider the work of integrating these different representations and methodologies crucial for tackling complex problems, it seems like it could be a particularly frustrating activity... similar to formalizing mathematical proofs into computer-checkable form. I'm just imagining a ton of ontology mismatches, a few of which are interesting and possibly hide paradoxes, and a great majority of which are just boring.
gusl: (Default)
Zeilberger on two pedagogical principles, via rweba

<< The first one, due to my colleague and hero Israel Gelfand, I will call the Gelfand Principle, which asserts that whenever you state a new concept, definition, or theorem, (and better still, right before you do) give the SIMPLEST possible non-trivial example. For example, suppose you want to teach the commutative rule for addition, then 0+4=4+0 is a bad example, since it also illustrates another rule (that 0 is neutral). Also 1+1=1+1 is a bad example, since it illustrates the rule a=a. But 2+1=1+2 is an excellent example, since you can actually prove it: 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2. It is a much better example than 987+1989=1989+987. >>
I should totally write an example generator embodying these principles (although I think 987+1989=1989+987 is an OK example: thanks to its (presumably) high Kolmogorov complexity, it's probably very difficult to find a competing principle of which it could be an example). Example generation, of course, is the inverse of concept inference, the idea behind programming by demonstration, which is one of the focuses of my upcoming job at CMU.
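
As a first stab at such a generator, a Lisp sketch for the commutativity rule only (hypothetical code): enumerate small instances first and reject any that also witness a competing rule, here a=b (the rule a=a) or a zero operand (the neutrality of 0).

(defun commutativity-example (&key (limit 10))
  "Return the simplest (a b) illustrating a+b = b+a and nothing else.
Starting both loops at 1 excludes the neutral element 0; the /= test
excludes instances of the rule a=a."
  (loop for a from 1 below limit do
        (loop for b from 1 below limit
              when (/= a b)
                do (return-from commutativity-example (list a b)))))

;; (commutativity-example) => (1 2), i.e. Gelfand's 1+2 = 2+1.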


<< Buchberger introduced the White-Box Black-Box Principle, asserting, like Solomon, that there is time for everything under Heaven. There is time to see the details, and work out, with full details, using pencil and paper, a simple non-trivial example. But there is also time to not-see-the-details.>>
This sounds just like a common interactive-textbook idea: only show details on demand, which forces the reader to ask questions and prevents the book from getting boring, since everything it ever says is an answer to a question the reader asked. Our conscious mind is like a spotlight that can focus on only one thing at a time*, so we have to choose what to focus on at any given moment.


<< Seeing all the details, (that nowadays can (and should!) be easily relegated to the computer), even if they are extremely hairy, is a hang-up that traditional mathematicians should learn to wean themselves from. A case in point is the excellent but unnecessarily long-winded recent article (Adv. Appl. Math. 34 (2005) 709-739), by George Andrews, Peter Paule, and Carsten Schneider. It is a new, computer-assisted proof, of John Stembridge's celebrated TSPP theorem. It is so long because they insisted on showing explicitly all the hairy details, and easily-reproducible-by-the-reader "proof certificates". It would have been much better if they would have first applied their method to a much simpler case, that the reader can easily follow, that would take one page, and then state that the same method was applied to the complicated case of Stembridge's theorem and the result was TRUE. >>
I agree that the proof is probably unnecessarily long (I haven't seen it!), but I am also referring to the formal proof, not just its exposition. If one page proves something close to it, and the same idea applies, then these mathematicians are not using a rich enough formalism: they should have used a formalism with a meta-language that tells the readers how to unfold / generalize the more specific case (the "proof idea") into the full proof. This meta-proof, i.e. this proof-generating program and its inputs, can be much shorter than 30 pages. After all, it is "easily-reproducible-by-the-reader", as Zeilberger says. (Note: this only follows if you accept computationalism, i.e. that all human (mathematical) cognition can be simulated by computations of the kind that can be run on an ordinary computer, requiring only as much time and space as cognition requires of the brain. So Penrose probably wouldn't accept this inference.)
In short, the authors' fault lies not in "showing explicitly all the hairy details", but in using a formal language so poor that the only way to express this proof in it is with lots of hairy details. Of course, one could reply that the authors are mathematicians, not proof-language designers. But I strongly believe that when a mathematician (or a programmer!) is forced to create something so ugly and unnatural (i.e. a "hack") because of the limitations of the language, they should at least send a "bug report" to the language developers.
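
To illustrate what such a proof-generating program could look like, a toy sketch (made-up names): Gelfand's proof of 2+1=1+2 above generalizes to any k, and the program that unfolds the general proof idea into the concrete rewrite chain stays a few lines long no matter how big k gets.

(defun unary (k)
  "K written as a unary sum string, e.g. 3 => 1+1+1."
  (format nil "~{~a~^+~}" (make-list k :initial-element 1)))

(defun generate-commutativity-proof (k)
  "Unfold the proof idea for k+1 = 1+k into its full rewrite chain."
  (format t "(~a)+1 = ~a = 1+(~a)~%" (unary k) (unary (1+ k)) (unary k)))

;; (generate-commutativity-proof 2) prints: (1+1)+1 = 1+1+1 = 1+(1+1)
;; The program is the same size for k = 1989; only its output grows.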

Also, some interesting discussions at jcreed's.



(*) I stole this metaphor from GTD.
gusl: (Default)
Jeremy Avigad's "Mathematical Method and Proof" presents many of the ideas that I've been writing about time and time again on this blog, under the tags formal_math and formal_ed (for my education-related formal dreams / practical epistemology).

Here are some ideas we have in common:
* we're interested in the program of creating a theory of mathematical understanding (here; I've also blogged about a program for understanding scientific understanding)
* "proof ideas" (Kohlenbach) as an application of proof theory (what about formalizing the notion of "proof identity"?)
* it would be good if theorem-provers used cognitive representations more (Kerber)
* to be finished...

Here's a thought:
the challenge of doing mathematics is often the challenge of expressing one's intuition... like remembering a word that's at the tip of your tongue.
gusl: (Default)
Pralam: A Practical Language for Mathematics
I'm proposing an easy way of formalizing mathematics, by translating mathematical statements into FOL formulas. Please comment or contribute on the wiki.
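
To give a flavor of the intended kind of translation (my example): Goldbach's conjecture, "every even number greater than 2 is the sum of two primes", becomes the FOL formula

forall n ((Even(n) & n > 2) -> exists p exists q (Prime(p) & Prime(q) & n = p + q))

where Even, Prime, + and > belong to a suitable arithmetic signature.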
gusl: (Default)
I wish I had written this:
Manfred Kerber, Martin Pollet - On the Design of Mathematical Concepts

That foundational systems like first-order logic or set theory can be used to construct large parts of existing mathematics and formal reasoning is one of the deep mathematical insights. Unfortunately it has been used in the field of automated theorem proving as an argument to disregard the need for a diverse variety of representations. While design issues play a major rôle in the formation of mathematical concepts, the theorem proving community has largely neglected them. In this paper we argue that this leads not only to problems at the human computer interaction end, but that it causes severe problems at the core of the systems, namely at their representation and reasoning capabilities.


This is the main reason I don't like the mainstream approach to formalizing mathematics. In a previous post, I compared it to doing everything in assembly language (even though the people at Nijmegen don't mainly write raw lambda-terms, but use more powerful tactics instead).

Kerber has also made slides, where he makes a point about representation with the clever problem of domino-tiling an NxN board with two squares removed at opposite corners. (The right representation makes the impossibility obvious: color the board like a chessboard; every domino covers one black square and one white one, but the two removed corners have the same color.)
gusl: (Default)
It's impossible to prove Pi_1 statements (statements of the form "for all n, P(n)", with P decidable) without axioms that talk about infinity.

If it were possible, I would like to do it right now in Lisp. I have implemented a refuter for general Pi_1 statements, but a verifier could never succeed: it would have to check infinitely many cases. Even so, I firmly believe in some proofs of Pi_1 statements. That means that I must implicitly accept axioms about infinity, much as I'd like not to.

Here's a nice theorem:
;;; All numbers with an odd number of divisors are perfect squares:
(forall #'(lambda (n) (implies (not (perfect-square-p n)) (evenp (n-divisors n)))))


And here's a nice proof:

(Lemma 1) Forall n in N, n is a perfect square IFF its prime factorization has an even power at all the primes.
(Lemma 2) Forall n in N, n-divisors(n) = PRODUCT (power(p_i,n) + 1) for each prime p_i in the factorization of n

Suppose n is not a perfect square (1)
Then there is a prime p in the prime factorization of n such that its power is odd. (by 1, L1)(2)
Since power(p,n) is odd, then power(p,n) + 1 is even. (3)
Then n-divisors(n) is even. (by 3, L2) (4)
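
(A quick numerical check of the lemmas: for n = 36 = 2^2 * 3^2, Lemma 2 gives (2+1)(2+1) = 9 divisors, an odd number, and 36 is indeed a perfect square; for n = 12 = 2^2 * 3, it gives (2+1)(1+1) = 6, an even number, and 12 is not a square. The theorem itself follows from (4) by contraposition.)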


But which axioms am I using in this proof? Are they too hard to dig out of the lemmas? My justification for the lemmas is my visual intuition. Somewhere in proving them in any kind of system, you would need an axiom about infinity, right?


in case you want the rest of the Lisp code:
Read more... )
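
A minimal sketch of what the helpers might look like, in case you don't expand the cut (this is just one way to write them, with the refuter bounded rather than running forever):

(defun implies (p q)
  "Material implication."
  (or (not p) q))

(defun perfect-square-p (n)
  "True iff N is a perfect square."
  (let ((r (isqrt n)))
    (= (* r r) n)))

(defun n-divisors (n)
  "Count the divisors of N."
  (loop for d from 1 to n count (zerop (mod n d))))

(defun forall (pred &key (bound 100000))
  "Bounded refuter: look for a counterexample to PRED among 1..BOUND.
Returns NIL (printing the counterexample) if one is found; otherwise T,
which is evidence for the Pi_1 statement, not a proof of it."
  (loop for n from 1 to bound
        unless (funcall pred n)
          do (format t "counterexample: ~a~%" n)
             (return nil)
        finally (return t)))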
gusl: (Default)
Peter Aczel argues for a plurality of conceptual frameworks, and for coherence.
Does he mean multiple foundations?

Peter Aczel, Conceptual frameworks for foundations

Abstract

I will claim that, in view of the lack of resolution between the competing conceptual frameworks of the classical period and after and in view of Gödel's incompleteness theorems and later work in proof theory, there can be no absolute conceptual framework for mathematics.

Instead, in pursuing the philosophy of mathematics one should allow for a plurality of conceptual frameworks that will vary in the kind of mathematical practise and its extent that the framework covers and the degree of coherence of the ideas involved in the framework.

I will attempt to make clear an objective notion of conceptual framework that can combine a kind of mathematical practise with a belief system and with formal systems that can represent the practise or encapsulate the beliefs. Among the key examples of conceptual frameworks will be those from the classical period.
gusl: (Default)
There are two kinds of ways of looking at mathematics... the Babylonian tradition and the Greek tradition... Euclid discovered that there was a way in which all the theorems of geometry could be ordered from a set of axioms that were particularly simple... The Babylonian attitude... is that you know all of the various theorems and many of the connections in between, but you have never fully realized that it could all come up from a bunch of axioms... Even in mathematics you can start in different places... In physics we need the Babylonian method, and not the Euclidian or Greek method.
— Richard Feynman


The physicist rightly dreads precise argument, since an argument which is only convincing if precise loses all its force if the assumptions upon which it is based are slightly changed, while an argument which is convincing though imprecise may well be stable under small perturbations of its underlying axioms.
— Jacob Schwartz, "The Pernicious Influence of Mathematics on Science", 1960, reprinted in Kac, Rota, Schwartz, Discrete Thoughts, 1992.


When I say I'm an advocate of formalization, I'm not saying we need to understand all the precise details of what we're arguing for (although this usually is the case in mathematics, at least more so than in physics). What I want to do is to formalize the partial structure that does exist in these vague ideas. Favoring a dynamic approach, I hold that we must accept formalizing theories in small steps, each adding more structure. We will need "stubs", and multiple, parallel stories that slowly evolve into a formal form. The point is that a vague, general idea *can* be formalized up to a point: this is evidenced by the fact that we humans use precise reasoning when talking about such ideas.
Again, the idea is to do what humans do, formally. If the humans' idea is irremediably vague, we don't hope to do any better, but we do hope to formalize it as far as the ideas are thought out / understood (even if vaguely). To the extent that there is something systematic (not necessarily "logical", but necessarily normative) in the way we reason and argue, it will be my goal to formalize this in a concrete form.

Regarding the normative aspect, the reason we need one is: not all ideas make sense! For fully-formalized mathematics (i.e. vagueness-free mathematics), it's easy to come up with a normative criterion: a mathematical idea or argument is fully-formalized if it corresponds to a fully-formal definition or a fully-formal proof. One of the challenges of this broader approach is to define what it means for an idea to "make sense": what does it attempt to do? What is its relation with related concepts?

The "natural" medium expression of these ideas is English. The idea is to connect English words to concepts in the formal knowledge system. We say an English sentence makes sense in a given context iff it addresses the goal / there is sound reasoning behind it (not all may be applicable).
