gusl: (Default)
Right now I'm reading William Cohen's book "A Computer Scientist's Guide to Cell Biology", and I find the delivery to be very efficient (though I have little to compare it with), probably because he takes an informational perspective and isn't shy about using CS concepts and terminology.

You know the "X for dummies" collection? I'd love to see some "X for geeks" series. It could be specialized into "X for mathematicians", "X for Computer Scientists", "X for type theory geeks", etc.

According to Sussman, the legacy of Computer Science is its formal language:
<< Computer Science is not a science, and its ultimate significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. >>


I suspect that, when most scientists speak this "language", we will see greater understanding across disciplines. This is already happening.

---

Tangentially, I'd like to see a book on how to cope with "bad" programming languages while still being a hygienist: e.g. tricks for emulating a type system, etc.
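For instance, a minimal sketch of one such trick, using Python as a stand-in for the "bad" language (the `pad` function and its annotations are just illustrative): a decorator that checks type annotations at call time, giving you a poor man's type system at function boundaries.

```python
import functools
import inspect

def typechecked(fn):
    """Poor man's type system: check annotated argument and return types at call time."""
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = fn.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{fn.__name__}: {name} should be {expected.__name__}, got {type(value).__name__}")
        result = fn(*args, **kwargs)
        expected = fn.__annotations__.get("return")
        if isinstance(expected, type) and not isinstance(result, expected):
            raise TypeError(f"{fn.__name__}: return should be {expected.__name__}, got {type(result).__name__}")
        return result
    return wrapper

@typechecked
def pad(s: str, n: int) -> str:
    return s + " " * n

pad("abc", 2)    # fine
pad("abc", "2")  # raises TypeError at the call site, not deep inside the function
```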
gusl: (Default)
Lockhart's "A Mathematician's Lament" is a sad paper about the tragedy of K-12 math education. Lockhart starts with the metaphor of a musician's nightmare in which students are extensively trained in memorizing the names of notes, but not allowed to play them, let alone compose. (h/t Vikash)

---

NerdWisdom talks about Sussman's new paper "The Role of Programming in the Formulation of Ideas", about why so many concepts in physics are difficult to learn.

<< The basic point is that our notation is often an absolute mess, caused by the fact that we use equations like we use natural language, in a highly ambiguous way. >>
<< Sussman and Wisdom do show how the ambiguous conventional notation can be replaced with unambiguous notation that can even be used to program a computer. >>


They focus on Lagrangian mechanics, but lest mathematicians feel smug, Sussman and Wisdom have similar things to say about Differential Geometry.

I hope someone formalizes things using a typed lambda calculus too.
gusl: (Default)
Let N ~ Poisson with mean mu.
Let X|N=n ~ Binomial(n,p).

Show that X ~ Poisson with mean p*mu.

source: STAT560 - Assignment 2, Problem 2

What most people call a "formal" proof (and what appears in the solution linked) involves writing down the relevant pdfs and working out infinite sums.
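For the record, the calculation is the standard Poisson-thinning algebra (my reconstruction, not the linked solution's exact wording):

```latex
\begin{align*}
P(X=k) &= \sum_{n=k}^{\infty} P(N=n)\, P(X=k \mid N=n)
        = \sum_{n=k}^{\infty} e^{-\mu}\frac{\mu^{n}}{n!}\,\binom{n}{k}\,p^{k}(1-p)^{n-k} \\
       &= e^{-\mu}\,\frac{(p\mu)^{k}}{k!} \sum_{m=0}^{\infty} \frac{\bigl((1-p)\mu\bigr)^{m}}{m!}
        \qquad (m = n-k) \\
       &= e^{-\mu}\,\frac{(p\mu)^{k}}{k!}\, e^{(1-p)\mu}
        = e^{-p\mu}\,\frac{(p\mu)^{k}}{k!},
\end{align*}
```

which is exactly the Poisson pmf with mean p*mu.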

I prefer to prove this by simple intuitive logic: ( Read more... )
gusl: (Default)
Last week, I had an excellent chat with Terry Stewart over AIM. We covered:
* cognitive modeling
* "neural compilers"
* philosophy of science (empirical vs theoretical models in physics and cogsci, unification/integration, predictions vs explanations, reduction). See his excellent paper here.
* automatic model induction
* shallow vs deep approaches to AI
* automated scientists

It went on for more than 2 hours. It was the first time ever that I argued against a pro-formalization position... because I normally sympathize very strongly with formalization efforts.

Our conversation:
Read more... )

Terry had to run off, but if I interpret his point correctly, it sounds like he's saying that 99% of the research produced in universities (including most math and CS) doesn't qualify as theory, because it is too vague and/or ambiguous and falls short of this standard. So I must be misinterpreting him.

---

I like the idea of the following proof-checking game:
* Proposer needs to defend a theorem that he/she does or does not have a proof of.
* Skeptic tries to make Proposer contradict himself, or state a falsity.

Since formal proofs are typically too long (and unpleasant) to read in one go (see de Bruijn factor), this method only forces Proposer to formalize one branch of the proof tree. Since Skeptic can choose what branch this is, he should be convinced that Proposer really has a proof (even if it's not fully formalized).
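A minimal sketch of the game loop in Python, with a toy proof tree (the claims, justifications, and the random Skeptic are placeholders, not a real proof format):

```python
import random

# A proof is a claim plus subgoals; only the leaves carry formal justifications.
proof = {
    "claim": "theorem",
    "subgoals": [
        {"claim": "lemma A", "subgoals": [], "justification": "formal proof of A"},
        {"claim": "lemma B", "subgoals": [
            {"claim": "B1", "subgoals": [], "justification": "formal proof of B1"},
            {"claim": "B2", "subgoals": [], "justification": None},  # Proposer is bluffing here
        ]},
    ],
}

def challenge(node, skeptic=random.choice):
    """Skeptic picks one branch at each level; Proposer only has to formalize that path."""
    print("Skeptic challenges:", node["claim"])
    if not node["subgoals"]:
        if node["justification"] is None:
            print("Proposer cannot justify", node["claim"], "-> Skeptic wins.")
            return False
        print("Proposer produces:", node["justification"])
        return True
    return challenge(skeptic(node["subgoals"]))

challenge(proof)  # repeated plays sample different branches of the proof tree
```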

---

Yesterday, while thinking about Leitgeb's talk, I came to consider the possibility that mathematical AI might not need formal theories. In fact, cognitively-faithful AI mathematicians would not rely on formalized theories of math.

Of course, we would ideally integrate cognitive representations of mathematical concepts with their formal counterparts.

Of course, losing formality opens the door to disagreements, subjectivity, etc. But real human mathematical behavior is like that. Can a machine learning system learn to mimic this behavior (by discovering the cognitive representations that humans use)? How do we evaluate mathematical AI? Do we give it math tests? Tutoring tasks?

Leitgeb

Mar. 7th, 2007 05:51 pm
gusl: (Default)
I just had a half-hour chat with Hannes Leitgeb.

At noon, he gave a very interesting talk on formalizing the notion of informal provability (in which he expressed the Penroseish view that human intelligence may mean that some steps in mathematical proofs are unformalizable, or at least that the AI required to proof-check a single such step would be very hard to implement). If that is the case, it would create an interesting application for human computation.

A cognitive science of math must be about cognitive representations: what data structures do people have in their heads? (It should also address questions like "Why are definitions useful?") He suggests using these data structures as a way of modeling intuition (a belief is intuitive if it's a frequently-observed pattern). He used Steve's face as an example of a cognitive representation. The way we draw and talk about diagrams illustrates what features are relevant (i.e. essential to our cognitive representations), and which ones are irrelevant (e.g. the size of the drawing).

Our private conversation was mostly about his other work. I learned that Carnap's "The Logical Structure of the World" is about translating all statements into statements about sense experience. This reminded me of Hartry Field's "Science without Numbers", which is about doing science without abstracta. They are both analogous to cut-elimination (i.e. translating your proofs so that they don't use definitions). Apparently, for Carnap, the focus is not as much on ontology as for Field.

He also referenced some interesting work about proof transformations directed at changing the genus of the natural deduction proof graph, although I don't remember how this fits with the rest of the talk.

--

I told him about my ideas regarding formalized math.

Given a corpus of computer-formalized proofs, can we learn to automatically make them more readable, either by summarizing them, or by adding higher-level structure (as desired by people in MKM)? (in both cases, an interesting machine learning project)

Can we teach computers to read math papers? (i.e. either into a formal system, or into informal human-like representation... although we would ideally like to integrate these two)

In both of the above questions, I think the most important factor is: how much background knowledge do we need to give the computer? This is the same question that we face with the SimStudent project.

It seems no one is working on automatic reading of math texts. This work would involve reconstructing human-like cognitive representations of mathematics. Undergraduate-level number theory is probably a good place to start. Introductory group theory would probably be boring, because most proofs are almost formal already.

It seems to me that making math-reading AI is hard because formalization of math is hard.
Generally speaking, formalization is hard because of:
* the ubiquity of mathlore: retrieving the needed facts is time-consuming.

Maybe what I need is to do task analysis on formalization work. But that may be too hopeful.
gusl: (Default)
If you haven't already, you should play Battleground God.

When I have some free time, I should formalize the "hits" (contradictions).

Can we automatically generate games like this, given some knowledge about a particular domain? Would they count as educational software?

Can we use this idea to make a tool for refining expert beliefs, or as a formalization tool?
gusl: (Default)
There is a cognitive style I like to call algebraic-mindedness. [livejournal.com profile] jcreed calls it "analogical mindedness".

I had just explained Landsburg's account of price discrimination in books: hard-cover vs soft-cover. Even though they cost the same to produce, and *everyone* prefers hard-cover, the producer has an interest in making soft-cover books, so that they can charge more from those customers who are willing to pay more, while not losing those who are not. (This only works because copyright law gives the producer a monopoly. I predict that ending copyright laws would drastically reduce the number of soft-cover books produced.)

Labour unions are the "dual" of corporate collusion to keep the salaries low (supposing this exists).

So he asked the analogical question: "what is the dual of price discrimination?"

There is no economic reason for asking this question, other than perhaps as a way to find out how to fight back against price discrimination.

The trait of algebraic-mindedness tends to lead one away from the original goal of the investigation, and towards improved understanding of the situation (a tighter knowledge network). The algebraic thinker is interested in general results: when looking at a specific situation, he wants to identify the minimum set of reasons that lead to the observed result. The algebraic thinker is the programmer who, when given a debugging task, insists on refactoring the code first. It is the physicist who will try to resolve paradoxes (philosophical progress) before making "ordinary" progress.

While making one slow at individual tasks, a reasonable amount of algebraic-mindedness tends to pay off in the long term: the algebraic thinker uses his numerous analogical bridges to "see" things that ordinary people can't. Math makes you smarter because it produces chunks of purely logical knowledge, and pure logic is extremely reusable: theorems can be instantiated on anything exhibiting the same mathematical structure. The algebraic thinker is the person who tries to extract the mathematical content out of ordinary situations, thereby creating this general knowledge.

While discussing topic A, an algebraic thinker T is known for bringing up an apparently unrelated topic B which he knows more about, because (1) A and B are analogically linked in T's mind, and (2) T's knowledge of B lets him draw conclusions about A for free (or, when the analogy is incomplete and needs to be verified, not for free, but still more cheaply).

Questions algebraic thinkers tend to ask:
* if A and B are similar, why does A exhibit property X while B doesn't?

hmm... I'm thinking of splitting algebraic thinking into:
* analogical thinking
* mathematical thinking

The difference between these two kinds of thinking is that analogical thinking is about mappings between specific instances, while mathematical thinking is about mapping between abstract and concrete instances.

The mathematical idea of a situation X corresponds to the equivalence class of all possible situations analogous to X. (This type of analogy is an equivalence relation)
gusl: (Default)
I have decided to read a mathematical textbook over the holidays, something I haven't done in a long time.

I keep wanting to click on terms that I don't recognize, to go to the definition.

Here's a useful programming/text-learning project: create a system that, given a mathematical document (LaTeX, PDF, scanned text), automatically turns it into a mathematical hypertext linking each technical term to its definition.
You can use a wiki to do the linking, as long as you have WikiTeX enabled to do the rendering. This should be a straightforward project if the original document is already in LaTeX. AFAIK, you can't put links on TeX-rendered images, but fortunately, terms that have definitions tend to be the kind that don't need any TeXing anyway.
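A first pass at just the linking step, assuming the source is already LaTeX and the defined terms are available as a glossary (the glossary entries and label scheme below are made up for illustration):

```python
import re

# Hypothetical glossary: term -> label of its definition in the source.
glossary = {
    "normal subgroup": "def:normal-subgroup",
    "quotient group": "def:quotient-group",
}

def linkify(tex: str) -> str:
    """Wrap each occurrence of a defined term in a hyperref link to its definition."""
    for term, label in sorted(glossary.items(), key=lambda kv: -len(kv[0])):  # longest terms first
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        tex = pattern.sub(lambda m: r"\hyperref[%s]{%s}" % (label, m.group(0)), tex)
    return tex

source = r"Every quotient group arises from a normal subgroup."
print(linkify(source))
# Every \hyperref[def:quotient-group]{quotient group} arises from a \hyperref[def:normal-subgroup]{normal subgroup}.
```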

For me, the next steps would be:
* variable substitution: the text lets the reader plug in values. This would be useful for understanding theorems.
* recognize whether a reader-given object satisfies a definition (along with a "WhyNot?" tool)

I think the main difficulty here is interpreting the written mathematical text into a formal language. Most people reading this are probably thinking about difficult questions, like which foundation they would choose. But the definitions that I have in mind could be decided with Prolog.
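For example, checking whether a reader-given relation is an equivalence relation, together with a "WhyNot?" explanation, really is decidable by a few clauses of logic programming; here is a Python stand-in for the Prolog version (the example relation is invented):

```python
from itertools import product

def why_not_equivalence(domain, relation):
    """Return the reasons the relation fails to be an equivalence relation (empty list = it is one)."""
    R = set(relation)
    reasons = []
    for a in domain:
        if (a, a) not in R:
            reasons.append(f"not reflexive: ({a},{a}) missing")
    for a, b in R:
        if (b, a) not in R:
            reasons.append(f"not symmetric: ({a},{b}) in R but ({b},{a}) is not")
    for (a, b), (c, d) in product(R, R):
        if b == c and (a, d) not in R:
            reasons.append(f"not transitive: ({a},{b}) and ({c},{d}) in R but ({a},{d}) is not")
    return reasons

print(why_not_equivalence({1, 2, 3}, {(1, 1), (2, 2), (3, 3), (1, 2)}))
# ['not symmetric: (1,2) in R but (2,1) is not']
```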

I should read OpenMath applications to see what people are up to. They have whole conferences on this stuff, so we should be seeing some progress in this area. I have to wonder why we don't yet see any interactive math books online (advanced, theorem-based math).

More ideas:
* interpreting diagrams as mathematical objects, which can (fail to) satisfy definitions.
gusl: (Default)
Herb Simon and his student Pat Langley have long been interested in comparing human learning and machine learning side by side.

Much of our AI-based educational software is necessarily in domains where machines do better than human students (e.g. equation-solving). In domains where students are better, e.g. playing soccer or learning natural languages, we cannot create full-fledged cognitive tutors (as these require the system to judge correctness on novel problems). This doesn't mean that computers can't support the learning process, by creating tutors for skills that transfer (either vertically or horizontally) to the target skill: e.g. I can imagine soccer players benefiting from biofeedback.

We can create support for some subtasks essential for learning (e.g. vocabulary building), but the computer's knowledge of words will be "dead" knowledge, necessarily disembodied from its natural function (i.e. communicating with humans, with no domain restrictions).

Humans and machines speak different languages. This is to be expected, since they evolved under different constraints:
* environment: real world vs. simulated world
* hardware architecture: wetware vs. silicon

OTOH, the way programs do things can't be that far from the way humans do things, since they are programmed by humans, after all. I can imagine different programming styles, in which one implements more or less cognitively-plausible ways of solving the problem.

Much of the challenge of HCI and ITS is to formalize human concepts so that humans and machines can communicate more easily. By formalizing concepts that people actually think about, we make it possible to give the computer high-level instructions, instead of having to worry about annoying technicalities.

It has long bothered me that computers are generally rather unreflective, unaware of their output. Even if you give the computer a screen capture, its perception of it is extremely raw and low-level, because it doesn't know how to interpret menus, widgets, etc., which is rather ironic: it knows how to make these widgets, but not how to read them. It seems that computers could learn a lot from feedback: if they could put themselves in the users' place, they would know when they're being a bad, bad computer.

Is anyone aware of user-simulators that try to figure out how to use a new interface? Is this a standard method of evaluating UI designs?

---

Another idea that I've become familiar with in my recent experience has been that of a behavior recorder: it's basically like a key-and-mouse logger, but for higher-level steps.

By applying machine learning to behavior-recorder data, we can figure out how people behave, i.e. learn a cognitive model. Tagging these behaviors into higher-level chunks should improve this.
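As a toy illustration (the step names below are invented), even a bigram model over recorded high-level steps already yields a crude behavioral model:

```python
from collections import Counter, defaultdict

# Hypothetical behavior-recorder logs: one sequence of high-level steps per student.
logs = [
    ["read-problem", "define-variable", "write-equation", "solve", "check"],
    ["read-problem", "write-equation", "solve", "check"],
    ["read-problem", "define-variable", "write-equation", "solve"],
]

# Estimate P(next step | current step) from the logs.
transitions = defaultdict(Counter)
for log in logs:
    for current, nxt in zip(log, log[1:]):
        transitions[current][nxt] += 1

for step, nexts in transitions.items():
    total = sum(nexts.values())
    print(step, "->", {n: round(c / total, 2) for n, c in nexts.items()})
# e.g. read-problem -> {'define-variable': 0.67, 'write-equation': 0.33}
```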

Now, take video games. There should be no shortage of behavior data for them. In fact, there are probably myriads of cognitive phenomena manifested in this data. To paraphrase Herb Simon, there is no shortage of data: there is only shortage of expert attention.
gusl: (Default)
What is the best way of memorizing some information x?

For computers, this is the compression problem. We have the usual trade-off between space and time. At one extreme (ignoring time), the space occupied is the Kolmogorov Complexity of x (relative to the computer, and the memories it already has), written KC(x).

If humans are just a different kind of machine, the optimal encoding's length will be the KC relative to this machine. Can we make such a machine from ACT-R?

Since KC is incomputable, computable approximations are found through a universal program search: what is the shortest program we can find whose output is x? The analogous question for humans is: what is the simplest way of presenting this information?
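As an aside, one cheap computable stand-in (an upper bound, not the universal search itself) is just the length of the output of an off-the-shelf compressor:

```python
import os
import zlib

def kc_upper_bound(x: bytes) -> int:
    """Length of zlib's encoding of x: a crude upper bound on (one version of) KC(x)."""
    return len(zlib.compress(x, 9))

print(kc_upper_bound(b"ab" * 500))       # highly regular: compresses to a handful of bytes
print(kc_upper_bound(os.urandom(1000)))  # essentially incompressible: roughly as long as the input
```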

If we do a "cognitive TM search", won't the output be a near-optimal tutoring program? Maybe not: unlike the case with computers, tutors need to worry about retention.


I have a more ambitious dream of automatically creating a tutor from a formal theory. One way is to encode these formal theories as ordinary memories.


formalizing my analogy (this needs to be refined):
human memory <-> computer memory
compressed data <-> axioms of the theory

Of course, people aren't meant to learn theories only as formal systems. They need to use them concretely too.


See Phil Pavlik's work on optimizing memorization schedules.
gusl: (Default)
GusLacerda: I went to a talk by Pfenning on Friday
GusLacerda: and asked him about who here does AR [automated reasoning]
GusLacerda: and the answer was disappointing
GusLacerda: basically, people who are into type theory
aCleverEpithet: yes
GusLacerda: I told him I was interested in "algorithm discovery"
GusLacerda: this was 50% bullshit
GusLacerda: because "algorithm discovery" is a non-existent research area
GusLacerda: but he seemed interested
GusLacerda: I explained the idea... which he and one student seemed to get excited about
aCleverEpithet: it is interesting
GusLacerda: but his conclusion, as far as collaborators go, was: Herb Simon is dead...
[note: to be fair, he did tell me to check out Raul Valdes-Perez, which is a very good lead. Is this guy actually at CMU?]
aCleverEpithet: hard, but interesting
aCleverEpithet: this is true
GusLacerda: everyone says this
GusLacerda: but that's like saying "AI is hard"
aCleverEpithet: thats a good point actually
GusLacerda: I am happy to make incremental progress
GusLacerda: record people doing it
GusLacerda: figure out how they're doing it
GusLacerda: formalize it
GusLacerda: and implement it
GusLacerda: that's it!
GusLacerda: of course, this only works for reasoning-based tasks
GusLacerda: you can't necessarily figure out what people are doing if it involves unconscious pattern-recognition
aCleverEpithet: yes, i wonder how much that actually comes into play
aCleverEpithet: if it is more or less than we think
GusLacerda: hard to say
GusLacerda: certainly, a lot of the things we find hard to formalize in math, are *not* due to such processes
GusLacerda: but rather due to us not having cognitively-faithful representations
GusLacerda: like diagrammatic reasoning, which we're making progress on
GusLacerda: a lot of math and science is reasoning
aCleverEpithet: definitely
GusLacerda: formalizing it can't be bad
GusLacerda: maybe 100% of algorithm discovery is reasoning
aCleverEpithet: i can't think of any examples of algorithm discovery that are not just reasoning
GusLacerda: me neither
GusLacerda: although in some cases, spatial reasoning is needed
GusLacerda: or maybe that's just mathematical reasoning
aCleverEpithet: yeah, im not sure
aCleverEpithet: i think it's a kind of mathematical reasoning
GusLacerda: well, algorithm discovery IS mathematical discovery
...
GusLacerda: anyway, this stuff is an interesting mix of theoretical and empirical reasoning
[...just like scientific discovery and ordinary mathematical discovery]
gusl: (Default)
Formalization may be my biggest intellectual cause.

I recently asked someone to comment on my manifesto, A QED Project for Science: The Promises of Computer-Level Formalization. I just replied, addressing his criticisms:

<< I guess my gut instinct is to say that half of the point of science is to figure out what logic such a proof should be written in. Scientific theories, by definition, are not complete. >>

I disagree with this. There are many phenomena that we understand
quite well today, and some theories are pretty stable, e.g. Newtonian
mechanics. Even though it is well-understood, this knowledge does not
have a formal representation in the computer, which means it's not
very usable if you're trying to do advanced information retrieval,
etc. However, to the extent that scientists understand the theory, it
*is* represented formally inside their heads...
Read more... )
gusl: (Default)
My view of natural language: there is a huge communicative gap between any two human beings: our thoughts need to be serialized in order to fit through the information bottleneck of our communicative channel, which is 1D, slow and noisy. Things are not so bad, because one can usually (but not reliably) convey the concepts that one is thinking of, given enough time and a collaborative communication partner. But we can and should overcome these limits.

Consciousness is a spotlight. So is speech. I can give you (or myself) a tour of the structures in my mind (associations, explanations, complex theories [1][2]) by means of a narrative. But wouldn't it be great to do it in parallel, as happens in "The Matrix" [3]? This communicative limitation of ours is analogous to having PCs that can only copy CDs the way VCRs used to copy VHS tapes: by playing the whole thing. [4]

I have a lot of concepts and background knowledge in common with the rest of humanity, but unless they speak one of a handful of languages, we will not be able to communicate. It's like trying to look something up in a book where the index is gibberish.

But even when two people speak the same language, they are restricted to using words and constructs that are available in their common languages. More fundamentally, this 1D channel doesn't allow one to convey associations and feelings that appear briefly in one's consciousness (thought is faster than speech), although filmmakers sometimes do a pretty good (if extremely expensive) job of it.

But why does the text medium copy the auditory medium? Legacy. Text has the potential to be much more. I believe that we now have the potential of inventing visual languages that provide much more natural representations of our thoughts, for the sake of better human-human interaction.




1- someone once said that finishing a complex mathematical proof is like arriving at the end of long and winding road: even if you can retrieve each step as requested, you can't see it all at once.

2- as represented by argument maps.

3- To be nitpicky, Neo's auto-lessons taught him procedural, not declarative knowledge.

4- this analog analogy also models the introduction of noise. This is also analogous to the way that revisited memories get distorted.
gusl: (Default)
Programmed learning can be a good method for people with short attention spans. It's a brute-force way of making progress, when one lacks or is unable to follow through with a study plan.

[livejournal.com profile] metaeducat10n gave me this concept on this previous entry of mine about "smart text", where I suggested that it should be easy to automatically generate study questions from source text.

At a basic level, the system can only do some simple NLP, and can only generate "regurgitate"-type questions.
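At that basic level, the generator might be little more than cloze deletion over the source sentences; a deliberately naive sketch (the sentence splitting and "keyword" choice here are simplistic placeholders):

```python
import re

def cloze_questions(text, min_len=6):
    """Turn each sentence into a fill-in-the-blank question by hiding its longest 'keyword'."""
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z-]+", sentence)
        candidates = [w for w in words if len(w) >= min_len]
        if not candidates:
            continue
        keyword = max(candidates, key=len)
        questions.append((sentence.replace(keyword, "_____", 1), keyword))
    return questions

text = "The mitochondrion is the powerhouse of the cell. Ribosomes translate messenger RNA into protein."
for question, answer in cloze_questions(text):
    print(question, " [answer:", answer + "]")
```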

At a higher level, the system could generate questions that demand some reasoning (deduction, abduction) from the student, using common-sense facts (for this, we need some integration with a common-sense database). This way, one can test a deeper level of understanding and integration with other knowledge.

Of course, we're going to run into SW-hard (semantic-web hard) problems.

I wonder if a corpus of questions and answers about text could be the basis for understanding of the domain. Could Google Answers become a good foundation for truly intelligent question answering?
gusl: (Default)
Here's a process for debugging one's thoughts:
* state your beliefs (data) in a formal language
* state hypotheses (i.e. something of the form forall x_1,..x_n, phi(x_1,...,x_n), e.g. explaining patterns in the data). (ignore all statistical connotations of the word "pattern": I'm talking about logical relationships)
* computer checks for consistency between beliefs and hypotheses, by trying to prove contradictions and absurdities from the premises above... also warn if unnecessary concepts are used, etc.
* now the user can revise hypotheses / concept definitions himself, or the machine could suggest a way out.

Go back to the first step to continue your theory-building / exploration for the purposes of understanding the stuff.
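A bare-bones version of the consistency-checking step, restricted to propositional beliefs (the example beliefs are invented; real use would want a first-order prover or an SMT solver):

```python
from itertools import product

# Beliefs and hypotheses as predicates over truth assignments to named atoms.
atoms = ["raining", "street_wet"]
constraints = {
    "belief: it is raining":                 lambda v: v["raining"],
    "belief: the street is not wet":         lambda v: not v["street_wet"],
    "hypothesis: rain makes the street wet": lambda v: (not v["raining"]) or v["street_wet"],
}

def find_model(constraints, atoms):
    """Brute-force search for a truth assignment satisfying every stated belief and hypothesis."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(check(v) for check in constraints.values()):
            return v
    return None

model = find_model(constraints, atoms)
if model:
    print("consistent; witness:", model)
else:
    print("inconsistent: some belief or hypothesis needs revising")
```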

I think this would be similar to Socratic dialogue.
gusl: (Default)
One of the central ideas motivating my research is expressed by Herb Simon in the following quote:

Q: So you have moved from field to field as you could bring new tools to bear on your study of decision making?

A: I started off thinking that maybe the social sciences ought to have the kinds of mathematics that the natural sciences had. That works a little bit in economics because they talk about costs, prices and quantities of goods. But it doesn't work a darn for the other social sciences; you lose most of the content when you translate them to numbers.

So when the computer came along -- and more particularly, when I understood that a computer is not a number cruncher, but a general system for dealing with patterns of any type -- I realized that you could formulate theories about human and social phenomena in language and pictures and whatever you wanted on the computer and you didn't have to go through this straitjacket of adding a lot of numbers.


As Dijkstra said, Computer Science is not about computers. It is about processes.

It is a very common error for people to make an argument like the following:
Stock prices have to do with human behavior. Therefore they are unpredictable. It's not like physics, where computers and mathematical models are useful.


I go all "oy vey" whenever I hear arguments like this... and then, they accuse me of reductionism.

My mom doesn't like it when I interview doctors trying to formalize their knowledge about my problem, so that I can truly understand it. At the same time, she says (non-sarcastically) that I should go into biomedical research.
gusl: (Default)
This is something I wrote at [livejournal.com profile] metaeducat10n. I'm not blockquoting it, because it's undergoing editing:


Transparency (check out the different meanings!) is an interesting metaphor for formalization.

Non-formalized processes are not transparent: you can't see inside the mind of the decision-maker, even if that means your own mind (although in the latter case, you can work towards slowly seeing it again).

Formalization is about making unconscious processes (and the implicit knowledge used in them) conscious. It is about establishing a good foundation for arguments and reasonings, as they may, upon inspection, turn out to be ill-founded. Formalization, like transparency, makes this inspection objective (since anyone can look at it, anyone can judge it), and easier to perform (because of the memoization provided by formalization, retrieval is quick).

--

Map-making is memoizing. In formalization, it can memoize either one's perceptions (for the writer who writes in a formal language) or the interpretation process (for a reader/interpreter of an informal text).

Formalization, like mapmaking, has two advantages:
* it makes future retrieval of the content quick and inexpensive.
* by shining light on a whole picture at the same time, it allows one to see the picture under different perspectives (in which one can check if certain supposedly-straight lines really are straight, thanks to free rides) and different zoom levels (where one can check whether the formalized local impressions are globally coherent). This way, one refines one's impressions, by correcting the incoherencies (e.g. contradictions, as well as weaker forms of absurdity).

Imagine someone who decided to formalize his knowledge about the geometry of a sculpture from memory, and ended up drawing an Escher cube. Upon seeing his whole cube, such a person would immediately revise his interpretations/formalizations of his local impressions, because they cannot all hold together.
gusl: (Default)
Zeilberger on two pedagogical principles, via [livejournal.com profile] rweba

<< The first one, due to my colleague and hero Israel Gelfand, I will call the Gelfand Principle, which asserts that whenever you state a new concept, definition, or theorem, (and better still, right before you do) give the SIMPLEST possible non-trivial example. For example, suppose you want to teach the commutative rule for addition, then 0+4=4+0 is a bad example, since it also illustrates another rule (that 0 is neutral). Also 1+1=1+1 is a bad example, since it illustrates the rule a=a. But 2+1=1+2 is an excellent example, since you can actually prove it: 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2. It is a much better example than 987+1989=1989+987. >>
I should totally write an example generator embodying these principles (although I think 987+1989=1989+987 is an ok example: thanks to its (presumably) high KC, it's probably very difficult to find a competing principle of which it could be an example). Example generation, of course, is the inverse of concept inference, the idea behind programming by demonstration, which is one of the focuses of my upcoming job at CMU.
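A toy version of such a generator for the commutativity example (the list of "competing rules" is the part that would need real thought; these two are just the ones from the quote):

```python
from itertools import product

# Rules an example of a+b = b+a should NOT also illustrate (per Gelfand/Zeilberger).
competing_rules = [
    lambda a, b: a == b,            # would also illustrate a = a
    lambda a, b: a == 0 or b == 0,  # would also illustrate that 0 is neutral
]

def simplest_nontrivial_example(max_n=20):
    """Smallest (by a+b, then by max) pair illustrating commutativity and nothing else."""
    candidates = [
        (a, b) for a, b in product(range(max_n), repeat=2)
        if not any(rule(a, b) for rule in competing_rules)
    ]
    return min(candidates, key=lambda ab: (ab[0] + ab[1], max(ab)))

a, b = simplest_nontrivial_example()
print(f"{a}+{b} = {b}+{a}")   # -> 1+2 = 2+1
```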


<< Buchberger introduced the White-Box Black-Box Principle, asserting, like Solomon, that there is time for everything under Heaven. There is time to see the details, and work out, with full details, using pencil and paper, a simple non-trivial example. But there is also time to not-see-the-details.>>
This sounds just like a common interactive-textbook idea: only show details on demand, which forces the reader to ask questions and prevents the book from getting boring, since everything it ever says is an answer to a question the reader asked. Our conscious mind is like a spotlight that can only focus on one thing at a time*, so we have to choose what to focus on at any given time.


<< Seeing all the details, (that nowadays can (and should!) be easily relegated to the computer), even if they are extremely hairy, is a hang-up that traditional mathematicians should learn to wean themselves from. A case in point is the excellent but unnecessarily long-winded recent article (Adv. Appl. Math. 34 (2005) 709-739), by George Andrews, Peter Paule, and Carsten Schneider. It is a new, computer-assisted proof, of John Stembridge's celebrated TSPP theorem. It is so long because they insisted on showing explicitly all the hairy details, and easily-reproducible-by-the-reader "proof certificates". It would have been much better if they would have first applied their method to a much simpler case, that the reader can easily follow, that would take one page, and then state that the same method was applied to the complicated case of Stembridge's theorem and the result was TRUE. >>
I agree that the proof is probably unnecessarily long (I haven't seen it!), but I am also referring to the formal proof, not just its exposition. If one page proves something close to it, and the same idea applies, then these mathematicians are not using a rich enough formalism: they should have used a formalism with a meta-language that tells the readers how to unfold / generalize the more specific case or the "proof idea" into the full proof. This meta-proof, i.e. this proof-generating program and its inputs, can be much shorter than 30 pages. After all, it is "easily-reproducible-by-the-reader", as Zeilberger says. (Note: This only follows if you accept computationalism, i.e. that all human (mathematical) cognition can be simulated by computations of the kind that can be run in an ordinary computer, requiring only about as much time and space as cognition requires of the brain. So Penrose probably wouldn't believe this inference.)
In short, the authors' fault lies not in "showing explicitly all the hairy details", but in using a formal language so poor that the only way to express this proof in it is with lots of "hairy details". Of course, one could reply that the authors are mathematicians, not proof-language-designers. But I strongly believe that when a mathematician (or a programmer!) is forced to create something so ugly and unnatural (i.e. a "hack") because of the limitations of the language, they should at least send a "bug report" to the language developers.

Also, some interesting discussions at [livejournal.com profile] jcreed's.



(*) I stole this metaphor from GTD.
gusl: (Default)
I think a lot of my bugs come from needing to focus on the difficult essential idea, and thereby overlooking the details.

I often write something intentionally provisory, and it becomes a bug when I forget to change it later. I should get into the habit of making it obvious which pieces of code need to be changed before running. Maybe I should append "----" to those variable names or something. I could program my IDE to stop me from evaluating any code containing this.
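One Python-flavored version of this trick (in Lisp it would presumably be a macro or a read-time marker): a sentinel value that fails loudly the moment the provisional code is actually exercised. The names here (`TIMEOUT_SECONDS`, `fetch`) are purely illustrative.

```python
class Provisional:
    """Placeholder for code I intend to replace; any use of it fails loudly."""
    def __init__(self, note=""):
        self.note = note
    def _refuse(self, *args, **kwargs):
        raise NotImplementedError(f"provisional code still in place: {self.note}")
    __call__ = __getattr__ = __add__ = __iter__ = _refuse

TIMEOUT_SECONDS = Provisional("pick a real timeout before shipping")

def fetch(url):
    # forgetting to replace the placeholder now fails at run time instead of silently misbehaving
    return f"GET {url} with timeout {TIMEOUT_SECONDS + 0}"

fetch("http://example.com")  # raises NotImplementedError: provisional code still in place: ...
```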

---

A programming language is often a tool to think in / with: it can express concepts and ideas in succinct ways, often much better than English.

I would probably be a more accurate programmer if I wrote more pseudo-code before writing in real code. That way, I would be separating the conceptual work from the low-level fill-in-the-blank work.

BUT my whole philosophy about formalization would suggest that pseudo-code need not be essentially different from real code. "if a proof is obvious, then it should be trivial to formalize!". We all know this is not true, though.

Writing my pseudocode in Lisp (i.e. thinking in code) is very tempting, but also confusing, because the later reader (probably the me of 5 minutes later) will not know that this was meant to be pseudo-code. Maybe there are tricks I can do with syntax coloring.

---

This is reminding me about something Hofstadter wrote: when the meta-thought is expressed in the same language as the thought, confusion can ensue.

... maybe some deep thoughts about this will come later.

smart text

May. 10th, 2006 06:59 pm
gusl: (Default)
Wouldn't it be great if everything you wrote automatically came with a (dynamic) comprehension test? ...if the author didn't even have to (explicitly) write the test, but it got automatically generated by "AI", with the help of semantic tags, with pieces of text being linked to logical statements?

This way, I would never need to read past a section that I didn't understand.
