[personal profile] gusl
I've been working on a difficult programming puzzle whose main difficulty lies in computing a function efficiently. The specification is therefore much simpler than the solution. This should make it a perfect case for applying automatic programming: what is required is "cleverness", not "knowledge". (Here "knowledge" excludes heuristics, mathematical theorems, and the like — all of which *are* useful — since those have low Kolmogorov complexity, being consequences of just a few axioms.)
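To make the spec/solution gap concrete, here is a hypothetical stand-in for the kind of puzzle I mean (Fibonacci is my illustrative choice, not the actual puzzle): the specification is a direct transcription of the definition and trivially correct, while the solution needs a small insight to be fast.

```python
def fib_spec(n):
    """Executable specification: obviously correct, exponential time."""
    return n if n < 2 else fib_spec(n - 1) + fib_spec(n - 2)

def fib_fast(n):
    """Efficient solution: linear time. The 'cleverness' is iterating
    over the pair (F(k), F(k+1)) instead of recursing on the definition."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The two agree on everything we can afford to check with the spec:
assert all(fib_spec(n) == fib_fast(n) for n in range(15))
```

The point is that `fib_spec` is the short, "simple" object; finding `fib_fast` from it is the part that requires cleverness.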

It reminds me of something like Fermat's Last Theorem: easy to state, hard to prove. It's also easy to write an algorithm that eventually proves it, but very hard to make it output the proof before the end of the universe: just do a breadth-first proof search through the consequences of the axioms of ZFC (if we don't want to worry about interpretations, or waste time proving non-computable results, then I think substituting the lambda calculus for ZFC will do). The "creativity" lies in finding a representation with which the proof search becomes computationally easy. (Could Lenat's and Colton's "automated mathematicians" be applied to automatic programming? Kerber is probably interested in the automated design of mathematical concepts: is this applicable?)
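The "breadth-first proof search" above can be sketched with a toy string-rewriting system standing in for ZFC; the axiom and rules here are invented purely for illustration. A "proof" of a string is a rewrite path from the axiom, and even in this tiny system the frontier grows fast enough to show why the naive search is hopeless at scale.

```python
from collections import deque

# Toy "logic": axiom "I"; rules double a string, or rewrite the first
# occurrence of "III" to "U". (An arbitrary MU-puzzle-style system.)
RULES = [
    lambda s: s + s,                     # x -> xx
    lambda s: s.replace("III", "U", 1),  # III -> U
]

def bfs_prove(goal, axiom="I", limit=100_000):
    """Breadth-first search for a rewrite path from axiom to goal."""
    frontier = deque([(axiom, [axiom])])
    seen = {axiom}
    while frontier and len(seen) < limit:
        s, path = frontier.popleft()
        if s == goal:
            return path
        for rule in RULES:
            t = rule(s)
            # Prune no-op rewrites and absurdly long intermediate strings.
            if t != s and t not in seen and len(t) <= 2 * len(goal):
                seen.add(t)
                frontier.append((t, path + [t]))
    return None  # no proof found within the budget
```

For example, `bfs_prove("UI")` finds the derivation I → II → IIII → UI. The search is complete in principle but blind: nothing in it knows *which* rewrites move toward the goal, which is exactly the gap a better representation is supposed to close.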

The smart way to tackle such problems, therefore, is to do a "representation search". We can implement heuristics used by human mathematicians (Colton and Pease have worked on "Lakatos-Style Reasoning").

Can ordinary automated theorem provers find "creative" proofs, or "proofs without words" of the sort found here? Why not? Because they are missing the representations. Jamnik's work could be used to add diagrammatic representations to such automated mathematicians.

This reminds me of Kasparov vs Deep Blue. It seems that Deep Blue won by "brute force". Not brute force alone, of course: without the tons of hours spent on making it smarter, all those computations would still have gotten them nowhere. But a "fair" game would have been one in which Deep Blue's computational resources were limited: you're only allowed so many megaflops or whatever. While it is hard to quantify the computational power of Kasparov's brain (in fact, it's probably a hybrid analog-digital computer), accepting the outcome of the match as indicating that Deep Blue is a "better chess player" than Kasparov is like saying that a retarded giant is a "better fighter" than a tiny man, when "fairness" requires putting them in different weight categories.


Notes:

[livejournal.com profile] fare suggests that Jacques Pitrat has done relevant work on automatic programming, but I haven't found such a reference.

[livejournal.com profile] simonfunk has suggested that AI could emerge out of compilers, since they try to be code-optimization machines. One problem with this, of course, is that most programming languages are specified to perform the exact computations determined by the code (Prolog may be an exception). The kind of "compilers" relevant here are more like code generators working from a formal specification. (Would very general constraint solvers be helpful too?) In any case, a compiler that "optimizes" such a function would need to come up with the required representations.
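A minimal sketch of what such a spec-driven code generator might do, in the crudest generate-and-test style: enumerate small arithmetic expressions over `x` until one agrees with the specification on a set of test inputs. Everything here (the expression grammar, the depth bound, the function names) is made up for illustration, not a real tool.

```python
LEAVES = ["x", "1", "2"]
OPS = ["+", "*", "-"]

def expressions(depth):
    """All expression strings up to the given nesting depth, smallest first."""
    if depth == 0:
        return list(LEAVES)
    smaller = expressions(depth - 1)
    exprs = list(smaller)
    for op in OPS:
        for a in smaller:
            for b in smaller:
                exprs.append(f"({a} {op} {b})")
    return exprs

def synthesize(spec, tests, max_depth=2):
    """Return the first enumerated expression agreeing with spec on all tests."""
    for e in expressions(max_depth):
        if all(eval(e, {"x": x}) == spec(x) for x in tests):
            return e
    return None  # nothing small enough matches the spec

# "Compile" the specification f(x) = x^2 + x into code:
found = synthesize(lambda x: x * x + x, tests=range(-3, 4))
```

This finds *an* expression equivalent to x² + x on the tested inputs, but only because the representation (a tiny arithmetic grammar) already fits the problem; with the wrong grammar, the same search is just the end-of-the-universe proof search again.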

(no subject)

Date: 2006-03-25 05:14 pm (UTC)
From: [identity profile] zarex.livejournal.com
I thought there was more to the Kasparov loss controversy. I think many were claiming it was unfair because Deep Blue was allowed to "study" earlier games of Kasparov, while Kasparov was not allowed to do the same. [ among other things ]

I think that if it were a fair match, Kasparov would have won.
