This lecture by Fredkin, about making computers more self-aware and about automatic programming, is similar to some ideas of mine that I blogged recently. It's entirely possible that I stole them from him, indirectly.
---
Yesterday, I met "the generalists" at Kiva Han, voted on films, and even got to be dictator for a round. They are a rather interesting set of geeks. Not too surprisingly, [livejournal.com profile] jcreed was there. (We always run into each other.)

When we were done, it was about 11pm, and I was ready to go to bed.

On my way back to my bike, I ran into [livejournal.com profile] jcreed and [livejournal.com profile] simrob (whom I hadn't seen in a while) at the 4th-floor whiteboard, and saw the former's very cool statement of Arrow's theorem in the language of category theory (no pun intended). In this instance, category theory seems unnecessarily abstract, since we could just as easily talk about the objects involved as sets.
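For reference, since his categorical formulation isn't reproduced here, the standard set-level statement is:

```latex
% Standard set-level statement of Arrow's theorem (not the categorical version).
Let $A$ be a set of alternatives with $|A| \ge 3$, let $N = \{1, \dots, n\}$ be a
finite set of voters, and let $L(A)$ be the set of linear orders on $A$. Any
social welfare function
\[
  F : L(A)^n \to L(A)
\]
satisfying unanimity and independence of irrelevant alternatives is a
dictatorship: some voter $i$ has $F(R_1, \dots, R_n) = R_i$ for every profile
$(R_1, \dots, R_n)$.
```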

As my sleepiness turned into dreaminess, the topic drifted to my Herbert-Simon-ish ideas about making AI mathematicians. We seem to have opposite inclinations on this: he is skeptical and believes it's too hard.

My main belief, that "mathematical reasoning is easy to automate", is grounded in:
* serial reasoning is easy to formalize
* most mathematical and scientific reasoning is serial
* the current state of the art is poor because of brittleness, due to a lack of multiple representations (see this) and a lack of integration.

His main objection seemed to be that, in the context of trying to prove things, figuring out what to do next is not easy: mathematicians have intuitions that are hard to put into theorem provers. That is, these search heuristics are not easy to formalize.

My response: intuition comes from experience (i.e. we just need a corpus to learn from).

Other interesting thoughts:
* the intelligence required to perform any particular set of tasks, even for "self-modifying" programs, has a Kolmogorov complexity (i.e. a minimum program size), and those bits have to come either from the programmer (via programming) or from the world (via machine learning).

Amazingly (or not, given GMTA), he and I quickly agreed on a plan for building human-level mathematical AI (like, after 15 minutes):
* construct good representations with which to learn (we may need to reverse-engineer representations that are hard-coded in people, like our innate ability to do physical reasoning, which is useful in understanding geometry)
* give it a corpus to learn from (i.e. a world to live in)

He also wrote down an axiomatization of group theory in Twelf, which I believe is complete in the sense that every statement that is expressible and true of every group (i.e. true in every model) is a consequence of the axioms of group theory. Logicians, can you confirm this?
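His Twelf code isn't reproduced here, but as a rough stand-in, here is what a minimal first-order-style axiomatization looks like, sketched in Lean (the names and packaging are mine, not his):

```lean
-- A minimal sketch of a group axiomatization (a stand-in, not the original Twelf).
structure GroupAxioms (G : Type) where
  mul : G → G → G          -- the group operation
  e   : G                  -- the identity element
  inv : G → G              -- inverses
  assoc    : ∀ a b c : G, mul (mul a b) c = mul a (mul b c)
  id_left  : ∀ a : G, mul e a = a
  inv_left : ∀ a : G, mul (inv a) a = e
```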

Finally, we talked about desirable features of future math books, like "expand definition", "generate examples and non-examples", etc. The first, at least, should be easy: all I'm asking for is beta reduction (see the sketch below)... unlike those ambitious proof-mining people, who want automatic ways of making theorems more general.
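To make the beta-reduction claim concrete, here is a minimal sketch on a toy term representation (the encoding and names are mine, for illustration only):

```python
# Toy "expand definition" as one step of beta reduction.
# Terms: ("var", name) | ("lam", param, body) | ("app", fn, arg)

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` with `value`.
    Naive: assumes bound variables are uniquely named, so no capture handling."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        _, param, body = term
        if param == name:  # `name` is shadowed inside this lambda
            return term
        return ("lam", param, substitute(body, name, value))
    _, fn, arg = term
    return ("app", substitute(fn, name, value), substitute(arg, name, value))

def expand_definition(term):
    """One beta step: ((lam x. body) arg) -> body[x := arg]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, param, body), arg = term
        return substitute(body, param, arg)
    return term

# Expanding "square 3", where square = lam x. (mul x) x:
square = ("lam", "x", ("app", ("app", ("var", "mul"), ("var", "x")), ("var", "x")))
print(expand_definition(("app", square, ("var", "3"))))
# ('app', ('app', ('var', 'mul'), ('var', '3')), ('var', '3'))
```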

---

What do you think of the argument:
A - X should be easy!
B - instead of saying it's easy, you should be doing it.
?
---
Theory people, what are your thoughts on this?

from an email to a faculty member:
I wonder if ALADDIN would be a good place to find people to work with.

As I said, one of my main interests is creating cognitively-inspired AI, namely to automate mathematical work and programming work. Like computational scientific discovery (which automates scientific work), algorithm discovery can be approached either theoretically or empirically (and, in the best case, by a combination of the two).

Some of my ideas:
* For problems with well-understood solutions, one could automatically determine which algorithms / data structures to use, depending on the expected parameter values (see the sketch after this list). I imagine this is already fully baked.
* Jumping up one level of automation, we could derive such decision rules themselves, by automating average-case and worst-case analyses.

* Discovering new representations, and using "reductions" to prove that they represent the same information (an easy way to make a program more space-efficient is to use lossless compression, though this is probably not a good idea most of the time) and that they have nice accessibility properties (e.g. querying is fast, editing is fast, etc.).
* Discovering new algorithms, and "proving" (either theoretically or empirically) that they meet their specifications (namely, (1) being equivalent to another algorithm and (2) being efficient).

* Formalizing high-level programming concepts, like divide-and-conquer, and the associated algorithm-discovery heuristics.
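As a toy version of the first idea, here is parameter-driven algorithm selection; the cost models and numbers are made up for illustration:

```python
import math

# Toy algorithm selection: given expected workload parameters, pick between
# repeated linear search and sort-once-then-binary-search. The cost models
# are illustrative back-of-the-envelope estimates, not measurements.

def choose_search_plan(n_items: int, n_queries: int) -> str:
    logn = math.log2(max(n_items, 2))
    sort_then_binary = n_items * logn + n_queries * logn  # ~ n log n + q log n
    repeated_linear = n_queries * n_items / 2             # ~ q * n / 2 on average
    return "sort+binary" if sort_then_binary < repeated_linear else "linear"

print(choose_search_plan(n_items=1_000_000, n_queries=3))      # -> linear
print(choose_search_plan(n_items=1_000_000, n_queries=1_000))  # -> sort+binary
```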

---

Bar-David & Taubenfeld's "Automatic Discovery of Mutual Exclusion Algorithms" looks good.
---
CMU's Aladdin Project ("ALgorithm ADaptation, Dissemination and INtegration"):
High-level goal: improve the process of incorporating powerful algorithms into application domains.
...
database of "war stories", open problems.


This idea of automating the analysis, i.e. building AI that improves on programmers' reasoning, reminds me of my automatic-programming ideas, namely: automated efficiency (rather simple cognitive AI: just implementing optimization techniques taken from programmers) and representation search (where the "creativity" lies in finding the representation with which proof-finding becomes computationally easy).
---
I've been working on a difficult programming puzzle, whose main difficulty consists in computing a function efficiently. The specification is therefore much simpler than the solution. This should make it a perfect case for applying automatic programming: what is required is "cleverness", not "knowledge". (Of course, this "knowledge" does not include knowledge of heuristics, knowledge of mathematical theorems, etc. (all of which *are* useful), since those have low Kolmogorov complexity, being consequences of just a few axioms.)

It reminds me of something like Fermat's Last Theorem: easy to state, hard to prove. It's also easy to write an algorithm that eventually proves it, but very hard to make it output the proof before the end of the universe: just do a breadth-first proof search from the axioms of ZFC (if we don't want to worry about interpretations or waste time proving non-computable results, then I think substituting the lambda calculus for ZFC will do). The "creativity" lies in finding the representation with which the proof search becomes computationally easy. (Could Lenat's and Colton's "automated mathematicians" be applied to automatic programming? Kerber is probably interested in the automated design of mathematical concepts: is this applicable?)
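To make the "eventually proves it" point concrete, here is a toy breadth-first proof search; the sentences-as-strings encoding and the single inference rule are stand-ins I made up, and the point is precisely that the frontier blows up exponentially:

```python
from collections import deque

# Toy breadth-first proof search: derive consequences of some axioms under
# given inference rules until a goal sentence appears. Sentences are plain
# strings and the rule below is a made-up stand-in for a real calculus.

def bfs_prove(axioms, rules, goal, max_derivations=1_000_000):
    known = set(axioms)
    frontier = deque(axioms)
    while frontier and len(known) < max_derivations:
        s = frontier.popleft()
        for rule in rules:
            for t in list(known):
                for derived in rule(s, t):
                    if derived == goal:
                        return True
                    if derived not in known:
                        known.add(derived)
                        frontier.append(derived)
    return False  # budget exhausted: the "end of the universe" problem

# Example rule: from sentences P and Q, derive the conjunction "(P & Q)".
def conjoin(p, q):
    return ["(" + p + " & " + q + ")"]

print(bfs_prove({"A", "B"}, [conjoin], "((A & B) & A)"))  # True, eventually
```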

The smart way to tackle such problems, therefore, is to do a "representation search". We can implement heuristics used by human mathematicians (Colton and Pease have worked on "Lakatos-style reasoning").

Can normal automated theorem provers find "creative" proofs and "proofs without words" of the sort found here? Why not? Because they are missing representations. Jamnik's work could be used to add diagrammatic representations to such automated mathematicians.

This reminds me of Kasparov vs. Deep Blue. It seems that Deep Blue won by "brute force". Not brute force alone, of course: without the tons of hours spent on making it smarter, all those computations would still have gotten them nowhere. But a "fair" game would have been one in which Deep Blue's computational resources were limited: you're only allowed so many megaflops or whatever. It is hard to quantify the computational power of Kasparov's brain (in fact, it's probably a hybrid analog-digital computer), but accepting the outcome of the match as showing that Deep Blue is a "better chess player" than Kasparov is like calling a giant a "better fighter" than a tiny man, when "fairness" requires putting them in different weight categories.


Notes:

[livejournal.com profile] fare suggests that Jacques Pitrat has done relevant work on automatic programming, but I haven't found such a reference.

[livejournal.com profile] simonfunk has suggested that AI could emerge out of compilers, since they try to be code-optimization machines. One problem with this, of course, is that most programming languages are specified to perform the exact computations determined by the code (maybe not Prolog). The kind of "compilers" relevant here are more like code generators that work from a formal specification (would very general constraint solvers be helpful too?). In any case, a compiler that "optimizes" such a function would need to come up with the required representations.
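As a tiny illustration of the gap, here is the kind of rewrite such a "compiler" would have to discover on its own: a spec-level definition next to a behaviorally equivalent version that re-represents the intermediate state (the example is mine):

```python
# The "specification": a direct, exponential-time definition.
def fib_spec(n: int) -> int:
    return n if n < 2 else fib_spec(n - 1) + fib_spec(n - 2)

# The version an optimizing "compiler" would need to invent: the same
# function, with the recursion re-represented as a pair of running values.
def fib_fast(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Empirical equivalence check on small inputs (a real system would want a proof).
assert all(fib_spec(n) == fib_fast(n) for n in range(15))
```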
---
General intelligence means being able to transfer skills and knowledge to similar or analogous tasks ("transfer", when used as a noun, is a measure of success in education). In some cases, it means being able to deal with context.

Lisp is wonderful. My programs can't program themselves yet, but we're getting there...

Agents with high "general intelligence" can do the "meta-level transition" and start programming what's on their mind. My ideas are still vague.

What is it that I have that my programs don't when I read a specification and start implementing it? If we translated "easy" programming puzzles into a formal language, would current AIs tackle them successfully? If not, why not?
