I've been meaning to email Raul Valdes-Perez, to ask if there's anyone at CMU still doing work on scientific discovery, mathematical discovery, algorithm discovery, etc.
His research goal is similar to mine:
"My research goal is to improve on the current state of scientific, engineering, and other professional reasoning by developing characterizable methods and interactive software that partially automate high-end tasks of discovery. In short: Better discovery of new, interesting knowledge."
This explains my interests in:
* formalization
* machine learning / scientific methodology (i.e. same thing).
His publications are all over the place too.
Anyway, I was just biking back from the cluster when I noticed a big Vivisimo sign next to the RiteAid on Murray (at the corner with Forbes). I had probably noticed this sign a handful of times before, but I only made the connection now. Vivisimo is his start-up, which explains why he is an "adjunct" professor.
But what does "adjunct professor" mean?
(no subject)
Date: 2006-12-14 06:18 am (UTC)
It's a temporary position, of course: when the class is over, the position ends (though adjuncts typically maintain long-term contact, teaching a class or two every semester).
(no subject)
Date: 2006-12-14 07:02 am (UTC)
The "this" that I am interested in is the scientific process (and its automation), which includes:
* discovery: coming up with interesting hypotheses
* justification: "proving" the above by finding explanatory links (deductive, inductive, abductive, etc.) between the hypothesis and the theory and data.
Justification can make use of proof search. This should be especially useful in deeply deductive (math-heavy) domains. But most science is like reverse mathematics: you work from the consequences (observables) back to the axioms (theoretical entities). So we need theorem provers that do reverse mathematics.
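To make the "reverse" direction concrete, here is a minimal sketch of backward search from an observed consequence to candidate explanations (abduction). The rule base and predicate names are invented purely for illustration; this is not any real discovery system.

    # Backward ("reverse") search: given an observed consequence, find the
    # sets of assumable premises (abducibles) that would entail it.
    # The rules and names below are made up for illustration only.

    RULES = {
        # conclusion: list of alternative premise-sets that entail it
        "spectrum_shifted": [["object_receding"], ["strong_gravity"]],
        "object_receding":  [["universe_expanding"]],
    }
    ABDUCIBLES = {"universe_expanding", "strong_gravity"}

    def explain(goal):
        """Return every set of abducible assumptions that entails `goal`."""
        if goal in ABDUCIBLES:
            return [{goal}]
        explanations = []
        for premises in RULES.get(goal, []):
            # each premise must itself be explained; combine the alternatives
            partial = [set()]
            for p in premises:
                partial = [acc | e for acc in partial for e in explain(p)]
            explanations.extend(partial)
        return explanations

    print(explain("spectrum_shifted"))
    # -> [{'universe_expanding'}, {'strong_gravity'}]

A real system would then have to rank and prune the candidate explanations, which is where the judgment about what is "interesting" comes in.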
Machine learning, i.e. the automatic creation of statistical models, is about justification, I would say, although the "theory" is usually implicit: often, the output of learning is an "empirical model", with no (or weak) links to a theory. By looking at these graphs, you may eventually develop a bridge between empirical model and theory.
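As a toy illustration of what I mean by an "empirical model" (synthetic numbers, nothing more than a least-squares line):

    # Fit a straight line to observations by least squares. The fitted
    # coefficients predict new points, but nothing in them says *why* the
    # relationship holds -- the bridge to theory has to be built separately.
    # The data is synthetic, for illustration only.

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1]      # roughly y = 2x plus noise

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x

    print("empirical model: y = %.2f * x + %.2f" % (slope, intercept))
    print("prediction at x = 6:", slope * 6 + intercept)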
I suppose you could say that sometimes the theory comes first and models come from theories, but I can't imagine any examples of that.
"Data mining" usually means machine learning for the discovery of new patterns (The term "KDD" is more explicitly so, since it contains the word "discovery").
Oops, I guess I got carried away with my definitions!
(no subject)
Date: 2006-12-14 08:51 pm (UTC)
Basically I stick to my belief that automatically replicating the way human cognition is presently used in discovering interesting conjectures and proofs of them is tremendously hard, and certainly at least somewhat harder than was expected by early AI research. I've heard of essentially no research that demonstrated that "jumping in the deep end" (studying how humans deal with proofs at a high level, as opposed to incrementally studying logics and proof theories, and perhaps tactics for them) actually yields results that generalize well, although I would love to have my intuitions refuted by such results!
(no subject)
Date: 2006-12-15 03:35 am (UTC)
So you need a machine learning expert who is also a structural proof theorist. That combination of skills may not actually exist....
(no subject)
Date: 2006-12-15 04:56 am (UTC)
There is a big gap between machine learning algorithms that do admittedly incredibly useful things with text, and future algorithms that might actually do things with text that resemble cognition. I think if you found a way to do machine learning to search for generalizations of the induction hypothesis, it might buy you a little, but only that; really being smart is what's necessary in the long run, and that's hard.
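To be concrete about the kind of move I mean: the purely syntactic part of "generalizing the induction hypothesis" is easy to mechanize -- e.g. abstracting a repeated subterm to a fresh variable -- and the hard part is deciding which candidate is worth trying. A toy sketch, with an invented term representation and example goal:

    # Enumerate candidate generalizations of a goal by replacing each
    # repeated, non-variable subterm with a fresh variable. Terms are
    # tuples: (function_symbol, arg1, arg2, ...); the representation and
    # the example are invented for illustration.

    def subterms(t):
        yield t
        if isinstance(t, tuple):
            for arg in t[1:]:
                yield from subterms(arg)

    def replace(t, target, var):
        if t == target:
            return var
        if isinstance(t, tuple):
            return (t[0],) + tuple(replace(a, target, var) for a in t[1:])
        return t

    def generalizations(goal):
        """Yield copies of `goal` with one repeated subterm abstracted."""
        seen = list(subterms(goal))
        repeated = {s for s in seen if isinstance(s, tuple) and seen.count(s) > 1}
        for i, s in enumerate(repeated):
            yield replace(goal, s, "?g%d" % i)

    # goal: length(l) + length(l) = double(length(l))
    goal = ("=", ("+", ("length", "l"), ("length", "l")),
                 ("double", ("length", "l")))
    for g in generalizations(goal):
        print(g)
    # -> ('=', ('+', '?g0', '?g0'), ('double', '?g0')),
    #    i.e. n + n = double(n), which induction can then handle

Picking among such candidates (and inventing the ones that aren't just common subterms) is exactly where the intelligence, or the learning, would have to go.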
(no subject)
Date: 2006-12-15 05:04 am (UTC)
What does this mean?
(no subject)
Date: 2006-12-15 03:19 pm (UTC)
Proof theory can help, because you can prove that a good deal of that nondeterminism is inessential -- that is, it doesn't matter which choice you make, so you don't have to save those backtracking points. This shrinks the search space a lot, which means that algorithms for exploring it have a much easier time.
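The classical propositional case shows the idea in miniature: in a one-sided sequent calculus over negation normal form, the rules for "and" and "or" are both invertible, so the apparent choices never need a backtracking point at all. A toy sketch (tuple-encoded formulas, invented for illustration, not a serious prover):

    # Formulas in negation normal form: ("atom", "p"), ("not", ("atom", "p")),
    # ("and", A, B), ("or", A, B). Because every rule applied here is
    # invertible, the search never saves a backtracking point.

    def provable(sequent):
        """Decide |- sequent (a list of NNF formulas) with no backtracking."""
        for i, f in enumerate(sequent):
            rest = sequent[:i] + sequent[i+1:]
            if f[0] == "or":              # invertible: keep both disjuncts
                return provable(rest + [f[1], f[2]])
            if f[0] == "and":             # invertible: prove both premises
                return provable(rest + [f[1]]) and provable(rest + [f[2]])
        # only literals remain: close iff some atom appears both ways
        atoms = {f[1] for f in sequent if f[0] == "atom"}
        negs = {f[1][1] for f in sequent if f[0] == "not"}
        return bool(atoms & negs)

    p, q = ("atom", "p"), ("atom", "q")
    print(provable([("or", p, ("not", p))]))   # True: p or not-p
    print(provable([("or", p, q)]))            # False: not a tautology

Intuitionistic and first-order logics keep some genuinely non-invertible choices, but inversion and focusing results from proof theory play the same role there: they identify exactly which choices matter.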
(no subject)
Date: 2006-12-14 06:56 am (UTC)
What are some of the big Open Questions in philosophy these days? I know there are a lot in philosophy of mind -- dualism vs. monism; strong A.I. vs. weak A.I.; can consciousness be algorithmic?; does a computer "understand" what it does? (the "Chinese Room" thought experiment -- is that by Searle? or whomever).
What are some more?
Also, in other fields of philosophy?
Also are there any Open Questions in logic today, as there were so famously, say, 100 years ago, before Goedel and Tarski shattered the dreams of the formalists?
Who would you say are the most important philosophers still active today? (Is Gilbert Harman one of them? I ask because I studied with him at Princeton oh sooo many years ago.)
Many thanks!