I was just browsing thesis titles from the CMU Philosophy department, and some faculty pages. These are the highlights.
Richard Scheines - Causality (from Dictionary of the History of Ideas)
starts out with a history of the field, from a personal perspective.
Spirtes, Glymour, Scheines, Meek, Richardson - The TETRAD Project: Constraint Based Aids to Causal Model Specification
Michael Kohlhase, Mandy Simons - Interpreting Negatives in Discourse
In recent work, tableau-based model generation calculi have been used as computational models of the reasoning processes involved in utterance interpretation. In this linguistic application of an inference technique that was originally developed for automated theorem proving, natural language understanding is treated as a process of generating Herbrand models for the logical form of an utterance in a discourse.
...
Using model generation, we will demonstrate how the various possible readings of simple negated sentences are generated, and by what criteria an interpreter chooses among these possibilities.
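To make the "model generation" idea concrete, here is a toy sketch of tableau-style model generation in Python. This is my own illustration, not the authors' calculus: the Atom/Not/And/Or classes and the models function are invented for the example, the paper works with first-order logical forms rather than propositional ones, and real calculi use much more refined criteria for choosing among branches. The key point is that each open tableau branch yields one model, i.e. one candidate "reading", and a negated formula is exactly where branching (hence ambiguity) comes from.

```python
from dataclasses import dataclass

# Propositional logical forms (hypothetical classes, just enough
# for a toy example; the paper works with first-order forms).
@dataclass(frozen=True)
class Atom: name: str
@dataclass(frozen=True)
class Not: arg: object
@dataclass(frozen=True)
class And: left: object; right: object
@dataclass(frozen=True)
class Or: left: object; right: object

def models(branch, formulas):
    """Tableau expansion: yield each open branch as a frozenset of
    literals, i.e. one (Herbrand-style) model per consistent way of
    satisfying the input formulas."""
    if not formulas:
        yield frozenset(branch)
        return
    f, rest = formulas[0], formulas[1:]
    if isinstance(f, Atom):
        if "-" + f.name not in branch:      # else branch closes (contradiction)
            yield from models(branch | {f.name}, rest)
    elif isinstance(f, And):                # alpha rule: no branching
        yield from models(branch, [f.left, f.right] + rest)
    elif isinstance(f, Or):                 # beta rule: branch into two models
        yield from models(branch, [f.left] + rest)
        yield from models(branch, [f.right] + rest)
    elif isinstance(f, Not):
        g = f.arg
        if isinstance(g, Atom):
            if g.name not in branch:
                yield from models(branch | {"-" + g.name}, rest)
        elif isinstance(g, Not):            # double negation
            yield from models(branch, [g.arg] + rest)
        elif isinstance(g, And):            # de Morgan: push negation inward
            yield from models(branch, [Or(Not(g.left), Not(g.right))] + rest)
        elif isinstance(g, Or):
            yield from models(branch, [And(Not(g.left), Not(g.right))] + rest)

# A negated conjunction generates two readings (two open branches):
phi = Not(And(Atom("see"), Atom("scope")))
for m in models(frozenset(), [phi]):
    print(sorted(m))
```

Running this prints two minimal models, one per branch of the negation; the paper's question is then by what criteria an interpreter prefers one such model (reading) over the others.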
M. Kohlhase, A. Koller - Resource-Adaptive Model Generation as a Performance Model
Unfortunately, existing model generation calculi are not yet plausible as performance models of actual human processing, since they fail to capture computational aspects of human language processing.
In my terminology, they "cognitivize" the task of natural language interpretation (which makes sense, since natural language is normally meant to be interpreted by humans), i.e. they make a model of human stupidity. The "competence model" approach corresponds to "decision sciences AI", whereas the "performance model" approach corresponds to "cognitive AI".
This is the same Kohlhase who works on OMDoc with the Saarbruecken people, and has been affiliated with UvA's ILLC. He is also an adjunct associate professor at CMU's SCS.
Dirk Schlimm seems to have some interesting research on axiomatics in science, which is an area I love. But whenever I see things like this (which is not very often), I am skeptical that we are making progress. Why is automated scientific discovery so out of fashion? (At CMU, only Bob Murphy seems to be doing something like that; Raul Valdez-Perez seems to have left academia altogether: I can't even find him on Google anymore.)