recent GOFAI
Oct. 15th, 2007 10:29 pm

It seems like Gerry Sussman has recently supervised at least two GOFAI theses:
Jacob Beal - Learning by Learning to Communicate
Bob Hearn - Building Grounded Abstractions for Artificial Intelligence Programming
I admire this kind of work. It smells ambitious, but I haven't seen enough results to want to jump into it myself.
Of all the AI conferences, AAAI seems to be the best one for this sort of work.
---
Richard Belew's research group at UCSD sounds interesting in the same sort of way:
The focus of our research group is the characterization of adaptive knowledge representations. Issues of representation have always played a central role in artificial intelligence (AI), as well as in computer science and theories of mind more generally. We would argue that most of this work has (implicitly or explicitly) assumed that the representational language is wielded manually, by humans encoding an explicit characterization of what they believe to be true of the world. We believe there are fundamental philosophical difficulties inherent in any such approach. Further, there now exist modern machine learning techniques capable of automatically developing elaborate representations of the world. To date, however, the representations underlying this learning have not shown themselves able to "scale up" to the semantically sophisticated task domains often associated with AI expert systems. We believe it is therefore appropriate to reconsider basic notions of what makes for good knowledge representation, with constraints imposed by the learning process considered sine qua non but in conjunction with others (expressive adequacy, valid inference, etc.) more typically considered by AI.
This reminds me of scientific discovery research, a la Simon Colton.