cogsci ; chat with Christian
Jun. 27th, 2007 03:44 pm
The GSS schedule is posted. Now I can decide what I'll focus on.
I just met with Christian, and had a general philosophical conversation about AI vs cogsci, and what they have to learn from each other.
* Cogsci could learn to tackle hard problems: even its most complex models would be trivial to program in any reasonable programming language, yet cogsci today is stuck with assembly-like languages.
* AI, OTOH, could learn to be more interested in general solutions, in architectures for general intelligence. Computer scientists do like general solutions, but they treat their products as hammers; the attitude is "take it or leave it!" rather than deep integration. Chess AI is useless for playing Go.
Big-name cognitive scientists (the ones who publish in Psych Review) are expected to generalize their conclusions to a whole class of problems, but they are allowed to fudge parameters.
* "Allen Newell, in his 20 Questions paper, said that cognitive architectures allow us to go back to the high-level automatically." I'm still trying to interpret that.
* There has been progress toward "tight integration" of cognitive architectures. But really this means "reimplementation" rather than integration in the usual sense: you steal ideas from other architectures and reprogram them in your own. This only works because state-of-the-art cognitive architectures are computationally rather simple; if they were more complex, real integration would be the easier option.
* Herb Simon, Brad Best: human pattern recognition in chess. Experts are good at compressing real chessboard configurations, but no better than novices at compressing made-up configurations. Can we come up with experiments to determine what representations (perceptual chunks) chess experts are using? Coming up with hypothesized chunks might itself be a hard search problem, before you can test experimentally whether this is what they are doing.
* There is an area of cogsci called "sequence learning", but there don't seem to be studies relating Kolmogorov Complexity to human memory.
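The chess-compression and sequence-learning points lean on the same trick: Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a crude, computable upper bound on it. A minimal sketch of that idea (my own illustration, not anything Christian proposed):

```python
import zlib

def compressed_length(s: str) -> int:
    """Bytes needed by zlib at maximum effort: a crude, computable
    upper bound on the Kolmogorov complexity of the string."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly regular sequence ("AB, thirty-two times") versus a fixed
# string with no obvious pattern -- standing in, very loosely, for a
# real chess position versus a made-up one.
regular = "AB" * 32
patternless = "QJXKZVWPMRTYHGBNFDSCLAOEUIqwjzxv" + "ckvbnmlpoiuytrewasdfghQWERTYUIOP"

print(compressed_length(regular), compressed_length(patternless))
```

If expert recall of real positions tracked this kind of compressibility while recall of scrambled positions didn't, that would support the compression story; the hard part, as above, is searching for the chunk vocabulary that does the compressing.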
General advice for GSS: "See what people are doing, and look for something you can do better. Nowadays, most of the differences between people are not in how clever they are, but in what their background is." (I think I score points here.) People with unusual backgrounds are more likely to see low-hanging fruit (or rather, it is only low-hanging for them).