automatic model induction
Apr. 28th, 2007 06:37 pm
Yesterday, after the advice-seeking, I presented David with my ideas for automatic model induction.
He drew a line on the whiteboard. At the left end, he drew a bag of different cognitive model types (neural networks, ACT-R, Bayesian networks); at the right end, fully instantiated models.
His question was: how far to the left are you willing to push this idea?
This was a good question, to which I did not have an answer.
He thinks my idea is very exciting, because human expert modelers prune the search space too much (often down to a single model, i.e. a single hypothesis) and stick with it as long as the data aren't sufficient to reject it. Given the huge underdetermination in psychology, this methodology is far from optimal. Also, scientists tend to stick to their paradigm, always using the same kinds of models. By searching through many kinds of models, my idea has the potential to improve methodology.
However, this model search is computationally a very hard problem unless I specify more constraints. My usual answer would be: do some task analysis and copy what humans do. But in this case, he would say, we gain nothing by having computers do the work. I think a middle ground is possible: yes, by using heuristics we lose some generality, but since computers can crunch more data than humans, they can look through a wider range of models and thereby improve on the quality of the models being proposed today.
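To make this concrete for myself, here is a minimal sketch (in Python, with numpy and scipy) of the brute-force version of the idea: enumerate a bag of candidate model families, fit each one to the data, and rank them by a fit criterion. The three families, the BIC scoring, and the synthetic learning-curve data are all assumptions I invented for the sketch, not anything we put on the whiteboard.

# A toy model search, for illustration only: the candidate families,
# the BIC scoring, and the synthetic data are assumptions made up for
# this sketch, not anything we actually specified.
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):      # e.g. the power law of practice
    return a * x ** (-b)

def exponential(x, a, b):    # a competing learning-curve family
    return a * np.exp(-b * x)

def linear(x, a, b):         # a deliberately naive baseline
    return a - b * x

CANDIDATES = {"power_law": (power_law, (1.0, 0.5)),
              "exponential": (exponential, (1.0, 0.1)),
              "linear": (linear, (1.0, 0.01))}

def bic(residuals, n_params):
    # Bayesian information criterion under Gaussian noise; lower is better.
    n = len(residuals)
    return n * np.log(np.sum(residuals ** 2) / n) + n_params * np.log(n)

def search_models(x, y):
    # Fit every candidate family to the data and rank the results by BIC.
    ranked = []
    for name, (f, p0) in CANDIDATES.items():
        try:
            params, _ = curve_fit(f, x, y, p0=p0, maxfev=10000)
            ranked.append((bic(y - f(x, *params), len(p0)), name, params))
        except RuntimeError:
            pass  # this family failed to converge; drop it
    return sorted(ranked, key=lambda t: t[0])

rng = np.random.default_rng(0)
x = np.arange(1.0, 51.0)
y = 2.0 * x ** (-0.4) + rng.normal(0.0, 0.05, x.size)  # fake "subject" data
for score, name, params in search_models(x, y):
    print(name, round(score, 2), np.round(params, 3))

Even this naive loop is more honest than the single-hypothesis modeler: it keeps all three families on the table and lets the data rank them. The hard part David is pointing at is what happens when the bag holds thousands of families instead of three, each with free structure rather than two parameters.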
Another issue was how experts select among multiple well-fitting models. A lot of tacit knowledge goes into this (sometimes you need to have read an obscure paper in order to prefer or disprefer a certain model). We had no proposal for automating this.
(no subject)
Date: 2007-04-29 04:23 pm (UTC)
I don't remember seeing you discuss this before in your LJ, and when I clicked the tag you included for this entry, it only linked to this entry.
(no subject)
Date: 2007-04-29 04:53 pm (UTC)
This is a very general framework: I haven't assumed any kind of cognitive architecture; the people here can be any sort of machine. For this reason, it could be the case that most of the time, such models would be modeling things that are not normally considered "psychology".
On the other hand, we can view psychologists' work as an analogous process: they do experiments, collect data, and try to come up with models that fit the data *and* are cognitively plausible (i.e. can be unified with the psychologist's knowledge of well-established theories). The plausibility criterion can serve as the final step, for selecting among the multiple models that fit well. This part is missing from our general machine-learning approach, and it would be hard to add, because the knowledge involved is very domain-specific and often tacit.
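If I had to caricature that final step in code, it would look like the sketch below: take the surviving models' fit scores and penalize them with a hand-assigned plausibility weight that stands in for the expert's tacit knowledge. Every number here is invented for illustration; the real difficulty is that nobody knows how to fill in the plausibility table automatically.

# A caricature of the final selection step. The fit scores, plausibility
# weights, and penalty strength are invented for illustration; in reality
# the plausibility column is exactly the tacit knowledge we can't automate.
FIT = {"power_law": -120.3, "exponential": -118.9}     # e.g. BICs; lower is better
PLAUSIBILITY = {"power_law": 0.9, "exponential": 0.6}  # expert's prior, in [0, 1]

def select(fit, plausibility, weight=5.0):
    # Subtract a bonus proportional to plausibility from each fit score,
    # then pick the model with the best (lowest) penalized score.
    penalized = {m: s - weight * plausibility[m] for m, s in fit.items()}
    return min(penalized, key=penalized.get)

print(select(FIT, PLAUSIBILITY))  # -> power_law

The interesting (and discouraging) part is that select is trivial; all of the expertise hides in the PLAUSIBILITY table, which is exactly the obscure-paper knowledge I mentioned in the entry.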
Anyway, I like the potential of video games as cognitive tests. You can do all kinds of experiments in that medium, and collect tons of data (people won't need to be paid much, and you can record all their micromotions).