Newton da Costa reviews "An Architectonic for Science"

Pfeiffer, Niehaves - "Evaluation of Conceptual Models – A Structuralist Approach" seems to be about how to evaluate the logic of scientific theories. They compare it with the problem of evaluating software.

Wajnberg, Corruble, Ganascia, and C. Ulises Moulines - "A Structuralist Approach towards Computational Scientific Discovery"

Tangent:

Henry Kyburg - Combinatorial Semantics: Semantics for Frequent Validity
The title sounds dumb. But it's Henry Kyburg, and it seems to be a thorough review of previous attempts to combine probability and logic.

Henry Kyburg - Uncertain Inferences and Uncertain Conclusions
Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inference itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty in inference can sometimes outweigh the loss of flexibility it entails.
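A toy version of the premise/conclusion interplay (my own gloss in Adams-style probability logic, not the paper's notation): in a deductively valid argument, the uncertainty of the conclusion is bounded by the sum of the uncertainties of the premises.

```latex
% Toy illustration (my gloss, not from the paper): modus ponens with uncertain premises.
% Premises A and A -> B each carry a small uncertainty; the inference itself stays deductive.
\[
  P(\neg A) \le \epsilon, \qquad P(\neg (A \to B)) \le \delta
\]
% Since B follows deductively from the premises, \neg B entails \neg A \lor \neg(A \to B), so
\[
  P(\neg B) \;\le\; P(\neg A) + P(\neg (A \to B)) \;\le\; \epsilon + \delta,
  \qquad\text{hence}\qquad
  P(B) \;\ge\; 1 - \epsilon - \delta .
\]
```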
Yesterday, after the advice-seeking, I presented David with my ideas for automatic model induction.

He drew a line on the whiteboard. On the leftmost end, he drew a bag of different cognitive-model types (neural networks, ACT-R, Bayesian networks); on the rightmost end, fully instantiated models.

His question was: how far to the left are you willing to push this idea?

This was a good question, to which I did not have an answer.

He thinks my idea is very exciting, because human expert modelers prune the search space too much (often down to a single model, i.e. a single hypothesis), and stick with it as long as the data isn't sufficient to reject it. Given the huge underdetermination in psychology, this methodology is pretty far from optimal. Also, scientists tend to stick to their paradigm, always using the same kinds of model. By searching through many kinds of models, my idea has the potential to improve methodology.

However, this model search is a computationally very hard problem unless I specify more constraints. My usual answer to this would be: let's do some task analysis and copy what humans do. However, in this case he would say that we gain nothing by having computers do the work. But I think a middle ground is possible: yes, by using heuristics we lose some generality, but since computers can crunch more data than humans, they can look through a wider range of models, and in this way improve on the quality of the models being proposed today.
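To make that middle ground concrete, here is a minimal sketch of the kind of search loop I have in mind (illustrative only: the polynomial families below stand in for heterogeneous cognitive-model classes like ACT-R or neural networks, and the data and scoring are placeholders I made up):

```python
# Hypothetical sketch: fit several candidate model classes to the same data and
# rank them by a penalized fit score (BIC). Not an actual cognitive-modeling setup.
import numpy as np

def fit_polynomial(x, y, degree):
    """Least-squares fit of a polynomial family; returns (params, Gaussian log-likelihood proxy)."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n = len(y)
    sigma2 = residuals.var() + 1e-12
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return coeffs, loglik

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: lower is better."""
    return n_params * np.log(n_obs) - 2.0 * loglik

def search_models(x, y, candidate_degrees=(1, 2, 3, 5)):
    """Fit each candidate model class and return them ranked by BIC."""
    results = []
    for d in candidate_degrees:
        params, loglik = fit_polynomial(x, y, d)
        results.append((bic(loglik, d + 1, len(y)), f"poly(degree={d})", params))
    return sorted(results, key=lambda r: r[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 2.0 * x**2 + 0.3 + 0.05 * rng.standard_normal(50)  # toy "data"
    for score, name, _ in search_models(x, y):
        print(f"{name}: BIC = {score:.1f}")
```

The point is just the shape of the loop: enumerate candidate model classes, fit each one to the same data, and let a penalized-fit score like BIC do a first round of pruning; the task-analysis heuristics would go into deciding which classes enter the candidate set at all.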

Another issue was how experts select among the multiple well-fitting models. A lot of tacit knowledge goes into this (sometimes, you need to have read an obscure paper in order to prefer or disprefer a certain model). There was no proposal to automate this.
