Herb Simon and his student Pat Langley have long been interested in comparing human learning and machine learning side by side.

Much of our AI-based educational software is necessarily in domains where machines do better than human students (e.g. equation solving). In domains where students are better, e.g. playing soccer or learning natural languages, we cannot create full-fledged cognitive tutors, since these require the system to judge correctness on novel problems. This doesn't mean that computers can't support the learning process: we can build tutors for skills that transfer (either vertically or horizontally) to the target skill, e.g. I can imagine soccer players benefiting from biofeedback.

We can create support for some subtasks essential to learning (e.g. vocabulary building), but the computer's knowledge of words will be "dead" knowledge, necessarily disembodied from its natural function (i.e. open-ended communication with humans, not restricted to any domain).

Humans and machines speak different languages. This is to be expected, since they evolved under different constraints:
* environment: real world vs. simulated world
* hardware architecture: wetware vs. silicon

OTOH, the way programs do things can't be that far from the way humans do things, since they are programmed by humans, after all. I can imagine different programming styles that implement more or less cognitively plausible ways of solving the problem.

Much of the challenge of HCI and ITS is to formalize human concepts so that humans and machines can communicate more easily. By formalizing concepts that people actually think about, we make it possible to give the computer high-level instructions, instead of having to worry about annoying technicalities.
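To make that contrast concrete, here is a toy sketch. The APIs and function names are made up for illustration, not taken from any real tutoring system or UI toolkit.

```python
# Hypothetical contrast: the same tutoring interaction expressed at two levels.
# Neither API is real; the names are invented to illustrate the point.

# Low-level: the caller has to worry about pixels, widgets, and timing.
def demonstrate_step_low_level(ui):
    ui.click(x=412, y=238)      # click the answer cell
    ui.type_keys("5/6")
    ui.click(x=412, y=301)      # click the "check" button

# High-level: the caller speaks in concepts the student actually thinks about.
def demonstrate_step_high_level(tutor):
    tutor.select_step("find common denominator")
    tutor.enter_answer("5/6")
```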

It has long bothered me that computers are generally rather unreflective, unaware of their own output. Even if you give a computer a screen capture, its perception of it is extremely raw and low-level, because it doesn't know how to interpret menus, widgets, etc. This is rather ironic: computers know how to draw these widgets, but not how to read them. It seems they could learn a lot from feedback: if they could put themselves in the user's place, they would know when they are being a bad, bad computer.

Is anyone aware of user-simulators that try to figure out how to use a new interface? Is this a standard method of evaluating UI designs?
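For what it's worth, here is the kind of thing I have in mind: a toy user simulator that explores an interface modeled as screens with named actions. The interface graph and function names below are entirely made up, not the method of any existing evaluation tool.

```python
import random

# Toy user simulator, assuming the interface can be modeled as states
# (screens) with named actions (buttons, menu items) leading to other states.
INTERFACE = {
    "home":        {"File": "file_menu", "Help": "help"},
    "file_menu":   {"Open": "open_dialog", "Back": "home"},
    "open_dialog": {"Cancel": "file_menu", "OK": "done"},
    "help":        {"Back": "home"},
    "done":        {},
}

def simulate_user(goal="done", max_steps=50, seed=0):
    """Randomly explore until the goal screen is reached; return the trace."""
    rng = random.Random(seed)
    state, trace = "home", []
    for _ in range(max_steps):
        if state == goal:
            return trace
        action = rng.choice(list(INTERFACE[state]))
        trace.append((state, action))
        state = INTERFACE[state][action]
    return trace  # goal not reached within the step budget

print(simulate_user())
```

A long trace (or a failure to reach the goal) would be one crude signal that the interface makes the task hard to discover.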

---

Another idea that I've become familiar with in my recent experience is that of a behavior recorder: it's basically like a key-and-mouse logger, but for higher-level steps.
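Here is a minimal sketch of what I mean; the step attributes are my own guesses at what a reasonable log entry would contain, not the schema of any particular tool.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# A minimal behavior-recorder sketch (hypothetical): instead of raw keystrokes
# and mouse coordinates, it logs semantically meaningful steps.
@dataclass
class Step:
    student: str
    problem: str
    action: str        # e.g. "enter-value", "select-menu-item"
    selection: str     # which widget / cell the action targeted
    input: str         # what the student entered or chose
    correct: bool
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class BehaviorRecorder:
    steps: List[Step] = field(default_factory=list)

    def record(self, **kwargs):
        self.steps.append(Step(**kwargs))

recorder = BehaviorRecorder()
recorder.record(student="s01", problem="1/2 + 1/3",
                action="enter-value", selection="common-denominator",
                input="6", correct=True)
```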

By applying machine learning to behavior-recorder data, we can figure out how people behave, i.e. learn a cognitive model. Tagging these behaviors into higher-level chunks should improve this.
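As a crude illustration of the chunking idea, one could count recurring step sequences across traces and treat the frequent ones as candidate chunks. The traces and step names below are invented for the example.

```python
from collections import Counter

# Count recurring step bigrams across traces, on the assumption that
# frequently repeated sequences are candidates for higher-level chunks.
traces = [
    ["find-common-denominator", "convert-fractions", "add-numerators"],
    ["find-common-denominator", "convert-fractions", "add-numerators", "simplify"],
    ["convert-fractions", "add-numerators", "simplify"],
]

def candidate_chunks(traces, n=2):
    counts = Counter()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            counts[tuple(trace[i:i + n])] += 1
    return counts.most_common()

for chunk, freq in candidate_chunks(traces):
    print(freq, " -> ".join(chunk))
```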

Now, take video games. There should be no shortage of behavior data for them; in fact, there are probably myriad cognitive phenomena manifested in this data. To paraphrase Herb Simon: there is no shortage of data, only a shortage of expert attention.
