user-adaptive proof explanation
May. 3rd, 2007 01:14 am

Armin Fiedler's User-adaptive proof explanation seems like an excellent PhD thesis. It's about turning formal proofs into human-readable proofs (and maybe into interactive arguments of the persuasive kind), and it touches on:
* proof-planning
* user-modeling with ACT-R
* customized proof presentation (to which the user can respond: "too detailed", "too difficult", etc.)
* natural language generation
From the thesis itself:
The system should adapt to the user with respect to the level of abstraction at which the proof is presented. Moreover, the system should allow the user to intervene if he is not satisfied with an explanation and to ask follow-up or background questions. The metaphor we have in mind is a human mathematician who teaches a proof to a student or else explains a proof to a colleague.
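To make that concrete, here is a toy sketch (in Python, and emphatically not the actual P.rex implementation) of the kind of feedback loop described above: the system presents a proof step at some abstraction level, and re-presents it more abstractly or in more detail when the user answers "too detailed" or "too difficult". The step texts, names, and feedback strings are all invented for illustration.

```python
# A toy sketch of the adaptation loop described above -- not Fiedler's
# actual P.rex code. Step texts, feedback strings, and names are all
# invented for illustration.

PROOF_STEPS = {
    # One proof step, explained at three abstraction levels
    # (index 0 = most detailed, 2 = most abstract).
    "step1": [
        "Instantiate axiom A1 with x := a, then apply modus ponens to A1 and A2 to derive B.",
        "From A1 and A2 we get B by modus ponens.",
        "Hence B.",
    ],
}

def explain(step_id: str, level: int) -> str:
    """Return the explanation of a step at the given abstraction level."""
    versions = PROOF_STEPS[step_id]
    level = max(0, min(level, len(versions) - 1))  # clamp to the levels we have
    return versions[level]

def present_proof(step_ids, get_feedback, level: int = 1) -> None:
    """Present each step, adjusting the abstraction level after feedback."""
    for step_id in step_ids:
        while True:
            print(explain(step_id, level))
            feedback = get_feedback()  # e.g. "too detailed", "too difficult", "ok"
            if feedback == "too detailed":
                level += 1             # go more abstract
            elif feedback == "too difficult":
                level -= 1             # re-explain in more detail
            else:
                break                  # satisfied; move on to the next step

# Example run: the user finds the first rendering too difficult.
answers = iter(["too difficult", "ok"])
present_proof(["step1"], lambda: next(answers))
```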
Here is the longer abstract of a journal version of his thesis:
Today, automated theorem provers are becoming more and more important in practical industrial applications and more and more useful in mathematical education. For many applications, it is important that a deduction system communicates its proofs reasonably well to the human user. To this end, proof presentation systems have been developed. However, state-of-the-art proof presentation systems suffer from several deficiencies. First, they simply present the proofs, at best in a textbook-like format, without motivating why the proof is done as it is done. Second, they neglect the issue of user modeling and thus forgo the ability to adapt the presentation to the specific user, both with respect to the level of abstraction chosen for the presentation and with respect to steps that are trivial or easily inferable by the particular user and, therefore, should be omitted. Finally, they do not allow the user to interact with the system. He can neither inform the system that he has not understood some part of the proof, nor ask for a different explanation. Similarly, he cannot ask follow-up questions or questions about the background of the proof. As a first step to overcome these deficiencies, we shall develop in this talk a computational model of user-adaptive proof explanation, which is implemented in a generic, user-adaptive proof explanation system, called P.rex (for PRoof EXplainer). To do so, we shall use techniques from three different fields, namely from computational logic to represent proofs from various calculi with several levels of abstractions ensuring the correctness of the proofs; from cognitive science to model the user's mathematical knowledge and skills; and from natural language processing to plan the explanation of the proofs and to accept and appropriately react to the user's interactions.
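The "several levels of abstraction" bit suggests a hierarchical proof representation, where an abstract step expands into the lower-level steps that justify it, and the presenter decides how deep to descend for a given user. Here's a hedged guess at what such a structure might look like, again in Python; the class, function, and example proof are mine, not taken from P.rex.

```python
from dataclasses import dataclass, field

# A guess at the hierarchical representation hinted at in the abstract:
# each step may expand into lower-level sub-steps, and the presenter
# chooses how deep to descend. Illustrative only -- not P.rex's data model.

@dataclass
class ProofStep:
    statement: str                  # what this step concludes
    justification: str              # rule, axiom, or method used
    expansion: list = field(default_factory=list)  # lower-level sub-steps

def linearize(step, max_depth, depth=0):
    """Yield the steps to present, descending at most max_depth levels."""
    if step.expansion and depth < max_depth:
        for sub in step.expansion:
            yield from linearize(sub, max_depth, depth + 1)
    else:
        yield step  # present as-is, hiding any finer-grained detail

# An abstract step and its calculus-level expansion.
proof = ProofStep(
    "B", "by a lemma application",
    expansion=[
        ProofStep("A -> B", "axiom A1"),
        ProofStep("A", "axiom A2"),
        ProofStep("B", "modus ponens on the previous two steps"),
    ],
)

for s in linearize(proof, max_depth=0):   # abstract view: one step
    print(s.statement, "--", s.justification)
for s in linearize(proof, max_depth=1):   # detailed view: three steps
    print(s.statement, "--", s.justification)
```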
Also, the author spent a year at CMU!