proof-planning
Jun. 17th, 2005, 12:52 am

Now that I have the motivation to build my own proof-planning system, Alan Bundy's Proof Planning FAQ makes a lot of sense and has tons of cool meta ideas, such as proof-critics and meta-tactic generation. Btw, I did reinvent proof-planning last month, only 16 years too late (this is narrowing... my last good idea was >50 years too late), although I might have been inspired by the word "proof-plan" to think of AI Planning; I didn't know what it really meant.
The point is to think of the tactics (i.e. steps of the proof-plan) as common ways of getting closer to a goal, with specifications (e.g. if the goal is to solve an equation for X, a common tactic would be to bring all terms with an X to the 'left-hand side'). Some tactics may have guaranteed post-conditions, while others may guarantee nothing: 'collect-var-on-lhs' is guaranteed to leave no X's on the rhs, but 'try-to-factor-out' is not guaranteed to narrow the X down to just one factor (e.g. when the lhs is X^2 + X). A sketch of both tactics follows below.
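To make that concrete, here is a minimal sketch of tactics-as-specifications in Python, assuming sympy for the symbolic manipulation; the function names and the post-condition check are my own illustration, not Bundy's actual tactic repertoire.

```python
from sympy import Eq, expand, factor, symbols

x = symbols('x')

def collect_var_on_lhs(eq, var):
    """Tactic with a guaranteed post-condition: move every term
    containing `var` to the left-hand side, so the rhs is free of `var`."""
    expr = expand(eq.lhs - eq.rhs)                       # everything on the lhs
    var_terms = sum(t for t in expr.as_ordered_terms() if t.has(var))
    rest = expr - var_terms
    new_eq = Eq(var_terms, -rest)
    assert not new_eq.rhs.has(var)                       # the guarantee, checked
    return new_eq

def try_to_factor_out(eq, var):
    """Tactic with no such guarantee: factoring x**2 + x gives x*(x + 1),
    which still leaves `var` in two places rather than isolated in one factor."""
    return Eq(factor(eq.lhs), eq.rhs)

print(collect_var_on_lhs(Eq(3*x + 5, 2*x + 7), x))   # Eq(x, 2)
print(try_to_factor_out(Eq(x**2 + x, 6), x))         # Eq(x*(x + 1), 6)
```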
Btw, I got my rejection letter from Edinburgh yesterday. I might be able to get a proposal funded there in the future, but it won't be this year.
Anyway, this stuff is delicious cognitive AI, or as Bringsjord would call it, "Human-Guided Logic-Based AI", because it's about analyzing human intelligence: perfect for an introspective, navel-gazing phenomenologist like myself (I suspect that the most reliable symptom of this ailment is being into linguistics).