reductionistic dreams
Feb. 17th, 2009 05:39 pm
Are the hangers-on of the semantic web dream suffering from the same condition as the "AI optimists" of the '60s and '70s?
It's very tempting for us, intellectual children of Carl Hempel and Herb Simon, to seek to formalize[1] and automate science, mathematics, or any of the higher human intellectual functions[2]. Many of my mental cycles over the last five years have been spent on such questions, and I've discovered some very interesting work along the way, but these cycles have yet to pay off in any concrete sense.
I think the basic axiom behind the belief that the semantic web (or AI) is near, and behind much of the resulting excitement, is "human intelligence is simple". I don't know if this is the case. All I know is that it's easy to take the complexity of your own intelligence for granted when you're unable to introspect, when you don't have conscious access to the inner workings of your mind. Although the simplicity bias is a good heuristic, by itself it does not warrant optimism.
If you have any insight, please leave a comment. In particular, I am interested in the potential of Semantic Science and similar efforts. What interesting questions/issues/methods, if any, will arise in Machine Learning and Statistics when we integrate different kinds of scientific data/theories through ontologies? And how long will it take for this to happen? As much as I'd love to see formal embodiments of scientific theories and inter-theoretical links, I don't expect to see anything significant in the foreseeable future.
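To make this concrete, here is the sort of thing I have in mind by a "formal embodiment" of a scientific claim. It's only a toy sketch in Python using rdflib; the namespace and every predicate in it (relatesQuantity, holdsUnderCondition, generalizes) are invented for illustration, not drawn from any actual Semantic Science vocabulary.

```python
# A toy sketch (not any real ontology): encoding one scientific law
# and one inter-theoretical link as RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SCI = Namespace("http://example.org/science#")  # hypothetical namespace

g = Graph()
g.bind("sci", SCI)

# "Boyle's law relates pressure and volume at constant temperature."
g.add((SCI.BoylesLaw, RDF.type, SCI.EmpiricalLaw))
g.add((SCI.BoylesLaw, RDFS.label, Literal("Boyle's law")))
g.add((SCI.BoylesLaw, SCI.relatesQuantity, SCI.Pressure))
g.add((SCI.BoylesLaw, SCI.relatesQuantity, SCI.Volume))
g.add((SCI.BoylesLaw, SCI.holdsUnderCondition, SCI.ConstantTemperature))

# An inter-theoretical link: the ideal gas law generalizes Boyle's law.
g.add((SCI.IdealGasLaw, SCI.generalizes, SCI.BoylesLaw))

print(g.serialize(format="turtle"))
```

The point isn't these particular triples; it's that once claims and the links between theories are machine-readable like this, a learner could in principle query and reason across them. Whether anything interesting for Machine Learning falls out of that is exactly my question above.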
[1] It's especially tempting for us nitpicking types.
[2] Geoff Hinton has said that one of the big mistakes in AI has been focusing on high-level problems and ignoring low-level ones, resulting in systems that are really good at tracing their way out of a maze (or beating humans at chess) but unable to pick up a cup from a table.
(no subject)
Date: 2009-02-19 03:43 am (UTC)

Yet, at the same time, I am very positive about the possibility of articulating science in a way legible to machine learning. One of the most interesting ontological systems (Poincaré -> Thom -> Deleuze -> DeLanda) characterizes entities as statistical ensembles that emerge from specific complex statistical processes. Yet, in a certain way, this is not a formal account, as the phenomena found are not logically implied but historically contingent within particular generative processes.
However, I'm optimistic about the value of working on scientific understanding, even if it does not lead to any explication of our history. Given the complexity of feedback cycles, it is perhaps a practical task to continue developing pragmatic induction in distributed agents that watch on our behalf for unintended consequences.
"It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties." -Alfred North Whitehead