[personal profile] gusl
Are the hangers-on of the semantic web dream suffering from the same condition as the "AI optimists" of the '60s and '70s?

It's very tempting for us, intellectual children of Carl Hempel and Herb Simon, to seek to formalize[1] and automate science, mathematics, or any of the higher human intellectual functions[2]. Many of my mental cycles over the last 5 years have been spent on such questions, and I've discovered some very interesting work along the way, but these cycles have yet to pay off in any concrete sense.

I think the basic axiom behind the belief that the semantic web (or AI) is near, and behind much of the resulting excitement, is "human intelligence is simple". I don't know whether this is the case. All I know is that it's easy to take the complexity of your own intelligence for granted when you're unable to introspect, when you don't have conscious access to the inner workings of your mind. Although the simplicity bias is a good heuristic, by itself it does not warrant optimism.

If you have any insight, please leave a comment. In particular, I am interested in the potential of Semantic Science and similar efforts. What interesting questions/issues/methods, if any, will arise in Machine Learning and Statistics when we integrate different kinds of scientific data/theories through ontologies? And how long will it take for this to happen? As much as I'd love to see formal embodiments of scientific theories and inter-theoretical links, I don't expect to see anything significant in the foreseeable future.



[1] - it's especially tempting for us nitpicking types.
[2] - Geoff Hinton has said that one of the big mistakes in AI has been focusing on high-level problems and ignoring low-level ones, resulting in systems that are really good at tracing their way out of a maze (or beating humans at chess), but unable to pick up a cup from a table.

(no subject)

Date: 2009-02-19 03:43 am (UTC)
From: [identity profile] mapjunkie.livejournal.com
I'd like to say that I don't claim this intellectual heritage as my own. I explicitly deny Herb Simon and Carl Hempel as uncritically positive intellectual predecessors, instead taking Winograd, Agre, Brooks and interactionist AI as one school of departure, rigorously non-philosophical interpretations of Computational Learning Theory as another, and the rigorous development of a self-standing Computational Neuroscience as yet another. Of course, I recognize AI primarily as an influence and a provider of techniques, rather than as a motivation for my personal intellectual program, so I recognize this might not be a mainstream view. Yet I suspect it is actually a widely held view that information retrieval, machine learning, and a host of related "AI" fields are finding wider employ in IA (intelligence amplification, as descended from Douglas Engelbart) than in strong AI. Needless to say, I certainly don't adopt the mantle of the semantic web uncritically, although I do think there is a certain "information chemistry" which has some valuable intuitions.

Yet, at the same time, I am very positive about the possibility of articulating science in a way legible to machine learning. One of the most interesting ontological systems (Poincaré->Thom->Deleuze->DeLanda) characterizes entities as statistical ensembles which emerged from specific complex statistical processes. Yet, in a certain way, this is not a formal account, as the phenomena found are not logically implied, but are historically contingent within particular generative processes.

However, I'm optimistic about the value of working on scientific understanding, even if it does not lead to any explication of our history. Given the complexity of feedback cycles, it is perhaps a practical task to continue with the development of pragmatic induction in distributed agents, watching on our behalf for unintended consequences.

"It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties." -Alfred North Whitehead

(no subject)

Date: 2009-02-19 04:26 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< explicitly deny Herb Simon and Carl Hempel as uncritically positive intellectual predecessors >>

I can't parse this.


<< rigorously non-philosophical interpretations of Computational Learning Theory as another >>

what are you talking about?


<< characterizes entities as statistical ensembles which emerged from specific complex statistical processes. Yet, in a certain way, this is not a formal account, as the phenomena found are not logically implied, but are historically contingent within particular generative processes. >>

you seem to be talking about emergence, but what are you getting at? Are you talking about ontology in the sense of what "really exists", or as a way to represent knowledge, for reasoning purposes?


<< However, I'm optimistic about the value of working on scientific understanding, even if it does not lead to any explication of our history. Given the complexity of feedback cycles, it is perhaps a practical task to continue with the development of pragmatic induction in distributed agents, watching on our behalf for unintended consequences. >>

again, I'm completely lost.

I'll take a shot at explicating this

Date: 2009-02-19 12:17 pm (UTC)
From: [identity profile] mapjunkie.livejournal.com
<< explicitly deny Herb Simon and Carl Hempel as uncritically positive intellectual predecessors >>

I mean that although I recognize their contributions, praise their developments, and use their techniques, their goals are not my goals and their time to be setting the agenda has passed.

<< rigorously non-philosophical interpretations of Computational Learning Theory as another >>

I'm talking about the Valiant/Vapnik notions that cast learning on firmly mathematical grounds, making no particular reference to intelligence. I recognize that this may be a strange reading of Valiant, yet I think it reflects the actual content of Machine Learning and Computational Learning Theory papers.
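[Editorially sketched illustration, not part of the original comment: the flavor of statement this tradition produces is exemplified by the standard PAC sample-complexity bound for a finite hypothesis class H, which speaks only of data, accuracy, and confidence, with no reference to "intelligence":]

```latex
% Standard PAC bound (Valiant-style) for a finite hypothesis class H:
% a learner that outputs any hypothesis consistent with m i.i.d. examples,
% where
%
%     m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right),
%
% has true error at most \epsilon with probability at least 1 - \delta.
m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

Learning here is purely a quantitative trade-off among sample size m, error tolerance ε, and failure probability δ.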


"you seem to be talking about emergence, but what are you getting at? Are you talking about ontology in the sense of what "really exists", or as a way to represent knowledge, for reasoning purposes?"

I'm more interested in what does exist, and I'm particularly not interested in representation techniques that can't seem to model what exists in operational or transferable ways.

<< However, I'm optimistic about the value of working on scientific understanding, even if it does not lead to any explication of our history. Given the complexity of feedback cycles, it is perhaps a practical task to continue with the development of pragmatic induction in distributed agents, watching on our behalf for unintended consequences. >>

I also don't take semantic science to be justified in itself, and therefore I look for applications. I think that a mechanized scientific understanding could be deployed in a useful way, namely to monitor observations and to raise potential consequences as an alert.

Re: I'll take a shot at explicating this

Date: 2009-02-20 01:01 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< I'm talking about the Valiant/Vapnik notions that cast learning on firmly mathematical grounds, making no particular reference to intelligence. I recognize that this may be a strange reading of Valiant, yet I think it reflects the actual content of Machine Learning and Computational Learning Theory papers. >>

I'm having trouble connecting all this talk of learning theory to the topic at hand.

Re: I'll take a shot at explicating this

Date: 2009-02-20 01:29 am (UTC)
From: [identity profile] mapjunkie.livejournal.com
I think I see the issue, as there are actually two topics.

1) There is at least one other separate tradition in machine learning and other AI topics, drawing a distinction from classical AI and its labors in knowledge representation, of which computational learning theory could be regarded as one of the points of departure, moving the rigorous portion of the discipline from logic into statistics and pure mathematics.

2) Even in these other traditions, there are strong reasons to hope for good approaches to working on large-scale scientific problems, drawing from statistical physics.

So, computational learning theory really doesn't directly enter into the topic of machine learning and statistics for science, at least at the level I've been discussing it, but instead serves as an example of changing traditions in the field.

(no subject)

Date: 2009-02-19 12:46 pm (UTC)
From: [identity profile] mapjunkie.livejournal.com
<< characterizes entities as statistical ensembles which emerged from specific complex statistical processes. Yet, in a certain way, this is not a formal account, as the phenomena found are not logically implied, but are historically contingent within particular generative processes. >>

"you seem to be talking about emergence, but what are you getting at?"

I'm saying that a formal, semantic representation that doesn't explicitly capture a given generative process will have a hard time linking one kind of observation with those from a lower level of reductionist abstraction, because there isn't necessarily a semantic path along the way (even in the actual science), and that the agent should be prepared to draw evidence through statistical means.
