gusl: (Default)
Today I had a brief chat with Nima at a café on Heather St. He told me that his research is about algorithms to: (a) look at some scatterplots that have been annotated by biologists, (b) figure out how the annotations are made, and (c) outperform the humans...

(b) is a supervised learning problem, in which we construct a descriptive model of human behavior.
(c) requires a prescriptive model, and is impossible to do from this data alone. In NLP, human annotations are normally taken to be the gold standard, because there's nothing better to compare our predictions against.

However, when you consider human imperfection, and the fact that we can use Occam's razor to zero in on the objective of their annotation, it's not so implausible anymore. For example, if you observed that the human annotations seem to be an attempt at circling the densest cluster in the plot, then that's something that computers can clearly do better on. However, the human behavior will have systematic biases, i.e. it deviates from the prescription, which is why the prescriptive model is unlearnable from this data...

However, when the motive behind the annotations is revealed, i.e. when the annotations are used as input to another problem, then our attempt at (c) can finally be evaluated.
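If the inferred objective really were "circle the densest cluster", the prescriptive step becomes mechanical and a computer can execute it exactly. A minimal sketch in pure Python (the points and radius are invented for illustration):

```python
import math

def densest_center(points, radius=1.0):
    """Toy 'prescriptive annotator': return the point whose neighborhood
    (within `radius`) contains the most points, i.e. the densest spot."""
    def n_neighbors(p):
        return sum(1 for q in points if math.dist(p, q) <= radius)
    return max(points, key=n_neighbors)

# Two clumps; the tight one around (0, 0) should win over the sparse one.
pts = [(0, 0), (0.1, 0.2), (-0.2, 0.1), (0.2, -0.1), (5, 5), (5.3, 5.2)]
center = densest_center(pts)  # → (0, 0)
```

A human circling the cluster by eye will be systematically off; once the objective is identified, the machine computes it exactly.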

See also: programming by demonstration
gusl: (Default)
Interacting with the real world (especially doctors) often makes me want to create expert systems.

Here's an idea for diagnosing allergy/cold/flu complaints. It should be fairly easy to set up a booth on campus, and collect data from volunteers (using Excel and a digital camera); call up 10 doctors, and have them diagnose the profiles given to them. Trying to get the ground truth would be more expensive, since that might involve blood tests (if that works!).

INPUTS (roughly ordered by presumed importance)

photograph of:
* face
* throat
* nose, ears, eyes

* complaints (possibly null)

* answer to "how long have you felt this way?", "do you have seasonal allergies?", and other relevant questions

* time of year, time of day when the data was collected

* biographical info: age, sex, height/weight, race


OUTPUT

* probability distribution over {nothing, allergy, virus, post-nasal drip, sleep apnea, other} (optionally, allow multiple conditions)

We can control which inputs are given to doctors. My goal is to do machine learning, to automate the diagnostic process, by learning a function from the inputs (images and text) to the diagnosis.
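As a sketch of that learning step, here is a toy k-nearest-neighbor diagnoser over binary symptom features. The feature encoding and training profiles are invented; a real system would first extract features from the photographs and free-text answers:

```python
from collections import Counter

def knn_diagnose(train, query, k=3):
    """k-NN over symptom feature vectors (a stand-in for the real
    image+text pipeline). `train` is a list of (features, diagnosis)
    pairs; features are tuples of 0/1 answers to intake questions."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))  # Hamming distance
    nearest = sorted(train, key=lambda fx: dist(fx[0], query))[:k]
    votes = Counter(dx for _, dx in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical encoding: (itchy eyes, fever, >7 days duration, seasonal history)
train = [((1, 0, 1, 1), "allergy"), ((1, 0, 0, 1), "allergy"),
         ((0, 1, 0, 0), "virus"),   ((0, 1, 1, 0), "virus")]
print(knn_diagnose(train, (1, 0, 1, 1)))  # → allergy
```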

I'm 99.9% sure that studies like this have been done before. How well did the system work?

Other possible applications of the statistics computed from this:
* help individual doctors correct their biases
* help individual doctors correct their incoherences
* help doctors make quicker decisions
* help patients select doctors who are good at diagnosing people with their type of profile
* for patients who want second opinions, help find an optimal pair of doctors (e.g. pairs whose biases would cancel out)
gusl: (Default)
Daphne Koller gave a very impressive talk today, mostly about this stuff.

Cool stuff on human bodies:
* object completion: completing the front of someone's body from just seeing their back
* reconstructing full body shape from mo-cap data, essentially making "synthetic full-body scans" a very cheap substitute for the real thing
* PCA on body shapes: note that gender was roughly the 4th principal component: see page 5 of this paper

gusl: (Default)
Last week, I had an excellent chat with Terry Stewart over AIM. We covered:
* cognitive modeling
* "neural compilers"
* philosophy of science (empirical vs theoretical models in physics and cogsci, unification/integration, predictions vs explanations, reduction). See his excellent paper here.
* automatic model induction
* shallow vs deep approaches to AI
* automated scientists

It went longer than 2 hours. It was the first time ever that I argued against a pro-formalization position... because I normally sympathize very strongly with formalization efforts.

Our conversation:

Terry had to run off, but if I interpret his point correctly, it sounds like he's saying that 99% of the research produced in universities (including most math and CS) doesn't qualify as theory, because it is too vague and/or ambiguous, falling short of this standard. So I must be misinterpreting him.


I like the idea of the following proof-checking game:
* Proposer needs to defend a theorem that he/she does or does not have a proof of.
* Skeptic tries to make Proposer contradict himself, or state a falsity.

Since formal proofs are typically too long (and unpleasant) to read in one go (see de Bruijn factor), this method only forces Proposer to formalize one branch of the proof tree. Since Skeptic can choose what branch this is, he should be convinced that Proposer really has a proof (even if it's not fully formalized).
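As a sketch, the protocol amounts to Skeptic steering a single root-to-leaf walk through Proposer's proof tree (the tiny tree below is hypothetical):

```python
import random

# Proposer's (informal) proof: a claim plus subclaims. Skeptic probes one
# branch at a time, so Proposer only formalizes a root-to-leaf path.
proof = {"claim": "main theorem",
         "subproofs": [{"claim": "lemma 1", "subproofs": []},
                       {"claim": "lemma 2",
                        "subproofs": [{"claim": "sublemma 2a",
                                       "subproofs": []}]}]}

def challenge(node, pick=random.choice):
    """Skeptic walks one branch, choosing which step to probe at each
    level; Proposer must fully justify exactly the steps on this path."""
    path = [node["claim"]]
    while node["subproofs"]:
        node = pick(node["subproofs"])
        path.append(node["claim"])
    return path

print(challenge(proof, pick=lambda subs: subs[-1]))
# → ['main theorem', 'lemma 2', 'sublemma 2a']
```

Since Proposer cannot predict which branch Skeptic will pick, consistently surviving challenges is evidence of a proof of the whole tree.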


Yesterday, while thinking about Leitgeb's talk, I came to consider the possibility that mathematical AI might not need formal theories. In fact, cognitively-faithful AI mathematicians would not rely on formalized theories of math.

Of course, we would ideally integrate cognitive representations of mathematical concepts with their formal counterparts.

Of course, losing formality opens the door to disagreements, subjectivity, etc. But real human mathematical behavior is like that. Can a machine learning system learn to mimic this behavior (by discovering the cognitive representations that humans use)? How do we evaluate mathematical AI? Do we give it math tests? Tutoring tasks?
gusl: (Default)
I just found out that there is a journal about Hybrid Intelligent Systems, i.e. systems that use multiple representations, multiple methodologies, etc. It reminds me of Minsky's speech about multiple representations being important, which I very much agree with.

While I consider the work of integrating these different representations and methodologies crucial for tackling complex problems, it seems like it could be a particularly frustrating activity, similar to formalizing mathematical proofs into computer-checkable form. I'm just imagining a ton of ontology mismatches, a few of which are interesting and possibly hide paradoxes, and a great majority of which are just boring.
gusl: (Default)
I love Zeilberger's style. From What Is Experimental Mathematics?:

Let me try and explain to you, by example, what is experimental math. It is really an attitude and way of thinking, or rather not thinking. Mathematicians traditionally love to solve problems by thinking. Myself, I hate to think. I love to meta-think, try to do things, whenever possible, by brute force, and of course, let the computer do the hard work.


Zeilberger-style Experimental Mathematics

Traditionally there was a dichotomy between the context of discovery, that nowadays is mostly done by computers, and the context of verification that is still mostly carried out by humans. In my style of experimental math, the computer does everything, the guessing and the (rigorous!) proving, if possible completely seamlessly without any human intervention. Feel free to browse my website for many examples.

gusl: (Default)
I would like to see an AI program that makes caricatures of human faces.

The idea is that the way we represent (perceive, remember) faces is by storing a "diff" from the baseline. We probably have different baselines for different categories of people: gender, age, race, etc.

The caricature program would select image features that people perceive (probably eyes, nose, lips, chin, etc.), amplify their deviation from the baseline in terms of location/size/shape, and reconstruct the image. This gets interesting when the perceived features interact with each other: e.g. the distance between the eyes interacts with the size of the eyes, and the size of the lips relative to the nose interacts with the size of the nose. The point is that, while we would like to amplify the difference in all features by the same amount, this is impossible: some features need to be sacrificed for the sake of others, so we need to give priority to the more salient ones... just like when you project a globe into 2D, you must choose some properties to preserve, while losing others.

I think this means that the caricature function is information-lossy, i.e. irreversible. An algorithm that makes you more "average" again would not return your original face.
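The amplification step itself is a one-liner; the hard parts are feature extraction and rendering. A toy version, with invented feature names, salience weights and exaggeration factor:

```python
def caricature(face, baseline, salience, k=1.5):
    """Amplify each feature's deviation from the baseline, scaled by how
    salient the feature is (salience in [0, 1]); k is the exaggeration
    factor. All names and numbers here are made up for illustration."""
    return {f: baseline[f] + (1 + k * salience[f]) * (face[f] - baseline[f])
            for f in face}

baseline = {"eye_spacing": 1.0, "nose_len": 1.0, "chin": 1.0}
face     = {"eye_spacing": 1.2, "nose_len": 0.9, "chin": 1.0}
salience = {"eye_spacing": 1.0, "nose_len": 0.5, "chin": 0.2}
print(caricature(face, baseline, salience))
# the wide-set eyes get exaggerated most (1.2 → 1.5); the chin, unchanged
# from baseline, stays put
```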

This face-morphing website lets you change your gender / race / age.


A caricature of my body would probably include narrow shoulders, short legs. But how does my face deviate from the baseline? I suspect I am rather brachycephalic, and have a large forehead, but I couldn't say more. What is your caricature of me? Submit your entry today.


I want to get serious about biometrics. I would like to scan every millimeter of my body, every month or so. Who knows what the benefits will be? When medicine finally knows how to use all this information, my medical history will have a lot more data.
gusl: (Default)
Yesterday, I met "the generalists" at Kiva Han, voted on films, and even got to be dictator for a round. They are a rather interesting set of geeks. Not too surprisingly, jcreed was there. (We always run into each other.)

When we were done, it was about 11pm, and I was ready to go to bed.

On my way back to the bike, I ran into jcreed and simrob (whom I hadn't seen in a while) at the 4th-floor whiteboard, and saw the former's very cool statement of Arrow's theorem in the language of category theory (no pun intended). In this instance, category theory seems unnecessarily abstract, since we could just as easily talk about the objects involved as sets.

As my sleepiness turned into dreaminess, the topic drifted into my Herbert-Simon-ish ideas of making AI mathematicians. We seem to have opposite inclinations on this: he is skeptical and believes it's too hard.

My main belief, that "mathematical reasoning is easy to automate", is grounded on:
* serial reasoning is easy to formalize
* most of mathematical and scientific reasoning is serial
* the current state-of-the-art is poor because of brittleness, due to a lack of multiple representations (see this), a lack of integration.

His main objection seemed to be that, in the context of trying to prove things, figuring out what to do next is not easy: mathematicians have intuitions that are hard to put into theorem provers. i.e. these search heuristics are not easy to formalize.

My response: intuition comes from experience. (i.e. we just need a corpus to learn from)

Other interesting thoughts:
* the intelligence required to perform any particular set of tasks, even if you're talking about "self-modifying" programs, has a Kolmogorov complexity (i.e. a minimum program size), and that information needs to come either from the programmer (via programming) or from the world (via machine learning).

Amazingly (or not, given GMTA), he and I quickly agreed on a plan for building human-level mathematical AI (like, after 15 minutes):
* construct good representations with which to learn (we may need to reverse-engineer representations that are hard-coded in people, like our innate ability to do physical reasoning, which is useful in understanding geometry)
* give it a corpus to learn from (i.e. a world to live in)

He also wrote down an axiomatization of group theory in Twelf, which I believe is complete in the sense of "every statement that is expressible and true of every group (i.e. every model) is a consequence of the axioms of group theory". Logicians, can you confirm this?

Finally, we talked about desirable features of future math books, like "expand definition", "generate examples and non-examples", etc. This should be easy: all I'm asking for is beta reduction... unlike those ambitious proof-mining people who want automatic ways of making theorems more general.


What do you think of the argument:
A - X should be easy!
B - instead of saying it's easy, you should be doing it.
gusl: (Default)
While marymcglo was driving us back from the Machine Learning picnic last Saturday, we somehow came up with some empirical questions that were difficult to answer objectively. For example:

"Are low-income people more likely to marry early?"

i.e. the kind of demographic questions that economists are interested in.

We have two kinds of data available:
* Census data
* Marriage records

Integrating these two to answer our question is not trivial.

For one thing, census data is anonymous. Also, if you don't have access to microdata (i.e. individual data points), then all you get are distributions conditioned on variables like "gender", "age group", "race" or "marriage status". In particular, you can't condition on more than one thing. In situations like this, one trick is to ask a different question:

"Are people in low-income counties more likely to marry early?"

whose answer can be used to answer our original question, but only if we buy an independence assumption, namely that people in low-income counties are representative of low-income people in general. In other words, we have to assume that the bias is small. Economists use such tricks all the time.
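Here is the proxy computation in miniature (all numbers invented); the step a formal query language would have to flag is the final, ecological-to-individual jump:

```python
# Aggregated (non-micro) data: one row per county, numbers invented.
counties = [
    {"median_income": 30_000, "early_marriage_rate": 0.40},
    {"median_income": 35_000, "early_marriage_rate": 0.35},
    {"median_income": 60_000, "early_marriage_rate": 0.20},
    {"median_income": 70_000, "early_marriage_rate": 0.15},
]

def rate_by_income(counties, cutoff=50_000):
    """Average early-marriage rate in low- vs high-income counties."""
    low = [c["early_marriage_rate"] for c in counties
           if c["median_income"] < cutoff]
    high = [c["early_marriage_rate"] for c in counties
            if c["median_income"] >= cutoff]
    return sum(low) / len(low), sum(high) / len(high)

low, high = rate_by_income(counties)
# 0.375 vs 0.175: an *ecological* answer, transferable to individuals
# only under the representativeness assumption stated above.
```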

The methodologist in me wants to create a formal language for querying all this demographic data, while making these economists' tricks explicit. Once we have such a language, some logical questions are:
* what class of questions can be answered by our data?
* what questions need extra assumptions to be answered by our data?

Using this language, you would ask the reasoning engine a particular question, and it would come back offering you a choice of assumptions that could be used to answer the question. It is up to you to decide whether and how much you believe each of these assumptions. The more often an assumption gets accepted, the higher its prior gets: this way, the system formalizes what assumptions are considered "common-sense".

This is also a semantic-web-ish idea. For example, your question might talk about concepts that are not explicitly talked about in the data, but only indirectly so (there is a gap between your question and the data). Or you might have semantic interoperability issues between your data sets (the gap is inside the data).

Finally, I would like to create a Library of Formalized Economic Arguments. I don't know if anyone else is interested in this. While many economists seem to be interested in methodological issues, I don't know any who would like to take this to a foundational level.

P.S.: I didn't even mention causal inferences yet.


Census Microdata:

Uses of Microdata
Most population data - especially historical census data - have traditionally been available only in aggregated tabular form. The IPUMS is microdata, which means that it provides information about individual persons and households. This makes it possible for researchers to create tabulations tailored to their particular questions. Since the IPUMS includes nearly all the detail originally recorded by the census enumerations, users can construct a great variety of tabulations interrelating any desired set of variables. The flexibility offered by microdata is particularly important for historical research because the aggregate tabulations produced by the Census Bureau are often not comparable across time, and until recently the subject coverage of census publications was limited.
gusl: (Default)
From a comment I just wrote to jcreed.

Another thing I asked Pfenning about was the proper interpretation of sentences in linear logics (it might as well have been about non-monotonic logics or paraconsistent logics; thanks to quale for the reminder). My inclination would be to say that these are "logics" only in the mathematical sense, not in the philosophical sense (in what I call "philosophical logics", sentences are about real truth; in particular, excluded middle and monotonicity hold). But when we talk about agents' beliefs, we are in an intensional context, and these two no longer need to hold.

If we claim that sentences of such logics are meaningful, then we should be able to translate them into sentences in philosophical logics, e.g. temporal logics, by jumping out of the agent, and into an outsider's "objective perspective". But I don't see anyone bothering to do this.

For an illustration of what I mean:

While non-monotonic logic can model an agent's belief revision, we know that sentences in this logic are not to be judged as modeling truth. When we see a pair of sentences like:
X |- Z
X, Y |/- Z

we know that |- can't possibly refer to truth (after all, truth is monotonic). Instead, |- must refer to the agent's beliefs and reasoning processes. Furthermore, this formalism is vague about what refers to the agent's beliefs about facts, what refers to the agent's beliefs about which inferences are valid, and whether the agent's inferences follow this logic blindly, without reflection.

Therefore, if we want to use a true philosophical logic, we should write something like:
B( B(X) ||- B(Z) ) (agent believes that: belief in X, in the default case, justifies belief in Z)

B( B(X) /\ B(Y) ||/- B(Z) ) (agent believes that: belief in X, when accompanied by belief in Y, in the default case, does not justify belief in Z)

Real reasoning involves reflection. Logicians often don't care enough about reflection.
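Operationally, the pair of sequents above is just a default rule with a defeater, which takes two lines to simulate:

```python
# Default rule with a defeater: belief in X normally justifies belief in Z,
# unless Y is also believed (the classic instance: birds normally fly,
# unless the bird is known to be a penguin).
def justifies_Z(beliefs):
    return "X" in beliefs and "Y" not in beliefs

print(justifies_Z({"X"}))       # → True   (X |- Z: the default fires)
print(justifies_Z({"X", "Y"}))  # → False  (X, Y |/- Z: more beliefs, fewer conclusions)
```

The failure of monotonicity is visible directly: enlarging the belief set retracts a conclusion.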

AI course

Aug. 8th, 2006 11:38 pm
gusl: (Default)
This seems like an excellent course on AI. I'd like to highlight the section on learning.
gusl: (Default)
One of the central ideas motivating my research is expressed by Herb Simon in the following quote:

Q: So you have moved from field to field as you could bring new tools to bear on your study of decision making?

A: I started off thinking that maybe the social sciences ought to have the kinds of mathematics that the natural sciences had. That works a little bit in economics because they talk about costs, prices and quantities of goods. But it doesn't work a darn for the other social sciences; you lose most of the content when you translate them to numbers.

So when the computer came along -- and more particularly, when I understood that a computer is not a number cruncher, but a general system for dealing with patterns of any type -- I realized that you could formulate theories about human and social phenomena in language and pictures and whatever you wanted on the computer and you didn't have to go through this straitjacket of adding a lot of numbers.

As Dijkstra said, Computer Science is not about computers. It is about processes.

A very common error is for people to make an argument like the following:
Stock prices have to do with human behavior. Therefore they are unpredictable. It's not like physics, where computers and mathematical models are useful.

I go all "oy vey" whenever I hear arguments like this... and then, they accuse me of reductionism.

My mom doesn't like it when I interview doctors, trying to formalize their knowledge so I can truly understand my problems. At the same time, she says (non-sarcastically) that I should go into biomedical research.
gusl: (Default)
I am all about working with multiple representations, combining induction with deduction, merging probability with logic, etc.

When we write software, we desire it to be correct, and demonstrably so ("demonstrably" in the sense of being able to convince someone: perhaps "demoably" is a better term). There are many ways of doing this:
* empirical testing: we see that it does the right thing.
* agreement: testing, not against the world or our belief of what it should do, but against an "independent" implementation. We could unify these two by defining the concept of "agent", which could encompass a human test-judger, another program, etc.
* formal proving: more definite than the above two, but we have to rely on the system in which the proof is built.

Software development routinely involves all of the above, except for the "proving", which is informal. But, (as would be typical of me to say) we do make real deductions when writing or judging a piece of code.

The mainstream of software development follows an engineering / instrumentalist epistemology: they know that the program has bugs, but they don't mind them as long as the program is still useful. As a consequence, they forgo formal proofs and are satisfied if the program passes a particular class of empirical test cases.

The purpose of test cases is often to convince one that the program would work on a broader class of test cases (otherwise all you would be proving is that the program will work on a non-interactive demo). This is induction. Perhaps the most important epistemological question for software is: how far and how confidently can one generalize from a set of test cases?

Another question has to do with the dynamic development process: when and how should we test? Common-sense testing methodology tells us to write simple test cases first. This is Occam's razor.

Btw, this is similar to mathematical epistemology: how do we combine experimental mathematics with normal deductive math? How do we combine different pieces of intuitive knowledge in a consistent, logical framework?


If you ask a doctor (or any specialist) to write down probabilities about a particular domain that he/she knows about, these numbers will almost certainly be susceptible to Dutch book. Bayesians consider this to be a bad thing.

I believe that, by playing Dutch book with him/herself, our doctor would achieve better estimates. I would like to see experiments in which this is done. Actually, I should probably write some of this software myself. The input is a set of initial beliefs (probabilities), and the output is a Dutch-book strategy. This Dutch-book strategy corresponds to an argument against the set of beliefs. This forces our specialist to reevaluate his beliefs, and choose which one(s) to revise. This is like a probabilistic version of Socratic dialogue.
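That software can start very small. In the simplest setting, probabilities quoted over mutually exclusive and exhaustive diagnoses, incoherence is just the quotes failing to sum to 1, and the winning side of the bet falls out directly. A toy coherence checker (all numbers invented):

```python
def dutch_book(probs):
    """Toy coherence check: `probs` maps mutually exclusive, exhaustive
    outcomes to a specialist's quoted probabilities. If they don't sum
    to 1, return a sure-win trade against the specialist (using $1
    tickets priced at the quoted probabilities); otherwise None."""
    total = sum(probs.values())
    if abs(total - 1.0) < 1e-9:
        return None  # coherent: no Dutch book exists
    # total > 1: sell the specialist every ticket (collect `total`, pay 1).
    # total < 1: buy every ticket from them (pay `total`, collect 1).
    side = "sell" if total > 1 else "buy"
    return {"side": side, "sure_profit": round(abs(total - 1.0), 10)}

beliefs = {"allergy": 0.5, "virus": 0.4, "nothing": 0.3}  # invented quotes
print(dutch_book(beliefs))  # → {'side': 'sell', 'sure_profit': 0.2}
```

Showing the doctor that trade, and asking which quote to revise, is the probabilistic Socratic dialogue described above; handling conditional bets and overlapping events would take more machinery.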


Do you see the connection between the above two? Please engage me in dialog!
gusl: (Default)
I've been working on a difficult programming puzzle, whose main difficulty consists in computing a function efficiently. The specification is therefore much simpler than the solution. This should make it a perfect case for applying automatic programming: what is required is "cleverness", not "knowledge". (Of course, this "knowledge" does not include knowledge of heuristics, mathematical theorems, etc. (all of which *are* useful), since those have low Kolmogorov complexity, being consequences of just a few axioms.)

It reminds me of something like Fermat's Last Theorem: easy to state, hard to prove. It's also easy to write an algorithm that eventually proves it, but very hard to make it output the proof before the end of the universe: just do a breadth-first proof search through the axioms of ZFC (if we don't want to worry about interpretations or waste time proving non-computable results, then I think substituting ZFC with "Lambda Calculus" will do). The "creativity" lies in finding the representation with which the proof-finding becomes computationally easy. (Could Lenat's and Colton's "automated mathematicians" be applied to Automatic Programming? Kerber is probably interested in the automated design of mathematical concepts: is this applicable?)
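The "end of the universe" search is easy to exhibit in miniature: a blind breadth-first derivation search in a toy string-rewriting calculus (the rules are invented stand-ins for inference rules):

```python
from collections import deque

def bfs_derive(axiom, rules, goal, limit=1_000):
    """Blind breadth-first proof search in a toy string-rewriting calculus:
    complete (finds a derivation if one exists within `limit` states), but
    with exactly the blow-up described above for any nontrivial goal."""
    seen, frontier = {axiom: None}, deque([axiom])
    while frontier and len(seen) < limit:
        s = frontier.popleft()
        if s == goal:  # reconstruct the derivation by walking parents back
            path = []
            while s is not None:
                path.append(s)
                s = seen[s]
            return path[::-1]
        for lhs, rhs in rules:  # apply every rule at every position
            for i in range(len(s) - len(lhs) + 1):
                if s[i:i + len(lhs)] == lhs:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t not in seen:
                        seen[t] = s
                        frontier.append(t)
    return None  # ran out of patience (or the goal is underivable)

rules = [("a", "ab"), ("b", "bb")]    # invented rewrite "axioms"
print(bfs_derive("a", rules, "abb"))  # → ['a', 'ab', 'abb']
```

A better representation shrinks the search space; that is exactly the "representation search" proposed below.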

The smart way to tackle such problems, therefore, is to do a "representation search". We can implement heuristics used by human mathematicians (Colton and Pease have worked on "Lakatos-Style Reasoning").

Can normal Automated-Theorem Provers find "creative" proofs and "Proofs without Words" of the sort found here? Why not? Because they are missing representations. Jamnik's work could be used to add diagrammatic representations to such automated mathematicians.

This reminds me of Kasparov vs Deep Blue. It seems that Deep Blue won by "brute force". Not brute force alone, of course: without the tons of hours spent on making it smarter, all those computations would still have gotten them nowhere. But a "fair" game would have been one in which Deep Blue's computational resources were limited: you're only allowed so many megaflops or whatever. While it is hard to quantify the computational power of Kasparov's brain (in fact, it's probably a hybrid analog-digital computer), accepting the outcome of the match as indicating that Deep Blue is a "better chess player" than Kasparov is like saying that a retarded giant is a "better fighter" than a tiny man, when "fairness" requires putting them in different weight categories.


fare suggests that Jacques Pitrat has done relevant work on automatic programming, but I haven't found such a reference.

simonfunk has suggested that AI could emerge out of compilers, since they try to be code-optimization machines. One problem with this, of course, is that most programming languages are specified to perform the exact computations determined by the code (maybe not Prolog). The kind of "compilers" relevant here are something like code-generators (given a formal specification). (Would very general constraint-solvers be helpful too?) In any case, a compiler that "optimizes" such a function would need to come up with the required representations.
gusl: (Default)
Kowalski, Toni (1996) - Abstract Argumentation

We outline an abstract approach to defeasible reasoning and argumentation which includes many existing formalisms, including default logic, extended logic programming, non-monotonic modal logic and auto-epistemic logic, as special cases. We show, in particular, that the "admissibility" semantics for all these formalisms has a natural argumentation-theoretic interpretation and proof procedure, which seem to correspond well with informal...

Dung, Kowalski, Toni (2005) - Dialectic proof procedures for assumption-based, admissible argumentation
We present a family of dialectic proof procedures for the admissibility semantics of assumption-based argumentation. These proof procedures are defined for any conventional logic formulated as a collection of inference rules and show how any such logic can be extended to a dialectic argumentation system.

The proof procedures find a set of assumptions, to defend a given belief, by starting from an initial set of assumptions that supports an argument for the belief and adding defending assumptions incrementally to counter-attack all attacks. The novelty of our approach lies mainly in its use of backward reasoning to construct arguments and potential arguments, and the fact that the proponent and opponent can attack one another before an argument is completed. The definition of winning strategy can be implemented directly as a non-deterministic program, whose search strategy implements the search for defences.

In conventional logic, beliefs are derived from axioms, which are held to be beyond dispute. In everyday argumentation, however, beliefs are based on assumptions, which can be questioned and disputed...

The purpose of this paper is to study the fundamental mechanism humans use in argumentation, and to explore ways to implement this mechanism on computers. Roughly, the idea of argumentational reasoning is that a statement is believable if it can be argued successfully against attacking arguments.

Panzarasa, Jennings, Norman - Formalizing Collaborative Decision-Making and Practical Reasoning in Multi-agent Systems

Kenneth Forbus - Exploring analogy in the large
gusl: (Default)

All the pieces finally fall into place:

Causal diagram of Chronic Rhinitis

Solid lines mean positive influence (+), i.e. more of the source tends to cause more of the target.
Dashed lines mean negative influence (-), i.e. more of the source tends to cause less of the target.

N.B.: I don't suffer from all causes or all symptoms above.

I could add a node for "vasoconstrictor" (e.g. Afrin) right next to "fluticasone", having a negative (i.e. health-positive) effect on "amount of blood in mucosa", but the problem is that vasoconstrictors have a short-term effect that rebounds, becoming a positive (i.e. health-negative) effect.
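One simple inference the +/- semantics already licenses: the net sign of an influence path is the product of its edge signs. A sketch with stand-in node names (not the actual diagram):

```python
# Signed influence graph: +1 for a solid edge (more source -> more target),
# -1 for a dashed edge (more source -> less target). Node names invented.
edges = {("allergen", "inflammation"): +1,
         ("inflammation", "turbinate_size"): +1,
         ("fluticasone", "inflammation"): -1,
         ("turbinate_size", "airflow"): -1}

def path_sign(path):
    """Net sign of an influence path = product of its edge signs."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= edges[(a, b)]
    return sign

# More fluticasone -> less inflammation -> smaller turbinates -> more airflow:
print(path_sign(["fluticasone", "inflammation", "turbinate_size", "airflow"]))  # → 1
```

Note this breaks down as soon as influences are non-monotonic, like the rebounding vasoconstrictor above, which is one reason the semantics question matters.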

Thanks WikiTex/Wikisophia, for providing me with a sandbox! Wiki code is behind the cut.

Fluticasone appears to be effective in the long run. But if I end up needing to use it for the rest of my life, then I'll go for a ~50% partial turbinectomy (under the knife, since laser seems to damage mucociliary function).

I am interested in the semantics of these diagrams, and how they relate to argument maps and formal proofs.

semantics of diagrams

* Say we want to instantiate a particular allergen and a particular individual: what kind of graph rewriting will we need to do?

* What about expressing the distinction between independent and dependent influences (e.g. conjunction, synergy)?

* What about tagging nodes with information about which leaves are controllable?

* Some effects have preconditions: snoring requires sleeping, and sleeping requires lying down. So we have an implicit relationship in the graph: the consequence is that turbinate enlargement will be worse during sleep. Could conclusions of this kind be drawn automatically, by simply adding the implicit information to the current representation?

gusl: (Default)
Frank van Harmelen seems like an interesting person

Groot, ten Teije, van Harmelen - Towards a Structured Analysis of Approximate Problem Solving: a Case Study in Classification
The use of approximation as a method for dealing with complex problems is a fundamental research issue in Knowledge Representation. Using approximation in symbolic AI is not straightforward. Since many systems use some form of logic as representation, there is no obvious metric that tells us `how far' an approximate solution is from the correct solution.

This is an issue in the philosophy of science, in particular the issue of how reliable simulations are: how much will errors spread? In terms of inference, I think of a simulation as a large chunk of deductions with a few (false) auxiliary assumptions thrown in. Ideally, we would use the false assumptions as little as possible, but the reason we make those assumptions in the first place is that analytical solutions are intractable.


Oct. 30th, 2005 06:01 pm
gusl: (Default)
I'm feeling pretty good, after a long game of football (soccer). I scored a lot, but mostly had fun stealing the ball and doing aerial play.

I think I finally understand a fundamental difference between Brazilian (and Argentinian) and Gringo football: control of the ball. For me, tackling in Holland is easy. (NB: I play clean & safe: no sliding tackles) Today I stole lots and lots of balls from these guys, but they couldn't steal it from me nearly as easily (I think I have more "malice" than they do). Besides my Brazilian training, I have a knack for accurate shooting: I've always liked to practice kicking things in the air. I'm also good at finding spaces to pass. So what am I not good at? My long-distance shooting could use some improvement... and I could be better at "dribbling" to get past defenders. But I think I really need to play with people at my level in order to see my shortcomings well.

In Europe, players don't control the ball nearly as well as in Brazil. They can't fake their dribbles well enough: I can still tell what they're going to do.


I'd like to model the plan construction that goes on in players' heads. I think most people don't realize how much planning goes on in football games. It's true that you can't plan more than 30 seconds ahead. And the beautiful thing is that this planning/replanning process becomes automatic and second-nature as you get experienced.

Players who can see solutions that the others don't see are said to have "insight". Such moves also have the advantage of catching the opponent off-guard.
A lot of the man-to-man confrontation involves economic "game theory": fake signalling, etc.

One of my early AI project dreams was a system to understand football games. I think we first need good 3D data of these games.
gusl: (Default)
me- aren't all these philosophical questions, questions about AI? (unifying epistemic logic with probability, coherentism, etc)

somebody- what about metaphysics?

me- irrelevant things are, well, irrelevant.

Fitelson- early Hempel meets Herbert Simon!

me- ...uhmmm.. ok...

I haven't read any Hempel.
gusl: (Default)
Gustavo - "Kolmogorov Complexity"! "information distance"! "case-based reasoning"!
Google - no major websites or papers connecting the two

Isn't the connection obvious??
Doesn't CBR require a similarity measure? Isn't information distance the most general similarity measure?
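The connection fits in a dozen lines: the Normalized Compression Distance approximates the (uncomputable) Kolmogorov-complexity-based information distance with a real compressor, and dropping it in as a CBR similarity measure is immediate. A sketch with invented case texts:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: a computable stand-in for the
    (uncomputable) Kolmogorov-complexity-based information distance."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# CBR retrieval step: find the stored case most similar to the new problem.
cases = [b"engine stalls when cold, rough idle" * 20,
         b"screen flickers after driver update" * 20]
query = b"engine stalls when cold in the morning" * 20
nearest = min(cases, key=lambda c: ncd(query, c))  # the engine case
```

The quality of the retrieval is bounded by the compressor's ability to exploit shared structure, which is exactly where the Kolmogorov-complexity idealization gets lost.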

