gusl: (Default)
I just submitted Ch. 8 of my Master's thesis as a writing sample. To give my readers more background, I wrote the following:

Background for the Writing Sample

The following writing sample is a chapter taken from my Master's thesis, which I wrote in Amsterdam in 2005.

The idea was to create a corpus of scientific derivations (taken from undergraduate textbooks) in the form of derivation trees, and use that corpus to create problem-solving AI that is simultaneously a cognitive model of scientists who have that set of "puzzle solutions" in their mind.

The central idea of the thesis was to model scientists' reuse patterns. A result can be seen as a "chunk", a memoized consequence of a set of higher-level laws (and model-specific conditions). If the goal of scientific explanation is to link hypotheses to higher-level laws (as in the Deductive-Nomological (DN) view), then the reuse of a previously-seen result is a shortcut: that path has already been traced for us. Why bother proving a lemma that has been proven before?

This is analogous to my advisor's research area, Data-Oriented Parsing (DOP), in which analyses of natural language utterances are memoized (chunks are subtrees of parse trees). DOP has a number of interesting properties, including the ability to model chunk salience and extra-grammatical utterances. Are extra-grammatical utterances analogous to deviations from strictly DN derivations (i.e. derivations that use approximations, fudge factors, etc.)? This was one of the main questions I investigated.

My thesis was a failed attempt to apply DOP to the domain of scientific reasoning. While language behavior (production and interpretation) can be said to reuse previously-seen subtrees as chunks, the same is not the case for derivation trees: scientists do not need to know how a lemma was proven in order to use it. And while language approximately obeys the principle of compositionality (the meaning of the whole is a function of the meaning of its parts), the same is not true for derivations.

Furthermore, given the nature of the task, I was unable to get a decent-sized corpus with which to estimate usage frequencies for scientific results, let alone evaluate the resulting system. Nevertheless, the thesis contains significant work on equational reasoning and formalization of physics.

This chapter focuses on how best to formalize textbook derivations. It proposes a strong ontology, e.g. “(force gravitational Earth Moon)” instead of “F_moon”, which enables the further formalization steps of adding preconditions (formalized as predicates) to formulas, representing constraints given by the model, and tagging axioms with the theory they came from.

Gustavo Lacerda

This seems interesting:

Pearce, Rantala - Approximative Explanation is Deductive-Nomological
gusl: (Default)
Francesco Guala - Models, Simulations, and Experiments

My thoughts:
* In mathematics, simulations and experiments are the same thing.

* As an AI/CogSci person, I would like to see a simulation in which resource-bounded agents each make a simulation of their world. Looking at their performance might shed some light on how *we* should make our simulations in the real world (this may be especially true if you believe that we are living in a simulation). The lesson might even be too hard for us to understand: imagine that one of the simulated agents came up with a crazy feature-selection algorithm, maybe using neural networks (or some other algorithm that is a black box to us). We might still benefit from copying their algorithm and using it in the real world... especially if we try to make sure that the reason it works is not that it exploits artifacts of the simulation (one way of doing this is to make sure that the algorithm is robust across different simulations, written by different people).

I'm reminded of this idea:
* Debugging is like the scientific method: you combine theory (reasoning about programs) and experiment (testing). The difference is that debugging is easier:
** computer programs are known to be deterministic, and we can control initial conditions.
** closed world: when debugging, there is a bounded number of things that could be causing the undesired behavior. The evil genie of worst-case can only be so evil.
gusl: (Default)
MR's short post "Risk Analysis using Roulette Wheels" reads:
A PSA test can reveal the presence of prostate cancer. But not all such cancers are fatal and treatment involves the risk of impotence. Do you really want the test? It's difficult for patients to evaluate these kinds of risks. Mahalanobis points us to an article advocating visual tools such as roulette wheels to help patients understand relative risks and chance. Even better than the diagrams is this impressive video; the video may be of independent interest to the older men in the audience.

The basic problem is that the screening doesn't distinguish non-fatal cancer (presumably harmless) from fatal cancer. They say that, in case of a positive test result, one "must" pursue treatment, since the probability of death is pretty high without it. They don't mention that one could just as well refuse treatment, since with treatment the probability is also high that one will get bad side-effects unnecessarily (in the cases of non-fatal cancer).

But if the screening is free, shouldn't we *always* have it done? Isn't this an axiom of rationality?
A rationalist would argue that one should choose not to do the screening ONLY IF one's decision were going to be the same in either case, i.e. no treatment regardless of the result of the screening.

My interpretation:

They say that one could choose not to be screened because people who get screened have a higher probability of bad side-effects. This is true, but only because people who get screened have a higher probability of finding something, and therefore a higher probability of getting treatment. A rationalist (like me) would argue that if you have the balls to face a higher risk of death in exchange for a smaller chance of getting side-effects when these probabilities are small, then you should have the balls to make the same choice when the chances are high (e.g. tumor strikes). But in practice, one might not trust oneself to.
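To make the rationalist's argument concrete, here is a toy expected-utility comparison in Python. All probabilities and utilities are hypothetical numbers chosen for illustration, not real prostate-cancer statistics, and the model idealizes both the test and the treatment:

```python
# Toy expected-utility model of "screen vs. don't screen".
# ALL numbers below are hypothetical, for illustration only.

P_CANCER = 0.10              # prior probability of having cancer
P_FATAL_IF_UNTREATED = 0.25  # fraction of cancers that kill you if untreated
P_SIDE_EFFECTS = 0.50        # probability that treatment causes side-effects
U_DEATH = -100.0             # utility of dying of the cancer
U_SIDE_EFFECTS = -10.0       # utility of living with side-effects

def eu_no_screen():
    # Never screened, never treated: you lose only if the cancer was fatal.
    return P_CANCER * P_FATAL_IF_UNTREATED * U_DEATH

def eu_screen():
    # Idealized: the test detects cancer perfectly (but can't tell fatal from
    # non-fatal), a positive result always leads to treatment, and treatment
    # always prevents death but risks side-effects.
    return P_CANCER * P_SIDE_EFFECTS * U_SIDE_EFFECTS

print(eu_no_screen())  # -2.5
print(eu_screen())     # -0.5: with these numbers, screening + treatment wins
```

The rationalist point shows up in the structure of the model: refusing the free screening can only be optimal if your treatment decision would be the same regardless of the result.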

gusl: (Default)
Shut up and calculate!

Why do physicists care about interpretations of QM? Do different interpretations make different predictions? If so, shouldn't they then be called theories instead?
gusl: (Default)
Frank van Harmelen seems like an interesting person

Groot, ten Teije, van Harmelen - Towards a Structured Analysis of Approximate Problem Solving: a Case Study in Classification
The use of approximation as a method for dealing with complex problems is a fundamental research issue in Knowledge Representation. Using approximation in symbolic AI is not straightforward. Since many systems use some form of logic as representation, there is no obvious metric that tells us `how far' an approximate solution is from the correct solution.

This is an issue in the philosophy of science, in particular the issue of how reliable simulations are: how much will errors spread? In terms of inference, I think of a simulation as a large chunk full of deductions with a few (false) auxiliary assumptions thrown in. Ideally, we would use the false assumptions as little as possible, but the reason we make those assumptions in the first place is because analytical solutions are intractable.
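As a toy illustration of error spread (my own sketch, not from the paper): a long chain of otherwise-exact deduction steps built on one slightly false auxiliary assumption, here a 5% error in a growth rate:

```python
# One false auxiliary assumption (a 5% error in the growth rate) propagated
# through many otherwise-exact deduction steps of a toy simulation of x' = x.

import math

def simulate(rate, steps=100, dt=0.01, x0=1.0):
    x = x0
    for _ in range(steps):
        x *= math.exp(rate * dt)  # each step is exact *given* the assumed rate
    return x

true_x = simulate(1.0)     # correct rate: x(1) = e
wrong_x = simulate(1.05)   # the false assumption
print(abs(wrong_x - true_x) / true_x)  # ~0.051: the 5% error has spread
```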
gusl: (Default)
Great Ideas in Personality is a site comparing several approaches in personality psychology side by side, with lots of material on methodology & philosophy of science.
gusl: (Default)
According to the "received view" in formal philosophy of science, we need a notion of nomic (law-like) necessity in order to formulate scientific laws.

For example:
"all lions that ever drowned in the North Atlantic were female"
forall x ( (lion(x) /\ drowned-in-NA(x)) -> female(x) )

is not considered a law, because it's a contingent fact, whereas

"all bodies with mass have a gravitational attraction to the sun"
forall x ( (body(x) /\ has-mass(x)) -> has-grav-attraction-to-the-sun(x) )

is.

There is a sense in which the latter statement is necessary: if we learn that X is a body with mass, we say that X *must* have a gravitational attraction to the sun. While you could still say the word "must" when drawing a conclusion from the first sentence, you would be less likely to.

I would say that this is because the first sentence is only quantifying over actual lions in actual observed situations, whereas the second quantifies over all possible bodies, giving it the generality required for being a scientific law.

The necessity expressed by the "must" can be formalized by adding the so-called "nomic" modality (nomos(gr.) = law). There are many things that are nomically necessary that are not logically necessary: in fact, scientific laws are never logically necessary. Any statement that is logically necessary is unfalsifiable, and thus fails to be "scientific", at least in Popper's view.

Determinism can be seen as the view in which all true statements are necessarily true (different modes of "necessary" corresponding to different brands of determinism). While determinism is an irrefutable view, one should not simply discard the nomic modality: there exists an important difference between the two kinds of sentences exemplified above, even if it's only a cognitive difference: the second sentence allows us to draw conclusions about all potential massy bodies (or future situations involving massy bodies), while the first does not allow us to draw conclusions about all possible lions (or future lions).

My thesis has been about formalizing scientific reasoning. I think my formalization is safe, even though it doesn't use a nomic modality, because my laws always quantify over all possible situations.

So for example, (IMPLIES (PRED1 x) (PRED2 x)) should be interpreted as saying that all potential objects (are these the same as Zalta & Fitelson's abstract objects?) satisfying PRED1 will satisfy PRED2. You could put a nomic necessity box in front of this statement if you like, but I don't think it adds anything.

My system already distinguishes laws (tagged "LAW TH" for some theory TH) from contingent statements (boundary conditions, tagged "BC"). While laws in the corpus (a corpus is a log of what has been seen before: the idea is that it represents the scientist's experience) can get reused, boundary conditions should not (although they are still true, as long as names are kept unique), except when the same condition remains across problems. Better idea: we could have libraries of boundary conditions, for reuse in problems that share the same BC's. Each library contains a set of BC's, and you could use several libraries simultaneously (e.g. one library has information about the sun's radiation, another about the Itaipu Dam).

So while statements like (IMPLIES (UCM B1 B2 (UCM-PERIOD B1 B2)) (= (acc B1) (/ (^ (vel B1) 2) (distance B1 B2)))) should get reused, statements like (= (height wall) (* 3 m)) should not, unless there exists only one wall in the universe, whose height is 3 meters. A statement like (= (height wall78942396) (* 3 m)) seems perfectly fine, however, as long as there is some name management (generating large random numbers seems like a fine solution).
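A minimal sketch of how such a corpus might be organized, with reusable laws, BC libraries, and random-suffix name management. This is my own illustrative reconstruction, not the actual thesis system; all class and function names here are made up:

```python
# Illustrative sketch (not the thesis system): a corpus distinguishing
# reusable laws from boundary conditions grouped into libraries, with
# random-suffix name management for problem-specific objects.

import random

class Corpus:
    def __init__(self):
        self.laws = []          # (theory, statement): reusable in any problem
        self.bc_libraries = {}  # library name -> list of boundary conditions

    def add_law(self, theory, stmt):
        self.laws.append((theory, stmt))

    def add_bc(self, library, stmt):
        self.bc_libraries.setdefault(library, []).append(stmt)

    def statements_for_problem(self, libraries):
        # Laws are always in scope; BC's only from the requested libraries.
        stmts = [stmt for (_, stmt) in self.laws]
        for lib in libraries:
            stmts.extend(self.bc_libraries.get(lib, []))
        return stmts

def fresh_name(base):
    # "wall" -> e.g. "wall78942396": a large random suffix keeps names unique
    return f"{base}{random.randrange(10**8):08d}"

corpus = Corpus()
corpus.add_law("UCM", "(IMPLIES (UCM B1 B2 (UCM-PERIOD B1 B2)) "
                      "(= (acc B1) (/ (^ (vel B1) 2) (distance B1 B2))))")
wall = fresh_name("wall")
corpus.add_bc("itaipu-dam", f"(= (height {wall}) (* 3 m))")
print(corpus.statements_for_problem(["itaipu-dam"]))
```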
gusl: (Default)
Richard Cooper - What would convince me that my theory was wrong? is a PowerPoint presentation in 15 slides.

It begins by summarizing Lakatos's "A Critique of Naïve Falsification":
* All theories include central assumptions (the hard core) as well as peripheral assumptions (the protective belt)
* Anomalies can always be accommodated via adjustments to the peripheral assumptions

Adjusting peripheral assumptions reminds me of "monster-barring", which, in turn, seems to have a connotation of weaseling out (apparently lawyers do something like monster-barring).

He later uses ACT-R and Soar as examples.

Harvey A. Cohen (1974) - THE ART OF SNARING DRAGONS seems like an interesting paper that discusses monster-barring in physics.
gusl: (Default)
Mark Priestley - The Logic of Correctness in Software Engineering

Abstract. This paper uses a framework drawn from work in the philosophy of science to characterize the concepts of program correctness that have been used in software engineering, and the contrasting methodological approaches of formal methods and testing. It is argued that software engineering has neglected performative accounts of software development in favour of those inspired by formal logic.

He discusses Dijkstra and Lakatos in the same paper!
gusl: (Default)
W. H. Newton-Smith's "Companion to the Philosophy of Science" is an excellent encyclopedia-style book. I might buy it if it weren't so bulky (the lighter I travel, the better)

Scientific Understanding

Understanding Scientific Understanding is an interesting project. I would like to unify Minsky's notion of "understanding as multiple representations" with Lakatos's models of reasoning and theory change. Understanding can also come in the form of explanatory redundancy: if you derive the same result in two different ways, you are justified in saying that you understand it, at least with respect to the theory. One could call this "understanding as deductive confirmation", and it's similar to the confidence that you get by double-checking a computation by implementing an algorithm in different ways, or by proving things with Coq: the probability that there's a mistake is greatly reduced.

But "understanding" usually means more than just increased confidence. I think understanding involves generalization: could you answer another question about the derived result? This reminds me of work on computational analogy, and the idea of applying theory formation to generating questions that test student understanding (I've heard of work on this by Simon Colton via Alison Pease). I'd really like to see math problem generators that go beyond changing parameters. (At AIED, I met Henry Halff, an independent developer of physics-tutoring systems who was interested in this very problem, and wasn't aware of the possibility of applying theory-formation, which I had learned by talking to Alison just 3 months before).


The Practical Importance of Philosophy

H.W. de Regt, 'Are physicists' philosophies irrelevant idiosyncrasies?' Philosophica 58 (1996, 2) 125-151.
This article argues that individual philosophical commitments of scientists can decisively influence scientific practice. Two examples from the history of physics, concerning controversies between physicists over central problems in their discipline, are presented to support this thesis. Confrontation of the examples with the theories of Kuhn, Lakatos, and Laudan, reveals their inadequacy to explain the role of individual commitments. It is concluded that an adequate model of scientific change should exhibit a three-level structure.


Theory Engineering

I have long criticized "Philosophy of X" when you already have a "Science of X". e.g. let X be "language". Why have a philosophy of language if we already have linguistics?
The standard answer, I think, is that linguists make assumptions that need to be questioned, and we need philosophers to do that. But my view is that linguists themselves should be doing this: they should know the real philosophical significance of their work, and science should not be divorced from philosophy. Is there ever a case when it would be better for the linguist to remain ignorant of what their work really says?

One big difference between the practice of science and the practice of philosophy seems to be that philosophers are always inventing new words, reinventing the wheel, creating new ontologies. Most scientists, on the other hand, don't do this enough. While philosophers are always trying to break out of their paradigms, most scientists seem content in "doing good work" within the frameworks that were set up by the pioneers of the field.

In other words, philosophers are always designing conceptual systems (often reinventing the wheel), whereas scientists will usually just use the libraries given to them (compare with Kuhn's idea of "puzzle solving"). For these reasons, software engineering should be a standard part of a philosophy education, and philosophy of science should be a standard part of a science education.

Problems like commensurability, theory change, theory relations, etc. should all yield to a formal approach. In fact, I cannot think of any kind of scientific reasoning that could not be automated. But this is not surprising coming from a "computational reductionist" like me (I don't like the term "Strong AI", because the concept of "self-awareness" is loaded)

Here's a disappointing Google search, except maybe for this.

Here's a more promising one.
gusl: (Default)

Kuhn would say that most of the theorizing you do, whether explaining new phenomena, predicting the results of novel experiments, etc., involves reusing tricks from the examples you learned as a student (i.e. exemplars, "puzzle solutions").

Kitcher interprets Kuhn in an unusual way, and has said that "Science advances our understanding of nature by showing us how to derive descriptions of many phenomena, using the same patterns of derivation again and again."

To what extent are the above statements true, and what is a good example? In the course of doing physics, in what ways do you reuse examples from your experience? Are you just taking shortcuts by reusing previously-derived results (i.e. taking them for granted), or is there something else going on?
gusl: (Default)
Compare "accelerating under constant power"
with "accelerating under constant force" (constant acceleration)

My intuition tells me that they should be the same, but kinetic-energy considerations show that the acceleration is decreasing in the first case (it takes 4 times the energy to get twice as fast).

This would seem to contradict "velocity is relative": if velocity were relative, then the energy needed to get faster by 1m/s would be the same whether you are stationary or already at 1 m/s.
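The kinetic-energy argument can be checked numerically. Under constant power, (1/2) m v^2 = P t, so v grows like sqrt(t) (decreasing acceleration), while constant force gives linear v. A small sketch, with arbitrary unit values for P, F, and m:

```python
# Velocity after time t, starting from rest, under constant power vs. constant
# force (units arbitrary). Constant power: (1/2) m v^2 = P t, so v = sqrt(2Pt/m)
# and the acceleration decreases; constant force: v = F t / m, linear.

import math

def v_constant_power(P=1.0, m=1.0, t_end=4.0, dt=1e-3):
    ke, t = 0.0, 0.0
    while t < t_end - dt / 2:
        ke += P * dt          # constant power: kinetic energy grows linearly
        t += dt
    return math.sqrt(2 * ke / m)

def v_constant_force(F=1.0, m=1.0, t_end=4.0):
    return F * t_end / m      # constant acceleration

print(v_constant_power())     # ~2.83 = sqrt(8)
print(v_constant_force())     # 4.0
# Under constant power, doubling v takes 4x the time, since energy ~ v^2.
```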


My intuition also tells me that I should be able to come up with a similar paradox about predicting the outcome of a 1-dimensional elastic collision, by playing energy off against momentum.

Conservation of energy (taking equal masses, so the m's and the factors of 1/2 cancel):
v1_before^2 + v2_before^2 = v1_after^2 + v2_after^2 (if we fix one side of the equation, then the point (v1,v2) falls on a circle)

Conservation of momentum:
v1_before + v2_before = v1_after + v2_after (if we fix one side of the equation, then (v1,v2) falls on a straight line)

The solutions are where circle and line intersect. I guess there's no paradox after all.

I would like to do a transform to a moving reference frame, to make sure that everything is still alright. Transforming to a fast-moving reference frame will just make the circle bigger; the intersection points and the line get translated diagonally up and to the right. The distance between the intersections remains the same.

Oh I see, physics is fine. Nothing to worry about.
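The circle-and-line construction above can be sketched as code. This is a minimal sketch assuming equal masses: the two intersections are the no-collision state and the velocities-swapped state, and a Galilean boost just shifts both curves without changing the physical outcome.

```python
# Equal-mass 1-D elastic collision solved as "energy circle meets momentum
# line" (the masses cancel). Substituting y = s - x into x^2 + y^2 = q gives
# a quadratic whose two roots are the no-collision state and the swapped state.

import math

def elastic_outcomes(v1, v2):
    s = v1 + v2         # momentum line:  x + y = s
    q = v1**2 + v2**2   # energy circle:  x^2 + y^2 = q
    disc = math.sqrt(s**2 - 2 * (s**2 - q))
    xs = [(s + disc) / 2, (s - disc) / 2]
    return [(x, s - x) for x in xs]

print(elastic_outcomes(3.0, -1.0))   # [(3.0, -1.0), (-1.0, 3.0)]
# Boost by +10 (a moving reference frame): the curves shift diagonally,
# but the physical outcome (velocities swap) is unchanged.
print(elastic_outcomes(13.0, 9.0))   # [(13.0, 9.0), (9.0, 13.0)]
```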


The concept of kinetic energy has always been problematic for me. Given the choice, I'll integrate over force instead.
gusl: (Default)

par·a·digm (plural par·a·digms)


1. typical example: a typical example of something

2. model that forms basis of something: an example that serves as a pattern or model for something, especially one that forms the basis of a methodology or theory

3. set of all forms of word: a set of word forms giving all of the possible inflections of a word

4. relationship of ideas to one another: in the philosophy of science, a generally accepted model of how ideas relate to one another, forming a conceptual framework within which scientific research is carried out

Now I see why Kuhn chose the word "paradigm": because he viewed science as the creation of explanations based on previous exemplars. I wonder if meaning 2 was "coined" by him.

So a paradigm shift is when you throw out the set of exemplars from which you used to model explanations: "No, electrons are not like tiny charged balls". This will often mean that you feel clueless, if you have no exemplars to draw from.

I used to think that Kuhn's notion of paradigm meant something like "way of doing/seeing things", similar to the idea of what matrix you're trapped in, or what colour glasses you're seeing things through (theory-ladenness). But now I see that it's based on the notion of exemplar, seeing that he chose the word "paradigm".
gusl: (Default)
A couple weeks ago I made a big edit on the Wikipedia article titled "Mathematical_models_in_physics", deleting some stuff that seemed to imply that "mathematics is not always faithful to physics" because of the Banach-Tarski Paradox.

I posted my justification for this here

gusl: (Default)
thread at [ profile] tdj's about carbon-dating individual human cells (it's a very clever idea):

In discussing the required experimental precision / error, I proposed:
Here's a causation network:

A: atmospheric levels of C14 at time of cell's birth
B: initial amount of C14 in cell's DNA (i.e. at birth)
C: time passed since cell's birth
D: amount of C14 in the cell's DNA
E: "measured" amount of C14 in the cell's DNA (this is actually an estimation based on a measurement of radiation emitted by the cell)

A
|
B   C
 \ /
  D
  |
  E

In order to infer C, we need to know B and D (this inference step is pretty much dead-on if you have enough C14 atoms (by the law of large numbers)). We estimate D as E (noisy, experimental measurement), and B from A (also noisy, say due to non-uniform C14 levels + random variation in the cell birth process (?); one estimation for each point in history, although this "estimation" may be analytic, not statistical).
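The B, D → C inference step is just the radioactive decay law D = B·e^(−λC), inverted. A small sketch using the standard ~5730-year half-life of C14; the 1% perturbation below is an arbitrary illustrative error, but it shows how sensitive the inferred age is to errors in the amount estimates:

```python
# The decay law links B, C, D:  D = B * exp(-lam * C),  so  C = ln(B/D) / lam.
# Half-life of C14 is ~5730 years. The 1% perturbation below is an arbitrary
# illustrative error in the estimate of B.

import math

LAM = math.log(2) / 5730.0        # decay constant, per year

def age(b_initial, d_now):
    return math.log(b_initial / d_now) / LAM

true_age = 50.0                   # a 50-year-old cell
b = 1.0
d = b * math.exp(-LAM * true_age)
print(age(b, d))                  # ~50.0: recovers the age (no noise)
print(age(b * 1.01, d) - 50.0)    # ~82 years of error from a 1% error in B!
```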

How many carbon atoms are there in DNA?
...discussion continues...

I really love making models like this.

I'm sure I've linked to CMU's Tetrad Project / Causality Lab before. But it never hurts to give them another plug.
gusl: (Default)
[ profile] r6 and I discuss his theory that entropy is subjective

I've never been satisfied with the solutions I've seen to Maxwell's Demon.
I take [ profile] r6's interpretation of entropy as an agent-dependent quantity related to the agent's knowledge, and a measure of what one can do with this knowledge: knowledge is power. According to his theory, an all-knowing being (Laplace's Genius) could, through a demon, decrease the entropy as reckoned by a more ignorant agent. The point seems to be that no one can decrease his/her own entropy.

I wonder what physicists have to say about this.
gusl: (Default)
Kevin T. Kelly - Simplicity, Truth, and the Unending Game of Science

Kevin Kelly claims to explain why Ockham's razor is useful without resorting to "God was kind enough to make the world simple". Also, check out the brilliant cartoons!

What would you say is the traditional explanation for why Ockham's razor works?
gusl: (Default)
There are two kinds of ways of looking at mathematics... the Babylonian tradition and the Greek tradition... Euclid discovered that there was a way in which all the theorems of geometry could be ordered from a set of axioms that were particularly simple... The Babylonian attitude... is that you know all of the various theorems and many of the connections in between, but you have never fully realized that it could all come up from a bunch of axioms... Even in mathematics you can start in different places... In physics we need the Babylonian method, and not the Euclidian or Greek method.
— Richard Feynman

The physicist rightly dreads precise argument, since an argument which is only convincing if precise loses all its force if the assumptions upon which it is based are slightly changed, while an argument which is convincing though imprecise may well be stable under small perturbations of its underlying axioms.
— Jacob Schwartz, "The Pernicious Influence of Mathematics on Science", 1960, reprinted in Kac, Rota, Schwartz, Discrete Thoughts, 1992.

When I say I'm an advocate of formalization, I'm not saying we need to understand all the precise details of what we're arguing for (although this usually is the case in mathematics, at least more so than in physics). What I want to do is to formalize the partial structure that does exist in these vague ideas. Favoring a dynamic approach, I hold that we must accept formalizing theories in small steps, each adding more structure. We will need "stubs", and multiple, parallel stories to slowly evolve into a formal form. The point is that a vague, general idea *can* be formalized up to a point: this is evidenced by the fact that we humans use precise reasoning when talking about such ideas.
Again, the idea is about doing what humans do, formally. If the humans' idea is irremediably vague, we don't hope to do any better, but we do hope to formalize it as far as the ideas are thought out / understood (even if vaguely). To the extent that there exists something systematic (not necessarily "logical", but necessarily normative) in the way we reason and argue, my goal will be to formalize it in a concrete form.

Regarding the normative aspect, the reason we need one is: not all ideas make sense! For fully-formalized mathematics (i.e. vagueness-free mathematics), it's easy to come up with a normative criterion: a mathematical idea or argument is fully-formalized if it corresponds to a fully-formal definition or a fully-formal proof. One of the challenges of this broader approach is to define what it means for an idea to "make sense": what does it attempt to do? What is its relation with related concepts?

The "natural" medium of expression for these ideas is English. The idea is to connect English words to concepts in the formal knowledge system. We say an English sentence makes sense in a given context iff it addresses the goal / there is sound reasoning behind it (not all criteria may be applicable).
gusl: (Default)
Last night I introduced myself to my upstairs neighbour at 12:15 AM, to ask him in a friendly way to turn his music down (especially the bass).

I like thinking about acoustics, especially when I want to insulate myself from noise.

So I've been thinking about noise cancellation. The idea is that you copy the incoming noise (possible, since the signal can travel faster than sound) and reproduce it out of phase by half a wavelength (or, to be more precise, in phase but with the amplitude inverted). So you're putting more energy in. But by the principle of energy conservation, either the sound gets louder in some places or the waves will all get turned into heat.
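A quick numerical sketch of the superposition: pointwise, the inverted copy cancels the wave exactly, yet each source separately carries energy, which is exactly the bookkeeping puzzle above:

```python
# Superpose a sampled sine wave with its amplitude-inverted copy: the sum is
# zero everywhere, but each source alone carries nonzero (mean-square) energy.

import math

N = 1000
wave = [math.sin(2 * math.pi * i / N) for i in range(N)]
anti = [-w for w in wave]                  # the "anti-noise" signal

residual = max(abs(w + a) for w, a in zip(wave, anti))
energy_per_source = sum(w * w for w in wave) / N   # mean square of one source

print(residual)           # 0.0: perfect cancellation at every sample
print(energy_per_source)  # ~0.5: yet each source is putting energy in
```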

Btw (1), I have an excuse to leave my laptop on: saving gas! When I close my laptop, it stops emitting light outside, so all of its energy expenditure becomes heat. Since everything is regulated by a thermostat, my laptop saves a house radiator from getting warmer (though gas energy is probably cheaper).

Btw (2), I've once used a similar thought to prove that

amplitude is additive, energy is conserved in a closed system, energy is a function of amplitude |= energy is proportional to amplitude^2

(the argument was about a light interference pattern)

though perhaps I should refrain from using "|=" until I have a formal proof, or at least formal models for this stuff.
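That argument can be checked numerically (my own sketch of it, not a formal proof): suppose E = k·A^p, let two equal coherent sources interfere, and demand that the energy averaged over the interference pattern equal the sum of the sources' energies. Only p = 2 works:

```python
# If energy = k * amplitude^p and amplitudes add, conservation over an
# interference pattern of two equal coherent sources picks out p = 2:
# the pattern-averaged energy must equal the two sources' combined energy.

import math

def pattern_averaged_energy(p, n=100000):
    total = 0.0
    for i in range(n):
        phi = 2 * math.pi * i / n          # relative phase at this point
        amp = abs(2 * math.cos(phi / 2))   # amplitudes add (unit sources)
        total += amp ** p
    return total / n                       # conservation demands: == 2

print(pattern_averaged_energy(2))  # ~2.0   -> energy ~ amplitude^2 works
print(pattern_averaged_energy(1))  # ~1.27  -> energy ~ amplitude fails
```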

My physics professors were impressed with my "proof", but I just thought (and still think) that this should be normal science. Unfortunately, such a logical approach is missing from science education (and probably research too). Taking my physics classes as an example:
* premises are almost never explicit
* the structure of arguments is informal (even *more* informal than in mathematics classes)

This lack of formality doesn't bother them, and in fact in most cases it causes no problems.

But Feynman tells the story of the S-shaped sprinkler, where people had to resort to experiment in order to resolve the question. But it's the sort of question that should be decided by theory.
Even though there might be fluid effects that are not covered by the theory, the question was meant as a brainteaser: I believe Feynman was asking about the "ideal" sprinkler.

