Johan van Benthem has something to say about my ideas!

also, this reference:
J. van Benthem, 1982, 'The Logical Study of Science', Synthese 51, 431-472.

is not to be found in the Netherlands!

from http://staff.science.uva.nl/~johan/298-2003.html


Week 5: Logic and theory structure

Last week we looked at explanation as a central inferential process in the
philosophy of science - and in ordinary reasoning. We pursued this non-
standard inference into intensional logics of conditionals and preferential
non-monotonic logics in AI and linguistics. Now we do the data structures.

Premise structure: from micro to macro

A single inference from A to B is just one micro-link, but we usually have
larger conglomerates of premises around. Mid-size level: everything we commit
to in a conversation or ongoing discussion. Macro-level: some full-fledged
theory that we build and use over and over again.

Theories in logic

Historical inspiration and paradigmatic cases are mostly mathematical
examples (geometry, number theory, algebra, set theory).

The bleakest view: a theory is a set of sentences T, or just a set of models M.
Inverse relationship: more sentences, fewer models. Also, for sets of
formulas A, B: MOD(A) ∩ MOD(B) = MOD(A ∪ B).
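To see the law in action, here is a minimal Python sketch (my illustration,
not from the notes): sentences are represented as predicates on valuations
over a small atom set, and MOD computes model classes by brute force.

```python
# A minimal sketch (not from the notes) of MOD(A) ∩ MOD(B) = MOD(A ∪ B),
# with valuations over a small finite atom set.
from itertools import product

ATOMS = ["p", "q", "r"]

def valuations():
    """All truth assignments over ATOMS, as dicts."""
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def mod(sentences):
    """MOD(S): the valuations satisfying every sentence in S. Sentences
    are Python predicates on a valuation, standing in for real formulas."""
    return {frozenset(a for a, v in w.items() if v)
            for w in valuations()
            if all(s(w) for s in sentences)}

A = {lambda w: w["p"]}                  # the theory {p}
B = {lambda w: w["q"] or not w["r"]}    # the theory {q or not r}

# More sentences, fewer models; and the law above holds:
assert mod(A | B) == mod(A) & mod(B)
print(sorted(map(sorted, mod(A | B))))  # [['p'], ['p', 'q'], ['p', 'q', 'r']]
```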

The syntactic view: organization into axioms and theorems via proofs.
Perhaps also different levels: compare the levels distinguished earlier for
explanation (core theory L, facts F, auxiliary hypotheses), or Quine's
view of core versus periphery, more or less immune to revision.

The semantic view: 'one model theories' trying to capture some intended
structure (example: Peano Arithmetic) versus 'many model theories'
with the more models the merrier (example: group theory).

Analogies for all of this in science: one model theories in cosmology
(zoom in on the actual development of the one and only Cosmos),
many model theories like Newtonian mechanics (the more things
turn out to be 'Newtonian systems' the better).

Polish logicians in the 1930s: calculus of theories, e.g., laws about
their unions and intersections. Mostly generalized Boolean algebra.

Semantic theory structure: a modern case

Representative modern work in formal philosophy of science: Theo Kuipers,
2000, From Instrumentalism to Constructive Realism, Kluwer Academic
Publishers, Synthese Library, Vol. 287, and 2001, Structures in Science.
See also his new textbook.

Example: even with theories viewed just as sets of models, significant
notions can be formalized, such as Popper's verisimilitude. We gave the
Miller-Kuipers explanation of 'theory B is at least as close to T as A is':

(a) T ∩ A ⊆ B: B gets everything right which A did
(b) B \ T ⊆ A: B gets nothing wrong which A did not

In terms of theories as (classes of models for) single formulas:
(a) T & A |= B and (b) B |= A ∨ T

(Beware: different, though equivalent formulation again for theories viewed
as sets of formulas: my apologies for the symbol soup on the blackboard.)
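As a quick sanity check on clauses (a) and (b), here is a toy Python
sketch (my construction) with theories as finite sets of models, here
plain integers:

```python
# Miller-Kuipers closeness on finite model sets (a toy, my construction):
# B is at least as close to T as A iff (T ∩ A) ⊆ B and (B \ T) ⊆ A.
def at_least_as_close(B, A, T):
    """Does B get everything right that A did, and nothing wrong
    that A did not?"""
    return (T & A) <= B and (B - T) <= A

T = {1, 2, 3}   # models of the 'true' theory
A = {2, 4, 5}   # a first approximation: one hit (2), two misses (4, 5)
B = {2, 3, 4}   # keeps A's hit, adds the hit 3, drops the miss 5

print(at_least_as_close(B, A, T))   # True
print(at_least_as_close(A, B, T))   # False: A lacks B's hit 3
```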

This is actually an abstract notion of closeness: relation to conditionals?

A more refined view of theories: an inner set of accepted models (representing
the situations known to fall under the theory), and an outer set of all models
satisfying the laws so far. (This is the 'many-model view', with a bit of
restrictiveness on top.) Updates now come in two kinds: (a) new models
representing systems falling under the theory, (b) new laws. Formulations in
partial logic, or at a higher set-theoretic level again in classical logic
with eliminative update. See this Note for more elaborate discussion and
explanation.
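One possible reading of this refined picture in code (a sketch of mine;
the representation of models as plain values and laws as predicates is an
assumption):

```python
# An inner/outer representation of a theory, with the two update kinds
# (a sketch; models are plain values, laws are predicates on models).
class Theory:
    def __init__(self, inner, outer):
        self.inner = set(inner)   # models known to fall under the theory
        self.outer = set(outer)   # all models satisfying the laws so far
        assert self.inner <= self.outer

    def add_system(self, model):
        """(a) A new system is accepted as falling under the theory."""
        self.inner.add(model)
        self.outer.add(model)     # presumed to satisfy the laws as well

    def add_law(self, law):
        """(b) A new law eliminates outer models violating it."""
        self.outer = {m for m in self.outer if law(m)}
        assert self.inner <= self.outer, "law conflicts with accepted systems"

t = Theory(inner={2}, outer={1, 2, 3, 4})
t.add_law(lambda m: m % 2 == 0)    # eliminative update by a new law
t.add_system(6)                    # a newly accepted system
print(t.inner, t.outer)            # {2, 6} {2, 4, 6}
```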

So far, this was all about weaker or stronger propositions in a theory.
But there is another logical dimension: the choice of a language,
which fixes the conceptual framework that the theory is proposing.

The importance of vocabulary

Empirical theories have a mixture of so-called observational terms (directly
measurable quantities) and theoretical terms, representing theoretical
entities, functions, and predicates postulated by the theory to give a compact
description of the phenomena. Distinction: T(L0, Lt). One use: with
well-chosen theoretical terms and axioms for them, plus bridge principles
relating them to observational predicates, one can often give a finite
axiomatization for an infinite set of empirical observations. (Incidentally,
I personally think exactly the same distinction is at work in ordinary
language: we immediately formulate what we observe with insidious
theoretical predicates.)

The distinction was often attacked in critiques of logical empiricism. It is
often hard to draw precisely, but it is really natural, and keeps returning in
the literature. E.g., model-theoretic investigations on the topic flourished
in the 1970s, prompted by Sneed's analysis of scientific theories (the classic
book is "The Logical Structure of Mathematical Physics"), and the distinction
also returns in computer science (same formal structure, though different
motivations).

Example: mass balance. Objects and positions; observational predicates:
locations; theoretical predicate: mass; a law for equilibrium.
Example: mechanical system. Observational predicates: position,
velocity, acceleration; theoretical: mass, force.
Example: persons with observable behavior. Theoretical predicates:
dispositions, desires.
Of course, one can argue about where to put which predicate, but it is also
possible to think of the distinction as a relative one: what do we
consider 'given', and what is 'postulated'?

One shows, e.g., that a certain physical system admits of mechanical
analysis by postulating force and mass functions which fit with the
observed behavior, and allow us to predict further behavior. Or we
observe the behavior of a real process and postulate a dynamical
system with internal predicates that might account for that behavior.
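For the mass balance case this can be made concrete. Here is a toy Python
sketch (my construction; the positions and the brute-force search over
small integer masses are assumptions) that 'postulates' a mass function
fitting an observed equilibrium:

```python
# Observed: three objects at these beam positions, and the beam balances.
# Postulate: positive masses m_i satisfying the equilibrium law
# sum(m_i * x_i) == 0 (torques about the pivot cancel).
from itertools import product

positions = [-3, -1, 2]

def fitting_masses(positions, candidates=range(1, 10)):
    """Brute-force search for small integer masses obeying the law."""
    for ms in product(candidates, repeat=len(positions)):
        if sum(m * x for m, x in zip(ms, positions)) == 0:
            yield ms

print(next(fitting_masses(positions)))   # (1, 1, 2): -3 - 1 + 4 = 0
```

Note that the fit is not unique: (2, 2, 4) balances the beam just as well.
That non-uniqueness is exactly the situation taken up under A below.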

Typical logical issue: what is the empirical content of a theory T(L0, Lt)?
Candidate 1: T|L0, the set of all L0-sentences that follow from T.
Candidate 2: MOD(T)|L0, the class of all models of T stripped
of their (interpretations for the) theoretical predicates.
The second class is defined by the 'Ramsey sentence' ∃Lt · T(L0, Lt),
an existential second-order quantification over the theoretical predicates,
saying that there exists a way of introducing them on an empirical model
which makes the total theory true. 'The observable phenomena admit
of an explanation by the theory', as was stated informally just before.

Some delicate logical questions. MOD(T)|L0 is always included in
MOD(T|L0), but the converse does not always hold (see homework).
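To get a concrete feel for the two candidates (without touching the homework
case), here is a finite propositional sketch of mine, with a made-up bridge
principle t <-> o1 and law t -> o2. In this finite propositional setting the
two candidates in fact coincide; failures of the converse inclusion are a
genuinely first-order phenomenon.

```python
# Two candidates for empirical content, in propositional miniature.
from itertools import product, combinations

O = ["o1", "o2"]          # observational atoms
ATOMS = O + ["t"]         # plus one theoretical atom

def worlds(atoms):
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def T(w):   # bridge principle t <-> o1, plus the law t -> o2
    return (w["t"] == w["o1"]) and ((not w["t"]) or w["o2"])

def reduct(w):
    return frozenset(a for a in O if w[a])

# Candidate 2: MOD(T)|L0, reducts of the full models of T.
cand2 = {reduct(w) for w in worlds(ATOMS) if T(w)}

# Candidate 1: MOD(T|L0). Represent an O-sentence by its set of
# O-models; T entails it iff it contains every reduct in cand2.
O_worlds = {reduct(w) for w in worlds(O)}
o_sentences = [set(s) for r in range(len(O_worlds) + 1)
               for s in combinations(O_worlds, r)]
cand1 = {v for v in O_worlds
         if all(v in S for S in o_sentences if cand2 <= S)}

assert cand2 <= cand1    # the inclusion noted above
print(cand2 == cand1)    # True in this finite propositional toy
```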

The following two topics from the same milieu were not covered in class.

A Another basic aspect of vocabulary: definability of vocabulary.

Note that a given empirical situation, i.e., a model for the observational
language L0, can often be brought under the theory T(L0, Lt) in different
ways. There need not be one unique force function turning a real pool
table into a model for Newtonian mechanics (though we are usually
content with finding just one). But what if there is indeed a unique way
of adding theoretical predicates to the observational base? As it happens,
logic has something more specific to say in this special case.

Logic has many results on choice of axioms and derivability, but very few
on choice of language and definability. But here is one major exception:

(a) Predicate Q is implicitly definable in first-order theory T(P, Q):
if you have models (D, P, Q), (D, P, Q') for T, then Q = Q':
giving the P-structure fixes the Q-structure.
(b) Predicate Q is explicitly definable in first-order theory T(P, Q):
there is a formula D(P) such that T implies Q <-> D(P).
Beth's Theorem (1953) says (a) is equivalent with (b). (a) => (b) is surprising!
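A miniature of the theorem can be run in propositional logic (a sketch of
mine; here implicit definability is a truth-table check and the explicit
definition can simply be read off, so the real surprise is the first-order
case):

```python
# Propositional Beth in miniature: if theory T(P, q) fixes q on each
# P-valuation, read off an explicit definition D(P) of q as the
# disjunction of the P-rows on which T forces q to be true.
from itertools import product

P = ["p1", "p2"]

def T(w):   # a toy theory: q <-> (p1 and not p2)
    return w["q"] == (w["p1"] and not w["p2"])

def explicit_definition(T):
    forced = {}
    for bits in product([False, True], repeat=len(P)):
        base = dict(zip(P, bits))
        qvals = {q for q in (False, True) if T({**base, "q": q})}
        if len(qvals) == 2:
            return None   # two T-models agree on P, differ on q
        if qvals:
            forced[frozenset(a for a in P if base[a])] = qvals.pop()
    rows = [row for row, v in forced.items() if v]
    return " or ".join(
        "(" + " and ".join(a if a in row else f"not {a}" for a in P) + ")"
        for row in rows) or "False"

print(explicit_definition(T))   # (p1 and not p2)
```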

Connections with determinism (Montague's article 'Deterministic Theories').
Also with modern notions of supervenience in discussions of reductionism:
'if the lower-level predicates are the same, then so are the higher-level
ones'. Supervenience is often offered as a middle ground between reduction
and independence, but Beth's Theorem says there is no such middle ground,
at least for first-order theories: supervenience amounts to plain
reductionism, since L-high will be explicitly definable in terms of L-low.
(This also holds in some other logics.)

B The web of theories. Scientific theories form a network, and one can also
study their interrelations. Classifications from the 1970s: extension,
conservative extension, relative interpretation/reducibility, approximation.
Related to the model theory of such relations between mathematical theories,
but more refined because of the vocabulary-level distinction. The calculus
of theories revived: the modular structure of putting scientific theories
together.

Many samples of this sort of logic/philosophy-of-science interface:
J. van Benthem, 1982, 'The Logical Study of Science', Synthese 51, 431-472.

Discussion around 1980: the Suppes-Sneed 'structuralist view' of theories
tries to do away with language altogether, viewing scientific theories as
set-theoretic structures. If you remove the language aspect, there is little
scope for logical analysis as normally understood. This is one reason why
even formal philosophers of science turned away from logic. But through the
1980s a countercurrent emerged: growing interest in computational views of
theories and information growth. And there is no computation without
representation: hence code, language, and logic enter after all.

Computer science influences

Formal theories returned as 'abstract data types', and, e.g., the calculus
of theories as module algebra. The vocabulary distinction reappears between
visible user predicates and the designer's hidden predicates, in a more
sophisticated form than in logic and philosophy of science, with an interplay
of operations and hiding. E.g., some laws of module algebra, such as
(T1 + T2)|L0 = T1|L0 + T2|L0, hinge on the interpolation theorem of
classical logic (Craig 1957), a generalization of Beth's Theorem.
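The interpolation property itself has a propositional miniature (a sketch of
mine; formulas are predicates on valuations): existentially 'forgetting' the
atoms of A that B does not share yields an interpolant I in the common
vocabulary with A |= I and I |= B.

```python
# Craig interpolation by forgetting, in propositional miniature.
from itertools import product

ATOMS = ["p", "q", "r"]

def all_worlds():
    return [dict(zip(ATOMS, bits))
            for bits in product([False, True], repeat=len(ATOMS))]

def entails(f, g):
    return all(g(w) for w in all_worlds() if f(w))

def forget(f, hidden):
    """Existential quantification over the hidden atoms: true at w iff
    some way of filling in 'hidden' makes f true."""
    def interpolant(w):
        return any(f({**w, **dict(zip(hidden, bits))})
                   for bits in product([False, True], repeat=len(hidden)))
    return interpolant

A = lambda w: w["p"] and w["q"]   # vocabulary {p, q}
B = lambda w: w["q"] or w["r"]    # vocabulary {q, r}
assert entails(A, B)

I = forget(A, hidden=["p"])       # an interpolant over the shared atom q
assert entails(A, I) and entails(I, B)
print(all(I(w) == w["q"] for w in all_worlds()))   # here I is just 'q'
```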

Further interest in theory structure: knowledge representation in AI has
revived many themes from the earlier philosophical tradition. (You can
find Carnap and other logical positivists making a comeback through the
1980s in the works of prominent AI researchers, with or without credit.)

Of course, this changes the agenda. I am not claiming that CS/AI have
exactly the same concerns as earlier philosophers. But this is a good thing:
really creative ideas usually have complex intellectual histories, often with
surprising twists in questions and applications beyond the original goals.
On the other hand, there is enough family resemblance that modern
philosophers of science can (and do) publish on topics in CS and AI.

But are there any significant formalized theories that fit these frameworks?
Often there are fewer than you would think, as authors repeat the same
examples, or cite an unread mythical past like "Principia Mathematica".

Examples in logic and foundations of mathematics: very many by now,
both first-wave (Peano, Whitehead, Russell) and second-wave (e.g.,
Bishop's constructive mathematics), and from many automated deduction
projects in mathematics like AUTOMATH and its modern successors.
Much less in other areas, but people do it when they have an axe to grind.
Check, e.g., Hartry Field's book "Science without Numbers", or various
formalizations of physical theories built into robot systems.

Stanford examples: the formalization projects in John McCarthy's project
Logical AI (see e.g. Aarati Parmar's upcoming dissertation). But also in
the business school: Pólos and Hannan's work on the formalization of
sociological theories, using non-monotonic default logics as their
inference engines (which also gives a link with last week's topic).
