gusl: (Default)
While arguing that CS is more fundamental than Physics, Daniel Lemire gives it the status of a "natural science", since it supposedly makes fundamental predictions about the universe. He claims that both (a) Digital Physics / Strong Church-Turing Thesis (SCTT) and (b) the possibility of AI are falsifiable predictions ("Yet, these predictions remain unfalsified to this day.")

I made a skeptical argument for each of the above. Below I'm quoting my objection to (a).

In our usage here, "Strong Church-Turing Thesis" refers to the idea that the computation that can occur in our universe is asymptotically equivalent to Turing Machine computation, as far as space- and time-complexity are concerned.

From the comments to "Computer Science is the most fundamental natural science" [edited]:

@Daniel
I was just thinking: is the Strong Church-Turing Thesis (SCTT) really falsifiable?
Any such falsification must involve a proof that nature solves a certain problem (i.e. via a "physical algorithm") asymptotically faster* than a Turing Machine. To my knowledge, there is no accepted way to prove complexity results empirically: remember that you're trying to draw conclusions about what happens to the computation time (and/or space) as n goes to infinity! To extrapolate to infinity, you need a theory (a.k.a. computational model) to tell you how your algorithm scales (in this case, a physical theory). With a computational model in hand, you are back in CS Theory land and can compare things asymptotically, but it's hard to see how you could validate such a physical model empirically in the first place.
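To make the extrapolation problem concrete, here is a minimal sketch (my own toy example, not part of the original comment): two hypothetical cost functions that stay within a small constant factor of each other over any range of n you could plausibly measure, yet belong to different asymptotic classes.

import math

# Toy example: two hypothetical cost functions for the same problem.
def cost_a(n):          # say, what the "physical algorithm" really costs
    return n * math.log2(n)

def cost_b(n):          # say, what a Turing Machine simulation costs
    return n ** 1.10

# Over any measurable range, the ratio stays within a modest constant factor...
for n in [10**3, 10**6, 10**9, 10**12]:
    print(n, cost_a(n) / cost_b(n))

# ...yet asymptotically cost_b dominates cost_a. Only a theory of how the
# computation scales (a computational model) licenses the extrapolation.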

OTOH, it seems like we've accepted that the asymptotic behavior of PCs is modeled by Turing Machines... although we've seen in practice that this is an idealization, and PC performance is in fact slightly worse than what would be predicted by Random-Access Machines, as we scale up.


* - or the opposite: nature cannot compute as fast as some algorithm in a Turing Machine!


If one can prove that quantum computing can scale and that QP!=P, that would falsify SCTT (at least the classical SCTT), so it seems falsifiable after all... to the extent that such a thing can be proven.
gusl: (Default)
I've been thinking a lot about self-control. What is the self? Where is consciousness?

I'm asking these questions partly because I've been planning to read an ACT-R paper titled "The Minimal Control Principle". But really, it's just one of those nagging questions that never go away.

People obviously have a folk theory of what they do and don't have control over, and also of what *others* do and don't have control over. In fact, *every* moral judgement we make of our own or others' actions is justified by this basic assumption that the person is an agent who is able to make choices, and to control him/herself at that level.

What is a little bit puzzling about this is that this ability to control oneself is a modal notion. How can one prove that one could have acted otherwise? We may be able to beat some predictability tests, but how many levels of determinism can you refute before the laws of physics (or neuroscience!) make you predictable?

This interesting paper (via MR) about willpower suggests that, like patience, willpower is a depletable resource. Does this mean that, after enough temptation, people cannot be expected to control themselves?

This ties back to video game design: what level of control should you give to the player? If the goal is to make the player feel immersed as seamlessly as possible, we should try to minimize the size of the (inevitable) cut between the real and the virtual world (min-cut <=> max-flow!). A game like Spore is interesting because it gives you different levels of description, but it would be more interesting still (if maybe very annoying) if the game automated away parts of gameplay *just* as the player demonstrated that they had reached expertise: that way, you'd always be growing to higher and higher levels (being an expert doesn't make an activity completely effortless).

Much of what we do in life and work is automatic. This means that our control is restricted to the higher levels. We can only be held accountable for our automatic actions insofar as managers are accountable for the behavior of their employees.

---

ACT-R folks, when does conscious control correspond to the contents of the buffers? Should we define consciousness as being a meta-level of attention, i.e. attention to one's own cognition?

---

Interesting talk on determinism:
David Steinsaltz - Essay on chance and determinism
The philosopher Daniel Dennett has compared the free decision-making self in psychology to the center-of-gravity in physics. Both are fictions, but they are useful fictions, because they tell you where to apply force to achieve desired effects. Recognizing that they are fictions does not force us to abandon them, but it does suggest to us that we may need to put them aside when the problem at hand depends on the fine structure.
gusl: (Default)
The other day, I wrote the following on my PDA:

Why I am no longer a mathematician:
· Tired of working hard just to be clever. Life is short. The real world is more interesting.
· Phenomenology, introspection drove me towards cogsci.
· it's more productive to do meta work: computers will eventually do math much more cheaply than me. (see Zeilberger)


----


Here's something of an academic autobiography, of my time at Bucknell. It says nothing about my ideas, or what I read. I tell the story of how undergraduate curricula shaped my choice of majors:


Mathematics

The last time I did serious mathematical research was my junior year of college... and even that was very much empirically-aided: it was about counting the number of roots of polynomials over finite fields... my discoveries were made with the aid of a C++ compiler.
Since then, I have proven things about cute games (Nim, thanks to [livejournal.com profile] agnosticessence), toy theorems (prove that number_of_divisors_of(n) is always even except when n is a perfect square), and created neat correspondences (e.g. if you represent natural numbers as multisets of their prime factors, GCD is intersection and LCM is union), but nothing that could count as serious mathematics.
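A minimal sketch of that last correspondence (my own illustration; Counter plays the role of the multiset of prime factors, and the helper names are made up):

from collections import Counter

def prime_factors(n):
    """Multiset of prime factors of n, e.g. 12 -> Counter({2: 2, 3: 1})."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def from_factors(factors):
    prod = 1
    for p, k in factors.items():
        prod *= p ** k
    return prod

a, b = 84, 90
gcd = from_factors(prime_factors(a) & prime_factors(b))  # multiset intersection
lcm = from_factors(prime_factors(a) | prime_factors(b))  # multiset union
print(gcd, lcm)  # 6 1260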

Already by my senior year, in topology class, I no longer saw the point of doing pure math. The only way I could interpret infinite products of topological spaces was as a game of symbols: it had no real meaning to me.

Not only was I starting to get a formalistic view of mathematics, but I was increasingly bothered by the normal approach to mathematics, the standard mathematical language and the paper medium. This was made much worse by the fact that I had grown intolerant of confusing notation/language and informal proofs. Thankfully, I didn't stay in mathematics. Advanced mathematics requires a lot of effort and things are not always beautiful. The real world has many more interesting things to understand. During this time, I considered going for a PhD in Applied Math, but became disappointed with that idea too. It was still too much like other math.

By my senior year, mathematics was no longer fun. It still wasn't "hard", but I had no motivation left. I had become enthusiastic about statistical modelling... even if I got labelled a Bayesian by our frequentist department (I think it was meant as a compliment). And it was my interest in AI, by far, that dominated my intellect.


Physics

The reason I had liked mathematics before that was that it had been, for me, easy and fun. And its formal structures were much more satisfactory and easier for me to understand than the things people did in physics, my original major. My physics teachers never seemed to explain things clearly, and never gave me good logical reasons for why they were doing what they were doing. It was often unclear which model and assumptions were being used. And even after pressing them, I still had foundational questions that went unanswered. Quantum Mechanics class was extremely frustrating: granted, "nobody understands quantum mechanics", but the theory still exists for a reason; they didn't give us a chance to try to make sense of the experimental results that motivated it, or convince me that the theory was the best we could do.

Although I started out with bad grades in physics, they were steadily improving, and my professors saw promise in me and wanted me to stay. Despite liking and doing well in my last class, on Thermodynamics & Statistical Mechanics, I decided that I was going to focus on math: I was just too different from the physicists, and talking to them took too much effort. Now I want Patrick Suppes to be my next physics teacher. Among the physicists, I was definitely a philosopher.


Computer Science

I had to overcome my initial prejudice against CS. I only started it because of my father's argument that it would be a good idea if I wanted to make money. As a freshman, I had thought that it was just going to be about programming techniques and similar boring-sounding things. The sort of person who did CS at my school was not far from the "typical management major": financially ambitious, if not particularly mathematically talented. When I joined the group, I learned that there were exceptions: there were also "computer geeks", as well as the former type. I was never a "computer geek". Programming geek, yes, for a long time... but one who couldn't get Linux installed, and who would call a technician to troubleshoot my network. Among them, I was solidly seen as a math geek. It bothered me that their AI class assumed neither knowledge of basic probability nor of basic logic, and that the computer graphics class couldn't do a simple linear projection.
But I really liked ProgLan. Also, designing algorithms was fun; algorithmic reductions even more so. And I learned some useful programming techniques.


Philosophy

I've always been a philosopher. But I did not like the prospect of reading shelves full of philosophy books, learning the ins and outs of useless arguments (for instance, about metaphysics), and rereading & struggling to understand what exactly writers mean. Philosophy is great for breaking people out of their epistemological vices: questioning their prejudices, intuitions, etc., but some things are just overanalyzed. I think this is because philosophers talk past each other. Case in point: the Monty Hall problem. Why are they still writing papers about it?? I think that philosophers should benefit the most from computational aids to reasoning, argumentation maps and such. At least, they already know logic.


Psychology

It was fascinating. But it wasn't rigorous enough for me. If they had offered cognitive science, I probably would have taken lots of it.


Economics & Linguistics

I also flirted with economics, although never for credit. It was interesting, but they were too slow on the math. Like CS, only worse. I also took a class in linguistics (the only one offered!), but as I wasn't about to start doing NLP, it remained a mere curiosity.
gusl: (Default)
me- aren't all these philosophical questions, questions about AI? (unifying epistemic logic with probability, coherentism, etc)

somebody- what about metaphysics?

me- irrelevant things are, well, irrelevant.

Fitelson- early Hempel meets Herbert Simon!

me- ...uhmmm.. ok...

I haven't read any Hempel.
gusl: (Default)
In the style of Larry Gonick...

Action Philosophers, via http://logicandlanguage.net/

Plato

Augustine

Let me dedicate this to [livejournal.com profile] darkjewelz, my only remaining friend from my time in England, who told me today that she had a LiveJournal.
gusl: (Default)
In the philosophy of mathematics...

I am a formalist...

but let me tell you a story.


The characters are:

* Strawman Platonist
* We(1) is a mathematician who believes that if mathematical objects exist at all, then infinite-information mathematical objects exist too.
* We(2) takes a computer scientist's view. He's a total reductionist and, in particular, believes in digital physics.


We(2) subscribes to an ontology of mathematics which says that only recursive objects exist:
* Infinite things do not really exist. We can talk about them as what would happen in the limit.
e.g. The set of natural numbers does not exist. What does exist is an algorithm which generates "them".

We(2) believes that the reason We(1) talks about infinite things is that it is simpler to talk about them as if they existed. The set N is a convenient fiction. Talking about algorithms just makes life more complicated. We(2) says that if We(1) were right, then We(1)'s access to such objects would be mediated through an oracle. Like most normal mathematicians, We(1) normally talks about mathematics as if oracles existed.

We(2) believes that oracles don't exist.

We(2) believes that the set N is an idealization of the algorithm which produces 0, S0, SS0, SSS0, etc. All sets are lists, and thus well-ordered.
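A minimal sketch of this reading of N (my own illustration): a non-terminating generator plays the role of the "set", and we only ever observe finite prefixes of its output.

from itertools import islice

def naturals():
    """The non-terminating algorithm standing in for the 'set' N: 0, S0, SS0, ..."""
    n = "0"
    while True:
        yield n
        n = "S" + n

# We never hold N as a completed object, only finite prefixes of the enumeration.
print(list(islice(naturals(), 5)))   # ['0', 'S0', 'SS0', 'SSS0', 'SSSS0']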


WE(2)'s ONTOLOGY:

* Recursive objects. The algorithm producing it terminates.
* r.e. objects, which are the limit of what the algorithm would produce.
* co-r.e. objects, which are the same, since in the limit, a set can be defined by its elements as well as by its non-elements.

While recursive objects really exist (or can be directly perceived by mathematical cognition), r.e. and co-r.e. objects live in the Platonic realm.

We(2) has read his Kevin Kelly and believes that the channel through which we communicate with the Platonic realm is restricted by computability. (Kelly talks about the scientific process as a channel between the scientist and nature... here we talk about the channel between mathematician and *** THE PLATONIC REALM ***)

BUT, since We(2) thinks it doesn't make sense to talk about unknowable things (especially in mathematics!), then the only mathematical objects that it makes sense to say exist are the ones which are somehow knowable (i.e. through this channel)!



In We(2)'s ontology, is pi recursive or r.e.?

In a sense, pi is r.e.... pi is an algorithm which successively approximates the Platonic pi.
However, proofs about pi are recursive. (proofs are always recursive!)
So, we could say that proofs about pi only make reference to Pi.
But do proofs ever do anything other than refer? Sure, when our objects are computable: e.g. a proof that 2 + 3 = 5 in PA uses the senses of 2, 3 and 5. (at least, a formalist would agree that "SS0" is the sense of 2)

So We(2) makes a distinction between sense and reference for all mathematical objects. While some references can be reduced to their senses (e.g. any finite number or any finite set), some references have senses that are "transcendental" and can only be accessed through an oracle (e.g. any infinite set)

While pi's reference is recursive, pi itself (i.e. its sense) is r.e.
All mathematical objects have recursive reference (at least I've never seen a non-recursive reference printed anywhere ;-) ). Just like you're sending me an email this very moment... I've never been contradicted on this either! (stolen from [livejournal.com profile] fare)
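To make the "pi as an algorithm" picture concrete, a minimal sketch (my own example; the Leibniz series is just one arbitrary choice of approximating algorithm):

from itertools import islice

def pi_approximations():
    """Successive partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - ...)."""
    total, sign, denom = 0.0, 1.0, 1.0
    while True:
        total += sign * 4.0 / denom
        yield total
        sign, denom = -sign, denom + 2.0

# The finite program above is pi's (recursive) reference; its sense is the
# limit of this non-terminating stream of approximations.
for approx in islice(pi_approximations(), 5):
    print(approx)   # 4.0, 2.666..., 3.466..., 2.895..., 3.339... converging to pi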


A little dialogue

Gustavo: Isn't it paradoxical that R has finite information (after all, it's easy to check whether something is in R), and yet some elements in it have infinite information? All objects used in mathematical discourse, even Platonic discourse, have finite information.

Henrik: No, we can reference R in a finite way, but R itself has infinite information.

Gustavo: and, also, sometimes it may be very hard to check whether a number is in R... especially if the input is coded in terms of a difficult question (like the Riemann hypothesis)

Henrik: but in principle, even the Riemann hypothesis could be decidable axiomatically. O buraco é mais embaixo. (The real problem is worse than you think)


Gustavo is struggling with his intuitions about which mathematical statements are meaningful, and therefore should be provable. Gustavo is not comfortable with mathematics being semi-empirical. He wants an algorithmic way of coming up with new axioms.

Tentative definition: A statement is meaningful iff it is in principle verifiable or falsifiable. It would be nice to have a logic of which statements are meaningful:

"A /\ B" is meaningful iff "A" is meaningful and "B" is meaningful
"A \/ B" is meaningful iff "A" is meaningful and "B" is meaningful
"A -> B" is meaningful iff "A" is meaningful and "B" is meaningful
"~A" is meaningful iff "A" is meaningful
"forall x in X, phi(x)" is meaningful iff it is meaningful for each "phi(x)" for each x in X.

(is it not an accident that I had to bring the X outside of the quotes in the clause for the universal, unlike in all good definitions of semantics?)
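A minimal sketch of this tentative definition (my own illustration; formulas are nested tuples, and which atoms count as meaningful is a made-up stand-in). Note that, as in the remark above, the quantifier clause has to take the domain X as an object outside the quoted formula:

MEANINGFUL_ATOMS = {"A", "B"}      # hypothetical: the atoms we deem meaningful

def meaningful(phi):
    if isinstance(phi, str):                          # atomic statement
        return phi in MEANINGFUL_ATOMS
    op = phi[0]
    if op in ("and", "or", "implies"):
        return meaningful(phi[1]) and meaningful(phi[2])
    if op == "not":
        return meaningful(phi[1])
    if op == "forall":                                # ("forall", X, body)
        _, xs, body = phi
        # only checkable when X is finite -- exactly the worry about quantifiers below
        return all(meaningful(body(x)) for x in xs)
    raise ValueError(op)

print(meaningful(("and", "A", ("not", "B"))))              # True
print(meaningful(("forall", range(10), lambda x: "A")))    # True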



Interlude: The diagonalization argument according to our view of infinite objects as non-terminating algorithms...

Given any algorithm producing infinitely-many infinite decimal expansions (each of which is really an algorithm) (i.e. given any infinite list of real numbers) (NB: lists are necessarily countable),

our construction algorithm outputs an algorithm which produces an infinite decimal expansion which differs from all decimal expansions above in at least one place (i.e. a real number which will never be produced by the algorithm above)

So the set R of all real numbers is not even r.e.
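Here is a minimal sketch of that construction in code (my own illustration; reals are modeled as non-terminating digit streams, and I ignore the 0.999... = 1.000... subtlety):

from itertools import islice, count

def constant_real(d):
    """A real in [0,1) as a non-terminating stream of digits (here, all equal to d)."""
    while True:
        yield d

def some_list_of_reals():
    """A made-up 'algorithm producing infinitely many decimal expansions'."""
    for n in count():
        yield constant_real(n % 10)

def diagonal(list_of_reals):
    """Outputs a digit stream differing from the n-th real in its n-th digit."""
    for n, real in enumerate(list_of_reals):
        nth_digit = next(islice(real, n, n + 1))
        yield (nth_digit + 1) % 10    # any digit other than nth_digit will do

print(list(islice(diagonal(some_list_of_reals()), 10)))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] -- by construction, never produced by the input list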

Let's look at the types of these algorithms, so that we may see something about the nature of R. (Can we talk about types when algorithms don't terminate?)

A number from 0-9 : D (e.g. 5 : D)
An infinite decimal expansion : list_of D (e.g. pi : list_of D)
An algorithm producing decimal expansions : list_of (list_of D) (e.g. any list of real numbers)
Our diagonal construction algorithm returning a novel real number : (list_of (list_of D)) -> (list_of D)

list_of is a parameterized type (a type constructor), taking a type as input.

Gustavo is hoping that we can somehow dismiss R as meaningless, using the fact that it's not r.e.


Reinterpreting the Quantifiers

forall is an infinite conjunction, e.g. forall x in N.phi(x) is actually phi(0) /\ phi(1) /\ phi(2) /\ ...
exists is an infinite disjunction, e.g. exists x in N.phi(x) is actually phi(0) \/ phi(1) \/ ...

so, "forall x in N. phi(x)" is not verifiable (it may be provable using axioms, but that's a different story)
likewise, "exists x in N. phi(x)" is not refutable.

To verify "forall x in N. phi(x)", we would need a machine that checks phi for all natural numbers.... such a machine requires an oracle. Remember: oracles don't exist!



Topological semantics

verifiable : open
refutable : closed
/\ : intersection
\/ : union
~ : complement

Suppose phi(x) is verifiable for each x, i.e. for each x, phi(x) denotes an open set.
Can we say that "forall x in N. phi(x)" is verifiable? No, because an infinite intersection of open sets is not necessarily open.
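A standard illustration (my example, not from the original argument): let phi_n(x) say "x lies in the interval (-1/n, 1/n)". Each phi_n denotes an open set, so each instance is verifiable; but the intersection over all n is the single point {0}, which is not open. So even when every instance is verifiable, the infinite conjunction "forall n. phi_n" need not be.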
gusl: (Default)
I am a realist-in-truth-value for recursively-enumerable statements. i.e. I believe that statements for which counterexamples could be found if they existed must be either true or false.

For example, the Goldbach conjecture must be either true or false (since if it were false, you could find an even number greater than 2 which is not a sum of two primes). Also, whether any given Turing Machine halts is a recursively-enumerable problem.
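A minimal sketch of the semi-decision procedure implicit in the Goldbach example (my own illustration; the helper names are made up):

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_search():
    """Halts iff the Goldbach conjecture is false: it returns the first even
    number that is not a sum of two primes, if such a number exists."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n
        n += 2

# If the conjecture is true, this search never halts -- which is why one can
# refute the conjecture by computation, but not verify it outright.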

Basically, it seems to me that any meaningful problem is recursively-enumerable.

A consequence of Gödel's Incompleteness:
If a consistent set of axioms containing PA is recursively enumerable, there will be statements which are undecidable in it.

I believe there should be an algorithmic way of deciding all recursively-enumerable statements, i.e. an algorithmic way of extending mathematics to prove more true things whenever you run into undecidabilities (i.e. whenever you correctly prove, using a meta-method, that a sentence G is true but not provable in the original system). But this seems not to be possible: if there were any such algorithm, we could construct its halting problem, and *that* problem would be undecidable in the new system.

So I do think that Gödel's theorem is significant for the philosophy of mathematics. And it seems like Chaitin is onto something when he says that mathematics must be semi-empirical.


Again, thanks to Benedikt Löwe for telling me the theorem above.
gusl: (Default)
There are two kinds of ways of looking at mathematics... the Babylonian tradition and the Greek tradition... Euclid discovered that there was a way in which all the theorems of geometry could be ordered from a set of axioms that were particularly simple... The Babylonian attitude... is that you know all of the various theorems and many of the connections in between, but you have never fully realized that it could all come up from a bunch of axioms... Even in mathematics you can start in different places... In physics we need the Babylonian method, and not the Euclidian or Greek method.
— Richard Feynman


The physicist rightly dreads precise argument, since an argument which is only convincing if precise loses all its force if the assumptions upon which it is based are slightly changed, while an argument which is convincing though imprecise may well be stable under small perturbations of its underlying axioms.
— Jacob Schwartz, "The Pernicious Influence of Mathematics on Science", 1960, reprinted in Kac, Rota, Schwartz, Discrete Thoughts, 1992.


When I say I'm an advocate of formalization, I'm not saying we need to understand all the precise details of what we're arguing for (although this usually is the case in mathematics, at least more so than in physics). What I want to do is to formalize the partial structure that does exist in these vague ideas. Favoring a dynamic approach, I hold that we must accept formalizing theories in small steps, each adding more structure. We will need "stubs", and multiple, parallel stories that slowly evolve into a formal form. The point is that a vague, general idea *can* be formalized up to a point: this is evidenced by the fact that we humans use precise reasoning when talking about such ideas.
Again, the idea is about doing what humans do, formally. If the humans' idea is irremediably vague, we don't hope to do any better, but we do hope to formalize it as far as the ideas are thought out / understood (even if vaguely). To the extent that there is something systematic (not necessarily "logical", but necessarily normative) in the way we reason and argue, it will be my goal to formalize it in a concrete form.

Regarding the normative aspect, the reason we need one is: not all ideas make sense! For fully-formalized mathematics (i.e. vagueness-free mathematics), it's easy to come up with a normative criterion: a mathematical idea or argument is fully-formalized if it corresponds to a fully-formal definition or a fully-formal proof. One of the challenges of this broader approach is to define what it means for an idea to "make sense": what does it attempt to do? What is its relation with related concepts?

The "natural" medium expression of these ideas is English. The idea is to connect English words to concepts in the formal knowledge system. We say an English sentence makes sense in a given context iff it addresses the goal / there is sound reasoning behind it (not all may be applicable).
gusl: (Default)
http://www.kinderfilosofie.nl/ or in English (less content): http://www.kinderfilosofie.nl/engels/

Nice quotes from children
I am glad letters exist. Why? If there were no letters, there would be no sounds. If there were no sounds, there would be no words. If there were no words, we would not be able to think. And if we could not think, there would be no world.
Christine (from Gareth Matthews).

Who invented the world?
Who do you think?
I think the first people, because they invented language as well.
Sosha, 5 years old. Amsterdam


Following the cut, my own ideas, which I wrote on my website around 2000/2001.
gusl: (Default)
For Philosophy of Cognition, I have a presentation about the Explanatory Gap. It's due in about 13 hours.

If you think you can help, please give feedback:


The Explanatory Gap

---------------------------------------------------------------------------------------------------------------------------

Kinds of Consciousness

* Cognitive Consciousness: internal cognitive representations, conscious computational tasks, etc.
* Phenomenal Consciousness: feeling (aka "sentience")

It is with the latter that we are concerned in this presentation.

---------------------------------------------------------------------------------------------------------------------------

Reductive Explanations

Water is explained by H2O
Thermodynamics is explained by Statistical Mechanics
etc.

But phenomenal consciousness is not explained by a physicalist theory of the brain. And it cannot be. Even a complete understanding of the brain would not be enough to answer "why does it feel that way?" satisfactorily.

EXAMPLE: PAIN, REDNESS, HORNINESS, etc. may correspond to specific brain states, but knowing the states does not explain why they feel "painful", "red" or "horny".


This is the explanatory gap: reductive physical explanations of phenomenal events cannot be satisfactory. Chalmers calls this "the Hard Problem of Consciousness".
