gusl: (Default)
While arguing that CS is more fundamental than Physics, Daniel Lemire gives it the status of a "natural science", since it supposedly makes fundamental predictions about the universe. He claims that both (a) Digital Physics / Strong Church-Turing Thesis (SCTT) and (b) the possibility of AI are falsifiable predictions ("Yet, these predictions remain unfalsified to this day.")

I made a skeptical argument for each of the above. Below I'm quoting my objection to (a).

In our usage here, "Strong Church-Turing Thesis" refers to the idea that the computation that can occur in our universe is asymptotically equivalent to Turing Machine computation, as far as space- and time-complexity is concerned.

from the comments to "Computer Science is the most fundamental natural science" [edited]

I was just thinking: is the Strong Church-Turing Thesis (SCTT) really falsifiable?
Any such falsification must involve a proof that nature solves a certain problem (i.e. via a "physical algorithm") asymptotically faster* than a Turing Machine. To my knowledge, there is no accepted way to prove complexity results empirically: remember you're trying to draw conclusions about what happens to the computation time (and/or space) as n goes to infinity! To extrapolate to infinity, you need a theory (a.k.a. computational model) to tell you how your algorithm scales (in this case, this will be a physical theory). With a computational model in hand, you are back in CS Theory land, and can compare things asymptotically, but it's hard to see how you could validate a physical model in the first place.

OTOH, it seems like we've accepted that the asymptotic behavior of PCs is modeled by Turing Machines... although we've seen in practice that this is an idealization, and PC performance is in fact slightly worse than what would be predicted by Random-Access Machines, as we scale up.

* - or the opposite: nature cannot compute as fast as some algorithm in a Turing Machine!

If one could prove that quantum computing scales and that BQP != P, that would falsify SCTT (at least the classical SCTT), so it seems falsifiable after all... to the extent that such a thing can be proven.
gusl: (Default)

Gustavo Lacerda said...

I doubt the Doppler effect in your throat could be strong enough to be noticeable. For the first frequency doubling (octave), you'd need the sound source to be moving at half the speed of sound (i.e. v = 1/2 c). To triple the frequency (i.e. lambda = 3), you'd need it to move at 2/3 the speed of sound. In general, 1/lambda = 1-v/c, or v = c (1 - 1/lambda).

If we take the minimal noticeable change to be 20 cents (where 1200 cents make an octave), then, since cents are logarithmic, lambda = 2^(20/1200) ~= 1.0116 (the linear approximation 1220/1200 = 61/60 overshoots a little).
Thus v = c (1 - 1/lambda) ~= 0.0115 * 340 m/s ~= 3.9 m/s (or about 5.6 m/s with the 61/60 approximation).
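As a sanity check, the general relation can be evaluated numerically. A small sketch (the speed of sound c = 340 m/s and the 20-cent threshold are taken from the text above; the function name is mine):

```python
def source_speed_for_ratio(ratio, c=340.0):
    """Speed a sound source must move toward the listener so that the
    Doppler-shifted frequency is `ratio` times the original:
    f'/f = c / (c - v)  =>  v = c * (1 - 1/ratio)."""
    return c * (1.0 - 1.0 / ratio)

# Doubling the pitch (an octave) needs half the speed of sound:
octave = source_speed_for_ratio(2.0)            # 170 m/s
# A barely noticeable 20-cent shift (cents are logarithmic):
cents_20 = source_speed_for_ratio(2 ** (20 / 1200))  # ~3.9 m/s
```

Either way (about 3.9 m/s with the exact ratio, about 5.6 m/s with the 61/60 approximation), the conclusion stands: vibrating throat tissue doesn't move anywhere near that fast.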

I don't think throats reach that speed when they vibrate.

I found that post by googling for Doppler effect + vibrato, because it's a cool idea.
gusl: (Default)
This blew me away.

Today, Ed Fredkin taught me that double-slit experiments can tell whether two particles (electrons, atoms, molecules) are *identical* or not: if they are, an interference pattern appears; if they are not, it is possible, in principle, to track which particle went through which slit, and the pattern disappears.

When we see an interference pattern with gold atoms, we know that those two atoms really are *identical*. This means that we've reached the bottom-level in some sense: there is no deeper micro level in which these atoms could differ detectably ("detectably" according to nature's definition).

If we tag one of the atoms (e.g. by moving an electron in a small way), the interference pattern disappears completely.

I know I'm behind the times. But this is cool. Just imagine using this technology as a kind of quality assurance for nanotech replicators.


I'm wondering how these double-slit phenomena play out with reversibility, "irreversibility" (i.e. 2nd law of thermodynamics), and all other principles. It would seem to violate continuity... except that the universe is supposed to be discrete at that scale.


Tangentially, about interference patterns in general, I came up with a cute proof during the summer after my freshman year of college: from energy conservation, it follows that power is proportional to the square of the amplitude.
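A sketch of one way that argument can go (my reconstruction, not necessarily the original proof): suppose the power radiated is some function f of amplitude A, and superpose two equal coherent sources. Amplitudes add, so bright fringes have amplitude 2A and dark fringes 0, and energy conservation forces the pattern's average power to equal the 2f(A) put in:

```latex
\frac{f(2A) + f(0)}{2} = 2\,f(A),
\qquad \text{which } f(A) = kA^2 \text{ satisfies:} \qquad
\frac{4kA^2 + 0}{2} = 2kA^2 .
```

(A linear law f(A) = kA would average to only kA, half the input power, so it is ruled out.)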
gusl: (Default)
My compiled rants against physicists. In response to like-minded quale, I wrote the following:

quale wrote: I dropped out of being a physics major because everyone was just dogmatically accepting the notion of entropy as the "log of the number of states" and didn't want to question what the hell that really meant.

Me too! Not just the way they gloss over entropy, but also where the Schroedinger equation comes from, etc., and the way they avoid thinking about paradoxes (e.g. Maxwell's demon: is entropy subjective?, this one about classical mechanics). And the fact that nobody bothers to fix the very bad notation traditionally used in parts of physics is a pretty bad sign too (nobody except for my hero Sussman).

In college physics, I was just told to plug-and-play, which made me very unhappy. I was interested in finding logical relationships between sets of physical axioms (e.g. how to prove that energy is proportional to amplitude squared using only the additivity of amplitude and energy conservation).

Since I like my knowledge network to be dense and tight (i.e. certain), ignoring foundational questions and paradoxes is totally against my cognitive style. But I wonder whether being less conservative might sometimes be better if the goal is to make science progress: ignoring foundational questions might sometimes be the faster route.
gusl: (Default)
Shut up and calculate!

Why do physicists care about interpretations of QM? Do different interpretations make different predictions? If so, shouldn't they be called theories instead?
gusl: (Default)
Compare "accelerating under constant power"
with "accelerating under constant force" (constant acceleration)

My intuition tells me that they should be the same, but kinetic-energy considerations show that the acceleration is decreasing in the first case (it takes 4 times the energy to get twice as fast).
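A quick numerical sketch of the contrast (unit mass and made-up unit values for force and power; function names are mine):

```python
import math

def v_constant_force(F, t, m=1.0):
    # F = m a  =>  velocity grows linearly: v = F t / m
    return F * t / m

def v_constant_power(P, t, m=1.0):
    # P = d/dt (m v^2 / 2)  =>  v = sqrt(2 P t / m): acceleration decays
    return math.sqrt(2.0 * P * t / m)

# Under constant power, reaching twice the speed takes 4 times the time
# (i.e. 4 times the energy), so the slowdown shows up immediately:
v1 = v_constant_power(P=1.0, t=1.0)
v2 = v_constant_power(P=1.0, t=4.0)  # only twice v1, despite 4x the time
```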

This would seem to contradict "velocity is relative": if velocity were relative, then the energy needed to get faster by 1m/s would be the same whether you are stationary or already at 1 m/s.


My intuition also tells me that I should be able to come up with a similar paradox about predicting the outcome of a 1-dimensional elastic collision, by playing energy against momentum.

Conservation of energy (equal masses, constant factors dropped):
v1_before^2 + v2_before^2 = v1_after^2 + v2_after^2 (if we fix one side of the equation, then the point (v1,v2) falls on a circle)

Conservation of momentum:
v1_before + v2_before = v1_after + v2_after (if we fix one side of the equation, then (v1,v2) falls on a straight line)

The solutions are where the circle and the line intersect. I guess there's no paradox after all.

I would like to transform to a moving reference frame, to make sure that everything is still alright. Transforming to a fast-moving reference frame will just make the circle bigger. Basically, the solution points and the line get translated diagonally up and to the right. The distance between the intersections remains the same.
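The frame-shift check can also be done numerically. A small sketch (equal masses, for which the elastic collision simply swaps the two velocities; the boost speed u is arbitrary):

```python
def collide(v1, v2):
    # Equal-mass 1-D elastic collision: the velocities swap
    # (the circle/line intersection other than the trivial one).
    return v2, v1

def conserved(v1, v2, w1, w2):
    # Momentum and (equal-mass) kinetic energy match before/after.
    return (abs(v1 + v2 - (w1 + w2)) < 1e-12 and
            abs(v1**2 + v2**2 - (w1**2 + w2**2)) < 1e-12)

v1, v2 = 3.0, -1.0
w1, w2 = collide(v1, v2)

u = 10.0  # boost into a fast-moving reference frame
b1, b2 = collide(v1 - u, v2 - u)
# Conservation holds in both frames, and the boosted outcome is just
# the original outcome shifted by u: everything is still alright.
```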

Oh I see, physics is fine. Nothing to worry about.


The concept of kinetic energy has always been problematic for me. Given the choice, I'd rather integrate force over distance instead.
gusl: (Default)
The other day, I wrote the following on my PDA:

Why I am no longer a mathematician:
· Tired of working hard just to be clever. Life is short. The real world is more interesting.
· Phenomenology, introspection drove me towards cogsci.
· It's more productive to do meta work: computers will eventually do math much more cheaply than I can. (see Zeilberger)


Here's something of an academic autobiography, of my time at Bucknell. It says nothing about my ideas, or what I read. I tell the story of how undergraduate curricula shaped my choice of majors:


The last time I did serious mathematical research was my junior year of college... and even that was very much empirically-aided: it was about counting the number of roots of polynomials over finite fields... my discoveries were made with the aid of a C++ compiler.
Since then, I have proven things about cute games (Nim, thanks to agnosticessence), toy theorems (prove that number_of_divisors_of(n) is always even except when n is a perfect square), and created neat correspondences (e.g. if you represent natural numbers as multisets of prime factors, GCD is intersection and LCM is union), but nothing that could count as serious mathematics.
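Two of those toy facts are easy to check by brute force (a small sketch; the function names are mine):

```python
from collections import Counter
from math import gcd, isqrt

def divisor_count(n):
    # d(n) is odd iff n is a perfect square: divisors pair up as
    # (d, n/d), except when d = n/d = sqrt(n).
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def factor(n):
    # Prime factorization of n as a multiset (Counter), by trial division.
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def unfactor(f):
    out = 1
    for p, e in f.items():
        out *= p ** e
    return out

a, b = 84, 90
g = unfactor(factor(a) & factor(b))  # multiset intersection -> GCD
l = unfactor(factor(a) | factor(b))  # multiset union -> LCM
```

(Counter's `&` and `|` take element-wise min and max of multiplicities, which is exactly the multiset intersection and union the correspondence needs.)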

Already in my senior year, in topology class, I no longer saw the point of doing pure math. The only way I could interpret infinite products of topological spaces was as a game of symbols: they had no real meaning to me.

Not only was I starting to get a formalistic view of mathematics, but I was increasingly bothered by the normal approach to mathematics, the standard mathematical language and the paper medium. This was made much worse by the fact that I had grown intolerant of confusing notation/language and informal proofs. Thankfully, I didn't stay in mathematics. Advanced mathematics requires a lot of effort and things are not always beautiful. The real world has many more interesting things to understand. During this time, I considered going for a PhD in Applied Math, but became disappointed with that idea too. It was still too much like other math.

By my senior year, mathematics was no longer fun. It still wasn't "hard", but I had no motivation left. I had become enthusiastic about statistical modelling... even if I got labelled a Bayesian by our frequentist statistics department (I think it was meant as a compliment). And it was my interest in AI, by far, that dominated my intellect.


The reason I had liked mathematics before that was that it had been, for me, easy and fun. And its formal structures were much more satisfactory and easier for me to understand than the things people did in physics, my original major. My physics teachers never seemed to explain things clearly, and never gave me good logical reasons for why they were doing what they were doing. It was often unclear which model and assumptions were being used. And even after pressing them, I still had foundational questions that went unanswered. Quantum Mechanics class was extremely frustrating: while "nobody understands quantum mechanics", the theory still has a reason to be, but they didn't give us a chance to try to make sense of the experimental results that motivated the theory, or convince me that the theory was the best we could do.

Although I started out with bad grades in physics, they were steadily improving. Still, my professors saw promise in me, and wanted me to stay. Despite liking and doing well on my last class on Thermodynamics & Statistical Mechanics, I decided that I was going to focus on math: I was just too different from the physicists, and talking to them took too much effort. Now I want Patrick Suppes to be my next physics teacher. Among the physicists, I was definitely a philosopher.

Computer Science

I had to overcome my initial prejudice against CS. I only started it because of my father's argument that it would be a good idea if I wanted to make money. As a freshman, I had thought that it was just going to be about programming techniques and similar boring-sounding things. The sort of person who did CS at my school was not far from the "typical management major": financially ambitious, if not particularly mathematically talented. When I joined the group, I learned that there were exceptions: besides the former type, there were also "computer geeks". I was never a "computer geek". A programming geek, yes, for a long time... but one who couldn't get Linux installed, and who would call a technician to troubleshoot my network. Among them, I was solidly seen as a math geek. It bothered me that their AI class assumed neither basic probability nor basic logic, and that the computer graphics class couldn't manage a simple linear projection.
But I really liked ProgLan. Designing algorithms was fun too, and algorithmic reductions even more so. And I learned some useful programming techniques.


I've always been a philosopher. But I did not like the prospect of reading shelves full of philosophy books, learning the ins and outs of useless arguments (for instance, about metaphysics), and rereading and struggling to understand what exactly writers mean. Philosophy is great for breaking people out of their epistemological vices: questioning their prejudices, intuitions, etc. But some things are just overanalyzed, I think because philosophers talk past each other. Case in point: the Monty Hall problem. Why are they still writing papers about it?? I think philosophers stand to benefit the most from computational aids to reasoning, argumentation maps and such. At least they already know logic.


It was fascinating. But it wasn't rigorous enough for me. If they had offered cognitive science, I probably would have taken lots of it.

Economics & Linguistics

I also flirted with economics, although never for credit. It was interesting, but they were too slow on the math. Like CS, only worse. I also took a class in linguistics (the only one offered!), but as I wasn't about to start doing NLP, it remained a mere curiosity.
gusl: (Default)
r6 and I discuss his theory that entropy is subjective

I've never been satisfied with the solutions I've seen to Maxwell's demon.
I take r6's interpretation of entropy as an agent-dependent quantity related to the agent's knowledge, and a measure of what one can do with this knowledge: knowledge is power. According to his theory, an all-knowing being (Laplace's demon) could, through a Maxwell's-demon setup, decrease the entropy as measured by a more ignorant agent. The point seems to be that no one can decrease his/her own entropy.

I wonder what physicists have to say about this.

