[personal profile] gusl
I only believe in mathematical objects with finite information (i.e. with a finite expression). Things can be expressed in any way you wish: intensionally, extensionally, whatever. So, while the natural numbers exist, not all of their subsets do.

While the set of real numbers exists (R can be expressed as the power set of N), not all of the traditional real numbers exist (only the computable ones do). So, in my definition, the cardinality of R is aleph_0 (i.e. the same as N). In fact, no set can have greater cardinality, for that would imply it had non-existing elements (since only computable things exist).
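To make this concrete: on my view a computable real is just a finite program that, given a precision n, returns a rational within 2^-n of the number. Here is a rough sketch of that idea in Python (the names are mine, purely for illustration); since programs are finite strings, only countably many reals can be described this way.

    from fractions import Fraction

    def sqrt2(n):
        """Return a rational within 2**-n of sqrt(2), by bisection."""
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2**n):
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return lo

    # On this view, the finite text of sqrt2 *is* the real number.
    print(sqrt2(10))  # a fraction within 2**-10 of sqrt(2)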

Can we design a set theory this way? How much of traditional set theory can be translated? Do we lose any good mathematics this way?

Whereas people seem to view uncomputability as a fundamental property of a problem, I tend to view it as another form of self-reference paradox. All "well-defined" problems are computable. Again, this is not a theorem or mathematical insight, but a re-definition, just like my "R only has countably many existing numbers" is a definition. But the point isn't just to change names and keep everything the same... it's to change the intuition that goes with the names.
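To spell out the self-reference: the usual proof that no halting tester exists is exactly a diagonal/liar construction, not a fact about any particular concrete input. A sketch (halts here stands for any claimed oracle; it's a hypothetical argument, not a real function):

    def make_paradox(halts):
        """Given any claimed oracle halts(src, inp) -> bool,
        build the program it must get wrong."""
        def paradox(src):
            if halts(src, src):   # oracle says "halts on itself"...
                while True:       # ...so do the opposite: loop forever
                    pass
            else:
                return            # oracle says "loops", so halt at once
        return paradox

    # Run paradox on its own source and the oracle is wrong either way:
    # "halts" => it loops; "loops" => it halts. The contradiction lives
    # entirely in the self-application, which is why I read uncomputability
    # as a self-reference paradox rather than a property of the problem.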

(no subject)

Date: 2005-02-05 10:52 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Fredkin is a physicist too. He studied with Feynman.

Secondly, while we "don't know what the hell it is", we need metaphysical theories from which to create physical theories. Fredkin predicts that digital physics will be falsifiable one day: from my understanding, it suggests that all symmetries will eventually be broken.

(no subject)

Date: 2005-02-05 11:29 pm (UTC)
From: [identity profile] pbrane.livejournal.com
Hmmm... he sure doesn't read like a physicist - he reads like a computer scientist, and doesn't seem to have that great of a way of describing his ideas in detail, from what I can see on his site.

we need metaphysical theories from which to create physical theories.

I'm not sure what you mean by this: the scientific method is based on inductive, not deductive, reasoning: we describe what we observe, and we model it in a variety of ways. The models which have the fewest moving parts are the ones we prefer. That's physics.

We don't "postulate" that physics is a discrete system; we, for example, observe that GR (in a weakly coupled regime where it's valid), mixed with QFT (in a locally flat piece of space where *it* is valid), predicts that black holes have a temperature and contain information: a finite number of possible microstates, which is the exponential of the area of the event horizon (in units of the Planck length).

This gedanken-observation hints at the fact that the information contained in any finite volume of space is indeed finite, and yields entropy proportional to the area bounding said volume (in units of the Planck length). So yes, there are reasons to consider theories that have finite amounts of information density, but you shouldn't assume it from the start. String theory seems to embody this concept *even though* it's completely continuous, all the way down to, and below, the Planck scale. Maybe the naive descriptions of string theory are describing information redundantly, but that doesn't mean it's any less real of a description (assuming it's right, of course), just because it uses a completely continuous basis.
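For concreteness, the relation I'm leaning on is the Bekenstein-Hawking entropy (setting k_B = 1):

    S_{\mathrm{BH}} = \frac{A}{4\,\ell_P^{2}}, \qquad \ell_P = \sqrt{\hbar G / c^{3}}, \qquad N_{\mathrm{microstates}} \sim e^{S_{\mathrm{BH}}}

i.e. a finite horizon area bounds the information inside at a finite (if enormous) number of bits.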

I guess I'm really not sure we need a new "metaphysical theory" from which to create physical ones - direct observation points to continuity down to 10^-19 cm, and indirect observations (looking for violations of relativity, of the equivalence principle) point further than that. Without a *physical*, not metaphysical, reason to assume discreteness, we shouldn't do it. It may turn out to be *true*, but that doesn't make the arguments for it compelling to me before the fact (especially because it's easy to go too far: it could be that space and time are discrete, but that quantum mechanics is still based on a whole bunch of continuous internal symmetries - i.e. superposition is still true, even if you take the superposition of two states with any complex [not just complex rational or constructible or whatever] coefficients in front of them).

(no subject)

Date: 2005-02-06 10:02 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I'm not sure what you mean by this: the scientific method is based on inductive, not deductive, reasoning: we describe what we observe, and we model it in a variety of ways. The models which have the fewest moving parts are the ones we prefer. That's physics.

Unless you're doing it for expediency reasons, the fact that you use Occam's razor is due to your metaphysical theory (i.e. we believe simple explanations tend to be true more often than complicated ones).

"Those who explicitly have a philosophy use an implicit one without realizing it."

Fredkin starts from an interesting metaphysical assumption. I know way too little to judge it further. But who knows, maybe it will be easier to understand things / make new discoveries if physicists start thinking this way.

(no subject)

Date: 2005-02-06 10:13 am (UTC)
From: [identity profile] pbrane.livejournal.com
The only part which uses Occam's razor is the "preferring fewer moving parts" part of it - the rest is just, "we describe what we observe", and so far, we've reached most of the conclusions of modern science with the premise of continuity. It's not *logically necessary* for there to be arbitrarily accurate continuity, but so far, that's all we have evidence for. But in some sense, Occam's razor in science is just for expediency: we don't claim that any model "is" the real world - it's just a model of it, and the more accurate the model, for the least amount of work, the better.

But who knows, maybe it will be easier to understand things / make new discoveries if physicists start thinking this way.

Possible, but so far, in the history of science, we have typically discretized continuous things for computational purposes (recently) or to avoid singular problems that were artifacts of the construction (which we could remove and then take the discretization length to zero at the end of the day).

It still doesn't seem like a *meta*physical assumption, but an actual physical one, and an unjustified one at that. Show me some evidence for discreteness, or else show me some nice theoretical problems solved by the proposal, or at least show me how it simplifies current descriptions, or else I'm not sure why it's being done: just because we're scared of the infinite? Seems pretty arbitrarily limiting.
