One of the few places in which statistics gets applied to math (rather than the other way around) is in the experimental mathematics community.
This year, the Workshop on Discovery and Experimentation in Number Theory is split between two venues simultaneously, dividing the attendees into two physical communities.
The workshop takes place at two sites: The Fields Institute in Toronto and The IRMACS Centre in Vancouver. While participants are welcome at either site, we anticipate that most participants will choose the closer or easier site. We will share (by technology of a sophisticated nature) the core of the lectures though we will have some individual sessions at each site, primarily because of the three hours time difference. The lectures which are not shared will be video streamed and archived.
This is obviously somewhat of an experimental format, though the primary focus is, of course, the content. Technical assistance will be available at each site, so as to minimize glitches.
Doron Zeilberger apparently won't be there. But Lenore Blum will. She has a book about computation over the reals called "Complexity and Real Computation" (classical computation is defined in binary terms, i.e. over Z2). There's a blurb about "Transfer Principles for Complexity Theory":
We can ask: Suppose we can show P = NP over the complex numbers (using all the mathematics that is natural here). Then can we conclude that P = NP over another field such as the algebraic numbers or even over Z2? (Answer: Yes and essentially yes.)
This reminds me of work on infinite-precision numerical computation, by Françoise Chaitin-Chatelin. (coincidentally or not, they are both married to famous mathematicians)
However, the two are completely different metaphors: in infinite-precision numerical computation the real number is (I think) the limit of a Turing machine's output over time. In Blum's work, the reals are the symbols that you read and write on the tape.
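The first metaphor can be made concrete. Here is a small sketch (my own illustration, not from either author) of a real presented as the limit of a machine's output: rational bisection bounds converging to sqrt(2).

```python
from fractions import Fraction

def sqrt2_approximations(n):
    """Yield n successively tighter rational bounds (lo, hi) on sqrt(2).

    In the computable-analysis picture, the real number *is* the limit
    of this output stream; each yield is one more step of the machine.
    """
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
        yield lo, hi

lo, hi = list(sqrt2_approximations(20))[-1]
print(float(lo), float(hi))  # both within 2**-20 of sqrt(2), ~1.41421
```

In Blum's model, by contrast, sqrt(2) would sit in a single tape cell, and comparing or multiplying it costs one step.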
Date: 2009-09-18 04:38 am (UTC)
This is probably an obvious question, but: how do real RAMs connect to questions in computational geometry?
Date: 2009-09-18 05:00 am (UTC)
Geometry is basically the art of calculating functions of coordinates to get combinatorial structures. It's the function calculation that bites. For many problems, if we can calculate just the *sign* of the function, the rest all follows. But even that is nontrivial.
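As a concrete instance of "compute just the sign" (my sketch, not the commenter's code): the standard 2-D orientation predicate. The combinatorial output -- left turn, right turn, or collinear -- depends only on the sign of a determinant.

```python
def orientation(p, q, r):
    """Sign of the determinant of (q - p, r - p): +1 for a left turn
    p -> q -> r, -1 for a right turn, 0 if the points are collinear.

    Over exact integer or rational coordinates the sign is reliable;
    in floating point, near-collinear inputs can get the wrong sign.
    """
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

print(orientation((0, 0), (1, 0), (1, 1)))  # 1: left turn
print(orientation((0, 0), (1, 0), (2, 0)))  # 0: collinear
```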
If you assume the coordinates are reals, and that you can add, multiply, or divide two numbers in constant time, many things are relatively easy. If you assume the coordinates are integers or floating-point numbers, then errors build up. For example, you can solve an LP in polynomial time on a real RAM. We don't know how to do this in polynomial time on a word RAM: if you do exact arithmetic, the size of the numbers grows exponentially with the number of multiplications you perform -- each multiplication can double the number of bits needed to represent the result.
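The doubling is easy to watch directly with exact integers (a sketch of mine, not from the thread): repeatedly squaring a number doubles its bit length, so k multiplications can produce numbers of about 2^k bits.

```python
x = 3
sizes = []
for _ in range(6):
    sizes.append(x.bit_length())
    x = x * x  # one exact multiplication: bit length roughly doubles
print(sizes)  # [2, 4, 7, 13, 26, 51]
```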
Date: 2009-09-18 06:18 am (UTC)
Recursion theorists like to talk about "computability oracles": "if we had a machine that could decide the halting problem, then..."
I guess this stuff is about "complexity oracles": "if we had a machine that could multiply in constant time, then..."
Somehow the latter type of question strikes me as more interesting (I imagine the complexity zoo to be more diverse than the computability zoo), because it lets you focus on higher-level aspects of the problems/algorithms (it's a kind of idealization).
Date: 2009-09-18 06:45 am (UTC)
RAM: you assume infinite memory, and some type of value that can be stored in each cell. The machine has unit-size pointers. You can read or write a location in unit time, and you can do any single binary arithmetic operation on values in unit time.
Word RAM: the value type is w-bit integers (words), and the usual assumption is that pointers are also w bits (so you need w at least log n just to address the input) -- you can convert between values and pointers.
Real RAM: the value type is reals; you often also get a word RAM to play with on the side, or maybe an integer RAM (with unbounded-length integers).
The word RAM model is pretty close to a physical machine, although typically not every binary operation is available as a primitive (a commonly cited example is "count the number of set bits in this word"). The main deviation from reality is that it ignores memory-hierarchy effects.
If you squint and think of the floating point unit in your computer as operating in the reals rather than in floating-point, then the real RAM model kind of vaguely looks like a physical machine. It's close enough that we geometers manage to sleep at night despite the lies that we tell.
You have to be careful about converting between words and reals if you care about lower bounds: you can encode crazy-complicated things as arithmetic expressions over the reals. I periodically have to look up the precise details so that I can stay correct when making brash claims of optimality, but said details ooze out of my brain pretty quickly. On the other hand, some things are easier on a word RAM: for example, it takes O(n log n) time to sort n reals, but only linear time to sort n w-bit words when w is O(log n). Which is to say: the real RAM is more powerful, but its adversary is also more powerful.
Date: 2009-09-18 07:03 am (UTC)
This sounds like the sort of thing where I'd try to use symbolic algebra programs. I guess somehow real RAMs help you construct lower-bound proofs?
<< it takes O(n log n) to sort n reals, but only linear time to sort n w-bit words assuming w is O(log n). >>
keyword: radix sort.
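For concreteness, a minimal LSD radix sort (my sketch; `bits_per_pass` is an arbitrary tuning knob): each pass is a stable bucket pass on one chunk of bits, so sorting n w-bit keys takes O(w / bits_per_pass) passes of O(n + 2^bits_per_pass) work each -- linear in n when w is O(log n).

```python
def radix_sort(words, w, bits_per_pass=8):
    """Sort nonnegative w-bit integers with least-significant-digit
    radix sort. The stability of each bucket pass is what lets the
    earlier (lower-order) passes survive the later ones.
    """
    mask = (1 << bits_per_pass) - 1
    for shift in range(0, w, bits_per_pass):
        buckets = [[] for _ in range(mask + 1)]
        for x in words:
            buckets[(x >> shift) & mask].append(x)
        words = [x for b in buckets for x in b]
    return words

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66], w=10))
# [2, 24, 45, 66, 75, 90, 170, 802]
```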
Date: 2009-09-18 11:08 pm (UTC)
I never heard that before. Can you give an example?
I'm thinking of packing problems, kissing problems, ...
Btw, what's the technique for solving LPs with multiplications? (name?)
Date: 2009-09-19 11:58 am (UTC)
Every LP solver I'm aware of uses arithmetic operations; it's hard to imagine not doing so, given that what you're asked to optimize is the dot product of two vectors. The interior-point method is polynomial time if you can do real (in fact, rational) arithmetic in constant time. It's not if you only get to do arithmetic on w-bit words, because the rationals you operate on end up using a lot of bits.