[personal profile] gusl
What is the relationship between O(n^log(n)) and O(2^n)? Is the set of complexity classes fully- or partially-ordered?

Some NP-hard problems can be solved in O(2^n) time. Can we make this statement stronger[1]?


[1] using current technology

(no subject)

Date: 2008-11-07 01:07 am (UTC)
From: [identity profile] rdore.livejournal.com
n^(log n) grows much slower than 2^n.

n^(log n) is 2^(f(log n)) for a polynomial f (in this case just x^2). I've heard O(2^(polynomial in log n)) referred to as quasipolynomial, but I have no idea if this is standard. It's not that far from polynomial.

2^n is of course a long way from polynomial.
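As a quick numeric sanity check (mine, not part of the comment above; logs are base 2 throughout): n^(log n) actually beats 2^n for small n, and the two cross exactly at n = 16, where (log2 n)^2 = n; from there on 2^n dominates.

```python
import math

# n^(log2 n): quasipolynomial growth
def quasi(n):
    return n ** math.log2(n)

# True once 2^n has overtaken n^(log n); the crossover is at n = 16,
# where both sides equal 65536.
for n in [8, 16, 32, 1024]:
    print(n, quasi(n) < 2 ** n)
```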

(no subject)

Date: 2008-11-07 01:47 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< n^(log n) grows much slower than 2^n. >>

how do you know this? how do we know that quasipolynomials grow slower than exponentials?
Edited Date: 2008-11-07 01:49 am (UTC)

(no subject)

Date: 2008-11-07 02:37 am (UTC)
From: [identity profile] rdore.livejournal.com
just take the log of both of them.

log(n^(log n)) = (log n)^2

whereas

log(2^n) = n

given that the second one's logarithm grows faster, the second function itself must grow a lot faster.
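Numerically (my check, logs base 2): the ratio of the two logarithms, (log2 n)^2 / n, visibly tends to 0.

```python
import math

# (log2 n)^2 / n -> 0: the exponential's log outpaces the
# quasipolynomial's log, so the function itself wins by far more.
for n in [16, 256, 4096, 65536]:
    print(n, math.log2(n) ** 2 / n)
```

(At these powers of two the values are exact: 1.0, 0.25, 0.03515625, 0.00390625.)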

(no subject)

Date: 2008-11-07 04:46 am (UTC)
From: [identity profile] mdinitz.livejournal.com
It's totally standard to call that quasipolynomial -- for example, the class QP is defined as DTIME(2^{polylog(n)}) by the Complexity Zoo. Gustavo, the set of complexity classes is almost definitely not fully ordered. I can't think of any unconditional examples offhand (it's been a while since I thought about complexity stuff), but it's a standard complexity assumption that NP is not equal to coNP, even though they both contain P and are both contained in Sigma_2 and Pi_2 (and thus are obviously contained in the polynomial hierarchy). As for partially ordered: basically the goal of complexity theory is to figure out how different classes relate, i.e. to figure out what such a partial order would look like.

Oh, and quasipolynomial functions are called that because they're considered "close" to polynomial. In particular, assuming that NP is not in QP is a totally standard complexity assumption, and most complexity theorists would be pretty shocked if it weren't true. Finally, lots of hardness of approximation results that are derived from the PCP theorem require gadgets of quasipolynomial size, so it's an assumption that you see a lot in hardness of approximation results.

(no subject)

Date: 2008-11-07 01:39 am (UTC)
From: [identity profile] puellavulnerata.livejournal.com
What [livejournal.com profile] rdore said about O(n^log(n)) and O(2^n).

For complexity-theory purposes, big-O notation is a total order. In particular, it's true that if g(n) is monotonically increasing, then for all f(n), f(n) is either O(g(n)) or Omega(g(n)), and if both are true then f(n) is Theta(g(n)). This is not true in general for non-monotonic g(n), though. For example, sin(x) is not O(sin(sqrt(2)*x)) or Omega(sin(sqrt(2)*x)), because for any x0, there are always values of x > x0 where each function is zero while the other is not.

No, I don't think there are any known solutions of NP-hard problems faster than O(2^n).

(no subject)

Date: 2008-11-07 02:44 am (UTC)
From: [identity profile] rdore.livejournal.com
For complexity-theory purposes, big-O notation is a total order.

In some sense this is true, but I still cringe whenever people say it. In reality, the big-O partial order is horrible, even restricted to increasing functions from N to N. I'm pretty sure it contains an isomorphic copy of (P(N), subset).
Edited Date: 2008-11-07 02:45 am (UTC)

(no subject)

Date: 2008-11-07 03:05 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< In some sense this is true >>

In what sense is it false?


<< I'm pretty sure it contains an isomorphic copy of (P(N),subset) >>

What does this mean?

(no subject)

Date: 2008-11-07 03:25 am (UTC)
From: [identity profile] rdore.livejournal.com
In what sense is it false?

I'm not suggesting it is false so much as fundamentally ambiguous. What functions are, or are not, allowed to count for "complexity-theory purposes"?

<< I'm pretty sure it contains an isomorphic copy of (P(N),subset) >>

What does this mean?


It means that there is a function f whose input is subsets of N and whose output is increasing functions from N to N so that:

(X is a subset of Y) if and only if ( f(X)(n)=O(f(Y)(n)) )

(no subject)

Date: 2008-11-07 07:13 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< I'm not suggesting it is false so much as fundamentally ambiguous. What functions are, or are not, allowed to count for "complexity-theory purposes"? >>

I agree. So it is a partial order. But can you come up with monotonically increasing functions f, g such that they are incomparable (meaning, neither f in O(g) nor g in O(f))?


by "N", do you mean the set of naturals, i.e. P(N) is the set of sets of naturals?

(no subject)

Date: 2008-11-07 08:53 am (UTC)
From: [identity profile] easwaran.livejournal.com
Here's an example. Let f(2n)=(2n)! and f(2n+1)=f(2n)+1. Let g(2n+1)=(2n+1)! and g(2n+2)=g(2n+1)+1. (I've only added the "+1" in the second parts of the definitions to make sure these fit with any definition of monotonic you care about.) Then there is no constant k such that f(n)<kg(n) for all n, and also no constant k such that g(n)<kf(n) for all n, just because each one alternates being an ever larger multiple of the other: at even arguments f leads g by a factor of about 2n, and at odd arguments g leads f by a factor of about 2n+1. And I'm pretty sure that this is what he means.
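A numeric check of this interleaving trick (my sketch; I take the jumps to go to the factorial of the argument itself, so each function repeatedly overtakes the other by an unbounded factor):

```python
from math import factorial

def f(m):  # jumps to m! at even m, crawls (+1) at odd m
    return factorial(m) if m % 2 == 0 else factorial(m - 1) + 1

def g(m):  # jumps to m! at odd m, crawls (+1) at even m (m >= 2)
    return factorial(m) if m % 2 == 1 else factorial(m - 1) + 1

# f/g grows without bound on the evens, g/f on the odds, so there is
# no constant k with f < k*g everywhere, nor with g < k*f everywhere.
print([round(f(2 * n) / g(2 * n), 1) for n in range(2, 6)])
print([round(g(2 * n + 1) / f(2 * n + 1), 1) for n in range(2, 6)])
```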

(no subject)

Date: 2008-11-07 08:58 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I see.

This example does seem unnatural though. I'd like to restrict it to functions where the "derivative" is monotonically increasing,
i.e. where the function f(n) - f(n-1) is monotonically increasing.

(no subject)

Date: 2008-11-07 09:09 am (UTC)
From: [identity profile] easwaran.livejournal.com
Let f(2n)=(2n)! and f(2n+1)=2f(2n). Let g(2n+1)=(2n+1)! and g(2n+2)=2g(2n+1). I think it'll be very tough to find the right notion of "natural" - computability seems to be the only natural way to classify functions from N to N.

(no subject)

Date: 2008-11-07 04:59 pm (UTC)
From: [identity profile] rdore.livejournal.com
Ok, take f(2i)= 2^(2^(2^(2i))) and f(2i+1)=3f(2i) and g(2i+1)=2^(2^(2^(2i+1))) and g(2i+2)=3g(2i+1).
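These towers explode so fast that only the first few values are computable, but exact integer arithmetic is enough to see both overtakings (my check, not part of the comment):

```python
def T(k):
    # the tower 2^(2^(2^k))
    return 2 ** (2 ** (2 ** k))

def f(m):  # jumps to the tower at even m, else 3x the previous jump
    return T(m) if m % 2 == 0 else 3 * T(m - 1)

def g(m):  # jumps to the tower at odd m (defined for m >= 1)
    return T(m) if m % 2 == 1 else 3 * T(m - 1)

# f overtakes g at even arguments, g overtakes f at odd ones,
# and the factors blow up immediately:
print(f(2) // g(2))             # 65536 // 48
print(f(4) // g(4) > 10 ** 100)
print(g(3) // f(3) > 10 ** 60)
```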

(no subject)

Date: 2008-11-07 07:02 pm (UTC)
From: [identity profile] bhudson.livejournal.com
Oh, cute.

(no subject)

Date: 2008-11-07 05:00 pm (UTC)
From: [identity profile] rdore.livejournal.com
by "N", do you mean the set of naturals, i.e. P(N) is the set of sets of naturals?

Yes.

(no subject)

Date: 2008-11-07 03:17 am (UTC)
From: [identity profile] widdertwin.livejournal.com
G'day there! In case you hadn't noticed by my demeanour on the bus last night, I was not in my right mind at all. I'd blown my brain out on three hits of acid and was pleasantly albeit incoherently rolling balls and surfing the lights and energy around me. Hence the questionable first impression.

A pleasure to meet you, regardless :-)

(no subject)

Date: 2008-11-07 04:26 am (UTC)
From: [identity profile] en-ki.livejournal.com
...it must be nice in Canada.

(no subject)

Date: 2008-11-07 08:09 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
heh. I did find you oddly quiet, which meant that your warm handshake as you got off the bus came as a pleasant surprise. It didn't occur to me that your right mind might have been temporarily unavailable. I am generally oblivious to such things. :-)

(no subject)

Date: 2008-11-07 08:55 am (UTC)
From: [identity profile] easwaran.livejournal.com
I've twice had funny experiences talking to people on the bus who were currently on acid. Once I was heading to the SF Opera to see Turandot, and after I mentioned this, the two of us got into a discussion about which Mozart operas were better.

Hmm... I feel like there was another time in the past year or two, but it's not coming to me at the moment.

(no subject)

Date: 2008-11-07 09:01 am (UTC)
From: [identity profile] easwaran.livejournal.com
Also, the "using current technology" footnote is irrelevant, depending on how you define "can be solved". I think normally people assume a fixed theoretical representation of computation (for instance, Turing machines) and then calculate complexity in terms of the most efficient algorithm of that sort that solves the problem. Of course, there's no guarantee that different notions of computation that are Turing-equivalent give the same notions of complexity (for instance, consider a notion of computation in which any Turing-computable function can be calculated in a single step. But less trivial cases are possible.) So we really have to fix a notion of computation when defining complexity classes, even if we're restricted to algorithms that compute only computable things.

Scott Aaronson has conjectured that there is no physically realizable notion of computation that solves NP-complete problems in polynomial time.

As for solving all NP-hard problems in O(2^n) time, that somehow strikes me as unlikely, though I don't have a proof. The problem is that NP-complete problems are only known to have polynomial reductions to each other - but even if we assume a linear reduction, then solving one in O(2^n) time means solving the other in O(2^(kn)) time, which is not O(2^n). We'd either need a direct solution for each in that time, or a sub-linear reduction of each to the others, and that seems ridiculously unlikely.

(no subject)

Date: 2008-11-07 06:50 pm (UTC)
From: [identity profile] bhudson.livejournal.com
[Edit: I was on three puffs of crack, now I'm back to normal.]
Edited Date: 2008-11-07 09:53 pm (UTC)

(no subject)

Date: 2008-11-07 09:59 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< If you can solve one in O(2^n) time, you can solve all the others in O(2^n + poly(n)) time. You multiplied when you should have added. >>

For the record, I agree with this statement. Is it wrong?

(no subject)

Date: 2008-11-07 10:03 pm (UTC)
From: [identity profile] bhudson.livejournal.com
It is. If you have an instance of size a in problem A, then you reduce it to an instance of problem B, you're only assured that you reduce it to an instance of size poly(a) in problem B. So when you run the O(2^n) algorithm to solve problem B, the runtime is O(2^poly(a)).

(no subject)

Date: 2008-11-07 10:12 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
ah, the typical reductions that I'm familiar with don't change the size of the instance much.

(no subject)

Date: 2008-11-07 10:26 pm (UTC)
From: [identity profile] bhudson.livejournal.com
As [livejournal.com profile] easwaran mentioned, even if your gadgets just double the size of the instance, you're running in O(2^(2a)), which is a superset of O(2^a).
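To put numbers on that (mine, not from the thread): even a size-doubling reduction turns a 2^a-time solution into a 2^(2a)-time one, and the gap between the two is itself a factor of 2^a.

```python
# Solving a size-a instance of A directly in 2^a steps, versus reducing
# it to a size-2a instance of B and running B's 2^n-time algorithm.
for a in [10, 20, 30]:
    direct = 2 ** a
    via_reduction = 2 ** (2 * a)
    print(a, via_reduction // direct)  # the gap is exactly 2^a
```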

(no subject)

Date: 2008-11-07 10:20 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
<< Scott Aaronson has conjectured that there is no physically realizable notion of computation that solves NP-complete problems in polynomial time. >>

I imagine he wouldn't be the first!

(no subject)

Date: 2008-11-07 11:18 pm (UTC)
From: [identity profile] easwaran.livejournal.com
I think he's the one that's done the most to debunk claims of things that would do it (especially pointing out that quantum computation as currently imagined solves factoring and things like that, but nothing believed to be NP-complete), and he's also proposed this as a constraint on physical theorizing - if some physical theory would imply that you could solve NP-complete problems in polynomial time, then that theory is probably wrong. His stuff is interesting to read, as is his blog, which I occasionally glance at.
