gusl: (Default)
Why does the Gomory-Hu algorithm work? i.e. why does the tree that it outputs have the property that you can find the min-cut between two nodes by removing the smallest edge on the tree path between them and returning the resulting connected components? (My hunch is that Fig. 4 has the answer if I stare at it long enough)



Every edge of this tree (green) represents a cut (in particular, the min-cut between the two nodes that it connects). But why must the min-cut between non-adjacent nodes in the tree (say, 2 hops apart) be one of the min-cuts corresponding to the edges on the path between them? Where is the optimal substructure?

Answer: Let the min-cut tree be A-B-C. Any cut separating A from C must also separate either A from B or B from C, and thus has value at least as big as the smaller of the two tree edges. Conversely, the cut corresponding to A-B (or to B-C) by itself also separates A from C. So the min-cut between A and C is exactly the smaller of the two edge values, and the same argument extends to longer tree paths.
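
To make the query step concrete, here is a minimal sketch (plain Python, with a data format of my own choosing): given a Gomory-Hu tree stored as an adjacency dict, the min s-t cut value is the smallest edge weight on the tree path from s to t, and deleting that edge gives the two sides of the cut.

```python
# Minimal sketch of the Gomory-Hu query step (data format is my own choice):
# the tree is an adjacency dict {node: {neighbor: cut_value}}.

def min_cut_from_gomory_hu_tree(tree, s, t):
    """Return (cut_value, side_containing_s) for the min s-t cut,
    read off the Gomory-Hu tree: the smallest edge on the s-t path."""
    # Find the unique s-t path in the tree by DFS, remembering parents.
    parent = {s: None}
    stack = [s]
    while stack:
        u = stack.pop()
        if u == t:
            break
        for v in tree[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)

    # Walk back from t to s, tracking the lightest edge on the path.
    u, best = t, None
    while parent[u] is not None:
        p = parent[u]
        if best is None or tree[p][u] < best[0]:
            best = (tree[p][u], (p, u))
        u = p
    cut_value, (a, b) = best

    # Removing that edge splits the tree; the component containing s
    # is one side of the min s-t cut in the original graph.
    side = {s}
    stack = [s]
    while stack:
        u = stack.pop()
        for v in tree[u]:
            if {u, v} != {a, b} and v not in side:
                side.add(v)
                stack.append(v)
    return cut_value, side

# Example on the A-B-C tree from the argument above:
tree = {'A': {'B': 3}, 'B': {'A': 3, 'C': 5}, 'C': {'B': 5}}
print(min_cut_from_gomory_hu_tree(tree, 'A', 'C'))  # (3, {'A'})
```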

The closest thing to a non-confusing explanation of the steps is this, but it's a bit high-level and glosses over details like which two nodes to pick as source/target (does it matter?), and has a very silly use of color on the last figure.

There's also supposed to be a simpler algorithm by Gusfield, but I wasn't able to find any substantive information about it. The paper isn't available online.

To Do:
(1) implement min-cut
(2) implement min-cut-tree (Gomory-Hu or Gusfield or other)
(3) implement the neat idea of doing graph clustering by min-cut-trees, as explained in Gary Flake's appropriately titled paper. In typical CS Theory fashion, this paper explains that all nice things are NP-Hard, but goes on to give formal guarantees that the proposed method is almost-optimal.
gusl: (Default)
Lately, I've been faced with problems of the following type:
* there are many ways to represent an object
* a canonical representation is either hard to define, or unnatural to work with

(1) optimizing over the space of k-dim linear subspaces (a.k.a. k-flats). They can be represented as bases (more canonically, orthogonal bases), but such a representation is far from unique. We can define k-flats to be points in a Grassmannian, which is the appropriate quotient space (in which all bases of the same subspace form an equivalence class). But optimizing here involves doing calculus on surfaces, which requires knowledge of differential geometry... or, more easily, one could use Lagrange multipliers, but I don't have enough constraints (I don't know how to constrain the rotation of the orthogonal basis).
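
One partial workaround I keep coming back to (a sketch of my own, using numpy, and not a solution to the optimization problem itself): represent a k-dim subspace by its orthogonal projection matrix P = Q Q^T, where Q is any orthonormal basis. P does not depend on which basis you picked, so it is a canonical representation, even though optimizing over the set of such projectors is still a constrained problem.

```python
# Sketch (my own illustration, using numpy): two different bases of the same
# k-dim subspace map to the same orthogonal projection matrix P = Q @ Q.T,
# so P is a basis-independent (canonical) representation of the subspace.
import numpy as np

def projector(basis):
    """Orthogonal projector onto the column span of `basis` (n x k)."""
    q, _ = np.linalg.qr(basis)      # orthonormalize the columns
    return q @ q.T

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2))     # a random basis of a 2-dim subspace of R^5
M = rng.standard_normal((2, 2))     # an invertible change of basis (almost surely)
B2 = B @ M                          # a different basis of the *same* subspace

print(np.allclose(projector(B), projector(B2)))  # True
```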

(2) suppose you have an unknown probability distribution over clusterings of nodes. You can easily assign a label to each node such that two nodes are in the same cluster if and only if they have the same label. Of course, these labels are just names, and permuting them results in an equivalent structure, which necessarily has the same probability. Now suppose you get several samples from this distribution (say, by MCMC), and you want to estimate the distribution. For each sample, you could simply add the other n!-1 relabelings ("mirror images"), but can you do better?
This is known as the "label-switching problem", and in some cases it can be addressed by "artificial identifiability constraints" (e.g. of the form: WLOG, node 1 is always labeled "a"), but this too can become unmanageable and unnatural.
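
A cheap trick that helps with the bookkeeping (a sketch of my own, not a real solution to label-switching): map every sample to a canonical relabeling, e.g. rename clusters in order of first appearance, so that all permuted copies of the same clustering collapse to one representative before counting.

```python
# Sketch (my own illustration): canonicalize a clustering by renaming labels
# in order of first appearance, so label-permuted copies become identical.
from collections import Counter

def canonical(labels):
    """Relabel clusters 0, 1, 2, ... in order of first appearance."""
    mapping = {}
    out = []
    for lab in labels:
        if lab not in mapping:
            mapping[lab] = len(mapping)
        out.append(mapping[lab])
    return tuple(out)

# Two MCMC samples that differ only by a label permutation:
sample1 = ['a', 'a', 'b', 'c', 'b']
sample2 = ['c', 'c', 'a', 'b', 'a']
assert canonical(sample1) == canonical(sample2) == (0, 0, 1, 2, 1)

# Estimating the distribution over clusterings by counting canonical forms:
samples = [sample1, sample2, ['a', 'b', 'b', 'c', 'c']]
print(Counter(canonical(s) for s in samples))
```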

Tangentially, is there a nice way to represent graphs modulo isomorphism? I'm aware of spectral techniques (e.g. graph Laplacian), but these are usually lossy, i.e. querying whether two graphs are the "same" (i.e. isomorphic) will result in some false positives.
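
For concreteness, here is the kind of lossy spectral fingerprint I have in mind (a sketch of my own, using numpy): the sorted Laplacian spectrum is invariant under relabeling, so isomorphic graphs always match, but non-isomorphic cospectral graphs exist, which is exactly where the false positives come from.

```python
# Sketch (my own illustration): the sorted Laplacian spectrum as a lossy,
# isomorphism-invariant fingerprint of a graph given as an adjacency matrix.
import numpy as np

def laplacian_spectrum(adj):
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian L = D - A
    return np.sort(np.linalg.eigvalsh(lap))       # invariant under relabeling

# A path graph 1-2-3 and a relabeled copy give identical spectra ...
P = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
P_relabeled = [[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]]
print(np.allclose(laplacian_spectrum(P), laplacian_spectrum(P_relabeled)))  # True
# ... but equal spectra do NOT imply isomorphism (cospectral graphs exist),
# so "same fingerprint" queries can return false positives.
```
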
gusl: (Default)
While arguing that CS is more fundamental than Physics, Daniel Lemire gives it the status of a "natural science", since it supposedly makes fundamental predictions about the universe. He claims that both (a) Digital Physics / Strong Church-Turing Thesis (SCTT) and (b) the possibility of AI are falsifiable predictions ("Yet, these predictions remain unfalsified to this day.")

I made a skeptical argument for each of the above. Below I'm quoting my objection to (a).

In our usage here, "Strong Church-Turing Thesis" refers to the idea that the computation that can occur in our universe is asymptotically equivalent to Turing Machine computation, as far as space- and time-complexity are concerned.

from the comments to "Computer Science is the most fundamental natural science" [edited]:

@Daniel
I was just thinking: is the Strong Church-Turing Thesis (SCTT) really falsifiable?
Any such falsification must involve a proof that nature solves a certain problem (i.e. via a "physical algorithm") asymptotically faster* than a Turing Machine. To my knowledge, there is no accepted way to prove complexity results empirically: remember that you're trying to draw conclusions about what happens to the computation time (and/or space) as n goes to infinity! To extrapolate to infinity, you need a theory (a.k.a. computational model) to tell you how your algorithm scales (in this case, this will be a physical theory). With a computational model in hand, you are back in CS Theory land and can compare things asymptotically, but it's hard to see how you could empirically validate such a physical model in the first place.

OTOH, it seems like we've accepted that the asymptotic behavior of PCs is modeled by Turing Machines... although we've seen in practice that this is an idealization, and PC performance is in fact slightly worse than what would be predicted by Random-Access Machines, as we scale up.


* - or the opposite: nature cannot compute as fast as some algorithm on a Turing Machine!


If one can prove that quantum computing can scale and that BQP != P, that would falsify the SCTT (at least the classical SCTT), so it seems falsifiable after all... to the extent that such a thing can be proven.
