Jun. 22nd, 2010

gusl: (Default)
After some back-of-the-envelope analysis, I'm concluding that my code runs in roughly O(n⁴k) time, even after giving up on being fully Bayesian. This means it would take longer than a month to run on the data we're interested in... Bummer!

Ok, back to the drawing board...
gusl: (Default)
sshfs (SSH filesystem) is really really handy! By mounting a remote directory locally, I can view / edit / attach / copy remote files as if they were local, bypassing awkward commands and password prompts (not to mention the junk that is left behind after running scp).
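For reference, a minimal sketch of the workflow (the hostname and all paths below are made-up examples, not anything from my setup):

```shell
# Mount a remote directory at a local mount point
# (user@remote.example.com and the paths are hypothetical)
mkdir -p ~/mnt/remote
sshfs user@remote.example.com:/home/user ~/mnt/remote

# Remote files now behave like local ones:
ls ~/mnt/remote
cp ~/mnt/remote/results.csv ~/Desktop/

# Unmount when done (fusermount -u on Linux; plain umount on macOS)
fusermount -u ~/mnt/remote
```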

The only thing that bothers me a bit is that when I run things from a locally-mounted remote directory, I get the illusion that they are running remotely. But in fact it is my own CPU that is running and heating up... and reading and writing files is probably slow, since every access goes over the network. (In an ideal world it would be the other way around: a remote process would be accessing the files on my HD, but that's beside the point.)

I'm wondering if there's a nice way to start remote processes from locally-mounted remote drives: a way that (a) knows which directory I'm running the process from, (b) can resolve references to locally-mounted remote files into the actual remote files, and (c) has a simple syntax, e.g. a "*" after the command name to indicate "run remotely", so that Rscript* run.R | grep pattern would run Rscript on the remote CPU and grep on the local one.
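One way to approximate this, sketched below with entirely made-up names (the mount point ~/mnt/remote, host user@remote, and remote root /home/user are hypothetical): a shell function that strips the mount point off the current directory, cds into the corresponding real remote directory over ssh, and runs the command there. Since output streams back through the pipe, something like rrun Rscript run.R | grep pattern runs Rscript on the remote CPU and grep on the local one.

```shell
# Hypothetical setup: sshfs user@remote:/home/user ~/mnt/remote
MOUNT_POINT="$HOME/mnt/remote"
REMOTE="user@remote"
REMOTE_ROOT="/home/user"

rrun() {
    # (a) where am I, relative to the mount point?
    local rel="${PWD#"$MOUNT_POINT"}"
    # (b) map the locally-mounted path back to the real remote path,
    # (c) then run the command there; stdout streams back locally,
    # so downstream pipe stages still run on the local CPU
    ssh "$REMOTE" "cd '$REMOTE_ROOT$rel' && $*"
}
```

This only handles the working directory, not file arguments that mention the mount point explicitly; rewriting those would need an extra substitution pass over "$@".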
