Church

Jul. 13th, 2008 10:08 pm
gusl: (Default)
[personal profile] gusl
Church is a new stochastic programming language based on Scheme (Church is to Scheme what IBAL is to OCaml). The idea is that programs represent stochastic processes (they specify how the data is generated).
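To make the "programs represent stochastic processes" idea concrete, here is a minimal sketch. Church itself uses Scheme syntax; this is Python purely for illustration, and all names are made up:

```python
import random

def generate():
    # A toy generative process: pick which coin we're holding (the
    # hypothesis), then flip it twice (the observed data).
    bias = random.choice([0.2, 0.8])                     # latent parameter
    flips = [random.random() < bias for _ in range(2)]   # generated data
    return bias, flips
```

Running `generate()` simulates one draw from the joint distribution over hypotheses and data; inference means asking questions about `bias` given the `flips`.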

Statisticians always want to compute the likelihood of the data under certain hypotheses (often plotted as a function of the model's parameters).


The easy way to do this is by rejection sampling: estimate the likelihood by simulating from the model with the given parameters and counting how often you get exactly the same data. But this is "exponentially slow" — the probability of an exact match shrinks exponentially in the size of the data.
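The naive estimator above can be sketched in a few lines. This continues the toy coin example (Python for illustration; in practice Church would run the generative program itself):

```python
import random

def estimate_likelihood(bias, observed, n=100_000):
    # Estimate P(observed | bias) by simulating the generative process
    # n times and counting how often the simulation reproduces the data.
    matches = 0
    for _ in range(n):
        flips = tuple(random.random() < bias for _ in range(len(observed)))
        if flips == observed:
            matches += 1
    return matches / n
```

For two observed heads and `bias = 0.8`, the true likelihood is 0.8² = 0.64, and the estimate converges there — but only because the dataset is two flips; with many observations, exact matches essentially never happen.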

So these languages support smarter inference methods like MCMC. I would also like to see some support for analytic methods, possibly by integrating with algebraic packages.
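For flavor, here is the smallest possible MCMC example: a Metropolis sampler over a continuous coin bias under a uniform prior. This is only a sketch of the kind of inference Church automates, not Church's actual algorithm; the proposal scheme and all names are my own illustrative choices:

```python
import random

def metropolis_bias(observed, steps=20_000):
    # Sample from the posterior over a coin's bias b in (0, 1), given
    # observed flips, under a uniform prior. Proposals are small Gaussian
    # steps, clamped to stay inside (0, 1) — a crude but serviceable choice.
    def likelihood(b):
        p = 1.0
        for heads in observed:
            p *= b if heads else (1.0 - b)
        return p

    b = 0.5
    samples = []
    for _ in range(steps):
        proposal = min(max(b + random.gauss(0, 0.1), 1e-6), 1 - 1e-6)
        # Accept with probability min(1, likelihood ratio); the prior is
        # uniform, so it cancels out of the ratio.
        if random.random() < likelihood(proposal) / likelihood(b):
            b = proposal
        samples.append(b)
    return sum(samples) / len(samples)   # estimate of the posterior mean
```

With data of two heads, the posterior is Beta(3, 1), whose mean is 0.75 — the sampler's average should land close to that.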

Such languages tend to be based on functional languages because functional programs are easier to reason about (unlike Scheme, Church forbids mutation).

Stochastic programming languages seem like a good way to organize algorithms that work in specific situations: you can see how much they overlap and how well they generalize, making it possible to better evaluate the novelty and improvement of new approaches.

(no subject)

Date: 2008-07-13 08:15 pm (UTC)
From: [identity profile] simrob.livejournal.com
There's a lot of interesting work in this area. Sungwoo Park, one of Frank's former students, did a Ph.D. thesis on a similar programming language, and languages like IBAL take a different approach (one that may be more related to Church, I'm not sure).

Is Church new? I don't remember encountering it when I was looking at this stuff a bit over a year ago.

(no subject)

Date: 2008-07-13 08:21 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
yes, brand new!

(no subject)

Date: 2008-07-13 08:37 pm (UTC)
From: [identity profile] the-locster.livejournal.com
The easy way to do this is by rejection sampling, i.e. estimate the likelihood by simulating with the model and parameters, and count how often you get the same data. But this is "exponentially slow".

Hi, I'm just starting to come around to these sorts of methods. Right now I'm pondering Restricted Boltzmann Machines, which I think work just like this — an RBM network has to be activated multiple times in order for the learning rule to settle on good weights, based on the successive activations of binary stochastic hidden nodes. It seems a little crude and, as you say, slow.

