Machine Learning presentations:
* Lots of bioinformatics stuff, which I understood nothing of.
* kutta's talk had a slide about "Lacerdian Multinoullis". (I coined the term "multinoulli" because "multinomial" sounds too much like "binomial". Kevin loved it and has been pushing its usage since; he's promoting the term in his book.)
* My presentation went well, it seems. kutta found it one of the clearest. Kevin pointed out only a minor flaw in a scatterplot, and suggested coupling my 24 autoregressions. I'm really enjoying this project, though I occasionally worry about having to solve a difficult inference problem and running out of time.
Then I met with Geoff Hinton, which was the biggest flurry of ideas I've discussed in a really long time. I don't remember ever having such an idea-dense meeting. Time really flew, and I have a full page of notes.
After that, I attended the FOPI reading group. Thanks to my nitpickiness about people who use "d-separation equivalence" interchangeably with "distribution equivalence", we got into a tangent in which I essentially gave my UAI talk to everyone there. I really enjoyed that.
Then I did my extra Vision homework, and researched[1] a paper on Soft Weight-Sharing to solve my problem (by guess who? Hinton!).
I'm getting used to these 12h days, and I don't mind it at all. :-)
[1] What do you call it when you skim a paper, find the section you want, and read that section very thoroughly / several times? Did you "read" the paper?