Last Thursday I heard the last half of the DLS talk, featuring Doug James (who did his PhD with Dinesh Pai), about synthesizing sounds via physical simulation, using something called a "cubature" and some very serious-looking applied math. As with many engineering-type problems (e.g. vision), I didn't realize how difficult this was until I looked at the state of the art.

Realistically simulating the sound of a garbage can falling on the floor (5-10 seconds of footage) apparently takes days on modern computers. He explained why bubbles produce a rising pitch as they pop, and that water drops falling on water don't make a sound themselves: it's the resulting bubble that does!
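(Not from the talk, just a note to myself: the standard first-order model for a bubble's sound is the Minnaert resonance, where a spherical air bubble in water rings at f0 = (1/(2*pi*R)) * sqrt(3*gamma*p0/rho), so smaller bubbles ring at a higher pitch. A minimal Python sketch, using textbook values for the constants:)

import math

def minnaert_frequency(radius_m, p0=101325.0, gamma=1.4, rho=998.0):
    # Minnaert resonance of a spherical air bubble in water:
    # f0 = (1 / (2*pi*R)) * sqrt(3*gamma*p0 / rho)
    # p0: ambient pressure (Pa), gamma: heat-capacity ratio of air, rho: water density (kg/m^3)
    return (1.0 / (2.0 * math.pi * radius_m)) * math.sqrt(3.0 * gamma * p0 / rho)

# Smaller bubbles ring at a higher pitch:
for r_mm in (5.0, 2.0, 1.0, 0.5):
    print(f"R = {r_mm} mm -> f0 = {minnaert_frequency(r_mm / 1000.0):.0f} Hz")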

Since I'm a machine learning guy, I asked him about data-oriented / "semi-synthetic" approaches, e.g. sampling from an existing database of event-sound pairs and trying to interpolate/extrapolate to the situation at hand. He said that this is what everyone else is doing. Another ML-y thought: it would be interesting to solve the inverse problem, inferring the event from the sound (just as "vision is inverse graphics"). This would be useful in forensics when you have audio but no video.
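(To make the "semi-synthetic" idea concrete: this is purely my toy illustration, not anyone's actual system, and the event features here — mass, impact speed, hardness — are invented. A nearest-neighbour lookup over a database of event-sound pairs might look like:)

import numpy as np

# Toy database of (event features, recorded sound) pairs; the features are made up.
database = [
    (np.array([2.0, 1.5, 0.8]), "garbage_can_drop.wav"),
    (np.array([0.2, 3.0, 0.9]), "coin_drop.wav"),
    (np.array([5.0, 0.5, 0.3]), "pillow_thud.wav"),
]

def closest_sound(event_features):
    # Return the recording whose event features are nearest (Euclidean distance);
    # a real system would interpolate/warp between neighbours rather than pick one.
    return min(database, key=lambda pair: np.linalg.norm(pair[0] - event_features))[1]

print(closest_sound(np.array([1.8, 1.4, 0.7])))  # -> garbage_can_drop.wav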

Someone pointed out that many people don't know what real car crashes sound like, and that TV gives us a distorted idea. So Hollywood might not be that interested in synthetic audio.