Hub for Simulation-based inference sources and research ideas
created: 2020-11-11 · modified: 2021-03-25
Research ideas
The field of Probabilistic programming and autodiff, combined with new ideas from Simulation-based inference, seems pretty powerful: the aim is to generalize the construction of arbitrary statistical models, perform inference on them, and/or learn and backpropagate signal within them. This is a really neat area to look at.
It's pretty interesting how much everyday reasoning feels like some sort of SBI. For example, when I hear a noise, I generally have an idea of where it might have come from. Implicitly, my brain builds a quick model of that area and simulates some possible situations to see what kinds of events might've led to that noise. The same goes for pretty much any other observation where uncertainty is involved: the sensory input is the real data, our brain builds the simulator, and we try out different things happening (parameter values). We usually land on something that feels correct, or at least that we can convince ourselves is our best guess. That's a point estimate from the parameters' posterior, and we've internally carried out something along the lines of SBI. The interesting questions here are:
How do we go about building the simulator? I suppose we have some latent mental models already in place. Can we reasonably build a simulator from scratch computationally, or do we need some existing implementation? Maybe likelihood methods like SNLE could be responsible for learning the likelihood and then using it as a rough simulator to work from… or really, that's probably what's already happening (a minimal sketch of this idea follows these questions).
How do we converge so quickly on reasonable parameter estimates? It seems like a lot of options are tossed out almost immediately because we know they wouldn’t make sense, and so we don’t even consider them. This feels something like prior knowledge or intuition, but maybe we’re also just thinking very quickly through some very simple examples to find values to try that are likely to work.
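As a toy version of both questions, here's a minimal sketch of the SNLE-style idea: fit a surrogate likelihood q(x | θ) from simulator runs, then combine it with a prior so implausible parameters are downweighted immediately. The simulator, the Gaussian density network, and the grid search are all illustrative assumptions of mine, not the actual SNLE algorithm.

```python
# Minimal sketch of the SNLE idea: learn a surrogate likelihood q(x | theta)
# from simulator runs, then combine it with a prior to score parameter guesses.
# Everything here (toy simulator, Gaussian surrogate, grid search) is an
# illustrative assumption, not the full sequential SNLE algorithm.
import torch
import torch.nn as nn

def simulator(theta):
    # Toy "world": the observation is the parameter plus jitter.
    return theta + 0.5 * torch.randn_like(theta)

# Conditional Gaussian density network: maps theta -> (mean, log_std) of x.
net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

prior = torch.distributions.Normal(0.0, 2.0)  # beliefs before observing anything

for step in range(2000):
    theta = prior.sample((128, 1))            # draw parameters from the prior
    x = simulator(theta)                      # run the (black-box) simulator
    mean, log_std = net(theta).chunk(2, dim=1)
    # Maximize log q(x | theta), i.e. fit the surrogate likelihood.
    loss = -torch.distributions.Normal(mean, log_std.exp()).log_prob(x).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Hearing a noise": score candidate explanations under prior * surrogate likelihood.
x_obs = torch.tensor([[1.3]])
candidates = torch.linspace(-4, 4, 200).unsqueeze(1)
mean, log_std = net(candidates).chunk(2, dim=1)
log_post = (torch.distributions.Normal(mean, log_std.exp()).log_prob(x_obs)
            + prior.log_prob(candidates))
print("rough MAP estimate:", candidates[log_post.argmax()].item())
```

Note how the prior term does the fast pruning from the second question: candidates far outside the prior never score well, so the "search" concentrates almost immediately on plausible values.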
Multi-fidelity approaches, meta-learning metacontrollers for learning from arbitrary “experts”
Hamiltonian flows as replacements for MAFs, etc., in neural density estimation (NDE)
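For reference, below is the kind of neural density estimator a Hamiltonian flow would be swapped in for: a minimal MAF fit by maximum likelihood using the nflows library's standard building blocks. The toy data, layer count, and hidden sizes are arbitrary choices of mine.

```python
# Sketch of fitting a MAF (the baseline NDE a Hamiltonian flow would replace)
# with the nflows library; data and hyperparameters are illustrative only.
import torch
from nflows.flows.base import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms.base import CompositeTransform
from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform
from nflows.transforms.permutations import ReversePermutation

dim = 2
layers = []
for _ in range(5):
    layers.append(ReversePermutation(features=dim))
    layers.append(MaskedAffineAutoregressiveTransform(features=dim, hidden_features=32))
flow = Flow(CompositeTransform(layers), StandardNormal(shape=[dim]))

opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
data = torch.randn(1024, dim) @ torch.tensor([[1.0, 0.8], [0.0, 0.6]])  # toy correlated data

for step in range(500):
    # Maximum likelihood: the flow gives exact log-densities by construction.
    loss = -flow.log_prob(inputs=data).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

samples = flow.sample(100)  # draw from the learned density
```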
Differentiable environments and their compatibility with SNPE/SNLE approaches, where we get built-in flexibility and the ability to optimize. Neural networks as simulators: learning an NN approximation to the simulator itself as an efficient, differentiable oracle (sketched below).
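A minimal sketch of that last idea, with assumed toy pieces (the sine "simulator" and the MLP surrogate are stand-ins): train a network to mimic the simulator from (θ, x) pairs, then treat it as a differentiable oracle and gradient-descend on the parameters themselves.

```python
# Sketch of "NN as a differentiable stand-in for the simulator": fit a small
# network to (theta, x) pairs from a black-box simulator, then backprop
# through the surrogate to optimize theta against an observation.
import torch
import torch.nn as nn

def blackbox_simulator(theta):
    # Pretend this is expensive and non-differentiable.
    return torch.sin(3.0 * theta) + 0.1 * torch.randn_like(theta)

# 1) Train a surrogate on simulator runs.
surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(3000):
    theta = 4.0 * torch.rand(256, 1) - 2.0       # sample parameters uniformly
    x = blackbox_simulator(theta)
    loss = (surrogate(theta) - x).pow(2).mean()  # regression onto simulator outputs
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Freeze the surrogate and use it as an efficient, differentiable oracle:
#    gradient-descend on theta itself to explain an observation.
for p in surrogate.parameters():
    p.requires_grad_(False)

x_obs = torch.tensor([[0.7]])
theta_hat = torch.zeros(1, 1, requires_grad=True)
theta_opt = torch.optim.Adam([theta_hat], lr=0.05)
for step in range(500):
    loss = (surrogate(theta_hat) - x_obs).pow(2).sum()
    theta_opt.zero_grad()
    loss.backward()
    theta_opt.step()
print("theta explaining x_obs:", theta_hat.item())
```

The appeal is that step 2 needs no further simulator calls: once the surrogate is trained, parameter search becomes ordinary gradient descent, which is exactly the built-in optimizability the differentiable-environments idea is after.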