# Simulation-based inference

*Notes on simulators and modeling them*

# Simulators

A simulator is a probabilistic program that (possibly implicitly) defines a statistical model. It takes a vector of parameters $\theta \in \Theta$ as input ($\Theta$ is the parameter space), internally samples a series of states or latent variables $z_i \sim p_i(z_i|\theta,z_{\lt i})$, and produces a data vector $x \sim p(x|\theta,z)$ as output. In other words,

- A vector of real values $\theta$ parametrizes the simulator, realizing the possibly implicit statistical model defined by the program. The implicit case is what SBI is meant to address, since there is no straightforward way to apply standard inference procedures to it.
- Internal variables $z_i$ are sampled from distributions conditioned on the provided parameters and any previously sampled latent variables $z_{\lt i}$. These are the conditional distributions $p_i(z_i|\theta,z_{\lt i})$.
- A vector of simulated observations $x$ is sampled from a distribution conditioned on the parameters $\theta$ and the vector of internally sampled latent variables $z$. This is the distribution $p(x|\theta,z)$.

## Detailed example
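As a minimal illustration of the structure above, here is a toy simulator with the three ingredients just listed: parameters $\theta$, sequentially sampled latents $z_i$, and a noisy output $x$. The specific model (a short latent random walk) and all names are illustrative choices, not taken from any source.

```python
import numpy as np

def simulator(theta, rng=None):
    """Toy simulator: theta parametrizes a short latent random walk,
    and x is a noisy observation of its final state (all choices illustrative)."""
    rng = rng or np.random.default_rng()
    drift, noise_scale = theta
    z = []
    for i in range(5):
        prev = z[-1] if z else 0.0
        # z_i ~ p_i(z_i | theta, z_{<i}): Gaussian step around prev + drift
        z.append(rng.normal(prev + drift, noise_scale))
    # x ~ p(x | theta, z): observe the final latent state with measurement noise
    return rng.normal(z[-1], 0.1)

x = simulator((0.5, 1.0))
```

Running the program forward (sampling $x$ given $\theta$) is trivial; what the program does *not* expose is any way to evaluate $p(x|\theta)$, which is exactly the gap SBI targets.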

# Inference in simulators

For starters, statistical inference is a class of analytical techniques for extracting information from data about the parameter values of the underlying process that produced the data. This primarily takes place under one of two main perspectives: frequentist or Bayesian statistics. In the former, inference often consists of deriving point estimates and confidence intervals for parameters of interest. Bayesians instead treat the parameters themselves as random and produce an entire posterior distribution describing likely values of the parameter of interest. Both approaches require an explicit, tractable likelihood (or model) $p(x|\theta)$ describing how data are sampled under given parameter values.
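To make the contrast concrete, this is what inference looks like when the likelihood *is* explicit and tractable: with a Bernoulli likelihood and a conjugate Beta prior, the full Bayesian posterior is available in closed form. This is a standard textbook example, sketched here only for contrast with the simulator setting.

```python
# Conjugate Bayesian update: Bernoulli likelihood p(x|theta), Beta prior.
# Because p(x|theta) is explicit, the posterior has a closed form.
data = [1, 0, 1, 1, 0, 1]   # observed coin flips (illustrative data)
alpha, beta = 1.0, 1.0      # Beta(1, 1) prior, i.e. uniform on theta

# Posterior: Beta(alpha + #heads, beta + #tails)
alpha_post = alpha + sum(data)
beta_post = beta + len(data) - sum(data)

posterior_mean = alpha_post / (alpha_post + beta_post)  # 5/8 = 0.625
```

Nothing like this is possible when the model is only available as a black-box simulator, because the update relies on evaluating the likelihood pointwise.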

SBI is an important inference technique because it applies to this very flexible model formulation. Statistical models with intractable or implicit likelihoods are often the ones we face in the real world, escaping the grasp of more restrictive but exact methods like standard Bayesian or frequentist inference. In the general formulation above, the likelihood function implicitly defined by a simulator depends on an integral over all possible trajectories through the latent space (i.e. all possible internal states the simulator could pass through). In general,

$p(x|\theta) = \int p(x,z|\theta)\,dz$

This integral is typically intractable: the latent space is high-dimensional and the integrand has no closed form, so the likelihood can only be sampled from, not evaluated.
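A sketch of this marginalization on a toy Gaussian model, chosen purely because its marginal is known in closed form so the Monte Carlo estimate can be checked; the model itself is an illustrative assumption.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian density, written out so the example stays numpy-only."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
theta, x_obs = 0.0, 1.0

# Toy model chosen so the marginal is known exactly:
#   z ~ N(theta, 1),  x | theta, z ~ N(z, 1)  =>  p(x|theta) = N(x; theta, sqrt(2))
z = rng.normal(theta, 1.0, size=100_000)

# Monte Carlo estimate of the marginal: (1/N) * sum_n p(x_obs | theta, z_n)
mc_estimate = gauss_pdf(x_obs, z, 1.0).mean()
exact = gauss_pdf(x_obs, theta, np.sqrt(2.0))
```

Here the average over latent samples converges to the exact marginal; in a realistic simulator the latent space is far too large for this naive average to be accurate, which is why dedicated SBI methods exist.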

# Approximate Bayesian Computation (ABC)
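Rejection ABC is the simplest likelihood-free scheme: draw parameters from the prior, run the simulator, and keep only the draws whose output lands within a tolerance $\epsilon$ of the observed data. A minimal sketch, assuming a one-dimensional Gaussian stand-in simulator and a uniform prior (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    # Stand-in simulator: x ~ N(theta, 1); any black-box simulator works here.
    return rng.normal(theta, 1.0)

x_obs = 2.0
eps = 0.1   # tolerance: accept draws whose output lies within eps of x_obs

# Rejection ABC: sample theta from the prior, run the simulator,
# keep theta whenever the simulated data falls close to the observation.
accepted = []
while len(accepted) < 500:
    theta = rng.uniform(-5.0, 5.0)      # prior draw
    if abs(simulate(theta) - x_obs) < eps:
        accepted.append(theta)

# `accepted` is an approximate posterior sample; as eps -> 0 it converges
# to a sample from the true posterior p(theta | x_obs).
posterior_mean = float(np.mean(accepted))
```

The cost of this simplicity is sample efficiency: the acceptance rate collapses as $\epsilon$ shrinks or the data dimension grows, which motivates the neural SBI methods listed in the sources below.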

# Implications

A general-purpose SBI method would be remarkably useful: a wide range of real-world problems can be framed as inferring the parameters of some underlying statistical process (or at least a close approximation to one).

- Questions about scope: how is this different from the general learning problem?
- When the neural network *is* the simulator (literally simulating a network by passing in inputs and observing outputs), we want to learn the parameters of the simulator (the network weights) that produce data close to real-world data. Is this not the same formulation as the generative modeling problem?
- What about very general simulators, where an entire program is learned as the "parameters"?

# Sources

- Simulation-based inference
- sbi package
- Flexible statistical inference for mechanistic models
- Fast free inference of simulation models
- Automatic Posterior Transformation for Likelihood-free Inference
- Sequential Neural Likelihood
- Likelihood-free MCMC with Amortized Approximate Likelihood Ratios
- On Contrastive Learning for Likelihood-free Inference
- The frontier of simulation-based inference

- Active learning
- Probabilistic programming
- Other related papers