# Simulation: Input-Uncertainty

The idea behind simulation models is that we can use them to make predictions about what might happen in the real world. Since we often want to use the results of these simulations to inform decision making, it is important for those results to be accurate and precise. The way we do this is by keeping a careful eye on the errors involved.

Traditionally, people have focused on trying to minimise the error that arises within the simulation itself; the technical name for this is the ‘simulation-estimation error’.

However, there is another important error that is often overlooked and needs to be taken into account as well: the ‘input-uncertainty error’. This concerns the uncertainty in the values you plug into your simulation in the first place.

*[Figure: Errors in Simulations; source: emilystori.wordpress.com]*

### Example

Consider a classic M/M/$\infty$ queuing system with arrival rate $\lambda$ and mean service time $\tau$. If we don’t know the true values of $\lambda$ and $\tau$, we have to estimate them from what we observe in the real world.

If we observe $m$ arrivals into the system, noting their inter-arrival times $A_i$ and their service times $X_i$, then we can estimate the true values of $\lambda$ and $\tau$ using the sample means $\bar{A}$ and $\bar{X}$:

$\hat\lambda = 1/\bar{A}$

$\hat\tau = \bar{X}$
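As a concrete illustration, these estimators take only a few lines to compute. The sketch below generates synthetic “observed” data with known $\lambda$ and $\tau$ (all values here are hypothetical choices for illustration), then forms $\hat\lambda$ and $\hat\tau$ as above:

```python
import random

random.seed(0)

# Hypothetical "real-world" data: m inter-arrival times A_i and
# service times X_i, generated synthetically with known
# lambda = 2.0 and tau = 0.5 so we can check the estimates.
m = 1000
true_lambda, true_tau = 2.0, 0.5
A = [random.expovariate(true_lambda) for _ in range(m)]   # mean 1/lambda
X = [random.expovariate(1 / true_tau) for _ in range(m)]  # mean tau

# Point estimates as in the text: lambda-hat = 1 / A-bar, tau-hat = X-bar
lambda_hat = 1 / (sum(A) / m)
tau_hat = sum(X) / m

print(f"lambda-hat = {lambda_hat:.3f}, tau-hat = {tau_hat:.3f}")
```

With $m = 1000$ observations the estimates land close to, but not exactly at, the true values; that residual estimation noise is what propagates into the simulation as input uncertainty.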

Imagine that we then run $n$ replications (repeats) of our queuing system simulation, and in each replication $j$ we observe the number of customers in the system once steady state has been reached, $Y_j$.

The true steady-state mean number of customers in the system is then estimated by $\bar{Y} = \frac{1}{n} \sum_{j=1}^{n} Y_j$. Clever people have derived the expectation and variance of this estimator $\bar{Y}$ to be,

$\mathbb{E}(\bar{Y})\ = \cfrac{m}{m-1}\lambda\tau$

$\text{Var}(\bar{Y}) \approx \cfrac{\lambda\tau}{n} + \cfrac{2(\lambda\tau)^2}{m}$
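One way to build intuition for these expressions is a quick numerical sketch. For an M/M/$\infty$ queue the steady-state number in the system is Poisson-distributed with mean $\lambda\tau$, so each replication’s observation can be sampled directly rather than by running a full discrete-event simulation (the parameter values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Plugged-in input estimates (hypothetical values for illustration)
lambda_hat, tau_hat = 2.0, 0.5

# For an M/M/inf queue, the steady-state number in system is
# Poisson(lambda * tau), so we sample each replication's Y_j directly.
n = 500
Y = rng.poisson(lambda_hat * tau_hat, size=n)

Y_bar = Y.mean()  # estimate of the steady-state mean number in system
print(f"Y-bar = {Y_bar:.3f} (target lambda*tau = {lambda_hat * tau_hat})")
```

Note that this sketch holds $\hat\lambda$ and $\hat\tau$ fixed, so its spread reflects only the simulation-estimation error, not the input-uncertainty term.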

The term $\cfrac{2(\lambda\tau)^2}{m}$ is the input-uncertainty error that we are interested in. We can compare its size to that of the simulation-estimation error $\cfrac{\lambda\tau}{n}$. If we make $n$ large, the simulation-estimation error decreases, but at some point it becomes smaller than the input-uncertainty error, which is fixed once the data have been collected.
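A small sketch makes this crossover concrete. With illustrative values $\lambda\tau = 1$ and $m = 1000$ observations, the simulation-estimation error $\lambda\tau/n$ falls below the fixed input-uncertainty error $2(\lambda\tau)^2/m$ once $n$ exceeds $m/(2\lambda\tau) = 500$:

```python
# Compare the simulation-estimation error (lambda*tau / n) with the
# input-uncertainty error (2 * (lambda*tau)^2 / m) for example values.
lam_tau = 1.0   # hypothetical value of lambda * tau
m = 1000        # number of real-world observations

input_uncertainty = 2 * lam_tau**2 / m  # fixed once the data are collected
for n in [10, 100, 1000, 10_000]:
    sim_error = lam_tau / n             # shrinks as replications grow
    dominant = "simulation" if sim_error > input_uncertainty else "input"
    print(f"n={n:>6}: sim={sim_error:.5f}, "
          f"input={input_uncertainty:.5f} -> {dominant} error dominates")
```

Beyond the crossover, adding more replications buys almost nothing: the total error is capped by the input uncertainty, and only collecting more real-world data (increasing $m$) can reduce it.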

Since the input-uncertainty error can dominate the simulation-estimation error, we clearly need to account for it. This is particularly true for problems that demand high precision.

As you might expect, real problems are not as simple as this example. We often have many inputs to our simulation, which complicates matters, and the output is often a highly non-linear (complicated!) function of those inputs.

This means that we can’t write down the output and read off the input-uncertainty error explicitly. So we need methods for quantifying input uncertainty, such as bootstrapping and building confidence intervals.
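A minimal bootstrap sketch is shown below (all names and values are hypothetical, and the Poisson shortcut again stands in for a full simulation): resample the observed input data with replacement, re-estimate $\lambda$ and $\tau$, re-run the simulation, and read off an interval from the spread of the outputs, so that the interval reflects input uncertainty as well as simulation noise:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed data (synthetic, true lambda = 2, tau = 0.5)
m = 500
A = rng.exponential(1 / 2.0, size=m)  # inter-arrival times, mean 1/lambda
X = rng.exponential(0.5, size=m)      # service times, mean tau

def simulate_mean(lam, tau, n=200):
    # Stand-in for a full simulation run: sample n steady-state counts
    # Y_j ~ Poisson(lam * tau) and return their average.
    return rng.poisson(lam * tau, size=n).mean()

# Bootstrap over the *input data*: each resample gives new input
# estimates, and hence a new simulation output.
B = 200
outputs = []
for _ in range(B):
    Ab = rng.choice(A, size=m, replace=True)
    Xb = rng.choice(X, size=m, replace=True)
    outputs.append(simulate_mean(1 / Ab.mean(), Xb.mean()))

outputs = np.array(outputs)
lo, hi = np.percentile(outputs, [2.5, 97.5])
print(f"95% bootstrap interval for the steady-state mean: "
      f"[{lo:.3f}, {hi:.3f}]")
```

The width of this interval, compared with the spread from fixed-input replications alone, shows how much of the overall uncertainty is attributable to the inputs.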