From: federico on
Hello everybody!

I'm writing in the hope that somebody will be so kind as to show me the right way to solve my problem. It's a mixture of simulation theory and statistics, so I hope this is the right place to ask. :)

In brief, I have a simulator (it simulates an IT system) from which I observe two correlated random variables, called "Events" and "Failures". "Events" counts the total requests submitted to the system, while "Failures" counts the requests that experienced a failure, i.e. requests that were not successfully served by the system. During the simulation, every time a request is successfully completed only "Events" is updated, while every time a failure occurs both measures are updated. My simulation, like every simulation, consists of more than one run: at the end of each run I print the value of both variables to a trace file.
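
Just to be concrete, the per-run bookkeeping is roughly the following (only a Python sketch with names I made up, not my simulator's real code):

events = 0    # total requests submitted to the system
failures = 0  # requests that were not served successfully

def on_request_done(success):
    # called once per request, when its service ends
    global events, failures
    events += 1          # every request, successful or not, counts as an event...
    if not success:
        failures += 1    # ...and only the failed ones also count as a failure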

My problem is that I'm not directly interested in the two measures themselves, but rather in their ratio, which I use to calculate the reliability of the system:

Rel = 1 - (Failures / Events)
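
For instance (just an invented example), a run ending with Events = 1000 and Failures = 12 would give Rel = 1 - 12/1000 = 0.988.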

Since I have to compare this reliability value with the one calculated by another tool (to which, of course, I submit the same system), I'd like to state, with statistical evidence, whether the results provided by the two tools are similar or not.

This means that I should build a confidence interval for Rel. Unfortunately, my sample of (Failures, Events) pairs is very small, since a simulation usually consists of no more than 10-15 runs (which means having only 10-15 observations).

So, my question is: is there an approach for building a confidence interval based on a sample of just 10-15 observations? If not, which approach would be a good compromise?
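
For reference, the kind of calculation I have in mind is sketched below: compute the per-run reliability values and build a plain Student-t confidence interval over them. The run counts are invented just to make the sketch runnable, and I don't know whether this is actually appropriate for such a small sample, which is exactly what I'm asking.

from statistics import mean, stdev
from scipy.stats import t

# (Events, Failures) observed at the end of each run -- invented numbers
runs = [(1000, 12), (980, 9), (1020, 15), (995, 11), (1010, 10),
        (990, 13), (1005, 8), (1000, 14), (985, 12), (1015, 9)]

# per-run reliability: Rel = 1 - Failures / Events
rel = [1 - failures / events for events, failures in runs]

n = len(rel)
m = mean(rel)
s = stdev(rel)                                  # sample std. dev. (n - 1 in the denominator)
half_width = t.ppf(0.975, n - 1) * s / n**0.5   # 95% two-sided Student-t interval

print("Rel = %.5f +/- %.5f (95%% CI over %d runs)" % (m, half_width, n))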

Thank you so much for your attention and help!

Bye,
Federico