A common site optimization problem is that you have too many hypotheses and not enough time to test them all. So the hypotheses have to be prioritized. People prioritize using scope, predicted impact, timeline, dependencies and risk. An often overlooked factor, though, is confidence.
Confidence is simply a measure of how certain you are that the test will succeed.
A hypothesis that is well supported by evidence is more likely to be successful, and so you’ll have a higher confidence level. The more information you have to go on, the better you can refine your hypothesis and anticipate errors. There are fewer unknown variables that could upset your results, and the hypothesis is more precise. Knowing more doesn’t guarantee that your hypothesis will be successful, but it does make success more likely.
A hypothesis with less data to support it is less likely to be successful, and so you’ll have lower confidence. Your hypothesis is less trustworthy because the facts you are unaware of can radically change how you interpret the facts you do know. Knowing less doesn’t mean that your hypothesis will fail, but it does make failure more likely.
As a general rule, more information means greater confidence in a hypothesis, and less information means lower confidence. Depending on risk tolerance – both your own and your company’s – low or high confidence in a hypothesis can be a significant prioritization factor.
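If it helps to make this concrete, one option is to fold confidence into a simple weighted score alongside the other prioritization factors. The sketch below is only an illustration, not a standard formula: the factor names, the 1-to-10 scales, the example hypotheses, and the weights are all assumptions you would replace with your own.

```python
# Illustrative sketch: ranking test hypotheses with a weighted score.
# The factors, scales, and weights here are hypothetical -- tune them
# to your own process and risk tolerance.

HYPOTHESES = [
    # (name,              impact, confidence, effort)  -- each scored 1-10
    ("Simplify checkout",      8,          7,      6),
    ("New hero headline",      5,          3,      2),
    ("Add trust badges",       4,          8,      1),
]

# A risk-averse team might weight confidence more heavily;
# a team comfortable with long shots might weight it less.
WEIGHTS = {"impact": 0.5, "confidence": 0.3, "effort": 0.2}

def score(impact: int, confidence: int, effort: int) -> float:
    """Higher impact and confidence raise the score; higher effort lowers it."""
    return (WEIGHTS["impact"] * impact
            + WEIGHTS["confidence"] * confidence
            + WEIGHTS["effort"] * (10 - effort))  # invert effort so cheaper tests score higher

# Rank the queue from highest to lowest score.
for name, impact, confidence, effort in sorted(
        HYPOTHESES, key=lambda h: score(*h[1:]), reverse=True):
    print(f"{score(impact, confidence, effort):5.2f}  {name}")
```

Shifting weight toward confidence favors safer, better-evidenced tests; shifting it away favors bigger bets on thinner evidence, which is exactly the risk-tolerance trade-off described above.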
So what do you do if you have a fantastic idea, but not a lot of data to back it up, i.e. your confidence level is low? My favorite solution is to run thought experiments to find evidence to prove, refine or disprove the hypothesis.
Sometimes, though, even if you can’t find a lot of information to back up a test prior to launch, it’s still worth running. It comes down to what the rest of the testing queue looks like and what a winning or losing test would tell you.