Do you really need statistical significance on that test?
Expecting research to remove all ambiguity is self-defeating.
A few days ago, I wrote about a common complaint of leaders: we aren’t moving fast enough. The stance leaders take toward research activities can be part of the issue.
For example, expecting research to remove all ambiguity and provide certainty is self-defeating: you will either chase a level of confidence the data cannot provide, or you will slow everything down by demanding statistical significance when it is not needed.
Don’t get me wrong: there are absolutely times when seeking statistical significance is the right move. But those tend to be the exception, for one of two reasons:
(1) Many products don’t have enough traffic to get statistically significant answers quickly.
(2) Most decisions are 2-way doors and do not need a statistically significant level of evidence. If you’re making incremental product changes and demanding statistical significance, you’ll be waiting a while. (Remember, we must scale our prototyping efforts to the risks at hand.)
So, is there a third door?
One idea is to have the team try making bigger, more radical changes in prototype tests. There is an inverse relationship between effect size and sample size needed for statistical significance. If you create bigger changes in your tests, you may get bigger signals. Most tests need directional clarity anyway, not statistical significance.
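To make that relationship concrete, here is a minimal sketch of a standard power calculation for a two-proportion test. The baseline conversion rate, lift values, alpha, and power below are hypothetical examples I’ve chosen for illustration, not numbers from this post, but the pattern is the point: the bigger the change you test, the fewer users you need to detect it.

```python
# Rough sketch (hypothetical numbers): sample size per variant needed to
# detect a given lift, using a standard two-proportion power calculation.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10                 # assumed baseline conversion rate
power_analysis = NormalIndPower()

for lift in (0.01, 0.03, 0.10):  # absolute lift produced by the change
    # Convert the before/after proportions into a standardized effect size
    effect = proportion_effectsize(baseline + lift, baseline)
    # Solve for the number of users per variant at alpha=0.05, power=0.8
    n = power_analysis.solve_power(effect_size=effect, alpha=0.05,
                                   power=0.8, alternative='two-sided')
    print(f"+{lift:.0%} lift -> roughly {n:,.0f} users per variant")
```

With these made-up numbers, detecting a one-point lift takes on the order of tens of thousands of users per variant, while a ten-point lift needs only a couple hundred. That is the leverage a more radical change buys you.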
If this feels untenable, the real issue may be an overly conservative culture that hasn’t developed risk fluency.