Every now and then, I hear people say that “user testing” is a method that only generates qualitative data. There are two problems with this statement.
First, we are not testing users. We are testing the usability of the product in question. It is usability testing, not user testing. In an age where “user experience” has superseded “usability,” we may want an even better name for this method, but it is most definitely not “user testing.”
Second, usability testing is a perfectly good way to gather quantitative data. It just so happens to also be a very good way to get qualitative data.
When you conduct a usability test, you can quantify the number of errors that happen during tasks (and categorize them for even cooler measurements), task completion time, and whether or not participants completed their tasks, and you can collect task-level and test-level survey data.
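To make that concrete, here is a minimal sketch of how task-level results from one session might be recorded and tallied. The field names, error categories, and numbers are all hypothetical, not a standard; they are only there to show that these measures are countable.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskResult:
    task: str                    # task identifier
    completed: bool              # task success: did the participant finish?
    time_seconds: float          # task completion time
    errors: List[str] = field(default_factory=list)  # categorized errors
    post_task_rating: Optional[int] = None           # e.g. a 1-7 ease question

# One participant's session (numbers and categories are invented)
session = [
    TaskResult("find pricing page", True, 48.2, ["wrong menu item"], 6),
    TaskResult("change billing email", False, 140.5,
               ["wrong menu item", "missed save button"], 3),
]

total_errors = sum(len(r.errors) for r in session)
completion_rate = sum(r.completed for r in session) / len(session)
print(f"{total_errors} errors, {completion_rate:.0%} task completion")
```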
Usability testing generally takes place with small sample sizes (on the order of 6 people or so), and this isn’t a barrier to descriptive statistics. You will have some big confidence intervals around your averages, but you can acknowledge that in your reporting and decision making and get on with it.
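Here is what that looks like for mean task completion time with six participants, using a t-based 95% confidence interval. The times are made up for illustration; the point is simply that the interval is wide but still reportable.

```python
import statistics

times = [48.2, 71.0, 55.4, 140.5, 62.3, 90.1]  # seconds, one per participant

n = len(times)
mean = statistics.mean(times)
sem = statistics.stdev(times) / n ** 0.5   # standard error of the mean
t_crit = 2.571                             # t critical value for 95% CI, df = 5

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"mean {mean:.1f}s, 95% CI [{low:.1f}, {high:.1f}]")
# With n = 6 the interval spans roughly 42 to 114 seconds: wide, but honest.
```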
I do understand thinking that quantitative data gathered from usability testing isn’t so helpful if you’re taking some kind of lean approach to product development. You’re moving quickly and treating usability testing as a more generative activity. Maybe it would be worthwhile to draw a distinction between those two activities: quick, generative testing on the one hand, and more formal, measurement-oriented testing on the other.