What is adequate user research when developing or redeveloping user interactions for a web application? Is it necessary to conduct an exhaustive, quantitative study equivalent to those done in the sciences, or is it enough to run smaller, iterative tests to gain insight into the problematic aspects of a user interaction? My preference is for the latter, but since I work in an academic library, there is a tendency to put more emphasis on the former: anything other than rigorous testing is dismissed as pure speculation and opinion. I disagree.
If I apply Jakob Nielsen’s findings on usability testing and test with five people, I should be able to use those insights to adjust a user interaction, especially when the majority of participants point out the same issues with the interface or workflow. When five people tell you, “I don’t know what to do next” or “I don’t understand what this means,” that should be evidence enough to change the interaction and test again. These methods and findings should be repeatable, but it shouldn’t require a full-blown research study to get first impressions of the user interactions.
We say all sorts of things about agile software development at the library, but the one thing I don’t hear enough about is action. We are making a thing, not the thing, especially at first. If we are truly developing in an agile environment, there should be more involvement with users and more testing of our assumptions. We don’t do an outstanding job of this at the library. We tend to over-intellectualize the conversation, preferring to be right (or to appear right) in arguments about user interaction. The outcome of these lengthy conversations is that we put the user last.
I am not saying we shouldn’t have reasonable discussions about what we are building and how we think it should work, but those discussions shouldn’t take precedence over putting the thing in front of the people who use the application to do their work. Usage trumps words.