I’m not a researcher, and I don’t play one on TV. I’m a QA engineer. And personally, I’m paranoid.
Sure, I test dscout’s apps six ways from Sunday with myriad tools and techniques. I’ve played the roles of various user types — new user, careless user, experienced user. But no matter how much time I spend testing products for the engineering team, I always have a minor (read: major) anxiety attack when it’s time to finally send a release to the app stores.
What if our users report all sorts of bugs? What if they start wondering, “wow, how did they miss that?”
After all, things are different in the wild. So different, in fact, that a truly rigorous testing process includes getting your product out there with real users in natural environments and authentic scenarios.
As it happens, the dscout product I’m testing is the perfect channel for reporting back what users are doing. It’s the user app side of a research platform that product owners and researchers use to understand what people need, think about, and do “in the wild.”
Like our customers, dscout relies on our own research tools to understand people’s real-world experiences with our product. So I thought it would be interesting to share the insights we gained in using it to test the completely remodeled Android app we launched this week.
After standard QA testing and before launch, I took on the role of researcher and fielded a diary study with Android users to analyze their expectations and satisfaction with the very app they were using. By using our own research platform, I was able to quantify our product’s quality before full production, and begin to understand our product in the way our users understand it.
Here’s a download of our process and takeaways:
Putting the user in your user stories
On an agile engineering team, everything we do is about user stories: writing, estimating, arranging in neat Trello columns from Backlog to Baked to QA to Shipped.
What can sometimes be funny about user stories is the conspicuous absence of the user. And that’s where dscout comes in. Researchers use it to capture the moments that matter to people. Analyze those moments, and you can develop even more robust user stories.
So that’s what we did. With real-world testing you get input from your users. You find out how they use your product and what they expect to do with it. And you aren’t counting on co-workers, friends or family to tell you the real deal about your design.
You won’t even be relying on your engineers, who are immersed in your product all day. Testing is a team sport, and real users are the MVPs.
Using dscout to test our app helped us build more powerful user stories, prioritize our work, and pinpoint our improvements. Here’s how:
- Using 30-second user feedback videos, we matched the troubling events users described to the actual errors in our bug tracker. It humanized the errors and enabled us to craft stories of how they occurred.
- Giving users a set scale to quantify the degree to which a feature enhanced or disrupted their workflow, we were able to prioritize what to work on next.
- Using timestamps to pinpoint when our beta app misbehaved under stress, we were able to enhance code to reduce vulnerabilities and improve performance — around media uploads, in this case.
Launching and analyzing the diary studies provided us with greater insight into how real people use our app, patterns in their activities, how they compare us to other apps they use, and how they genuinely feel about our product. With this feedback, we added important new user stories and modified existing ones.
Knowns, known unknowns, and unknown unknowns
User feedback helped us prioritize known issues. Hearing about the same bug in large volumes moved it from “Icebox” to the “Doing” list. But fielding a dscout diary study with real users helped us find two types of unknowns.
In quality testing, we have always anticipated that there will be issues (our known unknowns) and planned for them. Adding user conversations to the mix was an invaluable supplement to our process. For example:
- Users surfaced problems that we had anticipated but could not always reproduce in the “sandbox” environment.
- With varied platforms, locations, and usage habits, users caught problems that we probably wouldn’t have been able to find and fix until the next version.
And then, there are the issues that completely blindside us.
Remember when I said I was paranoid? There are always unexpected scenarios that you just won’t encounter in testing. That’s where the unknown unknowns are lurking. The edge cases. They’re what really keep me up at night.
These are the bugs that leave us saying “how is that even possible?!” For example, one user didn’t grant location permissions to the dscout app. The result? A crash every time they submitted their feedback. In an uncontrolled environment, that problem could easily have scaled up to dire levels.
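The fix for that class of crash is usually to stop assuming optional data exists before reading it. Here’s a minimal sketch of the pattern — in Python with hypothetical names, purely illustrative, not dscout’s actual Android code:

```python
# Hypothetical sketch: the crash pattern is an unchecked assumption that
# location data is always present. Treating the field as optional and
# degrading gracefully keeps a denied permission from taking the whole
# submission down. Names here are illustrative only.
from typing import Optional


def build_submission(text: str, location: Optional[str]) -> dict:
    """Package user feedback for upload, tolerating a missing location."""
    # Before a fix like this, code might read the location value directly
    # and raise when the user never granted the permission.
    return {
        "text": text,
        "location": location if location is not None else "unavailable",
    }
```

The same guard applies on the actual platform: check whether the permission was granted (or handle the missing-data case) before reading the value, rather than assuming the grant happened.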
That sort of discovery is why we handed a beta to 10, 20, or 50 users instead of 10-, 20-, or 50-thousand. Better to let things fall apart — or be validated — in a smaller, controlled environment with people giving detailed feedback on what’s going wrong or right.
Running studies with the dscout app reminded me that there’s a lot more to quality assurance than smoke tests and manually pushing buttons. Sometimes you just don’t know until you know.
[Dog] food for thought
Approaching dscout’s revamped Android app from the researcher perspective instead of the engineering angle was a drastic and enlightening change.
I asked a lot of our participants. For three days, they answered several open-ended research questions in both text and video format. People told us what they liked and didn’t about the app, and what they wished it had. They also provided an overall reflection of their experience.
And they did all this while learning a new, potentially buggy, app. I reminded them the app was in the beta stage and asked them not to hate us if it kept crashing, and still I worried that $20 was not going to stimulate the degree of detail and effort we desired. Yet they provided us with genuinely invaluable feedback and data (which naturally we double-checked was being sent up correctly — QA!).
As I flipped through their videos, I noticed how much more understanding people were in their responses than I expected. Instead of losing faith in our product, this experience enhanced our participants’ connection with us. Engineers and users alike could see and interact with the real people behind the screen. This also gave us insight into researcher-to-respondent interactions.
Learning to use our research tool and actually getting the results we needed showed me exactly why people love, trust and use our product. This realization has amplified my confidence in our work and motivated our engineers to crank out new products and features.
And remember, I’m a QA expert, not a researcher. Working with a product like dscout isn’t something I do all the time. So fielding an actual study in the researcher role helped me experience the product and users in a way that felt more authentic, which is a scenario dscout strives to offer in all our research.
As for my high levels of inescapable anxiety that peak around product release dates… well, let’s just say that eating one’s own dog food is the best anti-anxiety medication your company can offer you.