testing concerns

I was talking with a newer tester the other day, and my impression was that there was a lot of insecurity around their testing. They felt like they didn’t have a right to say that they found a bug unless they came armed with bulletproof facts and basically a root cause analysis. They felt like they didn’t have a right to an opinion, because they are still new to this. They felt like their bugs were more of a nuisance than a help. To all this, I say, “hogwash”. (/gets up on soapbox)

Testers have every right to raise concerns with the product, even if their concern is not directly related to a requirement or an acceptance criterion. Testers may not know the code, but they do become familiar enough with the product to understand how it should behave, consistent with itself and with other applications (e.g., making your shopping cart icon consistent with shopping carts in other applications rather than trying to be clever). Testers also have a sense for when a flow is too cumbersome, even if it’s “as designed”. Disregarding the opinions of a tester because they are new to testing or new to a team puts the team at risk. My concerns are sometimes dismissed as being too hard to change or not what the product owner wants to do, but that doesn’t mean they are invalid. Testers have a duty to advocate for the user (while recognizing that we are not the user).

In my first law job (goodness, that was ages ago), I went to a partner with a question. His response was, “What have you done to figure out the answer for yourself?” I went away, somewhat chastened, but I made sure that I had a list of things I had done the next time I went to him. That phrase has stuck with me through the years and has influenced my testing. I ask questions all the time, but I also seek the answers for myself through searching our backlog or closed stories, checking oracles, and trying other paths. However, I do not read code very well, so I don’t often engage in finding the source of the bug. And there are times when I can’t find steps to reproduce a bug, but something has clearly gone very wrong. When that happens, I apologize to my devs and attach as much info as I can, while acknowledging that I know very little. If I have a gut feeling, I will state it, but I make sure it’s known that there is no evidence behind it. I try to limit the times I do that, but sometimes it can’t be helped. At any rate, the lesson that law partner gave me in my second week of practice made me a better lawyer and makes me a better tester. The tricky part is knowing when I’m just spinning my wheels. And it can really suck to do that work, when I know that I could just ask. It’s worth it though.

The last concern, that bugs are more of a nuisance than a help, worries me the most. This one, I think, requires setting the dev(s) straight about the necessity of testing and the goal that we are all trying to achieve. It may also require a change in how the tester talks about testing. We all want to produce something wonderful and fun to use. If devs are treating the bugs testers find as problematic, even if they’re just joking around when they sound exasperated, that’s not okay.

If you are feeling like this tester, speak up! You’re not alone, and the team really does want good code, not just finished code. Be bold and assertive. You are valuable, and what you do matters.

stress cases, pt. 2: peppy messaging

I wrote about stress cases a couple months ago (http://racheljoi.com/2019/stress-cases-pt-1/), and since then, I’ve thought a lot more about stress cases and given a keynote at Agile Testing Days USA about them. (It was a great conference, and I’ll write about it another time.) I thought I should flesh out some of the ideas here too. Today’s topic is peppy messaging.

When we talk about stress cases versus edge cases, we look at the impact on a person, rather than just the steps that got them there. We look at how a person under stress could be frustrated by our product, and how a product can cause additional stress to a person.

One way that our products can cause stress to a person is through peppy messaging. For instance:

One man wrote a tribute to his recently deceased friend on Medium, a blogging site, and then received an email reading, “fun fact: Shakespeare only got 2 recommends on his first Medium story.” This may be appropriate in some circumstances, like people talking about skateboarding or the antics of kittens, but there are many circumstances where peppy messaging is inappropriate.

We know, generally, the pain points people are trying to solve when they come to our products. We should not be causing unexpected pain in the form of peppy messages. Companies do not need to show off personality or quirkiness in their messaging.

So… how do we test for this?

Read all your copy out loud. All of it. Grab a colleague and have them frown at you. Have them imagine that their dog just died if they need to. If you can read all your copy to a frowning colleague, then it’s okay to put in your product. If it makes you uncomfortable to say it, or if your colleague looks taken aback, rethink the copy.
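The read-aloud test with a frowning colleague is the real check, but you can catch the most obvious offenders automatically first. Here is a minimal sketch in Python (the word list and sample strings are made up for illustration; tune the patterns to the peppy phrases your product actually produces) that flags copy worth a second, human look:

```python
import re

# Illustrative patterns only; grow this list from the peppy copy you actually find.
PEPPY_PATTERNS = [
    r"!",              # exclamation points are a quick proxy for forced cheer
    r"\bfun fact\b",
    r"\bawesome\b",
    r"\bwoohoo\b",
    r"\byay\b",
]

def flag_peppy_copy(strings):
    """Return (string, matched pattern) pairs that deserve a read-aloud review."""
    flags = []
    for s in strings:
        for pattern in PEPPY_PATTERNS:
            if re.search(pattern, s, re.IGNORECASE):
                flags.append((s, pattern))
    return flags
```

A script like this only narrows the pile; it can’t tell you whether “fun fact” is charming or cruel in context. That judgment still belongs to the frowning colleague.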

If you have bad copy, advocate for better copy. Be bold, tell a story about a person who could be affected by it, and make sure the extent of the hurt it can cause is understood. As testers, we are situated in a place where we see things at the beginning, the end, and all the middle places (hopefully), and we have a critical eye that notices things. We can, and should, advocate for better messaging.

So this was round 2 of stress cases. I’ll keep writing about them.

virtual reality testing (stay tuned)

I was in Reef Migration, a virtual reality experience for the Vive, and it got me thinking about testing virtual reality. (A video of Reef Migration is here.) Reef Migration is one of three underwater VR experiences in TheBlu; the others have a blue whale and an angler fish at the bottom of the ocean. They’re all visually compelling. This one has you on a coral reef, and sea turtles, jellyfish, and lots and lots of fish swim by. You’re surrounded by anemones and coral, and the sun shines down through the water. It’s beautiful but buggy.

Unlike the other TheBlu experiences, which are visual only, Reef Migration lets you interact with the environment, notably by punching jellyfish and tickling anemones. It’s certainly entertaining. However, the jellyfish don’t always respond as expected, and a couple of times during a session, the fish will glitch out and entire schools will suddenly be several feet away from where they were before. Sometimes fish move through you instead of around you.

This has my testing instincts buzzing. How do we test virtual reality? For games, is there much difference from regular game testing? How much does network speed affect the level of bugginess? What are the weaknesses that are common to all VR experiences? How is augmented reality (AR) different?

I’m going to find out. I’m thinking of getting my testers at work involved in this too as fun skill-building. Keep an eye on this space for this new avenue of exploration!

stress cases, pt. 1

Stress cases acknowledge the human element of using our software. Often, we say that something is an “edge case” or a “corner case”, which really ends up meaning that we don’t want to fix it or won’t fix it. That’s fine for some things, but when we consider whether it’s a stress case, as in, a person under stress trying to do something with our software, those “edge cases” transform into real use cases. Edge cases are marginal uses of our software, but stress cases recognize that people are not marginal.

An example: ride-sharing apps probably don’t consider what happens when the user has low battery. It’s the user’s problem, right? But it’s not really an edge case to need a ride late at night when your phone has been discharging all day and is low on battery. If you’re in an unfamiliar part of town at 2am with less than 10% battery, do you want to see what’s new, or do you want to get through the flow as quickly as possible so you can get home safely? This is not an edge case but a stress case.

Sara Wachter-Boettcher and Eric Meyer have written some absolutely fantastic material on this: together they wrote Design for Real Life, and Sara also wrote Technically Wrong. In both, Eric’s harrowing experience with Facebook’s “Year in Review” in 2014 is recounted. The essay can be found here. In short, he was confronted with the face of his dead daughter in his “great year” worthy of celebration. Other people saw sonograms of miscarried children, posts about breakups, pictures of house fires.

Facebook assumed, that first year, that everyone had a good year, that their most popular posts were positive ones, and that they wanted to remember the year. These three assumptions were faulty, and forcing celebration on people left them feeling bruised and hurt. That’s not a great way to go about creating software.

Identifying stress cases helps everyone, much in the same way caring about accessibility helps everyone. When we acknowledge the human element, and when we plan for that in our software, we build better software. Identifying the stress cases adds information to the conversation and unleashes creativity in problem-solving.

So what can testers do? It’s hard, but gathering the information is important. When we are stressed out, our cognitive function is diminished. We don’t think as clearly, and our fine motor skills are less accurate. Put yourself under stress before you test something. If you have a hard time getting up in the morning, set an alarm for 4am and do your testing then (record it if you don’t think you’ll take good notes). If you don’t like math, do twenty minutes of hard math problems before you start testing. If you’re prone to anger, go argue with someone on the internet… or read the comments on, well, anything. The point is to put yourself in a position where you aren’t thinking with a clear head. Make sure your flows make sense at that point, that your buttons are big enough, your text readable, and your interruptions not maddeningly intrusive.

People will use our software on really bad days, possibly on the worst days of their lives. We don’t know who just had to put down their dog, or who is worried about a diagnosis or their personal safety. But someone probably is using our software when they are severely under stress. If we assume someone is having a bad day and still needs to get things done with our software, we make it better for everyone.

This is the first of what I hope will be many posts on this topic. If you’d like to hear more, check out the Testing Show’s April podcast, come to Agile Testing Days USA, or stay tuned here!

pair testing adventures

Last week, I finally did what I’ve been wanting to do for months (years) and engaged in pair testing three times, with quite different results.

what is pair testing?

Pair testing is a lot like pair programming, where you have two sets of eyes and two minds engaged in the same problem. There can be a lot of benefits to it (Lisa Crispin talks about it here), such as increased creativity, better quality, and more exploration, but, just like pair programming, it can be difficult to start and requires a lot of focus and energy. I wanted to learn strong-style pairing, as it seems to involve the most engagement from both people.

learning with Maaret

First up was a lesson with Maaret Pyhäjärvi, an absolutely amazing and well-known tester who publishes and speaks on testing regularly. I had confessed to her that most of my bugs feel like the result of serendipity rather than skill, and I told her that I wanted to work on my exploration as well as pairing. She was kind about it, even writing a post about serendipity.

We started at 7am my time, and for the next hour, we talked and tested a “simple” application that consisted of a text box, a button, and text outputs. As we tested, we talked about assumptions, tools, resources, and how outputs are inputs and can be manipulated (using tools or fiddling with the HTML itself).

Maaret taught while she tested with me, and, I have to admit, I was rather star-struck and honored that she gave me her time (do I sound too much like a fangirl?). It was a wonderful experience, and I left for work on a high that lasted all morning. I felt energized to bring what I had learned to my team at work immediately, and I convinced the other tester on my team to pair test with me in the afternoon.

practical application

In spite of the high from the morning, work was not stellar that day. By the time the afternoon rolled around, I was slightly grumpy and low on energy, and my normal afternoon walk with a colleague-turned-friend had been canceled because of meetings (and I guess I was reluctant to go alone for no good reason). But I decided to push through it and pair test with my colleague.

It was rough. I did a poor job of explaining why I thought we would do a better job testing together. It was a form in one of the features I’m responsible for testing (which is a topic for another post), so we tested inputs and buttons. I felt like I was generating all the ideas, and though I was trying to nudge towards more creativity, the testing felt flat and generic. When I asked what my colleague thought about it, the response was, “I think you could have done the same work on your own and much faster, and because we have a deadline, this wasn’t really a good use of time.” I came away from that experience disappointed and disheartened. (Maybe I’m too easily swayed by experiences and need to work on emotional resilience, but that’s a different post as well.)

a “stop” with Lisi

Anyway, not one to be defeated, I had signed up for a “stop” on Lisi Hocke’s testing tour. Saturday morning, I worked with her for 90 minutes, and it turned out to be amazing as well. We tested a sketching program, an application neither of us had seen before. We started out broadly, exploring what happened with various features, and focusing for probably 10-20 minutes on areas where we saw weird things. We talked the whole time about what we were seeing, both unexpectedly positive things and “weird” things. It started out looking like a good application that I might consider using, but then we saw the save function, and that made the entire thing seem like a terrible user experience.

I liked the positive energy that both of us had, and Lisi is such an engaging person. I also liked that we used a timer to switch off who was driving and who was navigating. We had 4 minutes at a time before we switched. It relieved a lot of the pressure, because towards the end, as my mental focus was waning, I knew I just had to think of ideas for another minute or two before I would get to drive and let Lisi tell me what to do, which then opened up new areas for exploration and such. I felt like we were really working together, and I felt the benefits of pair testing in a new way.
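The timer mechanic is simple enough to script. Here is a minimal sketch in Python (the names and the four-minute interval are just placeholders for whatever your pair agrees on) that announces whose turn it is to drive and whose to navigate:

```python
import itertools
import time

def pairing_rotation(pair):
    """Yield (turn_number, driver, navigator) forever, swapping roles each turn."""
    for turn, driver in enumerate(itertools.cycle(pair), start=1):
        navigator = pair[1] if driver == pair[0] else pair[0]
        yield turn, driver, navigator

def run_session(pair, minutes=4, turns=None):
    """Announce each turn, sleeping between switches; stop after `turns` if given."""
    for turn, driver, navigator in pairing_rotation(pair):
        print(f"Turn {turn}: {driver} drives, {navigator} navigates ({minutes} min)")
        if turns is not None and turn >= turns:
            break
        time.sleep(minutes * 60)
```

A kitchen timer or a phone alarm does the same job, of course; the point is only that the switch happens on the clock, not when one person feels done.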

reflections

I think one big difference was that Maaret and Lisi are so experienced in pair testing that they made it easy to include me and guide me when I needed it. I’ve now done it formally a total of three times, and for one of those, I was the one responsible for keeping the energy up and extolling the benefits of it. That was rough, as I am both inexperienced in the practice and somewhat insecure about my own testing abilities, and thus apprehensive about letting colleagues see what I perceive as my ineptitude.

One thing that I have been unable to figure out is how to effectively take notes through a mind map when we’re working on the same machine. I didn’t want to interrupt the flow of the session to work on the map, but I also didn’t want to just write down notes on paper and then have to transcribe those into a consumable digital format later. This will be something to experiment with.

Next time, at work, I will use a timer and not be so timid. I may also press for testing together in the morning, before the day has a chance to get to me. I’ll be more attuned to my mood and energy, and I may try to write down some ideas before we start so that I at least have some direction or goal for things that really have to get done, not just areas that seem neat.

All in all, I’m really glad that I finally dove into this. I’m proud of myself for asking for help and for trying something new, and I want to continue learning and experimenting. Stay tuned.