stress cases, pt. 2: peppy messaging

I wrote about stress cases a couple months ago (http://racheljoi.com/2019/stress-cases-pt-1/), and since then, I’ve thought a lot more about stress cases and given a keynote at Agile Testing Days USA about them. (It was a great conference, and I’ll write about it another time.) I thought I should flesh out some of the ideas here too. Today’s topic is peppy messaging.

When we talk about stress cases versus edge cases, we look at the impact on a person, rather than just the steps that got them there. We look at how a person under stress could be frustrated by our product, and how a product can cause additional stress to a person.

One way that our products can cause stress to a person is through peppy messaging. For instance:

A man had written a tribute to his recently deceased friend on Medium, a blogging site, and he received an email reading, “fun fact: Shakespeare only got 2 recommends on his first Medium story.” That tone may be appropriate in some circumstances, like posts about skateboarding or the antics of kittens, but there are many circumstances where peppy messaging is inappropriate.

We generally know the pain points people are trying to address when they come to our products. We should not cause unexpected pain in the form of peppy messages. Companies do not need to show off personality or quirkiness in their messaging.

So… how do we test for this?

Read all your copy out loud. All of it. Grab a colleague and have them frown at you. Have them imagine that their dog just died if they need to. If you can read all your copy to a frowning colleague, then it’s okay to put it in your product. If it makes you uncomfortable to say it, or if your colleague looks taken aback, rethink the copy.
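
Nothing replaces a human ear, but if you want a cheap first pass before the read-aloud session, here’s a minimal sketch of a “peppiness lint” over UI copy strings. Everything in it is made up for illustration: the word list, the string keys, and the assumption that your copy lives in one key-value map.

    // A rough first-pass filter for peppy copy. The patterns are a
    // starting point; tune them (and add your own) for your product.
    const peppyFlags: RegExp[] = [/fun fact/i, /awesome/i, /yay/i, /!{2,}/];

    // Return the copy entries that deserve a human read-aloud check.
    function flagPeppyCopy(copy: Record<string, string>): string[] {
      const hits: string[] = [];
      for (const [key, text] of Object.entries(copy)) {
        if (peppyFlags.some((pattern) => pattern.test(text))) {
          hits.push(`${key}: "${text}"`);
        }
      }
      return hits;
    }

    // Example: the stats email above would be flagged for review.
    console.log(flagPeppyCopy({
      "email.stats.subject": "fun fact: Shakespeare only got 2 recommends!!",
      "form.save.error": "We couldn't save your changes. Please try again.",
    }));

A script like this only surfaces candidates; the frowning colleague is still the judge.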

If you have bad copy, advocate for better copy. Be bold, tell a story about a person who could be affected by it, and make sure the extent of the hurt it can cause is understood. As testers, we are situated in a place where we see things at the beginning, the end, and all the middle places (hopefully), and we have a critical eye that notices things. We can, and should, advocate for better messaging.

So this was round 2 of stress cases. I’ll keep writing about them.

virtual reality testing (stay tuned)

I was in Reef Migration, a virtual reality experience for the Vive, and it got me thinking about testing virtual reality. (A video of Reef Migration is here.) Reef Migration is one of three underwater VR experiences in TheBlu; the others feature a blue whale and an anglerfish at the bottom of the ocean. They’re all visually compelling. This one puts you on a coral reef while sea turtles, jellyfish, and lots and lots of fish swim by. You’re surrounded by anemones and coral, and the sun shines down through the water. It’s beautiful but buggy.

Unlike the other two experiences, which are visual only, Reef Migration lets you interact with the environment, notably by punching jellyfish and tickling anemones. It’s certainly entertaining. However, the jellyfish don’t always respond as expected, and a couple of times during a session, fish will glitch out and entire schools will suddenly be several feet away from where they were before. Sometimes fish move through you instead of around you.

This has my testing instincts buzzing. How do we test virtual reality? For games, is there much difference from regular game testing? How much does network speed affect the level of bugginess? What are the weaknesses that are common to all VR experiences? How is augmented reality (AR) different?

I’m going to find out. I’m thinking of getting my testers at work involved in this too as fun skill-building. Keep an eye on this space for this new avenue of exploration!

stress cases, pt. 1

Stress cases acknowledge the human element of using our software. Often, we say that something is an “edge case” or a “corner case”, which really ends up meaning that we don’t want to fix it or won’t fix it. That’s fine for some things, but when we consider whether it’s a stress case, as in, a person under stress trying to do something with our software, those “edge cases” transform into real use cases. Edge cases are marginal uses of our software, but stress cases recognize that people are not marginal.

An example: ride-sharing apps probably don’t consider what happens when the user’s battery is low. It’s the user’s problem, right? But it’s not really an edge case to need a ride late at night when your phone has been discharging all day. If you’re in an unfamiliar part of town at 2am with less than 10% battery, do you want to see what’s new in the app, or do you want to get through the flow as quickly as possible so you can get home safely? This is not an edge case, but a stress case.
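
Handling this doesn’t have to be elaborate, either. Here’s a hedged sketch of what a low-battery “stress mode” check could look like in a web app, using the browser’s Battery Status API (Chrome-specific, and missing from TypeScript’s DOM typings, hence the cast); enableStressMode is a hypothetical hook where the app would skip promos and shorten the flow.

    // Check battery state and, if the person is likely under stress,
    // strip the experience down to the essentials.
    async function checkStressMode(): Promise<void> {
      const nav = navigator as Navigator & {
        getBattery?: () => Promise<{ level: number; charging: boolean }>;
      };
      if (!nav.getBattery) return; // API unavailable: keep the normal flow

      const battery = await nav.getBattery();
      const hour = new Date().getHours();
      const lateNight = hour >= 23 || hour < 5;

      // Under 10% and not charging: no "what's new" screens, just the ride.
      if (battery.level < 0.1 && !battery.charging) {
        enableStressMode({ skipPromos: true, minimalFlow: true, lateNight });
      }
    }

    // Hypothetical: your app's own way of trimming the UI.
    declare function enableStressMode(opts: {
      skipPromos: boolean;
      minimalFlow: boolean;
      lateNight: boolean;
    }): void;

Even if something like this never ships, writing it out makes the stress case concrete enough to argue for.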

Sara Wachter-Boettcher and Eric Meyer have written some absolutely fantastic material on this: Design for Real Life, which they wrote together, and Sara’s Technically Wrong. In both, Eric’s harrowing experience with Facebook’s “Year in Review” in 2014 is recounted. The essay can be found here. In short, he was confronted with the face of his dead daughter, framed as part of a “great year” worthy of celebration. Other people saw sonograms of miscarried children, posts about breakups, pictures of house fires.

Facebook assumed, that first year, that everyone had a good year, that their most popular posts were positive ones, and that they wanted to remember the year. These three assumptions were faulty, and forcing celebration on people left them feeling bruised and hurt. That’s not a great way to go about creating software.

Identifying stress cases helps everyone, much in the same way caring about accessibility helps everyone. When we acknowledge the human element, and when we plan for that in our software, we build better software. Identifying the stress cases adds information to the conversation and unleashes creativity in problem-solving.

So what can testers do? It’s hard, but gathering the information is important. When we are stressed out, our cognitive function is diminished. We don’t think as clearly, and our fine motor skills are less accurate. Put yourself under stress before you test something. If you have a hard time getting up in the morning, set an alarm for 4am and do your testing then (record it if you don’t think you’ll take good notes). If you don’t like math, do twenty minutes of hard math problems before you start testing. If you’re prone to anger, go argue with someone on the internet… or read the comments on, well, anything. The point is to put yourself in a position where you aren’t thinking with a clear head. Make sure your flows make sense at that point, that your buttons are big enough, your text readable, and your interruptions not maddeningly intrusive.

People will use our software on really bad days, possibly on the worst days of their lives. We don’t know who just had to put down their dog, or who is worried about a diagnosis or their personal safety. But someone probably is using our software when they are severely under stress. If we assume someone is having a bad day and still needs to get things done with our software, we make it better for everyone.

This is the first of what I hope will be many posts on this topic. If you’d like to hear more, check out The Testing Show’s April podcast, come to Agile Testing Days USA, or stay tuned here!

pair testing adventures

Last week, I finally did what I’ve been wanting to do for months (years) and engaged in pair testing three times, with differing results.

what is pair testing?

Pair testing is a lot like pair programming: two sets of eyes and two minds engaged in the same problem. There can be a lot of benefits to it (Lisa Crispin talks about it here), such as increased creativity, better quality, and more exploration, but, just like pair programming, it can be difficult to start and requires a lot of focus and energy. I wanted to learn strong-style pairing, where every idea has to pass from the navigator’s head through the driver’s hands, as it seems to involve the most engagement from both people.

learning with Maaret

First up was a lesson with Maaret Pyhäjärvi, an absolutely amazing and well-known tester who publishes and speaks on testing regularly. I had confessed to her that most of my bugs feel like the result of serendipity rather than skill, and I told her that I wanted to work on my exploration as well as pairing. She was kind about it, even writing a post about serendipity.

We started at 7am my time, and for the next hour, we talked and tested a “simple” application that consisted of a text box, a button, and text outputs. As we tested, we talked about assumptions, tools, resources, and how outputs are inputs and can be manipulated (using tools or by fiddling with the HTML itself).
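
To make “outputs are inputs” concrete, here’s the kind of thing you can run in the browser’s devtools console. The selectors (#output, #submit) are made up for illustration; the point is that anything the app renders can be edited before the app or the server reads it back, so it has to be treated as untrusted all over again.

    // Replace the rendered output with a value the app never produced...
    const output = document.querySelector<HTMLElement>("#output");
    if (output) {
      output.textContent = "-999999999 <script>output is input</script>";
    }

    // ...then drive the next step of the flow and watch what the app
    // does with the tampered value.
    document.querySelector<HTMLButtonElement>("#submit")?.click();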

Maaret taught while she tested with me, and, I have to admit, I was rather star-struck and honored that she gave me her time (do I sound too much like a fangirl?). It was a wonderful experience, and I left for work on a high that lasted all morning. I felt energized to bring what I had learned to my team at work immediately, and I convinced the other tester on my team to pair test with me in the afternoon.

practical application

In spite of the high from the morning, work was not stellar that day. By the time the afternoon rolled around, I was slightly grumpy and low on energy; my normal afternoon walk with a colleague-turned-friend had been canceled because of meetings (and I was reluctant to go alone, for no good reason). But I decided to push through it and pair test with my colleague.

It was rough. I did a poor job of explaining why I thought we would do a better job testing together. It was a form in one of the features I’m responsible for testing (which is a topic for another post), so we tested inputs and buttons. I felt like I was generating all the ideas, and though I was trying to nudge towards more creativity, the testing felt flat and generic. When I asked what my colleague thought about it, the response was, “I think you could have done the same work on your own and much faster, and because we have a deadline, this wasn’t really a good use of time.” I came away from that experience disappointed and disheartened. (Maybe I’m too easily swayed by experiences and need to work on emotional resilience, but that’s a different post as well.)

a “stop” with Lisi

Anyway, not one to be defeated, I had signed up for a “stop” on Lisi Hocke’s testing tour. Saturday morning, I worked with her for 90 minutes, and it turned out to be amazing as well. We tested a sketching program, an application neither of us had seen before. We started out broadly, exploring what happened with various features, and focusing for probably 10-20 minutes on areas where we saw weird things. We talked the whole time about what we were seeing, both unexpectedly positive things and “weird” things. It started out looking like a good application that I might consider using, but then we tried the save function, and that made the entire thing seem like a terrible user experience.

I liked the positive energy that both of us had, and Lisi is such an engaging person. I also liked that we used a timer to switch off who was driving and who was navigating: we had four minutes at a time before we switched. It relieved a lot of the pressure, because towards the end, as my mental focus was waning, I knew I just had to think of ideas for another minute or two before I would get to drive and let Lisi tell me what to do, which then opened up new areas for exploration. I felt like we were really working together, and I felt the benefits of pair testing in a new way.
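
The mechanics of that rotation are almost embarrassingly simple. Any kitchen timer works, but here’s a sketch of the same four-minute swap as a small script (the names and interval are just placeholders):

    // Announce a driver/navigator swap every four minutes.
    const pair = ["Lisi", "me"];
    const minutesPerTurn = 4;
    let driver = 0;

    console.log(`${pair[driver]} drives first; ${pair[1 - driver]} navigates.`);
    setInterval(() => {
      driver = 1 - driver;
      console.log(`Switch! ${pair[driver]} drives, ${pair[1 - driver]} navigates.`);
    }, minutesPerTurn * 60 * 1000);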

reflections

I think one big difference was that Maaret and Lisi are so experienced in pair testing that they made it easy to include me and guide me when I needed it. I’ve now done it formally a total of three times, and for one of those, I was the one responsible for keeping the energy up and extolling the benefits of it. That was rough, as I am both inexperienced in the practice and somewhat insecure about my own testing abilities, and thus apprehensive about letting colleagues see what I perceive as my ineptitude.

One thing that I have been unable to figure out is how to effectively take notes through a mind map when we’re working on the same machine. I didn’t want to interrupt the flow of the session to work on the map, but I also didn’t want to just write down notes on paper and then have to transcribe those into a consumable digital format later. This will be something to experiment with.

Next time, at work, I will use a timer and not be so timid. I may also press for testing together in the morning, before the day has a chance to get to me. I’ll be more attuned to my mood and energy, and I may try to write down some ideas before we start so that I at least have some direction or goal for things that really have to get done, not just areas that seem neat.

All in all, I’m really glad that I finally dove into this. I’m proud of myself for asking for help and for trying something new, and I want to continue learning and experimenting. Stay tuned.

CAST retrospective

I attended the Conference of the Association for Software Testing (CAST) two weeks ago in Cocoa Beach, Florida. It was a fantastic learning and networking experience over four days, and I’m still reeling a little bit.

I started with a full-day tutorial with Anne-Marie Charrett (her site is here) about how to coach testing. She and James Bach built the coaching model together, and each has since added their own flavor to it. Anne-Marie emphasized understanding the person and starting from where they are. We had a couple of exercises where we paired up to teach a concept or test something, and we realized how easy it was to alternate between pairing and coaching. It gave me a lot to think about if I move into coaching, and Anne-Marie’s style of teaching was engaging and thorough.

The overarching theme of the weekend was “Bridging Between Communities”, so a lot of the talks and workshops ended up stressing communication. This ranged from communicating in healthy ways to defuse situations (Don’t Take It Personally by Bailey Hanna – her blog post is here) to a keynote about Cynefin by Liz Keogh where she said:

Having conversations
is more important than
Documenting conversations
is more important than
Automating conversations

in response to BDD being used for automation more than anything else. Her whole talk was just fascinating, and now I want to learn everything I can about Cynefin.

Lisi Hocke (her site is here) gave a fantastic presentation about her adventures in cross-team pair testing, where she has paired with testers from all over the world to mutually learn something, whether it be a new technique or a new concept or just how to better pair. Now I want to do the same. There’s so much to learn!

A workshop about finding bugs before implementation, by Kasia Balcerzak and Bart Szulc, stressed the importance of conversation driven by key questions. By asking questions, we can find problem areas and weaknesses (and bugs) before we build anything, making for a better product in the end. My team at work already tries to have those conversations during backlog refinement, but I want to use the questions from this workshop to do a better job of it.

Carl and I also gave a workshop about mobile and chatbot testing. A debrief on that will come in another post. Suffice it to say that I learned some lessons and will do some things differently in the future.

Other Highlights:

  • Lean Coffee, run by Matt Heusser
  • Erik Davis teaching me the Dice Game
  • Night kayaking with some absolutely great people
  • Seeing a SpaceX rocket launch from the beach outside our hotel
  • The many conversations I had with smart, dedicated, passionate people about software testing and life