pair testing adventures

Last week, I finally did what I’ve been wanting to do for months (years) and engaged in pair testing three times, with differing results.

what is pair testing?

Pair testing is a lot like pair programming: you have two sets of eyes and two minds engaged with the same problem. It can have a lot of benefits (Lisa Crispin talks about it here), such as increased creativity, better quality, and more exploration, but, just like pair programming, it can be difficult to start and requires a lot of focus and energy. I wanted to learn strong-style pairing, as it seems to involve the most engagement from both people.

learning with Maaret

First up was a lesson with Maaret Pyhäjärvi, an absolutely amazing and well-known tester who publishes and speaks on testing regularly. I had confessed to her that most of my bugs feel like the result of serendipity rather than skill, and I told her that I wanted to work on my exploration as well as my pairing. She was kind about it, even writing a post about serendipity.

We started at 7am my time, and for the next hour, we talked and tested a “simple” application that consisted of a text box, a button, and text outputs. As we tested, we talked about assumptions, tools, resources, and how outputs are inputs and can be manipulated (using tools or by fiddling with the HTML itself).

Maaret taught while she tested with me, and, I have to admit, I was rather star-struck and honored that she gave me her time (do I sound too much like a fangirl?). It was a wonderful experience, and I left for work on a high that lasted all morning. I felt energized to bring what I had learned to my team at work immediately, and I convinced the other tester on my team to pair test with me in the afternoon.

practical application

In spite of the high from the morning, work was not stellar that day. By the time the afternoon rolled around, I was slightly grumpy and low on energy; my normal afternoon walk with a colleague-turned-friend had been canceled because of meetings (and I was reluctant to go alone, for no good reason). But I decided to push through it and pair test with my colleague.

It was rough. I did a poor job of explaining why I thought we would do a better job testing together. It was a form in one of the features I’m responsible for testing (which is a topic for another post), so we tested inputs and buttons. I felt like I was generating all the ideas, and though I was trying to nudge towards more creativity, the testing felt flat and generic. When I asked what my colleague thought about it, the response was, “I think you could have done the same work on your own and much faster, and because we have a deadline, this wasn’t really a good use of time.” I came away from that experience disappointed and disheartened. (Maybe I’m too easily swayed by experiences and need to work on emotional resilience, but that’s a different post as well.)

a “stop” with Lisi

Anyway, not one to be deterred by a single rough session, I had signed up for a “stop” on Lisi Hocke’s testing tour. Saturday morning, I worked with her for 90 minutes, and it turned out to be amazing as well. We tested a sketching program, an application neither of us had seen before. We started out broadly, exploring what happened with various features, and focused for probably 10-20 minutes at a time on areas where we saw weird things. We talked the whole time about what we were seeing, both unexpectedly positive things and “weird” things. It started out looking like a good application that I might consider using, but then we got to the save function, and that made the entire thing seem like a terrible user experience.

I liked the positive energy that both of us had, and Lisi is such an engaging person. I also liked that we used a timer to switch off who was driving and who was navigating; we each had four minutes before we switched. It relieved a lot of the pressure, because towards the end, as my mental focus was waning, I knew I just had to think of ideas for another minute or two before I would get to drive and let Lisi tell me what to do, which then opened up new areas for exploration. I felt like we were really working together, and I felt the benefits of pair testing in a new way.

reflections

I think one big difference was that Maaret and Lisi are so experienced in pair testing that they made it easy to include me and guide me when I needed it. I’ve now done it formally a total of three times, and for one of those, I was the one responsible for keeping the energy up and extolling the benefits of it. That was rough, as I am both inexperienced in the practice and somewhat insecure about my own testing abilities, and thus apprehensive about letting colleagues see what I perceive as my ineptitude.

One thing that I have been unable to figure out is how to effectively take notes through a mind map when we’re working on the same machine. I didn’t want to interrupt the flow of the session to work on the map, but I also didn’t want to just write down notes on paper and then have to transcribe those into a consumable digital format later. This will be something to experiment with.

Next time, at work, I will use a timer and not be so timid. I may also press for testing together in the morning, before the day has a chance to get to me. I’ll be more attuned to my mood and energy, and I may try to write down some ideas before we start so that I at least have some direction or goal for things that really have to get done, not just areas that seem neat.

All in all, I’m really glad that I finally dove into this. I’m proud of myself for asking for help and for trying something new, and I want to continue learning and experimenting. Stay tuned.

automation thoughts – an update

Last year, I wrote some thoughts about the benefits of manual testing. The posts are here and here. Now that I’ve been on an agile team for over a year, and doing some automation, I think (hope) I have some more sophisticated thoughts about automation.

Our team has two “manual testers” and one “automation engineer”. The automation engineer does all of the UI automation (for now), but I’ve written a lot of service-layer tests using SoapUI, and the other tester has been learning to write them too. The idea is to have the two of us start running (and eventually writing) UI automated tests as well, and we’re taking steps towards that end.

The benefits of API testing through SOAP have become clear, but so have the challenges. It has cut down on our need to manually test all flows during regression testing. I can write service tests while the UI is still being coded, and running them is a good way to start our monthly testing cycle: if something is broken at the service level, there’s no point in testing manually, and we can pinpoint the error more quickly and precisely. We have automated everything in our app that can be tested at the service level, which I feel really good about. Or, at least, we have all the calls exercised. There’s always more to test, isn’t there?
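
To make that concrete, here’s a rough sketch of what one of these service-level checks looks like in spirit. Our real tests live in SoapUI, so this Python version is purely illustrative; the endpoint, envelope fields, and namespaces are all hypothetical stand-ins, not our actual service.

```python
# A minimal service-level test, sketched in Python for illustration.
# Everything here (endpoint, envelope, namespaces) is made up; our real
# tests are SoapUI test steps with assertions attached.
import requests
from xml.etree import ElementTree

ENDPOINT = "https://services.example.com/documents"  # hypothetical

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:doc="http://example.com/documents">
  <soapenv:Body>
    <doc:GetDocumentsRequest>
      <doc:AccountId>12345</doc:AccountId>
    </doc:GetDocumentsRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

def test_get_documents_succeeds():
    # Exercise the call directly; no UI involved, so this can run
    # before the UI for the feature is even built.
    response = requests.post(
        ENDPOINT,
        data=ENVELOPE,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    assert response.status_code == 200

    # If the service itself is broken, fail here and pinpoint the
    # problem before anyone spends time on the same flow in the UI.
    tree = ElementTree.fromstring(response.content)
    status = tree.find(".//{http://example.com/documents}Status")
    assert status is not None and status.text == "success"
```

The shape is the same in SoapUI: send the request, then assert on something specific in the response.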

A big weakness of service testing is that it is more brittle than manual testing: if something changes in the call (for instance, a value starts being taken from a cookie instead of from a detail), the test breaks, even though you wouldn’t notice any difference at the UI level. You also can’t run all the negative tests (like lockout tests) you might want to run, particularly if you plan to run the suite repeatedly throughout the day. We have two test cases that can only be run once every half hour, and one test case that requires a person to reset the user every time. Those test cases are fairly essential, so we make do with the limitations.

I still very strongly believe that there is no substitute for getting into the features and playing around, or, to be more formal about it, doing exploratory testing. That’s how we find all the interesting bugs. Automated testing is checking; it ensures certain things are stable, not that a product is “good”. However, automated testing is (can be?) fast, and it can be an efficient use of time, particularly for regression testing. Automated tests can make sure that bugs that have been found and fixed don’t break again, but would a person writing an automated test think to try refreshing a page without first finding an issue in exploratory testing?

Perhaps I’ll have more thoughts about this later, but this is what I’ve been thinking about recently. Thoughts from y’all? Criticism? Questions?

training a tester

I may have mentioned a few months ago that I’m training a tester who is new to mobile. We’re four months in, and I’m surprised to find that I’m still answering (many) questions daily. Language does play an important role, and sometimes I realize that concepts that are organized in a certain way in my head don’t translate well to words or the way other people think. But also, for a long time, I was just answering questions, imparting information.

I’ve changed tactics, now that she’s been on our team for several months, to being more Socratic. If she thinks she’s found a bug, I press her, asking why she thinks it’s a bug and how she can get additional information about it. Eventually, I want to know what her oracles are, what devices/OSes/app versions she’s tried it on, what environments she’s been in, what the logs that we have access to say, and on and on. I know I don’t always follow all the steps myself, but I have a checklist published for our team that covers all the different things to try. If she asks whether a feature is supposed to be in a specific app version, I push her to explain why it should or should not be before I give her an answer. Just this last week, I included her in writing SOAP tests, asking her questions about how we could modify certain things to get the right thing tested, instead of talking through my own thought process.

I think this method is more effective. Instead of trying to describe how my own synapses fire, I’m making her form her own way of thinking about things. It’s frustrating sometimes, because it takes a lot more effort to work with someone for fifteen minutes so they come to their own answer instead of just providing it, but the goal is for it to save time in the future.

In the couple of months when I was the only tester on the team, I revamped how testing was documented (and how things in general were documented) in a way that made the most sense to me. I documented what I did rather than what I was going to do. My notes for things to test were for my reference, not for anyone else to consume, and I made mind maps as artifacts. This worked great for me, and I think it’s working well for my partner, but I’m trying to be sensitive to the idea that not everyone thinks like me (nor should they), and to be open to doing things another way, should she come up with something better.

SOAP testing and finding the right assertions

I’ve been delving into the more technical side of testing: using tools to exercise code precisely rather than galumphing through the UI, which really is my preferred method. Our mobile app uses a lightweight service to connect to the other moving pieces at the company and our vendors, so we can test those calls directly through SOAP, which, for you non-software people out there, is a form of API testing.

The developers of our software took a blanket approach to the envelopes for the service calls, so there are tons of superfluous fields in each call, and it’s a guessing game as to which are actually required for any particular service call. My developers didn’t create all the service calls, so I’ve actually been able to figure things out and share my envelopes with them (so, you know, I feel pretty cool).

Once I figure out which fields are necessary to get a call working, then it’s time to figure out an assertion. Assertions are what the testing software checks against to make sure things are working appropriately. In our tests, they’re usually related to the content of a field: whether it is a “success” or “failure” message, or, in a case I was working on recently, whether the name of a document started with a specific year or not.

With this case, we modified the call to pull documents one year at a time, so the call for each year needed to return only documents for that year and none others. I ended up writing a check that the count of docs starting with anything other than the year I was looking for was 0. This may sound simple, but it took quite a while to get the XPath right, and I had some (a lot of?) help from a developer.
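
For the curious, here’s roughly what that check looks like, sketched in Python with lxml (which speaks full XPath 1.0). In SoapUI this was an XPath Match assertion rather than code, and the element names and namespace below are hypothetical stand-ins for our real response.

```python
# A sketch of the year assertion, assuming a made-up response format
# where each document appears as <Document><Name>2017-03 statement.pdf...
from lxml import etree

def assert_docs_match_year(response_xml: bytes, year: str) -> None:
    tree = etree.fromstring(response_xml)
    # Count every document name that does NOT start with the requested
    # year; the test passes only if that count is zero.
    mismatches = tree.xpath(
        "count(//d:Document/d:Name[not(starts-with(text(), $year))])",
        namespaces={"d": "http://example.com/documents"},
        year=year,
    )
    assert mismatches == 0, f"{int(mismatches)} document(s) outside {year}"
```

The double negative (counting the names that don’t match the year and insisting on zero) is exactly the part that took a while to get right.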

The SOAP tests are nothing without the correct assertions. You can pull back all the information you want, but without an automated check on that information, the tests have very little advantage over UI testing in terms of speed, though they do cut down on some of the noise that is inevitable with UI testing.

API testing is another tool in the box. It cannot be the only thing, and it should not be ignored. It’s great for getting precise information, for making sure you exercise calls at a lower level, and it can be faster than UI testing, but it is not a substitute for a full complement of testing, including exploratory testing (galumphing), scripted tests run manually, and scripted tests run through automation. And, of course, if your developers unit test their code, that helps too. 🙂

testing trainings – a comparison

In the past month, I’ve done two trainings on software testing: ASTQB’s Mobile Foundations course, and Satisfice’s Rapid Software Testing Applied course. The difference between them was marked.

Like the SQE training I reviewed earlier and the subsequent ISTQB Certified Tester, Foundation Level exam, the mobile course was heavy on vocabulary and “best practices” and light on how to do a good job. It gave me things to think about, such as using simulators and emulators to increase coverage and getting cell data on some of our phones so our testing can be more real-life and more, well, mobile. But when it came down to it, a lot of the class was about the differences between web-based, native, and hybrid apps, and the risks involved in testing each. Looking at the risks inherent to the different types of apps was useful, but three days of vocabulary became a little wearisome. The test, which I took about a week and a half later, went just fine. It included a decision table, which took me by surprise, but the test wasn’t a big deal with a little bit of careful reading. I don’t have much more to say about the training or the test. I wasn’t planning on doing either, but then a spot was offered to me, so… it was fine.

The training that I was really excited about, and that totally lived up to my expectations, was James Bach’s Rapid Software Testing Applied. We tested a vector graphics program called Inkscape, approaching it from several different angles. Each day was a combination of lecture, individual/team work, and review of that work. Some guys from another Utah company invited me to join their team, so I talked with them throughout the day and worked with them on the assignments. We talked about sanity testing, survey testing, risk analysis, coverage, deep testing, testing with tools, and how to report testing. It was a fascinating class, though I did receive criticism, both privately and then publicly the next morning, for some of my bug reports. (I still need to check with my developers to see if they’re annoyed by my reporting.) My ego was a little bruised, but I know he was trying to make me a better tester, and in the end, I appreciated (that might be too strong a word) the criticism. This training brought out all my insecurities, particularly those surrounding tools, but I was also pleased to have some of my thoughts about testing affirmed. James Bach is something of an icon in software testing, and I really enjoyed learning from him. I’d like to take another RSTA class, as well as his Rapid Software Testing lecture class.

It’s possible I was just way more excited for RSTA, but I felt like I got more out of it as well. I really appreciated my company letting me do these trainings, particularly as they came so close together. I’m hoping to convince them to bring James Bach to Utah – that would just be fantastic.