experiments in rework

I’ve been at my current company for over three months now. I got off to a rocky start, but I’m happy at 1-800 Contacts. My team is great—they care about the work we do, they’re data-driven, and they arrive at solid solutions (in a somewhat more combative way than I’m used to)—and the work is engaging. I’m testing at more levels than before, and we release code quickly.

This is the first time I’ve been responsible for actually pushing code into production, and it was a little terrifying at first. Most days we get two or three releases out; other days it can be as many as five; and on the rough days, nothing goes out at all. I send a lot back to the developers, and even when we do release a lot, it’s usually not everything.

I’m transforming my role on this team from what felt like gatekeeper to something more like coach. I have a long series of experiments lined up in my effort to reduce rework on the team. One thing has already happened: some of the developers are talking with me about the unit tests and integration tests they’re writing, looking for both ideas and affirmation. They’re writing more negative tests, and they’re working to write more meaningful tests—tests that prove or disprove something specific about the code’s behavior.
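To make the happy-path/negative distinction concrete, here’s a minimal sketch in Python with pytest. The `parse_quantity` function and its rules are hypothetical—just a stand-in for whatever input-handling code is under test, not anything from our codebase—but the shape is what I’m encouraging: one test that proves the expected behavior, plus negative tests that prove bad input gets rejected rather than silently accepted.

```python
import pytest


def parse_quantity(value: str) -> int:
    """Hypothetical example: parse an order quantity, rejecting bad input."""
    qty = int(value)  # raises ValueError on non-numeric input
    if qty <= 0:
        raise ValueError(f"quantity must be positive, got {qty}")
    return qty


def test_parse_quantity_happy_path():
    # Proves the expected behavior for valid input.
    assert parse_quantity("3") == 3


@pytest.mark.parametrize("bad_input", ["0", "-2", "abc", ""])
def test_parse_quantity_rejects_bad_input(bad_input):
    # Negative tests: prove invalid input is rejected, not quietly coerced.
    with pytest.raises(ValueError):
        parse_quantity(bad_input)
```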

I’m looking for ways to get more of the developers on my team writing tests at all. I want the team to put pressure on each other to cover their code with tests, rather than all of that pressure coming from me. I think that by now I’ve shown that I care about the team and the work we produce, but I worry that I could be tuned out if I harp on this point too much.

My very first experiment, and one of the reasons for my rocky start, was introducing these amazing cards that Spartez produced with questions for story mapping and refinement. The product owner on my team balked at the amount of time we spent talking about what he thought were inconsequential things. I set them aside, pretty sure that I had introduced them too early (it was my second week there), and that we might need something more tailored.

The new experiment is the same idea as the cards, but tailored specifically to our product. The list calls out specific dependencies and implications for others that we’ve missed before, along with things we’ve talked about wanting to consider but never actually have. The list is neither sufficient nor necessary, to use mathematical terms; in lay terms, it doesn’t cover everything, and not everything on it will be useful for every story. The product owner has (reluctantly?) agreed to try this for a couple of weeks, so we’ll see.

I’m the only tester on my team, supporting six developers at the moment. They know they need to be more comfortable with testing each other’s code, and we’re taking small steps in that direction. For days when I’m out, I’m leaving behind a checklist of things to consider when testing. Again, this list is neither sufficient nor necessary, but it at least gives a few ideas to try beyond the happy path. We don’t have spare testers here, and the team knows they need to step up and test when I’m not around, or even just when I’m slammed.

Stay tuned for the outcome of these experiments and my next slate of experiments to reduce rework!
