It’s Testing Tuesday! Let’s talk software testing!
To start, software testing is finding weaknesses in software that, when fixed, make it a better product. Software testers don’t break software; they expose how it’s already broken. It is rather fun to say I break things for a living, but some people, particularly developers, respond poorly to that. Software testing is a productive practice in that it improves the end product, though it can feel a little destructive when the tester finds bug after bug.
I’ve been at my new job since the end of March, ostensibly testing software, but mostly learning things. The first project I worked on was a small upgrade to existing software, and I didn’t have much to test, though I had plenty of time to test it. That was good, because I’m learning the formalities of testing and how this specific company does it, and I appreciated having time to get my bearings.
On this project, the vendor supplied recommended test cases, and I was left to my own devices to create a suite of regression test cases. The people who had done the prior upgrades hadn’t left any test cases to run for regression, and the documentation of the tests they did run was virtually non-existent. The learning curve wasn’t too bad, though I did have to ask a lot of questions about some things, and the institutional knowledge of the product wasn’t great. A couple of times, I got the response that something was working as designed when it was really a bug behaving consistently across a subset of items, and when I asked why something was supposed to behave like that, I was told that the person didn’t know. This was frustrating, but the project could have been much more frustrating with less congenial people.
I had a fairly high bug find rate, particularly in light of the number of test cases I ran and the amount of time I spent in ad hoc testing (which for me meant learning my way around and trying random and non-targeted things). As a novice tester, this has me very concerned that the software is really buggy (as opposed to me being very lucky or very good). Sometimes I wonder if the reassurances from my colleagues and managers are just false accolades, but that’s my own insecurity, not the topic of this post.
All told, I really like the company I’m working for, I love the cooperative and collaborative environment, and I find the work to be fun and sometimes challenging. Plus, no one is going to sue me because of my work, so that’s a bonus.
I was given training through SQE to prepare for the ISTQB exam. The certification is for a foundation-level tester, and the exam is 40 questions long with a passing grade of 65%. The training was fine. The teacher was engaging most of the time, and you could tell he had a real passion for testing. I didn’t find it at all useful in helping me do my job, though. It was about theory and vocabulary and forms. The only time we came to concrete techniques was a discussion of equivalence partitioning and boundary value analysis.
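To make those two techniques concrete, here’s a minimal sketch in Python using a hypothetical validator (the `is_eligible` function and its 18–65 range are invented for illustration, not from the training). The idea is to split the input space into partitions (below range, in range, above range) and then test just outside, on, and just inside each boundary, since that’s where off-by-one bugs hide.

```python
def is_eligible(age: int) -> bool:
    """Hypothetical rule under test: eligible if 18 <= age <= 65."""
    return 18 <= age <= 65

# Equivalence partitions: below the range, inside it, above it.
# Boundary values: just outside, on, and just inside each edge.
boundary_cases = {
    17: False,  # just below lower bound
    18: True,   # on lower bound
    19: True,   # just inside lower bound
    64: True,   # just inside upper bound
    65: True,   # on upper bound
    66: False,  # just above upper bound
}

for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"failed at age {age}"
print("all boundary cases passed")
```

A common implementation bug this catches is writing `<` where `<=` was intended: the cases for ages 18 and 65 would fail immediately.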
The training did help fill in some gaps in the self-study I had been doing, but I think the usefulness of the training comes in giving the team a common vernacular. I have kind of a big problem with 65% being a passing grade, though. How should that reassure anyone that the person knows what they’re talking about? Another problem with the certification, aside from its low passing grade, is that it’s a one-time certification with no renewals necessary. There’s no requirement for continuing education, no need to produce work product to show competence. It seems to me to be a meaningless badge of legitimacy that isn’t needed once you have a real job behind you. I think a more valuable thing for a resume would be an online portfolio with a test plan and test cases. But I say all this as someone with a job now; had I not been given a chance, I was planning on getting the certification on my own to show that I at least know something.
I’ve been looking at a lot of resources to help make me a better tester quickly. These have included books, blogs, online resources, and streamed conference presentations. Of the resources I’ve consulted, one of my favorites is James Whittaker’s How to Break Software. Some of it isn’t applicable to what I do, but he gives real-world examples of how things can break. He talks about different kinds of tests to run through human interaction and through manipulating file interaction as well. I just started reading Cem Kaner’s (et al.) Testing Computer Software, and even the first couple of chapters have been really useful. I’ve really enjoyed James Bach’s blog and Michael Bolton’s blog as well. They are both big into rapid software testing and rethinking the way exploratory testing is framed (and rechristening it simply “testing”). Their blogs are full of insights and good ideas for people who want to improve the way they think about software testing. As I finish or discover other resources, I’ll discuss them here.
Until another Tuesday!