job and training

It’s Testing Tuesday! Let’s talk software testing!

To start, software testing is finding weaknesses in software that, when fixed, make it a better product. Software testers don’t break software; they expose how it’s already broken. It is rather fun to say I break things for my career, but some people, particularly developers, respond poorly to that. Software testing is a productive practice in that it improves the end product, though it can feel a little destructive when the tester finds bug after bug.

I’ve been at my new job since the end of March, ostensibly testing software, but mostly learning things. The first project I worked on was a small upgrade to existing software, and I didn’t have much to test, though I had plenty of time to test it. That was good, because I’m learning the formalities of testing and how this specific company does it, and I appreciated having time to get my bearings.

On this project, the vendor supplied recommended test cases, and I was left to my own devices to create a suite of regression test cases. The people who had done the prior upgrades hadn’t left any test cases to run for regression, and the documentation of the tests they did run was virtually non-existent. The learning curve wasn’t too bad, though I did have to ask a lot of questions about some things, and the institutional knowledge of the product wasn’t great. A couple of times I was told that something was working as designed when it was really a bug behaving consistently across a subset of items, and when I asked why it was supposed to behave that way, the person couldn’t tell me. This was frustrating, but the project could have been much more frustrating with less congenial people.

I had a fairly high bug find rate, particularly in light of the number of test cases I ran and the amount of time I spent in ad hoc testing (which for me meant learning my way around and trying random and non-targeted things). As a novice tester, I’m concerned this means the software is really buggy (as opposed to me being very lucky or very good). Sometimes I wonder if the reassurances from my colleagues and managers are just false accolades, but that’s my own insecurity, not the topic of this post.

All told, I really like the company I’m working for, I love the cooperative and collaborative environment, and I find the work to be fun and sometimes challenging. Plus, no one is going to sue me because of my work, so that’s a bonus.

I was given training through SQE to prepare for the ISTQB exam. The certification is for a foundation-level tester, and the exam is 40 questions long, with a passing grade of 65%. The training was fine. The teacher was engaging most of the time, and you could tell he had a real passion for testing. I didn’t find it at all useful in helping me do my job, though. It was about theory and vocabulary and forms. The only time we came to concrete techniques was a discussion about equivalence partitions and boundary analysis.
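Those two techniques are easy to sketch in code. Here’s a rough illustration of what the classroom discussion boiled down to; the `validate_age` function and its 18–65 valid range are invented for this example, not something from the course:

```python
# Hypothetical validator, made up purely to illustrate the techniques.
# Suppose the spec says ages 18 through 65 inclusive are valid.
def validate_age(age: int) -> bool:
    """Accept ages in the inclusive range 18..65."""
    return 18 <= age <= 65

# Equivalence partitioning: the input space splits into three classes
# (below range, in range, above range); one representative per class.
partition_cases = {10: False, 40: True, 90: False}

# Boundary value analysis: test each edge and its immediate neighbors,
# since off-by-one bugs cluster at the boundaries.
boundary_cases = {17: False, 18: True, 19: True,
                  64: True, 65: True, 66: False}

for age, expected in {**partition_cases, **boundary_cases}.items():
    assert validate_age(age) == expected, f"age {age} misjudged"
print("all partition and boundary cases pass")
```

The payoff of the technique is economy: nine targeted values cover the same ground as brute-forcing every age.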

The training did help fill in some gaps in the self-study I had been doing, but I think the usefulness of the training comes in giving the team a common vernacular to use. I have kind of a big problem with 65% being a passing grade, though. How should that reassure anyone that the person knows what they’re talking about? Another problem with the certification, aside from its low passing grade, is that it’s a one-time certification with no renewal necessary. There’s no requirement for continuing education, no need to produce work product to show competence. It seems to me to be a meaningless badge of legitimacy that isn’t needed once you have a real job behind you. I think a more valuable thing for a resume would be an online portfolio with a test plan and test cases. But I say all this as someone with a job now, and had I not been given a chance, I was planning on getting the certification on my own to show that I at least know something.

I’ve been looking at a lot of resources to help make me a better tester quickly. These have included books, blogs, online resources, and streamed conference presentations. Of the resources I’ve consulted, one of my favorites is James Whittaker’s How to Break Software. Some of it isn’t applicable to what I do, but he gives real-world examples of how things can break. He talks about different kinds of tests to run through human interaction, and through manipulating file interaction as well. I just started reading Cem Kaner’s (et al.) Testing Computer Software. Even just the first couple of chapters have been really useful so far. I’ve really enjoyed James Bach’s blog and Michael Bolton’s blog as well. They are both big into rapid software testing and rethinking the way exploratory testing is done (and rechristening it simply “testing”). Their blogs are full of insights and good ideas for people who want to improve the way they think about software testing. As I finish or discover other resources, I’ll discuss them here.
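To give a flavor of that file-interaction idea, here’s a small sketch in the spirit of those attacks: hand the code a missing, an empty, and a truncated file and check that it fails gracefully. The `load_settings` function and its JSON format are invented for this illustration, not from the book:

```python
# Hedged sketch of a file-interaction "attack": load_settings and the
# JSON settings format are hypothetical, invented for this example.
import json
import os
import tempfile

def load_settings(path):
    """Return settings as a dict, or {} if the file is unusable."""
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return {}

with tempfile.TemporaryDirectory() as tmp:
    good = os.path.join(tmp, "good.json")
    empty = os.path.join(tmp, "empty.json")
    corrupt = os.path.join(tmp, "corrupt.json")
    with open(good, "w") as f:
        json.dump({"retries": 3}, f)
    open(empty, "w").close()                 # zero-byte file
    with open(corrupt, "w") as f:
        f.write('{"retries": 3')             # truncated JSON

    assert load_settings(good) == {"retries": 3}
    assert load_settings(empty) == {}        # empty file survives
    assert load_settings(corrupt) == {}      # corrupt file survives
    assert load_settings(os.path.join(tmp, "missing.json")) == {}
print("file-interaction attacks handled gracefully")
```

The point isn’t the code itself; it’s the habit of attacking the environment around the program, not just its inputs.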

Until another Tuesday!