the benefits of manual testing, episode 1

I attended PyCon a few weeks ago, and it was a wonderful experience. I met lots of interesting people, heard great talks, and got inspired to get back to programming. One thing I encountered from multiple people was a lack of understanding of what testers do and why it’s necessary, particularly manual testing. Once I explained what I do and described the things I’ve found, I actually received a few job offers. I think that reflected unhappiness with the rote checking that a lot of testers do rather than the targeted, intelligent testing I was describing. I was honest with people about my novice programming skills (solving math problems kind of counts, but I have more ambitious projects coming up). People immediately assumed that the only valid kind of testing was automated testing, though they tried to understand what I do. This post contains thoughts that came out of those conversations. I anticipate that this topic will merit a few posts, hence “episode 1”.

First, automated testing is almost always necessary for performance testing and stress testing, though sometimes stress testing can be approximated with other tools. What manual testing offers is a more detailed look at how the software behaves at every step, because a tester can visually inspect pages, figure out alternate paths to what should be the same outcome, and test the foolish things and the unexpected things. I may be wrong that these things can’t all be automated, but test automation can be expensive and time-consuming to get off the ground and maintain, and it’s not a silver bullet.
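To give a rough sense of why that kind of testing really wants automation, here is a minimal stress-test sketch in Python. The URL, request count, and worker count are all made up for illustration; a real effort would use a dedicated load-testing tool, but the shape is the same: fire many requests at once and see what comes back.

```python
# A minimal stress-test sketch: hit a (hypothetical) endpoint with many
# concurrent requests and report how many succeed and how slow the worst was.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/login"  # placeholder endpoint
TOTAL_REQUESTS = 200
WORKERS = 20

def hit(_):
    start = time.monotonic()
    try:
        resp = requests.get(URL, timeout=10)
        return resp.status_code, time.monotonic() - start
    except requests.RequestException:
        return None, time.monotonic() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(TOTAL_REQUESTS)))

ok = sum(1 for status, _ in results if status == 200)
slowest = max(elapsed for _, elapsed in results)
print(f"{ok}/{TOTAL_REQUESTS} succeeded, slowest response {slowest:.2f}s")
```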

Manual testers function best when we can use creativity. “Checking”, the act of merely inspecting documents to make sure that something in program A matches something in program B, is a waste of talent and money. Manual testers use experience and “hunches” to figure out where the weaknesses are, and they try to exploit those weaknesses in multiple ways. Once a weakness is found and fully understood, an automated test can be written to make sure it doesn’t break again in the future. Manual testers can cover the bulk of the testing just by exploring the software and getting creative with how they approach different scenarios.
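As a rough sketch of what I mean by writing a test for automation once a weakness is understood: suppose exploratory testing turned up a transfer screen that accepted overdrafts. A pytest-style regression test, with a hypothetical `transfer` function standing in for the real code, could pin that behavior down so it can’t quietly come back.

```python
# A regression-test sketch (pytest style) for a weakness found by hand.
# The `transfer` function and its rules are hypothetical stand-ins.
import pytest

def transfer(balance, amount):
    """Hypothetical transfer logic: reject overdrafts and non-positive amounts."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_transfer_rejects_overdraft():
    with pytest.raises(ValueError):
        transfer(balance=100, amount=250)

def test_transfer_rejects_negative_amount():
    with pytest.raises(ValueError):
        transfer(balance=100, amount=-5)
```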

As a few examples: I’ve been working on a project that uses iframes. I knew a little bit about iframe security and how it used to be exploited (and how it can still be exploited in Safari, which is another matter). I thought I might be able to do something with that, so I went through some of the HTTP headers from the iframe’s responses to see what was there. It wasn’t quite as bad as I had hoped, but I did find some suspicious headers that made me question the vulnerability of the server generally. The vendor promises they aren’t using a home-grown server, but our information security team is preparing to go to town trying to break into the software. With the same software, the math wasn’t adding up. I tried a bunch of different combinations of things and confirmed that people were consistently able to do things a financial institution would not want them to do. Basically, everything that we customized or that the vendor bragged about had a high probability of being broken initially. I had a bit of a high from the first few weeks of testing. This is our first experience with this software, so until we know the weaknesses fully, manual testing is most appropriate.
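For the curious, the header poking I’m describing amounts to something like the sketch below; the URL is a placeholder and the headers listed are just the usual suspects, not the vendor’s actual response.

```python
# A sketch of inspecting the response headers behind an embedded iframe.
# The URL is hypothetical; the checks are generic, not vendor-specific.
import requests

resp = requests.get("https://vendor.example.com/embedded-widget", timeout=10)

for name in ("X-Frame-Options", "Content-Security-Policy", "Server", "X-Powered-By"):
    print(f"{name}: {resp.headers.get(name, '<not set>')}")

# A missing X-Frame-Options / frame-ancestors policy suggests clickjacking
# exposure; a chatty Server or X-Powered-By header hints at what's underneath.
```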

Manual testing can have more coverage in some instances, and it can be more effective in finding specific bugs that may not expose themselves until the tester plays a fantastic fool.