I attended PyCon a few weeks ago, and it was a wonderful experience. I met lots of interesting people, heard great talks, and got inspired to get back to programming. One thing I encountered from multiple people was a lack of understanding of what testers do and why it’s necessary, particularly manual testing. Once I explained what I do and described the things I’ve found, I actually received a few job offers. I think that reflected unhappiness with the rote checking a lot of testers do, rather than the targeted, intelligent testing I was describing. I was honest with people about my novice programming skills (solving math problems kind of counts, but I have more ambitious projects coming up). Even though they tried to understand what I do, people immediately assumed that the only valid kind of testing was automated testing. This post contains thoughts that came out of those conversations. I anticipate that this topic will merit a few posts, hence “episode 1”.
First, automated testing is almost always necessary for performance testing and stress testing, though sometimes stress testing can be mimicked with other tools. What manual testing offers is a more detailed look at how the software behaves at every step: a tester can visually inspect pages, figure out alternate paths to what should be the same outcome, and try both the foolish things and the unexpected things. Some of these things can probably be automated, but automated testing can be expensive and time-consuming to get off the ground and maintain, and it’s not a silver bullet.
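To make the performance/stress point concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the URL, the request counts), and a real load test would use a proper tool, but it shows the kind of repetition that only a machine can do: firing hundreds of concurrent requests and timing them all.

```python
# A minimal stress-test sketch. The URL and volumes are hypothetical;
# a real performance test would use a dedicated tool.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/health"  # hypothetical endpoint under test

def hit(_):
    start = time.time()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.time() - start

# Fire 500 requests across 50 worker threads and look at the timings.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(hit, range(500)))

print(f"requests: {len(timings)}")
print(f"slowest:  {max(timings):.2f}s")
print(f"average:  {sum(timings) / len(timings):.2f}s")
```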
Manual testers function best when we can use creativity. “Checking”, the act of merely inspecting documents to make sure that something in program A matches something in program B, is a waste of talent and money. Manual testers use experience and “hunches” to figure out where the weaknesses are, and they try to exploit those weaknesses in multiple ways. Once a weakness is found and fully understood, an automated test can be written to make sure it doesn’t break again in the future. Manual testers can cover the bulk of the testing just by exploring the software and getting creative with how they approach different scenarios.
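As a rough illustration of that hand-off, here is what pinning down a manually discovered weakness as an automated regression check might look like. The endpoint, payload, and expected status code are all made up; the point is only that once the bug is fully understood, a machine can re-check it on every build.

```python
# Regression-test sketch -- run with: pytest test_negative_transfer.py
# The endpoint and expected behavior are hypothetical examples.
import requests  # assumes the requests library is installed

BASE_URL = "https://example.com"  # hypothetical application under test

def test_transfer_rejects_negative_amount():
    """Manually found bug: a negative amount once slipped past validation."""
    resp = requests.post(
        f"{BASE_URL}/api/transfer",
        json={"from": "A", "to": "B", "amount": -100},
        timeout=10,
    )
    # Once the fix is in place, the server should refuse the request outright.
    assert resp.status_code == 400
```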
As a few examples: I’ve been working on a project that uses iframes. I knew a little bit about iframe security and how it used to be exploited (and how it can still be exploited in Safari, which is another matter). I thought I might be able to do something with that, so I went through some of the HTTP headers from the iframe to see what was there. It wasn’t quite as bad as I had hoped, but I did find some suspicious values that made me question the security of the server in general. The vendor promises they aren’t using a home-grown server, but our information security team is preparing to go to town trying to break into the software. With the same software, the math wasn’t adding up. I tried a bunch of different combinations of things to confirm that users were consistently able to do things a financial institution would not want them to do. Basically, everything that we customized, or that the vendor bragged about, had a high probability of being broken initially. I had a bit of a high from the first few weeks of testing. This is our first experience with this software, so until we know its weaknesses fully, manual testing is most appropriate.
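For the curious, the header poking I’m describing doesn’t need anything fancy. Here is a rough sketch (the iframe URL is hypothetical) that just prints the framing- and server-related response headers a tester might want to eyeball:

```python
# A sketch of inspecting an iframe's response headers. The URL is hypothetical.
import requests  # assumes the requests library is installed

IFRAME_URL = "https://vendor.example.com/embedded-widget"  # hypothetical iframe source

resp = requests.get(IFRAME_URL, timeout=10)

# Headers a tester might eyeball when poking at framing and server exposure.
for name in ("X-Frame-Options", "Content-Security-Policy", "Server", "X-Powered-By"):
    print(f"{name}: {resp.headers.get(name, '<not set>')}")
```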
Manual testing can offer more coverage in some instances, and it can be more effective at finding the specific bugs that may not reveal themselves until the tester plays a fantastic fool.