the benefits of manual testing, episode 1

I attended PyCon a few weeks ago, and it was a wonderful experience. I met lots of interesting people, heard great talks, and got inspired to get back to programming. One thing I encountered from multiple people was a lack of understanding of what testers do and why it's necessary, particularly manual testing. Once I explained what I do and described the things I've found, I actually received a few job offers. I think that reflected an unhappiness with the rote checking a lot of testers do, rather than the targeted, intelligent testing I was describing. I was honest with people about my novice programming skills (solving math problems kind of counts, but I have more ambitious projects coming up). People immediately assumed that the only valid kind of testing was automated testing, though they tried to understand what I do. This post contains thoughts that came out of those conversations. I anticipate that this topic will merit a few posts, hence "episode 1".

First, automated testing is almost always necessary for performance testing and stress testing, though stress testing can sometimes be mimicked with other tools. What manual testing offers is a more detailed look at how the software behaves at every step: a tester can visually inspect pages, figure out alternate paths to what should be the same outcome, and try both the foolish things and the unexpected things. I might be mistaken that not all of these things can be automated, but automated testing can be expensive and time-consuming to get off the ground and maintain, and it's not a silver bullet.

Manual testers function best when we can use creativity. "Checking", the act of merely inspecting documents to make sure that something in program A matches something in program B, is a waste of talent and money. Manual testers use experience and "hunches" to figure out where the weaknesses are, and they try to exploit those weaknesses in multiple ways. Once a weakness is found and fully understood, an automated test can be written to make sure it doesn't break in the future. Manual testers can cover the bulk of the testing by simply exploring the software and getting creative with how they approach different scenarios.

As an example, I've been working on a project that uses iframes. I knew a little bit about iframe security and how it used to be exploited (and how it can still be exploited in Safari, which is another matter). I thought I might be able to do something with that, so I went through some of the HTTP headers from the iframe to see what was there. It wasn't quite as bad as I had hoped, but I did find some suspicious tags that made me question the vulnerability of the server generally. The vendor promises they aren't using a home-grown server, but our information security team is preparing to go to town trying to break into the software. With the same software, the math wasn't adding up. I tried a bunch of different combinations of things and confirmed that people were consistently able to do things a financial institution would not want them to do. Basically, everything that we customized or that the vendor bragged about had a high probability of being broken initially. I had a bit of a high from the first few weeks of testing. This is our first experience with this software, so until we fully know its weaknesses, manual testing is most appropriate.
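The header inspection above can be sketched in a few lines of code. This is a minimal, illustrative sketch in Python, not the actual checks I ran: the `audit_headers` helper and the specific findings it reports are my own invention, looking for the kinds of things a tester might flag, such as a page that can be framed by any site, or a `Server` header that leaks a version string.

```python
# Illustrative sketch: flag HTTP response headers that hint at framing
# or server-disclosure weaknesses. The checks below are examples, not
# an exhaustive or authoritative audit.

def audit_headers(headers):
    """Return a list of findings for a dict of HTTP response headers."""
    findings = []
    # Header names are case-insensitive, so normalize before looking them up.
    lower = {k.lower(): v for k, v in headers.items()}

    # No X-Frame-Options and no CSP frame-ancestors directive: the page
    # can likely be embedded in an iframe by any site.
    csp = lower.get("content-security-policy", "")
    if "x-frame-options" not in lower and "frame-ancestors" not in csp:
        findings.append("page can likely be framed by any site (clickjacking risk)")

    # A Server header containing digits usually means a version is exposed.
    server = lower.get("server", "")
    if any(ch.isdigit() for ch in server):
        findings.append(f"Server header leaks a version string: {server!r}")

    # X-Powered-By discloses the backend stack.
    if "x-powered-by" in lower:
        findings.append("X-Powered-By header discloses the backend stack")

    return findings

print(audit_headers({"Server": "Apache/2.4.41", "X-Powered-By": "PHP/7.2"}))
```

In practice you would feed this the headers captured from the browser's developer tools or a proxy; the point is only that a few suspicious headers can justify handing the target over to a security team.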

Manual testing can have more coverage in some instances, and it can be more effective in finding specific bugs that may not expose themselves until the tester plays a fantastic fool.

2 thoughts on "the benefits of manual testing, episode 1"

  1. The creative testing is the only way to find the boundaries of the system. Automation is good for proving that behavior is replicable, for example when the same test must be run for 6 organizations in a SaaS platform. It is also good for enshrining behavior to be retested across future releases.

    The creative testing will run a test once and prove it works or does not work in a specific way, which may or may not be worth automating.

    My current thinking is of 2 approaches to searching the problem space of possible bugs. One is wide coverage, which is expensive and catches predictable bugs, but also prevents recurrence of bugs once fixed. The other is a random walk, where a tester can find hidden oases of bugs that would otherwise not be found.

    To use an exploration metaphor: automated testing is building a fleet of trucks to drive shoulder-to-shoulder across a continent, finding gems. But they won't find the island in the middle of a lake, or climb up the mountains to find things. To really know the extent of the area, you need both types of exploration.
