Last year, I wrote some thoughts about the benefits of manual testing. The posts are here and here. Now that I've been on an agile team for over a year and have been doing some automation myself, I think (hope) I have some more sophisticated thoughts about it.
Our team has two "manual testers" and one "automation engineer". The automation engineer does all of the UI automation (for now), but I've written a lot of service layer tests using SOAP UI, and the other tester has been learning to write them too. The idea is to have the two of us start running (and eventually writing) automated UI tests as well, and we're taking steps toward that end.
The benefits of API testing through SOAP UI have become clear, but so have the challenges. It has cut down on how many flows we need to cover manually in regression testing. I can write service tests while the UI is still being coded, and they're a good way to start our monthly testing cycle. After all, if something is broken at the service level, there's no point in testing manually, and we can pinpoint the error more quickly and precisely. Everything in our app that can be tested at the service level is now automated, which I feel really good about. Or, at least, every call is exercised. There's always more to test, isn't there?
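To make the idea concrete, here's a rough sketch of what one of these service-level checks does. Our real tests are built inside SOAP UI rather than written as code, and the endpoint, envelope, and field names below are made up for illustration; this is just the shape of the thing, expressed in Python.

```python
# Illustrative sketch only: our real tests live in SOAP UI, and the endpoint,
# envelope, and field names here are hypothetical.
import requests
import xml.etree.ElementTree as ET

SERVICE_URL = "https://example.test/accounts/soap"  # made-up endpoint

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:acc="http://example.test/accounts">
  <soapenv:Body>
    <acc:GetAccountRequest>
      <acc:accountId>12345</acc:accountId>
    </acc:GetAccountRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

def test_get_account_returns_active_status():
    # Exercise the service directly, no UI involved.
    response = requests.post(
        SERVICE_URL,
        data=ENVELOPE,
        headers={"Content-Type": "text/xml; charset=utf-8"},
        timeout=10,
    )
    assert response.status_code == 200

    # Pull the status field out of the SOAP response and check it.
    root = ET.fromstring(response.text)
    status = root.find(".//{http://example.test/accounts}status")
    assert status is not None and status.text == "ACTIVE"
```

Because nothing here touches the UI, a check like this can run as soon as the service is deployed, which is why it works so well at the start of a testing cycle.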
A big weakness of service testing is that it is more brittle than manual testing: if something changes in the call (for instance, a value changes from being taken from a detail to being taken from a cookie), the test breaks, even though you wouldn't notice any difference at the UI level. You also can't run all the negative tests (like lockout tests) you might want to run, particularly if you plan to run the suite repeatedly throughout the day. We have two test cases that can only be run once every half hour, and one test case that requires a person to reset the user every time. Those test cases are fairly essential, so we live with the inconvenience.
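Here's a hypothetical example of that kind of brittleness, again sketched in Python with made-up endpoint and field names rather than pulled from our actual SOAP UI project. The test bakes in an assumption about where a value comes from, so a change that is invisible in the browser still breaks it.

```python
# Hypothetical sketch of service-test brittleness: the test hard-codes
# where a value is sent, so moving it breaks the test but not the UI.
import requests

LOGIN_URL = "https://example.test/login"  # made-up endpoint

def test_login_with_token_in_body():
    # This test assumes the session token travels in the request body.
    # If the service is changed to read it from a cookie instead, a browser
    # (and a manual tester) never notices, but this request starts failing
    # and the test has to be rewritten.
    response = requests.post(
        LOGIN_URL,
        data={"username": "testuser", "sessionToken": "abc123"},
        timeout=10,
    )
    assert response.status_code == 200
```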
I still very strongly believe that there is no substitute for getting into the features and playing around, or, to be more formal about it, doing exploratory testing. That's how we find all the interesting bugs. Automated testing is checking; it ensures certain things are stable, not that a product is "good". However, automated testing is (can be?) fast, and it can be an efficient use of time, particularly for regression testing. It can make sure that bugs that were found and fixed stay fixed, but would a person writing an automated test think to try refreshing a page without first finding the issue in exploratory testing?
Perhaps I’ll have more thoughts about this later, but this is what I’ve been thinking about recently. Thoughts from y’all? Criticism? Questions?