automation thoughts — an update

Last year, I wrote some thoughts about the benefits of manual testing. The posts are here and here. Now that I’ve been on an agile team for over a year, and doing some automation, I think (hope) I have some more sophisticated thoughts about automation.

Our team has two “manual testers” and one “automation engineer”. The automation engineer does all of the UI automation (for now), but I’ve written a lot of service-layer tests using SoapUI, and the other tester has been learning to write them too. The idea is to have the two of us start running (and eventually writing) automated UI tests as well, and we’re taking steps toward that end.

The benefits of API testing through SoapUI have become clear, but so have the challenges. It has cut down on our need to manually test all flows during regression testing. I can write service tests while the UI is still being coded, and it’s a good way to start our monthly testing cycle. After all, if something is broken at the service level, there’s no point in testing manually, and we can pinpoint the error more quickly and precisely. Everything in our app that can be tested at the service level is automated, which I feel really good about. Or, at least, all the calls are exercised. There’s always more to test, isn’t there?
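A service-level check of this kind can be sketched in Python; the envelope, namespaces, and field values below are hypothetical stand-ins for whatever a real call through SoapUI (or any HTTP client) would return:

```python
import xml.etree.ElementTree as ET

# Hypothetical SOAP response, standing in for what a real service
# call would return for an imaginary GetAccount operation.
SAMPLE_RESPONSE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:acct="http://example.com/account">
  <soap:Body>
    <acct:GetAccountResponse>
      <acct:status>ACTIVE</acct:status>
      <acct:balance>125.00</acct:balance>
    </acct:GetAccountResponse>
  </soap:Body>
</soap:Envelope>
"""

NS = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "acct": "http://example.com/account",
}

def check_account_response(xml_text):
    """Pull the fields we care about out of the response body."""
    root = ET.fromstring(xml_text)
    status = root.findtext(".//acct:status", namespaces=NS)
    balance = root.findtext(".//acct:balance", namespaces=NS)
    return status, balance

status, balance = check_account_response(SAMPLE_RESPONSE)
assert status == "ACTIVE", f"unexpected status: {status}"
assert balance == "125.00", f"unexpected balance: {balance}"
print("service-level checks passed")
```

The point is that a failure here tells you exactly which call and which field broke, before anyone has opened the UI.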

A big weakness of service testing is that it is more brittle than manual testing: if something changes in the call (for instance, a value changes from being taken from a detail to being taken from a cookie), it breaks the test, even though you wouldn’t notice any difference at the UI level. You also can’t run all the negative tests (like lockout tests) you might want to run, particularly if you plan to run the suite repeatedly throughout the day. We have two test cases that can only be run once every half hour, and one test case that requires a person to reset the user every time. Those test cases are fairly essential, so we make do with them.
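That kind of brittleness can be shown with a toy example. The two response shapes below are hypothetical: in the “before” build a session token lives in the response body, and in the “after” build the server moves it into a cookie. A user at the UI sees no difference, but a check pinned to the body breaks:

```python
# Hypothetical "before" build: token is a field in the response body.
before = {
    "headers": {},
    "body": {"sessionToken": "abc123", "status": "OK"},
}

# Hypothetical "after" build: same behavior at the UI, but the token
# now arrives in a Set-Cookie header instead of the body.
after = {
    "headers": {"Set-Cookie": "sessionToken=abc123"},
    "body": {"status": "OK"},
}

def token_from_body(response):
    # Brittle: assumes the token lives in the body.
    return response["body"].get("sessionToken")

def token_from_anywhere(response):
    # Slightly more resilient: check the body first, then cookies.
    token = response["body"].get("sessionToken")
    if token is None:
        cookie = response["headers"].get("Set-Cookie", "")
        if cookie.startswith("sessionToken="):
            token = cookie.split("=", 1)[1]
    return token

assert token_from_body(before) == "abc123"
assert token_from_body(after) is None          # the brittle check breaks
assert token_from_anywhere(after) == "abc123"  # the looser check survives
print("brittleness demo passed")
```

The looser check costs a little more code but survives an implementation shuffle that a manual tester would never have noticed in the first place.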

I still very strongly believe that there is no substitute for getting in the features and playing around, or, to be more formal about it, doing exploratory testing. That’s how we find all the interesting bugs. Automated testing is checking; it ensures certain things are stable, not that a product is “good”. However, automated testing is (can be?) fast, and it can be an efficient use of time, particularly for regression testing. Automated testing can make sure that bugs that are found and fixed don’t break again, but would a person writing an automated test think to try refreshing a page without first finding an issue through exploratory testing?

Perhaps I’ll have more thoughts about this later, but this is what I’ve been thinking about recently. Thoughts from y’all? Criticism? Questions?