User Acceptance Testing - A Context-Driven Perspective: Biography
Biography
Michael Bolton is the co-author (with senior author James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. A testing trainer and consultant, Michael has over 17 years of experience in the computer industry testing, developing, managing, and writing about software. He is the founder of DevelopSense, a Toronto-based consultancy. He was with Quarterdeck Corporation for eight years, during which he delivered the company's flagship products and directed project and testing teams both in-house and around the world. Michael has been teaching software testing around the world for eight years. He was an invited participant at the 2003, 2005, 2006, and 2007 Workshops on Teaching Software Testing in Melbourne and Palm Bay, Florida, and was a member of the first Exploratory Testing Research Summit in 2006. He is also the Program Chair for TASSQ, the Toronto Association of System and Software Quality, and a co-founder of the Toronto Workshops on Software Testing. He has a regular column in Better Software Magazine, writes for Quality Software (the magazine published by TASSQ), and sporadically produces his own newsletter. Michael lives in Toronto, Canada, with his wife and two children. Michael can be reached at [email protected], or through his Web site, http://www.developsense.com.

Abstract

Hang around a software development project for long enough and you'll hear two sentences: "We need to keep the customer satisfied," and "The customer doesn't know what he wants." A more thoughtful approach might be to begin by asking a question: who IS the customer of the testing effort? The idiom "user acceptance testing" appears in many test plans, yet few outline what it means and what it requires. Is this because it's obvious to everyone what user acceptance testing means? Is it because there is no effective difference between user acceptance testing and other testing activities? Or might it be that there are so many possible interpretations of what might constitute user acceptance testing that the term is effectively meaningless? In this one-hour presentation, Michael Bolton will establish that there is far more to those questions than many testing groups consider. He doesn't think that user acceptance testing is meaningless, so long as the people using the words establish a contextual framework and understand what they mean by "user," by "acceptance," and by "testing." Michael will discuss the challenges of user acceptance testing, and propose some remedies that testers can use to help clarify user requirements--and meet them successfully.
Words (like "user," "acceptance," and "testing") are fundamentally ambiguous, especially when they are combined into idioms (like "user acceptance testing"). People all have different points of view that are rooted in their own cultures, circumstances, and experiences. If we are to do any kind of testing well, it is vital to begin by gaining an understanding of the ways in which other people, even though they sound alike, might be saying and thinking profoundly different things. Resolving the possible conflicts requires critical thinking, context-driven thinking, and general semantics: we must ask the questions "what do we mean?" and "how do we know?" By doing this kind of analysis, we adapt usefully to the changing contexts in which we work; we defend ourselves from being fooled; and we help to prevent certain kinds of disasters, both for our organizations and for ourselves. These disasters include everything from loss of life due to inadequate or inappropriate testing to merely being thought a fool for using approaches that aren't appropriate to the context. The alternative, understanding the importance of recognizing and applying context-driven thinking, is to have the credibility, capability, and confidence to apply skills and tools that will help us solve real problems for our managers and our customers. In 2002, with the publication of Lessons Learned in Software Testing, the authors (Kaner, Bach, and Pettichord) declared a testing community called the Context-Driven School and articulated its principles.
For context-driven testers, a discussion of user acceptance testing hinges on identifying aspects of the context: the problem to be solved; the people who are involved; the practices, techniques, and approaches that we might choose. In any testing project, there are many members of the project community who might be customers of the testing mission [1]. Some of these people include:

- The contracting authority
- The holder of the purse strings
- The legal or regulatory authority
- The development manager
- The test manager
- The test lead
- Technical support
- Sales people
- Sales support
- Marketing people
- The shareholders of the company
- The CEO
[1] Here's a useful way to think of this, by the way: in your head, walk through your company's offices and buildings. Think of everyone who works in each one of those rooms; have you identified a different role?
- Testers
- Developers
- The department manager for the people who are using the software
- Documenters
- The end-user's line manager
- The end-user
- The end-user's customers [2]
- Business analysts
- Architects
- Content providers
- The CFO
- The IT manager
- Network administrators and internal support
- Security personnel
- Production
- Graphic designers
- Development managers for other projects
- Designers
- Release control
- Strategic partners
Any one of these could be the "user" in a user acceptance test; several of these could be providing the item to be tested; several could be mandating the testing; and several could be performing the testing. The next piece of the puzzle is to ask the relevant questions: Which people are offering the item to be tested? Who are the people accepting it? Who are the people who have mandated the testing? Who is doing the testing?
With thirty possible project roles (there may be more), times four possible roles within the acceptance test (into each of which multiple groups may fall), we have a huge number of potential interaction models for a UAT project. Moreover, some of these roles have different (and sometimes competing) motivations. Just in terms of who's doing what, there are too many possible models of user acceptance testing to hold in your mind without asking some important context-driven questions for each project that you're on.
What is Testing?
I'd like to continue our thinking about UAT by considering what testing itself is. James Bach and I say that testing is: questioning the product in order to evaluate it. [3]
Cem Kaner says: "Testing is an empirical, technical investigation of a product, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek." (citation to QAI November 2006)
[2] The end-user of the application might be a bank teller; problems in a teller application have an impact on the bank's customers in addition to the impact on the teller.
[3] James Bach and Michael Bolton, Rapid Software Testing, available at http://www.satisfice.com/rst.pdf
Kaner also says something that I believe is so important that I should quote it at length. He takes issue with the notion of testing as confirmation over the vision of testing as investigation, when he says: "The confirmatory tester knows what the 'good' result is and is trying to find proof that the product conforms to that result. The investigator wants to see what will happen and is expecting to learn something new from the test. The investigator doesn't necessarily know how a test will come out, how a line of tests will come out or even whether the line is worth spending much time on. It's a different mindset." [4] I think this distinction is crucial as we consider some of the different interpretations of user acceptance testing, because in some cases UAT follows an investigative path, and in other cases it takes a more confirmatory path.
I would add:

- assessing compatibility with other products or systems;
- assessing readiness for internal deployment;
- ensuring that that which used to work still works; and
- design-oriented testing, such as review or test-driven development.

Finally, I would add the idea of tests that are not really tests at all, such as a demonstration of a bug for a developer, a ceremonial demonstration for a customer, or executing a set of steps at a trade show. Naturally, this list is not exhaustive; there are plenty of other potential motivations for testing.
[4] Kaner, Cem, "The Ongoing Revolution in Software Testing." PNSQC, 2004. http://www.kaner.com/pdfs/TheOngoingRevolution.pdf
What is Acceptance?
Now that we've looked at testing, let's look at the notion of acceptance. In Testing Computer Software, Cem Kaner, Hung Nguyen, and Jack Falk talk about acceptance testing as something that the test team does as it accepts a build from the developers. The point of this kind of testing is to make sure that the product is acceptable to the testing team, with the goal of making sure that the product is stable enough to be tested. It's a short test of mainstream functions with mainstream data. Note that the expression "user acceptance testing" doesn't appear in TCS, which is the best-selling book on software testing in history. In Lessons Learned in Software Testing, on which Kaner was the senior author with James Bach and Brett Pettichord, neither the term "acceptance test" nor "user acceptance test" appears at all. Neither term seems to appear in Black Box Software Testing, by Boris Beizer. Beizer uses "acceptance test" several times in Software Testing Techniques, but doesn't mention what he means by it. Perry and Rice, in their book Surviving the Top Ten Challenges of Software Testing, say that users should be most concerned with validating that the system will support the needs of the organization. The question to be answered by user acceptance testing is "will the system meet the business or operational needs in the real world?" But what kind of testing isn't fundamentally about that? Thus, in what way is there anything special about user acceptance testing? Perry and Rice add that user acceptance testing includes identifying all the business processes to be tested; decomposing these processes to the lowest level of complexity; and testing real-life test cases (people or things (?)) through those processes. Finally, they beg the question by saying, "the nuts and bolts of user acceptance test is (sic) beyond the scope of this book."
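The build-acceptance sense described in Testing Computer Software, a short test of mainstream functions with mainstream data, might be sketched as a smoke test. This is only an illustrative sketch: the billing function and the acceptance criterion here are hypothetical stand-ins, not drawn from any of the books cited.

```python
# A minimal sketch of build-acceptance ("smoke") testing: exercise
# mainstream functions with mainstream data before deeper testing
# begins. The function under test is a hypothetical example.

def invoice_total(prices, tax_rate=0.13):
    """Mainstream function of an imagined billing product."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

def smoke_test():
    """Mainstream data only: if this fails, the build isn't stable
    enough to be worth the test team's deeper investigation."""
    checks = [
        (invoice_total([10.00, 20.00]) == 33.90, "typical invoice"),
        (invoice_total([]) == 0.0, "empty invoice"),
    ]
    # An empty failure list means: accept the build for testing.
    return [name for ok, name in checks if not ok]

print(smoke_test())
```

Note what such a test deliberately omits: boundary data, error handling, and anything investigative. Its only job is to decide whether the test team accepts the build at all.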
Without a prevailing definition in the literature, I offer this definition: acceptance testing is any testing done by one party for the purpose of accepting another party's work. It's whatever the tester and the acceptor agree upon; whatever the key is that opens the gate for acceptance, however secure or ramshackle the lock. In this light, user acceptance testing could appear at any point on a continuum, with probing, investigative tests at one end, and softball, confirmatory tests at the other.
For example, when the Queen inspects the troops, does anyone expect her to perform an actual inspection? Does she behave like a drill sergeant, checking for errant facial hairs? Does she ask a soldier to disassemble his gun so that she can look down the barrel of it? In this circumstance, the inspection is ceremonial. It's not a fact-finding mission; it's a stroll. We might call that kind of inspection a formality, or pro forma, or ceremonial, or perfunctory, or ritual; the point is that it's not an investigation at all.
the prototype. At this stage, we're asking someone who is unlikely to have testing skills to find bugs that they're unlikely to find, at the very time when we're least likely to fix them. A fundamental restructuring of the GUI or the back-end logic is out of the question, no matter how clunky it may be, so long as it barely fits the user's requirements. If the problem is one that requires no thinking, no serious development work, and no real testing effort to fix, it might get fixed. That's because every change is a risk; when we change the software late in the game, we risk throwing away a lot that we know about the product's quality. Easy changes, typos and such, are potentially palatable. The only other kind of problem that will be addressed at this stage is the opposite extreme: the one that's so overwhelmingly bad that the product couldn't possibly ship. Needless to say, this is a bad time to find this kind of problem. It's almost worse, though, to find the middle-ground bugs: the mundane, workaday kinds of problems that one would hope to be found earlier, that will irritate customers, and that really do need to be fixed. These problems will tend to cause contention and agonized debate of a kind that neither of the other two extremes would cause, and that costs time.

There are a couple of preventative strategies for this catastrophe. One is to involve the user continuously in the development effort and the project community, as the promoters of the Agile movement suggest. Agilists haven't solved the problem completely, but they have been taking some steps in some good directions, and involving the user closely is a noble goal. In our shop, although our business analyst isn't sitting in the development bearpit, as eXtreme Programming recommends, she's close at hand, on the same floor. And we try to make sure that she's at the daily standup meetings.
The bridging of understanding and the mutual adjustment of expectations between the developers and the business is much easier, and can happen much earlier, in this way of working, and that's good. Another antidote to the problem of finding bad bugs too late in the game, although rather more difficult to pull off successfully or quickly, is to improve your testing generally. User stories are nice, but they form a pretty weak basis for testing. That's because, in my experience, they tend to be simple, atomic tasks; they tend to exercise happy workflows and downplay error conditions and exception handling; and they tend to pay a lot of attention to capability, and not to the other quality criteria: reliability, usability, scalability, performance, installability, compatibility, supportability, testability, maintainability, portability, and localizability. Teach testers more about critical thinking and about systems thinking, about science and the scientific method. Show them bugs, talk about how those bugs were found, and the techniques that found them. Emphasize the critical thinking part: recognize the kinds of bugs that those techniques couldn't have found, and recognize the techniques that wouldn't find those bugs but that would find other bugs. Encourage them to consider those other -ilities beyond capability.
parent, that can slow down and annoy the mature user. So: if your model for usability testing involves a short test cycle, consider that you're seeing the program for much less time than you (or the customers of your testing) will be using it. You won't necessarily have time to develop expertise with the program if it's a challenge to learn but easy to use, nor will you always be able to tell if the program is both hard to learn and hard to use. In addition, consider a wide variety of user models in a variety of roles, from trainees to experts to managers. Consider using personas, a technique for creating elaborate and motivating stories about users. [5]
"Validation" seems to be used much more often when there is some kind of contractual model, where the product must pass a user acceptance test as a condition of sale. At the later stages, projects are often behind schedule, people are tired and grumpy, lots of bugs have been found and fixed, there's lots of pressure to end the project, and there is a corresponding disincentive to find problems. At this point, the skilful tester faces a dilemma: should he look actively for problems (thereby annoying the client and his own organization should he find one), or should he be a "team player"? My final take on the validation sense of UAT: when people describe it, they tend to talk about validating the requirements. There are two issues here. First, can you describe all of the requirements for your product? Can you? Once you've done that, can you test for them? Are the requirements all clear, complete, and up to date? The context-driven school loves talking about requirements, and in particular, pointing out that there's a vast difference between requirements and requirements documents. Second, shouldn't the requirements be validated as the software is being built? Any software development project that hasn't attempted to validate requirements until a test cycle late in the game called "user acceptance testing" is likely to be in serious trouble, so I can't imagine that's what they mean. Here I agree with the Agilistas again: it's helpful to validate requirements continuously throughout the project, and to adapt them when new information comes in and the context changes. Skilled testers can be a boon to the project when they supply new, useful information.
[5] Cooper, Alan, The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity. Pearson Education, 2004.
advice might be expected to succeed or fail. Without a healthy dose of context, there's a risk of pouring effort or resources into things that don't matter, and ignoring things that do matter.
I frequently hear people (developers, mostly) saying things like, "I don't know much about testing, and that's why I like using this tool," without considering all of the risks inherent in that statement. I think the Agile community has some more thinking to do about testing. Many of the leading voices in the Agile community advocate automated acceptance tests as a hallmark of Agilism. I think automated acceptance tests are nifty in principle, but in practice, what's in them? When all the acceptance tests pass for a given user story, that story is considered complete. What might this miss? User stories can easily be atomic, not elaborate, not end-to-end, not thorough, not risk-oriented, not challenging. All forms of specification are to some degree incomplete, or unreadable, or both.
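To make the concern concrete, here is a sketch of what the inside of such an automated acceptance test often looks like. The story, the class, and the test are all hypothetical, invented for illustration; the point is how little a passing happy-path check actually establishes.

```python
# Sketch of an automated acceptance test for an atomic user story:
# "a customer can withdraw funds from an account". All names here are
# hypothetical. The test passes and the story is declared complete,
# yet it never probes concurrency, off-by-a-cent overdrafts, huge
# amounts, closed accounts, performance, or usability.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

def acceptance_test_withdraw():
    """Happy-path check taken straight from the user story."""
    account = Account(balance=100)
    assert account.withdraw(40) == 60       # mainstream workflow
    try:
        account.withdraw(1000)              # one token error case
    except ValueError:
        pass
    return "story accepted"

print(acceptance_test_withdraw())
```

A suite of such checks can be green while the questions an investigative tester would ask remain entirely unasked; that, not automation itself, is the problem.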
industry, when we consider all of the different contexts in which software is developed. On a similar thread, we'd have a hard time agreeing on who should set and hold the definitions for those terms. This is a very strong motivation for learning and practicing context-driven thinking.
Conclusion
Context-driven thinking is all about appropriate behaviour: solving a problem that actually exists, rather than one that happens in some theoretical framework. It asks of everything you touch, "Do you really understand this thing, or do you understand it only within the parameters of your context? Are we folklore followers, or are we investigators?" Context-driven thinkers try to look carefully at what people say, and at how different cultures perform their practices. We're trying to make better decisions for ourselves, based on the circumstances in which we're working. This means that context-driven testers shouldn't panic and attempt to weasel out of the service role: "That's not user acceptance testing, so since their definition doesn't agree with ours, we'll simply not do it." We don't feel that that's competent and responsible behaviour.

So I'll repeat the definition: acceptance testing is any testing done by one party for the purpose of accepting another party's work. It's whatever the acceptor says it is; whatever the key is that opens the gate, however secure or ramshackle the lock. The key to understanding acceptance testing is to understand the dimensions of the context. Think about the distinctions between ceremony, demonstration, self-defense, scapegoating, and real testing. Think about the distinction between a decision rule and a test. A decision rule says yes or no; a test is information gathering. Many people who want UAT are seeking decision rules. That may be good enough. If it turns out that the purpose of your activity is ceremonial, it doesn't matter how badly you're testing. In fact, the less investigation you're doing, the better; or as someone once said, if something isn't worth doing, it's certainly not worth doing well.