In the last couple of weeks, we’ve given you a deeper look into the world of User Acceptance Testing. And over this period of time, we have received questions on the workflows and processes behind UAT.
It’s quite a special topic for us too, since our bug tracking & testing software is used by a variety of people and companies to support their User Acceptance Testing efforts.
In today’s blog post I’d like to show you what the actual UAT workflow looks like, from planning to executing to analyzing your UAT efforts.
Let’s get started.
5 steps to your UAT workflow. Here’s how it works.
User Acceptance Testing is complicated. That’s probably what a lot of you think. From creating a UAT plan to executing test cases and analyzing the results, all kinds of departments and people are involved.
And worst of all: UAT takes place at an awkward time: at the end of a project.
Your development team is done with their tasks (at least they should be) and your QA agents start testing the application.
And they have just one main job: Testing if the application works for your users.
I’ll guide you through the following 5 steps:
1. UAT planning
2. Executing your test cases
3. Documentation
4. Evaluation
5. Reporting & lessons learned
1. UAT planning
Planning your User Acceptance Testing efforts is an absolute must. Without proper planning, UAT won’t get you anywhere. Your UAT plan must cover the following areas:
Planning & time management
As a first step, you need to clarify some basic questions: collect information on your UAT schedule, your QA agents, and your testers.
Make sure to have all information in one place in order to set up a realistic UAT plan. And most importantly: draft a concept of what your testing groups should look like.
You need to plan who on your team is involved in the UAT execution and ensure that all responsibilities and tasks are known.
Every person involved in the UAT process should have a clear understanding of what his or her responsibility is.
When setting up your UAT team, make sure everyone is on the same page. Establish clear communication guidelines and prepare your target audience for the test cases.
Communication & Issue strategy
While executing the defined UAT test cases you need to make sure to have a workflow in place which deals with bugs, issues and other problems.
- How are you going to document problems?
- How can testers communicate problems?
Before starting to execute test cases, I recommend making use of a User Acceptance Testing checklist. It will help you stay focused and keep your efforts and to-dos organized.
Based on our UAT experience, we’ve put together this UAT workflow checklist template for you. You can download and view the checklist for free.
2. Executing your test cases
You have a couple of options when executing your UAT test cases. As the name suggests, your testers (= potential users) will now test and evaluate your application against certain test scenarios.
If you provide a global product across various geographical markets, chances are high that you can’t meet these testers in person.
The test cases can be executed in one-on-one sessions via Skype or any other video calling software. During these sessions you will gain a lot of quantitative and, even more importantly, qualitative data from your users.
Especially if the test cases are executed after development is finished, you might end up with new insights you hadn’t taken into consideration. Your assumptions about how much know-how your users have will be put to the test.
3. Documentation
Executing and documenting your User Acceptance Tests should happen at the same time. I’d recommend setting up a system that lets you document all relevant information without losing any data.
Documentation is one thing; clearly defined responsibilities for implementing your users’ feedback are another key factor.
A lot of our customers have set up Usersnap for this step. Bugs, feedback and other abnormalities can be documented on your user’s browser screen. An easy-to-use project dashboard allows your colleagues to get a good overview of important issues. Making sure to delegate and assign priorities is another key thing here.
4. Evaluation
During this phase you need to evaluate whether the defined criteria were tested and met. And most importantly, whether your testers successfully accomplished those criteria.
Has any test case failed? Which problems occurred? How can those problems be resolved, and who is responsible for that?
The quantitative and qualitative data documented earlier need to be analyzed. The following questions need to be answered:
- How many testers completed the test cases?
- What was the overall rating of these test cases?
- What was the overall state of mind of each tester?
- Which emotions came up during the test cases?
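The first two questions above are plain arithmetic once your results are documented in a structured way. Here is a small sketch of that aggregation; the field names and the 1–5 rating scale are assumptions for illustration, not a prescribed format:

```python
# Hypothetical UAT results: one record per tester, with a completion flag
# and an overall rating (assumed 1-5 scale; None if the tester dropped out).
results = [
    {"tester": "t1", "completed": True,  "rating": 4},
    {"tester": "t2", "completed": True,  "rating": 5},
    {"tester": "t3", "completed": False, "rating": None},
]

# How many testers completed the test cases?
completed = [r for r in results if r["completed"]]
completion_rate = len(completed) / len(results)

# What was the overall rating of these test cases (among those who finished)?
avg_rating = sum(r["rating"] for r in completed) / len(completed)

print(f"Completion rate: {completion_rate:.0%}")  # 67%
print(f"Average rating: {avg_rating:.1f}")        # 4.5
```

The qualitative questions (state of mind, emotions) don’t reduce to a formula like this; they need to be written up per tester and per session.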
The evaluation phase is a pretty extensive one, since each and every single test case needs to be analyzed and put into context.
5. Reporting & lessons learned
You might think that the evaluation and reporting phase are the same. They are not.
During the evaluation phase you are collecting, aggregating and analyzing data. In contrast, the reporting phase deals with the bigger picture.
The main goal is to gather insights and lessons learned which will help you to improve your future test cases and UAT workflows. You might also start building relationships with some of your UAT testers during that phase as they are a helpful source for further feedback and insights.
One last tip.
User Acceptance Tests are regularly conducted at the end of a software development phase, when the product is nearly “finished”. But this doesn’t have to be the case. The later problems surface, the more expensive they are to fix.
UAT workflows can be implemented much earlier. With a more agile approach, UAT itself is not a separate project step. Instead, it requires continuous collaboration between all team members and stakeholders.
Throughout the entire development workflow, UAT becomes an essential part of every project stage.
The goal here is to integrate User Acceptance Tests in each and every project step. UAT is not a phase anymore. Ensuring continuous progress means ensuring continuous test and feedback cycles.