How to streamline a bloated testing process

As your application grows, so does the flow of tasks for QA. If you don't rein in that workload in time, it will start to threaten your releases and come to resemble an untended, sprawling bush. I'll explain how, over a couple of months, we cut regression testing from 5 days to 3 while maintaining quality and lowering the stress level for the QA team.



We at FunCorp live in two-week release cycles: we complete tasks and test their logic and negative scenarios. Once every two weeks, we merge the tasks into one branch and freeze it.
Until three months ago, the process after the freeze looked like this:

  1. Integration testing — 2 days. We double-checked every task in the release version of the product, trying to make sure that nothing was broken in the merge.
  2. Smoke tests — a couple of hours. Just to feel confident before the beta test.
  3. Release to beta users — 1–3 days. We collected and analyzed technical and product metrics, looked for crashes, and at the same time fixed the bugs found during integration testing.
  4. Regression testing — 5 days. We do a lot of A/B tests, which means we accumulate a lot of functionality, and the number of test cases grows. Even with automation (about 50% on my project), we had to manually check almost 200 cases.

During the entire working week of regression testing, QA was not working on tasks for the next release. This became a problem for us: the developers got far ahead, and the testing queue kept growing. It was stressful and made us feel like we were always chasing something. On top of that, it was harder for programmers to switch context from their current tasks to bugs we found in work they had finished a week or two earlier.

It hindered the business: we even had to postpone the release a couple of times. The rest of the time, we could not include some tasks in the release because we did not have time to check them (though the most important tasks always made it in). And since we test product hypotheses with each release, and the product data still has to be collected, A/B test results were delayed. This caused even more stress.

It seemed to us that the problem was becoming systemic, which meant it was time to stop and review the process.

We decided to stop redundant testing. If we want to test hypotheses quickly, we need to test the hypotheses themselves, not everything around them. So we shifted the focus of testing to them.

The first thing we agreed on was to merge some tasks into the branch after developer testing alone, without QA involvement. For example, with library updates: if something fails to update, we will see it in the next stages. This reduced the number of tasks in the check queue and removed some of the routine.

And then we accelerated the process after the freeze:

  1. Integration testing and the beta test now run in parallel. Immediately after the code freeze, we run smoke tests and, if everything is okay, we release to beta. We also cut integration testing to about a day: if a task was thoroughly checked before being added to the branch, we only verify the basic logic after the merge. Anything left over will surface in the beta, and we fix it the same day. Now we have more time for other tasks.
  2. Regression testing — 3 days. To reduce the checks at this stage, we decided there was no point in thoroughly testing the entire application every time. We have 20 components, from registration to chat rooms, and two platforms. We selected the most important part of each component and cut the number of test cases to 50–60 per release. We use some of the freed-up time for further automation of checks, and we are now considering how to bring regression testing down to 2 days.
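Trimming a regression suite from ~200 cases to 50–60 comes down to prioritization: guarantee that every component is covered by at least one critical-path case, then fill the remaining budget in priority order. Here is a minimal Python sketch of that idea; the component names, priority scheme, and per-release cap are hypothetical, not our actual test-management setup.

```python
# Hypothetical sketch of selecting a reduced regression suite:
# cover every component at least once, then fill up to a cap by priority.
from dataclasses import dataclass


@dataclass(frozen=True)
class TestCase:
    component: str   # e.g. "registration", "chat" (illustrative names)
    name: str
    priority: int    # 1 = critical path, higher numbers = less important


def select_regression_suite(cases, max_cases=60):
    """Pick the most important cases, guaranteeing each component
    is covered at least once before filling the remaining slots."""
    by_priority = sorted(cases, key=lambda c: c.priority)
    suite, covered = [], set()
    # First pass: the highest-priority case of each component.
    for case in by_priority:
        if case.component not in covered:
            suite.append(case)
            covered.add(case.component)
    # Second pass: fill remaining slots with the next-highest priorities.
    for case in by_priority:
        if len(suite) >= max_cases:
            break
        if case not in suite:
            suite.append(case)
    return suite[:max_cases]
```

With 20 components and a cap of 50–60, the first pass alone spends only a third of the budget, leaving room for the riskiest secondary scenarios.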

Bottom line: we've improved overall task planning, overhauled the testing process itself, increased developer involvement in tasks, and reached the point where A/B tests get to production 30% faster. No QA engineer or release was harmed in the process of improvement.