Let the monkeys test - monkey testing in UI test automation

Can you test user interfaces (UI) automatically without test cases? Clearly, yes.

With monkey testing, user interfaces can be tested automatically from the very first version. The robustness of the software is increased significantly right from the start without a single test case being written. There are limits, however. This article explains how monkey testing works and where those limits lie.

The long road to automated user interface testing (UI test)

In software projects, automated UI tests often become available late, if at all.

The reasons for this are complex. Before a UI test automation system can be programmed, the requirements for the user interface should be known, a first interaction concept including draft forms should exist, the scope of testing should be defined, and a basic understanding of the test system's architecture should be in place.

Even in agile projects, the high initial effort of setting up a UI test automation system is postponed to later iterations, although the requirements (e.g. user stories) and acceptance criteria are available early on. Until the UI test automation is ready for use, the project team knowingly forgoes regular feedback on the quality achieved. Instead, teams try to compensate for the missing automated UI tests with more manual testing (e.g. exploratory testing).

What are the strengths of monkey testing?

Monkey testing requires little information about the user interface, so the monkeys can be let loose on the first testable version. Monkey testing improves software quality right from the start, because the robustness of the user interface is tested automatically, without manual effort.

Monkey testing closes the gap between the first testable version and the UI test automation that is usually available later.

Even after the introduction of UI test automation, monkey testing shows its strengths. UI test automation is often incomplete, because program changes are still pending or the functionality to be tested has not yet been implemented.

What is monkey testing?

Monkey testing is a random test procedure based on the Infinite Monkey Theorem. It states that a monkey randomly hitting keys on a typewriter for long enough will at some point produce the complete works of William Shakespeare. Transferred to software testing, a monkey that interacts with a user interface for long enough would eventually exercise all functions and discover all existing errors.

So much for theory - in practice, this claim does not hold up for long. There are no monkeys typing on test computers, nor is there error-free software. And even a monkey, or a test system simulating one, that came very close to this ideal would still need a very, very long time.
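The core idea can be sketched in a few lines. The following is a minimal illustration, not a real monkey testing tool: the user interface is modeled as a plain dictionary mapping each screen to the actions available on it, where each action leads to a follow-up screen. All names here (`UI_MODEL`, `run_monkey`) are assumptions for the sketch; a real monkey tester would drive actual widgets, e.g. via a browser automation library.

```python
import random

# Toy UI model: screen -> {action name: follow-up screen}.
UI_MODEL = {
    "list":   {"open_item": "detail", "new_item": "edit"},
    "detail": {"edit": "edit", "back": "list"},
    "edit":   {"save": "list", "cancel": "list"},
}

def run_monkey(steps, seed=None):
    """Perform random actions and return the set of (screen, action) pairs exercised."""
    rng = random.Random(seed)
    screen = "list"
    exercised = set()
    for _ in range(steps):
        # Pick a random action available on the current screen ...
        action, target = rng.choice(sorted(UI_MODEL[screen].items()))
        exercised.add((screen, action))
        # ... "click" it and move to the follow-up screen.
        screen = target
    return exercised

# With enough steps, the monkey stumbles over every action of this tiny UI.
coverage = run_monkey(steps=2000, seed=42)
all_actions = {(s, a) for s, acts in UI_MODEL.items() for a in acts}
print(len(coverage), "of", len(all_actions), "actions exercised")
```

The point of the sketch is the trade-off described above: no test cases are needed, but coverage is only reached probabilistically, by running long enough.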

When does monkey testing make sense?

The biggest advantages of monkey testing are its early availability and the low cost of setting up the test infrastructure. A moderately intelligent monkey testing system can be installed and configured within a few hours (at least with our monkey testing tool). Monkey testing can therefore be used as soon as the user interface is operational - in most projects, a point at which automated functional tests are still a long way off.

In contrast to automated functional tests, the monkey testing system does not have to be adapted to every change in the functional requirements, however small. Even if the test monkey finds comparatively few errors, they become known at a very early stage and can be corrected promptly. The prompt feedback also makes it possible to identify potential technical risks and problems more quickly and to take countermeasures early.

The risk of regression can also be reduced significantly through monkey testing. The ambition to automate all functional tests, so that every feature already implemented can be retested at any time, often fails due to a lack of resources in the test team or insufficient prioritization of UI test automation.

In an agile project environment, teams also face frequent changes to functional requirements. Without UI test automation that keeps pace with the current requirements, regression tests can hardly be carried out with sufficient regularity and breadth. Trying to compensate for the gaps in UI test automation with more manual regression tests quickly causes the automation to fall even further behind. As a result, regression tests are often performed incompletely or too rarely. Here, a simple but regular connection test that opens and saves every form of a web application is often sufficient: it detects careless errors, such as those caused by faulty source code merges or incorrect application configurations.
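Such a connection test can be very small. The sketch below is an assumption-laden illustration: the `Form` class is a stand-in for real UI access (for example, a Selenium page object that navigates to the form and clicks its save button); the class and function names are invented for this example.

```python
# Stand-in for real UI access to a single form of the application.
class Form:
    def __init__(self, name):
        self.name = name
        self.opened = False
        self.saved = False

    def open(self):
        self.opened = True   # real code: navigate the browser to the form

    def save(self):
        if not self.opened:
            raise RuntimeError(f"{self.name}: save before open")
        self.saved = True    # real code: click save and await the result

def connection_test(forms):
    """Open and save each form; return the names of the forms that failed."""
    failures = []
    for form in forms:
        try:
            form.open()
            form.save()
        except Exception as exc:  # any exception counts as a broken form
            failures.append((form.name, str(exc)))
    return failures

forms = [Form(n) for n in ("customer", "order", "invoice")]
print(connection_test(forms))  # → [] when every form opens and saves
```

Exactly this kind of loop catches the careless errors mentioned above: a form that no longer opens or saves after a bad merge or misconfiguration shows up immediately in the failure list.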

What are the limits of monkey testing?

Monkey testing provides no information about whether an application's requirements have been implemented correctly. For that, functional tests still have to be created and executed, manually or automatically. Monkey testing can, however, make a valuable contribution to checking an application's robustness against incorrect operation, which in many cases reduces the number of negative tests that must be checked explicitly.

Monkey testing CANNOT replace the functional testing of the requirements, but it can supplement it. Some examples, such as connection tests or reducing the number of negative test cases, have already been mentioned in this article. Other main areas of application are crash-test and load-test scenarios.

Here are a few examples from our day-to-day project work:

Missing functional tests in the Scrum team
After the second sprint, the CI/CD system is so well established that automated builds can be delivered. However, functional tests to verify their quality are still missing.

Technical migration to another database system
A web application was migrated to another database system and installed on the test environment. The automated tests have not yet been adapted to the new database system due to a lack of time. It is unclear whether the deployment was successful and whether the test environment can be released for manual testing.

Incorrect source code merge
A feature branch was integrated into the wrong version of the trunk. The developer responsible for this noticed and corrected his mistake in good time with the help of the automated unit tests. However, it is unclear whether all configuration files were checked in with the correct version.

Integration of new features
An extensive feature has been integrated for which there are no automated tests yet. In order to relieve the test team, the release candidate should only be released for a manual integration test if it is sufficiently stable.

Optimization of the functional test automation
A release is to be installed on the customer's environment for the acceptance test. For reasons of time, only positive test cases were automated and the last manual regression test was a few weeks ago. Since several serious bugs were discovered by chance in the recent past, there are doubts about the robustness of the release.

For which type of software does monkey testing produce the greatest benefits?

Monkey testing achieves the greatest benefit when user interfaces have standardized behavior and are simply structured.

The user can perform many actions in one area of the user interface, but does not have to traverse complex navigation sequences to reach other areas. This applies above all to software for schematized data maintenance (searching, creating, editing, and deleting objects).

For complex user interfaces with many different navigation sequences, whose branches depend on the data constellation, the probability that a test monkey can traverse all branches in finite time drops significantly.
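A back-of-the-envelope calculation (an assumption for illustration, not from a measurement) shows why: if every screen offers b equally likely actions, the chance that a purely random monkey follows one specific navigation path of depth d is (1/b)^d.

```python
# Probability that a random monkey takes one specific navigation path,
# assuming b equally likely actions per screen and path depth d.
def hit_probability(branching, depth):
    return (1.0 / branching) ** depth

# A shallow CRUD mask is reached quickly ...
print(f"b=4, d=2:  {hit_probability(4, 2):.4f}")   # → 0.0625
# ... while a deep, data-dependent branch is practically out of reach.
print(f"b=4, d=10: {hit_probability(4, 10):.2e}")
```

The exponential decay with depth is exactly why monkey testing pays off most for flat, uniformly structured user interfaces and loses effectiveness on deeply branched ones.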


In our web projects, we use monkey testing to run connection tests against each new deliverable software version. Only if these pass do we run the sometimes very complex functional tests.

Since introducing the test monkeys, we have been able to increase the robustness, and thus the quality, of the delivered software versions - in some cases significantly. This quality gain comes without any manual testing effort.

For us, monkey testing has established itself as a further, permanent component of test automation. In return for low implementation and usage costs, it provides fast and meaningful feedback on the quality of each new software version.

Keywords: Agile testing, monkey testing, Selenium, software testing, test automation, test planning, UI testing, random-based testing