UI testing is thus a test of the entire system and reveals those errors that are especially relevant to the user. But how can such testing activities be automated? The first part of this article answers this question by outlining approaches for the automated control of a UI. However, this is only half of the job: the UI encapsulates a complete system. In order to create reproducible tests, it must be possible to set up a defined initial state of the system before testing, and to restore this initial state after a test. The second part of this article outlines a number of approaches that can be used to achieve this.
1. AUTOMATED UI OPERATION
There are two alternatives for automating the operation of the UI. The first approach tests the UI from the outside; the second establishes internal access for controlling the UI.
1.1 APPROACH 1: TESTING THE PRESENTATION
In this approach, the UI is regarded as a black box, and tests are carried out via the interface that is available to the end user as well, i.e. the actual UI (see Figure 1). If, for example, a web application is tested, then the tests simulate user actions in the browser. In this case, it is not relevant whether the application has been implemented with JEE, PHP, ASP or jQuery. Depending on the platform on which the tests are executed, there are various tools that enable the simulation of user interaction; for example Selenium [i] for the automation of tests in web browsers, or DeviceAnywhere [ii] for the automation of tests on mobile devices.
ADVANTAGES
- The actual UI presentation can be tested, e.g. the arrangement of elements, fonts and colours.
- It is possible to conduct testing on different platforms, e.g. testing with different browsers, operating systems or mobile devices.
- The application does not need to be modified to create the tests. Likewise, there is no need to have the source code at hand. In extreme cases, the application to be tested can even be available in the installed form only.
DISADVANTAGES
- It is possible that tests can no longer be executed as a result of even minor adjustments in the presentation. In the worst case, new identifiers for UI elements are assigned during compilation, which are then no longer traceable even though nothing has been changed, from a functional point of view. This may entail major efforts to keep the tests up-to-date and executable at all times.
1.2 APPROACH 2: TESTING THE PRESENTATION MODEL
The second approach is based on Martin Fowler’s [iii] separation of the UI into ‘view’ and ‘presentation model’, as also demonstrated by the MVVM Design Pattern [iv]. It involves moving the behaviour and the state from the view to the presentation model. Thus, the view only concerns the presentation, while the behaviour and the state are implemented in the presentation model. To control the application, the tests can make direct use of the presentation model (see Figure 2). They can be implemented in the same programming language as the presentation model.
ADVANTAGES
- The strong linkage between the presentation model and the tests simplifies any refactoring.
- Provided the rendering of the UI can be switched off, the execution is accelerated.
- It is possible to proceed in keeping with the Test First approach [v].
DISADVANTAGES
- If this approach has not been adopted at the beginning of a software project, laborious adjustments are often required to establish a presentation model later on.
- The view itself is not tested.
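To make the idea concrete, here is a minimal sketch in Python (the class and all names in it are illustrative, not taken from any particular framework): a login form whose state and behaviour live entirely in a presentation model, which a test can drive directly, without any rendered view.

```python
class LoginPresentationModel:
    """Holds the state and behaviour of a login form; the view only renders it."""

    def __init__(self, authenticator):
        self.username = ""
        self.password = ""
        self.error_message = ""
        self._authenticator = authenticator

    @property
    def login_enabled(self):
        # The view would bind the login button's enabled state to this property.
        return bool(self.username) and bool(self.password)

    def login(self):
        if self._authenticator(self.username, self.password):
            self.error_message = ""
            return True
        self.error_message = "Invalid credentials"
        return False


# A test drives the presentation model directly -- no browser, no rendering.
model = LoginPresentationModel(lambda user, pw: (user, pw) == ("alice", "secret"))
assert not model.login_enabled                    # empty form: button disabled
model.username, model.password = "alice", "wrong"
assert model.login_enabled
assert not model.login() and model.error_message == "Invalid credentials"
model.password = "secret"
assert model.login()
```

Because the tests exercise only plain objects, they run fast and survive purely visual changes to the view; only the view's bindings to the model remain untested.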
2. INITIAL STATES
The preceding sections have shown how to automate test execution, which raises the follow-up question that was hinted at already in the introduction: How can a defined initial state of the system be set up before testing, and how can it be restored after a test? Figure 3 provides an overview of the four approaches which are considered in more detail below.
2.1 APPROACH 1: VIA THE UI
The simplest solution is to write a specific test that establishes the required initial state. That is to say, the same approach is used to create the test data as for the test itself.
ADVANTAGES
- The system is tested purely from the UI perspective.
- It is only possible to execute tests using data and states that can also be created via the UI. So, it is not possible to use fictitious test data that cannot be created by a user.
- No additional techniques are needed to obtain the test data. This approach can therefore be used for a quick creation of a first set of tests.
- The application does not need to be changed in order to make the test data available. This enables testing on productive systems.
- Sometimes this is the only way of generating the test states, e.g. during testing of pre-installed systems.
DISADVANTAGES
- It is not always possible to restore the system to its original state via the UI, especially when a test has not been successfully completed.
- It is not always possible to create the desired initial state via the UI.
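The approach can be sketched as a setup routine that replays exactly the UI actions a test would perform. In the illustrative Python sketch below, `AddressBookUi` is a hypothetical page-object-style driver; for the example it is backed by a simple in-memory list, whereas in a real suite its methods would be implemented with a UI automation tool such as Selenium.

```python
class AddressBookUi:
    """Hypothetical page-object API; here backed by an in-memory stand-in.
    In a real test suite these methods would drive the actual UI."""

    def __init__(self):
        self._entries = []

    def create_entry(self, name, city):
        self._entries.append({"name": name, "city": city})

    def delete_all_entries(self):
        self._entries.clear()

    def entry_count(self):
        return len(self._entries)


def set_up_initial_state(ui):
    """Establish the defined initial state with the same UI actions a test uses."""
    ui.delete_all_entries()            # restore a clean slate first
    ui.create_entry("Alice", "Vienna")
    ui.create_entry("Bob", "Graz")


ui = AddressBookUi()
set_up_initial_state(ui)
assert ui.entry_count() == 2           # every test now starts from a known state
```

Note that the delete step illustrates the restore problem mentioned above: it only works if the UI actually offers a way back to the clean state.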
2.2 APPROACH 2: DATA SET EXPORT/IMPORT
In this approach, the test data are generated in the application. This persistent data set is subsequently exported and imported into the test system before each test run. This means that it is always exactly the same test data that are available for a test. Accordingly, the application needs to have an export/import or backup/restore function. This is implicitly the case if a database is used. If the data are stored in the file system, the files can simply be copied.
ADVANTAGES
- Exact reproducibility of the test data, including automatically assigned unique identifiers.
- Comprehensive test data can be loaded in a short time before executing the test.
- Tests with live production data are possible by exporting the data from the productive system and importing them into the test system.
DISADVANTAGES
- Maintenance of the test data is time-consuming in cases of modified data structures.
- With this method, it is not possible to simulate states that are held only in memory (e.g. cache).
- The imported data can be invalid, meaning that the application could never have created the data in that form. This is the case, for example, if the exported data were manipulated manually.
2.3 APPROACH 3: MOCKING
In this approach, the behaviour is controlled by so-called mocks. The mocks take the place of specific objects in the application and override their original behaviour. For example, the logic for storing an address can be overridden with a mock in such a way that it deliberately generates an error. The Inversion of Control pattern, usually realized through Dependency Injection [vi], is useful when inserting a mock into an application. Another possibility is to use dedicated mocking libraries such as JMock [vii] for Java or NMock [viii] for .NET.
ADVANTAGES
- Mocks make it possible to override specific application behaviour in a targeted way.
- Wrong behaviour can be simulated deliberately and easily.
DISADVANTAGES
- The ‘mocked’ application differs from the real application. It cannot be guaranteed that the real application will work if the mocked application does.
- Too many mocks further increase complexity and can impede clear understanding of application behaviour.
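The article names JMock and NMock; the same idea expressed with Python's standard `unittest.mock` library looks as follows. The `AddressService` and `AddressForm` classes are invented for illustration; the service is injected into the form, so a mock can take its place and deliberately fail.

```python
from unittest.mock import Mock


class AddressService:
    def store(self, address):
        # The real implementation would persist the address.
        raise NotImplementedError


class AddressForm:
    """The code under test; the service is injected (Inversion of Control)."""

    def __init__(self, service):
        self._service = service
        self.error_message = ""

    def save(self, address):
        try:
            self._service.store(address)
            self.error_message = ""
        except IOError:
            self.error_message = "Address could not be saved"


# The mock replaces the real service and deliberately generates an error:
failing_service = Mock(spec=AddressService)
failing_service.store.side_effect = IOError("database down")

form = AddressForm(failing_service)
form.save("Main Street 1")
assert form.error_message == "Address could not be saved"
failing_service.store.assert_called_once_with("Main Street 1")
```

This shows the typical use case: an error path that would be hard to trigger through the real storage layer becomes a one-line setup on the mock.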
2.4 APPROACH 4: SIMULATOR
The prerequisite for this fourth and last approach is that the application interacts with peripheral systems through clearly defined interfaces. Instead of working with the actual peripheral systems, the tests run against simulations of them. If, for example, an application accesses a CRM system via a web service interface, then instead of using a real CRM system for the tests, a simulator can be created that mimics the CRM system behind the web service interface. Of course, physical peripheral systems such as sensors, instruments or machines can also be simulated.
ADVANTAGES
- Decoupling of peripheral systems can be achieved, because the desired state can be easily simulated even for complex systems such as larger machines.
- A simulator makes it easy to provoke exceptions and borderline cases.
- Often, a simulator can respond to queries more quickly than the real peripheral system, which enables faster test execution.
DISADVANTAGES
- The simulation of complex systems is a laborious undertaking. In most cases, the simulation must be limited to a few use cases, and the simulator may then need to be extended when new tests are written.
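The CRM example above can be sketched with a few lines of Python. The `/customers/42` endpoint and its JSON shape are invented for illustration; the simulator is an in-process HTTP server, and the application under test would simply be configured with the simulator's URL instead of the real CRM system's.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class CrmSimulator(BaseHTTPRequestHandler):
    """Simulates the CRM system behind its (hypothetical) web service interface."""

    def do_GET(self):
        if self.path == "/customers/42":
            body = json.dumps({"id": 42, "name": "Alice"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)   # borderline cases are easy to provoke
            self.end_headers()

    def log_message(self, *args):     # keep test output quiet
        pass


# Start the simulator on a free local port, in the background.
server = HTTPServer(("127.0.0.1", 0), CrmSimulator)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application under test is configured with the simulator's URL:
url = f"http://127.0.0.1:{server.server_port}/customers/42"
with urllib.request.urlopen(url) as resp:
    customer = json.load(resp)
server.shutdown()
assert customer == {"id": 42, "name": "Alice"}
```

Because the simulator answers from memory, it responds instantly and its state can be set up freely, which is precisely the decoupling and speed advantage listed above.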
3. CONCLUSION
Automated testing via the UI is a powerful approach, but it also poses a number of challenges. As demonstrated in this article, it is not sufficient to automate all of the user's actions. The system itself also needs to be set up in a defined initial state, and this has to be done in a reproducible way. A variety of approaches for achieving this have been outlined. But which of these is the best? There is no general answer to this question: the approach that best suits each individual situation has to be chosen.

It is, however, advisable to keep an eye on test automation as early as possible when designing a system, because approaches such as mocking and testing of the presentation model require an adequate architecture. If the application is designed for test automation from the beginning, tests can be created more quickly and are easier to maintain. Other approaches discussed above, such as data set export/import or simulation, can easily be introduced during or even after the development of a system. Even if an application is already in production, there are still opportunities for automated testing, for example by testing the presentation and generating test data via the UI. Especially for productive systems this is an excellent option, because the system is stable and the UI is no longer subject to frequent changes.