Learning Software Testing with Test Studio

Getting started with automation

Many tasks underlie the testing process within the software life cycle. They fall under different testing activities, which vary slightly based on the company's development life cycle strategy and sometimes on the project's criticality. Despite the different names attributed to these tasks, or their classification and grouping, we can still identify a fundamental relationship between testing activities and their tasks.

The test planning activity comprises tasks related to the testing strategy, skills, timelines, tools, environment, and so on.

The design activity comprises tasks whose purpose is to design the tests and the environment based on the strategy and approach adopted during test planning. It consists of choosing and applying design techniques to the software specifications in order to prepare for manual test case generation, test data generation, test case automation, the environment setup, and so on.

The implementation and execution activity is closely tied to the design phase, which it extends by elaborating on each of the preceding tasks. Hence, during this activity, the test cases for the prepared designs are generated, the test data is produced, the automation is scripted, and the environment is implemented. At the end of this stage, the test cases are executed in the testing environment and the results are logged.

The reporting and evaluation activity is responsible for assessing the results against the objectives and exit criteria, and for producing test reports afterwards.

In this chapter, we are interested in test automation, which takes part in the test implementation and execution activities. But when and what should we automate?

Where does automation fit best?

Test automation addresses the problem of achieving high software quality at a lower cost. This statement is relative to the project under test and the maturity of the work environment. Therefore, subduing the factors with a negative impact shifts the automation outcome to our benefit. If we succeed, the advantages that test automation offers over manual testing are in terms of efficiency and time.

Efficiency includes lower vulnerability to errors during execution, automatic repeatability, objectivity in test verification, automatic test parameter tuning to increase defect detection coverage, and realistic load tests.

In terms of time, automation is capable of running a greater number of tests in the same period compared to manual testing. For regression testing purposes, the automated solution can run continuously and repeatedly against the incrementing builds of the application under test.

Among the risks that threaten the success of automation are the training cost, unrealistic expectations as to what the automation tool is capable of doing, failure to account for the startup time after which the return on automation exceeds the resources invested, the improper choice of tests to automate, and the development of unmaintainable test scripts.

These factors are not to be underestimated, since they can directly cause automation to fail and inflict damage on costs and delivery deadlines. The following examples give you an idea of the areas where test automation benefits you the most. The typical test types to be automated are as follows:

  • Tests that are tiresome for humans to perform, such as those that are repetitive in nature; for example, executing the same test case with 100 different inputs
  • Complex tasks necessitating high accuracy, such as those involving computations on decimal numbers with n-digit precision
  • Tests requiring highly consistent results when repeated in the same way, such as those calculating code coverage after executing the same test suite consecutively against the same source code

The list is not limited to these test types. All of these examples are more error prone when performed by humans, whereas they become more efficient with automation tools.
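To make the first two items concrete, here is a minimal Python sketch of a scripted check that runs the same precision-sensitive verification over 100 inputs; the total_with_tax function and the 7 percent tax rate are hypothetical stand-ins for an application under test:

```python
from decimal import Decimal, ROUND_HALF_UP

def total_with_tax(price_cents: int, tax_rate: Decimal) -> Decimal:
    """Hypothetical function under test: price plus tax, rounded to two decimals."""
    price = Decimal(price_cents) / 100
    return (price * (1 + tax_rate)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_totals_over_100_inputs():
    # The same check executed with 100 different inputs -- tedious and error
    # prone by hand, trivial and exactly repeatable when scripted.
    for price_cents in range(100, 10100, 100):
        total = total_with_tax(price_cents, Decimal("0.07"))
        assert total >= Decimal(price_cents) / 100, f"tax lowered the price at {price_cents}"
        assert total.as_tuple().exponent == -2, f"wrong precision at {price_cents}"

if __name__ == "__main__":
    test_totals_over_100_inputs()
    print("100 inputs verified")
```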

The following table summarizes the advantages versus the disadvantages of test automation:

| Advantages | Disadvantages |
| --- | --- |
| Lower vulnerability to errors during execution | Training cost for the tool and its scripting language |
| Automatic repeatability and objective verification | Unrealistic expectations of what the tool can do |
| More tests run in the same period of time | Startup time before the return exceeds the invested resources |
| Continuous regression runs against incrementing builds | Improper choice of the tests to automate |
| Realistic load tests and wider defect detection coverage | Unmaintainable test scripts |

Having said all this, automation is still a losing proposition if the underlying automated tests are not designed with a long-term vision and maintainability in mind. For this purpose, the next section talks about a few automation strategies, which we are going to apply throughout this book.

Test strategies

Software quality has functional and nonfunctional characteristics. Functional tests are used to verify that the software abides by its functional specifications; in other words, they make sure that the software succeeds in performing what it is supposed to do. In contrast, nonfunctional tests are concerned with software attributes that are not related to functionality: they test how the software performs the needed functionalities rather than what those functionalities are. Among the nonfunctional testing types, we list the following:

  • Performance testing: This measures how well the application performs in terms of time and resource usage when exposed to predefined user and activity rates
  • Load testing: This measures the application's behavior when subjected to an increasing load of parallel users and transactions, varying both within the boundaries of the defined specifications
  • Stress testing: This measures the application's behavior beyond the limits of the defined concurrent users and transactions
  • Usability testing: This measures how easy it is for a new user to understand the application and intuitively find the means to achieve the desired operations

Test strategies provide solid designs for functional tests, while the nonfunctional ones build on these tests by executing them repeatedly over a period of time, under high user load, and perhaps with scarce resources.
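To illustrate how a nonfunctional test can build on a functional one, the following minimal Python sketch (not Test Studio code) reruns an arbitrary functional test concurrently to approximate a load test; the functional_test body and the user counts are placeholders:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def functional_test() -> float:
    """A stand-in for any automated functional test; returns its duration."""
    start = time.perf_counter()
    # ... drive the application here (for example, submit a login form) ...
    time.sleep(0.05)  # placeholder for real work
    return time.perf_counter() - start

def load_test(users: int, iterations: int) -> None:
    """Rerun the functional test concurrently to approximate parallel users."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        durations = list(pool.map(lambda _: functional_test(), range(users * iterations)))
    print(f"{users} users x {iterations} runs: "
          f"avg {sum(durations) / len(durations):.3f}s, max {max(durations):.3f}s")

if __name__ == "__main__":
    for users in (1, 10, 50):  # increase load within (or beyond) the specified limits
        load_test(users, iterations=5)
```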

Capture and playback

The first approach to automating functional tests is capture and playback. The outcome of this task is an automated test whose specialized steps are built on top of the test execution tool's language. It is usually produced by the tool automatically recording the manual execution of the test case.

If you have the impression that this type of automation is brittle, you are right. During recording, capture and playback captures the UI interactions performed in the context of the system's state at that time. That state is known to vary with factors such as the environment or component upgrades; after all, we automate test cases precisely so we can rerun them when we suspect that the system has regressed. System changes could, for example, result in unexpected windows, UI component changes, or unreachable data. Hence the need to improve our tests to cater for such situations by inserting validation and default handling steps.
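As an illustration of such hardening, the following minimal sketch uses Selenium WebDriver in Python rather than Test Studio's recorder (it assumes a locally available Chrome driver, and the URL and element IDs are hypothetical); the recorded clicks are preceded by a validation step and a default handling step for an unexpected dialog:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()
driver.get("https://shop.example.com/login")  # hypothetical application under test

# Validation step: confirm we actually landed on the login page before replaying.
assert "Login" in driver.title, "Unexpected page; the recorded steps would misfire"

# Default handling step: dismiss a promotional dialog if one happens to appear.
try:
    driver.find_element(By.ID, "promo-close").click()
except NoSuchElementException:
    pass  # no dialog this run; continue with the recorded steps

# The originally recorded interactions follow.
driver.find_element(By.ID, "username").send_keys("jdoe")
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "loginBtn").click()
driver.quit()
```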

Other steps could also be inserted to offer generic functionalities related to formatting data, verifying against databases, and so on. As these steps are inserted, we find ourselves repeating them in all the automated tests, since the tests all revolve around the same set of UI elements. Therefore, you start to feel the need to do the following:

  • Unify the calls to these steps wherever possible, since duplication is not favored in development practices
  • Add common data source utilities to interact with databases or Excel sheets
  • Add string utilities to parse strings based on specific business logic within an application

Eventually, these customized functions grow into modular, reusable test libraries that sit between the test scripts and the tool's automation framework.
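As a sketch of what such a library layer might look like in Python (the module name, element IDs, and parsing rule are all hypothetical), the repeated steps and utilities collect into one importable module:

```python
# testlib.py -- a hypothetical reusable layer sitting between the test scripts
# and the automation framework; all names and rules here are illustrative.
import csv

from selenium.webdriver.common.by import By

def load_test_data(path: str) -> list[dict]:
    """Common data-source utility: read rows of test inputs from a CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def normalize_price(raw: str) -> float:
    """Common string utility: parse a price such as '$1,234.50' per the
    application's business rules."""
    return float(raw.strip().lstrip("$").replace(",", ""))

def login(driver, username: str, password: str) -> None:
    """Shared UI step that every test calls instead of repeating recorded clicks."""
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "loginBtn").click()
```

Each test script then imports testlib and calls these functions instead of repeating the recorded steps, so a UI or business-rule change is fixed in one place.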

Data-driven architecture

Another problem with capture and playback tests is that their input values are tied to them. Hardcoded input values also promote repetition of the same test UI functionality. In addition, it is exhausting and time consuming to create one more test for each input combination. This lack of data scalability suggests another evolution, to a data-driven architecture. The purpose of data-driven testing is to cover massive data combinations with the same test procedure. Instead of having one test for each data combination, we end up having one test for all the data. During execution, these tests populate the UI elements on the screen by reading their inputs from data sources that hold the driving data in tables.
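Using pytest's parametrization as a stand-in for an automation tool's data-binding feature, a minimal sketch of one test procedure driven by a table of inputs could look like this (attempt_login and the rows are hypothetical; in practice the rows would come from a CSV or Excel data source):

```python
import pytest

def attempt_login(username: str, password: str) -> str:
    """Stand-in for the recorded UI steps; a real test would drive the browser."""
    return "welcome" if (username, password) == ("jdoe", "secret") else "denied"

# In practice these rows would be read from an external data source;
# they are inlined here to keep the sketch self-contained.
LOGIN_CASES = [
    ("jdoe", "secret", "welcome"),
    ("jdoe", "wrong", "denied"),
    ("", "", "denied"),
]

@pytest.mark.parametrize("username,password,expected", LOGIN_CASES)
def test_login(username, password, expected):
    # One test procedure executed once per row of the data table.
    assert attempt_login(username, password) == expected
```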

Keyword-driven architecture

Among the application development risks listed earlier is cost. Automation maintenance is a cost that recurs whenever the application's specifications change. The preceding two architectures extend the test scripts' durability with respect to requirements volatility. They can also be pushed to a higher level, where a function inside the application under test, or even a whole test, takes part in the procedure of another test. As an example, we will take two scenarios for an online shopping application.

The first scenario comprises this sequence of actions: the user logs in, adds items to the cart, buys the items, and logs out. The second scenario comprises a different sequence: the user logs in, adds items to the cart, cancels the operation, and logs out. Both scenarios share a common function: adding items to the cart. It is very probable that the way items are added to the cart will change during the project's life cycle. Therefore, if we have 100 test cases that include adding items to the cart, all 100 test cases will have to change!

Some atomic functions can be extracted from the preceding two scenarios, such as logging in/out, adding items to the cart, and cancelling. Had we automated each of these functions independently in its own test and let the other tests build on it, we would have had only one change to make, namely to that atomic function's automated test. So imagine each of the preceding scenarios, representing the main test, calling within its procedure the subtests responsible for executing the needed atomic functions. This approach is called the keyword-driven architecture. At the end of this chapter, there is a section that illustrates this approach through calls to sample keyword tests.
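A minimal Python sketch of the idea (the keyword names and print placeholders are hypothetical stand-ins for automated UI steps) composes both scenarios from the same atomic keywords, so a change to how items are added to the cart is fixed in exactly one place:

```python
# Each atomic function is an independently maintained "keyword";
# scenarios are just ordered lists of keyword names.

def login():     print("user logs in")
def add_items(): print("items added to cart")
def buy_items(): print("items purchased")
def cancel():    print("operation cancelled")
def logout():    print("user logs out")

KEYWORDS = {
    "Login": login,
    "AddItemsToCart": add_items,
    "BuyItems": buy_items,
    "Cancel": cancel,
    "Logout": logout,
}

SCENARIOS = {
    "purchase": ["Login", "AddItemsToCart", "BuyItems", "Logout"],
    "abandon":  ["Login", "AddItemsToCart", "Cancel", "Logout"],
}

def run(scenario: str) -> None:
    for keyword in SCENARIOS[scenario]:
        KEYWORDS[keyword]()  # if adding to the cart changes, only add_items changes

if __name__ == "__main__":
    run("purchase")
    run("abandon")
```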