- The importance of building automation-ready architecture and test designs
- When identifying tests, focus on business-level test design to ensure the tests themselves remain valuable
- Testability of the overall system has to be designed in from the beginning and built from the bottom up
- Quality test data is necessary to enable continuous testing in DevOps
- Any and all test failures must be analyzed carefully
With the coming of Continuous Integration (CI) and Continuous Delivery (CD), software testing has become even more important to delivering quality applications and code to users. Tests are often the first and last line of defense in the CI/CD pipeline – if the tests don’t catch issues that arise, those issues are released through the pipeline. However, to fit in the pipeline, the tests need to be fully automated, and that automation needs to be “100 percent stable” — meaning no test cases should fail for any reason other than issues in the system(s) under test.
In the past, test automation engineers would automate one version of a test but would have to revert to manual runs with each new version. Through this process, many failures would occur due to changes in the application under test. In the CI/CD pipeline, however, developers don’t have time to intervene manually with each new source file and build, so they need automation to be 100% stable. If the automation breaks, so does the CI/CD pipeline.
Though development teams have made strides to automate their tests in the CI/CD pipeline, many still struggle to make this a reality. In fact, 4 out of 5 developers cite a lack of automation as an obstacle to timely code delivery, according to a survey conducted by Codefresh. As developers explore ways to better incorporate automated testing into their pipeline, they should consider these success factors: build automation-ready architecture and test design, ensure system testability, and implement continuous testing in DevOps.
Build Automation-Ready Architecture and Test Design
Test automation is not just a technical challenge; it is also in large part a matter of good test design. If tests are not well-structured or if they are more detailed than necessary, they will be difficult to stabilize and maintain over time and will be sensitive to changes in the systems they’re testing. A modularized method like Action Based Testing (ABT) can help achieve automation-friendly test designs. In ABT, tests are organized in modules and written as sequences of keyword-based “actions.”
- Modules: In ABT, test modules are the core products. They consist of test objectives, which outline a clear and differentiated scope defining what needs to be tested in the module. The tests in the test module are written in a spreadsheet-like format, making them easy for non-technical team members (such as professional functional testers and domain experts) to read and understand, and are defined by a series of “action lines.”
- Actions: Tests are written as a sequence of actions, each with an action keyword (“action word”) defining the action, and zero or more arguments defining the data for the action (including input values and expected results). For example, actions can be UI or API operations-based, such as “select menu” or “start service.” They can also be business-based actions like “create order,” which could place a customer order in an ERP system (with another action checking the invoicing of the order later). Since business tests should not contain navigation, all business actions will also hide navigation details.
Focus on High-Level Test Design: When creating a test case, try to specify those, and only those, high-level details that are relevant for the test. For example, from the end-user perspective “login” or “change customer phone number” is one action; it is not necessary to specify any low-level details such as clicks and inputs. These low-level details should be “hidden” at this time in separate, reusable automation functions common to all tests. This makes a test more concise and readable, but most of all, it helps maintain the test, since the omitted low-level details will not need to be changed one by one every time the underlying system changes. The low-level details can then be re-specified (or have their automation revised) only once and reused many times in all tests.
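The action-line idea above can be sketched as a minimal keyword dispatcher. This is an illustrative sketch, not LogiGear's actual implementation: the action names, the registry, and the helper functions are all assumptions made for the example. Each action word maps to one reusable automation function, so the business-level test lists only actions and data, never clicks or locators.

```python
# Minimal sketch of a keyword-driven ("action word") test runner.
# Action names and helpers are illustrative, not from any real framework.

ACTIONS = {}

def action(name):
    """Register an automation function under an action word."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("login")
def login(user, password):
    # Low-level navigation (clicks, inputs) would be hidden in here.
    return f"logged in as {user}"

@action("check order total")
def check_order_total(order_id, expected):
    actual = "100.00"  # stand-in for querying the system under test
    assert actual == expected, f"order {order_id}: {actual} != {expected}"

def run(test_module):
    """Execute a test module: a list of (action word, *arguments) lines."""
    for action_word, *args in test_module:
        ACTIONS[action_word](*args)

# A business-level test module: only actions and their data, no navigation.
run([
    ("login", "alice", "secret"),
    ("check order total", "SO-1001", "100.00"),
])
```

When the underlying UI changes, only the registered automation functions need revising; the action lines themselves stay stable.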
Include Legacy Applications in the Test Suite: When going through an automation transformation, the tests running on the legacy application and the tests running on the new architecture will need to be executed as one test suite. This works best when tests are designed to be reused and leveraged while transforming. Thus, the test automation and CI/CD tool chain must support both environments and architectures during the transition.
Ensure System Testability
How testers organize and design tests has a big impact on outcomes, but developers also have a role in making automation easier. This ease or lack of ease is part of what is known as “testability.” Everyone involved in the project should understand and agree on testability as a priority. When a plan is made for a system or a feature, one of the first questions should be “how do we test this?” Incorporating a testing and automation strategy early in the life cycle will pay off down the line.
A lot of testability can come from the design and architecture of an application. If a system has a clear structure with tiers, components, and services, tests can be more focused and easier to automate, and can also effectively include non-UI automation. In addition to good application design and architecture, there are dedicated measures that can be taken to specifically help testing and automation. Such measures usually require very little effort but go a long way toward enhancing testability. These include:
- Identifying UI Properties: Testing via user interfaces (UIs) can be challenging because the test has to interact with elements that are meant for human interaction, and UIs are inherently volatile, as they usually undergo frequent changes and improvements. While UI testing tools can help testers identify screen elements with mapping features, creating and maintaining a working version of the automation can be difficult. However, the process can be made much easier when developers assign values for hidden identifying properties. For example, add a “qa id” attribute to an HTML element, or assign the internal “name” property to a control in a desktop application, or even add custom-made control properties for this purpose. While these identifiers cannot be seen by human users, the automation can access them easily, and they usually don’t have to change for new application versions. Developers and testers can work together to define and share such values long before the UI is even built.
- Giving White-Box Access: Another way to help testing is white-box access. User-facing systems often visualize data and events that are processed in an internal model; for example, a set of numbers is shown as a graph. Many tests do not need the user-facing representation. An application should therefore provide non-UI ways for a tester to access — and even modify — data that would otherwise be hard to reach. When a tester has internal knowledge of the application being tested (including structure, design, and implementation), they can perform better tests. For example, in game testing projects, it is difficult to interpret the graphical representation of a game. A tester can better test a game with monsters if there is an API call to answer questions such as “where is the monster?” A game tester may also benefit from control over the game’s time clocks to slow down or freeze what is happening, or from the ability to set a stage or trigger an event. However, this must be done with care to avoid undermining the credibility of bugs found. If possible, testers should get control over any random generator (forcing a test to be deterministic) or, if that cannot be done, at least the ability to set the seed of the random function.
- Achieving the Right Timing by Actively Waiting: Timing can be hard. A test tool often needs to wait for an application’s operation to finish in order to complete the test. A common example is a table that needs to be populated with values before a test can take the next step. The automation script can pause for a preset amount of time, but this can lead to tests breaking if wait times are too short, or becoming slow if they are too long. On virtual machines in particular, waiting times can be hard to predict. It is better to actively wait for a condition to become true. If a suitable condition is not readily available, developers can help by adding one to the application. In the example of the table, they could add a “ready” property that becomes true once the table has finished loading.
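The stable-identifier idea from the first bullet can be illustrated with a small sketch. The attribute name `data-qa-id` is an assumption — teams pick their own convention — and the page snippet is invented for the example; it uses only the Python standard library rather than a real UI testing tool.

```python
# Sketch: locating UI elements by a stable, test-only attribute instead of
# volatile text or layout. "data-qa-id" is an assumed naming convention.
from html.parser import HTMLParser

PAGE = """
<form>
  <input type="text" data-qa-id="customer-phone" placeholder="Phone">
  <button data-qa-id="save-customer">Save</button>
</form>
"""

class QaIdIndex(HTMLParser):
    """Index every element that carries a data-qa-id attribute."""
    def __init__(self):
        super().__init__()
        self.elements = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        qa_id = attrs.get("data-qa-id")
        if qa_id:
            self.elements[qa_id] = (tag, attrs)

index = QaIdIndex()
index.feed(PAGE)

# The automation addresses elements by qa id; these ids are invisible to
# users and rarely change between application versions.
tag, attrs = index.elements["save-customer"]
```

Because the ids can be agreed on before the UI exists, the automation can be written in parallel with development.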
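The seed-control suggestion from the white-box bullet can be sketched as follows. The `spawn_monster` function is a hypothetical stand-in for real game logic; the point is only that injecting a seeded generator makes a test deterministic and reproducible.

```python
# Sketch: making a test deterministic by controlling the random generator.
# spawn_monster() is hypothetical game logic invented for this example.
import random

def spawn_monster(rng):
    """Place a monster at a random cell on a 10x10 grid."""
    return (rng.randrange(10), rng.randrange(10))

# The test injects a seeded generator so every run sees the same world.
rng = random.Random(42)
first_position = spawn_monster(rng)

# Re-seeding reproduces the exact same sequence of positions, so a failure
# can be replayed — and the bug it reveals stays credible.
assert spawn_monster(random.Random(42)) == first_position
```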
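The active-waiting pattern from the last bullet can be sketched as a polling helper. The `wait_until` function and the `ready` flag are illustrative; a real application would expose such a flag itself, and a real tool would raise a timeout error rather than return `False`.

```python
# Sketch of "actively waiting": poll a condition until it holds or a timeout
# expires, instead of sleeping for a fixed, guessed amount of time.
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

class Table:
    """Stand-in for a UI table; developers expose `ready` for the test."""
    def __init__(self):
        self.ready = False

table = Table()
table.ready = True  # in a real run, the application sets this after loading
assert wait_until(lambda: table.ready, timeout=1.0)
```

The test now finishes as soon as the condition holds, and only slows down (up to the timeout) when the application is genuinely slow.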
Agile teams are particularly well suited for achieving testability, since their structure facilitates a natural cooperation between development, testing, and automation. This is even more true when a DevOps approach is followed. To achieve smooth and automatic integration and deployment, efficient and stable automated testing can be a key contributor throughout the cycle. For example, the ability to redirect part of the traffic through a new version of a component (“A/B testing”) is doable only if it is addressed as part of development. Close collaboration between testers and developers can improve the testability of an application and, in turn, provide better project outcomes.
Implement Continuous Testing in DevOps
While building automation-ready test designs and ensuring system testability are key steps to achieving automated testing, businesses can take their automation to the next level by implementing continuous testing (CT). Continuous testing can lead to faster feedback, quicker release turnaround, and higher customer satisfaction and loyalty, giving businesses the best chance of not only surviving the future of software delivery but thriving in it. Because continuous testing is the most advanced form of testing, it is also the most challenging. For the best continuous testing results, businesses should:
Use Quality Test Data for Continuous Testing: To get started, testers need scripts that set up the test data for the automated tests, saving test teams the time it would take to manually set up tests and import the proper data before each run. From there, they can determine when and how to load and refresh the data depending on their test suites and necessary datasets. For example, a smoke test used to verify basic functionality will only need a small data set with basic data entities, while a test suite verifying bugs related to real data requires a much bigger data set. During execution, the tests will modify the data (adding, editing, or deleting records), so it’s important to refresh the master data after each regression run to make sure the tests have the expected preconditions before the next run. Testers can also set up small dedicated data pipelines to prepare the test data for the test suite that will run in the main development pipeline.
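The sizing-and-refresh idea above can be sketched as follows. The dataset names and sizes are invented, and the refresh step is a stand-in for restoring a real database snapshot.

```python
# Sketch: choosing a dataset per suite and refreshing master data between
# runs. Names, sizes, and the restore step are assumptions for illustration.
SMOKE_DATA = {"customers": 5, "orders": 10}          # basic entities only
REGRESSION_DATA = {"customers": 5000, "orders": 20000}  # realistic volume

def load_dataset(suite):
    """Pick the smallest dataset that satisfies the suite's needs."""
    return SMOKE_DATA if suite == "smoke" else REGRESSION_DATA

def refresh_master_data(snapshot):
    """Restore pristine data so the next run's preconditions hold."""
    return dict(snapshot)  # stand-in for restoring a database snapshot

master = load_dataset("regression")
working_copy = dict(master)
working_copy["orders"] -= 1                # tests mutate the data...
working_copy = refresh_master_data(master)  # ...so refresh before the next run
assert working_copy == master
```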
Automate Test Distribution Systems: Testers should also consider the test agents when provisioning test machines (PCs, laptops, etc.) or virtual machines (in the cloud, in containers, etc.). Once the machines are set up, the dispatcher should distribute the tests efficiently, and any developer or tester should be able to kick off tests to run in parallel. This way, if there is more than one development or scrum team with different needs, each can run automated tests without waiting for another team to complete its run. Once a machine’s job queue becomes empty, teams should shut that machine down to avoid fees (particularly for cloud machines), even if tests on other machines are still running.
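The dispatch pattern above can be sketched with a shared job queue and parallel agents. This is a toy model using only the standard library: the agent count and test names are invented, and each "agent" is a thread rather than a real test machine.

```python
# Sketch: a dispatcher feeding tests to parallel agents from a shared queue.
# Agents stop (and could be shut down) as soon as the queue is empty.
import queue
import threading

def agent(jobs, results):
    """A test agent: pull jobs until the queue is empty, then stop."""
    while True:
        try:
            test = jobs.get_nowait()
        except queue.Empty:
            return  # nothing left: this machine can shut down
        results.append(f"{test}: passed")  # stand-in for running the test
        jobs.task_done()

jobs = queue.Queue()
for name in ["login_test", "order_test", "invoice_test", "report_test"]:
    jobs.put(name)

results = []
workers = [threading.Thread(target=agent, args=(jobs, results))
           for _ in range(2)]  # two parallel agents
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Because agents pull work instead of having it pushed, a slow machine simply takes fewer tests, and idle machines can be released early.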
Select the Optimal Test Suite: Next, test teams need to identify the test cases or test modules they will use in their test suite. While most testers turn to unit tests, these are not the only option. In fact, functional tests can be run against submitted code changes to reduce test run time, shorten feedback cycles, and optimize costs. AI techniques could conceivably be used to identify which areas or components of the application were impacted by code changes and help determine which tests to run.
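The impact-based selection above can be sketched simply. The mapping from source files to tests is an assumption invented for this example; real tools would derive it from coverage data or dependency analysis rather than a hand-written table.

```python
# Sketch: selecting tests impacted by a change. The file-to-test mapping
# is hand-written here; real tools derive it from coverage or dependencies.
IMPACT_MAP = {
    "billing.py": ["test_invoice", "test_order_total"],
    "auth.py": ["test_login"],
    "ui/menu.py": ["test_navigation"],
}

def select_tests(changed_files):
    """Return the minimal suite covering the changes, plus a smoke check."""
    selected = {"test_smoke"}  # always run a cheap smoke check
    for path in changed_files:
        selected.update(IMPACT_MAP.get(path, []))
    return sorted(selected)

suite = select_tests(["billing.py"])
```

Running only `suite` instead of the full regression set is what shortens the feedback cycle for each submitted change.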
Analyze Any Test Failures: Finally, to prevent test failures from being repeated, testers need to quickly identify any new failures due to new code changes. They can then store and analyze failure data to reduce test-result investigation effort in the future, minimizing manual work and improving their ability to successfully conduct continuous testing.
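One way to store and reuse failure data, as described above, is to triage failures by signature so that known issues are separated from genuinely new ones. The normalization and hashing scheme here is a simplistic assumption made for illustration.

```python
# Sketch: triaging failures by signature so known failures are recognized
# automatically. The normalization scheme is a simplifying assumption.
import hashlib

KNOWN_FAILURES = {}  # signature -> ticket id

def signature(test_name, error_message):
    """Collapse a failure into a stable signature for comparison."""
    normalized = error_message.split("\n")[0].lower()
    return hashlib.sha1(f"{test_name}:{normalized}".encode()).hexdigest()

def triage(test_name, error_message, ticket=None):
    """Classify a failure as known (already tracked) or new."""
    sig = signature(test_name, error_message)
    if sig in KNOWN_FAILURES:
        return ("known", KNOWN_FAILURES[sig])
    if ticket:
        KNOWN_FAILURES[sig] = ticket  # remember it for future runs
    return ("new", ticket)

# First occurrence: a new failure, filed under a (hypothetical) ticket.
status, _ = triage("test_login", "ElementNotFound: save button", ticket="BUG-101")
assert status == "new"

# Same failure in the next run: recognized without manual investigation.
status, ticket = triage("test_login", "ElementNotFound: save button")
assert status == "known" and ticket == "BUG-101"
```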
Successfully implementing test automation is more than just a technical challenge; it requires careful planning and cooperation across the entire development and test team. By progressing from well-designed tests and architectures, to setting up the system under test for optimal testability, to implementing testing practices that align with automation, testers and developers alike can improve test automation outcomes.
About the Authors
Hans Buwalda has been working with information technology since his high school years. Buwalda currently serves as Chief Technology Officer for LogiGear. In his career, Buwalda has gained experience as a developer, manager, and principal consultant for companies and organizations worldwide. He was a pioneer of the keyword approach to testing and automation, now widely used throughout the industry. His approaches to testing — action-based testing and soap opera testing — have helped a variety of customers achieve scalable and maintainable solutions for large and complex testing challenges. Buwalda is a frequent speaker at STAR conferences and is lead author of Integrated Test Design and Automation.
Tuan Truong is Director of Test Automation and DevOps Solutions for LogiGear Corporation. He has over 18 years’ experience working in the software engineering industry and a decade in software development management. Truong has a passion for creativity, especially in software development and testing, and in how to deliver quality software faster. Truong holds an MS in Computer Science from the Institute of Francophone for Information Technology and a Bachelor’s Degree in Computer Science from Ho Chi Minh City University of Technology.