Here at Moneybox, our QA team have recently started setting up an automated testing framework. This means tests that were previously manual are now potential candidates for being automated and run by a machine! Obviously this isn’t a magic bullet – by automating tests we are writing code and adding to our codebase, which needs to be optimised and maintained. However, once set up, the ongoing maintenance isn’t a huge burden and the benefits are immense. The automated pack can be run on a regular basis, which means the QA team are free to focus on testing other sections of the app, like new functionality or particularly complicated flows. In this post we’ll give you a look into how our automated tests are set up and how they work.
What kind of automation tests are we running?
We’re focusing on mobile app automation, in particular ‘native apps’, which are created specifically to run on a single platform. At Moneybox we have both iOS and Android apps and run automated UI tests on both, meaning the user interface and the general functionality of the app are tested against the requirements.
As for which tests are good candidates to automate, it’s a matter of perspective. If a test is to be performed infrequently, it doesn’t make much sense to make the effort to automate and maintain it. On the flip side, if a test is being performed very frequently, say every release or even multiple times a release, then it makes sense to consider automation.
Adding to that, if the test is business critical – for example, a login feature or a feature to withdraw money – automation would ensure it’s tested very frequently. It also makes sense to automate tests that require a lot of manual effort or are time consuming, like setting up data or entering a lot of information. There are plenty of other criteria to consider when deciding whether to automate or not, but generally the more repetitive a test is, the more likely it’ll be a prime candidate for automation.
How is our test automation structured and what components have we chosen?
Page Object Model (POM)
For the design pattern we chose the Page Object Model (POM), which is popular for test automation as it reduces code duplication and makes the tests significantly easier to maintain. When writing tests against screens within an app, we need to refer to elements on that screen in order to tap on them and read them to determine whether they work correctly. However, if we wrote the tests directly against those elements, we’d refer to the same element in numerous places: the tests would be brittle to any UI change, and the same code would be repeated throughout.
Using the POM approach, you create a separate class for each page in the application that contains methods that represent all the functionality on that page. The basic rule of thumb for a Page Object is that the methods in the page class should correspond to the actual actions a user can make on a page. In other words, it should allow a software client to do anything and see anything that a human can. The benefit of this is that if there are any changes to the page, we’ll only need to update the Page Object in question once, instead of refactoring all the tests that interact with that page.
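As a sketch, a Page Object for a login flow might look like the following. Everything here is invented for illustration – the `Driver` interface stands in for a real Appium driver, and the element IDs and class names are placeholders:

```java
// Minimal Page Object Model sketch. The Driver interface is a stand-in
// for a real automation driver (e.g. Appium's AppiumDriver).
public class PomSketch {
    interface Driver {
        void tap(String elementId);
        void type(String elementId, String text);
        boolean isVisible(String elementId);
    }

    // One class per screen; its methods mirror what a user can do there.
    static class LoginPage {
        // Element locators live in one place, so a UI change means one edit.
        private static final String EMAIL_FIELD = "email_input";
        private static final String PASSWORD_FIELD = "password_input";
        private static final String LOGIN_BUTTON = "login_button";

        private final Driver driver;
        LoginPage(Driver driver) { this.driver = driver; }

        // Logging in navigates to the next screen, so return its Page Object.
        ThisWeekPage logIn(String email, String password) {
            driver.type(EMAIL_FIELD, email);
            driver.type(PASSWORD_FIELD, password);
            driver.tap(LOGIN_BUTTON);
            return new ThisWeekPage(driver);
        }
    }

    static class ThisWeekPage {
        private final Driver driver;
        ThisWeekPage(Driver driver) { this.driver = driver; }
        boolean isDisplayed() { return driver.isVisible("this_week_title"); }
    }

    // Simple in-memory fake used to demonstrate the flow without a device.
    static class FakeDriver implements Driver {
        public final java.util.List<String> actions = new java.util.ArrayList<>();
        public void tap(String id) { actions.add("tap:" + id); }
        public void type(String id, String text) { actions.add("type:" + id); }
        public boolean isVisible(String id) { return actions.contains("tap:login_button"); }
    }
}
```

A test then reads as a sequence of user actions – `new LoginPage(driver).logIn(email, password).isDisplayed()` – with no element locators leaking into the test itself.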
Cucumber and Gherkin

Next up, we’re using Cucumber, a tool based on Behaviour Driven Development (BDD) that lets us write automated tests in an easily readable and understandable format. This makes it significantly easier for the wider team to understand and contribute to the automated tests. Cucumber reads specifications written in plain English and checks that the software behaves as those specs dictate. The specs (or “feature files”) consist of multiple scenarios, each containing a series of steps for Cucumber to follow and verify.
In order for Cucumber to follow a set of steps, they must be written using certain grammar rules, called Gherkin, which follow the Given When Then (GWT) structure. GWT is a structured way to write tests, where ‘Given’ is used to set various pre-conditions, ‘When’ is used when an action is to be performed, and ‘Then’ is used to dictate the expected outcome. In addition to feature files, Cucumber requires step definitions – the “glue” that maps each step in a feature file to a particular set of code to carry out a certain action.
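To make this concrete, here’s an illustrative scenario for a login flow – the wording is our own invented example, not an actual Moneybox feature file:

```gherkin
Feature: Login

  Scenario: Successful login shows the This Week screen
    Given the app is open on the login screen
    When the user logs in with valid credentials
    Then the This Week screen is displayed
```

Each line is then matched to a step definition; in Cucumber-JVM these are annotated Java methods. A sketch, assuming hypothetical `loginPage` and `thisWeekPage` Page Object helpers:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

public class LoginSteps {
    @Given("the app is open on the login screen")
    public void appIsOpenOnLoginScreen() { /* launch the app and navigate to login */ }

    @When("the user logs in with valid credentials")
    public void userLogsIn() { /* e.g. loginPage.logIn(validEmail, validPassword); */ }

    @Then("the This Week screen is displayed")
    public void thisWeekScreenIsDisplayed() { /* e.g. assert thisWeekPage.isDisplayed(); */ }
}
```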
Maven and TestNG
We’re also using both Maven and TestNG. Maven is a powerful tool that manages a project’s build, dependencies and documentation, connecting to remote repositories and automatically downloading all of the dependencies needed to build the project. Each Maven project contains a file called the POM (Project Object Model) – an XML file, pom.xml, holding details of the project such as its name, version, packaging type, dependencies and plugins.
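As a rough illustration, a minimal pom.xml for a test project might look like this (the group/artifact names are placeholders, and the TestNG version is just one example):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder coordinates for the test project -->
  <groupId>com.example.qa</groupId>
  <artifactId>mobile-automation</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <dependencies>
    <!-- Maven fetches this (and its transitive dependencies) automatically -->
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>7.8.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```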
TestNG is the unit testing framework we’ve chosen to build on, as it inherits all the benefits that JUnit has to offer but is more powerful. In particular, while writing automated tests we need validations that report them as passed or failed, and TestNG provides assertions that let us do this in a single line of code. For instance, if we were checking that logging in worked correctly, we would need a validation that the user landed on the ‘This Week’ screen – for example, checking that the green circle is visible, or that ‘This Week’ is selected in the bottom navigation. Assertions make this very easy, and if an assertion fails, the test fails and we’re alerted.
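A sketch of what such a check might look like with TestNG – `thisWeekScreen` is an assumed Page Object helper for the screen shown after login, not real Moneybox code:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginTest {
    @Test
    public void userLandsOnThisWeekScreenAfterLogin() {
        // thisWeekScreen is a hypothetical Page Object for the post-login screen.
        boolean onThisWeekScreen = thisWeekScreen.isDisplayed();
        // One line: if the condition is false, the test fails with this message.
        Assert.assertTrue(onThisWeekScreen, "Expected the 'This Week' screen after logging in");
    }
}
```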
Appium

Finally, and arguably most interesting of all, we’re using Appium as our mobile test automation framework. Appium is a great choice as it covers almost everything: native apps, hybrid apps and mobile web apps, on both iOS and Android. As mentioned earlier, at Moneybox we have native iOS and Android apps, and under the hood Appium drives them through the vendor-provided frameworks – Apple’s UIAutomation on iOS, and Google’s UiAutomator for Android 4.2+ (with Selendroid covering Android 2.3+). This matters because it means we’re testing the exact same app that we’ll be releasing to our users.
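An Appium session starts by connecting a test to a running Appium server with a set of ‘desired capabilities’ describing the device and the app build under test. A minimal Java sketch – the device name, APK path and server URL are placeholders, and exact capability names vary between Appium versions:

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class AppiumSessionSketch {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");      // or "iOS"
        caps.setCapability("deviceName", "Pixel_Emulator"); // placeholder device
        caps.setCapability("app", "/path/to/app.apk");      // placeholder build under test

        // Connect to a locally running Appium server (default port 4723).
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // ... drive the app via Page Objects here ...
        } finally {
            driver.quit(); // always end the session, even if the test fails
        }
    }
}
```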
We’re very excited to be pushing ahead with our automated tests – by significantly reducing the time taken to run regression testing for each release, we can deliver changes to our customers faster. Automated tests also make our release process much smoother and more controlled, as issues are flagged much earlier in the cycle. We’re continuing to improve our automation test coverage and to get as many of the QA team involved as possible, and we look forward to a future where most tests are automated. Exciting times are ahead!