Estafet: QA for a Leading Smart-Homes Project
Estafet, founded in 2001, is now one of the largest providers of Oracle Fusion, SOA and Process Excellence services to the Finance, Telecom, Retail, Insurance and Professional Services sectors in the UK, as well as to the UK Government. The company helps clients cut costs and increase efficiency by delivering SOA integration, BPM, mobile and cloud solutions, and has completed over 100 successful SOA and integration projects for FTSE 250 and Fortune 500 companies.
The company has been working with British Gas, as a trusted partner, on various aspects of the Hive project – a specialized smart-home integration platform – including back-end work, smart-device integration (smart meters, sensors, alarms, cameras, etc.), front-end development (web and mobile), and testing. The tech stack for the Hive project includes Java, Linux, Amazon AWS, Cassandra/Hazelcast, REST, RabbitMQ/Erlang, Kafka and ZooKeeper.
Summary
A leading Oracle Fusion solutions provider needed dedicated QA services for a long-term Connected Homes/IoT project with one of the largest companies in the UK. Initially, the Minds Technologies QA team joined the project as part of a mixed automation and manual QA team, mostly doing regression testing of the latest versions of the devices and their software.
Later, Minds Technologies joined a strictly manual QA team, responsible for the regression testing of both the older and the most recent releases of the devices. The project is constantly evolving, alongside IoT technology and end-user expectations, making a flexible and scalable QA service a must.
Client: Estafet
Duration: Ongoing
Project: QA Team for Leading Smart-Homes Project
Challenges
The basic British Gas Hive system consists of a Hub, a Thermostat and a Boiler unit, the Hub being the smartest device, controlling all the other elements in the smart-home system (a simplified sketch of this modular layout follows the list below). Additionally, more smart devices can be connected to the Hive home system, such as:
- Electric plugs, controllable via phone or web-app;
- Tunable light bulbs that change light strength and warmth;
- RGB bulbs that change color;
- Window and Door sensors that detect when a window or door has been opened;
- Motion sensors that detect movement in a room;
- Etc.
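For illustration only, the layout described above can be sketched as a hub that pairs with an open-ended set of device types. The class and method names below are assumptions made for the example, not the actual Hive code base (which is Java-based, per the stack above).

```java
// Minimal sketch, assuming a hub that controls whatever devices are paired with it.
// All names here are illustrative; they do not reflect the real Hive implementation.
import java.util.ArrayList;
import java.util.List;

interface SmartDevice {
    String name();
    String status();          // e.g. "on", "off", "open", "closed"
}

record Plug(String name, boolean on) implements SmartDevice {
    public String status() { return on ? "on" : "off"; }
}

record WindowSensor(String name, boolean open) implements SmartDevice {
    public String status() { return open ? "open" : "closed"; }
}

class Hub {
    private final List<SmartDevice> devices = new ArrayList<>();

    // New device types can be paired without touching existing ones,
    // which is exactly why every addition triggers regression testing.
    void pair(SmartDevice device) { devices.add(device); }

    void report() {
        devices.forEach(d -> System.out.println(d.name() + ": " + d.status()));
    }

    public static void main(String[] args) {
        Hub hub = new Hub();
        hub.pair(new Plug("Living room plug", true));
        hub.pair(new WindowSensor("Kitchen window", false));
        hub.report();
    }
}
```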
Since the system is modular and constantly expanding, the introduction of new devices, or even of new functionality for existing devices, is an ever-present challenge. While developers do run basic tests to catch major flaws, thorough in-depth QA is crucial before any change reaches the live system. Continuous regression testing is needed to ensure that all existing devices, and the software that drives them, still function flawlessly after updates, upgrades, new functionality or support for newly connected gadgets.
What we do
After we receive the latest code from the development team, we do a run of fresh, full-fledged Hive kit installations and deployments, mimicking typical end-user scenarios. We then follow the test suites scheduled in TestRail – our specialized test management system – until all test cases have been completed. A representative test suite could include any of the following cases (a simplified test sketch follows the list):
- Testing that heating and hot water can be controlled via the web app, the iOS app and the Android app;
- Verifying that all data is correctly displayed and visualized over the various interfaces;
- Testing that heating and hot water scheduling functions as expected;
- Verifying that bulb- and plug-controls function as expected – both manual and scheduled;
- Carrying out mock-runs for so-called “Recipes” – rules that dictate interactions between the sensors, bulbs and plugs (such as activating a bulb whenever motion is detected in a certain room, or turning on a plug if a specified door or window is opened);
- Etc.
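As an illustration of the first case above, the following is a minimal sketch of an automated check that sets a heating target over REST and reads the state back, in the style of the calls the apps make. The endpoint paths, JSON payload and test-environment URL are assumptions for the example, not the real Hive API.

```java
// Illustrative regression check: set a heating target, then verify the reported state.
// Endpoints, payloads and credentials are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class HeatingControlRegressionTest {

    private static final String BASE_URL = "https://test-env.example.com/api/v1"; // assumed test environment
    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void settingTargetTemperatureIsReflectedInThermostatState() throws Exception {
        // Issue the same kind of call the mobile/web apps would make to change the target temperature.
        HttpRequest setTarget = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/thermostat/target"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"targetCelsius\": 21.0}"))
                .build();
        HttpResponse<String> setResponse = client.send(setTarget, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, setResponse.statusCode());

        // Read the state back, as the apps do when rendering the dashboard, and check the new target.
        HttpRequest readState = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/thermostat"))
                .GET()
                .build();
        HttpResponse<String> state = client.send(readState, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, state.statusCode());
        assertTrue(state.body().contains("\"targetCelsius\":21.0"));
    }
}
```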
These test cases are more than plain user runs, as our specialized tools allow us to dig deeper and obtain more precise data about what is happening under the hood of the Hive. Alongside industry standards such as TestRail, JIRA, Hubbuger and the Swagger API, we use FLST and SRT – specialized testing tools developed entirely in-house – to achieve full control over the testing process and data. Any test data that deviates from the agreed KPIs is promptly reported to the development team. Once the issue has been corrected, our team runs a resolution test pass to confirm that no faults persist.
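The in-house FLST and SRT tools are proprietary, so the sketch below only illustrates the general idea of checking a run's measurements against KPI ceilings and collecting deviations for reporting; the metric names and thresholds are invented for the example.

```java
// Illustrative sketch (not the FLST/SRT tools): flag any measurement that exceeds its KPI ceiling.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class KpiCheck {

    // Example KPI ceilings; real thresholds and metric names are project-specific.
    static final Map<String, Double> KPI_MAX = Map.of(
            "boiler.command.latencyMs", 1500.0,
            "hub.reconnect.seconds", 30.0,
            "schedule.apply.latencyMs", 2000.0);

    static List<String> deviations(Map<String, Double> measured) {
        List<String> report = new ArrayList<>();
        for (var entry : measured.entrySet()) {
            Double limit = KPI_MAX.get(entry.getKey());
            if (limit != null && entry.getValue() > limit) {
                report.add(entry.getKey() + " = " + entry.getValue() + " exceeds KPI limit " + limit);
            }
        }
        return report;
    }

    public static void main(String[] args) {
        // A mock measurement set from one regression run.
        var run = Map.of("boiler.command.latencyMs", 1720.0, "hub.reconnect.seconds", 12.0);
        deviations(run).forEach(System.out::println); // each deviation is reported to the development team
    }
}
```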
Results
After the unit testing of the individual services, they were optimized in terms of performance, user experience and workflow. PracticeDent and ProductDent received considerable UX overhauls of some specialized modules, such as the PracticeDent Calendar/Reception and Finances (mostly Cashbook and Remunerations) modules, and the ProductDent Inventory- and Order-tracking modules. These improvements enhanced the performance of the services (especially in the case of the PracticeDent Calendar/Reception module), as well as their ease of use (with the greatest effect in the ProductDent Inventory-tracking module). Additionally, some of the scenarios uncovered inaccuracies that would only have surfaced in odd and uncommon situations, but could have led to further complications.
The pre-launch MediCloud ecosystem beta release to participating early-adopter customers showed that the extensive QA support offered by Minds Technologies had indeed been a worthwhile endeavor. It had greatly optimized system performance and load times, increased user comfort on all levels (even reducing the complexity of user on-boarding in several cases), and greatly reduced the expected financial overhead that would have arisen had the testing been done by an in-house team.