How the Internet of Things (IoT) will break QA as we know it.
I’ve been building automated Quality Assurance systems for quite some time, and I’ve watched things change drastically over the years I’ve spent in this career.
Load testing and performance testing were a breeze in those days. We knew the web app would only ever handle a limited number of requests, because the chance of more than 100 people using your business app at once was laughable, the stuff of science fiction. If scalability did become an issue, there were System Architects who could easily stand up more servers to handle the load. Bug catching was straightforward. Locating bottlenecks wasn’t that time-consuming. Preparing a feature report and a performance report for management was a breeze. Back then, it was much easier to say without a doubt that an application was ready for public use.
Most QA processes, methodologies, and practices, as well as most of the popular tools, were developed around this time. Identifying risks, writing out a list of features to be checked, and marking down your pass/fail criteria constituted the whole QA workflow. QA was much easier back then because everything was supposed to be feature-focused. That narrowed our testing focus to identifying all the possible use cases for a feature and coming up with scenarios that would tell you whether the feature worked or not. Once you had gathered the information about the features, it was up to the Engineering team, Management, and the QA members to collectively determine whether or not the release was ready. That was how things worked, and it worked well... until now.
A giant elephant of complexity has entered the room, and it seeks to crush those QA Engineers who ignore it. This elephant started small during the browser wars. Tools like Selenium, QTP, Capybara, and Cucumber, along with services like Sauce Labs, came to the rescue and made dealing with this small elephant manageable. Then the elephant started growing again with the mobile-app boom that both Android and iOS have enjoyed. Once again we needed to take control of the growing elephant, and this time we were rescued by mobile tools like Appium. Unfortunately, the elephant has returned, and with the rise of the IoT it is bigger than ever.
The IoT is the largest elephant of complexity we have ever seen in the realm of Computer Science. This complexity will grow at breakneck speed because millions, or possibly billions, of users will now be connected to your business application or service at any given time. We will no longer have a world where apps stand alone in an isolated environment. We are entering the phase of ubiquitous computing.
This new world will need to handle possibly millions of concurrent connections from a vast array of devices. Virtual-reality headsets, phones, AI assistants, PCs, tablets, vehicles, buildings, even clothes will all be interconnected! And they had all better work! The list of things that could go wrong is too long to enumerate, but as the reader you get the point.
Everything will effectively be its own connected object with the ability to communicate with the outside world. Millions of different businesses will have unique ways of tapping in and out of this sea of data to bring the world their specific product or service. Many of the user interfaces will be integrated seamlessly into these devices. It is a much, much different problem from the one QA Engineers face today. Today we know most users are going to be on mobile apps or web apps, but what about tomorrow? By moving technology products toward ubiquity, we’ve exposed ourselves to more risks than we ever thought possible. Test Engineering will be forever changed in this world.
Think for a moment how you would test an application in a world where millions of possibilities are fair game. How would you assure quality on the millions of devices that could be in use? Browser and mobile testing will no longer be as relevant as they are today. Manual testing would require hundreds of thousands of beta testers, and that approach will not scale.
What do we do to tackle this growing problem? There are a few techniques I’ve been using that I think might help in this new world. Here is a quick list of them.
- Move away from QA scripting and use AI techniques to solve QA problems. QA scripting has a bad habit of being sequential. In a ubiquitous world the user flows will not be sequential, so a sequential test script will tell you almost nothing about the quality of your software. An AI program exploring hundreds of thousands, or even millions, of possible scenarios will.
- Use programming languages that make concurrent programming much easier, such as Elixir/Erlang, Clojure, Rust, or Go. Concurrent programming is hard; you need a language that makes these challenges simpler. Sequential, imperative styles won’t scale.
- Move away from automating feature tests, which will become impossible to maintain in a world of hundreds of thousands or even millions of use cases, and focus more on anomaly detection and alerting. This approach is far more scalable and flexible, and it will catch more issues than any feature-driven test suite ever could.
The ubiquitous computing phase will force QA teams toward AI-driven test tools in order to keep up with tomorrow's quality problems. If our processes, tools, and methodologies don’t change, there will be no way for a test team to confidently convince their company that the software is ready for the zoo that will be production. The more connected things get, the more Quality Assurance will be front and center in the business process. The coming years will bring an array of opportunities for new ideas and approaches that will likely revolutionize the way we see testing. The question is, will we be ready to implement them?