The Battle of the Bugs
For the past two weeks, I’ve been sitting in on meetings between two teams – one representing the business users and the other the developers. It’s been quite the enlightening experience, observing the back-and-forth as they try to figure out how to prevent those pesky software bugs from rearing their ugly heads.
The conversation usually goes something like this:
Business Team: “Look, we’ve got another production error. This has got to stop – we’re just lucky these have all been minor errors, but one day we’re going to hit a big one. We need to get rid of these bugs, period, and we need to talk about how to stop them from occurring in the first place.”
Dev Team: “We hear you loud and clear. But we really need the business team to sit in on our sprint sessions and help us craft more scenarios to catch those edge cases. We need your input on the business requirements.”
Business Team: “Sure, but we don’t understand the system as well as you developers do, right? You literally built this thing. And why don’t you just regression test every single thing? Have you even done a post-mortem on these bugs?”
Dev Team: “Okay, let’s take this to the post-mortem and let the team figure it out. And yes, we can make the regression tests more robust, but again, we can’t catch bugs if the scenarios aren’t there. So, once more: please have the business side sit in.”
Business Team: “Wait, before we end this meeting – you keep talking about this ‘test-driven development’ thing. Why isn’t it working?”
At this point I was in quite a fix, because I couldn’t quite discern whose side to take. Was I even supposed to take a side?
Needless to say, we left the meetings quite dissatisfied, not knowing what else to do other than to let the development team carry out a post-mortem. But that’s where I stepped in, thanks to some insightful conversations with various quality engineers in software development.
Squashing Bugs: A Lesson in Teamwork
So, what was wrong with the conversations we just witnessed? Well, the key issue was that they were working in silos, each team stuck in their own perspective. The truth is, in a cross-functional team – especially in a Scrum team – everyone needs to wear multiple hats.
It’s true that business users happen to be better at making requests and crafting acceptance criteria for features, but developers know how to add architecture-specific criteria as well. The same goes for testing: quality engineers just happen to be better at writing test cases, but business users are often best placed to see things from the user’s angle.
It’s not that you perform only one role; it’s just that you happen to be better at one role, so you do it more. And that, my friends, is how you squash a software bug.
Busting the Myth of Test-Driven Development
Speaking of test-driven development, it’s a common misconception that it’s all about having test cases written for every single feature. Well, yes, it does mean that test cases are written for each feature before any code is. Code is then written in small increments to pass those tests, and new functionality is only added once the existing tests pass. This helps keep the code error-free, because it’s written with the end in mind.
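As a minimal sketch of that cycle (the function and values here are hypothetical, just for illustration), the test exists before the implementation, and the implementation is only the smallest thing that makes the test pass:

```python
# A minimal TDD sketch (hypothetical example).

# Step 1 ("red"): the test is written first, before any implementation.
def test_apply_discount():
    assert apply_discount(price=100.0, percent=50) == 50.0
    assert apply_discount(price=80.0, percent=0) == 80.0

# Step 2 ("green"): the smallest implementation that passes the test.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# Step 3: run the test; the next feature begins with a new failing test.
test_apply_discount()
```

Note how the test pins down the behavior of one feature in isolation – which is exactly why, on its own, it says nothing about how features interact.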
Sounds great, right? So why isn’t that going to stop all production errors? Because these tests are ultimately unit tests – tests performed at the level of each individual feature. The huge errors and edge cases that usually crop up are the ones that arise from one feature breaking another, and those are typically caught by regression testing.
So throwing that big phrase around doesn’t mean much on its own. We still need critical test cases for regression testing.
Regression Testing: A Delicate Balance
Now, you might think that the solution for a bug-free product is to infinitely test it before sending it off to production. Well, that would certainly be the case if you had infinite time and resources. But you don’t.
First, regression tests are not that easy to write or compile. A single regression test is a specific scenario corresponding to a specific user journey through your web application. The more features and components your application has, the more complicated those journeys become. That’s why customization scales the complexity exponentially, and why a product with excessive features is difficult to regression test.
Before a product is pushed to production, it’s also unlikely there will be time to automate the regression tests. Quality engineers sit through a slow, tiresome process of manual testing, and only after the product is in production do they have the time to convert those manual tests into automated ones.
Coming up with an exhaustive list of regression tests is therefore impractical – you’d never push anything to production within a single sprint. And it’s hardly the case that anyone could come up with an exhaustive list anyway. It’s more important to focus on test cases that have a huge impact owing to their universality or criticality.
Think of testing as a scarce resource that requires some art to balance. In most web applications, automated testing picks a random set of scenarios from a universe that usually runs into the thousands – with certain critical ones always included, of course – and a full run can take up to three days. So think of testing as an iterative cycle, too, one that becomes more robust as the product matures.
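That selection strategy can be sketched in a few lines (all names and numbers here are hypothetical, just to illustrate the idea): always run the critical scenarios, then fill the remaining test budget with a random sample from the rest of the suite.

```python
import random

def pick_regression_run(all_scenarios, critical, budget, seed=None):
    """Return critical scenarios plus a random sample, up to `budget` tests."""
    rng = random.Random(seed)  # seeded for a reproducible run
    remaining = [s for s in all_scenarios if s not in critical]
    sample_size = max(0, budget - len(critical))
    sampled = rng.sample(remaining, min(sample_size, len(remaining)))
    return list(critical) + sampled

# Hypothetical example: 5,000 scenarios, 40 critical ones, budget of 300 per run.
universe = [f"scenario-{i}" for i in range(5000)]
critical = universe[:40]
run = pick_regression_run(universe, critical, budget=300, seed=1)
assert len(run) == 300 and all(c in run for c in critical)
```

As the product matures, scenarios that have caught real bugs can be promoted into the fixed critical set – which is the iterative cycle described above.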
The Art of Anticipation
So, what’s the secret to squashing those pesky software bugs? It all boils down to anticipation and a collaborative effort.
First and foremost, we need to stop expecting to have zero bugs in our software. Perfection is asymptotically attained, not immediately achieved. Instead, we should focus on being poised to quickly remedy bugs the first time they occur.
Secondly, we need to bring the entire team together – the business users, the developers, and the quality engineers. It’s not enough for the business users to know the expected requirements and the developers to know the architecture. We need that third-party perspective from the quality engineers, the ones who are adept at spotting edge cases and asking the questions that no one else will think of asking.
Finally, we need to strike a balance with our regression testing. We can’t afford to over-test, but we also can’t afford to under-test. It’s all about prioritizing the critical scenarios and keeping an iterative mindset as the product matures.
By embracing this collaborative, anticipatory approach, we can start to squash those software bugs like the pros. And who knows, maybe we’ll even stumble upon a few unexpected surprises along the way – like a friendly little moth.
So, the next time you find yourself in the midst of a software bug battle, remember the lessons we’ve learned here today. Gather your team, ditch the silos, and get ready to squash those bugs like a pro. And if you ever need a little extra help, you can always visit our website for all your computer repair needs.