
Why Agile and Test Driven Development (Part 2)

Classic physics...
(Originally published by me on www.servicevirtualization.com)

In part 1, we briefly examined why application development is challenged: architectures must become more complex to address the correspondingly more complex needs of the business.  We also looked at the primary goals of Agile (SCRUM, specifically) and Test Driven Development (TDD), with the promise of further scrutiny to see why, although they are steps in the right direction toward better management of this complexity, they still fall short.


The impact of increased application complexity is an increased rate of change across all of the cogs and wheels that must (hopefully) mesh together to produce the final result the business expects.  Over the same amount of time, this increased rate of change yields more points where failure can occur.  And given the same distribution of failure probabilities, more failure points ultimately yield a greater number of defects.
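The arithmetic behind this claim can be sketched with a simple (and admittedly idealized) model: if each change point fails independently with some small probability, the chance of at least one defect climbs steeply as the number of change points grows. The function name and the 2% failure rate below are hypothetical, chosen only to illustrate the trend.

```python
# Idealized model: n independent change points, each failing with probability p.
def p_at_least_one_defect(n: int, p: float) -> float:
    """Probability that at least one of n change points produces a defect."""
    return 1 - (1 - p) ** n

# Holding the per-change failure rate constant at 2%, watch what happens
# as the number of change points grows with application complexity:
for n in (10, 50, 100):
    print(f"{n:>3} change points -> {p_at_least_one_defect(n, 0.02):.1%} chance of a defect")
```

Even a modest per-change failure rate compounds quickly: at 100 change points the model gives well over an 80% chance of at least one defect, which is the intuition behind the paragraph above.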

This becomes a problem even in SCRUM and TDD, but for very different reasons:

...becomes code creation
In SCRUM, the functionality added in any given sprint typically spans multiple application components that must be written and validated.  This may be approached from the "everyone jump in the pool together" perspective, where all components are written simultaneously and the potential for error is large.  Alternatively, artificial latencies are introduced because functionality further along the project timeline has prerequisites that must be written and validated before it can be built upon.

In TDD, the objective is to capture use cases (written against the business requirements) that the application in its current state does not satisfy, and then make the code changes necessary to turn failure into success.  In other words, the potential for failure is already acknowledged; the challenge is that the onus is on the developer to understand the complete set of defects that exist within the components being modified.  This is extremely challenging at best and combinatorially intractable at worst (if the portion being changed is large enough).  As a result, validating the overall application is likewise difficult or nearly impossible, depending on the size and complexity of the application.
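The TDD loop described above can be sketched in a few lines. The requirement, function, and test names here are hypothetical, invented purely to show the workflow: the test is written first against the business requirement, fails against the current code, and the application is then changed until it passes.

```python
import unittest

# Hypothetical business requirement: orders over $100 receive a 10% discount.
# In TDD, the tests below are written BEFORE discounted_total supports the
# discount; the code change is then made to turn the failing test green.

def discounted_total(total: float) -> float:
    # Code change made to satisfy the (initially failing) discount test.
    return total * 0.9 if total > 100 else total

class TestDiscount(unittest.TestCase):
    def test_large_order_gets_discount(self):
        self.assertAlmostEqual(discounted_total(200.0), 180.0)

    def test_small_order_unchanged(self):
        self.assertAlmostEqual(discounted_total(50.0), 50.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The sketch also hints at the limitation noted above: these tests only cover the requirement the developer thought to write down, not the complete set of defects lurking in the components being modified.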

In the third and final part, we'll examine how Service Virtualization (also known as LISA) by CA Technologies addresses both of these scenarios, substantially reducing the risk to successful project completion.
