Wednesday, September 18, 2013

Follow Through

As a young kid growing up in a small town, one of the very few things to do was join a bowling league.  My father, who has been bowling his entire life, used to coach me on the importance of following through on your shot.  In a nutshell, the idea is that you aren't finished once you let go of the bowling ball; it's the act of continuing the swing of your arm that increases the consistency of the shot and, ultimately, the score of your game.

Professionally, the same is true.  Recently, I met the EVP of Corporate Strategy for CA Technologies, Jacob Lamm, for the first time at an executive briefing for one of our customers.  I introduced myself, and we chatted briefly.  In essence, I let go of the proverbial bowling ball.  Then, following my father's advice, I sent him an email the moment he finished speaking and left the room.  In that email I described how another executive had recently recommended me for a position in his organization and said that, if he had a few moments, I'd love to sit down with him over coffee or tea and just chat.  Because I had just met him, he remembered me easily and replied the following morning with an offer to do just that.  I now have a 30 minute timeslot in mid-October to talk with him.

(Incidentally, I met the other executive in much the same way.  He was conducting an internal webinar; I asked a question during the Q&A that resulted in some good discussion; and I followed through with an email as soon as the call was finished.)

Will I "just chat?"  Last month, there was a great article in Business Insider about an intern who scored a meeting with CEO Jeff Weiner.  As the story goes, VP Steve Johnson had said in a speaker series to a group of interns that you need to be prepared with a question you'd like a CEO to answer in the event that you ever get a meeting.  Similarly, I intend to be prepared before I walk in Jacob's office.  I will be reaching out to the other executive that I named in that email to see if Jacob reached out to him.  I will be looking into his background to see where my experience and skills mesh with his.  If I cannot find some overlap in our professional backgrounds or some similar reason to want to speak with him then those 30 minutes will be quite awkward.

This same type of activity applies whether you are simply trying to expand your network or doing something else, like looking for a new job.  I recently introduced a friend of mine, unemployed after a Reduction in Force at her previous job eliminated several management positions (including hers), to several executives who would, at the very least, benefit from having a conversation with her.  To her credit, she did not want to pester them with follow-up emails just because they didn't respond immediately.  I had to point out, however, that executives are busy, and if you do not follow through you run the risk that the introduction gets lost in the busyness of each day or, worse, that they conclude you weren't interested.

In the interest of contrast (and at the risk of sounding condescending, which is quite the opposite of my intention), follow through is how I got my first job out of college at what was then the most prestigious place for Computer Science graduates to work (according to a poll of graduate students at the top Computer Science school in the country at that time, Carnegie Mellon University).  As it happens, I was looking for a place to work while at college in SC and reached out to my future boss (based in NY).  His intention was to fly me up for an interview to see if I would be a good fit, but after several emails back and forth he told me that his request for expense reimbursement to pay for the trip had finally been rejected.

However, he said that he was quite impressed with my determination and my refusal to let the matter die on its own, and he offered me the job anyway.  That is why I mentioned the ranking of that IBM location - getting a job there should have been impossible, but you can make the impossible possible!  (To this day, he still says that my tenacity and my intention to follow through to completion are the qualities that he remembers most.)

People remember you for the impression you leave.  If that impression is that you left no impression, people will forget or, worse, remember that you left no impression.  "Luck favors the prepared," it is true, but if you don't have a reason to be prepared in the first place then there's no luck to be found.  Instead, you have to make your own luck by seizing opportunities as they present themselves even if they don't appear to be an opportunity to begin with.

Friday, September 6, 2013

The Reign of Mayer

This is a short entry, written simply because I feel a need to document this beyond the 140 character limit on Twitter.

Marissa Mayer is great at being a reactive leader.  She's stirred things up a bit, to be sure, but the vast majority of her work has revolved around correcting problems at Yahoo.  I am fairly certain you'll arrive at a similar conclusion after reading the very detailed and in-depth history of her that Business Insider published a few weeks ago.

Immediately after reading that, I wanted to write this entry, but it slipped through the cracks.  This may have been to my benefit, though, given the brouhaha that has sprung up over the choice of a new company logo.  Let's face it, folks:  this is a logo consisting of a five-letter word followed by an exclamation point.  The most interesting thing about it is the color purple.  Why is so much attention being given to this when there isn't much difference between the proposed new logos and the existing logo?

The answer, described in the article, is that she's ultimately a researcher type.  She thrives on details, data, and the smallest minutiae.  I know the type well since I am the same kind of thinker.  When I started my career at the awesome T. J. Watson Research Center, I was encouraged to think outside the box but to always tie my work back to some business initiative, and that required approaching my work from every possible angle to ensure it was justified and had a high probability of generating revenue.  It has taken me a long time and a lot of effort to overcome this so that I can view things on a longer timeline for a return on whatever it is I am investing in.

So my question posed to my hypothetical audience of the Yahoo board is this:  why did you hire a COO when a CEO is needed?  I understand that Yahoo was in extraordinarily bad shape at the time and that good execution trumps a great plan, but sooner or later you will run out of things to correct and will have to demonstrate thought leadership if you want to regain the glory from days of old.  This is where I think your plan will fail, Yahoo.  With all due respect to what she's managed to accomplish in her relatively short tenure at the helm, the day is rapidly approaching when your shareholders are going to demand a greater ROE that is unattainable by correcting operational problems, and it'll be very evident on your Income Statement. 

Is this, perhaps, some conspiracy plot to correct things in the short term only to jettison her when a more strategic plan of action is ready to be started?  Who knows, but time will reveal the answer.

Thoughts?  Comments?

Tuesday, August 20, 2013

Being a Good Leader

The Leader
A few weeks ago, I came across an article that described the 7 top qualities of a C-level executive.  And while I found the article to be a good read, I also recognized that there are key differences between a C-level executive and a "normal" manager.  This got me thinking about the qualities that make up a good leader in general.

The archetype for this person is my father, who used his lack of a high school education as motivation to push himself harder than anyone.  He did ultimately earn his GED and an Associate's Degree via correspondence courses, but it was his ascension to upper middle management during his 30-year career with the company now known as CenturyLink that I focused on.  His qualities as a leader earned him such admiration that his former staff - he retired over 10 years ago - still respect him today as though they were still working together.  Here are the things I learned from him over the years.

Communication is the key to everything.  I remember him telling me this when I watched him study during those correspondence courses.  He was learning vocabulary and composition / communication skills at the time and remarked that good communication is what separates you from everyone else.  In a business setting, this means outlining your expectations clearly enough that your staff can perform self-evaluations on their own to assess their progress.

Transparency earns trust.  In any relationship, business or personal, trust is the foundation upon which everything else rests.  In business, however, the manager / staff relationship has to be founded on the trust that the manager represents the staff and the collective best interests of the manager's organization.  This means that transparency in all things must be maintained.  Ulterior motives and alternate agendas do nothing but sow seeds of discord when they are discovered.

Empathy earns loyalty.  There is an expression that says "truth without love is cruel, and love without truth is foolishness."  Applied to a business context, it is important to earn not only the trust of your staff but also their loyalty, because it is that loyalty that will inspire them to greater heights.  Ultimately, explicitly acknowledging that they are people with real world concerns both at work and at home will earn their support even when they are unsure of the end result of your actions as their leader.

Respect is earned.  It's obvious that you shouldn't ask your staff to do something that you aren't willing and able to do yourself.  But it's better still if you don't ask your staff to do something that you aren't already doing.  Tom Brownlee was a manager of mine for a number of years, and he was the strictest person I've ever worked for.  He was incredibly demanding, was not afraid to tell you to your face when he thought you made a mistake, and set the highest standard for quality in any sales organization of which I have been a part.  And while he rubbed many people the wrong way, those who stuck it out did so for one reason:  he held himself to an even higher standard than he did his staff, and he acknowledged when he made mistakes himself.

My father had a similar work ethic.  He knew that he didn't have a four year college degree like every one of his peers.  But instead of allowing that to discourage him as a seemingly impossible mountain, he strove to outdo every one of his peers.  And ultimately, he became the only engineer in the history of the company to ever earn a top rating of 1 on his annual evaluation.

Being a good leader, regardless of whether you agree with this list of traits, enables you to be an effective one as well:  not only are you respected and trusted when things are going well, but your staff reaffirms their loyalty to you when things aren't going well, too.

Thursday, July 11, 2013

Why Do Software Defects Exist? (Part 2)

(Originally published at www.servicevirtualization.com.)
In Part 1, I proposed that application release decisions are not actually time-based but are instead risk-based.  To summarize, when the Lines of Business demand a specific time to release (from project inception) the Project Lead considers the risk to the business of releasing the application at that time.  This is illustrated to the right.

Application Development Constraints

Let's take a quick look at the "risks to the risk."  These are also known as constraints of the application development process.  I'll start by describing them as they are defined in the book Reality is Overrated (link goes to Amazon).
  • Incomplete Development refers to the fact that software a developer requires to validate their own code is itself still being developed and therefore unavailable, requiring that developer to stub downstream systems (a minimal sketch of such a stub follows this list).  The net result is that test coverage early on is very sparse, which puts the onus on the Quality Assurance team to find every defect - something that rarely, if ever, happens.
  • Infrastructure Unavailability refers to the fact that the hardware required to run software that a developer needs to validate their own code is unavailable, with the same result.  An additional side effect is that sometimes the unavailable infrastructure is required to run the developer's own code, meaning they are "dead in the water."
  • Third Party Access Fees refers to the costs of integrating with other internal applications (in situations where chargeback policies are in effect) or with external applications where fees are assessed for test accounts or, worse, per test transaction.  What I've personally seen happen is that a self-imposed availability constraint is put into effect to avoid "funding bleed" and the inevitable question, "why didn't you hire more consultants?"
  • Finally, Test Data and Scenario Management refers to the long turnaround times when production-quality test data is requested.  The responses are slow because the DBAs typically have other, higher-priority activities to attend to, and because the effort to scrub production data to remain in conformance with regulatory requirements (PII, PHI, etc.) is no small feat.
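Here is the kind of stub that first constraint forces a developer to write - a minimal sketch with hypothetical names, not code from any real project:

```python
# A minimal sketch (hypothetical names) of the stub a developer writes when the
# real downstream system is not yet available.  The stub returns a fixed "happy
# path" answer, which is exactly why early test coverage stays so sparse: only
# the scenarios hard-coded here ever get exercised.

class CreditCheckStub:
    """Stands in for a downstream credit-check service that isn't built yet."""

    def score(self, customer_id: str) -> int:
        # Always the same optimistic answer; error paths, timeouts, and
        # edge-case data never get tested against this stub.
        return 750


def approve_loan(customer_id: str, credit_service) -> bool:
    """The code actually under development; it only 'passes' because the stub
    never produces a surprising response."""
    return credit_service.score(customer_id) >= 640


if __name__ == "__main__":
    print(approve_loan("CUST-0001", CreditCheckStub()))  # True, every time
```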
Availability is "Risk to the Risk"

I've defined these four constraints in these specific ways with the intention of highlighting that every one of them, at their core, is a problem of availability:  unavailable software components or applications; unavailable infrastructure; and unavailable test data.  It is this availability problem that is the "risk to the risk" for the following reason:  every time an availability problem manifests itself, a delay in the SDLC is introduced.

What does this mean?  This means that the original assessment of being able to, in our example, deliver the application at 6 months with 75% correctness is no longer valid.  Instead, the inability of the development team to complete their normal activities prevents them from ensuring correctness, so the point in time where 75% correctness is achieved may be 7, 8, 9 months or more from project inception.

The Net Impact

This puts the Project Lead in a bad position because they had the chance to negotiate timelines at the outset.  Now either the delivery date gets pushed in the name of risk while making the Project Lead and their management (who committed the date to the Lines of Business) look unreliable, or the project is released "on time" with quality that's further reduced.

Regardless of which road is taken, quality is not improving and frequently declines in applications released to production.  This adds to the risk that the business will suffer a production outage and ultimately has the possibility of materially impacting revenue; causing a decline in brand equity; or even resulting in shareholder lawsuits in severe enough instances.

Obviously, alleviating the availability constraint in any or all of its four forms has a snowball effect on the quality of the result.  It is the removal of this constraint that the discipline of Service Virtualization effects.  You'll find more excellent material on the industry leading Service Virtualization solution provided by CA Technologies at www.servicevirtualization.com as well as in the few discussion groups on LinkedIn.

Monday, July 1, 2013

Why Do Software Defects Exist? (Part 1)

(Originally published at www.servicevirtualization.com.)

After my recent webinar (entitled Agile is Dead; the replay is here, registration required but free) I was having a follow-up discussion with someone when the conversation turned to the nuances between what Agile actually promised and what people perceived it was supposed to deliver.  From my perspective, the simplest explanation is that Agile promised to help ensure that business requirements were being met, while people thought it meant that applications would be produced with far fewer defects.  In the webinar, I described how Agile, if anything, increased the total number of defects due to its attempt to be more adjustable to the needs of the business mid-implementation.

The question was then asked:  are software defects inevitable?  If not, then why do they exist?  We're not talking about an insignificant problem.  As I've often quoted, NIST produced a study in 2002 that illustrated a cost multiple of 30 to fix a defect discovered in production, and another study that same year showed the net impact of production defects on the US economy to be $60 billion.

To answer those first two questions, let's look at a few things. 

Businesses Exist to Generate Revenue

This seems obvious, but it's worth stating the obvious here since we're going to be chaining a few items like this together into a cohesive whole.  "But what about government agencies whose sole purpose is to provide free services?" you ask.  Let's redefine (slightly) the phrase "generate revenue":  by this I mean they are trying to increase the amount of cash flowing into the entity.  I hesitate to say "cash flow positive" because that has a specific accounting definition that isn't met here.  For our purposes, the ability to convince the Federal, State, and/or Local government that more money would allow them to produce better or more services is considered "generating revenue." 

Technology Isn't a Luxury

This also seems obvious, so let me explain why I'm pointing this out using a question:  could an accounting firm offer a legitimate service to potential customers using paper based ledger books only?  The answer is yes - they would be fulfilling the definition of "accountant" - but they would probably have no customers.

The reason they would have no customers:  manual data entry and calculations are slow, error prone, and prohibit value added services like quick financial analysis, etc.  Even I, the most accounting-challenged individual in the entire world, stopped using my checkbook register (the personal version of a ledger) years ago in favor of an Excel sheet that I created, because the latter lets me see, at a glance, where all of my money is going; perform cash flow analysis; and defend my argument that groceries are so expensive now that they are the modern-day equivalent of highway robbery.

The net result of this is that technology is at the very least indirectly responsible for the influx of cash to a business, profit, non-profit, or government entity alike.

When Would You Release Your Software?

The next question I asked my conversation partner was, "If you were the only Amazon, when would you release the next version of the website?"  The answer was quick:  they said they would release it when the following two conditions are met:
  1. When the user's new needs were met
  2. When no defects exist 
The latter point needs some clarification.  I'm not describing the situation where no defects are found at UAT.  Instead, I'm talking about a theoretical point in time when the code could be mathematically proven to be 100% correct.

Obviously, that last point is no easy task (if it's achievable at all) and would take significant amounts of time to achieve.  And, unfortunately, you aren't the only "Amazon," i.e. you have competition. Therefore, when software is being developed a decision has to be made:
  • If I can't achieve "nerd-vana" by waiting until the code is 100% perfect, what is the highest probability that a critical function will work incorrectly that I am willing to accept, i.e. what's my risk threshold that a production defect will cause material harm to the business?
To illustrate:  reaching "nerd-vana" on a new release may take me a year, but I'm willing to release it after 6 months with 75% correctness.
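To put some toy numbers behind that tradeoff (the figures below are purely illustrative, not from any study), the decision amounts to picking the first release point whose residual risk falls under the threshold you're willing to accept:

```python
# A toy illustration of treating the release decision as a risk threshold
# rather than a date.  Correctness is assumed to improve with time; the
# question is when it crosses the level of risk the business will accept.

correctness_by_month = {3: 0.55, 6: 0.75, 9: 0.90, 12: 0.99}  # hypothetical curve
risk_threshold = 0.25  # willing to accept a 25% chance of a misbehaving critical function

for month, correctness in sorted(correctness_by_month.items()):
    residual_risk = 1 - correctness
    if residual_risk <= risk_threshold:
        print(f"Release at month {month}: residual risk {residual_risk:.0%}")
        break
```

With these made-up numbers, the script picks month 6 at 75% correctness - the same shape of decision described above.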

In part 2, we'll continue by examining how all of this builds into a tidal wave of process problems that result in software defects being released to production in spite of all of the nasty side effects their presence causes.

Tuesday, June 11, 2013

Software Release Management - A Problem Overlooked (Part 2)

In part 1 I expounded on the size of the problem surrounding software release.  We saw how the problem is bigger than I suspect many people realize by enumerating statistics published by IDC and Forrester.  Finally, I promised to describe how CA LISA Release Automation alleviates this problem.

When I was a child, one of my favorite movies was The Wizard of Oz.  In one defining scene, Dorothy and her three companions enter the "throne room" where the Wizard agreed to meet them.  As they were talking to this giant, floating head, Toto discovered the curtain behind which the real "wizard" was operating this contraption.

"Pay no attention to that man behind the curtain!" was the admonishment of the floating head.

My question, and its relevance to software release, is:  what if it were not the "wizard" behind the curtain but instead Glinda, the good witch?  What if it were a gaggle of the flying monkeys?  What if it were not any character from that story at all but instead Frodo, Samwise, Meriadoc and Pippin?  If you're Dorothy, should you really care who is behind the curtain as long as you're talking to the floating head?  Maybe in Dorothy's case she would have cared, but the question of interacting with a specific role vs. how that role is implemented in reality remains.

Abstracting the Infrastructure

In 1994, the self-described "Gang of Four" released a highly influential book entitled Design Patterns: Elements of Reusable Object-Oriented Software (link goes to Amazon) that described ways of designing applications such that they are easier to implement with lower defect rates.  One such pattern, the Adapter pattern, described a way for a block of software code to interact with another block of software code even though the two are incompatible.  (Think of this as trying to get a native Chinese speaker to order dinner from a waiter who only speaks Italian.)  The Adapter, as they described it, provided a "translation" mechanism so that interaction was possible and thus coding could continue unabated.  It is this concept that LISA Release Automation borrows from to make software release significantly easier and far less prone to errors.
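To make the concept concrete, here is a minimal sketch of an Adapter in the spirit of the restaurant analogy above (the class and method names are mine, purely for illustration):

```python
# A minimal sketch of the GoF Adapter pattern: client code written against one
# interface keeps working even though the object behind it "speaks" something
# entirely different.

class ItalianWaiter:
    """The 'incompatible' interface: only understands Italian."""
    def ordina(self, piatto: str) -> str:
        return f"Ordinazione ricevuta: {piatto}"


class WaiterAdapter:
    """Translates the interface the client expects into the one that exists."""
    MENU_TRANSLATIONS = {"dumplings": "ravioli"}

    def __init__(self, waiter: ItalianWaiter):
        self._waiter = waiter

    def order(self, dish: str) -> str:
        # 'Translate' the request, then delegate to the incompatible object.
        return self._waiter.ordina(self.MENU_TRANSLATIONS.get(dish, dish))


if __name__ == "__main__":
    print(WaiterAdapter(ItalianWaiter()).order("dumplings"))
```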

Consider for a moment the importance of role vs. infrastructure.  Are you more concerned with releasing your application to a web server, application server and database server?  Or are you more concerned with releasing it to machines USEWR01, USNYC13 and USPHI04?  I'm sure that some of the readers are saying to themselves that, ultimately, those machines are where the application resides so they care more about that.

What then about releasing the same application to a more complex environment as the SDLC progresses?  Are you more concerned about releasing your application to a web server, application server and database server?  (Note that the same roles are listed.)  Or are you concerned now with web servers USCHI01, USCHI02, USCHI03, and USCHI04; application servers USDAL11 and USDAL12; and database servers USATL07 and USATL08?  What about when it finally moves to production and you now have 2 farms of web servers, each containing 10 servers; 2 application server farms of 2 servers each; and a full cluster of database servers?  And (why not?) let's change the web servers from JBoss in development to WebSphere clusters in production.  How would you normally handle this?

The Complexity of Software Release

In part 1, I said the following:

A single server might be fine to house a web server, application server, and database server for a single developer, but it'll never do for QA, Performance Testing, and Integration Testing, much less UAT or a Disaster Recovery environment or even Production. And so the process of deployment requires uniquely designed scripts that are unto themselves applications that must be tested and validated for correctness. Ultimately, they are complex enough to require maintenance, and in the event that the infrastructure architecture changes beyond the trivial, changes must be coded, tested and validated again.

I want to reiterate this, especially since I've provided some contrast in the example above.

By developing software release "scripts" - this term is used more loosely than you are envisioning; see below - that are role based rather than server based, you get a few distinct advantages:
  1. As the implementation of the role changes from environment to environment, the script itself does not need to change since it operates with respect to the role rather than the infrastructure;
  2. The removal of the underlying infrastructure from the script substantially lowers the need for maintenance of the script itself; and,
  3. The ability of the script to act based on role allows for much easier automation capabilities with full rollback in the event of errors that are encountered during its execution.
The only thing that needs to be done is to define the Adapter (borrowing from the Design Pattern concept, above).  In other words, you describe the concrete mapping between role and underlying infrastructure in a simple-to-edit file called a Manifest.  In there, you specify that the "web server role" maps to USEWR01 in Development but to USCHI01, USCHI02, USCHI03, and USCHI04 in UAT.  Or that the "application server role" maps to USDAL11 and USDAL12 in UAT but to USPAR03, USPAR04, USPAR05, and USPAR06 in Production.  From that point on, LISA Release Automation takes care of mapping the actions in the "script" to the underlying infrastructure components.
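Conceptually, the mapping looks something like the sketch below.  To be clear, this is not the actual Manifest format used by LISA Release Automation; it's just an illustration of role-based mapping, reusing the server names from the examples above with a made-up artifact name:

```python
# A conceptual sketch of a role-to-infrastructure mapping.  The release step is
# written once against the role; a per-environment manifest supplies the
# concrete servers.

MANIFEST = {
    "Development": {
        "web server":         ["USEWR01"],
        "application server": ["USEWR01"],
        "database server":    ["USEWR01"],
    },
    "UAT": {
        "web server":         ["USCHI01", "USCHI02", "USCHI03", "USCHI04"],
        "application server": ["USDAL11", "USDAL12"],
        "database server":    ["USATL07", "USATL08"],
    },
}

def deploy_to_role(environment: str, role: str, artifact: str) -> None:
    """The same release step works in every environment because it targets the
    role; the manifest decides which machines that means."""
    for host in MANIFEST[environment][role]:
        print(f"deploying {artifact} to {host} ({role} in {environment})")

deploy_to_role("UAT", "web server", "storefront-2.4.1.war")
```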

Scripts vs. Run Books

I promised some clarification for the term "script."  I like to contrast the traditional concept of software release scripts (written in bash or some other interpreted language) to Process Automation software, e.g. CA Process Automation, BMC Atrium Orchestrator or others.  In the latter, drag and drop GUIs make designing "run books" (as they are known in the operations world) a lot easier because people can see in a graphical format how the script will execute.  Assertions are denoted in an easy to consume format so that the author or user can determine what will happen if error conditions are encountered, etc.

LISA Release Automation is similar in its usability to that of the Run Book Automation software described here in that it provides a drag and drop, Visio-like GUI that allows you to easily define the actions to be taken during a software release.  And with a huge library of actions that are workflow related or specific to the various types of infrastructure that fulfill a particular role (e.g. having several types of web servers like JBoss, WebSphere, WebLogic, IIS, etc.) you are practically guaranteed to be able to develop these release run books in a fraction of the time it would take to do it manually in bash, Windows Scripting Host, etc.

The Numbers Are Real

But the real "bread in butter" is in their usage.  Taking advantage of the ability to now fully automate the process of software release, the ultimate goal of continuous application delivery is now possible.  Take a look at the figures in the illustration.  These are from existing customers, and the benefit in terms of process efficiency should be immediately obvious.  Not only this but the rate of "friendly fire incidents," i.e. errors created during the deployment, drops dramatically (though I unfortunately do not have hard statistics on this).

Comments and feedback are always welcome!

Thursday, May 30, 2013

Software Release Management - A Problem Overlooked (Part 1)

(Originally published by me on www.servicevirtualization.com)

This is the first of two parts discussing the problem of software release management and how automation can be properly used to alleviate these problems.
 
ITIL has Release Management. There are also Six Sigma and CMM. These are all process-oriented "libraries" that deal with the development and release of tangible products or business services.  Yet ask any application development professional if any of these deal with the actual problems of software release and they will unanimously answer "no."

After CA Technologies announced that it had acquired Nolio, the leader in software release automation, I started taking a closer look at the actual problem that Nolio addresses.  And what I found is this: the current literature on software release addresses the process-related problems, but none of it has yet discussed how to address the realities of actually moving software from one environment to the next.

Remember: Environments Vary

For example, take a look at this excellent case study written about a British telecommunications provider. In the article, the authors describe seven steps they took to turn around the Release Management process at their client. And even though they give a nod to automating the deployment of the software (step 5, entitled Automate and standardize as much as you can), they gloss over one important point in software release.

That point is this: environments vary. A single server might be fine to house a web server, application server, and database server for a single developer, but it'll never do for QA, Performance Testing, and Integration Testing, much less UAT or a Disaster Recovery environment or even Production. And so the process of deployment requires uniquely designed scripts that are unto themselves applications that must be tested and validated for correctness. Ultimately, they are complex enough to require maintenance, and in the event that the infrastructure architecture changes beyond the trivial, changes must be coded, tested and validated again.

Do you want fault tolerance, i.e. rollback capabilities? You need to code that. Do you want auto-configuration of your WebSphere environment? You need to code that too. In fact, for most operations beyond simple file transfers, administrative programs need to be invoked with complex configuration parameters that vary from environment to environment.
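To see why these scripts become applications in their own right, consider a stripped-down sketch of a hand-rolled deployment with rollback.  The paths, commands, and hostnames here are placeholders, not anyone's real infrastructure - and every one of them changes from environment to environment:

```python
# A sketch of what "you need to code that" ends up meaning: even a trivial
# hand-rolled deployment script must carry its own rollback logic.

import shutil
import subprocess
import sys

APP_DIR = "/opt/myapp/current"      # differs per environment
BACKUP_DIR = "/opt/myapp/previous"  # differs per environment

def deploy(artifact: str) -> None:
    # Save a restore point before touching anything.
    shutil.copytree(APP_DIR, BACKUP_DIR, dirs_exist_ok=True)
    try:
        subprocess.run(["unzip", "-o", artifact, "-d", APP_DIR], check=True)
        subprocess.run(["/opt/myapp/bin/healthcheck.sh"], check=True)
    except subprocess.CalledProcessError:
        # Hand-rolled rollback: restore the previous release and bail out.
        shutil.rmtree(APP_DIR)
        shutil.copytree(BACKUP_DIR, APP_DIR)
        sys.exit("deployment failed; previous release restored")

if __name__ == "__main__":
    deploy(sys.argv[1])
```

Multiply this by every environment, every middleware configuration step, and every architectural change, and the maintenance burden described above follows naturally.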

Survey: Most Respondents Unhappy With Release Processes

How bad is the problem?  According to a 2008 study by IDC, 30 percent of all defects are caused by incorrectly configured application environments. For the remaining 70 percent, things don't look much better. Forrester's Q4 2010 Global Release Management Online Survey suggested that the time needed to roll out a single change to an application was between a day and a week for 39 percent of respondents. For 11 percent of respondents, the time was between a week and two weeks. For 18 percent, it was two weeks to a month.

Here are some other eye-opening statistics from the Forrester survey:
  • 64 percent of respondents were dissatisfied with the level of automation in their software release processes.
  • 54 percent of respondents were dissatisfied with their ability to recover in the event of a problem either during release or with the application.
  • 50 percent of respondents were dissatisfied with the speed of each iteration of the release process.
With the need to code individualized and highly tailored scripts for each application per environment, it should be no surprise that the numbers are so high. In fact, when you consider that it takes a developer-class professional to author and test these release scripts, you have one of two problems that ultimately need to be considered:
  1. You are forced to split the time of a developer on the application team so that part of their time is dedicated to creating and maintaining these scripts; or,
  2. You are forced to hire someone, paying them the same FTE rate, for the sole purpose of release management.
With operating budgets as tight as they have been since the global meltdown in 2008, it is my suspicion that option 1 is the more common choice. This detracts from application quality as the time the developer should be spending writing new code or fixing defects discovered during test is spent instead doing operational work.

In Part 2 of our discussion, I'll cover how Nolio addresses the problem.

Thursday, April 11, 2013

Who are you?

I'm writing this from the lobby of a Hilton hotel in Toronto, where I have been staying while conducting meetings with a prominent Canadian bank.  Last night, after a 13 hour day, I met a most amazing individual in the hotel bar:  the CEO of a successful consulting firm whose specialty is the psychology of human behavior, especially within the confines of a corporate setting.

After a night's worth of very interesting, intellectual, and "i"ntertaining discussion with him and the others who were sitting at the bar with us I had the opportunity to speak with him one on one about my personal career ambitions.  During this he asked me a seemingly simple question:  "who are you?"

I was taken aback.  "Who am I?" I thought.  "I'm Larry." "I'm a technologist whose name is on the cover of two programming books." "I'm a musician."  I finally answered that I am a "business strategist," but that answer was hollow because I realized that what he was really asking was "what is your core value that no one else can deliver as well as you?"

Several years ago, the CEO of BMC Software, Bob Beauchamp, told me something similar.  During a unique opportunity to have a 20 minute intimate conversation with him, I asked him how he became CEO.  His response was twofold:

Be the authority. People finally got tired of hearing secondhand what he was saying and instead started inviting him to the important meetings to hear it directly from him.  That gave him the visibility into the senior ranks of management.

Be the best. He understood where he excelled in business and nurtured that to the point that he was the only person that provided the value that he did.  "Pick an area that interests you and be the best that anyone can be in that area" was the way he described it.  This is why people were seeking to hear his thoughts on matters of business, first in a secondhand fashion and then as the primary deliverer.

While I took Mr. Beauchamp's advice as a call to action, I failed to see the larger significance of what he was saying.  This led me to respond in a tactical sense, striving to check the boxes of both parts of his response.  But the greater guidance for me as a professional was overlooked until last night when that simple, three word question was asked.

"Who are you?"

I now realize that my greatest value is as an information broker and analyst.  The true value is revealed when this core competency is coupled with a business context.  Do you need to develop a 3-5 year business strategy?  I'm fully aware of business trends, especially as they are impacted by technology, and can help you determine the future direction of your company.  Do you need to re-engineer a failing process?  I can dissect its current state, develop metrics to measure improvement, identify broken linkages and deadlocks, and finally devise ways around them.  Do you need to evaluate corporate projects at the PMO level?  I can quantify the impact of each project by defining measurable success criteria, allowing you to shepherd the process from design through implementation.

This has been an eye opener for me, and I challenge the reader to truly understand the answer to that simple question.  What is your strength?  What application does it have in your professional and personal life?  How can you emphasize your ability to impact both by playing to those strengths (and, of course, recognizing your weaknesses and mitigating their impact)?

Tuesday, March 12, 2013

Why Agile and Test Driven Development (Part 3)

(Originally published by me on www.servicevirtualization.com)

Complexity yields defects
In part 2, we examined why SCRUM and TDD exhibit problems when measured from the perspective of the number of defects that they both yield. Before we can begin to understand why Service Virtualization helps address both of these reasons, it's worth elaborating on statements made in part 2.

You'll recall the equation to the right, presented last time.  c represents the degree of complexity, which has a direct correlation to the amount of code that must be written to meet the business requirements that yielded the complexity to begin with.  Because t is fixed and c continues to trend upward (over several releases), the number of defects will also increase over time.  Therefore, t is the primary constraint around which everything else revolves.
Expected number of defects
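Written out roughly (the exact form in the original figure may differ; this is my rendering from the prose above), the relationship is:

```latex
E[\text{defects}] = f(c, t), \qquad \frac{\partial f}{\partial c} > 0 \ \text{for fixed } t
```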

In SCRUM, an increase in the complexity of an individual sprint, or of the total application expressed as the sum of the functionality in each sprint, as seen to the left (s is a single sprint, n is the total number of sprints required to implement the full set of business requirements, and c_s is the code to be developed in any given sprint), results in a corresponding increase in the number of defects produced in a fixed amount of time.
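In symbols, using only the definitions in the sentence above (the figure itself may have expressed this differently), the total complexity across the release is:

```latex
c = \sum_{s=1}^{n} c_s
```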

TDD is slightly different. Here, the time must be "divided" (figuratively, not mathematically) into the time required to implement the tests that initially fail and the time required to implement the code that satisfies the test conditions. As the complexity increases, the number of tests that must be written also increases since the number of execution paths also increases. But, as we see in the illustration to the right, if more time must be allocated to writing tests then that leaves less time to write the code to satisfy those tests.
Mutually exclusive goals in TDD
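Spelled out (again, my rendering rather than the original illustration), the tradeoff is simply:

```latex
t = t_{\text{tests}} + t_{\text{code}}, \qquad t \ \text{fixed} \ \Rightarrow\ \text{more } t_{\text{tests}} \text{ leaves less } t_{\text{code}}
```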

So what about Service Virtualization? You've undoubtedly read elsewhere that the four primary benefits of CA's solution are...
  1. Shifting quality left. Allowing developers the ability to test code earlier in the SDLC substantially compresses each release cycle.
  2. Reduce infrastructure costs. Providing an environment that behaves "just like the real McCoy" but runs on commodity hardware reduces the need to purchase expensive development and test environments.
  3. Enable performance readiness. Stability is one aspect of availability management, but scalability is as well. While many companies still struggle to understand and implement cloud bursting to handle peak usage periods, their applications must still handle increased load or else suffer outages due to events like "the Oprah effect."
  4. Manage test scenarios. Drastically reducing the need to acquire and consume production data (which has a long turnaround time for each request due to data scrubbing requirements for regulatory reasons) allows testing to be both more effective and efficient.
With SCRUM, shifting quality left is the benefit we are primarily concerned with here. When examined in more detail, we find that this is possible because the amount of complexity is reduced by removing the constraints imposed by other components - or even entire applications or infrastructure components - that would normally prevent efficient validation of the code produced.

"But hold on!" you say. "If complexity is directly related to the amount of code produced, aren't you contradicting yourself by implying that Service Virtualization reduces the amount of code that needs to be written?" You are correct and clarification is necessary: c in the equations at the top refers to the complexity of the code that is the responsibility of each individual developer. It does not include the additional complexity that results from other aspects of the architecture that are out of the control of the developer.

So if developers have dependencies on other components in the application (authored by other developers) or on downstream infrastructure components, then their complexity increases further, since those dependencies represent variability that they cannot control. Therefore, the overall complexity is reduced when that variability is removed by replacing live systems with virtualized services that behave like their live equivalents but do not change while the developer is writing code to interact with them.

With TDD, test scenario management is the benefit we are primarily concerned with here. Since developers have a fixed amount of time to write tests that fail as well as the code to address those failures, the probability of having enough variance in the data driving the test harnesses to completely cover every permutation (or at least a large percentage of them) of data that causes the test to fail is incredibly small. This pushes defect discovery to the quality organization in spite of the best efforts of TDD as a discipline to avoid this exact scenario.

Furthermore, the time to develop the test harnesses themselves results in inefficiencies. The Service Virtualization solution includes a full-featured test script IDE with a Visio-like interface that makes test development much easier than it would be using other, data panel-based applications from other solution producers.

Questions or comments? Leave a comment below!

Wednesday, March 6, 2013

Why Agile and Test Driven Development (Part 2)

Classic physics...
(Originally published by me on www.servicevirtualization.com)

In part 1, we briefly examined the reasons why application development is challenged: namely, architectures have to be more complex to address the similarly more complex needs of the business.  We also briefly looked at the primary goals of Agile (SCRUM, specifically) and Test Driven Development (TDD) with the promise of further scrutiny to see why, although they take steps in the right direction toward better management of this complexity, they still fall short.


The impact of increased application complexity is that there is an increased rate of change in all of the cogs and wheels that will (hopefully) mesh together to produce the final result expected by the business.  Over the same amount of time, this increased rate of change will yield more points where failure can occur.  And given the same distribution of probabilities, this will ultimately yield a greater number of defects.

This becomes a problem even in SCRUM and TDD, but for very different reasons:

...becomes code creation
In SCRUM, the functionality that needs to be added in any given sprint will require the inclusion of multiple application components that need to be written and validated.  This may be approached from the "everyone jump in the pool together" perspective, where all components are being written simultaneously and thus the potential for error is large, or artificial latencies are introduced because functionality further along the project timeline has prerequisites that need to be written and validated before they can be built upon.

In TDD the objective is to understand use cases (written against the business requirements) that are not met by the application in its current state and then make the necessary code changes so that failure becomes success.  In other words, the potential for failure is already realized, but the challenge is that the onus is on the developer to understand the complete set of defects that exist within the components being modified.  This is extremely challenging at best and NP-complete at worst (if the portion that is changing is large enough).  As a result, the responsibility of validating the overall application is also challenging or nearly impossible depending on the size and complexity of the application.

In the third and final part, we'll examine how Service Virtualization (also known as LISA) by CA Technologies addresses both of these scenarios to substantially reduce the risk to successful project completion.

Thursday, February 21, 2013

Why Agile and Test Driven Development (Part 1)

(Originally published by me on www.servicevirtualization.com)

Because I work closely with application development professionals on an on-going basis, I am fairly in tune with the happenings of that profession.  (It doesn’t hurt that I, too, was in an application development related role for 18 years.)  So when I heard more and more people extol the virtues of Test Driven Development (TDD) I wanted to look into it myself to see what the hullabaloo was all about.

Application code is written to fulfill the requirements outlined by the Line of Business.  Taken as a whole, the result is an entire application that provides a business service, ultimately allowing an organization to either add new revenue streams or expand the capacity of existing ones.

Architectural complexity increases with time
The problem that often occurs is that “this isn’t your father’s application development job” anymore.  The need to remain competitive in the marketplace often adds the requirement of being incredibly agile (resulting in more aggressive / shorter release cycles) while at the same time supporting the latest trends in technology as a business enabler.  Currently, big data, cloud computing, mobile device support and “the Facebook effect” (meaning highly interactive applications taking great advantage of asynchronous processing to provide nearly instantaneous results) are the darlings of the industry, but it could be anything.

As a result, the applications that are being demanded by the Lines of Business are increasing in their complexity.  And that means the task of managing the resulting application quality has also become more complex.  This spawned the Agile development movement, which ultimately evolved to TDD. Both of these were devised to manage the complexity so that the rate of change does not make the ability to validate the correctness of the result time- and cost-prohibitive.

For those of you who have not been exposed to TDD, the primary difference between Agile (we’ll use SCRUM here as the reference since that is arguably the most prevalent Agile methodology in use) and TDD is the following:
  • SCRUM defines success as the successful implementation of a set of features and functionality to be completed by the end of the next sprint, and the developers write code to meet those goals.
  • TDD, however, defines success as the implementation of code that successfully addresses a set of (initially) failing tests that are developed in parallel by the developers.  (A minimal sketch of this test-first flow follows below.)
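Here is what that looks like in practice, as a minimal sketch with hypothetical names - the test exists, and fails, before the code that satisfies it does:

```python
# A minimal sketch of the test-first flow: the test is written first, fails
# because the implementation doesn't exist yet, and only then is the code
# written to turn that failure into success.  Names and the discount rule are
# hypothetical.

import unittest

def apply_volume_discount(order_total: float) -> float:
    """Implementation written *after* the failing tests below, just far enough
    to make them pass: orders of $1,000 or more get 10% off."""
    return order_total * 0.9 if order_total >= 1000 else order_total

class VolumeDiscountTest(unittest.TestCase):
    # Step 1 in TDD: these tests existed (and failed) before the function above.
    def test_large_order_gets_ten_percent_off(self):
        self.assertAlmostEqual(apply_volume_discount(1000.0), 900.0)

    def test_small_order_is_unchanged(self):
        self.assertAlmostEqual(apply_volume_discount(200.0), 200.0)

if __name__ == "__main__":
    unittest.main()
```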
In part 2, we’ll take a look at why both of these are not the panacea that they were initially hailed as when they were gaining in popularity.

Saturday, January 19, 2013

Wake Me Up

Last week, I was driving to and from various meetings with the business news on the radio when I heard the story that Facebook had some secret "thing" to announce to the media.  Perhaps I'm jaded but I immediately recognized this as an attempt to emulate the mastery that Steve Jobs had regarding his relationship with the media, so I was curious but thought little more than that.

As you all know, the announcement really wasn't that exciting after all.  The concept behind Friend Search (not the official name, but one that sounds better than the official name: Graph Search) is a decent one.  But the main problem I have with it is that most people aren't using Facebook for posting information that is worth searching through for answers to your problems.  David Hersh phrased it very nicely in my interview with him in 2010:

"Rather than continuing down the path of becoming a place to share meaningful content with 'real' friends, the focus on status updates flowing through the news feed has, in my opinion, shifted the focus squarely from utility to entertainment."

A lot of professionals that I know avoid mixing work with Facebook because of the huge potential it has to be detrimental to their careers.  Granted, simply behaving like adults would mitigate much of that risk. But until the Federal courts decide that demands by a potential employer for your login credentials are an invasion of privacy (especially if your Facebook content is viewable only by a restricted audience, since information that is available to the general public cancels a person's ability to claim that it is private), the number of professionals hawking their talents on Facebook will be minimal relative to the total user base.

What we're left with to sift through for answers to life's most challenging problems is your friends' postings about their kid's day at school; links to music or other videos; or images that exist solely to display the text of some witty saying that isn't searchable in the first place because it's an image.  And I'm supposed to be able to solve world hunger via a lolcat?

It should be obvious that I considered this announcement to be a rather boring affair, and it isn't the first time that Facebook has let me down.  Would you pay them to allow a message to reach someone's Inbox?  "Not I," said the pig.  And it is my guess that the moment this feature is enabled and people like me start receiving spam from telemarketers, there will be a rather substantial exodus to other social media websites.  Maybe Instagram will finally get those users back who defected after they attempted to change their Terms of Service, eh?

Edit:  maybe that defection has already begun, according to an article in the WSJ.

All of this reminds me of the smart phone commercial where Apple is ridiculed as causing mind-blowing experiences simply because they moved the headphone jack to the bottom of the phone.  Since there is some truth to that commercial, perhaps Facebook is more closely emulating Apple than I first thought.

Disclaimer:  I have an iPod Touch, iPhone, and iPad.

When Facebook productizes their platform so that corporations can use it internally a la Chatter, then I'll find reason to rejoice.  I realize they aren't doing too badly from a financial perspective (ignoring, for the moment, the overhyped IPO and its impact on the stock price, because the company wasn't literally printing its own money the way many of those who bought those shares seemed, incredibly, to believe), but I can't help but wonder if the person steering the ship from a strategy perspective really is clueless about how to tap into the enormous revenue-generating potential this company has.  The answer to that, however, won't be known until the first anniversary of its IPO date (and the subsequent release of the 10-K, along with comments by the executive management team on future directions for the company).

Or maybe someone will take a break from posting their latest Paleo diet recipe to write the answer to that question on their wall so that I can find it with Friend Search.