In part 1 I laid out the size of the problem surrounding software release. By enumerating statistics published by IDC and Forrester, we saw that the problem is bigger than many people realize. I also promised to describe how CA LISA Release Automation alleviates this problem.
When I was a child, one of my favorite movies was The Wizard of Oz. In one defining scene, Dorothy and her three companions enter the "throne room" where the Wizard had agreed to meet them. As they were talking to this giant, floating head, Toto discovered the curtain behind which the real "wizard" was operating the contraption.
"Pay no attention to that man behind the curtain!" was the admonishment of the floating head.
My question, and its relevance to software release, is: what if it were not the "wizard" behind the curtain but instead Glinda, the good witch? What if it were a gaggle of the flying monkeys? What if it were not any character from the story at all but instead Frodo, Samwise, Meriadoc and Pippin? If you're Dorothy, should you really care who is working the controls, as long as the floating head responds? Maybe in Dorothy's case she would have cared, but the question remains: you interact with a specific role, regardless of how that role is implemented in reality.
Abstracting the Infrastructure
In 1994, the self-described "Gang of Four" released a highly influential book entitled Design Patterns: Elements of Reusable Object-Oriented Software that described ways of designing applications so that they are easier to implement and have lower defect rates. One such pattern, the Adapter pattern, describes a way for one block of code to interact with another block of code even though the two are incompatible. (Think of this as trying to get a native Chinese speaker to order dinner from a waiter who only speaks Italian.) The Adapter, as they described it, provides a "translation" mechanism so that interaction is possible and coding can continue unabated. It is this concept that LISA Release Automation borrows to make software release significantly easier and far less prone to errors.
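To make that analogy concrete, here is a minimal sketch of the Adapter pattern in Java. Every name in it (ChineseMenu, ItalianWaiter, WaiterAdapter) is hypothetical and exists only to illustrate the "translation" idea; it has nothing to do with LISA itself.

```java
// A minimal sketch of the GoF Adapter pattern. All names are hypothetical
// and exist only to illustrate the "translation" idea.

// The interface our "Chinese speaker" knows how to talk to.
interface ChineseMenu {
    void dianCai(String dish);   // "order a dish"
}

// An existing, incompatible class: this waiter only understands Italian.
class ItalianWaiter {
    void ordina(String piatto) {
        System.out.println("Ordinazione ricevuta: " + piatto);
    }
}

// The Adapter translates calls from one interface to the other,
// so two incompatible parties can still interact.
class WaiterAdapter implements ChineseMenu {
    private final ItalianWaiter waiter;

    WaiterAdapter(ItalianWaiter waiter) {
        this.waiter = waiter;
    }

    @Override
    public void dianCai(String dish) {
        // "Translate" the request into something the waiter understands.
        waiter.ordina(dish);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        ChineseMenu menu = new WaiterAdapter(new ItalianWaiter());
        menu.dianCai("spaghetti alle vongole");  // order placed despite the language gap
    }
}
```

The caller codes against the interface it understands; the adapter quietly translates each call for the incompatible party on the other side.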
Consider for a moment the importance of role vs. infrastructure. Are you more concerned with releasing your application to a web server, application server and database server? Or are you more concerned with releasing it to machines USEWR01, USNYC13 and USPHI04? I'm sure some readers are thinking that, ultimately, those machines are where the application resides, so the machines are what they care about.
What then about releasing the same application to a more complex environment as the SDLC progresses? Are you more concerned about releasing your application to a web server, application server and database server? (Note that the same roles are listed.) Or are you concerned now with web servers USCHI01, USCHI02, USCHI03, and USCHI04; application servers USDAL11 and USDAL12; and database servers USATL07 and USATL08? What about when it finally moves to production and you now have 2 farms of web servers, each containing 10 servers; 2 application server farms of 2 servers each; and a full cluster of database servers? And (why not?) let's change the web servers from JBoss in development to WebSphere clusters in production. How would you normally handle this?
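One way to picture the separation: the release plan speaks only in roles, and each environment supplies its own mapping from roles to real hosts. The sketch below is hypothetical (it is not LISA's actual data model); only the hostnames are taken from the examples above.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the release definition talks only about roles,
// while each environment maps those roles onto real hosts.
public class RoleMapping {

    // Role -> hosts, per environment (hostnames from the examples above).
    static final Map<String, Map<String, List<String>>> ENVIRONMENTS = Map.of(
        "development", Map.of(
            "web",      List.of("USEWR01"),
            "app",      List.of("USNYC13"),
            "database", List.of("USPHI04")),
        "qa", Map.of(
            "web",      List.of("USCHI01", "USCHI02", "USCHI03", "USCHI04"),
            "app",      List.of("USDAL11", "USDAL12"),
            "database", List.of("USATL07", "USATL08")));

    public static void main(String[] args) {
        // The same role-based release plan runs unchanged in every environment;
        // only the mapping beneath it differs.
        for (var env : ENVIRONMENTS.entrySet()) {
            System.out.println("Releasing to " + env.getKey());
            env.getValue().forEach((role, hosts) ->
                System.out.println("  " + role + " -> " + hosts));
        }
    }
}
```

As the application moves from development to QA to production, only the mapping grows; the role-based plan sitting on top of it stays the same.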
The Complexity of Software Release
In part 1, I said the following:
A single server might be fine to house a web server, application server, and database server for a single developer, but it'll never do for QA, Performance Testing, and Integration Testing, much less UAT or a Disaster Recovery environment or even Production. And so the process of deployment requires uniquely designed scripts that are unto themselves applications that must be tested and validated for correctness. Ultimately, they are complex enough to require maintenance, and in the event that the infrastructure architecture changes beyond the trivial, changes must be coded, tested and validated again.
I want to reiterate this, especially since I've provided some contrast in the example above.
By developing software release "scripts" that are role-based rather than server-based (the term "script" is used more loosely here than you might expect; see below), you gain a few distinct advantages, illustrated by the sketch that follows this list:
- As the implementation of the role changes from environment to environment, the script itself does not need to change since it operates with respect to the role rather than the infrastructure;
- The removal of the underlying infrastructure from the script substantially lowers the need for maintenance of the script itself; and,
- The ability of the script to act based on role enables much easier automation, including full rollback in the event that errors are encountered during execution.
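Here is a hypothetical sketch of that last point: acting on roles with rollback on failure. It is not how LISA Release Automation is implemented; it only shows why a role-oriented release step is easy to unwind when something goes wrong.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of a role-based release with rollback on failure.
public class RoleBasedRelease {

    interface Step {
        void apply(String role) throws Exception;  // deploy to every host filling the role
        void undo(String role);                    // restore the previous state for the role
    }

    static void release(List<String> roles, Step step) {
        Deque<String> done = new ArrayDeque<>();
        try {
            for (String role : roles) {
                step.apply(role);
                done.push(role);
            }
        } catch (Exception failure) {
            // Roll back the roles that were already touched, most recent first.
            while (!done.isEmpty()) {
                step.undo(done.pop());
            }
            throw new RuntimeException("Release failed and was rolled back", failure);
        }
    }

    public static void main(String[] args) {
        release(List.of("web", "app", "database"), new Step() {
            public void apply(String role) { System.out.println("deploying to role: " + role); }
            public void undo(String role)  { System.out.println("rolling back role: " + role); }
        });
    }
}
```

Because each step is expressed against a role, the same apply/undo pair works unchanged whether the role maps to one development box or a twenty-server production farm.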
Scripts vs. Run Books
I promised some clarification of the term "script." I like to contrast the traditional concept of software release scripts (written in bash or some other interpreted language) with Process Automation software such as CA Process Automation, BMC Atrium Orchestrator and others. In the latter, drag-and-drop GUIs make designing "run books" (as they are known in the operations world) much easier because people can see in graphical form how the script will execute. Assertions are denoted in an easy-to-consume format so that the author or user can determine what will happen if error conditions are encountered, and so on.
LISA Release Automation is similar in usability to the Run Book Automation software described above in that it provides a drag-and-drop, Visio-like GUI that lets you easily define the actions to be taken during a software release. And with a huge library of actions that are either workflow-related or specific to the various types of infrastructure that can fulfill a particular role (e.g. the several kinds of web servers: JBoss, WebSphere, WebLogic, IIS, etc.), you are practically guaranteed to be able to develop these release run books in a fraction of the time it would take to write them by hand in bash, Windows Scripting Host, etc.
The Numbers Are Real
But the real "bread and butter" is in how they are used. By fully automating the process of software release, the ultimate goal of continuous application delivery becomes achievable. Take a look at the figures in the illustration. They come from existing customers, and the benefit in terms of process efficiency should be immediately obvious. Not only that, but the rate of "friendly fire incidents," i.e. errors introduced during the deployment itself, drops dramatically (though I unfortunately do not have hard statistics on this).
Comments and feedback are always welcome!