
Provisioning For What Purpose? (Part 2)

In part 1, we discussed how provisioning is part of the overall process of releasing an application, and how application release is a specific case of process automation.  In this part, we are going to look at the general capabilities of an automation platform with the overall view of applying those capabilities to application release, service orchestration / provisioning, and workload / job scheduling.

The earliest use of automation can be traced back to IT Operations, where Run Books were used heavily to "begin, stop, supervise and debug the system(s)" (from Wikipedia) in the Network Operations Center (NOC).  Run Books were initially index cards containing a set of instructions to accomplish a certain task; these were later printed on 8.5" x 11" paper and ultimately moved to huge three-ring binders as the systems the operators interacted with grew more complex.

At some point, companies such as Opsware, RealOps, and Opalis recognized that well-defined, mature processes were simply a set of repeatable steps with simple branch logic built in.  They built products, later sold to HP (in 2007), BMC (in 2007), and Microsoft (in 2009), respectively, that allowed Run Book scripts to be defined in a computerized form and then initiated via an operator console and, later, via integration with ITSM solutions.

From a capabilities standpoint, all automation (Run Book Automation [RBA] now often referred to as Process Automation, Workload Automation, or Release Automation) requires similar capabilities.  These are listed below:

Support for a broad number of platforms.  Distributed systems are widely used, of course, but the mainframe also didn't die as many "industry experts" predicted it would in the last decade.  Together with the many Unix variants and even lesser-known platforms like IBM's iSeries, all of these operating systems have enough market share that they cannot be ignored and should be supported.

Built-in integration with the surrounding ecosystem.  Entering commands manually via some script window is no better than writing BASH or WSH scripts.  Built-in support for commonly used IT infrastructure (e.g. monitoring systems, typical system metrics such as CPU usage or free disk space, web / application / database servers, etc.) allows the workflow designer to simply enter a few values in a data form while the underlying system takes care of translating the action into the underlying commands.

Parse and react.  Taking the output of executed commands or their result codes and either extracting values to be used in subsequent steps or branching based on those values is critical. 
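As a minimal sketch of "parse and react," consider checking disk usage: one helper extracts a value from command output, and another decides the next workflow step from the result code and that value.  The function names, step names, and the 90% threshold are all hypothetical, not taken from any particular product.

```python
def parse_df_usage(df_output):
    """Extract the use% column (an integer) from POSIX `df -P` output.

    The value sits in the 5th field of the second line, e.g. "42%".
    """
    return int(df_output.splitlines()[1].split()[4].rstrip("%"))

def next_step(returncode, usage_pct, threshold_pct=90):
    """Branch on the result code and the parsed value (names are illustrative)."""
    if returncode != 0:          # react to a failed command
        return "step-failed"
    # react to the extracted value
    return "cleanup-workflow" if usage_pct >= threshold_pct else "continue"
```

A real automation platform wires these two pieces into a workflow step so the author never writes the parsing code by hand; the sketch just shows the extract-then-branch pattern the paragraph describes.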

Complex scheduling.  Email systems like Outlook standardized the use of calendars for scheduling meetings or tasks to be completed.  The use of scheduling in an automation platform, however, needs to be much more capable since automated IT processes are often run according to very complex scheduling rules.

Integration capabilities cannot be emphasized enough.  To illustrate: the need to query free disk space (for example) exists no matter what the ecosystem is, so high-level, abstract commands free the author from having to explicitly add support for new platforms as their IT department adopts them.  Instead, the author can simply drag and drop a step called "query disk space" into the workflow and not worry about whether it will be running on Windows, Unix, OS/400, etc.
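One way to picture this abstraction is a table that maps each abstract step to the native command for each platform, resolved at run time.  This is a hypothetical sketch; the step name, platform keys, and command strings are illustrative, not from any real product.

```python
# Map abstract workflow steps to platform-specific commands (illustrative).
STEP_COMMANDS = {
    "query disk space": {
        "Linux":   "df -P {path}",
        "AIX":     "df -P {path}",
        "Windows": "wmic logicaldisk get size,freespace",
        "OS/400":  "WRKSYSSTS",  # operator command, shown for illustration
    },
}

def resolve_step(step_name, target_os, **params):
    """Translate one abstract step into the command for the target platform."""
    template = STEP_COMMANDS[step_name][target_os]
    return template.format(**params)
```

The workflow author only ever sees "query disk space"; adding a new platform means adding one row to the table, not editing every workflow.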

Similarly, the ability to support very complex scheduling rules is also a "must-have."  For example, end-of-month financial reporting may need to run on the last business day of each month (which varies from month to month) unless that day is a holiday, in which case it would run on the next business day after that.  Rules like this cannot easily be expressed using the "Monday, Tuesday, ..." or "Every n weeks" types of criteria that end users are typically familiar with.
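The "last business day unless it is a holiday" rule from the example above can be sketched in a few lines, assuming a caller-supplied holiday set (the function name and signature are hypothetical):

```python
from datetime import date, timedelta

def last_business_day(year, month, holidays=()):
    """Last weekday of the month; if that day is a holiday, roll
    forward to the next business day after it (per the example rule)."""
    # Last calendar day of the month: first of next month minus one day.
    d = (date(year, month, 28) + timedelta(days=4)).replace(day=1) - timedelta(days=1)
    # Walk back past any weekend days (5 = Saturday, 6 = Sunday).
    while d.weekday() >= 5:
        d -= timedelta(days=1)
    # If that weekday is a holiday, roll forward to the next business day.
    while d.weekday() >= 5 or d in holidays:
        d += timedelta(days=1)
    return d
```

Even this small sketch needs a holiday calendar as input, which is exactly the kind of data a calendar-style "Every n weeks" scheduler has no way to express.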

Other core capabilities that are not automation-specific are multi-tenancy, Role-Based Access Control (RBAC), High Availability (HA), and auditing features.  These will not be discussed here because they appear in several other types of IT Operations systems with which you are undoubtedly familiar.

None of these capabilities belongs exclusively to one type of automation system or another.  Instead, they belong in a core platform that can be utilized by several types of solutions to meet various business needs.  Whether it is something general (e.g. job scheduling or application release) or something specific (e.g. processing large Hadoop datasets or copying your SAP system from one instance to another), having a feature-rich, automation-centric foundation ensures that all of your operations systems will not only meet your current needs but will also grow as your needs do.
