Software gets deployed. Way back in the day, software would work through its development life cycle via one development methodology or another and end up on a floppy disk or a CD-ROM. Some software application development teams, often known as “shops,” would use the waterfall model to break a project down into linear phases. Some would use the agile approach to “release early and often,” while others would pick from the other options, including rapid application development, lean, feature-driven development and extreme programming.
Whatever the approach taken, software application code would ultimately work its way through its alpha and beta phases to emerge as the “release candidate” that would pave the way to the general availability phase.
A cyclical sense of perpetuity
A lot of software is still built that way, but the era of cloud and the web has given rise to a more dynamic cadence that sees software deployed with a sense of continuous perpetuity.
Because the app on your smartphone might need to be updated, and thus deployed, more than once in a single day, the notion of continuous integration and continuous delivery (CI/CD) is now married to DevOps (development and operations) to provide a whole new understanding of what software deployment actually means.
SEE: How to recruit and hire a DevOps engineer (TechRepublic Premium)
Software delivery platform specialist CloudBees calls this dynamism end-to-end release orchestration. This is software deployment, often now in cloud-native computing environments, driven by the acceleration advantages of low-code automations. These automations provide a level of visibility into the operational state of a software application that would not have existed back in the CD-ROM era, or perhaps even more recently.
As we have said, this is software engineering enriched through DevOps. It is a cultural approach, rather than a methodology or defined workflow system, that enables programmers, who are increasingly cloud-native, to be more aware of what the operations team (database administrators, sysadmins and testers) has to shoulder. In turn, it also gives operations more insight into developer requirements.
CloudBees goes a step further and adds security engineering team functions right in the middle. This produces the now widely lauded DevSecOps.
Achieving this cloud software release proficiency, and elevating software shops to the point where they can enjoy release orchestration, is no small task. CloudBees has now precision-engineered its own engineering, so to speak, with the 2022 acquisition of ReleaseIQ to expand the company’s DevSecOps capabilities. This is a corporate move designed to empower customers with a low-code, end-to-end release orchestration and visibility solution.
The new software-as-a-service (SaaS)-based offering from CloudBees, with ReleaseIQ integrated into its stack, claims to enable DevOps organizations to rapidly compose and analyze cloud developer workflows, orchestrating any combination of CI and CD technologies—including Jenkins, the open-source automation server that supports software build, test and deploy processes—without migration or replacement.
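To make the idea a little more concrete, here is a minimal, purely illustrative sketch of what “orchestrating a combination of CI and CD steps” amounts to underneath the tooling: an ordered chain of build, test and deploy stages whose status can be observed end to end. The stage names and placeholder commands below are hypothetical and do not represent any CloudBees, ReleaseIQ or Jenkins API.

```python
# Illustrative only: a toy CI/CD "pipeline" as an ordered chain of stages.
# Stage names and commands are placeholders; real orchestration tools
# express this declaratively and run real build/test/deploy jobs.
import subprocess
import sys
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    command: list[str]  # the command this stage runs

# Placeholder stages: each just prints a message via the Python interpreter,
# standing in for real compile, test and deployment commands.
PIPELINE = [
    Stage("build",  [sys.executable, "-c", "print('compiling application...')"]),
    Stage("test",   [sys.executable, "-c", "print('running unit tests...')"]),
    Stage("deploy", [sys.executable, "-c", "print('rolling out to staging...')"]),
]

def run_pipeline(stages: list[Stage]) -> bool:
    """Run each stage in order, stopping at the first failure."""
    for stage in stages:
        result = subprocess.run(stage.command, capture_output=True, text=True)
        ok = result.returncode == 0
        print(f"[{stage.name}] {'ok' if ok else 'FAILED'}: {result.stdout.strip()}")
        if not ok:
            return False  # a failed stage blocks everything downstream
    return True

if __name__ == "__main__":
    print("pipeline succeeded" if run_pipeline(PIPELINE) else "pipeline failed")
```

The “visibility” that release orchestration vendors talk about is essentially the per-stage status reporting in that loop, scaled up across many pipelines, teams and environments.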
Tool choice vs. forced tool sets
CloudBees says that the decision to acquire ReleaseIQ was rooted in three of its core company beliefs: choice, visibility and continuous value. It insists that first and foremost, businesses need to empower developers by providing a choice of tools versus forcing a tool set. Second, as DevSecOps matures, it is no longer acceptable to have a limited view of any software delivery ecosystem. The third cornerstone relating to value is perhaps inevitable; what enterprise software vendor doesn’t talk about customer deliverables, outcomes and value with a side order of always-present innovation?
We know that today’s DevOps teams often face excessive development complexity, inefficiency, and cost caused by incoherent, disconnected CI and CD pipelines. A limited view of a singular pipeline causes intelligence gaps and ineffective processes. CloudBees says that its new capability enables teams to coordinate coherent, effective deployments and releases across teams, applications and environments. It also provides complete visibility into the software delivery practice to advance performance.
SEE: Best DevOps Tools and Solutions for 2022 (TechRepublic)
CEO Kapur: A software philosophy
What we need to realize at this point is that cloud computing changed software and the way software applications are developed and delivered. It also changed the way organizations need to think about their IT infrastructure and operations layer.
“Any company that has been around for the last couple of decades will inevitably have a mix of modern cloud technologies and a degree of legacy technologies,” said Anuj Kapur, president and chief executive officer at CloudBees. “Let’s remember, Docker is only nine years old, and AWS only embraced containers some four or five years ago—memories fade quickly.
“We often assume that the technology we use today [at the upper tier] is matched to the implementation operations layer below, but clearly this is not always the case.”
When we look at how the cloud computing landscape actually works in real-world engineering terms, Kapur explains how divergent and variable it is in terms of the heterogeneous tiers of technology that are now being brought together. There are different applications, different teams with different specialized skill sets, different software tools, and different execution environments where code has to work.
“If all of this is happening against a backdrop of enterprises starting to move from being ‘consumers’ of software to being ‘producers’ of more of their own applications—and it is—then we need to think about addressing the fabric of our IT operations,” asserted Kapur. “As we apply DevOps today in cloud-native environments and elsewhere, there are points of sensitivity that we need to get right.”
What Kapur is referring to is the human factor. For some developers, DevOps represents an opportunity to grasp greater execution control over how their applications will work. Given the shift toward more cloud-native development today, this is a positive for them. For others, it’s an administrative responsibility they don’t want to take on; these are the programmers who just want to write code.
No standardized rubric
There’s a friction parallel in security too in the world of DevSecOps. With so much open-source software in enterprise use, we clearly need to be able to look into code production pipelines. While some software teams will welcome DevSecOps and its ability to embed automated security checks into every process, others will find it intrusive and would rather get on with choosing the open-source components they want without the hassle.
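As a thumbnail of what “embedding security into every process” can mean in practice, the sketch below shows a toy dependency gate of the kind a DevSecOps pipeline might run automatically on every build: declared open-source dependencies are checked against an advisory list, and anything flagged fails the build. Both the dependency list and the advisory entries here are invented for illustration; a real pipeline would read dependency manifests and query actual vulnerability databases.

```python
# Illustrative only: a toy "dependency gate" for a DevSecOps pipeline step.
# The dependencies and advisories below are invented for the example,
# not real packages or real vulnerability data.
import sys

# Dependencies the (hypothetical) application declares, name -> version.
DECLARED_DEPENDENCIES = {
    "examplelib": "1.2.0",
    "widgetkit": "0.9.4",
}

# A stand-in for an advisory feed: versions flagged (hypothetically) as bad.
KNOWN_BAD = {
    ("examplelib", "1.2.0"): "EXAMPLE-2022-0001: fictional remote code execution",
}

def gate(dependencies: dict[str, str]) -> int:
    """Return a non-zero exit code if any declared dependency is flagged."""
    findings = [
        f"{name} {version} -> {KNOWN_BAD[(name, version)]}"
        for name, version in dependencies.items()
        if (name, version) in KNOWN_BAD
    ]
    for finding in findings:
        print(f"BLOCKED: {finding}")
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(gate(DECLARED_DEPENDENCIES))
```

Run on every commit, a check like this is invisible when nothing is wrong and blocking when something is, which is precisely why some teams embrace it and others experience it as friction.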
“Whatever the environment and whatever the mix of tools, teams, applications and cloud services being used, there is no standardized rubric to apply across all industries in the application of DevSecOps,” argued Kapur. “To attempt to work to one would be difficult and perhaps even dangerous.”
As we now work to build the immediate future of cloud computing, we would do well to look back at where we came from half a century ago. For those of us who remember when software came on CD-ROMs, 3.5-inch floppy disks, and before that on cassette tape and even more rudimentary formats such as the printed page, the pace of modern software feels like some kind of warp-speed journey through a new universe.
But we can’t stop and think like that; this cadence is second nature to Generation Z, and these are the people now driving the next phase of the software industry’s growth.
Back in the 1980s, we used to write off for a product advertised in a magazine by posting a bank check or payment order to a depot somewhere, waiting for the money to clear, and then sitting patiently for one to two weeks before a postal worker delivered a package to our house. In the age of Amazon, Netflix and Uber, that sounds ridiculous. Software itself is now similarly speedy; welcome to release orchestration.