Back in the Day…
In the long-ago pre-cloud era, application delivery looked very different than it does today. That’s because nothing moved faster than the speed at which IT could procure and provision the servers on which the applications ran. Planning for new applications or major application updates took many months or years. At the same time, enterprise applications became increasingly complex, with growing lists of interdependencies that were difficult to track and manage. Security threats emerged that could shut down entire systems for days at a time or lead to career-ending data leaks. Even without outside bad actors, the risk of implementing a change that brought the network (and the applications it supported) to its knees continued to rise alongside that complexity.
To manage these risks and ensure the safety and reliability of every application under their care, network and security teams developed procedures that were heavy on manual review and generally involved hand-crafted policies designed to meet the unique needs of each application. While cumbersome, these procedures were not necessarily bottlenecks: they typically moved no slower than the procurement process itself.
Out of a Clear Blue Sky…
Then the Cloud happened. Suddenly, deployment processes began to look like a drag on an otherwise fast-moving system. Around the same time, GitHub and other social coding platforms made it easier for developers to collaborate on code, whether within the same team or on open source projects. New application architectures dramatically increased developer efficiency by reducing the opportunities for work done in one part of the application to conflict with other parts, a major source of toil, rework, and delay. Microservices and service mesh architectures reduced dependencies between parts of the application itself, while containers and serverless reduced dependencies on the underlying infrastructure, freeing individual developers and application teams to move at their own pace. Developers could now deploy applications in minutes rather than months. Initially, such deployments were restricted to dev and test projects, but demand for faster access to production systems grew quickly.
Digital Transformation: A Work in Progress
Network administrators and security engineers struggled to keep up, in part because, unlike developers, their professional experience centered on keeping the business safe rather than on optimizing workflows. For developers, many of the core concepts behind what came to be known as the DevOps movement were already second nature: automating processes, streamlining systems, reducing dependencies between systems, and reusing processes and code where possible. Network administrators and security engineers, by contrast, were trained as artisans. The critical nature of their work demanded that every application be vetted through manual reviews, maintained via change review boards, and managed through hand-crafted policies that could be updated (manually) in response to changing conditions.
The good news is that, despite starting the race well behind developers in terms of understanding automated deployment processes, network and security teams are catching up.
The Application Factory: Applying a Systems Mindset to Application Delivery
A good way to think about the transition is to picture an application factory. Instead of handcrafted policies and manual review processes, network and security experts need to define reusable policies and then push them down to developers to deploy with their applications as part of an automated deployment pipeline.
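To make that concrete, here is a minimal sketch, in Python, of what a reusable policy might look like in practice: the security team publishes rules as code, and the pipeline evaluates every application manifest against them before release. Everything here (the rule names, the manifest shape, the helper functions) is illustrative rather than any particular product’s API.

```python
# Minimal policy-as-code sketch (illustrative names, not a real product's API).
# The security team maintains reusable rules; the pipeline applies them to
# every application's deployment manifest before release.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Violation:
    rule: str
    message: str


# A policy rule is just a reusable function: manifest in, violations out.
PolicyRule = Callable[[dict], List[Violation]]


def require_tls(manifest: dict) -> List[Violation]:
    """Every externally exposed service must terminate TLS."""
    violations = []
    for svc in manifest.get("services", []):
        if svc.get("external") and not svc.get("tls"):
            violations.append(Violation("require-tls", f"{svc['name']} exposes plain HTTP"))
    return violations


def deny_privileged(manifest: dict) -> List[Violation]:
    """No workload may request privileged containers."""
    return [
        Violation("deny-privileged", f"{c['name']} runs privileged")
        for c in manifest.get("containers", [])
        if c.get("privileged")
    ]


# The reusable policy set, published once by the security team.
POLICY_SET: List[PolicyRule] = [require_tls, deny_privileged]


def evaluate(manifest: dict) -> List[Violation]:
    """Pipeline gate: collect violations from every rule in the policy set."""
    return [v for rule in POLICY_SET for v in rule(manifest)]


if __name__ == "__main__":
    app = {
        "services": [{"name": "web", "external": True, "tls": False}],
        "containers": [{"name": "web", "privileged": False}],
    }
    for v in evaluate(app):
        print(f"[{v.rule}] {v.message}")  # pipeline fails the build on any violation
```

The design choice worth noticing is that the policy set, not the review meeting, is the reusable artifact: updating a rule updates the gate for every application that flows through the pipeline.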
Admittedly, that is easier said than done. Just as the move from handcrafted goods to factory-produced products was not simply a matter of buying heavy equipment and reassigning workers to new roles, the shift to a systems approach is as much about mindset and culture as it is about tooling and processes. Instead of change review boards and security teams laboriously investigating every possible security threat vector and performance risk, a well-designed automated application delivery system keeps failure domains small to minimize impact and builds in effective feedback loops to enable early detection and response. Tooling and pipeline decisions should balance developer freedom against the efficiency value of consistent services. The ideal system offers developers maximum freedom to make decisions around tooling that impacts feature development but provides a consistent set of multi-cloud-capable infrastructure and security services. Building consistent services into the core of the application delivery pipeline reduces technical debt and improves operational and compliance efficiency. That, in turn, frees developers to focus on delivering more innovation value to the business.
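To illustrate the small-failure-domain and feedback-loop idea, here is an equally hypothetical sketch of a canary gate: a new version receives a small slice of traffic, an error-rate signal is watched, and the change is rolled back automatically if the signal degrades. The metric source, thresholds, and function names are assumptions for the sake of the example.

```python
# Illustrative sketch of a small failure domain with a feedback loop:
# roll a change out to a small slice of traffic, watch an error-rate
# signal, and roll back automatically if it degrades. All names and
# thresholds here are hypothetical.

import random
import time

ERROR_RATE_THRESHOLD = 0.05  # abort if more than 5% of canary requests fail
CANARY_TRAFFIC_SHARE = 0.10  # limit the blast radius to 10% of traffic


def canary_error_rate() -> float:
    """Stand-in for a real metrics query against a monitoring system."""
    return random.uniform(0.0, 0.08)


def deploy_canary(version: str, observation_windows: int = 5) -> bool:
    """Ship `version` to a small traffic slice and watch the feedback signal."""
    print(f"routing {CANARY_TRAFFIC_SHARE:.0%} of traffic to {version}")
    for window in range(observation_windows):
        time.sleep(0.1)  # placeholder for a real observation interval
        rate = canary_error_rate()
        print(f"window {window}: error rate {rate:.1%}")
        if rate > ERROR_RATE_THRESHOLD:
            print(f"rolling back {version}: error rate above threshold")
            return False
    print(f"promoting {version} to all traffic")
    return True


if __name__ == "__main__":
    deploy_canary("app-v2")
```

Because only a small share of traffic is ever exposed, a bad change hurts a small failure domain, and the feedback loop, rather than a human reviewer, triggers the response.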