Deployment Complexity in Distributed Systems (Part 1 of 2)
This is the first post in a two-part series. We will dive into the motivations for Dynamic Delivery in distributed systems and walk through a real-world example of how Dynamic Delivery simplifies managing them.
Risks in Production
It has been said that the biggest risk to reliability is a bad configuration file going out. That was true in a world of monolithic servers with a database and cache backend. However, we now live in a microservice-dominated world.
With the rise of the cloud and you-build-it-you-own-it ownership, microservices deployed as distributed systems are the default architecture.
This drastically complicates deployments. It is more important than ever for development teams to have tools that model today’s problems, not yesterday's.
You’re not shipping Gold Master CDs; you’re deploying cloud-native software!
Microservice architectures were introduced to push technical decision-making down, increase reliability, and reduce overhead.
This freedom of choice creates siloed systems and generates anti-patterns within production. Examples of these anti-patterns include circular dependencies between services, services that depend on each other across failure domains, or varying degrees of operational hygiene, including deployment frequencies.
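Circular dependencies in particular are easy to introduce and hard to spot by eye once services are owned by different teams. As a minimal sketch (the service names are hypothetical), a depth-first search over a service dependency graph can surface them:

```python
# Detect circular dependencies in a service dependency graph via DFS.
# Service names are hypothetical examples, not real systems.
def find_cycle(deps):
    """deps maps each service to the services it calls.
    Returns one dependency cycle as a list, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {s: WHITE for s in deps}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in deps.get(node, []):
            state = color.get(nxt, WHITE)
            if state == GRAY:                       # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if state == WHITE:
                found = visit(nxt)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for service in list(deps):
        if color.get(service, WHITE) == WHITE:
            found = visit(service)
            if found:
                return found
    return None

deps = {
    "checkout": ["payments"],
    "payments": ["ledger"],
    "ledger": ["checkout"],   # closes the loop back to checkout
    "search": [],
}
print(find_cycle(deps))  # ['checkout', 'payments', 'ledger', 'checkout']
```

A check like this is cheap to run against a service registry before the cycle becomes a production incident.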
The complexity becomes unbounded as the number of services at a company grows. The promise of low overhead starts to evaporate when everyone needs to know how everything else works.
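To make that growth concrete: with n services there are n(n-1)/2 potential pairwise interactions, so the coordination surface grows quadratically even when headcount grows linearly:

```python
# Potential pairwise interactions between n services: n choose 2.
def potential_interactions(n_services: int) -> int:
    return n_services * (n_services - 1) // 2

for n in (5, 30, 100):
    print(n, potential_interactions(n))
# 5  -> 10
# 30 -> 435
# 100 -> 4950
```

Not every pair actually talks to every other, but each potential interaction is something someone has to rule in or out when coordinating a deploy.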
According to the CD Foundation, only 24% of companies released at least once a day in 2021.
Using this data point, let’s look at a sample model of a high-performing 100-person engineering organization with the following assumptions:
Org Shape for 100 People:
90 engineers + 10 people in managerial capacities
10 teams of 9
Each team owns 3 services
This means the organization manages 10 × 3 = 30 services. Let's also assume that these 30 services interact with one another, with cloud services, and potentially with other SaaS or PaaS technology stacks.
How do you manage this? Change is always occurring!
Typically, teams will build deployment schedules, coordinate on Slack, avoid deploying daily, and/or just wing it.
None of these is an ideal deployment system. They slow down deployments and invite complexity, leading to inefficiencies, bloat, and high risk.
What if there was a better way?
What if we built an intelligent delivery system that simply handled the coordination instead?
The ideal deployment system would look like this:
The deployment system knows all 30 services and understands the interactions and dependencies among them.
The system has context on the current and desired state of each of the 30 services.
All ten teams can push changes for their services to production using the same interface.
Business constraints are understood within the system, such as maintenance windows, compliance requirements, etc.
An intelligent deployment system will use this data to determine if it’s safe to deploy.
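The criteria above can be folded into a single go/no-go decision. As a minimal illustration only, not Prodvana's actual implementation, with all names and fields hypothetical and the maintenance-window model deliberately simplified:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Service:
    name: str
    current_version: str
    desired_version: str
    depends_on: list = field(default_factory=list)
    # Simplified business constraint: (start_hour, end_hour) blackout windows in UTC.
    maintenance_windows: list = field(default_factory=list)

def is_safe_to_deploy(service, registry, now=None):
    """Fold dependency state and business constraints into one deploy decision."""
    now = now or datetime.utcnow()
    # 1. Every dependency must already be converged (current == desired).
    for dep_name in service.depends_on:
        dep = registry[dep_name]
        if dep.current_version != dep.desired_version:
            return False, f"dependency {dep_name} is mid-rollout"
    # 2. Business constraints: never deploy inside a maintenance window.
    for start_hour, end_hour in service.maintenance_windows:
        if start_hour <= now.hour < end_hour:
            return False, "inside a maintenance window"
    return True, "ok"

registry = {
    "db-migrator": Service("db-migrator", "v2", "v2"),
    "api": Service("api", "v5", "v6",
                   depends_on=["db-migrator"],
                   maintenance_windows=[(9, 17)]),
}
print(is_safe_to_deploy(registry["api"], registry,
                        now=datetime(2021, 1, 1, 3)))  # (True, 'ok')
```

A real system would layer in compliance requirements, failure domains, and health signals, but the shape is the same: every deploy decision runs through one shared check instead of ten Slack threads.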
Now, the 90 engineers would have zero overhead in this process. No more lengthy Slack threads to coordinate their deploys, and no mistakes over calendar dates or timezones! They push a button, and the deployment system coordinates the release of their service.
This is what we’ve built at Prodvana. We are moving away from the archaic nature of pipelines to Dynamic Delivery — powered by declarative configurations + convergence loops.
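"Declarative configurations + convergence loops" follows the same observe-diff-act pattern as a Kubernetes-style reconcile loop: compare the declared desired state against the observed current state and act only on the difference. A minimal sketch of the pattern (not Prodvana's actual code; the callbacks are hypothetical):

```python
def converge(get_desired, get_current, apply_change, max_steps=100):
    """Convergence loop: observe, diff, act, repeat until states match.

    get_desired: reads the declarative config (what should be running).
    get_current: reads the live system (what is actually running).
    apply_change: takes one step toward the desired state.
    """
    for _ in range(max_steps):
        desired, current = get_desired(), get_current()
        if current == desired:
            return True          # converged
        apply_change(desired)    # move the system toward desired state
    return False                 # did not converge within the step budget

# Usage: roll a hypothetical "api" service from v5 to v6.
state = {"api": "v5"}
desired = {"api": "v6"}
print(converge(lambda: desired, lambda: dict(state),
               lambda d: state.update(d)))  # True
```

The key property is that the loop is idempotent and restartable: it carries no pipeline state, so a crash or a new desired version simply means the next iteration re-diffs and continues.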
Our next post will detail a real-world distributed system deployment and how Prodvana’s Dynamic Delivery eliminates large engineering cycles by removing complex pipelines, ultimately decreasing risk.
Deploying distributed systems is one of the most challenging control problems in software to get right. At Prodvana, we aim to nail it, so you don't have to.