How CI/CD is Sidetracking Optimization, and What You Can Do About It
High-velocity code changes are making it impossible to optimize infrastructure. But not all is lost in the battle for improved performance.
Asaf Ezra · Sep 1, 2020
Change is part of DevOps history. Starting with the merger of development and operations teams, the DevOps role has evolved continuously and at a fast pace: from IT system administration, through titles such as "Functional Operations" and "Technical Operations," to full ownership of CI/CD production environments, DevOps teams are constantly assigned new roles, responsibilities, and authority. Unsurprisingly, each business defines the role a bit differently as it continues to adapt. In general, the role today aims to shorten the systems development life cycle and provide continuous delivery with high software quality; the main goal is responding better to customer needs and achieving business goals faster. Industry trends show DevOps teams taking on increased ownership of KPIs that affect business results, putting them right at the intersection of business, development, and operations. At this intersection, infrastructure performance is the next issue to overcome.
DevOps crossed the chasm in 2019: according to DORA's State of DevOps report, elite teams grew from 7% to 20% of all DevOps teams. Elite DevOps teams use the right monitoring tools and best practices to improve business and organizational performance. They are driven by the cloud, with 24x more cloud utilization than non-elite teams, and they automate everything from code release to deployment, migration, testing, and monitoring for a nimble CI/CD pipeline. Elite and high-performing teams approach the changing role of DevOps by focusing on process automation and cloud usage to deliver CI/CD stability, throughput, and availability, and to raise staff productivity.
As for the next step in DevOps evolution, research reveals that one of the important missing pieces for streamlining high-quality software is performance optimization, yet it is often neglected due to the accelerating pace of feature and functionality changes. This both aligns with and contradicts the main goals of DevOps teams: the development life cycle is shortened, but business goals are sometimes not achieved faster. Currently, many organizations monitor performance separately across the CI/CD pipeline, with SPE (Software Performance Engineering) in development and APM (Application Performance Monitoring) in operations. In the next DevOps stage, organizations need to work towards a continuous performance process, automating the measurement of response time, throughput, and resource utilization. Performance optimization balances cost/performance tradeoffs in the cloud and on in-house servers, making a huge impact on business, customer, and organizational KPIs.
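As a concrete illustration of what a continuous performance process can mean in practice, here is a minimal sketch (not from the article; the workload function and the latency threshold are hypothetical) of a CI step that measures response time and throughput for a code path and fails the build when a latency budget is exceeded:

```python
import statistics
import time

def handle_request() -> None:
    """Stand-in for the code path under test (hypothetical workload)."""
    sum(i * i for i in range(1000))

def measure(n_requests: int = 500):
    """Measure per-request latency and overall throughput."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
    throughput = n_requests / elapsed                # requests per second
    return p95, throughput

if __name__ == "__main__":
    p95, rps = measure()
    # Fail the pipeline on a latency regression (0.5 s is an illustrative budget).
    assert p95 < 0.5, f"p95 latency regression: {p95:.4f}s"
    print(f"p95={p95 * 1000:.2f}ms throughput={rps:.0f} req/s")
```

In a real pipeline this check would run against a staging deployment and compare results to a stored baseline rather than a fixed threshold, but the measure-then-gate loop is the core of the idea.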
According to a recent industry survey on performance in DevOps, what is holding performance optimization back is tooling. Ease of use is what is needed for success: the complexity of today's performance engineering approaches is a barrier to widespread adoption. Performance engineering approaches must be lightweight and integrate smoothly with existing tools in the DevOps pipeline. With new software-based performance optimization approaches, DevOps teams can now have the right automation tools.
There are several alternatives, varying in how they balance cost/performance tradeoffs and how easily they can be integrated. Resource management and workload orchestration methods automatically match workloads to resources for provisioning, placement, and sizing. Configuration tuning methods automatically modify parameters across the technical stack to determine the best configuration for a workload. Some of these tools need to be updated after each deployment, and some must be configured manually, which makes integrating them into the pipeline a difficult and time-costly undertaking for DevOps teams.
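To make the configuration-tuning idea concrete, here is a minimal sketch with an entirely hypothetical workload and a single parameter (thread-pool size): it measures throughput at each candidate setting and keeps the best one. Real tuning tools search many parameters across the stack, but the measure-and-compare loop is the same:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def workload() -> None:
    """Hypothetical task dominated by waiting (e.g., simulated I/O)."""
    time.sleep(0.001)

def throughput_with(workers: int, tasks: int = 200) -> float:
    """Run the workload at a given concurrency level; return tasks/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(tasks):
            pool.submit(workload)
        # Leaving the `with` block waits for all submitted tasks to finish.
    return tasks / (time.perf_counter() - start)

def tune(candidates=(1, 2, 4, 8, 16)) -> int:
    """Naive configuration tuning: measure each setting, keep the best."""
    return max(candidates, key=throughput_with)
```

The drawback the paragraph above points at is visible even here: the sweep must be re-run whenever the workload changes, which is why static tuning is costly to keep inside a fast-moving pipeline.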
Application-driven resource management is the newest performance optimization approach, and it meets the pressing needs of DevOps teams. Operating autonomously in real time and requiring no development work, it adapts to specific workload needs in the moment, continuously optimizing performance in the OS and kernel with AI. These real-time optimization tools work automatically to maximize quality of service (QoS) by increasing throughput, and to reduce cloud and IT infrastructure costs by boosting server utilization so idle servers can be removed.
With minimal impact on staff and maximum impact on business results, real-time performance optimization is a natural fit for DevOps. It is lightweight, automatic, and connects to existing monitoring tools to solve the big DevOps challenges. Intelligent agents automatically analyze traffic and application behavior to optimize response times, memory allocation, and throughput. Real-time kernel-based software updates the OS thread scheduler, resolves bottlenecks, and releases memory to improve performance. The result is reduced response time, fewer resources required, and more cloud savings, along with better customer experience and more potential revenue.
With accelerated growth, businesses enter a never-ending performance/cost loop to control costs and maintain QoS. DevOps teams struggle with many common characteristics of modern business applications: they run in the cloud, handle heavy transaction-driven traffic, and must keep strict SLAs. Take, for example, applications running on AWS spot instances, serving 80K-100K or more requests per minute with response/processing times of only 120 milliseconds per request. Simply by introducing real-time performance optimization, DevOps can reap quick benefits: saving 33%-60% in cloud compute costs, reducing latency (response time) and CPU processing by 25%-70% or more, and increasing throughput by 15%.
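A rough capacity sketch shows how latency reductions in that range can translate into fewer instances. This is back-of-the-envelope arithmetic, not the article's own model: the per-instance concurrency of 8 is a hypothetical assumption, and the Little's-law-style estimate ignores queueing effects.

```python
import math

def required_instances(requests_per_min: float, resp_ms: float, concurrency: int) -> int:
    """Little's-law-style estimate: each instance sustains roughly
    concurrency / response_time requests per second."""
    per_instance_rps = concurrency / (resp_ms / 1000.0)
    total_rps = requests_per_min / 60.0
    return math.ceil(total_rps / per_instance_rps)

# Workload in the article's range: 90K req/min at 120 ms per request,
# assuming (hypothetically) 8 concurrent requests per instance.
before = required_instances(90_000, 120, concurrency=8)

# Low end of the article's cited 25%-70% latency reduction:
after = required_instances(90_000, 120 * 0.75, concurrency=8)

savings = 1 - after / before  # fraction of instances no longer needed
```

Under these assumptions the estimate lands at roughly a quarter fewer instances from the latency improvement alone; spot-instance pricing and higher per-server utilization would stack further savings on top, which is how figures in the article's 33%-60% range become plausible.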
With the right mindset and real-time performance optimization tools that are easy to implement in the cloud and on-prem, DevOps teams are primed and ready to face the performance challenge, and win, with no entry barriers. They have the right support to streamline the end-to-end performance engineering process and to help modern businesses run more efficiently, with better throughput and less latency. They also have the tools to increase resource utilization while mitigating the risks previously associated with doing so. It is a simple, cost-effective way to take DevOps into the next stage of its evolution and close the CI/CD performance gap. Now is the time to make the changes that reduce costs, ensure profitability, and boost customer experience. The right optimization tools are available and integrate smoothly into the DevOps environment.