
Enhancing Kube-Proxy Efficiency in Kubernetes Networking

Tamir Mizrachi

Kube-proxy plays a critical role within the Kubernetes ecosystem, tasked with the essential function of routing Service traffic to the Pods that back each Service. This component ensures seamless communication within the cluster, making its efficiency paramount for optimal cluster performance.

However, through detailed observation and analysis, our team has identified significant inefficiencies in kube-proxy’s mechanism for handling network state changes. These inefficiencies become particularly pronounced in large-scale Kubernetes clusters, leading to considerable performance degradation that can impact the overall functionality and responsiveness of services within the cluster.

To address these inefficiencies, we devised an approach that applies patches, in real time, directly to the memory of the running kube-proxy process on each cluster node. Recognizing the diversity of kube-proxy builds and versions deployed across clusters, we developed a versatile and adaptive solution. It employs run-time debugging techniques, leveraging the Linux “perf probe” tool to identify specific memory addresses and build offsets, and uses “gdb,” the GNU Debugger, to apply the modifications dynamically while kube-proxy’s operation remains uninterrupted.

Our research and development team undertook an exhaustive investigation into kube-proxy’s architecture and operational nuances, pinpointing the underlying causes of the observed inefficiencies and crafting a solution generic enough to handle the many kube-proxy builds and versions in the field. Through this comprehensive approach, we aim to significantly enhance the performance of kube-proxy, thereby optimizing network efficiency across Kubernetes clusters.

The Challenge of Network State Management in Large-Scale Clusters

A performance issue exists in kube-proxy’s iptables mode within large Kubernetes clusters due to the sheer number of iptables rules that kube-proxy must manage. In environments hosting tens of thousands of Pods and Services, kube-proxy is tasked with handling a similarly vast number of iptables rules, since it generates multiple rules for each Service, plus additional rules for each of the Service’s endpoint IP addresses.

The core of the challenge is the inherent limitation of iptables: whenever a Service or its EndpointSlices undergo changes, kube-proxy must update the corresponding iptables rules in the kernel. Due to iptables’ operational mechanics, this update process involves reading the entire existing ruleset and then rewriting it with the necessary modifications. This read-write cycle is particularly time-consuming and becomes a significant performance bottleneck in large-scale deployments. The requirement to parse and rewrite the entire ruleset for any update amplifies the workload, leading to delays and increased CPU usage as the system struggles to keep pace with the frequent changes inherent in dynamic Kubernetes environments with vast numbers of Pods and Services.

The Impact and Adjustment of the minSyncPeriod Parameter

The minSyncPeriod parameter in kube-proxy is a critical configuration that dictates the minimum interval between successive synchronizations of iptables rules with the kernel. When this parameter is set to 0 seconds, kube-proxy enters an aggressive mode of operation where it updates iptables rules instantly following any modifications to Services or Endpoints. While this setting ensures that network configurations are immediately reflective of the current state, especially in smaller or less dynamic environments, it poses significant challenges in larger, more volatile clusters.

To illustrate the impact of a minSyncPeriod set to 0, consider the scenario of removing a Deployment linked to a Service that spans 100 Pods. In this case, kube-proxy would initiate an iptables rule update for each Pod termination, cumulatively triggering 100 separate updates. This behavior not only imposes a heavy load on the CPU but also slows down the overall process of rule updating. The system finds itself ensnared in a relentless loop of modifications, as it tries to keep pace with each individual change, thereby exacerbating the inefficiency and leading to potential bottlenecks in network performance.

To address this, adjusting the minSyncPeriod allows kube-proxy to consolidate multiple changes and apply iptables rule updates in batches. Rather than reacting to each change separately, kube-proxy can accumulate changes over a set period and implement them collectively, minimizing the number of updates and, subsequently, the CPU usage. This approach enhances rule synchronization efficiency and more swiftly aligns iptables rules with the current state of Services and Endpoints.

Nevertheless, extending the minSyncPeriod too far introduces delays in processing individual changes, which may have to wait until the end of the period to be applied. This presents a trade-off between minimizing workload and ensuring timely iptables rule updates.
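For reference, the batching window is exposed through kube-proxy’s configuration file; a minimal sketch of the relevant fields follows (the values are illustrative, not recommendations):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"
iptables:
  # Accumulate changes for up to 5s before rewriting the ruleset
  minSyncPeriod: 5s
  # Upper bound: force a full resync at least this often
  syncPeriod: 30s
```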

Inefficiencies in Default minSyncPeriod Settings

The default minSyncPeriod setting in kube-proxy (1s in recent Kubernetes releases), while suitable for many environments, may not be optimal for all, especially in the context of large-scale clusters. These clusters often experience a high volume of changes, which can quickly overwhelm the default configuration. This overload can result in significant performance issues, primarily due to the increased processing demands required to keep the network configurations synchronized with the rapid pace of changes.

Our solution aims to address and mitigate these inefficiencies, ensuring a more adaptable and efficient operation of kube-proxy across diverse cluster environments.

Runtime Optimization of Kube-Proxy’s Synchronization Parameters

Our research and development team embarked on a project to dynamically optimize kube-proxy’s synchronization behavior, focusing on the crucial minSyncPeriod parameter. Through debugging and in-depth analysis of the source code, we discovered the potential to exert dynamic control over it: during the startup sequence, kube-proxy processes this parameter through various structures and calculations, ultimately deriving the operational parameters that dictate update frequencies.

This process revealed two key parameters for runtime adjustment: minInterval and qps, each residing within distinct structures. Our initiative aims to refine the control over these parameters during runtime, enhancing kube-proxy’s efficiency and adaptability in managing network state updates.

Relevant structs from Kubernetes’ kube-proxy source code 
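In abridged form, the runner structure looks roughly like this (reconstructed from pkg/util/async/bounded_frequency_runner.go; the exact field set and ordering vary across Kubernetes versions, and the rateLimiter interface is simplified for illustration):

```go
package async

import (
	"sync"
	"time"
)

// rateLimiter captures the subset of client-go's flowcontrol.RateLimiter
// interface that BoundedFrequencyRunner relies on (simplified here).
type rateLimiter interface {
	TryAccept() bool // reports whether a run is allowed right now
	Stop()
}

// BoundedFrequencyRunner manages runs of a function (for kube-proxy,
// syncProxyRules) so they happen no more often than minInterval apart
// and no less often than maxInterval apart. Abridged; see the upstream
// source for the full definition.
type BoundedFrequencyRunner struct {
	name        string        // instance name, used in log messages
	minInterval time.Duration // minimum time between two runs
	maxInterval time.Duration // maximum time between two runs

	run chan struct{} // signals an on-demand run request

	mu      sync.Mutex  // guards fn runs and all mutations
	fn      func()      // the rate-limited function
	lastRun time.Time   // timestamp of the last completed run
	limiter rateLimiter // token-bucket limiter for on-demand runs
}
```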

The minInterval and qps (queries per second) fields play pivotal roles in controlling the synchronization frequency of certain operations within a runtime environment, particularly in the context of Kubernetes’ kube-proxy operation and its interaction with iptables rules. The minInterval parameter defines the minimum time gap between consecutive executions of a specific function, managed by the BoundedFrequencyRunner. This interval ensures that there’s a controlled delay between operations, preventing an overload of function executions in a short period. For instance, in the context of kube-proxy, setting a minInterval would mean that updates to iptables rules are not attempted more frequently than the specified interval, allowing for a batching of updates and more efficient use of resources.

The qps field, on the other hand, is related to the tokenBucketRateLimiter structure and is a measure of the allowed rate of operations per second. It is directly used to configure a rate limiter that employs a token bucket approach, allowing operations to burst up to a specified limit while maintaining an average rate of operations defined by the qps. This mechanism provides flexibility in handling bursts of changes while ensuring the overall rate does not exceed a manageable level. For kube-proxy, this means that during periods of high change rates (e.g., many Services or Endpoints updating simultaneously), the system can temporarily accommodate these bursts up to the burst capacity, but over time, the average rate of updates is kept in check by the qps limit.

Together, the minInterval and qps parameters provide a dual-layered control mechanism over the frequency of updates, combining a minimum enforced delay between operations with a rate-limited capacity to handle bursts.
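This behavior is straightforward to reproduce with client-go’s flowcontrol package, which provides the token-bucket limiter in question (the qps and burst values below are arbitrary):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// qps=0.1 permits one operation per 10 seconds on average;
	// burst=2 lets two back-to-back operations through before
	// the bucket is drained.
	limiter := flowcontrol.NewTokenBucketRateLimiter(0.1, 2)
	for i := 0; i < 4; i++ {
		// TryAccept is non-blocking: it reports whether a token
		// was available at the moment of the call.
		fmt.Printf("attempt %d allowed: %v\n", i+1, limiter.TryAccept())
	}
}
```

Running this prints true for the first two attempts (the burst capacity) and false afterwards, until the bucket refills at the configured qps.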

Dynamic Control of minSyncPeriod in Kubernetes Kube-Proxy

Our team developed an innovative plugin designed to dynamically adjust the minSyncPeriod parameter within an actively running kube-proxy process. The core challenge involved applying in-memory modifications to the live process, necessitating precise identification and alteration of specific memory locations associated with the minInterval and qps parameters.

Memory Location Discovery and Patching

Our initial step entailed extensive research to pinpoint the memory addresses of the minInterval and qps parameters. We discovered that the BoundedFrequencyRunner struct, which houses these parameters, is routinely passed to the BoundedFrequencyRunner::tryRun method. This method executes in accordance with the BoundedFrequencyRunner’s operational frequency, presenting a strategic point for intervention.


IDA screenshot of BoundedFrequencyRunner::tryRun method

To leverage this, we employed perf probe, a Linux tool that places dynamic tracepoints using the kernel’s kprobes/uprobes tracing infrastructure. It enables us to read a process’s memory whenever a specific event fires. By attaching a probe to the BoundedFrequencyRunner::tryRun function’s execution, we located the BoundedFrequencyRunner struct in memory and, from there, accessed its pertinent fields, including a pointer to the limiter struct that contains the qps field.

In Go 1.17, a significant modification was introduced in the compiler’s approach to passing function arguments and results. Unlike previous versions where the stack was predominantly used, Go 1.17 employs registers for this purpose. Consequently, for x86_64 binaries, the initial parameter, which is the focus of our analysis, is now located in the rax register, as opposed to the traditional stack position of (rsp + 8).

To handle this shift comprehensively, our strategy examines both potential locations for the pointer. perf probe gracefully tolerates attempts to access unmapped memory without crashing, so we probe the rax register and the legacy stack location simultaneously; any failed reads from an unmapped address are simply dropped. After data collection, a filtering pass during analysis determines the correct pointer location, as elucidated in the following section.

Fragment of the plugin’s code: this function constructs the probe pattern for the structs, incorporating the pertinent offsets and enumerating all potential pointer locations
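A simplified reconstruction of that function is sketched below. The symbol offset, field offsets, and names are illustrative placeholders; only the shape of the perf-probe expression, with one set of arguments per candidate pointer location, reflects the actual technique:

```go
package main

import "fmt"

// buildProbePattern assembles the perf-probe argument string that captures
// fields of the BoundedFrequencyRunner from both candidate locations of the
// receiver pointer: the rax register (Go >= 1.17 register ABI) and the
// legacy stack slot at rsp+8 (Go <= 1.16).
func buildProbePattern(funcOffset uint64, minIntervalOff, limiterOff int) string {
	locations := []string{
		"%ax",     // first argument in rax under the register ABI
		"+8(%sp)", // first argument on the stack in older builds
	}
	// Probe point: the resolved offset of tryRun within the binary.
	pattern := fmt.Sprintf("tryRun=%#x", funcOffset)
	for i, loc := range locations {
		// Dereference each candidate pointer at the field offsets; reads
		// through an invalid pointer are dropped by perf, not fatal.
		pattern += fmt.Sprintf(" min%d=+%d(%s):s64", i, minIntervalOff, loc)
		pattern += fmt.Sprintf(" lim%d=+%d(%s):u64", i, limiterOff, loc)
	}
	return pattern
}

func main() {
	// Hypothetical offsets, purely for demonstration.
	fmt.Println(buildProbePattern(0x1234560, 16, 96))
}
```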

Dynamic Field Offset Determination

One of the significant hurdles we faced was the absence of exported field offsets within the struct. To overcome this, we exploited the mathematical relationship between the qps and minInterval values, together with the known valid ranges and default settings for minInterval. In addition, the constancy of the maxInterval field value served as a verification mechanism, confirming that the memory locations we identified for patching were the correct ones.


Reverse engineering the k8s_io_kubernetes_pkg_util_async_construct function to identify the field offsets in the BoundedFrequencyRunner; the offsets are re-confirmed during each runtime execution.
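In code, that verification reduces to a consistency check along these lines (a hypothetical sketch: the expected maxInterval equals the configured syncPeriod, 30s by default, and the tolerance is arbitrary):

```go
package patcher

import (
	"math"
	"time"
)

// offsetsLookValid reports whether the values read at a candidate set of
// field offsets are mutually consistent, which is how we confirm that the
// offsets really point at minInterval, maxInterval, and qps.
func offsetsLookValid(minInterval, maxInterval time.Duration, qps float64) bool {
	// maxInterval mirrors the configured syncPeriod, a known constant
	// for a given cluster (assumed 30s here).
	if maxInterval != 30*time.Second {
		return false
	}
	// minInterval must sit in the valid range for a sync period.
	if minInterval <= 0 || minInterval > maxInterval {
		return false
	}
	// qps must be the inverse of minInterval expressed in seconds.
	expected := float64(time.Second) / float64(minInterval)
	return math.Abs(qps-expected) <= expected*0.01
}
```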

Calculating Patch Values

The final stage involved translating the desired synchronization interval into appropriate values for patching. The minInterval is measured in nanoseconds, while the qps must be converted into a double-precision floating-point value that is the inverse of the minInterval expressed in seconds (qps = 1 / minInterval).
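Concretely, the conversion amounts to the following (a minimal sketch; patchValues is our naming, and the qps is returned as its raw IEEE-754 bit pattern because memory is ultimately patched as integers):

```go
package patcher

import (
	"math"
	"time"
)

// patchValues converts a desired minimum sync interval into the two raw
// values to be written into the live process: minInterval in nanoseconds
// (time.Duration is an int64 nanosecond count) and the bit pattern of the
// double-precision qps value.
func patchValues(desired time.Duration) (minIntervalNs int64, qpsBits uint64) {
	minIntervalNs = desired.Nanoseconds()
	qps := float64(time.Second) / float64(desired) // inverse relationship
	qpsBits = math.Float64bits(qps)
	return
}
```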

To apply these modifications to the active system, we utilized the GNU Debugger (GDB), which allowed us to precisely alter the memory values.
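A minimal sketch of that step, driving GDB in batch mode from Go (the two addresses come from the discovery stage described above; GDB’s {type} cast syntax writes the raw values in place, and the process resumes once GDB detaches):

```go
package patcher

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"time"
)

// applyPatch attaches gdb to the live kube-proxy process and overwrites
// the minInterval and qps fields in place. gdb briefly stops the process
// to write memory, then detaches; kube-proxy keeps running unrestarted.
func applyPatch(pid int, minIntervalAddr, qpsAddr uint64, desired time.Duration) error {
	ns := desired.Nanoseconds()
	qpsBits := math.Float64bits(float64(time.Second) / float64(desired))
	cmd := exec.Command("gdb", "-p", strconv.Itoa(pid), "-batch",
		// minInterval is an int64 nanosecond count.
		"-ex", fmt.Sprintf("set {long long}%#x = %d", minIntervalAddr, ns),
		// qps is written as the raw bits of its double-precision value.
		"-ex", fmt.Sprintf("set {unsigned long long}%#x = %d", qpsAddr, qpsBits),
	)
	return cmd.Run()
}
```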

Address Resolution for BoundedFrequencyRunner::tryRun

Another aspect of our approach was determining the memory address of the BoundedFrequencyRunner::tryRun function. Since production kube-proxy binaries are typically stripped of their ELF symbol tables, we turned our attention to the .gopclntab section within the ELF (Executable and Linkable Format) binary. This section, the “program counter line table,” maps virtual memory addresses to function names and source lines, which is what lets Go generate detailed stack traces.

To navigate this complexity, we crafted a specialized Go program capable of parsing this section, effectively creating a Go symbol resolver. This tool was instrumental in our ability to dynamically adjust kube-proxy’s behavior, enhancing its adaptability and performance in real-time operational environments.
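Go’s standard library already provides the building blocks for such a resolver; a condensed version might look like this (the fully qualified symbol name is an assumption and can differ between builds):

```go
package main

import (
	"debug/elf"
	"debug/gosym"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := elf.Open(os.Args[1]) // path to the kube-proxy binary
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	pclntab := f.Section(".gopclntab")
	text := f.Section(".text")
	if pclntab == nil || text == nil {
		log.Fatal("missing .gopclntab or .text section")
	}
	data, err := pclntab.Data()
	if err != nil {
		log.Fatal(err)
	}
	// The pclntab itself carries function names, so no symtab is needed.
	table, err := gosym.NewTable(nil, gosym.NewLineTable(data, text.Addr))
	if err != nil {
		log.Fatal(err)
	}
	fn := table.LookupFunc(
		"k8s.io/kubernetes/pkg/util/async.(*BoundedFrequencyRunner).tryRun")
	if fn == nil {
		log.Fatal("tryRun not found in pclntab")
	}
	// fn.Entry is a virtual address; translating it into the file offset
	// that uprobes expects is omitted here.
	fmt.Printf("tryRun entry: %#x\n", fn.Entry)
}
```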

Conclusion: Enhancing Kube-Proxy for Kubernetes Efficiency

Our approach provides a targeted solution to the operational challenges faced by kube-proxy, particularly in large-scale Kubernetes clusters. By introducing dynamic adjustments to the minSyncPeriod parameter and leveraging advanced debugging and patching methods, we aim to improve kube-proxy’s responsiveness and efficiency.

The essence of our project lies in addressing the complexities of network state management in expansive Kubernetes ecosystems. Through the thoughtful application of dynamic optimization techniques to the minSyncPeriod, coupled with refined debugging and patching strategies, we have worked towards enhancing the operational efficiency of kube-proxy.
