[k8s] Auto-clamp cpu and mem for capacity #9179

Open
kyuds wants to merge 3 commits into master from kyuds/k8s-ux

Conversation


@kyuds commented Mar 24, 2026

Tested (run the relevant ones):

  • Code formatting: install pre-commit (auto-check on commit) or bash format.sh
  • Any manual or new tests for this PR (please specify below)
  • All smoke tests: /smoke-test (CI) or pytest tests/test_smoke.py (local)
  • Relevant individual tests: /smoke-test -k test_name (CI) or pytest tests/test_smoke.py::test_name (local)
  • Backward compatibility: /quicktest-core (CI) or pytest tests/smoke_tests/test_backward_compat.py (local)

@gemini-code-assist commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a useful enhancement for Kubernetes resource management by automatically clamping CPU and memory requests. It addresses a common pain point: pods fail to schedule when their requested resources exactly match a node's total capacity, because the request does not account for the system overhead reserved by Kubernetes components. By adjusting such requests down to the node's allocatable capacity, the change makes pod scheduling more robust, particularly when resource requests sit at the upper limit of the available nodes.

Highlights

  • Automatic Resource Adjustment for Kubernetes: Implemented a new mechanism to automatically adjust requested CPU and memory resources for Kubernetes deployments. This prevents scheduling errors when user-requested resources exactly match a node's capacity but are slightly larger than the allocatable resources due to system overhead (a sketch of this logic follows the list).
  • System Overhead Calculation: The adjustment logic calculates system overhead (capacity - allocatable) and subtracts it from the requested resources. This ensures pods fit within the actual schedulable capacity of a node.
  • Conditional Adjustment: The resource adjustment is only applied if no larger nodes (with strictly more CPU capacity than requested) are available in the cluster. If a larger node exists, the scheduler can place the pod there without adjustment.
  • Updated Resource Fitting Logic: Modified the resource fitting checks to consider nodes with 'equal or greater' CPU/memory capacity, aligning with the new adjustment logic that handles cases where requested resources match node capacity.
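
The implementation itself is not shown in this thread; as a rough illustration of the behavior the highlights describe, here is a minimal sketch, assuming per-node capacity and allocatable figures are available. All names, types, and the function signature are hypothetical, not the PR's actual code.

```python
# Hypothetical sketch of the adjustment described in the highlights above.
from typing import List, NamedTuple, Tuple


class NodeInfo(NamedTuple):
    cpu_capacity: float      # total CPUs reported by the node
    mem_capacity: float      # total memory (GiB) reported by the node
    cpu_allocatable: float   # CPUs left after system reservations
    mem_allocatable: float   # memory (GiB) left after system reservations


def adjust_resources_to_allocatable(
        requested_cpu: float, requested_mem: float,
        nodes: List[NodeInfo]) -> Tuple[float, float]:
    """Clamp a request to allocatable when it exactly matches node capacity.

    Per the highlights: only adjust when no node with strictly more CPU
    capacity exists; otherwise the scheduler can use a larger node as-is.
    """
    if any(n.cpu_capacity > requested_cpu for n in nodes):
        return requested_cpu, requested_mem

    for n in nodes:
        if (n.cpu_capacity == requested_cpu
                and n.mem_capacity >= requested_mem):
            # Subtract the system overhead (capacity - allocatable) so the
            # pod fits within the node's actual schedulable capacity.
            cpu_overhead = n.cpu_capacity - n.cpu_allocatable
            mem_overhead = n.mem_capacity - n.mem_allocatable
            return (requested_cpu - cpu_overhead,
                    requested_mem - mem_overhead)
    return requested_cpu, requested_mem
```

For example, with a single 8-CPU / 32-GiB node whose allocatable is 7.8 CPU / 30.5 GiB, a request of exactly 8 CPU and 32 GiB would be clamped to (7.8, 30.5) so the pod can actually schedule.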


@gemini-code-assist bot left a comment
Code Review

This pull request introduces a new utility function, adjust_resources_to_allocatable, to dynamically adjust requested CPU and memory resources for Kubernetes pods. The adjustment accounts for system overhead when the request matches node capacity and no larger node is available, aiming to prevent scheduling errors. The make_deploy_resources_variables function is updated to use the new utility, and the resource fitting checks in check_cpu_mem_fits and check_tpu_fits now use inclusive ('>=') comparisons for CPU and memory requirements.

However, two critical issues were identified:

  1. The overhead calculation incorrectly subtracts a safety margin, which could lead to adjusted requests exceeding allocatable limits.
  2. The has_larger_node check is insufficient: it only considers CPU, potentially causing pods to get stuck in a Pending state due to unadjusted memory requests.
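
To make the two findings concrete, here is a hedged sketch of how they might be addressed, reusing the hypothetical NodeInfo type from the earlier sketch. This is illustrative only, not the PR's or the reviewer's actual code.

```python
# Continues the earlier sketch; NodeInfo is the hypothetical type above.
from typing import List


def has_larger_node(requested_cpu: float, requested_mem: float,
                    nodes: List[NodeInfo]) -> bool:
    # Finding 2: compare BOTH dimensions. A node that is larger only on
    # CPU may still be too small on memory, so skipping the adjustment
    # based on CPU alone can leave the pod stuck in Pending.
    return any(n.cpu_capacity > requested_cpu
               and n.mem_capacity > requested_mem for n in nodes)


def clamp_to_allocatable(requested: float, allocatable: float) -> float:
    # Finding 1: never let the adjusted request exceed allocatable.
    # Subtracting (overhead minus a safety margin) shrinks the deduction
    # and can leave the request above allocatable; min() cannot.
    return min(requested, allocatable)
```

Clamping with min() guarantees the adjusted request never exceeds allocatable, regardless of how any safety margin is chosen.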
