Cloud Management and AIOps


This is an IBM Automation portal for Cloud Management and AIOps products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Please use the following category to raise ideas for these offerings for all environments (traditional on-premises, containers, cloud):
  • Cloud Pak for Multicloud Management

  • Cloud Pak for Network Automation - including Orchestration and Performance Management

  • Cloud Pak for Watson AIOps - including Netcool Operations Management portfolio

  • Edge Application Manager

  • IBM Observability with Instana

  • IBM Turbonomic ARM

  • Instana

  • ITM-APM Products - including IBM Tivoli Monitoring v6 and Application Performance Monitoring v8

  • Workload Automation - including Workload Scheduler

  • Tivoli System Automation - including Tivoli System Automation Application Manager (SA AM) and Tivoli System Automation for Multiplatforms (SA MP)

  • Tivoli Application Dependency Discovery Manager - TADDM


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Status Needs more information
Workspace IBM Turbonomic ARM
Created by Guest
Created on Nov 18, 2022

Ability to rightsize Namespace quota

My customer's challenge is that their OpenShift clusters are fully committed on resource requests while actual resource consumption is very low. There are thousands of namespaces with request quotas in their environment. They would like Turbonomic to identify the most optimal namespace quota size without human intervention and make the change, in addition to what we already can do with workload specs.

Idea priority High
  • Admin
    Eva Tuczai
    Dec 19, 2022

    The challenge here is that our rightsizing optimization algorithm is not really what is wanted. Quotas are not related to any physical capacity and are usually leveraged only for allocation-based capacity planning. Because quotas have no relationship to utilization, they force app owners to set resource specs (limits/requests as applicable) on every workload that is deployed in order to comply with the quota. The real challenge is optimizing the workload sizes within the namespace/project; modifying the quota is only meaningful if the workload within it is optimized FIRST. Consider these scenarios with respect to quotas, and the answer may be a spreadsheet, not an optimization algorithm:

    1. NS with quota and no workload. Usage is 0. Resize to what? If someone is using quotas for allocation based capacity planning, then is this capacity being "held" for an app owner until they are ready? What is the definition of ready? How long do you hold onto this "capacity"?

    2. NS with quota, and workload sizes have "filled up" this quota. Usage is 100%. Should I resize up? What if this quota is full because the app owner deployed oversized workloads, and Turbo has a bunch of resize down actions? In this scenario, we have 2 problems:

      1. Resizing up just because the quota is 100% makes no sense unless there is more workload coming, which is usually a capacity planning task.

      2. In this scenario, to get the quota optimization the workloads must first resize down; the answer is not to give more quota.

    3. NS with quota, and some workload deployed. Usage is x%. What to resize to? How far back in history do you need to go to accommodate horizontal scaling of workloads?

      1. How long do I keep that "peak"? And should I resize down to that max, or leave "headroom", which really makes no sense?

      2. In this scenario, a horizontally scalable workload could be oversized, so every replica contributes to a bloated quota requirement.

    4. NS with CPU Limit Quota, and workloads need to resize up for CPU Throttling.

      1. We want to understand whether platform owners would really allow the quota to resize up and let a cluster be more overcommitted on CPU limits, knowing that it was needed to mitigate throttling and that TURBO would manage resources based on UTILIZATION.

      2. What would they need to do to convince themselves that the quota should go up, and with it the overcommit ratio (CPU limit quota : physical CPU capacity)? Or would they convince themselves that they need more nodes, which is waste?

    5. For those that use quotas for allocation-only based capacity planning in disguise, the next ask may be to manage quota-to-cluster overcommit: sum up all the quotas and maintain a ratio of quota to physical cluster resources (some low number like 2:1) so that quotas do not overcommit. Since quotas drive a behavior to oversize workloads, I do not want to propagate inefficient cluster capacity planning through an allocation-only model. The real problem is rightsizing the workloads within the namespace, and then assigning a reasonable quota.
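    The "spreadsheet, not an optimization algorithm" point in scenario 5 can be sketched in a few lines. This is a minimal illustration, not Turbonomic behavior; the namespace names, quota figures, and cluster capacity are all invented:

    ```python
    # Hypothetical figures: CPU-limit quota per namespace (cores) and the
    # physical CPU capacity of the cluster (cores). All values are invented
    # for illustration; real data would come from the cluster.
    namespace_cpu_limit_quotas = {
        "ns-a": 32.0,
        "ns-b": 48.0,
        "ns-c": 16.0,
    }
    physical_cpu_cores = 64.0

    # Scenario 5's check: sum all quotas and compare the quota-to-physical
    # ratio against a chosen ceiling (e.g. 2:1).
    total_quota = sum(namespace_cpu_limit_quotas.values())
    overcommit_ratio = total_quota / physical_cpu_cores
    max_allowed_ratio = 2.0

    print(f"total quota: {total_quota} cores")          # 96.0 cores
    print(f"overcommit ratio: {overcommit_ratio:.2f}")  # 1.50
    print("within ceiling" if overcommit_ratio <= max_allowed_ratio else "over ceiling")
    ```

    The point stands that this is pure allocation arithmetic: nothing in it looks at utilization, which is why it answers a capacity-planning question rather than an optimization one.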

    I wanted to share some thoughts and challenges I see when the real problem is the need to optimize the size of the workload running in the namespace, and to manage cluster capacity against a combination of allocation (to accommodate requests) and actual utilization (which will optimize both limits and requests).
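    The "workloads first, then quota" ordering argued above can be illustrated with a small sketch. This is not Turbonomic's algorithm; the workload names, request sizes, peak-usage figures, and 20% headroom margin are all invented. Each workload's request is resized toward its observed peak usage plus a margin, and only then is a namespace quota derived from the sum of the rightsized requests:

    ```python
    # Invented example data: per-workload CPU request (cores) and
    # observed peak CPU usage (cores) within one namespace.
    workloads = [
        {"name": "api",    "request": 4.0, "peak_usage": 1.2},
        {"name": "worker", "request": 8.0, "peak_usage": 2.5},
        {"name": "cache",  "request": 2.0, "peak_usage": 1.8},
    ]

    MARGIN = 1.2  # 20% headroom above observed peak; an arbitrary choice

    def rightsized_request(w):
        # Resize toward peak usage plus margin (illustrative only).
        return round(w["peak_usage"] * MARGIN, 2)

    # Step 1: rightsize the workloads.
    for w in workloads:
        w["new_request"] = rightsized_request(w)

    # Step 2: only now derive a reasonable namespace quota
    # from the sum of the rightsized requests.
    quota = sum(w["new_request"] for w in workloads)

    for w in workloads:
        print(f'{w["name"]}: {w["request"]} -> {w["new_request"]} cores')
    print(f"namespace CPU quota: {quota:.2f} cores")
    ```

    Reversing the two steps reproduces the problem described in scenario 2: summing the original oversized requests (14 cores here) would lock the bloat into the quota instead of removing it.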