Cloud Management and AIOps


This is an IBM Automation portal for Cloud Management, Technology Cost Management, Network Automation, and AIOps products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea that is visible to all (public) or only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).

Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.

Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


My subscriptions: Container Platforms (K8s & OpenShift), showing 6 ideas

Ability to set a Maximum Utilization and Request Constraint for nodes in OCP/K8s clusters

As a customer, I need to be able to set a maximum utilization and request threshold that nodes should not exceed, based on the sum of the requests of all pods running there. Use case: for a production environment, I want to be sure to have enough capac...
about 1 year ago in IBM Turbonomic ARM / Container Platforms (K8s & OpenShift) 2 Planned for future release

Please version KubeTurbo helm chart in Chart.yaml when the Helm chart changes

Whenever the Helm chart changes for KubeTurbo, please also update the version number in the associated Chart.yaml file. The Helm chart for deploying KubeTurbo is here: https://github.com/turbonomic/kubeturbo/tree/master/deploy/kubeturbo We are wor...
5 months ago in IBM Turbonomic ARM / Container Platforms (K8s & OpenShift) 2 Planned for future release
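As an illustration of the ask, a Chart.yaml whose version field is bumped alongside a chart change might look like the following sketch (all field values here are assumptions, not the chart's actual contents):

```yaml
# deploy/kubeturbo/Chart.yaml (illustrative sketch; values are assumptions)
apiVersion: v2
name: kubeturbo
description: A Helm chart for deploying KubeTurbo
version: 1.0.1        # bump this whenever the chart's templates or defaults change
appVersion: "8.12.0"  # hypothetical KubeTurbo release the chart targets
```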

Allow a user to set min/max number of nodes per any node pool in k8s

Long term: provide a machineset/node group/node pool configuration in which a user can define a min/max node count governing action generation and execution. A MachineSet (or any node group) could have a policy to control horizontal scaling ...
about 1 year ago in IBM Turbonomic ARM / Container Platforms (K8s & OpenShift) 0 Planned for future release
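For comparison, stock OpenShift already exposes a per-MachineSet min/max through the cluster autoscaler's MachineAutoscaler resource; a sketch of that analog follows (the MachineSet name and replica counts are placeholders):

```yaml
# Illustrative OpenShift MachineAutoscaler; names and counts are placeholders
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a         # placeholder name
  namespace: openshift-machine-api
spec:
  minReplicas: 2                  # lower bound for this node group
  maxReplicas: 8                  # upper bound for this node group
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a       # placeholder MachineSet name
```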

KubeTurbo helm chart - enable a toleration

Hello, I have an enhancement request for the KubeTurbo helm chart, which is here: https://github.com/turbonomic/kubeturbo/tree/master/deploy/kubeturbo-operator/helm-charts/kubeturbo The ask is to please add the ability to include a toleration in t...
9 months ago in IBM Turbonomic ARM / Container Platforms (K8s & OpenShift) 2 Planned for future release
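One possible shape for this enhancement, assuming a new tolerations key in the chart's values (the key name and template wiring are assumptions, since the chart does not support this yet):

```yaml
# Hypothetical addition to the KubeTurbo chart's values.yaml (key name is an assumption)
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"

# Corresponding hypothetical snippet in the deployment template:
# {{- with .Values.tolerations }}
# tolerations:
#   {{- toYaml . | nindent 8 }}
# {{- end }}
```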

Horizontal scaling for CPU

As a user, I’d like to leverage resource metrics such as CPU and memory that are captured by Kubeturbo for a Horizontal Scaling policy on a horizontally scalable k8s service. Note: today, the user does have a workaround for CPU. Use Prometu...
over 1 year ago in IBM Turbonomic ARM / Container Platforms (K8s & OpenShift) 0 Planned for future release
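The CPU side can be expressed with a standard Kubernetes HorizontalPodAutoscaler on the CPU resource metric; a minimal sketch (the target Deployment name, replica bounds, and utilization threshold are placeholders):

```yaml
# Minimal HPA on CPU utilization; names and thresholds are placeholders
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service              # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU exceeds 70%
```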

CPU pinning workloads

As a user, I’d like to express a tolerance for CPU and memory for whole-core increments, like what we did for throttling. I'd like to set equal CPU request and limit (for OCR). PM update to requirement: We will implement support for rightsizing CP...
over 1 year ago in IBM Turbonomic ARM / Container Platforms (K8s & OpenShift) 1 Planned for future release
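For context, in stock Kubernetes a container is pinned to whole cores when its pod is in the Guaranteed QoS class (requests equal to limits, integer CPU) and the kubelet runs the static CPU manager policy; a minimal pod sketch (image and names are placeholders):

```yaml
# Pod with equal integer CPU request and limit -> Guaranteed QoS;
# with the kubelet's static CPU manager policy, whole cores are pinned.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload                      # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest # placeholder image
      resources:
        requests:
          cpu: "2"        # integer CPU, equal to the limit
          memory: "4Gi"
        limits:
          cpu: "2"
          memory: "4Gi"
```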