Cloud Management and AIOps


This is an IBM Automation portal for Cloud Management and AIOps products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).

Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.

Please use the following category to raise ideas for these offerings for all environments (traditional on-premises, containers, cloud):
  • Cloud Pak for Multicloud Management

  • Cloud Pak for Network Automation - including Orchestration and Performance Management

  • Cloud Pak for Watson AIOps - including Netcool Operations Management portfolio

  • Edge Application Manager

  • IBM Observability with Instana

  • IBM Turbonomic ARM

  • Instana

  • ITM-APM Products - including IBM Tivoli Monitoring v6 and Application Performance Monitoring v8

  • Workload Automation - including Workload Scheduler

  • Tivoli System Automation - including Tivoli System Automation Application Manager (SA AM), Tivoli System Automation for Multiplatforms (SA MP)

  • Tivoli Application Dependency Discovery Manager - TADDM

Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

Status Not under consideration
Workspace IBM Turbonomic ARM
Categories Documentation
Created by Guest
Created on Dec 5, 2022

Turbonomic on EKS with Turbonomic provided ingress and TLS termination at Load Balancer - Documentation is incomplete/inaccurate

Scenario: A customer is deploying the Turbonomic platform on an EKS cluster with the Turbonomic-provided nginx ingress, and needs to terminate TLS at the ELB.
Issue with documentation: The documentation at "https://github.com/turbonomic/t8c-install/wiki/4.-Turbonomic-Multinode-Deployment-Steps" provisions a Classic Load Balancer, which is not recommended and has been deprecated by AWS. The provisioned load balancer also does not work: after TLS is terminated at the load balancer, it sends HTTP traffic to the HTTPS port on the nginx pod, which results in failure.
Workaround solution: I worked with Turbonomic support to work around this (https://support.turbonomic.com/hc/requests/122619). The following steps were taken to make this scenario work:
         1) The Turbonomic XL custom resource was updated to provision a Network Load Balancer with an 'http' backend protocol. The following annotations were added for the ingress:

                    global:
                      ingress:
                        annotations:
                          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
                          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ARN of certificate>
                          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
                          service.beta.kubernetes.io/aws-load-balancer-type: nlb
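
For context, here is a minimal sketch of where this fragment sits inside the full XL custom resource. The apiVersion, kind, and resource/namespace names follow the examples in the t8c-install repository and are assumptions to verify against your installed CRD:

```yaml
# Sketch only: apiVersion, kind, name, and namespace are taken from
# t8c-install examples and may differ in your environment.
apiVersion: charts.helm.k8s.io/v1
kind: Xl
metadata:
  name: xl-release
  namespace: turbonomic
spec:
  global:
    ingress:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ARN of certificate>
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
```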
        2) After deploying the custom resource, the 'nginx' deployment was modified to set an environment variable (DISABLE_HTTPS_REDIRECT=true). The Turbonomic operator tolerates this environment variable during reconciliation (this should also be indicated in the documentation).

                     spec:
                       containers:
                       - env:
                         - name: DISABLE_HTTPS_REDIRECT
                           value: "true"
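
The step above can also be applied as a strategic merge patch rather than editing the deployment by hand. This is a sketch; the deployment and container names ('nginx') and the patch file name are assumptions to check against your cluster before applying:

```yaml
# patch-nginx.yaml (hypothetical filename): sets DISABLE_HTTPS_REDIRECT on
# the nginx container. Apply with:
#   kubectl -n turbonomic patch deployment nginx --patch-file patch-nginx.yaml
# Verify the deployment and container names first (kubectl get deployments).
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: DISABLE_HTTPS_REDIRECT
          value: "true"
```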


        3) After the NLB was provisioned, the default forward action of its TLS listener was edited to send traffic to the HTTP port of the backend.

Documentation needs to be updated to get this scenario working.

Idea priority High
  • Admin
    Eva Tuczai
    Jan 5, 2023

    Dear submitter - thank you for your enhancement. The online documentation was updated to remove the examples and now calls out that the user must specifically understand the service annotation requirements of the cloud provider where the k8s cluster is running. If your issue is with the Turbonomic nginx parameters outside of the service annotations, then this is a defect, which should be handled with a support ticket. Thank you for your input!

  • Admin
    Eva Tuczai
    Dec 6, 2022

    Dear Guest - I am not sure what you mean by "tried all permutations and combinations". I looked at the support ticket. There were two issues:

    • AWS-specific knowledge of which service annotations are required to create the best AWS load balancer for the use case and to support an AWS-provided cert. This is outside of Turbo.

    • If the issue was ensuring that the nginx service has the https redirect disabled, and following the parameters required in the helm chart did not work, then this should be taken back to Turbo Engineering, because something was not working in the way the Custom Resource was configuring the nginx deployment. It would have been easily solved with an update to the helm chart and the CR/CRD.

  • Guest
    Dec 6, 2022

    Yes, tried all permutations and combinations. It did not work. Please review the history in the support ticket: https://support.turbonomic.com/hc/requests/122619

  • Admin
    Eva Tuczai
    Dec 5, 2022

    Regarding setting an environment parameter for the nginx container component via the deployment, this is already supported:
    https://github.com/turbonomic/t8c-install/blob/10eebb4664b92624b2bcace66a50a737e6815cb9/operator/helm-charts/base/nginx/templates/nginx.yaml#L70

    Did you try setting the following in the CR?
    spec:
      nginx:
        nginxIsPrimaryIngress: true
        httpsRedirect: false

    But this helm chart says that both nginxIsPrimaryIngress and httpsRedirect are required. Maybe the enhancement is that the user should not need to specify the nginxIsPrimaryIngress parameter, even though the helm chart currently requires both.

  • Admin
    Eva Tuczai
    Dec 5, 2022

    We will not provide examples, and I will remove them from the documentation.

    The expectation is that a user understands how load balancers work in their Cloud Provider, AND that our nginx component is only a SERVICE. So the user must understand all of the service annotation options needed to work with their cloud provider's load balancers.