This is an IBM Automation portal for Cloud Management, Technology Cost Management, Network Automation and AIOps products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for,
Post an idea.
Get feedback from the IBM team and other customers to refine your idea.
Follow the idea through the IBM Ideas process.
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
Scenario: A customer is deploying the Turbonomic platform on an EKS cluster with the Turbonomic-provided nginx ingress, and TLS needs to be terminated at the ELB.
Issue with documentation: The documentation at https://github.com/turbonomic/t8c-install/wiki/4.-Turbonomic-Multinode-Deployment-Steps provisions a Classic Load Balancer, which AWS has deprecated and no longer recommends. In addition, the provisioned load balancer does not work: after TLS termination at the LB, it sends plain HTTP traffic to the HTTPS port on the nginx pod, which causes the connection to fail.
Workaround solution: I worked with Turbonomic support to work around this (https://support.turbonomic.com/hc/requests/122619). The following steps were taken to make this scenario work:
1) The Turbonomic XL custom resource was updated to provision a Network Load Balancer with an 'http' backend protocol. The following annotations were added for ingress (a fuller CR sketch follows this list):
   global:
     ingress:
       annotations:
         service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
         service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
         service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ARN of certificate>
         service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
         service.beta.kubernetes.io/aws-load-balancer-type: nlb
2) After deploying the custom resource, the 'nginx' deployment was modified to set an environment variable (DISABLE_HTTPS_REDIRECT=true); a patch-file sketch follows this list. The Turbonomic operator tolerates this environment variable during reconciliation (this should also be indicated in the documentation).
   spec:
     containers:
     - env:
       - name: DISABLE_HTTPS_REDIRECT
         value: "true"
3) After the NLB was provisioned, the default forward action of its TLS listener was edited to send traffic to the HTTP port of the backend.
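For reference, here is a minimal sketch of where the annotations from step 1 could sit in the full custom resource. The apiVersion, kind, resource name, and namespace shown are assumptions based on a typical t8c-install deployment and should be adjusted to match the actual environment.
   apiVersion: charts.helm.k8s.io/v1alpha1   # assumed group/version of the t8c operator CRD
   kind: Xl
   metadata:
     name: xl-release                        # assumed CR name
     namespace: turbonomic                   # assumed namespace
   spec:
     global:
       ingress:
         annotations:
           service.beta.kubernetes.io/aws-load-balancer-type: nlb
           service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
           service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
           service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ARN of certificate>
           service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"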
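Similarly, a sketch of a strategic-merge patch file for step 2 that adds the environment variable to the nginx deployment; the container name, namespace, and file name are assumptions, so verify them against the actual deployment before applying with kubectl patch --patch-file.
   # patch-nginx-env.yaml (hypothetical file name)
   # apply with: kubectl patch deployment nginx -n turbonomic --patch-file patch-nginx-env.yaml
   spec:
     template:
       spec:
         containers:
         - name: nginx                       # assumed container name; check the deployment spec
           env:
           - name: DISABLE_HTTPS_REDIRECT
             value: "true"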
The documentation needs to be updated to make this scenario work.
Idea priority: High
Dear submitter - thank you for your enhancement. The online documentation was updated to remove the examples and now calls out that the user must understand the service annotation requirements of the cloud provider where the k8s cluster is running. If your issue is with the Turbonomic nginx parameters outside of the service annotations, then this is a defect, which should be handled with a support ticket. Thank you for your input!
Dear Guest - I am not sure what you mean by "tried all permutations and combinations". I looked at the support ticket. There were 2 issues:
1) AWS-specific knowledge of which service annotations are required to create the best AWS LB for the use case and to support an AWS-provided cert. This is outside of Turbo.
2) If the issue was ensuring that the nginx service has HTTPS redirect disabled, and following the parameters required in the helm chart did not work, then this should be taken back to Turbo Engineering, because something was not working in the way the Custom Resource was configuring the nginx deployment. It would have easily been solved with an update to the helm chart and the CR/CRD.
Yes, tried all permutations and combinations. It did not work. Please review the history in the support ticket: https://support.turbonomic.com/hc/requests/122619
Regarding setting an environment variable for the nginx container component via the deployment, this is already supported:
https://github.com/turbonomic/t8c-install/blob/10eebb4664b92624b2bcace66a50a737e6815cb9/operator/helm-charts/base/nginx/templates/nginx.yaml#L70
Did you try setting the following in the CR?
   spec:
     nginx:
       nginxIsPrimaryIngress: true
       httpsRedirect: false
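For context, a fuller sketch of where these parameters would sit in the custom resource; the apiVersion, kind, name, and namespace here are assumptions and may differ in your deployment.
   apiVersion: charts.helm.k8s.io/v1alpha1   # assumed group/version
   kind: Xl
   metadata:
     name: xl-release                        # assumed CR name
     namespace: turbonomic                   # assumed namespace
   spec:
     nginx:
       nginxIsPrimaryIngress: true
       httpsRedirect: false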
But this helm chart says that both nginxIsPrimaryIngress and httpsRedirect are required. Maybe the enhancement is that the user should not need to specify nginxIsPrimaryIngress, since the helm chart currently indicates that both parameters are required.
We will not provide examples, and I will remove them from the documentation.
The expectation is that a user understands how LBs work in their Cloud Provider, AND that our nginx component is only a SERVICE. So the user must understand all of the service annotation options needed to work with their cloud provider's load balancers.