FYI: I removed the customer name, as this idea was initially submitted as world readable (visible to all).
Hi Kurtulus,
Can you use S3-compatible object storage, such as MinIO (https://www.ibm.com/docs/en/instana-observability/current?topic=openshift-api-reference#s3config), for rawSpans? This is what SaaS and other large customers use to scale.
After discussing with the team, sharding into two or more PVCs is not feasible at the application level: the performance gain you seek would most likely be cancelled out by the added complexity in the application. I will therefore close this AHA idea and suggest you look at object storage instead, which should not have such scaling limits (a configuration sketch follows below).
Best,
Hubert
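
For reference, a rough sketch of what such a configuration could look like in the operator's Core spec. The field names are assumed from the linked s3Config API reference and the endpoint/bucket values are placeholders, so verify everything against the current documentation:

  apiVersion: instana.io/v1
  kind: Core
  metadata:
    name: instana-core
    namespace: instana-core
  spec:
    storageConfigs:
      rawSpans:
        s3Config:
          # Any S3-compatible endpoint works here, e.g. a MinIO
          # deployment (address below is a placeholder)
          endpoint: https://minio.example.internal:9000
          region: us-east-1
          bucket: instana-raw-spans
          prefix: raw-spans
          # Note: access credentials are supplied separately (via the
          # operator's config secret), not inline in the Core spec.

With this approach the raw spans live in a bucket rather than on a file or block volume, so the single-PVC size ceiling no longer applies.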
Hi Hubert,
I don't know the exact details, but they told us, for example, that the metadata of CephFS storage grows as the PVC size increases; as a result, metadata lookup time also increases, which leads to performance bottlenecks. Smaller PVCs, such as 1TB x 8 or 2TB x 4, perform better instead.
Thank you.
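
To make that sizing concrete: in plain Kubernetes, a "2TB x 4" layout is simply four identical claims against the CephFS storage class, something like the following (names and storage class are hypothetical examples):

  # One of four identical claims (raw-spans-shard-0 through -3)
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: raw-spans-shard-0
  spec:
    accessModes:
      - ReadWriteMany              # CephFS supports RWX volumes
    storageClassName: ocs-storagecluster-cephfs
    resources:
      requests:
        storage: 2Ti

The point of the idea is that nothing equivalent can currently be expressed through the Instana operator, which only accepts a single volume size.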
Hi Hubert,
The recommendation comes from the ISBANK storage team. We want more flexibility on the storage side for Instana: we can't request 2 x 4TB disks from the YAML; for now we can only commit storage: 3Ti in the operator YAML. What we need is to declare multiple PVCs in the operator's YAML file, because we now have 1000+ agents and we really need to expand the raw-spans PVC on short notice (see the sketch below).
Regards,
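
For context, the contrast between what the operator accepts today and what this idea asks for, sketched against the Core spec (the pvcConfig field follows the self-hosted docs, but verify the exact schema; the multi-PVC form is hypothetical, since it is precisely the requested feature):

  # Supported today: one claim, one size
  spec:
    storageConfigs:
      rawSpans:
        pvcConfig:
          storageClassName: ocs-storagecluster-cephfs
          resources:
            requests:
              storage: 3Ti

  # Requested (hypothetical, not supported today): several smaller
  # claims declared directly in the operator YAML, e.g. 2Ti x 4
  # spec:
  #   storageConfigs:
  #     rawSpans:
  #       pvcConfigs:
  #         - resources: { requests: { storage: 2Ti } }
  #         - resources: { requests: { storage: 2Ti } }
  #         - resources: { requests: { storage: 2Ti } }
  #         - resources: { requests: { storage: 2Ti } }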
Hi Doruk,
Thanks for your idea for Instana Self-Hosted. Can you give more context on how 2 x 4TB PVCs are more performant than a single 8TB PVC for your customer? For cloud providers it is usually the opposite: you need to request (and pay for) larger PVCs to get more performance.