
Status: Future consideration
Workspace: Instana
Created by: Guest
Created on: Oct 5, 2022

Dashboards with large timespans reduce peaks and are less useful

The "problem" I have is if you take a look as an example to the timings of the garbage-collection graph.
I have here an app who does every 5 seconds a 300ms gc, if you are in the 10 minutes frame this is visible. I can go back 24 hours in this granularity. But if it goes out of this 24 hours frame, the gc time reduced to 22ms(resolution 1 minute if it sees correct).
I know where it comes from, but this makes the graph useless, as it told me the gc time is 22 ms and not 300ms.

Something similar happens if I switch from the 10-minute frame to the 30-minute frame. The max time drops below 150 ms, which can also lead to wrong decisions. A peak of 16 seconds in the 10-minute (1 s rollup) view shows up as only 3.2 seconds in the 30-minute (5 s rollup) frame. I observed this in the live view.
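Presumably the coarser rollup averages the spike over its bucket, which would explain the numbers above: 16 s spread over a 5 s bucket gives 16 / 5 = 3.2 s. A minimal sketch of that effect (my own assumption about how the aggregation behaves, not Instana's actual implementation):

```python
# Sketch (assumption): rollup that aggregates raw 1 s samples into coarser
# buckets using the mean. A single 16 s spike inside a 5 s bucket is then
# reported as 16 / 5 = 3.2 s, exactly the flattening described above.

def rollup(samples, bucket_size, agg):
    """Downsample a list of per-second values into buckets of bucket_size seconds."""
    return [
        agg(samples[i:i + bucket_size])
        for i in range(0, len(samples), bucket_size)
    ]

def mean(values):
    return sum(values) / len(values)

# 60 seconds of data: one 16 s peak, everything else near zero.
raw = [0.0] * 60
raw[10] = 16.0

print(max(rollup(raw, 5, mean)))  # 3.2  -> peak flattened by the 5 s mean rollup
print(max(rollup(raw, 5, max)))   # 16.0 -> peak preserved if the max is kept as well
```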

If something happens within the last 24 hours, I can dive deep enough to identify the exact timings. But once the data is older than that, I can't see the real values.

Normally I look at the last 24 hours, and if something special shows up, I take a deeper look. But if the graph only shows me 22 ms, I won't. This applies more or less to every graph, and the peaks become even harder to spot the larger the delta between the max and min values.

A possible solution: store the max value for every rollup. Then, for the 1-minute resolution, we would have the mean value of 22 ms but also the peak of 300 ms, and so on, together with the possibility to visualize both in the graphs. A rough sketch of what I mean follows below.
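Here is a minimal sketch of that idea (my own illustration with hypothetical field names, not Instana's storage format): each rollup bucket keeps the max next to the mean, so a coarse view can plot the mean line and still overlay the true peaks.

```python
# Sketch (assumption): a hypothetical rollup record that keeps the max
# alongside the mean, so the 1-minute view can still show the 300 ms peak.

from dataclasses import dataclass

@dataclass
class RollupBucket:
    start_ts: int      # bucket start time (epoch seconds)
    mean_ms: float     # mean GC time within the bucket
    max_ms: float      # peak GC time within the bucket (e.g. the 300 ms spike)

def aggregate(start_ts, raw_samples_ms):
    """Collapse raw per-second GC times into one bucket, keeping mean and max."""
    return RollupBucket(
        start_ts=start_ts,
        mean_ms=sum(raw_samples_ms) / len(raw_samples_ms),
        max_ms=max(raw_samples_ms),
    )

# One minute with a 300 ms GC every 5 seconds and ~0 ms otherwise:
minute = [300.0 if s % 5 == 0 else 0.0 for s in range(60)]
bucket = aggregate(0, minute)
print(bucket.mean_ms, bucket.max_ms)  # 60.0 300.0 -> the mean hides the peak, the max keeps it
```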

I hope this is not too confusing. I think both of the points above contribute to the problem.

Idea priority: Medium