Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Start by posting ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help to refine an idea, so they may ask for more information or feedback. The offering manager team will then decide if they can begin working on your idea. If they can start during the next development cycle, they will put the idea on the priority list. Each team at IBM works on a different schedule: some ideas can be implemented right away, while others may be scheduled for a later cycle.
Receive notifications on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas which cannot be implemented in a reasonable time.
Please use the following categories to raise ideas for these offerings for all environments (traditional on-premises, containers, cloud):
Cloud Pak for Multicloud Management
Cloud Pak for Network Automation - incl Orchestration and Performance Management
Cloud Pak for Watson AIOps - incl Netcool Operations Management portfolio
Edge Application Manager
IBM Observability with Instana
ITM-APM Products - incl IBM Tivoli Monitoring v6 and Application Performance Monitoring v8
Workload Automation - incl Workload Scheduler
Tivoli System Automation - incl Tivoli System Automation Application Manager (SA AM) and Tivoli System Automation for Multiplatforms (SA MP)
Tivoli Application Dependency Discovery Manager - TADDM
If you encounter any issues accessing the Ideas portals, please send an email describing the issue to ideasibm@us.ibm.com for resolution.
For more information about IBM's Ideas program, visit ibm.com/ideas.
Due to processing by IBM, this request was reassigned to have the following updated attributes:
Brand - Cloud
Product family - Operations Management
Product - Tivoli Application Dependency Discovery Mgmt (TADDM)
For record keeping, the previous attributes were:
Brand - WebSphere
Product family - ITSM Operations
Product - Tivoli Application Dependency Discovery Mgmt (TADDM)
#publish
Please confirm our understanding of the following "Discovery Summary" details:
1. Number of Discovery Servers Used: We understand this to be the count of distinct TADDM discovery servers used for the selected discoveries.
2. Number of Discovery Sessions Started: This is the total count of session sensors triggered across all the selected discoveries.
3. Number of Devices Discovered: By "devices", do you mean app servers, storage devices, database servers, or component types? If so, this count would give the total number of component types across all the selected discoveries.
4. Number of Profiles Used: This is the total count of discovery profiles used in the selected discoveries. Our assumption is that only "distinct" profiles are counted here; please confirm.
5. Number of Sensors Used: This is the count of all the sensors triggered in the selected discoveries. Our assumption is that only "distinct" sensors are counted here; please confirm.
6. Number of Sensors Discoveries Started: What is the difference between this metric and the metric in point 5 (Number of Sensors Used)? Is the only difference that this metric counts every sensor run, i.e. repeated sensors are also counted? (A short counting sketch of this reading follows the list.)
7. Number of Sensors Discoveries Succeed: This is the count of all the sensors that executed successfully across all the selected discoveries.
8. Number of Sensors Discoveries Fail: This is the count of all the sensors that were not successful, i.e. that reported errors (warnings are not included in this count).
9. Number of Sensors Discoveries Warnings: This is the count of all the sensors that reported warnings.
10. Number of Events: What do you mean by "events" here? Does it mean the different error types, such as Timeout Issue, Connect Issue, or Storage Issue?
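To make our reading of points 5 and 6 concrete, below is a minimal counting sketch. The record layout, sensor names, and status values are made up for illustration and are not actual TADDM output; the only point is the distinction between distinct sensors and total sensor runs, plus the success/fail/warning split described in points 7 to 9.

```python
# Minimal sketch (hypothetical data, not actual TADDM API output) of how we read
# "Number of Sensors Used" (distinct sensors) vs. "Number of Sensors Discoveries
# Started" (every sensor run, repeats included), plus the success/fail/warning counts.

from collections import Counter

# Each record represents one sensor run from the selected discoveries;
# field names and values here are illustrative only.
sensor_runs = [
    {"sensor": "PingSensor",      "status": "success"},
    {"sensor": "PingSensor",      "status": "success"},
    {"sensor": "HostSensor",      "status": "warning"},
    {"sensor": "Db2Sensor",       "status": "error"},
    {"sensor": "WebSphereSensor", "status": "success"},
]

sensors_used = len({run["sensor"] for run in sensor_runs})      # distinct sensors -> 4
sensor_discoveries_started = len(sensor_runs)                   # total runs, repeats counted -> 5

by_status = Counter(run["status"] for run in sensor_runs)
succeeded = by_status["success"]   # 3
failed = by_status["error"]        # 1 (warnings are not counted as failures)
warnings = by_status["warning"]    # 1

print(sensors_used, sensor_discoveries_started, succeeded, failed, warnings)
```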
Attachment (Description): Feedback
Adrian, so that we can get better clarity on your needs, can you please tell us what types of errors you look for, and/or how an error is recognized? Our initial thought is that your needs could be addressed by a diagnostic utility that traverses the data looking for key indicators.
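Purely as a rough illustration of that diagnostic-utility idea, and not a description of any existing TADDM tooling, a sketch along these lines could scan a discovery log and tally key error indicators. The log path and the indicator patterns below are hypothetical placeholders.

```python
# Rough sketch only: scan a discovery log for key error indicators and tally them.
# The log path and the indicator strings are hypothetical placeholders, not TADDM defaults.

import re
from collections import Counter

INDICATORS = {
    "timeout": re.compile(r"timed? ?out", re.IGNORECASE),
    "connection": re.compile(r"connect(ion)? (refused|failed)", re.IGNORECASE),
    "storage": re.compile(r"storage|disk full", re.IGNORECASE),
}

def summarize_errors(log_path: str) -> Counter:
    """Count log lines matching each error indicator."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for name, pattern in INDICATORS.items():
                if pattern.search(line):
                    counts[name] += 1
    return counts

if __name__ == "__main__":
    # Placeholder path; a real run would point at the relevant discovery log.
    print(summarize_errors("discovery.log"))
```

The indicator list would of course need to reflect the actual error types you look for, which is exactly the feedback requested above.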
In addition, I have split the RFE into 3 new RFEs, dividing your requirement into individual requirements to make reviewing and tracking them easier. All 4 RFEs are associated with the same dW RFE ID, but you may not see the new internal numbers reflected. As a note, the new RFE numbers, in addition to this one and based upon your original order, are as follows: (2) 87690, (3) 87691, and (4) 87692.
Attachment (Description): Screenshot with enhancements applied