Cloud Management and AIOps



Status: Future consideration
Created by: Guest
Created on: Feb 11, 2025

Bring ITNM Topology Collections to AIOps

We are currently not sending ITNM topology collections to AIOps, which hinders some correlations because the necessary collections are not available in AIOps.
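
As a rough illustration of what forwarding collections could look like, here is a minimal sketch that converts a hypothetical CSV export of ITNM collection membership into a generic JSON grouping document for a downstream AIOps ingestion job. The file name, CSV columns, and output layout are all invented for illustration; this is not an actual ITNM export format or AIOps observer payload.

# Minimal sketch, assuming a hypothetical CSV export of ITNM collection
# membership (columns: collection,device); not an actual ITNM file format.
# The output JSON layout is likewise a stand-in, not a real AIOps observer
# or topology API payload.
import csv
import json
from collections import defaultdict

def read_collections(path):
    """Read collection -> [device, ...] from the hypothetical CSV export."""
    groups = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row["collection"]].append(row["device"])
    return groups

def to_group_documents(groups):
    """Shape each collection as a generic group document for ingestion."""
    return [
        {"groupName": name, "groupType": "itnmCollection", "members": sorted(devices)}
        for name, devices in sorted(groups.items())
    ]

if __name__ == "__main__":
    groups = read_collections("itnm_collections.csv")  # hypothetical export path
    print(json.dumps(to_group_documents(groups), indent=2))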

Idea priority: High
  • Guest | Mar 11, 2025

    Support Case - TS018312193 is one example.


    Here is a set of use cases (a written ask from another client who is trying to replace Smarts) that can be fulfilled by bringing collections from an existing ITNM instance to AIOps:

    • BGP peering goes down between devices A and B, which are physically connected via two ports. In this scenario, if both links fail for some reason X, BGP will also go down.

      • Expectation: In this fault scenario, devices A and B report the interface and BGP failures to the fault management tool via poll/trap. The expectation is that correlation is done at the fault management layer and only one event is forwarded to the fabric (see the first sketch after this list).

    • A device that has IBGP peering to multiple devices over transit infrastructure goes down. In this scenario, when the device fails, it takes down physical interfaces on the peer devices, and the IBGP peers also report the BGP peering failure to the fault management system.

      • Expectation: The fault management system should correlate the device-down event, the BGP failures reported by the IBGP peers, and the interface-down events reported by the connected devices into a single device-down event and forward it to the fabric.

    • In our infrastructure, multiple offices are connected via an MPLS provider, and each office has BGP peering to two DCs with a GRE tunnel overlay. When office ‘X’ loses its MPLS connection to the provider, we expect events for BGP going down between the CE and PE of that office, the GRE tunnel line protocol going down between the office and the remote DCs, and BGP peering going down between the remote office and the DCs. Likewise, when any DC’s MPLS circuit goes down, we expect the same events, but from multiple offices.

      • Expectation: All of the above events should be correlated into a single circuit-down event and forwarded to the fabric (see the second sketch after this list).

    • Multiple overlay L2 VPN pseudowires running over underlay paths: we have many environments where we run multiple L2 VPN pseudowires over an underlay IP/MPLS technology.

      • Scenario 1: Multiple L2 VPN pseudowires run over redundant underlay paths. When the primary underlay path fails, traffic engineering kicks in and fails the L2 VPN over to the secondary underlay path. In this situation, fault management sees events for the primary underlay circuit failure plus the failover of the L2 VPN pseudowires to the redundant underlay paths. We expect a single event to be sent to the fabric for the primary underlay being down, with the tunnel failovers correlated to that main event.

      • Scenario 2: Multiple L2 VPN pseudowires run over a single underlay path. When the underlay path fails, all L2 VPN pseudowires also go down. We expect IBM fault management to correlate the L2 VPN down events to the underlay going down (the third sketch after this list illustrates this overlay-to-underlay pattern).

    • Wireless controller: We have two wireless controller clusters per region, and each cluster consists of two controllers. All remote office access points are configured with the two controller cluster IPs, are always registered with one of the clusters, and fail over to the other in case of issues.

      • Scenario 1: One of the controllers in a cluster fails. This triggers events for the switch port connection to the controller going down, thousands of APs failing over to the redundant controller in the cluster, and the wireless controller going down. We expect IBM fault management to correlate these into one event.

      • Scenario 2: Multiple APs from a specific office fail over to the redundant cluster. In this scenario, we expect AP up events from the new controller and AP down events from the old controller; the IBM fault management system should correlate these into one event.
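
    The following sketches are minimal, hypothetical illustrations (in Python) of the correlation patterns asked for above; the event fields, resource names, topology structures, and helper functions are invented for illustration and are not ITNM or AIOps APIs. This first sketch shows the adjacency-based pattern behind the first two use cases: symptom events are grouped by the pair of physically connected devices they involve, and each group is collapsed into one parent event (a device-down root would use the same adjacency data to attach neighbour symptoms).

    # Minimal sketch (hypothetical data model): roll up interface-down and
    # BGP-down symptoms onto a single parent event using physical adjacency.
    from collections import defaultdict

    # Hypothetical topology: device -> set of physically connected neighbours.
    ADJACENCY = {
        "A": {"B"},
        "B": {"A"},
    }

    def correlate_adjacent(events):
        """Group events by the unordered pair of adjacent devices they involve,
        then emit one synthetic parent event per group."""
        groups = defaultdict(list)
        for ev in events:
            dev, peer = ev["device"], ev.get("peer")
            if peer and peer in ADJACENCY.get(dev, set()):
                groups[frozenset((dev, peer))].append(ev)
        parents = []
        for key, members in groups.items():
            a, b = sorted(key)
            parents.append({
                "summary": f"Connectivity between {a} and {b} down",
                "severity": max(e["severity"] for e in members),
                "children": [e["summary"] for e in members],
            })
        return parents

    events = [  # hypothetical raw events from polls/traps
        {"device": "A", "peer": "B", "summary": "Link eth1 down", "severity": 4},
        {"device": "A", "peer": "B", "summary": "Link eth2 down", "severity": 4},
        {"device": "A", "peer": "B", "summary": "BGP peer down",  "severity": 5},
        {"device": "B", "peer": "A", "summary": "BGP peer down",  "severity": 5},
    ]
    for parent in correlate_adjacent(events):
        print(parent["summary"], "<-", parent["children"])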
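
    The second sketch shows the collection-based pattern behind the office MPLS use case: each symptom is mapped to the ITNM-style collection its resource belongs to (here a hypothetical ‘OfficeX-WAN’ collection), and every collection’s symptoms collapse into a single circuit-down event.

    # Minimal sketch (hypothetical data): collapse all symptoms that belong to
    # the same "office circuit" collection into one circuit-down event.
    from collections import defaultdict

    # Hypothetical collection membership, e.g. synced from ITNM collections.
    COLLECTION_OF = {
        "OfficeX-CE:bgp-to-PE":  "OfficeX-WAN",
        "OfficeX-gre-to-DC1":    "OfficeX-WAN",
        "OfficeX-gre-to-DC2":    "OfficeX-WAN",
        "OfficeX-CE:bgp-to-DC1": "OfficeX-WAN",
    }

    def correlate_by_collection(events):
        """Group symptoms by their resource's collection; one parent per group."""
        groups = defaultdict(list)
        for ev in events:
            coll = COLLECTION_OF.get(ev["resource"])
            if coll:
                groups[coll].append(ev)
            # events with no collection would pass through uncorrelated (not shown)
        return [
            {"summary": f"{coll} circuit down", "children": [e["summary"] for e in evs]}
            for coll, evs in groups.items()
        ]

    events = [  # hypothetical symptoms when Office X loses its MPLS circuit
        {"resource": "OfficeX-CE:bgp-to-PE",  "summary": "BGP CE-PE down"},
        {"resource": "OfficeX-gre-to-DC1",    "summary": "GRE tunnel to DC1 down"},
        {"resource": "OfficeX-gre-to-DC2",    "summary": "GRE tunnel to DC2 down"},
        {"resource": "OfficeX-CE:bgp-to-DC1", "summary": "BGP to DC1 down"},
    ]
    for parent in correlate_by_collection(events):
        print(parent["summary"], "<-", parent["children"])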
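
    The third sketch shows the layered pattern behind the L2 VPN and wireless-controller use cases: a hypothetical dependency map ties each overlay resource (pseudowire, access point) to the underlay resource or controller cluster it depends on, and overlay symptoms are attached as children of that root event.

    # Minimal sketch (hypothetical data): attach overlay symptoms (pseudowires,
    # access points) to the root event of the resource they depend on.

    # Hypothetical dependency map: overlay resource -> underlay/root resource.
    DEPENDS_ON = {
        "pw-101": "underlay-path-1",
        "pw-102": "underlay-path-1",
        "ap-office7-01": "wlc-cluster-east",
        "ap-office7-02": "wlc-cluster-east",
    }

    def correlate_layered(events):
        """Pick root-cause events (resources that others depend on) and attach
        dependent overlay symptoms to them; return (parents, uncorrelated)."""
        roots = {e["resource"]: {"summary": e["summary"], "children": []}
                 for e in events if e["resource"] in DEPENDS_ON.values()}
        leftovers = []
        for e in events:
            root = DEPENDS_ON.get(e["resource"])
            if root in roots:
                roots[root]["children"].append(e["summary"])
            elif e["resource"] not in roots:
                leftovers.append(e)  # uncorrelated events pass through
        return list(roots.values()), leftovers

    events = [  # hypothetical: underlay path fails, pseudowires follow
        {"resource": "underlay-path-1", "summary": "Underlay path 1 down"},
        {"resource": "pw-101", "summary": "L2VPN pseudowire pw-101 down"},
        {"resource": "pw-102", "summary": "L2VPN pseudowire pw-102 down"},
    ]
    parents, rest = correlate_layered(events)
    for p in parents:
        print(p["summary"], "<-", p["children"])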