This is an IBM Automation portal for Cloud Management, Technology Cost Management, Network Automation and AIOps products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
1. Start by searching and reviewing existing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on those that matter to you. If you can't find what you are looking for,
2. Post an idea.
3. Get feedback from the IBM team and other customers to refine your idea.
4. Follow the idea through the IBM Ideas process.
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
Hello, I am part of the group that requested this feature. Let me explain the request.
First, let me say that this behavior is not specific to MapR, but we are seeing it with MapR. MinIO is another example of software that works this way.
MapR uses the disks in the cluster nodes directly, as block devices. There is no mounted file system on the disks that MapR uses.
In a typical layout, sda is the system disk, while sd[b-e] have no file system; these are the disks used by MapR. The list of disks used by MapR is kept in /opt/mapr/conf/disktab.
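As a sketch of how such disks could be detected: when lsblk is asked for device name and file-system type only, a device with an empty FSTYPE column carries no file-system signature and is a candidate for MapR's raw-disk use. The pipeline below runs on fabricated sample input (the device names are illustrative, chosen to match the layout described above); on a real node the input would come from "lsblk -dno NAME,FSTYPE".

```shell
# Print block devices that have no file-system signature (illustrative).
# On a real node, replace the printf with: lsblk -dno NAME,FSTYPE
printf 'sda ext4\nsdb\nsdc\nsdd\nsde\n' |
awk 'NF == 1 {print "/dev/" $1}'   # only NAME present -> empty FSTYPE -> raw disk
```

A monitoring agent could cross-check this set against /opt/mapr/conf/disktab to confirm which raw disks belong to MapR.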
Regarding the statistics that are important to us, the tool sar is a good starting point.
The key statistics are utilization (in percent) and await (the average time for I/O requests issued to the device to be served).
The goal is to find highly utilized disks or disks with a high latency within the cluster.
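That goal can be sketched as a threshold filter over per-device statistics such as those reported by "iostat -dx" or "sar -d". The sample input and the thresholds (await above 20 ms, utilization above 80 %) below are fabricated for illustration, not taken from a real cluster:

```shell
# Flag devices with high latency or high utilization (illustrative).
# Columns: device await(ms) %util -- on a live node these values would be
# parsed from `iostat -dx` or `sar -d` output instead of the sample below.
printf 'sdb 35.2 91.0\nsdc 4.1 12.3\nsdd 28.7 45.0\n' |
awk '$2 > 20 || $3 > 80 {print $1 " await=" $2 "ms util=" $3 "%"}'
```

Run periodically across the cluster, a filter like this would surface exactly the "hot" disks the request is about.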
It may or may not make sense to capture the process that generates the I/O, as this should always be the executable /opt/mapr/server/mfs.
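If per-process attribution is wanted, "pidstat -d" (from the sysstat package) reports per-process disk throughput, and filtering its output by command name would confirm that mfs is the source. The sketch below runs on fabricated sample input (PID and rates are made up); on a real node the input would come from "pidstat -d 1".

```shell
# Attribute disk I/O to a process by command name (illustrative).
# Columns: PID kB_rd/s kB_wr/s Command -- sample values are made up;
# real input would come from: pidstat -d 1
printf '1234 5120.0 2048.0 mfs\n5678 10.0 0.5 java\n' |
awk '$4 == "mfs" {print "PID " $1 ": rd=" $2 " kB/s wr=" $3 " kB/s"}'
```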
Please let me know if there is any further information that is required to implement I/O monitoring of non-mounted disks.
Kind regards,
Markus
This feature has been planned for 4Q 2023.