
Aggregate offline with disk missing in single node Cloud Volumes ONTAP in AWS

Visibility: Public
Category: cloud-volumes-ontap-cvo
Specialty: cloud

Applies to

  • Cloud Volumes ONTAP (CVO)
  • AWS

Issue

  • An aggregate shows as failed in an AWS Cloud Volumes ONTAP environment:
 

Your_AWS_node::> aggr show


Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_Your_AWS_node_01
           124.0GB    6.01GB   95% online       1 Your_AWS_node-01 raid0,
                                                                   normal
aggr1           0B        0B    0% failed       0 Your_AWS_node-01 raid0,
                                                                   partial
2 entries were displayed.

 
  • The following messages are seen in EMS:
 
12/9/2021 15:48:15  Your_AWS_node-01 ALERT         sk.panic: Panic String: aggr aggr1: raid volfsm, fatal disk error in RAID group with no parity disk..
Raid type - raid0
Group name plex0/rg0 state NORMAL. 1 disk failed in the group.
Disk 0b.6 S/N [00000000V-xNBTca/C86] UID [00000000V-xNBTca/C86] error: no valid path to disk. in SK process config_thread on
release 9.10.1RC1 (C)
 
 
12/9/2021 15:48:15  Your_AWS_node-01 ERROR         scsi.cmd.adapterHardwareErrorEMSOnly: Disk device 0b.6L0: Adapter detected hardware error: HA status 0x6: cdb 0x2a:00034c30:0078. Disk 0b.6 S/N [00000000V-xNBTca/C86] UID [00000000V-xNBTca/C86] Target Address [nvme (null)6]
12/9/2021 15:48:15  Your_AWS_node-01 ERROR         scsi.cmd.abortedByHost: Disk device 0b.6L0: Command aborted by host adapter: HA status 0x4: cdb 0x2a:00034c30:0078. Disk 0b.6 S/N [00000000V-xNBTca/C86] UID [00000000V-xNBTca/C86] Target Address [nvme (null)6]
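As a quick illustration (not part of the original KB procedure), the failing disk name and serial number can be pulled out of EMS lines like the ones above with a short Python sketch. The log line below is abbreviated from this example, and the pattern assumes the "Disk <name> S/N [<serial>]" wording shown in these messages:

```python
import re

# Abbreviated EMS line copied from the example above.
ems_lines = [
    "12/9/2021 15:48:15  Your_AWS_node-01 ALERT sk.panic: Panic String: "
    "aggr aggr1: raid volfsm, fatal disk error in RAID group with no parity "
    "disk.. Disk 0b.6 S/N [00000000V-xNBTca/C86] error: no valid path to disk.",
]

# Match the disk name and the serial number inside the square brackets.
disk_pattern = re.compile(r"Disk (\S+) S/N \[([^\]]+)\]")

failed_disks = [m.groups() for line in ems_lines
                for m in [disk_pattern.search(line)] if m]
print(failed_disks)  # [('0b.6', '00000000V-xNBTca/C86')]
```

This only helps when many EMS lines must be scanned; for a single event the disk name can of course be read directly from the message.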

 
  • The disk show output does not list all of the disks that are expected to make up the aggregate. In this example, only 5 disks are shown for aggr1, although previous AutoSupports show that there should be 6 disks in the aggregate:

Your_AWS_node::> disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
NET-1.1             137.8GB     -   - VMDISK  aggregate   aggr0_Your_AWS_node_01
                                                                    Your_AWS_node-01
NET-1.3              1007GB     -   - VMDISK  aggregate   aggr1     Your_AWS_node-01
NET-1.4              1007GB     -   - VMDISK  aggregate   aggr1     Your_AWS_node-01
NET-1.5              1007GB     -   - VMDISK  aggregate   aggr1     Your_AWS_node-01
NET-1.7              1007GB     -   - VMDISK  aggregate   aggr1     Your_AWS_node-01
NET-1.8              1007GB     -   - VMDISK  aggregate   aggr1     Your_AWS_node-01
6 entries were displayed.

 

Comparing the current disk show output to previous AutoSupports, we can see in this example that disk NET-1.6 is missing.
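The comparison above can be sketched as a simple set difference. The disk lists below are hand-copied from this example and from the (hypothetical) earlier AutoSupport that showed 6 disks in aggr1:

```python
# Disks recorded for aggr1 in a previous AutoSupport (assumed for this sketch).
previous_autosupport_disks = {
    "NET-1.3", "NET-1.4", "NET-1.5", "NET-1.6", "NET-1.7", "NET-1.8",
}

# Disks listed for aggr1 in the current `disk show` output above.
current_disks = {
    "NET-1.3", "NET-1.4", "NET-1.5", "NET-1.7", "NET-1.8",
}

# Set difference: disks present before but absent now.
missing = sorted(previous_autosupport_disks - current_disks)
print(missing)  # ['NET-1.6']
```

In practice the two lists would be parsed from the current disk show output and from the DISK section of the previous AutoSupport rather than typed by hand.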


NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.