Delete foreign aggregates in NetApp cDOT 8.3.x

# After adding a new disk shelf I noticed that these “new” disks already had an owner. So after adding them to the shelf stack they looked like this:

cluster01::> disk show -shelf 9

                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
2.9.0                     -     9   0 SAS     unknown     -         clusterB-01
2.9.1                     -     9   1 SAS     unknown     -         clusterB-01
2.9.2                     -     9   2 SAS     unknown     -         clusterB-01
2.9.3                     -     9   3 SAS     unknown     -         clusterB-01
2.9.4                     -     9   4 SAS     unknown     -         clusterB-01
2.9.5                     -     9   5 SAS     unknown     -         clusterB-01
2.9.6                     -     9   6 SAS     unknown     -         clusterB-01
2.9.7                     -     9   7 SAS     unknown     -         clusterB-01
2.9.8                     -     9   8 SAS     unknown     -         clusterB-01
2.9.9                     -     9   9 SAS     unknown     -         clusterB-01
2.9.10                    -     9  10 SAS     unknown     -         clusterB-01
2.9.11                    -     9  11 SAS     unknown     -         clusterB-01
2.9.12                    -     9  12 SAS     unknown     -         clusterB-01
2.9.13                    -     9  13 SAS     unknown     -         clusterB-01
2.9.14                    -     9  14 SAS     unknown     -         clusterB-01
2.9.15                    -     9  15 SAS     unknown     -         clusterB-01
2.9.16                    -     9  16 SAS     unknown     -         clusterB-01
2.9.17                    -     9  17 SAS     unknown     -         clusterB-01
2.9.18                    -     9  18 SAS     unknown     -         clusterB-01
2.9.19                    -     9  19 SAS     unknown     -         clusterB-01
2.9.20                    -     9  20 SAS     unknown     -         clusterB-01
2.9.21                    -     9  21 SAS     unknown     -         clusterB-01
2.9.22                    -     9  22 SAS     unknown     -         clusterB-01
2.9.23                    -     9  23 SAS     unknown     -         clusterB-01

24 entries were displayed.
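
# Before removing the old owner it is worth checking how disk auto-assign is configured on the nodes, so you know which node will pick the disks up afterwards. Just a sketch; the option can also be switched off temporarily if you prefer to assign the disks by hand:

cluster01::> storage disk option show -fields autoassign

cluster01::> storage disk option modify -node cluster01-01 -autoassign off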

# So I quickly removed the owner to assign them to the new cluster:

cluster01::> disk removeowner -disk 2.9.*
Warning: Disks may be automatically assigned to the node because the disk's auto-assign option is enabled. If the affected volumes are not offline, the disks may be auto-assigned during the remove owner operation, which will cause unexpected results. To verify that the volumes are offline, abort this command and use "volume show".
Do you want to continue? {y|n}: y
24 entries were acted on.
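
# If auto-assign happens to be switched off, the disks can also be assigned to their new owner manually. A sketch using the same disk range as above:

cluster01::> storage disk assign -disk 2.9.* -owner cluster01-01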

# In my case auto-assign kicked in: cluster01-01 took the new disks because they were unassigned and attached to an already existing stack. But some of the disks still carried foreign aggregates:

cluster01::> disk show -shelf 9

                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
2.9.0               836.9GB     9   0 SAS     aggregate   metrocluster_aggr_siteB_1
                                                                    cluster01-01
2.9.1               836.9GB     9   1 SAS     aggregate   aggr0_clusterB_01
                                                                    cluster01-01
2.9.2               836.9GB     9   2 SAS     spare       Pool0     cluster01-01
2.9.3               836.9GB     9   3 SAS     spare       Pool0     cluster01-01
2.9.4               836.9GB     9   4 SAS     spare       Pool0     cluster01-01
2.9.5               836.9GB     9   5 SAS     spare       Pool0     cluster01-01
2.9.6               836.9GB     9   6 SAS     spare       Pool0     cluster01-01
2.9.7               836.9GB     9   7 SAS     spare       Pool0     cluster01-01
2.9.8               836.9GB     9   8 SAS     spare       Pool0     cluster01-01
2.9.9               836.9GB     9   9 SAS     spare       Pool0     cluster01-01
2.9.10              836.9GB     9  10 SAS     spare       Pool0     cluster01-01
2.9.11              836.9GB     9  11 SAS     spare       Pool0     cluster01-01
2.9.12              836.9GB     9  12 SAS     spare       Pool0     cluster01-01
2.9.13              836.9GB     9  13 SAS     spare       Pool0     cluster01-01
2.9.14              836.9GB     9  14 SAS     spare       Pool0     cluster01-01
2.9.15              836.9GB     9  15 SAS     spare       Pool0     cluster01-01
2.9.16              836.9GB     9  16 SAS     spare       Pool0     cluster01-01
2.9.17              836.9GB     9  17 SAS     spare       Pool0     cluster01-01
2.9.18              836.9GB     9  18 SAS     spare       Pool0     cluster01-01
2.9.19              836.9GB     9  19 SAS     spare       Pool0     cluster01-01
2.9.20              836.9GB     9  20 SAS     spare       Pool0     cluster01-01
2.9.21              836.9GB     9  21 SAS     spare       Pool0     cluster01-01
2.9.22              836.9GB     9  22 SAS     spare       Pool0     cluster01-01
2.9.23              836.9GB     9  23 SAS     spare       Pool0     cluster01-01

24 entries were displayed.
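
# To quickly filter out which of the new disks still carry a foreign aggregate, the container type can be used as a query on the disk view (just a sketch):

cluster01::> storage disk show -shelf 9 -container-type aggregate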

# These foreign aggregates are not visible in the clustershell, only in the nodeshell:

cluster01::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster01_01 753.2GB   200.1GB   73% online       1 cluster01-01 raid_dp,normal
aggr0_cluster01_02 2.18TB    1.64TB   25% online       1 cluster01-02 raid_dp,normal
aggr1_cluster01_01_sas 112.5TB   41.15TB   63% online     534 cluster01-01 raid_dp,normal
aggr1_cluster01_02_sata 283.2TB   56.74TB   80% online     859 cluster01-02 raid_dp,normal
aggr5_cluster01_01_ssd 5.24TB    4.95TB    5% online       5 cluster01-01 raid_dp,normal
5 entries were displayed.

cluster01::*> node run * aggr status
2 entries were acted on.

Node: cluster01-01
                      Aggr State    Status           Options
 metrocluster_aggr_siteB_1 offline  raid_dp, aggr    lost_write_protect=off
                                    foreign
                                    degraded
                                    mirror degraded
                                    64-bit
         aggr0_clusterB_01 offline  raid_dp, aggr    diskroot, lost_write_protect=off
                                    foreign
                                    degraded
                                    mirror degraded
                                    64-bit
    aggr5_cluster01_01_ssd online   raid_dp, aggr    nosnap=on, raidsize=10
                                    64-bit
    aggr1_cluster01_01_sas online   raid_dp, aggr    raidsize=19
                                    64-bit
        aggr0_cluster01_01 online   raid_dp, aggr    root
                                    64-bit

Node: cluster01-02
                      Aggr State    Status           Options
   aggr1_cluster01_02_sata online   raid_dp, aggr    raidsize=12
                                    64-bit
        aggr0_cluster01_02 online   raid_dp, aggr    root
                                    64-bit
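
# Before cleaning up, a quick sanity check in the nodeshell that these really are the foreign, offline aggregates and nothing that is still needed does not hurt. A sketch using the verbose nodeshell aggr status for one of them:

cluster01::*> node run -node cluster01-01 -command "aggr status metrocluster_aggr_siteB_1 -v"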

# I then simply removed the stale aggregates in diag mode within cDOT, and everything looked fine again:

cluster01::> set diag

cluster01::*> storage aggregate remove-stale-record -aggregate aggr0_clusterB_01 -nodename cluster01-01

cluster01::*> storage aggregate remove-stale-record -aggregate metrocluster_aggr_siteB_1 -nodename cluster01-01

cluster01::*> set admin

cluster01::> disk show -shelf 9
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
2.9.0               836.9GB     9   0 SAS     spare       Pool0     cluster01-01
2.9.1               836.9GB     9   1 SAS     spare       Pool0     cluster01-01
2.9.2               836.9GB     9   2 SAS     spare       Pool0     cluster01-01
2.9.3               836.9GB     9   3 SAS     spare       Pool0     cluster01-01
2.9.4               836.9GB     9   4 SAS     spare       Pool0     cluster01-01
2.9.5               836.9GB     9   5 SAS     spare       Pool0     cluster01-01
2.9.6               836.9GB     9   6 SAS     spare       Pool0     cluster01-01
2.9.7               836.9GB     9   7 SAS     spare       Pool0     cluster01-01
2.9.8               836.9GB     9   8 SAS     spare       Pool0     cluster01-01
2.9.9               836.9GB     9   9 SAS     spare       Pool0     cluster01-01
2.9.10              836.9GB     9  10 SAS     spare       Pool0     cluster01-01
2.9.11              836.9GB     9  11 SAS     spare       Pool0     cluster01-01
2.9.12              836.9GB     9  12 SAS     spare       Pool0     cluster01-01
2.9.13              836.9GB     9  13 SAS     spare       Pool0     cluster01-01
2.9.14              836.9GB     9  14 SAS     spare       Pool0     cluster01-01
2.9.15              836.9GB     9  15 SAS     spare       Pool0     cluster01-01
2.9.16              836.9GB     9  16 SAS     spare       Pool0     cluster01-01
2.9.17              836.9GB     9  17 SAS     spare       Pool0     cluster01-01
2.9.18              836.9GB     9  18 SAS     spare       Pool0     cluster01-01
2.9.19              836.9GB     9  19 SAS     spare       Pool0     cluster01-01
2.9.20              836.9GB     9  20 SAS     spare       Pool0     cluster01-01
2.9.21              836.9GB     9  21 SAS     spare       Pool0     cluster01-01
2.9.22              836.9GB     9  22 SAS     spare       Pool0     cluster01-01
2.9.23              836.9GB     9  23 SAS     spare       Pool0     cluster01-01
24 entries were displayed.
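
# On cDOT 8.3 the new spares can also be checked from the clustershell; a sketch:

cluster01::> storage aggregate show-spare-disks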

# Don't forget to zero the spares before they can be used:

cluster01::> disk zerospares
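
# Zeroing can take a while. The progress can be followed in the nodeshell, for example (a sketch, same node run as above):

cluster01::> node run -node cluster01-01 -command "aggr status -s"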
