Delete foreign aggregates in NetApp cDOT 8.3.x

# After adding a new disk shelf I found that these “new” disks already had an owner. So after adding them to the shelf stack they looked like this:

cluster01::> disk show -shelf 9

                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- ---------
2.9.0                     -     9   0 SAS     unknown     -         clusterB-01
2.9.1                     -     9   1 SAS     unknown     -         clusterB-01
2.9.2                     -     9   2 SAS     unknown     -         clusterB-01
2.9.3                     -     9   3 SAS     unknown     -         clusterB-01
2.9.4                     -     9   4 SAS     unknown     -         clusterB-01
2.9.5                     -     9   5 SAS     unknown     -         clusterB-01
2.9.6                     -     9   6 SAS     unknown     -         clusterB-01
2.9.7                     -     9   7 SAS     unknown     -         clusterB-01
2.9.8                     -     9   8 SAS     unknown     -         clusterB-01
2.9.9                     -     9   9 SAS     unknown     -         clusterB-01
2.9.10                    -     9  10 SAS     unknown     -         clusterB-01
2.9.11                    -     9  11 SAS     unknown     -         clusterB-01
2.9.12                    -     9  12 SAS     unknown     -         clusterB-01
2.9.13                    -     9  13 SAS     unknown     -         clusterB-01
2.9.14                    -     9  14 SAS     unknown     -         clusterB-01
2.9.15                    -     9  15 SAS     unknown     -         clusterB-01
2.9.16                    -     9  16 SAS     unknown     -         clusterB-01
2.9.17                    -     9  17 SAS     unknown     -         clusterB-01
2.9.18                    -     9  18 SAS     unknown     -         clusterB-01
2.9.19                    -     9  19 SAS     unknown     -         clusterB-01
2.9.20                    -     9  20 SAS     unknown     -         clusterB-01
2.9.21                    -     9  21 SAS     unknown     -         clusterB-01
2.9.22                    -     9  22 SAS     unknown     -         clusterB-01
2.9.23                    -     9  23 SAS     unknown     -         clusterB-01

24 entries were displayed.

# So I quickly removed the owner to assign them to the new cluster:

cluster01::> disk removeowner -disk 2.9.*
Warning: Disks may be automatically assigned to the node because the disk's auto-assign option is enabled. If the affected volumes are not offline, the disks may be auto-assigned during the remove owner operation, which will cause unexpected results. To verify that the volumes are offline, abort this command and use "volume show".
Do you want to continue? {y|n}: y
24 entries were acted on.
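
# In my case auto-assignment was welcome. If the disks should stay unassigned instead (for example to assign them to a specific node manually), the auto-assign option can be turned off first. This is only a sketch of the relevant commands; verify the option state on your own nodes:

cluster01::> storage disk option show -fields autoassign

cluster01::> storage disk option modify -node * -autoassign off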

# One node automatically took ownership of the new disks because they were unassigned and attached to an already existing stack. But some of them still carried foreign aggregates:

cluster01::> disk show -shelf 9

                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- ---------
2.9.0               836.9GB     9   0 SAS     aggregate   metrocluster_aggr_siteB_1
                                                                    cluster01-01
2.9.1               836.9GB     9   1 SAS     aggregate   aggr0_clusterB_01
                                                                    cluster01-01
2.9.2               836.9GB     9   2 SAS     spare       Pool0     cluster01-01
2.9.3               836.9GB     9   3 SAS     spare       Pool0     cluster01-01
2.9.4               836.9GB     9   4 SAS     spare       Pool0     cluster01-01
2.9.5               836.9GB     9   5 SAS     spare       Pool0     cluster01-01
2.9.6               836.9GB     9   6 SAS     spare       Pool0     cluster01-01
2.9.7               836.9GB     9   7 SAS     spare       Pool0     cluster01-01
2.9.8               836.9GB     9   8 SAS     spare       Pool0     cluster01-01
2.9.9               836.9GB     9   9 SAS     spare       Pool0     cluster01-01
2.9.10              836.9GB     9  10 SAS     spare       Pool0     cluster01-01
2.9.11              836.9GB     9  11 SAS     spare       Pool0     cluster01-01
2.9.12              836.9GB     9  12 SAS     spare       Pool0     cluster01-01
2.9.13              836.9GB     9  13 SAS     spare       Pool0     cluster01-01
2.9.14              836.9GB     9  14 SAS     spare       Pool0     cluster01-01
2.9.15              836.9GB     9  15 SAS     spare       Pool0     cluster01-01
2.9.16              836.9GB     9  16 SAS     spare       Pool0     cluster01-01
2.9.17              836.9GB     9  17 SAS     spare       Pool0     cluster01-01
2.9.18              836.9GB     9  18 SAS     spare       Pool0     cluster01-01
2.9.19              836.9GB     9  19 SAS     spare       Pool0     cluster01-01
2.9.20              836.9GB     9  20 SAS     spare       Pool0     cluster01-01
2.9.21              836.9GB     9  21 SAS     spare       Pool0     cluster01-01
2.9.22              836.9GB     9  22 SAS     spare       Pool0     cluster01-01
2.9.23              836.9GB     9  23 SAS     spare       Pool0     cluster01-01

24 entries were displayed.

# These aggregates are not visible in the clustershell, only in the nodeshell:

cluster01::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster01_01
           753.2GB   200.1GB   73% online       1 cluster01-01     raid_dp,
                                                                   normal
aggr0_cluster01_02
            2.18TB    1.64TB   25% online       1 cluster01-02     raid_dp,
                                                                   normal
aggr1_cluster01_01_sas
           112.5TB   41.15TB   63% online     534 cluster01-01     raid_dp,
                                                                   normal
aggr1_cluster01_02_sata
           283.2TB   56.74TB   80% online     859 cluster01-02     raid_dp,
                                                                   normal
aggr5_cluster01_01_ssd
            5.24TB    4.95TB    5% online       5 cluster01-01     raid_dp,
                                                                   normal
5 entries were displayed.

cluster01::*> node run * aggr status
2 entries were acted on.

Node: cluster01-01
           Aggr State           Status                Options
metrocluster_aggr_siteB_1
                offline         raid_dp, aggr         lost_write_protect=off
                                foreign
                                degraded
                                mirror degraded
                                64-bit
aggr0_clusterB_01
                offline         raid_dp, aggr         diskroot, lost_write_protect=off
                                foreign
                                degraded
                                mirror degraded
                                64-bit
aggr5_cluster01_01_ssd
                online          raid_dp, aggr         nosnap=on, raidsize=10
                                64-bit
aggr1_cluster01_01_sas
                online          raid_dp, aggr         raidsize=19
                                64-bit
aggr0_cluster01_01
                online          raid_dp, aggr         root
                                64-bit

Node: cluster01-02
           Aggr State           Status                Options
aggr1_cluster01_02_sata
                online          raid_dp, aggr         raidsize=12
                                64-bit
aggr0_cluster01_02
                online          raid_dp, aggr         root
                                64-bit

# I then removed the stale aggregates in diag mode within cDOT and everything looked fine again:

cluster01::> set diag

cluster01::*> storage aggregate remove-stale-record -aggregate aggr0_clusterB_01 -nodename cluster01-01

cluster01::*> storage aggregate remove-stale-record -aggregate metrocluster_aggr_siteB_1 -nodename cluster01-01

cluster01::*> set admin

cluster01::> disk show -shelf 9
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- ---------
2.9.0               836.9GB     9   0 SAS     spare       Pool0     cluster01-01
2.9.1               836.9GB     9   1 SAS     spare       Pool0     cluster01-01
2.9.2               836.9GB     9   2 SAS     spare       Pool0     cluster01-01
2.9.3               836.9GB     9   3 SAS     spare       Pool0     cluster01-01
2.9.4               836.9GB     9   4 SAS     spare       Pool0     cluster01-01
2.9.5               836.9GB     9   5 SAS     spare       Pool0     cluster01-01
2.9.6               836.9GB     9   6 SAS     spare       Pool0     cluster01-01
2.9.7               836.9GB     9   7 SAS     spare       Pool0     cluster01-01
2.9.8               836.9GB     9   8 SAS     spare       Pool0     cluster01-01
2.9.9               836.9GB     9   9 SAS     spare       Pool0     cluster01-01
2.9.10              836.9GB     9  10 SAS     spare       Pool0     cluster01-01
2.9.11              836.9GB     9  11 SAS     spare       Pool0     cluster01-01
2.9.12              836.9GB     9  12 SAS     spare       Pool0     cluster01-01
2.9.13              836.9GB     9  13 SAS     spare       Pool0     cluster01-01
2.9.14              836.9GB     9  14 SAS     spare       Pool0     cluster01-01
2.9.15              836.9GB     9  15 SAS     spare       Pool0     cluster01-01
2.9.16              836.9GB     9  16 SAS     spare       Pool0     cluster01-01
2.9.17              836.9GB     9  17 SAS     spare       Pool0     cluster01-01
2.9.18              836.9GB     9  18 SAS     spare       Pool0     cluster01-01
2.9.19              836.9GB     9  19 SAS     spare       Pool0     cluster01-01
2.9.20              836.9GB     9  20 SAS     spare       Pool0     cluster01-01
2.9.21              836.9GB     9  21 SAS     spare       Pool0     cluster01-01
2.9.22              836.9GB     9  22 SAS     spare       Pool0     cluster01-01
2.9.23              836.9GB     9  23 SAS     spare       Pool0     cluster01-01
24 entries were displayed.

# Don’t forget to zero the spares before using them:

cluster01::> disk zerospares
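
# Zeroing runs in the background. A sketch for checking the progress from the nodeshell; aggr status -s lists the spares and shows the zeroing percentage next to each disk while it runs:

cluster01::> node run -node cluster01-01 -command "aggr status -s"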


WFA command for querying Active Directory user groups

Below is the source code of an OnCommand Workflow Automation command that searches Active Directory for a specific user group.

Input from the Workflow should be: DOMAIN\usergroup
Output: success if the group is found; otherwise the command throws an error and the Workflow stops.
Installation: Just add a new command to WFA and copy/paste the source code below into the code window. Additionally, install the Windows feature “Active Directory module for Windows PowerShell” on the WFA server.

param (
    [parameter(Mandatory=$true, HelpMessage="AD group to check for")]
    [string]$ADgroup
)

# The Active Directory module must be installed on the WFA server (see above)
Import-Module ActiveDirectory

Get-WFALogger -Info -message $("Checking for AD group " + $ADgroup)

# Strip the DOMAIN\ prefix to get the plain group name
$pos = $ADgroup.IndexOf("\")
$groupname = $ADgroup.Substring($pos + 1)

# Query Active Directory for a group with this SAM account name
$result = Get-ADGroup -Filter { SamAccountName -eq $groupname }

if (!$result)
{
    Get-WFALogger -Info -message $("Specified AD group was not found")
    Set-WfaCommandProgress -Total 1 -Current 0 -ProgressPercentage 100 -Note "AD group was not found"
    throw ("Failed to find AD group '" + $ADgroup + "'")
}
else
{
    Get-WFALogger -Info -message $("Found specified AD group")
    Set-WfaCommandProgress -Total 1 -Current 1 -ProgressPercentage 100
}
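
For a quick test outside of WFA, the core lookup can be run interactively on any machine with the Active Directory module; the group name below is just a hypothetical example:

Import-Module ActiveDirectory

# Returns the group object if it exists, nothing otherwise
Get-ADGroup -Filter { SamAccountName -eq "storage-admins" }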

Script: Direct-SVM support for SnapVault in VSC

Attached is a script to do SnapVault (SnapMirror XDP) with NetApp’s Virtual Storage Console for VMware without adding the whole cluster to the VSC GUI.

VSC_Add_job

VSC 5.0 added support for SnapVault to the VSC for VMware. If you want to use this function, you have to add the whole destination cluster to the VSC. There is no possibility to add only the SVM via a management interface to VSC and use SnapVault and SnapMirror.
I normally don’t add the whole cluster because of the permissions this requires.

The second reason was that VSC 4.2.x had no support for SnapVault updates.

The best solution was to create a script which handles these problems and adds the capability to update SnapVault-to-SnapMirror cascades. It was tested with a number of different VSC versions, including 4.2.1, 4.2.2, 6.0 and 6.1, and with Data ONTAP 8.2.x to 8.3.x.

I named the script SVVMcDOT -> SnapVault for VMware with cDOT.
Along with the script you can find a short manual and the configuration file attached. Just rename svvmcdot.doc to svvmcdot.ps1 (PowerShell script) and the configuration file from svvmcdot_config.doc to svvmcdot.conf, and read the documentation.
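
For example, in PowerShell (assuming the downloaded files sit in the current directory):

Rename-Item .\svvmcdot.doc svvmcdot.ps1
Rename-Item .\svvmcdot_config.doc svvmcdot.conf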

svvmcdot_config
svvmcdot
Documentation_svvmcdot_v1.0

More info:
https://community.netapp.com/t5/VMware-Solutions-Discussions/VSC-4-2-1-and-Clustered-Data-OnTap-SnapMirror/m-p/64113/highlight/true#M6005

Set the SPN if Kerberos authentication does not work or when using a DNS alias for a CIFS share on NetApp

Our example CIFS SVM has the name SVM1 and is joined to Active Directory with the CIFS name SVM1 as well. In addition we need a DNS alias for CIFS access, which is CIFSSHARES.DOMAIN.LOCAL.

# The Service Principal Name has to be set when a DNS alias like in the example is used, or when the cifs sessions output (for example from vserver cifs session show -instance) shows NTLMv2 instead of Kerberos:

Node: NODE1
Vserver: SVM1
Session ID: 1
Connection ID: 12984403
Incoming Data LIF IP Address: 192.168.1.10
Workstation IP address: 192.168.50.101
Authentication Mechanism: NTLMv2
Windows User: DOMAIN\USER
UNIX User: pcuser
Open Shares: 1
Open Files: 0
Open Other: 0
Connected Time: 1d 14h 16m 52s
Idle Time: 28s
Protocol Version: SMB2_1
Continuously Available: No
Is Session Signed: false

# The SPN can be set and viewed with the setspn.exe utility on a domain controller like this:

setspn.exe -A HOST/CIFSSHARES SVM1

setspn.exe -A HOST/CIFSSHARES.DOMAIN.LOCAL SVM1

C:\Users\administrator.DOMAIN>setspn.exe -l SVM1
Registered ServicePrincipalNames for CN=SVM1,CN=Computers,DC=DOMAIN,DC=LOCAL:
HOST/CIFSSHARES.DOMAIN.LOCAL
HOST/CIFSSHARES
HOST/SVM1.DOMAIN.LOCAL
HOST/SVM1
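
# Existing CIFS sessions keep their authentication mechanism, so the client has to build a new session (or refresh its Kerberos tickets) before the change becomes visible. A sketch for checking both sides; the auth-mechanism field name is an assumption based on the instance output above:

C:\> klist purge

cluster::> vserver cifs session show -vserver SVM1 -fields auth-mechanism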

# After setting the correct SPN entries, the cifs sessions output should look like this:

Node: NODE1
Vserver: SVM1
Session ID: 1
Connection ID: 263802468
Incoming Data LIF IP Address: 192.168.1.10
Workstation IP address: 192.168.50.101
Authentication Mechanism: Kerberos
Windows User: DOMAIN\USER
UNIX User: pcuser
Open Shares: 5
Open Files: 11
Open Other: 0
Connected Time: 3h 25m 46s
Idle Time: 35s
Protocol Version: SMB2_1
Continuously Available: No
Is Session Signed: false

Reference from NetApp:
https://kb.netapp.com/support/index?page=content&id=1013601&pmv=print&impressions=false

4 awesome PowerCLI Commands

Here are some useful PowerCLI Commands:
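
All of them assume an existing PowerCLI session to a vCenter or ESXi host; the server name here is just a placeholder:

# Placeholder vCenter name; prompts for credentials
Connect-VIServer -Server vcenter01.domain.local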

# Get all virtual port groups and their load-balancing policy
Get-VirtualPortGroup | Format-Table Name, @{Label="LoadbalancingPolicy"; Expression={ $_.ExtensionData.Config.DefaultPortConfig.UplinkTeamingPolicy.Policy.Value }}

# Get all virtual port groups whose policy is not IP hash
Get-VirtualPortGroup | Where-Object { $_.ExtensionData.Config.DefaultPortConfig.UplinkTeamingPolicy.Policy.Value -ne "loadbalance_ip" }

# Get all VMs with a CD-ROM ISO attached
Get-VM | Format-Table Name, @{Label="ISOfile"; Expression={ ($_ | Get-CDDrive).IsoPath }}

# Get all VMs with snapshots
Get-VM | Get-Snapshot | Select-Object VM, Name, Description, Created

Performance Charts service returned an invalid response

When I wanted to check the performance of a datastore I got the error message “Performance Charts service returned an invalid response”.

I am using the vCenter Server Appliance 6.0 with Update 1.
This problem occurs only with my domain account, not with the administrator@vsphere.local account.

There is an easy way to fix this.
# SSH to the vCenter Appliance and enter the following two commands to enable and enter the shell:

shell.set --enabled true
shell

# Make a backup of the server.xml file in /usr/lib/vmware-perfcharts/tc-instance/conf/ first; a minimal example:
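
cd /usr/lib/vmware-perfcharts/tc-instance/conf
cp server.xml server.xml.bak

# Then add the option maxHttpHeaderSize="65536" to your server.xml as follows: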

<!-- IPv4 configuration -->
<Connector address="127.0.0.1"
           acceptCount="300"
           maxThreads="300"
           connectionTimeout="20000"
           executor="tomcatThreadPool"
           maxKeepAliveRequests="15"
           port="${bio.http.port}"
           maxHttpHeaderSize="65536"
           protocol="org.apache.coyote.http11.Http11Protocol"/>
<!-- IPv6 configuration -->
<Connector address="::1"
           acceptCount="300"
           maxThreads="300"
           connectionTimeout="20000"
           executor="tomcatThreadPool"
           maxKeepAliveRequests="15"
           port="${bio.http.port}"
           maxHttpHeaderSize="65536"
           protocol="org.apache.coyote.http11.Http11Protocol"/>

# Then simply restart the performance charts daemon:

service-control --stop vmware-perfcharts
service-control --start vmware-perfcharts

This issue should be resolved in vCenter Server 6.0 Update 2.

This Knowledge Base article is for Windows vCenter Server only but applies to Appliance as well:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2131040