Veeam 9.5 FLR from NetApp storage fails using NFS

If you are using NFS volumes from a NetApp cDOT cluster as datastores and you want to do a File Level Restore (FLR) with Veeam 9.5, the datastore fails to mount with "permission denied" from the server.

This happens because Veeam creates a new export rule in your root export policy if you have set the IP address of your ESXi host to read-only, as per NetApp best practices. Veeam inserts the new rule at index 1, so mounting the datastore obviously fails:

cluster::> export-policy rule show -vserver vmwaresvm -fields rorule,rwrule
vserver   policyname        ruleindex rorule rwrule ipaddress
--------- ----------------- --------- ------ ------ ----------
vmwaresvm ex_vmwaresvm_lab1         8 any    any    10.10.1.20
vmwaresvm ex_vmwaresvm_lab1         9 any    any    10.10.1.21
vmwaresvm ex_vmwaresvm_lab1        10 any    any    10.10.1.22
vmwaresvm ex_vmwaresvm_root         1 none   any    10.10.1.20
vmwaresvm ex_vmwaresvm_root         9 any    none   10.10.1.20
vmwaresvm ex_vmwaresvm_root        10 any    none   10.10.1.21
vmwaresvm ex_vmwaresvm_root        11 any    none   10.10.1.22

The workaround is to set the export rules for the IP addresses of the ESXi hosts to read-write before the restore, so the root policy looks like this:

cluster::> export-policy rule show -vserver vmwaresvm -fields rorule,rwrule
vserver   policyname        ruleindex rorule rwrule ipaddress
--------- ----------------- --------- ------ ------ ----------
vmwaresvm ex_vmwaresvm_lab1         8 any    any    10.10.1.20
vmwaresvm ex_vmwaresvm_lab1         9 any    any    10.10.1.21
vmwaresvm ex_vmwaresvm_lab1        10 any    any    10.10.1.22
vmwaresvm ex_vmwaresvm_root         9 any    any    10.10.1.20
vmwaresvm ex_vmwaresvm_root        10 any    any    10.10.1.21
vmwaresvm ex_vmwaresvm_root        11 any    any    10.10.1.22
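
The rule change itself could be done roughly like this (policy name and rule indexes are taken from the output above; adjust them to your environment):

cluster::> export-policy rule modify -vserver vmwaresvm -policyname ex_vmwaresvm_root -ruleindex 9 -rwrule any
# repeat for rule indexes 10 and 11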

In my opinion this is not a secure workaround, because someone could mount your SVM root volume and write to it.
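
To limit the exposure, the rules can be switched back to read-only right after the restore has finished, for example:

cluster::> export-policy rule modify -vserver vmwaresvm -policyname ex_vmwaresvm_root -ruleindex 9 -rwrule none
# again for rule indexes 10 and 11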

Let me know if you have the same issues…

By the way, the same problem exists with Veeam 9.0, but there the new rule is not placed at index 1, so the restore works as expected…

UPDATE: This problem is solved with Veeam 9.5 U2 by unchecking the following option:


Add and remove volumes in an SVM DR relationship

Every modification in an SVM DR relationship needs a "snapmirror resync" on the destination site, so it is very easy to add volumes to or remove them from the relationship.
Here is a brief introduction on how to do these two tasks.

Add a new volume to SVM-DR
# Create a new volume in the already existing SVM-DR Source SVM

source_cluster::> volume create -vserver source_svm -volume NewDRVolume -aggregate aggr1_node01_sas -size 10g -state online -type RW
[Job 2706] Job succeeded: Successful
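
Depending on the use case, the new volume may also need to be mounted into the SVM namespace; for example (the junction path here is just an illustration):

source_cluster::> volume mount -vserver source_svm -volume NewDRVolume -junction-path /NewDRVolume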

# Do a snapmirror resync on the destination SVM

dest_cluster::> snapmirror resync -destination-path dest_svm:

# Wait until the resync is done (Relationship Status -> Idle)

dest_cluster::> snapmirror show dest_svm:

Source Path: source_svm:
Destination Path: dest_svm:
Relationship Type: DP
Relationship Group Type: -
SnapMirror Schedule: 30min
SnapMirror Policy Type: async-mirror
SnapMirror Policy: DPDefault
Tries Limit: -
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: vserverdr.2.8a2831d4-558f-11e6-b980-00a0989f2857.2016-09-28_100000
Newest Snapshot Timestamp: 09/28 10:00:00
Exported Snapshot: vserverdr.2.8a2831d4-558f-11e6-b980-00a0989f2857.2016-09-28_100000
Exported Snapshot Timestamp: 09/28 10:00:00
Healthy: true
Unhealthy Reason: -
Constituent Relationship: false
Destination Volume Node: -
Relationship ID: d2cbabb9-558f-11e6-b980-00a0989f2857
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: update
Last Transfer Error: -
Last Transfer Size: 1.64MB
Last Transfer Network Compression Ratio: -
Last Transfer Duration: 0:0:11
Last Transfer From: source_svm:
Last Transfer End Timestamp: 09/28 10:00:11
Progress Last Updated: -
Relationship Capability: -
Lag Time: 0:21:7
Identity Preserve Vserver DR: true
Number of Successful Updates: -
Number of Failed Updates: -
Number of Successful Resyncs: -
Number of Failed Resyncs: -
Number of Successful Breaks: -
Number of Failed Breaks: -
Total Transfer Bytes: -
Total Transfer Time in Seconds: -

# Check the new volume on the DR site

dest_cluster::> vol show -vserver dest_svm
Vserver   Volume               Aggregate          State   Type  Size  Available  Used%
--------- -------------------- ------------------ ------- ----- ----- ---------- -----
dest_svm  dest_svm_root        aggr1_node01_sas   online  RW    1GB   972.6MB    5%
dest_svm  dest_svm_nfs_prod01  aggr1_node01_sas   online  DP    4TB   2.00TB     50%
dest_svm  NewDRVolume          aggr1_node01_sas   online  DP    10GB  9.50GB     5%
3 entries were displayed.

Remove a no longer used volume from SVM-DR
# Remove the volume from the SVM-DR Source SVM

source_cluster::> volume offline -vserver source_svm -volume OldDRVolume
Volume "source_svm:OldDRVolume" is now offline.
source_cluster::> volume delete -vserver source_svm -volume OldDRVolume
Warning: Are you sure you want to delete volume "OldDRVolume" in Vserver "source_svm" ?
{y|n}: y
[Job 2711] Job succeeded: Successful
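
Note: if the volume is mounted in the namespace, it has to be unmounted before it can be taken offline; roughly like this:

source_cluster::> volume unmount -vserver source_svm -volume OldDRVolume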

# Resync the destination svm

dest_cluster::> snapmirror resync -destination-path dest_svm:

# Wait until the resync is done (Relationship Status -> Idle)

dest_cluster::> snapmirror show dest_svm:

Source Path: source_svm:
Destination Path: dest_svm:
Relationship Type: DP
Relationship Group Type: -
SnapMirror Schedule: 30min
SnapMirror Policy Type: async-mirror
SnapMirror Policy: DPDefault
Tries Limit: -
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: vserverdr.0.8a2831d4-558f-11e6-b980-00a0989f2857.2016-09-28_131416
Newest Snapshot Timestamp: 09/28 13:14:16
Exported Snapshot: vserverdr.0.8a2831d4-558f-11e6-b980-00a0989f2857.2016-09-28_131416
Exported Snapshot Timestamp: 09/28 13:14:16
Healthy: true
Unhealthy Reason: -
Constituent Relationship: false
Destination Volume Node: -
Relationship ID: d2cbabb9-558f-11e6-b980-00a0989f2857
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: resync
Last Transfer Error: -
Last Transfer Size: 400KB
Last Transfer Network Compression Ratio: -
Last Transfer Duration: 0:0:35
Last Transfer From: source_svm:
Last Transfer End Timestamp: 09/28 13:14:51
Progress Last Updated: -
Relationship Capability: -
Lag Time: 0:0:40
Identity Preserve Vserver DR: true
Number of Successful Updates: -
Number of Failed Updates: -
Number of Successful Resyncs: -
Number of Failed Resyncs: -
Number of Successful Breaks: -
Number of Failed Breaks: -
Total Transfer Bytes: -
Total Transfer Time in Seconds: -

# Check that the volume is gone on the DR site

dest_cluster::> vol show -vserver dest_svm
Vserver   Volume               Aggregate          State   Type  Size  Available  Used%
--------- -------------------- ------------------ ------- ----- ----- ---------- -----
dest_svm  dest_svm_root        aggr1_node01_sas   online  RW    1GB   972.6MB    5%
dest_svm  dest_svm_nfs_prod01  aggr1_node01_sas   online  DP    4TB   2.00TB     50%
2 entries were displayed.

Delete foreign aggregates in NetApp cDOT 8.3.x

# After adding a new disk shelf I came to the situation that these "new" disks already had an owner before. So after adding them to the shelf stack they looked like this:

cluster01::> disk show -shelf 9

                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- -----------
2.9.0                     -     9   0 SAS     unknown     -         clusterB-01
2.9.1                     -     9   1 SAS     unknown     -         clusterB-01
2.9.2                     -     9   2 SAS     unknown     -         clusterB-01
2.9.3                     -     9   3 SAS     unknown     -         clusterB-01
2.9.4                     -     9   4 SAS     unknown     -         clusterB-01
2.9.5                     -     9   5 SAS     unknown     -         clusterB-01
2.9.6                     -     9   6 SAS     unknown     -         clusterB-01
2.9.7                     -     9   7 SAS     unknown     -         clusterB-01
2.9.8                     -     9   8 SAS     unknown     -         clusterB-01
2.9.9                     -     9   9 SAS     unknown     -         clusterB-01
2.9.10                    -     9  10 SAS     unknown     -         clusterB-01
2.9.11                    -     9  11 SAS     unknown     -         clusterB-01
2.9.12                    -     9  12 SAS     unknown     -         clusterB-01
2.9.13                    -     9  13 SAS     unknown     -         clusterB-01
2.9.14                    -     9  14 SAS     unknown     -         clusterB-01
2.9.15                    -     9  15 SAS     unknown     -         clusterB-01
2.9.16                    -     9  16 SAS     unknown     -         clusterB-01
2.9.17                    -     9  17 SAS     unknown     -         clusterB-01
2.9.18                    -     9  18 SAS     unknown     -         clusterB-01
2.9.19                    -     9  19 SAS     unknown     -         clusterB-01
2.9.20                    -     9  20 SAS     unknown     -         clusterB-01
2.9.21                    -     9  21 SAS     unknown     -         clusterB-01
2.9.22                    -     9  22 SAS     unknown     -         clusterB-01
2.9.23                    -     9  23 SAS     unknown     -         clusterB-01

24 entries were displayed.

# So I quickly removed the owner to assign them to the new cluster:

cluster01::> disk removeowner -disk 2.9.*
Warning: Disks may be automatically assigned to the node because the disk's auto-assign option is enabled. If the affected volumes are not offline, the disks may be auto-assigned during the remove owner operation, which will cause unexpected results. To verify that the volumes are offline, abort this command and use "volume show".
Do you want to continue? {y|n}: y
24 entries were acted on.
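
If disk auto-assignment were disabled, the disks could also be assigned manually, for example (disk name and owner as used in this setup):

cluster01::> storage disk assign -disk 2.9.0 -owner cluster01-01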

# The second node automatically took the new disks because they were unassigned and attached to an already existing stack. But there were already some aggregates on them:

cluster01::> disk show -shelf 9

                     Usable           Disk    Container                             Container
Disk                   Size Shelf Bay Type    Type        Name                      Owner
---------------- ---------- ----- --- ------- ----------- ------------------------- ------------
2.9.0               836.9GB     9   0 SAS     aggregate   metrocluster_aggr_siteB_1 cluster01-01
2.9.1               836.9GB     9   1 SAS     aggregate   aggr0_clusterB_01         cluster01-01
2.9.2               836.9GB     9   2 SAS     spare       Pool0                     cluster01-01
2.9.3               836.9GB     9   3 SAS     spare       Pool0                     cluster01-01
2.9.4               836.9GB     9   4 SAS     spare       Pool0                     cluster01-01
2.9.5               836.9GB     9   5 SAS     spare       Pool0                     cluster01-01
2.9.6               836.9GB     9   6 SAS     spare       Pool0                     cluster01-01
2.9.7               836.9GB     9   7 SAS     spare       Pool0                     cluster01-01
2.9.8               836.9GB     9   8 SAS     spare       Pool0                     cluster01-01
2.9.9               836.9GB     9   9 SAS     spare       Pool0                     cluster01-01
2.9.10              836.9GB     9  10 SAS     spare       Pool0                     cluster01-01
2.9.11              836.9GB     9  11 SAS     spare       Pool0                     cluster01-01
2.9.12              836.9GB     9  12 SAS     spare       Pool0                     cluster01-01
2.9.13              836.9GB     9  13 SAS     spare       Pool0                     cluster01-01
2.9.14              836.9GB     9  14 SAS     spare       Pool0                     cluster01-01
2.9.15              836.9GB     9  15 SAS     spare       Pool0                     cluster01-01
2.9.16              836.9GB     9  16 SAS     spare       Pool0                     cluster01-01
2.9.17              836.9GB     9  17 SAS     spare       Pool0                     cluster01-01
2.9.18              836.9GB     9  18 SAS     spare       Pool0                     cluster01-01
2.9.19              836.9GB     9  19 SAS     spare       Pool0                     cluster01-01
2.9.20              836.9GB     9  20 SAS     spare       Pool0                     cluster01-01
2.9.21              836.9GB     9  21 SAS     spare       Pool0                     cluster01-01
2.9.22              836.9GB     9  22 SAS     spare       Pool0                     cluster01-01
2.9.23              836.9GB     9  23 SAS     spare       Pool0                     cluster01-01

24 entries were displayed.

# It is not possible to see these aggregates in the clustershell, only within the nodeshell:

cluster01::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster01_01 753.2GB   200.1GB   73% online       1 cluster01-01 raid_dp,normal
aggr0_cluster01_02 2.18TB    1.64TB   25% online       1 cluster01-02 raid_dp,normal
aggr1_cluster01_01_sas 112.5TB   41.15TB   63% online     534 cluster01-01 raid_dp,normal
aggr1_cluster01_02_sata 283.2TB   56.74TB   80% online     859 cluster01-02 raid_dp,normal
aggr5_cluster01_01_ssd 5.24TB    4.95TB    5% online       5 cluster01-01 raid_dp,normal
5 entries were displayed.

cluster01::*> node run * aggr status
2 entries were acted on.

Node: cluster01-01
Aggr                      State    Status          Options
metrocluster_aggr_siteB_1 offline  raid_dp, aggr   lost_write_protect=off
                                   foreign
                                   degraded
                                   mirror degraded
                                   64-bit
aggr0_clusterB_01         offline  raid_dp, aggr   diskroot, lost_write_protect=off
                                   foreign
                                   degraded
                                   mirror degraded
                                   64-bit
aggr5_cluster01_01_ssd    online   raid_dp, aggr   nosnap=on, raidsize=10
                                   64-bit
aggr1_cluster01_01_sas    online   raid_dp, aggr   raidsize=19
                                   64-bit
aggr0_cluster01_01        online   raid_dp, aggr   root
                                   64-bit

Node: cluster01-02
Aggr                      State    Status          Options
aggr1_cluster01_02_sata   online   raid_dp, aggr   raidsize=12
                                   64-bit
aggr0_cluster01_02        online   raid_dp, aggr   root
                                   64-bit

# I then just removed the old aggregates in diag mode within cDOT, and everything looked fine again:

cluster01::> set diag

cluster01::*> storage aggregate remove-stale-record -aggregate aggr0_clusterB_01 -nodename cluster01-01

cluster01::*> storage aggregate remove-stale-record -aggregate metrocluster_aggr_siteB_1 -nodename cluster01-01

cluster01::*>  set admin

cluster01::> disk show -shelf 9
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- ------------
2.9.0               836.9GB     9   0 SAS     spare       Pool0     cluster01-01
2.9.1               836.9GB     9   1 SAS     spare       Pool0     cluster01-01
2.9.2               836.9GB     9   2 SAS     spare       Pool0     cluster01-01
2.9.3               836.9GB     9   3 SAS     spare       Pool0     cluster01-01
2.9.4               836.9GB     9   4 SAS     spare       Pool0     cluster01-01
2.9.5               836.9GB     9   5 SAS     spare       Pool0     cluster01-01
2.9.6               836.9GB     9   6 SAS     spare       Pool0     cluster01-01
2.9.7               836.9GB     9   7 SAS     spare       Pool0     cluster01-01
2.9.8               836.9GB     9   8 SAS     spare       Pool0     cluster01-01
2.9.9               836.9GB     9   9 SAS     spare       Pool0     cluster01-01
2.9.10              836.9GB     9  10 SAS     spare       Pool0     cluster01-01
2.9.11              836.9GB     9  11 SAS     spare       Pool0     cluster01-01
2.9.12              836.9GB     9  12 SAS     spare       Pool0     cluster01-01
2.9.13              836.9GB     9  13 SAS     spare       Pool0     cluster01-01
2.9.14              836.9GB     9  14 SAS     spare       Pool0     cluster01-01
2.9.15              836.9GB     9  15 SAS     spare       Pool0     cluster01-01
2.9.16              836.9GB     9  16 SAS     spare       Pool0     cluster01-01
2.9.17              836.9GB     9  17 SAS     spare       Pool0     cluster01-01
2.9.18              836.9GB     9  18 SAS     spare       Pool0     cluster01-01
2.9.19              836.9GB     9  19 SAS     spare       Pool0     cluster01-01
2.9.20              836.9GB     9  20 SAS     spare       Pool0     cluster01-01
2.9.21              836.9GB     9  21 SAS     spare       Pool0     cluster01-01
2.9.22              836.9GB     9  22 SAS     spare       Pool0     cluster01-01
2.9.23              836.9GB     9  23 SAS     spare       Pool0     cluster01-01
24 entries were displayed.

# Don’t forget to zero the spares to use them:

cluster01::> disk zerospares
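
To verify that the disks ended up as usable spares, a check along these lines should work on 8.3.x:

cluster01::> storage aggregate show-spare-disks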

WFA command for querying Active Directory user groups

Below is the source code of an OnCommand Workflow Automation (WFA) command that searches Active Directory for a specific user group.

Input from the workflow should be: DOMAIN\usergroup
Output is either: found, or not found with an error (the workflow is then stopped)
Installation: Just add a new command to WFA and copy/paste the source code below into the code window. Additionally, install the Windows feature "Active Directory module for Windows PowerShell" on the WFA server.
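
On a Windows Server based WFA host, that feature can be installed via PowerShell, roughly like this (uses the ServerManager module that ships with the server OS):

Install-WindowsFeature RSAT-AD-PowerShell

Here is the command code: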

param (
    [parameter(Mandatory=$true, HelpMessage="AD group to check for")]
    [string]$ADgroup
)

# Requires the Active Directory module for Windows PowerShell
Import-Module ActiveDirectory

Get-WFALogger -Info -message $("Checking for AD group " + $ADgroup)

# Strip the domain prefix (DOMAIN\usergroup -> usergroup)
$pos = $ADgroup.IndexOf("\")
$groupname = $ADgroup.Substring($pos+1)

# Query Active Directory for the group by its SAM account name
$result = Get-ADGroup -Filter {SamAccountName -eq $groupname}

if(!$result)
{
    Get-WFALogger -Info -message $("Specified AD group was not found")
    Set-WfaCommandProgress -Total 1 -Current 0 -ProgressPercentage 100 -Note "AD group was not found"
    throw ("Failed to find AD group '" + $ADgroup + "'")
}
else
{
    Get-WFALogger -Info -message $("Found specified AD group")
    Set-WfaCommandProgress -Total 1 -Current 1 -ProgressPercentage 100
}

Script: Direct-SVM support for SnapVault in VSC

Attached is a script to do SnapVault (SnapMirror XDP) with NetApp's Virtual Storage Console (VSC) for VMware without adding the whole cluster into the VSC GUI.

VSC_Add_job

VSC 5.0 added support for SnapVault to VSC for VMware. If you want to use this function, you have to add the whole destination cluster into the VSC. There is no possibility to add only the SVM via a management interface into VSC and use SnapVault and SnapMirror.
I normally don't add the whole cluster because of the permissions.

The second reason was that VSC 4.2.x had no support for SnapVault updates.

The best solution was to create a script which handles these problems and adds the capability to update SnapVault-to-SnapMirror cascades. It was tested with a number of different VSC versions, including 4.2.1, 4.2.2, 6.0 and 6.1, and Data ONTAP 8.2.x to 8.3.x.

I named the script SVVMcDOT -> SnapVault for VMware with cDOT.
Along with the script you can find a little manual and the configuration file attached. Just rename svvmcdot.doc to svvmcdot.ps1 (PowerShell script), rename the configuration file from svvmcdot_config.doc to svvmcdot.conf, and read the documentation.

svvmcdot_config
svvmcdot
Documentation_svvmcdot_v1.0

More info:
https://community.netapp.com/t5/VMware-Solutions-Discussions/VSC-4-2-1-and-Clustered-Data-OnTap-SnapMirror/m-p/64113/highlight/true#M6005

Set the SPN if Kerberos authentication does not work or if using a DNS alias for a CIFS share on NetApp

Our example CIFS SVM is named SVM1 and is joined to Active Directory with the CIFS server name SVM1 as well. We also need a DNS alias for CIFS access, which is CIFSSHARES.DOMAIN.LOCAL.
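
The alias itself is just a DNS CNAME record; on a Windows DNS server it could be created with something like this (zone and names taken from the example above):

Add-DnsServerResourceRecordCName -ZoneName "DOMAIN.LOCAL" -Name "CIFSSHARES" -HostNameAlias "SVM1.DOMAIN.LOCAL"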

# The Service Principal Name (SPN) has to be set if a DNS alias like in the example is used, or if the output for the CIFS sessions shows NTLMv2 instead of Kerberos:

cluster::> vserver cifs session show -session-id 1 -instance
Node: NODE1
Vserver: SVM1
Session ID: 1
Connection ID: 12984403
Incoming Data LIF IP Address: 192.168.1.10
Workstation IP address: 192.168.50.101
Authentication Mechanism: NTLMv2
Windows User: DOMAIN\USER
UNIX User: pcuser
Open Shares: 1
Open Files: 0
Open Other: 0
Connected Time: 1d 14h 16m 52s
Idle Time: 28s
Protocol Version: SMB2_1
Continuously Available: No
Is Session Signed: false

# The SPN can be set and viewed with the setspn.exe utility on a domain controller like this:

setspn.exe -A HOST/CIFSSHARES SVM1

setspn.exe -A HOST/CIFSSHARES.DOMAIN.LOCAL SVM1

C:\Users\administrator.DOMAIN>setspn.exe -l SVM1
Registered ServicePrincipalNames for CN=SVM1,CN=Computers,DC=DOMAIN,DC=LOCAL:
HOST/CIFSSHARES.DOMAIN.LOCAL
HOST/CIFSSHARES
HOST/SVM1.DOMAIN.LOCAL
HOST/SVM1

# After setting the correct SPN entry the output of cifs sessions should be like this:

Node: NODE1
Vserver: SVM1
Session ID: 1
Connection ID: 263802468
Incoming Data LIF IP Address: 192.168.1.10
Workstation IP address: 192.168.50.101
Authentication Mechanism: Kerberos
Windows User: DOMAIN\USER
UNIX User: pcuser
Open Shares: 5
Open Files: 11
Open Other: 0
Connected Time: 3h 25m 46s
Idle Time: 35s
Protocol Version: SMB2_1
Continuously Available: No
Is Session Signed: false

Reference article from NetApp:
https://kb.netapp.com/support/index?page=content&id=1013601&pmv=print&impressions=false

4 awesome PowerCLI Commands

Here are some useful PowerCLI Commands:
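
All of them assume an existing connection to a vCenter Server or ESXi host, for example (hypothetical server name):

Connect-VIServer -Server vcenter01.lab.local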

#Get all VirtualPortGroups and their load balancing policy
Get-VirtualPortGroup | ft Name, @{Label="LoadbalancingPolicy"; Expression = { $_.ExtensionData.config.defaultportconfig.uplinkteamingpolicy.policy.value}}

#Get all VirtualPortGroups which are not set to IP hash
Get-VirtualPortGroup | ? { $_.ExtensionData.config.defaultportconfig.uplinkteamingpolicy.policy.value -ne "loadbalance_ip" }

#Get all VMs with a CD-ROM ISO attached
Get-VM | ft Name, @{Label="ISOfile"; Expression = { ($_ | Get-CDDrive).ISOPath }}

#Get all VMs with snapshots
Foreach($vm in Get-VM){ Get-Snapshot $vm | Select VM,Name,Description,Created }