Is there a way to set or see what the snapshot reserve is for the iSCSI volumes created by Abiquo on a NetApp filer?

Abiquo uses the hosting volume's default snapshot reserve settings for the iSCSI LUNs it creates; the current values can be obtained with the snap autodelete command.
So in the case of vfiler0 the values are typically the following:
sim3> snap autodelete vol0 show
snapshot autodelete settings for vol0:
state                           : off
commitment                      : try
trigger                         : volume
target_free_space               : 20%
delete_order                    : oldest_first
defer_delete                    : user_created
prefix                          : (not specified)
destroy_list                    : none
 
So an iSCSI LUN volume created by Abiquo receives the same values:
sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 show
snapshot autodelete settings for ABI_d8415134_980e_4db3_97bc_5eb292ab0a46:
state                           : off
commitment                      : try
trigger                         : volume
target_free_space               : 20%
delete_order                    : oldest_first
defer_delete                    : user_created
prefix                          : (not specified)
destroy_list                    : none
sim3>
 
Changing the host volume parameters will apply to any new LUNs created on the vfiler; however, you will need to manually change the parameters for previously created LUNs.
 
Opinions differ, but many storage administrators use the following settings (your mileage may vary); a sketch of applying them follows the list:
commitment=try
trigger=snap_reserve
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
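
As a rough sketch of applying these values (assuming the example LUN volume name from above and Data ONTAP 7-mode syntax, which may differ slightly between releases), each option is set individually with the snap autodelete command and autodelete is then enabled:

sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 commitment try
sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 trigger snap_reserve
sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 target_free_space 20
sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 delete_order oldest_first
sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 defer_delete user_created
sim3> snap autodelete ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 on

Running snap autodelete with the show argument afterwards confirms the new values.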

Existing snapshot reserves are configurable per volume, but the defaults can be set globally at the host volume level.
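
To see or set the snapshot reserve itself on a particular volume (the value asked about above), the snap reserve command can be used. A minimal sketch, again assuming the example LUN volume name; without a percentage the command reports the current reserve, with a percentage it sets it:

sim3> snap reserve ABI_d8415134_980e_4db3_97bc_5eb292ab0a46
sim3> snap reserve ABI_d8415134_980e_4db3_97bc_5eb292ab0a46 20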

For any pre-2.0 Abiquo NetApp integration, the snapshot policy must be disabled at the aggregate level.

To disable all snapshots at the aggregate level do the following:

1)  Check the snapshot policy of the aggregate where the Abiquo volumes are created

 sim3> aggr status aggr1 -v
           Aggr State           Status            Options
          aggr1 online          raid_dp, aggr     nosnap=off, raidtype=raid_dp,
                                32-bit            raidsize=7,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  resyncsnaptime=60,
                                                  fs_size_fixed=off,
                                                  snapshot_autodelete=on,
                                                  lost_write_protect=on,
                                                  ha_policy=cfo

                 Volumes: vf1rootvol, vf1datavol, vf1pkvolX, vf1pkvolY

                 Plex /aggr1/plex0: online, normal, active
                    RAID group /aggr1/plex0/rg0: normal
 sim3>

2)  Set the nosnap option to disable snapshots for volumes created in this aggregate

sim3> aggr options aggr1 nosnap on
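
To confirm the option took effect, the aggregate's options can be listed again; nosnap=on should now appear in the output (exact formatting varies by release):

sim3> aggr options aggr1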

3)  Now any new volumes created in this aggregate will not have snapshots enabled by default

4)  Existing volumes already created on the aggregate need snapshots disabled and space reclaimed manually; their current settings can be checked first, as shown below
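
A sketch of checking the current settings, using the example volume vf1pkvolX; run without further arguments, both commands simply report the current values:

sim3> snap sched vf1pkvolX
sim3> snap reserve vf1pkvolX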

5)  For each volume created on the aggregate, disable scheduled snapshots using the snap command

sim3> snap sched vf1pkvolX 0 0 0

6)  For each volume created, reclaim the existing snapshot reserve space using the snap command

sim3> snap reserve vf1pkvolX 0

7)  Validate the space has been reclaimed

sim3> df /vol/vf1pkvolX
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/vf1pkvolX/          25600        312      25288       1%  /vol/vf1pkvolX/
/vol/vf1pkvolX/.snapshot          0        196          0     ---%  /vol/vf1pkvolX/.snapshot
sim3>

Notice the snapshot reserve has been set to 0.

Scheduled snapshots are now disabled and the snapshot reserve space has been returned to the volume's active file system.
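
Note that setting the reserve to 0 does not delete snapshots that already exist: the df output above still shows 196 KB used under .snapshot. If that space must also be recovered, the existing snapshots can be listed and, if they are genuinely not needed, deleted (a sketch for the example volume; see the precaution below before doing this):

sim3> snap list vf1pkvolX
sim3> snap delete -a vf1pkvolX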

Note: observe precautions when disabling volume snapshots, as volume data recovery from snapshots will no longer be possible.
