Archive for the ‘NetApp’ Category

Setup NetApp AutoSupport

12/13/2012 Leave a comment

Use the following commands to set up NetApp AutoSupport.

options autosupport.enable on

options autosupport.support.enable on

options autosupport.support.transport https

Only if a proxy is used: options autosupport.proxy.url MYPROXYSERVERURL

options autosupport.to  Customer@Email1,Customer@Email2,Customer@Email3

You can include up to five email addresses in the autosupport.to option.

options autosupport.noteto Customer@Email1,Customer@Email2,Customer@Email3

You can include up to five email addresses in the autosupport.noteto option.

options autosupport.partner.to partner@email1,partner@email2,partner@email3

You can include up to five email addresses in the autosupport.partner.to option.

Check that addresses are correctly configured by listing the destinations using the following command.

autosupport show destinations

Configure SMTP by setting the following options:

options autosupport.mailhost mysmtpserver1,mysmtpserver2,mysmtpserver3,mysmtpserver4,mysmtpserver5

options autosupport.from filername@customerDNS.com

Check the overall configuration using the options autosupport command.

Test that AutoSupport messages are being sent and received:
a) Use the options autosupport.doit test command (see the example after this list).
b) Confirm that NetApp is receiving your AutoSupport messages by checking the email address that NetApp technical support has on file for the system owner, which should have received an automated response from the NetApp mail handler.
c) Optional. Confirm that the AutoSupport message is being sent to your internal support organization or to your support partner by checking the email of any address that you configured for the autosupport.to, autosupport.noteto, or autosupport.partner.to options.
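
As a concrete example of step (a), the following triggers a test AutoSupport message; the quoted text is an arbitrary subject line:

options autosupport.doit "AutoSupport configuration test"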

Categories: NetApp

Recommended Storage Protocol for NetApp VMware vSphere Solutions

10/04/2012 Leave a comment

Although NFS is not required, it is the protocol recommended by NetApp and VMware engineers and professional services when deploying a VMware-plus-NetApp solution. Officially, both companies say they support all protocols, which is true. However, every NetApp and VMware SE or PSE I have spoken with recommends NFS. Additionally, NFS supports datastore sizes of up to 100TB, depending on the NetApp model, versus a maximum datastore size of 64TB for FC/FCoE and iSCSI.

To achieve a 64TB datastore over block protocols, you would have to concatenate multiple LUNs into a single logical datastore, potentially 32 LUNs x 2TB = 64TB. NetApp does not recommend the use of spanned VMFS datastores.

Another benefit of NFS is sizing flexibility: an NFS datastore can be both grown and shrunk, whereas a LUN can only be grown, not shrunk.

Lastly, NFS is much simpler to configure and support.
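
As a rough illustration of that simplicity, mounting an NFS export as a datastore on a classic ESX host is a one-liner from the service console (the filer name netapp01, export path /vol/ds1, and datastore label nfs_ds1 are made up for this example):

esxcfg-nas -a -o netapp01 -s /vol/ds1 nfs_ds1
esxcfg-nas -l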

For more detail, see NetApp Storage Best Practices for VMware vSphere (TR-3749):

http://www.vmwaregrid.com/realworld_netapp/beardata/assets/tr-3749.pdf

Additional note from the referenced best practice article:

NFS DATASTORES ON NETAPP

Deploying VMware with the NetApp advanced NFS results in a high-performance, easy-to-manage implementation that provides VM-to-datastore ratios that cannot be accomplished with other storage protocols such as FC. This architecture can result in a 10x increase in datastore density with a correlating reduction in the number of datastores. With NFS, the virtual infrastructure receives operational savings because there are fewer storage pools to provision, manage, back up, replicate, and so on.

Through NFS, customers receive an integration of VMware virtualization technologies with WAFL® (Write Anywhere File Layout), which is a NetApp advanced data management and storage virtualization engine. This integration provides transparent access to VM-level storage virtualization offerings such as production-use data deduplication, immediate zero-cost VM and datastore clones, array-based thin provisioning, automated policy-based datastore resizing, and direct access to array-based Snapshot™ copies. NetApp provides integrated tools such as Site Recovery Manager, SnapManager for Virtual Infrastructure (SMVI), the RCU, and the VSC.

Categories: NetApp, VMware, vSphere

Redirect-on-write SnapShot is a Reliable Backup Solution

01/03/2011 1 comment

I hear a common comment from people who either favor EMC over NetApp or do not understand the differences between the Snapshot technologies provided by each vendor. Following are some of my thoughts and opinions. First, NetApp's Snapshot technology leverages redirect-on-write, whereas EMC utilizes copy-on-write snapshot technology. My opinion, similar to that of Mr. Curtis Preston, "Mr. Backup", is that:

“In order for snapshot-based backup to work, I thoroughly believe that the array vendor must not use copy-on-write snapshot technology.” Source: http://www.backupcentral.com/mr-backup-blog-mainmenu-47/13-mr-backup-blog/328-tech-field-day-post-1-compellent-and-nimble-storage.html

I continue to argue with other IT professionals that Snapshot technology is a great, reliable, fast technology that, when complemented by a storage vendor that provides a unified suite of backup, management, and storage technologies, can be a far superior enterprise backup solution compared to traditional backup technologies. Frankly, NetApp provides a unified backup suite that integrates with and supports most of the major enterprise software and services that require backups, such as MOSS, SQL Server, Exchange, CIFS shares, VMware vSphere, and more.

If I were running an EMC storage solution, I would have to lean on a traditional backup solution such as EMC Avamar or a combination of a third-party vendor plus Data Domain. Is that unified?

Categories: NetApp, SAN

NetApp Disk Sanitization

12/16/2010 2 comments

The disk sanitize feature performs a disk format operation and uses 3 successive byte overwrite patterns per cycle and a default 6 cycles per operation, for a total of 18 complete disk overwrite passes, in compliance with the United States Department of Defense and Department of Energy security requirements.

How selective disk sanitization works

Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte patterns or random data so that recovery of the original data becomes impossible. You use the disk sanitize command if you want to ensure that no one can recover the data on the disks.

The disk sanitize command uses three successive default or user-specified byte overwrite patterns for up to seven cycles per operation. Depending on the disk capacity, the patterns, and the number of cycles, the process can take several hours. Sanitization runs in the background. You can start, stop, and display the status of the sanitization process.

After you enter the disk sanitize start command, Data ONTAP begins the sanitization process on each of the specified disks. The process consists of a disk format operation, followed by the specified overwrite patterns repeated for the specified number of cycles.

Note: The formatting phase of the disk sanitization process is skipped on ATA disks.

If the sanitization process is interrupted by power failure, system panic, or a user-invoked disk sanitize abort command, the disk sanitize command must be re-invoked and the process repeated from the beginning in order for the sanitization to take place.

When the sanitization process is complete, the specified disks are in a sanitized state. You return the sanitized disks to the spare disk pool with the disk sanitize release command.

Selective disk sanitization

Selective disk sanitization consists of physically obliterating data in specified files or volumes while preserving all other data located on the affected aggregate for continued user access. Because a file can be stored on multiple disks, there are three parts to the process.

To selectively sanitize data contained in an aggregate, you must carry out three general tasks.

  • Delete the files, directories or volumes from the aggregate that contains them.

You must also delete any volume Snapshot copies that contain data from those files, directories, or volumes.

  • Migrate the data that you want to preserve to a new set of disks in a destination aggregate on the same storage system.

You migrate data using the ndmpcopy command (a brief example follows this list).

  • Destroy the original aggregate and sanitize all the disks that were RAID group members in that aggregate.
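
As a sketch of the migration step, assuming the data to preserve is in /vol/vol_keep and the destination aggregate contains a volume named /vol/vol_dest (both names are hypothetical), the copy can be run locally on the filer:

ndmpcopy /vol/vol_keep /vol/vol_dest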

Tips for creating and backing up aggregates containing data that will be sanitized

If you are creating or backing up aggregates to contain data that might need to be sanitized, following some simple guidelines will reduce the time it takes to sanitize your data.

  • Make sure your aggregates containing sensitive data are not larger than they need to be.

If they are larger than needed, sanitization requires more time, disk space, and bandwidth.

  • When you back up aggregates containing sensitive data, avoid backing them up to aggregates that also contain large amounts of nonsensitive data.

This will reduce the resources required to move nonsensitive data before sanitizing sensitive data.

Before You Begin

Before you can use the disk sanitization feature, you must install the disk sanitization license.

Attention:

Once installed on a storage system, the license for disk sanitization is permanent.

The disk sanitization license prohibits the following commands from being used on the storage system:

  • dd (to copy blocks of data)
  • dumpblock (to print dumps of disk blocks)
  • setflag wafl_metadata_visible (to allow access to internal WAFL files)

For more information about licenses, see the System Administration Guide.

Considerations

You can sanitize any disk that has spare status.

If your storage system is using software-based disk ownership, you must ensure that the disks you want to sanitize have been assigned ownership. You cannot sanitize unowned disks.
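
For example, on a system using software-based disk ownership, you can list unowned disks and assign one to the controller before sanitizing it (the disk name 0a.25 is illustrative):

disk show -n
disk assign 0a.25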

Steps

  1. Verify that the disks that you want to sanitize do not belong to a RAID group in any existing aggregate by entering the following command: sysconfig -r

The disks that you want to sanitize should be listed with spare status.

Note: If the expected disks are not displayed, they have not been assigned ownership. You must assign ownership to a disk before you can sanitize it.

  2. Sanitize the specified disk or disks of all existing data by entering the following command: disk sanitize start [-p pattern1|-r [-p pattern2|-r [-p pattern3|-r]]] [-c cycle_count] disk_list

Attention:

Do not turn off the storage system, disrupt the storage connectivity, or remove target disks while sanitizing. If sanitizing is interrupted while target disks are being formatted, the disks must be reformatted before sanitizing can finish.

If you need to abort the sanitization process, you can do so by using the disk sanitize abort command. If the specified disks are undergoing the disk formatting phase of sanitization, the abort will not occur until the disk formatting is complete. Once the sanitizing is stopped, Data ONTAP displays a message informing you that sanitization was stopped.

-p pattern1 -p pattern2 -p pattern3 specifies a cycle of one to three user-defined hex byte overwrite patterns that can be applied in succession to the disks being sanitized. The default pattern is three passes, using 0x55 for the first pass, 0xaa for the second pass, and 0x3c for the third pass.

-r replaces a patterned overwrite with a random overwrite for any or all of the passes.

-c cycle_count specifies the number of times the specified overwrite patterns will be applied. The default value is one cycle. The maximum value is seven cycles.

disk_list specifies a space-separated list of the IDs of the spare disks to be sanitized.

  3. To check the status of the disk sanitization process, enter the following command:
    disk sanitize status [disk_list]
  4. To release sanitized disks from the pool of maintenance disks for reuse as spare disks, enter the following command:
    disk sanitize release disk_list

Data ONTAP moves the specified disks from the maintenance pool to the spare pool.

Note: Rebooting the storage system or removing and reinserting a disk that has been sanitized moves that disk from the maintenance pool to the broken pool.

After sanitization completes, the specified disks are put into the maintenance pool and displayed as sanitized. The serial numbers of the sanitized disks are written to /etc/sanitized_disks.

Examples

The following command applies the default three disk sanitization overwrite patterns for one cycle (for a total of 3 overwrites) to the specified disks, 7.6, 7.7, and 7.8:

disk sanitize start 7.6 7.7 7.8

The following command would result in three disk sanitization overwrite patterns for six cycles (for a total of 18 overwrites) to the specified disks:

disk sanitize start -c 6 7.6 7.7 7.8
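
The pattern and random switches described above can be combined. As an additional illustration (not from the original guide), the following would apply 0x55, then 0xaa, then a random pass, for three cycles (nine overwrites total) to disks 7.6 and 7.7:

disk sanitize start -p 0x55 -p 0xaa -r -c 3 7.6 7.7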

After You Finish

You can monitor the status of the sanitization process by using the /etc/sanitized_disks and /etc/sanitization.log files:

  • Status for the sanitization process is written to the /etc/sanitization.log file every 15 minutes.
  • The /etc/sanitized_disks file contains the serial numbers of all drives that have been successfully sanitized. For every invocation of the disk sanitize start command, the serial numbers of the newly sanitized disks are appended to the file.

You can verify that all of the disks were successfully sanitized by checking the /etc/sanitized_disks file.

Categories: NetApp

How to Migrate MOSS Indexes onto a NetApp LUN to Support SMMOSS

07/15/2010 Leave a comment

NetApp SMMOSS, or "SnapManager 5.0 for Microsoft Office SharePoint Server", now protects SharePoint search server index files. Unfortunately, the "SnapManager 5 for Microsoft Office SharePoint Server Installation and Administration Guide" does not fully document this procedure, and a search of the NetApp NOW site turns up little explanation either.

The process becomes even more complicated when you have separated the two indexing services that run within a SharePoint farm onto two separate servers. These two indexing services are:

  1. “Office SharePoint Server Search” service
  2. “Windows SharePoint Services Help Search” service

Steps to Migrate Index Files onto a LUN:

  1. Install the SMMOSS Agent. The agent must be installed on both servers if you have separated the indexing services.
  2. Configure the agent as a member agent.
  3. From the SMMOSS web administration interface, ensure that this agent is joined and active.
  4. Log onto your Index Server with Administrator privileges.

NOTE: If your Index Service and Query Service are installed on separate dedicated servers, you will have to mount one LUN to each of the two servers and then log into each to execute the command. Ensure that you run the correct command on each server.

The -o flag should be run on the server running Indexing for the Office SharePoint Server Search service.

The -s flag should be run on the server running the Windows SharePoint Services Help Search service.

Check Central Administration to determine which servers in your farm are running these services.

  5. Click Start > Run.
  6. Type cmd, then click OK or hit Enter. The command prompt window should appear.
  7. Navigate to: %PROGRAMFILES%\NetApp\SnapManager for SharePoint Server\VaultClient\bin\
  8. If the LUN that you are migrating to has drive letter "F", enter the following commands to move the two index files:

smmossindextool.exe -o -d F
smmossindextool.exe -s -d F

Note: -o and -s cannot be used in the same invocation; they must be run separately. For example, the command smmossindextool.exe -o -s -d F will fail.

Once the migration is complete, the osearch service should be restarted. You will know the migration has completed successfully when you see "Operation completed Successfully" at the command prompt; it may take several minutes.

Following is the help info for the command options.

SMMOSSIndexTool.EXE 5.0.0.0 – SharePoint Index File Migration Tool

Copyright (c) 2001-2010 NetApp, Inc.

 Usage:

   SMMOSSIndexTool.EXE  [-o|-s] [-ssp sspname] -d mountpoint

List of commands and parameters:

================================

   -o

     Move OSearch search index files.

   -s

     Move SPSearch search index files.

   -ssp sspName

     Specify name of the SSP to move. If not specified, all SSPs will be moved.

   -d mountpoint

     Specify the destination mount point where the search index files will be moved to.

 Examples:

———

  SMMOSSIndexTool.exe  -d G

  SMMOSSIndexTool.exe  -o -d H

  SMMOSSIndexTool.exe  -s -d I

  SMMOSSIndexTool.exe  -o -ssp sspName -d E

Categories: SMMOSS

How to Access NetApp SnapShots

07/14/2010 Leave a comment

The following explains the process of accessing NetApp snapshots on a filer from a client machine.

Prerequisites: The volume option nosnapdir must be set to off. To check the nosnapdir option, log into the CLI and run:

vol options volname
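
If nosnapdir is set to on, the snapshot directories are hidden from clients; assuming a volume named vol1, it can be turned off with:

vol options vol1 nosnapdir off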

Snapshots without a qtree:

  1. From a Windows client, click Start > Run.
  2. Enter the UNC \\filer\~snapshots
  3. Click OK or hit Enter.
  4. A browser window opens, presenting the available snapshots.

Snapshots with a qtree:

  1. From a Windows client, click Start > Run.
  2. Enter the UNC \\filer\qtree\~snapshots
  3. Click OK or hit Enter.
  4. A browser window opens, presenting the available snapshots.

Categories: NetApp

Script NetApp SnapDrive Commands

06/22/2010 Leave a comment

To script NetApp SnapDrive commands, you must use sdcli, the SnapDrive command-line interface.

The sdcli commands consist of three input parameters, which must be specified in the correct order, followed by one or more command-line switches. You can specify the command-line switches in any order.  

Before You Begin

When you use the sdcli command-line utility on a Windows 2008 server, you must be logged in as Administrator, not as a user with administrative rights.  

Considerations

Command-line switches are case-sensitive. For instance, the -d switch refers to a single drive letter, while the -D switch refers to one or more drive letters separated by spaces.

Steps

  1. Using a host that has SnapDrive installed, select Start > Run.
  2. Type cmd in the dialog box entry field, and then click OK.
  3. After the Windows command prompt window opens, navigate to the directory on your host where SnapDrive is installed. Example: cd C:\Program Files\NetApp\SnapDrive\
  4. Enter the individual command you want to run, making sure to include all input parameters in the proper order and both required and desired command-line switches in any order. Example: sdcli disk disconnect -d R. Alternatively, enter the name and path of the automation script you want to run (see the sketch after this list). Example: C:\SnapDrive Scripts\disconnect_R_from_host4.bat
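
A minimal sketch of such an automation script, using the same disconnect command shown above (the path and drive letter are examples; verify sdcli syntax against your SnapDrive version):

@echo off
rem disconnect_R_from_host4.bat - disconnect the LUN mounted at drive R
cd /d "C:\Program Files\NetApp\SnapDrive"
sdcli disk disconnect -d R
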
Categories: NetApp