Archive for the ‘NetApp’ Category

Redirect-on-write SnapShot is a Reliable Backup Solution

01/03/2011 1 comment

I hear a common comment from people who either favor EMC over NetApp or do not understand the differences between the SnapShot technologies provided by each vendor. Following are some of my thoughts and opinions. First, NetApp’s SnapShot technology is built on redirect-on-write, whereas EMC’s snapshots use copy-on-write. My opinion, similar to that of Mr. Curtis Preston “Mr. Backup”, is that:

“In order for snapshot-based backup to work, I thoroughly believe that the array vendor must not use copy-on-write snapshot technology.” Source: http://www.backupcentral.com/mr-backup-blog-mainmenu-47/13-mr-backup-blog/328-tech-field-day-post-1-compellent-and-nimble-storage.html
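To make the distinction concrete, below is a purely conceptual sketch in Python (my own illustration, not vendor code; the block-map model is simplified) of why copy-on-write pays an extra read and write on the first overwrite of each block after a snapshot, while redirect-on-write just writes the new block once and moves a pointer:

# Conceptual sketch only: simplified block maps, no real storage I/O.

class CopyOnWriteVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # live data, always written in place
        self.snapshot_area = None    # preserved original blocks

    def snapshot(self):
        self.snapshot_area = {}      # start tracking overwrites

    def write(self, idx, data):
        # First overwrite after a snapshot: read the original block and
        # copy it aside before writing in place (extra read + write).
        if self.snapshot_area is not None and idx not in self.snapshot_area:
            self.snapshot_area[idx] = self.blocks[idx]
        self.blocks[idx] = data

class RedirectOnWriteVolume:
    def __init__(self, blocks):
        self.store = dict(enumerate(blocks))       # physical block store
        self.active = {i: i for i in self.store}   # logical -> physical map
        self.snap_map = None                       # frozen map = the snapshot
        self.next_free = len(self.store)

    def snapshot(self):
        self.snap_map = dict(self.active)  # freeze pointers; no data copied

    def write(self, idx, data):
        # One write to a fresh block, then repoint the active map; the
        # snapshot's map still references the old block, untouched.
        self.store[self.next_free] = data
        self.active[idx] = self.next_free
        self.next_free += 1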

I continue to argue with other IT professionals that SnapShot technology is a great, reliable, fast technology which, when complemented by a storage vendor who provides a unified suite of backup, management and storage technologies, can be a far superior enterprise backup solution to traditional backup technologies. Frankly, NetApp provides a unified backup solution suite which integrates with and supports most of the major enterprise software and services that require backups, such as MOSS, SQL, Exchange, CIFS, VMware vSphere and more.

If I were running an EMC storage solution, I would have to lean on a traditional backup solution such as EMC Avamar, or a combination of a third-party vendor plus Data Domain. Is that unified?

Categories: NetApp, SAN Tags: ,

NetApp Disk Sanitization

12/16/2010 2 comments

The disk sanitize feature performs a disk format operation and uses 3 successive byte overwrite patterns per cycle; run at 6 cycles per operation, that is a total of 18 complete disk overwrite passes, in compliance with United States Department of Defense and Department of Energy security requirements.

How selective disk sanitization works

Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte patterns or random data so that recovery of the original data becomes impossible. You use the disk sanitize command if you want to ensure that no one can recover the data on the disks.

The disk sanitize command uses three successive default or user-specified byte overwrite patterns for up to seven cycles per operation. Depending on the disk capacity, the patterns, and the number of cycles, the process can take several hours. Sanitization runs in the background. You can start, stop, and display the status of the sanitization process.

After you enter the disk sanitize start command, Data ONTAP begins the sanitization process on each of the specified disks. The process consists of a disk format operation, followed by the specified overwrite patterns repeated for the specified number of cycles.

Note: The formatting phase of the disk sanitization process is skipped on ATA disks.

If the sanitization process is interrupted by power failure, system panic, or a user-invoked disk sanitize abort command, the disk sanitize command must be re-invoked and the process repeated from the beginning in order for the sanitization to take place.

When the sanitization process is complete, the specified disks are in a sanitized state. You return the sanitized disks to the spare disk pool with the disk sanitize release command.

Selective disk sanitization

Selective disk sanitization consists of physically obliterating data in specified files or volumes while preserving all other data located on the affected aggregate for continued user access. Because a file can be stored on multiple disks, there are three parts to the process.

To selectively sanitize data contained in an aggregate, you must carry out three general tasks.

  • Delete the files, directories or volumes from the aggregate that contains them.

You must also delete any volume Snapshot copies that contain data from those files, directories, or volumes.

  • Migrate the data that you want to preserve to a new set of disks in a destination aggregate on the same storage system.

You migrate data using the ndmpcopy command (see the example after this list).

  • Destroy the original aggregate and sanitize all the disks that were RAID group members in that aggregate.
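As a minimal illustration of the migration step (the volume names here are hypothetical, and ndmpcopy options such as authentication are omitted), copying the data you want to preserve from a volume on the original aggregate to a volume on the destination aggregate could look like:

ndmpcopy /vol/srcvol /vol/dstvol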


Tips for creating and backing up aggregates containing data that will be sanitized

If you are creating or backing up aggregates to contain data that might need to be sanitized, following some simple guidelines will reduce the time it takes to sanitize your data.

  • Make sure your aggregates containing sensitive data are not larger than they need to be.

If they are larger than needed, sanitization requires more time, disk space, and bandwidth.

  • When you back up aggregates containing sensitive data, avoid backing them up to aggregates that also contain large amounts of nonsensitive data.

This will reduce the resources required to move nonsensitive data before sanitizing sensitive data.

Before You Begin

Before you can use the disk sanitization feature, you must install the disk sanitization license.

Attention:

Once installed on a storage system, the license for disk sanitization is permanent.

The disk sanitization license prohibits the following commands from being used on the storage system:

  • dd (to copy blocks of data)
  • dumpblock (to print dumps of disk blocks)
  • setflag wafl_metadata_visible (to allow access to internal WAFL files)

For more information about licenses, see the System Administration Guide.

Considerations

You can sanitize any disk that has spare status.

If your storage system is using software-based disk ownership, you must ensure that the disks you want to sanitize have been assigned ownership. You cannot sanitize unowned disks.

Steps

  1. Verify that the disks that you want to sanitize do not belong to a RAID group in any existing aggregate by entering the following command: sysconfig -r

The disks that you want to sanitize should be listed with spare status.

Note: If the expected disks are not displayed, they have not been assigned ownership. You must assign ownership to a disk before you can sanitize it.

  2. Sanitize the specified disk or disks of all existing data by entering the following command:
    disk sanitize start [-p pattern1|-r [-p pattern2|-r [-p pattern3|-r]]] [-c cycle_count] disk_list


Attention:

Do not turn off the storage system, disrupt the storage connectivity, or remove target disks while sanitizing. If sanitizing is interrupted while target disks are being formatted, the disks must be reformatted before sanitizing can finish.

If you need to abort the sanitization process, you can do so by using the disk sanitize abort command. If the specified disks are undergoing the disk formatting phase of sanitization, the abort will not occur until the disk formatting is complete. Once the sanitizing is stopped, Data ONTAP displays a message informing you that sanitization was stopped.

-p pattern1 -p pattern2 -p pattern3 specifies a cycle of one to three user-defined hex byte overwrite patterns that can be applied in succession to the disks being sanitized. The default pattern is three passes, using 0x55 for the first pass, 0xaa for the second pass, and 0x3c for the third pass.

-r replaces a patterned overwrite with a random overwrite for any or all of the passes.

-c cycle_count specifies the number of times the specified overwrite patterns will be applied. The default value is one cycle. The maximum value is seven cycles.

disk_list specifies a space-separated list of the IDs of the spare disks to be sanitized.

  3. To check the status of the disk sanitization process, enter the following command:
    disk sanitize status [disk_list]

When sanitization of a disk completes, that disk is put into the maintenance pool and displayed as sanitized. The serial numbers of the sanitized disks are written to /etc/sanitized_disks.

  4. To release sanitized disks from the pool of maintenance disks for reuse as spare disks, enter the following command:
    disk sanitize release disk_list

Data ONTAP moves the specified disks from the maintenance pool to the spare pool.

Note: Rebooting the storage system or removing and reinserting a disk that has been sanitized moves that disk from the maintenance pool to the broken pool.

Examples

The following command applies the default three disk sanitization overwrite patterns for one cycle (for a total of 3 overwrites) to the specified disks, 7.6, 7.7, and 7.8:

disk sanitize start 7.6 7.7 7.8

The following command applies the three default disk sanitization overwrite patterns for six cycles (for a total of 18 overwrites) to the specified disks:

disk sanitize start -c 6 7.6 7.7 7.8
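The syntax above also allows mixing user-specified hex patterns with random passes. As an illustrative sketch (this example is mine, not from the product documentation), the following would overwrite with 0x55, then random data, then 0x3c, for seven cycles (21 passes per disk):

disk sanitize start -p 0x55 -r -p 0x3c -c 7 7.6 7.7 7.8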

After You Finish

You can monitor the status of the sanitization process by using the /etc/sanitized_disks and /etc/sanitization.log files:

  • Status for the sanitization process is written to the /etc/sanitization.log file every 15 minutes.
  • The /etc/sanitized_disks file contains the serial numbers of all drives that have been successfully sanitized. For every invocation of the disk sanitize start command, the serial numbers of the newly sanitized disks are appended to the file.

You can verify that all of the disks were successfully sanitized by checking the /etc/sanitized_disks file.

Categories: NetApp Tags:

How to Migrate MOSS Indexes onto a NetApp LUN to Support SMMOSS

07/15/2010 Leave a comment

NetApp SMMOSS, or “SnapManager 5.0 for Microsoft Office SharePoint Server”, now protects SharePoint search server index files. Unfortunately, the “SnapManager 5 for Microsoft Office SharePoint Server Installation and Administration Guide” does not fully document this procedure, and a search of the NetApp NOW site turns up little additional explanation.

The process becomes even more complicated when you have split the two indexing services that run within a SharePoint farm onto two separate servers. These two indexing services are:

  1. “Office SharePoint Server Search” service
  2. “Windows SharePoint Services Help Search” service

Steps to Migrate Index Files onto a LUN:

  1. Install the SMMOSS Agent. The agent must be installed on both servers if you have separated the indexing services.
  2. Configure the agent as a member agent.
  3. From the SMMOSS web administration interface, ensure that this agent is joined and active.
  4. Log onto your Index Server with Administrator privileges.

NOTE: If your Index Service and Query Service are installed on separate dedicated servers, you will have to mount one LUN to each of the two servers and then log into each to execute the command. Ensure that you run the correct command on each server.

The -o flag should be run on the server running indexing for the “Office SharePoint Server Search” service.

The -s flag should be run on the server running the “Windows SharePoint Services Help Search” service.

Check Central Administration to determine which servers in your farm are running these services.

  5. Click Start > Run.
  6. Type cmd, then click OK or hit Enter. The command prompt window should appear.
  7. Navigate to: %PROGRAMFILES%\NetApp\SnapManager for SharePoint Server\VaultClient\bin\
  8. If the LUN that you are migrating to has drive letter “F”, enter the following commands to move the two index files:

smmossindextool.exe -o -d F
smmossindextool.exe -s -d F

Note: -o and -s cannot be used in the same command; they must be run separately. For example, the command smmossindextool.exe -o -s -d F will fail.

You will know the migration has completed successfully when you see “Operation completed Successfully” at the command prompt; it may take several minutes to complete. Once the migration is complete, the osearch service should be restarted.

Following is the help info for the command options.

SMMOSSIndexTool.EXE 5.0.0.0 – SharePoint Index File Migration Tool

Copyright (c) 2001-2010 NetApp, Inc.

 Usage:

   SMMOSSIndexTool.EXE  [-o|-s] [-ssp sspname] -d mountpoint

List of commands and parameters:

================================

   -o

     Move OSearch search index files.

   -s

     Move SPSearch search index files.

   -ssp sspName

     Specify name of the SSP to move. If not specified, all SSPs will be moved.

   -d mountpoint

     Specify the destination mount point where the search index files will be moved to.

 Examples:

———

  SMMOSSIndexTool.exe  -d G

  SMMOSSIndexTool.exe  -o -d H

  SMMOSSIndexTool.exe  -s -d I

  SMMOSSIndexTool.exe  -o -ssp sspName -d E

Categories: SMMOSS Tags: ,

How to Access NetApp SnapShots

07/14/2010 Leave a comment

The following explains the process of accessing NetApp snapshots on a filer from a client machine.

Prerequisites: The volume option nosnapdir must be set to off. To check the options on a volume, log into the CLI and run:

vol options volname
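If nosnapdir is currently on, you can turn it off with a command along these lines (the volume name vol1 is only an example; substitute your own):

vol options vol1 nosnapdir off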

Snapshots without a qtree:

  1. From a Windows client, click Start > Run
  2. Enter the UNC path \\filer\~snapshot
  3. Click OK or hit Enter
  4. An Explorer window opens, presenting the available snapshots

Snapshots with a qtree:

  1. From a Windows client, click Start > Run
  2. Enter the UNC path \\filer\qtree\~snapshot
  3. Click OK or hit Enter
  4. An Explorer window opens, presenting the available snapshots
Categories: NetApp Tags:

Script NetApp SnapDrive Commands

06/22/2010 Leave a comment

To script NetApp SnapDrive commands you must use “sdcli”, the SnapDrive Command Line Interface.

The sdcli commands consist of three input parameters, which must be specified in the correct order, followed by one or more command-line switches. You can specify the command-line switches in any order.  

Before You Begin

When you use the sdcli command-line utility on a Windows 2008 server, you must be logged in as Administrator, not as a user with administrative rights.  

Considerations

Command-line switches are case-sensitive. For instance, the -d switch refers to a single drive letter, while the -D switch refers to one or more drive letters separated by spaces.

Steps

  1. Using a host that has SnapDrive installed, select Start > Run.
  2. Type cmd in the dialog box entry field, and then click OK.
  3. After the Windows command prompt window opens, navigate to the directory on your host where SnapDrive is installed. Example: cd \Program Files\NetApp\SnapDrive\
  4. Enter the individual command you want to run, making sure to include all input parameters in the proper order and to specify both required and desired command-line switches in any order. Example: sdcli disk disconnect -d R. Alternatively, enter the name and path of the automation script you want to run. Example: C:\SnapDrive Scripts\disconnect_R_from_host4.bat (a sketch of such a script follows this list).
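As a minimal sketch of what an automation script like disconnect_R_from_host4.bat might contain (the script body here is hypothetical; only the sdcli call is taken from the example above):

@echo off
rem disconnect_R_from_host4.bat - hypothetical wrapper around sdcli
rem Run from the SnapDrive installation directory so sdcli resolves.
cd /d "C:\Program Files\NetApp\SnapDrive"
rem Disconnect the LUN currently mounted as drive R:
sdcli disk disconnect -d R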
Categories: NetApp Tags: , ,

Move MOSS Propagation Index to Support SMMOSS 5.0

06/21/2010 Leave a comment

In order to support NetApp SMMOSS 5.0, the index file folder “propagationlocation” created on the Query server within a MOSS farm must reside on a LUN. In the following scenario, the Index service and Query service reside on separate dedicated servers. The Index server copies the contents of the “C:\Program Files\Microsoft Office Servers\12.0\Data\Office Server\Applications” folder to the share on the Query server named “searchindexpropagation”. By default, the “searchindexpropagation” share is located at the same path as on the Index server: “C:\Program Files\Microsoft Office Servers\12.0\Data\Office Server\Applications”.

Steps:

  1. On your NetApp SAN, create a new vol. I first checked the size of my index files to determine how large a vol would be needed. My index files, located at “C:\Program Files\Microsoft Office Servers\12.0\Data\Office Server\Applications”, were about 3.5GB in size, so I created a 25GB vol with volume reservations and a 20% snapshot reserve.
  2. Create a qtree on the new vol.
  3. Install the Microsoft iSCSI Initiator on the server and configure it to connect to the NetApp SAN that will host your new vol.
  4. Reboot the server after installing and configuring the iSCSI Initiator.
  5. Install SnapDrive 6.1.0.x onto the MOSS query server.
  6. Use SnapDrive to create a new LUN on your new vol.
  7. Once complete, your LUN is presented to the server as a new drive.
  8. Create a folder on your new drive. In my example, my new drive letter is “e:”. I created a folder named “propagationlocation”, which results in the following new path: “e:\propagationlocation”.
  9. You can move the propagationlocation folder to a new location via Central Administration or the command line.
  • Central Administration:
    • Access the Search Service Settings page for the Query Server:
      “Central Administration > Operations > Services on Server > Office SharePoint Server Search Service Settings”
    • Change the default location within the “Query server index file location:” field to your new location; in my example that is: e:\propagationlocation
  • STSADM.EXE
    • On the Query Server navigate to:
      “cd %commonprogramfiles%\Microsoft Shared\Web Server extensions\12\Bin”
    • Enter the following command:
      “STSADM.EXE -o osearch -propagationlocation e:\propagationlocation”

To confirm your changes, go to Central Administration > Operations > Services on Server > Office SharePoint Server Search Service Settings and check whether the “Query server index file location:” field has been updated. You can also check the original location “C:\Program Files\Microsoft Office Servers\12.0\Data\Office Server\Applications”, which should no longer contain files. Lastly, check the new folder location, e:\propagationlocation, to ensure the files are present.

How to use the option cifs.home_dir_namestyle domain

06/10/2010 Leave a comment
The following example shows how to enable CIFS home directories with the home_dir_namestyle option set to domain.

Solution
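Note: These steps assume the name style has already been set to domain on the filer, which in Data ONTAP 7-mode is done with:

 >options cifs.home_dir_namestyle domain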
1. Edit the /etc/cifs_homedir.cfg file to contain the path where the home directories will exist.

 Note: The home directories will actually exist within folders named after the NetBIOS name of the domain that each user belongs to.

 >wrfile -a /etc/cifs_homedir.cfg /vol/vol1/homedir

2. Create a folder for each domain in the directory that you entered in the /etc/cifs_homedir.cfg file.

3. Name each folder the same as the NetBIOS domain name of each domain that the users belong to, for example HQ and UK.

You will now have folders like the following:

/vol/vol1/homedir/hq/
/vol/vol1/homedir/uk/

4. Create a folder for each user within the correct domain named folder.

5. For a user in the HQ domain, create a folder in the HQ folder.

6. For a user in the UK domain, create a folder in the UK folder.

You will now have folders like the following:

/vol/vol1/homedir/hq/hq_user
/vol/vol1/homedir/uk/uk_user

 7. Load the new CIFS homedir configuration into the filer:

 >cifs homedir load -f 

8. Test that the CIFS homedir domain name style is working:

 >cifs homedir showuser hq/hq_user

>cifs homedir showuser uk/uk_user

Categories: NetApp Tags: ,