Upgrade ACP firmware on NetApp 6240

June 20th, 2011

netapp1a> priv set advanced

Mount /vol/vol0 of the filer on another host and copy the ACP firmware files to the /etc/acpp_fw/ directory.

# ls -al /6240_ntap_1/etc/acpp_fw/
total 10668
drwxrwxrwx 2 root root 4096 Jun 20 17:18 .
drwxr-xr-x 28 root root 4096 Jun 20 17:10 ..
-r--r--r-- 1 root root 10886984 Jun 20 17:17 ACP-IOM3.0120.AFW
-r--r--r-- 1 root root 128 Jun 20 17:17 ACP-IOM3.0120.AFW.FVF

netapp1a*>
netapp1a*> storage download acp 7a.54.B ### 7a.54.B is the disk shelf module I am upgrading with new firmware

Downloading ACP firmware will not disrupt client access during that time.
However, normal ACP recovery capabilities will not be
available while the firmware upgrade is in progress.

Are you sure you want to continue with ACP processor firmware update? y
netapp1a*> Mon Jun 20 17:19:14 EDT [netapp1a: acp.command.sent:info]: Sent firmware download (image: ACP-IOM3.0120.AFW) command to 7a.54.B (192.168.1.71), (disk shelf serial number: SHJ000000002096).
Mon Jun 20 17:19:14 EDT [netapp1a: acp.command.response:info]: Command firmware download to 7a.54.B (192.168.1.71) was successful, (disk shelf serial number: SHJ000000002096).

netapp1a*> storage show acp

Alternate Control Path: Enabled
Ethernet Interface: e0P
ACP Status: Active
ACP IP Address: 192.168.3.130
ACP Domain: 192.168.0.1
ACP Netmask: 255.255.252.0
ACP Connectivity Status: Full Connectivity

Shelf Module Reset Cnt IP Address FW Version Module Type Status
—————– ———— ————— ———— ———— ——-
…..
7a.54.A 000 192.168.2.63 01.20 IOM3 active
7a.54.B 000 192.168.1.71 01.05 IOM3 inactive (upgrading firmware)
…..

Upgrading ACP firmware can take several minutes.
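While waiting, it helps to pick out which modules are still on the old firmware instead of eyeballing the whole table. A minimal sketch: the rows below are sample data copied from the output above, and the awk filter is my own addition, not an ONTAP feature.

```shell
# Flag shelf modules whose FW version (4th column) differs from the target.
# Sample rows stand in for live `storage show acp` output.
target="01.20"
sample='7a.54.A 000 192.168.2.63 01.20 IOM3 active
7a.54.B 000 192.168.1.71 01.05 IOM3 inactive'
echo "$sample" | awk -v t="$target" '$4 != t { print $1, "still on", $4 }'
```

On a live filer you would capture the shelf-module table from `storage show acp` and pipe it through the same filter.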

Then you’ll see the console message: “netapp1a*> Mon Jun 20 17:24:49 EDT [netapp1a: acp.upgrade.successful:info]: ACP module 7a.54.B (192.168.1.71) successfully upgraded firmware, (disk shelf serial number: SHJ000000002096).”

netapp1a*> storage show acp

Alternate Control Path: Enabled
Ethernet Interface: e0P
ACP Status: Active
ACP IP Address: 192.168.3.130
ACP Domain: 192.168.0.1
ACP Netmask: 255.255.252.0
ACP Connectivity Status: Full Connectivity

Shelf Module Reset Cnt IP Address FW Version Module Type Status
—————– ———— ————— ———— ———— ——-
……
7a.53.B 000 192.168.2.11 01.20 IOM3 active
7a.54.A 000 192.168.2.63 01.20 IOM3 active
7a.54.B 000 192.168.1.71 01.20 IOM3 active
……

netapp1a*> priv set admin

Done.

Disable ZFS file caching

July 7th, 2010

bash-3.00# zpool get all GTtest01
NAME PROPERTY VALUE SOURCE
GTtest01 size 1016G -
GTtest01 used 1.72G -
GTtest01 available 1014G -
GTtest01 capacity 0% -
GTtest01 altroot – default
GTtest01 health ONLINE -
GTtest01 guid 3542498748803264599 default
GTtest01 version 15 default
GTtest01 bootfs – default
GTtest01 delegation on default
GTtest01 autoreplace off default
GTtest01 cachefile – default
GTtest01 failmode wait default
GTtest01 listsnapshots on default

bash-3.00# zfs list -r GTtest01
NAME USED AVAIL REFER MOUNTPOINT
GTtest01 1.72G 998G 1.71G /GTtest01
bash-3.00# zfs get all GTtest01
NAME PROPERTY VALUE SOURCE
GTtest01 type filesystem -
GTtest01 creation Fri Jul 2 11:24 2010 -
GTtest01 used 1.72G -
GTtest01 available 998G -
GTtest01 referenced 1.71G -
GTtest01 compressratio 1.00x -
GTtest01 mounted yes -
GTtest01 quota none default
GTtest01 reservation none default
GTtest01 recordsize 128K default
GTtest01 mountpoint /GTtest01 default
GTtest01 sharenfs off default
GTtest01 checksum on default
GTtest01 compression off default
GTtest01 atime on default
GTtest01 devices on default
GTtest01 exec on default
GTtest01 setuid on default
GTtest01 readonly off default
GTtest01 zoned off default
GTtest01 snapdir hidden default
GTtest01 aclmode groupmask default
GTtest01 aclinherit restricted default
GTtest01 canmount on default
GTtest01 shareiscsi off default
GTtest01 xattr on default
GTtest01 copies 1 default
GTtest01 version 4 -
GTtest01 utf8only off -
GTtest01 normalization none -
GTtest01 casesensitivity sensitive -
GTtest01 vscan off default
GTtest01 nbmand off default
GTtest01 sharesmb off default
GTtest01 refquota none default
GTtest01 refreservation none default
GTtest01 primarycache all default
GTtest01 secondarycache all default

GTtest01 usedbysnapshots 0 -
GTtest01 usedbydataset 1.71G -
GTtest01 usedbychildren 2.55M -
GTtest01 usedbyrefreservation 0 -
bash-3.00# zfs set primarycache=none GTtest01
bash-3.00# zfs set secondarycache=none GTtest01
bash-3.00# zfs get all GTtest01
NAME PROPERTY VALUE SOURCE
GTtest01 type filesystem -
GTtest01 creation Fri Jul 2 11:24 2010 -
GTtest01 used 1.72G -
GTtest01 available 998G -
GTtest01 referenced 1.71G -
GTtest01 compressratio 1.00x -
GTtest01 mounted yes -
GTtest01 quota none default
GTtest01 reservation none default
GTtest01 recordsize 128K default
GTtest01 mountpoint /GTtest01 default
GTtest01 sharenfs off default
GTtest01 checksum on default
GTtest01 compression off default
GTtest01 atime on default
GTtest01 devices on default
GTtest01 exec on default
GTtest01 setuid on default
GTtest01 readonly off default
GTtest01 zoned off default
GTtest01 snapdir hidden default
GTtest01 aclmode groupmask default
GTtest01 aclinherit restricted default
GTtest01 canmount on default
GTtest01 shareiscsi off default
GTtest01 xattr on default
GTtest01 copies 1 default
GTtest01 version 4 -
GTtest01 utf8only off -
GTtest01 normalization none -
GTtest01 casesensitivity sensitive -
GTtest01 vscan off default
GTtest01 nbmand off default
GTtest01 sharesmb off default
GTtest01 refquota none default
GTtest01 refreservation none default
GTtest01 primarycache none local
GTtest01 secondarycache none local

GTtest01 usedbysnapshots 0 -
GTtest01 usedbydataset 1.71G -
GTtest01 usedbychildren 3.14M -
GTtest01 usedbyrefreservation 0 -
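Instead of dumping every property again, the two cache settings can be checked directly. A sketch: the sample lines mimic what `zfs get -H -o property,value primarycache,secondarycache GTtest01` would print; on the host you would pipe the real command instead.

```shell
# Succeeds (and prints confirmation) only if both cache properties are 'none'.
sample='primarycache none
secondarycache none'
echo "$sample" | awk '$2 != "none" { bad = 1 } END { exit bad }' \
  && echo "file caching disabled"
```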

Discover storage in Solaris 10

July 7th, 2010

Zoning is already done and LUNs are provisioned from a Sun 7410 appliance.

bash-3.00# format
Searching for disks…done
AVAILABLE DISK SELECTIONS: // there are 2 new drives in addition to the root drive
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c3t2100001B329A5064d4
// shows 2 disks for 2 paths
/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,4
2. c4t2100001B329A1F65d4
/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,4
Specify disk (enter its number): 1
selecting c3t2100001B329A5064d4
[disk formatted]
Disk not labeled. Label it now? y

FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
repair – repair a defective sector
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
inquiry – show vendor, product and revision
volname – set 8-character volume name
! – execute , then return
quit
format> q

bash-3.00# cfgadm -a // list FC disks
Ap_Id Type Receptacle Occupant Condition
c3 fc-fabric connected configured unknown
c3::2100001b329a5064 disk connected configured unknown
c4 fc-fabric connected configured unknown
c4::2100001b329a1f65 disk connected configured unknown
bash-3.00# cfgadm -c configure c3::2100001b329a5064 // discover LUNs on all paths
bash-3.00# cfgadm -c configure c4::2100001b329a1f65
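With more than a couple of fabric controllers, the attachment points can be pulled out of `cfgadm -a` instead of typed by hand. A sketch: the sample mimics the listing above, and the echo is a dry run that you would replace with the real cfgadm call on a live host.

```shell
# Pick every fc-fabric attachment point and (dry-run) configure it.
sample='c3 fc-fabric connected configured unknown
c4 fc-fabric connected configured unknown'
echo "$sample" | awk '$2 == "fc-fabric" { print $1 }' |
while read -r ap; do
  echo "would run: cfgadm -c configure $ap"
done
```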

bash-3.00# format
Searching for disks…done

c3t2100001B329A5064d6: configured with capacity of 1023.95GB

AVAILABLE DISK SELECTIONS: // 2 LUNs with 2 paths each, total 4 disks visible to OS
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c3t2100001B329A5064d4

/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,4
2. c3t2100001B329A5064d6

/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,6
3. c4t2100001B329A1F65d4
/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,4
4. c4t2100001B329A1F65d6

/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,6
Specify disk (enter its number): 2
selecting c3t2100001B329A5064d6
[disk formatted]
Disk not labeled. Label it now? y

FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
repair – repair a defective sector
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
inquiry – show vendor, product and revision
volname – set 8-character volume name
! – execute , then return
quit
format> p

PARTITION MENU:
0 – change `0′ partition
1 – change `1′ partition
2 – change `2′ partition
3 – change `3′ partition
4 – change `4′ partition
5 – change `5′ partition
6 – change `6′ partition
7 – change `7′ partition
select – select a predefined table
modify – modify a predefined partition table
name – name the current table
print – display the current table
label – write partition map and label to the disk
! – execute , then return
quit
partition> p
Current partition table (default):
Total disk cylinders available: 44556 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 0 0 (0/0/0) 0
2 backup wu 0 – 44555 1023.95GB (44556/0/0) 2147376420
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 – 44555 1023.95GB (44556/0/0) 2147376420
7 unassigned wm 0 0 (0/0/0) 0

partition> q

FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
repair – repair a defective sector
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
inquiry – show vendor, product and revision
volname – set 8-character volume name
! – execute , then return
quit
format> disk

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c3t2100001B329A5064d4

/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,4
2. c3t2100001B329A5064d6

/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,6
3. c4t2100001B329A1F65d4
/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,4
4. c4t2100001B329A1F65d6

/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,6
Specify disk (enter its number)[2]: 1
selecting c3t2100001B329A5064d4
[disk formatted]
format> p

PARTITION MENU:
0 – change `0′ partition
1 – change `1′ partition
2 – change `2′ partition
3 – change `3′ partition
4 – change `4′ partition
5 – change `5′ partition
6 – change `6′ partition
7 – change `7′ partition
select – select a predefined table
modify – modify a predefined partition table
name – name the current table
print – display the current table
label – write partition map and label to the disk
! – execute , then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 16250 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 0 0 (0/0/0) 0
2 backup wu 0 – 16249 499.91GB (16250/0/0) 1048385000
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 – 16249 499.91GB (16250/0/0) 1048385000
7 unassigned wm 0 0 (0/0/0) 0

partition> q

FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
repair – repair a defective sector
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
inquiry – show vendor, product and revision
volname – set 8-character volume name
! – execute , then return
quit
format> disk

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c3t2100001B329A5064d4

/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,4
2. c3t2100001B329A5064d6

/pci@600/pci@0/pci@9/bfa@0/fp@0,0/ssd@w2100001b329a5064,6
3. c4t2100001B329A1F65d4
/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,4
4. c4t2100001B329A1F65d6

/pci@600/pci@0/pci@c/bfa@0/fp@0,0/ssd@w2100001b329a1f65,6
Specify disk (enter its number)[1]:
selecting c3t2100001B329A5064d4
[disk formatted]
format> q

There are 2 LUNs which show up as 4 disks (2 paths each), so we run ‘stmsboot -e’ to configure multipath devices for the new LUNs:

bash-3.00# stmsboot -e

WARNING: stmsboot operates on each supported multipath-capable controller
detected in a host. In your system, these controllers are

/pci@600/pci@0/pci@9/bfa@0/fp@0,0
/pci@600/pci@0/pci@c/bfa@0/fp@0,0

If you do NOT wish to operate on these controllers, please quit stmsboot
and re-invoke with -D { fp | mpt | mpt_sas} to specify which controllers you wish
to modify your multipathing configuration for.

Do you wish to continue? [y/n] (default: y) y
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y
updating /platform/sun4v/boot_archive

After the reboot:

bash-3.00# format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c5t600144F0B16CBD0D00004BFD867B0004d0

/scsi_vhci/ssd@g600144f0b16cbd0d00004bfd867b0004
2. c5t600144F0B16CBD0D00004C29EFC80002d0

/scsi_vhci/ssd@g600144f0b16cbd0d00004c29efc80002
Specify disk (enter its number): ^D

Now only 2 disks are visible: with multipathing configured, each LUN appears as a single multipath device instead of one device per path.

With the ‘luxadm’ command we can see disk info and the paths to that disk:

bash-3.00# luxadm display /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Vendor: SUN
Product ID: Sun Storage 7410
Revision: 1.0
Serial Num:
Unformatted capacity: 512000.000 MBytes
Read Cache: Enabled
Minimum prefetch: 0x0
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):

/dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
/devices/scsi_vhci/ssd@g600144f0b16cbd0d00004bfd867b0004:c,raw
Controller /devices/pci@600/pci@0/pci@9/bfa@0/fp@0,0
Device Address 2100001b329a5064,4
Host controller port WWN 100000051ea29ba5
Class primary
State ONLINE
Controller /devices/pci@600/pci@0/pci@c/bfa@0/fp@0,0
Device Address 2100001b329a1f65,4
Host controller port WWN 100000051e7dd29d
Class secondary
State STANDBY

There are 2 paths: one is active (ONLINE), the other is a standby path.

bash-3.00# luxadm -v display /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Displaying information for: /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Vendor: SUN
Product ID: Sun Storage 7410
Revision: 1.0
Serial Num:
Unformatted capacity: 512000.000 MBytes
Read Cache: Enabled
Minimum prefetch: 0x0
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):

/dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
/devices/scsi_vhci/ssd@g600144f0b16cbd0d00004bfd867b0004:c,raw
Controller /devices/pci@600/pci@0/pci@9/bfa@0/fp@0,0
Device Address 2100001b329a5064,4
Host controller port WWN 100000051ea29ba5
Class primary
State ONLINE
Controller /devices/pci@600/pci@0/pci@c/bfa@0/fp@0,0
Device Address 2100001b329a1f65,4
Host controller port WWN 100000051e7dd29d
Class secondary
State STANDBY

The same applies to the 2nd disk.

‘mpathadm’ shows similar information:

bash-3.00# mpathadm list LU
/dev/rdsk/c5t600144F0B16CBD0D00004C29EFC80002d0s2
Total Path Count: 2
Operational Path Count: 2
/dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Total Path Count: 2
Operational Path Count: 2
Extended ‘mpathadm’ info:

bash-3.00# mpathadm list logical-unit /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
mpath-support: libmpscsi_vhci.so
/dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Total Path Count: 2
Operational Path Count: 2
bash-3.00# mpathadm show logical-unit /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Logical Unit: /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
mpath-support: libmpscsi_vhci.so
Vendor: SUN
Product: Sun Storage 7410
Revision: 1.0
Name Type: unknown type
Name: 600144f0b16cbd0d00004bfd867b0004
Asymmetric: yes
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA

Paths:
Initiator Port Name: 100000051ea29ba5
Target Port Name: 2100001b329a5064
Override Path: NA
Path State: OK
Disabled: no

Initiator Port Name: 100000051e7dd29d
Target Port Name: 2100001b329a1f65
Override Path: NA
Path State: OK
Disabled: no

Target Port Groups:
ID: 1
Explicit Failover: no
Access State: active optimized
Target Ports:
Name: 2100001b329a5064
Relative ID: 257

ID: 0
Explicit Failover: no
Access State: standby
Target Ports:
Name: 2100001b329a1f65
Relative ID: 1
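With several LUNs it is handy to scan the `mpathadm list LU` output for any LUN whose operational path count has dropped below its total. A sketch: the sample below is hypothetical, with the second LUN deliberately shown degraded to exercise the check (unlike the healthy output above).

```shell
# Report LUNs with fewer operational paths than total paths.
sample='/dev/rdsk/c5t600144F0B16CBD0D00004C29EFC80002d0s2
Total Path Count: 2
Operational Path Count: 2
/dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0s2
Total Path Count: 2
Operational Path Count: 1'
echo "$sample" | awk '
  /^\// { lu = $1 }
  /Total Path Count/ { total = $4 }
  /Operational Path Count/ { if ($4 < total) print lu, "degraded:", $4 "/" total }'
```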

To see traffic going through each path, run:

bash-3.00# iostat -xYnt 1
tty
tin tout
1 38
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004C29EFC80002d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004C29EFC80002d0.t2100001b329a5064
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004C29EFC80002d0.t2100001b329a5064.fp2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004C29EFC80002d0.t2100001b329a1f65
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004C29EFC80002d0.t2100001b329a1f65.fp3
0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004BFD867B0004d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004BFD867B0004d0.t2100001b329a5064
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004BFD867B0004d0.t2100001b329a5064.fp2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004BFD867B0004d0.t2100001b329a1f65
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t600144F0B16CBD0D00004BFD867B0004d0.t2100001b329a1f65.fp3
4.5 3.2 355.9 86.3 0.0 0.3 0.0 43.7 0 4 c0d0

To see the mapping of non-STMS to STMS devices:

bash-3.00# stmsboot -L

non-STMS device name STMS device name
——————————————————————
/dev/rdsk/c4t2100001B329A1F65d4 /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0
/dev/rdsk/c3t2100001B329A5064d4 /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0
/dev/rdsk/c3t2100001B329A5064d6 /dev/rdsk/c5t600144F0B16CBD0D00004C29EFC80002d0
/dev/rdsk/c4t2100001B329A1F65d6 /dev/rdsk/c5t600144F0B16CBD0D00004C29EFC80002d0
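The same mapping can be sanity-checked by counting path devices per STMS device; each LUN should show one entry per path (two here). A sketch over the table above (the awk/sort pipeline is my own addition, not part of stmsboot):

```shell
# Count non-STMS (per-path) devices behind each STMS multipath device.
sample='/dev/rdsk/c4t2100001B329A1F65d4 /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0
/dev/rdsk/c3t2100001B329A5064d4 /dev/rdsk/c5t600144F0B16CBD0D00004BFD867B0004d0
/dev/rdsk/c3t2100001B329A5064d6 /dev/rdsk/c5t600144F0B16CBD0D00004C29EFC80002d0
/dev/rdsk/c4t2100001B329A1F65d6 /dev/rdsk/c5t600144F0B16CBD0D00004C29EFC80002d0'
echo "$sample" | awk '{ n[$2]++ } END { for (d in n) print d, n[d], "paths" }' | sort
```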

wget behind proxy

March 26th, 2010

One way to get wget to work behind a proxy is to create a .wgetrc file in the user’s home directory. In it put this:

http_proxy = http://proxy_server_yourdomain.com:8080/

If your proxy uses authentication, also add the following lines to your .wgetrc file:

proxy_user=user
proxy_password=password
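The steps above can be scripted; a sketch that writes the file into a temporary directory so it is safe to run anywhere (the proxy host, user, and password are placeholders to substitute with your own):

```shell
# Write a .wgetrc with proxy settings (placeholder values, not a real proxy).
dir=$(mktemp -d)
cat > "$dir/.wgetrc" <<'EOF'
http_proxy = http://proxy_server_yourdomain.com:8080/
proxy_user = user
proxy_password = password
EOF
wc -l < "$dir/.wgetrc"    # three settings written
```

On a real host you would write to `$HOME/.wgetrc` instead of the temp directory.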

Getting Blastwave to work on Solaris

March 26th, 2010

Blastwave is a Solaris package repository that offers features similar to many Linux package repositories. It comes with its own package manager, which you need to install before you can start using the repository. If the host on which you want to install the Blastwave package manager is behind a proxy, see the note above on configuring wget to work behind a proxy server.

To start using it, go to http://www.blastwave.org/ and download pkgutil for Sparc or Intel.

bash-3.00# wget http://download.blastwave.org/csw/pkgutil_sparc.pkg

bash-3.00# pkgadd -d pkgutil_sparc.pkg

Then you need to download the package catalog:

/opt/csw/bin/pkgutil --catalog

Then you can search for packages. I needed iozone, so here I check whether it is available:

bash-3.00# /opt/csw/bin/pkgutil -a |grep -i iozone
iozone CSWiozone 3.217,REV=2004.08.12 948.2 KB

Now I’ll install the iozone package:

bash-3.00# /opt/csw/bin/pkgutil -i CSWiozone
Parsing catalog, may take a while…
New packages: CSWcommon-1.4.6,REV=2008.04.28 CSWiozone-3.217,REV=2004.08.12
Total size: 971.2 KB
2 packages to fetch. Do you want to continue? [Y,n] y
Fetching CSWcommon-1.4.6,REV=2008.04.28…
–2010-03-26 09:56:45– http://download.blastwave.org/csw/unstable/sparc/5.10/common-1.4.6,REV=2008.04.28-SunOS5.8-sparc-CSW.pkg
Connecting to 165.135.4.112:8080… connected.
Proxy request sent, awaiting response… 200 OK
Length: 23552 (23K) [application/octet-stream]
Saving to: `/var/opt/csw/pkgutil/packages/common-1.4.6,REV=2008.04.28-SunOS5.8-sparc-CSW.pkg’

100%[================================================================================>] 23,552 87.1K/s in 0.3s

2010-03-26 09:56:45 (87.1 KB/s) – `/var/opt/csw/pkgutil/packages/common-1.4.6,REV=2008.04.28-SunOS5.8-sparc-CSW.pkg’ saved [23552/23552]

MD5 for CSWcommon-1.4.6,REV=2008.04.28 matched.
Fetching CSWiozone-3.217,REV=2004.08.12…
–2010-03-26 09:56:45– http://download.blastwave.org/csw/unstable/sparc/5.10/iozone-3.217,REV=2004.08.12-SunOS5.8-sparc-CSW.pkg.gz
Connecting to 165.135.4.112:8080… connected.
Proxy request sent, awaiting response… 200 OK
Length: 971005 (948K) [application/x-gzip]
Saving to: `/var/opt/csw/pkgutil/packages/iozone-3.217,REV=2004.08.12-SunOS5.8-sparc-CSW.pkg.gz’

100%[================================================================================>] 971,005 539K/s in 1.8s

2010-03-26 09:56:47 (539 KB/s) – `/var/opt/csw/pkgutil/packages/iozone-3.217,REV=2004.08.12-SunOS5.8-sparc-CSW.pkg.gz’ saved [971005/971005]

MD5 for CSWiozone-3.217,REV=2004.08.12 matched.
Installing CSWcommon-1.4.6,REV=2008.04.28

Processing package instance from

common – common files and dirs for CSW packages(sparc) 1.4.6,REV=2008.04.28
http://www.blastwave.org/ packaged for CSW by Philip Brown
## Executing checkinstall script.
## Processing package information.
## Processing system information.
7 package pathnames are already properly installed.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing common – common files and dirs for CSW packages as

## Installing part 1 of 1.
/opt/csw/bin/sparc
/opt/csw/doc
/opt/csw/info
/opt/csw/lib/32
/opt/csw/lib/64
/opt/csw/lib/locale
/opt/csw/lib/sparc
/opt/csw/lib/sparcv8
/opt/csw/man
/opt/csw/share/locale/bg/LC_TIME
/opt/csw/share/locale/cs/LC_TIME
/opt/csw/share/locale/da/LC_TIME
/opt/csw/share/locale/de/LC_TIME
/opt/csw/share/locale/el/LC_TIME
/opt/csw/share/locale/es/LC_TIME
/opt/csw/share/locale/fr/LC_TIME
/opt/csw/share/locale/gl/LC_TIME
/opt/csw/share/locale/it/LC_TIME
/opt/csw/share/locale/ja/LC_TIME
/opt/csw/share/locale/ko/LC_TIME
/opt/csw/share/locale/locale.alias
/opt/csw/share/locale/nl/LC_TIME
/opt/csw/share/locale/no/LC_TIME
/opt/csw/share/locale/pl/LC_TIME
/opt/csw/share/locale/pt/LC_TIME
/opt/csw/share/locale/pt_BR/LC_TIME
/opt/csw/share/locale/ru/LC_TIME
/opt/csw/share/locale/sk/LC_TIME
/opt/csw/share/locale/sl/LC_TIME
/opt/csw/share/locale/sv/LC_TIME
/opt/csw/share/locale/zh/LC_TIME
[ verifying class ]

Installation of was successful.
Installing CSWiozone-3.217,REV=2004.08.12

Processing package instance from

iozone – IO benchmarking tool(sparc) 3.217,REV=2004.08.12
” Original Author: William Norcott (wnorcott@us.oracle.com)”,
” 4 Dunlap Drive”,
” Nashua, NH 03060″,
” “,
” Enhancements: Don Capps (capps@iozone.org)”,
” 7417 Crenshaw”,
” Plano, TX 75025″,
” “,
” Copyright 1991, 1992, 1994, 1998, 1999, 2002 William D. Norcott”,
” “,
” License to freely use and distribute this software is hereby granted “,
” by the author, subject to the condition that this copyright notice “,
” remains intact. The author retains the exclusive right to publish “,
” derivative works based on this work, including, but not limited to, “,
” revised versions of this work”,
” “,
” Other contributors:”,
” “,
” Don Capps (Hewlett Packard) capps@iozone.org”,
Using as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

Installing iozone – IO benchmarking tool as

## Installing part 1 of 1.
/opt/csw/bin/iozone
/opt/csw/bin/iozone64
/opt/csw/share/doc/iozone/docs/IOzone_msword_98.doc
/opt/csw/share/doc/iozone/docs
/opt/csw/share/doc/iozone/docs/IOzone_msword_98.pdf
/opt/csw/share/doc/iozone/docs/Iozone_ps.gz
/opt/csw/share/doc/iozone/docs/Run_rules.doc
/opt/csw/share/man/man1/iozone.1
[ verifying class ]

Installation of was successful.

Done.

Configure an OpenSolaris host to have HBAs in target mode (COMSTAR)

February 24th, 2010

Configuring an OpenSolaris host to have its HBAs in target mode (COMSTAR) effectively turns the host into one that acts as a storage array. Most Emulex 2 Gb and up and QLogic 4 Gb and up HBAs are supported in target mode by the COMSTAR project; see this thread for more info about supported HBAs: http://opensolaris.org/jive/thread.jspa?threadID=81627

We start by checking the installed HBAs:

root@opensolaris:~# fcinfo hba-port
HBA Port WWN: 10000000c9328447
Port Mode: Initiator
Port ID: 0
OS Device Name: Not Applicable
Manufacturer: Emulex
Model: LP9002L
Firmware Version: 3.93a0 (C2D3.93A0)
FCode/BIOS Version: none
Serial Number: 0000C9328447
Driver Name: emlxs
Driver Version: 2.50i (2009.11.10.12.30)
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 20000000c9328447
HBA Port WWN: 10000000c9328448
Port Mode: Initiator
Port ID: 0
OS Device Name: Not Applicable
Manufacturer: Emulex
Model: LP9002L
Firmware Version: 3.93a0 (C2D3.93A0)
FCode/BIOS Version: none
Serial Number: 0000C9328448
Driver Name: emlxs
Driver Version: 2.50i (2009.11.10.12.30)
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 20000000c9328448

We need to change the port mode to target. That is done by editing the emlxs driver config file (/kernel/drv/emlxs.conf); you need to change target-mode from 0 to 1:

# target-mode: Controls COMSTAR target mode support for an adapter port.
#
# 0 = Disables target mode support. Enables initiator mode support.
# 1 = Enables target mode support. Disables initiator mode support.
#
# Usage examples:
# target-mode=1; Sets global default for target mode
# emlxs0-target-mode=0; emlxs0 will be an initiator port
# emlxs1-target-mode=1; emlxs1 will be a target port
#
# Range: Min:0 Max:1 Default:0
#
target-mode=1;

In the next section of the file, uncomment the ddi-forceattach line, so you’ll have:

ddi-forceattach=1;

For the driver settings to take effect, you need to reboot.

After the reboot, make sure the COMSTAR service is running (enable it if it is not):

# svcs stmf
# svcadm enable stmf
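That check-then-enable step can be sketched as below. The state is hard-coded so the snippet runs without a Solaris box; on a live host you would use the commented svcs line instead, and the echo would be the real `svcadm enable stmf`.

```shell
# Enable stmf only when it is not already online (dry run).
state="disabled"          # live host: state=$(svcs -H -o state stmf)
if [ "$state" != "online" ]; then
  echo "would run: svcadm enable stmf"
fi
```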

And now we can see that there are 2 targets:

root@opensolaris:~# stmfadm list-target
Target: wwn.10000000C9328448
Target: wwn.10000000C9328447

This is also confirmed by the fcinfo command:

root@opensolaris:~# fcinfo hba-port
HBA Port WWN: 10000000c9328447
Port Mode: Target
Port ID: 0
OS Device Name: Not Applicable
Manufacturer: Emulex
Model: LP9002L
Firmware Version: 3.93a0 (C2D3.93A0)
FCode/BIOS Version: none
Serial Number: 0000C9328447
Driver Name: emlxs
Driver Version: 2.50i (2009.11.10.12.30)
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 20000000c9328447
HBA Port WWN: 10000000c9328448
Port Mode: Target
Port ID: 0
OS Device Name: Not Applicable
Manufacturer: Emulex
Model: LP9002L
Firmware Version: 3.93a0 (C2D3.93A0)
FCode/BIOS Version: none
Serial Number: 0000C9328448
Driver Name: emlxs
Driver Version: 2.50i (2009.11.10.12.30)
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 20000000c9328448

Perform Live Upgrade on Solaris 10 x86, update 7 to update 8

February 16th, 2010

Download and mount the ISO image you want to use for the live upgrade.

bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
BE1 yes yes yes no –
BE2 yes no no yes –

Specify the source boot environment (BE1) and the new boot environment (Solaris_10_u8), which is created as a snapshot/clone of the source boot environment (BE1):

bash-3.00# lucreate -c "BE1" -n "Solaris_10_u8"
Checking GRUB menu…
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
WARNING: split filesystem file system type cannot inherit
mount point options <-> from parent filesystem file
type <-> because the two file systems have different types.
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
BE1 yes yes yes no –
BE2 yes no no yes –
Solaris_10_u8 yes no no yes –
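Before upgrading, it is worth confirming the new BE actually shows up in `lustatus`; a sketch (the sample table mirrors the output above; on a live host you would pipe `lustatus` itself):

```shell
# Succeed only if the named BE appears in the (sample) lustatus table.
be="Solaris_10_u8"
sample='BE1 yes yes yes no -
BE2 yes no no yes -
Solaris_10_u8 yes no no yes -'
echo "$sample" | awk -v be="$be" '$1 == be { found = 1 } END { exit !found }' \
  && echo "BE $be present"
```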

If you are doing a straight upgrade (not changing the mounting of any partitions), you can verify the filesystems that are part of your old and new boot environments (they should be the same):

bash-3.00# lufslist BE1
boot environment name: BE1
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem fstype device size Mounted on Mount Options
———————– ——– ———— ——————- ————–
/dev/zvol/dsk/rpool-vigilante/swap02 swap 12884901888 – -
/dev/zvol/dsk/rpool-vigilante/swap swap 2147483648 – -
rpool-vigilante/ROOT/BE1 zfs 36503444992 / -
rpool-vigilante/ROOT/BE1/opt zfs 813439488 /opt -
rpool-vigilante/ROOT/BE1/var zfs 4603225600 /var -
rpool-vigilante/export zfs 3617869312 /export -
rpool-vigilante/export/home zfs 3617849856 /export/home -
rpool-vigilante zfs 90195877888 /rpool-vigilante -
rpool-vigilante/suspend-data zfs 19456 /rpool-vigilante/vda/suspend-data -
bash-3.00# lufslist Solaris_10_u8
boot environment name: Solaris_10_u8

Filesystem fstype device size Mounted on Mount Options
———————– ——– ———— ——————- ————–
/dev/zvol/dsk/rpool-vigilante/swap02 swap 12884901888 – -
/dev/zvol/dsk/rpool-vigilante/swap swap 2147483648 – -
rpool-vigilante/ROOT/Solaris_10_u8 zfs 57545728 / -
rpool-vigilante/export zfs 3617869312 /export -
rpool-vigilante/export/home zfs 3617849856 /export/home -
rpool-vigilante/ROOT/Solaris_10_u8/opt zfs 274432 /opt -
rpool-vigilante zfs 90207728128 /rpool-vigilante -
rpool-vigilante/suspend-data zfs 19456 /rpool-vigilante/vda/suspend-data -
rpool-vigilante/ROOT/Solaris_10_u8/var zfs 28694528 /var -

To upgrade the Solaris_10_u8 boot environment, run:

bash-3.00# luupgrade -u -n Solaris_10_u8 -s /mnt/

System has findroot enabled GRUB
No entry for BE in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
63093 blocks
miniroot filesystem is
Mounting miniroot at
Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Saving GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE .
Updating package information on boot environment .
Package information successfully updated on boot environment .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file on boot
environment contains a log of the upgrade operation.
INFORMATION: The file on boot
environment contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system
maintenance is required or if additional media of the software
distribution must be installed.
The Solaris upgrade of the boot environment is complete.
Installing failsafe
Failsafe install is complete.

Now we need to activate the new boot environment so that the system will boot from it:

bash-3.00# luactivate Solaris_10_u8
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE
Saving existing file in top level dataset for BE as //etc/bootsign.prev.
A Live Upgrade Sync operation will be performed on startup of boot environment .

Setting failsafe console to .
Generating boot-sign for ABE
Saving existing file in top level dataset for BE as //etc/bootsign.prev.
Generating partition and slice information for ABE
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fzfs /dev/dsk/c0t0d0s0 /mnt

3. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Activation of boot environment successful.

Important note: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

We can verify that Solaris_10_u8 will be active on reboot:

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE1                        yes      yes    no        no     -
BE2                        yes      no     no        yes    -
Solaris_10_u8              yes      no     yes       no     -

Before the reboot, here is the content of /etc/release:
bash-3.00# cat /etc/release
Solaris 10 5/09 s10x_u7wos_08 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 30 March 2009
bash-3.00# uname -a
SunOS vigilante 5.10 Generic_142901-01 i86pc i386 i86pc

Now, we are ready to reboot the system:

bash-3.00# sync
bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
bash-3.00# Hangup

After the reboot:

bash-3.00# cat /etc/release
Solaris 10 10/09 s10x_u8wos_08a X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 September 2009
bash-3.00# uname -a
SunOS vigilante 5.10 Generic_141445-09 i86pc i386 i86pc
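For reference, the whole Live Upgrade cycle used in this post boils down to a handful of commands. This is only a sketch: the BE name and media mount point are the ones from the examples above, and the initial lucreate step (which created the Solaris_10_u8 BE before this excerpt) will differ on your system.

```shell
# Live Upgrade workflow sketch (Solaris 10, ZFS root) -- names are examples
lucreate -n Solaris_10_u8               # create the alternate boot environment (ABE)
luupgrade -u -n Solaris_10_u8 -s /mnt   # upgrade the ABE from the mounted install media
luactivate Solaris_10_u8                # mark the ABE active for the next boot
lustatus                                # verify "Active On Reboot" shows yes for the ABE
init 6                                  # reboot with init/shutdown, NOT reboot/halt/uadmin
```

Note that using init or shutdown (rather than reboot or halt) is mandatory here, as luactivate itself warns: otherwise the system will not boot into the target BE.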

Mount ISO image in Solaris 10 x86

February 16th, 2010

To mount an ISO image in Solaris 10 x86 (this should work on SPARC as well), do the following:

bash-3.00# lofiadm -a /export/home/user/sol-10-u8-ga-x86-dvd.iso /dev/lofi/1 ### create new lofi device
bash-3.00# mount -F hsfs -o ro /dev/lofi/1 /mnt ### mount lofi device
bash-3.00# ls -al /mnt/ ### Solaris 10 x86 install DVD image is mounted
total 1007
dr-xr-xr-x 2 root sys 4096 Sep 16 18:31 .
drwxr-xr-x 25 root root 26 Feb 2 13:33 ..
-r--r--r-- 1 root root 2048 Sep 16 18:31 .catalog
-r--r--r-- 1 root root 68 Aug 21 15:32 .cdtoc
dr-xr-xr-x 5 root root 2048 Sep 16 18:31 .install
lr-xr-xr-x 1 root root 33 Sep 16 18:30 .install_config -> ./Solaris_10/Misc/.install_config
-r--r--r-- 1 root root 388 Aug 21 15:32 .slicemapfile
-r--r--r-- 1 root root 20 Aug 21 15:32 .volume.inf
-r--r--r-- 1 root root 27 Sep 16 18:16 .volume.inf.1
-r--r--r-- 1 root root 27 Sep 16 18:17 .volume.inf.2
-r--r--r-- 1 root root 22 Sep 16 18:15 .volume.inf.3
-r--r--r-- 1 root root 22 Sep 16 18:15 .volume.inf.4
-r--r--r-- 1 root root 22 Sep 16 18:16 .volume.inf.5
-r--r--r-- 1 root root 6582 Aug 21 15:34 Copyright
-r--r--r-- 1 root root 487593 Aug 21 15:32 JDS-THIRDPARTYLICENSEREADME
dr-xr-xr-x 2 root root 2048 Sep 16 18:30 License
dr-xr-xr-x 7 root root 2048 Sep 16 18:31 Solaris_10
dr-xr-xr-x 5 root root 2048 Sep 16 18:31 boot
-r-xr-xr-x 1 root root 257 Sep 16 18:16 installer
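When you are done with the image, the reverse steps release the loopback device (same device and mount point as in the example above):

```shell
# Unmount the ISO and tear down the lofi device
umount /mnt
lofiadm -d /dev/lofi/1   # delete the lofi device backing the ISO
lofiadm                  # with no arguments, lists remaining lofi devices
```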

Downgrade Iphone 3G(s) 3.1.3 firmware to 3.1.2 firmware

February 14th, 2010

In case you have accidentally (or not) upgraded your iPhone 3G(S) to the 3.1.3 firmware, you can downgrade it to 3.1.2 and jailbreak it, provided that your iPhone's signature hashes (SHSH) are saved on Cydia (Saurik's server).

This post will help you downgrade to 3.1.2, but you may not have service on your iPhone after downgrading. In order to get service, follow the instructions in this post. Don't forget to change your /etc/hosts file in OS X (or the equivalent file under Windows) and add this line: 74.208.10.249 gs.apple.com. This tells iTunes to authenticate the 3.1.2 firmware upgrade (or downgrade in this case) against Cydia's server and not Apple's server. Without this change you'll get an error and won't be able to start the firmware downgrade. After this, install the afc2add package in Cydia, which fixes the directory structure of your iPhone that changed with the 3.1.2 firmware.

To set up tethering on your iPhone, follow this post. I was able to get quite a good download speed while tethering on my iPhone 3GS. The speed test was done on Speakeasy's DC servers:

iPhone 3GS tethering speed

Download Speed: 2783 kbps (347.9 KB/sec transfer rate)
Upload Speed: 292 kbps (36.5 KB/sec transfer rate)

Here you can read how to put your phone into recovery and DFU modes. Finally, all firmware images necessary to perform the above tasks can be found here.
After upgrading to the 3.1.3 firmware you'll lose (for now) the ability to unlock your phone; this is also true after you downgrade from 3.1.3 to 3.1.2.

Enable NFS v4 on Netapp filers

February 4th, 2010

If you run "options nfs", you can see the list of NFS-related tunables present in ONTAP (I used ONTAP 7.3.2); you'll notice that NFS v4 is off by default:

netapp> options nfs
nfs.acache.persistence.enabled on
nfs.assist.queue.limit       40
nfs.export.allow_provisional_access on
nfs.export.auto-update       on
nfs.export.exportfs_comment_on_delete on
nfs.export.harvest.timeout   1800
nfs.export.neg.timeout       3600
nfs.export.pos.timeout       36000
nfs.export.resolve.timeout   6
nfs.hide_snapshot            off
nfs.ipv6.enable              off
nfs.kerberos.enable          off
nfs.locking.check_domain     on
nfs.max_num_aux_groups       32
nfs.mount_rootonly           on
nfs.mountd.trace             off
nfs.netgroup.strict          off
nfs.notify.carryover         on
nfs.ntacl_display_permissive_perms off
nfs.per_client_stats.enable  off
nfs.require_valid_mapped_uid off
nfs.response.trace           off
nfs.response.trigger         60
nfs.tcp.enable               on
nfs.thin_prov.ejuke          off
nfs.udp.enable               on
nfs.udp.xfersize             32768
nfs.v2.df_2gb_lim            off
nfs.v3.enable                on
nfs.v4.acl.enable            off
nfs.v4.enable                off
nfs.v4.id.domain             mydomain.com
nfs.v4.read_delegation       off
nfs.v4.setattr_acl_preserve  off
nfs.v4.write_delegation      off
nfs.webnfs.enable            off
nfs.webnfs.rootdir           XXX
nfs.webnfs.rootdir.set       off

To enable NFS v4, run:
netapp> options nfs.v4.enable on

Verify the change by querying the option:

netapp> options nfs.v4.enable
nfs.v4.enable                on

While you are at it, it is useful to enable statistics collection on a per-client basis:

netapp> options nfs.per_client_stats.enable on
netapp> options nfs.per_client_stats.enable
nfs.per_client_stats.enable  on

As you can see, there are many NFS tunables in ONTAP, so you may experiment with them to get better performance; nfs.v4.read_delegation and nfs.v4.write_delegation in particular may be interesting for performance. Also, don't forget to mount shares on your client with the appropriate mount options to use NFS v4:

Solaris:

# mount -o vers=4 192.168.1.200:/vol/myvol /mnt/myvol

Linux:

# mount -t nfs4 192.168.1.200:/vol/myvol  /mnt/myvol
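To confirm that the client actually negotiated NFS v4, you can inspect the mounted filesystem. A quick check on the Linux client (the mount point is the one from the example above):

```shell
# The filesystem type column should read nfs4 for a v4 mount
grep /mnt/myvol /proc/mounts

# Or, with nfs-utils installed, show mount details including the vers= option
nfsstat -m
```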

Some interesting articles on tuning network parameters for faster transfers can be found here http://www.psc.edu/networking/projects/tcptune/ and here http://fasterdata.es.net/TCP-tuning//tcp-wan-perf.pdf. A good overview of NFS and a list of client mount options can be found here http://www.troubleshooters.com/linux/nfs.htm
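If you do decide to experiment with delegations, they are toggled like any other option. This is only a sketch using the option names from the listing above; delegations change client caching behavior, so evaluate them on a test filer before enabling in production:

```shell
# Enable NFSv4 delegations (sketch; test the workload impact first)
netapp> options nfs.v4.read_delegation on
netapp> options nfs.v4.write_delegation on
```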