HP-UX System Administrator's Guide: Routine Management Tasks: HP-UX 11i Version 3, Chapter 6: Managing System Performance

Measuring Performance

The saying, “you can’t manage what you don’t measure,” is especially true of system and workgroup performance. Here are some ways to gauge your workgroup’s performance against the “Guidelines” earlier in this section.

Checking Disk Load with sar and iostat

To see how disk activity is distributed across your disks, run sar -d with a time interval and frequency, for example:

sar -d 5 10

This runs sar -d ten times with a five-second sampling interval. The %busy column shows the percentage of time the disk (device) was busy during the sampling interval.

Compare the numbers for each of the disks the shared file systems occupy (note the Average at the end of the report).
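
The report might look something like this (the host name, devices, and values shown are illustrative):

HP-UX server1 B.11.31 U ia64    07/15/08

13:40:05   device   %busy   avque   r+w/s  blks/s  avwait  avserv
13:40:10   c2t0d0    6.40    0.50      10     160    4.80    7.90
           c5t1d0   78.20    1.30      96    1540    9.20   11.40
...
Average    c2t0d0    6.10    0.50      10     155    4.90    8.00
Average    c5t1d0   74.50    1.20      92    1480    9.10   11.30

In this sketch, c5t1d0 is consistently far busier than c2t0d0; if the shared file systems reside on c5t1d0, it is a candidate for load redistribution.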

Another way to sample disk activity is to run iostat with a time interval, for example:

iostat 5

This will report activity every five seconds. Look at the bps and sps columns for the disks (device) that hold shared file systems. bps shows the number of kilobytes transferred per second during the period; sps shows the number of seeks per second (ignore msps).
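
The output might resemble the following (illustrative devices and values):

  device       bps     sps    msps

  c2t0d0        24     3.1     1.0
  c5t1d0       610    82.4     1.0

A device such as c5t1d0 here, transferring far more data and seeking far more often than the others, is the one to examine if it holds shared file systems.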

If some disks with shared file systems are consistently much busier than others, you should consider redistributing the load. See HP-UX System Administrator’s Guide: Logical Volume Management.

NOTE: On disks managed by the Logical Volume Manager (LVM), it can be hard to keep track of what file systems reside on what disks. It’s a good idea to create hardcopy diagrams of your servers’ disks; see HP-UX System Administrator’s Guide: Logical Volume Management.
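
As a starting point for such a diagram, you can map each mounted file system to its logical volume with bdf, then list the physical volumes that logical volume occupies with lvdisplay -v. The volume group, device names, and sizes below are illustrative:

bdf /work
Filesystem            kbytes    used   avail %used Mounted on
/dev/vg01/lvol3      4194304 2097152 2064384   50% /work

lvdisplay -v /dev/vg01/lvol3
...
   --- Distribution of logical volume ---
   PV Name                 LE on PV  PE on PV
   /dev/disk/disk4         512       512
   /dev/disk/disk7         512       512
...

In this sketch, /work spans two physical disks; both would need to be considered when redistributing its load.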

Checking NFS Server/Client Block Size

In the case of an HFS file system, the client’s NFS read/write block size should match the block size for that file system on the server.

  • On the NFS server, you can use dumpfs to check the block size for an HFS file system; for example:

    dumpfs /work | grep bsize

    In the resulting output, bsize is the block size, in bytes, of the file system /work (a sample of this output appears after this list).

    NOTE: For a JFS file system, you can use mkfs -m to see the parameters the file system was created with. But adjusting the client’s read/write buffer size to match is probably not worthwhile because the configured block size does not govern all of the blocks. See “Examining File System Characteristics”.
  • On the NFS client, use HP SMH to check read/write block size.

    Go to Tools, Disks and File Systems, File Systems and select each imported file system in turn to view read and write buffer sizes. Refer to the Detailed View at the bottom of the page under Mount Options.

    Read Buffer Size and Write Buffer Size should match the file system’s block size on the server.

    If they do not, you can use HP SMH to change them.
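
The dumpfs check described in the first bullet produces output along these lines (illustrative values; additional matching lines may also appear):

dumpfs /work | grep bsize
bsize   8192    shift   13      mask    0xffffe000

Here the block size of /work is 8192 bytes. On the client, nfsstat -m is a command-line alternative to HP SMH; it reports each NFS-mounted file system and its mount options, which normally include the current read and write sizes.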

Modify NFS Server/Client Block Size

  1. Access the HP SMH Homepage as root.

  2. Select Tools, Disks and File Systems, File Systems.

  3. Unmount the file system by clicking on the Unmount/Remove... action on the right side of the page.

  4. Check the Unmount box and click on the Unmount/Remove button at the bottom of the page. The file system will be unmounted.

  5. Click on the Done button to return to the File Systems page.

  6. Your file system should still be selected. Click on the Modify NFS... action on the right side of the page. This will display the Modify NFS File System page.

  7. Enter the desired Read and Write buffer sizes, select Mount now and save configuration in /etc/fstab, and click on the Modify NFS button. (A sample /etc/fstab entry appears after this procedure.)

  8. Click on the Done button. You will be returned to the File Systems page. The selected file system will be remounted with the new buffer sizes.
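
What this procedure ultimately records is an NFS entry in /etc/fstab with explicit buffer sizes. A hand-edited equivalent might look like this (the server name server1, mount point /work, and 8 KB sizes are assumptions for illustration):

# NFS mount of server1:/work with 8 KB read/write buffers (illustrative)
server1:/work   /work   nfs   rsize=8192,wsize=8192   0   0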

Checking for Asynchronous Writes

Enabling asynchronous writes tells the NFS server to send the client an immediate acknowledgment of a write request, before writing the data to disk. This improves NFS throughput, allowing the client to post a second write request while the server is still writing out the first.

This involves some risk to data integrity, but in most cases the performance improvement is worth the risk.

You can use HP SMH to see whether asynchronous writes are enabled on a server’s shared file systems.

  1. Access the HP SMH Homepage as root.

  2. Select Tools → Network Services Configuration → Networked File Systems → Share/Unshare File Systems (Export FS). The Share page will be displayed.

  3. Select the desired file system and a table of shared file properties will be displayed. Check to see that Asynchronous Writes are allowed.

If needed, you can change the setting of the Asynchronous Writes flag while the file system is still mounted and shared.

  • Select View/Modify Shared (exported) File System... to display the setting for the selected file system. Check the Allow Asynchronous Writes box and click on OK.

Checking for Server Overload with nfsstat -rc

Run nfsstat -rc on an NFS client to get an idea of how the server is performing.

You’ll get a report that looks like this:

Client rpc:
calls      badcalls   retrans    badxid     timeout    wait       newcred
43467543   848        6          3868       27942      0          0

badxid should be small in relation to timeout. If these numbers are nearly the same, it may mean the server is overloaded and generating duplicate replies to RPC requests that have timed out and been retransmitted. Check the server’s memory, disk and NFS configuration; see the “Guidelines” in the previous section.
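
In the sample report above, for instance, badxid (3868) is only about 14% of timeout (27942); by this rule of thumb the two are not nearly the same, so duplicate replies from an overloaded server are not indicated.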

NOTE: A badxid that is close to zero and a large number for timeout may indicate packets are being dropped; that is, the client’s requests are timing out because they never reach the server. In this case the problem is likely to be a network card on the server or client, or the network hardware.

Measuring Memory Usage with vmstat

vmstat displays a wealth of information; use the -n option to make it more readable on an 80-column display.

The column to watch most closely is po. If it is not zero, the system is paging. If the system is paging consistently, you probably need more RAM.
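
For example, you might sample every five seconds; the values below are illustrative, and the exact column layout varies slightly between releases:

vmstat 5 5

         procs           memory                   page                              faults       cpu
    r     b     w      avm    free   re   at    pi   po    fr   de    sr     in     sy    cs  us sy id
    1     0     0   125648   20480    4    1     2    0     0    0     0    236   1820   165   6  3 91
    1     0     0   125712   20384    3    0     1    0     0    0     0    241   1764   158   5  4 91

Here po is zero throughout, so the system is not paging out; sustained nonzero values would point to a RAM shortage.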

Checking for Socket Overflows with netstat -s

Although many different processes use sockets, and can contribute to socket overflows, regular socket overflows on an NFS server may indicate that you need to run more nfsd processes. The command,

netstat -s | grep overflow

will show you a cumulative number for socket overflows (since the last boot). If you see this number rising significantly, and NFS clients are seeing poor response from this server, try starting more nfsds; see “Increasing the Number of nfsd Daemons”.
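
The output looks something like this (illustrative count; other overflow counters may also match):

        25 socket overflows

Because the count is cumulative since boot, run the command again after a few minutes of typical load; it is the rate of increase that matters, not the absolute number.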

Checking for Network Overload with netstat -i

If you have followed all the “Guidelines” and are still seeing poor response time, the problem may be with the network itself - either with a particular piece of hardware or with the configuration of the network.

To see cumulative statistics on a server, run

netstat -i

If your system has been running for a long time, the numbers will be large and may not reliably reflect the present state of things. You can run netstat iteratively; for example:

netstat -I lan0 -i 5

In this case (after the first line), netstat reports activity every five seconds.

Input and output errors should be very low in relation to input and output packets - much less than 1%. A higher rate of output errors on only one server may indicate a hardware problem affecting the server’s connection to the network.

Collisions (colls) should be less than 5%; a higher rate indicates heavy network use which your users are probably experiencing as poor performance. Network traffic and configuration may be beyond your control, but you can at least raise a flag with your network administrator.
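
A cumulative report might look like this (interface names, addresses, and counts are illustrative):

netstat -i
Name      Mtu   Network         Address         Ipkts      Ierrs Opkts      Oerrs Coll
lan0      1500  192.168.1.0     server1         24879365   12    18993021   5     48231
lan1      1500  10.10.0.0       server1-bkup    3120441    0     2987310    1     2204

Here errors are far below 1% of packets, and collisions on lan0 are roughly 0.25% of output packets (48231 out of 18993021), well under the 5% threshold.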
