Planning for Performance

This section describes strategies for obtaining the best possible performance using LVM.

General Performance Factors

The following factors affect overall system performance, but not necessarily the performance of LVM.

Memory Usage

The amount of memory used by LVM is based on the values used at volume group creation time and on the number of open logical volumes. The largest portion of LVM memory is used for extent maps. The memory used is proportional to the maximum number of physical volumes multiplied by the maximum number of physical extents per physical volume for each volume group.

When setting these parameters, also consider expected system growth and the number of logical volumes required. You can set the volume group maximum parameters to exactly what is required on the system today; however, if you later want to extend the volume group by another disk (or perhaps replace one disk with a larger disk), you must use the vgmodify command.
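
For example, here is a minimal sketch of sizing a volume group at creation time and raising a maximum later with vgmodify (the device file, volume group name, and values are hypothetical, and vgmodify may require the volume group to be deactivated):

    # vgcreate -p 16 -e 4350 -s 16 /dev/vg01 /dev/disk/disk10
    # vgmodify -p 32 /dev/vg01

The -p and -e options set the maximum number of physical volumes and the maximum number of physical extents per physical volume, which together determine how much memory LVM uses for the extent maps.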

CPU Usage

Compared to the non-LVM case, LVM has shown no significant impact on system CPU usage, as measured by observing idle time.

With LVM, extra CPU cycles are required to perform Mirror Write Consistency cache operations; this is the only configurable option that impacts CPU usage.

Disk Space Usage

LVM reserves some disk space on each physical volume for its own metadata. The amount of space used is proportional to the maximum values used at volume group creation time.

Internal Performance Factors

The following factors directly affect the performance of I/O through LVM.

Scheduling Policy

The scheduling policy is significant only with mirroring. When mirroring, the sequential scheduling policy requires more time to perform writes, proportional to the number of mirrors. For instance, a logical volume with three copies of data requires three times as long to perform a write using the sequential scheduling policy, as compared to the parallel policy. Read requests are always directed to only one device. Under the parallel scheduling policy, LVM directs each read request to the least busy device. Under the sequential scheduling policy, LVM directs all read requests to the device shown on the left-hand side of an lvdisplay -v output.
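
For example, a sketch of selecting the scheduling policy with the -d option of lvcreate or lvchange (volume group and logical volume names are hypothetical):

    # lvcreate -m 1 -d p -L 512 -n lvdata /dev/vg01
    # lvchange -d s /dev/vg01/lvdata

The first command creates a mirrored logical volume with the parallel policy; the second switches it to the sequential policy.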

Mirror Write Consistency Cache

The purpose of the Mirror Write Consistency cache (MWC) is to provide a list of mirrored areas that might be out of sync. When a volume group is activated, LVM copies all areas with an entry in the MWC from one of the good copies to all the other copies. This process ensures that the mirrors are consistent but does not guarantee the quality of the data.
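
This recovery happens automatically when the volume group is activated, whether at boot time or by an explicit activation (volume group name hypothetical):

    # vgchange -a y /dev/vg01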

On each write request to a mirrored logical volume that uses MWC, LVM potentially introduces one extra serial disk write to maintain the MWC. Whether this condition occurs depends on the degree to which accesses are random.

The more random the accesses, the higher the probability of missing the MWC. Getting an MWC entry can involve waiting for one to be available. If all the MWC entries are currently being used by I/O in progress, a given request might have to wait in a queue of requests until an entry becomes available.

Another performance consideration for mirrored logical volumes is the method of reconciling inconsistencies between mirror copies after a system crash. Besides the MWC, two other methods are available: Mirror Consistency Recovery (MCR) and no consistency recovery (none). Whether you use the MWC depends on which aspect of system performance is more important to your environment: run time or recovery time.

For example, a customer using mirroring on a database system might choose "none" for the database logical volume because the database logging mechanism already provides consistency recovery. The logical volume used for the log can use the MWC if quick recovery time is an issue, or MCR if higher runtime performance is required. A database log is typically used by one process and is sequentially accessed, which means it suffers little performance degradation using MWC because the cache is hit most of the time.
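
As a sketch, these three choices correspond to combinations of the -M (Mirror Write Cache) and -c (Mirror Consistency Recovery) options of lvcreate and lvchange; the volume names are hypothetical, and the logical volume may need to be closed before these options can be changed:

    # lvchange -M y -c y /dev/vg01/lvlog    (use the MWC)
    # lvchange -M n -c y /dev/vg01/lvlog    (use MCR)
    # lvchange -M n -c n /dev/vg01/lvdb     (no consistency recovery)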

Disk Spanning

For the disk areas that see the most intensive use by multiple processes, HP recommends spreading the data space across as many physical volumes as possible.

Number of Volume Groups

The number of volume groups is directly related to the MWC issues. Because there is only one MWC per volume group, when the MWC is in use, keep disk space that receives many small random write requests in distinct volume groups if possible. This is the only performance consideration that affects the decision regarding the number of volume groups.

Physical Volume Groups

You can use physical volume groups to enforce the separation of different mirror copies across I/O channels. You must define the physical volume groups yourself. This separation increases availability by decreasing the number of single points of failure, and provides faster I/O throughput because there is less contention at the hardware level.

For example, in a system with several disk devices on each card and several cards on each bus converter, create physical volume groups so that all disks off of one bus converter are in one group and all the disks on the other are in another group. This configuration ensures that all mirrors are created with devices accessed through different I/O paths.
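
For example, a sketch of grouping the disks on each bus converter into their own physical volume group (device files and group names are hypothetical):

    # vgcreate -g PVG0 /dev/vg01 /dev/disk/disk10 /dev/disk/disk11
    # vgextend -g PVG1 /dev/vg01 /dev/disk/disk20 /dev/disk/disk21

Here PVG0 holds the disks behind one bus converter and PVG1 the disks behind the other, so mirror copies allocated from different groups travel over different I/O paths.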

Increasing Performance Through Disk Striping

Disk striping distributes logically contiguous data blocks (for example, chunks of the same file) across multiple disks, which speeds I/O throughput for large files when they are read and written sequentially (but not necessarily when access is random).

The disadvantage of disk striping is that the loss of a single disk can result in damage to many files because files are purposely spread across two or more disks.

Consider using disk striping on file systems where large files are stored, if those files are normally read and written sequentially and I/O performance is important.

When you use disk striping, you create a logical volume that spans multiple disks, allowing successive blocks of data to go to logical extents on different disks. For example, a three-way striped logical volume has data allocated on three disks, with each disk storing every third block of data. The size of each of these blocks is called the stripe size of the logical volume. The stripe size (in K) must be a power of two in the range 4 to 32768 for a Version 1.0 volume group, and a power of two in the range 4 to 262144 for a Version 2.x volume group.
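
For example, the three-way striped logical volume described above might be created as follows, where -i sets the number of stripes and -I sets the stripe size in kilobytes (volume group, name, and size are hypothetical):

    # lvcreate -i 3 -I 64 -L 300 -n lvstripe /dev/vg01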

Disk striping can increase the performance of applications that read and write large, sequentially accessed files. Data access is performed over the multiple disks simultaneously, so the operation takes less time than the same operation on a single disk. If all of the striped disks have their own controllers, each can process data simultaneously.

You can use standard commands to manage your striped logical volumes. For example, the lvcreate, diskinfo, newfs, fsck, and mount commands all work with striped logical volumes.

The following guidelines, most of which apply to LVM disk usage in general, help you get the best performance from striped logical volumes:

  • Best performance results from a striped logical volume that spans similar disks. The more closely you match the disks by speed, capacity, and interface type, the better the performance you can expect. When striping across several disks of varying speeds, performance is no faster than that of the slowest disk.

  • If you have more than one interface card or bus to which you can connect disks, distribute the disks as evenly as possible among them. That is, each interface card or bus must have roughly the same number of disks attached to it. You can achieve the best I/O performance when you use more than one bus and interleave the stripes of the logical volume. For example, if you have two buses with two disks on each bus, order the disks so that disk 1 is on bus 1, disk 2 is on bus 2, disk 3 is on bus 1, and disk 4 is on bus 2, as shown in Figure 2-2.

    Figure 2-2 Interleaving Disks Among Buses
  • Increasing the number of disks might not improve performance because the maximum efficiency that can be achieved by combining disks in a striped logical volume is limited by the maximum throughput of the file system itself and by the buses to which the disks are attached.

  • Disk striping is highly beneficial for applications with few users and large, sequential transfers. However, applications that exhibit small, concurrent, random I/O (such as databases) often see no performance gain from disk striping. Consider four disks with a stripe size of 512 bytes: each 2K request is sent to all four disks. A single 2K request completes in about the same time whether or not the volume is striped, but several concurrent 2K requests are all serialized, because all the disks must seek for each request. On a nonstriped system, performance might actually be better, because each disk can service separate requests in parallel.

Determining Optimum Stripe Size

The logical volume stripe size identifies the size of each of the blocks of data that make up the stripe. You can set the stripe size to a power of two in the range 4 to 32768 for a Version 1.0 volume group, or a power of two in the range 4 to 262144 for a Version 2.x volume group. The default is 8192.

NOTE: The stripe size of a logical volume is not related to the physical sector size of a disk, which is typically 512 bytes.

How you intend to use the striped logical volume determines what stripe size you assign to it.

For best results follow these guidelines:

  • If you plan to use the striped logical volume for an HFS file system, select the stripe size that most closely reflects the block size of the file system. The newfs command enables you to specify a block size when you build the file system and provides a default block size of 8K for HFS.

  • If you plan to use the striped logical volume for a JFS (VxFS) file system, use the largest available size, 64K. For I/O purposes, VxFS combines blocks into extents, which are variable in size and may be very large. The configured block size, 1K by default, is not significant in this context.

  • If you plan to use the striped logical volume as swap space, set the stripe size to 16K for best performance. For more information on configuring swap, see “Administering Swap Logical Volumes”.

  • If you plan to use the striped logical volume as a raw data partition (for example, for a database application that uses the device directly), the stripe size must be as large as or larger than the I/O size for the application.

You might need to experiment to determine the optimum stripe size for your particular situation. To change the stripe size, re-create the logical volume.
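
For example, a sketch of creating striped logical volumes following the HFS and JFS guidelines above (volume group, names, and sizes are hypothetical; newfs operates on the raw device file):

    # lvcreate -i 4 -I 8 -L 1024 -n lvhfs /dev/vg01
    # newfs -F hfs -b 8192 /dev/vg01/rlvhfs

    # lvcreate -i 4 -I 64 -L 1024 -n lvjfs /dev/vg01
    # newfs -F vxfs /dev/vg01/rlvjfs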

Interactions Between Mirroring and Striping

Mirroring a striped logical volume improves read I/O performance in the same way that it does for a nonstriped logical volume. Simultaneous read I/O requests targeting a single logical extent are served by two or three different physical volumes instead of one. A striped and mirrored logical volume follows a strict allocation policy; that is, the data is always mirrored on different physical volumes.
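
For example, a sketch of creating a two-way striped, singly mirrored logical volume in one step (names and size are hypothetical):

    # lvcreate -i 2 -I 64 -L 512 -m 1 -n lvsm /dev/vg01

Because the allocation policy is strict, this example needs at least four physical volumes in the volume group: two for the stripes and two more for their mirror copies.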

Increasing Performance Through I/O Channel Separation

I/O channel separation is an approach to LVM configuration requiring that mirrored copies of data reside on LVM disks accessed through separate host bus adapters (HBAs) and cables. I/O channel separation achieves higher availability and better performance by reducing the number of potential single points of hardware failure. If you mirror data on two separate disks but access both through one card, the card remains a single point of failure.

You can separate I/O channels on a system with multiple HBAs and a single bus by mirroring disks across different HBAs. You can further ensure channel separation by establishing a policy called PVG-strict allocation, which requires logical extents to be mirrored in separate physical volume groups. Physical volume groups are subgroups of physical volumes within a volume group.

An ASCII file, /etc/lvmpvg, contains all the mapping information for the physical volume group, but the mapping is not recorded on disk. Physical volume groups have no fixed naming convention; you can name them PVG0, PVG1, and so on. The /etc/lvmpvg file is created and updated using the vgcreate, vgextend, and vgreduce commands, but you can edit the file with a text editor.
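
For example, an /etc/lvmpvg file describing a volume group with two physical volume groups might look like this (device file names are hypothetical):

    VG /dev/vg01
    PVG PVG0
    /dev/disk/disk10
    /dev/disk/disk11
    PVG PVG1
    /dev/disk/disk20
    /dev/disk/disk21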

I/O channel separation is useful for databases, because it heightens availability (LVM has more flexibility in reading data from the most accessible logical extent), resulting in better performance. If you define your physical volume groups to span I/O devices, you ensure continued access to the data even if one HBA fails.

When using physical volume groups, consider using a PVG-strict allocation policy for logical volumes.
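
For example, a sketch of requesting PVG-strict allocation with the -s g option, so that each mirror copy is allocated from a different physical volume group (names and size are hypothetical):

    # lvcreate -m 1 -s g -L 512 -n lvdata /dev/vg01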
