
Volume Managers for Data Storage

A volume manager is a tool that lets you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, storage groups are activated by package control scripts.
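
For example, in a legacy package control script the storage groups a package activates are named in variables along the following lines (a hedged sketch; the group names are illustrative, and modular packages use equivalent parameters such as vg, vxvm_dg, and cvm_dg in the package configuration file):

    # Storage-related entries in a legacy package control script (illustrative)
    VGCHANGE="vgchange -a e"     # LVM activation command (exclusive mode)
    VG[0]="vgpkgA"               # LVM volume group(s) activated by this package

    VXVM_DG[0]="dg_pkgA"         # VxVM disk group(s) imported at package startup
    # CVM_DG[0]="dg_shared"      # CVM disk group(s), if CVM is used instead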

Types of Redundant Storage

In Serviceguard, there are two types of supported shared data storage: mirrored individual disks (also known as JBODs, for “just a bunch of disks”), and external disk arrays which configure redundant storage in hardware. With JBODs, redundancy is provided by software mirroring (RAID1); disk arrays typically provide redundancy in hardware via RAID1 or RAID5. Here are some differences between the two storage methods:

  • If you are using JBODs, the basic element of storage is an individual disk. This disk must be paired with another disk to create a mirror (RAID1). (Serviceguard configurations usually have separate mirrors on different storage devices).

  • If you have a disk array, the basic element of storage is a LUN, which already provides storage redundancy via hardware RAID1 or RAID5.

About Device File Names (Device Special Files)

HP-UX releases up to and including 11i v2 use a naming convention for device files that encodes their hardware path. For example, a device file named /dev/dsk/c3t15d0 would indicate SCSI controller instance 3, SCSI target 15, and SCSI LUN 0. HP-UX 11i v3 introduces a new nomenclature for device files, known as agile addressing (sometimes also called persistent LUN binding).

Under the agile addressing convention, the hardware path name is no longer encoded in a storage device’s name; instead, each device file name reflects a unique instance number, for example /dev/[r]disk/disk3, that does not need to change when the hardware path does.

Agile addressing is the default on new 11i v3 installations, but the I/O subsystem still recognizes the pre-11i v3 nomenclature. This means that you are not required to convert to agile addressing when you upgrade to 11i v3, though you should seriously consider its advantages.
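
The following commands illustrate the difference between the two naming schemes on an 11i v3 system (a sketch; the device names shown are examples only):

    # Legacy view: DSF names encode the hardware path (c<instance>t<target>d<LUN>)
    ioscan -funC disk      # lists disks with legacy DSFs such as /dev/dsk/c3t15d0

    # Agile view: persistent DSF names that survive hardware-path changes
    ioscan -fnNC disk      # lists disks with agile DSFs such as /dev/disk/disk3

    # Display the mapping between persistent and legacy DSFs
    ioscan -m dsf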

For instructions on migrating a system to agile addressing, see the white paper Migrating from HP-UX 11i v2 to HP-UX 11i v3 at http://docs.hp.com.

CAUTION: There are special requirements for migrating cluster lock volumes to agile addressing; see “Updating the Cluster Lock Configuration”.
NOTE: It is possible, though not a best practice, to use legacy DSFs (that is, DSFs using the older naming convention) on some nodes after migrating to agile addressing on others; this allows you to migrate different nodes at different times, if necessary.

For more information about agile addressing, see the following documents at http://www.docs.hp.com:

  • the Logical Volume Management volume of the HP-UX System Administrator’s Guide (in the 11i v3 -> System Administration collection on docs.hp.com)

  • the HP-UX 11i v3 Installation and Update Guide (in the 11i v3 -> Installing and Updating collection on docs.hp.com)

  • the white papers

    • The Next Generation Mass Storage Stack (under Network and Systems Management -> Storage Area Management on docs.hp.com)

    • Migrating from HP-UX 11i v2 to HP-UX 11i v3

    • HP-UX 11i v3 Native Multi-Pathing for Mass Storage

See also the HP-UX 11i v3 intro(7) manpage, and “About Multipathing” in this manual.

Examples of Mirrored Storage

Figure 3-20 “Physical Disks Within Shared Storage Units” shows an illustration of mirrored storage using HA storage racks. In the example, node1 and node2 are cabled in a parallel configuration, each with redundant paths to two shared storage devices. Each of the two nodes also has two (non-shared) internal disks, which are used for the root file system, swap, etc. Each shared storage unit has three disks. The device file names of the three disks on one of the two storage units are c0t0d0, c0t1d0, and c0t2d0; on the other, they are c1t0d0, c1t1d0, and c1t2d0.

NOTE: Under agile addressing (see “About Device File Names (Device Special Files)”), the storage units in this example would have names such as disk1, disk2, disk3, etc.

Figure 3-20 Physical Disks Within Shared Storage Units


Figure 3-21 “Mirrored Physical Disks” shows the individual disks combined in a multiple disk mirrored configuration.

Figure 3-21 Mirrored Physical Disks


Figure 3-22 “Multiple Devices Configured in Volume Groups” shows the mirrors configured into LVM volume groups, shown in the figure as /dev/vgpkgA and /dev/vgpkgB. The volume groups are activated by Serviceguard packages for use by highly available applications.

Figure 3-22 Multiple Devices Configured in Volume Groups

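The following commands sketch how one such mirrored volume group might be built from the example disks above (illustrative only; the group file minor number and sizes are placeholders, and mirroring requires the optional Mirrordisk/UX product):

    # Initialize one disk from each storage unit
    pvcreate -f /dev/rdsk/c0t0d0
    pvcreate -f /dev/rdsk/c1t0d0

    # Create the volume group device directory and group file
    mkdir /dev/vgpkgA
    mknod /dev/vgpkgA/group c 64 0x010000   # minor number must be unique per VG

    # Create the volume group with one disk from each storage unit
    vgcreate /dev/vgpkgA /dev/dsk/c0t0d0 /dev/dsk/c1t0d0

    # Create a logical volume, then add a mirror copy on the second storage unit
    lvcreate -L 1024 -n lvol1 /dev/vgpkgA
    lvextend -m 1 /dev/vgpkgA/lvol1 /dev/dsk/c1t0d0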

Examples of Storage on Disk Arrays

Figure 3-23 “Physical Disks Combined into LUNs” shows an illustration of storage configured on a disk array. Physical disks are configured by an array utility program into logical units or LUNs which are then seen by the operating system.

Figure 3-23 Physical Disks Combined into LUNs

NOTE: LUN definition is normally done using utility programs provided by the disk array manufacturer. Since arrays vary considerably, you should refer to the documentation that accompanies your storage unit.

Figure 3-24 “Multiple Paths to LUNs” shows LUNs configured with multiple paths (links) to provide redundant pathways to the data.

NOTE: Under agile addressing, the storage units in this example would have names such as disk1, disk2, disk3, etc. See “About Device File Names (Device Special Files)”.

Figure 3-24 Multiple Paths to LUNs


Finally, the multiple paths are configured into volume groups as shown in Figure 3-25 “Multiple Paths in Volume Groups”.

Figure 3-25 Multiple Paths in Volume Groups

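On releases before HP-UX 11i v3, the redundant paths shown in the figures are typically configured as LVM alternate links (PVLinks), simply by naming both device files for the same LUN when the volume group is created. A sketch, with c4t0d0 and c5t0d0 as illustrative paths to one LUN:

    pvcreate -f /dev/rdsk/c4t0d0

    mkdir /dev/vgpkgB
    mknod /dev/vgpkgB/group c 64 0x020000

    # The first path listed becomes the primary link, the second the alternate
    vgcreate /dev/vgpkgB /dev/dsk/c4t0d0 /dev/dsk/c5t0d0

    # On HP-UX 11i v3, native multipathing makes this unnecessary: a single agile
    # DSF (for example /dev/disk/disk14) represents all paths to the LUN.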

Types of Volume Manager

Serviceguard allows a choice of volume managers for data storage:

  • HP-UX Logical Volume Manager (LVM) and (optionally) Mirrordisk/UX

  • Veritas Volume Manager for HP-UX (VxVM)—Base and add-on Products

  • Veritas Cluster Volume Manager for HP-UX

Separate sections in Chapters 5 and 6 explain how to configure cluster storage using each of these volume managers. The rest of this section explains some of the differences among them and offers suggestions about appropriate choices for your cluster environment.

NOTE: The HP-UX Logical Volume Manager is described in the HP-UX System Administrator’s Guide. Release Notes for Veritas Volume Manager contain a description of Veritas volume management products.

HP-UX Logical Volume Manager (LVM)

Logical Volume Manager (LVM) is the default storage management product on HP-UX. Included with the operating system, LVM is available on all cluster nodes. It supports the use of Mirrordisk/UX, which is an add-on product that allows disk mirroring with up to two mirrors (for a total of three copies of the data).

Currently, the HP-UX root disk can be configured as an LVM volume group. (Note that, in this case, the HP-UX root disk is not the same as the Veritas root disk group, rootdg, which must be configured in addition to the HP-UX root disk on any node that uses Veritas Volume Manager 3.5 products. The rootdg is no longer required with Veritas Volume Manager 4.1 and later products.) The Serviceguard cluster lock disk also is configured using a disk configured in an LVM volume group.

LVM continues to be supported on HP-UX single systems and on Serviceguard clusters.
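
As a sketch of how a package typically uses such storage, the control script activates an LVM volume group in exclusive mode on the adoptive node and deactivates it at halt time (the volume group, logical volume, and mount point names below are illustrative, and the volume group must previously have been made cluster-aware with vgchange -c y):

    # At package startup
    vgchange -a e /dev/vgpkgA          # exclusive activation on this node
    mount /dev/vgpkgA/lvol1 /appA      # /appA is a hypothetical mount point

    # At package halt
    umount /appA
    vgchange -a n /dev/vgpkgA          # deactivate so another node can take over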

Veritas Volume Manager (VxVM)

The Base Veritas Volume Manager for HP-UX (Base-VxVM) is provided at no additional cost with HP-UX 11i. It provides basic volume manager features, including a Java-based GUI known as VEA. It is possible to configure cluster storage for Serviceguard with only Base-VxVM, but only a limited set of features is available.

The add-on product, Veritas Volume Manager for HP-UX, provides a full set of enhanced volume manager capabilities in addition to basic volume management, including mirroring, dynamic multipathing for active/active storage devices, and hot relocation.

VxVM can be used in clusters that:

  • are of any size, up to 16 nodes.

  • require a fast cluster startup time.

  • do not require shared storage group activation (which is required with CFS).

  • do not have all nodes cabled to all disks (which is required with CFS).

  • need to use software RAID mirroring or striped mirroring.

  • have multiple heartbeat subnets configured.

Propagation of Disk Groups in VxVM

A VxVM disk group can be created on any node, whether the cluster is up or not. You must validate the disk group by trying to import it on each node.
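
A sketch of that validation sequence, with illustrative disk group and disk names:

    # On the node where the disk group is created
    /usr/lib/vxvm/bin/vxdisksetup -i c1t2d0      # initialize the disk for VxVM
    vxdg init dg_pkgA dg_pkgA01=c1t2d0
    vxassist -g dg_pkgA make vol1 1g
    vxdg deport dg_pkgA

    # On each of the other nodes, confirm the group imports cleanly, then release it
    vxdg import dg_pkgA
    vxdg deport dg_pkgA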

Package Startup Time with VxVM

With VxVM, each disk group is imported by the package control script that uses the disk group. This means that cluster startup time is not affected, but individual package startup time might be increased because VxVM imports the disk group at the time the package starts up.
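
A sketch of the import a package control script typically performs at startup (the disk group name is illustrative); the -t, -f, and -C options import the group temporarily, forcibly, and with stale import locks cleared, which is what allows it to move between nodes after a failure:

    vxdg -tfC import dg_pkgA
    vxvol -g dg_pkgA startall        # start all volumes in the disk group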

Veritas Cluster Volume Manager (CVM)

NOTE: Check the Serviceguard, SGeRAC, and SMS Compatibility and Feature Matrix and the latest Release Notes for your version of Serviceguard for up-to-date information on CVM support: http://www.docs.hp.com -> High Availability -> Serviceguard.

You may choose to configure cluster storage with the Veritas Cluster Volume Manager (CVM) instead of the Veritas Volume Manager (VxVM). Base-VxVM provides some basic cluster features when Serviceguard is installed, but there is no support for software mirroring, dynamic multipathing (for active/active storage devices), or the numerous other features that require additional licenses.

VxVM supports up to 16 nodes, and CVM supports up to 8. CFS 5.0 also supports up to 8 nodes; earlier versions of CFS support up to 4.

The VxVM Full Product and CVM are enhanced versions of the VxVM volume manager specifically designed for cluster use. When installed with the Veritas Volume Manager, the CVM add-on product provides most of the enhanced VxVM features in a clustered environment. CVM is truly cluster-aware, obtaining information about cluster membership from Serviceguard directly.

Cluster information is provided via a special system multi-node package, which runs on all nodes in the cluster. The cluster must be up and must be running this package before you can configure VxVM disk groups for use with CVM. Disk groups must be created from the CVM Master node. The Veritas CVM package for version 3.5 is named VxVM-CVM-pkg; the package for CVM version 4.1 and later is named SG-CFS-pkg.
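
A sketch of the usual sequence (package, disk group, and disk names are illustrative):

    cmviewcl -v                      # confirm the system multi-node package
                                     # (VxVM-CVM-pkg or SG-CFS-pkg) is running
    vxdctl -c mode                   # reports whether this node is the CVM master

    # On the master node only, create a shared (cluster-wide) disk group
    vxdg -s init dg_shared dg_shared01=c2t0d0
    vxassist -g dg_shared make vol1 1g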

CVM allows you to activate storage on one node at a time, or you can perform write activation on one node and read activation on another node at the same time (for example, allowing backups). CVM provides full mirroring and dynamic multipathing (DMP) for clusters.
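
A sketch of setting those activation modes with the vxdg set activation command (the disk group name is illustrative, and the exact mode keywords should be checked against your CVM version):

    # On the node running the package (write access)
    vxdg -g dg_shared set activation=exclusivewrite

    # On the node performing a backup of the same data (read access)
    vxdg -g dg_shared set activation=sharedread

    # When finished
    vxdg -g dg_shared set activation=off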

CVM supports concurrent storage read/write access between multiple nodes by applications which can manage read/write access contention, such as Oracle Real Application Cluster (RAC).

CVM 4.1 and later can be used with Veritas Cluster File System (CFS) in Serviceguard. Several of the HP Serviceguard Storage Management Suite bundles include features to enable both CVM and CFS.

CVM can be used in clusters that:

  • run applications that require fast disk group activation after package failover;

  • require storage activation on more than one node at a time, for example to perform a backup from one node while a package using the volume is active on another node. In this case, the package using the disk group would have the disk group active in exclusive write mode while the node that is doing the backup would have the disk group active in shared read mode;

  • run applications, such as Oracle RAC, that require concurrent storage read/write access between multiple nodes.

Heartbeat is configured differently depending on whether you are using CVM 3.5, or 4.1 and later. See “Redundant Heartbeat Subnet Required”.

Shared storage devices must be connected to all nodes in the cluster, whether or not the node accesses data on the device.

Cluster Startup Time with CVM

All shared disk groups (DGs) are imported when the system multi-node package's control script starts up CVM. Depending on the number of DGs, the number of nodes, and their configuration (number of disks, volumes, etc.), this can take some time; the current timeout value for this package is 3 minutes, but for larger configurations it may have to be increased. Any failover package that uses a CVM DG will not start until the system multi-node package is up. Note that this delay does not affect package failover time; it is a one-time overhead cost at cluster startup.

Propagation of Disk Groups with CVM

CVM disk groups are created on one cluster node known as the CVM master node. CVM verifies that each node can see each disk and will not allow invalid DGs to be created.

Redundant Heartbeat Subnet Required

HP recommends that you configure all subnets that connect cluster nodes as heartbeat networks; this increases protection against multiple faults at no additional cost.

Heartbeat is configured differently depending on whether you are using CVM 3.5, or 4.1 and later. You can create redundancy in the following ways:

1) dual (multiple) heartbeat networks

2) single heartbeat network with standby LAN card(s)

3) single heartbeat network with APA

CVM 3.5 supports only options 2 and 3. Options 1 and 2 are the minimum recommended configurations for CVM 4.1 and later.
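
As a sketch, multiple heartbeat subnets are defined in the cluster configuration file; the node names, LAN interfaces, and addresses below are illustrative:

    cmquerycl -v -C /etc/cmcluster/cluster.ascii -n node1 -n node2

    # In the resulting file, a dual-heartbeat configuration for node1 would
    # contain entries along these lines:
    #   NODE_NAME node1
    #     NETWORK_INTERFACE lan0
    #       HEARTBEAT_IP 192.168.1.1
    #     NETWORK_INTERFACE lan1
    #       HEARTBEAT_IP 192.168.2.1

    cmcheckconf -C /etc/cmcluster/cluster.ascii   # verify the configuration
    cmapplyconf -C /etc/cmcluster/cluster.ascii   # apply it to all nodes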

Comparison of Volume Managers

The following table summarizes the advantages and disadvantages of the volume managers.

Table 3-4 Pros and Cons of Volume Managers with Serviceguard

Logical Volume Manager (LVM)

Advantages:
  • Software is provided with all versions of HP-UX.

  • Provides up to 3-way mirroring using optional Mirrordisk/UX software.

  • Dynamic multipathing (DMP) is active by default as of HP-UX 11i v3.

  • Supports exclusive activation as well as read-only activation from multiple nodes

  • Can be used to configure a cluster lock disk

  • Supports multiple heartbeat subnets; the one with the faster failover time is used to re-form the cluster.

Tradeoffs:

  • Lacks flexibility and extended features of some other volume managers

Mirrordisk/UX

Advantages:

  • Software mirroring

  • Lower cost solution

Tradeoffs:

  • Lacks extended features of other volume managers

Shared Logical Volume Manager (SLVM)

Advantages:

  • Provided free with SGeRAC for multi-node access to RAC data

  • Supports up to 16 nodes in shared read/write mode for each cluster

  • Supports exclusive activation

  • Supports multiple heartbeat subnets.

  • Online node configuration with activated shared volume groups (using specific SLVM kernel and Serviceguard revisions)

Tradeoffs:

  • Lacks the flexibility and extended features of some other volume managers.

  • Limited mirroring support

Base-VxVM

Advantages:

  • Software is supplied free with HP-UX 11i releases.

  • Java-based administration through graphical user interface.

  • Striping (RAID-0) support.

  • Concatenation.

  • Online resizing of volumes.

  • Supports multiple heartbeat subnets.

Tradeoffs:

  • Cannot be used for a cluster lock

  • root/boot disk supported only on VxVM 3.5 or later, on HP-UX 11i

  • Supports only exclusive read or write activation

  • Package delays are possible, due to lengthy vxdg import at the time the package is started or failed over

Veritas Volume Manager—full VxVM product: B9116AA (VxVM 3.5), B9116BA (VxVM 4.1), B9116CA (VxVM 5.0)

Advantages:
  • Disk group configuration from any node.

  • DMP for active/active storage devices.

  • Supports exclusive activation.

  • Hot relocation and unrelocation of failed subdisks

  • Supports up to 32 plexes per volume

  • RAID 1+0 mirrored stripes

  • RAID 1 mirroring

  • RAID 5

  • RAID 0+1 striped mirrors

  • Supports multiple heartbeat subnets, which could reduce cluster reformation time.

Tradeoffs:

  • Requires purchase of an additional license

  • Cannot be used for a cluster lock

  • Using the disk as a root/boot disk is only supported for VxVM 3.5 or later, when installed on HP-UX 11i.

  • Does not support activation on multiple nodes in either shared mode or read-only mode

  • May cause delay at package startup time due to lengthy vxdg import

Veritas Cluster Volume Manager (CVM): B9117AA (CVM 3.5), B9117BA (CVM 4.1), B9117CA (CVM 5.0)

Advantages:
  • Provides volume configuration propagation.

  • Supports cluster shareable disk groups.

  • Package startup time is faster than with VxVM.

  • Supports shared activation.

  • Supports exclusive activation.

  • Supports activation in different modes on different nodes at the same time

  • CVM versions 4.1 and later support the Veritas Cluster File System (CFS)

Tradeoffs:

  • Disk groups must be configured on a master node

  • CVM can be used with up to 8 cluster nodes. CFS 5.0 can be used with up to 8 nodes; earlier versions of CFS support up to 4.

  • Cluster startup may be slower than with VxVM

  • Requires purchase of additional license

  • No support for striped mirrors or RAID 5

  • Version 3.5 supports only a single heartbeat subnet (Version 4.1 and later support more than one heartbeat)

  • CVM requires all nodes to have connectivity to the shared disk groups

  • Not currently supported on all versions of HP-UX

 
