This section contains information about documentation content changes made to the SUSE Linux Enterprise Server Storage Administration Guide since the initial release of SUSE Linux Enterprise Server 11.
This document was updated on the following dates:
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| | Added issues related to partitioning multipath devices. |
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| | Added the -p ip:port option to the following commands: iscsiadm -m node -t iqn.2006-02.com.example.iserv:systems -p ip:port --op=update --name=node.startup --value=automatic iscsiadm -m node -t iqn.2006-02.com.example.iserv:systems -p ip:port --op=delete |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | Added a link to |
| | Added examples for using the |
| Location | Change |
|---|---|
| | For clarity, changed “original volume” to “source logical volume”. |
| | In a Xen host environment, you can use the snapshot function to back up the virtual machine’s storage back-end or to test changes to a virtual machine image such as for patches or upgrades. |
| Section 17.5, “Using Snapshots for Virtual Machines on a Virtual Host” | This section is new. |
| | If the source logical volume contains a virtual machine image, you must shut down the virtual machine, deactivate the source logical volume and snapshot volume (by dismounting them in that order), issue the merge command, and then activate the snapshot volume and source logical volume (by mounting them in that order). Because the source logical volume is automatically remounted and the snapshot volume is deleted when the merge is complete, you should not restart the virtual machine until after the merge is complete. After the merge is complete, use the resulting logical volume for the virtual machine. |
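A minimal sketch of that merge sequence, assuming a hypothetical volume group vg0 with source volume vm1 (mounted at /var/lib/xen/images) and snapshot vm1-snap; the actual volume names, mount points, and shutdown command depend on your setup:

```shell
# Shut down the virtual machine before touching its storage (e.g. xm shutdown vm1).

umount /mnt/vm1-snap                  # dismount the snapshot volume
umount /var/lib/xen/images            # dismount the source logical volume
lvchange -an /dev/vg0/vm1-snap        # deactivate both volumes
lvchange -an /dev/vg0/vm1

lvconvert --merge /dev/vg0/vm1-snap   # issue the merge command

lvchange -ay /dev/vg0/vm1             # reactivate the source volume
mount /dev/vg0/vm1 /var/lib/xen/images
# The snapshot volume is deleted automatically when the merge completes;
# restart the virtual machine only after that point.
```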
| Location | Change |
|---|---|
| | Added information about how to prepare for a SLES 10 SP4 to SLES 11 upgrade if the system device is managed by EVMS. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | Added information about thin provisioning of LVM logical volumes. |
Chapter 15, Mass Storage over IP Networks: iSCSI LIO Target Server is new.
| Location | Change |
|---|---|
| | This section is new. |
| Section 7.2.11.1, “Storage Arrays That Are Automatically Detected for Multipathing” | Updated the list of storage arrays that have a default definition in the |
| | The |
| | This section is new. |
| Section 7.4.3, “Configuring the Device Drivers in initrd for Multipathing” | Added information about the SCSI hardware handlers for |
| Section 7.7, “Configuring Default Policies for Polling, Queueing, and Failback” | Updated the default values list to annotate the deprecated attributes |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | The default for; The; Added |
| Section 7.14, “Scanning for New Devices without Rebooting” Section 7.15, “Scanning for New Partitioned Devices without Rebooting” | Warning: In EMC PowerPath environments, do not use the |
| Location | Change |
|---|---|
| Section 10.3.3, “Creating a Complex RAID10 with the YaST Partitioner” | This section is new. |
| Location | Change |
|---|---|
| | The cleanup frequency is defined in the Snapper configuration for the mount point. Lower the TIMELINE_LIMIT parameters for daily, monthly, and yearly to reduce how long snapshots are retained and how many are kept. |
| | This section is new. |
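For example, the retention limits live in the Snapper configuration for the mount point; a sketch of the relevant variables (the path /etc/snapper/configs/root and the values shown are illustrative, not shipped defaults):

```shell
# /etc/snapper/configs/root (fragment); shell-style variable assignments.
TIMELINE_CLEANUP="yes"       # enable timeline cleanup
TIMELINE_LIMIT_DAILY="7"     # keep at most 7 daily snapshots
TIMELINE_LIMIT_MONTHLY="4"   # keep at most 4 monthly snapshots
TIMELINE_LIMIT_YEARLY="1"    # keep at most 1 yearly snapshot
```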
Chapter 12, Storage Enclosure LED Utilities for MD Software RAIDs is new.
| Location | Change |
|---|---|
| | Added information about thin provisioning for LVM logical volume snapshots. |
| | This section is new. |
| Location | Change |
|---|---|
| | This section is new. |
| Location | Change |
|---|---|
| | This section is new. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| Section 1.2.3.4, “Ext3 File System Inode Size and Number of Inodes” | This section was updated to discuss changes to the default settings for inode size and bytes-per-inode ratio in SLES 11. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| Section 4.6, “Automatically Activating Non-Root LVM Volume Groups” | This section is new. |
| Location | Change |
|---|---|
| | Multiple device support that allows you to grow or shrink the file system. The feature is planned to be available in a future release of the YaST Partitioner. |
Updates were made to the following sections. The changes are explained below.
This section has been modified to focus on the software RAID1 type. Software RAID0 and RAID5 are not supported. They were previously included in error. Additional important changes are noted below.
| Location | Change |
|---|---|
| Section 9.1, “Prerequisites for Using a Software RAID1 Device for the Root Partition” | You need a third device to use for the |
| Step 4 in Section 9.4, “Creating a Software RAID1 Device for the Root (/) Partition” | Create the |
| Step 7.b in Section 9.4, “Creating a Software RAID1 Device for the Root (/) Partition” | Under , select . |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | Important: If you add multipath support after you have configured LVM, you must modify the |
| Location | Change |
|---|---|
| | Ensure that the configuration settings in the |
| Section 7.2.3, “Using WWID, User-Friendly, and Alias Names for Multipathed Devices” | When using links to multipath-mapped devices in the |
| | To accept both raw disks and partitions for Device Mapper names, specify the path as follows, with no hyphen (-) before: filter = [ "a\|/dev/disk/by-id/dm-uuid-.*mpath-.*\|", "r\|.*\|" ] |
| | Ensure that the configuration files for |
| Section 7.9, “Configuring User-Friendly Names or Alias Names” | Before you begin, review the requirements in Section 7.2.3, “Using WWID, User-Friendly, and Alias Names for Multipathed Devices”. |
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| | This section has been updated to be consistent with File System Support and Sizes (http://www.suse.com/products/server/technical-information/#FileSystem) on the SUSE Linux Enterprise Server Technical Information Web site (http://www.suse.com/products/server/technical-information/). |
| | This section is new. |
Corrections for front matter and typographical errata.
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| Section 7.6, “Creating or Modifying the /etc/multipath.conf File” | Specific examples for configuring devices were moved to a higher organization level. |
| Section 7.6.3, “Verifying the Multipath Setup in the /etc/multipath.conf File” | It is assumed that the |
| Location | Change |
|---|---|
| Section 1.2.3.4, “Ext3 File System Inode Size and Number of Inodes” | To allow space for extended attributes and ACLs for a file on Ext3 file systems, the default inode size for Ext3 was increased from 128 bytes on SLES 10 to 256 bytes on SLES 11. |
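A quick way to see the larger inode size in practice is to create a small file-backed Ext3 file system and inspect it; the image path /tmp/ext3.img is arbitrary, -I 256 is passed explicitly for illustration, and the commands assume the e2fsprogs tools are installed:

```shell
# Create a 16 MB file-backed Ext3 file system and check its inode size.
dd if=/dev/zero of=/tmp/ext3.img bs=1M count=16 status=none
mkfs.ext3 -q -F -I 256 /tmp/ext3.img   # -I 256 makes the SLES 11 default explicit
tune2fs -l /tmp/ext3.img | grep -i "inode size"
```

On a SLES 10 system the same check against an existing volume would typically report 128 bytes instead.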
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | In SLES 11 SP2, the Multipath Tools 0.4.9 and later uses the |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | Added information about using |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | In SLES 11 SP2, the rr_min_io multipath attribute is deprecated and replaced by the rr_min_io_rq attribute. |
| Location | Change |
|---|---|
| Section 7.19.1, “PRIO Settings for Individual Devices Fail After Upgrading to Multipath 0.4.9” | This section is new. |
| Section 7.19.2, “PRIO Settings with Arguments Fail After Upgrading to multipath-tools-0.4.9” | This section is new. |
| | Fixed broken links. |
| Location | Change |
|---|---|
| | The Btrfs tools package is |
Updates were made to the following sections. The changes are explained below.
This section was updated to use the /dev/disk/by-id directory for device paths in all examples.
| Location | Change |
|---|---|
| | Revised the order of the procedures so that you reduce the size of the RAID before you reduce the individual component partition sizes. |
Updates were made to the following sections. The changes are explained below.
This section is new. Open Fibre Channel over Ethernet (OpenFCoE) is supported beginning in SLES 11.
This section is new.
| Location | Change |
|---|---|
| | This section is new. |
This section is new.
| Location | Change |
|---|---|
| Section 7.7, “Configuring Default Policies for Polling, Queueing, and Failback” | The default |
| | In the |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | Recommendations were added for the no_path_retry and failback settings when multipath I/O is used in a cluster environment. |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | The path-selector option names and settings were corrected: |
| Location | Change |
|---|---|
| | This section is new. |
| Location | Change |
|---|---|
| | This section is new. |
| Section 14.5.4, “iSCSI Targets Are Mounted When the Configuration File Is Set to Manual” | This section is new. |
| Location | Change |
|---|---|
| | This section is new. With SUSE Linux Enterprise 11 SP2, the Btrfs file system is supported as the root file system, that is, the file system for the operating system, across all architectures of SUSE Linux Enterprise 11 SP2. |
| | Important: The ReiserFS file system is fully supported for the lifetime of SUSE Linux Enterprise Server 11 specifically for migration purposes. SUSE plans to remove support for creating new ReiserFS file systems starting with SUSE Linux Enterprise Server 12. |
| | This section is new. |
| | The values in this section were updated to current standards. |
| | This section is new. |
| Location | Change |
|---|---|
| Section 5.1, “Guidelines for Resizing” Section 5.4, “Decreasing the Size of an Ext2 or Ext3 File System” | The resize2fs command allows only the Ext3 file system to be resized while it is mounted. The size of an Ext3 volume can be increased or decreased whether the volume is mounted or unmounted. Ext2 and Ext4 file systems must be unmounted before the volume size can be increased or decreased. |
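As a quick offline-resize sketch against a file-backed image (the path /tmp/resize.img is arbitrary; assumes the e2fsprogs tools, no root needed):

```shell
# Build a 64 MB file-backed Ext3 file system, then shrink it while unmounted.
dd if=/dev/zero of=/tmp/resize.img bs=1M count=64 status=none
mkfs.ext3 -q -F /tmp/resize.img
e2fsck -f -p /tmp/resize.img      # resize2fs requires a clean check before shrinking
resize2fs /tmp/resize.img 32M     # decrease the file system to 32 MB
# Growing a mounted Ext3 volume works online: running resize2fs on the device
# with no size argument expands the file system to fill the underlying device.
```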
| Location | Change |
|---|---|
| | The |
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| Section 7.2.4, “Using LVM2 on Multipath Devices” Section 7.13, “Configuring Multipath I/O for an Existing Software RAID” | Running |
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | The default setting for path_grouping_policy changed from multibus to failover in SLES 11. |
| Location | Change |
|---|---|
| | This section is new. |
This release fixes broken links and removes obsolete references.
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| | LVM2 does not restrict the number of physical extents. Having a large number of extents has no impact on I/O performance to the logical volume, but it slows down the LVM tools. |
| Location | Change |
|---|---|
| Tuning the Failover for Specific Host Bus Adapters | This section was removed. For HBA failover guidance, refer to your vendor documentation. |
| Location | Change |
|---|---|
| | Decreasing the size of the file system is supported when the file system is unmounted. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | The discussion and procedure were expanded to explain how to configure a partition that uses the entire disk. The procedure was modified to use the Hard Disk partitioning feature in the YaST Partitioner. |
| All LVM Management sections | Procedures throughout the chapter were modified to use Volume Management in the YaST Partitioner. |
| | This section is new. |
| | This section is new. |
| | This section is new. |
| | This section is new. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | Details were added to the procedure. |
| Location | Change |
|---|---|
| Section 7.9, “Configuring User-Friendly Names or Alias Names” | Using user-friendly names for the root device can result in data loss. Added alternatives from TID 7001133: Recommendations for the usage of user_friendly_names in multipath configurations (http://www.suse.com/support/kb/doc.php?id=7001133). |
| Location | Change |
|---|---|
| | Errata in the example were corrected. |
| Location | Change |
|---|---|
| Section 14.5.1, “Hotplug Doesn’t Work for Mounting iSCSI Targets” | This section is new. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | The example in Step 3 was corrected. |
| Section 7.2.8, “SAN Timeout Settings When the Root Device Is Multipathed” | This section is new. |
| | The file list for a package can vary for different server architectures. For a list of files included in the multipath-tools package, go to the Web page (http://www.suse.com/products/server/technical-information/), find your architecture and select , then search on “multipath-tools” to find the package list for that architecture. |
| | If the SAN device will be used as the root device on the server, modify the timeout settings for the device as described in Section 7.2.8, “SAN Timeout Settings When the Root Device Is Multipathed”. |
| Section 7.6.3, “Verifying the Multipath Setup in the /etc/multipath.conf File” | Added example output for -v3 verbosity. |
| Section 7.12.1.1, “Enabling Multipath I/O at Install Time on an Active/Active Multipath Storage LUN” | This section is new. |
| | This section is new. |
| Location | Change |
|---|---|
| Step 7.g in Section 14.2.2, “Creating iSCSI Targets with YaST” | In the function, the option allows you to export the iSCSI target information, which makes it easier to provide this information to consumers of the resources. |
| | This section is new. |
| Location | Change |
|---|---|
| | The Software RAID HOW-TO has been deprecated. Use the Linux RAID wiki (https://raid.wiki.kernel.org/index.php/Linux_Raid) instead. |
| Location | Change |
|---|---|
| | This section is new. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| Section 9.1, “Prerequisites for Using a Software RAID1 Device for the Root Partition” | Corrected an error in the RAID 0 definition. |
| Location | Change |
|---|---|
| | Added information about using the |
| Section 7.15, “Scanning for New Partitioned Devices without Rebooting” | Added information about using the |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| Section 7.2.4, “Using LVM2 on Multipath Devices” Section 7.13, “Configuring Multipath I/O for an Existing Software RAID” | The -f mpath option changed to -f multipath: mkinitrd -f multipath |
| prio_callout in Section 7.11.2.1, “Understanding Priority Groups and Attributes” | Multipath prio_callout programs are located in shared libraries in |
| Location | Change |
|---|---|
| | The resize2fs utility supports online or offline resizing for the Ext3 file system. |
| Location | Change |
|---|---|
| Section 2.4.11, “Location Change for Multipath Tool Callouts” | This section is new. |
| Section 2.4.12, “Change from mpath to multipath for the mkinitrd -f Option” | This section is new. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| | In the YaST Control Center, select › . |
| Location | Change |
|---|---|
| | The keyword devnode_blacklist has been deprecated and replaced with the keyword blacklist. |
| Section 7.7, “Configuring Default Policies for Polling, Queueing, and Failback” | Changed getuid_callout to getuid. |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | Changed getuid_callout to getuid. |
| Section 7.11.2.1, “Understanding Priority Groups and Attributes” | Added descriptions for the path_selector options least-pending, length-load-balancing, and service-time. |
| Location | Change |
|---|---|
| Section 2.4.10, “Advanced I/O Load-Balancing Options for Multipath” | This section is new. |
Updates were made to the following section. The change is explained below.
| Location | Change |
|---|---|
| | This section is new. |
Updates were made to the following sections. The changes are explained below.
| Location | Change |
|---|---|
| Section 7.12, “Configuring Multipath I/O for the Root Device” | Added Step 4 and Step 6 for System Z. |
| Section 7.15, “Scanning for New Partitioned Devices without Rebooting” | Corrected the syntax for the command lines in Step 2. |
| Section 7.15, “Scanning for New Partitioned Devices without Rebooting” | Step 7 replaces old Step 7 and Step 8. |
| Location | Change |
|---|---|
| | To see the rebuild progress refreshed every second, enter watch -n 1 cat /proc/mdstat |
| Location | Change |
|---|---|
| Section 14.3.1, “Using YaST for the iSCSI Initiator Configuration” | Re-organized material for clarity. Added information about how to use the settings for the Start-up option for iSCSI target devices: |
Updates were made to the following section. The changes are explained below.
| Location | Change |
|---|---|
| Section 7.2.11.1, “Storage Arrays That Are Automatically Detected for Multipathing” | Testing of the IBM zSeries device with multipathing has shown that the dev_loss_tmo parameter should be set to 90 seconds, and the fast_io_fail_tmo parameter should be set to 5 seconds. If you are using zSeries devices, you must manually create and configure the |
| | Multipathing is supported for the |
| Section 7.10, “Configuring Default Settings for zSeries Devices” | This section is new. |
| Section 7.12, “Configuring Multipath I/O for the Root Device” | DM-MP is now available and supported for |
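The recommended zSeries timeouts above would typically be carried in /etc/multipath.conf; a sketch follows. Placing them in the defaults section is an assumption for illustration; your configuration may instead set them in a device section for the specific array.

```
# /etc/multipath.conf fragment (sketch) applying the zSeries recommendations.
defaults {
    dev_loss_tmo     90   # seconds before a lost path's device is removed
    fast_io_fail_tmo 5    # seconds before I/O on a failed path is failed over
}
```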