The features and behavior changes noted in this section were made for SUSE Linux Enterprise Server 11.
In addition to bug fixes, the features and behavior changes in this section were made for the SUSE Linux Enterprise Server 11 SP3 release:
Btrfs quota support for subvolumes on the root
file system has been added in the btrfs(8) command.
YaST supports the iSCSI LIO Target Server software. For information, see Chapter 15, Mass Storage over IP Networks: iSCSI LIO Target Server.
The following enhancements were added for Linux software RAIDs:
Software RAID provides improved support on the Intel RSTe (Rapid Storage Technology Enterprise) platform for RAID levels 0, 1, 4, 5, 6, and 10.
The LEDMON utility supports PCIe-SSD enclosure LEDs for MD software RAIDs. For information, see Chapter 12, Storage Enclosure LED Utilities for MD Software RAIDs.
The RAID wizard in the YaST Partitioner provides an option that lets you specify the order in which the selected devices in a Linux software RAID are used, so that one half of the array resides on one disk subsystem and the other half on a different disk subsystem. For example, if one disk subsystem fails, the system keeps running from the second disk subsystem. For information, see Step 4.d in Section 10.3.3, “Creating a Complex RAID10 with the YaST Partitioner”.
The following enhancements were added for LVM2:
LVM logical volumes can be thinly provisioned. For information, see Section 4.5, “Configuring Logical Volumes”.
Thin pool: The logical volume is a pool of space that is reserved for use with thin volumes. The thin volumes can allocate their needed space from it on demand.
Thin volume: The volume is created as a sparse volume. The volume allocates needed space on demand from a thin pool.
LVM logical volume snapshots can be thinly provisioned. Thin provisioning is assumed if you create a snapshot without a specified size. For information, see Section 17.2, “Creating Linux Snapshots with LVM”.
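The following commands sketch how these thin provisioning features can be used. The volume group name vg1 and the pool, volume, and snapshot names and sizes are examples only:
lvcreate -L 100G --thinpool thinpool vg1
lvcreate -V 10G --thin -n thinvol vg1/thinpool
lvcreate -s -n thinsnap vg1/thinvol
The first command reserves a 100 GB thin pool in volume group vg1, the second creates a 10 GB thin volume that allocates space from the pool on demand, and the third creates a thinly provisioned snapshot of the thin volume because no size is specified.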
The following changes and enhancements were made for multipath I/O:
The mpathpersist(8) utility is new. It can be used
to manage SCSI persistent reservations on Device Mapper Multipath
devices. For information, see
Section 7.3.5, “Linux mpathpersist(8) Utility”.
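For example, the following command sequence is a minimal sketch that registers a key, creates a reservation, and reads back the registered keys. The device name /dev/mapper/mpatha and the key 0x123abc are placeholders:
mpathpersist --out --register --param-sark=0x123abc /dev/mapper/mpatha
mpathpersist --out --reserve --param-rk=0x123abc --prout-type=5 /dev/mapper/mpatha
mpathpersist --in --read-keys /dev/mapper/mpatha
The same key must also be configured with the reservation_key attribute in the /etc/multipath.conf file (described later in this section) so that the multipathd daemon can manage registrations for newly discovered or reinstated paths.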
The following enhancement was added to the
multipath(8) command:
The -r option allows you to force a device map
reload.
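For example, to force a reload of the device maps after you modify the /etc/multipath.conf file:
multipath -r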
The Device Mapper - Multipath tool added the following enhancements
for the /etc/multipath.conf file:
udev_dir.
The udev_dir attribute is deprecated. After you
upgrade to SLES 11 SP3, you can remove the following line from the
defaults section of your
/etc/multipath.conf file:
udev_dir /dev
getuid_callout.
In the defaults section of the
/etc/multipath.conf file, the
getuid_callout attribute is deprecated and
replaced by the uid_attribute parameter. This
parameter is a udev attribute that provides a unique path
identifier. The default value is ID_SERIAL.
After you upgrade to SLES 11 SP3, you can modify the attributes in
the defaults section of your
/etc/multipath.conf file:
Remove the following line from the defaults
section:
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
Add the following line to the defaults section:
uid_attribute "ID_SERIAL"
path_selector.
In the defaults section of the
/etc/multipath.conf file, the default value for
the path_selector attribute was changed from
"round-robin 0" to "service-time
0". The service-time option chooses the
path for the next bunch of I/O based on the amount of outstanding
I/O to the path and its relative throughput.
After you upgrade to SLES 11 SP3, you can modify the attribute value
in the defaults section of your
/etc/multipath.conf file to use the recommended
default:
path_selector "service-time 0"
user_friendly_names.
The user_friendly_names attribute can be
configured in the devices section and in the
multipaths section.
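For example, the following sketch enables user-friendly names for a particular storage array. The vendor and product values are placeholders for your hardware:
devices {
   device {
      vendor "VENDOR"
      product "PRODUCT"
      user_friendly_names yes
   }
}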
max_fds.
The default setting for the max_fds attribute was
changed to max. This allows the multipath daemon
to open as many file descriptors as the system allows when it is
monitoring paths.
After you upgrade to SLES 11 SP3, you can modify the attribute value
in your /etc/multipath.conf file:
max_fds "max"
reservation_key.
In the defaults section or
multipaths section of the
/etc/multipath.conf file, the
reservation_key attribute can be used to assign a
Service Action Reservation Key that is used with the
mpathpersist(8) utility to manage persistent
reservations for Device Mapper Multipath devices. The attribute is
not used by default. If it is not set, the
multipathd daemon does not check for persistent
reservation for newly discovered paths or reinstated paths.
reservation_key <reservation key>
For example:
multipaths {
   multipath {
      wwid XXXXXXXXXXXXXXXX
      alias yellow
      reservation_key 0x123abc
   }
}
For information about setting persistent reservations, see Section 7.3.5, “Linux mpathpersist(8) Utility”.
hardware_handler.
Four SCSI hardware handlers were added in the SCSI layer that can be used with DM-Multipath:
scsi_dh_alua
scsi_dh_rdac
scsi_dh_hp_sw
scsi_dh_emc
These handlers are modules created under the SCSI directory in the Linux kernel. Previously, the hardware handler in the Device Mapper layer was used.
Add the modules to the initrd image, then
specify them in the /etc/multipath.conf file as
hardware handler types alua,
rdac, hp_sw, and
emc. For information about adding the device
drivers to the initrd image, see
Section 7.4.3, “Configuring the Device Drivers in initrd for Multipathing”.
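For example, a device entry that selects the rdac handler might look like the following sketch. The vendor and product values are placeholders for your storage array:
devices {
   device {
      vendor "VENDOR"
      product "PRODUCT"
      hardware_handler "1 rdac"
   }
}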
In addition to bug fixes, the features and behavior changes in this section were made for the SUSE Linux Enterprise Server 11 SP2 release:
Btrfs File System. See Section 1.2.1, “Btrfs”.
Open Fibre Channel over Ethernet. See Chapter 16, Fibre Channel Storage over Ethernet Networks: FCoE.
Tagging for LVM storage objects. See Section 4.7, “Tagging LVM2 Storage Objects”.
NFSv4 ACLs tools. See Chapter 18, Managing Access Control Lists over NFSv4.
--assume-clean option for mdadm
resize command. See
Section 11.2.2, “Increasing the Size of the RAID Array”.
In the defaults section of the
/etc/multipath.conf file, the
default_getuid parameter was obsoleted and replaced
by the getuid_callout parameter:
The line changed from this:
default_getuid "/sbin/scsi_id -g -u -s /block/%n"
to this:
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
In addition to bug fixes, the features and behavior changes noted in this section were made for the SUSE Linux Enterprise Server 11 SP1 release.
Section 2.3.2, “Modifying Authentication Parameters in the iSCSI Initiator”
Section 2.3.3, “Allowing Persistent Reservations for MPIO Devices”
Section 2.3.5, “Boot Loader Support for MDRAID External Metadata”
Section 2.3.6, “YaST Install and Boot Support for MDRAID External Metadata”
Section 2.3.7, “Improved Shutdown for MDRAID Arrays that Contain the Root File System”
Section 2.3.11, “Updating Storage Drivers for Adapters on IBM Servers”
In the YaST iSCSI Target function, an option was added that allows you to export the iSCSI target information. This makes it easier to provide information to consumers of the resources.
In the YaST iSCSI Initiator function, you can modify the authentication parameters for connecting to a target device. Previously, you needed to delete the entry and re-create it in order to change the authentication information.
A SCSI initiator can issue SCSI reservations for a shared storage device, which locks out SCSI initiators on other servers from accessing the device. These reservations persist across SCSI resets that might happen as part of the SCSI exception handling process.
The following are possible scenarios where SCSI reservations would be useful:
In a simple SAN environment, persistent SCSI reservations help protect against administrator errors where a LUN is added to one server while it is already in use by another server, which might result in data corruption. SAN zoning is typically used to prevent this type of error.
In a high-availability environment with failover set up, persistent SCSI reservations help protect against errant servers connecting to SCSI devices that are reserved by other servers.
Use the latest version of the Multiple Devices Administration (MDADM,
mdadm) utility to take advantage of bug fixes and
improvements.
Support was added to use the external metadata capabilities of the MDADM utility version 3.0 to install and run the operating system from RAID volumes defined by the Intel Matrix Storage Technology metadata format. This moves the functionality from the Device Mapper RAID (DMRAID) infrastructure to the Multiple Devices RAID (MDRAID) infrastructure, which offers the more mature RAID 5 implementation and the wider feature set of the MD kernel infrastructure. It allows a common RAID driver to be used across all metadata formats, including Intel, DDF (common RAID disk data format), and native MD metadata.
The YaST installer tool added support for MDRAID External Metadata for
RAID 0, 1, 10, 5, and 6. The installer can detect RAID arrays and
whether the platform RAID capabilities are enabled. If RAID is enabled
in the platform BIOS for Intel Matrix Storage Manager, the installer
offers options for DMRAID, MDRAID (recommended), or none. The
initrd was also modified to support assembling
BIOS-based RAID arrays.
Shutdown scripts were modified to wait until all of the MDRAID arrays are marked clean. The operating system shutdown process now waits for a dirty-bit to be cleared until all MDRAID volumes have finished write operations.
Changes were made to the startup script, shutdown script, and the
initrd to consider whether the root
(/) file system (the system volume that contains
the operating system and application files) resides on a software RAID
array. The metadata handler for the array is started early in the
shutdown process to monitor the final root file system environment
during the shutdown. The handler is excluded from the general
killall events. The process also allows for writes
to be quiesced and for the array’s metadata dirty-bit (which
indicates whether an array needs to be resynchronized) to be cleared at
the end of the shutdown.
The YaST installer now allows MD to be configured over iSCSI devices.
If RAID arrays are needed on boot, the iSCSI initiator software is
loaded before boot.md so that the iSCSI targets
are available to be auto-configured for the RAID.
For a new install, Libstorage creates an
/etc/mdadm.conf file and adds the line
AUTO -all. During an update, the line is not added.
If /etc/mdadm.conf contains the line
AUTO -all
then no RAID arrays are auto-assembled unless they are explicitly
listed in /etc/mdadm.conf.
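For example, an /etc/mdadm.conf file that auto-assembles only one explicitly listed array might look like the following sketch, where the UUID value is a placeholder for the UUID of your array:
DEVICE partitions
ARRAY /dev/md0 UUID=<uuid>
AUTO -all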
The MD-SGPIO utility is a standalone application that monitors RAID
arrays via the sysfs file system. Events trigger an LED change
request that controls blinking for LED lights that are associated with
each slot in an enclosure or a drive bay of a storage subsystem. It
supports two types of LED systems:
2-LED systems (Activity LED, Status LED)
3-LED systems (Activity LED, Locate LED, Fail LED)
The lvresize, lvextend, and
lvreduce commands that are used to resize logical
volumes were modified to allow the resizing of LVM 2 mirrors.
Previously, these commands reported errors if the logical volume was a
mirror.
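For example, the following command grows a mirrored logical volume by 10 GB. The volume group and volume names are examples only:
lvextend -L +10G /dev/vg1/mirror_lv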
Update the following storage drivers to use the latest available versions to support storage adapters on IBM servers:
Adaptec: aacraid, aic94xx
Emulex: lpfc
LSI: mptsas, megaraid_sas
The mptsas driver now supports native EEH
(Enhanced Error Handler) recovery, which is a key feature for all of
the IO devices for Power platform customers.
qLogic: qla2xxx, qla3xxx,
qla4xxx
The features and behavior changes noted in this section were made for the SUSE Linux Enterprise Server 11 release.
Section 2.4.5, “OCFS2 File System Is in the High Availability Release”
Section 2.4.7, “Device Name Persistence in the /dev/disk/by-id Directory”
Section 2.4.9, “User-Friendly Names for Multipathed Devices”
Section 2.4.10, “Advanced I/O Load-Balancing Options for Multipath”
Section 2.4.11, “Location Change for Multipath Tool Callouts”
Section 2.4.12, “Change from mpath to multipath for the mkinitrd -f Option”
The Enterprise Volume Management Systems (EVMS2) storage management solution is deprecated. All EVMS management modules have been removed from the SUSE Linux Enterprise Server 11 packages. Your non-system EVMS-managed devices should be automatically recognized and managed by Linux Volume Manager 2 (LVM2) when you upgrade your system.
If you have EVMS managing the system device (any device that contains
the root (/), /boot, or
swap), do the following to prepare the SLES 10
server before you reboot it to upgrade:
In the /etc/fstab file, modify the boot and swap disks to the default
/dev/system/sys_lx directory:
Remove /evms/lvm2 from the path for the
swap and root (/)
partitions.
Remove /evms from the path for
/boot partition.
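For example (a sketch, assuming an EVMS-managed root volume; the volume name and mount options are placeholders), an entry such as
/dev/evms/lvm2/system/<volume> / ext3 defaults 1 1
would become
/dev/system/<volume> / ext3 defaults 1 1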
In the /boot/grub/menu.lst file, remove
/evms/lvm2 from the path.
In the /etc/sysconfig/bootloader file, verify
that the path for the boot device is the /dev
directory.
Ensure that boot.lvm and
boot.md are enabled:
In YaST, open the System Services (Runlevel) editor.
Select boot.lvm and enable the service.
Select boot.md and enable the service.
Apply your changes, then close YaST.
Reboot and start the upgrade.
For information about managing storage with EVMS2 on SUSE Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 SP3: Storage Administration Guide (http://www.suse.com/documentation/sles10/stor_admin/data/bookinfo.html).
The Ext3 file system has replaced ReiserFS as the default file system recommended by the YaST tools at installation time and when you create file systems. ReiserFS is still supported. For more information, see File System Support (http://www.suse.com/products/server/technical-information/?tab=2) on the SUSE Linux Enterprise 11 Tech Specs Web page.
To allow space for extended attributes and ACLs for a file on Ext3 file systems, the default inode size for Ext3 was increased from 128 bytes on SLES 10 to 256 bytes on SLES 11. For information, see Section 1.2.3.4, “Ext3 File System Inode Size and Number of Inodes”.
The JFS file system is no longer supported. The JFS utilities were removed from the distribution.
The OCFS2 file system is fully supported as part of the SUSE Linux Enterprise High Availability Extension.
The /dev/disk/by-name path is deprecated in SUSE
Linux Enterprise Server 11 packages.
In SUSE Linux Enterprise Server 11, the default multipath setup relies
on udev to overwrite the existing symbolic links in
the /dev/disk/by-id directory when multipathing is
started. Before you start multipathing, the link points to the SCSI
device by using its scsi-xxx name. When
multipathing is running, the symbolic link points to the device by
using its dm-uuid-xxx name. This ensures that the
symbolic links in the /dev/disk/by-id path
persistently point to the same device regardless of whether
multipathing is started or not. The configuration files (such as
lvm.conf and md.conf) do not
need to be modified because they automatically point to the correct
device.
See the following sections for more information about how this behavior change affects other features:
The deprecation of the /dev/disk/by-name directory
(as described in Section 2.4.6, “/dev/disk/by-name Is Deprecated”)
affects how you set up filters for multipathed devices in the
configuration files. If you used the
/dev/disk/by-name device name path for the
multipath device filters in the /etc/lvm/lvm.conf
file, you need to modify the file to use the
/dev/disk/by-id path. Consider the following when
setting up filters that use the by-id path:
The /dev/disk/by-id/scsi-* device names are
persistent and created for exactly this purpose.
Do not use the /dev/disk/by-id/dm-* name in the
filters. These are symbolic links to the Device-Mapper devices, and
result in reporting duplicate PVs in response to a
pvscan command. The names appear to change from
LVM-pvuuid to dm-uuid and
back to LVM-pvuuid.
For information about setting up filters, see Section 7.2.4, “Using LVM2 on Multipath Devices”.
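For example, a filter entry in the devices section of the /etc/lvm/lvm.conf file that accepts only the persistent by-id names might look like the following sketch. Adjust the patterns for your environment:
filter = [ "a|/dev/disk/by-id/scsi-.*|", "r|.*|" ]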
A change in how multipathed device names are handled in the
/dev/disk/by-id directory (as described in
Section 2.4.7, “Device Name Persistence in the /dev/disk/by-id Directory”) affects your
setup for user-friendly names because the two names for the device
differ. You must modify the configuration files to scan only the device
mapper names after multipathing is configured.
For example, you need to modify the lvm.conf file
to scan using the multipathed device names by specifying the
/dev/disk/by-id/dm-uuid-.*-mpath-.* path instead
of /dev/disk/by-id.
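For example, a minimal filter sketch that scans only the multipathed device mapper names and rejects everything else:
filter = [ "a|/dev/disk/by-id/dm-uuid-.*-mpath-.*|", "r|.*|" ]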
The following advanced I/O load-balancing options are available for Device Mapper Multipath, in addition to round-robin:
Least-pending
Length-load-balancing
Service-time
For information, see path_selector in Section 7.11.2.1, “Understanding Priority Groups and Attributes”.
The mpath_* prio_callouts for the Device Mapper
Multipath tool have been moved to shared libraries
in /lib/libmultipath/lib*. By using shared
libraries, the callouts are loaded into memory on daemon startup. This
helps avoid a system deadlock on an all-paths-down scenario where the
programs need to be loaded from the disk, which might not be available
at this point.
The option for adding Device Mapper Multipath services to the
initrd has changed from -f
mpath to -f multipath.
To make a new initrd, the command is now:
mkinitrd -f multipath
The default setting for the path_grouping_policy in
the /etc/multipath.conf file has changed from
multibus to failover.
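If you prefer to set the value explicitly rather than rely on the new default, a minimal entry in the defaults section of the /etc/multipath.conf file looks like this:
defaults {
   path_grouping_policy failover
}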
For information about configuring the path_grouping_policy, see Section 7.11, “Configuring Path Failover Policies and Priorities”.