This section briefly describes the principles behind the Logical Volume Manager (LVM) and the basic features that make it useful in many circumstances. The YaST LVM configuration can be reached from the YaST Expert Partitioner. This partitioning tool enables you to edit and delete existing partitions and create new ones for use with LVM.
Using LVM might be associated with increased risk, such as data loss. Risks also include application crashes, power failures, and faulty commands. Save your data before implementing LVM or reconfiguring volumes. Never work without a backup.
LVM enables flexible distribution of hard disk space over several physical volumes (hard disks, partitions, LUNs). It was developed because the need to change the segmentation of hard disk space might arise only after the initial partitioning has already been done during installation. Because it is difficult to modify partitions on a running system, LVM provides a virtual pool (volume group or VG) of disk space from which logical volumes (LVs) can be created as needed. The operating system accesses these LVs instead of the physical partitions. Volume groups can span more than one disk, so that several disks or parts of them can constitute one single VG. In this way, LVM provides a kind of abstraction from the physical disk space that allows its segmentation to be changed in a much easier and safer way than through physical repartitioning.
Figure 4.1, “Physical Partitioning versus LVM” compares physical partitioning (left) with LVM segmentation (right). On the left side, one single disk has been divided into three physical partitions (PART), each with a mount point (MP) assigned so that the operating system can access them. On the right side, two disks have been divided into two and three physical partitions each. Two LVM volume groups (VG 1 and VG 2) have been defined. VG 1 contains two partitions from DISK 1 and one from DISK 2. VG 2 contains the remaining two partitions from DISK 2.
In LVM, the physical disk partitions that are incorporated in a volume group are called physical volumes (PVs). Within the volume groups in Figure 4.1, “Physical Partitioning versus LVM”, four logical volumes (LV 1 through LV 4) have been defined, which can be used by the operating system via the associated mount points (MP). The border between different logical volumes need not be aligned with any partition border. See the border between LV 1 and LV 2 in this example.
LVM features:
Several hard disks or partitions can be combined in a large logical volume.
Provided the configuration is suitable, an LV (such as
/usr) can be enlarged when the free space is
exhausted.
Using LVM, it is possible to add hard disks or LVs in a running system. However, this requires hot-swappable hardware that is capable of such actions.
It is possible to activate a striping mode that distributes the data stream of a logical volume over several physical volumes. If these physical volumes reside on different disks, this can improve the reading and writing performance like RAID 0.
The snapshot feature enables consistent backups (especially for servers) in the running system.
With these features, using LVM already makes sense for heavily used home PCs or small servers. If you have a growing data stock, as in the case of databases, music archives, or user directories, LVM is especially useful. It allows file systems that are larger than the physical hard disk. Another advantage of LVM is that up to 256 LVs can be added. However, keep in mind that working with LVM is different from working with conventional partitions.
You can manage new or existing LVM storage objects by using the YaST Partitioner. Instructions and further information about configuring LVM are available in the official LVM HOWTO (http://tldp.org/HOWTO/LVM-HOWTO/).
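For readers who prefer the command line over YaST, the same workflow can be sketched with the standard LVM tools. The device names (/dev/sdb1, /dev/sdc1), the group name vg1, and the volume name data are hypothetical placeholders; the commands are echoed rather than executed so the sequence can be reviewed before running it for real.

```shell
# Command-line sketch of the LVM workflow described above.
# /dev/sdb1 and /dev/sdc1 are hypothetical unused partitions.
run() { echo "$@"; }    # replace 'echo' with 'sudo' to execute for real

run pvcreate /dev/sdb1 /dev/sdc1        # initialize physical volumes
run vgcreate vg1 /dev/sdb1 /dev/sdc1    # pool them into a volume group
run lvcreate -L 10G -n data vg1         # carve out a logical volume
run mkfs.ext4 /dev/vg1/data             # format it like any partition
run mount /dev/vg1/data /mnt/data       # and mount it
```

Because every command goes through the `run` wrapper, the sketch only prints what it would do; this mirrors the warning above to never work without a backup.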
If you add multipath support after you have configured LVM, you must
modify the /etc/lvm/lvm.conf file to scan only the
multipath device names in the /dev/disk/by-id
directory as described in
Section 15.12, “Using LVM2 on Multipath Devices”, then reboot
the server.
An LVM volume group (VG) organizes the Linux LVM partitions into a logical
pool of space. You can carve out logical volumes from the available space
in the group. The Linux LVM partitions in a group can be on the same or
different disks. You can add partitions or entire disks to expand the size
of the group. If you want to use an entire disk, it must not contain any
partitions. If using partitions, they must not be mounted. YaST will
automatically change their partition type to 0x8E Linux
LVM when adding them to a VG.
Launch YaST and open the Partitioner.
In case you need to reconfigure your existing partitioning setup, proceed as follows. Refer to Section “Using the YaST Partitioner”, Chapter 15, Advanced Disk Setup, Deployment Guide for details. Skip this step if you only want to make use of unused disks or partitions that already exist.
To use an entire hard disk that already contains partitions, delete all partitions on that disk.
To use a partition that is currently mounted, unmount it.
To use unpartitioned, free space on a hard disk, create a new primary
or logical partition on that disk. Set its type to 0x8E
Linux LVM. Do not format or mount it.
In the left panel, select Volume Management.
A list of existing Volume Groups opens in the right panel.
At the lower left of the Volume Management page, click Add › Volume Group.
Define the volume group as follows:
Specify the Volume Group Name.
If you are creating a volume group at install time, the name
system is suggested for a volume group that will
contain the SUSE Linux Enterprise Server system files.
Specify the Physical Extent Size.
The physical extent size defines the size of a physical block in the volume group. All the disk space in a volume group is handled in chunks of this size. Values can be from 1 KB to 16 GB in powers of 2. This value is normally set to 4 MB.
In LVM1, a 4 MB physical extent allowed a maximum LV size of 256 GB because it supports only up to 65534 extents per LV. LVM2, which is used on SUSE Linux Enterprise Server, does not restrict the number of physical extents. Having a large number of extents has no impact on I/O performance to the logical volume, but it slows down the LVM tools.
Different physical extent sizes should not be mixed in a single VG. The extent size should not be modified after the initial setup.
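The LVM1 limit mentioned above is simple arithmetic: 65534 extents per LV multiplied by the extent size gives the maximum LV size. A quick shell check, assuming the default 4 MiB extents:

```shell
# Maximum LVM1 logical volume size with 4 MiB physical extents:
# 65534 extents x 4 MiB = 262136 MiB, just under 256 GiB (262144 MiB).
extent_mib=4
max_extents=65534
max_lv_mib=$((max_extents * extent_mib))
echo "max LV size: ${max_lv_mib} MiB"
```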
In the Available Physical Volumes list, select the Linux LVM partitions that you want to make part of this volume group, then click Add to move them to the Selected Physical Volumes list.
Click Finish.
The new group appears in the list.
On the Volume Management page, click Next, verify that the new volume group is listed, then click Finish.
To check which physical devices are part of the volume group, open the YaST Partitioner at any time in the running system and click Volume Management › Edit › Physical Devices. Leave this screen with Abort.
A volume group provides a pool of space, similar to what a hard disk does. To make this space usable, you need to define logical volumes. A logical volume is similar to a regular partition—you can format and mount it.
Use the YaST Partitioner to create logical volumes from an existing volume group. Assign at least one logical volume to each volume group. You can create new logical volumes as needed until all free space in the volume group has been exhausted.
An LVM logical volume can optionally be thinly provisioned. Thin provisioning allows you to create logical volumes with sizes that overbook the available free space. You create a thin pool that contains unused space reserved for use with an arbitrary number of thin volumes. A thin volume is created as a sparse volume and space is allocated from a thin pool as needed. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Thinly provisioned volumes also support snapshots which can be managed with Snapper—see Chapter 4, System Recovery and Snapshot Management with Snapper, Administration Guide for more information.
To use thinly provisioned volumes in a cluster, the thin pool and the thin volumes that use it must be managed in a single cluster resource. This allows the thin volumes and thin pool to always be mounted exclusively on the same node.
Normal volume: (Default) The volume’s space is allocated immediately.
Thin pool: The logical volume is a pool of space that is reserved for use with thin volumes. The thin volumes can allocate their needed space from it on demand.
Thin volume: The volume is created as a sparse volume. The volume allocates needed space on demand from a thin pool.
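How the three types relate can be sketched on the command line as well (the volume group vg1 and the pool and volume names are hypothetical; the commands are echoed rather than executed):

```shell
run() { echo "$@"; }    # replace 'echo' with 'sudo' to execute for real

# Reserve a 5 GiB thin pool, then overbook it with two 4 GiB thin
# volumes -- each allocates real space from the pool only on demand.
run lvcreate -L 5G --thinpool pool0 vg1
run lvcreate -T vg1/pool0 -V 4G -n thin1
run lvcreate -T vg1/pool0 -V 4G -n thin2
```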
Launch YaST and open the Partitioner.
In the left panel, select Volume Management. A list of existing Volume Groups opens in the right panel.
Select the volume group in which you want to create the volume and choose Add › Logical Volume.
Provide a Name for the volume and choose the Type (refer to Section 4.3.1, “Thinly Provisioned Logical Volumes” for setting up thinly provisioned volumes). Proceed with Next.
Specify the size of the volume and whether to use multiple stripes.
Using a striped volume, the data will be distributed among several physical volumes. If these physical volumes reside on different hard disks, this generally results in better reading and writing performance (like RAID 0). The maximum number of available stripes is equal to the number of physical volumes. The default (1) is to not use multiple stripes.
Choose a Role for the volume. Your choice here only affects the default values for the upcoming dialog. They can be changed in the next step. If in doubt, choose Raw Volume (Unformatted).
Under Formatting Options, select Format Partition, then select the File system. The available options depend on the file system. Usually there is no need to change the defaults.
Under Mounting Options, select Mount partition, then select the mount point. Click Fstab Options to add special mounting options for the volume.
Click Finish.
Click Next, verify that the changes are listed, then click Finish.
An LVM logical volume can optionally be thinly provisioned. Thin provisioning allows you to create logical volumes with sizes that overbook the available free space. You create a thin pool that contains unused space reserved for use with an arbitrary number of thin volumes. A thin volume is created as a sparse volume and space is allocated from a thin pool as needed. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Thinly provisioned volumes also support snapshots which can be managed with Snapper—see Chapter 4, System Recovery and Snapshot Management with Snapper, Administration Guide for more information.
To set up a thinly provisioned logical volume, proceed as described in Procedure 4.1, “Setting Up a Logical Volume”. When it comes to choosing the volume type, do not choose Normal Volume, but rather Thin Pool or Thin Volume.
Thin pool: The logical volume is a pool of space that is reserved for use with thin volumes. The thin volumes can allocate their needed space from it on demand.
Thin volume: The volume is created as a sparse volume. The volume allocates needed space on demand from a thin pool.
To use thinly provisioned volumes in a cluster, the thin pool and the thin volumes that use it must be managed in a single cluster resource. This allows the thin volumes and thin pool to always be mounted exclusively on the same node.
The space provided by a volume group can be expanded at any time in the running system without service interruption by adding more physical volumes. This will allow you to add logical volumes to the group or to expand the size of existing volumes as described in Section 4.5, “Resizing a Logical Volume”.
It is also possible to reduce the size of the volume group by removing physical volumes. YaST only allows you to remove physical volumes that are currently unused. To find out which physical volumes are currently in use, run the following command. The partitions (physical volumes) listed in the PE Ranges column are the ones in use:
tux > sudo pvs -o vg_name,lv_name,pv_name,seg_pe_ranges
root's password:
VG LV PV PE Ranges
/dev/sda1
DATA DEVEL /dev/sda5 /dev/sda5:0-3839
DATA /dev/sda5
DATA LOCAL /dev/sda6 /dev/sda6:0-2559
DATA /dev/sda7
DATA /dev/sdb1
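The same check can be scripted. The sketch below runs awk over a saved copy of the output (the sample here stands in for the live listing above) and prints the physical volumes that never appear with a PE range, that is, the candidates for removal:

```shell
# Print PVs from 'pvs -o vg_name,lv_name,pv_name,seg_pe_ranges --noheadings'
# that never carry a PE range and are therefore unused. The sample below
# mirrors the output shown above; pipe in live pvs output in practice.
sample='
DATA DEVEL /dev/sda5 /dev/sda5:0-3839
DATA /dev/sda5
DATA LOCAL /dev/sda6 /dev/sda6:0-2559
DATA /dev/sda7
DATA /dev/sdb1
DATA /dev/sdc1
'
unused=$(echo "$sample" | awk 'NF {
  if ($NF ~ /:/) used[$(NF-1)] = 1; else seen[$NF] = 1
} END { for (p in seen) if (!(p in used)) print p }' | sort)
echo "$unused"
```

Lines whose last field contains a colon are PE ranges, so the preceding field is a PV in use; PVs that only ever appear without a range are reported as removable.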
DATA /dev/sdc1
Launch YaST and open the Partitioner.
In the left panel, select Volume Management. A list of existing Volume Groups opens in the right panel.
Select the volume group you want to change, then click Resize.
Do one of the following:
Add: Expand the size of the volume group by moving one or more physical volumes (LVM partitions) from the Available Physical Volumes list to the Selected Physical Volumes list.
Remove: Reduce the size of the volume group by moving one or more physical volumes (LVM partitions) from the Selected Physical Volumes list to the Available Physical Volumes list.
Click Finish.
Click Next, verify that the changes are listed, then click Finish.
In case there is unused free space available in the volume group, you can enlarge a logical volume to provide more usable space. You may also reduce the size of a volume to free space in the volume group that can be used by other logical volumes.
When reducing the size of a volume, YaST automatically resizes its file system, too. Whether a volume that is currently mounted can be resized “online” (that is, while being mounted) depends on its file system. Growing the file system online is supported by Btrfs, XFS, Ext3, and ReiserFS.
Shrinking the file system online is only supported by Btrfs. To shrink Ext2/3/4 and ReiserFS volumes, you need to unmount them. Shrinking volumes formatted with XFS is not possible, since XFS does not support file system shrinking.
Launch YaST and open the Partitioner.
In the left panel, select Volume Management. A list of existing Volume Groups opens in the right panel.
Select the logical volume you want to change, then click Resize.
Set the intended size by using one of the following options:
Maximum Size. Expand the size of the logical volume to use all space left in the volume group.
Minimum Size. Reduce the size of the logical volume to the size occupied by the data and the file system metadata.
Custom Size. Specify the new size for the volume. The value must be within the range of the minimum and maximum values listed above. Use K, M, G, T for Kilobytes, Megabytes, Gigabytes and Terabytes (for example 20G).
Click OK.
Click Next, verify that the change is listed, then click Finish.
Deleting a volume group destroys all of the data in each of its member partitions. Deleting a logical volume destroys all data stored on the volume.
Launch YaST and open the Partitioner.
In the left panel, select Volume Management. A list of existing volume groups opens in the right panel.
Select the volume group or the logical volume you want to remove and click Delete.
Depending on your choice, warning dialogs are shown. Confirm them with Yes.
Click Next, verify that the deleted volume group is listed (deletion is indicated by a red colored font), then click Finish.
For information about using LVM commands, see the man pages for the commands described in the following table. All commands need to be executed with root privileges. Either use sudo (recommended) or execute the commands directly as root.
| Command | Description |
|---|---|
| `pvcreate device` | Initializes a device for use as a physical volume. |
| `pvdisplay device` | Displays information about the LVM physical volume, such as whether it is currently being used in a logical volume. |
| `vgcreate -c y vg_name dev1 [dev2...]` | Creates a clustered volume group with one or more specified devices. |
| `vgchange -a [ey\|n] vg_name` | Activates (`-a ey`) or deactivates (`-a n`) a volume group and its logical volumes for input/output. Important: when activating a volume group in a cluster, ensure that you use the `ey` option to exclusively activate it on one node. |
| `vgremove vg_name` | Removes a volume group. Before using this command, remove the logical volumes, then deactivate the volume group. |
| `vgdisplay vg_name` | Displays information about a specified volume group. To find the total physical extent of a volume group, enter `vgdisplay vg_name \| grep "Total PE"`. |
| `lvcreate -L size -n lv_name vg_name` | Creates a logical volume of the specified size. |
| `lvcreate -L size --thinpool pool_name vg_name` | Creates a thin pool named `pool_name` of the specified size within the volume group `vg_name`. For example, a 5 GB thin pool: `lvcreate -L 5G --thinpool pool_name vg_name`. |
| `lvcreate -T vg_name/pool_name -V size -n lv_name` | Creates a thin logical volume within the pool `pool_name`. For example, a 1 GB thin volume: `lvcreate -T vg_name/pool_name -V 1G -n lv_name`. |
| `lvcreate -T vg_name/pool_name -V size -L size -n lv_name` | Combines thin pool and thin logical volume creation in one command. |
| `lvcreate -s [-L size] -n snap_volume source_volume_path` | Creates a snapshot volume for the specified logical volume. If the size option (`-L`, `--size`) is not included, the snapshot is created as a thin snapshot. |
| `lvremove /dev/vg_name/lv_name` | Removes a logical volume. Before using this command, close the logical volume by dismounting it with the `umount` command. |
| `lvremove snap_volume_path` | Removes a snapshot volume. |
| `lvconvert --merge snap_volume_path` | Reverts the logical volume to the version of the snapshot. |
| `vgextend vg_name device` | Adds the specified device (physical volume) to an existing volume group. |
| `vgreduce vg_name device` | Removes a specified physical volume from an existing volume group. Important: ensure that the physical volume is not currently being used by a logical volume. If it is, move the data to another physical volume with the `pvmove` command first. |
| `lvextend -L size /dev/vg_name/lv_name` | Extends the size of a specified logical volume. Afterwards, you must also expand the file system to take advantage of the newly available space. See Chapter 2, Resizing File Systems for details. |
| `lvreduce -L size /dev/vg_name/lv_name` | Reduces the size of a specified logical volume. Important: reduce the size of the file system before shrinking the volume, otherwise you risk losing data. See Chapter 2, Resizing File Systems for details. |
| `lvrename /dev/vg_name/lv_name /dev/vg_name/new_lv_name` | Renames an existing LVM logical volume. It does not change the volume group name. |
The lvresize, lvextend, and
lvreduce commands are used to resize logical volumes.
See the man pages for each of these commands for syntax and options
information. To extend an LV there must be enough unallocated space
available on the VG.
The recommended way to grow or shrink a logical volume is to use the YaST Partitioner. When using YaST, the size of the file system in the volume will automatically be adjusted, too.
LVs can be extended or shrunk manually while they are being used, but this may not be true for a file system on them. Extending or shrinking the LV does not automatically modify the size of file systems in the volume. You must use a different command to grow the file system afterwards. For information about resizing file systems, see Chapter 2, Resizing File Systems.
Ensure that you use the right sequence when manually resizing an LV:
If you extend an LV, you must extend the LV before you attempt to grow the file system.
If you shrink an LV, you must shrink the file system before you attempt to shrink the LV.
To extend the size of a logical volume:
Open a terminal console.
If the logical volume contains an Ext2 or Ext4 file system, which do not support online growing, dismount it. In case it contains file systems that are hosted for a virtual machine (such as a Xen VM), shut down the VM first.
At the terminal console prompt, enter the following command to grow the size of the logical volume:
sudo lvextend -L +size /dev/vg_name/lv_name
For size, specify the amount of space you
want to add to the logical volume, such as 10 GB. Replace
/dev/vg_name/lv_name
with the Linux path to the logical volume, such as
/dev/LOCAL/DATA. For example:
sudo lvextend -L +10G /dev/vg1/v1
Adjust the size of the file system. See Chapter 2, Resizing File Systems for details.
In case you have dismounted the file system, mount it again.
For example, to extend an LV with a (mounted and active) Btrfs on it by 10 GB:
sudo lvextend -L +10G /dev/LOCAL/DATA
sudo btrfs filesystem resize +10G /dev/LOCAL/DATA
To shrink the size of a logical volume:
Open a terminal console.
If the logical volume does not contain a Btrfs file system, dismount it. In case it contains file systems that are hosted for a virtual machine (such as a Xen VM), shut down the VM first. Note that volumes with the XFS file system cannot be reduced in size.
Adjust the size of the file system. See Chapter 2, Resizing File Systems for details.
At the terminal console prompt, enter the following command to shrink the size of the logical volume to the size of the file system:
sudo lvreduce /dev/vg_name/lv_name
In case you have dismounted the file system, mount it again.
For example, to shrink an LV with a Btrfs on it by 5 GB:
sudo btrfs filesystem resize -5G /dev/LOCAL/DATA
sudo lvreduce /dev/LOCAL/DATA
lvmetad
Most LVM commands require an accurate view of the LVM metadata stored on the disk devices in the system. With the current LVM design, if this information is not available, LVM must scan all the physical disk devices in the system. This requires a significant amount of I/O in systems that have a large number of disks. If a disk fails to respond, LVM commands might run into a timeout while waiting for the disk.
Dynamic aggregation of LVM metadata via lvmetad provides a solution for this
problem. The purpose of the lvmetad daemon is to eliminate the need for
this scanning by dynamically aggregating metadata information each time
the status of a device changes. These events are signaled to lvmetad by udev rules. If the daemon is not
running, LVM performs a scan as it normally would do.
This feature is disabled by default. To enable it, proceed as follows:
Open a terminal console.
Stop the lvmetad daemon:
sudo systemctl stop lvm2-lvmetad
Edit /etc/lvm/lvm.conf and set
use_lvmetad to 1:
use_lvmetad = 1
Restart the lvmetad daemon:
sudo systemctl start lvm2-lvmetad
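The configuration edit in the steps above can also be scripted. The sketch below uses sed on a local sample file so the pattern can be verified before pointing it at the real /etc/lvm/lvm.conf (as root); the sample file name and the surrounding `global` block are illustrative assumptions:

```shell
# Flip use_lvmetad from 0 to 1. Shown on a sample file; apply the same
# sed to /etc/lvm/lvm.conf once you have confirmed the pattern matches
# your file (sed -i is GNU sed syntax, standard on Linux).
printf 'global {\n    use_lvmetad = 0\n}\n' > lvm.conf.sample
sed -i 's/use_lvmetad = 0/use_lvmetad = 1/' lvm.conf.sample
grep use_lvmetad lvm.conf.sample
```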
A tag is an unordered keyword or term assigned to the metadata of a storage object. Tagging allows you to classify collections of LVM storage objects in ways that you find useful by attaching an unordered list of tags to their metadata.
After you tag the LVM2 storage objects, you can use the tags in commands to accomplish the following tasks:
Select LVM objects for processing according to the presence or absence of specific tags.
Use tags in the configuration file to control which volume groups and logical volumes are activated on a server.
Override settings in a global configuration file by specifying tags in the command.
A tag can be used in place of any command line LVM object reference that accepts:
a list of objects
a single object as long as the tag expands to a single object
Replacing the object name with a tag is not supported everywhere yet. After the arguments are expanded, duplicate arguments in a list are resolved by removing the duplicate arguments, and retaining the first instance of each argument.
Wherever there might be ambiguity of argument type, you must prefix a
tag with the commercial at sign (@) character, such as
@mytag. Elsewhere, using the “@” prefix
is optional.
Consider the following requirements when using tags with LVM:
An LVM tag word can contain the ASCII uppercase characters A to Z, lowercase characters a to z, numbers 0 to 9, underscore (_), plus (+), hyphen (-), and period (.). The word cannot begin with a hyphen. The maximum length is 128 characters.
You can tag LVM2 physical volumes, volume groups, logical volumes, and logical volume segments. Tags for physical volumes are stored in the metadata of their volume group. Deleting a volume group also deletes the tags of the orphaned physical volumes. Snapshots cannot be tagged, but their origin can be tagged.
LVM1 objects cannot be tagged because the disk format does not support it.
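These character rules are easy to check before tagging. The helper below is a sketch (the function name is ours, not part of LVM): only the documented character set, no leading hyphen, at most 128 characters.

```shell
# Validate a candidate LVM tag word: only [A-Za-z0-9_+.-], no leading
# hyphen, at most 128 characters.
is_valid_tag() {
  t=$1
  [ "${#t}" -le 128 ] || return 1
  case $t in -*) return 1 ;; esac
  printf '%s' "$t" | grep -Eq '^[A-Za-z0-9_+.-]+$'
}

for tag in db1 web-tier -bad 'has space'; do
  if is_valid_tag "$tag"; then echo "$tag: valid"; else echo "$tag: invalid"; fi
done
```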
Add a tag to (or tag) an LVM2 storage object. Example:
sudo vgchange --addtag @db1 vg1
Remove a tag from (or untag) an LVM2 storage object. Example:
sudo vgchange --deltag @db1 vg1
Specify the tag to use to narrow the list of volume groups or logical volumes to be activated or deactivated.
Enter the following to activate the volume if it has a tag that matches the tag provided (example):
sudo lvchange -ay --tag @db1 vg1/vol2
lvm.conf File
Add the following code to the /etc/lvm/lvm.conf file to enable host tags that are defined separately on each host in a /etc/lvm/lvm_<hostname>.conf file.
tags {
# Enable hostname tags
hosttags = 1
}
You place the activation code in the
/etc/lvm/lvm_<hostname>.conf
file on the host. See Section 4.8.4.3, “Defining Activation”.
tags {
tag1 { }
# Tag does not require a match to be set.
tag2 {
# If no exact match, tag is not set.
host_list = [ "hostname1", "hostname2" ]
}
}
You can modify the /etc/lvm/lvm.conf file to
activate LVM logical volumes based on tags.
In a text editor, add the following code to the file:
activation {
volume_list = [ "vg1/lvol0", "@database" ]
}
Replace @database with your tag. Use
"@*" to match the tag against any tag set on the
host.
The activation command matches against vgname, vgname/lvname, or @tag set in the metadata of volume groups and logical volumes. A volume group or logical volume is activated only if a metadata tag matches. The default if there is no match is not to activate.
If volume_list is not present and any tags are
defined on the host, then it activates the volume group or logical
volumes only if a host tag matches a metadata tag.
If volume_list is not present and no tags are
defined on the host, then it does not activate.
You can use the activation code in a host’s configuration file
(/etc/lvm/lvm_<host_tag>.conf)
when host tags are enabled in the lvm.conf file.
For example, a server has two configuration files in the
/etc/lvm/ folder:
lvm.conf
lvm_<host_tag>.conf
At start-up, LVM loads the /etc/lvm/lvm.conf file and processes any tag settings in the file. If any host tags were defined, it loads the related /etc/lvm/lvm_<host_tag>.conf file. When it searches for a specific configuration file entry, it searches the host tag file first, then the lvm.conf file, and stops at the first match. Within the lvm_<host_tag>.conf file, use the reverse of the order in which the tags were set. This allows the file for the last tag set to be searched first. New tags set in the host tag file will trigger additional configuration file loads.
You can set up a simple host name activation control by enabling the hosttags option in the /etc/lvm/lvm.conf file. Use the same file on every machine in a cluster so that it is a global setting.
In a text editor, add the following code to the
/etc/lvm/lvm.conf file:
tags {
   hosttags = 1
}
Replicate the file to all hosts in the cluster.
From any machine in the cluster, add db1 to the
list of machines that activate vg1/lvol2:
sudo lvchange --addtag @db1 vg1/lvol2
On the db1 server, enter the following to
activate it:
sudo lvchange -ay vg1/lvol2
The examples in this section demonstrate two methods to accomplish the following:
Activate volume group vg1 only on the database
hosts db1 and db2.
Activate volume group vg2 only on the file
server host fs1.
Activate nothing initially on the file server backup host
fsb1, but be prepared for it to take over from
the file server host fs1.
In the following solution, the single configuration file is replicated among multiple hosts.
Add the @database tag to the metadata of volume
group vg1. In a terminal console, enter
sudo vgchange --addtag @database vg1
Add the @fileserver tag to the metadata of volume
group vg2. In a terminal console, enter
sudo vgchange --addtag @fileserver vg2
In a text editor, modify the /etc/lvm/lvm.conf file with the following code to define the @database, @fileserver, and @fileserverbackup tags.
tags {
database {
host_list = [ "db1", "db2" ]
}
fileserver {
host_list = [ "fs1" ]
}
fileserverbackup {
host_list = [ "fsb1" ]
}
}
activation {
# Activate only if host has a tag that matches a metadata tag
volume_list = [ "@*" ]
}
Replicate the modified /etc/lvm/lvm.conf file
to the four hosts: db1,
db2, fs1, and
fsb1.
If the file server host goes down, vg2 can be
brought up on fsb1 by entering the following
commands in a terminal console on any node:
sudo vgchange --addtag @fileserverbackup vg2
sudo vgchange -ay vg2
In the following solution, each host holds locally the information about which classes of volume to activate.
Add the @database tag to the metadata of volume
group vg1. In a terminal console, enter
sudo vgchange --addtag @database vg1
Add the @fileserver tag to the metadata of volume
group vg2. In a terminal console, enter
sudo vgchange --addtag @fileserver vg2
Enable host tags in the /etc/lvm/lvm.conf file:
In a text editor, modify the
/etc/lvm/lvm.conf file with the following
code to enable host tag configuration files.
tags {
hosttags = 1
}
Replicate the modified /etc/lvm/lvm.conf file
to the four hosts: db1,
db2, fs1, and
fsb1.
On host db1, create an activation configuration file for the database host db1. In a text editor, create the /etc/lvm/lvm_db1.conf file and add the following code:
activation {
volume_list = [ "@database" ]
}
On host db2, create an activation configuration file for the database host db2. In a text editor, create the /etc/lvm/lvm_db2.conf file and add the following code:
activation {
volume_list = [ "@database" ]
}
On host fs1, create an activation configuration file for the file server host fs1. In a text editor, create the /etc/lvm/lvm_fs1.conf file and add the following code:
activation {
volume_list = [ "@fileserver" ]
}
If the file server host fs1 goes down, bring up the spare host fsb1 as a file server as follows:
On host fsb1, create an activation configuration file for the host fsb1. In a text editor, create the /etc/lvm/lvm_fsb1.conf file and add the following code:
activation {
volume_list = [ "@fileserver" ]
}
In a terminal console, enter one of the following commands:
sudo vgchange -ay vg2
sudo vgchange -ay @fileserver