Distributing and sharing file systems over a network is a common task in corporate environments. The well-proven Network File System (NFS) works together with NIS, the yellow pages protocol. For a more secure protocol that works together with LDAP and can also use Kerberos, check NFSv4. Combined with pNFS, you can eliminate performance bottlenecks.
NFS with NIS makes a network transparent to the user. With NFS, it is possible to distribute arbitrary file systems over the network. With an appropriate setup, users always find themselves in the same environment regardless of the terminal they currently use.
The following are terms used in the YaST module.
A directory exported by an NFS server, which clients can integrate into their systems.
The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.
The NFS server provides NFS services to clients. A running server
depends on the following daemons:
nfsd (worker),
idmapd (user and group name
mappings to IDs and vice versa),
statd (file locking), and
mountd (mount requests).
NFSv3 is the version 3 implementation, the “old” stateless NFS that supports client authentication.
NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires only a single port and thus is better suited for environments behind a firewall than NFSv3.
The protocol is specified in RFC 3530 (http://tools.ietf.org/html/rfc3530).
Parallel NFS, a protocol extension of NFSv4. pNFS clients can access the data on an NFS server directly.
For installing and configuring an NFS server, see the SUSE Linux Enterprise Server documentation.
To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.
Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:
Start the YaST NFS client module.
Click in the tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.
When using NFSv4, select in the
tab. Additionally, the must contain the same value as used by the NFSv4
server. The default domain is localdomain.
To use Kerberos authentication for NFS, GSS security must be enabled. Select .
Enable in the tab if you use a firewall and want to allow access to the service from remote computers. The firewall status is displayed next to the check box.
Click to save your changes.
The configuration is written to /etc/fstab and the
specified file systems are mounted. When you start the YaST
configuration client at a later time, it also reads the existing
configuration from this file.
The prerequisite for importing file systems manually from an NFS server
is a running RPC port mapper. The nfs.service takes
care of starting it properly; start it by entering systemctl
start nfs.service as
root. Then remote file systems
can be mounted in the file system like local partitions using
mount:
mount host:remote-path local-path
To import user directories from the nfs.example.com
machine, for example, use:
mount nfs.example.com:/home /home
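Before running mount as root, it can help to confirm that the source argument really has the host:/path form the command expects. The following sketch uses a hypothetical helper function (not part of any NFS tooling) to illustrate the expected syntax:

```shell
# Illustrative helper (hypothetical, not an NFS utility): check that an
# NFS mount source looks like host:/remote/path before calling mount.
is_nfs_source() {
    case "$1" in
        *:/*) return 0 ;;  # contains "host:/..." - plausible NFS source
        *)    return 1 ;;  # a plain local path, not an NFS source
    esac
}

is_nfs_source "nfs.example.com:/home" && echo "valid NFS source"
is_nfs_source "/home" || echo "not an NFS source"
```

A plain local path such as /home fails the check, which catches the common mistake of omitting the server name.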
The autofs daemon can be used to mount remote file systems
automatically. Add the following entry to your
/etc/auto.master file:
/nfsmounts /etc/auto.nfs
Now the /nfsmounts directory acts as the root for
all the NFS mounts on the client if the auto.nfs
file is filled appropriately. The name auto.nfs is
chosen for the sake of convenience—you can choose any name. In
auto.nfs add entries for all the NFS mounts as
follows:
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
Activate the settings with systemctl start
autofs.service as root. In this example,
/nfsmounts/localdata, the
/data directory of
server1, is mounted with NFS and
/nfsmounts/nfs4mount from
server2 is mounted with NFSv4.
If the /etc/auto.master file is edited while the
autofs service is running, the automounter must be restarted
with systemctl restart autofs.service for the
changes to take effect.
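The two map files described above can be sketched end to end. The following snippet writes them to a scratch directory rather than /etc, so it is safe to experiment with; the server names server1 and server2 are the same examples used in the text:

```shell
# Sketch: generate the automounter map files from the example above in
# a scratch directory (the real files belong in /etc).
tmp=$(mktemp -d)

# Master map: everything under /nfsmounts is handled by auto.nfs
cat > "$tmp/auto.master" <<'EOF'
/nfsmounts /etc/auto.nfs
EOF

# Map file: one NFSv3 entry and one NFSv4 entry (example servers)
cat > "$tmp/auto.nfs" <<'EOF'
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
EOF

cat "$tmp/auto.master" "$tmp/auto.nfs"
```

After copying such files into place, systemctl start autofs.service activates the mounts as described above.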
/etc/fstab
A typical NFSv3 mount entry in /etc/fstab looks
like this:
nfs.example.com:/data /local/path nfs rw,noauto 0 0
For NFSv4 mounts, use nfs4 instead of
nfs in the third column:
nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0
The noauto option prevents the file system from
being mounted automatically at start-up. If you want to mount the
respective file system manually, it is possible to shorten the mount
command specifying the mount point only:
mount /local/path
If you do not enter the noauto option, the
system's init scripts will mount those file systems at
start-up.
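Whether an existing entry carries noauto can be checked by inspecting the fourth field of the fstab line, which holds the mount options. A minimal sketch, using the NFSv3 example line from above as sample data:

```shell
# Sketch: test whether an fstab entry carries the noauto option.
# The sample line is the NFSv3 example from this section.
line='nfs.example.com:/data /local/path nfs rw,noauto 0 0'

opts=$(echo "$line" | awk '{print $4}')   # 4th field = mount options
case ",$opts," in
    *,noauto,*) echo "mounted only on request" ;;
    *)          echo "mounted automatically at start-up" ;;
esac
```

The surrounding commas ensure that noauto is matched as a whole option, not as a substring of another option name.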
NFS is one of the oldest protocols, developed in the 1980s. As such, NFS is usually sufficient for sharing small files. However, when you want to transfer big files or many clients want to access data, an NFS server becomes a bottleneck and significantly impacts system performance. This is because files are quickly getting bigger, whereas the relative speed of Ethernet has not fully kept up.
When you request a file from a “normal” NFS server, the server looks up the file metadata, collects all the data and transfers it over the network to your client. However, the performance bottleneck becomes apparent no matter how small or big the files are:
With small files most of the time is spent collecting the metadata.
With big files most of the time is spent transferring the data from server to client.
pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:
A metadata or control server that handles all the non-data traffic
One or more storage server(s) that hold(s) the data
The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the server.
SUSE Linux Enterprise supports pNFS on the client side only.
Proceed as described in
Procedure 24.1, “Importing NFS Directories”, but click the
check box and optionally . YaST will do all the necessary steps and will write
all the required options to the file /etc/fstab.
Refer to Section 24.3.2, “Importing File Systems Manually” to start. Most of the
configuration is done by the NFSv4 server. For pNFS, the only
difference is to add the minorversion option and the
metadata server MDS_SERVER to your
mount command:
mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT
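Since the mount must run as root, it can be convenient to assemble and print the full command first. The following sketch wraps the invocation above in a hypothetical helper; mds.example.com:/export and /mnt/pnfs are placeholder values standing in for MDS_SERVER and MOUNTPOINT:

```shell
# Illustrative helper (hypothetical): assemble and echo the pNFS mount
# command so it can be inspected before running it as root.
pnfs_mount_cmd() {
    mds_server="$1"   # metadata server export, e.g. mds.example.com:/export
    mountpoint="$2"   # local mount point
    # minorversion=1 selects NFSv4.1, the version that adds pNFS
    echo "mount -t nfs4 -o minorversion=1 $mds_server $mountpoint"
}

pnfs_mount_cmd mds.example.com:/export /mnt/pnfs
```

Piping the result to sh (or copying it into a root shell) then performs the actual mount.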
To help with debugging, change the value in the
/proc file system:
echo 32767 > /proc/sys/sunrpc/nfsd_debug
echo 32767 > /proc/sys/sunrpc/nfs_debug
In addition to the man pages of exports,
nfs, and mount, information about
configuring an NFS server and client is available in
/usr/share/doc/packages/nfsidmap/README. For further
documentation online refer to the following Web sites:
Find the detailed technical documentation online at SourceForge (http://nfs.sourceforge.net/).
For instructions for setting up kerberized NFS, refer to NFS Version 4 Open Source Reference Implementation (http://www.citi.umich.edu/projects/nfsv4/linux/krb5-setup.html).
If you have questions on NFSv4, refer to the Linux NFSv4 FAQ (http://www.citi.umich.edu/projects/nfsv4/linux/faq/).