

Disaster Recovery with AutoYaST together with a ReaR NETFS backup
#################################################################

The rear-SUSE package provides the script RecoveryImage which
creates a bootable ISO image to recover this particular system.

To create the bootable ISO image RecoveryImage does the following:

1. Run 'rear mkbackuponly' to store a backup.tar.gz on an NFS server.

2. Run AutoYaST clone_system.ycp to make an autoinst.xml file.

3. Make a bootable system recovery ISO image which is based on
   an install medium, for example a SUSE Linux Enterprise install DVD
   plus autoinst.xml so that AutoYaST can recover this particular system.
   In particular a so-called 'chroot script' is added to autoinst.xml
   which is run by AutoYaST to restore the backup from the NFS server.

A recovery medium made from the ISO image runs
AutoYaST with autoinst.xml to recreate the basic system,
in particular the partitioning with filesystems and mount points.
Then AutoYaST runs the 'chroot script' to restore the backup data into
the recreated basic system while the mount points of the recreated system
are still below '/mnt' and the boot loader is not yet installed.
After the backup has been restored, AutoYaST does a chroot into '/mnt',
so that it is now inside the recreated system, and installs the boot loader.
Then the recreated system boots for the very first time and AutoYaST
does the system configuration, in particular the network configuration.
Finally the configured system moves forward to its final runlevel
so that all system services are up and running again.

Currently rear-SUSE is a preliminary version under development.
This means there are both missing features and bugs,
and it still must be tested in various different environments.

Some current restrictions and shortcomings:

Only a ReaR BACKUP_URI of the form 'nfs://host/path/file.tar.gz' is supported
and the backup file must be a tar.gz file so that 'tar -xzf file.tar.gz' works.


Usage:
======

RecoveryImage -h

RecoveryImage [ -d BASE_URI
                -l { log-to-base-dir | LOG_DIR }
                -b { make-rear-backup | use-existing-rear-backup
                                      | BACKUP_URI }
                -a { clone-system | AUTOINST_FILE
                                  | use-autoinst-from-base-dir }
                -m { autodetect-dvd | MEDIUM_URI
                                    | use-existing-medium-ISO
                                    | use-existing-ISO-files }
                -i { install-RPMs | skip-RPM-install
                                  | no-RPM-payload }
                -r { restore-all | restore-exclude-default
                                 | RESTORE_EXCLUDE }
                -c { configure-all | CONFIGURE_EXCLUDE
                                   | skip-second-stage } ]

All parameters are required (no guesswork about how to recover this
particular system).
Either none or all parameters must be provided on the command line.
If none are specified, the parameters are requested via a simple dialog.

-d specifies the destination:
BASE_URI specifies the base directory where RecoveryImage creates
its files. The ReaR backup is an exception: it is stored according to
the ReaR configuration. The log file location can also be specified
separately via the LOG_DIR parameter.
In particular the recovery ISO image is created in the base directory
as RecoveryImage.TIMESTAMP.iso.
BASE_URI is either an absolute path of a local directory of the form
'/path/to/directory' or it must have the form 'nfs://host/directory'
so that 'mount -t nfs host:/directory' works.
In the latter case the NFS share must allow reading and writing files
(e.g. 'autoinst.xml'), which means the NFS server should export
the NFS share with 'rw'. Unless 'use-existing-ISO-files' is used,
'chown' and 'chmod' must also work so that 'cp -a' can preserve attributes
when copying files from the install medium, which means the NFS server
should export it with 'no_root_squash'.
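For example, an export that fulfills these requirements could look like
the following line in /etc/exports on the NFS server (a sketch: the path
and the client network are examples, adapt them to your environment):

```
# /etc/exports on the NFS server (example path and client network):
# 'rw' so that files like autoinst.xml can be written,
# 'no_root_squash' so that 'cp -a' can preserve ownership and permissions:
/srv/recovery  192.168.1.0/24(rw,no_root_squash,sync)
```

After editing /etc/exports, 'exportfs -r' re-exports the shares.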
If BASE_URI is a local directory it is used as base directory.
If BASE_URI is of the form 'nfs://host/directory' the directory on that host
is used as base directory.
If the directory which is specified via BASE_URI contains a HOSTNAME
sub-directory then RecoveryImage uses that as its base directory.
Unless 'use-existing-ISO-files' is used, RecoveryImage creates in its base
directory a RecoveryImage.TIMESTAMP.ISOfiles sub-directory into which it
copies the files from the install medium to make the ISO image.
No file will be removed from the ISO files directory so that it can be reused
with 'use-existing-ISO-files' but with 'no-RPM-payload' the RPM package files
in the ISO files directory will be made empty.
The loader/isolinux.cfg and loader/message files in the ISO files directory
will be modified unless 'use-existing-ISO-files' is used.
In any case an AutoYaST control file will be copied into the ISO files
directory and modified therein (in particular the 'chroot script' will
be added).
The base directory usually needs 10 GB of free space to make a DVD ISO image
(5 GB for the files from a DVD install medium plus 5 GB to make the ISO image).
With 'use-existing-ISO-files', 5 GB (for only the DVD ISO image) is usually
sufficient.
With a MEDIUM_URI of the form 'http://...' an additional 5 GB of free space
is needed to download an ISO image of a DVD install medium.

-l specifies the logging:
LOG_DIR specifies an absolute path to a directory where RecoveryImage
writes its log files.
With 'log-to-base-dir' RecoveryImage writes its log files into the
base directory.

-b specifies the backup:
The backup must be a 'tar.gz' file so that 'tar -xzf backup.tar.gz' works
to restore it.
With 'make-rear-backup' ReaR makes a new backup (may overwrite an existing
backup) according to the ReaR configuration in /etc/rear/local.conf.
With 'use-existing-rear-backup' RecoveryImage inspects the ReaR configuration
in /etc/rear/local.conf and, if the backup is stored at a ReaR NETFS_URL
of the form 'nfs://host/path', tests whether an existing ReaR backup is
readable.
BACKUP_URI is of the form 'nfs://host/path/backup.tar.gz'
so that 'mount -t nfs host:/path' works to access the backup file.
If host is not an IP address but a hostname, DNS must work when the backup
is restored.
Alternatively BACKUP_URI is either an absolute path of a local executable
file of the form '/path/to/executable' or it is a relative path of an
executable file of the form 'path/to/executable'.
If it is a relative path, it is relative to the base directory.
The executable must output on its standard output 'stdout' the actual URI of
the backup file of the form 'nfs://host/path/backup.tar.gz' so that the URI
can be extracted with the command: grep -o 'nfs://.*\.tar\.gz' | tail -n 1
This way the executable can make the backup (e.g. a script which runs 'tar')
and finally it tells RecoveryImage the URI of the backup via 'stdout'.
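A minimal sketch of such a backup executable could look as follows (all
names are assumptions; for simplicity the NFS share is assumed to be
already mounted, whereas a real script might mount 'host:/path' itself):

```shell
#!/bin/sh
# Sketch of a hypothetical backup executable for '-b' (names are examples).
# RecoveryImage only requires that the last URI of the form
# 'nfs://...tar.gz' on stdout is the backup location, because it
# extracts it via: grep -o 'nfs://.*\.tar\.gz' | tail -n 1

make_backup() {
    # $1 = directory tree to back up
    # $2 = local mount point of the (already mounted) NFS share
    # $3 = the 'nfs://host/path' URI under which that share is reachable
    tar -czf "$2/backup.tar.gz" -C "$1" . || return 1
    # progress chatter on stdout does not hurt, only the last URI counts:
    echo "backup stored: $(wc -c < "$2/backup.tar.gz") bytes"
    echo "$3/backup.tar.gz"
}
```

For example 'make_backup /home /mnt/nfs nfs://host/path' would print the
backup URI as its last line.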

-a specifies the autoinst file:
With 'clone-system' AutoYaST will create a control file /root/autoinst.xml
on this system which RecoveryImage copies into its base directory
and from there into the ISO files directory.
AUTOINST_FILE specifies an AutoYaST control file that matches this system,
either as an absolute path of a local file of the form '/path/to/file'
or as a relative path of the form 'path/to/file'.
If it is a relative path, it is relative to the base directory.
Alternatively AUTOINST_FILE is either an absolute path of a local executable
file of the form '/path/to/executable' or it is a relative path of an
executable file of the form 'path/to/executable'.
If it is a relative path, it is relative to the base directory.
The executable must output on its standard output 'stdout' the path of the
actual 'autoinst.xml' file so that the path can be extracted with the
command: grep -o '[^[:space:]]*autoinst\.xml' | tail -n 1
If the extracted path is relative, it is relative to the base directory.
This way the executable can make the AutoYaST control file and finally
it tells RecoveryImage the path to 'autoinst.xml' via 'stdout'.
With 'use-autoinst-from-base-dir' an AutoYaST control file in the base
directory named 'autoinst.xml' or 'RecoveryImage.autoinst.xml' is used.
If such a file does not exist, RecoveryImage will use the latest
'RecoveryImage.TIMESTAMP.autoinst.xml'.
The AutoYaST control file must not yet contain the 'chroot script' because
this will be added by RecoveryImage in any case.
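Analogously to a backup executable, a sketch of a hypothetical autoinst
executable could look as follows (names are examples; the placeholder
stands in for the real AutoYaST cloning step):

```shell
#!/bin/sh
# Sketch of a hypothetical autoinst executable for '-a' (names are
# examples). It must print the path of the resulting AutoYaST control
# file on stdout so that RecoveryImage can extract it via:
#   grep -o '[^[:space:]]*autoinst\.xml' | tail -n 1

make_autoinst() {
    # $1 = directory in which to place the control file
    out="$1/autoinst.xml"
    # A real script would clone the running system here (e.g. via the
    # AutoYaST cloning module); this placeholder only stands in for it:
    echo '<?xml version="1.0"?>' > "$out"
    echo "control file written"
    echo "$out"
}
```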

-m specifies the medium:
With 'autodetect-dvd' RecoveryImage requests and tries to autodetect
an install medium. Autodetection depends on whether or not
automated media mounting magic works on this system.
MEDIUM_URI specifies the directory where the medium files are located;
these files are copied into an ISO files directory to make the ISO image.
MEDIUM_URI is either an absolute path of a local directory of the
form '/path/to/directory' or it is a relative path 'path/to/directory'.
If it is a relative path, it is relative to the base directory.
Alternatively MEDIUM_URI is of the form 'http://server/path/medium.iso'
so that 'wget http://server/path/medium.iso' works.
In this case RecoveryImage will download an ISO image of the medium
into the base directory as file 'RecoveryImage.TIMESTAMP.MEDIUM.iso'
and mount it with 'mount -o loop' and use the mount point
as the directory where the medium files are located.
Alternatively MEDIUM_URI is either an absolute path to a local
medium ISO file of the form '/path/to/medium.iso' or it is a relative path
of the form 'path/to/medium.iso' which is relative to the base directory.
Alternatively MEDIUM_URI is either an absolute path of a local executable
file of the form '/path/to/executable' or it is a relative path of an
executable file of the form 'path/to/executable'.
If it is a relative path, it is relative to the base directory.
The executable must output on its standard output 'stdout' the path of the
actual 'medium.iso' file or the actual 'medium.dir' directory so that
the path can be extracted with the command:
egrep -o '[^[:space:]]*medium\.iso|[^[:space:]]*medium\.dir' | tail -n 1
If the extracted path is relative, it is relative to the base directory.
This way the executable can make a medium ISO file or a medium directory
and finally it tells RecoveryImage the path via 'stdout'.
RecoveryImage will mount a medium ISO file with 'mount -o loop'
and use the mount point as the directory where the medium files are located.
With 'use-existing-medium-ISO' RecoveryImage will look in its base directory
for an existing medium ISO file named 'MEDIUM.iso' or, failing that,
'RecoveryImage.MEDIUM.iso'.
If such a file does not exist, RecoveryImage will use the latest file
named 'RecoveryImage.TIMESTAMP.MEDIUM.iso' as medium ISO file and
mount it with 'mount -o loop' and use the mount point
as the directory where the medium files are located.
With 'use-existing-ISO-files' RecoveryImage will look in its base directory
for an existing ISO files sub-directory named 'ISOfiles' or, failing that,
'RecoveryImage.ISOfiles'.
If such a sub-directory does not exist, RecoveryImage will use the latest
sub-directory 'RecoveryImage.TIMESTAMP.ISOfiles' as ISO files directory.
Such an existing ...ISOfiles sub-directory must already contain all files
needed to make the ISO image (except autoinst.xml), with already
appropriately modified loader/isolinux.cfg and loader/message files.
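The stdout contract of a medium executable can be illustrated with a small
stand-alone sketch (the paths and the sample output are made up):

```shell
# Hypothetical sample of what a '-m' executable might print on stdout:
# only the last 'medium.iso' or 'medium.dir' match counts.
sample_medium_executable() {
    echo "downloading medium ..."
    echo "discarding stale /work/old/medium.dir"
    echo "/work/medium.iso"
}
# This is how RecoveryImage extracts the path from such output:
extract_medium_path() {
    egrep -o '[^[:space:]]*medium\.iso|[^[:space:]]*medium\.dir' | tail -n 1
}
```

Here 'sample_medium_executable | extract_medium_path' yields
'/work/medium.iso' because only the last match is taken.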

-i specifies the installation:
With 'install-RPMs' AutoYaST installs RPMs from the recovery medium
to recreate the system.
With 'skip-RPM-install' AutoYaST basically recreates only the partitioning
with filesystems and mount points, but does not install RPMs.
Nevertheless the RPMs are still included in the ISO image
(unless 'use-existing-ISO-files' is used without RPM payload in the
ISO files directory) so that the recovery medium could still be used
for a manual installation without AutoYaST.
A complete backup is mandatory to recreate the system when 'skip-RPM-install'
is used.
With 'no-RPM-payload' the RPM files in the ISO files directory are made empty
so that there is no longer any RPM payload, which results in a small
recovery medium.
In this case only 400 MB of free space is needed in the base directory to make
a CD ISO image.
Without RPM payload on the recovery medium AutoYaST cannot install RPMs,
so with 'no-RPM-payload' AutoYaST also does not install RPMs
(same as with 'skip-RPM-install').

-r specifies the backup restore:
With 'restore-all' all files from the backup are restored using the 'tar'
program.
With 'restore-exclude-default' files matching the 'tar' patterns
'etc/fstab etc/mtab var/adm/autoinstall'
are by default excluded from the restore.
When AutoYaST recreates the partitioning with filesystems and mount points,
a matching /etc/fstab is created anew by AutoYaST and /etc/mtab is
maintained by the 'mount' program, so overwriting those files with possibly
outdated files from the backup could cause damage.
Furthermore scripts and log files of the current restore which are
in /var/adm/autoinstall/ should not be overwritten by outdated files
from the backup.
RESTORE_EXCLUDE is a space-separated list of 'tar' patterns which are
excluded from the backup restore. If RESTORE_EXCLUDE contains
'restore-exclude-default', the 'tar' patterns listed above for
'restore-exclude-default' are excluded in addition to the other 'tar'
patterns in RESTORE_EXCLUDE.
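The effect of 'restore-exclude-default' can be sketched with plain 'tar'
(a simplified stand-in for the generated 'chroot script'; GNU tar's
--exclude option accepts the same patterns):

```shell
# Simplified stand-in for the restore step of the generated 'chroot script':
# extract the backup but exclude the 'restore-exclude-default' patterns.
restore_backup() {
    # $1 = backup.tar.gz, $2 = target directory (e.g. /mnt during recovery)
    tar -xzf "$1" -C "$2" \
        --exclude=etc/fstab \
        --exclude=etc/mtab \
        --exclude=var/adm/autoinstall
}
```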

-c specifies the configuration:
With 'configure-all' AutoYaST does all configurations which are specified
in the autoinst file.
CONFIGURE_EXCLUDE is a space-separated list of configuration section names
which RecoveryImage removes from the autoinst file so that
those configurations are skipped.
With 'skip-second-stage' the whole 'second stage' of the installation
is skipped which means that no configuration at all is done by AutoYaST.
In this case all configuration files must come from the backup so that
a complete backup is mandatory when 'skip-second-stage' is used.
'skip-second-stage' is an experimental feature which is currently
implemented by a dirty hack to circumvent the usual AutoYaST workflow.
It may cause issues because AutoYaST is not yet prepared for the case
when the whole 'second stage' of the installation workflow is skipped.

Exported variables:
-------------------

All variables which are modified or created are automatically exported
so that they are available in a backup executable, an autoinst executable,
and a medium executable. In particular these variables:
PATH = /sbin:/usr/sbin:/usr/bin:/bin
LANG = POSIX
LC_ALL = POSIX
MY_NAME = RecoveryImage
TIMESTAMP : date and time when RecoveryImage was started
MY_PREFIX = RecoveryImage.TIMESTAMP
BASE_URI
BASE_DIR : the base directory
LOG_DIR
LOG_FILE : the primary log file
BACKUP_URI
AUTOINST_FILE
MEDIUM_URI
RPM_PAYLOAD_HANDLING : install-RPMs/skip-RPM-install/no-RPM-payload
RESTORE_EXCLUDE_DEFAULT : what 'restore-exclude-default' results in
RESTORE_EXCLUDE
CONFIGURE_EXCLUDE
BACKUP_EXECUTABLE_STDOUT : where stdout of a backup executable goes
BACKUP_EXECUTABLE_STDERR : where stderr of a backup executable goes
AUTOINST_EXECUTABLE_STDOUT : where stdout of an autoinst executable goes
AUTOINST_EXECUTABLE_STDERR : where stderr of an autoinst executable goes
MEDIUM_EXECUTABLE_STDOUT : where stdout of a medium executable goes
MEDIUM_EXECUTABLE_STDERR : where stderr of a medium executable goes

Examples:
---------

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs -r restore-all \
              -c configure-all

RecoveryImage -d nfs://host/basedir -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs -r restore-all \
              -c configure-all

RecoveryImage -d /var/tmp -l /var/log -b make-rear-backup -a clone-system \
              -m autodetect-dvd -i install-RPMs -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b use-existing-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs -r restore-all \
              -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b nfs://host/path/backup.tar.gz \
              -a clone-system -m autodetect-dvd -i install-RPMs -r restore-all \
              -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b /path/to/make_my_backup.sh \
              -a clone-system -m autodetect-dvd -i install-RPMs -r restore-all \
              -c configure-all

RecoveryImage -d nfs://host/basedir -l log-to-base-dir -b make_my_backup.sh \
              -a clone-system -m autodetect-dvd -i install-RPMs -r restore-all \
              -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a /root/autoinst.xml -m autodetect-dvd -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a autoinst.xml -m autodetect-dvd -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a /path/to/make_my_autoinst_xml.sh -m autodetect-dvd \
              -i install-RPMs -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a make_my_autoinst_xml.sh -m autodetect-dvd -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a use-autoinst-from-base-dir -m autodetect-dvd -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m /media/SLES11-DVD.123 -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d nfs://host/basedir -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m SLES11-DVD.123 -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m http://server/path/medium.iso \
              -i install-RPMs -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m /path/to/medium.iso -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d nfs://host/basedir -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m medium.iso -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m /path/to/make_my_medium_iso.sh \
              -i install-RPMs -r restore-all -c configure-all

RecoveryImage -d nfs://host/basedir -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m make_my_medium_iso.sh -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m /path/to/make_my_medium_dir.sh \
              -i install-RPMs -r restore-all -c configure-all

RecoveryImage -d nfs://host/basedir -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m make_my_medium_dir.sh -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m use-existing-medium-ISO -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m use-existing-ISO-files -i install-RPMs \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i skip-RPM-install \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i no-RPM-payload \
              -r restore-all -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs \
              -r restore-exclude-default -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs \
              -r 'var/log var/tmp' -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs \
              -r 'var/log var/tmp restore-exclude-default' \
              -c configure-all

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs \
              -r restore-all -c 'printer sound'

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m autodetect-dvd -i install-RPMs \
              -r restore-all -c skip-second-stage

Exit status:
------------

0 : ISO image made
1 : Failed because of an error
2 : ISO image made but there have been serious warnings



Proposal for an initial workflow:
#################################

Because the backup is done by ReaR, a ReaR configuration /etc/rear/local.conf
must be set up first of all, for example like the following:
------------------------------------------------------------------------------
# Create ReaR rescue media as ISO image:
OUTPUT=ISO
# Store the backup file via NFS:
BACKUP=NETFS
# Only a NETFS_URL of the form 'nfs://host/path' is supported
# so that 'mount -o nolock -t nfs host:/path' works.
# If host is not an IP address but a hostname,
# DNS must work when the backup is restored.
NETFS_URL=nfs://IP.of.NFS.server/path/to/rear/backups
# Keep an older copy of the backup in a HOSTNAME.old directory
# provided there is no '.lockfile' in the HOSTNAME directory:
NETFS_KEEP_OLD_BACKUP_COPY=yes
------------------------------------------------------------------------------
Replace at least 'nfs://IP.of.NFS.server/path/to/rear/backups' appropriately
and read the ReaR documentation for more information on how to configure
the backup according to your needs.

Because ReaR stores its backup on an NFS share, an NFS server is needed.
To set up an NFS server, you could use the YaST2 NFS server configuration
which is provided by the yast2-nfs-server package.

Because the recovery ISO image is based on a SUSE installation medium,
such a medium, for example a SUSE Linux Enterprise install DVD,
is needed at least once.

Because 'autodetect-dvd' depends on whether or not automated media
mounting magic works on the system, the more reliable way is
to manually mount an installation medium that matches the system
and provide its mount point as a MEDIUM_URI directory.
Assume a SUSE Linux Enterprise install DVD is mounted at /tmp/SLEdvd.

Assuming there is at least 10 GB of free space in /var/tmp for the
installation medium files (about 5 GB) and for the ISO image (also about
5 GB), the very first run of RecoveryImage could look like:

RecoveryImage -d /var/tmp -l log-to-base-dir -b make-rear-backup \
              -a clone-system -m /tmp/SLEdvd -i install-RPMs \
              -r restore-all -c configure-all

This very first run of RecoveryImage would do:

- run 'rear mkbackuponly' which makes a backup and stores it
  on the NFS server as path/to/rear/backups/HOSTNAME/backup.tar.gz
  where HOSTNAME is the host name of the system

- run AutoYaST clone_system.ycp which makes a /root/autoinst.xml file

- copy all files from the installation medium to an ISO files directory
  of the form /var/tmp/RecoveryImage.TIMESTAMP.ISOfiles
  where TIMESTAMP is the 'date +%Y%m%d%H%M%S' output at the time
  when RecoveryImage was started.

- copy /root/autoinst.xml to /var/tmp/RecoveryImage.TIMESTAMP.autoinst.xml
  and then copy this as 'autoinst.xml' into the ISO files directory

- modify the boot/ARCH/loader/isolinux.cfg file in the ISO files directory
  to get an 'autorecover' boot option

- modify the boot/ARCH/loader/message file in the ISO files directory
  to show an appropriate boot screen message

- add an AutoYaST 'chroot script' which would restore the backup
  to the autoinst.xml file in the ISO files directory

- run 'mkisofs' to make an ISO image /var/tmp/RecoveryImage.TIMESTAMP.iso
  of everything in the ISO files directory

The ISO files directory and its files are not removed, so from now on
a SUSE installation medium is no longer needed; it can be unmounted
and the medium can be removed.

From now on it is possible to reuse the ISO files directory
with 'use-existing-ISO-files' plus BASE_URI.

Currently the ISO files directory and the ISO image contain
a complete installation medium so that a 4.7 GB DVD is needed
when a system recovery medium is made from the ISO image.

When the system is booted with such a big recovery medium,
AutoYaST would do the following:
- recreate partitioning with filesystems and mount points
- install the software packages
- restore the backup
- install the boot loader
- boot the system for the very first time
- configure the system
- let the system go forward to its final runlevel

If and only if the backup is complete, there is no need to let
AutoYaST first install software packages and then overwrite
all of them when the backup is restored.

So if the backup is complete, there is no need to install software packages
and accordingly no need to have software packages on the
system recovery medium.

When an existing ISO files directory should be reused, the files
to make the ISO image must be in a directory named 'ISOfiles' or
'RecoveryImage.ISOfiles' in the base directory.
If such a sub-directory does not exist, RecoveryImage will use
the latest sub-directory 'RecoveryImage.TIMESTAMP.ISOfiles'
in the base directory according to the TIMESTAMP.

When an existing AutoYaST control file should be reused, a file
named 'autoinst.xml' or 'RecoveryImage.autoinst.xml' must exist
in the base directory. If such a file does not exist,
RecoveryImage will use the latest 'RecoveryImage.TIMESTAMP.autoinst.xml'
in the base directory according to the TIMESTAMP.

In the current example, the base directory is /var/tmp.

If the backup is complete and up-to-date, a subsequent second run
of RecoveryImage could look like:

RecoveryImage -d /var/tmp -l log-to-base-dir -b use-existing-rear-backup \
              -a use-autoinst-from-base-dir -m use-existing-ISO-files \
              -i no-RPM-payload -r restore-all -c configure-all

Such a subsequent second run of RecoveryImage would basically do:

- empty the software package RPM files in the existing ISO files
  directory /var/tmp/RecoveryImage.TIMESTAMP.ISOfiles

- copy the existing AutoYaST control file /var/tmp/RecoveryImage.TIMESTAMP.autoinst.xml
  from the base directory as 'autoinst.xml' into the ISO files directory

- add an AutoYaST 'chroot script' which would restore the backup
  to the autoinst.xml file in the ISO files directory

- replace the software section in autoinst.xml in the ISO files directory
  by a dummy which lets AutoYaST skip the software package installation

- run 'mkisofs' to make an ISO image /var/tmp/RecoveryImage.NEW_TIMESTAMP.iso
  of everything in the ISO files directory

TIMESTAMP is when the very first run of RecoveryImage was started and
NEW_TIMESTAMP is when the subsequent second run of RecoveryImage was started.

Now the ISO files directory and the ISO image still contain all files
of a complete installation medium but the RPM files are now empty
so that only about 400 MB is left and a CD is sufficient
when a system recovery medium is made from the ISO image.

When the system is booted with such a small recovery medium,
AutoYaST would do the following:
- recreate partitioning with filesystems and mount points
- restore the backup
- install the boot loader
- boot the system for the very first time
- configure the system
- let the system go forward to its final runlevel

The AutoYaST control file becomes outdated when the basic system configuration
changes, in particular when partitioning, filesystems or mount points change.
In this case RecoveryImage must be run again, which (if the backup is
up to date) could look as follows:

RecoveryImage -d /var/tmp -l log-to-base-dir -b use-existing-rear-backup \
              -a clone-system -m use-existing-ISO-files -i skip-RPM-install \
              -r restore-all -c configure-all

Because the backup is independent of the ISO image, there is no need
to run RecoveryImage when only the backup is outdated.

To get an up-to-date backup, running 'rear mkbackuponly' is sufficient
provided the NETFS_URL in the ReaR configuration /etc/rear/local.conf
is still the same because the NETFS_URL value is hardcoded in the
AutoYaST 'chroot script' which would restore the backup.
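Refreshing only the backup can therefore be automated independently of the
recovery image, for example with a cron job (a sketch; the schedule and the
path to 'rear' are assumptions, adapt them to your system):

```
# /etc/cron.d/rear-backup (example): refresh the ReaR backup every night
# at 02:30 while the recovery ISO image stays as it is:
30 2 * * *  root  /usr/sbin/rear mkbackuponly
```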



Proposal how to use RecoveryImage via network:
##############################################

The main idea is to store no files for the system recovery
on the local host where RecoveryImage runs.
Instead all files for the system recovery are stored directly
on a different host in the network so that there is
no need to manually copy the files for the system recovery from
the local host where RecoveryImage was run to a safe place.

Because ReaR stores its backup on an NFS share, an NFS server is needed
anyway. Therefore NFS is also used by RecoveryImage to store its files
via the network.
The NFS server where ReaR stores its backup and the NFS server where
RecoveryImage stores its files can be different hosts.

When the BASE_URI parameter is of the form 'nfs://server/directory'
RecoveryImage will store its files in that directory on the NFS server.

Assume there is an NFS server with a directory '/recovery' with at least
10 GB of free space which it exports with 'rw' and 'no_root_squash'
(see "Usage" above).

Assume on the local host where RecoveryImage runs there is a ReaR configuration
as in the above "proposal for an initial workflow".

Assume a SUSE Linux Enterprise install DVD is mounted at /tmp/SLEdvd
on the local host where RecoveryImage runs.

In this case a run of RecoveryImage could look like:

RecoveryImage -d nfs://NFS.server.IP.address/recovery -l log-to-base-dir \
              -b make-rear-backup -a clone-system -m /tmp/SLEdvd \
              -i install-RPMs -r restore-all -c configure-all

Such a run of RecoveryImage would do:

- mount the '/recovery' directory on the NFS server

- run 'rear mkbackuponly' which makes a backup and stores it
  on the NFS server which is specified in the ReaR configuration

- run AutoYaST clone_system.ycp which makes a /root/autoinst.xml file
  on the local host where RecoveryImage runs

- on the NFS server where RecoveryImage stores its files create an
  ISO files directory of the form /recovery/RecoveryImage.TIMESTAMP.ISOfiles
  and copy all files from the installation medium into it

- copy /root/autoinst.xml to /recovery/RecoveryImage.TIMESTAMP.autoinst.xml
  and then copy this as 'autoinst.xml' into the ISO files directory

- modify the boot/ARCH/loader/isolinux.cfg and boot/ARCH/loader/message files
  in the ISO files directory

- add an AutoYaST 'chroot script' which would restore the backup
  to the autoinst.xml file in the ISO files directory

- run 'mkisofs' to make an ISO image /recovery/RecoveryImage.TIMESTAMP.iso
  of everything in the ISO files directory

- unmount the '/recovery' directory on the NFS server

Now the '/recovery' directory on the NFS server contains
- the AutoYaST control file /recovery/RecoveryImage.TIMESTAMP.autoinst.xml
- the ISO files directory /recovery/RecoveryImage.TIMESTAMP.ISOfiles
- the ISO image /recovery/RecoveryImage.TIMESTAMP.iso
- the log files /recovery/RecoveryImage.TIMESTAMP...
where TIMESTAMP is the 'date +%Y%m%d%H%M%S' output at the time
when RecoveryImage was started.
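
For illustration, the file name pattern can be reconstructed in a shell
like this (a sketch which only follows the naming description above):

```shell
# Reconstruct the RecoveryImage file name pattern described above:
TIMESTAMP=$(date +%Y%m%d%H%M%S)
ISO_IMAGE="RecoveryImage.$TIMESTAMP.iso"
echo "$ISO_IMAGE"
```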

To get a small recovery medium without RPM payload, a subsequent run
of RecoveryImage could be like:

RecoveryImage -d nfs://NFS.server.IP.address/recovery -l log-to-base-dir \
              -b use-existing-rear-backup -a use-autoinst-from-base-dir \
              -m use-existing-ISO-files -i no-RPM-payload \
              -r restore-all -c configure-all

This subsequent run of RecoveryImage would do:

- mount the '/recovery' directory on the NFS server

- empty the software package RPM files in the existing ISO files
  directory /recovery/RecoveryImage.TIMESTAMP.ISOfiles
  on the NFS server where RecoveryImage stores its files

- copy the existing /recovery/RecoveryImage.TIMESTAMP.autoinst.xml
  as 'autoinst.xml' into the ISO files directory

- add an AutoYaST 'chroot script' which would restore the backup
  to the autoinst.xml file in the ISO files directory

- replace the software section in autoinst.xml in the ISO files directory
  by a dummy which lets AutoYaST skip the software package installation

- run 'mkisofs' to make an ISO image /recovery/RecoveryImage.NEW_TIMESTAMP.iso
  of everything in the ISO files directory

- unmount the '/recovery' directory on the NFS server

Now the '/recovery' directory on the NFS server contains
- the old AutoYaST control file /recovery/RecoveryImage.TIMESTAMP.autoinst.xml
- the old ISO files directory /recovery/RecoveryImage.TIMESTAMP.ISOfiles
  but now without RPM payload and with an appropriate autoinst.xml therein
- the old ISO image /recovery/RecoveryImage.TIMESTAMP.iso
- the old log files /recovery/RecoveryImage.TIMESTAMP...
- the new ISO image /recovery/RecoveryImage.NEW_TIMESTAMP.iso
- the new log files /recovery/RecoveryImage.NEW_TIMESTAMP...
where TIMESTAMP is when the first RecoveryImage run was started
and NEW_TIMESTAMP is when the subsequent RecoveryImage run was started.



Proposal how to use RecoveryImage for many hosts via network:
#############################################################

The main idea is this: when there are many "similar hosts" in the network
on each of which RecoveryImage should run, it should be possible
to set up an ISO files directory only once and then reuse it
for all those "similar hosts" via the 'use-existing-ISO-files' parameter.

When the BASE_URI parameter is of the form 'nfs://server/directory'
and when that directory on the NFS server contains a sub-directory named
after the local host where RecoveryImage runs, RecoveryImage
will store its files in that sub-directory on the NFS server.

It is crucial to understand what is meant by "similar hosts" here:
In this particular case hosts are "similar" if and only if the hosts
can be recreated with the same ISO image files except 'autoinst.xml'.

This definition of "similar" follows from how RecoveryImage works
with 'use-existing-ISO-files': RecoveryImage reuses all files
in the ISO files directory except autoinst.xml.

Assume the host where RecoveryImage was run in the above
"proposal how to use RecoveryImage via network"
is one of many such "similar hosts".

Then the ISO files directory /recovery/RecoveryImage.TIMESTAMP.ISOfiles
can be reused for all those "similar hosts" as follows:

For each of those "similar hosts" a sub-directory named after the host
must be created in the /recovery directory on the NFS server and
in each of those /recovery/HOSTNAME sub-directories a symbolic link
'ISOfiles -> ../RecoveryImage.TIMESTAMP.ISOfiles' must be created.
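
The per-host preparation described above could be scripted as follows;
the host names and the TIMESTAMP directory name are placeholders, and
a scratch directory is used instead of the real /recovery directory
so the sketch can be tried out safely:

```shell
# Sketch: prepare per-host ISOfiles symlinks for "similar hosts".
# RECOVERY_DIR would be the mounted /recovery directory on the NFS server;
# here it defaults to a scratch directory so the sketch is safe to run.
RECOVERY_DIR="${RECOVERY_DIR:-$(mktemp -d)}"
ISOFILES_DIR="RecoveryImage.TIMESTAMP.ISOfiles"
mkdir -p "$RECOVERY_DIR/$ISOFILES_DIR"
for HOST in host1 host2 host3 ; do
    mkdir -p "$RECOVERY_DIR/$HOST"
    # relative symbolic link as described above:
    ln -sfn "../$ISOFILES_DIR" "$RECOVERY_DIR/$HOST/ISOfiles"
done
echo "prepared in $RECOVERY_DIR"
```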

Now on each of those "similar hosts" a RecoveryImage run could be like:

RecoveryImage -d nfs://NFS.server.IP.address/recovery -l log-to-base-dir \
              -b make-rear-backup -a clone-system -m use-existing-ISO-files \
              -i no-RPM-payload -r restore-all -c configure-all

Such a run of RecoveryImage would do:

- mount the '/recovery' directory on the NFS server

- run 'rear mkbackuponly' which makes a backup and stores it
  on the NFS server which is specified in the ReaR configuration file
  /etc/rear/local.conf on the local host where RecoveryImage runs

- run AutoYaST clone_system.ycp which makes a /root/autoinst.xml file
  on the local host where RecoveryImage runs

- on the NFS server where RecoveryImage stores its files reuse the
  existing ISO files directory /recovery/RecoveryImage.TIMESTAMP.ISOfiles via the
  symbolic link /recovery/HOSTNAME/ISOfiles -> ../RecoveryImage.TIMESTAMP.ISOfiles

- copy /root/autoinst.xml to /recovery/HOSTNAME/RecoveryImage.NEW_TIMESTAMP.autoinst.xml
  and then copy this as 'autoinst.xml' into the ISO files directory

- add an AutoYaST 'chroot script' which would restore the backup
  to the autoinst.xml file in the ISO files directory

- replace the software section in autoinst.xml in the ISO files directory
  by a dummy which lets AutoYaST skip the software package installation

- run 'mkisofs' to make an ISO image /recovery/HOSTNAME/RecoveryImage.NEW_TIMESTAMP.iso
  of everything in the ISO files directory

- unmount the '/recovery' directory on the NFS server

Now the '/recovery/HOSTNAME' directory on the NFS server contains
- the ISO files directory link ISOfiles -> ../RecoveryImage.TIMESTAMP.ISOfiles
- the AutoYaST control file RecoveryImage.NEW_TIMESTAMP.autoinst.xml
- the ISO image RecoveryImage.NEW_TIMESTAMP.iso
- the log files RecoveryImage.NEW_TIMESTAMP...
where NEW_TIMESTAMP is when the last RecoveryImage run was started.

Because the same ISO files directory /recovery/RecoveryImage.TIMESTAMP.ISOfiles
will be used by all those "similar hosts" which run RecoveryImage as above,
two such RecoveryImage runs must not happen at the same time:
both would write and modify the one autoinst.xml file in the ISO files
directory, which could result in a broken autoinst.xml file on the
recovery medium which is made from the ISO image, so that in the end
recreating the system with such a recovery medium could fail.

RecoveryImage tests that autoinst.xml on the ISO image is a valid XML file
which should (hopefully) detect a broken autoinst.xml file.

Nevertheless, to be on the safe side, mutual exclusion of RecoveryImage runs
is mandatory when the same ISO files directory is reused for
many "similar hosts" via the 'use-existing-ISO-files' parameter.
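
One simple way to get such mutual exclusion is a lock file, for example
with flock(1); this is only a sketch (the lock file path is an assumption,
and note that flock over NFS may not provide reliable cross-host exclusion
on older kernels, so running the RecoveryImage runs sequentially from one
place is the safest approach):

```shell
# Sketch: serialize RecoveryImage runs via a lock file.
# In practice LOCKFILE would live on the shared /recovery directory,
# e.g. /recovery/.RecoveryImage.lock; a local default is used here.
LOCKFILE="${LOCKFILE:-${TMPDIR:-/tmp}/RecoveryImage.lock}"
(
    # fail immediately instead of waiting if another run holds the lock:
    flock -n 9 || { echo "another RecoveryImage run is active" >&2 ; exit 1 ; }
    # ... run RecoveryImage here ...
    echo "got the lock"
) 9>"$LOCKFILE"
```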



How to avoid that it fails when you need to recreate your system:
#################################################################

First and foremost:
There is no such thing as a disaster recovery solution that "just works".

Even if RecoveryImage created the ISO image without an error or warning and
you made a recovery medium from the ISO image without an error or warning,
there is no guarantee that it will work in your particular case to recreate
your system with your recovery medium.

You must test in advance that it works in your particular case to recreate
your particular system with your particular recovery medium and that the
recreated system can boot on its own and that the recreated system with all
its system services still work as you need it in your particular case.

Therefore you must have replacement hardware available on which your system
can be recreated and you must try out if it works to recreate your system
with your recovery medium on your replacement hardware.

When a running system is recreated on replacement hardware,
afterwards two identical systems exist with the same IP addresses
and with the same network services running.
Therefore the test must be done while the original system is shut down,
or the test system must be in a separate network from which it still
works to access the backup via NFS as specified by the BACKUP_URI
so that the 'chroot script' can restore the backup.


The intrinsic limitation of what RecoveryImage can do:
======================================================

With RecoveryImage the recovery of the basic system (i.e. partitioning,
filesystems, mount points, boot loader, network configuration,...)
is delegated to AutoYaST and AutoYaST delegates the particular
tasks to the matching YaST modules.

The mandatory work during recreation by AutoYaST is:
- recreating partitioning with filesystems and mount points
- reinstalling the boot loader

What AutoYaST does is specified in the autoinst.xml file, so the
content of the autoinst.xml file is crucial for whether or not
recreating the system works.

Before you test whether recreating the system works, inspect
your autoinst.xml file to check whether its content regarding
partitioning, filesystems, mount points, and boot loader is correct.

Whether or not the content in autoinst.xml is correct depends on
whether or not the run of AutoYaST clone_system.ycp can determine
in particular partitioning, filesystems, mount points,
and the boot loader configuration correctly.

The run of AutoYaST clone_system.ycp triggers various YaST modules to run
where each YaST module determines its matching kind of data.

The limitation is what the matching YaST modules can do.


Regarding partitioning, filesystems, and mount points in autoinst.xml:
======================================================================

If all partitions, filesystems, and mount points of your system
have been created by YaST when the system was initially installed with
the SUSE installation medium (e.g. a SUSE Linux Enterprise install DVD),
then a run of AutoYaST clone_system.ycp should determine partitioning,
filesystems, and mount points correctly.
Additionally when your recovery medium is based on the same SUSE
installation medium, AutoYaST plus the YaST modules on your
recovery medium should recreate partitioning, filesystems,
and mount points correctly.

It depends on the particular SUSE product which filesystems are supported
by YaST on the SUSE installation medium. Filesystems which should be
usually supported by YaST are ext2, ext3, ReiserFS, XFS, and btrfs.

If another filesystem is used on the system, it is likely that the
run of AutoYaST clone_system.ycp cannot determine the content in
autoinst.xml which is related to this filesystem correctly.

Some examples of filesystems which are not supported by YaST
(to only name some more known ones): AFS, GFS, GPFS, JFFS, Lustre,
OCFS2, StegFS, TrueCrypt, UnionFS, ...

There are lots of filesystems that one can create manually
for which AutoYaST clone_system.ycp cannot work correctly.

If such a filesystem is used on your particular system, you must check
the content of autoinst.xml in particular regarding such a filesystem
and you may have to manually change autoinst.xml so that it works
in your particular case to recreate the system.

In some cases it may work to adjust particular entries in autoinst.xml
so that still the whole system can be recreated.

But in many cases it means removing sections regarding unsupported
filesystems from autoinst.xml, and as a consequence whatever in the
system is related to such filesystems cannot be recreated. In this case
you need to manually recreate such filesystems and what depends on them.

A particular consequence is that nothing that is stored on a filesystem
which is not supported by YaST can be included in the backup.tar.gz,
because the filesystem would not exist when the 'chroot script' restores
the backup.tar.gz, so restoring that data would fail.

Therefore data on filesystems that are not supported by YaST
must be saved separately in a second backup.
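
Such a separate second backup could be as simple as an additional tar.gz
of the mount point in question; a sketch only (here a scratch directory
stands in for the mount point of the unsupported filesystem, so the
sketch can be tried out safely):

```shell
# Sketch: a separate second backup with plain tar.
# SRC_DIR stands for a mount point of a filesystem YaST does not support;
# a scratch directory with example payload is used here.
SRC_DIR="$(mktemp -d)"
echo "example payload" > "$SRC_DIR/data.txt"
BACKUP_FILE="${TMPDIR:-/tmp}/second-backup.tar.gz"
# store paths relative to / so 'tar -xzf ... -C /' restores them in place:
tar -czf "$BACKUP_FILE" -C / "${SRC_DIR#/}"
# list the archive content:
tar -tzf "$BACKUP_FILE"
```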

It should be possible to automate the recreation of such filesystems
and the restore of its data from its separated second backup.
See the section regarding the so called "second stage" during
installation by AutoYaST below.


Regarding the boot loader in autoinst.xml:
==========================================

There is a general problem that it is practically impossible
to determine in a reliable way how a system is actually booted.
Imagine during the initial system installation GRUB was installed
in the boot sector of the active partition like /dev/sda1 and
afterwards LILO was installed manually in the master boot record
of the /dev/sda harddisk. Then actually LILO is used to boot the
system but the GRUB installation is still there. When such a system
is recreated by AutoYaST it would not reinstall LILO so that
the system would no longer boot.

Therefore check the content in autoinst.xml which is related to
the boot loader whether or not it is correct.

There are basically the same restrictions as described above
for filesystems regarding whether or not the content in autoinst.xml
is correct for the boot loader.

If the boot loader which actually boots your system was installed
by YaST when the system was initially installed with the SUSE
installation medium (e.g. a SUSE Linux Enterprise install DVD),
then a run of AutoYaST clone_system.ycp should determine the
boot loader and how it is installed correctly.
Additionally when your recovery medium is based on the same
SUSE installation medium, AutoYaST plus the YaST modules on
your recovery medium should reinstall the boot loader correctly.

It depends on the particular SUSE product which boot loader
is supported by YaST on the SUSE installation medium.
Basically GRUB is the only boot loader which is supported by YaST.

If another boot loader (e.g. LILO) is used on your system, the
run of AutoYaST clone_system.ycp cannot determine the content in
autoinst.xml which is related to your boot loader correctly.
In this case you must manually change autoinst.xml to make it work
with your particular boot loader to recreate your particular system.
This could become tricky, because AutoYaST must successfully install
a boot loader when it recreates the system, since AutoYaST reboots
the recreated system.


Regarding manual changes of the autoinst.xml file:
==================================================

The crucial place where you can make changes so that it works
to recreate your particular system, is the autoinst.xml file.

For documentation regarding AutoYaST and the autoinst.xml file
see /usr/share/doc/packages/autoyast2/html/index.html
in the autoyast2 package on your system or online at
http://www.suse.de/~ug/
but keep in mind that the current up-to-date online documentation at
http://www.suse.de/~ug/autoyast_doc/index.html
may not match the AutoYaST which is installed in your system.

Initially you could run RecoveryImage with 'clone-system' to
get an autoinst.xml file as reasonable starting point.
RecoveryImage runs via 'clone-system' the command

yast2 --ncurses /usr/share/YaST2/clients/clone_system.ycp

which you could also run directly as root to get
a new /root/autoinst.xml file.

When a SUSE Linux Enterprise system is initially installed
there is at the end of the installation usually a check-box
named "Clone This System for AutoYaST" which is usually checked
by default so that you should already have a /root/autoinst.xml
file which matches the initial system installation status.

Carefully save such a first-time /root/autoinst.xml file because
it is a good initial starting point, and a later run of clone_system.ycp
would overwrite it (except when clone_system.ycp is
run by RecoveryImage, which makes a copy of an existing
/root/autoinst.xml file).

Then you would inspect the initial autoinst.xml file
to check whether its content looks correct.

If its content looks correct, you would test if it works 
to recreate your system with the initial autoinst.xml file.

If the content in the initial autoinst.xml file is wrong or 
when it fails to recreate your system, you need to change the
autoinst.xml file and then run RecoveryImage again either with
an AUTOINST_FILE parameter or with 'use-autoinst-from-base-dir'
so that your changed autoinst.xml file is used.

Then you would test again whether it works to recreate your system
with your changed autoinst.xml file, and so on, until you have
an autoinst.xml file which works to recreate your system.



General explanation:
####################

In this particular case "Disaster Recovery" means that a particular system
which was destroyed can be recreated as closely as possible to what it was
before, regardless of what exactly was destroyed, from a messed-up
configuration up to broken hardware.

To recreate a particular system two different kinds of data must be available:

Primary:
Information how to recreate the basic system on new hardware:
- Information about storage access like harddisk partitioning, filesystems, and mount-points.
- Information how to boot the system (boot loader configuration and kernel).
- Information about the basic system configuration (e.g. networking setup).
- Information which software packages need to be installed.
- The actual software packages to install the basic system.

Secondary:
The specific configuration and data payload of the particular system:
- Configuration files.
- System services data like mails, print jobs, web-server content.
- Changes and addons (e.g. changed software and additional software).
- Application data.
- User data.

With the primary information and data (software packages) the basic system
can be recreated which is a precondition to restore the specific configuration
and data payload in a subsequent second step.

The primary information and data is made available on a bootable system
recovery medium so that the basic system can be reinstalled on new hardware.

The primary information is stored in an AutoYaST autoinst.xml file in the
root directory of the system recovery medium.

The secondary data is stored in a backup tar.gz file which must be
accessible via NFS.

When the hardware boots the system recovery medium, a boot screen appears
which shows the topmost boot entries 'harddisk' (boot installed system)
and 'autorecover' (recreates the system) according to the
boot/<ARCH>/loader/message file on the recovery medium.

After one minute timeout it boots the installed system by default.

Explicitly typing 'autorecover' is needed to recreate the system.

The default to boot the installed system is required for two reasons:

- If 'autorecover' was the default, it would be dangerous to have
  such a system recovery medium in the DVD tray because any reboot
  could lead to an automated complete re-installation from scratch.

- When a system was intentionally recreated by typing 'autorecover',
  the recreated system boots for its very first time; if 'autorecover'
  were the default and the recovery medium were still in the DVD tray,
  it would do the 'autorecover' again and again.
  In contrast, with booting the installed system by default,
  an unattended system recreation is possible.

The recreation via 'autorecover' has the following additional boot parameters set
in the boot/<ARCH>/loader/isolinux.cfg file on the recovery medium:

- autoyast=default
  Starts an automated installation with AutoYaST which uses the autoinst.xml
  file in the root directory of the system recovery medium.

- netsetup=default
  Lets linuxrc prompt for network parameters and set up the network.
  Network is required to access the backup file via NFS so that the
  secondary data (specific configuration and data payload) can be restored.
  When there is an appropriate DHCP server running, it should work to
  use a network setup from the DHCP server. The details (like IP address)
  should not matter as long as the network setup is appropriate
  (in particular DNS should work) to access the backup file via NFS.
  When AutoYaST does the system configuration it would overwrite
  such a provisional network setup by the real network setup.
  If DHCP is not used, linuxrc prompts for the network parameters
  like IP address, netmask, gateway, and nameserver.
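
  If prompting is not wanted, common linuxrc boot parameters can provide
  the network setup directly; the values below are placeholders only
  (check the linuxrc documentation of your product for the supported
  parameters):

```
hostip=192.168.1.100 netmask=255.255.255.0 gateway=192.168.1.1 nameserver=192.168.1.1
```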
 
Whether or not an automated installation with AutoYaST works depends
on the content of the autoinst.xml file.

The content of the autoinst.xml file depends on to which extent
the run of AutoYaST clone_system.ycp can determine at least the basic
system configuration (in particular partitioning, filesystems, mount points
and boot loader) correctly and completely.

In particular when the system should be recreated on new hardware, it may happen
that the content of the autoinst.xml file does not match the new hardware,
for example when autoinst.xml contains hardware-specific values like
harddisk IDs or MAC addresses.

If it does not work with the autoinst.xml file in the root directory of the
system recovery medium, it should help to provide a better matching
autoinst.xml file for example on a floppy disk because with autoyast=default
AutoYaST will look for an autoinst.xml file first in the root directory
of the floppy disk and then in the root directory of the installation medium.
For details see
/usr/share/doc/packages/autoyast2/html/invoking_autoinst.html

It is recommended to use RecoveryImage via network so that all files
for the system recovery are stored on a NFS server, see the proposal
how to use RecoveryImage via network above. In this case you can adapt
the autoinst.xml file on the NFS server and then run RecoveryImage on
the NFS server to make a new ISO image with the adapted autoinst.xml
for example like:

RecoveryImage -d nfs://NFS.server.IP.address/directory -l log-to-base-dir \
              -b use-existing-rear-backup -a use-autoinst-from-base-dir \
              -m use-existing-ISO-files -i ... -r ... -c ...

When you run RecoveryImage on another host than what is stored as hostname
in autoinst.xml (like in this case where RecoveryImage runs on the NFS server
using an autoinst.xml for another host) there will be a message like:

  Note: autoinst.xml
  Note: seems to belong to host 'AUTOINST_HOST'
  Note: but this host is 'THIS_HOST'.
  Note: A system which is recreated with this autoinst file
  Note: will get its hostname set to 'AUTOINST_HOST'.

This makes you aware that the used autoinst.xml does not match the host
on which RecoveryImage runs (which is intended in this particular case).

To avoid failures with the autoinst.xml on the recovery medium,
it is crucial to verify in advance that it works in your particular case
to recreate your particular system with your particular recovery medium
on your particular hardware, see the section regarding how to avoid
that it fails when you need to recreate your system above.

When 'install-RPMs' was specified, AutoYaST may show an information popup
regarding missing software packages which have been installed on the system
but are not available on the system recovery medium.
Usually the reason is that the system recovery medium provides only software
packages to install the basic system but no packages for additional software.

All additional software must be stored in the backup file so that
additional software is re-installed when the backup is restored.

After AutoYaST had recreated the basic system, in particular partitioning,
filesystems, and mount points, AutoYaST runs a so called 'chroot script'
to fill in the backup data into the recreated basic system when the
mount points of the recreated system are still below '/mnt' and
when the boot loader of the recreated system is not yet installed.
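
The core of what the 'chroot script' does can be pictured as a plain
tar restore into the still-mounted target system; a hedged sketch only
(all paths are assumptions, not the actual script, and scratch
directories stand in for the NFS mount and for '/mnt' so the sketch
can be tried out safely):

```shell
# Sketch of the core restore step of the 'chroot script' (an assumption,
# not the actual script). NFS_MOUNT stands for the mounted NFS share
# and TARGET for the recreated system (AutoYaST uses /mnt).
NFS_MOUNT="${NFS_MOUNT:-$(mktemp -d)}"
TARGET="${TARGET:-$(mktemp -d)}"
# make a minimal stand-in for backup.tar.gz:
echo "payload" > "$NFS_MOUNT/file"
tar -czf "$NFS_MOUNT/backup.tar.gz" -C "$NFS_MOUNT" file
# the actual restore step into the still-mounted target system:
tar -xzf "$NFS_MOUNT/backup.tar.gz" -C "$TARGET"
```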

While the 'chroot script' runs, the YaST screen "Finishing Basic Installation"
shows the step "Copy files to installed system". There is an unfortunate
non-progressing progress bar which shows a static "0%" at
"Executing autoinstall scripts in the installation environment..."
even though the 'chroot script' proceeds to restore the backup,
because the 'chroot script' cannot trigger the YaST progress bar.

After the backup was restored, AutoYaST does a chroot into '/mnt' so that
it is now in the recreated system and installs the boot loader.

Then the recreated system boots for its very first time.

After the very first boot first of all AutoYaST does the system configuration,
in particular the network configuration according to the autoinst.xml file.

Finally the configured system moves forward to its final runlevel
so that all system services should now be up and running.


Regarding the backup:
=====================

It is crucial that the backup provides all changes and addons,
in particular all changed software and all additional software,
otherwise the system will not be recreated as it was before.

In particular updates of software packages for the basic system
may get lost if the updated software is not stored in the backup
because the software package on the system recovery medium
may be a version prior to the update.

On the other hand, if you use special additional backup software
in particular to save special application data like special database
backup software, it is crucial that the backup tar.gz file provides
exactly what you need in your particular special case.

For example the backup tar.gz file should provide special database backup software
so that you get the special database backup software restored but there is probably
no need that the backup tar.gz file also provides the database content because
the database will be restored with the special database backup software.

There could be a complication when the system should be recreated on new hardware.
For example hardware-specific values like harddisk IDs or MAC addresses
may be used by special software or are stored in configuration files
so that this or that may no longer work after the system was recreated
on new hardware.

The reason is that the restore of the backup results in software and
configuration files which may contain outdated hardware-specific values
like harddisk IDs or MAC addresses matching the old hardware
where the backup was created.

It could even happen that AutoYaST recreates the basic system correctly
on new hardware but afterwards the restore of the backup overwrites
the new configuration files from AutoYaST matching to the new hardware
with no longer matching outdated configuration files for the old hardware
from the backup.

It may make sense to exclude particular files from the backup,
in particular configuration files which are recreated by AutoYaST.
On the other hand in general the backup should be complete
and not depend on recreation of particular files by AutoYaST.

Other configuration files may not make sense in the backup in any case:

For example it is questionable if /etc/mtab makes sense in the backup.
On newer systems /etc/mtab is only a link to /proc/self/mounts which is o.k.
in the backup but a regular file /etc/mtab in the backup would overwrite
the current one in the recreated system with probably outdated content
from the time when the backup was made.

It is also questionable if it makes sense to restore /etc/fstab from
the backup because when AutoYaST recreates partitioning, filesystems,
and mount points, it also creates an exact matching /etc/fstab anew
which should not be overwritten by a probably outdated /etc/fstab
from the time when the backup was made.
In particular if you changed in autoinst.xml entries regarding partitioning,
filesystems, and mount points, the /etc/fstab from backup.tar.gz will
no longer match what AutoYaST recreates. Therefore in such cases
you must exclude /etc/fstab from the backup restore.
On the other hand /etc/fstab from backup.tar.gz may contain additional
entries (e.g. manually added entries) which may not be automatically
recreated by AutoYaST.
Therefore it is recommended to have /etc/fstab in the backup and exclude it
only if needed when the backup is restored.

Therefore 'tar' patterns which should be excluded from the backup restore
can be specified via the RESTORE_EXCLUDE parameter.
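
The restore-side effect of such a pattern corresponds to tar's --exclude
option; a self-contained sketch with placeholder file names (GNU tar
behavior assumed):

```shell
# Sketch: restoring a tar.gz while excluding a pattern, similar to what
# RESTORE_EXCLUDE patterns would do during the backup restore.
WORK="$(mktemp -d)"
mkdir -p "$WORK/src/etc"
echo "old fstab" > "$WORK/src/etc/fstab"
echo "motd"      > "$WORK/src/etc/motd"
tar -czf "$WORK/backup.tar.gz" -C "$WORK/src" etc
mkdir -p "$WORK/restore"
# extract everything except the excluded pattern:
tar -xzf "$WORK/backup.tar.gz" -C "$WORK/restore" --exclude 'etc/fstab'
ls "$WORK/restore/etc"
```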

With the RESTORE_EXCLUDE parameter, what should be excluded from the backup
restore must be known in advance (i.e. when RecoveryImage makes the ISO
image). To deal with issues when the system is recreated because something
should have been excluded from the backup restore but was not specified
in advance in the RESTORE_EXCLUDE parameter, it is recommended
to use RecoveryImage via NFS server, see the proposal how to use RecoveryImage
via network above. In this case you can run RecoveryImage on the NFS server
to make a new ISO image with an adapted RESTORE_EXCLUDE parameter like:

RecoveryImage -d nfs://NFS.server.IP.address/directory -l log-to-base-dir \
              -b use-existing-rear-backup -a use-autoinst-from-base-dir \
              -m use-existing-ISO-files -i ... -r 'this that' -c ...

In this case RecoveryImage runs on the NFS server using an autoinst.xml
for another host which shows a note, see the general explanation above.

When the backup is made with 'rear mkbackuponly', what is included in the
backup is specified in the ReaR configuration, see the ReaR documentation
'man rear' and in particular /usr/share/rear/conf/default.conf for what can be set.

A sample ReaR configuration in the file /etc/rear/local.conf
which lets ReaR make the backup using its built-in defaults is:
--------------------------------------------------------------------------
# Create ReaR rescue media as ISO image:
OUTPUT=ISO
# Store the backup file via NFS:
BACKUP=NETFS
# Only a NETFS_URL of the form 'nfs://host/path' is supported
# so that 'mount -o nolock -t nfs host:/path' works.
# If host is not an IP address but a hostname,
# DNS must work when the backup is restored.
NETFS_URL=nfs://IP.of.NFS.server/path/to/rear/backups
--------------------------------------------------------------------------

For example when the /home/ directory is mounted on a separate
partition, an additional entry like
--------------------------------------------------------------------------
# Enforce that the home partition is included in the backup:
BACKUP_PROG_INCLUDE=( '/home' )
--------------------------------------------------------------------------
could be needed to make sure that the content of the 'home' partition
is included in the backup.

In any case you must verify that the content of your backup is correct.

By default ReaR creates a directory with the name of the host on which
ReaR makes the backup at the specified path on the NFS server and
stores a backup file named 'backup.tar.gz' there, for example:
IP.of.NFS.server:/path/to/rear/backups/HOSTNAME/backup.tar.gz

If such a backup file already exists, ReaR will by default overwrite it
when it makes a new backup. This means that there is no backup at all
while ReaR makes it anew, unless the old existing backup is saved
before ReaR overwrites it, which can be enabled in /etc/rear/local.conf via
--------------------------------------------------------------------------
# Keep an older copy of the backup in a HOSTNAME.old directory
# provided there is no '.lockfile' in the HOSTNAME directory:
NETFS_KEEP_OLD_BACKUP_COPY=yes
--------------------------------------------------------------------------
This lets ReaR save an already existing backup if there is no '.lockfile'
in the directory of the already existing backup. The '.lockfile' may have
to be manually removed after ReaR has made the backup.
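
Based on the description above, the keep-old-copy behavior amounts to
something like the following; this is a sketch of the described behavior,
not ReaR's actual implementation (a scratch directory stands in for the
HOSTNAME directory on the NFS share):

```shell
# Sketch of the NETFS_KEEP_OLD_BACKUP_COPY behavior described above
# (an assumption, not ReaR's actual code). BACKUP_DIR stands for the
# HOSTNAME directory on the NFS share.
BACKUP_DIR="$(mktemp -d)/HOSTNAME"
mkdir -p "$BACKUP_DIR"
: > "$BACKUP_DIR/backup.tar.gz"          # pretend an old backup exists
if [ ! -e "$BACKUP_DIR/.lockfile" ] ; then
    rm -rf "$BACKUP_DIR.old"
    mv "$BACKUP_DIR" "$BACKUP_DIR.old"   # keep the old backup copy
fi
```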


Regarding the system recovery medium:
=====================================

To be on the safe side the system recovery medium should be based on a
complete installation medium, for example a SUSE Linux Enterprise install DVD.

On the one hand this results in a big system recovery medium (a 4.7 GB DVD).

On the other hand such a system recovery medium provides the complete
functionality of an installation medium. In particular the usual
boot options of an installation medium are still available,
see the boot/<ARCH>/loader/isolinux.cfg file on the recovery medium.
For example at the system recovery boot screen you could type 'linux'
to start the usual manual installation provided the RPMs were included
in the ISO image so that the RPMs are available on the recovery medium
(i.e. when 'no-RPM-payload' was not used when making the ISO image).

Whether or not the data payload of software packages can be removed
from the system recovery medium to make it smaller depends on which
software is provided by the backup and which software packages are
actually needed by AutoYaST to recreate the basic system.

If the backup is complete, the basic system does not need to have anything installed.
Only partitioning, filesystems, and mount points are needed to fill in the backup data.
In this case no software package is needed on the system recovery medium.
The AutoYaST autoinst.xml file must be modified accordingly so that AutoYaST
does not try to install any software package. AutoYaST should only set up
partitioning, filesystems, and mount points and then run the 'chroot script'
to fill in the backup data into the recreated basic system, see
/usr/share/doc/packages/autoyast2/html/createprofile.scripts.html#chroot.scripts

For SUSE Linux Enterprise Server 11 an experimental imaging feature
was added to AutoYaST, see
http://www.suse.com/~ug/autoyast_changes_SLES10_SLES11.html
-------------------------------------------------------------------------------------
an experimental imaging feature was added. Instead of installing via RPM,
you can provide an image script that is doing the installation:

  <software>
    <image>
      <script_location>http://10.10.0.162/image.sh</script_location>
    </image>
-------------------------------------------------------------------------------------

To change as little as possible in autoinst.xml, the existing 'chroot script'
which already fills in the backup data at the right stage
(system still mounted at '/mnt' and boot loader not yet installed)
is kept, but the whole '<software>...</software>' section in autoinst.xml
is replaced with a dummy image script '/bin/true' which lets AutoYaST
skip the whole software package installation successfully:
-------------------------------------------------------------------------------------
  <software>
    <image>
      <script_location>file:///bin/true</script_location>
    </image>
  </software>
-------------------------------------------------------------------------------------
This is what is set as '<software>...</software>' section in autoinst.xml
when 'skip-RPM-install' is used when making the ISO image.

Now the actual payload of the RPM software packages can be removed
from the system recovery medium. The RPM files in the various
suse/<ARCH>/ directories (like suse/i586 suse/x86_64 suse/noarch)
should not be removed because the exact file names may be needed
during the installation. Instead each RPM file should be made empty
for example via 'cat /dev/null > suse/i586/package-version.rpm'.

This is done when 'no-RPM-payload' is used when making the ISO image.
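The emptying of the RPM files can be sketched as follows, run in the
unpacked ISO tree (directory and package names are made-up examples;
this is not the actual RecoveryImage code):

```shell
# example ISO tree with dummy RPM files:
mkdir -p suse/i586 suse/x86_64 suse/noarch
echo "dummy payload" > suse/i586/package-1.0-1.i586.rpm
echo "dummy payload" > suse/noarch/other-2.0-3.noarch.rpm
for rpm in suse/*/*.rpm ; do
    # keep the exact file name but drop its content:
    cat /dev/null > "$rpm"
done
```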


Regarding the system configuration by AutoYaST:
===============================================

After the backup was restored and the boot loader was installed
the basic system is recreated. When it boots for its very first time
AutoYaST triggers various YaST modules to do the system configuration
like network configuration and so on. This system configuration
during the initial boot is the so called "second stage" of a
system installation.

When the backup is complete, the restore of the backup would restore
all configuration files so that a plain reboot of the system should
boot it correctly with the restored configuration.

When the backup is complete, a subsequent configuration by YaST
is not needed and it could even damage the restored configuration
when the configuration settings in autoinst.xml do not match the
configuration in the restored configuration files.

Unfortunately there is no ready-made setting in autoinst.xml which
completely skips the so called "second stage" and instead does
only a plain normal reboot after partitioning, software installation
(i.e. backup restore), and bootloader installation (which is the
so called "first stage" of a system installation).

But AutoYaST does not trigger a YaST configuration module when
there is no matching configuration section in autoinst.xml
so that it is at least possible to skip a particular configuration
by removing its matching configuration section from autoinst.xml.

This is what RecoveryImage provides with the '-c' option.
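Removing such a section by hand can be sketched as follows (the
'printer' section name and the profile content are made-up examples,
and this is not how the '-c' option is implemented):

```shell
# example profile with a section that should be skipped:
cat > autoinst.xml <<'EOF'
<profile>
  <printer>
    <listen/>
  </printer>
  <networking>
    <dns/>
  </networking>
</profile>
EOF
# delete the whole '<printer>...</printer>' section so that AutoYaST
# does not trigger the matching YaST configuration module:
sed -i '/<printer>/,/<\/printer>/d' autoinst.xml
```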

In the file /usr/share/YaST2/control/control.xml there is a
'clone_modules' section which lists the modules AutoYaST clones
by default.

Mandatory configurations are 'partitioning' and 'bootloader'
which are needed by AutoYaST to recreate partitioning with filesystems
and mount points and to reinstall the boot loader.

For network setup required configurations are at least 'host'
and 'networking' because without them YaST could fail
to set up the network in the "second stage".

A usually required configuration is 'x11' which specifies in particular
the display manager and the window manager and desktop (the fallback
display manager is XDM and the fallback window manager is fvwm2).

Usually needed configurations are 'keyboard', 'language', 'runlevel',
and 'timezone'.

Depending on your particular system, you probably need many other
configurations too.

Regarding the YaST configuration modules for network setup during
the "second stage":

Currently it is usually required to run the YaST configuration modules for
network setup during the "second stage" because otherwise the network setup
is likely to be wrong.

The reason is that currently when the system boots into the "second stage"
it boots with the same provisional network setup that was used during
the "first stage" via the 'netsetup=default' boot parameter, see the
general explanation above.

Experimental 'skip-second-stage' feature:
-----------------------------------------

RecoveryImage provides with the '-c skip-second-stage' option
an experimental feature to completely skip the "second stage".

Currently this is implemented by a dirty hack which breaks
the usual AutoYaST workflow and does a plain normal reboot
after the "first stage" has finished. 

This is done by an addendum in /etc/init.d/boot.local which
basically removes the file /var/lib/YaST2/runme_at_boot that
would trigger the "second stage" to run.

Finally the addendum removes itself from /etc/init.d/boot.local
so that /var/lib/YaST2/runme_at_boot could work again as intended
e.g. for another system recovery without 'skip-second-stage'.

If /var/lib/YaST2/runme_at_boot exists, the system
is currently booting for its very first time after
the "first stage" during system installation has completed.

If /var/lib/YaST2/runme_at_boot exists, /etc/init.d/boot
would run /usr/lib/YaST2/startup/YaST2.Second-Stage
but the "second stage" should not run in this case,
so the file is removed via /etc/init.d/boot.local
which is run by /etc/init.d/boot prior to that.
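Such a self-removing addendum can be sketched as follows (relative
sandbox paths stand in for /etc/init.d/boot.local and
/var/lib/YaST2/runme_at_boot; this is an illustrative sketch,
not the actual RecoveryImage addendum):

```shell
# simulate the system files:
mkdir -p etc/init.d var/lib/YaST2
touch var/lib/YaST2/runme_at_boot
echo 'echo original boot.local content' > etc/init.d/boot.local
# append the addendum; every addendum line carries a marker so that
# the addendum can remove itself with a single 'grep -v':
cat >> etc/init.d/boot.local <<'EOF'
rm -f var/lib/YaST2/runme_at_boot # RecoveryImage-addendum
grep -v RecoveryImage-addendum etc/init.d/boot.local > etc/init.d/boot.local.new # RecoveryImage-addendum
mv etc/init.d/boot.local.new etc/init.d/boot.local # RecoveryImage-addendum
EOF
# what /etc/init.d/boot would do during the first boot:
sh etc/init.d/boot.local
```

After this run, runme_at_boot is gone and boot.local contains only
its original content again.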

This may cause issues because AutoYaST is not yet prepared
for the case when the whole "second stage" of the installation
workflow is skipped.

During the "first stage" of an Auto-YaST installation there is
the screen "Finishing Basic Installation" with the following steps:
- Copying files to installed system...
- Saving configuration...
- Installing boot manager...
- Saving installation settings...
- Preparing system for initial boot...

While "Copying files to installed system" the backup is restored.

During the subsequent steps some files in the installed system
are changed (i.e. after the backup was restored).

Some of those changes are intended:

In particular changes related to the bootloader installation
because the bootloader is installed anew according to
the 'bootloader' configuration settings in autoinst.xml to get
a clean new bootloader installation instead of a possibly outdated
bootloader configuration from the backup which may not match the
recreated partitioning.

Other changes are not intended and could even damage the restored
configuration from the backup.

In particular changes in sysconfig files like /etc/sysconfig/clock,
/etc/sysconfig/language, /etc/sysconfig/keyboard, /etc/sysconfig/mouse,
/etc/sysconfig/displaymanager, /etc/sysconfig/windowmanager,
and the regular files in the /etc/sysconfig/network/ directory.

Therefore in the "chroot script" which restores the backup those
sysconfig files are saved as /tmp/RecoveryImage.sysconfig.*.backup
and /tmp/RecoveryImage.sysconfig.network.*.backup and restored by the
addendum in /etc/init.d/boot.local to revert unintended changes.
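The save-and-restore of the sysconfig files can be sketched as
follows (relative sandbox paths stand in for /etc/sysconfig/* and
/tmp/*, and the example file list stands in for the actual value of
SAVE_SYSCONFIG_FILES in /usr/sbin/RecoveryImage):

```shell
SAVE_SYSCONFIG_FILES="clock language"   # example value
mkdir -p etc/sysconfig tmp
for f in $SAVE_SYSCONFIG_FILES ; do
    # simulate a sysconfig file as restored from the backup:
    echo "restored-from-backup" > etc/sysconfig/$f
    # in the 'chroot script', right after the backup was restored:
    cp etc/sysconfig/$f tmp/RecoveryImage.sysconfig.$f.backup
done
# ... the remaining "first stage" steps may overwrite such files ...
echo "changed-by-yast" > etc/sysconfig/clock
# in the boot.local addendum, revert unintended changes:
for f in $SAVE_SYSCONFIG_FILES ; do
    cp tmp/RecoveryImage.sysconfig.$f.backup etc/sysconfig/$f
done
```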

Depending on your particular system, you may need to adapt the
SAVE_SYSCONFIG_FILES and SAVE_SYSCONFIG_NETWORK_FILES variables
in the /usr/sbin/RecoveryImage script so that 'skip-second-stage'
works for your particular system.

This shows that currently '-c skip-second-stage' is an experimental
dirty hack which should be replaced by a solution in a future AutoYaST
version so that AutoYaST natively supports to skip the "second stage"
and do a plain reboot without unintended changes.

In other words:
A future AutoYaST version should natively support the following
minimal installation workflow (provided the backup is complete):
- create partitioning with filesystems and mount points
- restore the backup
- install the boot loader
- boot the system


Regarding the "second stage" during installation by AutoYaST:
=============================================================

In particular when you use special additional backup software
like special database backup software (see above), you may like
to restore the database via its special database restore software
in an automated way during system recovery.

Via AutoYaST custom user scripts (almost) anything can be done
while and/or after the basic system is recovered, see
"regarding manual changes of the autoinst.xml file" above and
http://www.suse.de/~ug/autoyast_doc/createprofile.scripts.html
for how this can be done.

One example is running a special database restore program:

The precondition is that the database restore program and
everything it needs to run is included in the backup.tar.gz, so that
when the 'chroot script' restores the backup.tar.gz during the
"first stage" (see above), it restores in particular the database
restore program and everything it needs to run.

Then during the "second stage" AutoYaST could run a so called
'post-install script' or an 'init script' (depending on which
state of the system is needed during the recovery process)
which restores the database via its database restore program.
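For illustration, such an 'init script' could be declared in
autoinst.xml roughly like this (the script name and the restore
command are made-up examples; see the createprofile.scripts.html
documentation above for the exact syntax):

```xml
<scripts>
  <init-scripts config:type="list">
    <script>
      <filename>restore_database.sh</filename>
      <source><![CDATA[
#!/bin/sh
# hypothetical restore command; the actual program and its
# arguments depend on the particular database backup software
# that was included in the backup.tar.gz:
/usr/local/bin/db_restore /var/lib/db/latest.dump
]]></source>
    </script>
  </init-scripts>
</scripts>
```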

Another example is the recreation of filesystems that are not
supported by YaST, see the section regarding partitioning,
filesystems, and mount points in autoinst.xml above:

The precondition is that everything needed to recreate such
filesystems is included in the backup.tar.gz and that nothing
included in the backup.tar.gz was stored on a filesystem
that is not supported by YaST.

Data on filesystems that are not supported by YaST must be saved
in a separate second backup.

During the "first stage" when the 'chroot script' restores
the backup.tar.gz, it restores in particular everything needed
to recreate filesystems that are not supported by YaST, but
it cannot restore anything that was stored on such a filesystem
because the filesystem does not yet exist at this stage.

During the "second stage" AutoYaST could run a 'post-install script'
or 'init script' which recreates filesystems that are not supported
by YaST.

Afterwards data on filesystems that are not supported by YaST
can be restored from the separate second backup.

Such a separate second backup restore can also be done in an
automated way via a 'post-install script' or 'init script',
as in the database restore example above.
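The restore step of such a script can be sketched as follows (local
directories stand in for the recreated filesystem and the NFS mount,
and 'second_backup.tar.gz' is a made-up file name; the mkfs and mount
calls of a real 'init script' are only hinted at in comments):

```shell
# mkfs.<unsupported-fs> /dev/<device>   # recreate the filesystem first
# mount /dev/<device> data              # then mount it
# simulate the mounted filesystem, the NFS share, and a second backup:
mkdir -p data nfs/HOSTNAME srcdata
echo "db payload" > srcdata/file
tar -czf nfs/HOSTNAME/second_backup.tar.gz -C srcdata .
# restore the separate second backup onto the recreated filesystem:
tar -xzf nfs/HOSTNAME/second_backup.tar.gz -C data
```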

