EVMS Release 2.5.5
==================

See the INSTALL file for installation instructions. The instructions are also
available at http://evms.sourceforge.net/install/.

See the User-Guide at http://evms.sourceforge.net/user_guide/ for detailed
usage information. The User-Guide is also available in multiple formats on
The Linux Documentation Project web site at http://www.tldp.org/guides.html.

Important notes concerning this release:

- New Plugins

   A new FSIM for OCFS2 was contributed by Robert Whitehead from Novell, and
   a new FSIM for FAT was contributed by Anton D. Kachalov from ALT Linux.
   Also, an alternative Linux-HA-2 cluster-manager plugin was contributed by
   Changju Gao from Novell. This new HA2 plugin will only be built if the
   original HA plugin is disabled during configuration (using the --disable-ha
   option).
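
   As a build-time illustration, disabling the original HA plugin so that the
   HA2 plugin gets built might look like this (a sketch of the usual
   configure/make sequence; see the INSTALL file for the authoritative steps):

```
./configure --disable-ha
make
make install
```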

- Clustering and Linux-HA

   EVMS has been updated to work with either Linux-HA version 1 or version 2.
   When you build EVMS, it will detect which version of HA you have installed.
   HA-1 and HA-2 link against different versions of the glib library, and
   therefore EVMS must link against the same version of glib. If you have HA-1
   installed (or if you don't have HA installed and are not using clustering),
   EVMS will use glib-1 when building the HA plugin, and also when building the
   GUI and text-mode UIs. If you have HA-2, EVMS will use glib-2 when building
   the HA plugin and the text-mode UI. However, the GUI is written against gtk+
   version 1 (which in turn requires glib-1) and it would be a very significant
   rewrite of the GUI to get it to work with gtk+ 2. Thus, even if you have
   HA-2, the GUI will still be built against gtk+-1 and glib-1. Unfortunately,
   this means that you cannot use the GUI when you are running EVMS in a
   Linux-HA-2 cluster, since the glib-1 and glib-2 libraries will conflict with
   each other. If you run the GUI in this situation, it will warn you about
   this incompatibility, recommend that you run the text-mode UI, and then
   refuse to load the HA plugin, thus preventing you from using the clustering
   features in EVMS. We apologize for this inconvenience, but we simply are
   not able to invest the time right now in a rewrite of the GUI. Fortunately,
   the text-mode UI presents a look-and-feel that's nearly identical to the
   GUI, so the overall usability and functionality should not be affected.

- Disk Plugin Discovery

   When running with a 2.6 kernel, the EVMS Disk plugin will search for disks
   using the information in sysfs (in /sys/block/), whereas on a 2.4 kernel,
   since sysfs doesn't exist, the plugin will simply search through the /dev
   tree for the system's disks. These searches can be configured using the
   "sysfs_devices" and "legacy_devices" sections in the /etc/evms.conf file.
   However, there are cases when running a 2.6 kernel in which it is
   preferable to search the /dev tree instead of sysfs. For this reason, a
   parameter called "ignore_sysfs" has been added to the "sysfs_devices"
   section of the config file. If it is set to "yes", the Disk plugin will
   revert to using the "legacy_devices" section of the config file, even when
   running on a 2.6 kernel.
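
   A sketch of the relevant /etc/evms.conf fragment (the section/entry syntax
   shown here is modeled on the existing config file; verify against your own
   copy):

```
sysfs_devices {
        # Ignore sysfs and use the legacy_devices section instead,
        # even on a 2.6 kernel.
        ignore_sysfs = yes
}
```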

- Disk Plugin File-Descriptors

   In EVMS, the Disk plugin is responsible for finding all the disks on the
   system and using them to create the first layer of objects to build on. In
   previous versions, when the Engine was opened (by running one of the UIs),
   the Disk plugin would open each disk as it was found, and leave it open
   until the Engine was closed. For systems with a small number of disks,
   this isn't a problem. But for systems with hundreds or thousands of disks,
   this actually leads to the possibility of running out of available file
   descriptors.

   To fix this, a file-descriptor cache was created to limit the number of
   disks that the Disk plugin holds open at any given time. Once the limit
   is reached, the next time a disk needs to be opened, the least recently
   used disk is closed to free up a file descriptor. The limit can be set in
   the /etc/evms.conf file, using the "max_open_disks" setting in the
   "legacy_devices" and "sysfs_devices" sections. The default value is 64,
   and the allowable range is 1 to 1024.
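
   For example, to raise the limit on a system with many disks (a sketch using
   the config file's section/entry style; check your own /etc/evms.conf for
   the exact syntax):

```
legacy_devices {
        max_open_disks = 256    # default 64, allowed range 1 - 1024
}

sysfs_devices {
        max_open_disks = 256
}
```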

- Init-Ramdisk Changes

   The EVMS sample init-ramdisk has gone through some changes to bring it
   more up-to-date with common conventions for initrds. In particular, the
   change-root method of mounting the root filesystem has been dropped in
   favor of the pivot-root method. In addition, the error-handling has been
   significantly improved, and will provide directions in the event that
   something goes wrong during the activation and mounting of the root volume.

   The only change that most users will actually notice is that the EVMS
   initrd now requires that a /initrd directory be created on the root
   filesystem before rebooting. Other less-noticeable changes include adding
   support for detecting the "rootflags" and "rootfstype" kernel parameters.
   If you specify these parameters, the EVMS initrd will use them when
   mounting the root filesystem.
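
   For illustration, a GRUB boot entry that passes these parameters might look
   like the following. The kernel image, volume name, filesystem type, and
   flags are placeholders for your own setup:

```
title Linux (EVMS root)
        kernel /boot/vmlinuz root=/dev/evms/root rootfstype=ext3 rootflags=noatime
        initrd /boot/evms-initrd.gz
```

   Remember to create the required directory first ("mkdir /initrd" on the
   root filesystem) before rebooting.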

   Thanks to Michel Bouissou and Syrius for their help in developing and
   testing these updates to the EVMS initrd.

- New MD Superblock

   This release of EVMS now includes support for the new version of the
   MD/Software-RAID superblock. The superblock is the piece of metadata that
   EVMS and the MD kernel driver use to identify a device as belonging to a
   Software-RAID region. The new superblock format is simpler, while providing
   improved flexibility and scalability.

   For the most part, the functionality of the RAID regions is not affected
   by the superblock format. All the different RAID levels provide the same
   options as before. The most noticeable change is that with the new
   superblock, the MD kernel driver can have a resync of a RAID-1 or RAID-5
   interrupted and later restart that resync from the point it left off.

   IMPORTANT: The new superblock is only supported on 2.6.10 and later kernels.
   The MD driver in the 2.4 kernel does not understand this format, so any
   Software-RAID regions you create using the new superblock will only work
   with 2.6 kernels. Also, the superblock format was modified slightly after
   2.6.9 was released, so the EVMS support will only work with 2.6.10 and later
   versions. Finally, there are a couple of minor MD bugs in 2.6.10 that need
   to be fixed for the new superblock format to work correctly. Make sure
   you've applied md-fixes.patch from the kernel/2.6/ directory.
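
   Applying the patch might look like the following, assuming your kernel
   source tree is in /usr/src/linux and the EVMS source was unpacked in
   /usr/src/evms-2.5.5 (both paths are placeholders for your own layout):

```
cd /usr/src/linux
patch -p1 < /usr/src/evms-2.5.5/kernel/2.6/md-fixes.patch
```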

- Metadata Backup and Restore

   EVMS now provides the capability of backing up all the metadata that
   defines the current volume configuration. This backup information is
   stored in a file which can later be used to restore all or parts of that
   configuration in the event that the volume metadata is damaged or
   corrupted.

   EVMS metadata backups do not include any of the filesystem information.
   They are strictly limited to the metadata that defines the volumes,
   storage-objects, and containers in the system.

   Two new tools are provided for using these backups: evms_metadata_backup
   and evms_metadata_restore. Please see the manual pages for these tools,
   as well as the corresponding section in the EVMS User-Guide for more
   information about the new metadata backup capabilities.
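
   A session might look like the sketch below. The arguments shown are
   hypothetical -- consult the man pages for the tools' actual options:

```
# Back up the current volume-configuration metadata (hypothetical usage).
evms_metadata_backup /root/evms-metadata.backup

# Later, restore the configuration from that file (hypothetical usage).
evms_metadata_restore /root/evms-metadata.backup
```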

- LVM2 Mapping-Move

   The ability to "move" all or portions of a region has now been added to
   the LVM2 plugin. This is similar to the Move-PV and Move-Extent functions
   in the LVM1 plugin. The new function in LVM2 is called Move-Mapping. Each
   LVM2 region is made of one or more logical mappings, with each mapping
   representing a contiguous area on one of the container's PV-objects. This
   new function allows you to move a mapping to a different physically-
   contiguous area in the container, and automatically copies the data to
   that location. This copying can be performed while the region is mounted
   and in use.

   In addition to Move-Mapping, two other functions have been added to the
   LVM2 plugin to assist with moves. The first, called Split-Mapping, allows
   you to split a single mapping into two separate mappings at a given offset
   within the mapping. This is helpful in situations where you want to move
   a mapping but don't have enough contiguous freespace to move it all at
   once. The second, called Merge-Mappings, allows you to find all the split
   mappings that are actually consecutive on disk and merge them back into
   a single logical mapping.

   See the LVM2 appendix in the EVMS User-Guide for more details on how to
   use the Move-Mapping functionality.

- BBR Segments

  - Metadata Update For All BBR Segments

    In EVMS 2.4.0 and earlier versions, the size of BBR segments was always
    calculated based on the size of the child object (during volume discovery
    and when creating or resizing BBR segments). However, this calculation was
    based on the child object's block-size, which is not a fixed value. If the
    block-size changes, the BBR segment size and start could change, which
    could lead to not properly discovering objects on top of the BBR segment.
    This behavior has been seen frequently when switching from a 2.4 kernel to
    a 2.6 kernel, since the two kernels provide different default block-sizes
    for some disks.

    To fix this behavior, we've updated the BBR metadata to include a size and
    start field, so these values will not change depending on the underlying
    disk's block-size. If your volume configuration contains any BBR segments,
    the first time you run EVMS 2.4.1 (or later) it will detect the need for
    the metadata update. EVMS will prompt you to update the metadata and save
    changes to write the new metadata to disk.

    IMPORTANT: Only perform this metadata update if all your volumes have been
    discovered and activated correctly. You may want to skip the update
    initially so you can check your volumes. If everything looks normal, you
    can then restart the EVMS UI and complete the BBR metadata update.

    If you notice that any of your volumes have not been discovered properly or
    if you have any other configuration problems, please revert to a
    version of EVMS and a version of the Linux kernel that are known to work
    correctly. When you are back to a working configuration, upgrade to the
    latest version of EVMS without changing kernels. Then you can complete the
    BBR metadata update.

    If you don't use BBR segments, then there is no metadata update for your
    system.

- Selective Activation

   EVMS now allows users to specify which volumes and objects should be
   activated and which should be left inactive.

   There is a new section in the EVMS config file (/etc/evms.conf) called
   "activate". This section has two entries, "include" and "exclude", which
   work similarly to the entries in the legacy_devices and sysfs_devices
   sections. The user can specify exact names of volumes or objects, or provide
   a pattern to match multiple volume and object names. Everything in the
   include list will be added to the list of volumes and objects to activate,
   and then everything in the exclude list will be removed from this activation
   list. Thus, if a name matches in both the include and exclude lists, the
   exclude list has precedence.
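
   The precedence rule can be sketched in a few lines of shell. This is only
   an illustration of the matching logic, not EVMS code, and it assumes the
   include/exclude patterns behave like ordinary shell globs:

```shell
#!/bin/sh
set -f   # keep the patterns literal; only 'case' should interpret them

# Print "yes" if a name matches an include pattern and no exclude
# pattern -- the exclude list always has precedence.
should_activate() {
    name=$1 answer=no
    for pat in $INCLUDE; do
        case $name in $pat) answer=yes ;; esac
    done
    for pat in $EXCLUDE; do
        case $name in $pat) answer=no ;; esac
    done
    echo $answer
}

INCLUDE='/dev/evms/*'
EXCLUDE='/dev/evms/swap*'

should_activate /dev/evms/home    # matches include only: activated
should_activate /dev/evms/swap1   # matches both lists: exclude wins
```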

   Activation and deactivation dependencies are automatically enforced. This
   means that for an object to be activated, all of its child objects must
   also be activated. Likewise, for an object to be deactivated, all of its
   parent objects must also be deactivated. Specifying a volume or object in
   the "include" list in the config file's "activate" section implies that all
   child objects will also be included, and specifying an object in the
   "exclude" list implies that all parents of that object will also be
   excluded. (For clarity, volumes are always the highest parents in the stack
   and disks are always the lowest children in the stack. See the TERMINOLOGY
   file for more details.)

   In addition to the new config file section, the EVMS user-interfaces offer
   the ability to activate or deactivate a particular volume or object. These
   options are available from the "Actions" menu in the GUI and text-mode UIs,
   and also on the context pop-up menus for each volume or object. In addition,
   the CLI provides new commands called "activate" and "deactivate". Upon
   saving, the appropriate volume or object will be activated or deactivated
   (along with any activation dependencies as mentioned above). Currently,
   however, EVMS does not update the config file following a manual activation
   or deactivation in the UIs. If the user does not also add the appropriate
   entry to their config file, this activation or deactivation will be
   temporary. The next time the user-interface runs and the state is saved,
   objects that had been deactivated from the UI will be reactivated.

   By default, all volumes and objects are included and none are excluded,
   which will activate everything in the system. This matches the previous
   behavior of EVMS.

- LVM2 Volumes

   EVMS has a new plugin for recognizing and managing the new volume format
   introduced by the LVM2 tools. Just as with the existing LVM plugin, the
   LVM2 plugin will discover your LVM2 volume groups as EVMS containers and
   your logical volumes as EVMS regions. The regions will also automatically
   be made into compatibility volumes the first time you run EVMS. An LVM2 LV
   named /dev/group1/vol1 will have a region name of lvm2/group1/vol1 and a
   compatibility volume name of /dev/evms/lvm2/group1/vol1.

   Some users may experience a problem with this new plugin not discovering
   all of their LVM2 PVs. This is most likely due to a size-check inconsistency
   between the LVM2 tools and the EVMS LVM2 plugin. If you notice that not all
   of your LVM2 PVs are discovered by EVMS, please edit your EVMS config file
   (/etc/evms.conf). There is a new "lvm2" section at the end, with an entry
   called "device_size_prompt". Set this entry to "yes", and EVMS will then
   prompt you when it finds an object that might be a PV but does not pass
   the size-checks for that object. Answer the prompts to proceed with
   discovering your LVM2 containers and regions.

   On the other hand, if you get these prompts during discovery, and you know
   that the specified object is not an LVM2 PV, you can set the
   "lvm2.device_size_prompt" entry in your EVMS config file to "no" to prevent
   these discovery prompts in the future. You might be in this situation if
   you have LVM2 groups/volumes on top of MD software-RAID devices.
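
   A sketch of the corresponding config fragment (section/entry syntax modeled
   on the rest of /etc/evms.conf; verify against your own file):

```
lvm2 {
        # Prompt when an object looks like a PV but fails the size-checks.
        device_size_prompt = yes
}
```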

   The EVMS LVM2 plugin does not support LVM2 snapshots. EVMS provides its
   own snapshot plugin which you can use to create snapshots of your LVM2
   volumes or any other volume within EVMS. Please delete any LVM2 snapshots
   you have before migrating your setup to EVMS. Any remaining LVM2 snapshot
   volumes will be treated as simple regions.

   The EVMS LVM2 plugin does not modify any of the files in /etc/lvm/ that are
   maintained by the LVM2 tools. If you make modifications to your LVM2 groups
   and/or volumes using EVMS and you later decide to use the LVM2 tools again,
   you will need to run "vgscan" for the LVM2 tools to detect the changes you
   made using EVMS.

   The EVMS LVM2 plugin does not yet provide PE-move and PV-move capabilities.
   This feature will be added in a future release.

- Software-RAID

 - RAID-0 and RAID-5 Resize

   RAID-0 and RAID-5 regions can now be resized by adding new objects to the
   region or removing objects from the region. The data in that region will be
   "re-striped" to account for the change in number of child objects.
   To prevent data corruption, this operation must be performed while the region
   is unmounted and deactivated.

   Be forewarned, the expand and shrink process can take a *long* time. During
   the "re-striping" process, each chunk of data in the RAID region must be
   moved from its current location to its new location. Initial tests
   suggest that a larger RAID chunk-size will decrease the time necessary to
   complete an expand or shrink. Unfortunately, the chunk-size cannot be
   changed after the RAID region is created. If you are creating new RAID
   regions that you might want to expand or shrink in the future, you might
   want to consider a larger chunk-size.

   IMPORTANT: Please have a suitable backup available before attempting a
   RAID-0 or RAID-5 resize. If the expand or shrink process is interrupted
   before it completes (e.g., the EVMS process gets killed, the machine
   crashes, or a disk in the RAID region starts returning I/O errors), then
   the state of that region cannot be ensured in all situations.
   **DO NOT INTERRUPT THE RESIZE PROCESS BEFORE IT FINISHES**.
   
   EVMS will *attempt* to recover following a problem during a RAID resize. The
   MD plugin does keep track of the progress of the resize in the MD metadata.
   Each time a data chunk is moved, the MD metadata is updated to reflect which
   chunk is currently being processed. If EVMS or the machine crashes during a
   resize, the next time you run EVMS the MD plugin will try to restore the
   state of that region based on the latest metadata information. If an expand
   was taking place, the region will be "rolled-back" to its state before the
   expand. If a shrink was taking place, the shrink will continue from the
   point it stopped. However, this recovery is not always enough to ensure
   that the entire volume stack is in the correct state. If the RAID region is
   made directly into a volume, then it will likely be restored to the correct
   state. On the other hand, if the RAID region is a consumed-object in an
   LVM container, or a child-object of another RAID region, then the metadata
   for *those* plugins may not always be in the correct state. Thus, the
   containers, objects, and volumes built on top of the RAID region may not
   reflect the correct size.

   ALSO IMPORTANT: Because RAID-resizes can be so long-running, there is the
   potential for the EVMS engine log to grow very large if the logging level
   is set too high. In one test, the log file grew to the maximum file size
   for the underlying filesystem and caused the EVMS engine process to be
   killed. When performing a RAID-resize, be sure to set the EVMS logging
   level to "default" or lower. This can be done by editing the
   engine.debug_level entry in the /etc/evms.conf file or by running the EVMS
   UI with the "-d" option.
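
   For example (again using the config file's section/entry style; check your
   own /etc/evms.conf for the exact syntax):

```
engine {
        debug_level = default
}
```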

 - Disabling RAID Auto-detect

   If you have existing Software-RAID devices that you would like to migrate
   to using EVMS, please make sure you are not using RAID auto-detect. EVMS
   requires volume discovery to be done in user-space. Having the kernel
   auto-detect just the RAID arrays will cause some inconsistencies in the
   RAID superblocks.

   If you are using auto-detect, you will need to use fdisk to change the
   partition types from 0xfd to 0x83.
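
   One way to do this is an interactive fdisk session like the sketch below,
   shown for the first partition of /dev/sda (substitute your own disk and
   partition number; prompts vary slightly between fdisk versions):

```
# fdisk /dev/sda
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 83
Command (m for help): w
```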

 - For further information about the EVMS MD plugin, please see the newly
   rewritten MD appendix of the User-Guide at
   http://evms.sourceforge.net/user_guide/#appxmdreg.

- Snapshots

 - Snapshot Activation

   Due to the new selective-activation capabilities, there are some minor
   changes to when snapshots are activated and deactivated. In previous versions
   of EVMS, creating a snapshot object did not activate that snapshot. The
   snapshot would only be activated when an EVMS volume was added on top of the
   snapshot object. When the EVMS volume was removed, the snapshot would be
   deactivated, even if the snapshot object wasn't deleted.

   Under the new scheme, snapshot objects will always be activated once they are
   created, regardless of whether there are EVMS volumes on top of the snapshot
   objects. In order to keep a snapshot object from being activated, users
   should add an appropriate entry to the activate.exclude entry in their EVMS
   config file.
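
   For example, to keep a hypothetical snapshot object named "snap1" from
   being activated (section/entry syntax modeled on the rest of the config
   file):

```
activate {
        include = [ * ]
        exclude = [ snap1 ]
}
```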

   Any time that a snapshot object is inactive or deactivated while its origin
   volume remains active, that snapshot will be forcibly reset. The next time
   that snapshot is activated, it will be a new, fresh snapshot of its origin
   volume. Not doing this would create an inconsistent snapshot, since the data
   flowing through the origin volume would not be subject to the monitoring that
   takes place when the snapshot is active.

 - Snapshots of Software-RAID volumes.

   Snapshots cannot be taken of compatibility or EVMS volumes that are made
   directly from MD RAID-1 and RAID-5 regions or full disks. In order to take
   a snapshot of a volume, the top object in that volume must be a Device-
   Mapper-managed device. This is necessary because that object's mapping must
   be modified to include hooks for copy-on-write to the snapshot device. Since
   RAID objects are handled by the MD kernel driver, and full disks are managed
   by the IDE or SCSI drivers, their "mappings" cannot change.

   For now, the snapshot plugin will simply not give the option of taking
   snapshots of these types of volumes. Future releases of EVMS will try to
   get around this restriction.

 - For further information about EVMS snapshots, please see the Snapshot section
   of the User-Guide at http://evms.sourceforge.net/user_guide/#evmscreatesnap.

- Expanding and Shrinking Containers

   In previous versions of EVMS, the only method for resizing a container was
   to add or remove entire objects from the container. As of EVMS 2.4.0, LVM1
   and LVM2 containers also allow expanding and shrinking objects that are
   already consumed by the container.

   If a container's consumed-object is expandable, then the LVM plugins will
   allow that object to expand, and then add the appropriate number of
   physical-extents to fill in that new space. If a container's consumed-
   object is shrinkable, and that object has physical-extents at the end of
   the object which aren't allocated to LVM regions, then the LVM plugin will
   allow that object to shrink by the number of unallocated PEs at the end
   of the object.

   This new feature is especially useful in conjunction with the new RAID-0 and
   RAID-5 resize capabilities. If an LVM container is created from a RAID-0 or
   RAID-5 region, that RAID region can be expanded by adding a new disk, which
   in turn will increase the amount of freespace available in the LVM
   container. That new freespace can then be used to expand existing LVM
   regions or create new LVM regions.

