Proposed home directory architecture, configuration and management

Preamble

The way it was

In the Old World the majority of the CSE home directories were stored on six main NFS servers, each server having local RAID disk storage for the home directories.

These servers had names such as ravel, kamen, etc. Home directories were located in ext3/ext4/xfs file systems created on local logical block devices and mounted under /export (e.g., /export/kamen/1, /export/ravel/2, etc.), whence they were exported via NFS to lab computers, VLAB servers and login servers. For ease of management and speed of fsck on reboot, each physical server had its available disk storage divided into multiple logical block devices, each of less than 1TB.

User home directory paths would be hard-coded to be on a particular file system on a particular server, e.g., /export/kamen/1/plinich.

This arrangement meant that a problem on a particular server could potentially affect 1/6th of all accounts until the server itself was fixed, or until the affected home directories were restored from backup to another server and the users' home directory locations were updated. It also meant that all home directories were stored in a single geographic location, on one of six physical servers.

The way it might be

The proposed architecture disassociates home directories from specific physical/virtual NFS servers and places them on virtual block devices which are themselves stored either in the Amazon AWS S3 cloud or in CSE's distributed Ceph storage cluster.

[Diagram: Home directory proposed.png (proposed home directory architecture)]

The key ideas are:

  • Home directories are stored on block devices that exist either in Amazon's AWS S3 storage or in CSE's own Ceph storage cluster.
  • NFS servers may be physical or virtual and DO NOT have local storage for home directories. They do, however, have their own IP addresses, which have NO ASSOCIATION with any of the home directory block devices. The IP address of an NFS server is used solely to manage the NFS server itself and has no role in making the home directory NFS exports available.
  • Home directory storage is attached to the particular NFS servers which export it via either iSCSI, in the case of AWS, or as Ceph RADOS Block Devices (RBD). Both of these attachment methods use TCP/IP and allow the home directory storage to then be mounted under Linux as a normal ext3/ext4/xfs file system.
  • Each home directory storage block device is associated with a same-named DNS entry with its own IP address. E.g., "home02" would be associated with a DNS entry of "home02.cse.unsw.edu.au" or "home02.cseunsw.site" with a static IP address of, say, 129.94.242.ABC. One (and only one) home directory block storage device is associated with each such IP address (see the example below).
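
To make the last point concrete: from any client, a home directory name resolves to its own service IP, not to any particular NFS server. The lookup below is purely illustrative and reuses the placeholder address from the example above:

$ host home02.cse.unsw.edu.au
home02.cse.unsw.edu.au has address 129.94.242.ABC

Whichever NFS server currently exports home02 carries that address as an additional address on its network interface (see step 4 below). Moving the file system to another NFS server therefore only involves moving the block device attachment, the mount/export and this address; clients keep mounting the same name.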

Importantly, the NFS servers have no implicit or explicit association with any particular home directory stores. Instead, an NFS server can export any number of arbitrary home directory file systems as follows[1] (a command sketch illustrating these steps appears after the list):

  1. Select the particular home directory block device in either AWS or Ceph. E.g., "homeXX"
    • If homeXX is in AWS:
      1. Associate the "homeXX" block device in AWS with one of the on-site AWS Storage Gateways using the AWS web console, and make it available to local hosts via iSCSI. See ???
      2. Attach the homeXX block device via iSCSI to the NFS server that will export that block device's file system.
    • If homeXX is in the Ceph storage cluster:
      1. Map the home directory RBD into the NFS server.
  2. Mount the attached device on the NFS server as /export/homeXX/1[2].
  3. Add an entry to /etc/exports for the home directory file system and then run exportfs -ra to tell the NFS server daemon to make the file system available.
  4. Find the IP address associated with the home directory file system (homeXX.cse.unsw.edu.au, etc.) and add this address to the server's network interface with ip addr add 129.94.242.ABC/24 dev eth0 (check this!)
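
Putting the steps together, the sequence on the NFS server might look roughly like this. It is a sketch only: the Ceph pool name ("home"), the iSCSI gateway address and target IQN, the resulting device names, the export options and the interface/netmask are all assumptions that depend on how the Storage Gateway, the Ceph cluster and the network are actually configured.

Attach the block device. In the Ceph case:

# rbd map home/homeXX
/dev/rbd0

…or, in the AWS case, once homeXX has been presented by the on-site Storage Gateway as an iSCSI target:

# iscsiadm -m discovery -t sendtargets -p <gateway-address>
# iscsiadm -m node -T <homeXX-target-iqn> -p <gateway-address> --login

Mount the attached device and export it via NFS:

# mkdir -p /export/homeXX/1
# mount /dev/rbd0 /export/homeXX/1
# echo '/export/homeXX/1 *.cse.unsw.edu.au(rw,sync,no_subtree_check)' >> /etc/exports
# exportfs -ra

Finally, bring up the file system's service IP on this server:

# ip addr add 129.94.242.ABC/24 dev eth0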

Most steps above can be automated/scripted.

Repeat the above for each file system.
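
For the Ceph case the whole sequence is scriptable end to end. The script below is a sketch under the same assumptions as above (pool name, export options, interface); the list file /etc/home-filesystems, containing lines of the form "homeXX 129.94.242.ABC", is hypothetical.

#!/bin/bash
# Sketch: attach, mount, export and address every Ceph-backed home directory
# file system listed in /etc/home-filesystems on this NFS server.
set -e
while read -r fs ip; do
    dev=$(rbd map "home/${fs}")                  # attach the RBD; prints e.g. /dev/rbd0
    mkdir -p "/export/${fs}/1"
    mount "${dev}" "/export/${fs}/1"             # mount under /export/<name>/1
    echo "/export/${fs}/1 *.cse.unsw.edu.au(rw,sync,no_subtree_check)" >> /etc/exports
    ip addr add "${ip}/24" dev eth0              # bring up the file system's service IP
done < /etc/home-filesystems
exportfs -ra                                     # make all the new exports available

A matching tear-down script would presumably reverse the steps (delete the address, unexport, unmount, unmap) before a file system is moved to another NFS server.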

At this point, NFS clients should be able to mount the home directory file system(s) like this (or the equivalent via the automounter; a sketch of an automounter map entry follows the example):

# mount -t nfs homeXX:/export/homeXX/1 /mnt
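
If the automounter is used, the equivalent might be expressed as an autofs map along the following lines. This is a sketch only: the client-side mount point /home, the map file names, the mount options and the example user (borrowed from the Old World example above) are all illustrative.

In /etc/auto.master:

/home    /etc/auto.home

…and in /etc/auto.home, one entry per user pointing at the file system that holds that user's home directory:

plinich    -rw,hard    homeXX.cse.unsw.edu.au:/export/homeXX/1/plinich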

Notes

  1. Constrained, of course, by the limitation that only ONE NFS server at a time can attach a given storage block device and, consequently, any single home directory file system can be exported by only one NFS server at a time. This constraint has nothing to do with the architecture being discussed; it is simply because a file system can be mounted by only one host at a time without corruption occurring.
  2. This will always be mounted under "…/1" to maintain compatibility with Old World heuristics.