Proposed home directory architecture, configuration and management

Preamble

The way it was

In the Old World the majority of the CSE home directories were stored on six main NFS servers, each server having local RAID disk storage for the home directories.

These servers had names such as ravel, kamen, etc. Home directories were located in ext3/ext4/xfs file systems created on local logical block devices and mounted under /export (e.g., /export/kamen/1, /export/ravel/2, etc.), from where they were exported via NFS to lab computers, VLAB servers and login servers. For ease of management and speed of fsck on reboot, each physical server had its available disk storage divided into multiple logical block devices, each of less than 1 TB.

User home directory paths would be hard-coded to be on a particular file system on a particular server, e.g., /export/kamen/1/plinich.

This arrangement meant that a problem on a particular server could potentially affect 1/6th of all accounts until the server itself was fixed, or until the affected home directories were restored from backup to another server and the users' home directory locations were updated. It also meant that all home directories were stored in a single geographic location, on one of six physical servers.

The way it might be

The proposed architecture decouples home directories from specific physical or virtual NFS servers and instead places them in virtual hard disk block devices which are stored either in the Amazon AWS S3 cloud or in CSE's distributed Ceph storage cluster.

[Image: Home directory proposed.png]

The key ideas are:

  • Home directories are stored on block devices defined either in Amazon's AWS S3 storage or in CSE's own Ceph storage cluster.
  • NFS servers may be physical or virtual and DO NOT have local storage for home directories. They have their own IP addresses, which have NO ASSOCIATION with any of the home directory block devices; i.e., the IP address of an NFS server is used solely to manage the NFS server itself and plays no role in making the home directory NFS exports available.
  • Home directory storage is attached to the particular NFS server which exports it, via iSCSI in the case of AWS, or as a Ceph RADOS Block Device (RBD) in the case of Ceph. Both attachment methods run over TCP and allow the home directory storage to be mounted under Linux as a normal ext3/ext4/xfs file system.
  • Each home directory storage block device is associated with a same-named DNS entry with its own IP address. E.g., "home02" would be associated with a DNS entry of "home02.cse.unsw.edu.au" or "home02.cseunsw.site" with a static IP address of, say, 129.94.242.ABC. Each IP address is associated with exactly one block storage device (see the sketch after this list).
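To illustrate the naming convention, the short Python sketch below looks up the DNS entry for each home directory store and prints the IP address it resolves to. The store names and domain in the sketch are placeholders following the convention above, not a definitive list; it simply checks that each store name resolves to its own dedicated address.

  #!/usr/bin/env python3
  # Illustrative sketch only: the store names and domain below are
  # placeholders following the homeNN naming convention described above.
  import socket

  HOME_STORES = ["home01", "home02", "home03"]   # hypothetical store names
  DOMAIN = "cse.unsw.edu.au"                     # or "cseunsw.site"

  for store in HOME_STORES:
      fqdn = f"{store}.{DOMAIN}"
      try:
          # Each store should resolve to its own static IP address,
          # independent of the address of any NFS server.
          print(f"{store}: {fqdn} -> {socket.gethostbyname(fqdn)}")
      except socket.gaierror:
          print(f"{store}: no DNS entry found for {fqdn}")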

Importantly, the NFS servers have no implicit or explicit association with any particular home directory store. Instead, an NFS server can export any number of arbitrary home directory file systems as follows:

  1. Select the particular home directory block device in either AWS or Ceph, e.g., "homeXX".
    • If homeXX is in AWS:
      1. Associate the "homeXX" block device in AWS with one of the on-site AWS Storage Gateways using the AWS web console, and make it available to local hosts via iSCSI. See ???
      2. Attach the homeXX block device via iSCSI to the NFS server that is to export that block device's file system.
      3. Mount the attached device on the NFS server as /export/homeXX/1[1] (a sketch of steps 2 and 3 follows below).

[1] This will always be mounted under "…/1" to maintain compatibility with Old World heuristics.
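
As a rough illustration of steps 2 and 3, the Python sketch below drives the standard open-iscsi and mount commands. The gateway portal address, target IQN, device path and the homeXX name are all placeholders that would come from the AWS console and from inspecting the host after the iSCSI login; treat this as an outline under those assumptions, not a tested procedure.

  #!/usr/bin/env python3
  # Sketch of attaching an AWS Storage Gateway volume over iSCSI and mounting
  # it on an NFS server as /export/homeXX/1. All values below are placeholders.
  import subprocess

  GATEWAY_PORTAL = "192.0.2.10:3260"            # placeholder on-site Storage Gateway address
  TARGET_IQN = "iqn.1997-05.com.amazon:homeXX"  # placeholder IQN for the homeXX volume
  DEVICE = "/dev/sdX"                           # placeholder; check lsblk/dmesg after login
  MOUNT_POINT = "/export/homeXX/1"              # always mounted under .../1 (see footnote)

  def run(cmd):
      """Echo and run a command, stopping if it fails."""
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Step 2: discover the gateway's iSCSI targets and log in to the homeXX target.
  run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", GATEWAY_PORTAL])
  run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", GATEWAY_PORTAL, "--login"])

  # Step 3: mount the attached block device's file system under /export/homeXX/1.
  run(["mkdir", "-p", MOUNT_POINT])
  run(["mount", DEVICE, MOUNT_POINT])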