QDF - The Quota Data File

What it is

The QDF defines the pool of disks used in the global quota system for Uniform Access computers and keeps track of which users have files on each of the disks. The global quota system works from three files:
qdf.inf
Defines the pools and which clusters can use them for home directories. The qdf.inf file is a text file with extensive comments at the beginning. Each disk is assigned a unique column number from 0 to 255; this column number is arbitrary and is used to index into the qdf.dat file.
qdf.dat
Lists which users have files on each filesystem. This is a matrix with one row for each user and one column for each disk. Each element of the matrix consists of 8 bits, only one of which is used, indicating whether the user has files on that disk. The qdf data is rebuilt periodically by the /usr/local/etc/bldqdf script on melville and is also updated on the fly by new, newdir and newweb.
qlf.dat
Lists what limits users have on each galaxy. This is another matrix, with one row for each user and one column for each galaxy. An element in the matrix contains that user's disk quota as a floating-point number of megabytes. Galaxy zero is a global value; the definitions of the remaining galaxy ordinals are in each galaxy's idf.dat file and are referenced in the qdf.dat file.
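The two matrices above can be sketched in code. Everything about the on-disk layout here is an assumption for illustration (one flag byte per qdf.dat element with rows indexed by UID, and one 4-byte float per qlf.dat element); real access should go through the kelper RPC routines in libkelp.a, not direct file reads.

```python
import struct

NUM_COLUMNS = 256    # qdf.inf assigns each disk a column number 0..255
NUM_GALAXIES = 4     # assumed count; galaxy 0 is the global value

def user_has_files(qdf, uid, column):
    """True if the flag byte for `column` is set in `uid`'s qdf.dat row."""
    return bool(qdf[uid * NUM_COLUMNS + column] & 0x01)  # low bit assumed

def user_quota_mb(qlf, uid, galaxy=0):
    """That user's quota in megabytes for `galaxy` (0 = global)."""
    offset = (uid * NUM_GALAXIES + galaxy) * 4
    return struct.unpack_from("<f", qlf, offset)[0]

# Tiny in-memory stand-ins for the two files, covering two users.
qdf = bytearray(2 * NUM_COLUMNS)
qdf[1 * NUM_COLUMNS + 42] = 0x01         # user 1 has files on disk column 42
qlf = bytearray(2 * NUM_GALAXIES * 4)
struct.pack_into("<f", qlf, (1 * NUM_GALAXIES + 0) * 4, 500.0)  # 500 MB global
```

With that stand-in data, user_has_files(qdf, 1, 42) is true and user_quota_mb(qlf, 1) returns 500.0.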

The "idfflags QDFHOST" command displays the current QDFHOST and should be used in scripts rather than a hard-coded host name. Routines to access these files through RPC calls to the helper dæmon "kelper" on the QDFHOST exist in libkelp.a.

The qdf.inf file is located on the NFS volume that is referenced by the /sy99/dat symlink on the "QDFHOST" server and can be updated on any server with write access to this NFS volume. The qdf.dat and qlf.dat files exist natively on the QDFHOST. They should only be accessed by the kelper dæmon, but there are symlinks in the /sy99/dat directory that will take you to them, should you need to reference them directly.

Notifying the QDF helper, "kelper"

After making any changes to the qdf.inf file, the helper on the QDFHOST must be told to reread it. This is done by sending it a "clear" ding:
   ovid03> qdf_host=`idfflags QDFHOST`
   ovid03> ding ${qdf_host} kelper clear
   Sent message 'clear' to 128.208.181.43 (6227).
   Recv message from   128.208.181.43 (6227) len=22.
   Got: P0Accumulators cleared
Or by stopping and restarting the kelper process there:
   #seuss01> service kelper stop; sleep 2; service kelper start
   KELPER daemon stopped
   KELPER daemon started
You can send a "status" ding to the kelper to verify it's still functioning:
   ovid03> ding ${qdf_host} kelper s
   Sent message 's' to 128.208.181.43 (6227).
   Recv message from   128.208.181.43 (6227) len=117.
   Got: P2Ready
     
        Accumulators since: Jan 21 09:38:22 2020
     
        Quota Get:          4
        QDF Info:           4
        QDF Get Row:        4
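The replies in the transcripts above all begin with "P" followed by a single digit before the message text. Assuming that is the wire convention (a status digit prefixed to the message), a minimal parser for a reply might look like this; the convention itself is inferred from the examples, not from a protocol spec:

```python
def parse_kelper_reply(raw):
    """Split a kelper ding reply into (status, message).

    Assumes the pattern seen in the transcripts: 'P', one status
    digit, then the message text (e.g. "P0Accumulators cleared").
    """
    if len(raw) < 2 or raw[0] != "P" or not raw[1].isdigit():
        raise ValueError("unrecognized kelper reply: %r" % raw)
    return int(raw[1]), raw[2:]
```

For example, parse_kelper_reply("P0Accumulators cleared") yields status 0 and the text "Accumulators cleared".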

Adding a filesystem

When a filesystem is added to the pool of disks that a particular cluster can use, the following steps should be taken:
  1. Update the /tulsa/fstab.cfg file and run bldfs on each of the client systems to ensure that the filesystem is mounted properly.
  2. Update the /sy99/dat/qdf.inf file on the QDFHOST.
    1. Select one of the unused columns listed in the last line of the comments, and update that comment so the column doesn't get reused (also verify that the last person remembered to update it and your column is not already in use).
    2. Add your entry at an appropriate place (presumably with the rest of the like filesystems) with your chosen column number.
    3. Notify the helper on the QDFHOST.
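Picking an unused column in step 2.1 amounts to finding a number in 0..255 that no existing entry uses. A small helper sketch (hypothetical; parsing the used-column numbers out of qdf.inf is left out, since the file's exact format isn't reproduced here):

```python
def first_unused_column(used, max_col=255):
    """Return the lowest column number in 0..max_col not in `used`.

    `used` is the set of column numbers already assigned in qdf.inf
    (however you collected them); raises if every column is taken.
    """
    for col in range(max_col + 1):
        if col not in used:
            return col
    raise RuntimeError("all %d columns are in use" % (max_col + 1))
```

Remember that whatever column you pick still has to be recorded in the unused-columns comment so the next person doesn't grab it too.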

Removing a filesystem

When a filesystem is deleted from the pool of disks the following steps should be taken:
  1. Make sure the "N" (new) flag in the qdf.inf entry for the disk to be removed is set to zero so no new directories get assigned to it. Edit the /sy99/dat/qdf.inf file and notify the helper if necessary.
  2. Make sure there are no users left on the disk. There are instructions elsewhere on how to move users off a disk.
  3. Update the /sy99/dat/qdf.inf file on the QDFHOST.
    1. Comment out or delete the appropriate line in the file and add the column back to the list of unused columns. To comment a line out, simply change the column number to #XXX; you can pick a new column (or reuse the previously used column) if you ever put the disk back in service later.
    2. Notify the helper on the QDFHOST.
  4. Get the filesystem unmounted on all the clients. It's important to get this done before deleting the entry in the fstab.cfg file. You may have to use lsof and kill processes if the filesystem shows up as busy on one or more clients when you try to dismount it.
  5. Change the /tulsa/fstab.cfg file from "nfs" to "del" for the filesystem and run bldfs on all the clients to clean up the mount points. If bldfs complains about busy filesystems then you didn't do the previous step properly. Now you'll have to recreate the symbolic link manually (with "ln -s /nfs/host/xxx /"), get that filesystem dismounted and rerun bldfs.
  6. Delete the entry from /etc/fstab.cfg and then run bldfs on the server to update the exports file there. Unexport the filesystem and then manually remove the entry from the top of the /etc/exports file on the server.

Ken Lowe
Email -- ken@u.washington.edu
Web -- http://staff.washington.edu/krl/
