[OpenAFS] Re: Storing system binaries in the /afs tree

Kevin openafs@gnosys.biz
Thu, 19 May 2005 09:26:07 -0400


Wow, Owen, this sounds very interesting and appealing to me.  It seems
that much of what you have running is built on your own code.  Is that
right?  If so, would you mind sharing some of the implementation
details and/or the code?  I'd very much like to try the same sort of
thing here.

On Thu, 2005-05-19 at 08:46 +0100, Dr A V Le Blanc wrote:
> On Tue 17 May 2005 at 14:37:53 -0400, Kevin <openafs@gnosys.biz> wrote:
> > Any recommendations on storing portions of a Debian system (binaries,

> I've been running a system here for a year which has nearly everything
> in afs space.  I use this to run Linux on public cluster machines and
> on teaching machines, which I don't have time to maintain separately.
> The system in AFS is a slightly modified Debian sarge.  Each machine
> boots from a kernel and ramdisk, which can even be easily loaded over
> the network, though I normally have local copies.  The Linux on the
> ramdisk sets up the hard disk, transfers to it, starts networking and
> AFS.  I then run package, which sets up a directory tree with some
> files and a lot of symbolic links, including links for /usr and
> most files in /lib, /bin, and /var.  Package also cleans up the local
> file system.  After this I exec /sbin/init, and the system comes up
> as normal.  The interesting thing is that virtually everything is
> cached in /var/cache/openafs, and this gives surprisingly fast
> performance, even for KDE applications, except of course on older,
> slower machines.  (The system even works on a Pentium I 75 MHz machine,
> but I don't advise using KDE on that!)
> 

I have an old Pentium I 75MHz too!  Until the disk failed in it, it was
my print server and did fine in that role.  If only for the sake of
nostalgia, I'd kinda like to put it back into service.  Maybe your setup
would help me do so.
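
Just so I'm picturing the boot sequence correctly: I imagine the tail
end of your ramdisk's init doing something roughly like the following
(all of the names and paths here are my own guesses, not your actual
scripts):

    # ...hard disk already partitioned and mkfs'ed above...
    mount /dev/hda1 /newroot             # the freshly prepared local root
    cd /newroot
    pivot_root . initrd                  # "transfer" from the ramdisk to disk
    ifconfig eth0 "$IP" netmask "$MASK" up   # values from kernel args, say
    route add default gw "$GW"
    /etc/init.d/openafs-client start     # afsd up, so /afs is reachable
    package                              # build the /usr, /bin, ... symlink tree
    exec chroot . /sbin/init             # hand off to the normal init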

> User home directories are mounted using ncpfs; the user IDs are
> created dynamically and password file entries added at the first
> login, when the passwords are verified using ldap, then the
> novell home directory is mounted.
> 

I'll have to read more about ncpfs.  I'm unfamiliar with it.
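
From the rest of your description, though, I'd guess the first-login
hook does something in the spirit of this sketch (purely my guess at
the shape of it; none of the names or paths come from your setup):

    #!/bin/sh
    # hypothetical first-login hook: $1 = username, $2 = password
    user="$1"; pass="$2"

    # verify the password by binding to LDAP as the user
    ldapsearch -x -D "uid=$user,ou=people,dc=example,dc=org" -w "$pass" \
               -b "dc=example,dc=org" "(uid=$user)" >/dev/null || exit 1

    # add a local passwd entry the first time this user logs in
    getent passwd "$user" >/dev/null || \
        useradd -m -d "/home/$user" -s /bin/bash "$user"

    # then mount the Novell home directory (ncpmount flags quoted from
    # memory; check ncpmount(8) before trusting them)
    ncpmount -S novellserver -U "$user" "/home/$user"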

> The size of the full system in AFS is about 3.2 GB at present; the
> initrd and kernel are together less than 4 MB, and I use about
> 500 MB of cache, though of course I can increase that quite easily.
> Currently we use kernel 2.6.11.9 with OpenAFS 1.3.82 and a patch.
> 
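
The 500 MB of cache presumably just means a cacheinfo line along these
lines (mount point, cache directory, cache size in 1K blocks)?

    /afs:/var/cache/openafs:500000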

So, it sounds like you basically keep the image of a single computer's
disk stored (as you say below, outside of afs) on one machine's local
HDD, presumably separate from the filesystem tree that machine uses for
normal operation?  Are you mounting this image with losetup when you
want to install some software into it or make other modifications?  In
some ways, it sounds like User Mode Linux.  And then, if I understand
correctly, you're essentially exporting this image via afs and making
it available to whatever machines you've configured to use it, so that
each of those machines becomes a clone (software-wise, anyway) of the
image.  And you said you've preserved the ability to have individual,
per-machine differences if you so desire.
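
In other words, something along these lines when you want to change
the image (just my mental model; the paths are invented)?

    losetup /dev/loop0 /srv/sarge-image.img      # attach the image file
    mount /dev/loop0 /mnt/sarge                  # mount it locally
    chroot /mnt/sarge apt-get install somepkg    # install/update inside it
    umount /mnt/sarge && losetup -d /dev/loop0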

That sounds extremely interesting and like a real time-saver.  It seems
like it could make the task of administering the machines that use this
image much, MUCH easier!  I'd love to learn more.

> On Tue 17 May 2005 at 15:29:19 -0400, Jim Rees <rees@umich.edu> wrote:
> > The main problem I've had with putting system directories like /usr/bin in
> > afs is that when you go to install something it will fail because
> > chmod/chown will fail.
> 
> I've got round this by simply not installing anything on the client
> machines.  I think of the system as a single Linux machine, and I can
> install one or another package in it in a more-or-less usual fashion.
> 

What you've written here is what gave me the impression I described
above.  Have I understood correctly?

> > One way around this is to install as root/admin but that scares me.  I have
> > modified /usr/bin/install so that if chmod or chown fails, it pretends to
> > succeed, and that helps.
> 
> I actually maintain the image in an area completely outside of afs, then
> use rsync as admin to update the read-write copy.  I have a special
> procedure which allows me to test the read-write copy before doing
> my vos releases, which can then in principle be scheduled to run
> automatically, though in fact I do it by hand early in the morning.
> 

Would you elaborate more on the use of rsync here?  If the image is
outside afs, are you using rsync to put the files in afs-space before
exporting them over the network to the "diskless" machines?  I'm not
sure I follow here.
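
I mean, is it something along these lines (paths and volume name
invented by me), with the clients then using the released read-only
copies?

    # push the master tree into the read-write volume...
    rsync -a --delete /srv/sarge-master/ /afs/.example.org/system/sarge/
    # ...test via the read-write path, then publish the read-only clones
    vos release system.sarge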

> > A smaller problem is no cross-dir links, but that is usually only a problem
> > for man pages.
> 
> I had problems at the first install with terminfo and with timezone files.
> 

How did you resolve those?

> > After things are installed they usually just work.
> 
> Yes, but I didn't actually copy all of /var into the system.
> Here are the actual sizes:
> 
>      4       /var/local
>      4       /var/mail
>      4       /var/opt
>      4       /var/www
>      8       /var/lock
>      12      /var/games
>      12      /var/state
>      60      /var/spool
>      84      /var/run
>      220     /var/lib
>      1468    /var/tmp
>      1632    /var/backups
>      1848    /var/log
>      391340  /var/cache
> 

Is this the output of du -k or du -m, i.e., kilobytes or megabytes?
And are these directory trees on the local HDD of each machine then?
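
Something like this, for instance (just my guess at the command)?

    du -sk /var/* | sort -n      # sizes in 1K blocks, smallest first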

> Most of the files that I don't want to be changeable on the systems,
> including all of /var/lib/dpkg, are simply in afs with a symlink on
> the local disk.  Of course, this sometimes means that I've overlooked
> the need to write something or other, and it fails, but I can fix
> it by just changing the package files.
> 

So, these would be readable and lookupable by system:anyuser, but
writable only with root/admin credentials?  Something like that?
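
In other words, ACLs roughly like the following on those directories
(the paths are invented by me)?

    fs setacl /afs/example.org/system/sarge/var/lib/dpkg \
        system:anyuser rl
    fs setacl /afs/example.org/system/sarge/var/lib/dpkg \
        system:administrators rlidwka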

> I wrote a little utility that takes the hostname and IP address
> of the machine and adds some %defines or %undefs at the beginning
> of the package files, before macro processing with mpp.  This
> allows me to have some things on certain machines only, to
> configure some machines differently from others, or even to set up
> a system to perform some maintenance task the next time it boots.
> And on each machine I install a command which gets run before
> init starts; this can be customised, for example to change the
> default window manager when a teacher wants something different
> for his course.
> 

Care to share this utility?  Your arrangement sounds very desirable to
me: it removes or minimizes per-machine sysadmin tasks and would
instead let me maintain just one image that some (or all, depending on
the installation) machines use.
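
I'd be curious whether it looks anything like this sketch (entirely my
own invention, and I have no idea what mpp's real invocation looks
like, so treat every name here as a placeholder):

    #!/bin/sh
    # emit host-specific %defines, then run the package file through mpp
    host=$(hostname)
    ip=$(hostname -i)
    {
        echo "%define HOSTNAME $host"
        echo "%define IPADDR   $ip"
        case "$host" in
            teach*) echo "%define TEACHING_MACHINE" ;;
            *)      echo "%undef TEACHING_MACHINE" ;;
        esac
        cat /afs/example.org/system/config/packages.mpp
    } | mpp > /etc/packages.conf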

Thanks very much for your reply, Owen.  I'd love to learn more about your
particular arrangement, but even your discussing it here may be enough
for me to do something similar on my own.  Again, many thanks.

-Kevin