[OpenAFS] AFS and MIT Kerberos 5 (RedHat 8.0, RPMS from openafs.org)
Thu, 13 Mar 2003 13:56:58 +0100
Mikkel Kruse Johnsen wrote:
> Hi All
> I have been trying to set up AFS using Kerberos 5 as auth server. I have
> AFS running but have a few issues:
> I have created a user in the kerberos database called afs@CBS.DK
> and I have installed AFS with that key using
> asetkey (with the right kvno number). I have created it with
> "des-cbc-crc:afs3" but what is "afs3" good for ? Should I use
> "des-cbc-crc:v4" instead ?
I used des-cbc-crc:v4 and it works; I don't know whether :afs3 should
work or not.
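For reference, a typical MIT kadmin/asetkey sequence might look like the
following. This is only a sketch based on the setup described above: the
principal afs@CBS.DK comes from the poster's mail, the keytab path is
illustrative, and <kvno> stands for whatever key version number klist
reports.

```shell
# Create the AFS service key in the KDC with the v4 salt and a single
# DES key (what the AFS servers expect for this vintage of OpenAFS).
kadmin -q "addprinc -randkey -e des-cbc-crc:v4 afs@CBS.DK"
# Extract the key into a temporary keytab (ktadd bumps the kvno).
kadmin -q "ktadd -k /tmp/afs.keytab -e des-cbc-crc:v4 afs@CBS.DK"
# Read the kvno out of the keytab, then load the key into the AFS
# KeyFile with the SAME kvno.
klist -k /tmp/afs.keytab
asetkey add <kvno> /tmp/afs.keytab afs@CBS.DK
```

If the kvno passed to asetkey does not match the one in the KDC, clients
get decrypt errors even though both sides "have" the key.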
> Also when trying to get the AFS ticket from a client I do: "kinit" to
> get the users krbtgt then I do "aklog -d" and I get:
> Authenticating to cell cbs.dk (server afs-1.cbs.dk).
> We've deduced that we need to authenticate to realm CBS.DK.
> Getting tickets: afs/cbs.dk@CBS.DK
> Kerberos error code returned by get_cred: -1765328228
> aklog: Couldn't get cbs.dk AFS tickets:
> aklog: Cannot contact any KDC for requested realm while getting AFS tickets
> Should I have created the afs ticket in kerberos as "afs/cbs.dk@CBS.DK"
> instead?
Yes, the Linux Cache Manager (via aklog) needs afs/cellname@REALM. If you
decide to use Windows clients as well, you will also need afs@REALM
with another key version number.
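A sketch of creating the cell-style principal that aklog asks for
(afs/<cellname>@REALM); the cell name cbs.dk and the keytab path follow
the mail above, and <kvno> is again a placeholder for the version klist
shows:

```shell
# Create afs/cbs.dk@CBS.DK, the principal aklog requests for cell cbs.dk.
kadmin -q "addprinc -randkey -e des-cbc-crc:v4 afs/cbs.dk@CBS.DK"
kadmin -q "ktadd -k /tmp/afs.keytab -e des-cbc-crc:v4 afs/cbs.dk@CBS.DK"
# Load it into the server KeyFile on every AFS server machine.
klist -k /tmp/afs.keytab
asetkey add <kvno> /tmp/afs.keytab afs/cbs.dk@CBS.DK
```

After that, "kinit user; aklog -d" should find afs/cbs.dk@CBS.DK instead
of failing with "Cannot contact any KDC for requested realm".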
> Also I have trouble creating the rights on the /afs filesystem (maybe
> this is all due to me not getting the AFS ticket). When doing:
> fs checkvolumes
> I get "fs: Input/Output error",
> and when trying to set user rights with
> fs setacl /afs system:anyuser rl
> I get "fs: Invalid argument; it is possible that /afs is not in AFS."
> Is this because I don't have the AFS token (from aklog)? Or should I
> add /afs to AFS somehow? I can see that the openafs-client package
> creates /afs and chmods it 755. Is that all, or should it be added
> somehow?
You should check whether your AFS Cache Manager (the afs client) is
actually running. Watch the syslog for error messages.
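A few quick checks on the client side; the init-script name and log path
are the RedHat 8.0 defaults assumed from the subject line, so adjust to
your installation:

```shell
# Is the client started and is the cache manager daemon running?
/etc/init.d/afs status
ps ax | grep '[a]fsd'
# Watch for afsd / CellServDB complaints at startup.
tail -50 /var/log/messages
# Once the client is up and you hold a token (kinit && aklog):
tokens                               # should list a token for cell cbs.dk
fs setacl /afs/cbs.dk system:anyuser rl
```

"Invalid argument; it is possible that /afs is not in AFS" usually means
the kernel module / afsd never mounted AFS, so /afs is just an empty
local directory with mode 755.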
> Another question about my setup if any comments.
> I am trying to set up a cluster for my web servers. I don't want to buy a
> lot of expensive hardware, so I'm trying to use low- or mid-size computers
> for the setup. I was thinking of using two load balancers (with failover)
> to balance load between my front-end servers. The front-end servers are
> just some pizza-box sized computers. These computers have to get the HTML
> files from a fileserver, but I don't want the fileserver to be a single
> point of failure, so some kind of distributed file system must be used (or
> a SAN backend, but they are so expensive). My question is:
> Is AFS the right distributed file system for the job ?
You can make read-only replicas of volumes, so the failure of one file
server will not affect the file service. But take care to have two or more
database servers as well, because otherwise they will be the single
point of failure.
I am not sure whether it does load balancing the right way, but you
can give different clients (front-end servers) different preferred file
servers if you want.
But this whole thing only works if your HTML files are not generated
dynamically or changed by the web server. When you have read-only
replicas, these are read-only. The read-write "master" volume is still
on only one server.
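The replication scheme above can be sketched with vos; the server names
follow the afs-1.cbs.dk naming from the aklog output, and the volume name
web.html and partition "a" are illustrative:

```shell
# Define read-only replica sites on two file servers, then push the
# current read-write contents out to them.
vos addsite afs-1.cbs.dk a web.html
vos addsite afs-2.cbs.dk a web.html
vos release web.html
# Clients read from any reachable RO copy; writes still go to the
# single RW volume, and changes become visible only on the next release.
```

For a web-content workload this means publishing is an explicit step:
update the RW volume, then "vos release" to fan the new files out.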
> Many on the net talk about CODA (but from what I understand it is a
> branch of the AFS filesystem). Should I go for a SAN or CODA instead?
> Hope to get some input, thanks.
> Mikkel Kruse Johnsen <firstname.lastname@example.org>