[OpenAFS] AFS Cache on Parallel File system
Matt W. Benjamin
Thu, 7 Jul 2011 13:10:22 -0400 (EDT)
If you're primarily interested in direct access, you might want to look at
rxosd's vicep-access, which makes AFS volumes directly available to the
client over a cluster file system--Lustre or GPFS, but perhaps you could make
glusterfs go easily, I'm not sure. This actually has demonstrated good
performance properties in some scenarios (don't know about wide area).
Objections have been raised about security and potential consistency problems
for vicep-access, too--but it works, for some definition of "works" that
includes serious load testing, and it has a user community. There have been
presentations on vicep-access at OpenAFS conferences over the last few years.
----- "Spenser Gilliland" <firstname.lastname@example.org> wrote:
> From my understanding, in the CEPH/Gluster projects a gateway would be
> a way to access the parallel file system without using the native
> client. This is actually not what I want. My approach is instead to
> layer AFS on top of a PFS such that the cache is stored local to the
> whole cluster.
> The idea is closest to the second extension in the list but differs
> because there is no need for the cache managers to communicate (except
> through shared files), as the data is already present on all of the
> nodes.
> On Thu, Jul 7, 2011 at 7:53 AM, Jeffrey Altman
> <email@example.com> wrote:
> > Spenser:
> > The AFS cache cannot be used the way you are thinking. What you are
> > looking for is a ceph/gluster to AFS gateway, which does not exist.
> > The AFS cache is part of the security boundary that is enforced by the
> > Cache Manager in the kernel. As such, it is stored either in
> > memory or on local disk accessible only to the one kernel. It is not
> > designed for shared access. Pointing multiple AFS cache managers at the
> > same cache will most likely result in data corruption.
> > There are two extensions to AFS that are being developed that will
> > improve cluster access to data stores from far-away locations:
> >  1. read/write replication, which permits a single copy of the data
> > generated at the slow site to be replicated to file servers near the
> > cluster.
> >  2. peer-to-peer cache sharing, which permits an AFS cache manager to
> > securely access data from another cache manager on the same subnet and
> > avoid retransmitting it across a slow link.
> > The first option is preferred when it is possible to deploy file servers
> > in the cluster data center, because it doesn't involve adding workload to
> > client nodes and provides for the possibility of parallel reads.
> > Jeffrey Altman
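
[A concrete way to read the warning above -- a sketch, not a recommendation:
even if cache storage were placed on a shared mount, every cache manager
would still need its own private cache directory and its own cacheinfo.
The cacheinfo format (`mountpoint:cachedir:size-in-1K-blocks`) is standard
OpenAFS; the /tmp/pfs-demo paths below are stand-ins for a real PFS mount.]

```shell
# Generate a per-node cacheinfo so each client's cache directory is unique.
# /tmp/pfs-demo stands in for a shared parallel file system mount; the key
# point is that CACHEDIR differs per node and is never shared between
# cache managers.
NODE=$(hostname -s)
CACHEDIR="/tmp/pfs-demo/afs-cache/${NODE}"   # distinct per node -- NOT shared
mkdir -p "${CACHEDIR}"
# cacheinfo format: <AFS mountpoint>:<cache directory>:<size in 1K blocks>
printf '/afs:%s:100000\n' "${CACHEDIR}" > "/tmp/pfs-demo/cacheinfo.${NODE}"
cat "/tmp/pfs-demo/cacheinfo.${NODE}"
```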
> > On 7/7/2011 4:01 AM, Spenser Gilliland wrote:
> >> Hello,
> >> Can the AFS cache be placed on a parallel file system (IE: ceph or
> >> gluster)?
> >> If the cache can be placed on a parallel file system:
> >> When data is read into or written to the cache, will all of the
> >> nodes in the cluster have access to this cached data for both reading
> >> and writing? And will every write block until it is written to the
> >> AFS cell (IE: is it write-back or write-through)?
> >> FYI: I'm going to give this a go here in a couple weeks and wanted
> >> know if anyone has tried it.
> >> The idea is to have an AFS Cell at home (very slow, especially ...)
> >> and a cluster at School which accesses this AFS Cell but only
> >> downloads a file once for all of the servers in the cluster,
> >> saving time and bandwidth. Additionally, because the file is now on
> >> the parallel file system, all nodes can access the data.
> >> When the program is finished, the results will be available in the
> >> same directory as the program.
> >> I'm thinking this could be immensely valuable for grid computing,
> >> if it works.
> >> Let me know if there is anything I should be looking out for along
> >> the way.
> >> Thanks,
> >> Spenser
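
[For the "download once for all servers" workflow described above, an
alternative to sharing the cache itself is to stage each file out of /afs
into the PFS exactly once. A minimal sketch under stated assumptions: mkdir
is used as an atomic once-only lock, and the /tmp paths below are
placeholders standing in for the real /afs and PFS mount points.]

```shell
# Stage a file from AFS into the parallel file system exactly once, then
# let every node read the staged copy instead of the AFS cache.
AFS_SRC=/tmp/demo-afs/input.dat   # stand-in for a file under /afs
PFS_DST=/tmp/demo-pfs/input.dat   # stand-in for the shared PFS copy
mkdir -p /tmp/demo-afs /tmp/demo-pfs
echo "payload" > "$AFS_SRC"       # simulate the file already in the cell
# mkdir is atomic on POSIX file systems, so only one node wins the lock;
# copy to a temp name and rename so readers never see a partial file.
if [ ! -e "$PFS_DST" ] && mkdir "${PFS_DST}.lock" 2>/dev/null; then
    cp "$AFS_SRC" "${PFS_DST}.tmp" && mv "${PFS_DST}.tmp" "$PFS_DST"
    rmdir "${PFS_DST}.lock"
else
    # losing nodes simply wait until the staged copy appears
    while [ ! -e "$PFS_DST" ]; do sleep 1; done
fi
cat "$PFS_DST"
```

[Whether mkdir is truly atomic on a given PFS is itself an assumption worth
verifying; some cluster file systems relax POSIX semantics.]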
> Spenser Gilliland
> Computer Engineer
> Illinois Institute of Technology
> OpenAFS-info mailing list
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI 48104