[OpenAFS] OpenAFS on ZFS (Was: Salvaging user volumes)

Dan Van Der Ster daniel.vanderster@cern.ch
Sat, 15 Jun 2013 07:24:21 +0000

We deployed a ZFS on Linux server (on Scientific Linux) in the past
week. Pretty simple stuff... the only non-default options are atime=off
and recordsize=64K (which may be wrong, though some posts about ZFS and
AFS suggest it).
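
For reference, the dataset setup looks roughly like this (the pool
name "tank" is illustrative, not what we actually use):

    # illustrative: one dataset per AFS vice partition, with the
    # non-default options mentioned above
    zfs create -o atime=off -o recordsize=64K \
        -o mountpoint=/vicepa tank/vicepa

The 64K recordsize (versus the 128K default) is the part that may be
wrong for this workload.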

About Ceph, we had a test server serving an RBD /vicep partition. And
it worked. We're still building up the Ceph cluster (primarily to
provide OpenStack Cinder volumes) and once it is in production we plan
to run a few virtualized AFS servers with Ceph volumes behind them.
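
The test setup was along these lines (image name and size are
illustrative, not the exact commands we ran):

    # illustrative: back a vice partition with a Ceph RBD image
    rbd create --size 1048576 vicepb   # 1 TB image, size given in MB
    rbd map vicepb                     # appears as e.g. /dev/rbd0
    mkfs -t ext4 /dev/rbd0
    mkdir /vicepb
    mount /dev/rbd0 /vicepb            # a normal /vicep to the fileserver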

All of this is in testing, and though we've not had any deal-breaking
incidents, the long-term stability is still in question.


Steven Presser <spresse1@acm.jhu.edu> wrote:

Out of pure curiosity, does anyone care to share experiences from
running OpenAFS on ZFS?

If anyone is running OpenAFS on top of Ceph, or in a cluster that also
uses Ceph, would you care to share your experience, as well as your
architecture?

Background:  I have 4 Thumpers (SunFire x4500s) with 48 TB a pop and am
wondering how best to set up my storage layer.  This cluster will both
serve user files and be the backend for a VM cluster.


On 06/14/2013 06:13 AM, Robert Milkowski wrote:
>>>> ... And am I right in
>>>> thinking that volumes shouldn't just show up as being
>>>> corrupt like this?  Should I be looking harder for some
>>>> kind of hardware problem?
>>>
>>> Volumes shouldn't just show up as corrupt like that, yes.
>>
>> It now looks like it's a hardware problem with the SAN storage for
>> that viceb partition.  Ouch.
>
> And this is one of the reasons why ZFS is so cool :)
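
(Presumably because ZFS's end-to-end checksums catch this kind of
silent hardware corruption at the storage layer, rather than letting
it surface later as a mysteriously corrupt AFS volume. A minimal
illustration, with a made-up pool name:

    # illustrative: a scrub re-reads every block and verifies checksums
    zpool scrub vicepool
    zpool status -v vicepool   # lists checksum errors and affected files

Seeing checksum errors here would have pointed straight at the SAN.)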