[OpenAFS] OpenAFS on ZFS (Was: Salvaging user volumes)

Neil Davies semanticphilosopher@gmail.com
Sat, 15 Jun 2013 09:50:37 +0100


We've been running AFS over ZFS over LUKS over EC2/EBS volumes for two
years now - with no incidents or issues.

I realise that sounds like a lot of indirection, but it meets our needs
of flexibility, security and cost.

This approach has also allowed us to incrementally upgrade as needed
(replacing underlying EBS volumes) and, through concurrency, get the
throughputs we need.
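
For anyone curious what that layering looks like in practice, a minimal sketch follows. All device names, the mapper name and the pool/dataset names are illustrative assumptions, not our actual configuration:

```shell
# Assumed: /dev/xvdf is the attached EBS volume (name is illustrative).

# 1. Put LUKS encryption on the raw EBS block device
cryptsetup luksFormat /dev/xvdf
cryptsetup luksOpen /dev/xvdf ebs-crypt

# 2. Build a ZFS pool on the decrypted mapper device
zpool create vicepool /dev/mapper/ebs-crypt

# 3. Carve out a dataset mounted as an AFS vice partition
zfs create -o mountpoint=/vicepa vicepool/vicepa
```

Replacing an underlying EBS volume is then a matter of attaching a new volume, adding it to the pool, and letting ZFS migrate the data.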

Neil

On 15 Jun 2013, at 08:24, Dan Van Der Ster <daniel.vanderster@cern.ch> wrote:

> We deployed a ZFS on Linux server (on Scientific Linux) in the past
> week. Pretty simple stuff...the only non-default options are atime=off
> and recordsize=64K (which may be wrong, though some posts about ZFS
> and AFS suggest it).
>
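
Those two options can be set when the dataset is created; a sketch, with an assumed pool name of `tank` and mount point `/vicepa` (both hypothetical):

```shell
# Only atime=off and recordsize=64K are the non-default options
# mentioned above; pool and dataset names are assumptions.
zfs create -o atime=off -o recordsize=64K -o mountpoint=/vicepa tank/vicepa

# Confirm the properties took effect
zfs get atime,recordsize tank/vicepa
```

Both properties can also be changed later with `zfs set`, though `recordsize` only affects blocks written after the change.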
> About Ceph, we had a test server serving an RBD /vicep partition. And
> it worked. We're still building up the Ceph cluster (primarily to
> provide OpenStack Cinder volumes) and once it is in production we plan
> to run a few virtualized AFS servers with Ceph volumes behind.
>
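
The RBD-backed /vicep setup can be sketched roughly as below. Pool and image names, the image size and the filesystem choice are all assumptions for illustration:

```shell
# Hypothetical names: pool "rbd", image "vicepb", 100 GiB.
rbd create --size 102400 rbd/vicepb

# Map the image as a kernel block device on the fileserver
rbd map rbd/vicepb

# Format and mount it as a vice partition
mkfs.ext4 /dev/rbd/rbd/vicepb
mount /dev/rbd/rbd/vicepb /vicepb
```

From the fileserver's point of view the mapped device is just a local disk, which is why it "just worked" as a /vicep partition.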
> All of this is in testing, and though we've not had deal-breaking
> incidents, the long-term stability is still in question.
>
> --
> Dan
> CERN IT
>
> Steven Presser <spresse1@acm.jhu.edu> wrote:
>
> Out of pure curiosity, does anyone care to share experiences from
> running OpenAFS on ZFS?
>
> If anyone is running OpenAFS on top of, or in a cluster which also
> uses, Ceph, would you care to share your experience, as well as your
> architecture?
>=20
> Background:  I have 4 Thumpers (SunFire x4500s) with 48TB a pop and am
> wondering how best to set up my storage layer.  This cluster will both
> serve user files and be the backend for a VM cluster.
>
> Thanks,
> Steve
>
> On 06/14/2013 06:13 AM, Robert Milkowski wrote:
>>>>> ... And am I right in thinking that volumes shouldn't just show up
>>>>> as being corrupt like this?  Should I be looking harder for some
>>>>> kind of hardware problem?
>>>>
>>>> Volumes shouldn't just show up as corrupt like that, yes.
>>>
>>> It now looks like it's a hardware problem with the SAN storage for
>>> that viceb partition.  Ouch.
>> And this is one of the reasons why ZFS is so cool :)
>>
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info