[OpenAFS] OpenAFS on ZFS (Was: Salvaging user volumes)

Steven Presser spresse1@acm.jhu.edu
Sat, 15 Jun 2013 09:46:36 -0400


Thanks all.  This plan no longer sounds as nuts as it did at the start.

On 06/15/2013 04:50 AM, Neil Davies wrote:
> We've been running AFS over ZFS over LUKS over EC2/EBS volumes for two years now - with no incidents or issues.
>
> I realise that sounds like a lot of indirection, but it meets our needs of flexibility, security and cost.
>
> This approach has also allowed us to incrementally upgrade as needed (replacing underlying EBS volumes) and, through concurrency, get the throughputs we need.
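>
> For anyone trying to picture the layering, a rough sketch of bringing one leg of the stack up looks like the following (the device name, pool name and key file are placeholders for illustration, not our exact commands):
>
>     # put LUKS on the attached EBS volume, then build ZFS on the mapping
>     cryptsetup luksFormat /dev/xvdf /root/ebs0.key
>     cryptsetup luksOpen /dev/xvdf ebs0crypt --key-file /root/ebs0.key
>
>     zpool create afspool /dev/mapper/ebs0crypt
>     zfs create -o mountpoint=/vicepa afspool/vicepa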
>
> Neil
>
> On 15 Jun 2013, at 08:24, Dan Van Der Ster <daniel.vanderster@cern.ch> wrote:
>
>> We deployed a ZFS on Linux server (on Scientific Linux) in the past week. Pretty simple stuff...the only non-default options are atime=off and recordsize=64K (which may be wrong, though some posts about ZFS and AFS suggest it).
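>>
>> For reference, the dataset setup amounts to something like this (the pool name and devices are just placeholders, not our actual layout):
>>
>>     zpool create afspool mirror /dev/sdb /dev/sdc
>>     zfs create -o atime=off -o recordsize=64K -o mountpoint=/vicepa afspool/vicepa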
>>
>> About Ceph, we had a test server serving an RBD /vicep partition, and it worked. We're still building up the Ceph cluster (primarily to provide OpenStack Cinder volumes), and once it is in production we plan to run a few virtualized AFS servers backed by Ceph volumes.
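>>
>> On the test file server, wiring the RBD volume up as a vice partition amounted to something like the following (image name, size and filesystem choice are illustrative, not necessarily what we ran):
>>
>>     rbd create vicepb --size 1048576        # 1 TB image; size is in MB
>>     rbd map rbd/vicepb
>>     mkfs.ext4 /dev/rbd/rbd/vicepb
>>     mount /dev/rbd/rbd/vicepb /vicepb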
>>
>> All of this is in testing, and though we've not had any deal-breaking incidents, the long-term stability is still in question.
>>
>> --
>> Dan
>> CERN IT
>>
>> Steven Presser <spresse1@acm.jhu.edu> wrote:
>>
>>
>> Out of pure curiosity, does anyone care to share experiences from
>> running OpenAFS on ZFS?
>>
>> If anyone is running OpenAFS on top of or within a cluster which also uses
>> Ceph, would you care to share your experience, as well as your architecture?
>>
>> Background:  I have 4 Thumpers (Sun Fire X4500s) with 48 TB a pop and am
>> wondering how best to set up my storage layer.  This cluster will both
>> serve user files and be the backend for a VM cluster.
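>>
>> (For concreteness, the sort of per-box layout I've been sketching looks
>> like the one below - the device names are hypothetical and I'd welcome
>> corrections - with more raidz2 vdevs added across the remaining
>> controllers:
>>
>>     zpool create vicepool \
>>       raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
>>       raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
>>     zfs create -o mountpoint=/vicepa vicepool/vicepa
>>
>> ...and so on.)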
>>
>> Thanks,
>> Steve
>>
>> On 06/14/2013 06:13 AM, Robert Milkowski wrote:
>>>>>> ... And am I right in thinking that volumes shouldn't just show up as
>>>>>> being corrupt like this?  Should I be looking harder for some kind of
>>>>>> hardware problem?
>>>>> Volumes shouldn't just show up as corrupt like that, yes.
>>>> It now looks like it's a hardware problem with the SAN storage for that
>>>> viceb partition.  Ouch.
>>> And this is one of the reasons why ZFS is so cool :)
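>>> With checksums on every block, a scrub tells you outright when the
>>> storage layer is lying, rather than leaving the salvager to guess, e.g.
>>> (the pool name here is just an example):
>>>
>>>     zpool scrub vicepool
>>>     zpool status -v vicepool    # lists any files with unrecoverable errors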
>>>
>> _______________________________________________
>> OpenAFS-info mailing list
>> OpenAFS-info@openafs.org
>> https://lists.openafs.org/mailman/listinfo/openafs-info
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info