[OpenAFS] Replication, Fail-over, Disconnected Operation and Caching
Derrick Brashear
shadow@gmail.com
Fri, 25 Jan 2008 12:35:16 -0500
On Jan 25, 2008 12:20 PM, <openafs@bobich.net> wrote:
> Hi,
>
> I've looked through the documentation, but couldn't find any specifics on
> this, so I'd be grateful if somebody could point me at the page I've
> missed.
>
> 1) How do OpenAFS clients pick a server to access a volume from if the
> volume is replicated on multiple servers?
>
Server preferences. Look at e.g. fs getserverprefs / fs setserverprefs.
They default "sensibly" based on classful networking, sadly.
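
For example, to prefer one file server over another by hand (the hostnames
here are made up; a lower rank is more preferred):

    fs setserverprefs -servers fs1.example.com 20000 fs2.example.com 40000
    fs getserverprefs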
>
> 2) From the documentation, it looks like the replication mechanism is
> single-master / multiple-slaves, i.e. one read-write server, multiple
> read-only servers. Is that correct?
Yes
> If so, do clients transparently handle
> this? Are writes transparently routed to the read-write server while still
> allowing reads to come from a more local, faster, read-only server?
Not in the manner you're suggesting. Since the volumes don't auto-replicate
(you can publish a snapshot at discrete times with "vos release", but it's
not the case that you write and the change gets auto-pushed), you don't
want that anyway. Reads come from a read-only replica; writes go to the
single read-write volume and only appear in the replicas when you release.
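
Publication is a manual push. A minimal sketch (volume, server, and
partition names are just examples):

    vos addsite fsb.example.com a proj    # define a read-only site for volume "proj"
    vos release proj                      # push a snapshot of the RW volume to the RO sites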
>
> 3) Can the root volume be replicated? What I am really looking to do is
> have 2 servers, one as master and the other with all the volumes
> replicated. Is that possible?
Yes, but, as above, is it what you want?
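
If it is, a minimal sketch for the two standard root volumes (server and
partition again made up):

    vos addsite fsb.example.com a root.afs
    vos addsite fsb.example.com a root.cell
    vos release root.afs
    vos release root.cell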
>
>
> 4) If the read-write server fails, how does OpenAFS handle failing over to
> the replicated backup? When the original master comes back up, how
> transparently / gracefully does this happen?
>
For read-write, there is no failover to see: there's only one read-write
copy. For read-only, a volume is a volume; the client just moves on to
another replica site.
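
To see which sites a client has to fail over to, and to make it re-probe
servers it thinks are down (volume name is an example):

    vos listvldb -name root.cell    # lists the RW site and all RO sites
    fs checkservers                 # recheck which file servers are reachable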
>
> 5) Is disconnected operation supported via local caching (as per Coda)?
Not yet.
> If
> so, are there limits on sane cache sizes? Is it reasonable to expect to
> have tens of GB of cached content available on the client nodes?
Regardless, there are limits. I wouldn't try a cache over 20 GB.
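
The cache size is the third field of the client's cacheinfo file, in 1K
blocks; a sketch of what a 20 GB disk cache would look like:

    # /usr/vice/etc/cacheinfo -- mountpoint:cache-directory:size-in-1K-blocks
    /afs:/usr/vice/cache:20000000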
>
> I am currently using GFS in reliable environments, and Coda on a small
> scale in environments that have to tolerate disconnections, but I have
> concerns about Coda's stability (perpetual betaware, or so it seems) in
> larger and harsher environments (terabytes of storage, hundreds of
> clients, thousands of users), which is why I am looking at OpenAFS as a
> possible more stable alternative.
>
> Thanks in advance.
>
> Gordan
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>