[OpenAFS] R/W replication
Touretsky, Gregory
gregory.touretsky@intel.com
Tue, 13 Feb 2001 09:48:30 -0800
Well, speaking about the need for R/W replication - I can imagine (and have
actually seen live) an infrastructure with hundreds of compute servers writing
simultaneously to the same volume. Sometimes that can kill the server...
As for the implementation - I once thought about something like the DCE CDS
master implementation - one could configure which file server becomes the
master for, say, each file, and it would propagate changes to the other
servers, which would act as slaves for that specific record (file)...
Indeed, it would be a real challenge to provide such a feature.
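A minimal sketch of that per-file master idea, assuming a single authoritative
server per file that pushes each new version to its slaves. All names here are
hypothetical illustrations, not AFS code or APIs:

```python
# Sketch: per-file master replication in the spirit of the DCE CDS master
# model. One server is authoritative for a given file; writes must be routed
# to it, and it propagates each new version to the slave servers.

class ReplicatedFile:
    def __init__(self, path, master, replicas):
        self.path = path          # file within the volume (illustrative)
        self.master = master      # server authoritative for this file
        self.replicas = replicas  # slave servers holding copies
        self.version = 0
        # Each server's copy: (version, data)
        self.stores = {s: (0, b"") for s in [master] + replicas}

    def write(self, server, data):
        # All writes for this file must go through its master.
        if server != self.master:
            raise PermissionError(
                "write must be routed to master %s" % self.master)
        self.version += 1
        self.stores[self.master] = (self.version, data)
        self._propagate(data)

    def _propagate(self, data):
        # Master pushes the new version to every slave.
        for s in self.replicas:
            self.stores[s] = (self.version, data)

    def read(self, server):
        # Any replica (master or slave) can serve reads.
        return self.stores[server][1]
```

For example, with master "fs1" and slaves "fs2"/"fs3", a write accepted by
"fs1" becomes readable from "fs3", while a write sent to "fs2" is rejected.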
Regards,
Gregory Touretsky
IDC Computing / Systems Engineering Group
Unix Server Platforms
gregory.touretsky@intel.com
(+) 972-4-865-6377, Fax: 04-865-5999
iNET: 465-6377, M/S: IDC-1B
To: Dirk Heinrichs <heinrichs@qis-systemhaus.de>
Cc: OpenAFS Info <openafs-info@openafs.org>
Subject: Re: [OpenAFS] R/W replication
From: Derek Atkins <warlord@MIT.EDU>
Date: 13 Feb 2001 10:29:56 -0500
I doubt that RW clones will ever happen. It would imply huge
consistency issues. Whose job is it to make sure data is consistent
across all the servers? In the current model, data is "pushed" from
the RW volume to the RO clones by a sysadmin (or a cron job). You
would need more immediate data consistency with multiple RW volumes.
Worse, you also have really hard race conditions. For example, what
if two clients are writing to the same file on different servers? Who
wins? How do you let the clients know that there was a conflict?
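The race Derek describes can be made concrete with a small sketch: two servers
each accept a write to the same file, then try to reconcile. The version
numbers and the `reconcile` helper are hypothetical illustrations - AFS has no
such mechanism, which is exactly the problem:

```python
# Sketch: detecting a write-write conflict between two RW replicas.
# Each replica's state for a file is (version, writer, data).

def reconcile(a, b):
    """Return ("ok", winner) or ("conflict", (a, b)) for two replica states."""
    va, wa, da = a
    vb, wb, db = b
    if va == vb and da != db:
        # Both servers accepted a write at the same base version:
        # neither state dominates, and someone must resolve the conflict.
        return ("conflict", (a, b))
    # Otherwise one write strictly followed the other; higher version wins.
    return ("ok", a if va > vb else b)
```

Two clients writing concurrently on different servers both produce version 1,
so `reconcile` reports a conflict with no principled winner - and there is
still no way to tell either client that its write was discarded.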
Mind if I ask for what purpose you want replicated RW volumes? You
certainly don't need to replicate users' homedirs. And most system
software should be RO anyways (you _do_ realize that you can still
mount the RW volume of a replicated RO volume in order to make changes
to the volume?)
-derek