[AFS3-std] File Server/Cache Manager Extended Status Information
Jeffrey Hutzelman
jhutz@cmu.edu
Mon, 07 Apr 2008 14:53:43 -0400
--On Monday, April 07, 2008 02:42:56 PM -0400 Tom Keiser
<tkeiser@gmail.com> wrote:
> On 4/7/08, Jeffrey Hutzelman <jhutz@cmu.edu> wrote:
>> --On Monday, April 07, 2008 12:30:06 PM -0400 Matt Benjamin
>> <matt@linuxbox.com> wrote:
>>
> [snip]
>>
>> > As discussed in previous mail, it seems that there's a natural
>> > compression in batching notifications to one cache manager, especially
>> > for one file, grouped, as Tom says, closely in time. I assumed we
>> > would wish to support this.
>> >
>>
>> You'd think that, but the problem is that you generally can't. Cache
>> consistency demands that when a file's contents are changed, you break
>> callbacks to any online clients before the RPC that made the change
>> returns. That means you can't queue them up to combine later.
>>
>
> No. That entirely depends on the consistency model you're trying to
> support. What you suggest we do would be the equivalent of saying a
> microprocessor must wait for a store to hit main memory, and all
> caches to be invalidated, before the instruction can retire. Nobody
> in the hardware business follows that type of consistency model
> anymore (because it does not scale, and is unnecessary once atomics
> and membars are supported), and I don't think we should use it either.
People in the hardware business have different constraints than we do.
For example, they can change everything in a system at once, whereas we
must maintain backward compatibility with existing clients, which are
based on the consistency model we actually have.
If you want your protocol changes to be adopted, it must be possible to
deploy them incrementally. It must also be possible to take advantage of
the new functionality without completely rearchitecting the cache manager.
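
For concreteness, here is a minimal sketch of the ordering constraint
described in the quoted text above. All names and structures are
invented for illustration; this is not the actual fileserver code, and
a hypothetical StoreData-style handler stands in for the real RPC. The
point it shows is that the breaks to other clients holding callbacks
are issued inside the handler, before the store returns to the writer,
so there is no window in which they could be held back and batched.

    /* Hypothetical sketch only -- names and structures are invented
     * for illustration and are not the actual fileserver code. */
    #include <stdio.h>

    #define MAX_CLIENTS 8

    struct client {
        int id;
        int has_callback;   /* does this client hold a callback promise? */
    };

    static struct client clients[MAX_CLIENTS];

    /* Notify one client that its cached copy is no longer valid.  In
     * the real protocol this would be a callback-break RPC to the
     * cache manager; here it is just a stub. */
    static void break_callback(struct client *c)
    {
        printf("breaking callback for client %d\n", c->id);
        c->has_callback = 0;
    }

    /* Handler for a hypothetical StoreData-style RPC. */
    static int store_data(int writer_id)
    {
        int i;

        /* ... update the file contents here ... */

        /* Every other client holding a callback must be notified
         * before this RPC returns to the writer, so the breaks cannot
         * be queued up and combined later. */
        for (i = 0; i < MAX_CLIENTS; i++) {
            if (clients[i].has_callback && clients[i].id != writer_id)
                break_callback(&clients[i]);
        }

        return 0;   /* only now does the writer's store complete */
    }

    int main(void)
    {
        clients[0].id = 1; clients[0].has_callback = 1;
        clients[1].id = 2; clients[1].has_callback = 1;
        return store_data(1);  /* client 1 writes; client 2 is broken */
    }

Batching in the sense discussed above would amount to moving that loop
past the return, which is exactly what the current consistency model
rules out for online clients.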