[OpenAFS-devel] datagrams really aren't big enough?

Derrick J Brashear shadow@dementia.org
Wed, 08 Nov 2000 11:07:20 -0500


--On Wednesday, November 08, 2000 08:45:10 AM -0500 Chas Williams 
<chas@cmf.nrl.navy.mil> wrote:

> In message <200011080134.UAA22208@oo.yi.org>, Default writes:
>> With MAX_FRAGS=1, the bottleneck is rx.
>> Better than a total cache bypass would be a more efficient cache filling
>> mechanism, so data doesn't have to get copied around all the time, and
>> perhaps some way of discarding dirty UFS pages without ever writing them
>> to the cache at all.  Then you wouldn't need to hack at deciding which
>> files should be cached and which shouldn't.
>
> it would be a worthwhile effort to fix the caching scheme.  but i see a
> possible benefit to bypassing the cache completely for large files (i.e.
> much larger than the cache size), esp. if you are just reading them once.
> they tend to flush all the other files out of the cache.  i could be
> wrong.  optimizing the cache would be great but is currently beyond my
> understanding of afs.
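
To make that concrete, the check being suggested would look roughly like 
the following. This is only a sketch with made-up names (CACHE_BYPASS_FACTOR, 
should_bypass_cache, etc.) -- nothing like it exists in the tree as-is:

    /* Hypothetical sketch of a size-based bypass decision: skip the
     * disk cache when the file dwarfs it and is probably read once. */

    #define CACHE_BYPASS_FACTOR 2   /* assumed: file > 2x total cache */

    struct bypass_hint {
        long long file_size;     /* length of the file in bytes */
        int       opens_seen;    /* how often we've opened it before */
    };

    /* Return nonzero if reads should go straight to the fileserver
     * instead of being staged through the local disk cache. */
    static int
    should_bypass_cache(const struct bypass_hint *h, long long cache_bytes)
    {
        if (h->file_size > CACHE_BYPASS_FACTOR * cache_bytes
            && h->opens_seen <= 1)
            return 1;   /* big, probably read once: don't pollute the cache */
        return 0;       /* default: cache as usual */
    }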

Then you've either made a decision for too many people, or you need an 
interface to pick and choose which files will be cached and which won't. The 
problem with allowing per-file selection is that you've then made end users 
"know" they're using AFS and deal with it when the default is not what 
they want. I'm not sure that's a good thing.
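
Just to show why that worries me: a per-file knob more or less has to look 
something like this (again purely hypothetical -- there is no such ioctl in 
OpenAFS today, and the request number is made up), which means the 
application or the user has to ask for it explicitly, per file:

    #include <sys/ioctl.h>

    #define AFS_IOC_SET_NOCACHE 0x4166   /* made-up request number */

    /* The caller (or some command wrapping this) marks one file as
     * uncached.  Somebody has to make this call for each file -- i.e.
     * the user has to know they're on AFS and care about the default. */
    static int
    mark_file_uncached(int fd, int bypass)
    {
        return ioctl(fd, AFS_IOC_SET_NOCACHE, &bypass);
    }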

More particularly, anyone with a relatively slow connection who wants to 
look at a large file more than once will either have to learn how this 
works, or hate us.
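
To put rough numbers on it: pulling a 100 MB file across a 128 kbit/s link 
is about 800,000 kbit / 128 kbit/s, or roughly 6250 seconds -- close to two 
hours -- and without the cache you pay that on every read, not just the 
first one.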