[OpenAFS-devel] Re: afscache on UFS+logging or ZFS?
Andrew Deason
adeason@sinenomine.net
Thu, 5 Aug 2010 10:27:18 -0500
On Wed, 04 Aug 2010 23:50:29 +0100
Robert Milkowski <milek@task.gda.pl> wrote:
> This shouldn't be a big issue. You can always set recordsize to
> something smaller.
This doesn't get rid of the issue completely. It's been a little while
since I was looking at this... but my recollection is that if you set
the recordsize to something small, like 1K, you still get non-trivial
overhead. IIRC, a 1M file took up about 1.1M, and a 1K file took up
about 5K. Of course, the numbers vary depending on the situation (if
they didn't, afs could just calculate the disk usage itself and I'd be
happy).
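
(Just to illustrate, a minimal sketch of how one might measure that
overhead for a given cache file; this is not how afs actually accounts
for usage, and it assumes a POSIX stat() where st_blocks is reported in
512-byte units.)

    #include <stdio.h>
    #include <sys/stat.h>

    /* Compare a file's logical size with its actual on-disk usage.
     * Assumes st_blocks is in 512-byte units (POSIX); on ZFS the
     * on-disk figure also reflects recordsize and metadata overhead. */
    int
    main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }
        printf("logical: %lld bytes, on-disk: %lld bytes\n",
               (long long)st.st_size,
               (long long)st.st_blocks * 512LL);
        return 0;
    }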
> From the afs point of view as someone else suggested instead of
> truncating a file we would create a new one and unlink the old one.
Yes, but we have performance considerations to think about. If we do
that for all filesystems, I think some of them would not take kindly to
it, performance-wise. If we do it for just ZFS... well, I don't recall
the code being organized particularly well for that. It could be done,
but I don't think anyone has cared enough to seriously consider it.
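
(Purely as a sketch, not a patch against the actual cache manager code:
in userspace the create-and-replace alternative to truncation might look
roughly like the following. The helper name and error handling are
hypothetical.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Instead of ftruncate()ing the existing cache file, create a fresh
     * one and let rename() replace (and thus unlink) the old file; its
     * blocks are freed once the old inode's last reference drops.
     * Hypothetical helper, not the real cache manager interface. */
    static int
    replace_cache_file(const char *path)
    {
        char tmp[1024];
        int fd;

        snprintf(tmp, sizeof(tmp), "%s.new", path);
        fd = open(tmp, O_CREAT | O_TRUNC | O_RDWR, 0600);
        if (fd < 0)
            return -1;
        if (rename(tmp, path) != 0) {
            close(fd);
            unlink(tmp);
            return -1;
        }
        return fd;
    }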
> Afsd could check the recordsize during startup and issue a warning
> with a recommendation to lower it to a smaller value.
If this is easy and you know the calls to make, go ahead.
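
(One possible approach, sketched under the assumption that statvfs() is
available and that f_bsize on a ZFS cache filesystem reflects the
dataset recordsize; that assumption, and the 64K threshold, would need
checking on the platforms we care about.)

    #include <stdio.h>
    #include <sys/statvfs.h>

    /* Warn at startup if the cache filesystem's preferred block size
     * looks large.  The 64K threshold is arbitrary; f_bsize is assumed
     * to reflect the ZFS recordsize (default 128K), which inflates the
     * space consumed by many small cache files. */
    static void
    check_cache_blocksize(const char *cachedir)
    {
        struct statvfs vfs;

        if (statvfs(cachedir, &vfs) != 0)
            return;             /* can't tell; stay quiet */

        if (vfs.f_bsize > 64 * 1024)
            fprintf(stderr,
                    "afsd: cache fs block size is %lu; consider setting "
                    "a smaller recordsize (e.g. 'zfs set recordsize=8K "
                    "<dataset>')\n", (unsigned long)vfs.f_bsize);
    }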
> I need to investigate a little bit more to understand the issue
> better. Correct me if I'm wrong, but a dedicated partition or
> filesystem is fine, right?
Yes, but as I understand it, only because in that case we are assuming
that there is _no_ other disk activity on that partition aside from the
AFS kernel module.
--
Andrew Deason
adeason@sinenomine.net