[OpenAFS] Extremely poor write performance.
Rubino Geiß
kb44@rz.uni-karlsruhe.de
Fri, 17 Jan 2003 00:12:30 +0100
Hi Paul,
> It seems that you looked at the performance of 1 client with
> 1 server? Are your figures for a single access to a server?
I know that AFS performs better than NFS once you count many clients,
especially if those clients have a read bias. We have been using AFS for
that reason for a year now. Before that we had the NFS thing for over 10
years ... it was a real pain.
So, please note: you are completely right -- but for the compiler research
we do, "make clean; make; make test ... minor editing ... do it all again"
is the main workload. A delete performance that is a factor of 10 to 100
slower really hurts! True, this affects single users, but it is a
non-negligible cost and acceptance problem.
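
For anyone who wants to reproduce the effect, a minimal Python sketch (the
target directory is made up -- point it once at a directory in AFS and once
at one in NFS and compare the delete times). It just creates and then
deletes a few thousand small files, which is roughly what "make clean" does
to us:

    import os, sys, time

    # Made-up default path; pass a different directory as the first argument.
    target = sys.argv[1] if len(sys.argv) > 1 else "/afs/our.cell/scratch/deltest"
    count = 2000

    os.makedirs(target, exist_ok=True)

    t0 = time.time()
    for i in range(count):
        with open(os.path.join(target, "obj%05d.o" % i), "w") as f:
            f.write("x" * 1024)          # 1 KB dummy object file
    t1 = time.time()

    for i in range(count):
        os.remove(os.path.join(target, "obj%05d.o" % i))
    t2 = time.time()

    print("create: %.1f s   delete: %.1f s" % (t1 - t0, t2 - t1))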
>
> Since AFS caches data, it is likely that multiple access to
> the data will show AFS has better performance.
>
> You could also try using an AFS RAM cache to see how that
> improves AFS access over, say, 10 accesses to the same data.
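
For measuring that repeated-access effect, a minimal sketch (the path is
made up; it reads the same file ten times and prints the time per pass, so
the warm-cache passes stand out):

    import time

    path = "/afs/our.cell/proj/data/big.bin"   # made-up example file
    for n in range(1, 11):
        t0 = time.time()
        with open(path, "rb") as f:
            while f.read(1 << 20):             # read in 1 MB chunks
                pass
        print("pass %2d: %.2f s" % (n, time.time() - t0))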
>
> What would be more interesting than a
> single-server:single-client comparison is to look at how
> performance varies as the number of clients increases.
>
> Such a study was done a few years ago using the Andrew
> Benchmark [0]. A graph [1] of the results is attached below.
>
> What is interesting about the graph is that, as the number of
> clients rises, AFS clearly shows much better scalability than NFS.
>
> The other thing to remember is that, when comparing AFS with
> NFS, you are not really comparing like for like [2].
>
> For example: AFS has features and capabilities not available
> in NFS. In AFS:
>
> + You can move data between fileservers with little
> or no impact on users accessing that data.
>
> + You can replicate data across several servers so that
> AFS clients will automagically switch to access another fileserver
> if the first fileserver becomes inaccessible for that
> replicated data.
>
> + User-IDs and group-IDs are managed consistently across
> all clients/servers. There is no dependence on the local
> /etc/passwd file.
>
> + Users can create their own group-ids and add members
> to these groups.
>
> + If you have multiple database servers and one db server
> fails, the cell still functions and clients fall back to using
> the remaining db servers automagically.
>
> + Authentication has from the start been done using Kerberos,
> which is much more secure than the NIS-based approach typically
> used with NFS. (OK, some implementations of NFS now have Kerberos,
> but support is not consistent across all platforms.)
>
> + Caching has always been used to provide good access to data
> on second and subsequent accesses.
> This also reduces network traffic.
>
> + Caching can be done on disk or in RAM (for performance).
>
> + There is mutual authentication: not only do users have
> to authenticate but servers do also.
>
> + In my experience, there have been fewer security problems
> in AFS than NFS.
>
> + AFS has better "systems management" tools for administrators.
>
> + AFS administration can be done from any client machine.
>
> + AFS does not have the (IMHO ugly) client-side mount of a server resource.
> AFS filespace is consistent across all clients and has
> "location independence": users do not need to know which server
> to access to find a resource. Users just need the pathname.
> "Location mapping" is done at the server, not the client.
> (see also [3] "location independence")
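
To make a few of the points above concrete (moving volumes, replication,
user-created groups, location independence), here is a minimal Python
sketch driving the standard AFS command-line tools. All volume, server,
partition, user and group names are made up -- substitute your own. The
vos commands need admin tokens; pts creategroup works for ordinary users
on their own "user:group" names:

    import subprocess

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Move a volume to another fileserver; clients keep working during the move.
    run(["vos", "move", "user.kb44", "afs1", "/vicepa", "afs2", "/vicepb"])

    # Add a read-only replica site on a second server and release the volume,
    # so clients can fail over to either copy.
    run(["vos", "addsite", "afs2", "/vicepa", "proj.compiler"])
    run(["vos", "release", "proj.compiler"])

    # An ordinary user creates a group they own and adds a member to it.
    run(["pts", "creategroup", "kb44:students"])
    run(["pts", "adduser", "-user", "paul", "-group", "kb44:students"])

    # Location independence: ask the cache manager which fileserver
    # currently holds a path -- users themselves never need to know.
    run(["fs", "whereis", "/afs/our.cell/proj/compiler"])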
>
>
> I hope this helps.
> --
> cheers
> paul http://acm.org/~mpb
>
> References:
>
> [0] What is the Andrew Benchmark?
> http://www.angelfire.com/hi/plutonic/afs-faq.html#sub3.18
>
> [1] Graph of AFS versus NFS The Andrew Benchmark results
> http://www.angelfire.com/hi/plutonic/images/andrew1.jpg
>
> [2] How does AFS compare with NFS?
> http://www.angelfire.com/hi/plutonic/afs-faq.html#sub1.11
>
> [3] AFS "location independence"
> http://www.angelfire.com/hi/plutonic/afs-faq.html#sub1.05.b
>
>