[OpenAFS] High Speed AFS Access

Sven Oehme oehmes@de.ibm.com
Tue, 20 Jan 2004 14:33:00 +0100



-------------------------------------------------------------------------------------------------------------------------
Dept. A141, TG/SSG EMEA AIS
Development Leader Stonehenge
IBM intranet ---> http://w3.ais.mainz.de.ibm.com/stonehenge/
internet ---> http://www-5.ibm.com/services/de/its/filestore.html
Phone (+49)-6131-84-3151
Fax (+49)-6131-84-6708
Mobile (+49)-171-970-6664
E-Mail: oehmes@de.ibm.com

openafs-info-admin@openafs.org wrote on 20.01.2004 14:11:02:

> 
> So I've been kicking around an idea for a very large storage array to use
> for on-line access to hard drive images.  I already have a stable,
> multi-city AFS cluster and would really like to integrate this disk server
> into the AFS cluster for security and simplicity reasons.
> 
> What I'm wondering, as I've seen the disk speed statistics from
> http://www.e.kth.se/~jimmy/afsfsperf/afsfsperf.html and I'm wondering if
> anybody here has been able to benchmark their fileservers above the
> seemingly 22MB/sec boundary.

Yes, we have reached 35 MB/sec per backend system.

> 
> I'd like to provide 12 ports of channel-bonded gigabit Ethernet connectivity
> to 10-12 gigabit connected clients (or a direct, crossed-over gigabit pipe
> per client, whatever is faster) so that clients can pull files,
> theoretically, at about 80MB/sec.  Of course I need to get the disks to run
> that fast though I have some ideas of using a RAID 5x5 matrix of about 30 of
> the fastest 137G SCSI disks I can find spread across 8 channels of U320 SCSI
> so that I can get, again theoretically, throughput in excess of a Gigabyte
> per second.
> 
> Can anybody see a specific problem with AFS and these types of speed
> requirements?  I would love to be able to force crypt on this server, but
> given the stats on kth.se I wouldn't expect the server to be able to handle
> the disk bandwidth and the encryption with any grace.
> 
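As a rough sanity check on the numbers in the question: the per-disk sustained rate below is an assumption (fast U320-era drives managed very roughly 40 MB/sec sequential), so treat this as a back-of-the-envelope sketch, not a promise.

```python
# Back-of-the-envelope for the proposed array.
# MB_PER_DISK is an ASSUMPTION; adjust for the actual drives used.
DISKS = 30
MB_PER_DISK = 40           # assumed sustained MB/s per fast 137 GB drive
CHANNELS = 8
MB_PER_CHANNEL = 320       # U320 SCSI bus limit, MB/s

disk_limit = DISKS * MB_PER_DISK       # 1200 MB/s from the platters
bus_limit = CHANNELS * MB_PER_CHANNEL  # 2560 MB/s across 8 channels
aggregate = min(disk_limit, bus_limit) # the disks are the bottleneck here
print(aggregate)  # 1200
```

So "in excess of a Gigabyte per second" is plausible on paper, before RAID5 parity overhead and before the AFS server-side limits discussed below this in the thread.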

Yes, there are multiple problems, and there has already been a lot of
discussion on this list about them.

We already run and sell a setup that looks like your idea :-)
We run multiple AFS servers on AIX or Linux, connected to multiple SANs
with fast drives in RAID5 and RAID1 configurations.

We have multiple separate partitions on each server. With multiple
concurrent accesses to the local filesystem (JFS under AIX, ext3 under
Linux) we get data rates of up to 70 MB/sec on each partition. If we
access the same data through AFS, we cannot pass the 35 MB/sec limit:
the server spends most of its time in I/O wait, and we never see a
sustained stream to or from the drives. It always looks like a cache on
the server being filled up, flushed, filled up again, flushed, and so on.

Different bottlenecks have been discussed (the Rx protocol, UDP, the
thread model, ...). I think there are several distinct problems, but
nobody is working on them, because most admins run many slower systems
rather than a few very fast ones.
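A minimal way to reproduce this local-vs-AFS comparison is to time a large
sequential write against a path on the local vice partition and against the
same storage through /afs. The paths below are placeholders, and this is
only a sketch of the measurement, not a proper benchmark:

```python
import os
import time

def write_throughput(path, total_mb=256, block_kb=1024):
    """Sequentially write total_mb of data to path and return MB/s.

    fsync() is included so the rate reflects the disks (or the AFS
    fileserver), not just the local page cache.
    """
    block = b"\0" * (block_kb * 1024)
    t0 = time.monotonic()
    with open(path, "wb") as f:
        for _ in range((total_mb * 1024) // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - t0
    os.remove(path)
    return total_mb / elapsed

# Placeholder paths: one on the local vice partition, one through AFS.
# print(write_throughput("/vicepa/bench.tmp"))
# print(write_throughput("/afs/example.com/bench.tmp"))
```

On a setup like the one described above, the first number should approach
the raw partition rate while the second stalls well below it, with the
fileserver sitting in I/O wait.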

> Any thoughts are greatly appreciated and if I do go forward with the plans I
> will happily share the benchmarks and specs.
> 
> Thanks.
> 
>  - AB
> 
> 
> -- 
> Aaron Stanley
> Director, Information Technology
> Stroz Friedberg, LLC
> 15 Maiden Lane, 12th Floor
> New York, NY  10038
> 212/981.6534[o] | 917/859.1503[c] | 815/642.0223[f]
> 
> 
> 
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info


Sven
