[OpenAFS] AFS namei file servers, SAN, any issues elsewhere? We've had some. Can AFS _cause_ SAN issues?
Dale Ghent
daleg@umbc.edu
Thu, 20 Mar 2008 18:14:00 -0400
Note that you only need SUNWsan if you're running Solaris < 10.
Why one would run Solaris < 10 these days is beyond me, but...
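For example, a quick way to check whether it's already installed
(pkginfo is the stock Solaris packaging tool; this is the command
Jason's listing below came from):

    pkginfo -l SUNWsan || echo "SUNWsan not installed"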
/dale
On Mar 20, 2008, at 3:40 PM, Kim Kimball wrote:
> Thanks, Jason.
>
> Is the hardware the same as what you tested last year?
>
> Kim
>
>
> Jason Edgecombe wrote:
>> Is this what you need?
>>
>> PKGINST: SUNWsan
>> NAME: SAN Foundation Kit
>> CATEGORY: system
>> ARCH: sparc
>> VERSION: 1.0
>> BASEDIR: /
>> VENDOR: Sun Microsystems, Inc.
>> DESC: This package provides a support for the SAN Foundation
>> Kit.
>> PSTAMP: sanserve-a20031029172438
>> INSTDATE: Jan 15 2008 10:37
>> HOTLINE: Please contact your local service provider
>> STATUS: completely installed
>> FILES: 22 installed pathnames
>> 4 shared pathnames
>> 1 linked files
>> 11 directories
>> 2 executables
>> 239 blocks used (approx)
>>
>>
>> Running Solaris 9 9/05 HW on SPARC with the Sun SAN Foundation Kit.
>>
>> Jason
>>
>> Kim Kimball wrote:
>>> Hi Jason,
>>>
>>> Thanks!
>>>
>>> Can you tell me which flavor of SAN you're using?
>>>
>>> Kim
>>>
>>>
>>> Jason Edgecombe wrote:
>>>> Robert Banz wrote:
>>>>>
>>>>>
>>>>> AFS can't really cause "SAN issues": it's just another
>>>>> application using your filesystem. In some cases it can be a
>>>>> quite heavy user of it, but since it's interacting only through
>>>>> the fs, it's not going to know anything about your underlying
>>>>> storage fabric, or have any way of targeting it for any more
>>>>> badness than any other filesystem user.
>>>>>
>>>>> One of the big differences affecting filesystem I/O load between
>>>>> 1.4.1 and 1.4.6 was the removal of functions that made copious
>>>>> fsync calls. Those calls were made in fileserver/volserver
>>>>> functions that modified various in-volume structures, specifically
>>>>> file creations and deletions, and would lead to rather
>>>>> underwhelming performance when doing vos restores, or when
>>>>> deleting or copying large file trees. In many configurations each
>>>>> fsync causes the OS to pass a call on to the underlying storage to
>>>>> verify that all changes written have reached *disk*, causing the
>>>>> storage controller to flush its write cache. Since this defeats
>>>>> many of the benefits (wrt I/O scheduling) of having a cache on
>>>>> your storage hardware, it could lead to overloaded storage.
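>>>>>
>>>>> As a rough way to see this (a sketch; pgrep and truss are stock
>>>>> Solaris, and the server/volume/dump names here are made up):
>>>>>
>>>>>     # in one window: attach to the volserver and tally its syscalls
>>>>>     truss -c -p `pgrep volserver`
>>>>>     # in another: restore a dump; Ctrl-C the truss to print the tally
>>>>>     vos restore fs1.example.edu /vicepa test.vol -file /tmp/test.dump
>>>>>
>>>>> On 1.4.1 you'd expect the fsync count to climb with every file
>>>>> created; on 1.4.6 it should stay low.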
>>>>>
>>>>> Some storage devices have the option to ignore these cache-flush
>>>>> requests from hosts, assuming your write cache is reliable (e.g.
>>>>> battery-backed).
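>>>>>
>>>>> On the host side, recent ZFS releases expose a tunable for the
>>>>> same thing (an /etc/system sketch; only sane if the array's write
>>>>> cache really is battery-backed):
>>>>>
>>>>>     * tell ZFS not to send cache-flush requests to the array
>>>>>     set zfs:zfs_nocacheflush = 1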
>>>>>
>>>>> Under UFS, I would suggest running in 'logging' mode when using
>>>>> the namei fileserver on Solaris, as yes, fsck is rather horrible
>>>>> to run. Performance on reasonably recent versions of ZFS was
>>>>> quite acceptable as well.
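>>>>>
>>>>> A vfstab line for a logging /vicep partition might look like this
>>>>> (device names are just examples):
>>>>>
>>>>>     /dev/dsk/c1t1d0s6  /dev/rdsk/c1t1d0s6  /vicepa  ufs  2  yes  logging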
>>>>
>>>> I can confirm Robert's observations. I recently tested OpenAFS
>>>> 1.4.1 inode vs. 1.4.6 namei on Solaris 9 SPARC with a Sun StorEdge
>>>> 3511 expansion-tray fibre channel device. The difference with vos
>>>> move and the like is staggering. We have been using the 1.4.6
>>>> namei config on a SAN for a few months now with no issues.
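>>>>
>>>> For reference, the kind of move where the difference shows up
>>>> (volume and server names are made up):
>>>>
>>>>     vos move home.jdoe fs1.example.edu /vicepa fs2.example.edu /vicepb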
>>>>
>>>> Jason
>>
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>