[OpenAFS] sanity check please.
Lars Schimmer
l.schimmer@cgv.tugraz.at
Sun, 04 Sep 2005 13:01:48 +0200
Pucky Loucks wrote:
| Hi everyone, I've been playing with OpenAFS for a couple of months now
| and I'm just looking for a sanity check of my idea for an AFS
| deployment.
|
| I'm wanting to build a scalable system that will have millions of
| images stored on a file system. (this is where AFS comes in) It looks
| to me like AFS is able to deal with scaling the partitions and volumes
| i.e. total storage. In the end I could have terabytes of data. Still
| my belief is that AFS can handle it. My concern is that I want to
| replicate the data so that I have some redundancy, AFS can handle this
| too.
|
| 1) Is this going to become a huge management issue?
Depends.
With some good scripts it can be managed easily.
In the end it is mostly "vos create volume", "fs mkmount path volume",
"fs setacl path rights" and "vos backup volume", plus keeping the big
overview ;-) As long as you manage your cell well, it's easy. Please
don't mount all your volumes in one single directory.
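A minimal sketch of such a script, assuming a cell called example.com,
a file server fs1 with partition /vicepa and a "webuser" PTS entry for
the web server; all of these names are only examples, not taken from
your setup:

  #!/bin/sh
  # create a daily volume, mount it, set ACLs and make a backup clone
  DAY=day.001
  SERVER=fs1.example.com        # example file server
  PART=/vicepa                  # example partition

  vos create $SERVER $PART images.$DAY
  fs mkmount /afs/example.com/images/$DAY images.$DAY
  fs setacl /afs/example.com/images/$DAY webuser rl   # read + lookup only
  vos backup images.$DAY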
| 2) If I end up getting 5 thousand images a day would I want it to be in
| its own volume so I could replicate each "day"?
What do you mean by "images"? CD images? DVD images?
You can create one volume per day, i.e. per 5000 images. But keep in
mind: 5000 images in one directory is a real mess to search through.
And if those are "big" images, the daily volume grows fast, so
replicating it takes far more time. Replication runs at near line speed
(as long as there are not large numbers of small files <1 KB; but you
are talking about images, so the files should be larger), but e.g.
100 GB in a volume takes its time to replicate at 100 Mbit/s.
I suggest 50 volumes of 100 images each per day, e.g. named
"day.001.01" and so on; that way you can find a volume easily and
replicate them all with a script (see the sketch below). And if you
distribute these volumes over 5-10 file servers, the replication load
is spread over the network and finishes faster overall. Speed is a
question of design.
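A rough sketch of such a replication loop, following the day.NNN.MM
naming above; the second file server fs2.example.com holding the
read-only copies is made up:

  #!/bin/sh
  # add a read-only site for each of the 50 daily volumes and release them
  DAY=001
  for i in $(seq -w 1 50); do
      VOL=day.$DAY.$i
      vos addsite fs2.example.com /vicepa $VOL   # define the replica site
      vos release $VOL                           # push RW data to the RO copies
  done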
| 3) what's the recommended max size for a volume?
I once worked with 250 GB volumes, but replicating such big volumes is
painful. And there is a limit on the number of files per directory: at
most 64K files with names shorter than 16 characters.
From a mailing-list entry:
The directory structure contains 64K slots.
Filenames under 16 chars occupy 1 slot.
Filenames between 16 and 32 chars occupy 2 slots.
Filenames between 33 and 48 chars occupy 3 slots, and so on.
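If you want a rough idea of how many slots a directory already uses,
a quick shell estimate along the lines of the rule above should do
(the path is just an example):

  ls -A /afs/example.com/images/day.001 | awk '
      { n = length($0)
        if (n < 16)       s = 1
        else if (n <= 32) s = 2
        else              s = int((n + 15) / 16)
        slots += s }
      END { printf "%d of 65536 slots used\n", slots }'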
And please be aware that large-file support (files >2 GB) was only
introduced in the 1.3.7x series of OpenAFS.
| 4) These files will be served via Apache; is that an issue? (my
| understanding is it's not)
Since openafs.org itself is served from OpenAFS by Apache, I assume not.
| Hope someone can help shed some light on my subject. :)
As far as my knowledge goes, I hope that helps.
| Pucky Loucks
Cya
Lars
--
-------------------------------------------------------------
TU Graz, Institut für ComputerGraphik & Wissensvisualisierung
Tel.: +43 316 873-5405 E-Mail: l.schimmer@cgv.tugraz.at
PGP-Key-ID: 0xB87A0E03