[OpenAFS] Re: OpenAFS-info digest, Vol 1 #4842 - 12 msgs

J skyliner306@yahoo.com
Mon, 22 Mar 2010 13:09:47 -0700 (PDT)


I appreciate the feedback on the other filesystems available.

Hypothetically, if my choices were (1) ext2 with routine fsck vs. (2) ext3
with no fsck, is one better than the other?  I know "better" is a loaded
word, so how about "safer" in terms of data preservation and recovery,
ignoring other factors like speed/performance.  I'm also not concerned with
downtime, since this is purely a test environment.  Is journaling designed
to reduce the need to run fsck?  In my case it seems like running it at all
on ext3 isn't an option, but perhaps I just need to familiarize myself
further with the program's options.
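
As far as I understand it, the journal only replaces the check that would
otherwise be forced after an unclean shutdown; a full offline check is still
possible on ext3.  On a scratch box, something along these lines should work
(the device and mount point are just placeholders for my test setup):

    # force a full check even though the journal says the fs is clean
    umount /vicepa
    fsck.ext3 -f /dev/sdb1

    # disable, or re-enable, the periodic mount-count/interval checks
    tune2fs -c 0 -i 0 /dev/sdb1
    tune2fs -c 30 -i 90d /dev/sdb1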

Note: I'm nowhere close to 24GB of memory on this... er, system.


--- On Mon, 3/22/10, openafs-info-request@openafs.org <openafs-info-request@openafs.org> wrote:

> From: openafs-info-request@openafs.org <openafs-info-request@openafs.org>
> Subject: OpenAFS-info digest, Vol 1 #4842 - 12 msgs
> To: openafs-info@openafs.org
> Date: Monday, March 22, 2010, 12:01 PM
>
> Today's Topics:
>
>    1. Re: Re: about failover - 2 servers (one "master" one replicas)
>       - a bit long (Vladimir Konrad)
>    2. Re: about failover - 2 servers (one "master" one replicas)
>       - a bit long (Harald Barth)
>    3. Re: Filesystem Types & FSCK (Harald Barth)
>    4. Re: Re: about failover - 2 servers (one "master" one replicas)
>       - a bit long (Harald Barth)
>    5. Re: Filesystem Types & FSCK (Dirk Heinrichs)
>    6. Re: about failover - 2 servers (one "master" one replicas)
>       - a bit long (Andrew Deason)
>    7. Re: Filesystem Types & FSCK (Lars Schimmer)
>    8. Re: Filesystem Types & FSCK (Chaz Chandler)
>
> --__--__--
>
> Message: 1
> Date: Mon, 22 Mar 2010 15:00:55 +0000
> From: Vladimir Konrad <v.konrad@lse.ac.uk>
> To: openafs-info@openafs.org
> Organization: lse
> Subject: Re: [OpenAFS] Re: about failover - 2 servers (one "master" one
>  replicas) - a bit long
>
>
> Hello Andrew,
>
> > > Cheers, I forgot to say _by hand_.
> > You can do this with 'vos convertROtoRW', but it's intended to be more
> > of a tool for disaster recovery (when you've permanently lost the RW,
> > and all you have are ROs). Not generally for keeping up availability
> > while a server is temporarily down.
>
> > Note that if A goes down, you convertROtoRW on B, and A comes back up,
> > you'll now have 2 copies of the RW. The one on B will be the one used,
> > but A has another copy that may contain data you want. This can get
> > rather confusing if you try to sync the VLDB with the list of volumes
> > that are on each server.
>
> Thank you, good to know this; it would be used as a last resort.
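
(Noting the syntax down for myself, in case I ever need it: if I'm reading
the vos documentation right, the disaster-recovery promotion Andrew describes
would look roughly like this, with the server, partition, and volume names
being placeholders:)

    # only if the original RW site is permanently gone
    vos convertROtoRW -server serverB -partition /vicepa -id myvolume
    vos examine myvolume    # sanity-check what the VLDB now thinks
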
>
> > Automatic failover has been done using multiple servers sharing the same
> > backend storage; I don't think anyone's done it with separate storage,
> > but we're not stopping you from doing so. You could in theory do
> > something like that with some other HA software, and writing some
> > scripts to issue 'vos' commands to do the conversions.
>
> Cheers, it is quite possible some servers would get hooked into a SAN,
> so it is an option.
>
> > But it's usually a lot easier if you can just treat RO volumes as
> > high-availability, and RW volumes not.
>
> Makes sense; it looks like having multiple RW volumes would not scale that
> well - writes would have to go to each volume, + synchronisation would get
> messy, I guess...
>
> Thank you all, I have done the replicas.
>
> Do I understand it correctly (from observation) that a read-only replica
> placed on the same partition as the read-write volume does not "cost" much
> in terms of disc space? I have released a few replicas and the disc usage
> did not go up. Is it along the principle of LVM snapshots?
>
> Kind regards,
>
> Vladimir
>
> ------
> > because it reverses the logical flow of conversation + it is hard to follow.
> >> why not?
> >>> do not put a reply at the top of the message, please...
>
> --__--__--
>
> Message: 2
> Date: Mon, 22 Mar 2010 16:06:16 +0100 (CET)
> To: openafs-info@openafs.org
> From: Harald Barth <haba@kth.se>
> Subject: Re: [OpenAFS] about failover - 2 servers (one "master" one
>  replicas) - a bit long
>
>
> > OpenAFS is not designed for automatic failover.
>
> serverA volume.readonly -> serverB volume.readonly works automatically
>
> serverA volume.readonly -> serverB volume (readwrite) does _not_ fail
> over automatically
>
> Harald.
>
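
(For my own notes: as I understand it, the automatic part above comes from
having several read-only sites for the same volume, which is set up with the
ordinary vos commands; server, partition, and volume names here are
placeholders:)

    # one RW site on serverA, RO sites on both servers
    vos addsite serverA /vicepa myvolume
    vos addsite serverB /vicepa myvolume
    vos release myvolume

Clients then switch between the RO copies on their own, while the single RW
stays on serverA.
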
> --__--__--
>
> Message: 3
> Date: Mon, 22 Mar 2010 16:08:00 +0100 (CET)
> To: skyliner306@yahoo.com
> Cc: openafs-info@openafs.org
> From: Harald Barth <haba@kth.se>
> Subject: Re: [OpenAFS] Filesystem Types & FSCK
>
>
> I use xfs on Linux for /vicep*.
>
> Harald.
>
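
(If I try that on my test machine, I gather it is just the usual steps, with
the device and mount point as placeholders:)

    mkfs.xfs /dev/sdb1
    mkdir -p /vicepb
    mount /dev/sdb1 /vicepb    # plus a matching /etc/fstab entry for boot
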
> --__--__--
>
> Message: 4
> Date: Mon, 22 Mar 2010 16:10:17 +0100 (CET)
> To: v.konrad@lse.ac.uk
> Cc: openafs-info@openafs.org
> From: Harald Barth <haba@kth.se>
> Subject: Re: [OpenAFS] Re: about failover - 2 servers (one "master" one
>  replicas) - a bit long
>
>
> > I have released a few replicas and the disc usage did not go up.
>
> Space is shared unless you change the RW so it differs from the RO.
> After the next vos release, it will be shared again.
>
> > Is it along the principle of LVM snapshots?
>
> Kindasorta.
>
> Harald.
>
> --__--__--
>
> Message: 5
> To: openafs-info@openafs.org
> Date: Mon, 22 Mar 2010 16:33:18 +0100
> From: "Dirk Heinrichs" <dirk.heinrichs@online.de>
> Organization: Privat
> Subject: Re: [OpenAFS] Filesystem Types & FSCK
>
> On Monday 22 March 2010 15:57:23, J wrote:
>
> > So I'm wondering whether you have any advice or comments about any of this.
>
> You could use XFS; it doesn't even have a real fsck (the one it ships is a
> dummy, to make distributions' boot scripts happy).
>
> Bye...
>
>     Dirk
>
> --__--__--
>
> Message: 6
> To: openafs-info@openafs.org
> From: Andrew Deason <adeason@sinenomine.net>
> Date: Mon, 22 Mar 2010 10:45:21 -0500
> Organization: Sine Nomine Associates
> Subject: [OpenAFS] Re: about failover - 2 servers (one "master" one
>  replicas) - a bit long
>
> On Mon, 22 Mar 2010 15:00:55 +0000 Vladimir Konrad <v.konrad@lse.ac.uk>
> wrote:
>
> > > But it's usually a lot easier if you can just treat RO volumes as
> > > high-availability, and RW volumes not.
> >
> > Makes sense; it looks like having multiple RW volumes would not scale
> > that well - writes would have to go to each volume, + synchronisation
> > would get messy, I guess...
>
> I think the hardest part is conflict resolution, but I'm not too
> familiar with it. Coda is able to do RW replication, but as I recall it
> can require manual conflict resolution (2 writes happened at the same
> time, and you must manually specify which one wins).
>
> I believe there have been at least one or two attempts to do this
> in-band in AFS (you can read about one proposed way of doing it at
> <http://www.student.nada.kth.se/~noora/exjobb/filer.html>). But nobody's
> been able to do it yet; it is a hard problem to solve. It's also one of
> the suggested OpenAFS GSOC projects: <http://www.openafs.org/gsoc.html>.
>
> > Do I understand it correctly (from observation) that a read-only replica
> > placed on the same partition as the read-write volume does not "cost"
> > much in terms of disc space?
>
> Yes, as long as your RW does not differ much from your RO. That is one
> reason why it's almost always a good idea to have an RO on the same
> server/partition as the RW, if you have any ROs for that RW.
>
> > I have released a few replicas and the disc usage did not go up. Is it
> > along the principle of LVM snapshots?
>
> Sort of, but arguably not as good. With LVM snapshots and similar
> systems, you get charged space for each block that is changed. With
> OpenAFS volume clones, you get charged for each file (vnode) that is
> changed.
>
> --
> Andrew Deason
> adeason@sinenomine.net
>
>
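
(A rough example of that difference, as I understand it, with made-up
numbers: suppose I append 1 KB to a single 1 GB file in the RW volume.)

    # LVM snapshot:  only the changed blocks get copied   -> a few KB extra
    # OpenAFS clone: the whole changed file is duplicated -> about 1 GB extra
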
> --__--__--
>
> Message: 7
> Date: Mon, 22 Mar 2010 16:46:12 +0100
> From: Lars Schimmer <l.schimmer@cgv.tugraz.at>
> Cc: openafs-info@openafs.org
> Subject: Re: [OpenAFS] Filesystem Types & FSCK
>
> Dirk Heinrichs wrote:
> > On Monday 22 March 2010 15:57:23, J wrote:
> >
> >> So I'm wondering whether you have any advice or comments about any of this.
> >
> > You could use XFS; it doesn't even have a real fsck (the one it ships is
> > a dummy, to make distributions' boot scripts happy).
>
> XFS has got xfs_check and xfs_repair.
> BUT if you have lots of files, xfs_check needs a HUGE amount of memory to
> run. Even with 24GB of memory my 2TB data directory (non-OpenAFS) threw
> an out-of-memory error on xfs_check.
>
> > Bye...
> >
> >     Dirk
>
>
> Regards,
> Lars Schimmer
> --
> -------------------------------------------------------------
> TU Graz, Institut für ComputerGraphik & WissensVisualisierung
> Tel: +43 316 873-5405       E-Mail: l.schimmer@cgv.tugraz.at
> Fax: +43 316 873-5402       PGP-Key-ID: 0x4A9B1723
>
> --__--__--
>
> Message: 8
> Date: Mon, 22 Mar 2010 11:57:02 -0400
> From: Chaz Chandler <clc31@inbox.com>
> To: openafs-info@openafs.org
> Subject: Re: [OpenAFS] Filesystem Types & FSCK
>
> >
> > XFS has got xfs_check and xfs_repair.
> > BUT if you have lots of files, xfs_check needs a HUGE amount of memory
> > to run. Even with 24GB of memory my 2TB data directory (non-OpenAFS)
> > threw an out-of-memory error on xfs_check.
> >
>
> True, but xfs_check != fsck.xfs, which is what would be run at boot.
> xfs_check doesn't need to be run much unless you suspect a problem.
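
(Useful to know. So on my test setup, if I ever do suspect a problem with an
XFS /vicep partition, I gather the offline check would look something like
this, with the device name as a placeholder:)

    umount /vicepb
    xfs_repair -n /dev/sdb1    # -n: check only, report, change nothing
    xfs_repair /dev/sdb1       # actually repair, only if needed
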
>
> --__--__--
>
> End of OpenAFS-info Digest