[OpenAFS] Running OpenAFS in Vmware ESX host
Neulinger, Nathan
nneul@umr.edu
Fri, 19 Nov 2004 15:31:38 -0600
No, we'll possibly look at it down the road, but couldn't afford the
infrastructure required to support vMotion. (You have to have hugely
expensive SAN hardware underneath.) When ESX adds support for iSCSI
devices, we'll likely look more into vMotion.
We're also not using it for fileservers, just db servers and other
stuff. Our standard AFS server platform is a PIII or P4, with three 185G
drives on a 4-port 3ware card set up as mirror and hot spare.
Our environment is split between two buildings, and most of our VMware
space is set up with paired services where we have a server in each
building, using Linux Virtual Server with keepalived for load balancing
and failover between the two VMs.
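
In case it helps anyone picture the setup, here's a rough sketch of what
the keepalived side of one of those pairs might look like. This is not our
actual config -- the interface name, addresses, and port below are just
placeholders -- but it shows the shape of it: a VRRP instance floats a
service address between the two buildings, and the virtual_server block
does the LVS balancing and health checks across the pair.

# /etc/keepalived/keepalived.conf (sketch only; addresses/ports are placeholders)
vrrp_instance PAIR_1 {
    state MASTER                  # the box in the other building runs BACKUP
    interface eth0
    virtual_router_id 51
    priority 100                  # backup peer uses a lower priority, e.g. 50
    advert_int 1
    virtual_ipaddress {
        10.0.0.100                # floating service address
    }
}

virtual_server 10.0.0.100 80 {
    delay_loop 6                  # health-check interval in seconds
    lb_algo rr                    # round-robin across the pair
    lb_kind DR                    # LVS direct routing
    protocol TCP
    real_server 10.0.0.11 80 {    # server in building A
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.0.0.12 80 {    # server in building B
        TCP_CHECK {
            connect_timeout 3
        }
    }
}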
-- Nathan
------------------------------------------------------------
Nathan Neulinger EMail: nneul@umr.edu
University of Missouri - Rolla Phone: (573) 341-6679
UMR Information Technology Fax: (573) 341-4216
> -----Original Message-----
> From: Matthew Cocker [mailto:matt@cs.auckland.ac.nz]
> Sent: Friday, November 19, 2004 2:40 PM
> To: Neulinger, Nathan
> Cc: openafs-info
> Subject: Re: [OpenAFS] Running OpenAFS in Vmware ESX host
>
> Neulinger, Nathan wrote:
>
> > It only obliterates it if you have ONE VMware system.
> >
> > We are doing this for our AFS database servers and have 10 DELL 2650's
> > virtualizing approximately 100 Linux and Windows servers, with plenty of
> > room to spare.
> >
> > -- Nathan
> >
> Are you using vMotion? ESX (if the VM images are stored on SANs) has the
> ability to migrate VMs from one VM server to another without taking the
> VM down. Also, if any ESX server goes down, the other can instantly bring
> up all the VMs it was running. Lastly, on one modern box our AFS servers
> are hardly using any CPU or memory, but when the fileserver crashes it
> takes out a lot of users and takes ages to come back up.
>
> Also, fibre channel switches are really expensive, so with ESX we can split
> the modern server into several virtual AFS fileserver instances with fewer
> users each (better salvage times), make better use of the hardware and SAN
> connections (of which we need fewer). All very promising. It also allows us
> to have identical dev/test/production environments and means that the
> dependence on a particular version of Linux to run a given HBA on given
> hardware is removed.
>
> If only it did not cost so much.
>
> Nathan, did you put vicepX on raw disk or on a VMFS shared LUN?
>
> Cheers
>
> Matt