[OpenAFS] AFS, IP-Filters, and NAT
Jeffrey Hutzelman
jhutz@cmu.edu
Mon, 9 Jul 2001 10:29:48 -0400 (EDT)
On Sun, 8 Jul 2001, Charles Clancy wrote:
> On 7 Jul 2001, Derek Atkins wrote:
>
> > Charles Clancy <mgrtcc@cs.rose-hulman.edu> writes:
> >
> > > I'm trying to configure a cluster of Sun machines behind a NAT
> > > proxy/firewall using ip-filters. They all run an AFS client that accesses
> > > an AFS server off the NAT.
> >
> > You do not need to map ports directly. Just change your UDP timeout
> > so that UDP ports originating inside the NAT from port 7001 have NO
> > timeout. That way the NAT box will always remember the mapping for
> > those ports. Alternatively, you can set the UDP timeout to something
> > large, like 30 minutes. That should assure that the mapping is kept
> > alive.
>
> I ended up using Win2K's NAT, and changing the UDP timeout from 1 minute
> to 1 hour. While this *seemed* to work, performance was pretty bad.
> When looking at the NAT mappings, Win2K would use the same source port
> multiple times, provided the destination IP and port were different, so it
> could still distinguish incoming packets. I'm not sure whether this could
> cause any problems with the AFS servers' (1 in-cell and 3 off-cell)
> caches of client IPs/ports.
>
> Has anyone gotten multiple AFS clients to work behind a NAT and achieved
> close to the performance of being connected to a routable subnet? If so,
> what NAT implementation were you using?
<soapbox>
NATs are the evil spawn of the devil, not so much because they make
hideous changes to your packets as because they grossly violate one of the
fundamental architectural principles of the Internet. They do this by
storing state within the network which cannot be recovered if the NAT
crashes or chooses to throw the state away too soon. This results in idle
TCP connections being broken, AFS performing badly, and a number of other
applications breaking entirely, as you've noticed with NIS.
</soapbox>
That said...
For the last couple of days, I've been in the process of setting up my
network at home. This consists of a bunch of machines behind a router,
which is a RedHat 6.2 box with a Linux 2.2.19 kernel. Until the VPN is
fully configured, I've been using NAT functionality (IP masquerading) to
hide my machines behind the one address I get from my DSL provider.
I can't speak much for performance, since being behind a DSL link tends to
overwhelm any other bottlenecks. But AFS doesn't seem any worse from an
inside machine than from the router machine itself.
For those of you forced to run AFS behind a NAT, I do have a few tips:
First, enable loose UDP destination matching. This means the NAT will use
the same source port for all UDP traffic originating from a given internal
host and port, which means all your cache manager traffic will appear on
the same port. This isn't essential, but it is a good idea. Besides
cutting down on the number of ports being allocated, it will also reduce
the chances of an association timing out, and make things work if you talk
to a multi-homed fileserver that replies from a different address than the
one you contacted.
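To make that concrete (the addresses and ports below are made up), loose
matching means the NAT keeps a single mapping per inside host/port:

    inside 10.0.0.5:7001  ->  outside 192.0.2.1:61000   (all fileservers)

rather than allocating a fresh external port for each destination:

    inside 10.0.0.5:7001  ->  outside 192.0.2.1:61000   (fileserver A)
    inside 10.0.0.5:7001  ->  outside 192.0.2.1:61001   (fileserver B)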
On Linux 2.2.16 and newer kernels, this can be done by writing "2" into
/proc/sys/net/ipv4/ip_masq_udp_dloose. Of course, this must be done on
every boot; on RedHat, this is done by editing /etc/sysctl.conf and adding
the following line:
net.ipv4.ip_masq_udp_dloose = 2
This option apparently appeared around 2.2.16; kernels before that had a
compile-time configuration option and/or did the right thing by default.
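If you don't want to wait for a reboot, you can poke the value in by hand
as root; something like this should do it on any kernel that has the
option:

    echo 2 > /proc/sys/net/ipv4/ip_masq_udp_dloose
    # or, equivalently, via sysctl:
    sysctl -w net.ipv4.ip_masq_udp_dloose=2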
Second, make sure the timeout for UDP associations is long enough. I've
been using 15 minutes, but really, anything just over 5 minutes should
be sufficient. The cache manager generates calls to all known fileservers
every 5 minutes to keep track of whether they are up. This should
generate more than enough traffic to keep the NAT box happy.
On 2.2 systems with the ipchains model, this is done by issuing a command
like 'ipchains -M -S 0 0 900'. The 900 is the UDP timeout in seconds (15
minutes); the two zeros leave the TCP and TCP FIN timeouts unchanged.
Note that again, this change must be made on every boot, and it
will _not_ be saved by the usual ipchains-save tool. On systems using
ipchains-save/ipchains-restore (such as RedHat), you can make the right
thing happen by adding the following line to /etc/sysconfig/ipchains:
-M -S 0 0 900
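As a sanity check, something like

    ipchains -M -L -n

should list the current masquerading table (-n suppresses name lookups),
so you can watch the associations for port 7001 and make sure they aren't
expiring out from under you.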
-- Jeffrey T. Hutzelman (N3NHS) <jhutz+@cmu.edu>
Sr. Research Systems Programmer
School of Computer Science - Research Computing Facility
Carnegie Mellon University - Pittsburgh, PA