Thursday, October 28, 2010

Setting up bridging and vlans in Fedora 13

In the first half of this, I talked about how Fedora's NetworkManager interferes with complex configurations, and discussed how to disable it. Now I'll show you how to define a network that consists of:
  • Two Ethernet ports, bridged:
    • eth0 - to public network
    • eth1 - to internal network transparently bridged to public network
  • VLAN 4000 on internal network, *NOT* bridged to public network
  • Bridge to VLAN 4000 for my KVM virtual machines to connect to, *NOT* bridged to public network.
Now, to do all of this we take advantage of an interesting feature of the Linux networking stack -- bridges aren't "really" bridges. If a packet arrives at the physical eth1 hardware, it gets dispatched to either eth1.4000 or eth1 based upon the VLAN tag. Only those packets that are dispatched to eth1 actually make it onto the bridge to eth0. In other words, the Linux bridging code is a *logical* bridge, not a *physical* bridge: it is not the physical ports that are connected, it is the logical Ethernet devices inside the Linux networking stack. eth1.4000 and eth1 just happen to share the same physical port, but otherwise they are logically distinct, with each incoming frame dispatched to one or the other based upon its VLAN header (or lack thereof).
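You can see this dispatching in action by building the topology by hand with the classic vconfig and brctl tools. This is an illustrative one-shot sketch (run as root on a box that actually has these two NICs; it doesn't survive a reboot -- boot-time configuration belongs in /etc/sysconfig/network-scripts):

```shell
# Create the VLAN sub-interface: frames tagged VLAN 4000 arriving
# on eth1 are dispatched to eth1.4000; untagged frames stay on eth1.
vconfig add eth1 4000

brctl addbr br0              # public bridge
brctl addif br0 eth0
brctl addif br0 eth1         # only *untagged* eth1 frames reach br0

brctl addbr br1              # internal VM bridge
brctl addif br1 eth1.4000    # only VLAN-4000 frames reach br1

# Bring everything up; the 192.168.22.1/24 address is an example
# from the internal range used here.
ifconfig eth0 up
ifconfig eth1 up
ifconfig eth1.4000 up
ifconfig br0 up
ifconfig br1 192.168.22.1 netmask 255.255.255.0 up
```

`brctl show` afterward confirms that br0 and br1 each enslave logical devices, not physical ports -- eth1 and eth1.4000 sit on different bridges despite being the same piece of copper.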

So, here we go:
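These files live in /etc/sysconfig/network-scripts/. The layout matches the description above; the 192.168.22.1 address and netmask on br1 are assumptions (pick whatever suits your internal range), and br0 is shown getting its public address via DHCP purely for illustration:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- public port, enslaved to br0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- internal port; untagged traffic to br0
DEVICE=eth1
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-eth1.4000 -- VLAN 4000, enslaved to br1
DEVICE=eth1.4000
VLAN=yes
ONBOOT=yes
BRIDGE=br1
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-br0 -- public bridge (eth0 + eth1)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-br1 -- internal VM bridge (eth1.4000 only)
DEVICE=br1
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.22.1
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
```

A `service network restart` (or an `ifup` of each device, bridges last) brings the whole stack up.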

And there you are. Two bridges: br1 has a single port (eth1.4000) for virtual machines to attach to, while br0 bridges eth0 and eth1 so that the machine plugged into eth1 on this cluster can make it out to the outside world, and the VMs on both systems can communicate with each other (via the bridge attached to eth1.4000 on the 192.168.22.x network). Internal VM cluster communications stay internal, either within br1 or on VLAN 4000, which never gets routed to the outside world (we'd need an eth0.4000 to bridge it to the outside world -- but we're not going to do that). This does introduce a single point of failure for the cluster -- the cluster controller -- but it's a manageable one: if we need to talk to the outside world after the cluster controller dies, we can simply plug the red interconnect cable into a smart switch that blocks VLAN 4000 rather than into the cluster controller.

Now, there are other things that can be done here. Perhaps br1 could be given an IP alias that migrates to another cluster controller if the current cluster controller goes down. Perhaps we have multiple gigabit Ethernet ports that we want to bond together into a high-speed bond device. There are all sorts of possibilities allowed by Red Hat's "old school" /etc/sysconfig/network-scripts system, and it won't stop you. The same, alas, cannot be said of the new "better" NetworkManager system, which would simply throw up its hands in disgust if you asked it to do anything more complicated than a single network port attached to a single network.
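As a taste of the bonding case: a sketch of the extra ifcfg files, assuming two hypothetical spare ports eth2 and eth3 (the names, and the 802.3ad mode, are placeholders -- use whatever bonding mode your switch supports):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregate device
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
# A bond can itself be enslaved to a bridge, just like a plain NIC:
# BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-eth2 -- first slave
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-eth3 -- second slave
DEVICE=eth3
MASTER=bond0
SLAVE=yes
ONBOOT=yes
NM_CONTROLLED=no
```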

