- Two Ethernet ports, bridged:
  - eth0 - to public network
  - eth1 - to internal network, transparently bridged to public network
- VLAN 4000 on internal network, *NOT* bridged to public network
- Bridge to VLAN 4000 for my KVM virtual machines to connect to, *NOT* bridged to public network.
So, here we go:
/etc/sysconfig/network-scripts/ifcfg-eth0:
# Physical port facing the public network; no IP address here, br0 carries it
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth1:
# Physical port facing the internal network, bridged to eth0 through br0
DEVICE=eth1
BRIDGE=br0
ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth1.4000:
# Tagged VLAN 4000 subinterface on eth1; a member of br1, never of br0
VLAN=yes
DEVICE=eth1.4000
BRIDGE=br1
ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-br0:
# Bridge spanning eth0 and eth1; takes its address from the public network via DHCP
DEVICE=br0
TYPE=Bridge
USERCTL=yes
ONBOOT=yes
BOOTPROTO=dhcp
/etc/sysconfig/network-scripts/ifcfg-br1:
# Internal-only bridge for VM traffic on VLAN 4000; static address on 192.168.22.x
DEVICE=br1
TYPE=Bridge
USERCTL=yes
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.22.2
NETMASK=255.255.255.0
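To bring all of this up on a pre-NetworkManager Red Hat or CentOS box, restart the network service and sanity-check the result. The usual checks look something like this (your output will differ, of course):
service network restart
brctl show                  # both bridges should list their member ports
cat /proc/net/vlan/config   # confirms eth1.4000 exists with VLAN ID 4000
ip addr show br0            # should hold the DHCP lease from the public network
ip addr show br1            # should hold 192.168.22.2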
And there you are. Two bridges: one (br1) with a single port, eth1.4000, for virtual machines to attach to, and one (br0) bridging eth0 and eth1, so that the machine plugged into eth1 on this cluster can reach the outside world through that bridge, while the VMs on both systems talk to each other via the bridge attached to eth1.4000 on the 192.168.22.x network. Internal VM cluster communications stay internal, either within br1 or on VLAN 4000, which never gets routed to the outside world (we'd need an eth0.4000 to bridge it out -- and we're not going to create one). This does introduce a single point of failure for the cluster -- the cluster controller -- but it's a manageable one: if we need to talk to the outside world after the cluster controller dies, we simply plug the red interconnect cable into a smart switch that blocks VLAN 4000 instead of into the cluster controller.
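For what it's worth, attaching a KVM guest to br1 is just a bridge-type interface in its libvirt domain XML, something along these lines (the MAC address here is a placeholder):
<interface type='bridge'>
  <source bridge='br1'/>
  <mac address='52:54:00:aa:bb:cc'/>
  <model type='virtio'/>
</interface>
virt-install's --network bridge=br1 option will generate much the same stanza for you.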
Now, there are other things that can be done here. Perhaps br1 could be given an IP alias that migrates to another cluster controller if the current one goes down. Perhaps we have multiple gigabit Ethernet ports that we want to bond together into a single high-speed bond device (see the sketch below). There are all sorts of possibilities allowed by Red Hat's "old school" /etc/sysconfig/network-scripts system, and it won't stop you. The same, alas, cannot be said of the new "better" NetworkManager system, which would simply throw up its hands in disgust if you asked it to do anything more complicated than a single network port attached to a single network.
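If you do go the bonding route, the same network-scripts system handles that too, with a couple more files. A minimal sketch, assuming two ports bonded with 802.3ad (the mode and miimon values are illustrative, and older releases may also want an "alias bond0 bonding" line in /etc/modprobe.conf):
/etc/sysconfig/network-scripts/ifcfg-bond0:
# The bond takes the physical port's place as a member of br0
DEVICE=bond0
BRIDGE=br0
BONDING_OPTS="mode=4 miimon=100"
ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth0:
# Each physical port becomes a slave of the bond; repeat for each port in the bond
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes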
-ELG