OpenVPN Bridged Server Setup On Xen
DOM0
3 physical NW cards and several bridges for VLANs
FW
Running as a Xen guest (DomU), several NW cards
VPN Server
Running as a Xen guest (DomU), two network cards
XEN Setup
There is a new script for Xen networking (network-bridge-custom.sh) which:
# VLAN Bridges
# Invoke network-bridge-vlan to create additional bridges, each in a
# named VLAN.
# The best thing about them is that they are transparent to the VMs
# that reside on those bridges. The VMs don't know about VLANs, and any
# change in the network architecture can be transparent to the
# virtual machines.
#
# Tagged traffic received on the physical netdev will be sent to
# the appropriate bridge.
# Untagged traffic received on a bridge will leave the bridge
# tagged.
#
#Usr Servers
"$dir/network-bridge-vlan" "$@" vlan=200 netdev=peth1 bridge=vbr200
#Wifi guests
"$dir/network-bridge-vlan" "$@" vlan=300 netdev=peth1 bridge=vbr300
#IT Team
"$dir/network-bridge-vlan" "$@" vlan=800 netdev=peth1 bridge=vbr800
#External GW
"$dir/network-bridge-vlan" "$@" vlan=10 netdev=peth0 bridge=vbr010
#VPN Bridge for VPN Bridged NW Card
"$dir/network-bridge-vlan" "$@" vlan=1000 netdev=peth1 bridge=vpnbr
This is the network-bridge-vlan.sh script that is referenced above:
#!/bin/sh
#============================================================================
# Xen vlan bridge start/stop script.
# Xend calls a network script when it starts.
# The script name to use is defined in /etc/xen/xend-config.sxp
# in the network-script field.
#
# This script creates a bridge (default vlanbr${vlan}), creates a device
# (default eth0.${vlan}), and adds it to the bridge. This script assumes
# that Dom0 does not have an active interface on the selected VLAN; if
# it does, the network-bridge script should be used instead.
#
# To use this script, vconfig must be installed.
#
# Usage:
#
# network-bridge-vlan (start|stop|status) {VAR=VAL}*
#
# Vars:
#
# vlan The vlan to bridge (default 2)
# bridge The bridge to use (default vlanbr${vlan}).
# netdev The interface to add to the bridge (default eth0).
#
# Internal Vars:
# vlandev="${netdev}.${vlan}"
#
# start:
# Creates the bridge
# Adds vlandev to netdev
# Enslaves vlandev to bridge
#
# stop:
# Removes vlandev from the bridge
# Removes vlandev from netdev
# Deletes bridge
#
# status:
# Print vlan, bridge
#
#============================================================================
dir=$(dirname "$0")
. "$dir/xen-script-common.sh"
. "$dir/xen-network-common.sh" # provides create_bridge
findCommand "$@"
evalVariables "$@"
vlan=${vlan:-2}
bridge=${bridge:-vlanbr${vlan}}
netdev=${netdev:-eth0}
vlandev="${netdev}.${vlan}"
##
# link_exists interface
#
# Returns 0 if the interface named exists (whether up or down), 1 otherwise.
#
link_exists()
{
if ip link show "$1" >/dev/null 2>/dev/null
then
return 0
else
return 1
fi
}
show_status () {
    local vlandev=$1
    local bridge=$2

    echo '============================================================'
    cat /proc/net/vlan/${vlandev}
    echo ' '
    brctl show ${bridge}
    echo '============================================================'
}

# adds $dev to $bridge but waits for $dev to be in running state first
add_to_bridge2() {
    local bridge=$1
    local dev=$2
    local maxtries=10

    for i in $(seq ${maxtries}); do
        if ip link show ${dev} | grep -q RUNNING; then
            break
        fi
        sleep 1
    done
    brctl addif ${bridge} ${dev}
}

op_start () {
    if [ "${bridge}" = "null" ] ; then
        return
    fi

    create_bridge ${bridge}

    # Create the tagged VLAN device on the physical interface
    # and enslave it to the bridge
    vconfig add ${netdev} ${vlan}
    ip link set ${vlandev} up
    add_to_bridge2 ${bridge} ${vlandev}
    ip link set ${bridge} up
}

op_stop () {
    if [ "${bridge}" = "null" ]; then
        return
    fi
    if ! link_exists "$bridge"; then
        return
    fi

    # Remove the VLAN device from the bridge, then delete both
    ip link set ${vlandev} down
    brctl delif ${bridge} ${vlandev}
    vconfig rem ${vlandev}
    ip link set ${bridge} down
    brctl delbr ${bridge}
}

case "$command" in
    start)
        op_start
        ;;
    stop)
        op_stop
        ;;
    status)
        show_status ${vlandev} ${bridge}
        ;;
    *)
        echo "Unknown command: $command" >&2
        echo 'Valid commands are: start, stop, status' >&2
        exit 1
        ;;
esac
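The VAR=VAL argument handling above comes from xen-script-common.sh's evalVariables helper. As a rough stand-alone sketch of the same pattern (the function name here is ours, not the one from the Xen scripts):

```shell
#!/bin/sh
# Minimal sketch of the VAR=VAL argument handling the Xen helper
# scripts perform: each name=value argument becomes a shell variable.
eval_variables() {
    for arg in "$@"; do
        case "$arg" in
            *=*) eval "$arg" ;;
        esac
    done
}

# Simulate: network-bridge-vlan start vlan=200 netdev=peth1
eval_variables vlan=200 netdev=peth1

# Same defaulting logic as the script above
vlan=${vlan:-2}
bridge=${bridge:-vlanbr${vlan}}
netdev=${netdev:-eth0}
vlandev="${netdev}.${vlan}"

echo "$vlandev" # prints peth1.200
```

So a call like `network-bridge-vlan start vlan=200 netdev=peth1 bridge=vbr200` ends up with vlandev=peth1.200 enslaved to vbr200.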
We assume that you already have a working base VM to clone.
Clone the base VM. Remember that virt-manager cannot clone VMs that are not active or that
lack an XML definition. Since Xen uses its own VM definition file format, first export it to
virt-manager's XML format:
[root@xxxxxx:~]$ virsh dumpxml CentOS_Base > CentOS_Base.xml
Verify the VM settings. Give it a single network card and attach it to the desired interface.
We give meaningful names to the network interfaces of the VMs, so a quick brctl show is
more BOFH-readable.
[root@xxxxxx:~]$ cat /etc/xen/VPN_Server
#VPN Access Point Config File
#ETH0 Attached to Internet via Proxy ARP behind Firewall
name = "VPN_Server"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=0,vncdisplay=0,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_VPN_Server,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-bridge,vifname=vpnext" ]
[root@xxxxxx:~]$
Notice that virbr0, the default QEMU bridge, is only used for traffic between guests on the same host.
Access and set up your VPN_Server according to your preferences (hostname, security
rules, and so on). Remember, you are still behind your firewall.
PROXY ARP
Now it's time to set up the VPN_Server VM as if it were not behind the firewall, but parallel to it.
We should have a pool of public IP addresses from our ISP, or the setup makes no sense.
If you don't have a pool of public IP addresses, just DNAT the OpenVPN port to the internal
IP, and forget about proxy ARP.
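With Shorewall, that DNAT alternative is a single rule. A minimal sketch, assuming the OpenVPN server listens on tcp/443 (as in the server.conf used here) and a hypothetical internal address of 192.168.1.10:

```
# /etc/shorewall/rules (192.168.1.10 is a hypothetical internal address)
#ACTION  SOURCE  DEST              PROTO  DEST PORT(S)
DNAT     net     loc:192.168.1.10  tcp    443
```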
What's elegant about this solution is that you can move the virtual machine to another location
almost without tweaking. It's set up so that it appears to be on the Internet. If you move it to your
DMZ, just set up a firewall on it.
- You'll have 6 public IP addresses, one of them being the ISP router:
o Network: 1.1.1.0/29
o Broadcast: 1.1.1.7
o ISP_Router: 1.1.1.1
o FW Public: 1.1.1.2 (FW does MASQ on this interface)
- Configure the VPN_Server public interface with:
o IPADDR: 1.1.1.3
o Netmask: 255.255.255.248
o Network: 1.1.1.0
o GW: 1.1.1.1
o Broadcast: 1.1.1.7
If we are connected via the virtual serial console from DOM0, there is no risk of losing the
connection:
[root@xxxxxx:~]$ service network restart
Check that everything is working: you can ping your router (1.1.1.1) and browse the web.
We assume that you already had net->loc DROP rules on the firewall, so the VPN_Server is safe
behind it.
If you can't get proxy ARP to work, don't worry, ask Google.
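With Shorewall, proxy ARP is typically configured in /etc/shorewall/proxyarp. A minimal sketch for the address used here (the interface names are assumptions: eth0 facing the ISP, eth1 facing the VPN_Server):

```
# /etc/shorewall/proxyarp (interface names are assumptions)
#ADDRESS  INTERFACE  EXTERNAL  HAVEROUTE
1.1.1.3   eth1       eth0      No
```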
Your loc rules should also apply to your new proxy-ARPed machine, so if you had several
zones (loc and loc:usr) in /etc/shorewall/zones and the proper setup, you'll be able to access
intrazone according to your policies (/etc/shorewall/policy), i.e.:
usr srv ACCEPT info
srv usr ACCEPT
loc net ACCEPT
$FW net ACCEPT
...
all all DROP info
We are using 2.2.1.0/24 as the srv network, 2.2.2.0/24 as the usr network, 2.2.3.0/24 as VPN full
access and 2.2.4.0/24 as VPN restricted access.
- Create, on DOM0, a new virtual bridge (preferably on a VLAN, so we can add more VPN
servers in the future) (see network-bridge-custom.sh)
- Add a network card to VPN_Server and attach it to the new virtual bridge
- Add two network cards to the Firewall and attach them to the new virtual bridge
Note: the VLAN setup is transparent to the virtual machines. Only the bridge resides in the VLAN,
and all untagged traffic that comes from the virtual machines will leave the bridge via the
physical VLAN-tagged device, so in effect, it'll leave the bridge TAGGED. On the other hand,
anything you add to the same "wire" (maybe another physical machine, or a VM inside another
host) on that VLAN will be attached to the corresponding virtual bridge.
[root@xxxxxx:~]$ cat /etc/xen/VPN_Server
#VPN Access Point Config File
#ETH0 Attached to Internet via Proxy ARP behind Firewall
name = "VPN_Server"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=0,vncdisplay=0,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_VPN_Server,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-bridge,vifname=vpnext",
"mac=00:16:36:XX:XX:XX,bridge=vpnbr,script=vif-bridge,vifname=vpnint"
]
[root@xxxxxx:~]$
Notice the name of the bridge for the second network card, and the name of the interface.
Do the same for the firewall, adding two new network cards, both of them on the new bridge.
[root@xxxxxx:~]$ cat /etc/xen/Firewall
name = "Firewall"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "restart"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_Firewall,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=vbr010,script=vif-bridge,vifname=fweth0",
"mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-bridge,vifname=fweth1",
"mac=00:16:36:XX:XX:XX,bridge=xenbr2,script=vif-bridge,vifname=fweth2",
"mac=00:16:36:XX:XX:XX,bridge=vpnbr,script=vif-bridge,vifname=fweth3",
"mac=00:16:36:XX:XX:XX,bridge=vpnbr,script=vif-bridge,vifname=fweth4" ]
[root@xxxxxx:~]$
Now hot-attach the same network cards to the running firewall and VPN server:
[root@xxxxxx:~]$ xm network-attach Firewall mac=00:16:36:XX:XX:XX bridge=vpnbr vifname=fweth3
Log onto the firewall and set up the new network cards:
[root@xxxxxx:~]$ xm console Firewall
... setup network cards
(eth3 = 2.2.3.1 ; eth4 = 2.2.4.1)
/etc/shorewall/interfaces
vpntrusted eth3
vpnusr eth4
/etc/shorewall/policy
vpnusr loc DROP
vpnusr net REJECT
vpntrusted loc ACCEPT
vpntrusted net ACCEPT
/etc/shorewall/rules
ACCEPT vpnusr srv:2.2.1.2 tcp 80,443
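The two new zones must also be declared in /etc/shorewall/zones; a minimal sketch:

```
# /etc/shorewall/zones
#ZONE        TYPE
vpntrusted   ipv4
vpnusr       ipv4
```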
Log into the VPN_Server and set up the network bridge on the new eth1 card:
[root@xxxxxx:~]$ yum install bridge-utils
…
[root@xxxxxx:~]$ cat /etc/sysconfig/network-scripts/ifcfg-br0
# Bridge
DEVICE=br0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:36:XX:XX:XX
TYPE=Bridge
We see that the bridge is created and eth1 is attached to it.
Notice there is no IP address on the card. It's bridged to the DOM0 VPN VLAN. It's the firewall /
GW's responsibility to manage the incoming packets.
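For eth1 to be enslaved to br0 at boot, its ifcfg file needs a BRIDGE= line; a minimal sketch:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br0
```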
Now we set up OpenVPN, attach the tap device to the bridge, and assign different
users to different network ranges so they can contact the two different subnets at the FW.
We could choose several setups: different tap devices for several OpenVPN instances, achieving
multiple server-bridge setups, with two or more CAs and so on; but we prefer to keep only one CA,
one OpenVPN server and one default pool.
If the CN of the client falls into our ccd directory, we assign him a different GW and
IP/network/netmask so he falls into another subnet at the firewall.
OpenVPN server.conf
[root@xxxxxxxxxxxx:~]$ grep -Ev "^#|^$|^;" /etc/openvpn/server.conf
local 1.1.1.3
port 443
proto tcp
dev tap0
ca ca.crt
cert server.crt
key server.key # This file should be kept secret
dh dh1024.pem
ifconfig-pool-persist ipp.txt
server-bridge 2.2.3.1 255.255.255.0 2.2.3.32 2.2.3.191
push "route 2.2.0.0 255.255.0.0"
client-config-dir ccd
push "dhcp-option DNS 2.2.1.x"
push "dhcp-option NTP 2.2.1.x"
push "dhcp-option WINS 2.2.1.x"
push "dhcp-option DOMAIN example.com"
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 3
up /etc/openvpn/bridge-start
crl-verify crl.pem
First, local is the public IP address. It's proxy-ARPed by the FW, and FW rules apply to both
incoming and outgoing connections.
Second, it's a TAP tunnel on the SSL port (443), avoiding the blocked ports found on some
public networks.
Finally, we make sure the bridge-start script is called when the tap0 interface comes up (on
recent OpenVPN versions you may also need script-security 2 for external scripts to run).
This script just adds the interface to the bridge:
#!/bin/bash
#################################
# Set up Ethernet bridge on Linux
# Requires: bridge-utils
# br already setup
#################################
# OpenVPN exports the device name in $dev when calling the up script
br="br0"
tap="${dev:-tap0}"

/sbin/ip link set "$tap" up promisc on
/usr/sbin/brctl addif "$br" "$tap"
For untrusted clients (or those needing special access) we put their data in the ccd directory
(with the setup above, /etc/openvpn/ccd).
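As a sketch, for a client whose certificate CN is untrusted-client (a hypothetical name), the ccd entry could look like:

```
# /etc/openvpn/ccd/untrusted-client -- filename matches the certificate CN
ifconfig-push 2.2.4.2 255.255.255.0
push "route-gateway 2.2.4.1"
push "route 2.2.1.15 255.255.255.255"
```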
Then, log in to the firewall and add a rule for that client to allow SSH to that server:
/etc/shorewall/rules
ACCEPT vpnusr:2.2.4.2 srv:2.2.1.15 tcp 22
He will be assigned the 2.2.4.2 IP address, his default GW will be 2.2.4.1, and a route to 2.2.1.15 will
be added to his routing table.
He won't have access to the 2.2.1.0/24 network, just to one host. If you want to allow him the full srv
network, just add:
push "route 2.2.1.0 255.255.255.0"
Adding a route makes a network visible to the client, but firewall rules still apply. Update the
rules if you want to allow or disallow services; for example, to give him full srv
network access:
/etc/shorewall/rules
ACCEPT vpnusr:2.2.4.2 srv