OpenVPN Bridged Server Setup On Xen


Environment:

DOM0
3 physical NW cards and several bridges for VLANs

FW
Running as a Xen guest (DomU), several NW cards.

Shorewall is used as an interface to IPTables

VPN Server
Running as a Xen guest (DomU), two network cards

XEN Setup
There is a new script for Xen networking (network-bridge-custom.sh) which:

- Creates 3 bridges (one per physical network), as default Xen networking does

- Creates an additional bridge per needed VLAN (usr, wifi, it, external network pool / router,
VPN)
#!/bin/sh
# Wrapper for several bridges in DOM0
# Invoked from xend-config.sxp
# If you don't know how to change it, better RTFM
# and come back when you grow up.

# Where are the other scripts located?
dir=$(dirname "$0")

# Standard bridges, one per network. Default virtual interface
# names apply.
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
"$dir/network-bridge" "$@" vifnum=2 netdev=eth2 bridge=xenbr2

# VLAN Bridges
# Invoke network-bridge-vlan to create additional bridges in
# the named VLANs.
# Best thing about them is that they are transparent to the VMs that
# reside on those bridges. The VMs don't know about VLANs, so any
# change in the network architecture can be transparent to the
# virtual machines.
#
# Tagged traffic received on the physical netdev will be sent to
# the appropriate bridge.
# Untagged traffic received on a bridge will leave the physical
# device tagged.
#

#Usr Servers
"$dir/network-bridge-vlan" "$@" vlan=200 netdev=peth1 bridge=vbr200
#Wifi guests
"$dir/network-bridge-vlan" "$@" vlan=300 netdev=peth1 bridge=vbr300
#IT Team
"$dir/network-bridge-vlan" "$@" vlan=800 netdev=peth1 bridge=vbr800
#External GW
"$dir/network-bridge-vlan" "$@" vlan=10 netdev=peth0 bridge=vbr010
#VPN Bridge for VPN Bridged NW Card
"$dir/network-bridge-vlan" "$@" vlan=1000 netdev=peth1 bridge=vpnbr
This is the network-bridge-vlan script that is referenced above:
#!/bin/sh
#============================================================================
# Xen vlan bridge start/stop script.
# Xend calls a network script when it starts.
# The script name to use is defined in /etc/xen/xend-config.sxp
# in the network-script field.
#
# This script creates a bridge (default vlanbr${vlan}), creates a device
# (default eth0.${vlan}), and adds it to the bridge. This script assumes
# the Dom0 does not have an active interface on the selected vlan; if
# it does the network-bridge script should be used instead.
#
# To use this script, vconfig must be installed.
#
# Usage:
#
# network-bridge-vlan (start|stop|status) {VAR=VAL}*
#
# Vars:
#
# vlan The vlan to bridge (default 2)
# bridge The bridge to use (default vlanbr${vlan}).
# netdev The interface to add to the bridge (default eth0).
#
# Internal Vars:
# vlandev="${netdev}.${vlan}"
#
# start:
# Creates the bridge
# Adds vlandev to netdev
# Enslaves vlandev to bridge
#
# stop:
# Removes vlandev from the bridge
# Removes vlandev from netdev
# Deletes bridge
#
# status:
# Print vlan, bridge
#
#============================================================================

dir=$(dirname "$0")
. "$dir/xen-script-common.sh"

findCommand "$@"
evalVariables "$@"

vlan=${vlan:-2}
bridge=${bridge:-vlanbr${vlan}}
netdev=${netdev:-eth0}

vlandev="${netdev}.${vlan}"

##
# link_exists interface
#
# Returns 0 if the interface named exists (whether up or down), 1 otherwise.
#
link_exists()
{
    if ip link show "$1" >/dev/null 2>/dev/null
    then
        return 0
    else
        return 1
    fi
}

# Usage: create_bridge bridge
create_bridge () {
    local bridge=$1

    # Don't create the bridge if it already exists.
    if ! brctl show | grep -q ${bridge} ; then
        brctl addbr ${bridge}
        brctl stp ${bridge} off
        brctl setfd ${bridge} 0
    fi
    ip link set ${bridge} up
}

# Usage: add_to_bridge bridge dev
add_to_bridge () {
    local bridge=$1
    local dev=$2

    # Don't add $dev to $bridge if it's already on a bridge.
    if ! brctl show | grep -q ${dev} ; then
        brctl addif ${bridge} ${dev}
    fi
}

# Usage: show_status vlandev bridge
# Print vlan and bridge
show_status () {
    local vlandev=$1
    local bridge=$2

    echo '============================================================'
    cat /proc/net/vlan/${vlandev}
    echo ' '
    brctl show ${bridge}
    echo '============================================================'
}

op_start () {
    if [ "${bridge}" = "null" ] ; then
        return
    fi

    if ! link_exists "$netdev"; then
        return
    fi

    if link_exists "$vlandev"; then
        # The device is already up.
        return
    fi

    create_bridge ${bridge}

    ip link set ${netdev} up

    vconfig set_name_type DEV_PLUS_VID_NO_PAD
    vconfig add ${netdev} ${vlan}
    ip link set ${vlandev} address fe:ff:ff:ff:ff:ff
    ip link set ${vlandev} up
    ip link set ${bridge} up

    add_to_bridge2 ${bridge} ${vlandev}
}

op_stop () {
    if [ "${bridge}" = "null" ]; then
        return
    fi
    if ! link_exists "$bridge"; then
        return
    fi

    if link_exists "$vlandev"; then
        ip link set ${vlandev} down
        brctl delif ${bridge} ${vlandev}
        ip link set ${bridge} down
        vconfig rem ${vlandev}
    fi
    brctl delbr ${bridge}
}

# Adds $dev to $bridge but waits for $dev to be in running state first
add_to_bridge2() {
    local bridge=$1
    local dev=$2
    local maxtries=10

    echo -n "Waiting for ${dev} to negotiate link."
    for i in `seq ${maxtries}` ; do
        if ifconfig ${dev} | grep -q RUNNING ; then
            break
        else
            echo -n '.'
            sleep 1
        fi
    done

    if [ ${i} -eq ${maxtries} ] ; then echo "(link isn't in running state)" ; fi

    add_to_bridge ${bridge} ${dev}
}

case "$command" in
start)
op_start
;;

stop)
op_stop
;;

status)
show_status ${vlandev} ${bridge}
;;

*)
echo "Unknown command: $command" >&2
echo 'Valid commands are: start, stop, status' >&2
exit 1
esac

We assume that:

- Xen is running. FW is set up and no VPN exists.
- We have a "clean" virtual machine as base for the architecture (Centos_Clone).
- Hard disk partitioning:
o Virtual machines are stored in LVM logical volumes.
o Virtual machines see their HDD as raw (no mounted filesystem).
o Each virtual machine creates its own filesystem at install time.
In the examples, the base virtual machine also uses LVM, so its disks can be
grown or shrunk on the fly without stopping the virtual machine.
- The virtual machines run with 128 MB, the minimum memory needed for CentOS 5 to
boot.
- Running CPU-heavy processes doesn't require giving more CPU to the VMs.
- Running RAM-hungry processes may be slow. If that's the case, increase the available
memory of the VM on the fly (remember to free up some memory from DOM0
FIRST!):
[root@xxxxxx:~]$ cat /etc/xen/Firewall
name = "Firewall"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "restart"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_Firewall,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=vbr010,script=vif-
bridge,vifname=fweth0","mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-
bridge,vifname=fweth1","mac=00:16:36:XX:XX:XX,bridge=xenbr2,script=vif-
bridge,vifname=fweth2" ]
[root@xxxxxx:~]$
[root@xxxxxx:~]$ xm list
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 1352 2 r----- 10856.6
Firewall 10 126 1 -b---- 1620.4

[root@xxxxxx:~]$ brctl show


bridge name bridge id STP enabled interfaces
vbr010 8000.feffffffffff no fweth0
peth0.10
vbr200 8000.feffffffffff no peth1.200
vbr300 8000.feffffffffff no peth1.300
vbr800 8000.feffffffffff no peth1.800
virbr0 8000.000000000000 yes
xenbr0 8000.feffffffffff no peth0
xenbr1 8000.feffffffffff no fweth1
peth1
xenbr2 8000.feffffffffff no fweth2
peth2

[root@xxxxxx:~]$ xm mem-set Domain-0 1224
[root@xxxxxx:~]$ xm mem-set Firewall 256

Creating the VPN_Server virtual machine

Create a new LV for the virtual machine:
[root@xxxxxx:~]$ lvcreate -n LV_VPN_Server -L 5G VG_Xen_VMs

Clone the base VM. Remember that virt-manager cannot clone VMs that are not active or
that have no XML definition. Since Xen uses its own VM definition file, first export the VM to the
virt-manager XML format:
[root@xxxxxx:~]$ virsh dumpxml CentOS_Base > CentOS_Base.xml

[root@xxxxxx:~]$ cat CentOS_Base.xml

[root@xxxxxx:~]$ virt-clone --original-xml ./CentOS_Base.xml -n VPN_Server -f /dev/VG_Xen_VMs/LV_VPN_Server --force

Verify the VM settings. Give it a single network card and attach it to the desired interface.

In this tutorial, it will be attached to the internal network bridge (xenbr1).

We're giving meaningful names to the network interfaces of the VMs so a quick brctl show is
more BOFH-readable.
[root@xxxxxx:~]$ cat /etc/xen/VPN_Server
#VPN Access Point Config File
#ETH0 Attached to Internet via Proxy ARP behind Firewall
name = "VPN_Server"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=0,vncdisplay=0,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_VPN_Server,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-bridge,vifname=vpnext" ]
[root@xxxxxx:~]$

Create the Virtual Machine and check the bridges:


[root@xxxxxx:~]$ xm create VPN_Server
[root@xxxxxx:~]$ xm list
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 1736 2 r----- 10973.3
Firewall 10 126 1 -b---- 1667.1
VPN_Server 14 127 1 -b---- 246.2

[root@xxxxxx:~]$ brctl show


bridge name bridge id STP enabled interfaces
vbr010 8000.feffffffffff no fweth0
peth0.10
vbr200 8000.feffffffffff no peth1.200
vbr300 8000.feffffffffff no peth1.300
vbr800 8000.feffffffffff no peth1.800
virbr0 8000.000000000000 yes
vpnbr 8000.feffffffffff no peth0.1000
xenbr0 8000.feffffffffff no peth0
xenbr1 8000.feffffffffff no vpnext
fweth1
peth1
xenbr2 8000.feffffffffff no fweth2
peth2
[root@xxxxxx:~]$

Notice virbr0, the default libvirt (QEMU) bridge, used only for access between guests on the same host.

Access and set up your VPN_Server according to your preferences (hostname, security
rules, and so on). Remember, you are still behind your firewall.

Remember, CTRL + ] to go back to the hypervisor.


[root@xxxxxx:~]$ xm console VPN_Server
...

PROXY ARP
Now it's time to set up the VPN_Server VM as if it were not behind the firewall, but in parallel with it.

We should have a pool of public IP addresses from our ISP, or the setup makes no sense.

If you don't have a pool of public IP addresses, just DNAT the OpenVPN port to the internal
IP and forget about Proxy ARP.
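A minimal sketch of that simpler alternative in Shorewall terms, assuming the VPN_Server keeps an internal address (2.2.1.50 here is purely illustrative) and OpenVPN listens on TCP 443 as configured later:

/etc/shorewall/rules
# Hypothetical internal address; forward the OpenVPN port to it
DNAT net loc:2.2.1.50:443 tcp 443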

What's elegant about this solution is that you can move the virtual machine to another location
almost without tweaking. It's set up so that it appears to be directly on the internet. If you move it to your
DMZ, just set up a firewall on it.

Assume your public network is 1.1.1.0/29:

- You'll have 6 usable public IP addresses, one of them being the ISP router.
o Network: 1.1.1.0/29
o Broadcast: 1.1.1.7
o ISP_Router: 1.1.1.1
o FW Public: 1.1.1.2 FW does MASQ on this interface

Set up your VPN_Server with:

- IPADDR: 1.1.1.3
- Netmask: 255.255.255.248
- Network: 1.1.1.0
- GW: 1.1.1.1
- Broadcast: 1.1.1.7
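On a CentOS guest this boils down to something like the following sketch (if you pin the MAC with a HWADDR line, it should match the one assigned in /etc/xen/VPN_Server):

[root@xxxxxx:~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Xen Virtual Ethernet - public, proxy-ARPed address
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
IPADDR=1.1.1.3
NETMASK=255.255.255.248
NETWORK=1.1.1.0
BROADCAST=1.1.1.7
GATEWAY=1.1.1.1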

Log into Firewall and check ProxyARP and MASQ rules:


[root@xxxxxx:~]$ cat /etc/shorewall/proxyarp
#
# Shorewall version 4 - Proxyarp File
#
# For information about entries in this file, type "man shorewall-proxyarp"
#
# See http://shorewall.net/ProxyARP.htm for additional information.
#
###############################################################################
#ADDRESS INTERFACE EXTERNAL HAVEROUTE PERSISTENT
1.1.1.3 eth1 eth0 no yes
#LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

[root@xxxxxx:~]$ cat /etc/shorewall/masq


#
# Shorewall version 4 - Masq file
#
# For information about entries in this file, type "man shorewall-masq"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-masq.html
#
###############################################################################
#INTERFACE SOURCE ADDRESS PROTO PORT(S) IPSEC MARK
eth0 eth1:!1.1.1.0/29
eth0 eth2

#LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE
[root@xxxxxx:~]$

Apply the changes to Shorewall:


[root@xxxxxx:~]$ shorewall safe-restart

Log out from the firewall and restart the network on VPN_Server.

If we are connected from the virtual serial console on DOM0 there is no risk of losing the
connection:
[root@xxxxxx:~]$ service network restart

Check everything is working: you can ping your router (1.1.1.1) and browse the web.

We assume that you already had net->loc DROP rules on the firewall, so the VPN_Server is safe
behind the firewall.

If you can't get Proxy ARP to work, don't worry, ask Google.
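Before asking, two quick checks on the firewall usually show what's wrong (a sketch; interface names as in the proxyarp file above). Shorewall should have enabled proxy_arp on the external interface and installed a host route to 1.1.1.3 via the internal one:

[root@xxxxxx:~]$ cat /proc/sys/net/ipv4/conf/eth0/proxy_arp
1
[root@xxxxxx:~]$ ip route show | grep 1.1.1.3
1.1.1.3 dev eth1 scope link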
Your loc rules should also apply to your new proxy-ARPed machine, so if you had several
zones (loc and loc:usr) in /etc/shorewall/zones and the proper setup, you'll be able to access
intrazone according to your policies (/etc/shorewall/policy), e.g.:
usr srv ACCEPT info
srv usr ACCEPT
loc net ACCEPT
$FW net ACCEPT
...
all all DROP info

BRIDGE FIREWALL AND VPN_SERVER

Now we're adding a new network card to the VPN_Server virtual machine and bridging
it to some new network cards on the firewall. We suppose we are running our LAN segment
2.2.0.0/16 grouped into some small 2.2.X.0/24 networks.

We are using 2.2.1.0/24 as srv network, 2.2.2.0/24 as usr network, 2.2.3.0/24 as VPN full
access and 2.2.4.0/24 as VPN restricted access.

We assume we have a web server on 2.2.1.2 for testing.

Steps to setup this one:

- Create, on DOM0, a new virtual bridge (preferably on a VLAN so we can add more VPN
servers in the future) (see network-bridge-custom.sh)
- Add a network card to VPN_Server and attach it into the new Virtual Bridge
- Add two network cards to Firewall and attach them into the new Virtual Bridge

Note: The VLAN setup is transparent to the virtual machines. Only the bridge resides in the VLAN,
and all untagged traffic that comes from the virtual machines will leave via the
physical VLAN-tagged device, so in effect, it'll leave the host TAGGED. OTOH, if you add another
machine to the same "wire" (maybe another physical machine or another VM inside another host) on that
VLAN, it'll be attached to the corresponding virtual bridge.
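If you want to see this from DOM0, a quick sanity check (a sketch, assuming tcpdump is installed in DOM0) is to watch the physical trunk for tagged frames of that VLAN while a VM on the bridge generates traffic:

[root@xxxxxx:~]$ tcpdump -e -n -i peth1 vlan 1000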
[root@xxxxxx:~]$ cat /etc/xen/VPN_Server
#VPN Access Point Config File
#ETH0 Attached to Internet via Proxy ARP behind Firewall
name = "VPN_Server"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=0,vncdisplay=0,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_VPN_Server,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-bridge,vifname=vpnext",
"mac=00:16:36:XX:XX:XX,bridge=vpnbr,script=vif-bridge,vifname=vpnint"
]
[root@xxxxxx:~]$

Notice the name of the bridge for the second network card, and the name of the interface.

Do the same for the firewall, adding two new network cards, both of them on the new bridge.
[root@xxxxxx:~]$ cat /etc/xen/Firewall
name = "Firewall"
uuid = "cwhateverc"
maxmem = 256
memory = 128
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "restart"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=es" ]
disk = [ "phy:/dev/VG_Xen_VMs/LV_Firewall,xvda,w" ]
vif = [ "mac=00:16:36:XX:XX:XX,bridge=vbr010,script=vif-bridge,vifname=fweth0",
"mac=00:16:36:XX:XX:XX,bridge=xenbr1,script=vif-bridge,vifname=fweth1",
"mac=00:16:36:XX:XX:XX,bridge=xenbr2,script=vif-bridge,vifname=fweth2",
"mac=00:16:36:XX:XX:XX,bridge=vpnbr,script=vif-bridge,vifname=fweth3",
"mac=00:16:36:XX:XX:XX,bridge=vpnbr,script=vif-bridge,vifname=fweth4" ]
[root@xxxxxx:~]$

Now attach the same network cards to the running Firewall and VPN_Server domains:
[root@xxxxxx:~]$ xm network-attach Firewall mac=00:16:36:XX:XX:XX bridge=vpnbr vifname=fweth3

[root@xxxxxx:~]$ xm network-attach Firewall mac=00:16:36:XX:XX:XX bridge=vpnbr vifname=fweth4

[root@xxxxxx:~]$ xm network-attach VPN_Server mac=00:16:36:XX:XX:XX bridge=vpnbr vifname=vpnint

Check they've been added to the right bridge


[root@xxxxxx:~]$ brctl show
bridge name bridge id STP enabled interfaces
vbr010 8000.feffffffffff no fweth0
peth0.10
vbr200 8000.feffffffffff no peth1.200
vbr300 8000.feffffffffff no peth1.300
vbr800 8000.feffffffffff no peth1.800
virbr0 8000.000000000000 yes
vpnbr 8000.feffffffffff no peth0.1000
vpnint
fweth3
fweth4
xenbr0 8000.feffffffffff no peth0
xenbr1 8000.feffffffffff no vpnext
fweth1
peth1
xenbr2 8000.feffffffffff no fweth2
peth2
[root@xxxxxx:~]$

Log onto the firewall and set up the new network cards
[root@xxxxxx:~]$ xm console Firewall
... setup network cards
(eth3 = 2.2.3.1 ; eth4 = 2.2.4.1)

They will be the VPN gateways.
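Inside the Firewall guest that means two new ifcfg files, roughly like this sketch (CentOS style; add HWADDR lines matching the MACs above if you want stable naming):

[root@xxxxxx:~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth3
# VPN full access gateway
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
IPADDR=2.2.3.1
NETMASK=255.255.255.0

[root@xxxxxx:~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth4
# VPN restricted access gateway
DEVICE=eth4
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
IPADDR=2.2.4.1
NETMASK=255.255.255.0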


[root@xxxxxx:~]$ ifup eth3; ifup eth4; ifconfig
... make sure they're correctly setup

Set up the new zones, interfaces, policies and rules in Shorewall:

/etc/shorewall/zones
vpntrusted ipv4
vpnusr ipv4

/etc/shorewall/interfaces
vpntrusted eth3
vpnusr eth4

/etc/shorewall/policy
vpnusr loc DROP
vpnusr net REJECT
vpntrusted loc ACCEPT
vpntrusted net ACCEPT

/etc/shorewall/rules
ACCEPT vpnusr srv:2.2.1.2 tcp 80,443
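As with the Proxy ARP change earlier, apply the new configuration:

[root@xxxxxx:~]$ shorewall safe-restart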

The firewall is done, so let's link it to the VPN_Server

Log into the VPN_Server and set up the network bridge on the new eth1 card
[root@xxxxxx:~]$ yum install bridge-utils

[root@xxxxxx:~]$ cat /etc/sysconfig/network-scripts/ifcfg-br0
# Bridge
DEVICE=br0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:36:XX:XX:XX
TYPE=Bridge

[root@xxxxxx:~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth1


# Xen Virtual Ethernet
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0

[root@xxxxxx:~]$ service network restart


Shutting down interface eth0: [ OK ]
Shutting down interface eth1: bridge br0 does not exist!
[ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
Bringing up interface eth1: [ OK ]
Bringing up interface br0: [ OK ]

[root@xxxxxx:~]$ brctl show


bridge name bridge id STP enabled interfaces
br0 8000.001636xxxxxx no eth1
[root@xxxxxx:~]$

We see that the bridge is created and eth1 is attached to it.

Notice there is no IP address on the card. It's bridged to the DOM0 VPN VLAN. It's the firewall/GW's
responsibility to manage the incoming packets.

Now we set up OpenVPN and attach the tap device to the bridge, and we'll place different
users into different network ranges so they can reach the two different subnets on the FW.

We could choose several setups: different tap devices for several OpenVPN instances, achieving
multiple server-bridge setups with two or more CAs and so on, but we prefer to keep only one CA,
one OpenVPN server and one default pool.
If the common name (CN) of a client matches a file in our ccd directory, we assign it a different GW and
IP/network/netmask so it falls into another subnet at the firewall.

OpenVPN server.conf
[root@xxxxxxxxxxxx:~]$ grep -Ev "^#|^$|^;" /etc/openvpn/server.conf
local 1.1.1.3
port 443
proto tcp
dev tap0
ca ca.crt
cert server.crt
key server.key # This file should be kept secret
dh dh1024.pem
ifconfig-pool-persist ipp.txt
server-bridge 2.2.3.1 255.255.255.0 2.2.3.32 2.2.3.191
push "route 2.2.0.0 255.255.0.0"
client-config-dir ccd
push "dhcp-option DNS 2.2.1.x"
push "dhcp-option NTP 2.2.1.x"
push "dhcp-option WINS 2.2.1.x"
push "dhcp-option DOMAIN example.com"
push "dhcp-option domain example.com"
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 3
up /etc/openvpn/bridge-start
crl-verify crl.pem

First, local is the public IP address. It's proxy-ARPed by the FW, and FW rules apply to both
incoming and outgoing connections.

Second, it's a TAP tunnel on the SSL port (avoiding some blocked ports on some public networks).

After that:

- The pool is persisted (the IP address given to each user is remembered).
- The server is bridged, the default GW will be 2.2.3.1 (FW eth3) and the IP range is
2.2.3.32 - 2.2.3.191.
o We keep the .2-.31 and .192-.254 addresses reserved for some "special" clients.
- We push the default options to clients (DNS, NTP, and so on).

Finally, we make sure the bridge-start script is called when the tap0 interface is up. This script
just adds the interface to the bridge:
#!/bin/bash

#################################
# Set up Ethernet bridge on Linux
# Requires: bridge-utils
# br already setup
#################################

# Define Bridge Interface
br="br0"

echo "Adding $1 to VPN Bridge $br"
/usr/sbin/brctl addif $br $1
/sbin/ifconfig $1 up

We create some client keys. For trusted clients, the standard setup applies (see above).
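For reference, a minimal client-side configuration sketch matching the server above (certificate file names are illustrative):

client
dev tap
proto tcp
remote 1.1.1.3 443
nobind
persist-key
persist-tun
ca ca.crt
cert client1.crt
key client1.key
comp-lzo
verb 3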

For untrusted clients (or those with special access) we put their data in the ccd directory (with
the setup above, /etc/openvpn/ccd).

Imagine a partner who needs to access a server on 2.2.1.15 using ssh:

- Create a key file for that partner (assume the cname is ext.partner; a key-generation sketch follows after this list).
- Create a file named "ext.partner" in the ccd directory with the following content:
ifconfig-push 2.2.4.2 255.255.255.0
push "route-gateway 2.2.4.1"
push "route 2.2.1.15 255.255.255.255"
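A sketch of generating that key, assuming the easy-rsa 2.x scripts shipped with OpenVPN were copied to /etc/openvpn/easy-rsa (paths are illustrative) and the CA is already initialised:

[root@xxxxxxxxxxxx:~]$ cd /etc/openvpn/easy-rsa
[root@xxxxxxxxxxxx:~]$ . ./vars
[root@xxxxxxxxxxxx:~]$ ./build-key ext.partner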

Then, log in to the firewall and add a rule for that client to allow ssh to that server:
/etc/shorewall/rules
ACCEPT vpnusr:2.2.4.2 srv:2.2.1.15 tcp 22

Send the key to the user et voilà.

He will be assigned the 2.2.4.2 IP address, his default GW will be 2.2.4.1 and a route to 2.2.1.15 will
be added to his routing table.

He won't have access to the 2.2.1.0/24 network, just to that one host. If you want to allow him the full srv
network, just add
push "route 2.2.1.0 255.255.255.0"

to his ccd/ext.partner file.

Adding a route makes a network visible to the client, but firewall rules still apply. Update the
rules if you want to allow or disallow services for him; for example, in order to give him full srv
network access:
/etc/shorewall/rules
ACCEPT vpnusr:2.2.4.2 srv
