
Part IV: Ubuntu as a Server

Chapter 29. Proxying, Reverse Proxying, and Virtual Private Networks (VPN)


In This Chapter

What Is a Proxy Server?

Installing Squid

Configuring Clients

Access Control Lists

Specifying Client IP Addresses

Sample Configurations

Virtual Private Networks (VPN)

References


You can never have enough of two things in this world: time and bandwidth. Ubuntu comes with a proxy server—Squid—that enables you to cache web traffic on your server so that websites load faster and users consume less bandwidth. Proxy servers are sometimes recommended for security and privacy, but a virtual private network (VPN) is a better option if security and privacy are your main concerns. The last section of this chapter is about VPNs. Both proxy servers and VPNs share an interesting side effect: while they are in use, anything your computer connects to—say, a website—sees the IP address of the proxy or VPN server rather than your own.

What Is a Proxy Server?

A proxy server lies between client machines—the desktops in your company—and the Internet. As clients request websites, they do not connect directly to the Web and send the HTTP request. Instead, they connect to the local proxy server. The proxy then forwards their requests on to the Web, retrieves the result, and hands it back to the client. At its simplest, a proxy server really is just an extra layer between client and server, so why bother?

The three main reasons for deploying a proxy server are as follows:

Content control—You want to prevent access to certain types of content.

Speed—You want to cache common sites to make the most of your bandwidth.

Security—You want to monitor what people are doing.

Squid accomplishes these things and more.

Installing Squid

You can easily install Squid as usual from the Ubuntu software repositories, where it is called squid3. After Squid is installed, it is started automatically at each boot. You can check this by running ps aux | grep squid after the machine boots. If you see nothing there, run sudo service squid start.
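
A typical install-and-check session looks something like the following sketch. It assumes apt-get and the squid3 package name mentioned above; depending on your release, the service may be registered as squid or squid3.

matthew@seymour:~$ sudo apt-get install squid3
matthew@seymour:~$ ps aux | grep squid
matthew@seymour:~$ sudo service squid start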

Configuring Clients

Before you configure your new Squid server, set up the local web browser to use it for its web access. Doing so enables you to test your rules as you are working with the configuration file.

To configure Firefox, while Firefox is running in the foreground, select Preferences from the Edit menu in the top panel of the Ubuntu desktop. From the dialog that appears, select the Advanced settings using the icon in the top row, and within Advanced, select the Network tab. Then click the Settings button next to Configure how Firefox connects to the Internet and select the Manual Proxy Configuration option. Check the box beneath it labeled Use the Same Proxy for All Protocols. Enter 127.0.0.1 in the HTTP Proxy box and 3128 as the port number. See Figure 29.1 for how this should look. If you are configuring a remote client, specify the IP address of the Squid server rather than 127.0.0.1.

FIGURE 29.1 Setting up Firefox to use 127.0.0.1 routes all its web requests through Squid.

You can similarly configure other web browsers such as Google Chrome, Opera, and so on. The difference is the labels used and menu locations for the options, so you might have to do a little digging to discover where this may be adjusted in a specific browser’s settings.
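
Proxy use is not limited to browsers. Many command-line tools, such as wget and curl, honor the http_proxy environment variable, so a quick way to point a shell session at your Squid server (assuming the default port 3128) is:

matthew@seymour:~$ export http_proxy=http://127.0.0.1:3128
matthew@seymour:~$ export https_proxy=http://127.0.0.1:3128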

Access Control Lists

The main Squid configuration file is /etc/squid3/squid.conf, and the default Ubuntu configuration file is full of comments to help guide you. The default configuration file allows full access to the local machine but denies the rest of your network. This is a secure place to start; we recommend you try all the rules on yourself (localhost) before rolling them out to other machines.

Before you start, open two terminal windows. In the first, change to the directory /var/log/squid3 and run this command:

matthew@seymour:~$ sudo tail -f access.log cache.log

That reads the last few lines from both files and (thanks to the -f flag) follows them so that any changes appear there. This lets you watch what Squid is doing as people access it. We refer to this window as the “log window,” so keep it open. In the other window (again, with sudo), bring up the file /etc/squid3/squid.conf in your favorite editor. We refer to this window as the “config editor,” and you should keep it open, too.

To get started, search for the string acl all; this brings you to the access control section, which is where most of the work needs to be done. You can configure a lot elsewhere, but unless you have unusual requirements, you can leave the defaults in place.


Note

The default port for Squid is 3128, but you can change that by editing the http_port line. Alternatively, you can have Squid listen on multiple ports by having multiple http_port lines; 80, 8000, and 8080 are all popular ports for proxy servers.
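
For example, these lines in /etc/squid3/squid.conf (using 8080 here purely as an illustrative second port) have Squid listen on both ports at once:

http_port 3128
http_port 8080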


The acl lines make up your access control lists (ACLs). The first 16 or so define the minimum recommended configuration, setting up ports and so on; you can safely ignore these. If you scroll down further (past another short block of comments), you come to the http_access lines, which are combined with the acl lines to dictate who can do what. You can (and should) mix and match acl and http_access lines to keep your configuration file easy to read.

Just below the first block of http_access lines is a comment like # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS. This is just what we are going to do. First, though, scroll just a few lines further; you should see the following two lines (they are not necessarily next to each other in the actual file):

http_access allow localhost
http_access deny all

These lines are self-explanatory: Together they say, "Allow HTTP access to the local computer, but deny everyone else." This is the default rule, as mentioned earlier. Leave that in place for now and run sudo service squid start to start the server with the default settings. If you have not yet configured the local web browser to use your Squid server, do so now so that you can test the default rules.

In your web browser (Firefox is assumed from here on because it is the default in a standard Ubuntu install, but it makes little difference), go to the URL www.ubuntulinux.org. You should see it appear as normal in the browser, but in the log window you should see a lot of messages scroll by as Squid downloads the site for you and stores it in its cache. This is all allowed because the default configuration allows access to the localhost.

Go back to the config editor window and add this before the last two http_access lines:

http_access deny localhost

So, the last three lines should look like this:

http_access deny localhost
http_access allow localhost
http_access deny all

Save the file and quit your editor. Then, run this command:

matthew@seymour:~$ sudo kill -SIGHUP $(cat /var/run/squid.pid)

That looks for the process ID (PID) of the squid daemon and then sends the SIGHUP signal to it, which forces it to reread its configuration file while running. You should see a string of messages in the log window as Squid rereads its configuration files. If you now go back to Firefox and enter a new URL, you should see the Squid error page informing you that you do not have access to the requested site.

The reason you are now blocked from the proxy is because Squid reads its ACL lines in sequence, from top to bottom. If it finds a line that conclusively allows or denies a request, it stops reading and takes the appropriate action. So, in the previous lines, localhost is being denied in the first line and then allowed in the second. When Squid sees localhost asking for a site, it reads the deny line first and immediately sends the error page; it does not even get to the allow line. Having a deny all line at the bottom is highly recommended so that only those you explicitly allow are able to use the proxy.

Go back to editing the configuration file and remove the deny localhost and allow localhost lines. This leaves only deny all, which blocks everyone (including the localhost) from accessing the proxy. Now you are going to add some conditional allow statements. You want to allow localhost only if it fits certain criteria.

Defining access criteria is done with the acl lines, so above the deny all line, add this:

acl newssites dstdomain news.bbc.co.uk slashdot.org
http_access allow newssites

The first line defines an access category called newssites, which contains a list of domains (dstdomain). The domains are news.bbc.co.uk and slashdot.org, so the full line reads, “Create a new access category called newssites that should filter on domain, and contain the two domains listed.” It does not say whether access should be granted or denied to that category; that comes in the next line. The line http_access allow newssites means, “Allow access to the category newssites with no further restrictions.” It is not limited to localhost, which means it applies to every computer connecting to the proxy server.

Save the configuration file and rerun the kill -SIGHUP line from before to restart Squid; then go back to Firefox and try loading www.ubuntu.com. You should see the same error as before because that site is not in your newssites category. Now try http://news.bbc.co.uk, and it should work. However, if you try www.slashdot.org, it will not work, and you might also have noticed that the images did not appear on the BBC News website either. The problem is that specifying slashdot.org matches only that exact hostname: http://slashdot.org works, whereas www.slashdot.org does not. The BBC News site stores its images on the site http://newsimg.bbc.co.uk, which is why they do not appear.

Go back to the configuration file and edit the newssites ACL to this:

acl newssites dstdomain .bbc.co.uk .slashdot.org

Putting the period in front of the domains (and in the BBC’s case, taking the news off, too) means that Squid will allow any subdomain of the site to work, which is usually what you want. If you want even more vagueness, you can just specify .com to match *.com addresses.

Moving on, you can also use time conditions for sites. For example, if you want to allow access to the news sites in the evenings, you can set up a time category using this line:

acl freetime time MTWHFAS 18:00-23:59

This time, the category is called freetime and the condition is time, which means we need to specify what time the category should contain. The seven characters following that are the days of the week: Monday, Tuesday, Wednesday, tHursday, Friday, sAturday, and Sunday. Thursday and Saturday use capital H and A so they do not clash with Tuesday and Sunday.

With that category defined, you can change the http_access line to include it, like this:

http_access allow newssites freetime

For Squid to allow access now, it must match both conditions—the request must be for either *.bbc.co.uk or *.slashdot.org, and it must arrive during the time specified. If either condition does not match, the line is not matched and Squid continues looking for other matching rules beneath it. The times you specify here are inclusive on both sides, which means users in the freetime category are able to surf from 18:00:00 until 23:59:59.

You can add as many rules as you like, although you should be careful to try to order them so that they make sense. Keep in mind that all conditions in a line must be matched for the line to be matched. Here is a more complex example:

You want a category newssites that contains serious websites people need for their work.

You want a category playsites that contains websites people do not need for their work.

You want a category worktime that stretches from 09:00 to 18:00.

You want a category freetime that stretches from 18:00 to 20:00, when the office closes.

You want people to be able to access the news sites, but not the play sites, during working hours.

You want people to be able to access both the news sites and the play sites during the free time hours.

To do that, you need the following rules:

acl newssites dstdomain .bbc.co.uk .slashdot.org
acl playsites dstdomain .tomshardware.com ubuntulinux.org
acl worktime time MTWHF 9:00-18:00
acl freetime time MTWHF 18:00-20:00
http_access allow newssites worktime
http_access allow newssites freetime
http_access allow playsites freetime


Note

The letter D is equivalent to MTWHF, meaning “all the days of the working week.”


Notice that there are two http_access lines for the newssites category: one for worktime and one for freetime. This is because all the conditions must be matched for a line to be matched. Alternatively, you can write this:

http_access allow newssites worktime freetime

However, if you do that and someone visits http://news.bbc.co.uk at 2:30 p.m. (14:30) on a Tuesday, Squid works like this:

Is the site in the newssites category? Yes, continue.

Is the time within the worktime category? Yes, continue.

Is the time within the freetime category? No; do not match rule and continue searching for rules.

It is because of this that two separate http_access lines are needed for the newssites category.

One particularly powerful way to filter requests is with the url_regex ACL line. This enables you to specify a regular expression that is checked against each request: If the expression matches the request, the condition matches.

For example, if you want to stop people downloading Windows executable files, you use this line:

acl noexes url_regex -i exe$

The dollar sign ($) means “end of URL,” which means it would match www.somesite.com/virus.exe but not www.executable.com/innocent.html. The -i part means “not case sensitive,” so the rule matches .exe, .Exe, .EXE, and so on. You can use the caret sign (^) for “start of URL.”

For example, you could stop some pornography sites using this ACL:

acl noporn url_regex -i sex
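
As with the other categories, these acl lines only define groups of requests; to actually block them, pair each one with an http_access deny line placed above your allow rules and the final deny all, along these lines:

http_access deny noexes
http_access deny noporn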

Do not forget to run the kill -SIGHUP command each time you make changes to Squid; otherwise, it does not reread your changes. You can have Squid check your configuration files for errors by running squid -k parse as root. If you see no errors, it means your configuration is fine.
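
If you prefer not to send the signal by hand, Squid can run the same check and reload through its own command-line switches; assuming the binary is installed as squid (on some releases it is named squid3), that looks like this:

matthew@seymour:~$ sudo squid -k parse
matthew@seymour:~$ sudo squid -k reconfigure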


Note

It is critical that you run the command kill -SIGHUP and provide it the PID of your Squid daemon each time you change the configuration; without this, Squid does not reread its configuration files.


Specifying Client IP Addresses

The configuration options so far have been basic, and you can use many more to enhance the proxying system you want.

After you are past deciding which rules work for you locally, it is time to spread them out to other machines. You do so by specifying IP ranges that should be allowed or disallowed access, and you enter these into Squid using more ACL lines.

If you want to, you can specify all the IP addresses on your network, one per line. However, for networks of more than about 20 people or for networks using Dynamic Host Configuration Protocol (DHCP), that is more work than necessary. A better solution is to use classless interdomain routing (CIDR) notation, which enables you to specify addresses like this:

192.0.0.0/8
192.168.0.0/16
192.168.0.0/24

Each line has an IP address, followed by a slash and then a number. That last number defines the size of the range you want covered: it is the count of leading bits of the 32-bit IP address that are fixed. We are used to seeing an IP address in dotted-quad notation, A.B.C.D, where each of those quads can be between 0 and 255 (although in practice some of these are reserved for special purposes) and each is stored as an 8-bit number.

The first line in the previous code covers IP addresses starting from 192.0.0.0; the /8 part means that the first 8 bits (the first quad, 192) are fixed and the rest are flexible. So, Squid treats that as addresses 192.0.0.0, 192.0.0.1, through to 192.0.0.255, then 192.0.1.0, 192.0.1.1, all the way through to 192.255.255.255.

The second line uses /16, which means Squid allows IP addresses from 192.168.0.0 to 192.168.255.255. The last line has /24, which allows from 192.168.0.0 to 192.168.0.255.

These addresses are placed into Squid using the src ACL line, as follows:

acl internal_network src 10.0.0.0/24

That line creates a category of addresses from 10.0.0.0 to 10.0.0.255. You can combine multiple address groups together, like this:

acl internal_network src 10.0.0.0/24 10.0.3.0/24 10.0.5.0/24 192.168.0.1

That example allows 10.0.0.0 through 10.0.0.255, then 10.0.3.0 through 10.0.3.255, and finally the single address 192.168.0.1.

Keep in mind that if you are using the local machine and you have the web browser configured to use the proxy at 127.0.0.1, the client IP address will be 127.0.0.1, too. So, make sure you have rules in place for localhost.

As with other ACL lines, you need to enable them with appropriate http_access allow and http_access deny lines.
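
Here is a minimal sketch, assuming the internal_network category defined above and keeping localhost access for testing:

acl internal_network src 10.0.0.0/24
http_access allow internal_network
http_access allow localhost
http_access deny all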

Sample Configurations

To help you fully understand how Squid access control works, and to give you a head start developing your own rules, the following are some ACL lines you can try. Each line is preceded with one or more comment lines (starting with a #) explaining what it does:

# include the domains news.bbc.co.uk and slashdot.org
# and not newsimg.bbc.co.uk or www.slashdot.org.
acl newssites dstdomain news.bbc.co.uk slashdot.org

# include any subdomains of bbc.co.uk or slashdot.org
acl newssites dstdomain .bbc.co.uk .slashdot.org

# only include sites under the .ca (Canada) top-level domain
acl canadasites dstdomain .ca

# only include working hours
acl workhours time MTWHF 9:00-18:00

# only include lunchtimes
acl lunchtimes time MTWHF 13:00-14:00

# only include weekends
acl weekends time AS 00:00-23:59

# include URLs ending in ".zip". Note: the \ is important,
# because "." has a special meaning otherwise
acl zipfiles url_regex -i \.zip$

# include URLs starting with https
acl httpsurls url_regex -i ^https

# include all URLs that match "hotmail"
acl hotmail url_regex -i hotmail

# include three specific IP addresses
acl directors src 10.0.0.14 10.0.0.28 10.0.0.31

# include all IPs from 192.168.0.0 to 192.168.0.255
acl internal src 192.168.0.0/24

# include all IPs from 192.168.0.0 to 192.168.0.255
# and all IPs from 10.0.0.0 to 10.255.255.255
acl internal src 192.168.0.0/24 10.0.0.0/8

When you have your ACL lines in place, you can put together appropriate http_access lines. For example, you might want to use a multilayered access system so that certain users (for example, company directors) have full access, whereas others are filtered. For example:

http_access allow directors
http_access deny hotmail
http_access deny zipfiles
http_access allow internal lunchtimes
http_access deny all

Because Squid matches those in order, directors will have full, unfiltered access to the Web. If the client IP address is not in the directors list, the two deny lines are processed so that the user cannot download zip files or read online mail at Hotmail. After blocking those two types of requests, the allow rule on line four allows internal users to access the Web, as long as they do so only at lunchtime. The last line (which is highly recommended) blocks all other users from the proxy.

Virtual Private Networks (VPN)

A virtual private network, more commonly called a VPN, creates a way for networks that are otherwise isolated or inaccessible to communicate with one another. This is often used by businesses at an enterprise level to keep internal business networks secure while allowing workers to access the internal network from a remote location, such as when an executive is traveling and needs to use a laptop to download and reply to email using an internal business server. The VPN keeps all traffic out except traffic that originates within the network itself or traffic that connects from the outside using a VPN connection with proper access credentials. This sounds similar to remote access standards already in place in Unix, Linux, and Ubuntu, but using a VPN takes security to a new level.

There are other types of VPNs in use as well. Not only can a VPN be used to allow remote access to secure internal networks, but it can also be used to connect two networks to one another across a different network in the middle; for example, two networks that each use IPv6 could connect to one another over an IPv4 network using a VPN connection. This is much less common, so we concentrate on the first scenario of a remote user connecting to a secure, internal network. You might be asking how this is different from using a proxy server, as it seems that the VPN is somehow working as an intermediary or bridge between the remote user and the secure system. It is a little more complicated than that. When a proxy server is in use, it is another layer between the two ends of a connection, an intermediary. When a VPN is in use, it provides direct access between the two ends, but via an encrypted tunnel; it is analogous to running a cable directly from one end system to the other, effectively making the remote computer an actual part of the network to which it is connecting. From that moment, the remote system tunnels all of its network traffic through the main system.

Where proxies generally work via the web browser and handle only the traffic that passes through the browser, a VPN tunnels all traffic. When using a VPN, the remote computer no longer perceives itself as connected first to the Internet and then to the secure system; rather, it perceives itself as connected directly to the secure system, with the VPN as its router. The difference is illustrated in Figure 29.2.

FIGURE 29.2 Comparing a proxy connection to a VPN connection.

Some use an Internet router as a metaphor to help explain how a VPN works. In this analogy, the remote computer connects directly to the VPN, which uses the Internet to connect it to its ultimate host computer, the secure network.

So, why do we care? The differences between proxy servers and a VPN make the most difference when it comes time for implementation. Which will best serve your needs? Here are some facts to help you decide:

Proxy servers are usually cheaper and easier to set up. VPNs generally cost more and are more difficult to set up, but after they're set up they're easy to use and are more secure.

A single proxy server can service hundreds or thousands of users, but a VPN is usually designed for one connection that is specific to one remote computer and a secure host (exceptions exist, but they are beyond the scope of this introductory material).

Each piece of software that uses a proxy must be set up separately. Web browsers are the most common way to use a proxy server, but other programs can also be configured to use one. When a VPN is up and running, all Internet software on the computer automatically uses it without additional configuration.

Setting Up a VPN Client

The easy part of using a VPN is also the most commonly needed part. It is generally companies that set up VPN servers and then provide access to their secure networks to company employees or clients who use a VPN client installed on their local system, such as a laptop.

Most VPN servers run a protocol that is easily used on Ubuntu, especially from the type of GUI-based system that a typical Unity user would have. You need to check with the administrator of the VPN network to which you intend to connect to find out which VPN client you will use and also to get your credentials so that you may connect. We use Network-Manager, which is installed by default with Unity and is the standard way to manage all Internet connections in a typical Ubuntu installation, to manage our VPN client connection.

Install the VPN client software needed for the specific type of VPN server in use by the network to which you will connect. The following table will help you to find what you need.

(Table: Network-Manager client packages for each VPN type)
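
For example, if the network you are connecting to runs OpenVPN, you can install the Network-Manager OpenVPN plugin (with its GNOME integration) as shown next; substitute the package that matches your VPN type, such as network-manager-vpnc-gnome or network-manager-pptp-gnome, according to the table above:

matthew@seymour:~$ sudo apt-get install network-manager-openvpn-gnome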

Restart Network-Manager to make it aware of the new package(s):

matthew@seymour:~$ sudo service network-manager restart

Click the Network icon in the top panel of the Unity screen. Hover over VPN Connections and select Configure VPN, as shown in Figure 29.3.

FIGURE 29.3 Network-Manager makes configuring a VPN client easy.

Click Add to create a new VPN connection.

Choose your VPN connection type from the list. If you aren’t certain of your connection type, try the one you think is correct; if it doesn’t work, come back and edit the connection to try another type.

Click Create.

Enter the information about your VPN connection. You need to enter things like the gateway IP address of the server, your account username and password, perhaps a group name and group password, and where to find the certificate authority (CA) file. In some cases, you might need to click the Advanced button to enter other details such as the encryption method, NAT traversal, and more. When you have your information entered, click Save.

To begin using this VPN connection, use the same menu as in Figure 29.3, but this time select your VPN connection from the list, shown as Main VPN Connection in our example.

Setting Up a VPN Server

We use OpenVPN to set up a simple server. Advanced configuration can become quite complex, but it is easy to get started if you only require a basic server.

Install openvpn from the Ubuntu software repositories.
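
A minimal sketch of that installation, assuming apt-get:

matthew@seymour:~$ sudo apt-get install openvpn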

The next several steps create a public key infrastructure (PKI) for your OpenVPN.

Set up the certificate authority to generate your own certificates and keys.

matthew@seymour:~$ sudo mkdir /etc/openvpn/easy-rsa
matthew@seymour:~$ sudo cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/

Enter your specific details by editing /etc/openvpn/easy-rsa/vars and adjusting the following:

export KEY_COUNTRY="us"
export KEY_PROVINCE="IA"
export KEY_CITY="Iowa City"
export KEY_ORG="Your Company"
export KEY_EMAIL="yourContact@yourEmailDomain.com"

Generate your certificate authority and key:

matthew@seymour:~$ cd /etc/openvpn/easy-rsa
matthew@seymour:~$ source vars
matthew@seymour:~$ sudo ./clean-all
matthew@seymour:~$ sudo ./build-ca

Generate a certificate and private key for the server. Replace yourservername with the name of your server:

matthew@seymour:~$ sudo ./build-key-server yourservername

Build the Diffie-Hellman parameters:

matthew@seymour:~$ sudo ./build-dh

Copy the certificates and keys. Replace yourservername with the name of your server.

matthew@seymour:~$ cd keys/
matthew@seymour:~$ sudo cp yourservername.crt yourservername.key ca.crt dh1024.pem /etc/openvpn/

You must create a different certificate for each client using this method. The larger, proprietary VPN vendors distribute ready-made certificates with their server and client software, but you are creating your own. Do this on the server machine for each client, replacing clientname with the name of each client system.

matthew@seymour:~$ cd /etc/openvpn/easy-rsa/
matthew@seymour:~$ source vars
matthew@seymour:~$ ./build-key clientname

Now, copy the following files you just generated to the client for which they were generated. Repeat as needed for each client, replacing clientname with the name of each client system.

/etc/openvpn/ca.crt
/etc/openvpn/easy-rsa/keys/clientname.crt
/etc/openvpn/easy-rsa/keys/clientname.key

Remove the files from the server after they are installed on the client.
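
One way to do the copy securely is with scp over SSH. This is only a sketch: the hostname client.example.com is hypothetical, and you may need root privileges on the server to read the key files.

matthew@seymour:~$ sudo scp /etc/openvpn/ca.crt /etc/openvpn/easy-rsa/keys/clientname.crt /etc/openvpn/easy-rsa/keys/clientname.key matthew@client.example.com:~/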

Many sample configuration files are included with OpenVPN in /usr/share/doc/openvpn/examples/sample-config-files/. You can read through them if you have more complex needs than our simple setup. For our setup, we only need the most basic configuration files. Copy and unpack this file:

matthew@seymour:~$ sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn
matthew@seymour:~$ sudo gzip -d /etc/openvpn/server.conf.gz

Edit /etc/openvpn/server.conf to point to and use the certificates and keys you created earlier by changing or adding these lines. Replace yourservername with the name of your server. Leave all the other default settings in place.

ca ca.crt
cert yourservername.crt
key yourservername.key
dh dh1024.pem

Start your server:

matthew@seymour:~$ sudo /etc/init.d/openvpn start

OpenVPN should create a new networking interface on your computer called tun0. To make sure the interface is created, enter:

matthew@seymour:~$ sudo ifconfig tun0

To use your new VPN server with the client described in the previous section, select OpenVPN as the VPN type and enter yourservername from this section as the Gateway. Set Type to Certificates (TLS), point User Certificate to the client certificate you created and moved to the client machine, point CA Certificate to the certificate authority certificate you created and moved to the client machine, and point Private Key to the private key file you created and moved to the client machine.
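
Once the client connects, a quick way to confirm that traffic is flowing through the tunnel is to ping the server's VPN address from the client. Assuming the sample server.conf's default 10.8.0.0/24 VPN subnet, where the server normally takes 10.8.0.1, that test looks like this:

matthew@seymour:~$ ping -c 3 10.8.0.1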

References

www.squid-cache.org/—The home page of the Squid web proxy cache.

www.deckle.co.za/squid-users-guide/—The home page of Squid: A User’s Guide, a free online book about Squid.

https://help.ubuntu.com/community/Squid—Ubuntu community documentation for setting up Squid.

There are two excellent books on the topic of web caching. The first is Squid: The Definitive Guide (O’Reilly) by Duane Wessels, ISBN: 0-596-00162-2. The second is Web Caching (O’Reilly), also by Duane Wessels, ISBN: 1-56592-536-X. Of the two, the former is more practical and covers the Squid server in depth. The latter is more theoretical, discussing how caching is implemented. Wessels is one of the leading developers of Squid, so both books are of impeccable technical accuracy.

https://help.ubuntu.com/14.04/serverguide/openvpn.html—Official Ubuntu server documentation for setting up OpenVPN.