Small Office/Home Office Server - Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux (2009)


Chapter 10. Small Office/Home Office Server

Hacks 93-100

Ubuntu may be an awesome desktop environment (and with Kubuntu in the mix, it offers plenty of choices), but it's also an excellent choice for a server operating system. By choosing the bare minimum packages, you can cook up a lean installation that's ready to serve web pages, host shell accounts, run virtual machines, or do anything else you need. Read on for hacks that will help you get that bare-bones installation going, install essential services, and administer your server from afar.

Hack 93. Install and Configure an Ubuntu Server

The Ubuntu installer makes it easy to do a clean and minimal server setup.

The Debian distribution has a well-deserved reputation as being extremely well suited for use in the datacenter, and Ubuntu builds on that by providing simplified installation and official commercial support, making it ideal for mission-critical server deployments.

Minimal Installation

A good principle when building servers is to install as few packages as possible, minimizing the number of things that can go wrong as well as the potential for security flaws. The Ubuntu installer offers a special "server" mode that makes it simple to create a basic server platform onto which you can install the software you require.

Before you perform the actual installation, boot up the server and enter the BIOS setup screen. Because servers typically run without a monitor attached, you will need to find the BIOS setting that tells the computer which errors it should consider fatal and make sure it won't fail on a "no keyboard" or "no monitor" error. The actual setting varies depending on the specific BIOS, so consult the manual for your computer or motherboard if necessary.

Save the BIOS changes and then boot the computer from the Dapper install CD, but don't proceed with the usual installation procedure. If you get a graphical menu, select Install a Server; otherwise, type server at the first prompt. Then, go through the installation procedure [Hack #5]. This will give you a minimal selection of packages installed on the system. The server-mode installation doesn't include X or any services at all, giving you a clean platform to configure as you see fit.

One of the first services you will want to install is probably SSH, allowing you to use a secure shell to "Administer Your Server Remotely" [Hack #95].

Static Network Configuration

You may have a DHCP server on your network already, in which case your server has been assigned an IP address, but most servers need to have static addresses assigned so they can be found on the network.

Open /etc/network/interfaces (as root) in your favorite editor and find a section that looks like this:

# The primary network interface

auto eth0

iface eth0 inet dhcp

The dhcp argument tells Ubuntu to use a DHCP server to assign the configuration to this interface, so change it to a static configuration and specify the address, netmask, and gateway (router) addresses. Here's an example:

# The primary network interface

auto eth0

iface eth0 inet static




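Completed with placeholder values (the addresses below are examples only; substitute values appropriate to your own network), the stanza might look like:

```
# The primary network interface
auto eth0
iface eth0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
```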
You can now manually force networking to restart with the following command, but be warned that if the static address you have assigned the server is different from the current address, any SSH sessions will be dropped. You will then be able to log back in at the new address:

$ sudo /etc/init.d/networking restart

UPS-Triggered Shutdown

An uninterruptible power supply (UPS) will keep your server running during a short power failure, but batteries don't last forever, and you risk corrupted filesystems if the batteries go flat and the server stops abruptly. Connect your server to your UPS with a null-modem serial cable and install a program to monitor UPS status and begin a clean shutdown in the event of a blackout. Different brands of UPS have different communication methods, and there are a variety of UPS-management packages (including genpower, apcd, apcupsd, powstatd, and nut), each of which supports different types of UPS. If you run multiple servers on a single UPS, then nut (Network UPS Tools) is a good choice because it can initiate a shutdown of all your servers at once via the network:

$ sudo apt-get install nut

The exact setup process will depend on your UPS type, so start by looking through /usr/share/doc/nut/README.Debian.gz for general background information, and then look at the example configurations in /usr/share/doc/nut/examples/.
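As a rough sketch only (the UPS name, driver, port, and credentials below are placeholder assumptions; pick the driver matching your hardware from the nut compatibility documentation), a minimal serial setup involves naming the UPS in /etc/nut/ups.conf and telling upsmon to watch it in /etc/nut/upsmon.conf:

```
# /etc/nut/ups.conf -- placeholder driver and port; adjust for your UPS
[myups]
    driver = apcsmart
    port = /dev/ttyS0

# /etc/nut/upsmon.conf -- MONITOR <ups>@<host> <powervalue> <user> <password> <type>
# (user/password must match an entry you create in upsd.users)
MONITOR myups@localhost 1 upsmon mypassword master
```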

Network UPS Tools also has a number of supporting packages available, covering a web-interface subsystem, development files, a meta SNMP driver subsystem, and a USB drivers subsystem.
Remember that if your server is shut down by the UPS-management software, it won't restart automatically when power returns. Now that you have your server up and running, you may want to see "Build a Web Server" [Hack #96], "Build an Email Server" [Hack #97], "Build a Domain Name Server" [Hack #100], "Build a DHCP Server" [Hack #99], and "Build a File Server" [Hack #94].

Hack 94. Build a File Server

Share files with Linux, Windows, and Macintosh machines.

There are many different file-sharing protocols, each with strengths and weaknesses and each coming from different development backgrounds. The traditional file-sharing protocol for Unix is NFS (Network File System); for Mac OS, it's AppleShare; and for Windows, it's SMB (Server Message Block). Running a mixed-environment file server used to require supporting a multitude of protocols simultaneously, but in recent years, there has been a convergence on the use of CIFS (Common Internet File System) across all platforms. CIFS is derived from SMB and is the standard file-sharing method in recent versions of Windows. It is also extremely well supported under both Linux and Mac OS as a client and as a server, thanks to the Samba project.

The server component of Samba can even run as a domain controller for a Windows network and supports several authentication backends, including LDAP and TDB. Large installations may benefit from using LDAP, but it is far more complex to set up, so this hack will cover the use of TDB, which is quite suitable for networks up to several hundred users.

Enable Quota Support

To work with quotas, first install the quota package:

$ sudo apt-get install quota

Open /etc/fstab (the File System TABle) in your favorite editor and find the line that refers to the partition that will hold your shares. Add the usrquota and grpquota options. If you have /home on a separate partition, you will need to add the same options to that as well. The end result should look something like:

/dev/hda2 / ext3 defaults,usrquota,grpquota 0 1

/dev/hda3 /home ext3 defaults,usrquota,grpquota 0 2

Then set up the user and group quota files and remount the filesystem:

$ sudo touch /quota.user /

$ sudo chmod 600 /quota.*

$ sudo mount -o remount /

If you have a separate /home partition, do the same for that file:

$ sudo touch /home/quota.user /home/

$ sudo chmod 600 /home/quota.*

$ sudo mount -o remount /home

Since there is already data on the partitions, you will need to run quotacheck to scan the filesystems and record current usage per user and group, and then activate quota enforcement:

$ sudo quotacheck -avugm

$ sudo quotaon -avug

The mechanism is now in place to enforce quotas, but no users or groups have limits set, so there is no limit yet on how much of the disk they can use.
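To actually impose a limit, use setquota. In this sketch, jon is a placeholder username and the limits are counted in 1 KB blocks, so the example grants roughly a 1 GB soft limit and a 1.1 GB hard limit on /home (the zeros leave inode limits unset); repquota then summarizes usage:

```
$ sudo setquota -u jon 1000000 1100000 0 0 /home
$ sudo repquota -avug
```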

Install Samba

On your server, install Samba itself plus some additional packages for documentation, share browsing, and printer sharing:

$ sudo apt-get install samba samba-doc libcupsys2-gnutls10 \

libkrb53 winbind smbclient

There are quite a few things to change in the default Samba config file, so open /etc/samba/smb.conf in an editor and go through it to adjust the settings to match the following example. Most of the example can be copied verbatim, but set WORKGROUP to the name of the Windows domain (you can even leave it at WORKGROUP) and set FILESERVER to the hostname of your Linux server:


[global]

workgroup = WORKGROUP

netbios name = FILESERVER

server string = %h server (Samba, Ubuntu)

passdb backend = tdbsam

security = user

username map = /etc/samba/smbusers

name resolve order = wins bcast hosts

domain logons = yes

preferred master = yes

wins support = yes

## Use CUPS for printing

printcap name = CUPS

printing = CUPS

## Set default logon

logon drive = H:

#logon script = scripts/logon.bat

logon path = \\fileserver\profile\%U

## User management scripts

add user script = /usr/sbin/useradd -m %u

delete user script = /usr/sbin/userdel -r %u

add group script = /usr/sbin/groupadd %g

delete group script = /usr/sbin/groupdel %g

add user to group script = /usr/sbin/usermod -G %g %u

add machine script = /usr/sbin/useradd -s /bin/false -d /var/lib/nobody %u

idmap uid = 15000-20000

idmap gid = 15000-20000

## Settings to sync Samba passwords with system passwords

passwd program = /usr/bin/passwd %u

passwd chat = *Enter\snew\sUNIX\spassword:* %n\n *Retype\snew\sUNIX\spassword:* %n\n .

passwd chat debug = yes

unix password sync = yes

## Set the log verbosity level

log level = 3


[homes]

comment = Home

valid users = %S

read only = no

browsable = no


[printers]

comment = All Printers

path = /var/spool/samba

printable = yes

guest ok = yes

browsable = no


[netlogon]

comment = Network Logon Service

path = /home/samba/netlogon

admin users = Administrator

valid users = %U

read only = no


[profiles]

comment = User profiles

path = /home/samba/profiles

valid users = %U

create mode = 0600

directory mode = 0700

writable = yes

browsable = no

The commented-out line that says:

#logon script = scripts/logon.bat

defines a Windows batch script that will be executed by Windows workstations as soon as a user logs in. This can be really handy if you want to apply standard settings to all computers on your network; you may want to define server drive mappings, set up printers, or configure a proxy server. If you have a logon.bat script, uncomment that line.

Create directories to store domain logons and profiles:

$ sudo mkdir -p /home/samba/netlogon

$ sudo mkdir /home/samba/profiles

$ sudo mkdir /var/spool/samba

$ sudo chmod 777 /var/spool/samba/

$ sudo chown -R root:users /home/samba/

$ sudo chmod -R 771 /home/samba/

Make sure Samba has seen your new configuration:

$ sudo /etc/init.d/samba restart
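Whenever you change smb.conf, it's worth validating the file before (or after) a restart. The testparm utility ships with Samba; with -s it skips the interactive prompt, reports any syntax errors, and dumps the parsed service definitions:

```
$ testparm -s /etc/samba/smb.conf
```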

To enable WINS (Windows Internet Name Service) host resolution, edit /etc/nsswitch.conf and look for a line similar to:

hosts: files dns mdns

Change it to:

hosts: files wins dns mdns

Since your file server is going to be the domain controller (DC) for your Windows domain, you need to specify the computers that will be part of the domain. Open /etc/hosts and add an entry for each server and workstation, pairing each machine's IP address with its hostname, one per line:

server1
workstation1
workstation2
...
workstation7

Windows typically has a special user named Administrator, which is akin to the root user on Linux, so add the root user to the Samba password database and set up an alias for it. This will allow you to use the Administrator username to add new computers to the Windows domain:

$ sudo smbpasswd -a root

$ sudo sh -c "echo 'root = Administrator' > /etc/samba/smbusers"

To make sure everything has worked up to this point, use smbclient to query Samba:

$ smbclient -L localhost -U%

The output will include details of several standard services, such as netlogon and ADMIN$, as well as the machines that have registered in the domain:

Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.21b]

Sharename Type Comment

--------- ---- -------

netlogon Disk Network Logon Service

IPC$ IPC IPC Service (fileserver server (Samba, Ubuntu))

ADMIN$ IPC IPC Service (fileserver server (Samba, Ubuntu))

Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.21b]

Server Comment

--------- -------

FILESERVER fileserver server (Samba, Ubuntu)

Workgroup Master

--------- -------


Map some standard groups used in Windows domains to equivalent Linux groups:

$ sudo net groupmap modify ntgroup="Domain Admins" unixgroup=root

$ sudo net groupmap modify ntgroup="Domain Users" unixgroup=users

$ sudo net groupmap modify ntgroup="Domain Guests" unixgroup=nogroup

To allow users to authenticate in the domain, they need to be defined in the domain controller. This process needs to be repeated for each user, but for now just create one user to start with by first adding the Linux user and then setting a password for that user in Samba. The user also needs to be placed in the users group that was aliased to the Windows group Domain Users a moment ago:

$ sudo useradd jon -m -G users

$ sudo smbpasswd -a jon

That user should now be able to authenticate on workstations within your domain, but there haven't yet been any shares added.

Add Shares

Start by adding a simple share that all users can access. First, create the directory that will become the share and set appropriate permissions on it:

$ sudo mkdir -p /home/shares/public

$ sudo chown -R root:users /home/shares/public

$ sudo chmod -R ug+rwx,o+rx-w /home/shares/public
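To see what that symbolic mode produces, you can try it on a scratch directory (mktemp creates directories with mode 700, so the result below is exactly what the share directory ends up with: full access for owner and group, read/execute for everyone else):

```shell
# ug+rwx grants read/write/execute to owner and group;
# o+rx-w grants read/execute to others and removes their write bit
d=$(mktemp -d)
chmod ug+rwx,o+rx-w "$d"
stat -c %a "$d"    # prints 775 on GNU coreutils
```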

Each share needs to be configured in Samba. Open /etc/samba/smb.conf and add a new stanza at the end:


[public]

comment = Public Share

path = /home/shares/public

valid users = @users

force group = users

create mask = 0660

directory mask = 0771

writable = yes

Each time you modify the configuration, you need to restart Samba:

$ sudo /etc/init.d/samba restart

The preceding public stanza allows all users in the @users group to access the share with full write privileges. This is probably not what you want in many cases. For a share that can be accessed only by specific users, you can substitute a line such as:

valid users = jon,kyle,bill

To manage a large number of users, you can alternatively create another separate group, set permissions on the share appropriately, and specify it in the share definition. That way, you can add and remove users from that group without having to reconfigure or restart Samba:

valid users = @authors

You can also create read-only shares by setting the writable option to no:

writable = no

Another typical scenario is a share that is read/write by some users but read-only for others:

valid users = @authors,@editors

read list = @authors

write list = @editors

Samba has a huge range of options for specifying various access restrictions, so refer to the extensive documentation on the Samba web site for more details.

Share Printers

If you have printers that you want to make accessible to your Windows workstations through Samba, start by following the steps in "Set Up Your Printer" [Hack #9] to get your printers working locally, and then use cupsaddsmb to tell Samba to make them available. To share all printers, run:

$ sudo cupsaddsmb -a

If you want to share only a specific printer, you can instead refer to its CUPS identity:

$ sudo cupsaddsmb laserjet

Hack 95. Administer Your Server Remotely

Install and configure SSH to securely connect and administer your server from any machine with a network connection.

Apart from when you are doing the base installation or some sort of local maintenance, a Linux server is generally meant to be run without a monitor connected. Most tasks you would need to perform on a server can be done via the command line, and these days Telnet is out and SSH is in. SSH provides you with the ability to remotely log in to your server and run commands, all over an encrypted channel. Plus, SSH offers a number of advanced functions that can make remote administration simpler.

First things first: Ubuntu (at least the desktop version) does not install the SSH server by default, so you will need to install it. Either use your preferred package manager to install the openssh-server package or run:

$ sudo apt-get install openssh-server

The installation scripts included with the package will take care of creating the initial RSA and DSA keys you need, as well as providing you with a good default SSH config. Once the install finishes, you should be able to log in to the machine from other machines on the network by typing:

$ ssh ip_address

(Replace ip_address with the IP address or hostname for your remote Ubuntu server.)

Configure SSH

One issue with the default SSH config (/etc/ssh/sshd_config) that ships with Ubuntu is that it enables remote root logins and X11 forwarding, which create potential security concerns. Since the root account is disabled on Ubuntu by default anyway, it doesn't hurt to disable the root login option. Just find the line that says:

PermitRootLogin yes

and change it to say:

PermitRootLogin no

If you aren't planning on using X11 forwarding, you can disable that as well. Find the line that says:

X11Forwarding yes

and change it to:

X11Forwarding no

Once you have made your changes, type:

$ sudo /etc/init.d/ssh restart

to load the new configuration.

X11 Forwarding

While X11 forwarding should be disabled if you aren't planning to use it, when you do use it, it allows you to do some pretty interesting things. Essentially, X11 forwarding sets up a secure communication channel between you and the remote server over which you can run graphical applications. The performance of these applications will vary depending on the speed of your network connection. To take advantage of X11 forwarding, add the -X argument to ssh when connecting to the server:

$ ssh -X ip_address


Then start a graphical program such as xterm (which, when you think about it, wouldn't make much sense to run since you're already in a shell) or perhaps Synaptic. Give the application some time if you are on a slower network link; eventually, the graphical program should appear on your local desktop. This feature can be particularly useful if your server needs third-party graphical programs to manage hardware RAID volumes or backup programs and you need to manage these tools remotely.

Configure Passwordless Authentication

If you find yourself connecting to the same machine frequently, or you want to be able to set up a script to run commands on the machine when you aren't around, you will want to set up passwordless authentication. Essentially, this requires that you set up a public and private key on your local machine and then add the public key to a particular configuration file on the remote machine. First, generate your keys on the local machine with the ssh-keygen program:

greenfly@ubuntu:~$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/greenfly/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/greenfly/.ssh/id_rsa.

Your public key has been saved in /home/greenfly/.ssh/

The key fingerprint is:

b7:db:cc:2c:81:c5:8c:db:df:28:f3:1e:17:14:cd:63 greenfly@ubuntu

(If you want to generate DSA keys instead, replace rsa with dsa in the above command.) Notice that it prompted for a passphrase. If you want to use this key for passwordless authentication, just hit Enter for no passphrase. When the program finishes, you will have two new files in your ~/.ssh/ directory called id_rsa and, which are your private and public keys, respectively.

For security reasons, the id_rsa file is set to be readable only by your user, and you should never share that file with anyone else, since anyone holding it would be able to log in to any machine that you could with that key. The public key is the one you will share with remote servers to allow you to log in passwordlessly. To do so, copy the file to the remote machine and append its contents to the ~/.ssh/authorized_keys file. You can do this step by step, or you can use this handy one-liner to do it all in one fell swoop:

$ ssh user@remotehost "cat >> ~/.ssh/authorized_keys" < ~/.ssh/

Replace user@remotehost with your username and the hostname of the remote server. You will be prompted for your password one final time, and then the key will be appended to the remote host's authorized_keys file. You should now be able to ssh to the remote machine and log in without being prompted for a password.

If you do include a passphrase when you generate the key, you will be prompted for that key's passphrase every time you log in. You can make this step almost as convenient as passwordless login by running the command ssh-agent bash on your local machine; this starts up a bash shell session under the control of an SSH agent process. You can then add your key to the agent with ssh-add. You'll be prompted once for your password, and then you can ssh to remotehost without being prompted for your password, unless you exit that shell that you started with ssh-agent.
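That sequence looks like this in practice (remotehost is a placeholder for your server):

```
$ ssh-agent bash        # start a shell session managed by a new SSH agent
$ ssh-add               # enter the key's passphrase once
$ ssh remotehost        # logins from this shell skip the passphrase prompt
```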

Copy Files Securely

Another common need is to be able to copy files between servers you are administering. While you could set up FTP on all of the servers, this is a less-than-ideal and potentially insecure solution. SSH includes within it the capability to copy files using the scp command. This has the added benefit of copying the files over a secure channel along with taking advantage of any key-based authentication you might have already set up.

To copy a file to a remote machine, type:

$ scp /path/to/file user@remotehost:


Or, if you need to copy from the remote host to the local host, reverse the two arguments:

$ scp user@remotehost:/path/to/file /path/to/destination


scp supports recursion, so if you need to copy an entire directory full of files to a remote location, use the -r argument:

$ scp -r /path/to/directory/ user@remotehost:/path/to/destination/

If you are transferring logfiles or other highly compressible files, you might benefit from the -C argument. This turns on compression, which, while it will increase the CPU usage during the copy, should also increase the speed at which the file transfers, particularly over a slow link.

Alternatively, if you want to copy a file but can't afford to saturate your upload with the transfer, use the -l argument to limit how much bandwidth is used. Follow -l with the bandwidth you want to use in kilobits per second. So, to transfer a file and limit it to 256 Kbps, type:

$ scp -l 256 /path/to/file user@remotehost:/path/to/destination

Hack 96. Build a Web Server

Serve web content using the massively popular and capable Apache web server.

Ubuntu makes an ideal web-server platform, with Apache and a huge range of supporting software available quickly and easily from the official Ubuntu archives. But just installing the software gets you only halfway there: with a few small tweaks, you can have a very flexible and capable web-hosting environment.

Install Apache

First, install Apache:

$ sudo apt-get install apache2

Then, make sure Apache is running:

$ sudo /etc/init.d/apache2 restart

The Apache installation will create a directory at /var/www, which is the document root of the default server. Any documents you place in this directory will be accessible via a web browser at http://localhost/ or the IP address assigned to your computer.

Install PHP

PHP is a server-side scripting language that is commonly used by content-management systems, blogs, and discussion forums, particularly in conjunction with either a MySQL or Postgres database:

$ sudo apt-get install libapache2-mod-php5

Restart Apache to make sure the module has loaded:

$ sudo /etc/init.d/apache2 restart

To check that the module is loaded properly, create a PHP file and try accessing it through the web server. PHP has a built-in function called phpinfo that reports detailed information on its environment, so a quick way to check if everything is working is to run:

$ sudo sh -c "echo '<?php phpinfo(); ?>' > /var/www/info.php"

and then point your browser at http://localhost/info.php to see a page showing the version of PHP that you have installed.
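If the server is headless, you can make the same check from the command line (assuming curl is installed; it's available in the curl package). If the module is active, this prints the matched text; if Apache is serving the file raw, it prints nothing:

```
$ curl -s http://localhost/info.php | grep -o 'PHP Version'
```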

One possible problem at this point is that your browser may prompt you to download the file instead of displaying the page, which means that Apache has not properly loaded the PHP module. Make sure there is a line in either /etc/apache2/apache2.conf or /etc/apache2/mods-enabled/php5.conf similar to:

AddType application/x-httpd-php .php .phtml .php3

If you make that change, you'll need to stop and start Apache manually to make sure it re-reads the configuration file:

$ sudo /etc/init.d/apache2 stop

$ sudo /etc/init.d/apache2 start

Configure Dynamic Virtual Hosting

Web servers typically host multiple web sites, each with its own virtual server, and Apache provides support for the two standard types of virtual server: IP-based and name-based:

IP-based virtual hosts

These virtual servers use a separate IP address for each web site. This approach does have some advantages, but due to the shortage of IPv4 addresses, it's usually used only as a last resort, such as when SSL (Secure Sockets Layer) encryption is required.

Name-based virtual hosts

These virtual servers share a single IP address among multiple web sites, with the server using the Host: header from the HTTP request to determine which site the request is intended for. The usual way to achieve this is to create a configuration for each virtual server individually, specifying the name of the host and the directory to use as the "root" of the web site. However, that means you have to modify the Apache configuration and restart it every time you add a new virtual server.

Dynamic virtual hosting allows you to add new virtual hosts at any time without reconfiguring or restarting Apache by using a module called vhost_alias. Enable vhost_alias by creating a symlink in Apache2's mods-enabled directory:

$ sudo ln -s /etc/apache2/mods-available/vhost_alias.load \
/etc/apache2/mods-enabled/
To allow the module to work, there are some changes that need to be made to /etc/apache2/apache2.conf to turn off canonical names, alter the logfile configuration, and specify where your virtual hosts will be located. Add or alter any existing settings to match the following:

# get the server name from the Host: header

UseCanonicalName Off

# this log format can be split per virtual host based on the first field

LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon

CustomLog /var/log/apache2/access_log vcommon

# include the server name in the filenames used to satisfy requests

VirtualDocumentRoot /var/www/vhosts/%0/web

VirtualScriptAlias /var/www/vhosts/%0/cgi-bin
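To make the %0 substitution concrete (%0 expands to the full hostname from the Host: header), here is how two hypothetical requests would map onto the filesystem under this configuration:

```
# Host: header sent by browser     document root Apache serves from
#              /var/www/vhosts/
#             /var/www/vhosts/
```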

Create the directory that will hold the virtual hosts:

$ sudo mkdir /var/www/vhosts

Create a skeleton virtual server:

$ sudo mkdir -p /var/www/vhosts/skeleton/cgi-bin

$ sudo cp -a /var/www/apache2-default /var/www/vhosts/skeleton/web

Restart apache2 so the configuration changes take effect:

$ sudo /etc/init.d/apache2 restart

You are now ready to create name-based virtual hosts by copying the skeleton to a directory named after the hostname you want it to respond to. For example, to create a new virtual server for the hypothetical hostname (a placeholder; use your own fully qualified hostname), you would simply run:

$ sudo cp -a /var/www/vhosts/skeleton /var/www/vhosts/

Any HTTP connections made to your server with the Host: header set to that hostname will now be answered out of that virtual server.
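You can also exercise a new virtual host from the command line without touching DNS or /etc/hosts by supplying the Host: header explicitly ( is again a placeholder hostname):

```
$ curl -s -H 'Host:' http://localhost/ | head
```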

To make the virtual hosts accessible to other users, you will need to put appropriate entries in a publicly accessible DNS server and have the domains delegated to it, but for a quick local test you can edit your /etc/hosts file and add an entry mapping the new hostname to your server's IP address.

Hack 97. Build an Email Server

Setting up an email server is remarkably straightforward, but there are a couple of things to be very careful of so it doesn't end up being a haven for spammers.

An email server consists of several components: an SMTP (Simple Mail Transport Protocol) server to handle mail transfer between hosts, POP and IMAP servers to give users access to mailboxes from their desktop mail clients, and often some kind of mail-filtering system for reducing spam and viruses passing through the system.

Postfix SMTP Server

There are many SMTP servers available in Ubuntu, and many administrators have their own personal preference, but the Postfix SMTP server is a good general-purpose choice that is fast, secure, and extensible:

$ sudo apt-get install postfix

The installation process will ask some questions about how the system will operate. Select Internet Site as the operation mode and set Mail Name to your domain.

Once the package has been installed, open /etc/postfix/ in an editor and find the line that begins with mynetworks (by default it covers only the loopback network):

mynetworks =

To allow computers on your network to send outgoing email through the server, you need to add your network range to the mynetworks value. For example, if your network is the class-C range 192.168.1.0/24 (a placeholder; substitute your own range), you would edit the line to read:

mynetworks = 192.168.1.0/24

This setting is critical to preventing your mail server being used as a relay by spammers, so only add network ranges that you trust.

When mail is delivered to a local user, it can be stored in several different ways. The older and most common approach is the mbox format, which stores all mail in a single file for each user, but the performance of the mbox format falls off dramatically with large mail volumes. Most newer mail systems use the maildir format, which stores messages in individual files nested inside directories. Postfix can handle either format equally well. Add this line to to use the maildir format:

home_mailbox = Maildir/

The Maildir/ value is appended to the home directory path of the recipient, and the trailing slash indicates to use the maildir format for storage.
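A maildir is just three subdirectories; messages arrive in tmp, move to new, and are renamed into cur once a mail client has seen them. This sketch builds the layout in a scratch location so you can see what Postfix will create under each user's home directory:

```shell
# Create the tmp/new/cur layout of a maildir in a scratch directory
# (real deliveries would go to ~/Maildir)
MD=$(mktemp -d)/Maildir
mkdir -p "$MD/tmp" "$MD/new" "$MD/cur"
ls "$MD"
```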

Finally, look for a line that starts with mydestination =. Mail for all domains listed in this line will be accepted by your mail server, and local delivery will be attempted, so if you will host mail for multiple domains, add them here.

Restart Postfix to make your changes take effect:

$ sudo /etc/init.d/postfix restart

If you will be using your mail server only as an outbound mail gateway, that's all you need to do. Configure your email client to use your mail server for outbound mail and try sending a message to an external email account.

If the message doesn't come through, try "putting a tail" on the Postfix logfile to see what went wrong, and adjust your configuration as necessary:

$ sudo tail -f /var/log/mail.log

Reduce Spam with Greylisting

There are a variety of methods to protect your users from spam, but unfortunately no magic solution produces absolutely no false positives or negatives. Greylisting is one approach that requires very little ongoing maintenance and has a very high success rate, with very few false positives (valid messages mistakenly rejected).

Greylisting works on the premise that valid mail servers will attempt redelivery of mail if they receive a "temporarily unavailable" error from the destination server, while spam hosts and viruses will typically attempt delivery only once and then move on to the next target. This means legitimate mail from a remote system will be delayed, but afterwards your mail server will remember that the sender is valid and let the mail straight through. The delay on the first message can be inconvenient, but on the whole, greylisting is one of the most successful spam-mitigation techniques currently available. To take advantage of greylisting, install Postgrey:

$ sudo apt-get install postgrey

Postgrey runs as a daemon on your mail server on port 60000, so configure Postfix to use it as a delivery policy service. Open /etc/postfix/ and add an entry for the service to your recipient restrictions (if you already have an smtpd_recipient_restrictions line, append the check_policy_service entry to it):

smtpd_recipient_restrictions = permit_mynetworks,
reject_unauth_destination,
check_policy_service inet:

Then restart Postfix and put a tail on the Postfix logfile before sending a test message to the system from an external mail server. On the first delivery attempt, you will see the message rejected with a nonfatal error, and then after five minutes your mail server will allow the message to be delivered. Subsequent messages from the same remote system will be delivered immediately.

Activity Reporting

To see how much traffic your mail server is handling, install the mailgraph package and start it up:

$ sudo apt-get install mailgraph

$ sudo /etc/init.d/mailgraph start

Mailgraph watches mail-server activity and logs it in an extremely efficient database, and then builds graphs that you can access through a web browser at http://yourhost/cgi-bin/mailgraph.cgi. By default, the graphs are accessible from anywhere, so if you prefer to keep them secret, you may wish to restrict access to them using an Apache .htaccess file or with explicit access control in the Apache configuration.
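As a sketch, an Apache access restriction for the graphs might look like this, assuming (hypothetically) that your LAN uses the 192.168.0.0/24 range:

```
# .htaccess or Apache config -- the LAN range is a placeholder
<Files mailgraph.cgi>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1 192.168.0.0/24
</Files>
```

This uses the Order/Deny/Allow directives of Apache 1.3/2.x, which is what shipped with Ubuntu releases of this era.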

POP and IMAP Services

To allow users to collect mail from the server, you need to run IMAP and/or POP services. Once again, there is a variety of alternatives, each of which has advantages and disadvantages, but the Courier suite provides very simple setup and natively supports maildir format:

$ sudo apt-get install courier-imap courier-imap-ssl \

courier-pop courier-pop-ssl

If you configured Postfix to use maildirs, as described above, you don't need to make any changes to the Courier configuration: it will automatically detect the maildirs, and everything should just work.

Hack 98. Build a Caching Proxy Server

If you have multiple computers on your network, you can save bandwidth and improve browser performance with a local proxy server.

A proxy server sits on your network; intercepts requests for HTML files, CSS files, and images; and keeps a local copy handy in case another user wants to access the same file. If multiple users visit the same site, a proxy server will save bandwidth by not downloading everything to your local network for each user individually, and performance will be improved because objects will come from the local network instead of the Internet.

The Squid Web Proxy Cache is a full-featured proxy cache for Linux and Unix.

Basic Squid Setup

Install the Squid caching proxy:

$ sudo apt-get install squid

The installation process will automatically create a directory structure in /var/spool/squid where downloaded objects will be stored. Old objects will be cleaned out automatically, but if you run a busy proxy server, it can still use up a lot of disk space, so make sure you have plenty of room available.

Squid's default configuration file /etc/squid/squid.conf is one of the longest and most verbosely commented in the entire history of software: over 3,000 lines, with an extensive explanation for every possible config option. It's easy to get lost in it, so, to get started, here are some basic options you need to look for.

Around line 1,890 are some options that trip up most first-time Squid administrators. Squid implements ACLs (Access Control Lists) to determine who is allowed to connect through the proxy. By default, the only system allowed to connect is localhost:

#acl our_networks src

#http_access allow our_networks

http_access allow localhost

To allow machines on your network to connect, you need to uncomment and edit the our_networks definition to include the IP address range of your local network, and uncomment the line that permits the our_networks ACL to use the proxy. The end result will probably be something like this:

acl our_networks src

http_access allow our_networks

http_access allow localhost
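Purely for illustration, assuming a LAN on the private range 192.168.0.0/24, the uncommented lines might end up reading:

```
# 192.168.0.0/24 is a placeholder -- substitute your own network range
acl our_networks src 192.168.0.0/24
http_access allow our_networks
http_access allow localhost
```

Be careful to keep the range as narrow as your network allows: an overly broad ACL can turn your cache into an open proxy.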

Then go to approximately line 53 to find the http_port option:

# http_port 3128

This option specifies the port that Squid will listen on. 3128 is a good default, but some proxies run on port 8080 or even port 80, so you may prefer to change the value and uncomment it.

Once you are satisfied with your changes, restart Squid:

$ sudo /etc/init.d/squid restart

Restarting Squid can take a while on an active proxy because it waits for existing connections from clients to close cleanly before restarting.

You can test your proxy by manually updating your Firefox configuration to connect through it. In Firefox, go to Edit→Preferences→General→Connection Settings, select "Manual proxy configuration," and put in the details for your proxy server, as shown in Figure 10-1.

Figure 10-1. Browser proxy settings

To see the activity passing through the proxy, put a tail on the Squid logfile and then try accessing a web site. Squid stores its access logs in /var/log/squid, so run:

$ sudo tail -f /var/log/squid/access.log

to have tail "follow" the end of the logfile. If the web page loads normally and you also see entries appear in the logfile, then congratulations, Squid is working!

Proxy Traffic Reports

The popular web-server-analysis program Webalizer can read Squid logfiles natively. Install Webalizer:

$ sudo apt-get install webalizer

Then use your favorite text editor to open /etc/webalizer.conf, and look around line 36 for an entry like this:

LogFile /var/log/apache/access.log.0

Change it to reference Squid's rotated logfile:

LogFile /var/log/squid/access.log.0

Around line 42, you will see the option to set the directory where the report will be created. If you have a default Apache installation on your proxy server, you shouldn't need to change the default setting, but if your web document root is in an alternative location or you already have a report being generated for your web server, you may need to change it:

OutputDir /var/www/webalizer

If you've only just installed and tested Squid, you probably won't have a rotated logfile yet, so manually rotate the file:

$ sudo /etc/init.d/squid stop
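A complete manual rotation, under the assumption that Squid's access log lives at /var/log/squid/access.log (as shown earlier in this hack), might look like:

```
sudo /etc/init.d/squid stop
sudo mv /var/log/squid/access.log /var/log/squid/access.log.0
sudo /etc/init.d/squid start
```

Stopping Squid first ensures the logfile is closed cleanly before it is moved.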

The output directory is not created automatically, so you'll need to do it manually:

$ sudo mkdir /var/www/webalizer

Now run Webalizer:

$ sudo webalizer

When it's finished, you'll find a bunch of files in /var/www/webalizer, and you should be able to view the report by pointing your browser at

Peering Proxies

If your ISP provides a proxy, you can chain it together with your Squid proxy. Your local clients will connect to your proxy, which in turn will use your ISP's proxy. In the Squid configuration file, go to about line 190 and add a line similar to:

cache_peer parent 3128 0 no-query

where the first argument is the address of your ISP's cache. The parent setting tells your proxy to treat this as an upstream source rather than a local peer. You may need to change the 3128 setting if your ISP uses a different proxy port. The 0 and no-query values tell your proxy not to use ICP (Internet Cache Protocol) to communicate with the cache. ICP is a protocol typically used when multiple proxies run in parallel as a load-sharing group, and allows them to communicate cache state to each other very rapidly.
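With a hypothetical upstream cache at proxy.example.net, the line might read:

```
# proxy.example.net is a placeholder for your ISP's proxy hostname
cache_peer proxy.example.net parent 3128 0 no-query
```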

Restart Squid, put a tail on the logfile again, and try accessing a popular site. If the upstream proxy already had some of the items in its cache, you should see this reported as PARENT_HIT in your proxy log.

Hack 99. Build a DHCP Server

Use a DHCP server to automatically configure the network settings for all computers on your network.

DHCP (Dynamic Host Configuration Protocol) dramatically simplifies the connection of new computers to your network. With a properly configured DHCP server, any new computers you connect will automatically be assigned an IP address, the address of your router, and nameserver addresses. And, to really make things easy on yourself, you can link your DHCP server to the BIND9 DNS server and have new computers automatically assigned hostnames that map correctly to their dynamically assigned IP addresses.

Install the DHCP Daemon

First, make sure you don't already have a DHCP server running on your network; two servers providing conflicting information is a recipe for obscure network problems! Install the Internet Software Consortium (ISC) DHCP server:

$ sudo apt-get install dhcp3-server

Basic Configuration

Open the configuration file /etc/dhcp3/dhcpd.conf, where you will see various configuration options that apply both globally and to specific subnets. The majority of the sample options included in the file are quite self-explanatory, so put appropriate entries in the global settings, and then add a basic stanza for your network:

subnet netmask {


option routers;


The range setting specifies the pool of IP addresses to use when new computers connect to your network, and the routers option is passed on so they can add a default route to use to connect to the Internet.
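As a sketch, a stanza for a hypothetical 192.168.0.0/24 network with a router and nameserver at 192.168.0.1 might look like:

```
# /etc/dhcp3/dhcpd.conf -- all addresses here are placeholders
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.200;
    option routers 192.168.0.1;
    option domain-name-servers 192.168.0.1;
}
```

Keeping the dynamic range well inside the subnet leaves room for statically assigned addresses below 192.168.0.100.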

Assign Addresses to Specific Hosts

Sometimes it can be helpful to force specific IP addresses to be associated with certain hosts, such as printers. When a host connects to the DHCP server, it provides the MAC (Media Access Control) address of the network interface, and the DHCP server can then use that to associate the host with a specific configuration.

If you don't know the MAC address of your computer, you can find it printed on a label on most Ethernet cards; network printers often have it labeled somewhere near the Ethernet connector. On Linux, you can obtain it using ifconfig:

$ /sbin/ifconfig eth0 | grep HWaddr

Back on the DHCP server, open /etc/dhcp3/dhcpd.conf and add a stanza near the end for each host:

host workstation51 {

hardware ethernet 08:00:07:26:c0:a5;



Make sure the fixed-address values you set don't fall within a range that has been nominated for general dynamic assignment.
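A complete stanza, using the MAC address shown above and a hypothetical fixed address outside the dynamic pool, might be:

```
# 192.168.0.51 is a placeholder -- pick an address outside your dynamic range
host workstation51 {
    hardware ethernet 08:00:07:26:c0:a5;
    fixed-address 192.168.0.51;
}
```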

Finally, restart the DHCP server so your configuration will take effect:

$ sudo /etc/init.d/dhcp3-server restart

Hacking the Hack

DNS provides a hostname-to-IP-address resolution service so you don't need to care what actual IP address has been assigned to a computer, but DHCP allows IP addresses to be dished out semi-randomly to machines on your network, which makes it very hard to maintain sensible DNS entries. However, if you use BIND9 to build a domain name server [Hack #100], you can link it to your DHCP server and have DNS records updated automatically each time a computer joins or leaves your network.

First, get your DNS and DHCP servers functioning correctly independently. Once you are happy that they are doing what they are meant to, open the BIND9 configuration file (/etc/bind/named.conf.options) and add a new stanza at the end:

controls {

inet allow {localhost; } keys { "rndc-key"; };
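For illustration, a complete version of this stanza for the common case where DNS and DHCP run on the same machine might look like:

```
// /etc/bind/named.conf.options -- sketch for the same-machine case
controls {
    inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
};
```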


The localhost setting specifies that only local processes are allowed to connect, and rndc-key is the name of a secret key that will be used to authenticate connections. The actual key is stored in /etc/bind/rndc.key, which is pre-populated with a randomized key value when the bind9 package is installed. If your DNS and DHCP servers are on the same physical machine, these settings will work nicely, but if they are on different machines, you will need to tell BIND to allow connections from your DHCP host and copy the key file across. Open /etc/bind/named.conf.local, add forward and reverse zones for your local network, and specify that these zones can be updated by clients that know the secret key:

zone "" {

type master;

file "/etc/bind/zones/";

allow-update { key "rndc-key"; };

notify yes;


zone "" {

type master;

file "/etc/bind/zones/192.168.0.hosts";

allow-update { key "rndc-key"; };

notify yes;


Set up both zone files (such as 192.168.0.hosts for the reverse zone) as usual, including any statically assigned hostname values.
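Completed versions of these stanzas, for a hypothetical example.com domain on a 192.168.0.0/24 network, might look like:

```
// /etc/bind/named.conf.local -- example.com and the file paths are placeholders
zone "example.com" {
    type master;
    file "/etc/bind/zones/example.com.hosts";
    allow-update { key "rndc-key"; };
    notify yes;
};

zone "0.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/192.168.0.hosts";
    allow-update { key "rndc-key"; };
    notify yes;
};
```

Note that the reverse zone name is the network portion of the address written backwards, followed by in-addr.arpa.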

You also need to tell BIND to load the key file, so after the zone stanzas, add an include line:

include "/etc/bind/rndc.key";

Once you restart BIND, it will be ready to accept dynamic zone updates:

$ sudo /etc/init.d/bind9 restart

Your DHCP server now needs to be told to send update notifications to your DNS server. Open /etc/dhcp3/dhcpd.conf and add these entries to the top of the file:

server-identifier server;

ddns-updates on;

ddns-update-style interim;

ddns-domainname "";

ddns-rev-domainname "";

ignore client-updates;

include "/etc/bind/rndc.key";

zone {


key rndc-key;


You may need to comment out existing settings that conflict, such as the ddns-update-style none; option included in Ubuntu's default DHCP configuration.
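Put together for a hypothetical example.com domain and 192.168.0.0/24 network, with BIND on the same machine, the block might read:

```
# /etc/dhcp3/dhcpd.conf -- domain names and addresses are placeholders
server-identifier server;
ddns-updates on;
ddns-update-style interim;
ddns-domainname "example.com.";
ddns-rev-domainname "in-addr.arpa.";
ignore client-updates;
include "/etc/bind/rndc.key";

zone example.com. {
    primary 127.0.0.1;
    key rndc-key;
}

zone 0.168.192.in-addr.arpa. {
    primary 127.0.0.1;
    key rndc-key;
}
```

The primary setting points at the DNS server that should receive the dynamic updates; if BIND runs on a different host, use its address instead of 127.0.0.1.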

Restart DHCP to apply your changes:

$ sudo /etc/init.d/dhcp3-server restart

From now on, any hosts that register themselves with DHCP will also be automatically added to your DNS zone in memory.

Hack 100. Build a Domain Name Server

Run your own DNS server to map hostnames to IP addresses.

The Domain Name System (DNS) is a distributed directory service that maps machine hostnames to IP addresses and vice versa. DNS allows hostnames to be just "pointers" to the actual network location of the server, providing a consistent human-readable hostname even if the actual IP address changes.

Understand DNS in 60 Seconds

The reason DNS is called a "distributed" service is that there is no single machine that contains a comprehensive lookup table for the entire Internet. Instead, DNS functions as a tree, with root servers distributed around the world that look after the top-level domains (TLDs) such as .com. The area that each nameserver is responsible for is called a zone, and the details of each zone are typically stored in a configuration file called a zonefile.

At each level, a DNS server can delegate authority for part of its zone, so the root servers delegate authority for .au to certain Australian nameservers, which in turn delegate authority for second-level domains such as .com.au to other nameservers, which then delegate authority for individual domains (Jonathan Oxer's, for example) to their owners' nameservers, which finally manage specific host records and provide mappings to IP addresses. The system is very hierarchical and allows for the management of specific hostname data by delegating it right out to the edges of the Internet.

Note that there is nothing stopping you from setting up a domain name server and putting any data you like in it: you could add a record for someone else's hostname pointing at any server you chose, and if you used that DNS server, that's exactly what you would see. However, unless your DNS server is part of the global namespace that comes under the authority of the root nameservers, nobody else will ever see the records you put in it. It's not enough to just set up a nameserver: you need to have domains "delegated" to your nameserver by the appropriate upstream authority so that other computers will know your server is authoritative for that domain. Otherwise, you are just running an orphan zone.

There are, however, complete alternative DNS namespaces that have been set up outside the usual root servers, but they are accessible only to people using nameservers that have been specially reconfigured. Using these alternative namespaces, you can register domains in .indy, .parody, .job, and even .www, but you'll be cut off from the vast majority of users on the Net.

One final point to be very careful of is that, strictly speaking, domain names actually end in a "." (period) character, although the final dot is implied and most Internet users don't even realize that it should be there. Try it yourself: point your browser at a URL with the final dot included and see what happens. If everyone were being technically correct, that final dot would be included on all URLs, but things just work anyway because DNS servers assume we're just being lazy and treat our URLs as if the dot were silently appended. To most people, that's just a piece of useless Net trivia, but once you start configuring your own DNS server, it becomes critical, so keep it in mind.

DNS is actually a very complex subject that can't really be understood in a mere 60 seconds, but the critical things to remember are that it's structured as a hierarchical tree starting from ".", that zones are delegated down the nameserver hierarchy and become more specific at each level, and that zones can map hostnames to IP addresses and vice versa.

Authoritative and Recursive Lookups

When a computer needs to look up a hostname and convert it to an IP address, there are two types of lookups that can be performed.

An authoritative lookup is a query to a nameserver that can answer the request directly from its own knowledge of what does or does not exist in that zone. For example, if you queried the root nameservers for a host in O'Reilly's zone, they would not be able to answer authoritatively because they do not contain specific information about the hosts in that zone. However, a query to O'Reilly's own nameservers would return an authoritative answer. Nameservers at hosting companies typically spend most of their time providing authoritative answers about zones they manage.

A recursive lookup involves a query to a nameserver that does not specifically know about the requested hostname, but which can then work through the DNS tree to obtain the answer before returning it to the requesting computer. Nameservers at ISPs typically spend most of their time performing recursive lookups on behalf of users rather than serving authoritative answers.

Authoritative and recursive lookups are actually totally different operations, so there is specialized DNS server software available for each type of query. It's not uncommon for a DNS server to run two different packages to handle authoritative and recursive queries.

Install BIND9

For a general-purpose DNS server, a good software choice is BIND (the Berkeley Internet Name Daemon), which is a very popular DNS server that can handle both authoritative and recursive lookups natively:

$ sudo apt-get install bind9

If all you want is a recursive DNS service, that's actually all you need to do. If you look in /etc/bind/db.root, you'll find that BIND has been seeded with the latest IP addresses of the root nameservers, allowing it to look up delegation information and issue recursive lookup requests on behalf of other computers right out of the box.

You can test it from a Linux machine without changing your nameserver configuration by specifying the address of your DNS server and performing a manual lookup using the nslookup tool, which is in the dnsutils package:

jon@jbook:~$ nslookup



Non-authoritative answer:



As you can see, the result was returned nonauthoritatively because the server had to refer to an external source to obtain the answer. If that worked, you can edit /etc/resolv.conf on your workstations and have them use your DNS server for lookups.

Create an Authoritative Forward Zone

Authoritative nameservers come in two types: master and slave. A master nameserver is explicitly configured with all the details of the zones it manages, while a slave is simply told the names of the zones and pointed at a master to periodically refresh its locally cached copies of the zone by performing a zone transfer. In this hack, you'll learn how to configure a master nameserver.

In fact, there's nothing to stop you running all your nameservers as masters; as long as you keep all their configurations synchronized, everything will work fine. External machines doing lookups don't know the difference between a master and a slave: a master/slave setup is purely a convenience issue.

The master BIND configuration file is /etc/bind/named.conf. Rather than modify it directly, though, it's best to keep your customizations in separate files and have them "included" into the main configuration. The default installation on Ubuntu includes /etc/bind/named.conf.local, which you can use to define your own zones.

To keep everything neat, create a subdirectory in which to store your actual zone files:

$ sudo mkdir /etc/bind/zones

Now create a zone file for your zone named after the zone itself, such as /etc/bind/zones/, and put in the file something like the following:

 IN SOA (

2001061407 ; serial

10800 ; refresh

3600 ; retry

432000 ; expire

38400 ) ; ttl
 IN NS
 IN NS
 IN MX 30
 IN A
 IN A

The first line specifies the zone, the Start Of Authority nameserver, and the administrative contact. Notice that the @ symbol in the contact address is replaced by a dot in the zone file: BIND treats the first item in the string as the username and the rest as the domain. The subsequent values specify how the zone should be treated by other nameservers, such as how long results can be cached.

The NS records specify the nameservers that are authoritative for this zone, the MX record specifies the mail exchange host for this domain along with a priority from 1 to 100 (lower numbers indicating higher priority), and the A records map specific hostnames to IP addresses.

Note that the full hostnames in the zone file all end in a period, and this is where properly specifying hostnames becomes critical. You might leave the dot off the end of URLs when you type them into your browser, but you can't be ambiguous in the zone file! If you leave the final dot off, BIND assumes the hostname has not been explicitly terminated and appends the domain to it, leaving you with the domain doubled up on the end of the address. You can take advantage of this behavior by deliberately leaving off the domain entirely and specifying just the first part of the hostname without a trailing dot:

www IN A
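Putting the pieces together, a complete zone file for a hypothetical example.com zone (with placeholder nameserver, mail, and web addresses) might look like:

```
; /etc/bind/zones/example.com.hosts -- all names and addresses are placeholders
$TTL 38400
example.com. IN SOA ns1.example.com. hostmaster.example.com. (
            2001061407 ; serial
            10800      ; refresh
            3600       ; retry
            432000     ; expire
            38400 )    ; ttl
example.com.     IN NS  ns1.example.com.
example.com.     IN NS  ns2.example.com.
example.com.     IN MX  30 mail.example.com.
ns1.example.com. IN A   192.0.2.1
ns2.example.com. IN A   192.0.2.2
mail.example.com. IN A  192.0.2.3
www              IN A   192.0.2.4
```

The $TTL directive at the top keeps BIND9 from complaining about a missing default TTL, and the unqualified www record shows the shorthand described above.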

For BIND to know about your new zone file, you need to edit /etc/bind/named.conf.local and add an entry at the end similar to:

zone "" {

type master;

file "/etc/bind/zones/";


Then reload BIND:

$ sudo /etc/init.d/bind9 reload

Now if you try a query against the nameserver for a host in your zone, you will see the result shows your IP address and isn't flagged as "nonauthoritative":

jon@jbook:~$ nslookup





Firewall Rules

If you set up a firewall [Hack #69], you will need to add specific rules to allow queries from external machines to reach it. DNS queries are sent on port 53 using UDP by default, falling back to TCP if the request packet exceeds 512 bytes in size. You therefore need to allow both UDP and TCP on port 53 through your firewall to your DNS server.
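Assuming an iptables-based firewall (chain and interface names will vary with your particular setup), a sketch of the necessary rules might be:

```
# allow DNS queries over both UDP and TCP on port 53
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
```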



The tool on the cover of Ubuntu Hacks is a tuning fork. This device, used primarily to tune musical instruments, is a two-tined, U-shaped metal bar that emits a pure tone of a specific pitch when struck against an object. It was invented in the early 18th century by British musician John Shore, a trumpeter in the employ of King George I. The "pitch fork," as Shore referred to it, has endured in modern times as an extremely useful tool for orchestral musicians. Its sonic properties have also been harnessed for sundry purposes, ranging from timekeeping in quartz watches to sonopuncture therapy.

The cover image is a stock photo from Photodisc Images. The cover font is Adobe ITC Garamond. The text font is Linotype Birka; the heading font is Adobe Helvetica Neue Condensed; and the code font is LucasFont's TheSans Mono Condensed.