UNIX: The Complete Reference (2007)

Part V: Tools and Programming

Chapter 20: Shell Scripting

Overview

By now you are familiar with using the shell interactively to enter commands. In addition to being a command interpreter, however, the shell is a full-fledged programming language. A program written in the shell language (or, as some users would say, written “in shell”) is often called a shell script. A shell script is just a sequence of commands that have been saved in a file. In fact, any commands you might enter at the command line can be made into a script, and any script that you might write can also be executed just by entering the commands in the file at the command line. This makes basic shell programming very easy to learn, even if you have never programmed before. The shell configuration files (such as your .profile or .bashrc) are examples of shell scripts.

This chapter shows you how to program in shell, including how to:

§ Write and execute simple shell scripts

§ Include UNIX System commands in shell programs

§ Use shell features like variables and I/O redirection in your scripts

§ Pass arguments and parameters to shell scripts

§ Make logical tests and execute commands based on their outcome

§ Use branching and looping operators

§ Use arithmetic expressions in shell programs

This chapter covers Bourne shell (sh) style scripting only. This includes the ksh and bash shell languages. It will not cover scripting in csh or tcsh. These shells are much less commonly used for scripting than the Bourne-compatible shells, and in fact they lack some features that are important for scripting. If you use csh or tcsh as your interactive shell, you can still use sh to run the scripts described in this chapter. See the section “Other Ways to Execute Scripts” for details.

The Shell Language vs. Other Programming Languages

The shell is a high-level programming language, meaning that you do not have to worry about complex tasks such as memory management. This makes it easier to learn than a systems programming language such as C or C++. Shell programs are generally faster to write than corresponding C programs, and they are often easier to debug. However, C programs almost always run faster and more efficiently. Therefore, shell scripting and C programming are used for very different tasks. For quickly writing relatively short tools, shell is a much better choice, but for large systems programming projects, C is clearly superior.

One important feature of shell scripts is that they are interpreted rather than compiled. This means that when you run a shell script, the shell program itself is invoked to run the commands in your file. You can easily test shell scripts as you write them just by running them from the command line. In contrast, compiled languages such as C are written in source files, which must be converted to binary executables before they can be run. You cannot create binary executables from your shell scripts.

In comparison to other scripting languages, such as Perl, Python, or TCL, the shell programming language is tightly integrated into UNIX. It is designed to allow you to call UNIX commands and tools from within your scripts. This means that you already know many of the commands for writing shell scripts, since they are the UNIX commands you use frequently. If you are writing a script that relies heavily on existing UNIX commands, shell is an excellent choice.

However, the shell language was largely written when the Bourne shell was released in 1978. Because it is so important for shell scripts to be backward compatible (since shell scripts are used for so much existing UNIX code), the language cannot evolve as much as other scripting languages. This means that shell scripting lacks many new and powerful features that other languages have introduced more recently. Shell scripting remains an excellent introduction to scripting (and programming in general), and it can be the fastest choice when writing short UNIX-based scripts, but if you find yourself writing longer or more complex programs, you will eventually want to explore other languages (such as Perl or Python).

A Sample Shell Script

A common use of shell programs is to assemble an often-used string of commands. For example, suppose you are writing a long article that has been formatted for use with nroff and the related tools tbl and col. When you want to print a proof copy of your article, you have to enter a command string like this:

$ cat article | tbl |

> nroff -cm -rA2 -rN2 -rE1 -rC3 -rL66 -rW67 -rO0 |

> col | lp -dpr2

Clearly, typing this entire command sequence, and looking up the options each time you wish to proof your article, is tedious. You can avoid this effort by putting the list of commands into a file and running that file whenever you wish to proof the article. In this example, the file is called proof:

$ cat proof

cat article | tbl | nroff -cm -rA2 \

-rN2 -rE1 -rC3 -rL66 -rW67 -rO0 |

col | lp -dpr2

The backslash (\) at the end of the first line of output indicates that the command continues over to the next line. The shell automatically continues at the end of the second line, because a pipe (|) cannot end a command.

Executing Your Script

The next step after creating the file is to make it executable. This means setting the read and execute permissions on the file so that the shell can run it.

If you attempt to run a script that is not executable, you will get an error message like

sh: proof: Permission denied

To give the proof file read and execute permissions for all users, use the chmod command:

$ chmod +rx proof

Now you can execute the command by typing the name of the executable file. For example,

$ ./proof

if the script is in your current directory, or

$ proof

if it is in a directory in your PATH. At this point, all of the commands in the file will be read by the shell and executed just as if you had typed them.

Other Ways to Execute Scripts

The preceding example shows the most common way to run a shell script: treating it as a program and executing the command file directly. However, there are other ways to execute scripts that are sometimes useful.

Specifying Which Shell to Use

Many scripts start with a line that looks like this:

#!/bin/sh

When you run a script like this, your shell reads the first line and interprets it to mean “run this script with /bin/sh.” This means that regardless of which shell you are using when you run the script, it will always be interpreted by sh. Since some scripts may not be compatible with all shells, this can help make your scripts more portable. For example, you could run this script even if you are using tcsh, and it will still work properly.

Note, by the way, that this works with any program, not just /bin/sh. You could use the line #!/bin/bash to make your script run under bash. A Perl script might start with #!/usr/bin/perl, or a Python script with #!/usr/bin/python.

Explicitly Invoking the Shell

In all of the examples we have seen so far, your shell automatically starts a new subshell that reads and executes your script. You can explicitly start a subshell to run a script like this:

$ sh scriptname

This will start an instance of sh that runs the commands in scriptname. When scriptname terminates, the subshell dies, and the original shell awakens and returns a system prompt. Because you are not executing the file scriptname directly, you do not need execute permission for it, although it must still be readable. Note that this will work even if sh is not your current shell.

Running Scripts in the Current Shell

When you run a script in a subshell, as all of the examples so far have done, the commands that are executed cannot change your current environment. For example, suppose you make some changes to your .profile, such as adding new environment variables or defining some aliases, and you want to test them. You could do

$ ~/.profile

if the file is executable, or

$ ksh ~/.profile

if it is not. But in either case, the changes to your environment are lost as soon as the script finishes and the subshell exits. Instead, you should use

$ . ~/.profile

The . (dot) command is a shell command that takes a filename as its argument and causes your current shell to read and execute the commands in it. Any changes to your current environment will remain even after the script is completed. When run with the . command, scripts do not need execute permission, only read permission.

Putting Comments in Shell Scripts

You can insert comments into your scripts to help you recall what they are for. Comments can also be used to document complex sections of a script, or to help other users understand how a script works. Providing good comments can make your programs more maintainable, meaning that they are easy to edit in the future. Adding comments does not affect the speed or performance of a shell program.

A comment begins with the # (pound) sign. When the shell encounters a statement beginning with #, it ignores everything from the # to the end of the line. (The only exception is when the first line in a file begins with #!. As discussed previously, this causes the shell to execute the file with a specific program.) This example shows how comments may be used to clarify even a relatively short script:

#!/bin/sh

#

# backupWork - a program to back up some important files and directories

# Version 1, Aug 2006

#

# Get the current date in a special format

# On Sept 27, 2006 at 9:05 pm,

# this would look like 2006.09.27.09:05:00

TIMESTAMP=`date +%Y.%m.%d.%T`

# Create the new backup directory

# Could look like ~/Backups/Backup.2006.09.27.09:05:00

BACKUPDIR="~/Backups/Backup.$TIMESTAMP"

mkdir $BACKUPDIR

# Copy files to new directory

cp -r ~/Work/Project $BACKUPDIR

cp -r ~/Mail $BACKUPDIR

cp ~/important $BACKUPDIR

# Send mail to confirm that backup was done

echo "Backup to $BACKUPDIR completed." | mail $LOGNAME

Working with Variables

You can create variables in your scripts to save information. These work just like the shell variables described in Chapter 4. You can set or access a variable like this:

MESSAGE="Hello, world"

echo $MESSAGE

Recall that echo prints its arguments to standard output. The section “Shell Input and Output” will explain more about printing to the screen.

You need the $ in $MESSAGE if you want to print the value. The line echo MESSAGE will just print the word “MESSAGE”. This is different from languages like C, which do not require a $ when printing a variable, and also from Perl, which always requires a $ or other symbol in front of variable names.

You can also use your shell environment variables in your scripts. For example, you might want to create a script that configures your environment for a special project, like this:

$ cat dev-config

DEVPATH=/usr/project2.0/bin:/usr/project2.0/tools/bin:$HOME/dev/project2.0

export DEVPATH

cd $HOME/dev/project2.0

This script uses the value of the shell environment variable $HOME. It also sets a new variable, called DEVPATH. If you want DEVPATH to become a new environment variable, and the cd command to change your current directory, you will have to run the script in the current shell, like this:

$ . ./dev-config

You can use environment variables to pass information to your scripts, as in this example, which uses the environment variable ARTICLE to pass information to the proof script we saw earlier:

$ cat proof

cat $ARTICLE | tbl | nroff -cm -rA2 \

-rN2 -rE1 -rC3 -rL66 -rW67 -rO0 |

col | lp -dpr2

$ export ARTICLE=article2

$ ./proof

A better way to get information to your scripts is with command-line arguments, which will be explained later in this chapter. Alternatively, you can get input directly from the user with read, which is discussed in the section “Shell Input and Output.”

Special Variable Expansions

When the shell interprets or expands the value of a variable, it replaces the variable with its value. You can perform a number of operations on variables as part of their expansion. These include specifying a default value and providing error messages for unset variables.

Grouping Variable Names

While $VARNAME is usually more convenient, you can also get the value of a variable with ${VARNAME}. This can be useful when you want to concatenate the variable with other information. For example,

NEWFILE=$OLDFILExxx

will set NEWFILE to the value of the variable OLDFILExxx. Since this variable probably doesn’t exist, NEWFILE will be empty. Instead, you can use

NEWFILE=${OLDFILE}xxx

which will set NEWFILE to the value of OLDFILE with “xxx” added on to the end.

Providing Default Values

At times you may want to use a variable without knowing whether or not it has been set. You can specify a default value for the variable with this construct:

${VARIABLE:-default}

This will use the value of VARIABLE if it is defined, and the string default if it is not. It does not set or change the variable.

For example, in the proof script shown earlier, the environment variable ARTICLE might not be defined. If you replace $ARTICLE as shown,

cat ${ARTICLE:-article} | tbl | nroff -cm -rA2 \

-rN2 -rE1 -rC3 -rL66 -rW67 -rO0 |

col | lp -dpr2

the script will format and print the file article by default when ARTICLE is undefined.

A related operation assigns a default to an unset variable. The syntax for this is

${VARIABLE:=value}

If VARIABLE is null or unset, it is set to value. If it already has a value, it is not changed.
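For example, this fragment uses the : (colon) command, which does nothing except expand its arguments, to assign a default editor only when EDITOR is unset or null. It is a minimal sketch of a common idiom:

# If EDITOR has no value, set it to vi; otherwise leave it alone
: ${EDITOR:=vi}
echo "Using $EDITOR"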

Giving an Error Message for a Missing Value

Occasionally, you may not want a shell program to execute unless all of the important parameters are set. For example, a program may have to look in various directories specified by your PATH to find important programs. If the value of PATH is not available to the shell, execution should stop. You can use the form

${VARIABLE:?message}

to do this. When VARIABLE is not set, this will print message and exit. For example,

echo ${PATH:?warning: PATH not set}

will print the value of the PATH variable if it is set. If PATH is not defined, the script exits with an error, and the message “warning: PATH not set” is printed to standard error.

If you do not specify an error message, as in,

${PATH:?}

a generic message will be displayed, such as

sh: PATH: parameter null or not set

In the variable expansion examples just presented, the colon (:) and curly braces ({}) are optional. It is a good idea, however, to always make a point of using them, since they help make your scripts more readable and can prevent certain bugs.

Special Variables for Shell Programs

The shell provides a number of special variables that are useful in scripts. These provide information about aspects of your environment that may be important in shell programs. The shell also uses special variables, including the values $* and $#, to pass command-line arguments to your scripts. These variables will be discussed in a later section.

The variable ? is the value returned by the most recent command. When a command executes, it returns a number to the shell. This number indicates whether it succeeded (ran to completion) or failed (encountered an error). By convention, 0 is returned by a successful command, and a nonzero value is returned when a command fails. In the section “Conditional Execution,” you will learn how to test whether the last command was successful by checking $?.

The variable $ contains the process ID of the current process (the shell that is running your script). This can be used to create a temporary file with a unique name. For example, suppose you write a script that uses the find command, which often prints messages to standard error. You might want to capture the error messages in a file rather than printing them on the screen, but you need to pick a filename that does not already exist. You could use this command:

find . -name $FILENAME 2> error$$

The value $$ is the number of the current process, and the filename error$$ is most likely unique.

The variable ! contains the process ID of the last background process. It is useful when a script needs to kill a background process it has previously begun.
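For example, this sketch starts a command in the background, saves its process ID from $!, and uses it to stop the process later:

sleep 60 &       # start a background process
BGPID=$!         # save its process ID
echo "Started background process $BGPID"
# ... do other work ...
kill $BGPID      # stop the background process when it is no longer needed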

Remember that NAME is the name of a shell variable, but $NAME is the value of the variable. Therefore, $, ?, and ! are variables, but $$, $?, and $! are their values.

The Korn shell and bash add the following useful variables. These are not standard in sh.

§ PWD contains the name of the current working directory.

§ OLDPWD contains the name of the preceding working directory.

§ LINENO is the current line number in your script.

§ RANDOM contains a random integer, taken from a uniform distribution over the range from 0 to 32,767. The value of RANDOM changes each time it is accessed.
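A short bash sketch shows these variables in action:

#!/bin/bash
echo "Currently in $PWD"
cd /tmp
echo "Moved from $OLDPWD to $PWD"
echo "This is line $LINENO of the script"
echo "A random number: $RANDOM"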

Arrays and Lists

The Korn shell and bash allow you to define arrays. An array is a list of values, in which each element has a number, or index, associated with it. The first element in an array has index 0. For example, the following defines an array FILE consisting of three items:

FILE[0]=new

FILE[1]=temp

FILE[2]=$BACKUP

The first element in FILE is the string “new”. The last element is the value $BACKUP. To print an element, you could enter

echo ${FILE[2]}

You can also create arrays from a list of values. A list is contained in parentheses, like this:

NUMBERS=(1 2 3 4 5)

To print all the values in an array, use * for the index:

echo ${NUMBERS[*]}
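You can also ask for the number of elements in an array with the ${#ARRAY[*]} construct. For example:

NUMBERS=(1 2 3 4 5)
echo ${#NUMBERS[*]}     # prints 5, the number of elements
echo ${NUMBERS[4]}      # prints 5, the last element (indexes start at 0)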

Working with Strings

ksh and bash include several operators for working with strings of text. To find the length of a variable (the number of characters it contains), use the ${#VARIABLE} construct. For example,

$ FILENAME="firefly.sh"

$ echo ${#FILENAME}

10

The construct ${VARIABLE%wildcard} removes anything matching the pattern wildcard from the end (right side) of $VARIABLE. The pattern can include the shell wildcards described in Chapter 4, including * to stand for any string of characters. For example,

$ echo ${FILENAME%.*}

firefly

uses the wildcard .* to match the extension .sh, so echo prints the first part of the filename. The variable FILENAME is not modified.

Similarly, the pound sign can be used to remove an initial substring. For example,

$ echo ${FILENAME#*.}

sh

In this case, the wildcard *. matches the string “firefly.”, and echo prints the remainder of the string, which is “sh”.
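These operators also have greedy forms, %% and ##, which remove the longest match instead of the shortest. The difference shows up when the pattern can match more than one substring, as in this example with a two-part extension:

$ ARCHIVE="backup.tar.gz"
$ echo ${ARCHIVE%.*}     # shortest match removed from the right
backup.tar
$ echo ${ARCHIVE%%.*}    # longest match removed from the right
backup
$ echo ${ARCHIVE#*.}     # shortest match removed from the left
tar.gz
$ echo ${ARCHIVE##*.}    # longest match removed from the left
gz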

Using Command-Line Arguments

You can pass command-line arguments to your scripts. When you execute a script, shell variables are automatically set to match the arguments. These variables are referred to as positional parameters. The parameters $1, $2, $3, $4 (up to $9) refer to the first, second, third, fourth (and so on) arguments on the command line. The parameter $0 is the name of the shell program itself.

Shell Positional Parameters

Command   arg1   arg2   arg3   arg4   arg5   ...   arg9
   |       |      |      |      |      |            |
  $0      $1     $2     $3     $4     $5           $9

The parameter $# is the total number of arguments passed to the script. The parameter $* refers to all of the command-line arguments (not including the name of the script). The parameter $@ is sometimes used in place of $*; for the most part, they mean the same thing, although they behave slightly differently when quoted.
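The difference appears when the parameters are enclosed in double quotes: "$*" joins all of the arguments into a single word, while "$@" keeps each argument as a separate word. This short sketch makes the distinction visible (the script name showquotes is hypothetical, and it uses the for loop described later in this chapter):

#!/bin/sh
# showquotes - contrast "$*" and "$@" when quoted
for ARG in "$*"
do
    echo "One word from \"\$*\": [$ARG]"
done
for ARG in "$@"
do
    echo "One word from \"\$@\": [$ARG]"
done

Running ./showquotes one two three prints a single bracketed line for "$*" but three separate lines for "$@".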

To see the relationships between words entered on the command line and variables available to a shell program, create the following sample shell program:

$ cat show_args

echo You ran the program called $0

echo with the following arguments:

echo $1

echo $2

echo $3

echo Here are all $# arguments:

echo $*

The output of this script could look like this:

$ chmod +x show_args

$ ./show_args This is a test of show_args with 11 command line arguments

You ran the program called ./show_args

with the following arguments:

This

is

a

Here are all 11 arguments:

This is a test of show_args with 11 command line arguments

The variable $* is especially useful because it allows your scripts to accept an arbitrary number of command-line arguments. For example, the backupWork script can be generalized to back up any files specified on the command line. In this example, the positional parameters are also used to add information to the e-mail sent by backupWork.

#!/bin/sh

# backupWork - a program to back up any files and

# directories given as command line arguments

# Version 2, Sept 2006

# Get the current date in a special format

# Create the new backup directory

TIMESTAMP=`date +%Y.%m.%d.%T`

BACKUPDIR="~/Backups/Backup.$TIMESTAMP"

mkdir $BACKUPDIR

# Copy files in command line arguments to new directory

cp -r $* $BACKUPDIR

# Send mail to confirm that backup was done

# Include name of script and all command line arguments in the mail

echo "Running the script $0 $*" > mailmsg

echo "Backup to $BACKUPDIR completed." >> mailmsg

mail $LOGNAME < mailmsg

rm mailmsg

Shifting Positional Parameters

You can shift the positional parameters to the left with the built-in shell command shift. This removes the first argument from $* and decreases $# by 1. It also renames the parameters, changing $2 to $1, $3 to $2, $4 to $3, and so forth. The original value of $1 is lost. (The value of $0 is unchanged.)

The following example illustrates the use of shift to manage positional parameters. The first argument to quickmail must be an e-mail address. The second argument is the (one-word) subject, and the remaining arguments are the contents of the e-mail.

#!/bin/sh

# quickmail-send mail from the command line

# usage: quickmail recipient subject contents

RECIPIENT=$1

SUBJECT=$2

shift

shift

echo $* | mail -s $SUBJECT $RECIPIENT

echo $# word message sent to $RECIPIENT.

In this script, the first two arguments are saved in the variables RECIPIENT and SUBJECT. The two shift commands then move the list of positional parameters by two items; after the shift commands, $1 is the third word of the original command-line arguments. All of the remaining arguments are sent to mail on standard input (as the output of the echo command). Here’s what quickmail might look like when run:

$ ./quickmail jcm homework When will you hand out the next assignment?

8 word message sent to jcm.

The set Command

The shell command set takes a string and assigns each word to one of the positional parameters. (Any command-line arguments that are stored in the positional parameters will be lost.) For example, you could assign the list of files in the current directory to the variables $1, $2, etc., with

set *

echo "There are $# files in the current directory."

You may recall from Chapter 4 that backquotes can be used to perform command substitution. You can use this to set the positional parameters to the output of a command. For example,

$ set `date`

$ echo $*

Sun Dec 30 12:55:14 PST 2006

$ echo "$1, the ${3}th of $2"

Sun, the 30th of Dec

$ echo $6

2006

Arithmetic Operations

If you have used other programming languages, you may expect to be able to include arithmetic operations directly in your shell scripts. For example, you might try to enter something like the following:

$ x=2

$ x=$x+1

$ echo $x

2+1

In this example, you can see that the shell concatenated the strings “2” and “+1” instead of adding 1 to the value of x. To perform arithmetic operations in your shell scripts, you must use the command expr.

The expr command takes a list of arguments, evaluates them, and prints the result on standard output. Each term must be separated by spaces. For example,

$ expr 1 + 2

3

You can use command substitution to assign the output from expr to a variable. For example, you could increment the value of i with this line:

i=`expr $i + 1`

Drawbacks of expr

Unfortunately, expr is awkward to use because of collisions between the syntax of expr and that of the shell itself. You can use expr to add, subtract, multiply, and divide integers using the +, -, *, and / operators. However, the * must be escaped with a backslash to prevent the shell from expanding it as a wildcard:

$ expr 5 + 6

11

$ expr 11 - 3

8

$ expr 8 / 2

4

$ expr 4 \* 4

16

Another drawback of expr is that it can only be used for integer arithmetic. If you try to give it a decimal argument, you will get an error, and it will truncate decimal results. For example,

$ expr 1.5 + 2.5

expr: non-numeric argument

$ expr 7 / 2

3

If you leave out the spaces between arguments, expr will not interpret your expression:

$ expr 1+2

1+2

Other problems are that you cannot group arguments to expr with parentheses, and it does not recognize operations such as exponentiation. You can use the bc calculator, described in Chapter 19, to write scripts that can do these things. For example,

echo "scale=2; (.5 + (7/2)) ^ 2" | bc

will print the number 16.00. Another way to address these problems is with the let command, which is included in ksh and bash.

Using let for Arithmetic

In bash and ksh, the let command is an alternative to expr that provides a simpler and more complete way to deal with integer arithmetic.

The following example illustrates a simple use of let:

$ x=100

$ let "y = 2 * (x + 5)"

$ echo $y

210

Note that let automatically uses the value of a variable like x or y. You do not need to add a $ in front of the variable name.

The let command can be used for all of the basic arithmetic operations, including addition, subtraction, multiplication, integer division, calculating a remainder, and inequalities. It also provides more specialized operations, such as conversion between bases and bitwise operations.
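For example, the following lines use let for a remainder and for a base conversion; in ksh and bash, a constant written as base#number is interpreted in the given base:

let "r = 17 % 5"     # remainder: r is now 2
let "h = 16#ff"      # hexadecimal ff is 255 decimal
echo $r $h           # prints 2 255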

You can abbreviate let statements with double parentheses, (( )). For example, this is the same as let x=x+3:

(( x = x+3 ))

Clearly, let is a significant improvement over expr. It still does not work with decimals, however, and it is not supported in sh. The limitations of expr and let are a good example of why shell is not the best language for some tasks.

Conditional Execution

An if statement tests whether a given condition is true. If it is, the block of code within the if statement will be executed. This is the general form of an if statement:

if testcommand

then

command(s)

fi

The command following the keyword if is executed. If it has a return value of zero (true), the commands following the keyword then are executed. The keyword fi (if spelled backward) marks the end of the if structure. Although the indentation of the commands does not have any effect when the script is executed, it can make a tremendous difference in making your scripts more readable.

UNIX System commands provide a return value or exit status when they complete. By convention, an exit status of zero (true) is sent back to the original process if the command completes normally; a nonzero exit status (false) is returned otherwise. This can be used as the test condition in an if statement. For example, you might want to execute a second command only if the first completes successfully. Consider the following lines:

# Copy the directory $WORK to ${WORK}.OLD

cp -r $WORK ${WORK}.OLD

# Remove $WORK

rm -r $WORK

The problem with this sequence is that you want to remove $WORK only if it has been successfully copied. Using if ... then allows you to make the rm command conditional on the outcome of cp. For example,

# Copy the directory $WORK to ${WORK}.OLD

# Remove $WORK if copy is successful

if cp -r $WORK ${WORK}.OLD

then

rm -rf $WORK

fi

In this example, $WORK is removed only if cp completes successfully and sends back a true (zero) return value. The -f option to rm suppresses any error messages that might result if the file is not present or not removable.

Testing Logical Conditions

You often need to test conditions other than whether a command was successful. The test command can be used to evaluate logical expressions in your if statements. When test evaluates a true expression, it returns 0. If the expression is false (or if no expression is given), test returns a nonzero status.

test allows you to compare integers or strings. The test -eq form checks to see if two integers are equal. For example, you could check the number of arguments that had been provided to a script:

if test $# -eq 0

then

echo "No command line arguments provided, setting user to current user."

username=$LOGNAME

fi

If $# is equal to zero (meaning there were no command-line arguments), the message is displayed and the variable username is set. Otherwise, the script continues after the keyword fi.

Table 20–1 shows the tests allowed on integers.

Table 20–1: Integer Tests

Integer Test    True If…
n1 -eq n2       n1 is equal to n2
n1 -ne n2       n1 is not equal to n2
n1 -gt n2       n1 is greater than n2
n1 -ge n2       n1 is greater than or equal to n2
n1 -lt n2       n1 is less than n2
n1 -le n2       n1 is less than or equal to n2

Similarly, you can use test to examine strings, although the syntax is a bit different than for integers. For example,

if test -z "$input"

then input="default"

fi

checks to see if the length of $input is zero, and if so, it sets the value to “default”. Including the quotes around $input prevents errors when the variable is undefined (because even when $input is undefined, “$input” has the value “”).

Table 20–2 shows the tests you can use on strings.

Table 20–2: String Tests

String Test           True If…
-z string             length of string is zero
-n string             length of string is nonzero
string                string is not the null string (same as -n)
string1 = string2     string1 is identical to string2
string1 != string2    string1 is not identical to string2

In some cases, you may want to test a more complex logical condition. For example, you might want to check if a string has one of two different values, as in this example:

if test "$input" = "quit" -o "$input" = "Quit"

then exit 0

fi

The operator -o stands for or. It returns the value true if the first condition or the second condition (or both) is true. Here’s a rather complex example with logical operators:

if test ! \( $x -gt 0 -a $y -gt 0 \)

then echo "Both x and y should be greater than 0."

fi

This uses the operator ! to stand for not, and -a for and. It says “if it is not the case that both $x is greater than 0 and $y is greater than 0, then print the error message.” Parentheses are used to group the statements. If the parentheses were removed, it would say “if it is not the case that $x is greater than 0, and it is the case that $y is greater than 0, print the error.” In order to prevent the shell from interpreting them, the parentheses must be quoted with \.

Table 20–3 lists the logical operators in sh.

Table 20–3: Logical Operators

Operator    Meaning
!           Negation
-a          AND
-o          OR

Using Brackets for Tests

Surrounding a comparison with square brackets is equivalent to using the test command. The brackets must be separated by spaces from the text they enclose, as in

if [ $# -eq 0 ]

If you forget to include the spaces, as in [$# -eq 0], the test will not work.

Here are some sample test expressions, and the equivalents using square brackets:

test $# -eq 0

# Same as

[ $# -eq 0 ]

test -z $1

# Same as

[ -z $1 ]

test $1

# Same as

[ $1 ]

Tests in ksh and bash

The shells ksh and bash provide the operator [[ ]], which can be used as another alternative to test. If the positional parameter $1 is set, the following three tests are equivalent:

test $1 = turing

[ $1 = turing ]

[[ $1 = turing ]]

However, if $1 is not set, the first two versions of the test will give you an error, but the double bracket form will not.

The [[ ]] operator allows you to use the expression && for AND and || for OR. It also accepts < and > as comparison operators (note that within [[ ]] these compare strings lexicographically, so -lt and -gt are safer when you need a numeric comparison). This can make your conditions significantly easier to type and read. For example, in ksh and bash, the following line says “it is not the case that both $x and $y are greater than zero”:

[[ ! ( $x > 0 && $y > 0 ) ]]

Whereas with test it would look like this:

test ! \( $x -gt 0 -a $y -gt 0 \)

Testing Files and Directories

You can also evaluate the status of files and directories in your if statements. For example,

if [ -a "$1"]

checks to see if the first argument to the script is a valid filename. Checking to see if files exist is very common in shell scripts. As in this example, you will often want to check that filename arguments are valid before trying to run commands on them.

Table 20–4 shows the most common tests for files and directories.

Table 20–4: Tests for Files and Directories

File Test    True If…
-a file      file exists
-r file      file exists and is readable
-w file      file exists and is writable
-x file      file exists and is executable
-f file      file exists and is a regular file
-d file      file exists and is a directory
-h file      file exists and is a symbolic link
-c file      file exists and is a character special file

The following example shows how you could check that a file exists before mailing it. If the file exists and is bigger than zero, the script mails it to $LOGNAME. If mail completes successfully, the file is removed.

if test -s logfile$$

then

if mail $LOGNAME < logfile$$

then

rm -f logfile$$

fi

fi

Exiting from Scripts

The built-in shell command exit causes the shell to exit and return an exit status number. By convention, an exit status of 0 (zero) means the program terminated normally, and a nonzero exit status indicates that some kind of error occurred. Often, an exit value of 1 indicates that the program terminated abnormally (e.g., because the user interrupted it with CTRL-C), and an exit value of 2 indicates a usage or command-line error by the user. If you specify no argument, exit returns the status of the last command executed.

The exit command is often found inside a conditional statement. For example, this script will exit if the first command-line argument is not a valid filename.

if [ ! -a "$1"]

then

echo "File $1 not found."

exit 2

fi

if ... elif ... else Statements

The if ... elif ... else operation is an extension of the basic if statements just shown. It allows for more flexibility in controlling program flow. The general format looks like this:

if testcommand

then

command(s)

elif testcommand

then

command(s)

else

command(s)

fi

The command following the keyword if is evaluated. If it returns true, then the commands in the first block (between then and elif) are executed. If it returns false, however, then the command following elif is evaluated. If that command returns true, the next block of commands is executed. Otherwise, if both test commands were false, then the last block (following else) is executed. Note that, regardless of how the test commands turn out, exactly one of the three blocks of code is executed.

Because if ... elif ... else statements can be quite long, the examples here show the keyword then on the same line as the test commands. This can make your scripts more readable, although it is entirely a question of personal style. Notice, however, that a semicolon separates the test commands from the then. This semicolon is required so that the shell interprets then as a new statement and not as part of the test command.

Here’s an example that just uses the if and else blocks, without elif.

if [ -a "$1"] ; then

# good, the argument is a file that exists

inputfile=$1

else

# print error and exit

echo "Error: file not found"

exit 1

fi

This could be expanded with an elif block:

if [ -a "$1"] ; then

# good, the argument is a file that exists

# we can assign it to a variable

# and continue after the keyword fi

inputfile=$1

elif [ ! "$1" ] ; then

# the argument $1 isn't defined

# print error message and exit

echo "Error: filename argument required"

exit 1

else

# the problem must be that the file doesn't exist

# print error and exit

echo "Error: file $1 not found"

exit 1

fi

case Statements

If you need to compare a variable against a long series of possible values, you can use a long chain of if ... elif ... else statements. However, the case command provides a cleaner syntax for a chain of comparisons. It also allows you to compare a variable to a shell wildcard pattern, rather than to a specific value.

The syntax for using case is shown here:

case string

in

pattern)

command(s)

;;

pattern)

command(s)

;;

esac

The value of string is compared in turn against each of the patterns. If a match is found, the commands following the pattern are executed up until the double semicolon (;;), at which point the case statement terminates. If the value of string does not match any of the patterns, the program passes through the entire case statement without executing any commands.

Here’s an example of a case statement. It checks $INPUT to see if it is a math statement containing +, −, *, or /. If it is, the statement is evaluated with bc. If $INPUT says “Interactive”, the script runs a copy of bc for the user. If $INPUT is a string such as “quit”, the script exits. And if it is something else, the script prints a warning message.

case $INPUT

in

*+* | *-* | *\** | */*)

echo "scale=5; $INPUT" | bc

;;

"Interactive")

echo "Starting bc for interactive use."

echo -e "Enter bc commands. \c"

echo "To quit bc and return to this script, type quit."

bc

echo "Exiting bc, returning to $0."

;;

[Qq]uit | [Ee]xit)

# matches the strings Quit, quit, Exit, and exit

echo "Quitting now."

exit 0

;;

*)

echo "Warning: input string does not match."

;;

esac

In this case statement, the * in the last block matches any string, so this block is executed by default if none of the other patterns match.

Note for C programmers: unlike the break statement in C, the ;; is not optional. You cannot leave out the ;; after a block of commands to continue executing the case statement after a match.

Writing Loops

The shell provides several ways to loop through a set of commands. A loop allows you to repeatedly execute a block of commands before proceeding further in the script. The two main types of loop are for and while. until loops are a variation on while loops. In addition, the select command can be used to repeatedly present a selection menu.

for Loops

The for loop executes a block of commands once for each member of a list. The basic format is

for i in list

do

commands

done

The variable i in the example can have any name that you choose.

You can use for loops to repeat a command a fixed number of times. For example, if you enter the following on the command line,

$ for x in 0 1 2 3 4 5 6 7 8 9

> do

> touch testfile$x

> done

the shell will run the touch command ten times. Each time, it will create an empty file with the name testfile followed by a number.

If you omit the in list portion of the for loop, the value of $* will be used instead. That will cause the command block between do and done to be executed once for each positional parameter. You could use this to iterate through the command-line arguments to a script. For example, the following script can be used to look up several people in the file called friends:

#

# contacts - takes names as arguments

# looks up each name in the friends file

#

for NAME

do

grep $NAME $HOME/friends

done

If you issue the command

$ contacts John Dave Albert Rachel

the grep command will be run four times: first for John, then for Dave, then for Albert, and finally for Rachel.

Loops can be nested. Each of the loops must use a different variable name. For example, the following script iterates through the files in the current directory. For each file, it runs the script proof five times.

for FILENAME in *

do

echo "Printing 5 copies of $FILENAME"

for x in 1 2 3 4 5

do

proof $FILENAME

done

done

while and until Loops

The while command repeats a block of commands based on the result of a logical test. The general form for the use of while is

while testcommand

do

commandlist

done

When while is executed, it runs testcommand. If the return value of testcommand is true, commandlist is executed, and the program returns to the while test. The loop continues until the value of testcommand is false, at which point while terminates.

This while loop prints the squares of the integers from 1 to 10.

i=1

while [ $i -le 10 ]

do

expr $i \* $i

i=`expr $i + 1`

done

The until command is the complement of the while command, and its form and usage are similar. The only difference between them is that while loops repeat until the test is false, and until loops repeat until the test is true. Thus, the preceding example could also be written as

i=1

until [ $i -gt 10 ]

do

expr $i \* $i

i=`expr $i + 1`

done

break and continue

Normally, execution of a loop continues until the logical condition of the loop is met. Sometimes, however, you want to exit a loop early or skip certain commands.

break exits from a loop. The script resumes execution with the first command after the loop. In a set of nested loops, break exits the immediately enclosing loop. If you give break a numeric argument, the program breaks out of that number of loops, so for example, break 3 would exit a set of three nested loops all at once.

continue sends control back to the top of the smallest enclosing loop. If an argument is given, control goes to the top of the nth enclosing loop.
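The following sketch shows both commands in a single loop. (The filenames skipme and STOP are hypothetical markers, chosen just for illustration.)

for FILE in *
do
    if [ "$FILE" = "skipme" ]
    then continue        # skip this file and go on to the next one
    fi
    if [ "$FILE" = "STOP" ]
    then break           # leave the loop entirely
    fi
    echo "Processing $FILE"
done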

The true and false Commands

The commands true and false are very simple: true returns a successful exit status, and false returns an unsuccessful exit status. The primary use of these two commands is in setting up infinite loops. For example,

while true

do

read NEWLINE

if [ $NEWLINE = "."]

then break

fi

done

This loop will execute forever, or at least until the user enters a dot on a line by itself. Infinite loops should be used sparingly, since they are often difficult to read and to debug.

Printing Menus with select

ksh and bash provide another iteration command, select. The select command displays a numbered list of items on standard error and waits for input. After the selection is processed, the user is prompted for input again, and so on until the loop ends (usually with a break statement).

For example, you could write a script to help new users execute common programs. The select command provides a menu of alternatives from which to choose. The variable PS3 is used to prompt for input. A case statement is used in the script to execute the chosen command. (You could use an if statement, if you prefer.) If a user presses ENTER without making a selection, the list of items is displayed again.

#!/bin/bash

# startMenu - Provide a menu of common actions.

PS3='What would you like to do? (enter 1-3) '

select ACTION in "Read Mail with Pine" "Start XWindows" "Exit this Menu"

do

case $ACTION in

"Read Mail with Pine")

# run the pine mailreader; return to this menu when done

pine

;;

"Start XWindows")

# start XWindows, and do not return to this script

# replace this process with the X process

exec startx

;;

"Exit this Menu")

echo "Returning to your login shell."

break

;;

*)

echo "Response not recognized, try again."

;;

esac

done

In this example, the selection is saved in the variable ACTION. For example, entering “1” would set ACTION to “Read Mail with Pine”. If the user selects a number outside the appropriate range, the variable is set to null, and in this example is caught by the last case block. When you run this script, the output will look like this:

$ startMenu

1) Read Mail with Pine

2) Start XWindows

3) Exit this Menu

What would you like to do? (enter 1-3)

Shell Input and Output

You have already seen how to use echo to print output from your script, and how to use environment variables or command-line arguments to get information to your script. This section describes additional features for dealing with input and output.

The echo Command

Table 20–5 shows the escape sequences that may be embedded in the arguments to echo:

Table 20–5: echo Escape Sequences

Sequence    Meaning
\b          Backspace
\c          Print line without newline
\f          Form feed
\n          Newline
\r          Return
\t          Tab
\v          Vertical tab
\\          Backslash

For example,

echo "Copying files ... \c"

cp -r $OLDDIR $NEWDIR

echo "done.\nFile $OLDDIR copied."

will print something like

Copying files ... done.

File CurrentScripts copied.

In some versions of echo (including bash), you will need to enable escape sequences with the flag -e. You can also disable escape sequences with -E. In ksh and bash, you can use the flag -n to prevent echo from adding a newline at the end of each line. So in bash, this example could be written as

echo -n "Copying files ... "

cp -r $OLDDIR $NEWDIR

echo -e "done.\nFile $OLDDIR copied."

The read Command

The read command lets you insert user input into your script interactively. read reads one line from standard input and saves the line in one or more shell variables. For example,

echo "Enter your name."

read NAME

echo "Hello, $NAME"

If you do not specify a variable to save the input, REPLY is used as a default.
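For example, in ksh and bash:

echo "Continue? (y/n)"
read
if [ "$REPLY" = "y" ]
then echo "Continuing."
fi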

You can also use the read command to assign several shell variables at once. When you use read with more than one variable name, the first field typed by the user is assigned to the first variable; the second field, to the second variable; and so on. Leftover fields are assigned to the last variable.

$ cat readDemo

echo "Enter a line of text:"

read FIRST SECOND REST

echo -e "$FIRST\n$SECOND\n$REST"

$ ./readDemo

Enter a line of text:

the five boxing wizards jump quickly

the

five

boxing wizards jump quickly

The field separator for shell input is defined by the IFS (Internal Field Separator) variable, which is a blank space by default. If you wish to use a different character to separate fields, you can do so by redefining the IFS shell variable. For example, IFS=: will set the field separator to the colon character (:).
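For example, this sketch reads one colon-separated line, such as an entry in an office directory, into separate variables. It saves and restores the old value of IFS so that later reads are not affected:

OLDIFS=$IFS            # save the default field separator
IFS=:                  # split input fields on colons
echo "Enter name:office:phone"
read NAME OFFICE PHONE
echo "$NAME can be reached at $PHONE (office $OFFICE)"
IFS=$OLDIFS            # restore the default separator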

Here Documents

The here document facility provides multiline input to commands within shell scripts, while preserving the newlines in the input. It is similar to file redirection. Instead of typing

echo "Reminder: team meeting is in one hour," > message

echo "in the second floor meeting room." >> message

echo "Please reply if you can't make it." >> message

mail dbp etch a-liu < message

rm message

to create and mail a file, you can use

mail dbp etch a-liu <<message

Reminder: team meeting is in one hour,

in the second floor meeting room.

Please reply if you can't make it.

message

to send a block of text to the command without first writing it to a file.

The operator <<word defines the beginning of multiline input. The shell reads everything up to the next line that contains only word, and treats it as input from a file. If you use <<-word (with a minus sign in front of word), then leading spaces and tabs will be stripped out of each line of input. This allows you to indent your script to make it more readable, like this:

mail dbp etch a-liu <<-message

Reminder: team meeting is in one hour,

in the second floor meeting room.

Please reply if you can't make it.

message

Creating Functions

In ksh and bash, you can create your own functions. Functions can be used within a script to break up large sections of code, or to make it easy to reuse a block of code. For example,

function factorial {

n=$1

FACT=1

while [ $n -gt 0 ]

do

FACT=`expr $FACT \* $n`

n=`expr $n - 1`

done

echo "$1 factorial is $FACT"

}

for NUM in $*

do

factorial $NUM

done

The arguments to a function are saved in the positional parameters $1, $2, and so on. These values only apply within the function; when execution returns to the main body of code, the positional parameters still have their earlier values.

You can also use functions to define more advanced aliases in your configuration files. For example, you could add these lines to your .bashrc or .kshrc file to define a command called del. The del command will move files to a hidden “wastebasket” directory instead of deleting them.

function del {

mv $* $HOME/.Wastebasket

}

Further Scripting Techniques

By now you know most of the important techniques for shell scripting, including various methods of getting input from the user, working with data, and controlling the flow of your scripts with statements like if and for. This section describes techniques that are less common (but still useful), such as how to process command-line options, how to read all the lines in a file, and how to process interrupt signals.

Command-Line Options in Shell Scripts

You already know how to use command-line arguments, such as filenames, with the positional parameters $1, $2, and so on. You could use the positional parameters and a set of if or case statements to handle option flags (as in ls -la) as well, but the command getopts is much easier to use.

getopts parses the options that are given to a script on the command line. It interprets any letter preceded by a minus sign as an option. It allows options to be specified in any order, and options without arguments to be grouped together.

The easiest way to understand getopts is from an example. This example simply reads the command-line options with getopts and prints them to standard output:

$ cat getoptsExample

# Look for the command line options a, b, c, and d.

# The options a and d take arguments, unlike b and c.

# Print any options that are found.

while getopts a:bcd: FLAGNAME

do

case $FLAGNAME in

a) echo "Found -a $OPTARG"

;;

b) echo "Found -b"

;;

c) echo "Found -c"

;;

d) echo "Found -d $OPTARG"

;;

\?) echo "Error: unexpected argument"

exit 2

;;

esac

done

echo "There were $OPTIND options and arguments total."

# Remove the options from the list of positional parameters.

shift `expr $OPTIND - 1`

echo -e "The other command line arguments were:\n$*"

Here’s what it might look like when run:

$ ./getoptsExample -bc -a "testing options" filename1 filename2

Found -b

Found -c

Found -a testing options

There were 4 options and arguments total.

The other command line arguments were:

filename1 filename2

Here’s how the example works. The line getopts a:bcd: FLAGNAME looks for the options a, b, c, and d. The : after a and d shows that those options take additional arguments. Each time getopts runs, the next option found is saved in FLAGNAME. Any argument for that option is saved in the special variable OPTARG. The case statement checks which option it was, and takes whatever action is appropriate. In this case, the options were printed with echo. More commonly, variables might be set here to indicate which options were chosen and to save their arguments.

If an option not on the getopts list is found, FLAGNAME is set to ?. The case statement shown above includes a test for ?, which will print an error message and exit.

The while loop repeats until all the options have been found. At this point, the special variable OPTIND holds the index of the next command-line argument to be processed, one more than the number of argument words getopts has consumed. The shift command removes the processed options from the list of positional parameters, so that the remaining command-line arguments can be used.

Using getopts may seem rather daunting, and of course for the majority of scripts it is unnecessary. But once you understand how it works, it’s not too hard to adapt the sample code just shown for use in any script you might write.

Grouping Commands

You can execute a list of commands as a group by enclosing them in parentheses. The commands are executed in their own subshell. For example,

(cd ~/bin; ls -l)

You can enter this on the command line to list the contents of ~/bin. Because the commands are executed in a subshell, your current directory will not be changed.

If you want to execute a group of commands in the current shell, enclose them with curly brackets instead of parentheses.

Grouping commands makes it easy to redirect output. For example,

{ date; who; last; } > $LOGFILE

is shorter than

date > $LOGFILE

who >> $LOGFILE

last >> $LOGFILE

Grouping also allows you to redirect output from commands in a pipeline. If you try to redirect standard error like this:

diff $OLDFILE $NEWFILE | lp 2> errorfile

only error messages from lp will be captured. You can use

(diff $OLDFILE $NEWFILE | lp) 2> errorfile

to redirect error messages from all the commands in the pipeline.

Reading Each Line in a File

Suppose you want to read the contents of a file one line at a time. For example, you might want to print a line number at the beginning of each line. You could do it like this:

n=0

cat $FILE |

while read LINE

do

echo "$n) $LINE"

n=`expr $n + 1`

done

echo "There were $n lines in $FILE."

This uses a pipe to send the contents of $FILE to the read command in the while loop. The loop repeats as long as there are lines to read. The variable n keeps track of the total number of lines.

The problem with this is that each command in a pipeline is executed in a subshell. Because the while loop is executed in its own subshell, the changes to the variable n don’t get saved. So the last line of the script says that there were 0 lines in the file.

You can fix this by grouping the loop with curly braces (so that it gets executed in the current shell), and sending the contents of $FILE to the loop. The new script will look like this:

n=0

{

while read LINE

do

echo "$n) $LINE"

n=`expr $n + 1`

done

} < $FILE

echo "There were $n lines in $FILE."

As before, the lines from $FILE are printed with line numbers, but this time the variable n is updated, so the total number of lines is reported correctly.

The trap Command

Some shell scripts create temporary files to store data. These files are typically deleted at the end of the script. But sometimes scripts are interrupted before they finish (e.g., if you hit CTRL-C), in which case these files might be left sitting there. The trap command provides a way to execute a short sequence of commands to clean up before your script is forced to exit.

Ending a process with kill, hitting CTRL-C, or closing your terminal window causes the UNIX system to send an interrupt signal to your script. With trap you can specify which of these signals to look for. The general form of the command is

trap commands interrupt-numbers

The first argument to trap is the command or commands to be executed when an interrupt is received. The interrupt-numbers are codes that specify the interrupt. The most important interrupts are shown in Table 20–6.

Table 20–6: Interrupt Codes

Number    Interrupt    Meaning
0         Shell exit   Occurs at the end of a script that is being executed in a subshell; not normally included in a trap statement.
1         Hangup       Occurs when you exit your current session (e.g., if you close your terminal window).
2         Interrupt    Occurs when you end a process with CTRL-C.
9         Kill         Occurs when you use kill -9 to terminate the script. It cannot be trapped.
15        Terminate    Occurs if you use kill to terminate the script, as in kill %1.

The trap statement is usually added at the beginning of your script, so that it will be executed no matter when your script is interrupted. It might look something like this:

trap 'rm tmpfile; exit 1' 1 2 15

In this case, if an interrupt is received, tmpfile will be deleted, and the script will exit with an error code. If you do not include the exit command, the script will not exit. Instead, it will continue executing from the point where the interrupt was received. To ensure that your scripts exit when they are interrupted, always remember to include exit as part of the trap statement. If you forget to do this, you will have to use kill -9 to end your script. Since interrupt 9 cannot be trapped, you can always use CTRL-Z, followed by kill -9 %n (where n is the job number), to end your current process.
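Here is a minimal sketch of a complete script that protects a temporary file this way; bigfile stands in for whatever input file the script works on:

#!/bin/sh
# Remove the temporary file and exit if the script is hung up (1),
# interrupted (2), or terminated (15)
trap 'rm -f /tmp/sort$$; exit 1' 1 2 15

sort bigfile > /tmp/sort$$      # sort into a temporary working file
# ... work with the sorted copy in /tmp/sort$$ ...
rm -f /tmp/sort$$               # normal cleanup when the script finishes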

The xargs Command

One much-used feature of the shell is the capability to connect the output of one program to the input of another using pipes. Sometimes, however, you may want to use the output of one command to define the arguments for another. xargs is a shell programming tool that lets you do this; it is especially useful for constructing lists of arguments and executing commands. This is the general format of xargs:

xargs [flags] [command [(initial args)]]

xargs takes its initial arguments, combines them with arguments read from the standard input, and uses the combination in executing the specified command. Each command to be executed is constructed from the command, then the initial args, and then the arguments read from standard input.

For example, you can use xargs to combine the commands find and grep in order to search an entire directory structure for files containing a particular string. The find command is used to recursively descend the directory tree, and grep is used to search for the target string in all of the files from find.

In this example, find starts in the current directory (.) and prints on standard output all filenames in the directory and its subdirectories. xargs then takes each filename from its standard input and combines it with the options to grep (-l, -i, -s) and the command-line arguments ($*, which is the target pattern) to construct a command of the form grep -l -i -s $* filename. xargs continues to construct and execute a new command line for every filename provided to it. The program fileswith prints out the name of each file that has the target pattern in it, so the command fileswith Calvino will print out the names of all files that contain the string “Calvino”.

#

# fileswith - descend directory structure

# and print names of files that contain

# target words specified on the command line.

#

find . -type f -print | xargs grep -l -i -s $* 2>/dev/null

The output is a listing of all the files that contain the target phrase:

$ fileswith Borges

./mbox

./Notes/books

./Scripts/Perl/orbis-tertius.pl

xargs itself can take several arguments, and its use can get rather complicated. The two most commonly used arguments are:

-i

Each line from standard input is treated as a single argument and inserted into initial args in place of the {} symbols.

-p

Prompt mode. For each command to be executed, print the command, followed by a ?. Execute the command only if the user types y (followed by anything). If anything else is typed, skip the command.

In the following example, move uses xargs to list all the files in a directory ($1) and move each file to a second directory ($2), using the same filename. The -i option to xargs replaces the {} in the script with the output of ls. The -p option prompts the user before executing each command:

#

# move $1 $2 - move files from directory $1 to directory $2,

# echo the mv command, and prompt for "y" before
# executing the command.

#

ls $1 | xargs -i -p mv $1/{} $2/{}

Debugging Shell Programs

Quite often you will find that your shell scripts don’t work the way you expect when you try to run them. It is easy to enter a typo, or to leave out necessary quotation marks or escape characters, in the first draft of a script. A typo in a shell script will usually cause the script to stop running when it gets to the error, but in some cases the script will skip over the error and continue execution. Occasionally this can cause serious problems. For example, if you attempt to copy and then delete a file with

copy oldfile newfile

rm oldfile

the copy will fail (because the command is named cp), but rm will still remove oldfile.

The best way to prevent frustrating errors is to test your scripts frequently as you write them, as opposed to writing a very long script all at once and then attempting to run it. It is also a good idea to run your scripts on test files or data before using them on important information.

A script that does not run will often provide an error message on the screen. For example,

prog: syntax error at line 12: 'do' unmatched

or

prog: syntax error at line 142: 'end of file' unexpected

These error messages function as broad hints that you have made an error. Several shell key words are used in pairs, for example, if ... fi, case ... esac, and do ... done. This type of message tells you that an unmatched pair exists, although it does not tell you where it is. Since it is difficult to tell how word pairs such as do ... done were intended to be used, the shell informs you that a mismatch occurred, not where it was. The do unmatched at line 12 may be missing a done at line 142, but at least you know what kind of problem to track down.

The next thing to do if you are having trouble with a script is to watch it while each line of the script is executed. The command

$ sh -x filename

tells the shell to run the script in filename, printing each command and its arguments as it is executed. Because the most common errors in scripts have to do with unmatched keywords, incorrect quotation marks (e.g., a single quote ' where a backquote ` was intended), and improperly set variables, sh -x reveals most of your early errors. At the very least, sh -x can help you determine where in your script things start to go wrong.
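For example, tracing a short script (here called trace-demo, a hypothetical name) might produce output like this; the + prefix marks each command as the shell executes it:

$ cat trace-demo
GREETING="Hello"
echo $GREETING world
$ sh -x trace-demo
+ GREETING=Hello
+ echo Hello world
Hello world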

Summary

In this chapter, you learned the fundamentals of shell programming, including how to write and execute simple shell scripts, how to include UNIX System commands in your scripts, and how to pass arguments to the shell. You also learned more advanced techniques, including flow control with if statements and for/while loops. You saw how getopts is used to parse a command line, and how expr can be used to evaluate mathematical expressions.

Shell scripting does have limitations. By itself, it is not especially good at string or text manipulation, for example. The next chapter discusses the UNIX tools awk and sed, which can be powerful additions to your scripts. They add the ability to easily process lines of text with regular expressions, and to quickly edit large sources of input.

Alternatively, once you feel comfortable with shell scripting, you may want to look at other scripting languages to get a sense of how they differ from shell. As you have seen, the shell programming language can be used to write many useful tools, and is especially good at integrating UNIX commands into scripts. However, other languages offer improvements such as cleaner syntax, advanced data structures, and better portability. Chapters 22 and 23 provide introductions to Perl and Python, respectively, which are two of the most popular scripting languages in use today.

How to Find Out More

This book is a very popular and thorough reference for shell scripting.

· Robbins, Arnold, and Nelson H.F. Beebe. Classic Shell Scripting. 1st ed. Sebastopol, CA: O’Reilly, 2005.

These two books contain many examples of useful and interesting shell scripts. The first is a bit more general and introductory; the second is targeted at somewhat advanced bash scripters.

· Johnson, Chris F.A. Shell Scripting Recipes: A Problem-Solution Approach. 1st ed. Berkeley, CA: Apress, 2005.

· Taylor, Dave. Wicked Cool Shell Scripts. 1st ed. San Francisco, CA: No Starch Press, 2004.

This definitive reference for the Korn shell also covers Korn shell scripting.

· Bolsky, Morris I., and David G. Korn. The New Korn Shell, Command and Programming Language. 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1995.