Introduction to Bash Shell Scripting - Beginning the Linux Command Line, Second edition (2015)

CHAPTER 14. Introduction to Bash Shell Scripting

Once you really get to be at ease working on the command line, you’ll want to do more than what the previous chapters have taught you. You’ve already learned how to combine commands using piping, but if you really want to get the best out of your commands, there is much more you can do. In this chapter, you’ll get an introduction to the possibilities of Bash shell scripting, which really is the command line on steroids; piping and redirection alone are just not enough when you need to perform really complex tasks. As soon as you really understand shell scripting, you’ll be able to automate many tasks, and thus do your work at least twice as fast as you used to do it.

Basic Shell Script Components

A shell script is a text file that contains a sequence of commands. So basically, anything that can run a bunch of commands can be considered a shell script. Nevertheless, some rules exist for making sure that you create decent shell scripts, scripts that will not only do the task you’ve written them for, but also be readable by others. At some point in time, you’ll be happy with the habit of writing readable shell scripts. As your scripts get longer and longer, you will notice that if a script does not meet the basic requirements of readability, even you yourself won’t be able to understand what it is doing.

Elements of a Good Shell Script

When writing a script, make sure that you meet the following requirements:

· Give it a unique name.

· Include the shebang (#!) to tell the shell which subshell should execute the script.

· Include comments—lots of them.

· Use the exit command to tell the shell that executes the script that the script has executed successfully.

· Make your scripts executable.

Let’s start with an example script (see Listing 14-1).

Listing 14-1. Make Sure Your Script Is Well Structured

#!/bin/bash
# this is the hello script
# run it by typing ./hello in the directory where you've found it
clear
echo hello world
exit 0

Let’s talk about the name of the script first: you’ll be amazed how many commands already exist on your computer. So you have to make sure that the name of your script is unique. For instance, many people like to give the name test to their first script. Unfortunately, there’s already an existing command with that name (see the section “Using Control Structures” later in this chapter). If your script has the same name as an existing command, the existing command will be executed, not your script (unless you prefix the name of the script with ./). So make sure that the name of your script is not in use already. You can find out whether a name already exists by using the which command. For instance, if you want to use the name hello and want to be sure that it’s not in use already, type which hello. Listing 14-2 shows the result of this command.

Listing 14-2. Use which to Find Out Whether the Name of Your Script Is Not Already in Use

nuuk:~ # which hello
which: no hello in

In the first line of the script is the shebang. This scripting element tells the shell from which the script is executed which subshell should be started to run it. This may sound rather cryptic, but it is not too hard to understand. If you run a command from a shell, the command becomes a child process of the shell; the pstree command will show you that perfectly. If you run a script from the shell, the script also becomes a child process of the shell. This means that the script by no means has to run in the same shell type as your current shell. To tell your current shell which subshell should be executed when running the script, include the shebang. As mentioned previously, the shebang always starts with #! and is followed by the name of the subshell that should execute the script. In Listing 14-1, I’ve used /bin/bash as the subshell, but you can use any other shell if you’d like.

You will notice that not all scripts include a shebang, and in many cases, even if your script doesn’t include a shebang, it will still run. However, if a user who uses a shell other than /bin/bash tries to run a script without a shebang, it will probably fail. You can avoid this by always including a shebang.

The second part of the example script in Listing 14-1 consists of two comment lines. As you can guess, these comment lines explain to the user what the purpose of the script is and how to use it. There’s only one rule about comment lines: they should be clear and explain what’s happening. A comment line always starts with a #, followed by anything.

Note You may ask why the shebang, which also starts with a #, is not interpreted as a comment. That is because of its position and the fact that it is immediately followed by an exclamation mark. This combination at the very start of a script tells the shell that it’s not a comment, but a shebang.

Following the comment lines is the body of the script itself, which contains the code that the script should execute. In the example from Listing 14-1, the code consists of two simple commands: the first clears the screen, and the second echoes the text “hello world” to the screen.

The last part of the script is the command exit 0. It is a good habit to use the exit command in all your scripts. This command exits the script and then tells the parent shell how the script has executed. If the parent shell reads exit 0, it knows the script executed successfully. If it encounters anything other than exit 0, it knows there was a problem. In more complex scripts, you could even start working with different exit codes; use exit 1 as a generic error code, exit 2, and so forth, to specify that a specific condition was not met. When applying conditional loops later (see the section “Using Control Structures” later in this chapter), you’ll see that it may be very useful to work with exit codes.
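The exit status of the most recently executed command is available in the special variable $?. As a minimal sketch (assuming the hello script from Listing 14-1 is in the current directory), a calling shell or script can react to it like this:

```shell
#!/bin/bash
# check the exit status of the hello script via the $? variable
./hello
if [ $? -eq 0 ]; then
    echo "the script executed successfully"
else
    echo "the script reported a problem"
fi
```

You’ll see the if construction itself explained later in this chapter.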

Executing the Script

Now that your first shell script is written, it’s time to execute it. There are different ways of doing this:

· Make it executable and run it as a program.

· Run it as an argument of the bash command.

· Source it.

Making the Script Executable

The most common way to run a shell script is by making it executable. To do this with the hello script from the example in Listing 14-1, you would use the following command:

chmod +x hello

After making the script executable, you can run it, just like any other normal command. The only limitation is the exact location in the directory structure where your script is. If it is in the search path, you can run it by typing just its name. If it is not in the search path, you have to run it from the exact directory where it is. This means that if linda created a script with the name hello that is in /home/linda, she has to run it using the command /home/linda/hello. Alternatively, if she is already in /home/linda, she could use ./hello to run the script. In the latter example, the dot and the slash tell the shell to run the command from the current directory.

Tip Not sure whether a directory is in the path or not? Use echo $PATH to find out. If it’s not, you can add a directory to the path by redefining it. When defining it again, you mention a call to the old path variable, followed by the new directory. For instance, to add the directory /something to the path, you would use PATH=$PATH:/something.

Running the Script as an Argument of the bash Command

The second option for running a script is to specify its name as the argument of the bash command. For instance, our example script hello would run by using the command bash hello. The advantage of running the script in this way is that there is no need to make it executable first. Make sure that you are using a complete reference to the location where the script is when running it this way; it has to be in the current directory, or you have to use a complete reference to the directory where it is. This means that if the script is /home/linda/hello, and your current directory is /tmp, you should run it using the following command:

bash /home/linda/hello

Sourcing the Script

The third way of running the script is rather different: you can source the script. By sourcing a script, you don’t run it in a subshell, but you include it in the current shell. This may be useful if the script contains variables that you want to be active in the current shell (this happens often in the scripts that are executed when you boot your computer). Some problems may occur as well. For instance, if you use the exit command in a script that is sourced, it closes the current shell. Remember, the exit command doesn’t terminate the script file itself; it tells the shell that executes the script that the script is over and that control has to return to the parent shell. When a script is sourced, that executing shell is your current shell. Therefore, you don’t want to source scripts that contain the exit command. There are two ways to source a script. The next two lines show how to source a script that has the name settings:

. settings
source settings

It doesn’t really matter which one you use, as both are equivalent. When discussing variables in the next section, I’ll give you some more examples of why sourcing may be a very useful technique.
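To see the effect of sourcing, consider a minimal settings file that does nothing but define a variable (the file name and the variable COLOR are just examples):

```shell
#!/bin/bash
# create a small settings file that only defines a variable
echo 'COLOR=blue' > settings

# sourcing includes the file in the current shell, so COLOR is now set here
. settings
echo $COLOR    # prints: blue

# by contrast, running "bash settings" would define COLOR in a subshell only,
# and the value would be gone as soon as that subshell exits
```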

Working with Variables and Input

What makes a script so flexible is the use of variables. A variable is a named placeholder for a value that can change; the value of a variable normally depends on the circumstances. You can have your script get the value itself, for instance, by executing a command, by making a calculation, by specifying it as a command-line argument for the script, or by modifying some text string. In this section, you’ll learn all there is to know about variables.

Understanding Variables

A variable is a value that you define somewhere and use in a flexible way later. You can do this in a script, but you don’t have to, as you can define a variable in the shell as well. To define a variable, you use varname=value. To get the value of a variable later on, you refer to it with a $ prefix, for instance by using the echo command. Listing 14-3 gives an example of how a variable is set on the command line and how its value is used in the next command.

Listing 14-3. Setting and Using a Variable

nuuk:~ # HAPPY=yes
nuuk:~ # echo $HAPPY
yes

Note The method described here works for the Bash and Dash shells. Not every shell supports this method, however. For instance, on tcsh, you need to use the set command to define a variable: set happy=yes gives the value yes to the variable happy.

Variables play a very important role on your computer. When booting, lots of variables are defined and used later when you work with your computer. For instance, the name of your computer is in a variable, the name of the user account you logged in with is in a variable, and the search path is in a variable as well. These are the shell variables, the so-called environment variables you get automatically when logging in to the shell. As discussed earlier, you can use the env command to get a complete list of all the variables that are set for your computer. You will notice that most environment variables are in uppercase. However, this is in no way a requirement; an environment variable can be in lowercase as well.
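For instance, from any Bash shell you can display a few of these environment variables directly (the exact values shown will differ per system):

```shell
# inspect some common environment variables; values vary per system
echo $HOME    # home directory of the current user
echo $PATH    # list of directories searched for commands
echo $SHELL   # shell defined for the current user
```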

The advantage of using variables in shell scripts is that you can use them in three ways:

· As a single point of administration for a certain value

· As a value that a user provides in some way

· As a value that is calculated dynamically

When reading some of the scripts that are used in your computer’s boot procedure, you will notice that the beginning of the script features a list of variables that are referred to several times later in the script. Let’s have a look at the somewhat silly example in Listing 14-4.

Listing 14-4. Understanding the Use of Variables

#!/bin/bash
# dirscript
# Silly script that creates a directory with a certain name
# next sets $USER and $GROUP as the owners of the directory
# and finally changes the permission mode to 770

DIRECTORY=/blah
USER=linda
GROUP=sales

mkdir $DIRECTORY
chown $USER $DIRECTORY
chgrp $GROUP $DIRECTORY
chmod 770 $DIRECTORY

exit 0

As you can see, after the comment lines, this script starts by defining all the variables that are used. I’ve specified them in all uppercase, because it makes it a lot easier to recognize the variables when reading a longer script. In the second part of the script, the variables are referred to by typing in their names with a $ sign in front of each.

You will notice that quite a few scripts work in this way. There is a disadvantage though: it is a rather static way of working with variables. If you want a more dynamic way to work with variables, you can specify them as arguments to the script when executing it on the command line, for instance.

Variables, Subshells, and Sourcing

When defining variables, you should be aware that a variable is defined for the current shell only. This means that if you start a subshell from the current shell, the variable won’t be there. And if you define a variable in a subshell, it won’t be there anymore once you’ve quit the subshell and returned to the parent shell. Listing 14-5 shows how this works.

Listing 14-5. Variables Are Local to the Shell Where They Are Defined

nuuk:~/bin # HAPPY=yes
nuuk:~/bin # echo $HAPPY
yes
nuuk:~/bin # bash
nuuk:~/bin # echo $HAPPY

nuuk:~/bin # exit
exit
nuuk:~/bin # echo $HAPPY
yes
nuuk:~/bin #

In Listing 14-5, I’ve defined a variable with the name HAPPY, and next its value is correctly echoed. In the third command, a subshell is started, and as you can see, when asking for the value of the variable HAPPY in this subshell, it isn’t there because it simply doesn’t exist. But when the subshell is closed by using the exit command, we’re back in the parent shell where the variable still exists.

Now in some cases, you may want to set a variable that is present in all subshells as well. If this is the case, you can define it by using the export command. For instance, the following command would define the variable HAPPY and make sure that it is available in all subshells from the current shell on, until the shell in which it was defined is closed. Note, however, that there is no way to define a variable in a subshell and make it available in the parent shell as well.

export HAPPY=yes

Note Make sure that you include the definition of variables in /etc/profile so that the new variable will also be available after a reboot.

Listing 14-6 shows the same commands as used in Listing 14-5, but now with the value of the variable being exported.

Listing 14-6. By Exporting a Variable, You Can Make It Available in Subshells As Well

nuuk:~/bin # export HAPPY=yes
nuuk:~/bin # echo $HAPPY
yes
nuuk:~/bin # bash
nuuk:~/bin # echo $HAPPY
yes
nuuk:~/bin # exit
exit
nuuk:~/bin # echo $HAPPY
yes
nuuk:~/bin #

So that’s what you have to do to define variables that are available in subshells as well.

A technique you will see often as well that is related to variables is the sourcing of a file that contains variables. The idea is that somewhere on your computer you keep a common file that contains variables. For instance, consider the example file vars that you see in Listing 14-7.

Listing 14-7. By Putting All Your Variables in One File, You Can Make Them Easily Available

HAPPY=yes
ANGRY=no
SUNNY=yes
The main advantage of putting all variables in one file is that you can make them available in other shells as well by sourcing them. To do this with the example file from Listing 14-7, you would use the following command (assuming that the name of the variable file is vars):

. vars

Note . vars is not the same as ./vars. With . vars, you include the contents of vars in the current shell. With ./vars, you run vars from the current shell. The former doesn’t start a subshell, whereas the latter does.

In Listing 14-8, you can see how sourcing is used to include variables from a generic configuration file in the current shell. In this example, I’ve used sourcing for the current shell, but the technique is also quite commonly used to include common variables in a script.

Listing 14-8. Example of Sourcing Usage

nuuk:~/bin # echo $HAPPY

nuuk:~/bin # echo $ANGRY

nuuk:~/bin # echo $SUNNY

nuuk:~/bin # . vars
nuuk:~/bin # echo $HAPPY
yes
nuuk:~/bin # echo $ANGRY
no
nuuk:~/bin # echo $SUNNY
yes
nuuk:~/bin #

Working with Script Arguments

In the preceding section, you have learned how you can define variables. Up to now, you’ve seen how to create a variable in a static way. In this section, you’ll learn how to provide values for your variables in a dynamic way by specifying them as an argument for the script when running the script on the command line.

Using Script Arguments

When running a script, you can specify arguments to the script on the command line. Consider the script dirscript that you’ve seen previously in Listing 14-4. You could run it with an argument on the command line as well, as in the following example:

dirscript /blah

Now wouldn’t it be nice if in the script you could do something with the argument /blah that is specified in the script? The good news is that you can. You can refer to the first argument that was used when launching the script by using $1 in the script, the second argument by using $2, and so on, up to $9. You can also use $0 to refer to the name of the script itself. The example script in Listing 14-9 shows how it works.

Listing 14-9. Showing How Arguments Are Used

#!/bin/bash
# argscript
# Silly script that shows how arguments are used

SCRIPTNAME=$0
ARG1=$1
ARG2=$2
ARG3=$3

echo The name of this script is $SCRIPTNAME
echo The first argument used is $ARG1
echo The second argument used is $ARG2
echo The third argument used is $ARG3
exit 0

The example code in Listing 14-10 shows how dirscript is rewritten to work with an argument that is specified on the command line. This changes dirscript from a rather static script that can create one directory only to a very dynamic script that can create any directory and assign any user and any group as the owner to that directory.

Listing 14-10. Referring to Command-Line Arguments in a Script

#!/bin/bash
# dirscript
# Silly script that creates a directory with a certain name
# next sets $USER and $GROUP as the owners of the directory
# and finally changes the permission mode to 770
# Provide the directory name first, followed by the username and
# finally the groupname.

DIRECTORY=$1
USER=$2
GROUP=$3

mkdir $DIRECTORY
chown $USER $DIRECTORY
chgrp $GROUP $DIRECTORY
chmod 770 $DIRECTORY

exit 0

To execute the script from Listing 14-10, you would use a command as in this example:

dirscript /somedir kylie sales

This line shows you how the dirscript has been made more flexible now, but at the same time it also shows you the most important disadvantage: it has become somewhat less obvious as well. You can imagine that it might be very easy for a user to mix up the right order of the arguments and type dirscript kylie sales /somedir instead. So it becomes important to provide good information on how to run this script.

Counting the Number of Script Arguments

On some occasions, you’ll want to check the number of arguments that are provided with a script. This is useful if you expect a certain number of arguments, for instance, and want to make sure that the required number of arguments is present before running the script.

To count the number of arguments provided with a script, you can use $#. Basically, $# is a counter that does no more than show you the exact number of arguments you’ve used when running the script. Used all by itself, that doesn’t really make sense. Combined with an if statement (about which you’ll read more in the section “Using if ... then ... else” later in this chapter), it does make sense. For example, you could use it to show a help message if the user hasn’t provided the correct number of arguments. Listing 14-11 shows the contents of the script countargs, in which $# is used. Directly following the code of the script, you can see a sample running of it.

Listing 14-11. Counting the Number of Arguments

nuuk:~/bin # cat countargs
#!/bin/bash
# countargs
# sample script that shows how many arguments were used

echo the number of arguments is $#

exit 0
nuuk:~/bin # ./countargs a b c d e
the number of arguments is 5
nuuk:~/bin #

Referring to all Script Arguments

So far, you’ve seen that a script can work with a fixed number of arguments. The example in Listing 14-10 is hard-coded to evaluate arguments as $1, $2, and so on. But what if the number of arguments is not known beforehand? In that case, you can use $@ or $* in your script. Both refer to all arguments that were specified when starting the script, although there is a difference. To explain the difference, I need to show you how a for loop treats $@ or $*.

A for loop can be used to test all elements in a string of characters. Now what I want to show you at this point is that the difference between $@ and $* is exactly in the number of elements that each has. But let’s have a look at their default output first. Listing 14-12 shows version 1 of the showargs script.

Listing 14-12. Showing the Difference Between $@ and $*

#!/bin/bash
# showargs
# this script shows all arguments used when starting the script

echo the arguments are $@
echo the arguments are $*

exit 0

Now let’s have a look at what happens if you launch this script with the arguments a b c d. You can see the result in Listing 14-13.

Listing 14-13. Running showargs with Different Arguments

nuuk:~/bin # ./showargs a b c d
the arguments are a b c d
the arguments are a b c d

So far, there seem to be no differences between $@ and $*, yet there is a big difference: the collection of arguments in $* is treated as one text string, whereas the collection of arguments in $@ is seen as separate strings. In the section “Using for” later in this chapter, you will see some proof for this.

At this moment, you know how to handle a script that has an unknown number of arguments: you can tell the script that it should interpret them one by one. In the next subsection, you’ll learn how to get input from the user while the script runs.
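As a quick preview of that difference, the following sketch (a hypothetical showargs2 script) quotes both expansions inside for loops. Started with the arguments a b c, the quoted "$@" yields three separate words, whereas the quoted "$*" yields one single word:

```shell
#!/bin/bash
# showargs2: preview of the difference between "$@" and "$*"
# run it as: ./showargs2 a b c

# "$@" keeps every argument as a separate word
for arg in "$@"; do
    echo "via \$@: $arg"
done

# "$*" glues all arguments together into one single word
for arg in "$*"; do
    echo "via \$*: $arg"
done

exit 0
```

With a b c as arguments, the first loop runs three times and the second loop only once.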

Asking for Input

Another elegant way to get input is just to ask for it. To do this, you can use read in the script. When using read, the script waits for user input and puts that in a variable. The sample script askinput in Listing 14-14 shows a simple example script that first asks for the input and then shows the input that was provided by echoing the value of the variable. Directly following the sample code, you can also see what happens when you run the script.

Listing 14-14. Asking for Input with read

nuuk:~/bin # cat askinput
#!/bin/bash
# askinput
# ask user to enter some text and then display it

echo Enter some text
read SOMETEXT
echo -e "You have entered the following text:\t $SOMETEXT"

exit 0
nuuk:~/bin # ./askinput
Enter some text
hi there
You have entered the following text: hi there
nuuk:~/bin #

As you can see, the script starts with an echo line that explains what it expects the user to do. Next, with the line read SOMETEXT, it will stop to allow the user to enter some text. This text is stored in the variable SOMETEXT. In the following line, the echo command is used to show the current value of SOMETEXT. As you see, in this sample script I’ve used echo with the option -e. This option allows you to use some special formatting characters, in this case the formatting character \t, which enters a tab in the text. Formatting like this ensures that the result is displayed in a nice manner.

As you can see, in the line that has the command echo -e, the text that the script needs to be echoed is between double quotes. This is to prevent the shell from interpreting the special character \t before echo does. Again, if you want to make sure the shell does not interpret special characters like this, put the string between double quotes.

You may get confused here, because two different mechanisms are at work. First is the mechanism of escaping characters so that they are not interpreted by the shell. This is the difference between echo \t and echo "\t". In the former, the \ is treated as a special character, with the result that only the letter t is displayed; in the latter, double quotes tell the shell not to interpret anything that is between the double quotes, hence it shows \t.

The second mechanism is the special formatting character \t, which tells the shell to display a tab. To make sure that this or any other special formatting character is not interpreted by the shell when it first parses the script (which here would result in the shell just displaying a t), you have to put it between double quotes. In Listing 14-15, you can see the differences between all the possible commands.

Listing 14-15. Escaping and Special Characters

SYD:~ # echo \t
t
SYD:~ # echo "\t"
\t
SYD:~ # echo -e \t
t
SYD:~ # echo -e "\t"

SYD:~ #

When using echo -e, you can use the following special characters:

· \0NNN: The character whose ASCII code is NNN (octal).

· \\: Backslash. Use this if you want to show just a backslash.

· \a: Alert (BEL, or bell code). If supported by your system, this will let you hear a beep.

· \b: Backspace.

· \c: Character that suppresses a trailing newline.

· \f: Form feed.

· \n: Newline.

· \r: Carriage return.

· \t: Horizontal tab.

· \v: Vertical tab.

Using Command Substitution

Another way of getting a variable text in a script is by using command substitution. In command substitution, you’ll use the result of a command in the script. This is useful if the script has to do something with the result of a command. For instance, by using this technique, you can tell the script that it should only execute if a certain condition is met (you would have to use a conditional loop with if to accomplish this). To use command substitution, put the command that you want to use between backquotes (also known as back ticks). The following sample code line shows how it works:

nuuk:~/bin # echo "today is `date +%d-%m-%y`"
today is 27-01-09

In this example, the date command is used with some of its special formatting characters. The command date +%d-%m-%y tells date to present its result in the day-month-year format. In this example, the command is just executed; however, you can also put the result of the command substitution in a variable, which makes it easier to perform a calculation on the result later in the script. The following sample code shows how to do this:

nuuk:~/bin # TODAY=`date +%d-%m-%y`
nuuk:~/bin # echo today is $TODAY
today is 27-01-09
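Note that modern Bash also accepts $(command) as an equivalent to backquotes; it is easier to read and nests better. The sketch below stores the number of logged-in users in a variable and acts on it later; the threshold of 10 is just an example value:

```shell
#!/bin/bash
# store the result of a command substitution and act on it later
ACTIVE=$(who | wc -l)
if [ "$ACTIVE" -gt 10 ]; then
    echo "busy system: $ACTIVE users logged in"
else
    echo "quiet system: $ACTIVE users logged in"
fi
```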

Substitution Operators

Within a script, it may be important to check whether a variable really has a value assigned to it before the script continues. To do this, Bash offers substitution operators. By using substitution operators, you can assign a default value if a variable doesn’t have a value currently assigned, and much more. Table 14-1 provides an overview of the substitution operators with a short explanation of their use.

Table 14-1. Substitution Operators

${parameter:-value}

Shows the value if the parameter is not defined.

${parameter=value}

Assigns the value to the parameter if the parameter does not exist at all. This operator does nothing if the parameter exists but doesn’t have a value.

${parameter:=value}

Assigns a value if the parameter currently has no value or if the parameter doesn’t exist at all.

${parameter:?message}

Shows a message that is defined as the value if the parameter doesn’t exist or is empty. Using this construction will force the shell script to be aborted immediately.

${parameter:+value}

Displays the value if the parameter has one. If it doesn’t have a value, nothing happens.

Substitution operators can be hard to understand. To make it easier to see how they work, Listing 14-16 provides some examples. In all of these examples, something happens to the $BLAH variable. You’ll see that the result of the given command is different depending on the substitution operator that’s used. To make it easier to discuss what happens, I’ve added line numbers to the listing. Notice that, when trying this yourself, you should omit the line numbers.

Listing 14-16. Using Substitution Operators

1. sander@linux %> echo $BLAH
2.
3. sander@linux %> echo ${BLAH:-variable is empty}
4. variable is empty
5. sander@linux %> echo $BLAH
6.
7. sander@linux %> echo ${BLAH=value}
8. value
9. sander@linux %> echo $BLAH
10. value
11. sander@linux %> BLAH=
12. sander@linux %> echo ${BLAH=value}
13.
14. sander@linux %> echo ${BLAH:=value}
15. value
16. sander@linux %> echo $BLAH
17. value
18. sander@linux %> echo ${BLAH:+sometext}
19. sometext

The example of Listing 14-16 starts with the following command:

echo $BLAH

This command reads the variable BLAH and shows its current value. Because BLAH doesn’t have a value yet, nothing is shown in line 2. Next, a message is defined in line 3 that should be displayed if BLAH is empty. This happens with the following command:

sander@linux %> echo ${BLAH:-variable is empty}

As you can see, the message is displayed in line 4. However, this doesn’t assign a value to BLAH, which you see in lines 5 and 6 where the current value of BLAH is asked again:

3. sander@linux %> echo ${BLAH:-variable is empty}
4. variable is empty
5. sander@linux %> echo $BLAH
6.

In line 7, BLAH finally gets a value, which is displayed in line 8:

7. sander@linux %> echo ${BLAH=value}
8. value

The shell remembers the new value of BLAH, which you can see in lines 9 and 10 where the value of BLAH is referred to and displayed:

9. sander@linux %> echo $BLAH
10. value

In line 11, BLAH is redefined but it gets a null value:

11. sander@linux %> BLAH=

The variable still exists; it just has no value here. This is demonstrated when echo ${BLAH=value} is used in line 12; because BLAH has a null value at that moment, no new value is assigned:

12. sander@linux %> echo ${BLAH=value}
13.

Next, the construction echo ${BLAH:=value} is used to assign a new value to BLAH. The fact that BLAH really gets a value from this is shown in lines 16 and 17:

14. sander@linux %> echo ${BLAH:=value}
15. value
16. sander@linux %> echo $BLAH
17. value

Finally, the construction in line 18 is used to display sometext if BLAH currently does have a value:

18. sander@linux %> echo ${BLAH:+sometext}
19. sometext

Notice that this doesn’t change anything for the value that is assigned to BLAH at that moment; sometext just indicates that it has a value and that’s all.

Changing Variable Content with Pattern Matching

You’ve just seen how substitution operators can be used to do something if a variable does not have a value. You can consider them a rather primitive way of handling errors in your script.

A pattern-matching operator can be used to search for a pattern in a variable and, if that pattern is found, modify the variable. This can be very useful because it allows you to define a variable exactly the way you want. For example, think of the situation in which a user enters a complete path name of a file, but only the name of the file itself (without the path) is needed in your script.

The pattern-matching operator is the way to change this. Pattern-matching operators allow you to remove part of a variable automatically. Listing 14-17 is an example of a script that works with pattern-matching operators.

Listing 14-17. Working with Pattern-Matching Operators

#!/bin/bash
# stripit
# script that extracts the file name from one that includes the complete path
# usage: stripit <complete file name>

filename=${1##*/}

echo "The name of the file is $filename"

exit 0

When executed, the script will show the following result:

sander@linux %> ./stripit /bin/bash
The name of the file is bash

Pattern-matching operators always try to locate a given string. In this case, the string is */. In other words, the pattern-matching operator searches for a /, preceded by another character (*). In this pattern-matching operator, ## is used to search for the longest match of the provided string, starting from the beginning of the string. So, the pattern-matching operator searches for the last / that occurs in the string and removes it and everything that precedes the / as well. You may ask how the script comes to remove everything in front of the /. It’s because the pattern-matching operator refers to */ and not to /. You can confirm this by running the script with /bin/bash/ as an argument. In this case, the pattern that’s searched for is in the last position of the string, and the pattern-matching operator removes everything.

This example explains the use of the pattern-matching operator that looks for the longest match. By using a single #, you can let the pattern-matching operator look for the shortest match, again starting from the beginning of the string. If, for example, the script in Listing 14-17 used filename=${1#*/}, the pattern-matching operator would look for the first / in the complete file name and remove that and everything before it.

You should realize that in these examples the * is important. The pattern-matching operator ${1#*/} removes the first / found and anything in front of it. The pattern-matching operator ${1#/} removes the first / in $1 only if the value of $1 starts with a /; if anything precedes the /, the pattern does not match at the start of the string and nothing is removed.
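To see this difference in isolation, here is a small sketch of my own (not one of the chapter listings; the variable names are mine) that applies both operators to a path with and to a path without a leading /:

```shell
#!/bin/bash
# Demo of the difference between ${var#*/} and ${var#/}.
withslash=/bin/bash
noslash=usr/bin/passwd

echo ${withslash#*/}   # */ matches the leading /      -> bin/bash
echo ${withslash#/}    # the leading / itself matches  -> bin/bash
echo ${noslash#*/}     # */ matches "usr/"             -> bin/passwd
echo ${noslash#/}      # no leading /, nothing removed -> usr/bin/passwd
```

As you can see, both forms behave the same when the value starts with a /, but only the * variant can remove a path component that does not sit at the very beginning of the string.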

In these examples, you’ve seen how a pattern-matching operator is used to start searching from the beginning of a string. You can start searching from the end of the string as well. To do so, a % is used instead of a #. This % refers to the shortest match of the pattern, and %% refers to its longest match. The script in Listing 14-18 shows how this works.

Listing 14-18. Using Pattern-Matching Operators to Start Searching at the End of a String

#!/bin/bash
# stripdir
# script that isolates the directory name from a complete file name
# usage: stripdir <complete file name>

dirname=${1%%/*}

echo "The directory name is $dirname"

exit 0

While executing, you’ll see that this script has a problem:

sander@linux %> ./stripdir /bin/bash
The directory name is

As you can see, the script does its work somewhat too enthusiastically and removes everything. Fortunately, this problem can be solved by first using a pattern-matching operator that removes the / from the start of the complete file name (but only if that / is provided) and then removing everything following the first / in the complete file name. The example in Listing 14-19 shows how this is done.

Listing 14-19. Fixing the Example from Listing 14-18

#!/bin/bash
# stripdir
# script that isolates the directory name from a complete file name
# usage: stripdir <complete file name>

dirname=${1#/}
dirname=${dirname%%/*}

echo "The directory name is $dirname"

exit 0

As you can see, the problem is solved by using ${1#/}. This construction starts searching from the beginning of the file name to a /. Because no * is used here, it looks for a / only at the very first position of the file name and does nothing if the string starts with anything else. If it finds a /, it removes it. So, if a user enters usr/bin/passwd instead of /usr/bin/passwd, the ${1#/} construction does nothing at all. In the line after that, the variable dirname is defined again to do its work on the result of its first definition in the preceding line. This line does the real work and looks for the pattern /*, starting at the end of the file name. This makes sure that everything after the first / in the file name is removed and that only the name of the top-level directory is echoed. Of course, you can easily edit this script to display the complete path of the file: just use dirname=${dirname%/*} instead.

So, to make sure that you are comfortable with pattern-matching operators, the script in Listing 14-20 gives another example. This time, though, the example does not work with a file name, but with a random text string.

Listing 14-20. Another Example with Pattern Matching

#!/bin/bash
# pmex
# generic script that shows some more pattern matching
# usage: pmex

BLAH=babarabaraba

echo BLAH is $BLAH
echo 'The result of ##ba is '${BLAH##*ba}
echo 'The result of #ba is '${BLAH#*ba}
echo 'The result of %%ba is '${BLAH%%ba*}
echo 'The result of %ba is '${BLAH%ba*}

exit 0

When running it, the script gives the result shown in Listing 14-21.

Listing 14-21. The Result of the Script in Listing 14-20

root@RNA:~/scripts# ./pmex
BLAH is babarabaraba
The result of ##ba is
The result of #ba is barabaraba
The result of %%ba is
The result of %ba is babarabara
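A more practical use of % and %% is stripping file extensions. The following sketch is my own example, not one of the chapter listings:

```shell
#!/bin/bash
# Stripping extensions with suffix pattern matching.
file=archive.tar.gz

echo ${file%.*}    # shortest match of .* from the end  -> archive.tar
echo ${file%%.*}   # longest match of .* from the end   -> archive
echo ${file##*.}   # longest match of *. from the start -> gz
```

This is exactly the stripit technique from Listing 14-17, applied to a . instead of a /.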

Performing Calculations

Bash offers some options that allow you to perform calculations from scripts. Of course, you’re not likely to use them as a replacement for your spreadsheet program, but performing simple calculations from Bash can be useful. For example, you can use calculation options to execute a command a number of times or to make sure that a counter is incremented when a command executes successfully. The script in Listing 14-22 provides an example of how counters can be used.

Listing 14-22. Using a Counter in a Script

#!/bin/bash
# counter
# script that counts until infinity

counter=1
counter=$((counter + 1))
echo counter is set to $counter
exit 0

Apart from the shebang and the comment lines, this script consists of three lines. The first line initializes the variable counter with a value of 1. Next, the value of this variable is incremented by 1. In the third line, the new value of the variable is shown.

Of course, it doesn’t make much sense to run the script this way. It would make more sense if you included it in a conditional loop, to count the number of actions that are performed until a condition is true. In the section “Using while” later in this chapter, I have an example that shows how to combine counters with while.

So far, we’ve dealt with only one method to do script calculations, but you have other options as well. First, you can use the external expr command to perform any kind of calculation. For example, the following line produces the result of 1 + 2:

sum=`expr 1 + 2`; echo $sum

As you can see, a variable with the name sum is defined, and this variable gets the result of the command expr 1 + 2 by using command substitution. A semicolon is then used to indicate that what follows is a new command. (Remember the generic use of semicolons? They’re used to separate one command from the next command.) After the semicolon, the command echo $sum shows the result of the calculation.

The expr command can work with addition, and other types of calculation are supported as well. Table 14-2 summarizes the options.

Table 14-2. expr Operators

+   Addition (1 + 1 = 2).

-   Subtraction (10 - 2 = 8).

/   Division (10 / 2 = 5).

*   Multiplication (3 * 3 = 9).

%   Modulus; this calculates the remainder after division. This works because expr can handle integers only (11 % 3 = 2).

When working with these options, you’ll see that they all work fine with the exception of the multiplication operator, *. Using this operator results in a syntax error:

linux: ~> expr 2 * 2
expr: syntax error

This seems curious but can be easily explained. The * has a special meaning for the shell, as in ls -l *. When the shell parses the command line, it interprets the *, and you don’t want it to do that here. To indicate that the shell shouldn’t touch it, you have to escape it. Therefore, change the command as follows:

expr 2 \* 2

Another way to perform some calculations is to use the internal command let. Just the fact that let is internal makes it a better solution than the external command expr: it can be loaded from memory directly and doesn’t have to come all the way from your computer’s hard drive. Using let, you can make your calculation and apply the result directly to a variable, as in the following example:

let x="1 + 2"

The result of the calculation in this example is stored in the variable x. The disadvantage of working this way is that let has no option to display the result directly as can be done when using expr. For use in a script, however, it offers excellent capabilities. Listing 14-23 shows a script in which let is used to perform calculations.

Listing 14-23. Performing Calculations with let

#!/bin/bash
# calcscript
# usage: calc $1 $2 $3
# $1 is the first number
# $2 is the operator
# $3 is the second number

let x="$1 $2 $3"
echo $x

exit 0

Here you can see what happens if you run this script:

SYD:~/bin # ./calcscript 1 + 2
3
SYD:~/bin #

If you think that we’ve now covered all methods to perform calculations in a shell script, you’re wrong. Listing 14-24 shows another method that you can use.

Listing 14-24. Another Way to Calculate in a Bash Shell Script

#!/bin/bash
# calcscript
# usage: calc $1 $2 $3
# $1 is the first number
# $2 is the operator
# $3 is the second number

x=$(($1 $2 $3))

echo $x
exit 0

You saw this construction before when you read about the script that increases the value of the variable counter. Note that the double pair of parentheses can be replaced by one pair of square brackets instead, assuming the preceding $ is present.
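As a quick sketch of my own (not a chapter listing), here are the calculation methods from this section side by side; all four produce the same result:

```shell
#!/bin/bash
# Four ways to calculate 2 * 21 in Bash.
a=`expr 2 \* 21`    # external expr command; * must be escaped
let b="2 * 21"      # internal let command
c=$((2 * 21))       # arithmetic expansion
d=$[2 * 21]         # square-bracket form mentioned above (older syntax)
echo $a $b $c $d    # prints: 42 42 42 42
```

In modern scripts, the $(( ... )) form is the one you will see most often; the $[ ... ] form still works in Bash but is considered obsolete.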

Using Control Structures

Up until now, you haven’t read much about the way in which the execution of commands can be made conditional. The technique for enabling this in shell scripts is known as flow control. Flow control is about commands that are used to control the flow of your script based on specific conditions, hence the classification “control structures.” Bash offers many options to use flow control in scripts:

· if: Use if to execute commands only if certain conditions were met. To customize the working of if some more, you can use else to indicate what should happen if the condition isn’t met.

· case: Use case to work with options. This allows the user to further specify the working of the command when he or she runs it.

· for: This construction is used to run a command for a given number of items. For example, you can use for to do something for every file in a specified directory.

· while: Use while as long as the specified condition is met. For example, this construction can be very useful to check whether a certain host is reachable or to monitor the activity of a process.

· until: This is the opposite of while. Use until to run a command until a certain condition has been met.

The following subsections cover flow control in more detail. Before going into these details, however, I want to first introduce you to the test command. This command is used to perform many checks to see, for example, whether a file exists or if a variable has a value. Table 14-3 shows some of the more common test options. For a complete overview, consult its man page.

Table 14-3. Common Options for the test Command



test -e $1

Checks whether $1 is a file, without looking at what particular kind of file it is.

test -f $1

Checks whether $1 is a regular file and not (for example) a device file, a directory, or an executable file.

test -d $1

Checks whether $1 is a directory.

test -x $1

Checks whether $1 is an executable file. Note that you can test for other permissions as well. For example, -g would check to see whether the SGID permission (see Chapter 7) is set.

test $1 -nt $2

Controls whether $1 is newer than $2.

test $1 -ot $2

Controls whether $1 is older than $2.

test $1 -ef $2

Checks whether $1 and $2 both refer to the same inode. This is the case if one is a hard link to the other (see Chapter 5 for more on inodes).

test $1 -eq $2

Checks whether the integers $1 and $2 are equal.

test $1 -ne $2

Checks whether the integers $1 and $2 are not equal.

test $1 -gt $2

Gives true if integer $1 is greater than integer $2.

test $1 -lt $2

Gives true if integer $1 is less than integer $2.

test $1 -ge $2

Checks whether integer $1 is greater than or equal to integer $2.

test $1 -le $2

Checks whether integer $1 is less than or equal to integer $2.

test -z $1

Checks whether $1 is empty. This is a very useful construction for finding out whether a variable has been defined.

test $1

Gives the exit status 0 if $1 is defined.

test $1 = $2

Checks whether the strings $1 and $2 are the same. This is most useful to compare the value of two variables.

test $1 != $2

Checks whether the strings $1 and $2 are not equal to each other. You can use ! with all other tests as well to check for the negation of the statement.

You can use the test command in two ways. First, you can write the complete command, as in test -f $1. This command, however, can be rewritten as [ -f $1 ]. (Don’t forget the spaces between the square brackets—the script won’t work without them!) Most of the time you’ll see the latter option only because people who write shell scripts like to work as efficiently as possible.
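As a small sketch of my own, the two notations below do exactly the same thing:

```shell
#!/bin/bash
# test and [ ... ] are the same command in two different notations.
if test -d /etc
then
        echo "written with test: /etc is a directory"
fi

if [ -d /etc ]
then
        echo "written with [ ]: /etc is a directory"
fi
```

Both if statements print their message on any Linux system, because /etc is a directory; the only difference is the notation.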

Using if ... then ... else

Possibly the classic example of flow control consists of constructions that use if ... then ... else. This construction offers various interesting possibilities, especially if used in conjunction with the test command. You can use it to find out whether a file exists, whether a variable currently has a value, and much more. Listing 14-25 provides an example of a construction with if ... then ... else that can be used in a shell script.

Listing 14-25. Using if to Perform a Basic Check

#!/bin/bash
# testarg
# test to see if argument is present

if [ -z $1 ]
then
        echo You have to provide an argument with this command
        exit 1
fi

echo the argument is $1

exit 0

The simple check from the Listing 14-25 example is used to see whether the user who started your script provided an argument. Here’s what you see if you run the script:

SYD:~/bin # ./testarg
You have to provide an argument with this command
SYD:~/bin #

If the user didn’t provide an argument, the code in the if block becomes active: it displays the message that the user needs to provide an argument and then terminates the script. If an argument has been provided, the commands in the if block aren’t executed, and the script runs the line echo the argument is $1, which in this case echoes the argument to the user’s screen.

Also notice how the syntax of the if construction is organized. First, you have to open it with if. Then, on a new line (or after a semicolon), then is used. Finally, the if statement is closed with fi. Make sure all those ingredients are present every time, or your construction won’t work.

Image Note You can use a semicolon as a separator between two commands. So ls; who would first execute the command ls and then the command who.

The example in Listing 14-25 is rather simple; it’s also possible to make if loops more complex and have them test for more than one condition. To do this, use else or elif. Using else within the control structure allows you to not only make sure that something happens if the condition is met, but also check another condition if the condition is not met. You can even use else in conjunction with if (elif) to open a new control structure if the first condition isn’t met. If you do that, you have to use then after elif. Listing 14-26 is an example of the latter construction.

Listing 14-26. Nesting if Control Structures

#!/bin/bash
# testfile

if [ -f $1 ]
then
        echo "$1 is a file"
elif [ -d $1 ]
then
        echo "$1 is a directory"
else
        echo "I don't know what \$1 is"
fi

exit 0

Here you can see what happens when you run this script:

SYD:~/bin # ./testfile /bin/blah
I don’t know what $1 is
SYD:~/bin #

In this example, the argument that was entered when running the script is checked. If it is a file (if [ -f $1 ]), the script tells the user that. If it isn’t a file, the part under elif is executed, which basically opens a second control structure. In this second control structure, the first test performed is to see whether $1 is perhaps a directory. Notice that this second part of the control structure becomes active only if $1 is not a file. If $1 isn’t a directory either, the part after else is run, and the script reports that it has no idea what $1 is. Notice that for this entire construction, only one fi is needed to close the control structure, but after every if (that includes all elif as well), you need to use then.

You should know that if ... then ... else constructions are used in two different ways. You can write out the complete construction as in the previous examples, or you can employ constructions that use && and ||. These so-called logical operators are used to separate two commands and establish a conditional relationship between them. If && is used, the second command is executed only if the first command is executed successfully (in other words, if the first command is true). If || is used, the second command is executed only if the first command isn’t true. So, with one line of code, you can find out whether $1 is a file and echo a message if it is:

[ -f $1 ] && echo $1 is a file

Note that this can be rewritten as follows:

[ ! -f $1 ] || echo $1 is a file

Image Note This example only works as a part of a complete shell script. Listing 14-27 shows how the example from Listing 14-26 is rewritten if you want to use this syntax.

In case you don’t quite follow what is happening in the second example: it performs a test to see whether $1 is not a file. (The ! is used to test whether something is not the case.) Only if the test fails (which is the case if $1 is indeed a file) does the command execute the part after the || and echo that $1 is a file. Listing 14-27 shows how you can rewrite the script from Listing 14-26 with the && and || tests.

Listing 14-27. The Example from Listing 14-26 Rewritten with && and ||

#!/bin/bash
# testfile

{ [ -z "$1" ] && echo please provide an argument && exit 1; } || \
{ [ -f "$1" ] && echo "$1" is a file; } || \
{ [ -d "$1" ] && echo "$1" is a directory; } || \
echo I have no idea what "$1" is

exit 0

Image Note You’ll notice in Listing 14-27 that I used a \ at the end of the lines. This backslash makes sure that the newline at the end of the line is not interpreted, so the shell reads the whole construction as a single command. It also keeps the code within the width of these pages. I’ll use this convention in some later scripts as well.

It is not really hard to understand the script in Listing 14-27 if you understand the script in Listing 14-26, because they do the same thing. However, you should be aware of a few differences. First, I’ve added a [ -z "$1" ] test that prints an error and exits the script if $1 is not defined.

Next, the example in Listing 14-27 is essentially one command line. This makes the script more compact, but it also makes it a little harder to understand what is going on. I’ve used braces, { ... }, to group the commands and increase the readability a little bit. Braces are used here rather than parentheses, because parentheses would run each group in a subshell, and an exit 1 inside a subshell ends only that subshell, not the script itself. The || operators chain the groups together, so the first group that succeeds ends the evaluation.

Let’s have a look at some other examples with if ... then ... else. Consider the following line:

rsync -vaze ssh --delete /srv/ftp server2:/srv/ftp || echo "rsync failed" | mail -s "rsync failed" root

Here, the rsync command tries to synchronize the content of the directory /srv/ftp with the content of the same directory on some other machine. If this succeeds, no further evaluation of this line is attempted. If something goes wrong, however, the part after the || becomes active and makes sure that the user gets a message.

The following script presents another example, a complex one that checks whether available disk space has dropped below a certain threshold. The complex part lies in the sequence of pipes used in the command substitution:

if [ `df -m /var | tail -n1 | awk '{print $4}'` -lt 120 ]
then
        logger running out of disk space
fi

The important part of this piece of code is in the first line, where the result of a command is included in the if loop by using backquoting, and that result is compared with the value 120.

If the result is less than 120, the following section becomes active. If the result is greater than 120, nothing happens. As for the command itself, it uses df to check available disk space on the volume where /var is mounted, filters out the last line of that result, and from that last line filters out the fourth column only, which in turn is compared to the value 120. And, if the condition is true, the logger command writes a message to the system log file. This example isn’t really well organized. The following rewrite does exactly the same, but using a different syntax:

[ `df -m /var | tail -n1 | awk '{print $4}'` -lt 120 ] && logger running out of \
disk space

This shows why it’s fun to write shell scripts: you can almost always make them better.

Using case

Let’s start with an example this time (see Listing 14-28). Create the script, run it, and then try to figure out what it’s done.

Listing 14-28. Example Script with Case

#!/bin/bash
# soccer
# Your personal soccer expert
# predicts world championship football

cat << EOF
Enter the name of the country you think will be world soccer champion in 2010.
EOF

read COUNTRY

# translate $COUNTRY into all uppercase
COUNTRY=`echo $COUNTRY | tr a-z A-Z`

# perform the test
case $COUNTRY in
        NEDERLAND | HOLLAND | NETHERLANDS)
                echo "Yes, you are a soccer expert "
                ;;
        DEUTSCHLAND | GERMANY | MANNSCHAFT)
                echo "No, they are the worst team on earth"
                ;;
        ENGLAND | ENGELAND)
                echo "hahahahahahaha, you must be joking"
                ;;
        *)
                echo "Huh? Do they play soccer?"
                ;;
esac

exit 0

In case you haven’t guessed, this script can be used to analyze the next World Cup championship (of course, you can modify it for any major sports event you like). It will first ask the person who runs the script to enter the name of the country that he or she thinks will be the next champion. This country is put in the $COUNTRY variable. Notice the use of uppercase for this variable; it’s a nice way to identify variables easily if your script becomes rather big.

Because the case statement that’s used in this script is case sensitive, the user input in the first part is translated into all uppercase using the tr command. Using command substitution with this command, the current value of $COUNTRY is read, translated to all uppercase, and assigned again to the $COUNTRY variable using command substitution. Also notice that I’ve made it easier to distinguish the different parts of this script by adding some additional comments.

The body of this script consists of the case command, which is used to evaluate the input the user has entered. The generic construction used to evaluate the input is as follows:

alternative1 | alternative2)

So, the first line evaluates everything that the user can enter. Notice that more than one alternative is used on most lines, which makes it easier to handle typos and other situations where the user hasn’t typed exactly what you were expecting him or her to type. Then on separate lines come all the commands that you want the script to execute. In the example, just one command is executed, but you can enter a hundred lines to execute commands if you like. Finally, the test is closed by using ;;. Don’t forget to close all items with the double semicolons; otherwise, the script won’t understand you. The ;; can be on a line by itself, but you can also put it directly after the last command line in the script.

When using case, you should make it a habit to handle “all other options.” Hopefully, your user will enter something that you expect. But what if he or she doesn’t? In that case, you probably do want the user to see something. This is handled by the *) at the end of the script. So, in this case, for everything the user enters that isn’t specifically mentioned as an option in the script, the script will echo "Huh? Do they play soccer?" to the user.
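To summarize the syntax, here is a minimal sketch of my own (the actions start and stop are invented for illustration, not taken from the book):

```shell
#!/bin/bash
# Minimal case construction: alternatives separated by |, each branch
# closed with ;;, and *) as the catchall for everything else.
case $1 in
        start|begin)
                echo "starting"
                ;;
        stop|halt)
                echo "stopping"
                ;;
        *)
                echo "usage: $0 start|stop"
                ;;
esac

exit 0
```

Running this with start or begin prints "starting"; anything not listed falls through to the *) branch.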

Using while

You can use while to run a command as long as a condition is met. Listing 14-29 shows how while is used to monitor activity of an important process.

Listing 14-29. Monitoring Process Activity with while

#!/bin/bash
# procesmon
# usage: procesmon <processname>

while ps aux | grep $1
do
        sleep 1
done

logger $1 is no longer present

exit 0

The body of this script consists of the command ps aux | grep $1. This command monitors for the availability of the process whose name was entered as an argument when starting the script. As long as the process is detected, the condition is met and the commands in the loop are executed. In this case, the script waits 1 second and then repeats its action. When the process is no longer detected, the logger command writes a message to syslog.

As you can see from this example, while offers an excellent method to check whether something (such as a process or an IP address) still exists. If you combine it with the sleep command, you can start your script with while as a kind of daemon and perform a check repeatedly. For example, the script in Listing 14-30 would write a message to syslog if the IP address suddenly gets lost due to an error.

Listing 14-30. Checking Whether the IP Address Is Still There

#!/bin/bash
# ipmon
# script that monitors an IP address
# usage: ipmon <ip-address>

while ip a s | grep $1/ > /dev/null
do
        sleep 5
done

logger HELP, the IP address $1 is gone.

exit 0

Using until

Whereas while does its work as long as a certain condition is met, until is used for the opposite: it runs until the condition is met. This can be seen in Listing 14-31 where the script monitors whether the user, whose name is entered as the argument, is logged in.

Listing 14-31. Monitoring User Login

#!/bin/bash
# usermon
# script that alerts when a user logs in
# usage: usermon <username>

until who | grep $1 >> /dev/null
do
        echo $1 is not logged in yet
        sleep 5
done

echo $1 has just logged in

exit 0

In this example, the who | grep $1 command is executed repeatedly. In this command, the result of the who command that lists users currently logged in to the system is grepped for the occurrence of $1. As long as that command is not true (which is the case if the user is not logged in), the commands in the loop will be executed. As soon as the user logs in, the loop is broken, and a message is displayed to say that the user has just logged in. Notice the use of redirection to the null device in the test, ensuring that the result of the who command is not echoed on the screen.

Using for

Sometimes it’s necessary to execute a series of commands, whether for a limited or an unlimited number of times. In such cases, for loops offer an excellent solution. Listing 14-32 shows how you can use for to create a counter.

Listing 14-32. Using for to Create a Counter

#!/bin/bash
# counter
# counter that counts from 1 to 9

for (( counter=1; counter<10; counter++ )); do
        echo "The counter is now set to $counter"
done

exit 0

The code used in this script isn’t difficult to understand: the conditional loop determines that, as long as the counter has a value between 1 and 10, the variable counter must be automatically incremented by 1. To do this, the construction counter++ is used. As long as this incrementing of the variable counter continues, the commands between do and done are executed. When the specified number is reached, the loop is left, and the script will terminate and indicate with exit 0 to the system that it has done its work successfully.
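The same counter can also be written with a list generated by the seq command; this variant is my own sketch, not a listing from the book:

```shell
#!/bin/bash
# Counting from 1 to 9 with seq instead of the C-style for loop.
for counter in `seq 1 9`
do
        echo "The counter is now set to $counter"
done

exit 0
```

Here seq 1 9 produces the list 1 through 9, and for simply walks through that list; the C-style (( ... )) form avoids the external command and is a little faster.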

Loops with for can be pretty versatile. For example, you can use it to do something on every line in a text file. The example in Listing 14-33 illustrates how this works (as you will see, however, it has some problems).

Listing 14-33. Displaying Lines from a Text File

#!/bin/bash
# listusers
# faulty script that tries to show all users in /etc/passwd

for i in `cat /etc/passwd`
do
        echo $i
done

exit 0

In this example, for is used to display all lines in /etc/passwd one by one. Of course, just echoing the lines is a rather trivial example, but it’s enough to show how for works. If you’re using for in this way, you should notice that it cannot handle spaces in the lines. A space would be interpreted as a field separator, so a new field would begin after the space.
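If you do need to process a file line by line, spaces included, a while loop with read is the usual fix. This is my own sketch, not one of the chapter listings:

```shell
#!/bin/bash
# Reads /etc/passwd line by line; read stores each complete line in $line,
# so a space within a line does not split it into separate items.
while read line
do
        echo "$line"
done < /etc/passwd

exit 0
```

Note the double quotes around $line in the echo command: without them, the shell would perform word splitting on the line after all.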

Listing 14-34 shows one more example with for: in this example, for is used to ping a range of IP addresses. This is a script that one of my customers likes to run to see whether a range of machines is up and running. Because the IP addresses are always in the same range, starting with 192.168.1, there’s no harm in hard-coding these first three octets in the script itself. Of course, you’re free to work with complete IP addresses instead.

Listing 14-34. Testing a Range of IP Addresses

#!/bin/bash
for i in $@
do
        ping -c 1 192.168.1.$i
done

Notice the use of $@ in this script. This operator allows you to refer to all arguments that were specified when starting the script, no matter how many there are. Let’s have a closer look at this.

Remember $* and $@, used when treating arguments within a script? Time to show you exactly what the difference is between the two by using a for loop. Using for, you can perform an action on each element in a string. Listing 14-35 provides a simple example that demonstrates this.

Listing 14-35. Using for to Distinguish Different Elements in a String

nuuk:~/bin # for i in 1 2 3; do echo $i; done
1
2
3

The example command line in Listing 14-35 consists of three different parts, which are separated by a semicolon. The first part is for i in 1 2 3, which you can interpret as “for each element in the string 1 2 3.” While evaluating the for loop, each of these elements is stored in the temporary variable i. In the second part, for each of these elements a command is executed. In this case, the command do echo $i echoes the elements one by one, which you can clearly see in the output of the command used in Listing 14-35. Finally, the third part of this for loop is the word done, which closes the for loop. Every for loop starts with for, is followed by do, and closes with done. Now let’s change the showargs script that appeared earlier in this chapter in Listing 14-12 to include a for loop for both $@ and $*.

Listing 14-36 shows what the new script looks like.

Listing 14-36. Evaluating $@ and $* Using for

#!/bin/bash
# showargs
# this script shows all arguments used when starting the script

echo showing for on \$@
for i in "$@"
do
        echo $i
done

echo showing for on \$*
for i in "$*"
do
        echo $i
done

exit 0

Let’s consider a few comments before running this script. In this script, a technique called escaping is used. The purpose of escaping is to make sure that the shell doesn’t interpret certain elements. For instance, consider this line:

echo showing for on $@

If you run this line as shown, the shell will interpret $@ and show you its current value. In this case, we want the shell to display the characters $@ instead. To do so, the shell should not interpret the $ sign, which we make clear by adding a backslash in front of it. The backslash tells the shell not to interpret the next character.
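A quick sketch of my own shows the effect of the backslash:

```shell
#!/bin/bash
# With the backslash, $@ is printed literally; without it, it is expanded.
set -- a b c               # give the shell three positional parameters
echo showing for on \$@    # prints: showing for on $@
echo showing for on $@     # prints: showing for on a b c
```

The set -- a b c line is only there to simulate the arguments that would normally be passed on the command line.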

Later in the script, notice the lines for i in "$@" and for i in "$*". Here, I’ve put $@ and $* between double quotes, which changes the way the shell expands them: "$@" expands to each argument as a separate word, whereas "$*" expands to all arguments combined into one single word. At this point, I recommend you try running the script once without the double quotes and once with the double quotes to see the difference yourself.

When you run the script without the double quotes and start the script with a command like ./showargs a b c d, the shell performs word splitting on the results of both $@ and $*. So in both cases it would in fact execute for i in a b c d and show a, b, c, and d, each displayed on its own line. But that’s not what we want: to see the difference between the two, $* must be treated as one single string. To make sure this happens, put $* between double quotes. In Listing 14-37, you can see the result of running the example script from Listing 14-36.

Listing 14-37. Result of Running the Example Script in Listing 14-36

nuuk:~/bin # ./showargs a b c d
showing for on $@
a
b
c
d
showing for on $*
a b c d
nuuk:~/bin #

Summary

In this chapter, you’ve learned how to write a Bash shell script. Having mastered shell scripting, you are well on your way to becoming a real expert on the Linux command line. The following common Bash shell script elements have been covered:

· #!/bin/bash: Represents a shebang. Every script should start with a shebang, which tells the parent shell what shell should be used to interpret the script.

· #: Indicates a comment line. Use comments to explain to the user of a script what exactly the script ought to be doing.

· exit: Informs the parent shell whether the script executed successfully. It is good practice to include exit at the end of scripts.

· echo: Displays text on the STDOUT while executing the script.

· source: Includes a script in the current shell environment without launching a subshell.

· .: Operates the same way as source.

· read: Stops the script to read user input and put that into a variable.

· which: Searches the path to see where an executable file exists. Issue this before giving a name to a script to avoid using a name already in use.

· $0: Refers to the script name.

· $1, $n: Refer to arguments that were employed when starting the script.

· $@: Refers to all arguments.

· $#: Gives the number of arguments used when starting the script.

· $*: Refers to all arguments.

· \: Escapes the next character so that it is not interpreted by the shell.

· "...": Escapes the next string so that some characters are not interpreted by the shell. Generally, this is used when a string contains spaces.

· '...': Escapes the next string so that no characters are interpreted by the shell at all.

· expr: Performs calculations.

· let: Performs calculations.

· test: Performs tests, for instance, to see whether a file exists or a value is greater or smaller than another value.

· if ... then ... else: Executes a command when a certain condition has been met.

· while ... do ... done: Executes as long as a certain condition has been met.

· until ... do ...done: Executes until a certain condition has been met.

· case ... esac: Checks different options and, depending on the option that is true, executes a command.

· for ... do ... done: Executes a command for a range of items.

This was the last chapter. After reading all chapters in this book, you should now be capable of working efficiently from the Linux command line.