Chapter 16. Using Stored Routines, Triggers, and Events

Introduction

This chapter discusses the following kinds of database objects:

Stored routines (functions and procedures)

A stored function performs a calculation and returns a value that can be used in expressions just like a built-in function such as RAND(), NOW(), or LEFT(). A stored procedure performs calculations for which no return value is needed. Procedures are not used in expressions; instead, they are invoked with the CALL statement. A procedure might be executed to update rows in a table or produce a result set that is sent to the client program. One reason for using a stored routine is that it encapsulates the code for performing a calculation. This enables you to perform the calculation easily by invoking the routine rather than by repeating all its code each time.

Triggers

A trigger is an object that is defined to activate when a table is modified. Triggers are available for INSERT, UPDATE, and DELETE statements. For example, you can check values before they are inserted into a table, or specify that any row deleted from a table should be logged to another table that serves as a journal of data changes. Triggers are useful for automating these actions so that you don’t need to remember to do them yourself each time you modify a table.

Events

An event is an object that executes SQL statements at a scheduled time or times. You can think of an event as something like a Unix cron job, but that runs within MySQL. For example, events can help you perform administrative tasks such as deleting old table records periodically or creating nightly summaries.

Stored routines and triggers are supported as of MySQL 5.0. Event support begins with MySQL 5.1.

These different kinds of objects have in common the property that they are user-defined but stored on the server side for later execution. This differs from sending an SQL statement from the client to the server for immediate execution. Each of these objects also has the property that it is defined in terms of other SQL statements to be executed when the object is invoked. The object has a body that is a single SQL statement, but that statement can use compound-statement syntax (a BEGIN ... END block) that contains multiple statements. This means that the body can range from very simple to extremely complex. The following stored procedure is a trivial routine that does nothing but set a user-defined variable and for which the body consists of a single SET statement:

CREATE PROCEDURE get_time()

SET @current_time = CURTIME();

For more complex operations, a compound statement is necessary:

CREATE PROCEDURE part_of_day()

BEGIN

CALL get_time();

IF @current_time < '12:00:00' THEN

SET @day_part = 'morning';

ELSEIF @current_time = '12:00:00' THEN

SET @day_part = 'noon';

ELSE

SET @day_part = 'afternoon or night';

END IF;

END;

Here, the BEGIN ... END block contains multiple statements, but is itself considered to constitute a single statement. Compound statements enable you to declare local variables and to use conditional logic and looping constructs. Note also that one stored procedure can invoke another: part_of_day() calls get_time(). These capabilities provide you with considerably more flexibility for algorithmic expression than you have when you write inline expressions in noncompound statements such as SELECT or UPDATE.
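
After both routines have been created (the next recipe shows how to enter part_of_day() within mysql, because its body contains ; characters), you can invoke them and examine the variables they set. For example:

mysql> CALL part_of_day();

mysql> SELECT @current_time, @day_part;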

The statements within a compound statement must each be terminated by a ; character. That requirement causes a problem if you use the mysql client to define an object that uses compound statements because mysql itself interprets ; to determine statement boundaries. The solution to this problem is to redefine mysql’s statement delimiter while you’re defining a compound-statement object. Creating Compound-Statement Objects covers how to do this; make sure that you read that recipe before proceeding to those that follow it.

Due to space limitations, this chapter illustrates by example but does not otherwise go into much detail about the extensive syntax for stored routines, triggers, and events. For complete syntax descriptions, see the MySQL Reference Manual.

The scripts for the examples shown in this chapter can be found in the routines, triggers, and events directories of the recipes distribution. Scripts to create some of the tables are in the tables directory.

In addition to the stored routines shown in this chapter, others can be found elsewhere in this book.

PRIVILEGES FOR STORED ROUTINES, TRIGGERS, AND EVENTS

When you create stored routines, triggers, or events, the following privilege requirements must be satisfied or you will have problems:

§ To create a stored routine, you must have the CREATE ROUTINE privilege.

§ If binary logging is enabled for your MySQL server (which is common practice), there are some additional requirements for creating stored functions (but not stored procedures). These requirements are necessary to ensure that if you use the binary log for replication or for restoring backups, function invocations cause the same effect when reexecuted as they do when originally executed:

§ You must have the SUPER privilege, and you must declare either that the function is deterministic or that it does not modify data, using one of the DETERMINISTIC, NO SQL, or READS SQL DATA characteristics. (It’s possible to create functions that are not deterministic or that modify data, but they might not be safe for replication or for use in backups.)

§ Alternatively, if you enable the log_bin_trust_function_creators system variable, the server waives both of the preceding requirements.

To create a trigger in MySQL 5.0, you must have the SUPER privilege. In MySQL 5.1, you must have the TRIGGER privilege for the table associated with the trigger.

To create events, you must have the EVENT privilege for the database in which the events are created.
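
For reference, these privileges are granted with GRANT statements. The following statements are a sketch only: they assume an administrative account granting privileges to a hypothetical account named 'cbuser'@'localhost' for a database named cookbook, so substitute your own account and database names:

GRANT CREATE ROUTINE ON cookbook.* TO 'cbuser'@'localhost';

GRANT TRIGGER ON cookbook.* TO 'cbuser'@'localhost';  # MySQL 5.1; in 5.0, trigger creation requires SUPER

GRANT EVENT ON cookbook.* TO 'cbuser'@'localhost';    # MySQL 5.1 and up

GRANT SUPER ON *.* TO 'cbuser'@'localhost';           # global privilege; needed only in the cases noted above

SET GLOBAL log_bin_trust_function_creators = 1;       # waives the function-creation restrictions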

Creating Compound-Statement Objects

Problem

You want to define a stored routine, a trigger, or an event, but its body contains instances of the ; statement terminator. This is the same terminator that mysql uses by default, so mysql misinterprets the definition and produces an error.

Solution

Redefine the mysql statement terminator with the delimiter command.

Discussion

Each stored routine, trigger, or event is an object with a body that must be a single SQL statement. However, these objects often perform complex operations that require several statements. To handle this, you write the statements within a BEGIN ... END block that forms a compound statement. That is, the block is itself a single statement but can contain multiple statements, each terminated by a ; character. The BEGIN ... END block can contain statements such as SELECT or INSERT, but compound statements also allow for conditional statements such as IF or CASE, looping constructs such as WHILE or REPEAT, or other BEGIN ... END blocks.

Compound-statement syntax provides you with a lot of flexibility, but if you define compound-statement objects within mysql, you’ll quickly run into a small problem: statements within a compound statement each must be terminated by a ; character, but mysql itself interprets ; to figure out where each statement ends so that it can send them one at a time to the server to be executed. Consequently, mysql stops reading the compound statement when it sees the first ; character, which is too early. The solution to this problem is to tell mysql to recognize a different statement delimiter. Then mysql will ignore the ; character within the object body. You terminate the object itself with the new delimiter, which mysql recognizes and then sends the entire object definition to the server. You can restore the mysql delimiter to its original value after defining the compound-statement object.

Suppose that you want to define a stored function that calculates and returns the average size in bytes of mail messages listed in the mail table. The function can be defined with a body part consisting of a single SQL statement like this:

CREATE FUNCTION avg_mail_size()

RETURNS FLOAT READS SQL DATA

RETURN (SELECT AVG(size) FROM mail);

The RETURNS FLOAT clause indicates the type of the function’s return value, and READS SQL DATA indicates that the function reads but does not modify data. The body of the function follows those clauses and consists of the single RETURN statement that executes a subquery and returns the value that it produces to the caller. (Every stored function must have at least one RETURN statement.)

In mysql, you can enter that statement as shown and there is no problem. The definition requires just the single terminator at the end and none internally, so no ambiguity arises. But suppose instead that you want to define the function to take an argument naming a user that is interpreted as follows:

§ If the argument is NULL, the function returns the average size for all messages (as before).

§ If the argument is non-NULL, the function returns the average size for messages sent by that user.

To accomplish this, the routine needs a more complex body that uses a BEGIN ... END block:

CREATE FUNCTION avg_mail_size(user VARCHAR(8))

RETURNS FLOAT READS SQL DATA

BEGIN

IF user IS NULL THEN

# return average message size over all users

RETURN (SELECT AVG(size) FROM mail);

ELSE

# return average message size for given user

RETURN (SELECT AVG(size) FROM mail WHERE srcuser = user);

END IF;

END;

If you try to define the function within mysql by entering that definition as is, mysql will improperly interpret the first semicolon in the function body as ending the definition. To handle this, use the delimiter command to change the mysql delimiter to something else temporarily. The following example shows how to do this and then restore the delimiter to its default value:

mysql> delimiter $$

mysql> CREATE FUNCTION avg_mail_size (user VARCHAR(8))

-> RETURNS FLOAT READS SQL DATA

-> BEGIN

-> IF user IS NULL THEN

-> # return average message size over all users

-> RETURN (SELECT AVG(size) FROM mail);

-> ELSE

-> # return average message size for given user

-> RETURN (SELECT AVG(size) FROM mail WHERE srcuser = user);

-> END IF;

-> END;

-> $$

Query OK, 0 rows affected (0.02 sec)

mysql> delimiter ;

After defining the stored function, you can invoke it the same way as built-in functions:

mysql> SELECT avg_mail_size(NULL), avg_mail_size('barb');

+---------------------+-----------------------+

| avg_mail_size(NULL) | avg_mail_size('barb') |

+---------------------+-----------------------+

|         237386.5625 |                 52232 |

+---------------------+-----------------------+

The same principles apply to defining other objects that use compound statements (stored procedures, triggers, and events).
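
For example, the part_of_day() procedure shown in the chapter introduction contains internal ; terminators, so to define it within mysql you would wrap it the same way. A sketch, using // as the temporary delimiter:

mysql> delimiter //

mysql> CREATE PROCEDURE part_of_day()

-> BEGIN

-> CALL get_time();

-> IF @current_time < '12:00:00' THEN

-> SET @day_part = 'morning';

-> ELSEIF @current_time = '12:00:00' THEN

-> SET @day_part = 'noon';

-> ELSE

-> SET @day_part = 'afternoon or night';

-> END IF;

-> END;

-> //

mysql> delimiter ;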

Using a Stored Function to Encapsulate a Calculation

Problem

A particular calculation to produce a value must be performed frequently by different applications, but you don’t want to write out the expression for it each time it’s needed. Or a calculation is difficult to perform inline within an expression because it requires conditional or looping logic.

Solution

Use a stored function to hide all the ugly details of the calculation and make it easy to perform.

Discussion

Stored functions enable you to simplify your applications because you can write out the code that produces a calculation result once in the function definition, and then simply invoke the function whenever you need to perform the calculation. Stored functions also enable you to use more complex algorithmic constructs than are available when you write a calculation inline within an expression. This section shows an example that illustrates how stored functions can be useful in these ways. Granted, the example is not actually that complex, but you can apply the same principles used here to write functions that are much more elaborate.

Different states in the U.S. charge different rates for sales tax. If you sell goods to people from different states and must charge tax using the rate appropriate for the customer’s state of residence, tax computation is something you’ll need to do for every sale. You can handle this with a table that lists the sales tax rate for each state, and a stored function that calculates the amount of tax given the amount of a sale and a state.

To set up the table, use the sales_tax_rate.sql script in the tables directory of the recipes distribution. The sales_tax_rate table has two columns: state (a two-letter abbreviation), and tax_rate (a DECIMAL value rather than a FLOAT, to preserve accuracy).
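
The actual script may differ in its details, but based on that description, a minimal equivalent of the table with a couple of sample rows might look like this (the column definitions are inferred from the description above, the primary key and NOT NULL constraints are assumptions, and the VT and NY rates are chosen to match the example that follows):

CREATE TABLE sales_tax_rate

(

state    CHAR(2) NOT NULL,       # two-letter state abbreviation

tax_rate DECIMAL(3,2) NOT NULL,  # rate as a fraction (0.09 = 9 percent)

PRIMARY KEY (state)

);

INSERT INTO sales_tax_rate (state, tax_rate) VALUES('VT',0.01),('NY',0.09);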

The stored function, sales_tax(), can be defined as follows:

CREATE FUNCTION sales_tax(state_param CHAR(2), amount_param DECIMAL(10,2))

RETURNS DECIMAL(10,2) READS SQL DATA

BEGIN

DECLARE rate_var DECIMAL(3,2);

DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET rate_var = 0;

SELECT tax_rate INTO rate_var

FROM sales_tax_rate WHERE state = state_param;

RETURN amount_param * rate_var;

END;

The function looks up the tax rate for the given state, and returns the tax as the product of the sale amount and the tax rate.

Suppose that the tax rates for Vermont and New York are 1 and 9 percent, respectively. Try the function to see whether the tax is computed correctly for a sales amount of $100:

mysql> SELECT sales_tax('VT',100.00), sales_tax('NY',100.00);

+------------------------+------------------------+

| sales_tax('VT',100.00) | sales_tax('NY',100.00) |

+------------------------+------------------------+

|                   1.00 |                   9.00 |

+------------------------+------------------------+

For a location not listed in the tax rate table, the function should fail to determine a rate and compute a tax of zero:

mysql> SELECT sales_tax('ZZ',100.00);

+------------------------+

| sales_tax('ZZ',100.00) |

+------------------------+

|                   0.00 |

+------------------------+

Obviously, if you take sales from locations not listed in the table, the function cannot determine the rate for those locations. In this case, the function assumes a tax rate of 0 percent. This is done by means of a CONTINUE handler, which kicks in if a No Data condition (SQLSTATE value 02000) occurs. That is, if there is no row for the given state_param value, the SELECT statement fails to find a sales tax rate. In that case, the CONTINUE handler sets the rate to 0 and continues execution with the next statement after the SELECT. (This handler is an example of the kind of logic that you can use in a stored routine but that is not available in inline expressions.)

Using a Stored Procedure to “Return” Multiple Values

Problem

You want to perform an operation that produces two or more values, but a stored function can return only a single value.

Solution

Use a stored procedure that has OUT or INOUT parameters, and pass user-defined variables for those parameters when you invoke the procedure. A procedure does not “return” a value the way a function does, but it can assign values to those parameters, which will be the values of the variables when the procedure returns.

Discussion

Unlike stored function parameters, which are input values only, a stored procedure parameter can be any of three types:

§ An IN parameter is for input only. This is the default parameter type if you specify no type.

§ An INOUT parameter is used to pass a value in, and it can also be used to pass a value back out.

§ An OUT parameter is used to pass a value out.

This means that if you need to produce multiple values from an operation, you can use INOUT or OUT parameters. The following example illustrates this, using an IN parameter for input, and passing back three values via OUT parameters.

Creating Compound-Statement Objects showed an avg_mail_size() function that returns the average mail message size for a given sender. The function returns a single value. If you want additional information, such as the number of messages and total message size, a function will not work. You could write three separate functions, but it’s also possible to use a single procedure that retrieves multiple values about a given mail sender. The following procedure, mail_sender_stats(), runs a query on the mail table to retrieve mail-sending statistics about a given username, which is the input value. The procedure determines how many messages that user sent, and the total and average size of the messages in bytes, which it returns through three OUT parameters:

CREATE PROCEDURE mail_sender_stats(IN user VARCHAR(8),

OUT messages INT,

OUT total_size FLOAT,

OUT avg_size FLOAT)

BEGIN

# Use IFNULL() to return 0 for SUM() and AVG() in case there are

# no rows for the user (those functions return NULL in that case).

SELECT COUNT(*), IFNULL(SUM(size),0), IFNULL(AVG(size),0)

INTO messages, total_size, avg_size

FROM mail WHERE srcuser = user;

END;

To use the procedure, pass a string containing the username, and three user-defined variables to receive the OUT values. After the procedure returns, check the variable values:

mysql> CALL mail_sender_stats('barb',@messages,@total_size,@avg_size);

mysql> SELECT @messages, @total_size, @avg_size;

+-----------+-------------+-----------+

| @messages | @total_size | @avg_size |

+-----------+-------------+-----------+

|         3 |      156696 |     52232 |

+-----------+-------------+-----------+
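
The example uses only IN and OUT parameters. An INOUT parameter works similarly, except that it also carries a value into the procedure. As a minimal sketch (add_to_counter() is a hypothetical procedure, not part of the recipes distribution), the following procedure adds an amount to a counter supplied by the caller:

CREATE PROCEDURE add_to_counter(INOUT counter INT, IN amount INT)

SET counter = counter + amount;

To use it, pass a user-defined variable that holds the initial value; after the call, the variable holds the updated value:

mysql> SET @count = 5;

mysql> CALL add_to_counter(@count, 3);

mysql> SELECT @count;

The final SELECT displays 8.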

Using a Trigger to Define Dynamic Default Column Values

Problem

A column in a table needs to be initialized to a nonconstant value, but MySQL allows only constant default values.

Solution

Use a BEFORE INSERT trigger. This enables you to initialize a column to the value of an arbitrary expression. In other words, the trigger performs dynamic column initialization by calculating the default value.

Discussion

Other than TIMESTAMP columns, which can be initialized to the current date and time, the default value for a column in MySQL must be a constant value. You cannot define a column with a DEFAULT clause that refers to a function call (or other arbitrary expression), and you cannot define one column in terms of the value assigned to another column. That means each of these column definitions is illegal:

d DATE DEFAULT NOW()

i INT DEFAULT (... some subquery ...)

hashval CHAR(32) DEFAULT MD5(blob_col)

However, you can work around this limitation by setting up a suitable trigger, which enables you to initialize a column however you want. In effect, the trigger enables you to define dynamic (or calculated) default column values.

The appropriate type of trigger for this is BEFORE INSERT, because that enables you to set column values before they are inserted into the table. (An AFTER INSERT trigger can examine column values for a new row, but by the time the trigger activates, it’s too late to change the values.)

Suppose that you want to use a table for storing large data values such as PDF or XML documents, images, or sounds, but you also want to be able to look them up quickly later. A TEXT or BLOB data type might be suitable for storing the values, but is not very suitable for finding them. (Comparisons in a lookup operation will be slow for large values.) To work around this problem, use the following strategy:

1. Compute some kind of hash value for each data value and store it in the table along with the data.

2. To look up the row containing a particular data value, compute the hash value for the value and search the table for that hash value. For best performance, make sure that the hash column is indexed.

To implement this strategy, a BEFORE INSERT trigger is helpful because you can have the trigger compute the hash value to be stored in the table for new data values. (If you might change the data value later, you should also set up a BEFORE UPDATE trigger to recompute the hash value.)

The following example assumes that you want to store documents in a table, along with the document author and title. In addition, the table contains a column for storing the hash value computed from the document contents. To generate hash values, the example uses the MD5() function, which returns a 32-byte string of hexadecimal characters. That’s actually still somewhat long to use for a comparison value. Nevertheless, it’s a lot shorter than full-column comparisons based on contents of very long documents.

First, create a table to hold the document information and the hash values calculated from the document contents. The following table uses a MEDIUMBLOB column to allow storage of documents up to 16 MB in size:

CREATE TABLE doc_table

(

author VARCHAR(100) NOT NULL,

title VARCHAR(100) NOT NULL,

document MEDIUMBLOB NOT NULL,

doc_hash CHAR(32) NOT NULL,

PRIMARY KEY (doc_hash)

);

Next, to handle inserts, create a BEFORE INSERT trigger that uses the document to be inserted to calculate the hash value and causes that value to be stored in the table:

CREATE TRIGGER bi_doc_table BEFORE INSERT ON doc_table

FOR EACH ROW SET NEW.doc_hash = MD5(NEW.document);

This trigger is simple and its body contains only a single SQL statement. For a trigger body that needs to execute multiple statements, use BEGIN ... END compound-statement syntax. In that case, if you use mysql to create the trigger, you’ll need to change the statement delimiter while you’re defining the trigger, as discussed in Creating Compound-Statement Objects.

Within the trigger, NEW.col_name refers to the new value to be inserted into the given column. By assigning a value to NEW.col_name within the trigger, you cause the column to have that value in the new row.

Finally, insert a row and check whether the trigger correctly initializes the hash value for the document:

mysql> INSERT INTO doc_table (author,title,document)

-> VALUES('Mr. Famous Writer','My Life as a Writer',

-> 'This is the document');

mysql> SELECT * FROM doc_table\G

*************************** 1. row ***************************

author: Mr. Famous Writer

title: My Life as a Writer

document: This is the document

doc_hash: 5282317909724f9f1e65318be129539c

mysql> SELECT MD5('This is the document');

+----------------------------------+

| MD5('This is the document')      |

+----------------------------------+

| 5282317909724f9f1e65318be129539c |

+----------------------------------+

The first SELECT shows that the doc_hash column was initialized even though the INSERT provided no value for it. The second SELECT shows that the hash value stored in the row by the trigger is correct.
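
With the hash stored, the lookup strategy described earlier amounts to an indexed comparison on doc_hash rather than a comparison against the full document contents. A sketch of such a lookup:

SELECT author, title FROM doc_table

WHERE doc_hash = MD5('This is the document');

Because doc_hash is the table’s primary key, the search uses the index instead of scanning MEDIUMBLOB values.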

The example thus far demonstrates how a trigger enables you to initialize a column in a way that goes beyond what is possible with the DEFAULT clause in the column’s definition. The same idea applies to updates: when a column’s value is calculated from another column (as doc_hash is calculated from document), you should also recalculate it whenever the column on which it depends changes. For example, if you update a value in the document column, you should also update the corresponding doc_hash value. This too can be handled by a trigger. Create a BEFORE UPDATE trigger that does the same thing as the INSERT trigger:

CREATE TRIGGER bu_doc_table BEFORE UPDATE ON doc_table

FOR EACH ROW SET NEW.doc_hash = MD5(NEW.document);

Test the UPDATE trigger by updating the document value and checking whether the hash value is updated properly:

mysql> UPDATE doc_table SET document = 'A new document'

-> WHERE document = 'This is the document';

mysql> SELECT * FROM doc_table\G

*************************** 1. row ***************************

author: Mr. Famous Writer

title: My Life as a Writer

document: A new document

doc_hash: 21c03f63d2f01b598665d4d960f3a4f2

mysql> SELECT MD5('A new document');

+----------------------------------+

| MD5('A new document')            |

+----------------------------------+

| 21c03f63d2f01b598665d4d960f3a4f2 |

+----------------------------------+

Simulating TIMESTAMP Properties for Other Date and Time Types

Problem

The TIMESTAMP data type provides auto-initialization and auto-update properties. You would like to use these properties for other temporal data types, but the other types allow only constant values for initialization, and they don’t auto-update.

Solution

Use an INSERT trigger to provide the appropriate current date or time value at record creation time. Use an UPDATE trigger to update the column to the current date or time when the row is changed.

Discussion

Using TIMESTAMP to Track Row Modification Times describes the special initialization and update properties of the TIMESTAMP data type that enable you to record row creation and modification times automatically. These properties are not available for other temporal types, although there are reasons you might like them to be. For example, if you use separate DATE and TIME columns to store record-modification times, you can index the DATE column to enable efficient date-based lookups. (With TIMESTAMP, you cannot index just the date part of the column.)

One way to simulate TIMESTAMP properties for other temporal data types is to use the following strategy:

§ When you create a row, initialize a DATE column to the current date and a TIME column to the current time.

§ When you update a row, set the DATE and TIME columns to the new date and time.

However, this strategy requires all applications that use the table to implement the same strategy, and it fails if even one application neglects to do so. To place the burden of remembering to set the columns properly on the MySQL server and not on application writers, use triggers for the table. This is, in fact, a particular application of the general strategy discussed in Using a Trigger to Define Dynamic Default Column Values that uses triggers to provide calculated values for initializing (or updating) row columns.

The following example shows how to use triggers to simulate TIMESTAMP properties for each of the DATE, TIME, and DATETIME data types. Begin by creating the following table, which has a nontemporal column for storing data and columns for the DATE, TIME, and DATETIME temporal types:

CREATE TABLE ts_emulate

(

data CHAR(10),

d DATE,

t TIME,

dt DATETIME

);

The intent here is that applications will insert or update values in the data column, and MySQL should set the temporal columns appropriately to reflect the time at which modifications occur. To accomplish this, set up triggers that use the current date and time to initialize the temporal columns for new rows, and to update them when existing rows are changed. A BEFORE INSERT trigger handles new row creation by invoking the CURDATE(), CURTIME(), and NOW() functions to get the current date, time, and date-and-time values and using those values to set the temporal columns:

CREATE TRIGGER bi_ts_emulate BEFORE INSERT ON ts_emulate

FOR EACH ROW SET NEW.d = CURDATE(), NEW.t = CURTIME(), NEW.dt = NOW();

A BEFORE UPDATE trigger handles updates to the temporal columns when the data column changes value. An IF statement is required here to emulate the TIMESTAMP property that an update occurs only if the values in the row actually change from their current values:

CREATE TRIGGER bu_ts_emulate BEFORE UPDATE ON ts_emulate

FOR EACH ROW

BEGIN

# update temporal columns only if the nontemporal column changes

IF NEW.data <> OLD.data THEN

SET NEW.d = CURDATE(), NEW.t = CURTIME(), NEW.dt = NOW();

END IF;

END;
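
One caveat about the IF comparison: the data column permits NULL values, and NEW.data <> OLD.data is never true when either operand is NULL, so a change to or from NULL would leave the temporal columns untouched. If that matters for your table, a variant using MySQL’s NULL-safe equality operator <=> handles it. A sketch (drop the original trigger first if you want to replace it):

CREATE TRIGGER bu_ts_emulate BEFORE UPDATE ON ts_emulate

FOR EACH ROW

BEGIN

# <=> is true when both operands are equal or both are NULL,

# so negating it also detects changes involving NULL

IF NOT (NEW.data <=> OLD.data) THEN

SET NEW.d = CURDATE(), NEW.t = CURTIME(), NEW.dt = NOW();

END IF;

END;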

To test the INSERT trigger, create a couple of rows, but supply a value only for the data column. Then verify that MySQL provides the proper default values for the temporal columns:

mysql> INSERT INTO ts_emulate (data) VALUES('cat');

mysql> INSERT INTO ts_emulate (data) VALUES('dog');

mysql> SELECT * FROM ts_emulate;

+------+------------+----------+---------------------+

| data | d          | t        | dt                  |

+------+------------+----------+---------------------+

| cat  | 2006-06-23 | 13:29:44 | 2006-06-23 13:29:44 |

| dog  | 2006-06-23 | 13:29:49 | 2006-06-23 13:29:49 |

+------+------------+----------+---------------------+

Change the data value of one row to verify that the BEFORE UPDATE trigger updates the temporal columns of the changed row:

mysql> UPDATE ts_emulate SET data = 'axolotl' WHERE data = 'cat';

mysql> SELECT * FROM ts_emulate;

+---------+------------+----------+---------------------+

| data    | d          | t        | dt                  |

+---------+------------+----------+---------------------+

| axolotl | 2006-06-23 | 13:30:12 | 2006-06-23 13:30:12 |

| dog     | 2006-06-23 | 13:29:49 | 2006-06-23 13:29:49 |

+---------+------------+----------+---------------------+

Issue another UPDATE, but this time use one that does not change any data column values. In this case, the BEFORE UPDATE trigger should notice that no value change occurred and leave the temporal columns unchanged:

mysql> UPDATE ts_emulate SET data = data;

mysql> SELECT * FROM ts_emulate;

+---------+------------+----------+---------------------+

| data    | d          | t        | dt                  |

+---------+------------+----------+---------------------+

| axolotl | 2006-06-23 | 13:30:12 | 2006-06-23 13:30:12 |

| dog     | 2006-06-23 | 13:29:49 | 2006-06-23 13:29:49 |

+---------+------------+----------+---------------------+

The preceding example shows how to simulate the auto-initialization and auto-update properties offered by TIMESTAMP columns. If you want only one of those properties and not the other, create only one trigger and omit the other.
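
As noted at the beginning of this recipe, one reason to keep the date in its own column is that you can index it for efficient date-based lookups, which is not possible for just the date part of a TIMESTAMP. A sketch of how you might do that for ts_emulate:

ALTER TABLE ts_emulate ADD INDEX (d);

SELECT * FROM ts_emulate WHERE d = '2006-06-23';  # can use the index on d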

Using a Trigger to Log Changes to a Table

Problem

You have a table that maintains current values of items that you track (such as auctions being bid on), but you’d also like to maintain a journal (or history) of changes to the table.

Solution

Use triggers to “catch” table changes and write them to a separate log table.

Discussion

Suppose that you conduct online auctions, and that you maintain information about each currently active auction in a table that looks like this:

CREATE TABLE auction

(

id INT UNSIGNED NOT NULL AUTO_INCREMENT,

ts TIMESTAMP,

item VARCHAR(30) NOT NULL,

bid DECIMAL(10,2) NOT NULL,

PRIMARY KEY (id)

);

The auction table contains information about the currently active auctions (items being bid on and the current bid for each auction). When an auction begins, you enter a row into the table. Its bid column gets updated for each new bid on the item. When the auction ends, the bid value is the final price and the row is removed from the table. As the auction proceeds, the ts column is updated to reflect the time of the most recent bid.

If you also want to maintain a journal that shows all changes to auctions as they progress from creation to removal, you can modify the auction table to allow multiple records per item and add a status column to show what kind of action each row represents. Or you could leave the auction table unchanged and set up another table that serves to record a history of changes to the auctions. This second strategy can be implemented with triggers.

To maintain a history of how each auction progresses, use an auction_log table with the following columns:

CREATE TABLE auction_log

(

action ENUM('create','update','delete'),

id INT UNSIGNED NOT NULL,

ts TIMESTAMP,

item VARCHAR(30) NOT NULL,

bid DECIMAL(10,2) NOT NULL,

INDEX (id)

);

The auction_log table differs from the auction table in two ways:

§ It contains an action column to indicate for each row what kind of change was made.

§ The id column has a nonunique index (rather than a primary key, which requires unique values). This allows multiple rows per id value because a given auction can generate many rows in the log table.

To ensure that changes to the auction table are logged to the auction_log table, create a set of triggers. The triggers should write information to the auction_log table as follows:

§ For inserts, log a row-creation operation showing the values in the new row.

§ For updates, log a row-update operation showing the new values in the updated row.

§ For deletes, log a row-removal operation showing the values in the deleted row.

For this application, AFTER triggers are used, because they will activate only after successful changes to the auction table. (BEFORE triggers might activate even if the row-change operation fails for some reason.) The trigger definitions look like this:

CREATE TRIGGER ai_auction AFTER INSERT ON auction

FOR EACH ROW

BEGIN

INSERT INTO auction_log (action,id,ts,item,bid)

VALUES('create',NEW.id,NOW(),NEW.item,NEW.bid);

END;

CREATE TRIGGER au_auction AFTER UPDATE ON auction

FOR EACH ROW

BEGIN

INSERT INTO auction_log (action,id,ts,item,bid)

VALUES('update',NEW.id,NOW(),NEW.item,NEW.bid);

END;

CREATE TRIGGER ad_auction AFTER DELETE ON auction

FOR EACH ROW

BEGIN

INSERT INTO auction_log (action,id,ts,item,bid)

VALUES('delete',OLD.id,OLD.ts,OLD.item,OLD.bid);

END;

The INSERT and UPDATE triggers use NEW.col_name to access the new values being stored in rows. The DELETE trigger uses OLD.col_name to access the existing values from the deleted row. The INSERT and UPDATE triggers use NOW() to get the row-modification times; the ts column is initialized automatically to the current date and time, but NEW.ts will not contain that value.

Suppose that an auction is created with an initial bid of five dollars:

mysql> INSERT INTO auction (item,bid) VALUES('chintz pillows',5.00);

mysql> SELECT LAST_INSERT_ID();

+------------------+

| LAST_INSERT_ID() |

+------------------+

|              792 |

+------------------+

The SELECT statement fetches the auction ID value to use for subsequent actions on the auction. Then the item receives three more bids before the auction ends and is removed:

mysql> UPDATE auction SET bid = 7.50 WHERE id = 792;

... time passes ...

mysql> UPDATE auction SET bid = 9.00 WHERE id = 792;

... time passes ...

mysql> UPDATE auction SET bid = 10.00 WHERE id = 792;

... time passes ...

mysql> DELETE FROM auction WHERE id = 792;

At this point, no trace of the auction remains in the auction table, but if you query the auction_log table, you can obtain a complete history of what occurred:

mysql> SELECT * FROM auction_log WHERE id = 792 ORDER BY ts;

+--------+-----+---------------------+----------------+-------+

| action | id  | ts                  | item           | bid   |

+--------+-----+---------------------+----------------+-------+

| create | 792 | 2006-06-22 21:24:14 | chintz pillows |  5.00 |

| update | 792 | 2006-06-22 21:27:37 | chintz pillows |  7.50 |

| update | 792 | 2006-06-22 21:39:46 | chintz pillows |  9.00 |

| update | 792 | 2006-06-22 21:55:11 | chintz pillows | 10.00 |

| delete | 792 | 2006-06-22 22:01:54 | chintz pillows | 10.00 |

+--------+-----+---------------------+----------------+-------+

With the strategy just outlined, the auction table remains relatively small, but we can always find information about auction histories as necessary by looking in the auction_log table.
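
The log also makes simple history queries easy. For example, to see the final selling price of each completed auction (the row written by the DELETE trigger), you might use a query like this:

SELECT id, item, bid AS final_price

FROM auction_log

WHERE action = 'delete';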

Using Events to Schedule Database Actions

Problem

You want to set up a database operation that runs periodically without user intervention.

Solution

Create an event that executes according to a schedule.

Discussion

As of MySQL 5.1, one of the capabilities available to you is an event scheduler that enables you to set up database operations that run at times that you define. This section describes what you must do to use events, beginning with a simple event that writes a row to a table at regular intervals. Why bother creating such an event? One reason is that the rows serve as a log of continuous server operation, similar to the MARK line that some Unix syslogd servers write to the system log periodically so that you know they’re alive.

Begin with a table to hold the mark records. It contains a TIMESTAMP column (which MySQL will initialize automatically) and a column to store a message:

mysql> CREATE TABLE mark_log (ts TIMESTAMP, message VARCHAR(100));

Our logging event will write a string to a new row. To set it up, use a CREATE EVENT statement:

mysql> CREATE EVENT mark_insert

-> ON SCHEDULE EVERY 5 MINUTE

-> DO INSERT INTO mark_log (message) VALUES('-- MARK --');

The mark_insert event causes the message '-- MARK --' to be logged to the mark_log table every five minutes. Use a different interval for more or less frequent logging.
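
The ON SCHEDULE clause accepts other forms as well, such as different interval units or a one-time execution with AT. A couple of sketches (mark_hourly and mark_once are hypothetical event names):

CREATE EVENT mark_hourly

ON SCHEDULE EVERY 1 HOUR

DO INSERT INTO mark_log (message) VALUES('-- hourly mark --');

CREATE EVENT mark_once

ON SCHEDULE AT NOW() + INTERVAL 1 DAY

DO INSERT INTO mark_log (message) VALUES('-- one-time mark --');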

This event is simple and its body contains only a single SQL statement. For an event body that needs to execute multiple statements, use BEGIN ... END compound-statement syntax. In that case, if you use mysql to create the event, you need to change the statement delimiter while you’re defining the event, as discussed in Creating Compound-Statement Objects.

At this point, you should wait a few minutes and then select the contents of the mark_log table to verify that new rows are being written on schedule. However, if this is the first event that you’ve set up, you might find that the table remains empty no matter how long you wait:

mysql> SELECT * FROM mark_log;

Empty set (0.00 sec)

If that’s the case, very likely the event scheduler isn’t running (which is its default state until you enable it). Check the scheduler status by examining the value of the event_scheduler system variable:

mysql> SHOW VARIABLES LIKE 'event_scheduler';

+-----------------+-------+

| Variable_name   | Value |

+-----------------+-------+

| event_scheduler | 0     |

+-----------------+-------+

To enable the scheduler interactively if it is not running, execute the following statement (which requires the SUPER privilege):

mysql> SET GLOBAL event_scheduler = 1;

That statement enables the scheduler, but only until the server shuts down. To make sure that the scheduler runs each time the server starts, set the system variable to 1 in your my.cnf option file:

[mysqld]

event_scheduler=1

When the event scheduler is enabled, the mark_insert event eventually will create many rows in the table. There are several ways that you can affect event execution to prevent the table from growing forever:

§ Drop the event:

mysql> DROP EVENT mark_insert;

This is the simplest way to stop an event from occurring. But if you want it to resume later, you must re-create it.

§ Suspend execution for the event:

mysql> ALTER EVENT mark_insert DISABLE;

Disabling an event leaves it in place but causes it not to run until you reactivate it:

mysql> ALTER EVENT mark_insert ENABLE;

§ Let the event continue to run, but set up another event that “expires” old mark_log rows. This second event need not run so frequently (perhaps once a day). Its body should contain a DELETE statement that removes rows older than a given threshold. The following definition creates an event that deletes rows that are more than two days old:

mysql> CREATE EVENT mark_expire

-> ON SCHEDULE EVERY 1 DAY

-> DO DELETE FROM mark_log WHERE ts < NOW() - INTERVAL 2 DAY;

If you adopt this strategy, you have cooperating events, such that one event adds rows to the mark_log table and the other removes them. They act together to maintain a log that contains recent records but does not become too large.

To check on event activity, look in the server’s error log, where it records information about which events it executes and when.