Expert Oracle Database Architecture, Third Edition (2014)

Chapter 3. Files

In this chapter, we will examine the eight major file types that make up a database and instance. The files associated with an instance are simply

· Parameter files: These files tell the Oracle instance where to find the control files, and they also specify certain initialization parameters that define how big certain memory structures are, and so on. We will investigate the two options available for storing database parameter files.

· Trace files: These are diagnostic files created by a server process, generally in response to some exceptional error condition.

· Alert files: These are similar to trace files, but they contain information about “expected” events, and they also alert the DBA in a single, centralized file of many database events.

The files that make up the database are

· Data files: These are for the database; they hold your tables, indexes, and all other data segment types.

· Temp files: These are used for disk-based sorts and temporary storage.

· Control files: These tell you where the data files, temp files, and redo log files are, as well as other relevant metadata about their state. They also contain backup information maintained by RMAN (Recovery Manager, the backup and recovery tool).

· Redo log files: These are your transaction logs.

· Password files: These are used to authenticate users performing administrative activities over the network. We will not discuss these files in any great detail as they are not a necessary component of any Oracle database.

In Oracle 10g and above, there are a couple of optional file types that are used by Oracle to facilitate faster backup and faster recovery operations. These two files are

· Change-tracking file: This file facilitates a true incremental backup of Oracle data. It does not have to be located in the Fast Recovery Area, but as it relates purely to database backup and recovery, we’ll discuss it in the context of that area.

· Flashback log files: These files store “before images” of database blocks in order to facilitate the FLASHBACK DATABASE command.

We’ll also take a look at other types of files commonly associated with the database, such as

· Dump (DMP) files: These files are generated by the Export database utility and consumed by the Import database utility. It should be noted that the Export utility is deprecated in current releases of Oracle—only Import is fully supported. Import is supported to facilitate the moving of data from older releases of Oracle (where Export was fully supported) into newer releases of the database.

· Data Pump files: These files are generated by the Oracle Data Pump Export process and consumed by the Data Pump Import process. This file format may also be created and consumed by external tables.

· Flat files: These are plain old files you can view in a text editor. You normally use these for loading data into the database.

The most important files in these lists are the data files and the redo log files, because they contain the data you worked so hard to accumulate. I can lose any and all of the remaining files and still get to my data. If I lose my redo log files, I may start to lose some data. If I lose my data files and all of their backups, I’ve definitely lost that data forever.

We will now take a look at the types of files, where they are usually located, how they are named, and what we might expect to find in them.

Parameter Files

There are many different parameter files associated with an Oracle database, from a tnsnames.ora file on a client workstation (used to “find” a server on the network), to a listener.ora file on the server (for the network listener startup), to the sqlnet.ora, cman.ora, and ldap.ora files, to name a few. The most important parameter file, however, is the database’s parameter file—without this, we can’t even get an instance started, as demonstrated in Chapter 2. The remaining files are important; they are all related to networking and getting connected to the database. However, they are beyond the scope of our discussion. For information on their configuration and setup, I refer you to the Net Services Administrator’s Guide. Since you’re a developer, typically these files would be set up for you, not by you.

The parameter file for a database is commonly known as an init file, or an init.ora file. This is due to its historic default name, which is init<ORACLE_SID>.ora. I call it “historic” because starting with Oracle9i Release 1, a vastly improved method of storing parameter settings for the database was introduced: the server parameter file, or simply SPFILE. This file has the default name of spfile<ORACLE_SID>.ora. We’ll take a look at both kinds of parameter files.

Image Note For those who are unfamiliar with the term SID or ORACLE_SID, a full definition is called for. The SID is a site identifier. It and ORACLE_HOME (where the Oracle software is installed) are hashed together in UNIX/Linux to create a unique key name for creating or attaching a Shared Global Area (SGA) memory region. If your ORACLE_SID or ORACLE_HOME is not set correctly and you are using a local (not network based) connection (see Chapter 2 for details on local/remote connections), you’ll get the ORACLE NOT AVAILABLE error, since you can’t attach to a shared memory segment that is identified by this unique key. On Windows, shared memory isn’t used in the same fashion as on UNIX/Linux, but the SID is still important. You can have more than one database under the same ORACLE_HOME, so you need a way to uniquely identify the instance associated with each one, along with their configuration files.

Without a parameter file, you can’t start an Oracle database. This makes the parameter file fairly important, and as of Oracle9i Release 2 (versions 9.2 and above), the backup and recovery tool Recovery Manager (RMAN) recognizes this file’s importance and will allow you to include the server parameter file (but not the legacy init.ora parameter file type) in your backup set. However, since the init.ora file is simply a plain text file that you can create with any text editor, it is not a file you have to necessarily guard with your life. You can re-create it, as long as you know what was in it (e.g., you can retrieve that information from the database’s alert log, if you have access to that, and reconstruct your entire init.ora parameter file).
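By way of illustration, a minimal RMAN sketch for protecting and later retrieving the SPFILE might look like the following; the destination path is hypothetical, and the RESTORE step assumes a backup (or control file autobackup) of the SPFILE already exists:

RMAN> backup spfile;
RMAN> restore spfile to '/tmp/spfile_restored.ora';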

We will now examine both types of database startup parameter files (init.ora and SPFILE) in turn, but before we do that, let’s see what a database parameter file looks like.

What Are Parameters?

In simple terms, a database parameter may be thought of as a key/value pair. You saw an important parameter, db_name, in the preceding chapter. The db_name parameter was stored as db_name = ora12c. The key here is db_name and the value is ora12c. This is our key/value pair. To see the current value of an instance parameter, you can query the V$ view V$PARAMETER. Alternatively, in SQL*Plus you can use the SHOW PARAMETER command, for example:

EODA@ORA12CR1> select value
2 from v$parameter
3 where name = 'db_block_size'
4 /

VALUE
-------
8192

EODA@ORA12CR1> show parameter db_block_s

NAME TYPE VALUE
------------------------------------ ----------- -------
db_block_size integer 8192

Both outputs show basically the same information, although you can get more information from V$PARAMETER (there are many more columns to choose from than displayed in this example). But SHOW PARAMETER wins for me in ease of use and the fact that it “wildcards” automatically. Notice that I typed in only db_block_s; SHOW PARAMETER adds % to the front and back.

Image Note All V$ views and all dictionary views are fully documented in the Oracle Database Reference manual. Please regard that manual as the definitive source of what is available in a given view.

If you were to execute the preceding example as a less-privileged user (EODA has been granted the DBA role for purposes of this book), you would see instead:

EODA@ORA12CR1> connect scott/tiger
Connected.

SCOTT@ORA12CR1> select value
2 from v$parameter
3 where name = 'db_block_size'
4 /
from v$parameter
*
ERROR at line 2:
ORA-00942: table or view does not exist

SCOTT@ORA12CR1> show parameter db_block_s
ORA-00942: table or view does not exist

“Normal” accounts are not granted access to the V$ performance views by default. Don’t let that get you down, however. There is a documented API, DBMS_UTILITY.GET_PARAMETER_VALUE, typically available to all users, that permits you to retrieve parameter values without querying V$PARAMETER; the following little helper function wraps it so you can see what is set as a parameter. For example:

SCOTT@ORA12CR1> create or replace
2 function get_param( p_name in varchar2 )
3 return varchar2
4 as
5 l_param_type number;
6 l_intval binary_integer;
7 l_strval varchar2(256);
8 invalid_parameter exception;
9 pragma exception_init( invalid_parameter, -20000 );
10 begin
11 begin
12 l_param_type :=
13 dbms_utility.get_parameter_value
14 ( parnam => p_name,
15 intval => l_intval,
16 strval => l_strval );
17 exception
18 when invalid_parameter
19 then
20 return '*access denied*';
21 end;
22 if ( l_param_type = 0 )
23 then
24 l_strval := to_char(l_intval);
25 end if;
26 return l_strval;
27 end get_param;
28 /

Function created.

Image Note If you’ve applied the latest security patch for 11g R2 or 12c, then you may need to grant select on v_$parameter or select_catalog_role to the user executing this function.

If you execute this function in SQL*Plus, you’ll see:

SCOTT@ORA12CR1> exec dbms_output.put_line( get_param( 'db_block_size' ) );
8192

PL/SQL procedure successfully completed.

Not every parameter is available via the dbms_utility.get_parameter_value API call. Specifically, the memory-related parameters such as sga_max_size, db_cache_size, pga_aggregate_target and the like are not visible. We deal with that in the code on lines 17 through 21—we return '*access denied*' when we hit a parameter that we are not allowed to see. If you are curious about the entire list of restricted parameters, you can (as can any account that has been granted EXECUTE on this function) issue the following query:

EODA@ORA12CR1> select name, scott.get_param( name ) val
2 from v$parameter
3 where scott.get_param( name ) = '*access denied*';

NAME VAL
------------------------------ --------------------
sga_max_size *access denied*
shared_pool_size *access denied*
large_pool_size *access denied*
java_pool_size *access denied*
streams_pool_size *access denied*
...
client_result_cache_lag *access denied*
olap_page_pool_size *access denied*

25 rows selected.

Image Note You’ll see different results for this query on different versions. You should expect the number and values of inaccessible parameters to go up and down over time as the number of parameters changes.

If you were to count the number of documented parameters you can set in each of the database versions—9i Release 2, 10g Release 2, 11g Release 1, 11g Release 2 and 12c Release 1—you’d probably find 258, 259, 294, 342 and 368 different parameters, respectively (I’m sure there could be additional parameters available on an operating system–specific basis). In other words, the number of parameters (and their names) varies by release. Most parameters, like db_block_size, are very long-lived (they won’t go away from release to release), but over time many other parameters become obsolete as implementations change.

For example, in Oracle 9.0.1 and before—back to version 6 of Oracle—there was a distributed_transactions parameter that could be set to some positive integer and that controlled the number of concurrent distributed transactions the database could perform. It was available in prior releases, but it is not found in any release subsequent to 9.0.1. In fact, attempting to use that parameter with subsequent releases raises an error. For example:

EODA@ORA12CR1> alter system set distributed_transactions = 10;
alter system set distributed_transactions = 10
*
ERROR at line 1:
ORA-25138: DISTRIBUTED_TRANSACTIONS initialization parameter has been made
obsolete

If you would like to review the parameters and get a feeling for what is available and what each parameter does, refer to the Oracle Database Reference manual. The first chapter of this manual examines every documented parameter in detail. On the whole, the default value assigned to each parameter (or the derived value for parameters that obtain their default settings from other parameters) is sufficient for most systems. In general, the values of parameters such as the control_files parameter (which specifies the location of the control files on your system), db_block_size, various memory-related parameters, and so on, need to be set uniquely for each database.
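Related to defaults, if you would like to see which parameters have been explicitly set away from their default values in your own instance, V$PARAMETER records that in its ISDEFAULT column. A minimal sketch follows (the output is omitted here, as it is entirely installation specific):

EODA@ORA12CR1> select name, value
2 from v$parameter
3 where isdefault = 'FALSE'
4 order by name;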

Notice I used the term “documented” above. There are undocumented parameters as well. You can identify these because their names begin with an underscore (_). There is a great deal of speculation about these parameters. Since they are undocumented, some people believe they must be “magical,” and many people assume that they are well known and used by Oracle insiders. In fact, I find the opposite to be true. They are not well known and they are hardly ever used. Most of these undocumented parameters are rather boring, actually, as they represent deprecated functionality and backward-compatibility flags. Others help in the recovery of data, not of the database itself; for example, some of them enable the database to start up in certain extreme circumstances, but only long enough to get data out. You have to rebuild after that.

Unless you are so directed by Oracle Support, there is no reason to have an undocumented parameter in your configuration. Many have side effects that could be devastating. In my production database, I don’t want to use any undocumented settings.

Image Caution Use undocumented parameters only at the request of Oracle Support. Their use can be damaging to a database, and their implementation can—and will—change from release to release.

You may set the various parameter values in one of two ways: either just for the current instance or persistently. It is up to you to make sure that the parameter files contain the values you want them to. When using legacy init.ora parameter files, this is a manual process. To change a parameter value persistently, to have that new setting be in place across server restarts, you must manually edit and modify the init.ora parameter file. With server parameter files, you’ll see that this has been more or less fully automated for you in a single command.

Legacy init.ora Parameter Files

The legacy init.ora file is a very simple file in terms of its construction. It is a series of variable key/value pairs. A sample init.ora file might look like this:

control_files='/u01/dbfile/ORA12CR1/control01.ctl','/u02/dbfile/ORA12CR1/control02.ctl'
db_block_size=8192
db_name='ORA12CR1'

In fact, this is pretty close to the most basic init.ora file you could get away with in real life, though if the block size I was using was the default on my platform (the default block size varies by platform), I could remove that parameter. The parameter file is used at the very least to get the name of the database and the location of the control files. The control files tell Oracle the location of every other file, so they are very important to the “bootstrap” process that starts the instance.

Now that you know what these legacy database parameter files are and where to get more details about the valid parameters you can set, you also need to know where to find them on disk. The naming convention for this file by default is

init$ORACLE_SID.ora (UNIX/Linux environment variable)
init%ORACLE_SID%.ora (Windows environment variable)

and by default it will be found in

$ORACLE_HOME/dbs (UNIX/Linux)
%ORACLE_HOME%\DATABASE (Windows)

It is interesting to note that, in many cases, you’ll find the entire contents of this parameter file to be something like

IFILE='/some/path/to/somewhere/init.ora'

The IFILE directive works in a similar fashion to an #include file in C. It includes in the current file the contents of the named file. Here, this directive includes an init.ora file from a nondefault location.

It should be noted that the parameter file does not have to be in any particular location. When starting an instance, you can use the pfile=filename option to the startup command. This is most useful when you’d like to try out different init.ora parameters on your database to see the effects of different settings.
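For example, a quick sketch of starting the instance with a trial parameter file in a nondefault location might look like this (the path and file name here are hypothetical, and you would need to be connected AS SYSDBA or AS SYSOPER):

SYS@ORA12CR1> startup pfile='/tmp/test_init_ora12cr1.ora'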

Legacy parameter files can be maintained by using any text editor. For example, on UNIX/Linux, I’d use vi; on the many Windows operating system versions, I’d use Notepad; and on a mainframe, I would perhaps use Xedit. It is important to note that you are fully responsible for editing and maintaining this file. There are no commands within the Oracle database itself that you can use to maintain the values in the init.ora file. For example, when you use the init.ora parameter file, issuing an ALTER SYSTEM command to change the size of an SGA component would not be reflected as a permanent change in that file. If you want that change to be made permanent—in other words, if you’d like it to be the default for subsequent restarts of the database—it’s up to you to make sure all init.ora parameter files that might be used to start this database are manually updated.

The last interesting point of note is that the legacy parameter file is not necessarily located on the database server. One of the reasons the server parameter file (which we’ll discuss shortly) was introduced was to remedy this situation. The legacy parameter file must be present on the client machine attempting to start the database, meaning that if you run a UNIX/Linux server but administer it using SQL*Plus installed on your Windows desktop machine over the network, then you need the parameter file for the database on your desktop.

I still remember how I made the painful discovery that the parameter files are not stored on the server. This goes back many years to when a brand-new (now retired) tool called SQL*DBA was introduced. This tool allowed us to perform remote operations, specifically, remote administrative operations. From my server (running SunOS at the time), I was able to connect remotely to a mainframe database server. I was also able to issue the shutdown command. However, it was at that point I realized I was in a bit of a jam—when I tried to start up the instance, SQL*DBA would complain about not being able to find the parameter file. I learned that these parameter files—the init.ora plain text files—were located on the machine with the client; they had to exist on the client machine—not on the server. SQL*DBA was looking for a parameter file on my local system to start the mainframe database. Not only did I not have any such file, I had no idea what to put into one to get the system started up again! I didn’t know the db_name or control file locations (even just getting the correct naming convention for the mainframe files would have been a bit of a stretch), and I didn’t have access to log into the mainframe system itself. I’ve not made that same mistake since; it was a painful lesson to learn.

When DBAs realized that the init.ora parameter file had to reside on the client’s machine that starts the database, it led to a proliferation of these files. Every DBA wanted to run the administrative tools from his desktop, so every DBA needed a copy of the parameter file on his desktop machine. Tools such as Oracle Enterprise Manager (OEM) would add yet another parameter file to the mix. These tools would attempt to centralize the administration of all databases in an enterprise on a single machine, sometimes referred to as a management server. This single machine would run software that would be used by all DBAs to start up, shut down, back up, and otherwise administer a database. That sounds like a perfect solution: centralize all parameter files in one location and use the GUI tools to perform all operations. But the reality is that sometimes it’s much more convenient to issue the administrative startup command from within SQL*Plus on the database server machine itself during the course of some administrative task, so we ended up with multiple parameter files again: one on the management server and one on the database server. These parameter files would then get out of sync with each other and people would wonder why the parameter change they made last month might “disappear,” then reappear in a seemingly random manner.

Enter the server parameter file (SPFILE), which can now be a single source of truth for the database.

Server Parameter Files (SPFILEs)

SPFILEs represent a fundamental change in the way Oracle accesses and maintains parameter settings for the instance. An SPFILE eliminates the two serious issues associated with legacy parameter files:

· It stops the proliferation of parameter files. An SPFILE is always stored on the database server; the SPFILE must exist on the server machine itself and can’t be located on the client machine. This makes it practical to have a single source of “truth” with regard to parameter settings.

· It removes the need (in fact, it removes the ability) to manually maintain parameter files outside of the database using a text editor. The ALTER SYSTEM command lets you write values directly into the SPFILE. Administrators no longer have to find and maintain all of the parameter files by hand.

The naming convention for this file by default is

$ORACLE_HOME/dbs/spfile$ORACLE_SID.ora (UNIX/Linux environment variable)
%ORACLE_HOME%\database\spfile%ORACLE_SID%.ora (Windows environment variable)

I strongly recommend using the default location; doing otherwise defeats the simplicity SPFILEs represent. When an SPFILE is in its default location, everything is more or less done for you. Moving the SPFILE to a nondefault location means you have to tell Oracle where to find the SPFILE, leading to the original problems of legacy parameter files all over again!
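If you do end up with an SPFILE in a nondefault location, the usual workaround is a stub init<ORACLE_SID>.ora in the default directory whose entire contents simply point Oracle at the real file. A sketch, with a purely hypothetical path:

SPFILE='/u01/dbfile/ORA12CR1/spfileORA12CR1.ora'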

Converting to SPFILEs

Suppose you have a database that is using a legacy parameter file. The move to an SPFILE is quite simple—you use the CREATE SPFILE command.

Image Note You can also use a “reverse” command to create a parameter file (PFILE) from an SPFILE. I’ll explain shortly why you might want to do that.

So, assuming you have an init.ora parameter file and that init.ora parameter file is in the default location on the server, you simply issue the CREATE SPFILE command and restart your server instance:

EODA@ORA12CR1> show parameter spfile;

NAME TYPE VALUE
------------------------------------ ----------- -------
spfile string
EODA@ORA12CR1> create spfile from pfile;
create spfile from pfile
*
ERROR at line 1:
ORA-01031: insufficient privileges

Well, that SHOW PARAMETER command shows that we did not create an SPFILE: the value is blank. We are not privileged enough to create the SPFILE, even though I am logged in as a DBA. Creating an SPFILE is considered to be very privileged, and you can do it only if you are connected using credentials that allow you to start up and shut down the database. So let’s do that:

EODA@ORA12CR1> connect / as sysoper;
Connected.

PUBLIC@ORA12CR1> create spfile from pfile;
File created.

PUBLIC@ORA12CR1> startup force;
ORACLE instance started.
Database mounted.
Database opened.

I used the least privileged account I could to perform that operation—an account (mine) that uses the SYSOPER administrator privilege. SYSOPER is allowed to manage the parameter files and to start and stop the database, but not much else; that’s why the output of the startup command looks different—there is no SGA report and the memory settings are not visible. In fact:

PUBLIC@ORA12CR1> show parameter spfile;
ORA-00942: table or view does not exist

While the SYSOPER privilege can start and stop the database, it can’t access V$ views and so on. It is very limited in what it can do. We can verify that we are using the SPFILE by connecting as an account privileged enough to do so:

EODA@ORA12CR1> show parameter spfile;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /home/ora12cr1/app/ora12cr1/pr
oduct/12.1.0/dbhome_1/dbs/spfi
leora12cr1.ora

To recap, we used the SHOW PARAMETER command here to show that initially we were not using an SPFILE, but after we created one and restarted the instance, we were using one and it had the default name.

Image Note In a clustered environment, using Oracle RAC, all instances share the same SPFILE, so this process of converting over to an SPFILE from a PFILE should be done in a controlled fashion. The single SPFILE can contain all of the parameter settings, even instance-specific settings, but you’ll have to merge all of the necessary parameter files into a single PFILE using the format that follows.

In a clustered environment, in order to convert from individual PFILEs to an SPFILE shared by all, you’d merge your individual PFILEs into a single file resembling this:

*.cluster_database_instances=2
*.cluster_database=TRUE
*.cluster_interconnects='10.10.10.0'
*.compatible='12.1.0.0.0'
*.control_files='/u1/d1/O12C/control01.ctl','/u1/d2/O12C/control02.ctl'
*.db_name='O12C'
...
*.processes=150
*.undo_management='AUTO'
O12C1.instance_number=1
O12C2.instance_number=2
O12C1.local_listener='LISTENER_O12C1'
O12C2.local_listener='LISTENER_O12C2'
O12C1.remote_listener='LISTENER_O12C2'
O12C2.remote_listener='LISTENER_O12C1'
O12C1.thread=1
O12C2.thread=2
O12C1.undo_tablespace='UNDOTBS1'
O12C2.undo_tablespace='UNDOTBS2'

That is, parameter settings that are common to all instances in the cluster would start with *.. Parameter settings that are specific to a single instance, such as the INSTANCE_NUMBER and the THREAD of redo to be used, are prefixed with the instance name (the Oracle SID). In the preceding example,

· The PFILE would be for a two-node cluster with instances named O12C1 and O12C2.

· The *.db_name = 'O12C' assignment indicates that all instances using this SPFILE will be mounting a database named O12C.

· O12C1.undo_tablespace='UNDOTBS1' indicates that the instance named O12C1 will use that specific undo tablespace, and so on.

Setting Values in SPFILEs

Once our database is up and running on the SPFILE, the next question relates to how we set and change values contained therein. Remember, SPFILEs are binary files and we can’t just edit them using a text editor. The answer is to use the ALTER SYSTEM command, which has the following syntax (portions in <> are optional, and the presence of the pipe symbol indicates “one of the list”):

Alter system set parameter=value <comment='text'> <deferred>
<scope=memory|spfile|both> <sid='sid|*'>
<container=current|all>

The ALTER SYSTEM SET command, by default, will update the currently running instance and make the change to the SPFILE for you—or in the case of a pluggable database, in the data dictionary of that pluggable database (see the following section on pluggable databases for more information). This greatly eases administration, and it eliminates the problems that arose when you used ALTER SYSTEM to add or modify parameter settings, but you forgot to update or missed an init.ora parameter file.

Let’s take a look at each element of the command:

· The parameter=value assignment supplies the parameter name and the new value for the parameter. For example, pga_aggregate_target = 1024m would set the pga_aggregate_target parameter to a value of 1,024MB (1GB).

· comment='text' is an optional comment you can associate with this setting of the parameter. The comment will appear in the UPDATE_COMMENT field of the V$PARAMETER view. If you use the option to save the change to the SPFILE, the comment will be written into the SPFILE and preserved across server restarts as well, so future restarts of the database will see the comment.

· deferred specifies whether the system change takes place for subsequent sessions only (not currently established sessions, including the one making the change). By default, the ALTER SYSTEM command will take effect immediately, but some parameters can’t be changed immediately—they can be changed only for newly established sessions. We can use the following query to see what parameters mandate the use of deferred:

EODA@ORA12CR1> select name
2 from v$parameter
3 where issys_modifiable='DEFERRED'
4 /

NAME
------------------------------
backup_tape_io_slaves
recyclebin
audit_file_dest
object_cache_optimal_size
object_cache_max_size_percent
sort_area_size
sort_area_retained_size
olap_page_pool_size

8 rows selected.

Image Note Your results may differ; from version to version, the list of which parameters may be set online—but must be deferred—can and will change.

The code shows that SORT_AREA_SIZE is modifiable at the system level, but only in a deferred manner. The following code shows what happens if we try to modify its value with and without the deferred option:

EODA@ORA12CR1> alter system set sort_area_size = 65536;
alter system set sort_area_size = 65536
*
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this
option
EODA@ORA12CR1> alter system set sort_area_size = 65536 deferred;
System altered.

· SCOPE=MEMORY|SPFILE|BOTH indicates the “scope” of this parameter setting. Here are our choices for setting the parameter value:

· SCOPE=MEMORY changes the setting in the instance(s) only; it will not survive a database restart. The next time you start the database, the setting will be whatever was already recorded in the SPFILE.

· SCOPE=SPFILE changes the value in the SPFILE only. The change will not take place until the database is restarted and the SPFILE is processed again. Some parameters can be changed only by using this option. For example, the processes parameter must use SCOPE=SPFILE, as you can’t change the active instance value (an example follows this list).

· SCOPE=BOTH means the parameter change takes place both in memory and in the SPFILE. The change will be reflected in the current instance and, the next time you start, this change will still be in effect. This is the default value for scope when using an SPFILE. With an init.ora parameter file, the default and only valid value is SCOPE=MEMORY.

· sid='sid|*' is useful mostly in a clustered environment; sid='*' is the default. This lets you specify a parameter setting uniquely for any given instance in the cluster. Unless you are using Oracle RAC, you will not need to specify the sid= setting.

· container=current|all is used in a multitenant database to determine the scope of the change. If the ALTER SYSTEM is executed in a root container database, the setting may be propagated down to every pluggable database by using the all option. Otherwise, by default, only the current container or pluggable database is affected by the change. Note that pluggable database–specific settings are not recorded in the SPFILE but are stored in the data dictionary of the pluggable database, so that when it is moved to another container, its specific settings will move with it.
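As an example of the SCOPE=SPFILE case just mentioned, a static parameter such as processes can be changed only in the SPFILE, with the new value taking effect at the next restart. A minimal sketch (the value shown is arbitrary):

EODA@ORA12CR1> alter system set processes=400 scope=spfile;
System altered.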

A typical use of this command might be simply

EODA@ORA12CR1> alter system set pga_aggregate_target=512m;
System altered.

Image Note The preceding command—and in fact many of the ALTER SYSTEM commands in this book—may fail on your system. If you use other settings that are incompatible with my example (other memory parameters, for example), you may well receive an error. That doesn’t mean the command doesn’t work, but rather, the settings you attempted to use are not compatible with your overall setup.

Better yet, perhaps, would be using the COMMENT= assignment to document when and why a particular change was made:

EODA@ORA12CR1> alter system set pga_aggregate_target=512m
2 comment = 'Changed 14-aug-2013, AWR recommendation';

System altered.

EODA@ORA12CR1> select value, update_comment
2 from v$parameter
3 where name = 'pga_aggregate_target'
4 /

VALUE UPDATE_COMMENT
-------------------- ----------------------------------------
536870912 Changed 14-aug-2013, AWR recommendation

Unsetting Values in SPFILEs

The next question that arises is, how do we unset a value that we previously set? In other words, we don’t want that parameter setting in our SPFILE anymore. Since we can’t edit the file using a text editor, how do we accomplish that? This, too, is done via the ALTER SYSTEM command, but using the RESET clause:

Alter system reset parameter <scope=memory|spfile|both> sid='sid|*'

So, for example, if we wanted to remove the sort_area_size setting we made previously, allowing it to revert to its default value, we could do so as follows:

EODA@ORA12CR1> alter system reset sort_area_size scope=spfile ;
System altered.

Image Note In prior releases, specifically in Oracle 10g Release 2 and earlier, the SID= clause was not optional as it is now. In those releases, you’d include SID='*' on the end of the ALTER SYSTEM command to reset the parameter for all instances in the SPFILE. Or you’d specify SID='some_sid' to reset it for a single instance.

The sort_area_size is removed from the SPFILE, which you can verify by issuing the following:

EODA@ORA12CR1> connect / as sysoper;
Connected.

PUBLIC@ORA12CR1> create pfile='/tmp/pfile.tst' from spfile;
File created.

You can then review the contents of /tmp/pfile.tst, which will be generated on the database server. You’ll find the sort_area_size parameter does not exist in the parameter file anymore.
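A quick way to double-check from SQL*Plus itself, using a host command on a UNIX/Linux server (a sketch; no output from grep here simply confirms the parameter is gone):

PUBLIC@ORA12CR1> !grep -i sort_area_size /tmp/pfile.tst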

Creating PFILEs from SPFILEs

The CREATE PFILE...FROM SPFILE command we just saw is the opposite of CREATE SPFILE. It takes the binary SPFILE and creates a plain text file from it—one that can be edited in any text editor and subsequently used to start up the database. You might use this command for at least two things on a regular basis:

· To create a one-time parameter file with some special settings, to start up the database for maintenance. So, you’d issue CREATE PFILE...FROM SPFILE and edit the resulting text PFILE, modifying the required settings. You’d then start the database, using the PFILE=<FILENAME> option to specify your PFILE instead of the SPFILE. After you finished, you’d just start up normally without specifying the PFILE=<FILENAME>, and the database would use the SPFILE.

· To maintain a history of commented changes. In the past, many DBAs heavily commented their parameter files with a change history. If they changed the size of the buffer cache 20 times, for example, they would have 20 comments in front of the db_cache_size init.ora parameter setting, stating the date and reason for making the change. The SPFILE does not support this, but you can achieve the same effect if you get into the habit of doing the following:

PUBLIC@ORA12CR1> connect / as sysdba
Connected.

SYS@ORA12CR1> create pfile='init_14_aug_2013_ora12cr1.ora' from spfile;
File created.

SYS@ORA12CR1> alter system set pga_aggregate_target=512m
2 comment = 'Changed 14-aug-2013, AWR recommendation';
System altered.

In this way, your history will be saved in the series of parameter files over time.

Fixing Corrupted SPFILEs

The last question that comes up with regard to SPFILEs is, “SPFILEs are binary files, so what happens if one gets corrupted and the database won’t start? At least the init.ora file was just text, so we could edit it and fix it.” Well, SPFILEs shouldn’t go corrupt any more than should a data file, redo log file, control file, and so forth. However, in the event one does—or if you have set a value in your SPFILE that does not allow the database to start—you have a couple of options.

First, the amount of binary data in the SPFILE is very small. If you are on a UNIX/Linux platform, a simple strings command will extract all of your settings:

[ora12cr1@dellpe dbs]$ strings $ORACLE_HOME/dbs/spfile$ORACLE_SID.ora
*.audit_file_dest='/home/ora12cr1/app/ora12cr1/admin
/ora12cr1/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
...

On Windows, simply open the file with write.exe (WordPad). WordPad will display all of the clear text in the file, and by simply cutting and pasting into init<ORACLE_SID>.ora, you can create a PFILE to use to start your instance.

In the event that the SPFILE has just “gone missing” (for whatever reason—not that I’ve seen an SPFILE disappear), you can also resurrect the information for your parameter file from the database’s alert log (more on the alert log shortly). Every time you start the database, the alert log will contain a section like this:

Starting up:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options.
ORACLE_HOME = /home/ora12cr1/app/ora12cr1/product/12.1.0/dbhome_1
System name: Linux
Node name: dellpe
Release: 2.6.39-400.109.1.el6uek.x86_64
Version: #1 SMP Tue Jun 4 23:21:51 PDT 2013
Machine: x86_64
Using parameter settings in server-side spfile /home/ora12cr1/app/ora12cr1/product/12.1.0/dbhome_1/dbs/spfileora12cr1.ora
System parameters with non-default values:
processes = 300
resource_limit = TRUE
sga_target = 4800M
control_files = "/home/ora12cr1/oradata/ORA12CR1/controlfile/o1_mf_8wvv2pml_.ctl"
control_files = "/home/ora12cr1/app/ora12cr1/fast_recovery_area/ORA12CR1/controlfile/o1_mf_8wvv2ps2_.ctl"
db_block_size = 8192
compatible = "12.1.0.0.0"
db_create_file_dest = "/home/ora12cr1/oradata"
db_recovery_file_dest = "/home/ora12cr1/app/ora12cr1/fast_recovery_area"
db_recovery_file_dest_size = 4815M
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile = "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=ora12cr1XDB)"
local_listener = "(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.25)(PORT=1521))"
parallel_min_servers = 0
parallel_max_servers = 0
audit_file_dest = "/home/ora12cr1/app/ora12cr1/admin/ora12cr1/adump"
audit_trail = "DB"
db_name = "ora12cr1"
open_cursors = 300
_column_tracking_level = 1
pga_aggregate_target = 1600M
diagnostic_dest = "/home/ora12cr1/app/ora12cr1"
NOTE: remote asm mode is local (mode 0x1; from cluster type)
Starting background process PMON
Mon Sep 02 16:56:22 2013
PMON started with pid=2, OS id=21572

From this section, you can easily create a PFILE to be converted into a new SPFILE using the CREATE SPFILE command.
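For example, once you have pasted those settings into a plain text file, converting it back might look like this sketch (the file name is hypothetical, and you must be connected with the SYSDBA or SYSOPER privilege):

SYS@ORA12CR1> create spfile from pfile='/tmp/init_recovered_ora12cr1.ora';
File created.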

Pluggable Databases

Pluggable databases are designed to be a set of files you can move from one root container database to another. That is, we can unplug a pluggable database, and upon plugging it back into either the same root container database or some other root container database, we would have our original pluggable database back—with all of the application schemas, users, metadata, grants, data, and even our pluggable database parameter settings (settings that were not inherited from the root container). This is achieved by storing pluggable database–specific parameter settings in a data dictionary table: SYS.PDB_SPFILE$. It is in this fashion that pluggable databases can override a parameter setting for some parameters (not every parameter can be set at the pluggable database level) in an SPFILE and have those parameter settings travel with them as they move from root container database to root container database.
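If you are curious which parameters can be overridden at the pluggable database level, V$PARAMETER in 12c exposes this directly via its ISPDB_MODIFIABLE column. A minimal sketch (the output is lengthy and varies by version, so it is omitted here):

EODA@ORA12CR1> select name
2 from v$parameter
3 where ispdb_modifiable = 'TRUE'
4 order by name;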

Parameter File Wrap-up

In this section, we covered the basics of managing Oracle initialization parameters and parameter files. We looked at how to set parameters, view parameter values, and have those settings persist across database restarts. We explored the two types of database parameter files: legacy PFILEs (simple text files) and the newer SPFILEs. For all existing databases, using SPFILEs is recommended for the ease of administration and clarity they bring. The ability to have a single source of parameter “truth” for the database, along with the ability of the ALTER SYSTEM command to persist the parameter values, makes SPFILEs a compelling feature. I started using them the instant they became available and haven’t looked back.

Trace Files

Trace files are a source of debugging information. When the server encounters a problem, it generates a trace file full of diagnostic information. When a developer executes DBMS_MONITOR.SESSION_TRACE_ENABLE, the server generates a trace file full of performance-related information. Trace files are available to us because Oracle is a heavily instrumented piece of software. By “instrumented,” I mean that the programmers who wrote the database kernel put in debugging code—lots and lots of it. And they left it in, on purpose.

I’ve met many developers who consider debugging code to be overhead—something that must be ripped out before an application goes into production in a vain attempt to squeeze every ounce of performance out of the code. Later, of course, they discover that their code has a bug or it isn’t running as fast as it should (which end users tend to call a bug as well; to an end user, poor performance is a bug). At that point, they really wish that the debug code was still in the code (or had been in there if it never was), especially since you can’t drop debug code into the production system. You have to test any new code before putting it into a production environment, and that’s not something you do at the drop of a hat.

The Oracle database (and Application Server and Oracle applications and various tools such as Application Express (APEX)) is heavily instrumented. Signs of this instrumentation in the database are

· V$ views: Most V$ views contain “debug” information. V$WAITSTAT, V$SESSION_EVENT, and many others exist solely to let us know what is going on deep in the kernel.

· The AUDIT command: This command allows you to specify what events the database should record for later analysis.

· Resource Manager (DBMS_RESOURCE_MANAGER): This feature lets you micromanage resources (CPU, I/O, and the like) within the database. What makes a Resource Manager in the database possible is that it has access to all of the runtime statistics describing how the resources are being used.

· Oracle events: These enable you to ask Oracle to produce trace or diagnostic information as needed.

· DBMS_TRACE: This facility within the PL/SQL engine exhaustively records the call tree of stored procedures, exceptions raised, and errors encountered.

· Database event triggers: These triggers, such as ON SERVERERROR, allow you to monitor and log any condition you feel is “exceptional” or out of the ordinary. For example, you can log the SQL that was running when an “out of temp space” error was raised.

· SQL_TRACE/DBMS_MONITOR: This is used to view the exact SQL, wait events and other performance/behavior related diagnostic information generated by running your application. The SQL Trace facility is also available in an extended fashion via the 10046 Oracle event.

among others. Instrumentation is vital in application design and development, and the Oracle database becomes better instrumented with each release. In fact, the amount of additional instrumentation in the database between Oracle9i Release 2 and Oracle 11g, and now Oracle 12c, is phenomenal. Oracle 10g took code instrumentation in the kernel to a whole new level with the introduction of the Automatic Workload Repository (AWR) and Active Session History (ASH) features. Oracle 11g took that further with options such as the Automatic Diagnostic Repository (ADR) and the SQL Performance Analyzer (SPA). Oracle 12c advanced even further with the addition of a DDL log to track all DDL operations in a database (something that shouldn’t be happening in many typical production databases day to day) and the debug log to track exceptional conditions in the database.

In this section we’re going to focus on the information you can find in various types of trace files. We’ll cover what they are, where they are stored, and what we can do with them.

There are two general types of trace files, and what we do with each kind is very different:

· Trace files you expected and want: These are, for example, the result of enabling DBMS_MONITOR.SESSION_TRACE_ENABLE. They contain diagnostic information about your session and will help you tune your application to optimize its performance and diagnose any bottlenecks it is experiencing.

· Trace files you were not expecting but the server generated as the result of an ORA-00600 “Internal Error”, ORA-03113 “End of file on communication channel”, or ORA-07445 “Exception Encountered” type of error. These traces contain diagnostic information that is most useful to an Oracle Support analyst and, beyond showing where in our application the internal error was raised, are of limited use to us.

Requested Trace Files

The trace files you typically expect to be generated as the result of enabling trace via DBMS_MONITOR (ALTER SESSION SET SQL_TRACE=TRUE in Oracle9i Release 2 and earlier), or using the extended trace facility via the 10046 event, might be as follows:

EODA@ORA12CR1> alter session set events
2 '10046 trace name context forever, level 12'
3 /
Session altered.
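The equivalent request through the supported DBMS_MONITOR API—a sketch that asks for wait events and bind values to be captured, roughly what level 12 of the 10046 event gives you—would be:

EODA@ORA12CR1> exec dbms_monitor.session_trace_enable( waits => true, binds => true );
PL/SQL procedure successfully completed.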

These trace files contain diagnostic and performance related information. They provide invaluable insights into the inner workings of your database application. You will see these trace files more often than any other kind of trace file in a normally operating database.

File Locations

Whether you use DBMS_MONITOR, SQL_TRACE or the extended trace facility, Oracle will start generating a trace file on the database server machine in one of two locations:

· If you are using a dedicated server connection, the trace file will be generated in the directory specified by the user_dump_dest parameter.

· If you are using a shared server connection, the trace file will be generated in the directory specified by the background_dump_dest parameter.

To see where the trace files will go, you can issue the show parameter dump_dest command from SQL*Plus, query the V$PARAMETER view, use the routine we created earlier (SCOTT.GET_PARAM), or query the new V$DIAG_INFO view. We’ll demonstrate each in turn next.

EODA@ORA12CR1> show parameter dump_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest string /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1/tra
ce
core_dump_dest string /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1/cdu
mp
user_dump_dest string /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1/tra
ce

This shows the three dump (trace) destinations. The background dump destination is used by any “server” process (see Chapter 5 for a comprehensive list of Oracle background processes and their functions). The core dump destination is used for a “core dump” (very detailed process diagnostic information) when a serious problem arises, such as a process crash. The user dump destination is used by dedicated and shared server connections (covered in Chapter 2) when they generate a trace file.

To continue with the various methods of examining these dump destinations, let’s take a look at the V$ tables available:

EODA@ORA12CR1> select name, value
2 from v$parameter
3 where name like '%dump_dest%';

NAME VALUE
------------------------------ ------------------------------
background_dump_dest /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1/tra
ce

user_dump_dest /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1/tra
ce

core_dump_dest /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1/cdu
mp

We could, of course, use the DBMS_UTILITY package we put in our earlier SCOTT.GET_PARAM function to query the V$PARAMETER table as well:

EODA@ORA12CR1> set serveroutput on
EODA@ORA12CR1> exec dbms_output.put_line( scott.get_param( 'user_dump_dest' ) )
/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace

PL/SQL procedure successfully completed.

In Oracle 11g, a new facility, the ADR, was added. As part of this new facility, there’s a new V$ view—V$DIAG_INFO.

Image Note V$DIAG_INFO is a view available in Oracle 11g (and above) and is not available in older releases. It is an easier interface to the trace information used by the new ADR facility.

For readability purposes, in the following query against V$DIAG_INFO, I’ve factored out the long path name to the ADR Home directory, replacing it with $home$ in the output. This just makes it easier to read the output in the book; it is not something you need to do:

EODA@ORA12CR1> with home
2 as
3 (select value home
4 from v$diag_info
5 where name = 'ADR Home'
6 )
7 select name,
8 case when value <> home.home
9 then replace(value,home.home,'$home$')
10 else value
11 end value
12 from v$diag_info, home
13 /

NAME VALUE
------------------------------ ------------------------------
Diag Enabled TRUE
ADR Base /home/ora12cr1/app/ora12cr1
ADR Home /home/ora12cr1/app/ora12cr1/di
ag/rdbms/ora12cr1/ora12cr1

Diag Trace $home$/trace
Diag Alert $home$/alert
Diag Incident $home$/incident
Diag Cdump $home$/cdump
Health Monitor $home$/hm
Default Trace File $home$/trace/ora12cr1_ora_2231
9.trc

Active Problem Count 0
Active Incident Count 0

11 rows selected.

As you can see, the rows contain paths to the locations of various trace files. Oracle 11g revamped where many files are stored by default, organizing them a bit better to ease the support process when you log a service request with Oracle Support. The most important rows are

· Diag Trace: This is where the trace files—both background and user dump destinations—go to in Oracle 11g and above.

· Default Trace File: This is the name of your current session’s trace file. In earlier releases, this file name could be tricky to figure out (we’ll see how next). In Oracle 11g and above, a simple query against V$DIAG_INFO returns the fully qualified file name.
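For example, a minimal query to retrieve just your own session’s trace file name in 11g and above would be the following (the path returned will, of course, be specific to your system and session):

EODA@ORA12CR1> select value
2 from v$diag_info
3 where name = 'Default Trace File';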

Prior to Oracle 11g and the addition of the Default Trace File information, you had to locate your trace file manually. If you use a shared server connection to Oracle, you are using a background process so the location of your trace files is defined by background_dump_dest. If you use a dedicated server connection, you are using a user or foreground process to interact with Oracle so your trace files will go in the directory specified by the user_dump_dest parameter. The core_dump_dest parameter defines where a “core” file would be generated in the event of a serious Oracle internal error (such as a segmentation fault on UNIX/Linux), or if Oracle Support had you generate one for additional debug information. In general, the two destinations of interest are the background and user dump destinations. Unless otherwise stated, we will be using dedicated server connections in the course of this book, so all of our trace files will be generated in the user_dump_dest location.

Naming Convention

The trace file naming convention changes from time to time in Oracle, but if you have an example trace file name from your system, it is easy to see the template in use. For example, on my various UNIX/Linux servers, a trace file name looks like those in Table 3-1.

Table 3-1. Sample Trace File Names

Trace File Name          Database Version

ora_10583.trc            9i Release 1
ora9ir2_ora_1905.trc     9i Release 2
ora10gr2_ora_6793.trc    10g Release 2
ora11gr2_ora_1990.trc    11g Release 2
ora12cr1_ora_2344.trc    12c Release 1

On my servers, the trace file name can be broken down as follows:

· The first part of the file name is the ORACLE_SID (with the exception of Oracle9i Release 1, where Oracle decided to leave that off).

· The next bit of the file name is just ora.

· The number in the trace file name is the process ID of your dedicated server, available to you from the V$PROCESS view.

Therefore, prior to Oracle 11g, which has the easy to use V$DIAG_INFO view, in practice (assuming dedicated server mode) you need access to four views to determine your trace file name:

· V$PARAMETER, which is used to locate the trace file for user_dump_dest and to find the optional tracefile_identifier that might be used in your trace file name.

· V$PROCESS, which is used to find the process ID.

· V$SESSION, which is used to correctly identify your session’s information in the other views.

· V$INSTANCE, which is used to get the ORACLE_SID.

As noted earlier, you can use the DBMS_UTILITY to find the location, and often you simply “know” the ORACLE_SID, so technically you might only need access to V$SESSION and V$PROCESS, but for ease of use you’d want access to all four.

A query, then, to generate your trace file name could be:

EODA@ORA12CR1> column trace new_val TRACE format a100

EODA@ORA12CR1> select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
2 from v$process a, v$session b, v$parameter c, v$instance d
3 where a.addr = b.paddr
4 and b.audsid = userenv('sessionid')
5 and c.name = 'user_dump_dest'
6 /

TRACE
-------------------------------------------------------------------------------------
/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/ora12cr1_ora_22319.trc

And this just shows that if the file exists, you’ll be able to access it via that name (assuming you have the permissions to read the trace directory). The following example generates a trace file, showing how the file is created once the trace is enabled:

EODA@ORA12CR1> !ls &TRACE
ls: cannot access /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/ora12cr1_ora_22319.trc: No such file or directory

EODA@ORA12CR1> exec dbms_monitor.session_trace_enable
PL/SQL procedure successfully completed.

EODA@ORA12CR1> !ls &TRACE
/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/ora12cr1_ora_22319.trc

As you can see, before we enabled tracing in that session, no file existed; as soon as tracing is enabled, however, we are able to see it.

It should be obvious that on Windows you’d replace the / with \. If you are using 9i Release 1, instead of adding the instance name into the trace file name, you’d simply issue the following:

select c.value || 'ora_' || a.spid || '.trc'

Tagging Trace Files

There is a way to “tag” your trace file so that you can find it even if you are not permitted access to V$PROCESS and V$SESSION. Assuming you have access to read the user_dump_dest directory, you can use the session parameter tracefile_identifier. With this, you can add a uniquely identifiable string to the trace file name, for example:

EODA@ORA12CR1> alter session set tracefile_identifier = 'Look_For_Me';
Session altered.

EODA@ORA12CR1> !ls /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/*Look_For_Me*
ls: cannot access /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/*Look_For_Me*: No such file or directory

EODA@ORA12CR1> exec dbms_monitor.session_trace_enable
PL/SQL procedure successfully completed.

EODA@ORA12CR1> !ls /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/*Look_For_Me*

The ls command is searching for files in this directory:

/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace

The * character is a wildcard instructing ls to look for any files with the string of Look_For_Me included in the file name. For this example, there are two files the prior ls command located:

/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/ora12cr1_ora_22489_Look_For_Me.trc
/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/ora12cr1_ora_22489_Look_For_Me.trm

As you can see, the trace file is now named in the standard <ORACLE_SID>_ora_<PROCESS_ID> format, but it also has the unique string we specified associated with it, making it easy to find “our” trace file name. The trace file ends with the extension of .trc. There’s also a corresponding trace map file (with the extension of .trm) which contains structural information about the trace file. Usually you’ll only be interested in the contents of the .trc file.

Trace Files Generated in Response to Internal Errors

I’d like to close this section with a discussion about those other kinds of trace files—the ones we did not expect that were generated as a result of an ORA-00600 or some other internal error. Is there anything we can do with them?

The short answer is that, in general, they are not for you and me. They are useful to Oracle Support. However, they can be helpful when we file a service request with Oracle Support. That point is crucial: if you are getting internal errors, the only way they will ever be corrected is if you file a service request. If you just ignore them, they will not get fixed by themselves, except by accident.

For example, in Oracle 10g Release 1, if you create the following table and run the query, you may well get an internal error (or not; it was filed as a bug and is corrected in later patch releases):

ops$tkyte@ORA10G> create table t ( x int primary key );
Table created.

ops$tkyte@ORA10G> insert into t values ( 1 );
1 row created.

ops$tkyte@ORA10G> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> select count(x) over ()
2 from t;
from t
*
ERROR at line 2:
ORA-00600: internal error code, arguments: [12410], [], [], [], [], [], [], []

Now, suppose you are the DBA and all of a sudden this trace file pops up in the trace area. Or you are the developer and your application raises an ORA-00600 error and you want to find out what happened. There is a lot of information in that trace file (some 35,000 lines, in fact), but in general it’s not useful to you and me. We would generally just compress the trace file and upload it as part of our service request processing.

In Oracle 11g and above, the process of gathering trace information and uploading it to support has been modified (and made significantly easier). A new command-line tool, in conjunction with a user interface via Enterprise Manager, allows you to review the trace information in the ADR, and package and transmit it to Oracle Support.

The Automatic Diagnostic Repository Command Interpreter (ADRCI) utility allows you to review “problems” (critical errors in the database) and incidents (occurrences of those critical errors) and to package them up for transmission to support. The packaging step includes retrieving not only the trace information, but also details from the database alert log and other configuration/test case information. For example, I set up a situation in my database that raised a critical error. (No, I won’t say what it is. You have to generate your own critical errors.) I knew I had a “problem” in my database because the ADRCI tool told me so:

[ora12cr1@dellpe ~]$ adrci

ADRCI: Release 12.1.0.1.0 - Production on Mon Sep 2 17:45:38 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

ADR base = "/home/ora12cr1/app/ora12cr1"
adrci> show problem

ADR Home = /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1:
*************************************************************************
PROBLEM_ID PROBLEM_KEY LAST_INCIDENT LASTINC_TIME
------------ ------------------------ --------------- ---------------------------------
1 ORA 7445 [qctcopn] 36281 2013-09-02 17:52:11.438000 -04:00

On September 2, 2013 I caused an ORA-7445, a serious problem, in the database (a bug was filed and fixed). I can now see what was affected by that error by issuing the show incident command:

adrci> show incident

ADR Home = /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
--------------- --------------------------- ----------------------------------
36249 ORA 7445 [qctcopn] 2013-09-02 17:45:06.791000 -04:00
36250 ORA 7445 [qctcopn] 2013-09-02 17:51:58.469000 -04:00
36281 ORA 7445 [qctcopn] 2013-09-02 17:52:11.438000 -04:00

I can see there are three incidents, and I can identify the information related to each incident via the show tracefile command:

adrci> show tracefile -I 36250
diag/rdbms/ora12cr1/ora12cr1/incident/incdir_36250/ora12cr1_ora_22682_i36250.trc

This shows me the location of the trace file for incident number 36250. Further, I can see a lot of detail about the incident if I so choose:

adrci> show incident -mode detail -p "incident_id=36250"
ADR Home = /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1:
*************************************************************************

**********************************************************
INCIDENT INFO RECORD 1
**********************************************************
INCIDENT_ID 36250
STATUS ready
CREATE_TIME 2013-09-02 17:51:58.469000 -04:00
PROBLEM_ID 1
CLOSE_TIME <NULL>
FLOOD_CONTROLLED none
ERROR_FACILITY ORA
ERROR_NUMBER 7445
ERROR_ARG1 qctcopn
ERROR_ARG2 SIGSEGV
ERROR_ARG3 ADDR:0x18
ERROR_ARG4 PC:0xB859512
ERROR_ARG5 Address not mapped to object
ERROR_ARG6 <NULL>
ERROR_ARG7 <NULL>
ERROR_ARG8 <NULL>
ERROR_ARG9 <NULL>
ERROR_ARG10 <NULL>
ERROR_ARG11 <NULL>
ERROR_ARG12 <NULL>
SIGNALLING_COMPONENT <NULL>
SIGNALLING_SUBCOMPONENT <NULL>
SUSPECT_COMPONENT <NULL>
SUSPECT_SUBCOMPONENT <NULL>
ECID <NULL>
IMPACTS 0
PROBLEM_KEY ORA 7445 [qctcopn]
FIRST_INCIDENT 36249
FIRSTINC_TIME 2013-09-02 17:45:06.791000 -04:00
LAST_INCIDENT 36281
LASTINC_TIME 2013-09-02 17:52:11.438000 -04:00
IMPACT1 0
IMPACT2 0
IMPACT3 0
IMPACT4 0
KEY_NAME Client ProcId
KEY_VALUE oracle@dellpe (TNS V1-V3).22682_140239502662112
KEY_NAME SID
KEY_VALUE 416.23
KEY_NAME ProcId
KEY_VALUE 31.41
KEY_NAME PQ
KEY_VALUE (0, 1378158717)
OWNER_ID 1
INCIDENT_FILE /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/ora12cr1_ora_22682.trc
OWNER_ID 1
INCIDENT_FILE /home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/incident/incdir_36250/ora12cr1_ora_22682_i36250.trc

And, finally, I can create a “package” of the incident that is useful for support. The package will contain everything a support analyst needs to begin working on the problem.
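The packaging itself is done with ADRCI's Incident Packaging Service (IPS) commands. The following is only a sketch; the package number that ADRCI assigns and the target directory will vary in your environment:

adrci> ips create package incident 36250
adrci> ips generate package 1 in /tmp

The generate step writes a zip file into the named directory; that zip file is what gets uploaded with the service request.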

This section is not intended to be a full overview or introduction to the ADRCI utility, which is documented fully in the Oracle Database Utilities manual. Rather, I just wanted to introduce the existence of the tool—a tool that makes using trace files easy.

Prior to ADRCI in 11g, was there anything you could do with these unexpected trace files beyond sending them to support? Yes. There is some information in a trace file that can help you track down the who, what, and where of an error. The trace file can also help you find out if the problem is something others have already experienced.

The previous example shows that ADRCI is an easy way to interrogate the trace files in Oracle 12c (I showed just a small fraction of the commands available). In 10g and before, you can do the same thing, albeit a bit more manually. For example, a quick inspection of the very top of a trace file provides some useful information. Here’s an example:

/home/ora10gr1/admin/ora10gr1/udump/ora10gr1_ora_2578.trc
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /home/ora10gr1
System name: Linux
Node name: dellpe
Release: 2.6.9-11.ELsmp
Version: #1 SMP Fri May 20 18:26:27 EDT 2005
Machine: i686
Instance name: ora10gr1
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 2578, image: oracle@dellpe (TNS V1-V3)

The database information is important to have when you go to http://support.oracle.com to file the service request or to search to see if what you are experiencing is a known problem. In addition, you can see the Oracle instance on which the error occurred. It is quite common to have many instances running concurrently, so isolating the problem to a single instance is useful.

Here’s another section of the trace file to be aware of:

*** 2010-01-20 14:32:40.007
*** ACTION NAME:() 2010-01-20 14:32:39.988
*** MODULE NAME:(SQL*Plus) 2010-01-20 14:32:39.988
*** SERVICE NAME:(SYS$USERS) 2010-01-20 14:32:39.988

This part of the trace file is new with Oracle 10g and above and won’t be there in Oracle9i and before. It shows the session information available in the columns ACTION and MODULE from V$SESSION. Here we can see that it was a SQL*Plus session that caused the error to be raised (you and your developers can and should set the ACTION and MODULE information; some environments such as Oracle Forms and APEX already do this for you).
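Setting that information is trivial with the DBMS_APPLICATION_INFO package. Here is a minimal sketch; the module and action names are made up purely for illustration:

begin
   -- identify the application component and the unit of work it is performing
   dbms_application_info.set_module
   ( module_name => 'ORDER_ENTRY',     -- hypothetical module name
     action_name => 'CREATE_ORDER' );  -- hypothetical action name
end;
/

-- later, as the unit of work changes, update just the action
exec dbms_application_info.set_action( 'SHIP_ORDER' );

-- what V$SESSION (and any trace file) will now report for this session
select module, action
  from v$session
 where sid = sys_context( 'userenv', 'sid' );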

Additionally, we have the SERVICE NAME. This is the actual service name used to connect to the database—SYS$USERS, in this case—indicating we didn’t connect via a TNS service. If we logged in using user/pass@ora10g.localdomain, we might see:

*** SERVICE NAME:(ORA10G) 2010-01-20 14:32:39.988

where ora10g is the service name (not the TNS connect string; rather, it’s the ultimate service registered in a TNS listener to which it connected). This is also useful in tracking down which process or module is affected by this error.

Lastly, before we get to the actual error, we can see the session ID (19 in this example), session serial number (27995 in this example), and related date/time information (all releases) as further identifying information:

*** SESSION ID:(19.27995) 2010-01-20 14:32:39.988

Now we are ready to get into the error itself:

ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [12410], [], [], [], [], [], [], []
Current SQL statement for this session:
select count(x) over ()
from t
----- Call Stack Trace -----
_ksedmp+524
_ksfdmp.160+14
_kgeriv+139
_kgesiv+78
_ksesic0+59
_qerixAllocate+4155
_qknRwsAllocateTree+281
_qknRwsAllocateTree+252
_qknRwsAllocateTree+252
_qknRwsAllocateTree+252
_qknDoRwsAllocate+9
...

Here we see a couple of important pieces of information. First, we find the SQL statement that was executing when the internal error was raised, which is very useful for tracking down which application(s) were affected. Also, since we see the SQL here, we can start investigating possible workarounds—trying different ways to code the SQL to see if we can quickly work around the issue while working on the bug. Furthermore, we can cut and paste the offending SQL into SQL*Plus and see if we have a nicely reproducible test case for Oracle Support (these are the best kinds of test cases, of course).

The other important pieces of information are the error code (typically 600, 3113, or 7445) and other arguments associated with the error code. Using these, along with some of the stack trace information that shows the set of Oracle internal subroutines that were called in order, we might be able to find an existing bug (and workarounds, patches, and so on). For example, we might use the search string

ora-00600 12410 ksesic0 qerixAllocate qknRwsAllocateTree

Using My Oracle Support’s advanced search (using all of the words, search the bug database), we immediately find the bug 3800614, “ORA-600 [12410] ON SIMPLE QUERY WITH ANALYTIC FUNCTION”. If we go to http://support.oracle.com and search using that text, we will discover this bug, see that it is fixed in the next release, and note that patches are available—all of this information is available to us. I often find that the error I receive is one that has happened before and there are fixes or workarounds for it.

Trace File Wrap-up

You now know the two types of general trace files, where they are located, and how to find them. Hopefully you’ll use trace files mostly for tuning and increasing the performance of your application, rather than for filing service requests. As a last note, Oracle Support does have access to many undocumented “events” that are very useful for dumping out tons of diagnostic information whenever the database hits any error. For example, if you are getting an ORA-01555 Snapshot Too Old that you absolutely feel you should not be getting, Oracle Support can guide you through the process of setting such diagnostic events to help you track down precisely why that error is getting raised, by creating a trace file every time that error is encountered.

Alert File

The alert file (also known as the alert log) is the diary of the database. It is a simple text file written to from the day the database is “born” (created) to the end of time (when you erase it). In this file, you’ll find a chronological history of your database—the log switches; the internal errors that might be raised; when tablespaces were created, taken offline, put back online; and so on. It is an incredibly useful file for viewing the history of a database. I like to let mine grow fairly large before “rolling” (archiving) it. The more information the better, I believe, for this file.

I will not describe everything that goes into an alert log—that’s a fairly broad topic. I encourage you to take a look at yours, however, and see the wealth of information it holds. Instead, in this section we’ll take a look at a specific example of how to mine information from this alert log, in this case to create an uptime report.

In the past, I’ve used the alert log of the database behind the http://asktom.oracle.com web site to generate an uptime report. Instead of poking through the file and figuring that out manually (the shutdown and startup times are in there), I decided to take advantage of the database and SQL to automate that work, thus creating a technique for generating a dynamic uptime report straight from the alert log.

Using an EXTERNAL TABLE (which is covered in much more detail in Chapter 10 and Chapter 15), we can actually query our alert log and see what is in there. I discovered that a pair of records was produced in my alert log every time I started the database:

Thu May 6 14:24:42 2004
Starting ORACLE instance (normal)

That is, I always saw a timestamp record, in that constant, fixed-width format, coupled with the message Starting ORACLE instance. I also noticed that before these records would be an ALTER DATABASE CLOSE message (during a clean shutdown), or a shutdown abort message, or nothing—no message, indicating a system crash. But any message would have some timestamp associated with it as well. So, as long as the system didn’t crash, some meaningful timestamp would be recorded in the alert log (and in the event of a system crash, some timestamp would be recorded shortly before the crash, as the alert log is written to quite frequently).

I discovered that I could easily generate an uptime report if I

· Collected all of the records like Starting ORACLE instance %.

· Collected all of the records that matched the date format (that were in fact dates).

· Associated with each Starting ORACLE instance record the prior two records (which would be dates).

The following code creates an external table to make it possible to query the alert log. (Note: replace /background/dump/dest/ with your actual background dump destination and use your alert log name in the CREATE TABLE statement.)

EODA@ORA12CR1> create or replace
2 directory data_dir
3 as
4 '/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/trace/'
5 /
Directory created.

EODA@ORA12CR1> CREATE TABLE alert_log
2 (
3 text_line varchar2(4000)
4 )
5 ORGANIZATION EXTERNAL
6 (
7 TYPE ORACLE_LOADER
8 DEFAULT DIRECTORY data_dir
9 ACCESS PARAMETERS
10 (
11 records delimited by newline
12 fields
13 )
14 LOCATION
15 (
16 'alert_ora12cr1.log'
17 )
18 )
19 reject limit unlimited
20 /
Table created.

We can now query that information anytime:

EODA@ORA12CR1> select to_char(last_time,'dd-mon-yyyy hh24:mi') shutdown,
2 to_char(start_time,'dd-mon-yyyy hh24:mi') startup,
3 round((start_time-last_time)*24*60,2) mins_down,
4 round((last_time-lag(start_time) over (order by r)),2) days_up,
5 case when (lead(r) over (order by r) is null )
6 then round((sysdate-start_time),2)
7 end days_still_up
8 from (
9 select r,
10 to_date(last_time, 'Dy Mon DD HH24:MI:SS YYYY') last_time,
11 to_date(start_time,'Dy Mon DD HH24:MI:SS YYYY') start_time
12 from (
13 select r,
14 text_line,
15 lag(text_line,1) over (order by r) start_time,
16 lag(text_line,2) over (order by r) last_time
17 from (
18 select rownum r, text_line
19 from alert_log
20 where text_line like '___ ___ __ __:__:__ 20__'
21 or text_line like 'Starting ORACLE instance %'
22 )
23 )
24 where text_line like 'Starting ORACLE instance %'
25 )
26 /

SHUTDOWN STARTUP MINS_DOWN DAYS_UP DAYS_STILL_UP
----------------- ----------------- ---------- ---------- -------------
28-jun-2013 16:04
28-jun-2013 17:02 28-jun-2013 17:02 .03 .04
29-jun-2013 06:00 01-jul-2013 09:42 3102.53 .54
02-jul-2013 14:59 02-jul-2013 14:59 .03 1.22
02-jul-2013 15:00 02-jul-2013 15:00 .03 0
02-jul-2013 15:10 02-jul-2013 15:10 .03 .01
02-jul-2013 17:01 02-jul-2013 17:02 1.55 .08
18-jul-2013 02:00 18-jul-2013 11:31 571.37 15.37
05-aug-2013 09:00 06-aug-2013 09:06 1445.62 17.9
14-aug-2013 09:09 14-aug-2013 09:58 49.42 8
31-aug-2013 14:08 02-sep-2013 10:51 2683.15 17.17
02-sep-2013 14:32 02-sep-2013 14:51 18.93 .15
02-sep-2013 15:13 02-sep-2013 15:13 .03 .02
02-sep-2013 15:15 02-sep-2013 15:15 .05 0
02-sep-2013 16:53 02-sep-2013 16:54 .03 .07
02-sep-2013 16:56 02-sep-2013 16:56 .03 0 .07

16 rows selected.

I won’t go into the nuances of the SQL query here, but the innermost query from lines 18 through 21 collects the “Starting” and date lines (remember, when using a LIKE clause, _ matches precisely one character—at least one and at most one). That query also numbers the lines using rownum. Then, the next level of query uses the built-in LAG() analytic function to reach back one and two rows for each row, and slide that data up so the third row of this query has the data from rows 1, 2, and 3. Row 4 has the data from rows 2, 3, and 4, and so on. We end up keeping just the rows that were like Starting ORACLE instance %, which now have the two preceding timestamps associated with them. From there, computing downtime is easy: we just subtract the two dates. Computing the uptime is not much harder (now that you’ve seen the LAG() function): we just reach back to the prior row, get its startup time, and subtract that from this line’s shutdown time.
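If the LAG() behavior is not immediately clear, this tiny standalone query shows how the function slides prior rows up alongside the current row:

-- for rows 1..5, show the value one row back and two rows back
select r,
       lag(r,1) over (order by r) one_back,
       lag(r,2) over (order by r) two_back
  from (select rownum r from dual connect by level <= 5);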

My Oracle 12c database came into existence on 28-Jun-2013 and it has been shut down numerous times (and as of this writing it has been up for .07 days in a row).

If you are interested in seeing another example of mining the alert log for useful information, go to http://tinyurl.com/y8wkhjt. This page shows a demonstration of how to compute the average time it took to archive a given online redo log file. Once you understand what is in the alert log, generating these queries on your own becomes easy.

In addition to using an external table to query the alert log in 12c, you can easily view the alert log using the ADRCI tool. That tool lets you find, view, and monitor the alert log (interactively displaying new records as they appear); a short example follows the next listing. Also, the alert log in 11g and above is available in two versions—the old text version we just used and an XML version:

EODA@ORA12CR1> column value new_val V
EODA@ORA12CR1> select value from v$diag_info where name = 'Diag Alert';

VALUE
-------------------------------------------------------------------------------
/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/alert

EODA@ORA12CR1> !ls &V/log.xml
/home/ora12cr1/app/ora12cr1/diag/rdbms/ora12cr1/ora12cr1/alert/log.xml

EODA@ORA12CR1> !head &V/log.xml
<msg time='2013-06-28T16:04:25.378-04:00' org_id='oracle' comp_id='rdbms'
msg_id='dbkrlCheckSuppressAlert:332:7003611' type='NOTIFICATION' group='startup'
level='16' host_id='localhost.localdomain' host_addr='::1'
pid='32628' version='1'>
<txt>Adjusting the default value of parameter parallel_max_servers
</txt>
</msg>
<msg time='2013-06-28T16:04:25.378-04:00' org_id='oracle' comp_id='rdbms'
msg_id='dbkrlCheckSuppressAlert:332:2000778772' type='NOTIFICATION' group='startup'
level='16' host_id='localhost.localdomain' host_addr='::1'

If you have utilities or tools to generate reports from XML (such as an Oracle database using XDB—XML DB—for example), you may query/report on that format as well.
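As for the ADRCI option mentioned a moment ago, tailing the alert log from that tool is a one-liner. For example (the 20 is just an arbitrary number of lines to display):

adrci> show alert -tail 20
adrci> show alert -tail -f

The -f option works like tail -f on UNIX/Linux, displaying new alert log records interactively as they are written.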

Of course, Enterprise Manager displays the important alert log information as well.

Data Files

Data files, along with redo log files, are the most important types of files in the database. This is where all of your data will ultimately be stored. Every database has at least one data file associated with it, and typically it has many more than one. Only the simplest “test” databases have just one data file. In fact, in Chapter 2 we saw that the simplest CREATE DATABASE command by default created a database with three data files, listed here for reference:

NAME
-----------------------------------------------------------------------
/home/ora12cr1/app/ora12cr1/product/12.1.0/dbhome_1/dbs/dbs1ora12c.dbf
/home/ora12cr1/app/ora12cr1/product/12.1.0/dbhome_1/dbs/dbx1ora12c.dbf
/home/ora12cr1/app/ora12cr1/product/12.1.0/dbhome_1/dbs/dbu1ora12c.dbf

one for the SYSTEM tablespace (which houses the true Oracle data dictionary), one for the SYSAUX tablespace (where other non-dictionary objects are stored in version 10g and above), and one for the USER tablespace (tablespaces will be explained shortly in the “Tablespaces” section). Any real database will have at least these three data files.
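You can always see exactly which data files (and temp files) make up your own database by querying the data dictionary, for example:

-- the data files and temp files of the database
select name from v$datafile;
select name from v$tempfile;

-- or, with the owning tablespace of each data file
select tablespace_name, file_name
  from dba_data_files
 order by tablespace_name;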

After a brief review of file system types, we’ll discuss how Oracle organizes these files and how data is organized within them. To understand this, you need to know what tablespaces, segments, extents, and blocks are. These are the units of allocation that Oracle uses to hold objects in the database, and I describe them in detail shortly.

A Brief Review of File System Mechanisms

There are four file system mechanisms (only three in Oracle 12c) in which to store your data in Oracle. By your data, I mean your data dictionary, redo, undo, tables, indexes, LOBs, and so on—the data you personally care about at the end of the day. Briefly, they are

· “Cooked” operating system (OS) file systems: These are files that appear in the file system just like your word processing documents do. You can see them in Windows Explorer; you can see them in UNIX/Linux as the result of an ls command. You can use simple OS utilities such as xcopy on Windows or cp on UNIX/Linux to move them around. Cooked OS files are historically the most popular method for storing data in Oracle, but I see that changing with the introduction of ASM (more on that in a moment). Cooked file systems are typically buffered as well, meaning that the OS will cache information for you as you read and, in some cases, write to disk.

· Raw partitions: These are not files—these are raw disks. You don’t ls them; you don’t review their contents in Windows Explorer. They are just big sections of disk without any sort of file system on them. The entire raw partition appears to Oracle as a single large file. This is in contrast to a cooked file system, where you might have many dozens or even hundreds of database data files. Currently, only a small percentage of Oracle installations use raw partitions due to their perceived administrative overhead. Raw partitions are not buffered devices—all I/O performed on them is direct I/O, without any OS buffering of data (which, for a database, is generally a positive attribute).

Image Note Raw partitions are deprecated in Oracle 11g and are no longer supported at all in Oracle 12c. If you have an existing database that uses raw partitions, you will have to data pump that data out, or use some other replication tool, such as Golden Gate, to move your data into a new database that uses one of the supported file systems. Alternatively, you can add new tablespaces utilizing a supported file system to your existing database, and move your data from raw partitions to this other file system. This approach will only work if your SYSTEM tablespace is not on a raw partition.

· Automatic Storage Management (ASM): This is a new feature of Oracle 10g Release 1 (for both Standard and Enterprise editions). In releases prior to 11g Release 2, ASM is a file system designed exclusively for use by the database. An easy way to think about it is as a database file system. You won’t store your shopping list in a text file on this particular file system—you’ll store only database-related information here: tables, indexes, backups, control files, parameter files, redo logs, archives, and more. But even in ASM, the equivalent of a data file exists; conceptually, data is still stored in files, but the file system is ASM. ASM is designed to work in either a single machine or clustered environment. Since Oracle 11g Release 2, ASM provides not only this database file system but optionally a clustered file system as well, which is described next.

· Clustered file system: This is specifically for a RAC (clustered) environment and provides what looks like a cooked file system that is shared by many nodes (computers) in a clustered environment. A traditional cooked file system is usable by only one computer in a clustered environment. So, while it is true that you could NFS mount or Samba share (a method of sharing disks in a Windows/UNIX/Linux environment similar to NFS) a cooked file system among many nodes in a cluster, it represents a single point of failure. If the node owning the file system and performing the sharing failed, that file system would be unavailable. In releases of Oracle prior to 11g Release 2, the Oracle Cluster File System (OCFS) is Oracle’s offering in this area and is currently available for Windows and UNIX/Linux only. Other third-party vendors provide certified clustered file systems that work with Oracle as well. Oracle 11g Release 2 provides another option in the form of the Oracle Automatic Storage Management Cluster File System (ACFS). A clustered file system brings the comfort of a cooked file system to a clustered environment.

The interesting thing is that a database might consist of files from any or all of the preceding file systems—you don’t need to pick just one. You could have a database whereby portions of the data were stored in conventional cooked file systems, some on raw partitions, others in ASM, and yet other components in a clustered file system. This makes it rather easy to move from technology to technology, or to just get your feet wet in a new file system type without moving the entire database into it. Now, since a full discussion of file systems and all of their detailed attributes is beyond the scope of this book, we’ll dive back into the Oracle file types. Regardless of whether the file is stored on cooked file systems, in raw partitions, within ASM, or on a clustered file system, the following concepts always apply.

The Storage Hierarchy in an Oracle Database

A database is made up of one or more tablespaces. A tablespace is a logical storage container in Oracle that comes at the top of the storage hierarchy and is made up of one or more data files. These files might be cooked files in a file system, raw partitions, ASM-managed database files, or files on a clustered file system. A tablespace contains segments, as described next.

Segments

Segments are the major organizational structure within a tablespace. Segments are simply your database objects that consume storage—typically objects such as tables, indexes, undo segments, and so on. Most times, when you create a table, you create a table segment. When you create a partitioned table, you are not creating a table segment, rather you create a segment per partition. When you create an index, you normally create an index segment, and so on. Every object that consumes storage is ultimately stored in a single segment. There are undo segments, temporary segments, cluster segments, index segments, and so on.

Image Note It might be confusing to read “Every object that consumes storage is ultimately stored in a single segment.” You will find many CREATE statements that create multisegment objects. The confusion lies in the fact that a single CREATE statement may ultimately create objects that consist of zero, one, or more segments! For example, CREATE TABLE T ( x int primary key, y clob ) will create four segments: one for the TABLE T, one for the index that will be created in support of the primary key, and two for the CLOB (one segment for the CLOB is the LOB index and the other segment is the LOB data itself). On the other hand, CREATE TABLE T ( x int, y date ) cluster MY_CLUSTER will create zero segments (the cluster is the segment in this case). We’ll explore this concept further in Chapter 10.
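You can see the note's point for yourself by creating such a table and then looking in USER_SEGMENTS. A quick sketch follows; note that with deferred segment creation (the default in 11g Release 2 and above, discussed in the next section), the segments may not appear until the first row is inserted:

create table t ( x int primary key, y clob );

insert into t values ( 1, 'hello world' );
commit;

-- expect a table segment, an index segment for the primary key,
-- a LOBSEGMENT, and a LOBINDEX
select segment_name, segment_type
  from user_segments
 order by segment_name;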

Extents

Segments consist of one or more extents. An extent is a logically contiguous allocation of space in a file. (Files themselves, in general, are not contiguous on disk; otherwise, we would never need a disk defragmentation tool! Also, with disk technologies such as Redundant Array of Independent Disks (RAID), you might find that a single file also spans many physical disks.) Traditionally, every segment starts with at least one extent. Oracle 11g Release 2 has introduced the concept of a “deferred” segment—a segment that will not immediately allocate an extent, so in that release and going forward, a segment might defer allocating its initial extent until data is inserted into it. When an object needs to grow beyond its initial extent, it will request another extent be allocated to it. This second extent will not necessarily be located right next to the first extent on disk—it may very well not even be allocated in the same file as the first extent. The second extent may be located very far away from the first extent, but the space within an extent is always logically contiguous in a file. Extents vary in size from one Oracle data block (explained shortly) to 2GB.
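Continuing with the table T from the previous sketch, you can watch extents being allocated, and see which data file each one landed in, via DBA_EXTENTS:

-- one row per extent; FILE_ID may differ from extent to extent
select segment_name, extent_id, file_id, block_id, blocks
  from dba_extents
 where owner = user
   and segment_name = 'T'
 order by extent_id;

As the table grows and additional extents are requested, new rows appear here, possibly with different FILE_ID values if the tablespace has more than one data file.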

Blocks

Extents, in turn, consist of blocks. A block is the smallest unit of space allocation in Oracle. Blocks are where your rows of data, or index entries, or temporary sort results are stored. A block is what Oracle typically reads from and writes to disk. Blocks in Oracle are generally one of four common sizes: 2KB, 4KB, 8KB, or 16KB (although 32KB is also permissible in some cases; there are restrictions in place as to the maximum size by operating system).

Image Note Here’s a little-known fact: the default block size for a database does not have to be a power of two. Powers of two are just a commonly used convention. You can, in fact, create a database with a 5KB, 7KB, or nKB block size, where n is between 2KB and 32KB. I don’t advise making use of this fact in real life, though—stick with the usual as your block size. Using nonstandard block sizes could easily become a support issue—if you are the only one using a 5KB block size, you may well encounter issues that other users would simply never see.

The relationship between segments, extents, and blocks is shown in Figure 3-1.

image

Figure 3-1. Segments, extents, and blocks

A segment is made up of one or more extents, and an extent is a logically contiguous allocation of blocks. Starting with Oracle9i Release 1, a database may have up to six different block sizes in it.

Image Note This feature of multiple block sizes was introduced for the purpose of making transportable tablespaces usable in more cases. The ability to transport a tablespace allows a DBA to move or copy the already formatted data files from one database and attach them to another—for example, to immediately copy all of the tables and indexes from an Online Transaction Processing (OLTP) database to a Data Warehouse (DW). However, in many cases, the OLTP database might be using a small block size, such as 2KB or 4KB, whereas the DW would be using a much larger one (8KB or 16KB). Without support for multiple block sizes in a single database, you wouldn’t be able to transport this information. Tablespaces with multiple block sizes should be used to facilitate transporting tablespaces; they are not generally used for anything else.

There will be the database default block size, which is the size specified in the initialization file during the CREATE DATABASE command. The SYSTEM tablespace will have this default block size always, but you can then create other tablespaces with nondefault block sizes of 2KB, 4KB, 8KB, 16KB, and, depending on the operating system, 32KB. The total number of block sizes is six if and only if you specified a nonstandard block size (not a power of two) during database creation. Hence, for all practical purposes, a database will have at most five block sizes: the default size and then four other nondefault sizes.
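To actually use a nondefault block size, you must first configure a buffer cache for that size and then name the size when creating the tablespace. The following is a sketch only; the cache size, file name, and path are illustrative, and this assumes 16KB is not already your database default block size:

-- a buffer cache for 16KB blocks must exist before the tablespace can be created
alter system set db_16k_cache_size = 32m;

create tablespace ts_16k
  datafile '/u01/oradata/ORA12CR1/ts_16k_01.dbf' size 100m
  blocksize 16k;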

Any given tablespace will have a consistent block size, meaning that every block in that tablespace will be the same size. A multisegment object, such as a table with a LOB column, may have each segment in a tablespace with a different block size, but any given segment (which is contained in a tablespace) will consist of blocks of exactly the same size.

Most blocks, regardless of their size, have the same general format, which looks something like Figure 3-2.

image

Figure 3-2. The structure of a block

Exceptions to this format include LOB segment blocks and hybrid columnar compressed blocks in Exadata storage, for example, but the vast majority of blocks in your database will resemble the format in Figure 3-2. The block header contains information about the type of block (table block, index block, and so on); transaction information when relevant (only blocks that are transaction-managed have this information—a temporary sort block would not, for example) regarding active and past transactions on the block; and the address (location) of the block on the disk.

The next two block components are found on the most common types of database blocks, those of HEAP-organized tables. We’ll cover database table types in much more detail in Chapter 10, but suffice it to say that most tables are of this type.

The table directory, if present, contains information about the tables that store rows in this block (data from more than one table may be stored on the same block). The row directory contains information describing the rows that are to be found on the block. This is an array of pointers to where the rows are to be found in the data portion of the block. These three pieces of the block are collectively known as the block overhead, which is space used on the block that is not available for your data, but rather is used by Oracle to manage the block itself.

The remaining two pieces of the block are straightforward: there may be free space on a block, and then there will generally be used space that is currently storing data.

Now that you have a cursory understanding of segments, which consist of extents, which consist of blocks, let’s take a closer look at tablespaces and then at exactly how files fit into the big picture.

Tablespaces

As noted earlier, a tablespace is a container—it holds segments. Each segment belongs to exactly one tablespace. A tablespace may have many segments within it. All of the extents for a given segment will be found in the tablespace associated with that segment. Segments never cross tablespace boundaries. A tablespace itself has one or more data files associated with it. An extent for any given segment in a tablespace will be contained entirely within one data file. However, a segment may have extents from many different data files. Graphically, a tablespace might look like Figure 3-3.

image

Figure 3-3. A tablespace containing two data files, three segments, and four extents

Figure 3-3 shows a tablespace named USER_DATA. It consists of two data files, user_data01.dbf and user_data02.dbf. It has three segments allocated to it: T1, T2, and I1 (probably two tables and an index). The tablespace has four extents allocated in it, and each extent is depicted as a logically contiguous set of database blocks. Segment T1 consists of two extents, one extent in each file. Segments T2 and I1 each have one extent depicted. If we need more space in this tablespace, we could either resize the data files already allocated to the tablespace or we could add a third data file to it.
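Either of those options is a one-line command for the DBA. As a sketch (the file names follow Figure 3-3; the directory and sizes are purely illustrative):

-- grow an existing data file
alter database datafile '/u01/oradata/user_data01.dbf' resize 200m;

-- or add a third data file to the tablespace
alter tablespace user_data add datafile '/u01/oradata/user_data03.dbf' size 100m;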

A tablespace is a logical storage container in Oracle. As developers, we will create segments in tablespaces. We will never get down to the raw file level—we don’t specify that we want our extents to be allocated in a specific file (we can, but in general we don’t). Rather, we create objects in tablespaces and Oracle takes care of the rest. If at some point in the future, the DBA decides to move our data files around on disk to more evenly distribute I/O, that is OK with us. It will not affect our processing at all.

Storage Hierarchy Summary

In summary, the hierarchy of storage in Oracle is as follows:

1. A database is made up of one or more tablespaces.

2. A tablespace is made up of one or more data files. These files might be cooked files in a file system, raw partitions, ASM managed database files, or a file on a clustered file system. A tablespace contains segments.

3. A segment (TABLE, INDEX, and so on) is made up of one or more extents. A segment exists in a tablespace, but may have data in many data files within that tablespace.

4. An extent is a logically contiguous set of blocks on disk. An extent is in a single tablespace and, furthermore, is always in a single file within that tablespace.

5. A block is the smallest unit of allocation in the database. A block is the smallest unit of I/O used by a database on data files.
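You can get a feel for these layers in your own database with a single dictionary query that rolls the whole hierarchy up into one row of counts (note that scanning DBA_EXTENTS can take a while in a large database):

select (select count(*)    from dba_tablespaces) tablespaces,
       (select count(*)    from dba_data_files)  data_files,
       (select count(*)    from dba_segments)    segments,
       (select count(*)    from dba_extents)     extents,
       (select sum(blocks) from dba_extents)     db_blocks
  from dual;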

Dictionary-Managed and Locally-Managed Tablespaces

Before we move on, let’s look at one more topic related to tablespaces: how extents are managed in a tablespace. Prior to Oracle 8.1.5, there was only one method for managing the allocation of extents within a tablespace: a dictionary-managed tablespace. That is, the space within a tablespace was managed in data dictionary tables, in much the same way you’d manage accounting data, perhaps with a DEBIT and CREDIT table. On the debit side, we have all of the extents allocated to objects. On the credit side, we have all of the free extents available for use. When an object needed another extent, it would ask the system for one. Oracle would then go to its data dictionary tables, run some queries, find the space (or not), and then update a row in one table (or remove it all together) and insert a row into another. Oracle managed space in very much the same way you write your applications: by modifying data and moving it around.

This SQL, executed on your behalf in the background to get the additional space, is referred to as recursive SQL. Your SQL INSERT statement caused other recursive SQL to be executed to get more space. This recursive SQL can be quite expensive if it is done frequently. Such updates to the data dictionary must be serialized; they can’t be done simultaneously. They are something to be avoided.

In earlier releases of Oracle, we would see this space management issue—this recursive SQL overhead—most often in temporary tablespaces (this was before the introduction of “real” temporary tablespaces created via the CREATE TEMPORARY TABLESPACE command). Space would frequently be allocated (we would have to delete from one dictionary table and insert into another) and deallocated (we would put the rows we just moved back where they were initially). These operations would tend to serialize, dramatically decreasing concurrency and increasing wait times. In version 7.3 (way back in 1995), Oracle introduced the concept of a true temporary tablespace, a new tablespace type dedicated to just storing temporary data, to help alleviate this issue. Prior to this special tablespace type, temporary data was managed in the same tablespaces as persistent data and treated in much the same way as permanent data was.

A temporary tablespace was one in which you could create no permanent objects of your own. This was fundamentally the only difference; the space was still managed in the data dictionary tables. However, once an extent was allocated in a temporary tablespace, the system would hold on to it (i.e., it would not give the space back). The next time someone requested space in the temporary tablespace for any purpose, such as sorting, Oracle would look for an already allocated extent in its internal list of allocated extents. If it found one there, it would simply reuse it, or else it would allocate one the old-fashioned way. In this manner, once the database had been up and running for a while, the temporary segment would appear full but would actually just be “allocated.” The free extents were all there; they were just being managed differently. When someone needed temporary space, Oracle would look for that space in an in-memory data structure, instead of executing expensive, recursive SQL.

In Oracle 8.1.5 and later, Oracle went a step further in reducing this space management overhead. It introduced the concept of a locally-managed tablespace as opposed to a dictionary-managed one. Local management of space effectively did for all tablespaces what Oracle 7.3 did for temporary tablespaces: it removed the need to use the data dictionary to manage space in a tablespace. With a locally-managed tablespace, a bitmap stored in each data file is used to manage the extents. To get an extent, all the system needs to do is set a bit to 1 in the bitmap. To free up some space, the system sets a bit back to 0. Compared with using dictionary-managed tablespaces, this is incredibly fast. We no longer serialize for a long-running operation at the database level for space requests across all tablespaces. Rather, we serialize at the tablespace level for a very fast operation. Locally-managed tablespaces have other nice attributes as well, such as the enforcement of a uniform extent size, but that is starting to get heavily into the role of the DBA.
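Creating a locally-managed tablespace is the default behavior now, but the clauses can be spelled out explicitly. A sketch; the file names are illustrative, and the 1MB uniform size is just an example of the uniform extent enforcement mentioned above:

-- every extent in this tablespace will be exactly 1MB
create tablespace lmt_uniform
  datafile '/tmp/lmt_uniform.dbf' size 100m
  extent management local
  uniform size 1m;

-- or let Oracle choose the extent sizes automatically
create tablespace lmt_auto
  datafile '/tmp/lmt_auto.dbf' size 100m
  extent management local
  autoallocate;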

Going forward, the only storage management method you should be using is a locally-managed tablespace. In fact, in Oracle9i and above, if you create a database using the database configuration assistant (DBCA), it will create the SYSTEM tablespace as a locally-managed tablespace, and if the SYSTEM tablespace is locally managed, all other tablespaces in that database will be locally managed as well, and the legacy dictionary-managed method will not work. It’s not that dictionary-managed tablespaces are not supported in a database where the SYSTEM tablespace is locally managed, it’s that they simply can’t be created:

EODA@ORA12CR1> create tablespace dmt
2 datafile '/tmp/dmt.dbf' size 2m
3 extent management dictionary;
create tablespace dmt
*
ERROR at line 1:
ORA-12913: Cannot create dictionary managed tablespace

EODA@ORA12CR1> !oerr ora 12913
12913, 00000, "Cannot create dictionary managed tablespace"
// *Cause: Attemp to create dictionary managed tablespace in database
// which has system tablespace as locally managed
// *Action: Create a locally managed tablespace.

Note that oerr is a UNIX/Linux-only utility; on non-UNIX/Linux platforms, you’ll need to refer to the Oracle Database Error Messages manual for the details on the error you receive.

Image Note You might wonder why I wrote “It’s not that dictionary-managed tablespaces are not supported in a database where the SYSTEM tablespace is locally managed, it’s that they simply can’t be created.” If they can’t be created, why would we need to support them? The answer lies in the transportable tablespace feature. You can transport a dictionary-managed tablespace into a database with a SYSTEM tablespace that is locally managed. You can plug that tablespace in and have a dictionary-managed tablespace in your database, but you can’t create one from scratch in that database.

The inability to create dictionary-managed tablespaces is a positive side effect, as it prohibits you from using the legacy storage mechanism, which was less efficient and dangerously prone to space fragmentation. Locally-managed tablespaces, in addition to being more efficient in space allocation and deallocation, also prevent tablespace fragmentation. We’ll take an in-depth look at this in Chapter 10.

Temp Files

Temporary data files (temp files) in Oracle are a special type of data file. Oracle will use temporary files to store the intermediate results of large sort operations and hash operations, as well as to store global temporary table data, or resultset data, when there is insufficient memory to hold it all in RAM. In Oracle 12c and above, temporary tablespaces can also hold the UNDO generated by operations performed on global temporary tables. In earlier releases, the UNDO generated by global temporary tables was routed to the UNDO tablespace and hence would cause REDO to be generated; this is no longer the case. Permanent data objects, such as a table or an index, will never be stored in a temp file, but the contents of a temporary table and its indexes would be. So, you’ll never create your application tables in a temp file, but you might store data there when you use a temporary table.

Temp files are treated in a special way by Oracle. Normally, every change you make to an object will be recorded in the redo logs; these transaction logs can be replayed at a later date to “redo a transaction,” which you might do during recovery from failure. Temp files are excluded from this process. Specifically, transactions in global temporary tables (located in temp files) never have REDO generated for them, although they can have UNDO generated. Thus, there may be REDO generated working with temporary tables since UNDO is always protected by REDO, as you will see in detail in Chapter 9. The UNDO generated for global temporary tables is to support rolling back work you’ve done in your session, either due to an error processing data or because of some general transaction failure. A DBA never needs to back up a temporary data file, and, in fact, attempting to do so would be a waste of time, as you can never recover a temporary data file.

Image Note In Oracle 12c and above, the UNDO generated for global temporary tables may be stored in the temporary tablespace. By default, UNDO will be generated into the permanent UNDO tablespace, just like prior releases. An init.ora system-level setting, or a TEMP_UNDO_ENABLED session-level settable parameter, may be set to TRUE to enable the UNDO generated for global temporary tables to be stored in a temp file. In this manner, no REDO will be generated for these operations. We will investigate this further in Chapter 9.
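Enabling this is a single command, shown here as a sketch; note that the session-level setting should be in place before the session performs its first operation on a temporary table in order to take effect:

-- for just this session
alter session set temp_undo_enabled = true;

-- or for the database as a whole
alter system set temp_undo_enabled = true;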

It is recommended that your database be configured with locally-managed temporary tablespaces. You’ll want to make sure that as a DBA, you use a CREATE TEMPORARY TABLESPACE command. You don’t want to just alter a permanent tablespace to a temporary one, as you do not get the benefits of temp files that way.

One of the nuances of true temp files is that if the OS permits it, the temporary files will be created sparse—that is, they will not actually consume disk storage until they need to. You can see that easily in this example (on Oracle Linux):

EODA@ORA12CR1> !df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 6.5G 41G 14% /

EODA@ORA12CR1> create temporary tablespace temp_huge
2 tempfile '/tmp/temp_huge.dbf' size 2g;

Tablespace created.

EODA@ORA12CR1> !df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 6.5G 41G 14% /

EODA@ORA12CR1> !ls -l /tmp/temp_huge.dbf
-rw-rw----. 1 ora12cr1 ora12cr1 2147491840 Sep 3 13:28 /tmp/temp_huge.dbf

Image Note The UNIX/Linux command df shows “disk free” space. This command showed that I have 41GB free in the file system containing /tmp before I added a 2GB temp file to the database. After I added that file, I still had 41GB free in the file system.

Apparently it didn’t take much storage to hold that file. If we look at the ls output, it appears to be a normal 2GB file, but it is, in fact, consuming only a few kilobytes of storage currently. So we could actually create hundreds of these 2GB temporary files, even though we have roughly 41GB of disk space free. Sounds great—free storage for all! The problem is, as we start to use these temp files and they start expanding out, we would rapidly hit errors stating “no more space.” Since the space is allocated or physically assigned to the file as needed by the OS, we stand a definite chance of running out of room (especially if after we create the temp files, someone else fills up the file system with other stuff).

How to solve this differs from OS to OS. On UNIX/Linux, you can use dd to fill the file with data, causing the OS to physically assign disk storage to the file, or use cp to create a nonsparse file, for example:

EODA@ORA12CR1> !cp --sparse=never /tmp/temp_huge.dbf /tmp/temp_huge_not_sparse.dbf

EODA@ORA12CR1> !df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 8.5G 39G 19% /

EODA@ORA12CR1> drop tablespace temp_huge including contents and datafiles;

Tablespace dropped.

EODA@ORA12CR1> create temporary tablespace temp_huge
2 tempfile '/tmp/temp_huge_not_sparse.dbf' reuse;

Tablespace created.

After copying the sparse 2GB file to /tmp/temp_huge_not_sparse.dbf and creating the temporary tablespace using that temp file with the REUSE option, we are assured that temp file has allocated all of its file system space and our database actually has 2GB of temporary space to work with.
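The dd approach mentioned above works just as well: pre-create a zero-filled file of the desired size and then point CREATE TEMPORARY TABLESPACE ... REUSE at it. A sketch only; the file name and size are illustrative:

!dd if=/dev/zero of=/tmp/temp_huge2.dbf bs=1048576 count=2048

create temporary tablespace temp_huge2
  tempfile '/tmp/temp_huge2.dbf' reuse;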

Image Note In my experience, Windows NTFS does not create temp files sparse; the sparse-file behavior described here applies to UNIX/Linux variants. On the plus side, if you have to create a 15GB temporary tablespace on UNIX/Linux with sparse temp file support, you’ll find it happens very fast (instantaneously); just make sure you have 15GB free and reserve it in your mind.

Control Files

Control files are fairly small files (they can grow up to 64MB or so in extreme cases) that contain a directory of the other files Oracle needs. The parameter file tells the instance where the control files are, and the control files tell the instance where the database and online redo log files are.

The control files also tell Oracle other things, such as information about checkpoints that have taken place, the name of the database (which should match the db_name parameter in the parameter file), the timestamp of the database as it was created, an archive redo log history (this can make a control file large in some cases), RMAN information, and so on.

Control files should be multiplexed either by hardware (RAID) or by Oracle when RAID or mirroring is not available. More than one copy should exist, and the copies should be stored on separate disks to avoid losing them in case you have a disk failure. It is not fatal to lose your control files—it just makes recovery that much harder.
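If you are curious where your control files are and how many copies exist, the database will tell you:

-- the control file copies currently in use
select name from v$controlfile;

-- or, from SQL*Plus
show parameter control_files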

Control files are something a developer will probably never have to actually deal with. To a DBA, they are an important part of the database, but to a software developer they are not really relevant.

Redo Log Files

Redo log files are crucial to the Oracle database. These are the transaction logs for the database. They are generally used only for recovery purposes, but they can be used for the following as well:

· Instance recovery after a system crash

· Media recovery after a data file restore from backup

· Standby database processing

· Input into “Streams” or Golden Gate, redo log mining processes used for information sharing (a fancy way of saying replication)

· Allowing administrators to inspect historical database transactions through the Oracle LogMiner utility

Their main purpose in life is to be used in the event of an instance or media failure, or as a method of maintaining a standby database for failover. If the power goes off on your database machine, causing an instance failure, Oracle will use the online redo logs to restore the system to exactly the point it was at immediately prior to the power outage. If your disk drive containing your data file fails permanently, Oracle will use archived redo logs, as well as online redo logs, to recover a backup of that drive to the correct point in time. Additionally, if you “accidentally” drop a table or remove some critical information and commit that operation, you can restore a backup and have Oracle restore it to the point just before the accident using these online and archived redo log files.

Virtually every operation you perform in Oracle generates some amount of redo to be written to the online redo log files. When you insert a row into a table, the end result of that insert is written to the redo logs. When you delete a row, the fact that you deleted that row is written. When you drop a table, the effects of that drop are written to the redo log. The data from the table you dropped is not written; however, the recursive SQL that Oracle performs to drop the table does generate redo. For example, Oracle will delete a row from the SYS.OBJ$ table (and other internal dictionary objects), and this will generate redo, and if various modes of supplemental logging are enabled, the actual DROP TABLE statement will be written into the redo log stream.

Some operations may be performed in a mode that generates as little redo as possible. For example, I can create an index with the NOLOGGING attribute. This means that the initial creation of the index data will not be logged, but any recursive SQL Oracle performed on my behalf will be. For example, the insert of a row into SYS.OBJ$ representing the existence of the index will be logged, as will all subsequent modifications of the index using SQL inserts, updates, and deletes. But the initial writing of the index structure to disk will not be logged.
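If you want to see this effect for yourself, the 'redo size' statistic in V$MYSTAT shows how much redo your session has generated. The following is a rough sketch; BIG_TABLE and its OBJECT_NAME column are hypothetical stand-ins for one of your own tables, and the size of the savings depends on factors such as whether the database is in ARCHIVELOG mode:

-- redo generated by this session so far
select sn.name, st.value
  from v$mystat st, v$statname sn
 where st.statistic# = sn.statistic#
   and sn.name = 'redo size';

create index big_table_idx on big_table(object_name) nologging;

-- rerun the first query; the delta is mostly the redo from the recursive SQL,
-- not from writing the index structure itself. Repeat without NOLOGGING to compare.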

I’ve referred to two types of redo log file: online and archived. We’ll take a look at each in the sections that follow. In Chapter 9 we’ll take another look at redo in conjunction with undo segments, to see what impact they have on you as a developer. For now, we’ll just concentrate on what they are and what their purpose is.

Online Redo Log

Every Oracle database has at least two online redo log file groups. Each redo log group consists of one or more redo log members (redo is managed in groups of members). The individual redo log file members of these groups are true mirror images of each other. These online redo log files are fixed in size and are used in a circular fashion. Oracle will write to log file group 1, and when it gets to the end of this set of files, it will switch to log file group 2 and overwrite the contents of those files from start to end. When it has filled log file group 2, it will switch back to log file group 1 (assuming we have only two redo log file groups; if we have three, it would, of course, proceed to the third group). This is shown in Figure 3-4.

image

Figure 3-4. Writing to log file groups

The act of switching from one log file group to another is called a log switch. It is important to note that a log switch may cause a temporary “pause” in a poorly configured database. Since the redo logs are used to recover transactions in the event of a failure, we must be certain we won’t need the contents of a redo log file before we are able to use it. If Oracle isn’t sure that it won’t need the contents of a log file, it will suspend operations in the database momentarily and make sure that the data in the cache that this redo “protects” is safely written (checkpointed) onto disk. Once Oracle is sure of that, processing will resume and the redo file will be reused.

We’ve just started to talk about a key database concept: checkpointing. To understand how online redo logs are used, you’ll need to know something about checkpointing, how the database buffer cache works, and what a process called Database Block Writer (DBWn) does. The database buffer cache and DBWn are covered in more detail later on, but we’ll skip ahead a little anyway and touch on them now.

The database buffer cache is where database blocks are stored temporarily. This is a structure in Oracle’s SGA. As blocks are read, they are stored in this cache, hopefully so we won’t have to physically reread them later. The buffer cache is first and foremost a performance-tuning device. It exists solely to make the very slow process of physical I/O appear to be much faster than it is. When we modify a block by updating a row on it, these modifications are done in memory to the blocks in the buffer cache. Enough information to redo (replay) this modification is stored in the redo log buffer, another SGA data structure. When we COMMIT our modifications, making them permanent, Oracle does not go to all of the blocks we modified in the SGA and write them to disk. Rather, it just writes the contents of the redo log buffer out to the online redo logs. As long as that modified block is in the buffer cache and not on disk, we need the contents of that online redo log in case the database fails. If, at the instant after we committed, the power was turned off, the database buffer cache would be wiped out.

If this happens, the only record of our change is in that redo log file. Upon restart of the database, Oracle will actually replay our transaction, modifying the block again in the same way we did and committing it for us. So, as long as that modified block is cached and not written to disk, we can’t reuse (overwrite) that redo log file.

This is where DBWn comes into play. This Oracle background process is responsible for making space in the buffer cache when it fills up and, more important, for performing checkpoints. A checkpoint is the writing of dirty (modified) blocks from the buffer cache to disk. Oracle does this in the background for us. Many things can cause a checkpoint to occur, the most common being a redo log switch.

As we filled up log file 1 and switched to log file 2, Oracle initiated a checkpoint. At this point, DBWn started writing to disk all of the dirty blocks that are protected by log file group 1. Until DBWn flushes all of these blocks protected by that log file, Oracle can’t reuse (overwrite) it. If we attempt to use it before DBWn has finished its checkpoint, we’ll get a message like this in our database’s ALERT log:

...
Thread 1 cannot allocate new log, sequence 66
Checkpoint not complete
Current log# 2 seq# 65 mem# 0: /home/ora12cr1/app/ora12cr1/oradata/orcl/redo01.log
...

So, when this message appeared, processing was suspended in the database while DBWn hurriedly finished its checkpoint. Oracle gave all the processing power it could to DBWn at that point in the hope it would finish faster.

This is a message you never want to see in a nicely tuned database instance. If you do see it, you know for a fact that you have introduced artificial, unnecessary waits for your end users. This can always be avoided. The goal (and this is for the DBA, not the developer necessarily) is to have enough online redo log files allocated so that you never attempt to reuse a log file before the checkpoint (initiated by the log switch) completes. If you see this message frequently, it means a DBA has not allocated sufficient online redo logs for the application, or that DBWn needs to be tuned to work more efficiently.
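
One way to judge whether you have enough (and large enough) online redo logs is to look at how frequently log switches occur. The following query against V$LOG_HISTORY is a minimal sketch that counts switches per hour over the last day; a high switch rate, combined with the message above, suggests the logs are too small or too few:

EODA@ORA12CR1> select trunc(first_time,'HH24') hr, count(*) switches
  2    from v$log_history
  3   where first_time > sysdate - 1
  4   group by trunc(first_time,'HH24')
  5   order by 1;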

Different applications will generate different amounts of redo log. A Decision Support System (DSS, query only) or DW system will naturally generate significantly less online redo logging than an OLTP (transaction processing) system would, day to day. A system that does a lot of image manipulation in Binary Large Objects (BLOBs) in the database may generate radically more redo than a simple order-entry system. An order-entry system with 100 users will probably generate a tenth the amount of redo 1,000 users would generate. Thus, there is no “right” size for your redo logs, although you do want to ensure they are large enough for your unique workload.

You must take many things into consideration when setting both the size of and the number of online redo logs. Many of them are beyond the scope of this book, but I’ll list some of them to give you an idea:

· Peak workloads: You’d like your system not to have to wait because of checkpoint-not-complete conditions and not to become bottlenecked during your peak processing. You should size your redo logs not for average hourly throughput, but rather for your peak processing. If you generate 24GB of log per day, but 10GB of that log is generated between 9:00 am and 11:00 am, you’ll want to size your redo logs large enough to carry you through that two-hour peak. Sizing them for an average of 1GB per hour would probably not be sufficient.

· Lots of users modifying the same blocks: Here you might want large redo log files. Since everyone is modifying the same blocks, you’d like to update them as many times as possible before writing them out to disk. Each log switch will fire a checkpoint, so you’d like to switch logs infrequently. This may, however, affect your recovery time.

· Mean time to recover: If you must ensure that a recovery takes as little time as possible, you may be swayed toward smaller redo log files, even if the previous point is true. It will take less time to process one or two small redo log files than a gargantuan one upon recovery. The overall system will run slower than it absolutely could day to day perhaps (due to excessive checkpointing), but the amount of time spent in recovery will be shorter. There are other database parameters that may also be used to reduce this recovery time, as an alternative to the use of small redo log files.

Archived Redo Log

The Oracle database can run in one of two modes: ARCHIVELOG mode and NOARCHIVELOG mode. The difference between these two modes is simply what happens to a redo log file when Oracle goes to reuse it. “Will we keep a copy of that redo, or should Oracle just overwrite it, losing it forever?” is an important question to answer, because unless you keep a copy of that redo, you can’t use a backup to recover the database to that point in time.
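
You can check which mode a given database is running in with a simple query against V$DATABASE (or, when connected as SYSDBA in SQL*Plus, with the ARCHIVE LOG LIST command); the result will be either ARCHIVELOG or NOARCHIVELOG:

EODA@ORA12CR1> select log_mode from v$database;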

Say you take a backup once a week on Saturday. Now, on Friday afternoon, after you have generated hundreds of redo logs over the week, your hard disk fails. If you have not been running in ARCHIVELOG mode, the only choices you have right now are as follows:

· Drop the tablespace(s) associated with the failed disk. Any tablespace that had a file on that disk must be dropped, including the contents of that tablespace. If the SYSTEM tablespace (Oracle’s data dictionary) or some other important system-related tablespace like your UNDO tablespace is affected, you can’t do this. You will have to use the next option instead.

· Restore last Saturday’s data and lose all of the work you did that week.

Neither option is very appealing. Both imply that you lose data. If you had been executing in ARCHIVELOG mode, on the other hand, you simply would have found another disk and restored the affected files from Saturday’s backup onto it. Then, you would have applied the archived redo logs and, ultimately, the online redo logs to them (in effect replaying the week’s worth of transactions in fast-forward mode). You lose nothing. The data is restored to the point of the failure.

People frequently tell me they don’t need ARCHIVELOG mode for their production systems. I have yet to meet anyone who was correct in that statement. I believe that a system is not a production system unless it is in ARCHIVELOG mode. A database that is not in ARCHIVELOG mode will, someday, lose data. It is inevitable; you will lose data (not might, but will) if your database is not in ARCHIVELOG mode.

“We are using RAID-5, so we are totally protected” is a common excuse. I’ve seen cases where, due to a manufacturing error, all disks in a RAID set froze, all at about the same time. I’ve seen cases where the hardware controller introduced corruption into the data files, so people safely protected corrupt data with their RAID devices. RAID also does not do anything to protect you from operator error, one of the most common causes of data loss. RAID does not mean the data is safe; it might be more available and it might be safer, but data stored solely on a RAID device will be lost someday; it is only a matter of time.

In those cases, if we had had the backups from before the hardware or operator error and the archives were not affected, we could have recovered. The bottom line is that there is no excuse for not being in ARCHIVELOG mode on a system where the data is of any value. Performance is no excuse; properly configured archiving adds little to no overhead. This, and the fact that a fast system that loses data is useless, means that even if archiving added 100 percent overhead, you would still need to do it. A feature is overhead if you can remove it and lose nothing important; overhead is like icing on the cake. Preserving your data, and making sure you don’t lose your data, isn’t overhead—it’s the DBA’s primary job!

Only a test or maybe a development system should execute in NOARCHIVELOG mode. Most development systems should be run in ARCHIVELOG mode for two reasons:

· This is how you will process the data in production; you want development to act and react as your production system would.

· In many cases, the developers pull their code out of the data dictionary, modify it, and compile it back into the database. The development database holds the current version of the code. If the development database suffers a disk failure in the afternoon, what happens to all of the code you compiled and recompiled all morning? It’s lost.

Don’t let anyone talk you out of being in ARCHIVELOG mode. You spent a long time developing your application, so you want people to trust it. Losing their data will not instill confidence in your system.

Note There are some cases in which a large DW could justify being in NOARCHIVELOG mode—if it made judicious use of READ ONLY tablespaces and was willing to fully rebuild any READ WRITE tablespace that suffered a failure by reloading the data.

Password Files

The password file is an optional file that permits remote SYSDBA (administrator) access to the database.

When you attempt to start Oracle, there is no database available that can be consulted to verify passwords. When you start Oracle on the local system (i.e., not over the network, but from the machine the database instance will reside on), Oracle will use the OS to perform the authentication.

When Oracle was installed, the person performing the installation was asked to specify an OS group for the administrators. Normally, on UNIX/Linux, this group will be DBA by default, and ORA_DBA on Windows. It can be any legitimate group name on that platform, however. That group is “special,” in that any user in that group can connect to Oracle “as SYSDBA” without specifying a username or password. For example, in my Oracle 12c Release 1 install, I specified an ora12cr1 group. Anyone in the ora12cr1 group may connect without a username/password:

[tkyte@dellpe ~]$ groups
tkyte ora12cr1 ora11gr2 ora10gr2
[tkyte@dellpe ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 3 14:15:31 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SYS@ORA12CR1> show user
USER is "SYS"

That worked. I’m connected and I could now start up this database, shut it down, or perform whatever administration I wanted to. But suppose I wanted to perform these operations from another machine, over the network. In that case, I would attempt to connect using @tns-connect-string. However, this would fail:

[tkyte@dellpe ~]$ sqlplus /@ora12cr1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 3 14:16:22 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

OS authentication won’t work over the network for SYSDBA, even if the parameter REMOTE_OS_AUTHENT (which is unsafe from a security standpoint) is set to true. And, as discussed earlier, if you’re trying to start up an instance to mount and open a database, then by definition there’s no database yet in which to look up authentication details. It is the proverbial chicken-and-egg problem.

Enter the password file. The password file stores a list of usernames and passwords that are allowed to remotely authenticate as SYSDBA over the network. Oracle must use this file to authenticate them, not the normal list of passwords stored in the database.

So, let’s correct our situation. First, verify that the REMOTE_LOGIN_PASSWORDFILE parameter is set to the default of EXCLUSIVE, meaning only one database uses a given password file:

EODA@ORA12CR1> show parameter remote_login_passwordfile

NAME TYPE VALUE
------------------------------------ ----------- ----------
remote_login_passwordfile string EXCLUSIVE

Note Other valid values for this parameter are NONE, meaning there is no password file (there are no remote SYSDBA connections), and SHARED (more than one database can use the same password file).

The next step is to use the command-line tool (on UNIX/Linux and Windows) named orapwd to create and populate the initial password file:

[ora12cr1@dellpe ~]$ orapwd
Usage: orapwd file=<fname> entries=<users> force=<y/n> asm=<y/n>
dbuniquename=<dbname> format=<legacy/12> sysbackup=<y/n> sysdg=<y/n>
syskm=<y/n> delete=<y/n> input_file=<input-fname>

Usage: orapwd describe file=<fname>

where
...
There must be no spaces around the equal-to (=) character.

The command we’ll use when logged into the operating system account that owns the Oracle software is

[ora12cr1@dellpe dbs]$ orapwd file=orapw$ORACLE_SID password=bar entries=20

This creates a password file named orapwora12cr1 in my case (my ORACLE_SID is ora12cr1). That’s the naming convention for this file on most UNIX/Linux platforms (see your installation/OS admin guide for details on the naming of this file on your platform), and it resides in the $ORACLE_HOME/dbs directory. On Windows, this file is named PW%ORACLE_SID%.ora and it’s located in the %ORACLE_HOME%\database directory. You should navigate to the correct directory prior to running the command to create that file, or move that file into the correct directory afterward.

Currently, the only user in that file is SYS, even if there are other SYSDBA accounts on that database (they are not in the password file yet). Using that knowledge, however, we can for the first time connect as SYSDBA over the network:

[tkyte@dellpe ~]$ sqlplus sys/bar@ora12cr1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 3 14:21:08 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SYS@ORA12CR1>

Note If you experience an ORA-12505 “TNS:listener does not currently know of SID given in connect Descriptor” error during this step, that means that the database listener is not configured with a static registration entry for this server. The DBA has not permitted remote SYSDBA connections when the database instance is not up. This will be the case for most Oracle installations for version 9i and above. You would need to configure static server registration in your listener.ora configuration file. Please search for "Configuring Static Service Information" (in quotes) on the OTN (Oracle Technology Network) documentation search page for the version of the database you are using for details on configuring this static service.

We have been authenticated, so we are in. We can now successfully start up, shut down, and remotely administer this database using the SYSDBA account. Now, we have another user, OPS$TKYTE, who has been granted SYSDBA, but will not be able to connect remotely yet:

[tkyte@dellpe ~]$ sqlplus 'ops$tkyte/foobar'@ora12cr1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 3 14:22:21 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

The reason for this is that OPS$TKYTE is not yet in the password file. In order to get OPS$TKYTE into the password file, we need to “regrant” that account the SYSDBA privilege:

[tkyte@dellpe ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 3 14:23:11 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SYS@ORA12CR1> grant sysdba to ops$tkyte;

Grant succeeded.

SYS@ORA12CR1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[tkyte@dellpe ~]$ sqlplus 'ops$tkyte/foobar'@ora12cr1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 3 14:23:25 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SYS@ORA12CR1>

This created an entry in the password file for us, and Oracle will now keep the password in sync. If OPS$TKYTE alters his password, the old one will cease working for remote SYSDBA connections and the new one will start working.

The same process is repeated for any user who was a SYSDBA but is not yet in the password file.
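
You can see exactly which accounts are currently in the password file by querying V$PWFILE_USERS. This is a quick check (the view gained additional columns in later releases as new administrative privileges such as SYSBACKUP were added):

EODA@ORA12CR1> select username, sysdba, sysoper from v$pwfile_users;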

Change Tracking File

The change-tracking file is an optional file for use with Oracle 10g Enterprise Edition and above. The sole purpose of this file is to track which blocks have been modified since the last incremental backup. With this information, the Recovery Manager (RMAN) tool can back up only the database blocks that have actually been modified, without having to read the entire database.

In releases prior to Oracle 10g, an incremental backup would have had to read the entire set of database files to find blocks that had been modified since the last incremental backup. So, if you had a 1TB database to which you simply added 500MB of new data (e.g., a data warehouse load), the incremental backup would have read 1TB of data to find that 500MB of new information to back up. The incremental backup would have stored significantly less data than a full backup, but it would still have had to read the entire database.

In Oracle 10g Enterprise Edition and up, that’s no longer the case. As Oracle is running, and as blocks are modified, Oracle optionally maintains a file that tells RMAN what blocks have been changed. Creating this change-tracking file is rather simple and is accomplished via the ALTER DATABASE command:

SYS@ORA12CR1> alter database enable block change tracking using file
2 '/home/ora12cr1/oradata/ORA12CR1/changed_blocks.bct';
Database altered.

Caution I’ll say this from time to time throughout the book: please bear in mind that commands that set parameters, modify the database, or make fundamental changes should not be issued lightly, and should definitely be tested prior to performing them on your “real” system. The preceding command will, in fact, cause the database to do more work; it will consume resources.
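
To verify that block change tracking is enabled, and to see where the tracking file lives and how large it has grown, you can query V$BLOCK_CHANGE_TRACKING:

SYS@ORA12CR1> select status, filename, bytes from v$block_change_tracking;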

To turn off and remove the block change-tracking file, you’d use the ALTER DATABASE command once again:

SYS@ORA12CR1> alter database disable block change tracking;
Database altered.

Note that this command will erase the block change-tracking file. It does not just disable the feature—it removes the file as well.

Note On certain operating systems, such as Windows, you might find that if you run my example—creating a block change-tracking file and then disabling it—the file appears to still exist. This is an OS-specific issue—it does not happen on many operating systems. It will happen only if you CREATE and DISABLE the change-tracking file from a single session. The session that created the block change-tracking file will leave that file open and some operating systems will not permit you to erase a file that has been opened by a previous process (for example, the session process that created the file). This is harmless; you just need to remove the file yourself later.

You can enable this new block change-tracking feature in either ARCHIVELOG or NOARCHIVELOG mode. But remember, a database in NOARCHIVELOG mode, where the redo log generated daily is not retained, can’t recover all changes in the event of a media (disk or device) failure! A NOARCHIVELOG mode database will lose data someday. We will cover these two database modes in more detail in Chapter 9.

Flashback Logs

Flashback logs were introduced in Oracle 10g in support of the FLASHBACK DATABASE command, a new feature of the Enterprise Edition of the database in that release. Flashback logs contain “before images” of modified database blocks that can be used to return the database to the way it was at some prior point in time.

Flashback Database

The FLASHBACK DATABASE command was introduced to speed up the otherwise slow process of point-in-time database recovery. It can be used in place of a full database restore and a rolling forward using archive logs, and it is primarily designed to speed up the recovery from an “accident.” For example, let’s take a look at what a DBA might do to recover from an accidentally dropped schema, in which the right schema was dropped, just in the wrong database (it was meant to be dropped in the test environment). The DBA immediately recognizes the mistake he has made and shuts down the database right away. Now what?

Prior to the FLASHBACK DATABASE capability, what would probably happen is this:

1. The DBA would shut down the database.

2. The DBA would restore the last full backup of the database from tape (typically), generally a long process. This would typically be initiated with RMAN via RESTORE DATABASE UNTIL <point in time>.

3. The DBA would restore all archive redo logs generated since the backup that were not available on the system.

4. Using the archive redo logs (and possibly information in the online redo logs), the DBA would roll the database forward and stop rolling forward at a point in time just before the erroneous DROP USER command. Steps 3 and 4 in this list would typically be initiated with RMAN via RECOVER DATABASE UNTIL <point in time>.

5. The database would be opened with the RESETLOGS option.

This was a nontrivial process with many steps and would generally consume a large piece of time (time when no one could access the database, of course). The causes of a point-in-time recovery like this are many: an upgrade script gone awry, an upgrade gone bad, an inadvertent command issued by someone with the privilege to issue it (a mistake, probably the most frequent cause), or some process introducing data integrity issues into a large database (again, an accident; maybe it was run twice instead of just once, or maybe it had a bug). Whatever the reason, the net effect was a large period of downtime.

The steps to recover in Oracle 10g Enterprise Edition and above, assuming you configured the flashback database capability, would be as follows:

1. The DBA shuts down the database.

2. The DBA startup-mounts the database and issues the flashback database command, using either an SCN (the Oracle internal clock), a restore point (which is a pointer to an SCN), or a timestamp (wall clock time), which would be accurate to within a couple of seconds.

3. The DBA opens the database with resetlogs.

To use this feature, the database must be in ARCHIVELOG mode and must have been set up to enable the FLASHBACK DATABASE command. What I’m trying to say is that you need to set up this capability before you ever need to use it. It is not something you can enable after the damage is done; you must make a conscious decision to use it, whether you have it on continuously or whether you use it to set restore points.
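
As a rough sketch of what that setup and a later “rewind” might look like (assuming a Fast Recovery Area, discussed next, has already been configured; in releases prior to Oracle 11g Release 2 the database had to be mounted, not open, to enable flashback, and the 15-minute rewind shown here is purely illustrative):

SYS@ORA12CR1> select flashback_on from v$database;
SYS@ORA12CR1> alter database flashback on;

-- later, after the accident:
SYS@ORA12CR1> shutdown immediate
SYS@ORA12CR1> startup mount
SYS@ORA12CR1> flashback database to timestamp (sysdate - 15/1440);
SYS@ORA12CR1> alter database open resetlogs;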

Fast Recovery Area

The Fast Recovery Area is a new concept in Oracle 10g and above. Starting with 10g and for the first time in many years (over 25 years), the basic concept behind database backups has changed in Oracle. In the past, the design of backup and recovery in the database was built around the concept of a sequential medium, such as a tape device. That is, random access devices (disk drives) were always considered too expensive to waste for mere backups. You used relatively inexpensive tape devices with large storage capacities.

Today, however, you can buy terabytes of disk storage at a very low cost. In fact, my son Alan was the first kid on the block with a 1TB NAS (network-attached storage device). It cost $125.00 USD. I remember my first hard drive on my personal computer: a whopping 40MB. I actually had to partition it into two logical disks because the OS I was using (MS-DOS at the time) could not recognize a disk larger than 32MB. Things have certainly changed in the last 25 years or so.

The Fast Recovery Area in Oracle is a location where the database will manage many of the files related to database backup and recovery. In this area (an area being a part of a disk set aside for this purpose; a directory, for example), you could find the following:

· RMAN backup pieces (full and/or incremental backups)

· RMAN image copies (byte-for-byte copies of data files and control files)

· Online redo logs

· Archived redo logs

· Multiplexed control files

· Flashback logs

Oracle uses this new area to manage these files, so the server will know what is on disk and what is not on disk (and perhaps on tape elsewhere). Using this information, the database can perform operations like a disk-to-disk restore of a damaged data file or the flashing back (a “rewind” operation) of the database to undo an operation that should not have taken place. For example, you could use the FLASHBACK DATABASE command to put the database back the way it was five minutes ago (without doing a full restore of the database and a point-in-time recovery). That would allow you to “undrop” that accidentally dropped user account.

The Fast Recovery Area is more of a logical concept. It is a holding area for the file types discussed in this chapter. Its use is optional—you don’t need to use it, but if you want to use some advanced features, such as the Flashback Database, you must use this area to store the information.
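
Configuring the Fast Recovery Area amounts to setting two initialization parameters: one for the amount of disk space Oracle is allowed to consume there and one for the location (the size must be set before the destination). A minimal sketch, with a purely illustrative path and size:

SYS@ORA12CR1> alter system set db_recovery_file_dest_size=50g;
System altered.

SYS@ORA12CR1> alter system set db_recovery_file_dest='/u01/fast_recovery_area';
System altered.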

DMP Files (EXP/IMP Files)

Export and Import are venerable Oracle data extraction and load tools that have been around for many versions. Export’s job is to create a platform-independent DMP file that contains all of the required metadata (in the form of CREATE and ALTER statements), and optionally the data itself, to re-create tables, schemas, or even entire databases. Import’s sole job is to read these DMP files, execute the DDL statements, and load any data it finds.

Note Export is officially deprecated as of Oracle 11g Release 2. It is supplied only for use with legacy database structures; new data types, new structures, and new database features will not be supported by this tool. I strongly recommend using Data Pump, the export/import replacement tool introduced with Oracle 10g several years back.

DMP files are designed to be forward-compatible, meaning that newer releases can read older releases’ DMP files and process them successfully. I have heard of people exporting a version 5 database and successfully importing it into Oracle 10g (just as a test). So Import can read older version DMP files and process the data therein. The converse, however, is most definitely not true: the Import process that comes with Oracle9i Release 1 can’t—will not—successfully read a DMP file created by Oracle9i Release 2 or Oracle 10g Release 1. For example, I exported a simple table from Oracle 11g Release 2. Upon trying to use these DMP files in Oracle 10g Release 2, I soon discovered Oracle 10g Release 2 Import will not even attempt to process the Oracle 11g Release 2 DMP file:

$ imp userid=/ full=y

Import: Release 10.2.0.4.0 - Production on Wed Jan 20 18:21:03 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

IMP-00010: not a valid export file, header failed verification
IMP-00000: Import terminated unsuccessfully

Even in the cases where IMP was able to recognize the file, there would be a great chance that the DDL generated by the 11g Release 2 EXP tool would not be DDL that the earlier version of Oracle would recognize. For example, suppose you export from any release of Oracle version 9i Release 2 or above. You would find in the export file that every CREATE TABLE has a COMPRESS or NOCOMPRESS option added to it. Oracle9i Release 2 added basic table compression as a feature. If you were able to get any release of Oracle older than 9i Release 2 to read that export file, you’d find that the DDL it contains would fail—100 percent of the time. Not a single CREATE TABLE statement would work because the NOCOMPRESS/COMPRESS keywords would not be recognized by the older release.

DMP files are platform-independent, so you can safely take an export from any platform, transfer it to another, and import it (as long as the versions of Oracle permit). One caveat, however, with Windows and FTPing of files is that Windows will consider a DMP file a text file by default and will tend to convert linefeeds (the end-of-line marker on UNIX/Linux) into carriage return/linefeed pairs, thus totally corrupting the DMP file. When transferring a DMP file in Windows, make sure you’re doing a binary transfer. If your subsequent import won’t work, check the source and target file sizes to make sure they’re the same. I can’t recall how many times this issue has brought things to a screeching halt while the file had to be retransferred.

DMP files are binary files, meaning you won’t be editing them to change them. You can extract a large amount of information from them—CREATE DDL and more—but you won’t be editing them in a text editor (or any sort of editor, actually). In the book Expert One-on-One Oracle (Apress, 2003), I spent a great deal of time discussing the Import and Export utilities and working with DMP files. Because these tools have fallen out of favor, replaced by the infinitely more flexible Data Pump utilities, I’ll defer a full discussion of how to manipulate them, extract data from them, and use them in general to the existing first edition of this book.

Data Pump Files

Data Pump is a file format used by at least two tools in Oracle. External tables can load and unload data in the Data Pump format, and the new import/export tools IMPDP and EXPDP use this file format in much the same way IMP and EXP used the DMP file format.

Note The Data Pump format is exclusive to Oracle 10g Release 1 and above—it did not exist in any Oracle9i release, nor can it be used with that release.

Pretty much all of the same caveats that applied to DMP files mentioned previously will apply over time to Data Pump files as well. They are cross-platform (portable) binary files that contain metadata (not stored in CREATE/ALTER statements, but rather in XML) and possibly data. That they use XML as a metadata representation structure is actually relevant to you and me as end users of the tools. IMPDP and EXPDP have some sophisticated filtering and translation capabilities never seen in the IMP/EXP tools of old. This is in part due to the use of XML and to the fact that a CREATE TABLE statement is not stored as a CREATE TABLE, but rather as a marked-up document. This permits easy implementation of a request like “Please replace all references to tablespace FOO with tablespace BAR.” When the metadata was stored in the DMP file as CREATE/ALTER statements, the Import utility would have had to basically parse each SQL statement before executing it in order to accomplish the feat of changing tablespace names (something it does not do). IMPDP, however, just has to apply a simple XML transformation to accomplish the same. FOO, when it refers to a TABLESPACE, would be surrounded by <TABLESPACE>FOO</TABLESPACE> tags (or some other similar representation).
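
For instance, the tablespace-remapping request just described is a single parameter on a Data Pump import. The following hypothetical invocation (the username, directory, dump file name, and tablespace names are illustrative only) would import a dump file previously created by EXPDP, rewriting every reference to tablespace FOO as a reference to tablespace BAR:

$ impdp eoda/foo directory=TMP dumpfile=mydata.dmp remap_tablespace=FOO:BAR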

The fact that XML is used has allowed the EXPDP and IMPDP tools to literally leapfrog the old EXP and IMP tools with regard to their capabilities. In Chapter 15, we’ll take a closer look at these tools. Before we get there, however, let’s see how we can use this Data Pump format to quickly extract some data from database A and move it to database B. We’ll be using an “external table in reverse” here.

External tables, originally introduced in Oracle9i Release 1, gave us the ability to read flat files—plain old text files—as if they were database tables. We had the full power of SQL to process them. They were read-only and designed to get data from outside Oracle in. External tables in Oracle 10g Release 1 and above can go the other way: they can be used to get data out of the database in the Data Pump format to facilitate moving the data to another machine or another platform. To start this exercise, we’ll need a DIRECTORY object, telling Oracle the location to unload to:

EODA@ORA12CR1> create or replace directory tmp as '/tmp';
Directory created.

EODA@ORA12CR1> create table all_objects_unload
2 organization external
3 ( type oracle_datapump
4 default directory TMP
5 location( 'allobjects.dat' )
6 )
7 as
8 select * from all_objects
9 /
Table created.

And that literally is all there is to it: we have a file in /tmp named allobjects.dat that contains the contents of the query select * from all_objects. We can peek at this information:

EODA@ORA12CR1> !strings /tmp/allobjects.dat | head
"EODA"."U"
x86_64/Linux 2.4.xx
AL32UTF8
12.00.00.00.00
001:001:000001:000001
i<?xml version="1.0"?>
<ROWSET>
<ROW>
<STRMTABLE_T>

That’s just the head, or top, of the file. Now, using a binary FTP (same caveat as for a DMP file), you can move that file to any other platform where you have Oracle 12c installed. There, by issuing a CREATE DIRECTORY statement (to tell the database where the file is) and a CREATE TABLE statement such as this, you can make that data available:

create table t
( OWNER VARCHAR2(30),
OBJECT_NAME VARCHAR2(30),
SUBOBJECT_NAME VARCHAR2(30),
OBJECT_ID NUMBER,
DATA_OBJECT_ID NUMBER,
OBJECT_TYPE VARCHAR2(19),
CREATED DATE,
LAST_DDL_TIME DATE,
TIMESTAMP VARCHAR2(19),
STATUS VARCHAR2(7),
TEMPORARY VARCHAR2(1),
GENERATED VARCHAR2(1),
SECONDARY VARCHAR2(1)
)
organization external
( type oracle_datapump
default directory TMP
location( 'allobjects.dat' )
);

You would be set to read that unloaded data using SQL immediately. That is the power of the Data Pump file format: immediate transfer of data from system to system, over “sneakernet” if need be. Think about that the next time you’d like to take a subset of data home to work with over the weekend while testing.

Even if the database character sets differ (they did not in this example), Oracle has the ability now to recognize the differing character sets due to the Data Pump format and deal with them. Character-set conversion can be performed on the fly as needed to make the data “correct” in each database’s representation.

Again, we’ll come back to the Data Pump file format in Chapter 15, but this section should give you an overall feel for what it is about and what might be contained in the file.

Flat Files

Flat files have been around since the dawn of electronic data processing. We see them literally every day. The text alert log described previously is a flat file.

I found the following definition for “flat file” on the Web, and feel it pretty much wraps things up:

“An electronic record that is stripped of all specific application (program) formats. This allows the data elements to be migrated into other applications for manipulation. This mode of stripping electronic data prevents data loss due to hardware and proprietary software obsolescence.” 1

A flat file is simply a file in which each “line” is a “record,” and the text on each line is delimited, typically by a comma or a pipe (vertical bar). Flat files are easily read by Oracle using either the legacy data-loading tool SQLLDR or external tables. In fact, I will cover this in detail in Chapter 15 (External tables are also covered in Chapter 10).

However, flat files are not something produced so easily by Oracle. For whatever reason, there is no simple command-line tool to export information to a flat file. Tools such as APEX, SQL Developer, and Enterprise Manager facilitate this process, but there are no official command-line tools that are easily usable in scripts and such to perform this operation.
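
That said, for quick ad hoc extracts, plain SQL*Plus with SPOOL and a little formatting will produce a simple delimited file. The following is a minimal sketch (the SCOTT demo table and columns are used purely for illustration); for anything large or complex, you will want a more robust utility:

set heading off feedback off pagesize 0 linesize 1000 trimspool on
spool /tmp/dept.csv
select deptno || ',' || dname || ',' || loc from scott.dept;
spool off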

That’s one reason I decided to mention flat files in this chapter: to propose a set of tools capable of producing simple flat files. I have over the years developed three methods to accomplish this task, each appropriate in its own right. The first utility uses a combination of PL/SQL and UTL_FILE with dynamic SQL to accomplish the job. With small volumes of data (hundreds or thousands of rows), this utility is sufficiently flexible and fast enough to get the job done. However, it must create its files on the database server machine, which is sometimes not the location we’d like for them. To that end, I have a SQL*Plus utility that creates flat files on the machine that is running SQL*Plus. Since SQL*Plus can connect to an Oracle server anywhere on the network, this gives us the ability to unload to a flat file any data from any database on the network. Lastly, when the need for total speed is there, nothing but C will do (if you ask me). So I also have a Pro*C command-line unloading tool to generate flat files. All of these tools are freely available at http://tkyte.blogspot.com/2009/10/httpasktomoraclecomtkyteflat.html, and any new tools developed for unloading to flat files will appear there as well.

Summary

In this chapter, we explored the important types of files used by the Oracle database, from lowly parameter files (without which you won’t even be able to get started) to the all-important redo log and data files. We examined the storage structures of Oracle from tablespaces to segments and extents, and finally down to database blocks, the smallest unit of storage. We briefly reviewed how checkpointing works in the database, and we even started to look ahead at what some of the physical processes or threads of Oracle do. We also covered many optional file types such as password files, change-tracking files, Data Pump files, and more. In the next chapter we are ready to look at the Oracle memory structures.

____________________

1See http://osulibrary.oregonstate.edu/archives/handbook/definitions