Achieving Extreme Performance with Oracle Exadata (Oracle Press) (2011)

PART I
Features and Foundations

CHAPTER 1
Oracle and Tightly Integrated Hardware and Software Platforms

In 2008, Oracle introduced its first “Database Machine” with Exadata storage and promoted the new Machine as a breakthrough product. Only a year later, Oracle rolled out a set of major enhancements to the original offering in the form of the Sun Oracle Database Machine. In 2010, new node configurations for the Oracle Exadata Database Machine were announced. Extraordinary levels of customer interest and demand led Larry Ellison, Oracle Chief Executive Officer, to call Exadata “the most exciting product [from Oracle] in many, many years.” Today, the Oracle Exadata Database Machine is available for deployment as a platform dedicated to a single data warehousing or online transaction processing database, and for consolidation where multiple Oracle databases are deployed on one Machine.

Since Oracle is building and supporting database features designed for this specific platform, including the Oracle Exadata Storage Server Software, this offering raises some new considerations. Throughout most of Oracle’s history, the Oracle database was designed to deploy and perform equally well on a variety of server platforms and storage devices. In this book, we’ll cover what makes the Oracle Exadata Database Machine a unique platform as an integrated Oracle software and hardware combination, and how you can take advantage of its capabilities when you deploy it.

Despite introducing a shift in Oracle’s platform strategy, the Oracle Exadata Database Machine builds upon previous lessons the Oracle community learned when deploying and managing Oracle databases on other platforms. So, we’ll also cover how those earlier strategies and previously acquired skills can be utilized for this platform. When you finish reading this book, you should have a thorough understanding of the software and hardware concepts, and many of the best practices, associated with the Oracle Exadata Database Machine and Exadata Storage Server technology.

If you think of the Oracle Exadata Database Machine simply as an “appliance,” your understanding of the platform is incomplete. Oracle has been careful to avoid that tag, though the platform is sometimes described as “appliance-like.” The reason for this is simple. Some of the characteristics we will describe, such as the specific prebalanced configurations and rapid installation services, are reminiscent of an appliance. But other characteristics, such as the need for database administrators (DBAs) to manage patches (updates to the database and to operating system versions), are decidedly familiar to those managing Oracle on traditional platforms. So throughout the book, we’ll be careful to describe exactly how and why this platform combines characteristics of both.

In this first chapter, we’ll briefly describe why we are seeing appliances and appliance-like platforms emerge now, and we’ll start by describing the history of such platforms. This chapter also introduces the Oracle Exadata Database Machine and Exadata concepts that we’ll cover in much more detail throughout the remainder of the book.

A History of Appliance-like Computing Solutions

The birth of appliance-like computing solutions is difficult to pin down. Computer systems were designed to store and retrieve data from their inception, and in the early days of computing, all underlying systems consisting of servers, storage, and software were designed, assembled, and supported by single vendors. Following on this design philosophy, early databases of varying types were optimized for specific hardware provided by those same vendors.

The 1970s saw the introduction of a new type of database, the relational database. Some of these early database vendors continued to develop and support specific hardware platforms. To speed up data warehousing performance (then called decision support), Teradata was founded in 1979 around the concept of massively parallel processing systems, featuring a tightly coupled Teradata database, operating system, and server and storage platform. Another team of developers, originally involved in the creation of the Ingres database, formed Britton Lee in 1979 and introduced a hardware and software platform that served as a database accelerator, optimizing performance for the client platforms that surrounded other popular back-end servers. IBM introduced support of relational databases for large-scale transaction processing in 1983 with DB2 for MVS, a database specifically designed for its mainframes. In 1988, IBM introduced the AS/400 with its own relational database, SQL/400 (also known as DB2/400), targeting transaction-processing applications for mid-sized companies and providing a platform noted for its ease of deployment and management.

One of the relational databases introduced in the late 1970s was Oracle, but Oracle (and many of Oracle’s competitors) created database engines that were deployable on a variety of hardware vendors’ platforms. In the 1980s, these databases gained popularity and were developed using portable programming languages such as C. Support of multiple hardware platforms also became easier because of the growing popularity of Unix-based operating systems on many of these platforms. The growing popularity of database servers drove pricing for servers and storage downward and further accelerated the evolution of hardware components as vendors raced to differentiate themselves based on performance and other characteristics. The hardware components became more modular and open, enabling IT organizations and vendors to mix and match servers, storage, and other devices. Throughout the 1990s, the tradeoff that most companies weighed was the benefit of aggressive open systems pricing versus the incremental value provided by tightly integrated solutions. For common workloads in many businesses and organizations, database performance differences were negligible when making this choice, making the lower cost of commodity hardware more attractive.

At the turn of this century, some factors emerged that caused IT organizations deploying and integrating their own software and hardware components to reconsider both the benefits and challenges of a modular strategy. First, businesses of all sizes began to experience massive growth in data volume well beyond what they had previously experienced. Very large data volumes became especially common in business intelligence and data warehousing deployments that produced significant business value by answering ad hoc business questions or providing deeper analysis for strategic decision making. Business analysts using such systems often required more detailed data, longer histories, and added data from external sources to uncover solutions to their business problems. The changing workloads and growing data volumes on these systems made the sizing and integration of system components more challenging. These systems also evolved from “nice-to-have” strategic management systems into tactical business management systems, introducing the need for the better reliability, availability, and serviceability more commonly associated with transaction processing.

Some of the platform solutions from an earlier generation continued to serve this need. But the need for shorter time-to-deployment and more predictable performance also led to the appearance of new platform choices from vendors such as Netezza, Greenplum, and DATAllegro that provided more out-of-the-box “appliance” solutions. These preintegrated and predefined solutions were assembled from a combination of commodity components and innovative packaging and integration. Because the entire platform was tightly defined, integrated hardware and software optimizations could be introduced on these systems, pushing the execution of some database intelligence into the storage systems. Typically, these new vendors started with open-source databases that were customized for the hardware. Such platforms demonstrated exceptional performance for ad hoc queries, both because the predefined hardware was balanced and because the software was optimized for it. As is common in growth markets where start-up vendors appear, some thrived while others disappeared or were acquired.

Meanwhile, the platform flexibility provided by a database such as Oracle was no longer always seen as a deciding factor, or even desirable, when selecting a database technology to deploy. In some organizations, speed, ease of deployment and management, and more predictable performance became the deciding factors. It was time for a shift in Oracle’s database platform strategy. But as you’ll see, this shift was already years in the making before the release of the first Oracle Exadata Database Machine. Oracle was already taking the steps needed to transform itself into a systems company.

Oracle’s Evolution Towards Integrated Hardware and Software

When Oracle was founded in 1977 as Software Development Laboratories (SDL), relational databases were largely the subject of research papers. It was thought that relational databases would be too demanding for the hardware platforms available at the time. Nevertheless, SDL began Project Oracle for the CIA by creating a database based on IBM Research’s System R model. Among other things, the database was technically noteworthy for its early support of SQL. By 1983, Oracle had released a version of the Oracle database written in the C programming language to better enable portability to heterogeneous hardware platforms in that fast-growing market. Thus began a mantra that Oracle ran equally well on, and was optimized for, a wide variety of hardware platforms.

Though initially targeting departmental minicomputers, Oracle had its eyes on the enterprise-class server market as early as the 1980s. Oracle saw that the means to get there was through clustering such systems to support large-scale databases, and it released its first product of this type for VAX clusters in 1986. A version of this product for Unix-based platforms, Oracle Parallel Server, followed in the early 1990s and was primarily deployed for scaling parallel queries in data warehouses or providing high availability in transaction processing. A breakthrough came with the introduction of the re-architected Oracle9i database featuring Real Application Clusters (RAC) in 2001. “Cache Fusion” in Oracle9i RAC reduced traffic over the cluster interconnect, enabling RAC to provide linear performance improvements as nodes and appropriate storage and throughput were added to cluster configurations. For the first time, Oracle was able to scale data warehousing and transaction-processing workloads on a variety of clustered systems.

Shortly after, Oracle signaled a move toward providing deeper infrastructure software capabilities and support that were formerly provided by partners or were simply lacking. Oracle Database 10g Enterprise Edition was introduced in 2003 with Automatic Storage Management (ASM) and embedded cluster management software, eliminating the need for third-party software that provided those capabilities. ASM virtualizes storage resources and provides advanced volume management, enabling the database and DBA to manage tasks such as striping and mirroring of data. The Automatic Database Diagnostics Monitor (ADDM) also first appeared in this release, providing advisory alerts and suggested tactics for improving performance based on periodic snapshots of performance in a production environment. These suggestions are displayed to DBAs by Enterprise Manager’s Grid Control, whose enhanced alerting and management capabilities are critically important when deploying Oracle for large-scale databases and for “grid computing.”

In 2005, penguins appeared on stage at Oracle OpenWorld in what seemed a curious announcement at the time, but one that would have important implications in establishing a complete Oracle platform. Oracle announced it would provide support for Red Hat and SUSE Linux and deliver an “unbreakable Linux” support model. Oracle later introduced Oracle Enterprise Linux, which has its roots in Red Hat Linux and the open-source community to which Oracle had long contributed technology. Meanwhile, Oracle had begun work on establishing reference configurations for data warehousing on Linux-based and other platforms from a variety of platform vendors.

Taken together, these moves by Oracle and the changes in deployment strategies at many of Oracle’s customers led to Oracle’s decision to launch a tightly integrated hardware and software platform. The initial platform, the HP Oracle Database Machine with Exadata storage, became available in 2008. In 2009, Oracle announced a second generation of the Oracle Exadata Database Machine and a new hardware platform—the Sun Oracle Database Machine. The acquisition of Sun by Oracle Corporation in 2010 completed Oracle’s transition to a vendor that can design, build, and support complete database hardware and software solutions that are deployed as systems. In September 2010, Oracle announced new X2-2 and X2-8 nodes for the Oracle Exadata Database Machine and also announced Solaris 11 as an alternative to the Oracle Enterprise Linux operating system for database server nodes. The “Unbreakable Enterprise Kernel for Linux” was announced at the same time to provide support for the large number of processing cores available in the Exadata Database Machine.

We’ll discuss some implications of the Sun acquisition later in this chapter and some of the likely future directions. Table 1-1 summarizes some of the important events and timing that led to the Oracle Exadata Database Machine.

TABLE 1-1. Key Events in the Evolution Toward the Oracle Exadata Database Machine

Next, we’ll take a closer look at the fundamental concepts behind the Oracle Exadata Database Machine. These concepts will set the stage for what follows in this book.

Oracle Exadata Database Machine Fundamental Concepts

Prior to the availability of Oracle’s Exadata Database Machines, most systems deployed by IT organizations to host Oracle databases were customized configurations designed by those organizations, their systems integrators, or the server and storage vendors. Configuring such a custom platform solution requires a series of steps, each of which tends to be complicated and time consuming and offers opportunities to make mistakes.

Such designs start with pre-implementation system sizing based on real and anticipated workloads. Once the servers and storage are determined for the platform, the components are acquired, installed, configured, tested, and validated. Individuals involved in this process often include systems and storage architects, database administrators, hardware and storage vendors, network vendors, and systems integrators. Delivered configurations can be problematic if the design is not properly balanced, most often because the throughput delivered by the storage subsystem, storage-to-server connections, and networks is not matched to what the server-side CPUs and memory are capable of handling.

Oracle began to address the problem of improperly balanced data warehousing systems through the Oracle Optimized Warehouse Initiative, publicly announced in 2007. A series of reference configurations provided balanced server and storage configuration recommendations and eliminated much of the pre-implementation system-sizing guesswork. Developed with a number of Oracle’s server and storage partners, the reference configurations were defined in charts that contained a variety of recommended server and storage combinations. The optimal platform for deployment was selected based on workload characteristics and required storage capacity. When closely adhered to, the reference configurations could deliver optimal performance for generic server and storage platforms.

Some organizations continued to struggle in deploying balanced platforms even where reference configurations were initially used. In some situations, the recommended storage was not purchased but was replaced by cheaper storage with similar capacity but inferior performance. In other situations, the server and storage platforms were initially configured and deployed as described in the reference configurations, but later upgrades and additions to the configuration were not made in a balanced fashion.

The introduction of Oracle’s Exadata Database Machines eliminated much of this risk by offering a limited set of server and storage configurations within predefined racks. The systems scale a combination of database server nodes, Exadata Storage Server cells, and I/O throughput (including internal networking) in a balanced fashion as the system grows. So, as user workloads and data volumes grow, you can scale the systems to maintain constant response times. The Exadata Storage Server Software further enhances this scalability by minimizing the volume of data retrieved from the Exadata Storage Server cells to database server nodes, thus mitigating a typical throughput bottleneck.

The Oracle Exadata Database Machine thus shares the appliance characteristic of containing fixed components that assure a balanced hardware configuration. The multistep pre-implementation sizing, configuration, and integration process is reduced to understanding the planned workload and ordering the right standard rack configuration that will support the workload.

The first generation of this platform, the HP Oracle Database Machine, was configured specifically to handle data warehousing workloads. The ratio of database server nodes to Exadata Storage Server cells was set at 4:7, with each storage cell containing 12 disk drives for user data. The fabric for database server node-to-Exadata Storage Server cell communications and RAC node-to-node communications was a high-speed InfiniBand switch rated at 20 Gb per second. For queries tested on this platform in combination with the Exadata Storage Server Software, Oracle claimed query speed-ups of ten times or more as typical when compared to performance on other Oracle-based but non-Exadata data warehousing platforms. Platform upgrades were available in fixed configuration increments when Half-Racks and Full Racks needed to be scaled to meet increased workloads or storage demands. Key components were prepackaged and configured such that single points of failure were eliminated.

When Oracle introduced the second-generation Sun Oracle Database Machine in 2009, the ratio of database server nodes to Exadata Storage Server cells remained fixed at 4:7 in Half-Racks and Full Racks and for multiple Full Racks (and 2:3 in Quarter-Racks). As before, there were 12 disks for user data in each storage server cell. However, performance characteristics of key components were improved across the board.

The Intel Xeon 5540 CPUs in the database server nodes and Exadata Storage Server cells were about 80 percent faster than the CPUs in the earlier-generation Database Machine. The dynamic random access memory (DRAM) was upgraded from DDR2 to DDR3, and memory capacity was increased in each database server node from 32GB to 72GB. Though a Full Rack also contained 168 disks for user data in this platform, disk throughput increased by 50 percent and storage capacity increased with the introduction of 600GB and 2TB disk drives. The addition of Exadata Smart Flash Cache to the Exadata Storage Server cells (384GB of Flash per cell) added a new dimension to speeding up I/O operations and database performance. To complete the balanced performance improvements, Oracle doubled the throughput of the InfiniBand switch to 40 Gb per second. The increased power and flexibility enabled Oracle to extend the applicability of the configurations beyond data warehousing to also support transaction processing and consolidation of multiple database servers onto a single Sun Oracle Database Machine.

The Oracle Exadata Database Machine X2-2 systems announced in September 2010 configured database server nodes and Exadata Storage Server cells in the ratios previously mentioned. Nodes in the X2-2 systems still had two CPU sockets per node but now contained six-core Intel Xeon 5670 processors and 96GB of memory. At that time, Oracle also introduced large symmetric multiprocessing (SMP) nodes paired in new Oracle Exadata Database Machine X2-8 full-rack systems. Each node in the X2-8 configuration contained eight sockets holding eight-core Intel Xeon 7560 processors along with 1TB of memory.

Figure 1-1 illustrates where the key components in the Oracle Exadata Database Machine X2-2 configurations reside in a Full Rack. The X2-2 configurations are typically deployed where smaller but more numerous nodes are more appropriate for workloads and for meeting high-availability performance needs (since an individual node failure has less impact on overall system performance than the loss of a larger SMP node). Oracle Exadata Database Machine X2-8 systems are deployed where workloads scale better on large SMP nodes and/or where larger memory footprints are required.

FIGURE 1-1. Oracle Exadata Database Machine X2-2 Full Rack

The specifics of this hardware will continue to change over time. But the fundamental balanced design concepts continue with each subsequent release of the product. As new designs emerge, capacities and performance levels continue to improve across these key components in a balanced fashion.

Software Integration and the Oracle Exadata Database Machine

Though understanding the hardware concepts is important, it is equally important to understand the implications of Oracle’s software integration with this platform. Oracle’s platform support includes the operating system, Oracle Enterprise Linux, as the foundation for the database server nodes and Exadata Storage Server cells. As an alternative, Oracle also offers the Solaris 11 operating system for the database server nodes. Upon this platform, Oracle tightly integrates Oracle Database 11g Enterprise Edition and extends database functionality into storage with the Exadata Storage Server Software. Other Oracle database software components on the platform that are central to a typical deployment are RAC and the Oracle Partitioning Option.

One example of the tight integration of the hardware and database is the leveraging of Exadata Smart Flash Cache as a database cache. Supported in the Oracle Exadata Storage Server Software available with Oracle Database 11g Release 2 Enterprise Edition, the caching algorithm is smarter than a simple “least recently used” scheme, which avoids frequent flushing of the cache. The Exadata Storage Server Software thus leverages the Flash as a cache and automatically manages it by default. The DBA can also designate that specific application tables and indexes be kept in the Flash Cache by using an ALTER command with the CELL_FLASH_CACHE attribute set to KEEP.
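For example, pinning a hypothetical SALES table and one of its indexes in the Flash Cache might look like the following sketch (the object names are illustrative only):

    -- Request that these segments be kept in Exadata Smart Flash Cache
    ALTER TABLE sales STORAGE (CELL_FLASH_CACHE KEEP);
    ALTER INDEX sales_cust_idx STORAGE (CELL_FLASH_CACHE KEEP);

    -- Revert to the default caching behavior
    ALTER TABLE sales STORAGE (CELL_FLASH_CACHE DEFAULT);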

The Oracle Exadata Storage Server Software really provides two key functions—managing Sun servers as disk arrays and pushing key database functionality into storage to speed up performance. The Exadata Storage Server Software performance speed-up capabilities are transparent to Oracle SQL and the optimizer. This approach is beneficial, since applications can run unchanged on the Oracle Exadata Database Machine, and since new Exadata Storage Server Software releases that introduce new optimization techniques also provide their benefits transparently. For example, users of early HP Oracle Database Machines who later upgraded to Oracle Database 11g Release 2 and the corresponding Exadata Storage Server Software transparently gained access to new underlying features such as storage indexes and data-mining model scoring.

Because the optimization is transparent, there is no certification process for applications specifically tied to their support of the Oracle Exadata Database Machine. Rather, the applications simply need to be supported on the necessary Oracle database version (Oracle Database 11g Release 2 Enterprise Edition or later on the Sun versions of the Oracle Exadata Database Machine), scale well when deployed on RAC, and have no unusual storage characteristics that preclude the use of Automatic Storage Management.

Management of the platform is provided by Oracle Enterprise Manager Grid Control. Strategies for backups, disaster recovery, user management, security, and other common DBA tasks remain unchanged from other Oracle deployment platforms. Certainly, the balanced nature of the platform and better integrated optimization of database and hardware does simplify the DBA’s role to some degree. In addition to standard Oracle database management capabilities, Grid Control adds capabilities specific to managing Exadata Storage Server cells through a plug-in. A DBA has the ability to view how queries are resolved and see certain Exadata Storage Server optimizations, such as smart scans and Bloom filters, in their Explain Plans.
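For instance, a DBA might confirm that a query is being offloaded to storage by examining its plan; here is a minimal sketch, assuming a hypothetical SALES table:

    EXPLAIN PLAN FOR
      SELECT cust_id, SUM(amount_sold)
      FROM   sales
      WHERE  amount_sold > 1000
      GROUP  BY cust_id;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- On Exadata, an offloaded smart scan typically appears as
    -- TABLE ACCESS STORAGE FULL with a storage(...) filter predicate.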

When managing multiple databases or mixed workloads, I/O resource management through the Database Resource Manager and Exadata Storage Server Software can ensure that users and tasks will have the resources they need. Databases can be designated to run on certain database server nodes, but a more flexible means of sharing the resources of a single database node can be achieved through the use of instance caging and setting a “CPU count” per database instance in the Database Resource Manager. Enterprise Manager provides a single point of management for Quality of Service (QoS) where policies are defined and enabled, performance is monitored, and resources can be reallocated when needed.
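As a minimal sketch of instance caging, a DBA might cap one database instance at four CPUs; instance caging is enforced only while a Resource Manager plan is active (the plan name below is just an example):

    -- Limit this instance to four CPUs
    ALTER SYSTEM SET cpu_count = 4;
    -- Activate a Resource Manager plan so the cap takes effect
    ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';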

For the more technically advanced, there is an Exadata Storage Server cell command-line interface (CELLCLI) available for managing the cells. Management Services provided by the Oracle Exadata Storage Server Software deployed on the cells enable administering, managing, and querying the status of the cells from either Grid Control or the CELLCLI interface. Cell Services provide the majority of the other Exadata storage services needed, while Restart Services are used to update the Exadata software and ensure storage services are started and running.
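A few commonly used CELLCLI commands for inspecting a cell and its storage objects, shown here as a sketch run on the cell itself, illustrate the style of the interface:

    CellCLI> LIST CELL DETAIL
    CellCLI> LIST CELLDISK
    CellCLI> LIST GRIDDISK
    CellCLI> LIST FLASHCACHE DETAIL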

Impact of the Platform on Personnel

Traditionally, platform determination, deployment, and maintenance tasks are delegated to many individuals. As you might have gathered from the description of the Oracle Exadata Database Machine thus far, a platform like this can affect the roles of your enterprise architects, storage architects, development and deployment DBAs, and network and systems administrators. For an enterprise system of this type, having the right business sponsor also engaged in the process is often key to gaining funding to buy the system and claiming success after deployment.

The role of enterprise architect has gained in popularity and breadth of responsibility. Enterprise architects often evaluate a platform’s ability to meet technical architecture requirements and standards, as well as its appropriateness in meeting business requirements as defined by business sponsors. A platform like the Oracle Exadata Database Machine can introduce new considerations into such an evaluation. The potentially rapid speed of deployment and predefined balanced configuration help eliminate potential errors and the risk of project delays that often occur when platforms are custom designed. More focus can be placed on matching the value of such a platform to the delivery of solutions that meet business goals and provide real business return on investment. On the other hand, organizational standards for certain hardware and storage components might not align and might need to be overlooked in cases where the value of a complete integrated system outweighs the benefits of conflicting technical component standards. In fact, technical component specialist roles, such as the role of storage architect, typically change from selecting components to evaluating more advanced deployment considerations, such as planning for disaster recovery.

The DBA’s role also can change in fundamental ways. The Oracle Exadata Storage Server Software minimizes certain design and management considerations, such as the need to apply extensive indexing typical on other Oracle-based platforms. Striping and mirroring are handled through ASM, so basic storage management tasks often move to the DBA’s area of responsibility. DBAs for the Oracle Exadata Database Machine also typically monitor patch releases for not only the Oracle database and Exadata software, but also for the operating system. Other basic tasks remain unchanged. For example, high availability and disaster recovery can be enabled using Data Guard and Flashback. Backups are handled using Oracle’s Recovery Manager (RMAN).
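To illustrate how familiar these unchanged tasks remain, a basic backup on this platform is still the same RMAN command sequence used on other Oracle platforms (a sketch only; an actual backup strategy will differ):

    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    RMAN> LIST BACKUP SUMMARY;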

Networking the system into the current IT infrastructure is performed during the installation process. Oracle and some Oracle partners offer rapid installation services that usually are completed within two or three days after system delivery, and include the configuration of the Oracle Exadata Database Machine InfiniBand switch. Once configured, the Oracle Exadata Database Machine appears much like any other Oracle database server on a network.

As you might gather, the consolidation of system and database management roles has implications for the speed of change once the system is deployed, given that its management is much more centralized. Organizations that have deployed such platforms often find that their ability to respond to new configuration needs within IT is greatly enhanced, and there are new opportunities for architects to focus on more difficult and challenging problems common in any enterprise deployment situation.

Future Directions

Upon the completion of the acquisition of Sun by Oracle Corporation, Oracle executives began to outline a platform system vision of the future. Key in that vision is the emergence of integrated software and hardware solutions. The Oracle Exadata Database Machine represents the first such platform.

In this initial chapter, we’ve outlined at an introductory level what this platform is. Oracle has been quite clear that the vision for the future extends into the realm of end-to-end hardware and software solutions, going beyond databases and eventually including applications. The transaction processing and data warehousing data models of those applications reside in Oracle databases and hence can reside on the Oracle Exadata Database Machine today. There is great potential for Oracle to further package and bundle such solutions, easing deployment and use, speeding up performance, and assuring even better security and availability.

Of course, most applications also rely on middle-tier components. While the discussion of deployment of such components is outside the scope of this book, Oracle’s extensive array of Sun servers can certainly also meet those needs. With the continued introduction of other tightly integrated Oracle software and Sun hardware configurations such as Exalogic, a platform designed to speed WebLogic and Java-based applications, end-to-end solutions from Oracle promise to move deployment strategies from custom design, build, and integration to rapid assembly of well-defined components.

Oracle is clearly evolving the administration and maintenance of its software toward being more self-tuning and self-managing. Enterprise Manager Grid Control continues to complement this direction by providing a monitoring and management interface not only for the database but also for middleware and applications. Where organizations do not want to do the maintenance themselves, there are initiatives underway within Oracle and among its partners to provide managed services for these platforms. Oracle also positions these platforms to fill critical roles in “private cloud” or “public cloud” deployment scenarios, where organizations can obtain the benefits of Software-as-a-Service (SaaS) when deploying their own hardware and software platform infrastructure or when accessing the infrastructure of an Oracle platform services provider. The Oracle Exalogic Elastic Cloud includes Exalogic and Exadata as key middleware and database platforms linked via InfiniBand that are deployed for such cloud-based solutions.

The server and storage platforms that make up the Oracle Exadata Database Machine themselves will continue to evolve too. For databases, having large volumes of frequently accessed data closest to the CPUs has always been a desirable method for speeding up performance. Many predict that over coming generations, you will come to expect ever larger capacities of Flash and memory and will also see large footprints of physical disk storage disappear from the data center. The impact of such changes in server and storage configurations should improve the performance of all workloads and also result in dramatic reductions in floor space, cooling, and electrical power required. These advances could lead to solutions of new classes of business problems that can’t be practically solved today.

Summary

Now that you’ve read this introductory chapter, it should be clear to you that the Oracle Exadata Database Machine builds upon Oracle database capabilities familiar to an immense worldwide technical community, but also adds new innovative integrated software and hardware that could have a profound impact on your organization. Having the benefit of looking back, we can see how this platform evolved from development efforts at Oracle that took place over a period of years. The acquisition of Sun now enables Oracle to entirely develop, build, sell, and support pre-engineered and integrated software and hardware solutions.

Given how the evolution took place, we structured the book to next review some Oracle database topics that might be familiar to you. Those topics will lay the groundwork for gaining a full understanding of the Oracle Exadata Database Machine platform as we cover it in detail in later chapters.