
OpenStack Operations Guide (2014)

Part I. Architecture

Designing an OpenStack cloud is a great achievement. It requires a robust understanding of the requirements and needs of the cloud’s users to determine the best possible configuration to meet them. OpenStack provides a great deal of flexibility to achieve your needs, and this part of the book aims to shine light on many of the decisions you need to make during the process.

To design, deploy, and configure OpenStack, administrators must understand the logical architecture. A diagram can help you envision all the integrated services within OpenStack and how they interact with each other.

OpenStack modules are one of the following types:

Daemon

Runs as a background process. On Linux platforms, a daemon is usually installed as a service.

Script

Installs a virtual environment and runs tests.

Command-line interface (CLI)

Enables users to submit API calls to OpenStack services through commands.

As shown, end users can interact through the dashboard, CLIs, and APIs. All services authenticate through a common Identity Service, and individual services interact with each other through public APIs, except where privileged administrator commands are necessary. Figure I-2 shows the most common, but not the only, logical architecture for an OpenStack cloud.

Figure I-2. OpenStack Havana Logical Architecture (http://opsgui.de/1kYnyy1)

Chapter 1. Example Architectures

To understand the possibilities OpenStack offers, it’s best to start with basic architectures that are tried-and-true and have been tested in production environments. We offer two such examples with basic pivots on the base operating system (Ubuntu and Red Hat Enterprise Linux) and the networking architecture. There are other differences between these two examples, but you should find the considerations behind the choices made in each, as well as a rationale for why each worked well in its environment.

Because OpenStack is highly configurable, with many different backends and network configuration options, it is difficult to write documentation that covers all possible OpenStack deployments. Therefore, this guide defines example architectures to simplify the task of documenting, as well as to provide the scope for this guide. Both of the offered architecture examples are currently running in production and serving users.

TIP

As always, refer to the Glossary if you are unclear about any of the terminology mentioned in these architectures.

Example Architecture—Legacy Networking (nova)

This particular example architecture has been upgraded from Grizzly to Havana and tested in production environments where many public IP addresses are available for assignment to multiple instances. You can find a second example architecture that uses OpenStack Networking (neutron) after this section. Each example offers high availability, meaning that if a particular node goes down, another node with the same configuration can take over the tasks so that service continues to be available.

Overview

The simplest architecture you can build upon for Compute has a single cloud controller and multiple compute nodes. The simplest architecture for Object Storage has five nodes: one for identifying users and proxying requests to the API, then four for storage itself to provide enough replication for eventual consistency. This example architecture does not dictate a particular number of nodes, but illustrates the thinking and considerations that went into choosing this architecture, including the features offered.

Components

OpenStack release: Havana
Host operating system: Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, including derivatives such as CentOS and Scientific Linux
OpenStack package repository: Ubuntu Cloud Archive or RDO*
Hypervisor: KVM
Database: MySQL*
Message queue: RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives
Networking service: nova-network
Network manager: FlatDHCP
Single nova-network or multi-host?: multi-host*
Image Service (glance) backend: file
Identity Service (keystone) driver: SQL
Block Storage Service (cinder) backend: LVM/iSCSI
Live Migration backend: shared storage using NFS*
Object storage: OpenStack Object Storage (swift)

An asterisk (*) indicates when the example architecture deviates from the settings of a default installation. We’ll offer explanations for those deviations next.

NOTE

The following features of OpenStack are supported by the example architecture documented in this guide, but are optional:

§ Dashboard: You probably want to offer a dashboard, but your users may be more interested in API access only.

§ Block storage: You don’t have to offer users block storage if their use case only needs ephemeral storage on compute nodes, for example.

§ Floating IP address: Floating IP addresses are public IP addresses that you allocate from a predefined pool to assign to virtual machines at launch. Floating IP addresses ensure that the public IP address is available whenever an instance is booted. Not every organization can offer thousands of public floating IP addresses for thousands of instances, so this feature is considered optional (a short command sketch follows this list).

§ Live migration: If you need to move running virtual machine instances from one host to another with little or no service interruption, you would enable live migration, but it is considered optional.

§ Object storage: You may choose to store machine images on a file system rather than in object storage if you do not have the extra hardware for the required replication and redundancy that OpenStack Object Storage offers.
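
If you do enable floating IP addresses with legacy networking, the workflow is short. The following is a minimal sketch only; the 203.0.113.0/28 range, pool name, and instance name are placeholder examples, and the exact nova-manage flag spellings vary slightly between releases:

    # Define a pool of public addresses (run once, on the cloud controller)
    nova-manage floating create --ip_range=203.0.113.0/28 --pool public

    # Allocate an address from the pool and attach it to a running instance
    nova floating-ip-create public
    nova add-floating-ip my-instance 203.0.113.2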

Rationale

This example architecture has been selected based on the current default feature set of OpenStack Havana, with an emphasis on stability. We believe that many clouds that currently run OpenStack in production have made similar choices.

You must first choose the operating system that runs on all of the physical nodes. While OpenStack is supported on several distributions of Linux, we used Ubuntu 12.04 LTS (Long Term Support), which is used by the majority of the development community, is feature complete compared with other distributions, and has clear future support plans.

We recommend that you do not use the default Ubuntu OpenStack install packages and instead use the Ubuntu Cloud Archive. The Cloud Archive is a package repository supported by Canonical that allows you to upgrade to future OpenStack releases while remaining on Ubuntu 12.04.
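
For reference, enabling the Cloud Archive on Ubuntu 12.04 amounts to adding one repository. The following is a minimal sketch for the Havana pocket; verify the suite name against the Cloud Archive documentation for your target release:

    # Add Canonical's signing key and the Havana Cloud Archive repository
    sudo apt-get install -y ubuntu-cloud-keyring
    echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main" | sudo tee /etc/apt/sources.list.d/cloud-archive-havana.list
    sudo apt-get update && sudo apt-get dist-upgrade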

KVM as the hypervisor complements the choice of Ubuntu: the two are a matched pair in terms of support, and KVM receives a significant degree of attention from the OpenStack development community (including the authors, who mostly use KVM). It is also feature complete and free from licensing charges and restrictions.
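
Selecting KVM is mostly a matter of confirming hardware support and pointing Compute at the libvirt driver. The fragment below is a sketch using Havana-era option names (later releases move libvirt_type into a [libvirt] section as virt_type):

    # Check that the CPUs expose VT-x/AMD-V (a non-zero count is required for KVM)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # /etc/nova/nova.conf on each compute node
    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
    libvirt_type = kvm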

MySQL follows a similar trend. Despite its recent change of ownership, this database is the most tested for use with OpenStack and is heavily documented. We deviate from the default database, SQLite, because SQLite is not an appropriate database for production usage.
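
In practice, moving off SQLite means creating a database per service and pointing each service’s connection string at MySQL. A minimal sketch for Compute follows; NOVA_DBPASS and the cloud-controller host name are placeholders, and older releases spell this option sql_connection under [DEFAULT]:

    # /etc/nova/nova.conf
    [database]
    connection = mysql://nova:NOVA_DBPASS@cloud-controller/nova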

The choice of RabbitMQ over other AMQP-compatible options that are gaining support in OpenStack, such as ZeroMQ and Qpid, is due to its ease of use and significant testing in production. It is also the only option that supports features such as Compute cells. We recommend clustering with RabbitMQ, as it is an integral component of the system and fairly simple to implement because clustering support is built in.
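
As a rough sketch of how little is involved with RabbitMQ 3.x, clustering a second controller and mirroring queues looks like the following (the rabbit@ctrl01 node name is a placeholder, and the nodes must already share the same Erlang cookie):

    # Run on the node joining the cluster
    rabbitmqctl stop_app
    rabbitmqctl join_cluster rabbit@ctrl01
    rabbitmqctl start_app

    # Mirror every queue across the cluster and confirm membership
    rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
    rabbitmqctl cluster_status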

As discussed in previous chapters, there are several options for networking in OpenStack Compute. We recommend FlatDHCP with multi-host networking mode for high availability, running one nova-network daemon per OpenStack compute host. This provides a robust mechanism for ensuring network interruptions are isolated to individual compute hosts, and allows for the direct use of hardware network gateways.
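
The multi-host FlatDHCP setup reduces to a handful of nova.conf options repeated on every compute node. The fragment below is a sketch; the br100 bridge and eth0/eth1 interface names are examples that must match your hardware:

    # /etc/nova/nova.conf on each compute node
    [DEFAULT]
    network_manager = nova.network.manager.FlatDHCPManager
    multi_host = True
    send_arp_for_ha = True
    flat_network_bridge = br100
    flat_interface = eth1      # private NIC bridged into the flat network
    public_interface = eth0    # NIC carrying public and floating IP traffic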

Live Migration is supported by way of shared storage, with NFS as the distributed file system.
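
The shared-storage piece is simply the instances directory mounted from the NFS server on every compute node, with matching nova user and group IDs across hosts and libvirt configured to accept connections between them. A sketch of the mount (the server name and export path are examples):

    # /etc/fstab on every compute node
    nfs-server:/srv/nova/instances  /var/lib/nova/instances  nfs4  defaults  0  0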

Acknowledging that many small-scale deployments see running Object Storage just for the storage of virtual machine images as too costly, we opted for the file backend in the OpenStack Image Service (Glance). If your cloud will include Object Storage, you can easily add it as a backend.
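
With the file backend, images simply live on the Image Service host’s local disk. The relevant glance-api.conf fragment is a sketch that largely restates the defaults:

    # /etc/glance/glance-api.conf
    [DEFAULT]
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/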

We chose the SQL backend for the Identity Service (keystone) over others, such as LDAP. This backend is simple to install and is robust. The authors acknowledge that many installations want to bind with existing directory services, and caution that this requires a careful understanding of the array of options available.
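
A sketch of the corresponding keystone.conf fragment follows; KEYSTONE_DBPASS and the cloud-controller host name are placeholders, and the database section is named [sql] in Havana-era files and [database] in later releases:

    # /etc/keystone/keystone.conf
    [sql]
    connection = mysql://keystone:KEYSTONE_DBPASS@cloud-controller/keystone

    [identity]
    driver = keystone.identity.backends.sql.Identity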

Block Storage (cinder) is installed natively on external storage nodes and uses the LVM/iSCSI plug-in. Most Block Storage Service plug-ins are tied to particular vendor products and implementations, limiting their use to consumers of those hardware platforms, but LVM/iSCSI is robust and stable on commodity hardware.
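
Because LVM/iSCSI is the default Block Storage driver, the configuration on the storage node mostly restates defaults. A sketch follows; the iSCSI IP address is an example, and the cinder-volumes volume group must already exist:

    # /etc/cinder/cinder.conf on the storage node
    [DEFAULT]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes
    iscsi_helper = tgtadm
    iscsi_ip_address = 10.0.0.10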

While the cloud can be run without the OpenStack Dashboard, we consider it to be indispensable, not just for user interaction with the cloud, but also as a tool for operators. Additionally, the dashboard’s use of Django makes it a flexible framework for extension.

Why not use the OpenStack Network Service (neutron)?

This example architecture does not use the OpenStack Network Service (neutron), because it does not yet support multi-host networking and our organizations (university, government) have access to a large range of publicly-accessible IPv4 addresses.

Why use multi-host networking?

In a default OpenStack deployment, a single nova-network service runs within the cloud (usually on the cloud controller) and provides services such as network address translation (NAT), DHCP, and DNS to the guest instances. If the single node that runs the nova-network service goes down, you cannot access your instances, and the instances cannot access the Internet. The single node that runs the nova-network service can also become a bottleneck if excessive network traffic enters and leaves the cloud.

TIP

Multi-host is a high-availability option for the network configuration, where the nova-network service is run on every compute node instead of running on only a single node.

Detailed Description

The reference architecture consists of multiple compute nodes, a cloud controller, an external NFS storage server for instance storage, and an OpenStack Block Storage server for volume storage. A network time service (Network Time Protocol, or NTP) synchronizes time on all the nodes. FlatDHCPManager in multi-host mode is used for the networking. A logical diagram for this example architecture shows which services are running on each node:

The cloud controller runs the dashboard, the API services, the database (MySQL), a message queue server (RabbitMQ), the scheduler for choosing compute resources (nova-scheduler), Identity services (keystone, nova-consoleauth), Image services (glance-api, glance-registry), services for console access of guests, and Block Storage services, including the scheduler for storage resources (cinder-api and cinder-scheduler).

Compute nodes are where the computing resources are held, and in our example architecture, they run the hypervisor (KVM), libvirt (the driver for the hypervisor, which enables live migration from node to node), nova-compute, nova-api-metadata (generally only used when running in multi-host mode, it retrieves instance-specific metadata), nova-vncproxy, and nova-network.

The network consists of two switches, one for the management or private traffic, and one that covers public access, including floating IPs. To support this, the cloud controller and the compute nodes have two network cards. The OpenStack Block Storage and NFS storage servers only need to access the private network and therefore only need one network card, but multiple cards run in a bonded configuration are recommended if possible. Floating IP access is direct to the Internet, whereas Flat IP access goes through a NAT. To envision the network traffic, use this diagram:

Optional Extensions

You can extend this reference architecture as follows:

§ Add additional cloud controllers (see Chapter 11).

§ Add an OpenStack Storage service (see the Object Storage chapter in the OpenStack Installation Guide for your distribution).

§ Add additional OpenStack Block Storage hosts (see Chapter 11).

Example Architecture—OpenStack Networking

This chapter provides an example architecture using OpenStack Networking, also known as the Neutron project, in a highly available environment.

Overview

A highly-available environment can be put into place if you require an environment that can scale horizontally, or want your cloud to continue to be operational in case of node failure. This example architecture has been written based on the current default feature set of OpenStack Havana, with an emphasis on high availability.

Components

OpenStack release: Havana
Host operating system: Red Hat Enterprise Linux 6.5
OpenStack package repository: Red Hat Distributed OpenStack (RDO)
Hypervisor: KVM
Database: MySQL
Message queue: Qpid
Networking service: OpenStack Networking
Tenant network separation: VLAN
Image Service (glance) backend: GlusterFS
Identity Service (keystone) driver: SQL
Block Storage Service (cinder) backend: GlusterFS

Rationale

This example architecture has been selected based on the current default feature set of OpenStack Havana, with an emphasis on high availability. This architecture is currently being deployed in an internal Red Hat OpenStack cloud and used to run hosted and shared services, which by their nature must be highly available.

This architecture’s components have been selected for the following reasons:

Red Hat Enterprise Linux

You must choose an operating system that can run on all of the physical nodes. This example architecture is based on Red Hat Enterprise Linux, which offers reliability, long-term support, and certified testing, and is hardened. Enterprise customers, now moving into OpenStack usage, typically require these advantages.

RDO

The Red Hat Distributed OpenStack package offers an easy way to download the most current OpenStack release that is built for the Red Hat Enterprise Linux platform.

KVM

KVM is the supported hypervisor of choice for Red Hat Enterprise Linux (and is included in the distribution). It is feature complete and free from licensing charges and restrictions.

MySQL

MySQL is used as the database backend for all databases in the OpenStack environment. MySQL is the supported database of choice for Red Hat Enterprise Linux (and is included in the distribution); the database is open source, scalable, and handles memory well.

Qpid

Apache Qpid offers 100 percent compatibility with the Advanced Message Queuing Protocol Standard, and its broker is available for both C++ and Java.

OpenStack Networking

OpenStack Networking offers sophisticated networking functionality, including Layer 2 (L2) network segregation and provider networks.

VLAN

Using a virtual local area network offers broadcast control, security, and physical layer transparency. If needed, use VXLAN to extend your address space (a plugin configuration sketch follows this list).

GlusterFS

GlusterFS offers scalable storage. As your environment grows, you can continue to add more storage nodes (instead of being restricted, for example, by an expensive storage array).
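
As a concrete illustration of the VLAN choice above, the tenant-network options land in the Open vSwitch plugin configuration on the controller, network, and compute nodes. The fragment below is a sketch only: the file path and section header depend on the plugin and packaging, and the physnet1 label, 1000:2999 VLAN range, and br-ex bridge are example values:

    # e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
    [OVS]
    tenant_network_type = vlan
    network_vlan_ranges = physnet1:1000:2999
    bridge_mappings = physnet1:br-ex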

Detailed Description

Node types

This section gives you a breakdown of the different nodes that make up the OpenStack environment. A node is a physical machine that is provisioned with an operating system and runs a defined software stack on top of it. Table 1-3 provides node descriptions and specifications.

Controller

Controller nodes are responsible for running the management software services needed for the OpenStack environment to function. These nodes:

§ Provide the front door that people access as well as the API services that all other components in the environment talk to.

§ Run a number of services in a highly available fashion, utilizing Pacemaker and HAProxy to provide a virtual IP and load-balancing functions so all controller nodes are being used.

§ Supply highly available “infrastructure” services, such as MySQL and Qpid, that underpin all the services.

§ Provide what is known as “persistent storage” through services run on the host as well. This persistent storage is backed onto the storage nodes for reliability.

See Figure 1-5.

Example hardware:

Model: Dell R620

CPU: 2 x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz

Memory: 32 GB

Disk: 2 x 300 GB 10000 RPM SAS Disks

Network: 2 x 10G network ports

Compute

Compute nodes run the virtual machine instances in OpenStack. They:

§ Run the bare minimum of services needed to facilitate these instances.

§ Use local storage on the node for the virtual machines so that no VM migration or instance recovery at node failure is possible.

See Figure 1-6.

Example hardware:

Model: Dell R620

CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz

Memory: 128 GB

Disk: 2 x 600 GB 10000 RPM SAS Disks

Network: 4 x 10G network ports (for future expansion)

Storage

Storage nodes store all the data required for the environment, including disk images in the Image Service library, and the persistent storage volumes created by the Block Storage service. Storage nodes use GlusterFS technology to keep the data highly available and scalable.

See Figure 1-8.

Example hardware:

Model: Dell R720xd

CPU: 2 x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz

Memory: 64 GB

Disk: 2 x 500 GB 7200 RPM SAS Disks + 24 x 600 GB 10000 RPM SAS Disks

Raid Controller: PERC H710P Integrated RAID Controller, 1 GB NV Cache

Network: 2 x 10G network ports

Network

Network nodes are responsible for doing all the virtual networking needed for people to create public or private networks and uplink their virtual machines into external networks. Network nodes:

§ Form the only ingress and egress point for instances running on top of OpenStack.

§ Run all of the environment’s networking services, with the exception of the networking API service (which runs on the controller node).

See Figure 1-7.

Example hardware:

Model: Dell R620

CPU: 1 x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz

Memory: 32 GB

Disk: 2 x 300 GB 10000 RPM SAS Disks

Network: 5 x 10G network ports

Utility

Utility nodes are used by internal administration staff only to provide a number of basic system administration functions needed to get the environment up and running and to maintain the hardware, OS, and software on which it runs.

These nodes run services such as provisioning, configuration management, monitoring, or GlusterFS management software. They are not required to scale, although these machines are usually backed up.

Example hardware:

Model: Dell R620

CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz

Memory: 32 GB

Disk: 2 x 500 GB 7200 RPM SAS Disks

Network: 2 x 10G network ports

Table 1-3. Node types

Networking layout

Management network

This network contains all the management devices for all hardware in the environment (for example, Dell iDRAC7 devices for the hardware nodes, and the management interfaces for network switches). The network is accessed by internal staff only when diagnosing or recovering a hardware issue.

OpenStack internal network

This network is used for OpenStack management functions and traffic, including services needed for the provisioning of physical nodes (PXE, TFTP, kickstart); traffic between the various OpenStack node types using OpenStack APIs and messages (for example, nova-compute talking to keystone, or cinder-volume talking to nova-api); and all traffic for storage data to the storage layer underneath, using the Gluster protocol. All physical nodes have at least one network interface (typically eth0) in this network. This network is only accessible from other VLANs on port 22 (for SSH access to manage machines).

Public Network

This network is a combination of:

§ IP addresses for public-facing interfaces on the controller nodes (through which end users access the OpenStack services)

§ A range of publicly routable, IPv4 network addresses to be used by OpenStack Networking for floating IPs. You may be restricted in your access to IPv4 addresses; a large range of IPv4 addresses is not necessary.

§ Routers for private networks created within OpenStack.

This network is connected to the controller nodes so users can access the OpenStack interfaces, and connected to the network nodes to provide VMs with publicly routable traffic functionality. The network is also connected to the utility machines so that any utility services that need to be made public (such as system monitoring) can be accessed.

VM traffic network

This is a closed network that is not publicly routable and is simply used as a private, internal network for traffic between virtual machines in OpenStack, and between the virtual machines and the network nodes that provide L3 routes out to the public network (and floating IPs for connections back in to the VMs). Because this is a closed network, we are using a different address space from the others to clearly define the separation. Only Compute and OpenStack Networking nodes need to be connected to this network.

Node connectivity

The following section details how the nodes are connected to the different networks (see “Networking layout”) and what other considerations, such as bonding, need to be taken into account when connecting nodes to the networks.

Initial deployment

Initially, the connection setup should revolve around keeping the connectivity simple and straightforward in order to minimize deployment complexity and time to deploy. The deployment shown in Figure 1-3 aims to have 1 x 10G connectivity available to all compute nodes, while still leveraging bonding on appropriate nodes for maximum performance.

Figure 1-3. Basic node deployment

Connectivity for maximum performance

If the networking performance of the basic layout is not enough, you can move to Figure 1-4, which provides 2 x 10G network links to all instances in the environment as well as providing more network bandwidth to the storage layer.

Figure 1-4. Performance node deployment

Node diagrams

The following diagrams (Figure 1-5 through Figure 1-8) include logical information about the different types of nodes, indicating what services will be running on top of them and how they interact with each other. The diagrams also illustrate how the availability and scalability of services are achieved.

Figure 1-5. Controller node

Figure 1-6. Compute node

Figure 1-7. Network node

Figure 1-8. Storage node

Example Component Configuration

Table 1-4 and Table 1-5 include example configuration and considerations for both third-party and OpenStack components:

MySQL

Tuning: binlog-format = row

Availability: Master/master replication. However, both nodes are not used at the same time. Replication keeps all nodes as close to being up to date as possible (although the asynchronous nature of the replication means a fully consistent state is not possible). Connections to the database only happen through a Pacemaker virtual IP, ensuring that most problems that occur with master-master replication can be avoided.

Scalability: Not heavily considered. Once load on the MySQL server increases enough that scalability needs to be considered, multiple masters or a master/slave setup can be used.

Qpid

Tuning: max-connections=1000, worker-threads=20, connection-backlog=10, SASL security enabled with SASL-BASIC authentication

Availability: Qpid is added as a resource to the Pacemaker software that runs on the controller nodes where Qpid is situated. This ensures that only one Qpid instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running Qpid.

Scalability: Not heavily considered. However, Qpid can be changed to run on all controller nodes for scalability and availability purposes, and removed from Pacemaker.

HAProxy

Tuning: maxconn 3000

Availability: HAProxy is a software layer-7 load balancer used to provide a front door for all clustered OpenStack API components and to handle SSL termination. HAProxy can be added as a resource to the Pacemaker software that runs on the controller nodes where HAProxy is situated. This ensures that only one HAProxy instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running HAProxy.

Scalability: Not considered. HAProxy has small enough performance overheads that a single instance should scale enough for this level of workload. If extra scalability is needed, keepalived or other Layer-4 load balancing can be introduced and placed in front of multiple copies of HAProxy.

Memcached

Tuning: MAXCONN="8192" CACHESIZE="30457"

Availability: Memcached is fast, in-memory key-value cache software that OpenStack components use for caching data and increasing performance. Memcached runs on all controller nodes, ensuring that should one go down, another instance of Memcached is available.

Scalability: Not considered. A single instance of Memcached should be able to scale to the desired workloads. If scalability is desired, HAProxy can be placed in front of Memcached (in raw TCP mode) to utilize multiple Memcached instances for scalability. However, this might cause cache consistency issues.

Pacemaker

Tuning: Configured to use corosync and cman as the cluster communication stack/quorum manager, and as a two-node cluster.

Availability: Pacemaker is the clustering software used to ensure the availability of services running on the controller and network nodes:

§ Because Pacemaker is cluster software, the software itself handles its own availability, leveraging corosync and cman underneath.

§ If you use the GlusterFS native client, no virtual IP is needed, since the client knows all about the nodes after the initial connection and automatically routes around failures on the client side.

§ If you use the NFS or SMB adaptor, you will need a virtual IP on which to mount the GlusterFS volumes.

Scalability: If more nodes need to be made cluster aware, Pacemaker can scale to 64 nodes.

GlusterFS

Tuning: The GlusterFS “virt” performance profile is enabled on all volumes. Volumes are set up in two-node replication (see the sketch after this table).

Availability: GlusterFS is a clustered file system that is run on the storage nodes to provide persistent, scalable data storage in the environment. Because all connections to Gluster use the Gluster native mount points, the Gluster instances themselves provide availability and failover functionality.

Scalability: The scalability of GlusterFS storage can be achieved by adding in more storage volumes.

Table 1-4. Third-party component configuration
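
For the GlusterFS tuning listed above, creating a two-node replicated volume and applying the “virt” profile is a short exercise on the storage nodes. This is a sketch only: the volume name, host names, and brick paths are examples, and the virt group file ships with the Red Hat/RDO GlusterFS packages:

    # Run once from any storage node
    gluster volume create vmstore replica 2 storage01:/bricks/vmstore storage02:/bricks/vmstore
    gluster volume set vmstore group virt
    gluster volume start vmstore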

Dashboard (horizon)

Node type: Controller

Tuning: Configured to use Memcached as a session store; neutron support is enabled; can_set_mount_point = False.

Availability: The dashboard is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance.

Scalability: The dashboard is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the dashboard as more nodes are added.

Identity (keystone)

Node type: Controller

Tuning: Configured to use Memcached for caching and PKI for tokens.

Availability: Identity is run on all controller nodes, ensuring at least one instance will be available in case of node failure. Identity also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance.

Scalability: Identity is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Identity as more nodes are added.

Image Service (glance)

Node type: Controller

Tuning: /var/lib/glance/images is a GlusterFS native mount to a Gluster volume off the storage layer.

Availability: The Image Service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance.

Scalability: The Image Service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the Image Service as more nodes are added.

Compute (nova)

Node type: Controller, Compute

Tuning: Configured to use Qpid with qpid_heartbeat = 10, to use Memcached for caching, to use libvirt, and to use neutron. nova-consoleauth is configured to use Memcached for session management (so that multiple copies can run behind the load balancer).

Availability: The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Compute is also behind HAProxy, which detects when the software fails and routes requests around the failing instance.

Compute’s compute and conductor services, which run on the compute nodes, serve only that node, so their availability is tightly coupled to the node itself: as long as a compute node is up, it will have the needed services running on top of it.

Scalability: The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Compute as more nodes are added. The scalability of the services running on the compute nodes (compute, conductor) is achieved linearly by adding in more compute nodes.

Block Storage (cinder)

Node type: Controller

Tuning: Configured to use Qpid with qpid_heartbeat = 10, and to use a Gluster volume from the storage layer as the backend for Block Storage, using the Gluster native client.

Availability: Block Storage API, scheduler, and volume services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Block Storage also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance.

Scalability: Block Storage API, scheduler, and volume services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Block Storage as more nodes are added.

OpenStack Networking (neutron)

Node type: Controller, Compute, Network

Tuning: Configured to use Qpid with qpid_heartbeat = 10, kernel namespace support enabled, tenant_network_type = vlan, allow_overlapping_ips = true, bridge_uplinks = br-ex:em2, bridge_mappings = physnet1:br-ex.

Availability: The OpenStack Networking server service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance.

OpenStack Networking’s ovs-agent, l3-agent, dhcp-agent, and metadata-agent services run on the network nodes as lsb resources inside of Pacemaker. This means that in the case of network node failure, services are kept running on another node. Finally, the ovs-agent service is also run on all compute nodes; in case of compute node failure, the other nodes continue to function using the copy of the service running on them.

Scalability: The OpenStack Networking server service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for OpenStack Networking as more nodes are added. Scalability of the services running on the network nodes is not currently supported by OpenStack Networking, so they are not considered; one copy of each service should be sufficient to handle the workload. Scalability of the ovs-agent running on compute nodes is achieved by adding in more compute nodes as necessary.

Table 1-5. OpenStack component configuration

Parting Thoughts on Architectures

With so many considerations and options available, our hope is to provide a few clearly-marked and tested paths for your OpenStack exploration. If you’re looking for additional ideas, check out Appendix A, the OpenStack Installation Guides, or the OpenStack User Stories page.