Ubuntu in the Cloud - Ubuntu as a Server - Ubuntu Unleashed 2017 Edition (2017)

Ubuntu Unleashed 2017 Edition (2017)

Part IV: Ubuntu as a Server

Chapter 35. Ubuntu in the Cloud


In This Chapter

• Why a Cloud?

• Ubuntu Cloud and OpenStack

• Juju

• Snappy Ubuntu Core

• Ubuntu Metal as a Service

• Landscape

• References


Cloud computing enables you to build large, flexible systems for on-demand processing of data. When your requirements are low, you use few resources. As the need arises, your processes scale to use multiple systems with optimized performance according to the requirements of the moment. This is an efficient way to use hardware and minimize waste.

To accomplish this feat of computer engineering, a special network is set up using on-demand virtual systems that consume resources only as needed and release those resources for use by others when they are not in use. Virtualization is the technology that enables this concept. This may be accomplished locally using third-party virtualization platforms such as VMware, VirtualBox, Parallels, and others (see Chapter 34, “Virtualization on Ubuntu”). Ubuntu has another option to offer, the Ubuntu Cloud, which moves virtualization into the cloud and is the main focus of this chapter. Beyond being an outstanding cloud hosting platform, Ubuntu Server is being developed with a strong intent to make it an outstanding cloud guest. Look for the term Ubuntu cloud guest to become more popular as time goes by.


SysAdmin Versus DevOps

The traditional title for someone who keeps systems up and running is systems administrator, or sysadmin (sometimes called ops, for operations). The traditional title for someone who creates the software that runs on those systems is software developer. Over the past few years, a new title has emerged: DevOps. DevOps combine many of the talents and responsibilities of sysadmins and developers, often with a cloud computing focus and some specific refinements. They are not purely one or the other and often don’t fit neatly into other existing categories like engineer, but they combine many of the skills of all of these while adding a QA-like focus on making sure that new features do not break anything that was working previously. DevOps are the ones who develop large applications to run on cloud resources while simplifying the orchestration of those resources with automation and configuration management. This chapter is not only for DevOps, but it describes the sorts of tools and environments these folks are likely to love.


Ubuntu Cloud is a stack of applications from Canonical that are included in the Ubuntu Server Edition. These applications make it easy to install and configure an Ubuntu-based cloud. The software is free and open source, but Canonical offers paid technical support.


Install Instructions

Installation instructions change regularly, and the most up-to-date versions are always on the providers’ sites. Rather than reproduce them here, this chapter provides a high-level view to help you understand how a cloud can be set up and how it works, why you should care, who the big players are, and where to look for next steps.


Why a Cloud?

Businesses and enterprises have built computer networks for years. There are many reasons, but usually networks are built because specific computation or data processing tasks are made easier and faster using more than one computer. The size of the network generally depends on the tasks that need to be done. Building a network usually entails taking a detailed survey of needs, analyzing those requirements, and gathering together the necessary hardware and software to fulfill those needs now, perhaps with a little room for growth if money permits.

Cloud computing is designed to make that easier by providing resources such as computing power and storage as services on the Internet in a way that is easy to access remotely, available on demand, simple to provision and scale, and highly dynamic. In the ideal case, this saves both time and money.

Some of the greatest benefits are the ease with which new resources may be added to a cloud, the fault tolerance inherent in the built-in redundancy of a large pool of servers, and the payment schedules that charge for resources only when they are used. There is also a great benefit in abstracting the complexity out of the process; clients perform the tasks they want to perform, and the cloud computing platform takes care of the details of adding resources as needed without the end user being aware of the process. Virtual machines (VMs) are created, configured, and used when needed and destroyed immediately after they are no longer needed, freeing up system resources for other purposes. These VMs can be created to suit a wide range of needs.

Hardware, storage, networks, and software are abstracted as services instead of being manually built and configured. They are then accessed locally on demand when the additional resources are required. Sometimes these service model abstractions are referred to as software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).

Software as a Service (SaaS)

SaaS is sometimes referred to as on-demand software. In this service model, it is the software application and its related data that are moved to the cloud. Access is generally through a web browser, although a thin client/server-style configuration is not uncommon. Someone else takes care of everything else. This is kind of like renting a hotel room—everything is provided and set up for you, and you just enjoy and use it for a specific need. Some examples of this include email hosts like Yahoo! Mail, services like Google Docs, web games, and customer relationship management (CRM) software.

Platform as a Service (PaaS)

PaaS takes things a step further. In this service model, an entire computing platform is provided in the cloud. This typically includes the operating system, programming language interpreters or execution environments, databases, web servers, and so on. They are accessed directly for computing platform maintenance using provider portals, application programming interfaces (APIs), software development kits (SDKs), or services like SSH. Then, what is built on the platform is accessed by the end user the same way it would be accessed if it were running on a locally owned and operated piece of hardware or hardware that’s running in a large datacenter. Someone else takes care of everything else, but they take care of less than they do with SaaS, which means that you take care of more. This scenario is more like an apartment—you rent the space and decorate and configure it as you like within structured guidelines. Some examples of this include the Google App Engine, raw compute nodes used to scale services, and social application platforms like Facebook.

Infrastructure as a Service (IaaS)

IaaS goes even further. In this service model, you transition your entire server to the cloud. Your provider offers computers, most likely virtual ones, on which you can install any operating system (perhaps from a set menu they allow), and you can configure them as you like. Someone else takes care of the physical machines and networks, and you take care of all the rest. This is like buying a condominium—you own it and can do whatever you want inside it, but someone else takes care of the grounds and landscaping.

Metal as a Service (MaaS)

Generally, the only other step available from here is traditional server building, where you are responsible for the physical machine and everything on it. However, Ubuntu has added another service to the list, Metal as a Service (MaaS), which is designed to bring the language of the cloud to physical servers. The goal is to make it as easy to set up the physical hardware, deploy your app or service, and scale up or down dynamically as it is in the cloud. MaaS is installed on the physical hardware, and all the various machines are then managed from one web interface. A section later in this chapter is dedicated to this topic.

Before You Do Anything

You don’t have to create your own cloud infrastructure, but you can. You can also deploy Ubuntu Cloud to providers like Rackspace or HP. Before you do anything, you need to carefully consider what your needs are and decide what sort of service(s) you need. Do you just want to run a web application on someone else’s already-set-up server, or do you want to set up a system for scalable computing where additional Hadoop nodes can be added and removed at will when big jobs start and end? Only you know the answer. When you have it figured out, you can seek your solution and can think about how you can use Ubuntu to set it up. This chapter describes many options, but you are the one who is in control. That is a powerful, and sometimes overwhelming, position. Thought and planning prevent painful mistakes and repeated engineering.

Deploy/Install Basics: Public, Private, or Hybrid?

There are two ways to deploy Ubuntu in the cloud: on a private cloud or on a public cloud. Both have benefits and drawbacks. This section presents the things you need to consider when choosing. We also look at a way to mix the two, which is called a hybrid cloud.

A public cloud is built on a cloud provider’s systems. This means your local hardware requirements are minimal, your startup costs are low, deployment is quick, and growth is easy. This can be incredibly useful for testing, and public clouds have gained a reputation for stability that makes them a good choice for production as well. The drawback to working this way is that you do not physically control the hardware on which your cloud is running. For many this is a benefit, but it might not be suitable for high-security needs. Although you alone control the software and processes on your public cloud, there might be some worry about who has access to the machines. A cloud provider would not last long in business if its data centers and machines were not secure, but some applications and data are so sensitive that you cannot afford to allow any outside risk. Legal constraints, such as those from the Sarbanes-Oxley Act, sometimes force IT policy decisions in an organization and make the public option impossible.

A private cloud is created on hardware you own and control. This requires a large upfront commitment, but you have the security of running everything behind a company firewall and with complete knowledge of who is able to physically access your machines and who is listening on the network.

One thing to consider is the possibility of starting your Ubuntu Cloud as a private cloud and then creating interfaces from there to public services, creating a hybrid cloud. Perhaps you prefer to keep some of your data and services stored on the private cloud, but you have other data that is less sensitive and want to use some services and applications on a public cloud. This is an avenue worth exploring if your company has a mixture of “must be secured and held in-house” and “we still want to keep it away from prying eyes, but if something happens it won’t be catastrophic” needs. The big issue with this method is moving data between public and private servers; if you have large amounts of data that may move between the two, this can be prohibitive. As always, do your due diligence.

Ubuntu Cloud and OpenStack

OpenStack is an Apache-licensed cloud computing platform. It was founded as a collaboration between NASA and Rackspace, and after less than a year it boasted a worldwide community of developers. Adoption has been swift, and many large corporations, universities, and institutions are already using OpenStack for cloud computing. Ubuntu and OpenStack have worked closely together for a long time and have similar release schedules, and Ubuntu is the reference operating system for OpenStack.

OpenStack is not a service provider; it does not operate systems or data centers. OpenStack is open-source software for building public, private, and hybrid clouds. Many companies have implemented and use OpenStack, which is a good thing: if you develop your cloud deployment and it works on one OpenStack provider’s servers, you can move that deployment to another OpenStack provider’s servers with little or even no change. In fact, it is easy enough to create your deployment across several different providers, using cloud servers from multiple companies concurrently according to your needs.

For a current list of cloud providers offering OpenStack to customers for cloud deployments, see http://www.openstack.org/marketplace/public-clouds/. You will find big names like HP and Rackspace along with many you have not yet heard of that may be just as suitable, or perhaps even better, for your needs.

Many cloud providers offer free trials that give users a certain amount of usage for a limited amount of time. This can be handy for testing out the capabilities of different providers while testing out deployments of components of your system.

One interesting option while you are exploring and learning about cloud computing is DevStack, from http://devstack.org, which is a shell script with documentation that builds complete OpenStack development environments. This is not an option designed for production environments, but it can be very useful for getting started and testing the capabilities of cloud computing. The program and the documentation are maintained by a community of developers rather than one company, again emphasizing the portability aspect.

If what you read in this chapter interests you, the combination of a free trial period and DevStack may provide you with an easy way to try out what you are learning.

OpenStack uses a set of APIs for its services that are compatible with the Amazon EC2/S3 APIs. Client tools written for those can also be used with OpenStack. OpenStack has several main service families, each described in the following subsections.
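Because the APIs are EC2-compatible, an existing EC2 client environment can simply be repointed at an OpenStack cloud. A minimal sketch follows; the endpoint URL and keys are placeholders, not real values:

```shell
# Repoint standard EC2 client tooling at an OpenStack cloud's
# EC2-compatible endpoint. The URL and keys below are placeholders.
export EC2_URL="http://cloud.example.com:8773/services/Cloud"
export EC2_ACCESS_KEY="my-access-key"   # hypothetical credential
export EC2_SECRET_KEY="my-secret-key"   # hypothetical credential

# With these set, EC2 tools (euca2ools, for example) address the
# OpenStack cloud exactly as they would address Amazon EC2.
```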

OpenStack is in active development. Newer releases may have details that differ from what is recorded in this chapter. Check the official OpenStack website, listed in the “References” section of this chapter, for current details.

Compute Infrastructure (Nova)

Nova manages the compute resources, networking, and scaling for the OpenStack cloud. By itself, it does not perform any virtualization tasks, but rather it uses libvirt APIs to interact with supported hypervisors and is the management component of the system. Nova has an Amazon EC2-compatible RESTful API.

Nova consists of several components:

• nova-api—The API server provides an interface that enables outside systems to interact with the cloud infrastructure.

• rabbit-mq—The message queue server performs asynchronous calls to communicate with other Nova components, such as the scheduler or the network controller.

• Qpid—Like rabbit-mq, this is a message queue server with similar functions. Research both to see which is the best fit for your situation.

• nova-compute—The compute nodes host instances. They carry out operations based on requests received from the message queue. Instances are deployed on available compute nodes based on a scheduling algorithm managed by the scheduler.

• nova-network—The network controller allocates IP addresses, configures VLANs, configures networks, and implements security groups for compute nodes. It is expected that this will eventually be replaced by Neutron, when Neutron is ready.

• nova-volume—The volume worker manages Logical Volume Manager (LVM)-based storage volumes. It creates and deletes volumes, attaches and detaches volumes from instances, and provides persistent storage for use by instances.

• nova-scheduler—The scheduler uses an adjustable algorithm to determine which compute, network, or storage volume servers should be used from an available pool of resources. Scheduling can be configured based on server loads, availability zones, or random chance.

Storage Infrastructure (Swift)

Swift is an object store. It is scalable up to multiple petabytes and billions of objects, it is elastic, and it has built-in redundancy and failover. Swift is designed to store a very large number of objects distributed across commodity hardware.

Networking Service (Neutron)

Neutron provides “networking as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

Identity Service (Keystone)

Keystone is the identity service used for authentication (authN) and high-level authorization (authZ). It supports token-based authN and user-service authorization.

Imaging Service (Glance)

Glance is a lookup and retrieval system for VM images. It can use one of three back ends: the OpenStack Object Store, S3 storage, or S3 storage with the OpenStack Object Store as an intermediary.

Dashboard (Horizon)

Horizon is the standard implementation of OpenStack’s Dashboard, which provides a web-based user interface to OpenStack services including Nova, Swift, Neutron, Keystone, etc.

Learning More

OpenStack has many more features than are discussed here. They include OpenStack Heat (https://wiki.openstack.org/wiki/Heat), for orchestration and management of infrastructure and applications within an OpenStack cloud, and OpenStack Ironic (https://wiki.openstack.org/wiki/Ironic), a bare-metal provisioning program. See https://wiki.openstack.org/wiki/Programs for a current list of official OpenStack programs.

Juju

Juju has been described as APT for the cloud. As you learned in Chapter 9, “Managing Software,” APT does an amazing job of installing, configuring, and starting complicated software stacks and services, but only as long as all of that happens on a single system. Juju extends this ability across multiple machines. Often, Linux servers are set up for similar tasks. Multiple physical machines may be deployed with similar configurations so that they work with one another in a network, perhaps for load distribution or for redundancy to prevent downtime in the event of one machine failing or being overloaded. Systems administrators are masters at creating and orchestrating these networks. However, doing so traditionally requires setting up each machine individually, configuring its software settings, and so on.

Tools have appeared over the years to help with this great task, such as Chef and Puppet; see Chapter 36, “Managing Sets of Servers,” for a little more about these. Juju works to do for servers what package managers do for individual systems. It enables you to deploy services quickly and easily across multiple servers, simplifying the configuration process, and is particularly designed with cloud servers in mind. As with Chef’s recipes, those services are deployed using formulas that standardize communication, for example, and which may have been written by different people.

What makes Juju different from Chef and Puppet is that the Juju formulas, called charms, encapsulate services, defining all the ways that services need to expose or consume configuration data to or from other services. This can be done many ways in the Juju charm, including via shell scripts or using Chef itself in solo mode. Also, Juju orchestrates provisioning by tracking its available resources (such as EC2, Eucalyptus, or OpenStack machines) and adding or removing them as appropriate.

Juju is pretty cool, but it hasn’t seen much serious adoption outside of Canonical, especially now that the OpenStack tools are growing in number and scope. However, it does have some unique features and it definitely is worth your consideration.

Getting Started

Start by installing Juju on a server:


matthew@wolfram:~$ sudo apt-get install juju


Note

The release of Juju 2.0 is imminent as this chapter is being revised, but some command syntax and details are still changing and it is not yet ready. This chapter currently introduces Juju 1.25. See the Juju website listed in this chapter’s “Resources” section for the most current version and technical details.


Next, you must bootstrap the system, configuring it to use either a cloud resource, such as Amazon EC2, or your local environment (if you are using a local machine for development and testing). The specific information you enter here differs, but the initial command is always the same:


matthew@wolfram:~$ juju bootstrap

The first time this is run, it creates a file, ~/.juju/environments.yaml, which looks something like the following:


default: sample
environments:
  sample:
    type: ec2
    control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
    admin-secret: 81a1e7429e6847c4941fda7591246594
    default-series: precise
    juju-origin: ppa
    ssl-hostname-verification: true

The preceding sample was taken directly from the official Juju documentation. Yours will look different in some places and also needs to be adjusted appropriately with your settings. For example, if you are using Amazon AWS, you will probably want to add lines to this file with your AWS access key and secret key so that Juju can access and use your Amazon AWS account. Because the typical Juju user is a DevOps or SysAdmin type who has been doing this sort of thing manually for a while, we will gloss over this step and move on.
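As an illustration of that adjustment, the ec2 environment might gain credential lines like the following. The key names follow the Juju 1.x environments.yaml conventions, and the values shown are placeholders:

```yaml
# Hypothetical additions to the "sample" ec2 environment; values are placeholders.
environments:
  sample:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
```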

Bootstrapping takes a few minutes. If you want to check on the status of your Juju deployment, enter


matthew@wolfram:~$ juju status

You see something similar to the following (again from the official Juju docs):


machines:
  0:
    agent-state: running
    dns-name: ec2-50-16-107-102.compute-1.amazonaws.com
    instance-id: i-130c9168
    instance-state: running
services:

When the status shows the deployment up and running, it is a good idea to start a debug log session. This is not required but makes troubleshooting much easier, should it be needed.


matthew@wolfram:~$ juju debug-log

Now comes the fun part, deploying service units. We chose a simple one for our sample: deploying a WordPress blog on our server with all needed services. This is done using charms, which are prepackaged installation and configuration details for specific services. Here’s how it works:


matthew@wolfram:~$ juju deploy mysql
matthew@wolfram:~$ juju deploy wordpress

Now, your services are deployed, but they are not yet connected with each other. We do this by adding relations, in this case:


matthew@wolfram:~$ juju add-relation wordpress mysql

Now, if you check your status as shown earlier, you see something like this:


machines:
  0:
    agent-state: running
    dns-name: localhost
    instance-id: local
    instance-state: running
services:
  mysql:
    charm: cs:precise/mysql-3
    relations:
      db:
      - wordpress
    units:
      mysql/0:
        agent-state: started
        machine: 2
        public-address: 192.168.122.165
  wordpress:
    charm: cs:precise/wordpress-3
    exposed: false
    relations:
      db:
      - mysql
    units:
      wordpress/0:
        agent-state: started
        machine: 1
        public-address: 192.168.122.166

Now, expose your WordPress service to the world so that you can connect with it from outside the server:


matthew@wolfram:~$ juju expose wordpress

And as simple as that, your install is ready. Using the public-address shown in the status output, open 192.168.122.166 in your browser, and you should see your WordPress configuration page.

What happens if you get your WordPress blog set up and running and then it suddenly gets popular? In a traditional setting, you would need to reinstall on heftier equipment and migrate the database over. Not here. Instead you just add units:


matthew@wolfram:~$ juju add-unit wordpress

This creates a new WordPress instance and joins it to the relation with the existing WordPress instance; Juju discovers from that configuration that it is related to a specific MySQL database and relates the new instance as well. That’s it. One command, and you are done!

When a Juju-created environment is no longer needed, there is only one command to issue:


matthew@wolfram:~$ juju destroy-environment

Beware: this command also destroys all service data, so if you are doing something that is important long term, make sure you extract your data first.

Charms

Charms define how services are to be deployed and integrated and how they react to events. Juju orchestrates all of this based on the instructions in charms. Charms are created using plain text metadata files. These files, with the extension .yaml, describe the details needed for deployment. These are the supported fields in a charm:

• name—The name of the charm.

• summary—A one-line description.

• maintainer—This must include an email address for the main point of contact.

• description—A long description of the charm and its features.

• provides—Relations that are made available from this charm.

• requires—Relations that must already exist for this charm to work.

• peers—Relations that work together with this charm.

This sounds complicated, and it is. But with a little study, anyone who knows enough about a service can write a charm for it. Here are example charms for the two services we deployed earlier. First, MySQL:


name: mysql
summary: "A pretty popular database"
maintainer: "Juju Charmers <juju@lists.ubuntu.com>"

provides:
  db: mysql
And WordPress:


name: wordpress
summary: "A pretty popular blog engine"
maintainer: "Juju Charmers <juju@lists.ubuntu.com>"
provides:
  url:
    interface: http

requires:
  db:
    interface: mysql

Probably the most confusing part of a charm for newcomers is the relations. The preceding examples might help clear them up a little bit. As you can see, there are subfields used with relations that define how a relation works. Here is a list of the available subfields for relations:

• interface—The type of relation, such as http or mysql. Services are only permitted to use the interfaces listed here to interact with other services.

• limit—The maximum number of relations of this kind that will be established to other services.

• optional—Denotes whether the relation is required. A value of false means the relation is required.

• scope—Controls which units of related-to services can be communicated with via this relation: global or container. Container means restricted to units deployed in the same container, specifically subordinate services.
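A sketch of how these subfields might appear in a charm’s metadata file follows; the stanza is hypothetical, and the values are illustrative rather than taken from a real charm:

```yaml
# Hypothetical relation stanza showing the optional subfields.
requires:
  db:
    interface: mysql   # only services speaking "mysql" may connect
    limit: 1           # at most one database relation
    optional: false    # false means the relation is required
```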

There is also a way to notify a service unit about changes happening in its lifecycle or the larger distributed environment. Called hooks, these are executable files that can query the environment, make desired changes on the local machine, and change relation settings. Hooks are implemented by placing the executable file in the hooks directory of the charm directory. Juju executes the hook based on its filename, when the corresponding event occurs. Hooks are optional. For example, a hook titled install would run just once during the service unit’s life, when it was first set up, and it might check whether package dependencies are met. Hooks with titles like start or stop might run when the service is begun or ended. There are possibilities for creating hooks for relations, opening and closing ports, and more.
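To illustrate the shape of a hook, here is a sketch of an install hook. This version only echoes each step so the control flow is visible; the commands a real charm might run are shown in comments, and the package name and port number are hypothetical:

```shell
#!/bin/sh
# Sketch of a Juju "install" hook, run once when the service unit is
# first set up. A real hook would install packages and open ports; this
# sketch only reports each step.
set -e  # abort the hook if any step fails

install_dependencies() {
    # A real charm might run: apt-get install -y wordpress
    echo "installing dependencies"
}

open_service_port() {
    # A real charm might run: open-port 80
    echo "opening port 80"
}

install_dependencies
open_service_port
```

Saved as hooks/install inside the charm directory and marked executable, a file like this would be run by Juju when the install event occurs.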

Many charms are already written and available from the Ubuntu Juju Charm Browser (the link is listed in the “References” section). You can quickly deploy a Jenkins build integration server or slave, a Hadoop database or node, a MediaWiki instance, a Minecraft game server, and tons more using already-written and -available charms. This is probably how most readers will interact with charms.

If you want to try your hand at writing and creating charms for services, you can. Much more detail is available at https://juju.ubuntu.com/docs/write-charm.html to help you learn the process, the semantics, and how to get your charm included in the Charm Store.

Juju has a charm feature called bundles. A bundle is a set of services with a specific configuration and all corresponding relations in a convenient package that can deploy the services in one single step.
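For example, the WordPress deployment walked through earlier could be captured as a bundle. A minimal sketch in the Juju 1.x bundle format follows; the bundle name, charm references, and layout are illustrative:

```yaml
# Hypothetical bundle describing the wordpress + mysql deployment in one step.
wordpress-simple:
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 1
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
    - ["wordpress:db", "mysql:db"]
```

Given a bundle file like this, tooling such as Juju Quickstart can stand up both services and their relation in a single step instead of four separate commands.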

The Juju GUI

Juju has a GUI available, as shown in Figure 35.1. You must first deploy a charm for the Juju GUI, and then you access it using a local URL. The charm is new and may change slightly, so rather than print the details here, we send you to the Charm Browser, where you can learn about it directly from the developers. See http://jujucharms.com/charms/precise/juju-gui.


FIGURE 35.1 The Juju GUI.

Juju Quickstart

A recent addition to Juju is the quickstart command, which helps you get started by walking you through an installation, taking away much of the pain of the manual process. Quickstart helps you enter the configuration information needed to set up a local cloud or to work with providers using OpenStack, Windows Azure, or Amazon EC2. It even installs the Juju GUI for you. Quickstart works with and can deploy Juju charm bundles; if the environment is not already bootstrapped, Quickstart brings up the environment, installs the GUI, and then deploys the bundle.

Juju on Mac OS X and Windows

It is possible to use a Juju client on your Mac or Windows machine to manage your Ubuntu servers. See: https://juju.ubuntu.com/install/ to download the client.

Mojo: Continuous Delivery for Juju

Mojo, made by Canonical, provides configuration and tools to verify the success of Juju deployments. It gives you a structured means of creating an entirely repeatable deployment process: going from an empty environment with no VMs running to VMs with services deployed on them, relations established between them, and a fully working service. More information is available at https://mojo.canonical.com/.

Snappy Ubuntu Core

Snappy Ubuntu Core is a minimal Ubuntu installation with transactional updates. It is a bare-bones server image with the same libraries as regular Ubuntu, but applications are installed in a new way. The idea is that applications are deployed in an encapsulated way, as Snap packages, such that a change to one application cannot affect any other application. This provides more predictable behavior and stronger security. Application updates come as “delta updates,” meaning only the changes are downloaded and installed, leading to smaller downloads and faster upgrades. See Chapter 9, “Managing Software,” to learn how to use Snap packages; Chapter 39, “Opportunistic Development,” for information about creating Snap packages of existing software; and www.ubuntu.com/cloud/snappy for more.

Ubuntu Metal as a Service (MaaS)

Juju exists to deploy workloads to the cloud. Ubuntu Metal as a Service is built as a first step to deploy that cloud, when you are creating the cloud on hardware you own or over which you have bare-metal configuration control. Ubuntu Metal as a Service is a collection of best practices for deploying Ubuntu servers from Ubuntu servers. It is designed to assist with deployments to cloud servers numbering in the dozens, hundreds, or even thousands. You install Ubuntu MaaS directly to bare metal on one server, and from there all other bare-metal servers are provisioned and set up; an entire data center can be implemented quickly and easily. After it’s deployed, MaaS provides automatic federation and integrated management, monitoring, and logging. You can add physical equipment, remove it, repurpose it within your cloud, and more. You can do this dynamically, scaling up or down and managing resources as needed. It is powerful and very cool.

This is all done from the standard Ubuntu Server install CD, which you can download as described in Chapter 1, “Installing Ubuntu and Post-Installation Configuration,” except that you want the server version instead of the regular version of Ubuntu. At the time of this writing, the Ubuntu community is still creating documentation for Metal as a Service (the link is given in the “References” section), so if this interests you, take a look at that page.

Landscape

Landscape is an enterprise-focused systems management and monitoring tool that is available from Canonical. It can monitor Ubuntu Cloud servers like the ones discussed in this chapter. Landscape can be deployed locally on your cloud or used as part of a paid service from Canonical called Ubuntu Advantage. Landscape is described further in Chapter 36, “Managing Sets of Servers.”

References

• http://www.ubuntu.com/cloud—The official Ubuntu introduction to cloud computing.

• www.linux-kvm.org/page/Main_Page—The main page for KVM, the Kernel-based Virtual Machine.

• www.openstack.org—The official website for OpenStack.

• https://landscape.canonical.com—Canonical’s Landscape is a commercial management tool for Ubuntu Cloud and Amazon EC2 instances.

• http://juju.ubuntu.com—The official Ubuntu documentation for Juju.

• http://jujucharms.com/—The official Ubuntu Juju Charm Browser.

• http://conjure-up.io—The official site for Conjure Up, an easy way to deploy big software stacks to the cloud using Juju.

• http://maas.io/—The official documentation for Ubuntu MAAS.