
OpenStack Operations Guide (2014)

Appendix D. Icehouse Preview

The Icehouse release of OpenStack was made available on April 17, 2014, a few days before this book went to print! It was built by contributors from 112 different companies. Over 1,200 contributors submitted patches to the collection of projects, for more than 17,000 commits, and those commits received more than 113,000 reviews.

Here is a preview of the features offered for the first time in each project. Not all of these features are fully covered in the official documentation, but we wanted to share the list here.

A few themes emerge as you investigate the various blueprints implemented and bugs fixed:

§ Projects now consider migration and upgrades as part of the “gated” test suite.

§ Projects do not have a common scheduler yet, but are testing and releasing various schedulers according to real-world uses.

§ Projects are enabling more notifications and custom logging.

§ Projects are adding in more location-awareness.

§ Projects are tightly integrating with advancements in orchestration.

§ The Block Storage, Compute, and Networking projects implemented the x-openstack-request-id header to more efficiently trace request flows across OpenStack services by logging mappings of request IDs as they cross service boundaries.

These sections offer listings of features added, hand-picked from a total of nearly 350 blueprints implemented across ten projects in the integrated Icehouse release. For even more details about the Icehouse release, including upgrade notes, refer to the release notes at https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse.

Block Storage (cinder)

31 blueprints

§ The absolute limits command now reports the resources currently in use, similar to Compute.

§ Notifications for volume attach and detach.

§ The chance and simple schedulers are now deprecated.

§ Retyping volumes enabled—HP LeftHand, SolidFire.

§ Volume metadata stored on backup.

§ Operators can now record the reason that the Block Storage service was disabled.

§ Dell EqualLogic volumes can now be extended.

§ Support for qos_specs in the SolidFire driver; QoS specs can be created and managed separately from volume types (see the example at the end of this list).

§ Quota settings for deletion.

§ Backup recovery API for import and export.

§ EMC VNX Direct Driver added.

§ Fibre Channel Volume Driver added.

§ HP MSA2040 driver added.

§ Enhancements to the 3PAR driver, such as native volume migration and the qos-specs feature.

§ IBM SONAS and Storwize v7000 Unified Storage Systems drivers added.

§ HP LeftHand driver enhancements.

§ TSM Backup Driver enhancements.
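
The SolidFire qos_specs feature mentioned above can be sketched with the cinder client. This is a hedged illustration, not a prescription: the spec name, the minIOPS/maxIOPS keys, and the IDs are placeholders for values from your own deployment.

$ cinder qos-create fastVolumes minIOPS=500 maxIOPS=1000
$ cinder qos-associate <QOS_SPECS_ID> <VOLUME_TYPE_ID>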

Common (oslo)

22 blueprints

§ Standalone rootwrap implementation.

§ Messages can be localized for i18n support.

§ Notifications are configurable.

Compute (nova)

65 blueprints

Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes, without requiring downtime of the entire cloud.

Hyper-V driver added RDP console support.

Libvirt (KVM) driver additions:

§ The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the os_command_line key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise, the default kernel arguments are used.

§ The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.

§ The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random; however, use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.

§ The Libvirt driver now allows instances to be configured with a video driver other than the default (cirrus). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the hw_video_model, hw_video_vram, and hw_video_head properties in the image metadata. Currently supported video driver models are vga, cirrus, vmvga, xen, and qxl.

§ Watchdog support has been added to the Libvirt driver. The watchdog device used is i6300esb. It is enabled by setting the hw_watchdog_action property in the image properties or flavor extra specifications (extra_specs) to a value other than disabled. Supported hw_watchdog_action property values, which denote the action for the watchdog device to take in the event of an instance failure, are poweroff, reset, pause, and none (see the example following this list).

§ The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.

§ The Libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.
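
Most of these Libvirt features are driven by image metadata. The following is a hedged sketch, not a recommendation: the image ID, kernel arguments, and property values are illustrative assumptions, and only the property names come from the list above.

$ glance image-update <IMAGE_ID> \
    --property os_command_line='console=tty0 console=ttyS0' \
    --property hw_video_model=qxl \
    --property hw_watchdog_action=poweroff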

VMware driver additions:

§ The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the nova diagnostics INSTANCE command, where INSTANCE is replaced by an instance name or instance identifier.

§ The VMware Compute drivers now support booting an instance from an ISO image.

§ The VMware Compute drivers now support the aging of cached images.

XenServer driver additions:

§ Added initial support for PCI pass-through.

§ Maintained group B status through the introduction of the XenServer CI.

§ Improved support for ephemeral disks (including migration and resize up of multiple ephemeral disks).

§ Support for vcpu_pin_set, essential when you pin CPU resources to Dom0.

§ Numerous performance and stability enhancements.

API changes:

§ In OpenStack Compute, the OS-DCF:diskConfig API attribute is no longer supported in V3 of the nova API.

§ The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.

§ The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the Compute Service had been disabled and the system re-provisioned. This functionality is provided by the ExtendedServicesDelete API extension.

§ Separated the V3 API admin_actions plug-in into logically separate plug-ins so operators can enable subsets of the functionality currently present in the plug-in.

§ The Compute Service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3, which allows non-unique tenant names.

§ The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the nova hypervisor-show command.
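
For example, assuming admin credentials are loaded, the newly exposed hypervisor IP address can be viewed with the nova client; the hypervisor ID below is a placeholder:

$ nova hypervisor-list
$ nova hypervisor-show <HYPERVISOR_ID>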

Scheduler updates:

§ The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.

§ A new scheduler filter, AggregateImagePropertiesIsolation, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute Service configuration keys aggregate_image_properties_isolation_namespace and aggregate_image_properties_isolation_separator are used to determine which image properties are examined by the filter (see the configuration sketch after this list).

§ Weight normalization is now a feature in OpenStack Compute. Weights are normalized, so there is no need to artificially inflate multipliers: the maximum weight that a weigher will assign to a node is 1.0, and the minimum is 0.0.

§ The scheduler now supports server groups with two policy types, affinity and anti-affinity; a server deployed in a group is placed according to the group's predefined policy.
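
A minimal /etc/nova/nova.conf sketch enabling the caching scheduler driver and the aggregate image-properties filter described above. The filter list, namespace, and separator values are illustrative assumptions; merge them with whatever filters your deployment already uses.

[DEFAULT]
scheduler_driver = nova.scheduler.caching_scheduler.CachingScheduler
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateImagePropertiesIsolation
aggregate_image_properties_isolation_namespace = example_namespace
aggregate_image_properties_isolation_separator = .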

Other features:

§ Notifications are now generated upon the creation and deletion of keypairs.

§ Notifications are now generated when a compute host is enabled, disabled, powered on, shut down, rebooted, or put into or taken out of maintenance mode.

§ Compute services are now able to shut down gracefully by disabling the processing of new requests when a service shutdown is requested, while allowing requests already in progress to complete before terminating.

§ The Compute Service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the running_deleted_instance_action configuration key. A new shutdown value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.

§ File injection is now disabled by default in OpenStack Compute. Instead, it is recommended that the ConfigDrive and metadata server facilities be used to modify guests at launch. To enable file injection, modify the inject_key and inject_partition configuration keys in /etc/nova/nova.conf and restart the compute services (see the sketch after this list). The file injection mechanism is likely to be removed in a future release.

§ A number of changes have been made to the expected format of the /etc/nova/nova.conf configuration file, with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver-specific flags, including those for the Libvirt driver, have also been moved to their own option groups.
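
As a hedged illustration of the last two items, the file injection keys now live under the driver-specific [libvirt] option group in /etc/nova/nova.conf. The values below re-enable key injection and are assumptions to adapt, not recommendations:

[libvirt]
# assumed semantics: -2 disables injection entirely (the new default),
# -1 has the driver inspect the disk to locate a partition
inject_partition = -1
inject_key = true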

Database Service (trove)

23 blueprints

§ Integration with Orchestration.

§ Support for Apache Cassandra NoSQL database for Ubuntu.

§ Incremental backups for point-in-time restoration of databases.

§ Limited support for MongoDB.

Identity (keystone)

26 blueprints

§ The Identity API v2 has been prepared for deprecation, but remains stable and supported in Icehouse.

NOTE

Neither example architecture in the OpenStack Operations Guide uses the Identity API v3.

§ Users can update their own password.

§ The /v3/OS-FEDERATION/ call allows the Identity API to consume federated authentication via Shibboleth for multiple identity providers, and to map federated attributes into OpenStack group-based role assignments.

§ The POST /v3/users/{user_id}/password call allows API users to update their own passwords (see the example at the end of this section).

§ The GET /v3/auth/tokens?nocatalog call allows API users to opt out of receiving the service catalog when performing online token validation.

§ The /v3/regions call provides a public interface for describing multi-region deployments.

§ /v3/OS-SIMPLECERT/ now publishes the certificates used for PKI token validation.

§ The /v3/OS-TRUST/trusts call is now capable of providing limited-use delegation via the remaining_uses attribute of trusts.

§ Deployers can now define arbitrary limits on the size of collections in API responses (for example, GET /v3/users might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.

§ Backwards compatibility for keystone.middleware.auth_token has been removed. The auth_token middleware module is no longer provided by the keystone project itself, and must be imported from keystoneclient.middleware.auth_token instead.

§ The s3_token middleware module is no longer provided by the keystone project itself, and must be imported from keystoneclient.middleware.s3_token instead. Backwards compatibility for keystone.middleware.s3_token is slated for removal in Juno.

§ The default token duration is now 1 hour instead of 24 hours. This effectively reduces the number of tokens that must be persisted at any one time and, for PKI deployments, reduces the overhead of the token revocation list.

§ Middleware changes:

§ The keystone.contrib.access.core.AccessLogMiddleware middleware has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.

§ The keystone.contrib.stats.core.StatsMiddleware middleware has been deprecated in favor of external tooling and may be removed in the K release.

§ The keystone.middleware.XmlBodyMiddleware middleware has been deprecated in favor of support for “application/json” only and may be removed in the K release.

§ A v3 API version of the EC2 Credential system has been implemented. To use it, add the following section to keystone-paste.ini:

[filter:ec2_extension_v3]
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory

Also, ec2_extension_v3 needs to be added to the pipeline variable in the [pipeline:api_v3] section of keystone-paste.ini.
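
For instance, the resulting pipeline might look like the following; the other filters shown are illustrative and should match whatever your existing pipeline line already contains, with ec2_extension_v3 added before service_v3:

[pipeline:api_v3]
pipeline = sizelimit url_normalize token_auth admin_token_auth json_body ec2_extension_v3 service_v3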

§ Trust notifications for third parties. Identity now emits Cloud Auditing Data Federation (CADF) event notifications in response to authentication events.

§ KVS drivers are now capable of writing to persistent key-value stores such as Redis, Cassandra, or MongoDB.

§ Notifications are now emitted in response to create, update and delete events on roles, groups, and trusts.

§ The LDAP driver for the assignment backend now supports group-based role assignment operations.

§ Identity now publishes token revocation events in addition to providing continued support for token revocation lists. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Identity to eliminate token persistence during the Juno release.
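
To illustrate the self-service password call and the nocatalog option described earlier in this section, here is a hedged curl sketch; the endpoint, port, identifiers, and passwords are placeholders for your deployment:

$ curl -X POST http://identity.example.com:5000/v3/users/<USER_ID>/password \
    -H "Content-Type: application/json" -H "X-Auth-Token: <USER_TOKEN>" \
    -d '{"user": {"original_password": "oldpass", "password": "newpass"}}'

$ curl "http://identity.example.com:5000/v3/auth/tokens?nocatalog" \
    -H "X-Auth-Token: <ADMIN_TOKEN>" -H "X-Subject-Token: <TOKEN_TO_VALIDATE>"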

Image Service (glance)

10 blueprints

§ Can limit the number of additional image locations to pull from storage.

§ An image upload policy is now available for the v1 API.

§ ISO Image support.

§ NFS as a backend storage facility for images.

§ Image location support.

§ Image owners in v2 API.

§ Retry failed image download with Object Storage (swift) as backend.

§ Allow modified kernel arguments.

Networking (neutron)

43 blueprints

§ Migration paths for deprecated plug-ins.

§ Quotas extended to VIPs, pools, members, and health monitors.

§ Hyper-V agent security groups.

§ Least routers scheduler.

§ Operational status for floating IP addresses.

§ OpenDaylight plug-in.

§ Big Switch plug-in enhancements—DHCP scheduler support.

§ MidoNet plug-in enhancement.

§ One Convergence plug-in.

§ Nuage plug-in.

§ IBM SDN-VE plug-in.

§ Brocade mechanism driver for ML2.

§ Remote L2 gateway integration.

§ LBaaS drivers from Embrane and Radware.

§ VPNaaS for Cisco products.

§ NEC plug-in enhanced to support packet filtering by Programmable Flow Controllers (PFC).

§ PLUMgrid plug-in enables the provider network extension.

Object Storage (swift)

15 blueprints

§ Logging at the warning level when an object is quarantined.

§ Better container synchronization support across multiple clusters while enabling a single endpoint.

§ New gatekeeper middleware guarding the system metadata.

§ Discoverable capabilities added to the /info API response so end users can be informed of cluster limits on names, metadata, and object size (see the example after this list).

§ Account-level access control lists (ACLs) with the X-Account-Access-Control header.
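
Both of the last two items can be exercised directly against the proxy server. This is a hedged sketch: the endpoint, account, and project:user names are placeholders, and the ACL value follows Object Storage's JSON account ACL syntax:

$ curl http://swift.example.com:8080/info

$ curl -X POST http://swift.example.com:8080/v1/AUTH_<ACCOUNT> \
    -H "X-Auth-Token: <TOKEN>" \
    -H 'X-Account-Access-Control: {"read-only": ["otherproject:otheruser"]}'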

OpenStack dashboard (horizon)

40 blueprints

§ Translated user interfaces added for German, Hindi, and Serbian.

§ Live migration tasks can be performed in the dashboard.

§ Users can set public container access for Object Storage objects from the dashboard.

§ Users can create “directories” to organize their stored objects.

Orchestration (heat)

53 blueprints

§ Adds Identity API v3 support.

§ Enables a Database as a Service resource.

§ Provides a stack filtering capability.

§ Provides a management API.

§ Provides a cloud-init resource for templates to use.

Telemetry (ceilometer)

21 blueprints

§ Add alarm support for HBase.

§ Addition of time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week.

§ Enabled derived rate-based meters for disk and network, more suited to threshold-oriented alarming.

§ Support for collecting metrics of VMs deployed on VMware vCenter.

§ Exclude alarms on data points where not enough data is gathered to be significant.

§ Enable API access to sampled data sets.

§ New sources of metrics:

§ Neutron north-bound API on SDN controller

§ VMware vCenter Server API

§ SNMP daemons on baremetal hosts

§ OpenDaylight REST APIs