
Part IV: Ubuntu as a Server

Chapter 31. NoSQL Databases


In This Chapter

• Key/Value Stores

• Document Stores

• Wide Column Stores

• Graph Stores

• References


If you read Chapter 30, “Administering Relational Database Services,” you have already read a brief description of the databases in this chapter. NoSQL is a broad term covering a large number of database styles. There are some similarities, but each of these databases was developed for a specific purpose. That means they are not necessarily interchangeable, although it might be possible to force one to serve a task for which it was not designed. That is rarely a good idea. Also, despite the press and hype around NoSQL over the past few years, NoSQL will not and should not be considered a replacement for relational databases. Rather, these are databases designed to excel in specific situations, especially in large-scale, high-traffic applications. Only if you need to store and interact with a specific type of data as described in one of the following sections do we recommend using a database listed in that section.


Not one size fits all

“It’s not a one size fits all anymore. One will use multiple technologies. If I were a CTO, I’d want to use NoSQL for scalable high performance operational data access, lots of reads and writes at high speed and [for] semi real-time and low latency for end users. And you need another for reporting and BI. These [NoSQL] technologies are not optimal for that. In general, a classic data warehouse is a good solution for those things.”

—Dwight Merriman, CEO of MongoDB, at OSCON 2011, as quoted in an article from ZDNet at www.zdnet.com/article/mongodb-chief-it-will-be-mixed-sql-nosql-world/.


There are different definitions and even some controversy over what NoSQL means. Does it mean that the database does not use SQL for interactions? Perhaps, but that is not absolute. Does it mean, as some now suggest, “not only SQL?” Maybe. That is certainly a broader and more accurate description, although it is also a bit misleading as it seems to include all the relational databases that use SQL. Here is a well-known secret: There is no consistent definition of the term. With that said, here is a reasonably accurate set of features that the databases that are generally labeled NoSQL share:

• They store structured data (organized in a way that is defined and identifiable).

• They do not store data relationally (no tables with rows and columns and relationships between tables).

That’s about it.

The advantages to using a NoSQL option instead of a relational database are as follows:

• NoSQL databases are designed for very large data sets and can often handle more data than any relational database.

• NoSQL databases are designed to scale as needed. Instead of buying a bigger database server to handle increased load, as you would with a relational model, you can easily add database hosts and spread the database across them. This is designed to work transparently and on commodity hardware.

• Commodity hardware is much cheaper than dedicated, professional relational database server hardware, making NoSQL a cheaper option in many cases.

• Data models in NoSQL are much more relaxed. Some would call this a disadvantage, but it appears in the advantages list because some data models change frequently. In a relational system, a data model change requires taking the database offline to modify its structure. NoSQL databases often have very few or even no data model restrictions and allow quick and dirty changes.

The disadvantages to using a NoSQL database are these:

• Support is not as readily available. Most NoSQL databases are relatively small open-source projects, which is something that should please most readers of this book. However, it also means that, unlike most enterprise-focused relational databases, there may be no enterprise support available for businesses. This will frighten many managers.

• Most NoSQL projects are fairly new. This also means that they are untested in large enterprises and businesses. It can also mean longer setup times because of the learning curve; known solutions can be implemented more quickly.

• NoSQL is not always ACID (atomicity, consistency, isolation, durability) compliant. Instead of guaranteeing that every transaction is recorded immediately and durably, that only one interaction may occur with a piece of data at any given time, and that only the current version is served to the end user, NoSQL databases often rely on replicating data across multiple hosts that each become consistent eventually. This usually happens quickly, but there is no guarantee that at a given moment you will retrieve the most up-to-date data. That matters when dealing with financial transactions but might not be a concern for web search results, where “close enough” might be all that is wanted or necessary.

• There is no particular advantage to NoSQL unless your data is large enough to benefit from it or your project fits neatly into a specific use case.

• Migrating from a traditional relational database to NoSQL later is usually not difficult. Remember Donald Knuth’s saying that “premature optimization is the root of all evil”: it might be wiser to design your site or application using known technology and migrate only if the need arises.

One facet that is sometimes described as an advantage and at other times as a disadvantage relates to administration. Relational databases often require trained staff to administer them. The positive side is that qualified database administrators (DBAs) are plentiful, if expensive. NoSQL databases are designed to be set up and then, at least in theory, require little to no further maintenance. Some pundits claim this saves money; others claim that DBAs will still be needed, but for different tasks, and that very few people are trained and available to perform those tasks precisely because of this perception and the newness of the database style.


Now, a Quick Waffle on the Name

Google and others seem to like the name “NewSQL” more than “NoSQL.” Some apply “NewSQL” only to a certain group of databases that are somehow different from “NoSQL” databases. While “NoSQL” is much more common to hear, “NewSQL” is the latest buzzword in the database world. Here is one description of why:

“‘NewSQL’ is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL’ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report.

“And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL.”

The 451 Group’s Matt Aslett, as recorded at http://blogs.the451group.com/information_management/2011/04/06/what-we-talk-about-when-we-talk-about-newsql/.


An interesting development in the NoSQL world is that specifications are being created for a new database query language called UnQL (pronounced “uncle”), which stands for Unstructured Query Language. This is being developed as a joint project by two developers: Richard Hipp, the creator of SQLite; and Damien Katz, the creator of CouchDB. They expect more to join them soon. In a nutshell, the language contains some familiar commands, such as SELECT, INSERT, UPDATE, and DELETE. However, it is different from SQL because these commands do not work on tables, but rather on collections of unordered sets of objects that are described using JavaScript Object Notation (JSON). You can learn more about UnQL from the first product we have seen that uses it at www.unqlite.org.

The sections that follow group databases by similarities between them, choosing one standout feature as the name of the section. Because NoSQL databases are still fairly new, it is unclear which, if any, will become a long-term standard. For that reason, this chapter gives high-level coverage of a larger number of options rather than deep coverage of a couple of primary options.

Key/Value Stores

Key/value stores are listed first because they are the simplest of the NoSQL databases, at least in the sense of interactions. You have a piece of data of any type; this is your value. You give it a name of some sort; this is your key. Any time you need that specific piece of data, you ask for it using the key. Values might be bits of text, binaries, pretty much anything, and the data type does not need to be defined in advance, or often at all. The database never needs to know what the value object is, just that it is stored using the given key. These databases have no schema. The contents might be vastly different from one another in type, size, domain, and so on. It is the client, the application that uses the database, that is required to know about the value (what it is and the context in which it can be used). The database merely stores it using a key, knows the key/value pair, and serves the value when requested using its key.
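
To make the interaction model concrete, here is a minimal sketch in Python that uses a plain in-memory dictionary as a stand-in for a key/value store. It is not tied to any particular product, and real stores add networking, persistence, and replication on top of this idea; the key names and data are invented for illustration.

import json

store = {}   # a toy stand-in for a key/value store

# The client serializes whatever it likes; the store never inspects the value.
store["cart:user:1842"] = json.dumps({"items": ["tea", "mug"], "total": 12.50})
store["avatar:user:1842"] = b"\x89PNG...raw image bytes..."

# Retrieval is always by key, and the client must know what the value means.
cart = json.loads(store["cart:user:1842"])
print(cart["items"])   # ['tea', 'mug']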

Key/value stores are great for things like the contents of a website shopping cart, user preference lists, or posts on a social media site. Think of data that is not vital, data that is useful but will not cause problems if it is lost. You would not want to use a key/value store for credit card information, personal identification, health records, and the like. You would want it for high-traffic sites that need to give a local user quick and accurate access to information, but where that information can take time to replicate to other database nodes, or might not need to be replicated across nodes at all; where the database itself is accessed heavily, but where users are not necessarily working with the same data concurrently.

Berkeley DB

Berkeley DB was originally created at the University of California, Berkeley, to provide a disk-based hash table that worked better than existing solutions while also helping the university clean up its free UNIX version, BSD, by removing code inherited from AT&T. Several years later, Netscape asked the developers to add features that would make Berkeley DB more useful to them. This resulted in Berkeley DB being spun off from the university into a company founded for the purpose, Sleepycat Software, which led development for many years. Since the purchase of Sleepycat Software in 2006, Berkeley DB has been owned by Oracle.

Although it is listed under key/value stores, this is not the only way to interact with a Berkeley DB database. Support also exists for using SQL and Java. Interaction is accomplished using an application programming interface (API). Berkeley DB is very fast and very small. As a result, it can be found running on large-scale systems and embedded within applications and even running on mobile devices.
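
As a small illustration, the following sketch uses the third-party bsddb3 Python bindings (an assumption; they must be installed separately and linked against a local Berkeley DB library) to store and fetch a key/value pair through a dict-like API.

import bsddb3

# hashopen() returns a dict-like object backed by an on-disk hash table;
# the "c" flag creates the file if it does not already exist.
db = bsddb3.hashopen("/tmp/example.db", "c")
db[b"fruit"] = b"apple"       # keys and values are byte strings
print(db[b"fruit"])           # b'apple'
db.close()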

Berkeley DB is easily the most mature database mentioned in this chapter and is most notable for its use within many well-known software projects including Subversion, Postfix, and OpenLDAP. It was even included as a data storage backend for MySQL prior to MySQL 5.1.

Cassandra

Cassandra was developed by Facebook for its Inbox search feature. It was released as open source in 2008 and later became an Apache project. Cassandra is a key/value store that runs on a flexible cluster of nodes and is also a wide column store, like the HBase database discussed later in the “Wide Column Stores” section. Nodes may be added to and removed from the cluster. Data is replicated across multiple nodes of the cluster. There is no central node, and data can be accessed from any node; if the node receiving a request does not hold the specific data requested, it still services the request by retrieving and returning the data. The main goal of Cassandra is fast retrieval of data, with fault tolerance handled through replication across nodes and speed adjusted by adding nodes to create more access points.

One interesting feature is that Cassandra may be tuned to adjust the trade-off between speed of transactions and consistency of data. When data is stored, it is initially held in memory and sent to disk only when specific criteria are met. This makes interaction very quick. In fact, not all data stored in Cassandra is designed to persist over time, and some data might not get written to disk at all. This means that not every reader or seeker of data may find a specific piece, but in cases like Facebook’s need to store Inbox search data that has only limited time value (search results could be different tomorrow, or even ten minutes from now), this might not matter at all. In these cases, access speed and convenience are more important.
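
The sketch below uses the DataStax Python driver (an assumption, installable as cassandra-driver); the keyspace, table, and column names are hypothetical. It illustrates the two behaviors just described: a time-to-live on inserted data and a per-query consistency level.

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])            # any node can service a request
session = cluster.connect("inbox_search")   # hypothetical keyspace

# Store a search entry that only needs to live for an hour (TTL is in seconds).
session.execute(
    "INSERT INTO results (user_id, term, hits) VALUES (%s, %s, %s) USING TTL 3600",
    ("user1842", "holiday photos", 17),
)

# Tune the speed/consistency trade-off per query: ONE is fast, QUORUM is safer.
query = SimpleStatement(
    "SELECT hits FROM results WHERE user_id = %s AND term = %s",
    consistency_level=ConsistencyLevel.ONE,
)
row = session.execute(query, ("user1842", "holiday photos")).one()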

Cassandra is being used by Facebook, Twitter, Reddit, and many others.

Memcached and MemcacheDB

Memcached stores data requested on a system in RAM for a specific period of time to make retrieving that data faster if it is requested again. How long data persists can be based on a specific setting, memory pressure, and other criteria. The goal is to reduce the number of times the data store must be accessed: data that is accessed often is held in memory, from where it is retrieved much more quickly. This can alleviate problems such as the load spike caused when a blog post suddenly becomes popular after its URL is shared on a social networking site; the spike in traffic stays manageable because the content of the post is held in memory instead of being requested from the database over and over.
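
A typical cache-aside pattern looks like the following sketch, which assumes the pymemcache client library and a Memcached daemon on its default port; fetch_post_from_database() is a hypothetical stand-in for the slow relational query being avoided.

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def get_blog_post(post_id):
    key = "post:%d" % post_id
    cached = cache.get(key)
    if cached is not None:
        return cached                             # served straight from RAM
    post = fetch_post_from_database(post_id)      # hypothetical slow path
    cache.set(key, post, expire=300)              # keep it hot for five minutes
    return post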

MemcacheDB is an implementation of the Memcached API that uses a key/value format based on Berkeley DB. However, where Memcached is designed as a cache solution to speed up data access from memory, MemcacheDB is designed as a persistent storage engine. Because it uses the same API protocol as Memcached, it is an easy way to add data persistence where caching is already in place with Memcached.

Memcached is used by sites like Twitter, Reddit, YouTube, and Facebook, and it is also supported and often used by websites based on content management systems like Drupal and WordPress.

Redis

Initially released in 2009, Redis is intended for applications where performance and flexibility are more important than persistence and absolute data integrity. It is an open-source key/value store written in C. Values can be strings, hashes, lists, sets, and sorted sets. Redis works in RAM for speed, occasionally dumping to disk. Because actions are performed in memory, they are done faster. Operations include appending to a string, incrementing a hash value, pushing to a list, set computations, and more. Redis is also designed so that master-slave replication is easy to set up.
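
The following short sketch, which assumes the redis-py client library and a Redis server on its default port, shows a few of the operations just mentioned; the key names are illustrative only.

import redis

r = redis.Redis(host="localhost", port=6379)

r.append("motd", "Welcome back!")              # append to a string
r.hincrby("post:42:stats", "views", 1)         # increment a field in a hash
r.lpush("recent:user:1842", "post:42")         # push onto a list
r.sadd("likes:post:42", "user:1842")
r.sadd("likes:post:99", "user:1842", "user:7")
common = r.sinter("likes:post:42", "likes:post:99")   # set computation
r.zadd("leaderboard", {"user:1842": 120})      # sorted set with a score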

Riak

Riak is a fault-tolerant, distributed database designed for scalability and use in the cloud. It is masterless, meaning there should be no single point of failure. It is designed for speed, simplicity, and stability. Riak is based on a paper by Amazon describing Dynamo, an internal, proprietary system owned by Amazon. The Riak Wiki describes the database in one place as “the most boring database you’ll ever run in production. No sharding required, just horizontal scaling and straightforward capacity planning. The same operational tasks apply to small clusters and large clusters. More machines does not mean more ops.”

Document Stores

Document stores are designed to store data that is already structured in some form of notation like JSON or XML. They typically focus on one specific type of notation and are intended to allow entire objects, including arrays and hashes, to be stored and retrieved at once.

Many times document stores are implemented as a layer between an application and a relational database to hold the output of certain types of queries. For example, it might be convenient to aggregate information that is typically requested together such as a set of user preferences or name and address information and store it as one object. Requesting and retrieving only one object that is already formatted in an object notation like JSON is faster than making many database queries, and it supplies preformatted data for the client application that can be used to both style output and display specific data at the same time.

Data that is stored and served this way does not have to fit database-specific formatting requirements in a NoSQL database. There are no tables to relate, and one piece of data may be larger or smaller than another and include more or less information. This is generally called semistructured data. Listings 31.1 and 31.2 are quick snippets of two sets of user preferences in JSON—one that includes many user-set preferences and one that includes only one. The client application could be written to assume a set of default preferences that are used unless specifically overridden by this file.

LISTING 31.1 Sandra’s Preferences


{"userpreferences": {
    "displayName": "Don'tHitOnMe",
    "gender": "DoNotDisplay",
    "siteTheme": "Springtime",
    "postsDisplayed": "25",
    "keepLoggedIn": "True"
  }
}

LISTING 31.2 Matthew’s Preferences


{"userpreferences": {
    "siteTheme": "TieDye"
  }
}

CouchDB

CouchDB began in 2005 as a self-funded personal project of Damien Katz. In 2008, it was given to Apache, where development continues. The goal of CouchDB is to provide a database useful for serving web applications. The emphasis is on scalability and fault tolerance while using commodity hardware (Couch is an acronym for cluster of unreliable commodity hardware). This is not an easy task, but when done successfully it lowers costs.

CouchDB uses a RESTful HTTP API that is designed from the beginning to be used on and for the Web. All stored items have a unique uniform resource identifier (URI), and full create, read, update, and delete (CRUD) functions are available directly using standard HTTP calls, making CouchDB very easy to integrate into web applications. These calls can be made from a browser or from a command line using a tool like cURL, which is available on many typical server platforms, including Ubuntu.
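
As an illustration of how direct that HTTP interface is, the sketch below uses Python's requests library (equivalent cURL commands would work just as well) against a CouchDB instance assumed to be listening on its default port, 5984; the database name and document ID are hypothetical.

import requests

base = "http://localhost:5984/preferences"

requests.put(base)                                   # create the database
requests.put(base + "/sandra",                       # create a document
             json={"siteTheme": "Springtime", "postsDisplayed": "25"})

doc = requests.get(base + "/sandra").json()          # read it back
print(doc["siteTheme"])

# Updates and deletes must supply the current revision to avoid conflicts.
requests.delete(base + "/sandra", params={"rev": doc["_rev"]})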

A nice feature of CouchDB is that, unlike many NoSQL options, it is designed to offer ACID compliance. This makes it possible to use CouchDB with more consistency-sensitive data.

CouchDB is written in Erlang, which is a language designed for concurrency. That makes CouchDB even better suited for use in a concurrent distributed system. CouchDB is designed to store JSON document objects.

CouchDB is used by several software and web applications, including many Facebook games and applications, internal use at the BBC, and Ubuntu One, Canonical’s cloud storage.

MongoDB

MongoDB is similar to CouchDB in that both are designed as document stores for JSON objects; like Cassandra, it is also designed for replication and high availability. It was created and is supported by a company called 10gen (since renamed MongoDB, Inc.) and is newer, with its first public release in 2009. A distinctive feature for an open-source database is that the developer offers commercial, enterprise-class support, training, and consulting. This has made adoption of MongoDB much faster than is typical for NoSQL products.

MongoDB supports sharding, which automatically partitions data across servers for increased performance and scalability. This provides a form of load and data balancing and also offers a simple way to add nodes. Combined with replication of each node’s data, sharding also supports automatic failover, leaving no single point of failure.

In addition, MongoDB includes support for indexing in a manner that is more extensive and powerful than most NoSQL solutions.
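
For a flavor of how that looks in practice, here is a hedged sketch using the PyMongo driver (an assumption, installed separately); the database, collection, and field names are hypothetical, and the index on an embedded field mirrors the point made in the sidebar that follows.

from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client.sitedb.users

# Documents in the same collection do not have to share a schema.
users.insert_one({"name": "Sandra",
                  "preferences": {"siteTheme": "Springtime", "postsDisplayed": 25}})
users.insert_one({"name": "Matthew",
                  "preferences": {"siteTheme": "TieDye"}})

# Index an embedded field, then query it just as you would a top-level one.
users.create_index([("preferences.siteTheme", ASCENDING)])
print(users.find_one({"preferences.siteTheme": "TieDye"})["name"])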


Marketing Hype or Great Design?

MongoDB wasn’t designed in a lab. We built MongoDB from our own experiences building large-scale, high-availability, robust systems. We didn’t start from scratch, we really tried to figure out what was broken, and tackle that. So the way I think about MongoDB is that if you take MySQL, and change the data model from relational to document based, you get a lot of great features: embedded docs for speed, manageability, agile development with schema-less databases, easier horizontal scalability because joins aren’t as important. There are lots of things that work great in relational databases: indexes, dynamic queries, and updates, to name a few; and we haven’t changed much there. For example, the way you design your indexes in MongoDB should be exactly the way you do it in MySql or Oracle; you just have the option of indexing an embedded field.

—Eliot Horowitz, 10gen CTO and co-founder


Obviously, the people behind MongoDB are good at marketing. At the same time, if you listen closely to the crowd, you don’t hear many negative comments about MongoDB, and it has an impressive list of users including Craigslist, Shutterfly, SourceForge, the New York Times, and GitHub. MongoDB has quickly garnered great respect, and its use continues to spread.

BaseX

BaseX was started by Christian Grün at the University of Konstanz in 2005 and was subsequently released under a BSD license in 2007. It is a simple, lightweight database that does not support a lot of features but could be just right for specific applications. Rather than using JSON like CouchDB and MongoDB, BaseX is designed to store document objects in XML. It supports standard XML tools like XPath and XQuery and also includes a lightweight GUI.

BaseX creates indexes and supports W3C recommendations and standards, ACID-safe transactions, large documents, and various APIs such as REST/JAX-RX and XML:DB. Although not as sexy or well known as other options in this section, perhaps because of the newness and popularity of JSON over XML, BaseX is respected and used by many universities and enterprises.
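
To give a sense of the XPath queries BaseX evaluates, the snippet below runs a comparable expression with Python's built-in xml.etree.ElementTree; it is only an illustration of the query style and does not talk to a BaseX server or use its client API.

import xml.etree.ElementTree as ET

document = """
<userpreferences>
  <user name="Sandra"><siteTheme>Springtime</siteTheme></user>
  <user name="Matthew"><siteTheme>TieDye</siteTheme></user>
</userpreferences>
"""

root = ET.fromstring(document)
# Select every user element that has a siteTheme child, then read both values.
for user in root.findall("./user[siteTheme]"):
    print(user.get("name"), user.find("siteTheme").text)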

Wide Column Stores

Wide column stores are often referred to as big table stores, after one of the best-known examples, Google’s BigTable. Typically, a relational database reads data from tables a row at a time and then discards whatever parts of each row are not needed. Wide column stores invert this by reading data from tables in columns, selecting the attributes first, before reading in the data. This is more efficient for read-heavy workloads, so wide column stores tend to be very efficient for databases that are mostly used for reading stored data, especially from very large data sets.
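
Here is a toy illustration of the difference, sketched in plain Python: the same records laid out row by row and then column by column. Real wide column stores layer distribution, compression, and persistence on top of this idea; the data here is invented.

# Row-oriented layout: a query that needs one attribute still touches whole rows.
rows = [
    {"user": "sandra",  "theme": "Springtime", "posts": 25},
    {"user": "matthew", "theme": "TieDye",     "posts": 10},
]
avg_from_rows = sum(r["posts"] for r in rows) / len(rows)

# Column-oriented layout: the "posts" attribute can be read on its own.
columns = {
    "user":  ["sandra", "matthew"],
    "theme": ["Springtime", "TieDye"],
    "posts": [25, 10],
}
avg_from_columns = sum(columns["posts"]) / len(columns["posts"])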

Wide column stores use something like tables, with a defined schema for each table. Unlike relational databases, wide column stores do not record relationships between tables. These are not relational databases, but are more like maps that show where data exists across multiple dimensions. They are designed for scalability and as distributed systems.

Two examples of wide column stores are discussed here. One more, Cassandra, was discussed earlier in this chapter and fits into both this category and the earlier key/value stores category.

BigTable

BigTable is a proprietary Google product that is only used by Google. It is designed to work with Google’s MapReduce framework, which was created to process huge data sets across large clusters of computing nodes.

BigTable stores the massive sets of data used by many Google programs like Google Reader, My Search History, Google Earth, YouTube, and Gmail. BigTable is not available for use outside of Google.

The papers that describe Google’s design for both BigTable and MapReduce are listed in the “References” section of this chapter.

HBase

HBase is the database used by Hadoop, the Apache project’s free software framework for processing huge amounts of data across large clusters of compute nodes. Hadoop is modeled in part after the information in Google’s MapReduce and Google File System papers. HBase is to BigTable what Hadoop is to MapReduce.

The main feature of HBase is its ability to host very large tables, on the scale of billions of rows across millions of columns. It is designed to host them on commodity hardware. HBase provides a RESTful web service interface that supports many formats and encodings and is optimized for real-time queries.

Numerous companies are using Hadoop, including some very big names like Amazon, eBay, Facebook, IBM, LinkedIn, Rackspace, and Yahoo!

Graph Stores

Graph stores, or graph databases, literally store data as a graph, meaning the data is represented as a set of nodes and the relationships between them. In the simplest case, a graph with only one node, all that needs to be recorded is that node and its properties. The properties list can be as short as one entry or as long as a few million (perhaps more).

A single node carrying that many properties quickly becomes awkward, so rather than allow it to grow, most designs create new nodes sooner, each node having its own properties as well as explicit relationships that tie it to other nodes. It is the relationships that organize the nodes, and the structure is therefore flexible: a graph can look like a list, a map, a tree, or something else entirely.

Graph databases are queried using traversals. A traversal begins at defined starting nodes and follows through related nodes to answer questions such as, “What classes are my friends taking that I am not enrolled in?” or, “If server X has a network connection problem, what web services will be disrupted?” In a graph database, an index is just a special type of traversal, usually something commonly used such as finding specific nodes or relationships according to a property they share.
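
The first of those questions can be answered with a traversal like the following plain-Python sketch, which uses simple dictionaries rather than any particular graph database; the names and relationships are invented for illustration.

friends = {"sandra": ["matthew", "heather"]}
enrolled_in = {
    "sandra":  {"chemistry", "latin"},
    "matthew": {"latin", "pottery"},
    "heather": {"astronomy", "chemistry"},
}

def classes_friends_take_that_i_dont(person):
    mine = enrolled_in[person]
    found = set()
    for friend in friends.get(person, []):       # hop 1: follow FRIEND edges
        found |= enrolled_in[friend] - mine      # hop 2: follow ENROLLED_IN edges
    return found

print(classes_friends_take_that_i_dont("sandra"))   # {'pottery', 'astronomy'}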

Graph stores are not terribly common, but they are beloved by those who promote them. There is also less differentiation among the options in this category than among the options in the other categories of NoSQL databases in this chapter.

Neo4j

Neo4j is the graph store that most people have heard of in the NoSQL world. It has both a free version and a commercial version. Language bindings exist for Java, Python, and Ruby. It is scalable up to graphs of several billion nodes/relationships/properties on a single machine and can be scaled across multiple machines. It can be deployed on a standalone server or as a small-footprint database coexisting on the same machine with other software.
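
In Neo4j, the friends-and-classes traversal from earlier would be expressed in its Cypher query language. The hedged sketch below assumes the official neo4j Python driver, a local server on the default Bolt port, and hypothetical credentials, labels, and properties.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

cypher = (
    "MATCH (me:Person {name: $name})-[:FRIEND]->(:Person)-[:ENROLLED_IN]->(c:Class) "
    "WHERE NOT (me)-[:ENROLLED_IN]->(c) "
    "RETURN DISTINCT c.name AS class"
)

with driver.session() as session:
    for record in session.run(cypher, name="Sandra"):
        print(record["class"])

driver.close()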

OrientDB

OrientDB is a free database released under the Apache 2.0 license. It uses a different indexing algorithm, called MVRB-Tree, which it claims is significantly faster than traditional approaches. You might remember an older object database called Orient ODBMS. OrientDB is related to it and can be queried with a subset of SQL, but it is a complete rewrite built on a document/graph database foundation.

HyperGraphDB

HyperGraphDB is another free option, released under the LGPL. It is designed primarily for use with the semantic web, knowledge management, and artificial intelligence projects. In mathematics, a hypergraph is an extension of the standard graph that allows an edge to connect more than two nodes. According to the HyperGraphDB website, “HyperGraphDB extends this even further by allowing edges to point to other edges as well and making every node or edge carry an arbitrary value as payload.” HyperGraphDB seems to be focused on the academic side of things, so students might find it especially interesting because it appears to be trying out some new research ideas.

FlockDB

FlockDB is used by Twitter to store social graphs, such as who follows whom, and for some secondary indices. It is free and open source, using the Apache 2.0 license. It is simpler than other graph databases as it seems to try to solve fewer problems, being designed for one primary use. FlockDB is designed for online, low-latency, high-throughput environments such as websites like Twitter; even then, it’s only for storing specific types of data.

References

• www.oracle.com/us/products/database/berkeley-db/index.html—The main Berkeley DB website.

• http://cassandra.apache.org/—The main website for Cassandra.

• www.memcached.org/—The main website for Memcached.

• http://memcachedb.org/—The main website for MemcacheDB.

• http://redis.io/—The main website for Redis.

• http://basho.com/products/riak-s2/—The main website for Riak.

• http://couchdb.apache.org/—The main website for CouchDB.

• http://mongodb.org/—The main website for MongoDB.

• http://basex.org/—The main website for BaseX.

• http://research.google.com/archive/bigtable.html—The paper describing BigTable.

• http://research.google.com/archive/mapreduce.html—The paper describing MapReduce.

• http://hadoop.apache.org/—The main website for Hadoop.

• http://hbase.apache.org/—The main website for HBase, the Hadoop database.

• http://research.google.com/archive/gfs.html—The paper describing the Google File System.

• http://neo4j.org/—The main website for Neo4j.

• http://orientdb.org/—The main website for OrientDB.

• http://www.hypergraphdb.org/—The main website for HyperGraphDB.

• https://github.com/twitter/flockdb—The main website for FlockDB.