Security for Web Developers (2015)
II Applying Successful Coding Practices
5 Building Reliable Code
You might wonder why this book contains a chapter about building reliable code when the topic is security. An interrelation exists between application speed, reliability, and security. Each of these elements has a role to play in turning a good application into a great application. You can’t emphasize one over the other without diminishing your application in some way. Of course, companies do make the conscious decision to emphasize one element over another, but usually to gain specific goals. For example, you may have a legal requirement to protect application data at all costs, in which case you probably need to sacrifice some speed and reliability to achieve the goal. However, for most applications, the balance between the three elements is critical. Until you understand the interactions between speed, reliability, and security, it’s nearly impossible to achieve an application with the maximum possible security in place, and it’s impossible to create an application that performs well in an environment where all three elements are prized.
This chapter views reliability as it relates to security. In other words, it examines how you balance reliability in order to achieve specific security goals in your organization. It also considers reliability as a statistical science, although the chapter won’t bore you with the specifics of reliability calculations. The point is to understand the concept of calculated risk with regard to the state of security in an application. Anyone who tells you that an application doesn’t create risks with regard to both data and organizational integrity doesn’t truly understand either reliability or security.
As part of dealing with reliability issues, the chapter explores reliability from several perspectives. For example, in order to create a reliable application, you must define team protocols that ensure everyone understands the required concepts. Reliability experts also learn from each iteration of the calculations they make—likewise, you must incorporate a feedback loop to make changes in the way in which your organization views reliability in its unique environment. There are also the issues of using packaged solutions. You won’t build many applications from scratch, so it’s important to know the effect that using a packaged solution will have on your application.
The main thought to take away from this chapter is that the most reliable application in the world is one in which the application performs tasks without any sort of potential for disruption. This would exclude the use of any external processing, input, or resource because all three of these items have failure points. Such an application would have no users and would need to run on hardware with incredibly high reliability. However, even with all these factors in place, it’s still impossible to create an application that is 100 percent reliable. Every application has the potential to fail, making it less than reliable.
Lack of Reliability Kills Businesses
The biggest headlines in the trade press are often about security failures. Hackers getting into supposedly secure health records or obtaining access to the social security numbers of thousands of credit card users tend to make for big news. It seems that you see reliability reported far less often. However, a lack of reliability can cause terrifying results—even business failures.
Consider the case of the Knight Capital Group (http://dealbook.nytimes.com/2012/08/02/knight-capital-says-trading-mishap-cost-it-440-million/). A glitch in the software it used to interact with the New York Stock Exchange caused it to buy nearly $7 billion of stock it didn’t want. Quickly reselling the stock cost the company a net $440 million. However, the losses soon became greater. As people lost confidence in the Knight Capital Group, it began bleeding even more red ink. Eventually, another company, Getco, acquired the Knight Capital Group for pennies on the dollar (http://www.reuters.com/article/2012/12/19/us-knightcapital-getcoidUSBRE8BI0OF20121219)—all because of a software glitch.
Differentiating Reliability and Security
Some people equate reliability with security. However, reliability and security are two completely different measures of application performance. Yes, the two measures do interact, but in ways that many people really don’t understand well. A reliable application isn’t necessarily secure and vice versa. The following sections examine the issue of the interaction between reliability and security in greater detail.
The best way to work with the example described in this chapter is to use the downloadable source, rather than type it in by hand. Using the downloadable source reduces potential errors. You can find the source code examples for this chapter in the \S4WD\Chapter05 folder of the downloadable source.
Defining the Roles of Reliability and Security
Reliability is a measure of how often an application breaks. It’s a statistical measure. You use reliability to answer the question of how likely an application is to break given a certain set of circumstances. Reliability also tells you when the application environment is changing. For example, when you add staff and the load on the application increases, reliability may decrease unless you design the application to scale well. Even when the software is perfect, however, the hardware reaches a breaking point and reliability will still decrease. Therefore, reliability is a whole-system measure. The hardware, user, platform, operating environment, management techniques, and myriad other things all affect reliability and change what you see as application faults or failures.
Security is a measure of how much effort it takes to break an application. Unlike reliability, there is no method available to measure security statistically. What you have instead is a potential for damage that can only be quantified by the determination of the hacker who is attempting to cause it. If hackers are sincerely determined to break into your application and cause damage, they will almost certainly find the means to do so. When dealing with security, you must also consider the effects of monitoring and the ability of a team to react quickly to breaches.
In both cases, an application breaks when the stress applied to the application becomes greater than the application’s ability to resist. A broken application fails in its primary responsibility to manage data correctly. Just how this failure occurs depends on the application and the manner in which it’s broken. Reliability faults tend to cause data damage—security faults, on the other hand, tend to cause data breaches or result in compromised system integrity.
It’s important to demonstrate the difference between simply being secure and also being reliable. The RangeCheck1.html example below shows code that is secure from the client side (you would also check the data on the server side).
<!DOCTYPE html>
<html>
<head>
<title>Performing a Range Check</title>
<script type="text/javascript">
function testValue() {
   var value = document.getElementById("Data").value;
   if (value == "")
      alert("Please type a number!");
   else if ((value < 0) || (value > 5))
      alert("Value must be between 0 and 5!");
   else
      alert("Value = " + value);
}
</script>
</head>
<body>
<h1>Performing a Range Check</h1>
<input id="Data" type="number" value="0" min=0 max=5 /><br />
<button id="Test" onclick="testValue()">Test</button>
</body>
</html>
Figure 5-1. Range checks are made easier using the appropriate controls.
The problem with this code is that it’s secure, but it’s not reliable. If conditions change, then the code will no longer function as it should. The RangeCheck2.html example below makes the code more reliable by tying the range check to the min and max attributes of the <input> tag.
<!DOCTYPE html>
<html>
<head>
<title>Performing a Range Check</title>
<script type="text/javascript">
function testValue() {
   var inputObj = document.getElementById("Data");
   var value = inputObj.value;
   // Read the limits from the tag and convert them to numbers so
   // the comparison also works for multi-digit limits.
   var min = Number(inputObj.getAttribute("min"));
   var max = Number(inputObj.getAttribute("max"));
   if (value == "")
      alert("Please type a number!");
   else if ((Number(value) < min) || (Number(value) > max))
      alert("Value must be between " + min + " and " + max + "!");
   else
      alert("Value = " + value);
}
</script>
</head>
<body>
<h1>Performing a Range Check</h1>
<input id="Data" type="number" value="0" min=0 max=5 /><br />
<button id="Test" onclick="testValue()">Test</button>
</body>
</html>
The basic checks work as before. However, if someone chooses to change the min and max values accepted by the <input> tag, the code automatically responds by changing the conditions of the check. The failure points in this example are fewer.
However, to obtain code that is both secure and reliable, you must pay a price in speed. Notice the number of additional lines of code in the second example and the increased number of function calls. You won’t likely notice a difference in the speed of this particular example, but when you start adding these sorts of checks to an entire application, you can see some serious speed degradation. The code is more reliable and secure, but the user may not be happy with the result.
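To see the cost in isolation, consider a synthetic micro-benchmark (not part of the chapter’s examples; the function names are invented for illustration) that compares a hard-coded check against one that fetches its limits through function calls each time, the way getAttribute() lookups do:

```javascript
// Hard-coded limits: one pair of comparisons and nothing else.
function checkFixed(value) {
   return (value >= 0) && (value <= 5);
}

// Limits fetched on every call, standing in for the extra
// attribute lookups and string conversions in the second example.
function checkDynamic(value, getMin, getMax) {
   return (value >= Number(getMin())) && (value <= Number(getMax()));
}

// Time a check over many iterations to make the difference visible.
function timeIt(check) {
   var start = Date.now();
   for (var i = 0; i < 1000000; i++) {
      check(i % 10);
   }
   return Date.now() - start;
}
```

A single call to either version is immeasurably fast; only when such checks run constantly across an entire application does the overhead accumulate into something the user can feel.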
Avoiding Security Holes in Reliable Code
Just as secure software isn’t automatically reliable, reliable software isn’t automatically secure. In fact, you may find that the more reliable the software, the greater the risk that it truly isn’t secure. The problem is one of the divergent goals of reliability and security. A reliable application is always accessible, usable, and predictable, which causes more than a few security issues. Here are cases in which reliability has divergent goals from security:
• A complex password keeps software secure, but it means that the software may not be available when the user forgets the password, making the software unreliable.
• Checks that ensure no one has tampered with application data can cause the software to become unavailable when the checks find tampering.
• False positives to security checks make the application’s behavior unpredictable.
• Managing security settings increases application interface complexity and makes it less usable.
• Denial of access to required resources due to security restrictions (such as those created by a policy) makes the application less accessible and also makes it less predictable.
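The second item on the list is easy to see in code. The sketch below (the functions are illustrative, not taken from this chapter’s examples) guards data with a simple checksum; the guard improves security, but any mismatch, including a false positive, makes the data and therefore the application unavailable:

```javascript
// Compute a simple checksum for a string of application data.
// (A production application would use a cryptographic hash.)
function checksum(data) {
   var sum = 0;
   for (var i = 0; i < data.length; i++) {
      sum = (sum * 31 + data.charCodeAt(i)) % 1000000007;
   }
   return sum;
}

// Refuse to use data whose stored checksum doesn't match. The
// check blocks tampering, but it also blocks the user whenever
// the mismatch turns out to be a false positive.
function loadData(data, storedChecksum) {
   if (checksum(data) !== storedChecksum) {
      throw new Error("Data tampering detected - refusing to load.");
   }
   return data;
}
```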
The results of security holes in reliable code really can be terrifying. In a recent InfoWorld listing of ten extreme hacks (see http://www.infoworld.com/article/2933868/hacking/10-extreme-hacks-to-be-truly-paranoid-about.html), medical devices came in at number two. These devices are tested for five to ten years to ensure they continue working no matter what else might happen. However, no one patches the software during the testing period (and patching would entail additional testing). In addition, medical devices must prove easy to use, so anything that even resembles comprehensive security is left out of the development process. As a result, it’s quite easy to kill someone by hacking their medical device. Of all the examples of reliable software with serious security issues, medical devices are at the top of the heap. They also have the honor of being the software with the most devastating results when hacked.
In fact, there are many situations where security and reliability butt heads. You must choose some sort of balance between the two in order to ensure that application data remains reasonably safe and the application still runs reliably.
Using the examples in the previous section as a starting point, it’s possible to see how a range check would interfere with the user’s ability to enter values outside the predicted range. Of course, the range check makes the application more secure. However, it’s important to consider what happens when the person configuring the application’s range check performs the task incorrectly and now the user is unable to enter a perfectly valid value. The application is still secure, but it becomes unreliable.
In some situations, a designer may view the potential ramifications of such a limitation as unwanted and make the application more reliable by excluding the range check. After all, if the purpose of the software is to prevent a nuclear reactor from going critical, yet the software prevents the entry of a value that will keep the reactor from going critical, then the security of the software is no longer important because no one will be around to debate the issue.
A middle ground fix for such a situation does exist, but it increases the complexity of the software and therefore affects reliability even more. In addition, because the fix requires added coding, application speed is also affected. However, by including the various security checks during normal operation and allowing a manager or administrator to turn the checks off during an emergency, the user can enter the correct value for saving the reactor, even though the software wouldn’t normally allow it.
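A minimal sketch of that middle ground appears below (the function and parameter names are invented for illustration; they don’t come from the chapter’s examples). The range check runs normally, but an authorized override accepts the value and logs the event for later audit:

```javascript
// Validate an entry against a range, but let an authorized
// supervisor turn the check off during an emergency.
function validateEntry(value, min, max, overrideActive) {
   if (overrideActive) {
      // Emergency override: accept the value, but record the
      // event so someone can audit the decision later.
      console.log("Range check overridden for value: " + value);
      return true;
   }
   return (value >= min) && (value <= max);
}
```

The logging matters as much as the override itself: without an audit trail, the emergency path becomes an attractive target for abuse.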
The point is that you can usually find a workaround for the security-only or reliability-only conundrum. It’s usually a bad idea to focus on one or the other because the hackers (or the users) will make you pay at some point.
Focusing On Application Functionality
Making an application both secure and reliable fills a developer with joy, but the user won’t care. Users always focus their attention on accomplishing the task they care about. The user’s task might involve creating a report. (In reality, the report might not be the focus—the focus might involve getting money from investors—the report simply helps the user accomplish that goal.) If hand typing the report is easier and more transparent than using your application, the user will hand type the report. Users don’t care what tool they use to accomplish a task, which is why you see users trying to create complex output using a smartphone. As a developer, you can secretly revel in the amazing strategies contained within your code, but the user won’t care about it. The point is that a user won’t come to you and say that the application is unreliable. The user’s input will always involve the user’s task—whatever that task might be. It’s up to you to determine that the real obstacle to accomplishing the user’s task is that the application you created isn’t reliable in some important way.
The balance between reliability and security becomes more pronounced when you focus on application functionality. It isn’t simply a matter of getting the task done, but getting the task done in the way that the user originally envisioned. In today’s world, this means doing things like:
• Counting keystrokes—fewer is better
• Allowing the application to run on any platform
• Ensuring the application is always available
• Reducing the number of non-task-related steps to zero
• Making answers to questions obvious or avoiding the questions completely
Developing Team Protocols
Any effort made toward creating a reliable application has to consider the entire team. A development team needs to design and build the application with reliability in mind from the beginning. The team also needs to keep the matter of balance in mind during this process. It doesn’t matter if an application is both reliable and secure if no one uses it because it runs slowly.
Team protocols can take in all sorts of issues. For example, in 1999 the Mars Climate Orbiter burned up on entry into the Martian atmosphere because one group working on the software used metric units and another group used English (imperial) units (http://www.wired.com/2010/11/1110mars-climate-observer-report/). The problem was a lack of communication—part of the protocol that you need to create for successful application development.
In order to create appropriate protocols for your organization, you need to break the tasks up in several ways. The team that creates an application must consider the issues of reliability from three separate levels:
• Accidental Design or Implementation Errors: When most people think about reliability problems, they think about glitches that cause the application to work in a manner other than the way in which the development team originally designed it to function. The application fails to perform tasks correctly. However, these errors could also be of the sort that opens the application to access by hackers or simply doesn’t provide the required flexibility. Development teams overcome this problem through developer training, the use of secure development practices, and the employment of tools designed to locate reliability issues of this sort.
• Changing Technology: An application becomes obsolete the day you finish working on it. In fact, sometimes the application is obsolete before you complete it. Technology changes act against software to make it unreliable. There are two levels of change you must consider:
• Future Proofing: In order to create an environment in which an application can maintain its technical edge, you must future proof it. The best way to accomplish this goal is to create the application as components that interact, but are also separate entities, making it possible to upgrade one without necessarily upgrading the entire system. This is the reason that microservices have become so popular, but you can practice modular coding strategies using monolithic designs as well.
• Hacker Improvements: Hackers do innovate and become better at their jobs. A security or reliability problem that wasn’t an issue when you started a project may become quite problematic before you complete the application. You may not even know the issue exists until a hacker points it out. The best way to handle this problem is to ensure you keep your tools and techniques updated to counter hacker improvements.
• Malicious Intent: There is a good chance that someone on your development team isn’t happy or has possibly taken a job with your organization with the goal of finding ways to exploit software glitches. This team member may even introduce the glitches or backdoors with the notion of exploiting them after leaving the organization. Of course, you don’t want to create a big brother atmosphere because doing so stifles innovation and tends to create more problems than it fixes, but you also need to ensure any management staff actually does manage the development team.
Communication between members of the team is essential, but ensuring that the communication isn’t misunderstood is even more important. Assumptions create all sorts of problems and humans are especially good at filling in information gaps with assumptions. Of course, the assumptions of one team member may not be held by another member of the same team. Developing the sort of communication that teams require includes these best practices (you can find additional best practices, tools, and resources on the Cyber Security and Information Systems Information Analysis Center (CSIAC) site at https://sw.csiac.org/databases/url/key/2):
• Reliability and Security Training: Team members can’t communicate unless they speak the same language and understand reliability concerns at the same level. The only way to accomplish this goal is to ensure team members receive proper training. When creating a training program, ensure that the training is consistent, whether you use external or in-house trainers.
• Reliability Requirements: Creating an application without first defining what you want is a little like building a house without a blueprint. Just as you wouldn’t even start digging the basement for a house without a blueprint in hand, you can’t start any sort of coding effort without first establishing the roles that reliability and security will play in the overall operation of the application. Make sure any definition you create includes specific metrics and goals for each development phase. The definition should also outline the use of both security and reliability reviews, code audits, and testing.
• Reliable Design: Just as you begin any project by identifying the security threats an application will face, you must also identify the reliability issues and specify methods for overcoming them. The design process must include specifics on how to deal with reliability issues. Although many organizations are aware of the need for security experts to help solve security issues, few are aware that a similar capability exists with reliability experts such as Reliability Consulting Services (http://www.reliasoft.com/consulting/).
• Reliable Coding: The coding process must keep both security and reliability in mind. It doesn’t matter how much preparation you do unless you put what you’ve learned and designed into practice.
• Secure Source Code Handling: In order to handle threats such as malicious intent, your organization must practice secure source code handling. This means that only people with the proper training and credentials see the source code. In addition, it also means that you perform both design and code reviews to ensure the application remains faithful to the original design goals.
• Reliability Testing: It’s essential to test the code to ensure it actually meets reliability goals that are part of the application requirements and design. Reliability testing can include all sorts of issues, such as how the application responds when a load is applied or when it loses access to a needed resource. The task is to test the failure points of the application and verify that the application handles each of them successfully.
• Reliability and Security Documentation: It’s important to document the requirements, design, coding techniques, testing techniques, and other processes you have put into place to ensure the application is both reliable and secure. The documentation should express the balance issues that you found and explain how you handled them as part of the application requirements and design.
• Reliability and Security Readiness: Just before application release, the development team needs to ensure no new threats have appeared on the scene that the application must address to work successfully. Reliability and security both deal with risk and this phase determines the risk posed by new threats. It may work just as well to handle low risk threats as part of an update, rather than hold the application release.
• Reliability and Security Response: After application release, the development team needs to respond to any new reliability and security threats in a timely manner. It’s important to understand that sources outside the development team may report these issues and expect that the development team will provide a quick response.
• Integrity Checking: Ensuring the application continues to work as it should means securing the code using some type of signing technique (to ensure no one modifies it). In addition, the development team should continue testing the code against new threats and verify that the application manages data in a secure way. The idea is to keep looking for potential problems, even if you’re certain that none exist. Hackers are hoping that your team will lack the diligence to detect new threats until it’s too late.
• Security and Reliability Research: It’s important to task individuals with the requirement to find new threats as they appear and to come up with methods for handling them. Testing the application is fine, but knowing how to test it against the latest threats is the only way to ensure your testing is actually doing what it should.
Creating a Lessons Learned Feedback Loop
Reliability is based on statistical analysis of events over time. The more time that elapses and the more events recorded, the more accurate the prediction. The average time between failure events is the Mean Time Between Failures (MTBF). Most software texts don’t pursue the topic from this perspective, but software, like anything else, has failure patterns, and it’s possible to analyze those patterns to create a picture of when you can expect the software to fail. Of course, like any statistic, it’s not possible to pin down precise moments—only the general course of activity for a given application in a specific environment.
Some people view MTBF as an incorrect measure of software reliability because they feel it literally indicates the next time that a software bug, environmental issue, or other factor will cause the software to fail. The important thing to remember is that MTBF is based on a specific environment. Because software rarely operates in precisely the same environment from organization to organization, trying to create an MTBF for an application that works in any organization won’t work. An MTBF for an application for a specific organization does work because the environment for that organization is unlikely to change significantly. When it does, the MTBF value is no longer valid.
The higher the MTBF of an application, the less time spent supporting it and the lower the maintenance costs. In addition, a high MTBF also signals a situation where security breaches due to application failures are less likely. Application failures don’t occur just because of bugs in the software—they also occur due to usage failures, environmental issues (such as a lack of memory), unreproducible causes (such as cosmic rays causing a spike in power), and other sources. Because it’s not possible to manage all of these failure sources, software can never be 100 percent reliable. At some point, even the best software will experience a failure.
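The arithmetic behind MTBF is straightforward: divide total operating time by the number of failures observed during that time. The sketch below uses invented figures purely for illustration:

```javascript
// Compute Mean Time Between Failures from total operating hours
// and the number of failures recorded during that period.
function mtbf(operatingHours, failureCount) {
   if (failureCount === 0) {
      return Infinity;  // No observed failures yet.
   }
   return operatingHours / failureCount;
}

// Example: 4,380 hours of operation (roughly six months) with six
// recorded failures yields an MTBF of 730 hours.
```

Remember that the figure only holds for the environment that produced it; change the environment significantly and the calculation starts over.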
Cosmic rays really do affect computers. In fact, the higher the altitude of the computer’s storage, the greater the effect experienced. The size of the transistors in the chips also affects the incidence of soft errors. You can discover more about this interesting effect at http://www.nature.com/news/1998/980730/full/news980730-7.html, http://www.ncbi.nlm.nih.gov/pubmed/17820742, and http://www.newscientist.com/blog/technology/2008/03/do-we-need-cosmic-ray-alerts-for.html. The information is interesting and it may finally explain a few of those unreproducible errors you’ve seen in the past.
A feedback loop as to the cause of failures can help you increase MTBF within a given organization. By examining the causes of failure, you can create a plan to improve MTBF for the lowest possible cost. Here are some ideas to consider as part of analyzing the failure sources in software:
• Quality: As the quality of the software improves, the MTBF becomes higher. It’s possible to improve the quality of software by finding and removing bugs, improving the user interface to make it less likely that a user will make a mistake, and adding checks to ensure needed resources are available before using them.
• Failure Points: Reducing the number of failure points within an application improves MTBF. The best way to reduce failure points is to make the application less complex by streamlining routines and making them more efficient. However, you can also do things such as increase redundancy when possible. For example, having two sources for the same data makes it less likely that the loss of a single source will cause an application failure.
• Training: Better training reduces operational errors and improves MTBF. There is a point of diminishing returns for training, but most users today don’t receive nearly enough training on the software that they’re expected to use to perform useful work. In addition, training support personnel to handle errors more accurately and developers to spot the true sources of failures will help improve the feedback process.
Creating complete lists of failures, along with failure causes, is the best way to begin understanding the dynamics of your application. As the knowledge base for an application grows, it’s possible to see predictive patterns and use those patterns as a means of determining where to spend time and resources making corrections. Of course, it’s not possible to fix some failure sources. Yes, you could possibly shield all your hardware to get rid of those pesky cosmic rays, but the chances of any organization expending the money is incredibly small (and the returns are likely smaller still).
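One way to start such a failure list is a simple log that records each event with its cause; tallying the log shows where corrections pay off most. The record fields below are illustrative, not prescriptive:

```javascript
// A minimal failure log: one entry per event, each tagged with a
// cause category drawn from the analysis above (quality, failure
// point, training, environment, and so on).
var failureLog = [
   { date: "2015-03-02", cause: "resource", note: "Database timeout" },
   { date: "2015-03-09", cause: "user", note: "Invalid batch upload" },
   { date: "2015-04-01", cause: "resource", note: "Out of memory" }
];

// Count failures by cause to reveal predictive patterns and show
// where time and resources spent on corrections pay off most.
function tallyByCause(log) {
   var counts = {};
   log.forEach(function (entry) {
      counts[entry.cause] = (counts[entry.cause] || 0) + 1;
   });
   return counts;
}
```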
Considering Issues of Packaged Solutions
As mentioned in earlier chapters, most applications today rely on packaged solutions to perform common tasks. Trying to create an application completely from scratch would be too time intensive and financially prohibitive. There really isn’t a good reason to reinvent the wheel. However, using these packaged solutions will affect your application’s reliability. You’re relying on code written by someone else to make your application work, so that code is also part of the reliability calculation. The following sections discuss some issues you need to consider when working with packaged solutions.
Dealing with External Libraries
External libraries create a number of interesting reliability problems. The most important issue is that the application will generally download a copy of the library each time it begins to run. If you keep the application running, this process doesn’t happen often, but most web-based applications run on the client system, which means that the client will need to download the library every time it starts the application. Speed becomes a problem because of the library download. However, the application might not even start if something prevents the client download from succeeding. For example, consider the use of jQuery UI to create an accordion effect like the one shown in Figure 5-2.
Figure 5-2. Even a simple jQuery UI example requires downloaded code.
This example won’t even start should the library files it depends on become inaccessible for some reason. The following code (found in the Accordian.html file) appears in the header to create the required connectivity.
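A representative version of those header entries appears below. The URLs and version numbers follow the standard jQuery CDN pattern and are assumptions rather than a copy of the downloadable source:

```html
<link rel="stylesheet"
      href="https://code.jquery.com/ui/1.11.4/themes/smoothness/jquery-ui.css" />
<script src="https://code.jquery.com/jquery-latest.js"></script>
<script src="https://code.jquery.com/ui/1.11.4/jquery-ui.min.js"></script>
```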
If any of the three files linked in the header are missing, the application will fail to run. The header also shows two different approaches to selecting the libraries. The jQuery library relies on the latest version, which means that you automatically get bug fixes and other updates. However, an automatic bug fix can still cause your application to crash when you’ve already included workaround code for the problem it fixes. The jQuery UI library (and its associated CSS file) both rely on a specific version of the library, so you won’t get updates, but the library’s behavior also won’t change underneath you.
One of the best ways to improve the reliability of external library use is to download the library to local drives and use that copy, rather than the copy on the vendor site. This strategy ensures that you have the same version of the library at all times and that there is less of a chance that the library won’t be available for use. Of course, you’ll need to supply local hard drive space to store the library, but the increased use of local resources is a small price to pay for the stability you gain.
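For example, after downloading the files, the header entries simply point at local paths. A common refinement (a sketch, with assumed paths and version numbers) loads the vendor copy first and falls back to the local copy only when the download fails:

```html
<!-- Try the vendor copy first. -->
<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<!-- Fall back to the local copy when the CDN is unreachable. -->
<script>
   window.jQuery ||
      document.write('<script src="js/jquery-2.1.4.min.js"><\/script>');
</script>
```

The fallback pattern keeps the CDN’s caching benefits during normal operation while removing the single point of failure the CDN otherwise represents.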
As with everything else, there is a price for using a localized copy of the library. The most important issue is the lack of upgrades. Most vendors provide upgrades relatively often that include bug fixes, improvements in reliability, and speed enhancements. Of course, the library will usually contain new features as well. Balancing the new features are deprecated features that your application may depend on to run. In order to achieve the reliability gains, you give up some potential security improvements and other updates you may really want to use in your application.
A middle ground approach is to download the library locally, keep track of improvements in updates, and time your application updates to coincide with needed security and feature updates in the majority of the libraries you use. This would mean performing updates on your schedule instead of the vendor’s schedule. Even though this alternative might seem like the perfect solution, it isn’t. Hackers often rely on zero-day exploits to do the maximum harm to the greatest number of applications. Because your application won’t automatically receive the required updates, you still face the possibility of a devastating zero-day attack.
Dealing with External APIs
External Application Programming Interfaces (APIs) provide the means to access data and other resources using an external source. An API is usually a bundle of classes that you instantiate as objects and use for making calls. The code doesn’t execute on the client system. Rather, it executes as an Out-Of-Process (OOP) Remote Procedure Call (RPC) on the server. A client/server request/response cycle takes place with the client making requests of the server. Creating a link to the API is much like creating a link to a library, except that you must normally provide a key of some sort to obtain access as shown here for the GoogleAPI.html file (see Chapter 7 for details on this example).
<script
   src="https://maps.googleapis.com/maps/api/js?key=Your Key Here&sensor=false">
</script>
The query requires that the client build a request and send it to the server. When working with the Google API, you must provide items such as the longitude and latitude of interest, along with the preferred map type. The API actually writes the data directly to the application location provided as part of the request as shown here.
// This function actually displays the map on screen.
function ShowMap(Latitude, Longitude)
{
   // Create a list of arguments to send to Google.
   // (Zoom and map type values are representative; see Chapter 7.)
   var MapOptions =
   {
      center: new google.maps.LatLng(Latitude, Longitude),
      zoom: 8,
      mapTypeId: google.maps.MapTypeId.ROADMAP
   };

   // Provide the location to place the map and the
   // map options to Google.
   var map = new google.maps.Map(
      document.getElementById("MapCanvas"), MapOptions);
}
The result of the call is a map displayed on the client canvas. Figure 5-3 shows a typical example of the output from this application.
Figure 5-3. The Google API draws directly to the client canvas.
Using external APIs makes it possible to use someone else’s code with fewer security concerns and potentially fewer speed issues (depending on whether it’s faster to request the data or download the code). However, from a reliability perspective, external APIs present all sorts of risks. Any interruption of the connection means that the application stops working. Of course, it’s relatively easy for a developer to create code that compensates for this issue and presents a message for the user. Less easy to fix are slow connections, data mangling, and other problems that could reflect the condition of the connection or hackers at work. In this case, user frustration becomes a problem because the user could add to the chaos by providing errant input in order to receive some sort of application response.
A way around the reliability issue in this case is to time calls. When the application detects that the calls are taking longer than normal, it’s possible to respond by telling the user about the slow connection, contacting an administrator, or reacting to the problem in some other way. The point is to ensure you track how long each call takes and act appropriately.
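One way to implement that tracking is to race each call against a timer. The following sketch assumes plain JavaScript promises; the `withTimeout` name and the threshold in the usage comment are illustrative choices, not part of the book’s downloadable source:

```javascript
// Wrap any promise-returning call so that it fails fast when the
// external API takes longer than expected. The threshold (ms) is an
// assumption; tune it to your application's normal call times.
function withTimeout(promise, ms)
{
   var timer;
   var timeout = new Promise(function(resolve, reject)
   {
      timer = setTimeout(function()
      {
         reject(new Error("Call exceeded " + ms + "ms"));
      }, ms);
   });

   // Whichever settles first wins; always clear the timer afterward.
   return Promise.race([promise, timeout]).finally(function()
   {
      clearTimeout(timer);
   });
}

// Usage: warn the user about a slow connection instead of hanging.
// withTimeout(fetch("https://example.com/api"), 5000)
//    .then(function(response) { /* normal processing */ })
//    .catch(function(err) { /* show "slow connection" message */ });
```

Timing every call this way also gives you the raw numbers needed to contact an administrator or log a pattern of slow responses.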
Working with Frameworks
Frameworks represent a middle ground of reliability between libraries and APIs. For the most part, frameworks are compressed code. One of the most commonly used frameworks is MooTools (http://mootools.net/). However, there are many other frameworks available, and you need to find the right one for your needs. In most cases, frameworks can run on the server, the client, or a combination of both. Depending on how you use the framework, you can see the same reliability issues found with libraries, APIs, or a combination of the two.
Differentiating Between Frameworks and Libraries
Even though a framework like Dojo and a library like jQuery look quite a bit alike—and you use them in the same manner, for the most part—Dojo is a framework and jQuery is a library. From a security, reliability, and speed perspective, the two entities are different.
A library is pure code that downloads and runs as part of your application. You call functions directly and the source for those functions is sometimes available so that you can change the function behavior.
Frameworks provide a means of interacting with a behavior, which means that some tasks aren’t visible to the developer—the developer requests that the framework perform the task, and the framework determines how to accomplish it. Code still downloads from the vendor and still becomes part of your application, but the underlying technology is different. Some people define a framework as a packaged form of library that provides structure as well as code.
The interesting part about products such as MooTools is that you can perform many of the same tasks that you do when using libraries. For example, you can create a version of the accordion example using MooTools. The following code is much simplified, but it gets the point across.
<!DOCTYPE html>
<html>
<head>
   <title>MooTools Accordion Demo</title>
   <style>
      h3.toggler
      {
         padding: 2px 5px;
         border-bottom: 1px solid #ddd;
         border-right: 1px solid #ddd;
         border-top: 1px solid #f5f5f5;
         border-left: 1px solid #f5f5f5;
      }
   </style>
   <script src="MooTools-Core-1.5.1.js"></script>
   <script src="MooTools-More-1.5.1.js"></script>
   <script>
      window.addEvent('domready', function()
      {
         // Handler bodies are representative.
         var accordion = new Fx.Accordion('h3.atStart', 'div.atStart',
         {
            onActive: function(toggler, element)
               { toggler.setStyle('color', '#41464D'); },
            onBackground: function(toggler, element)
               { toggler.setStyle('color', '#528CE0'); }
         });
      });
   </script>
</head>
<body>
   <h2>MooTools Accordion Demo</h2>
   <h3 class="toggler atStart">Section 1</h3>
   <div class="element atStart"><p>Section 1 Content</p></div>
   <h3 class="toggler atStart">Section 2</h3>
   <div class="element atStart"><p>Section 2 Content</p></div>
   <h3 class="toggler atStart">Section 3</h3>
   <div class="element atStart"><p>Section 3 Content</p></div>
</body>
</html>
You obtain the MooTools-More-1.5.1.js file from the Builder site at http://mootools.net/more/builder. Make sure you include the core libraries. Google offers hosted copies at https://developers.google.com/speed/libraries/; however, the hosted version doesn’t include the additional features, such as Fx.Accordion.
The framework requires that you set up a series of headings and divisions to contain the content for the accordion, as shown in the HTML portion of the example. How these items appear when selected and deselected depends on the CSS you set up. The script defines two events: onActive (when the item is selected) and onBackground (when the item is deselected). In this particular case, MooTools behaves very much like a library, and you see the output shown in Figure 5-4.
Figure 5-4. MooTools is a framework that provides both library and API functionality.
Calling Into Microservices
In many respects, a microservice is simply a finer-grained API. The concept makes sense: by using smaller bits of code, each service can rely on whatever resources it needs to perform one given task well. Because of the way in which microservices work, much of what you know about APIs from a reliability perspective also applies to microservices. For example, calls to a particular microservice may take longer than expected, fueling user frustration. The way around this problem is to provide application timeouts. However, in the case of a microservice, you can also try making the call using a secondary source when one is available.
However, microservices also differ from APIs in some respects simply because they are smaller and more customized. With this in mind, here are some reliability issues you need to consider when dealing with microservices:
• Using more microservices from different sources increases the potential for a lost connection and lowers application reliability.
• Choosing microservices that are truly optimized for the specific need you have increases application reliability because there is less risk of major errors.
• Relying on a consistent interface for microservice calls reduces potential coding errors, enhancing application reliability.
• Having more than one source that can perform the same service reduces the number of single failure points, increasing application reliability.
• Keeping services small means that the loss of a single service doesn’t mean all services become unavailable as it would when using an API, so reliability is higher in this case as well.
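Several of these points can be combined in code: keep a list of interchangeable sources behind one consistent calling convention, and try each in turn so that a single lost connection doesn’t take the feature down. This is only a sketch; the `callWithFallback` name and the endpoints in the usage comment are hypothetical, and it assumes each source is a function that returns a promise:

```javascript
// Try each microservice source in order until one succeeds.
// "sources" is an array of functions that each return a promise;
// giving every source the same calling convention is what keeps the
// interface consistent and reduces coding errors in the callers.
function callWithFallback(sources)
{
   return sources.reduce(function(attempt, nextSource)
   {
      // If the previous source failed, try the next one.
      return attempt.catch(function() { return nextSource(); });
   }, Promise.reject(new Error("No sources configured")));
}

// Usage (hypothetical endpoints): primary first, then a backup.
// callWithFallback([
//    function() { return fetch("https://svc-a.example.com/quote"); },
//    function() { return fetch("https://svc-b.example.com/quote"); }
// ]).then(showQuote, reportOutage);
```

Combining this pattern with per-call timeouts gives you both fewer single points of failure and a bounded worst-case response time.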