Chapter 3. Cross-Site Request Forgery (CSRF)

Information in this chapter:

Understanding Cross-Site Request Forgery

Understanding Clickjacking

Securing the Browsing Context

Imagine standing at the edge of a field, prepared to sprint across it. Now imagine your hesitation knowing the field, peppered with wildflowers under a clear blue sky, is strewn with mines. The consequences of a misstep would be dire and gruesome. Browsing the web carries a metaphorical similarity: while obviously not hazardous to life and limb, it still poses a threat to the security of your personal information. This chapter is dedicated to a type of hack in which your browser makes a request on a hacker's behalf using your relationship (i.e. security, credentials, etc.) with a site. Before we dive into the technical details of CSRF, consider the broader behavior of using web sites.

How often do you forward a copy of all your incoming email, including password resets and private documents, to a stranger? In September 2007 a security researcher demonstrated that the filter list for a GMail account could be surreptitiously changed by an attacker (http://www.gnucitizen.org/blog/google-gmail-e-mail-hijack-technique/). Two events needed to happen for the attack to succeed. First, the victim needed to be logged in to their GMail account or have closed the browsing tab used to check email without logging off—not an uncommon event since most people remain logged into their web-based email account for hours or days without having to re-enter their password. Second, the victim needed to visit a booby-trapped web page whose HTML can be modified by the attacker—a bit trickier to pull off from the attacker’s perspective, but the page wasn’t obviously malicious. The page could be hosted on any domain, be completely unrelated to GMail, and did not even require JavaScript to execute. It could be part of an inane blog post—or a popular one that would attract unwitting victims.

To summarize this scenario: A victim had two browser tabs open. One contained email, the second was used to visit random web pages. Activity in the second tab affected the user’s email account without violating the Same Origin Policy, using HTML injection (XSS), tricking the victim into divulging their password, or exploiting a browser bug. We’ll examine the technical details of this kind of hack in this chapter. First, consider a few more scenarios.

Have an on-line brokerage account? Perhaps at lunch time you logged in to check some current stock prices. Then you read a blog or viewed the latest 30-second video making the viral rounds of email. On one of those sites your browser might have tried to load an image tag that, instead of showing a goofy picture or a skate-boarding trick gone wrong, used your brokerage account to purchase a few thousand shares of a penny stock. As consolation, many other victims executed the same trade from their accounts, having fallen prey to the same scam. In the meantime, the attacker, having sown the CSRF payload across various web sites, watches the penny stock rise until it reaches a nice profit point. Then the attacker sells. All of the victims, realizing that a trade has been made in their account, attempt to have the trade invalidated. However—and this is a key aspect of CSRF—the web application saw legitimate activity from the victim's browser, originating from the victim's IP address, in a context that required the victim's correct username and password. At a glance, there's nothing suspicious about the trade other than the victim's word that they didn't make it. Because there's no apparent fraud or malicious activity, the victims may have no recourse other than to sell the unwanted shares. The attacker, suspecting this will be the victims' action, shorts the stock and makes more money as the artificially inflated price drops to its previous value.

Use a site that provides one-click shopping? With luck your browser won’t become someone else’s personal shopper after attempting to load an image tag that in fact purchases and ships a handful of DVDs to someone you’ve never met.

None of these attacks requires anything more than a victim who is authenticated to a web site and who, in the course of browsing the web, comes across nothing more dangerous than a page with a single image tag or iframe placed with apparent carelessness. After visiting dozens of sites across several browser tabs, each loading hundreds of lines of HTML (it's not even necessary to include JavaScript at this point in the hack), do you really know what your browser is doing?

Understanding Cross-Site Request Forgery

“We are what we pretend to be, so we must be careful about what we pretend to be.”—Kurt Vonnegut, Mother Night.

Since its inception the web browser has always been referred to as the User-Agent, as evident in one of the browser’s request headers:

GET / HTTP/1.1

Host: web.site

User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)

Accept: text/html, application/xhtml+xml, */*

The User-Agent communicates with web sites on the user's behalf. Sites ask for login credentials, set cookies, etc. in order to establish a context specific to each browser, and by extension each user, that visits it. From a site's perspective, the browser is who you are. The site only knows you based on aspects of the browser like the IP address of its traffic, its headers and cookies, and the links it requests. (The notion of proving who you are is covered in Chapter 6: Breaking Authentication Schemes.)

Cross-site request forgery attacks leverage this commingled identity by manipulating the victim’s browser into making requests against a site on the attacker’s behalf; thereby making the request within the (security!) context of the victim’s relationship to a site. The attacker’s relationship to the site is immaterial. In fact, the targeted site never sees traffic from the attacker. The hack is completely carried out from an attacker-influenced site against the victim’s browser and from the victim’s browser against the target site.

This isn’t a phishing attack, although it can be part of one. There’s an important nuance here: A phishing attack requires manipulating the user, a person, into initiating a request from the browser, whereas CSRF surfs right by the user and forces the browser into initiating a request. The attacker hasn’t really gained remote control of the browser, but the attacker has made the browser do something of which the user is unaware.

Note

This book uses CSRF as the acronym for cross-site request forgery. An alternative, XSRF, evokes the shorthand for cross-site scripting (XSS) attacks, but seems less commonly used. You will encounter both versions when looking for additional material on the web.

In simplest terms a CSRF attack forces the victim’s browser to make a request without the victim’s knowledge or agency. Browsers make requests all the time without the knowledge or approval of the user: images, frames, script tags, etc. The crux of CSRF is to find a link that when requested performs an action beneficial to the attacker (and detrimental to the victim in zero-sum games like financial transactions). We’ll return to this point in a moment. Before you protest that the browser shouldn’t be requesting links without your approval or initiation, take a look at the types of elements that generate requests in that very manner:

<iframe src="http://web.site/frame/html">

<img src="http://pictures.site/something_cute">

<script src="http://resources.site/browser_code">

Web pages contain dozens, sometimes hundreds, of resources that the browser automatically retrieves in order to render the page. There is no restriction on the domains or hosts from which these resources (images, stylesheets, JavaScript code, HTML) are loaded. As a performance optimization, sites commonly host static content such as images on a Content Delivery Network (CDN) whose domain is entirely different from the domain name visitors see in the navigation bar of their web browsers. Figure 3.1 shows how a browser displays images from popular, unrelated web sites in a single web page. The HTML source of the page is also included in order to demonstrate both the simplicity of pulling together this content and to emphasize that HTML is intended for this very purpose of pulling content from different origins.


Figure 3.1 Images loaded from different domains create multiple Security Origins in one page

Another important point demonstrated in Figure 3.1 is the mix of HTTP and HTTPS in the links for each image. HTTPS uses Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to provide proof-of-identity and to encrypt traffic between the site and the browser. There is no prohibition on mixing several encrypted connections to different servers in the same web page. The browser only reports an error if the domain name provided in the site’s certificate does not match the domain name of the link used to retrieve content.

Browsers have always been intended to retrieve resources from disparate, distributed sites into a single web page. The Same Origin Policy was introduced to define how content from different origins (the combination of domain, port, and protocol) is allowed to interact inside the browser, not to control the different origins from which content can be loaded.

A “mashup” is slang for a site that uses the web browser or some server-side code to aggregate data and functionality from unrelated sites in a single page. For example, a mashup might combine real estate listings from craigslist.org with maps.google.com or return search results from multiple search engines in one page. Mashups demonstrate the power of sharing information and programming interfaces among web sites. If you’re already familiar with mashups, think of a CSRF attack as an inconspicuous, malicious mashup of two sites: the target site to which the victim’s browser makes a request and a random site that initiates that request for the attacker.

The Mechanics of CSRF

Let’s turn the images example in Figure 3.1 into a CSRF attack. We’ll start with a simple demonstration of the mechanism of CSRF before we discuss the impact of an exploit. Our CSRF drama has four roles: Unwitting Victim, Furtive Attacker, Target Site, and Random Site. The role of the Target Site will be played by the Bing search engine (http://bing.com/), the Random Site represents any site in which the Furtive Attacker is able to create an <img> tag. The Attacker’s goal is to insert a search term into the Unwitting Victim’s search history as tracked by Bing. This is just Act One—we don’t know the Attacker’s motivation for doing this, nor does that matter for the moment.

The drama begins with the Unwitting Victim browsing the web, following links, searching for particular content. Bing has a “Search History” link on its home page. Clicking this link takes you to a list of the terms queried with the current browser. Figure 3.2 shows the four terms in the Unwitting Victim’s history. (Note that this is the history tracked by the search engine, not the browser’s history.)


Figure 3.2 The Unwitting Victim’s search habits

Meanwhile, the Furtive Attacker has placed several <img> tags in as many sites as would allow it. The src attribute contains a curious link, one that at first glance doesn't appear to point to an image file:

<img style="visibility:hidden" src="http://www.bing.com/search?q=deadliest+web+attacks">

Through intrigue, deceit, or patience, the Attacker lures the Unwitting Victim to a web site that contains the <img> tag. The domain name of the site is immaterial; Same Origin Policy has no bearing on this attack. Nor does the Victim need to do anything other than visit a page. The browser automatically loads the link associated with the image. Plus, the Attacker was shrewd enough to hide the image from view using a CSS property. Thus, the Victim will not even notice the broken image element, assuming the Victim would even notice in the first place.

The Victim’s search history has been updated after visiting the Random Site. Figure 3.3 reveals the appearance of a new term: deadliest web attacks. The Furtive Attacker has succeeded! And at no point was the Victim tricked into visiting bing.com and typing in the Attacker’s search term; everything happened automatically within the browser.


Figure 3.3 A new search term appears from nowhere!

We close Act One of our drama with a few notes for the audience. First, Bing is no more or less vulnerable to CSRF in this manner than other search engines (or other types of sites, for that matter). The site’s easy access to and display of the “Search History” make a nice visual example. In fact, we’ll come back to this topic and praise the “Turn history off” feature visible in Figure 3.3 when we discuss privacy issues in Chapter 8.

Second, there was a bit of hand-waving about how the Victim came across the <img> tag. For now it’s okay to have this happen offstage because we focused on showing the mechanism of CSRF. We’ll get to its impact, risk, etc. shortly.

Finally, make sure you understand how the search history was updated. Normally, the Victim would type the search terms into a form field, click “Search,” and view the results. This usual sequence would end up with the browser rendering something similar to Figure 3.4.


Figure 3.4 The forced query

The Attacker effectively forced the Victim's browser to make a request that was equivalent to submitting the Search form. But the Attacker pre-populated the form with a specific search term and forced the Victim's browser to submit it. This was made even easier since the form's default method was GET rather than POST. In other words, the search terms were just part of the link's query string.

Note

CSRF focuses on causing a browser to perform an action via an HTTP request that the victim does not initiate or know about. The consequence of the request is important, be it beneficial to the attacker or detrimental to the victim. The content of the request’s response is neither important nor available to the attacker in this kind of hack.

With this in mind, the cross-site request aspect of cross-site request forgery merely describes the normal, expected behavior of web browsers. The forgery aspect is where the exploit puts money into the attacker’s bank account (to use one example) without tripping intrusion detection systems, web application firewalls, or other security alarms. None of these security measures are tripped because the attacker doesn’t submit the forged request. The request is created by the attacker (evoking forged in the sense of creation), but the victim ultimately submits the request (evoking forged in the sense of a counterfeit item) to the target site. It’s a lot more difficult—and unexpected—to catch an attack that carries out legitimate activity when the activity is carried out by the victim.

Request Forgery via Forced Browsing

Effective cross-site request forgery attacks force the browser to make an HTTP request and negatively impact the victim’s account, data, or security context. This outcome could be forwarding all of the victim’s incoming email to the attacker’s email address, purchasing shares in a penny stock, selling shares in a penny stock, changing a password to one of the attacker’s choosing, transferring funds from the victim’s account to the attacker’s account, and so on. The previous section demonstrated the mechanics of CSRF. Now we’ll fill in some more details.

Many HTTP requests are innocuous and won’t have any detrimental effect on the victim (or much benefit for the attacker). Imagine a search query for “maltese falcon.” A user might type this link into the browser’s address bar:

http://search.yahoo.com/search?p=maltese+falcon.

A CSRF attack uses an <iframe>, <img>, or any other element with a src attribute that the browser will automatically fetch. By contrast, an href attribute typically requires user interaction before the browser requests the link, whereas an image is loaded immediately. And from the attacker's point of view it's not necessary that the <img src=...> point to an image; it's just necessary that the browser request the link.

The following HTML shows a variation of the Bing search engine example against Yahoo!. Lest you think that performing searches in the background is all smoke without fire, consider the possible consequences of a page using CSRF to send victims’ browsers in search of hate-based sites, sexually explicit images, or illegal content. Attackers can be motivated by malicious mischief as much as financial gain.

<html><body>

This is an empty page!

<iframe src="http://search.yahoo.com/search?p=maltese+falcon" height=0 width=0 style="visibility:hidden">

<img src="http://search.yahoo.com/search?p=thin+man" alt="">

</body></html>

When anyone visits this page their web browser will make two search requests. By itself this isn’t too interesting, other than to reiterate that the victim’s browser is making the request to the search engine. Attackers who are after money might change the iframe to something else, like the link for an advertising banner. In that case the browser “clicks” on the link and generates revenue for the attacker. This manifestation of the attack, clickfraud, can be both profitable and potentially difficult to detect. (Consider that the advertiser is the one paying for clicks, not the ad delivery system.) All of the clicks on the target ad come from wildly varied browsers, IP addresses, and geographic locations—salient ingredients to bypassing fraud detection. If instead the attacker were to create a script that repeatedly clicked on the banner from a single IP address the behavior would be easy to detect and filter.

POST Forgery

An <img> tag is ideal for requests that rely on the GET method. Although the forms in the previous search engine examples used the GET method, many other forms use POST. Thus, the attacker must figure out how to recreate the form submission. As you might guess, the easiest way is to copy-and-paste the original form, ensure the action attribute contains the correct link, and force the browser to submit it.

The HTML5 autofocus attribute, combined with the onfocus event handler, provides a way to automatically submit a form. We came across them previously in Chapter 2: HTML Injection & Cross-Site Scripting. The following HTML shows what a hacker might use. Even if it were hosted at http://trigger.site/csrf, the action attribute ensures that the request reaches the target site.

<html><body>

<form action="http://web.site/resetPassword" method="POST">

<input type=hidden name=notify value="1">

<input type=hidden name=email value="attacker@anon.email">

<input type=text autofocus onfocus=submit() style="width:1px">

<input type=submit name=foo>

</form>

</body></html>

This technique satisfies two criteria of a CSRF hack: forge a legitimate request to a web site and force the victim’s browser to submit the request without user intervention. However, the technique fails to satisfy the criterion of subterfuge; the browser displays the target site’s response to the forced (and forged) request. The attack succeeds, but is immediately noticeable to the victim.

The Madness of Methods

Forging a POST request is no more difficult than forging a GET. The unfortunate difference, from the hacker’s perspective, is that using a <form> to forge a POST request is not as imperceptible to the victim as using an <img> tag hidden with CSS styling. There are at least three ways to overcome this obstacle:

Switch methods—Convert the POST to GET

Resort to scripting—Forge the POST with the XMLHttpRequest object. We’ll explore this in the countermeasures section later in this chapter.

Fool the user into submitting the form—Hide the request in an apparently innocuous form.

This section explores the conversion of POST to GET. Recall that the format of an HTTP POST request differs in a key way from GET. Take this simple form:

<form method="POST" action="/api/transfer">

<input type="hidden" name="from" value="checking">

Name of account: <input type="text" name="to" value="savings"><br>

Amount: <input type="text" name="amount" value="0.00">

</form>

A browser submits the form via POST, as instructed by the form’s method attribute. Notice the Content-Type and Content-Length headers, which are not part of a usual GET request.

POST /api/transfer HTTP/1.1

Host: my.bank

Content-Type: application/x-www-form-urlencoded

Content-Length: 36

from=checking&to=savings&amount=0.00

The request’s conversion to GET is straightforward: move the message body’s name/value pairs to the query string and remove the Content-Length and Content-Type headers. The easiest way to test this is to change the form’s method attribute to GET. The new request looks like the following capture.

GET /api/transfer?from=checking&to=savings&amount=0.00 HTTP/1.1

Host: my.bank

Whether the web application accepts the GET version of the request instead of POST depends on a few factors, such as whether the web platform's language distinguishes between parameters received via different methods, how developers choose to access request parameters, and whether request methods are enforced. Strong enforcement of request methods and request parameters is common to REST-like APIs, but tends to be uncommon for form handling.

As an example of a programming language’s handling of request parameters, consider PHP. This popular language offers two ways to access the parameters from an HTTP request via the built-in superglobal arrays. One way is to use the array associated with the expected method, i.e. $_GET or $_POST. The other is to use the $_REQUEST array that compounds values from both methods.

For example, an "amount" parameter submitted via POST is accessible from either the $_POST["amount"] or the $_REQUEST["amount"] element. It would not be accessible from the $_GET["amount"] element, which would be unset (empty) in PHP parlance.

Having a choice of accessors to the form data leads to mistakes that expose the server to different vulnerabilities. As an aside, imagine the problem if a cross-site scripting filter were applied to the values from the $_POST array, but the application accessed values from the $_REQUEST array. A carefully crafted request (using GET or POST) might bypass the security check. Even if security checks are correctly applied, this still has relevance to CSRF. Requests made via POST cannot be considered safe from forged requests even though browsers require manual interaction to submit a form (with the notable exception of the autofocus/onfocus combination).

Note

A hacking technique known as HTTP Parameter Pollution (HPP) repeats name/value arguments in querystrings and POST data. For example, the a parameter is given three different values in the link http://web.site/page?a=one&a=two&a=<xss>. HPP takes advantage of a web platform’s ambiguous or inconsistent decomposition of parameters. Given three possible values, a platform might return the first value (one from the example), the last value (<xss>), or an array with each value ([one, two, <xss>]). This is related to the technique of converting POST requests to GET, but the behavior has more security implications for validation filters than for CSRF. A validation filter might be confused by multiple values or fail due to mismatched types (e.g. it expects a string but receives an array). CSRF relies on valid actions with valid requests from authenticated users—it’s just that the victim has neither approved nor initiated the action.

Develop the application so that request parameters are either explicitly handled by accessors for the expected method or consistently handled (e.g. collapsing all methods into a single accessor). Even though this doesn’t have a direct impact on CSRF, it will improve overall code quality and prevent other types of attacks. This applies to any web programming language.
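A minimal PHP sketch of that advice follows. The endpoint and parameter names are hypothetical; the point is that the handler rejects unexpected methods outright and reads values only from the superglobal that matches the expected method, so a forged request converted to GET never reaches the sensitive logic.

<?php
// transfer.php -- hypothetical form handler; names are illustrative only.

// Reject anything other than the expected POST method.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    header('HTTP/1.1 405 Method Not Allowed');
    exit;
}

// Read parameters only from $_POST, never from $_REQUEST, so a value
// smuggled into the query string cannot masquerade as form data.
$from   = isset($_POST['from'])   ? $_POST['from']   : null;
$to     = isset($_POST['to'])     ? $_POST['to']     : null;
$amount = isset($_POST['amount']) ? $_POST['amount'] : null;

if ($from === null || $to === null || !is_numeric($amount)) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}

// ...validated values may now be handed to the application logic...

Note that enforcing the method alone does not stop CSRF, since a POST can be forged as well; it merely removes the trivial <img>-based GET variant and keeps parameter handling unambiguous.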

Attacking Authenticated Actions without Passwords

The password is a significant security barrier. It remains secure as long as it is known only to the user. A more insidious characteristic of CSRF is that it manipulates the victim’s authenticated session without requiring knowledge of the password. Nor does the hack need to grab cookies or otherwise spoof the victim’s session. All of the requests originate from the victim’s browser, within the victim’s current authentication context to the web site.

Dangerous Liaison: CSRF and HTML Injection

It is easy to conflate CSRF and HTML injection (a.k.a. cross-site scripting) attacks. Much of this is understandable: both attacks use a web site to deliver a payload to the victim’s browser, both attacks cause the browser to perform some action defined by the attacker. XSS requires injecting a malicious payload into a vulnerable area of the target web site. CSRF uses an unrelated, third-party web site to deliver a payload, which causes the victim’s browser to make a request of the target web site. With CSRF the attacker never needs to interact with the target site and the payload does not consist of suspicious characters.

The two attacks do have a symbiotic relationship. CSRF targets the functionality of a web site, tricking the victim's browser into making a request on the attacker's behalf. XSS exploits inject code into the browser, automatically siphoning data or making it act in a certain way. If a site has an XSS vulnerability, then it's likely that any CSRF countermeasures can be bypassed. It's also likely that CSRF will be the least of the site owner's worries; XSS can wreak far greater havoc than just breaking a CSRF defense. In many ways XSS is just an enabler for many nefarious attacks. Confusing CSRF and XSS might lead developers into misplacing countermeasures or assuming an anti-XSS defense also works against CSRF and vice versa. They are separate, orthogonal problems that require different solutions. Don't underestimate the effect of having both vulnerabilities in a site, but don't overestimate the site's defenses against one in the face of the other.

Be Wary of the Tangled Web

Forged requests need not only be scattered among pages awaiting a web browser. Many applications embed web content or are web-aware, having the ability to make requests directly to web sites without opening a browser. Applications like iTunes, Microsoft Office documents, PDF documents, Flash movies, and many others are able to generate HTTP requests. If the document or application makes requests with the operating system’s default browser, then it represents a useful attack vector for delivering forged requests to the victim. If the browser, as an embedded object or via a call through an API, is used for the request, then the request is likely to contain the user’s security context for the target site. The browser, after all, has complete access to cookies and session state. As a user, consider any web-enabled document or application as an extension of the web browser and treat it with due suspicion with regard to CSRF.

Epic Fail

CSRF affects web-enabled devices as easily as it can affect huge web sites. In January 2008 attackers sent out millions of emails that included an image tag targeting a URI with an address of 192.168.1.1. This IP address resides in the private network space defined by RFC 1918, which means that it's not publicly accessible across the Internet. At first this seems a peculiar choice, but only until you realize that this is the default IP address for a web-enabled Linux-based router. The web interface of this router was vulnerable to CSRF attacks as well as an authentication bypass technique that further compounded the vulnerability. Consequently, anyone whose email reader automatically loaded the image tag in the email would be executing a shell command on their router. For example, the fake image <img src="http://192.168.1.1/cgi-bin/;reboot"> would reboot the router. So, by sending out millions of spam messages attackers could drop firewalls or execute commands on these routers.

In February 2012 a researcher at Stanford University, Jonathan Mayer, noted how a well-known quirk in Safari's blocking of third-party cookies was leveraged by Google and other advertisers to maintain cookies outside of browser privacy settings (http://blogs.wsj.com/digits/2012/02/16/how-google-tracked-safari-users/?mod=WSJBlog). Obviously, there are many ways to force a browser to make requests to a third party in an attempt to set cookies: images, CSS files, JavaScript, and so on. However, this technique bypassed an explicit setting to block third-party cookies by taking advantage of a behind-the-scenes form submission, form submission being an exception to the browser's enforcement of the third-party cookie restriction. It was also a violation of the spirit of Safari's cookie settings.

The relevance to CSRF is evident from the attributes of the iframe used to enclose the hack (albeit a "hack" common to many advertising HTML design patterns as well as malware):

<iframe frameborder=0 height=0 width=0 src="http://ad.server/browser-sniff?unique-id" style="position:absolute">

When a Safari browser requested the iframe the third-party server returned HTML with an empty form that included self-submitting JavaScript. Safari’s quirk was that once one cookie was set—supposedly through explicit user interaction with the site, such as manually submitting a form—more cookies could automatically follow.

<form id="empty_form" method="post" action="/set-a-cookie.page?identifiers"></form>

<script>document.getElementById("empty_form").submit();</script>

A central point throughout this chapter has been that CSRF attacks primarily threaten a user's security context. This third-party cookie example is a CSRF hack even though it submitted an empty form with no intention of performing an action against a user's authenticated session. In this case the CSRF hack targeted the user's privacy context, rather than their security context. Privacy and security are distinct topics. But neither should be ignored when evaluating the hacks against a web application. We'll explore more about how they overlap and compete with each other in Chapter 8.

Variation on a Theme: Clickjacking

Up to this point we’ve emphasized how CSRF forces a victim’s browser to automatically submit a forged request of the attacker’s choosing. The victim in this scenario does not need to be tricked into divulging a password or manually initiating the request. Like a magician who forces a spectator’s secretly selected card to the top of a deck with a trick deal, clickjacking uses misdirection to force the user to manually perform an action of the attacker’s choice.

Clickjacking is related to CSRF in that the attacker wishes the victim's browser to generate a request that the user is not aware of. CSRF places the covert request in an <iframe>, <img>, or similar tag that a browser automatically fetches. Clickjacking takes a different approach. This hack tricks a user into submitting a request of the attacker's choice through a bait-and-switch technique that makes the user think they performed a completely unrelated action.

The attacker perpetrates this skullduggery by overlaying an innocuous web page, to be seen by the victim, with the form to be targeted, which is obscured from the victim's view. The form is positioned within an iframe such that the button to be clicked is shifted to the upper-left corner of the page. The iframe's opacity and size are reduced so that the victim only sees the innocuous page. Then, it is positioned underneath the mouse pointer. Upon a user's mouse click the camouflaged form is submitted, along with all cookies, headers, and any CSRF defenses intact. One on-line reference that demonstrates clickjacking is at http://www.planb-security.net/notclickjacking/iframetrick.html.

The visual sleight-of-hand behind clickjacking is perhaps better demonstrated with pictures. Figure 3.5 shows the target site loaded in an iframe. The iframe’s content has been shifted so that the “Like” button is positioned in the upper-left corner of the browser. This placement makes it easier for the attacker to overlay the button on an innocuous link.


Figure 3.5 Clickjacking target framed and positioned

Figure 3.6 shows the target iframe overlaying content to be visible to the victim. The opacity of the target iframe has been reduced to 25% in order to demonstrate transparency while leaving enough of the ghostly image visible to see how the “Like” button is placed over a link. A bit of JavaScript ensures that the target iframe follows the mouse pointer.


Figure 3.6 The overlay for a clickjacking attack

The clickjacking attack is completed by hiding the target page from the user. The page still exists in the browser’s Document Object Model; it’s merely hidden from the user’s view by a style setting along the lines of opacity=0.1 to make it transparent and reducing the size of the frame to a few pixels. The basic HTML for this hack is shown below:

<html><body>

<!-- The innocuous iframe comes first. -->

<iframe src="overlay.html" style="position:absolute;left:0px;top:0px"></iframe>

<!-- The "left" and "top" properties are sensitive to the type of browser. -->

<iframe src="http://www.amazon.com/dp/1597495433?tag=aht3-20&camp=14573&creative=327641&linkCode=as1&creativeASIN=1597495433&adid=0W4W2WS1DK3M7AXK7NMT&" height="350px" width="850px" scrolling="no" style="position:absolute;left:-520px;top:-270px;opacity:0.25"></iframe>

</body></html>

A more descriptive, less antagonistic synonym for clickjacking is UI redress. “Clickjacking” describes the outcome of the hack. “UI Redress” describes the mechanism of the hack.

Employing Countermeasures

Solutions to cross-site request forgery span both the web application and web browser. Like cross-site scripting (XSS), CSRF uses a web site as a means to attack the browser. Whereas XSS attacks leave a trail of requests with suspicious characters, the traffic associated with a CSRF attack is legitimate and, with a few exceptions, originates from the victim’s browser. Even though there are no clear payloads or patterns for a site to monitor, an application can protect itself by fortifying the workflows it expects users to follow.

Tip

Focus countermeasures on actions (clicks, form submissions) in the web site that require the security context of the user. A user’s security context comprises actions whose outcome or affected data require authentication and authorization specific to that user. Viewing the 10 most recent public posts on a blog is an action with an anonymous security context—unauthenticated site visitors are authorized to read anything marked public. Viewing that user’s 10 most recent messages in a private inbox is an action in that specific user’s context—users must authenticate to read private messages and are only authorized to read their own messages.

Filtering input to the web site is always the first line of defense. Cross-site scripting vulnerabilities pose a particular danger because successful exploits control the victim’s browser to the whim of the attacker. The other compounding factor of XSS is that any JavaScript that has been inserted into pages served by the web site is able to defeat CSRF countermeasures. Recall the Same Origin Policy, which restricts JavaScript access to the Document Object Model based on a combination of the protocol, domain, and port from which the script originated. If malicious JavaScript is served from the same server as the web page with a CSRF vulnerability, then that JavaScript will be able to set HTTP headers and read form values—crippling the defenses we are about to cover.

Immunity to HTML injection doesn’t imply protection from CSRF. The two vulnerabilities are exploited differently. Their root problems are very different and thus their countermeasures require different approaches. It’s important to understand that an XSS vulnerability will render CSRF defenses moot. The threat of XSS shouldn’t distract from designing or implementing CSRF countermeasures.

Warning

Keep in mind that CSRF countermeasures rely on browser security principles like the Origin header from XMLHttpRequest connections or the ability to establish a temporary shared secret between the site and the user's current session that identifies a specific action. Basic web transactions like POST requests (or any HTTP method), cookies, or sequential forms (submit form A before form B) do not establish the session-based security required to defeat CSRF.

Heading in the Right Direction

HTTP headers have a complicated relationship with web security. Request headers are easily spoofed and, in situations where the application relies on their values, represent yet another vector for attacks like cross-site scripting and SQL injection. On the other hand, the new Origin request header was created explicitly for mitigating CSRF attacks. The goal of the following sections is to reduce risk by removing some of an attacker's tactics, not to block all possible scenarios.

A Dependable Origin

Browsers that support HTML5's Cross-Origin Request Sharing set an Origin header to indicate from where a request made via the XMLHttpRequest object was initiated. The origin concept is key to establishing security boundaries for content, as enforced by browsers' Same Origin Policy. Recall that the origin concept comprises the scheme, host, and port of a URI. For example, the origin of https://book.site/updates is the triplet of https://, book.site, 443 (the default port for HTTPS) or compounded as https://book.site (the path is always omitted). As we've seen in Chapter 2 and from the opening sections of this chapter, the Same Origin Policy prevents content from different origins from accessing their respective DOMs. It does not prevent browsers from loading content from different origins—which is key to CSRF attacks.

The Origin header provides feedback to a web site in order to allow it to decide whether to honor requests from different origins. Browsers normally permit requests to different origins, but their Same Origin Policy segregates responses so that resources are not accessible across origins. In some situations, it’s advantageous for applications to allow browsers to access and manipulate content from different origins. Hence the inclusion of an Origin header to enable the browser and web site to agree when content is allowed to be shared “cross-origin” or between different origins.

One characteristic of CSRF attacks is that the forged request is initiated from a different origin than that of the target web site. The following example demonstrates a CSRF attempt against a "reset password" feature. The hack uses an XMLHttpRequest object placed in a page served by http://trigger.site/csrf to cause the http://api.web.site/resetPassword link to send a reset link to the attacker's email address. (Bonus question: In addition to CSRF, what other security problems does this reset method expose?)

<html><body>

<script>

var xhr = new XMLHttpRequest();

xhr.open("POST", "http://api.web.site/resetPassword");

xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");

xhr.setRequestHeader("Content-Length", "34");

xhr.send("notify=1&email=attacker@anon.email");

</script>

</body></html>

When the browser visits the http://trigger.site/csrf link it generates an XHR request without intervention by the user. The following traffic capture shows the Origin value present as part of the request headers. Some unrelated headers have been excised for brevity. In this example, the Origin is http://trigger.site, which does not match http://api.web.site, and therefore the request could be rejected as a potential CSRF attack:

POST http://api.web.site/resetPassword HTTP/1.1

Host: api.web.site

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:8.0.1) Gecko/20100101 Firefox/8.0.1

Referer: http://trigger.site/csrf

Content-Length: 34

Content-Type: application/x-www-form-urlencoded

Origin: http://trigger.site

notify=1&email=attacker@anon.email

The Origin header enables web sites to distinguish the source (scheme, domain, and port) of incoming requests. The browser sets the Origin value for XMLHttpRequests. Its value is not modifiable by JavaScript. Checking this header’s value for explicitly permitted origins is one way a web site can prevent CSRF abuse of its API. For a more thorough explanation of Cross-Origin Request Sharing and use cases of the Origin header, see Chapter 1: HTML5.
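As a sketch of that check in PHP (the allowlist, endpoint, and error handling are assumptions for illustration), an API endpoint that is only ever called via XMLHttpRequest could refuse requests whose Origin header is absent or not explicitly permitted:

<?php
// resetPassword endpoint -- a sketch assuming it is only invoked via
// XMLHttpRequest, so a legitimate request always carries an Origin header.
$allowed = array('http://web.site', 'http://api.web.site');

$origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';

if (!in_array($origin, $allowed, true)) {
    // Missing or unexpected Origin: treat the request as a possible forgery.
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// ...proceed with the password reset for the authenticated session...

Remember that this guard only applies to requests expected to arrive from the XMLHttpRequest object; as the next paragraphs show, plain links, images, and forms do not carry the Origin header.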

Keep in mind the discussion of the Origin header has focused on CSRF hacks that use the XMLHttpRequest object to forge requests. If the “reset password” API did not distinguish between POST and GET methods, then the hack could have been carried out with the following HTML hosted on http://trigger.site/csrf:

<html><body>

<img src="http://api.web.site/resetPassword?notify=1&email=attacker@anon.email">

</body></html>

The <img> tag generates an automatic request from the browser that produces the following traffic. Again, some unrelated headers have been removed for brevity; note that the Origin header is missing because the browser never sent one:

GET http://api.web.site/resetPassword?notify=1&email=attacker@anon.email HTTP/1.1

Host: api.web.site

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:8.0.1) Gecko/20100101 Firefox/8.0.1

Referer: http://trigger.site/csrf

So, resources that are expected to be retrieved by XMLHttpRequest objects can be protected by checking for Origin header values. On the other hand, if a resource is expected to be retrieved via links or forms (i.e. a simple GET or POST method) then the Origin header will not be present and cannot be relied upon.

Warning

HTML5's Access-Control-Allow-Origin header provides a mechanism for sites to inform browsers that cross-origin requests are permitted. The value of this header may be "null," a space-separated list of origins ("http://web.site http://book.site http://api.web.site:8000"), or the all-encompassing wildcard ("*"). Assigning this header the wildcard value does not protect users from CSRF.

An Unreliable Referer

In the previous section on the Dependable Origin there was another indicator of where a request originated from in each of its examples: the Referer header. The Referer indicates the URI from which the navigation request was initiated. For example, the Referer in the previous section's examples was the page that contained the forged CSRF link, http://trigger.site/csrf.

Web developers are already warned about including sensitive information in URIs because it may be exposed to other sites via the Referer (http://www.w3.org/Protocols/rfc2616/rfc2616-sec15.html#sec15.1.3). The Referer is not intended as a security mechanism, but its presence may be used to identify the origin of a request.

Recall the “reset password” example from the previous section. A request for http://trigger.site/csrf loads a page that contains an <img> tag with the CSRF payload. The traffic capture of the browser’s request for the image looks like this:

GET http://api.web.site/resetPassword?notify=1&email=attacker@anon.email HTTP/1.1

Host: api.web.site

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:8.0.1) Gecko/20100101 Firefox/8.0.1

Referer: http://trigger.site/csrf

The web application at http://api.web.site/ could check the origin of incoming Referer headers to distinguish requests made within the application from requests originating elsewhere. Since the request is for a sensitive capability (resetting the user's password) and the Referer is from an unknown source, the site could ignore the request.

The presence of a Referer header is a reliable indicator of its request origin, but its absence is not. Let's modify the previous example such that the forged <img> tag is placed on a page served over HTTPS, e.g. https://trigger.site/csrf. The resulting traffic capture shows that the browser omits the Referer header on purpose. HTTPS links are assumed to have information that must not be exposed over HTTP. Consequently, browsers strip the Referer as (not!) seen below:

GET http://api.web.site/resetPassword?notify=1&email=attacker@anon.email HTTP/1.1

Host: api.web.site

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:8.0.1) Gecko/20100101 Firefox/8.0.1

The Referer is absent for requests that transition from HTTPS to HTTP. It is also absent if the link is typed into the browser's navigation bar or selected from a history or bookmark menu; after all, there's no referrer in either of those cases. The header may also be absent for users who have a proxy that strips all Referer values for privacy reasons. Absence of a Referer does not equate to the presence of malice.
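A hedged PHP sketch of a Referer check follows (the trusted host name is illustrative). It rejects requests whose Referer names a foreign origin but, for the reasons just described, does not treat a missing Referer as proof of malice; whether to allow or deny those ambiguous requests is a policy decision for the site.

<?php
// Sketch: sanity-check the Referer on a sensitive request.
$trustedHost = 'web.site'; // illustrative; the application's own host

if (isset($_SERVER['HTTP_REFERER'])) {
    $refHost = parse_url($_SERVER['HTTP_REFERER'], PHP_URL_HOST);
    if ($refHost !== $trustedHost) {
        // Referer present but from another site: likely a forged request.
        header('HTTP/1.1 403 Forbidden');
        exit;
    }
}
// A missing Referer is ambiguous (HTTPS-to-HTTP transitions, bookmarks,
// privacy proxies), so its absence alone is not treated as an attack here.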

Warning

The presence of other security problems like HTML injection (Chapter 2), open redirects (Chapter 6), or network sniffing (Chapter 7) negates many CSRF countermeasures. An XSS attack easily compromises a user’s data without resorting to CSRF. Allowing session cookies to transit HTTP (as opposed to HTTPS) enables an attacker to fully spoof requests. However, that is no reason to assume these countermeasures are insufficient or ineffective. It emphasizes that good security requires an assortment of defenses that focus on specific problems and an awareness that a strong defense can be undermined by other weaknesses.

Custom Headers: X-Marks-the-Spot

HTTP headers have a tenuous relationship to security. Headers can be modified and spoofed, which makes them unreliable for many situations. However, there are certain properties of headers that make them a useful countermeasure for CSRF attacks. One important property of custom headers, those prefixed with X-, is that they cannot be sent cross-domain without explicit permission (see Cross-Origin Request Sharing, CORS, in Chapter 1: HTML5). If the application hosted at http://social.site/ expects an X-CSRF header to accompany requests, then it can reliably assume that a request containing that header originated from social.site and not some other origin. A malicious attacker who creates a page hosted at http://trigger.site/ with a CSRF hack that causes visiting browsers to automatically request http://social.site/auth/update_profile is not able to forge a custom header (such as X-CSRF). Modern browsers will not include custom headers for cross-origin requests (e.g. from trigger.site to social.site).

For example, this is what a legitimate HTTP request looks like for a site that employs custom headers to mitigate CSRF. The following request updates the user’s email address. The X-CSRF header indicates the request originated from the web application and the cookie provides the session context so the application knows which profile to update.

GET /auth/update_profile?email=user@new.email HTTP/1.1

Host: social.site

X-CSRF: 1

Cookie: sid=98345890345

A CSRF hack would forge requests so that the victim's browser unwittingly changes their profile's email address to one owned by the attacker. Changing the email address is a useful attack because sensitive information like password reset links is delivered by email. The attacker creates a booby-trapped page that uses the familiar <img> tag technique:

<html><body>

<img src="http://social.site/auth/update_profile?email=attacker@anon.email">

</body></html>

The request coming from the victim’s browser would lack one important item, the X-CSRF header.

GET /auth/update_profile?email=attacker@anon.email HTTP/1.1

Host: social.site

Cookie: sid=98345890345

Even if the attacker were to create the request using the XHR object, which allows for the creation of custom headers, the browser would not forward the header outside the page's security origin unless given explicit permission via Access-Control-Allow-Headers (part of CORS). A web site is free to ignore requests that do not contain the expected custom header because there is a strong guarantee that the request did not originate from within the site.
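Server-side, the check is simply whether the expected header arrived at all. A minimal PHP sketch (the X-CSRF name matches the example above; the rest is illustrative):

<?php
// Sketch: reject state-changing requests that lack the custom header.
// PHP exposes a request header named "X-CSRF" as $_SERVER['HTTP_X_CSRF'].
if (!isset($_SERVER['HTTP_X_CSRF'])) {
    // Plain <img> or <form> forgeries cannot attach this header from
    // another origin, so its absence means the request did not come
    // from the site's own pages.
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// ...update the profile for the session identified by the sid cookie...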

Alas, vulnerabilities arise when exceptions occur to security rules. Plug-ins like Flash or Silverlight might allow requests to include any number or type of header regardless of the origin or destination of the request. While vendors try to maintain secure products, a vulnerability or mistake could expose users to CSRF even in the face of this countermeasure. CSRF exploits both the client and server—which means they each need to pull their weight to keep attackers at bay.

Shared Secrets

Another effective CSRF countermeasure assigns a temporary pseudo-random token to sensitive actions performed by authenticated users. The value of the token is known only to the web application and the user’s web browser. When the web application receives a request it first verifies that the token’s value is correct. If the value doesn’t match the one expected for the user’s current session, then the request is rejected. An attacker must include a valid token when forging a request.

<form>

<input type=hidden name="csrf" value="57ba40e58ea68b228b7b4eaf3bca9d43">

</form>

Secret tokens must be ephemeral and unpredictable in order to be effective. The token should be refreshed for each sensitive state transition; its goal is to tie a specific action to a unique user. Unpredictable tokens prevent attackers from successfully forging a request because they do not know the correct value to use. Otherwise, a predictable token like the victim's userid can be guessed by the attacker.
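A minimal PHP sketch of issuing and verifying such a token follows (session storage and the field names are illustrative assumptions). openssl_random_pseudo_bytes supplies the unpredictability the token needs.

<?php
session_start();

// Issue a fresh, unpredictable token when rendering a sensitive form.
// Embed the return value as <input type=hidden name="csrf" value="...">.
function issue_csrf_token() {
    $token = bin2hex(openssl_random_pseudo_bytes(16));
    $_SESSION['csrf'] = $token;
    return $token;
}

// Verify the submitted token before acting on the request, then re-issue
// it so each sensitive state transition gets its own value.
function verify_csrf_token() {
    return isset($_SESSION['csrf'], $_POST['csrf'])
        && $_POST['csrf'] === $_SESSION['csrf'];
}

if ($_SERVER['REQUEST_METHOD'] === 'POST' && !verify_csrf_token()) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}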

Note

The term "state transition" is a fancy shortcut for any request that affects the data associated with a user. The request could be a form submission, a click on a link, or a JavaScript call to the XmlHttpRequest object. The data could be part of the user's profile, such as the current password or email address, or information handled by the web application, such as a banking transfer amount. Not every request needs to be protected from CSRF, just the ones that impact a user's data or actions that are specific to the user. Submitting a search for email addresses that start with the letter Y doesn't affect the user's data or account. Submitting a vote to a poll question, on the other hand, is an action that should be specific to each user.

Predictable tokens come in many guises: time-based values, sequential values, hashes of the user’s email address. Poorly created tokens might be hard to guess correctly in one try, but the attacker isn’t limited to one guess. A time-based token with resolution to seconds only has 60 possible values in a one-minute window. Millisecond resolution widens the range, but only by about nine more bits. Fifteen bits (about the range of time in milliseconds) represent a nice range of values—an attacker would have to create 600 booby-trapped <img> tags to obtain a 1% chance of success. On the other hand, a smarter hacker might put together a sophisticated bit of on-line social engineering that forces the victim toward a predictable time window.

Warning

Transforming a value to increase its bit length doesn't always translate into "better randomness." (In quotes because a rigorous discussion of generating random values is well beyond the scope and topic of this book.) Hash functions are one example of a transformation with misunderstood effect. For example, the SHA-256 hash function generates a 256-bit value from an input seed for a total of 2^256 possible outcomes. The integers between 0 and 255 are represented with eight bits (2^8 possible values). The value of an 8-bit token is easy to predict or brute force. Using an 8-bit value to seed the SHA-256 hash function does not make a token any more difficult to brute force in spite of the apparent range of 2^256 values. Hash functions always produce the same output for a given input. Thus, only a pittance (2^8) of those 2^256 values will ever be generated. The mistake is to assume that a brute force attempt to reverse engineer the seed requires a complete scan of every possible value, something that isn't computationally feasible. Those 256 bits merely obfuscate a poor entropy source: the original 8-bit seed. An attacker wouldn't even have to be very patient before figuring out how the tokens are generated; an ancient Commodore 64 could accomplish such a feat first by guessing seed zero, then one, and so on until the maximum possible seed of 255. From there it's a trivial step to spoofing the tokens for a forged request.

Mirror the Cookie

Web applications already rely on pseudo-random values for session cookies. This cookie, whether a session cookie provided by the application’s programming language or custom-created by the developers, has (or should have!) the necessary properties of a secret token. Thus, the cookie’s value is a perfect candidate for protecting forms. Using the cookie also alleviates the necessity for the application to track an additional value for each request; the application need only match the user’s cookie value with the token value submitted via the form.

Also referred to as “double submit,” this countermeasure places a copy of the session cookie in a hidden form field. Thus, a server should be able to trivially verify that the session cookie of the request matches the value provided in the form. A hacker would have to compromise the session cookie in order to create a valid token. And if a hacker can obtain or guess the session cookie in the first place, then the site has much worse security problems than CSRF to deal with.

This countermeasure takes advantage of the browser’s Same Origin Policy (SOP). The SOP prevents a site of one “origin”, the attacker’s CSRF-laden page for example, from reading the cookies set by other origins. (Only pages with the same URI scheme, host, and port of the cookie’s origin may access it.) Without access to the cookie’s value the attacker is unable to forge a valid request. The victim’s browser will, of course, submit the cookie to the target web application, but the attacker does not know that cookie’s value and therefore cannot add it to the spoofed form submission.
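A sketch of the double-submit check in PHP follows (the hidden field name is an assumption). When the form is rendered, the page embeds the current session identifier in the hidden field; on submission, the server simply compares that field to the cookie the browser sent.

<?php
// Double-submit verification: the hidden field must echo the session cookie.
// The attacker's page cannot read this cookie (Same Origin Policy), so it
// cannot fill in the correct value when forging the form.
session_start();

$cookieValue = isset($_COOKIE[session_name()]) ? $_COOKIE[session_name()] : '';
$formValue   = isset($_POST['session_token'])  ? $_POST['session_token']  : '';

if ($cookieValue === '' || $cookieValue !== $formValue) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// ...the pair matches; continue processing the request...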

The Direct Web Remoting (DWR) framework employs this mechanism. DWR combines server-side Java with client-side JavaScript in a library that simplifies the development process for highly interactive web applications. It provides configuration options to auto-protect forms against CSRF attacks by including a hidden httpSessionId value that mirrors the session cookie. For more information visit the project’s home page at http://directwebremoting.org/. Built-in security mechanisms are a great reason to search out development frameworks rather than build your own.

Require Manual Confirmation

One way to preserve the security of sensitive actions is to keep the user explicitly in the process. This ranges from requiring a response to the question, "Are you sure?" to asking users to re-supply their passwords. Adopting this approach requires particular attentiveness to usability. The Windows User Account Control (UAC) is a case where Microsoft attempted to raise users' awareness of changes in their security context by throwing up an incessant number of alerts.

Manual confirmation doesn't necessarily enforce a security boundary. UAC alerts were intended to make users aware of potentially malicious outcomes due to certain actions. The manual confirmation was intended to prevent the user from unwittingly executing a malicious program; it wasn't intended as a way to block the activity of malicious software once it is installed on the computer. Web site owners trying to minimize the number of clicks to purchase an item or site designers trying to improve the site's navigation experience are likely to balk at intervening alerts as much as users will complain about the intrusiveness.

The manual confirmation must require an action that only a person can carry out, such as clicking a modal JavaScript alert or answering a CAPTCHA. Users unfamiliar with security or annoyed by pop-ups will be inattentive to an alert’s content and merely seek out whatever button closes it most quickly. These factors relegate manual confirmation to an act of last resort or a measure for infrequent, but particularly sensitive actions, such as resetting a password or transferring money outside of a user’s accounts.

Tip

Remember, cross-site scripting vulnerabilities weaken or disable CSRF countermeasures, even those that seek manual confirmation of an action.

Understanding Same Origin Policy

In Chapter 2 we touched on the browser’s Same Origin Policy with regard to executing JavaScript and accessing DOM elements. Same Origin Policy restricts JavaScript’s access to the Document Object Model. It prohibits content of one host from accessing or modifying the content from another host even if the content is rendered in the same page. This policy inhibits certain exploit techniques, but it is unrelated to the vulnerability’s root cause. The same is true for CSRF.

Same Origin Policy preserves the separation of content between sites (unrelated origins). Without it, all of the CSRF countermeasures fail miserably. On the other hand, the Same Origin Policy has no bearing on submitting requests to a web application. HTML5's Cross-Origin Resource Sharing (CORS) refines the situation by defining how the XMLHttpRequest object may be used across origins. However, CORS is a method for improving a site's intended communication with other origins. Relying on the Same Origin Policy to defeat CSRF is misguided because it does not address the hack's underlying issues. Browser vulnerabilities or plug-ins that break the Same Origin Policy threaten CSRF defenses. Reiterating the policy here is intended to underscore the need for explicit CSRF countermeasures like custom headers and pseudo-random tokens.
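The important detail is that cross-origin read access is something a site opts into explicitly. As a minimal sketch (the Node.js server, port, and trusted origin below are illustrative, not part of any particular product), a response might grant that permission to a single origin while every other origin remains bound by the Same Origin Policy:

// Sketch: opt in to CORS for exactly one trusted origin.
var http = require('http');

http.createServer(function (req, res) {
  // Browsers expose this response only to scripts running on the origin
  // named here; all other origins are still blocked by the SOP.
  res.setHeader('Access-Control-Allow-Origin', 'https://trusted.example.com');
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end('{"status": "ok"}');
}).listen(8080);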

Anti-Framing via JavaScript

CSRF's cousin, clickjacking, is not affected by any of the countermeasures mentioned so far. This attack relies on fooling users into making the request themselves rather than forcing the browser to generate the request automatically. The main property of a clickjacking attack is framing the target web site's content. Since clickjacking frames the target site's HTML, a natural line of defense is to use JavaScript to detect whether the page has been framed. A tiny piece of JavaScript is all it takes to break page-framing:

// Example 1: if this page has been framed, force the top-level window to our URL.
if (parent.frames.length > 0) {
  top.location.replace(document.location);
}

// Example 2: only break out of the frame when the referring page is not our own domain.
if (top.location != location) {
  if (document.referrer && document.referrer.indexOf("domain.name") == -1) {
    top.location.replace(document.location.href);
  }
}

The two examples in the preceding code are effective, but not absolute. A more in-depth analysis of JavaScript-based countermeasures is available from a paper produced by Stanford University’s Web Security Group at http://seclab.stanford.edu/websec/framebusting/framebust.pdf.

Warning

JavaScript-based anti-framing defenses might fail for many reasons. JavaScript might be disabled in the user's browser. For example, the attacker might add security="restricted" to the enclosing iframe, which blocks Internet Explorer from executing any JavaScript in the frame's source. A valid counter-argument asserts that disabling JavaScript for the frame may also disable functionality needed by the targeted action, thereby rendering the attack ineffective anyway. (What if the form to be hijacked calls JavaScript in its onSubmit or onClick event?) More sophisticated JavaScript (say, 10 lines or so) can be used to break the anti-framing code. In terms of reducing exploit vectors, anti-framing mechanisms work well. They do not completely resolve the issue. Expect the attacker to always have the advantage in the JavaScript arms race.
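One technique described in the Stanford paper shows how that arms race plays out: the attacker's framing page registers an onbeforeunload handler and keeps redirecting the pending navigation to a URL that returns 204 No Content, so the framed page's attempt to replace top.location never completes. A sketch of that counter-attack (the 204 URL is a placeholder) looks like this:

// Sketch of the "204 flushing" counter to frame-busting code, adapted from
// the technique described in the Stanford framebusting paper. This runs in
// the attacker's top-level page that frames the target site.
var preventBust = 0;
window.onbeforeunload = function () {
  preventBust++;
};
setInterval(function () {
  if (preventBust > 0) {
    preventBust -= 2;
    // A URL that answers 204 No Content cancels the navigation started by
    // the framed page's top.location.replace() call.
    window.top.location = 'http://attacker.example/no-content-204';
  }
}, 1);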

Framing the Solution

Internet Explorer 8 introduced the X-Frame-Options response header to help site developers instruct the browser whether it may render content within a frame. There are two possible values for this header:

• DENY—The content cannot be rendered within a frame. This setting would be the recommended default for the site to be protected. For example, www.facebook.com sets this value.

• SAMEORIGIN—The content may only be rendered in frames with the same origin as the content. This setting would be applied to pages that are intended to be loaded within a frame of the web site. For example, www.google.com sets this value.

All modern browsers have adopted this security measure. It effectively blocks clickjacking attacks and prevents other types of framing hacks as well. The web application's code doesn't have to change at all because this countermeasure is applied via response headers and enforced by the browser. It is one of the easiest defenses to deploy. It also demonstrates how good security design can obviate an entire class of vulnerabilities. Once an overwhelming majority of users upgrade to modern browsers and sites set the X-Frame-Options header, clickjacking will be relegated to an appendix of web security history.
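Setting the header is typically a one-line change. As a minimal sketch (the Node.js server and port are illustrative, not a recommendation of any particular stack), every response simply carries the header:

// Sketch: emit X-Frame-Options on every response so conforming browsers
// refuse to render the page inside a frame on another origin.
var http = require('http');

http.createServer(function (req, res) {
  // Use 'SAMEORIGIN' instead of 'DENY' for pages meant to be framed by the site itself.
  res.setHeader('X-Frame-Options', 'DENY');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<html><body>Protected content</body></html>');
}).listen(8080);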

Note

The iframe's sandbox attribute and the text/html-sandboxed Content-Type do not affect clickjacking attacks. They control how the browser handles framed content, for example by restricting JavaScript execution or forbidding form submission. An effective clickjacking countermeasure needs to prevent the content from being framed in the browser at all. Even if the server sets the X-Frame-Options header, the site is not really protected unless the user's browser supports it.

Defending the Web Browser

There is a foolproof defense against CSRF for the truly paranoid: change browsing habits. Its level of protection, though, is directly proportional to its level of inconvenience. Only visit one web site at a time, avoiding multiple browser windows or tabs. When finished with a site, use its logout mechanism rather than just closing the browser or moving on to the next site. Don't use any "remember me" or auto-login features if the web site offers them. An effective prescription perhaps, but one that quickly becomes inconvenient.

Vulnerability & Verisimilitude

This chapter has focused on the mechanics of executing a CSRF hack and the means to defend against it. But there’s one aspect of CSRF that always arises in discussing its impact: Do you care?

CSRF hacks that affect a user's security context (the user's relationship to the site or to their data) are obvious problems. Less clear are situations like login forms or logout buttons. Does a login form require CSRF protection? After all, an attacker needs to populate the form's username and password to forge the request, so why not just use those credentials to log in directly? The logout button changes a user's security context (the user goes from authenticated to unauthenticated in a single click), but how much impact does that have beyond being a nuisance? Every search engine is vulnerable to CSRF, but how much does it matter if an attacker forces random browsers to execute search requests?

It's possible to build counter-examples to the login, logout, and search situations. But those counter-examples rely on contrived scenarios or additional threats to a user rather than threats to the web application. In short, weigh the effort required to implement a countermeasure against the time spent determining the risk of a CSRF vulnerability. If it's possible to deploy a web framework with built-in countermeasures, then the effort to fix the problem is minimal and there's no reason to waste time agonizing over attack scenarios. Engineering involves creating effective solutions to real problems.

Summary

Cross-site request forgery (CSRF) targets the stateless nature of HTTP requests by crafting innocuous pages with HTML elements that force a victim’s browser to perform a request using the victim’s role and privilege relationship to a site, rather than the attacker’s. The forged request is placed in the source (src) attribute of an element that browsers automatically load, such as an iframe or img. The trap-laden page is deployed to any site that a victim might visit, or perhaps even sent as an HTML email. When the victim’s browser encounters the page it loads all of the page’s resources, including the link with the forged request. The forged link represents some action, perhaps a money transfer or a password reset, on a site using the victim’s security context—after all, it’s their browser, their cookies. The hack relies on the assumption that the victim has already authenticated to the web site, either in a different browser tab or window. A successful hack tricks the victim’s browser into making a pre-authenticated, pre-authorized request—but without the knowledge or consent of the victim.

CSRF happens behind the scenes of the web browser, following behaviors common to every site on the web. The web site targeted in the forged request only ever sees a valid request from a valid user; there’s no indication that anything is amiss (and therefore nothing to monitor for a firewall or IDS). The indirect nature of CSRF makes it difficult to catch. The apparent validity of CSRF traffic makes it difficult to block. The impact makes it difficult to accept.

Web developers must protect their sites by applying measures beyond authenticating the user. After all, the forged request originates from the user even if the user isn’t aware of it. Hence the site must authenticate the request as well as the user. This ensures that the request, already known to be from an authenticated user, was made after visiting a page in the web application itself and not an insidious img element somewhere on the Internet.

CSRF also attacks the browser, so visitors to web sites must take precautions of their own. The general recommendations of up-to-date browser versions and fully patched systems always apply. Users can take a few steps to specifically protect themselves from CSRF. Using separate browsers for sensitive tasks reduces the possibility that a bank account accessed in Internet Explorer would be compromised by a CSRF payload encountered in Safari. Users can also make sure to use sites' logout mechanisms. Such steps are a bitter pill since they start to tip the balance away from usability toward the burden of security.

It isn't likely that these attacks will diminish over time. The vulnerabilities that lead to CSRF lie within HTTP and how browsers interpret HTML. The proliferation of web-based APIs makes it easier for developers to centralize security defenses, but it also makes attacks easier to mount. CSRF attacks are hard to detect; their characteristics are more subtle than those of cross-site scripting or SQL injection. The threat remains as long as attackers can exploit vulnerable sites for profit. The growth of new web sites and the amount of valuable information moving into those sites seem to ensure that attackers will keep that threat alive for a long time. Both web site developers and browser vendors must be diligent in employing countermeasures now, because going after the root of the problem, increasing the inherent security of standards like HTTP and HTML, is a task that will take years to complete.

1 This header name was misspelled in the original HTTP/1.0 standard (RFC 1945) published in 1996. The prevalence of web servers and browsers expecting the misspelled version likely ensures that it will remain so for a long, long time.