Advanced Integration - Development with the Force.com Platform: Building Business Applications in the Cloud, Third Edition (2014)


11. Advanced Integration

This chapter focuses on integration features that are highly specialized and not typically essential for everyday application development. These features are often used by independent software vendors to extend Force.com at a low level to add new capabilities.

Due to their specialized nature and complexity, the APIs covered here each have their own dedicated reference guide from Salesforce. The intent of this chapter is to provide a brief introduction to each API, with sample code that can serve as a starting point.

This chapter is divided into sections that each address a different integration feature:

Introduction to the Streaming API—The Streaming API provides near-real-time notifications about the creation and modification of database records.

Working with the Bulk API—The Bulk API is a way to move mass quantities of database records in and out of Force.com.

Getting started with Canvas—Canvas provides a secure mechanism to embed user interfaces, hosted outside of Force.com, into Chatter and Visualforce pages.

Introduction to the Tooling API—The Tooling API is used by the Force.com IDE and other tools to maintain code artifacts and access debugging functionality.

Understanding the Metadata API—The Metadata API enables you to write code to perform development and configuration management tasks such as database object maintenance and application migration. It is the same API used by the Force.com IDE.

Sample application—In an integration scenario for the Services Manager sample application, a Java program is developed to update Force.com with information from a human resources database.


The code listings in this chapter are available in a GitHub Gist.

Introduction to the Streaming API

The Streaming API delivers notifications to your program when records in the Force.com database are created or modified. This can be useful for user interfaces that have a real-time data requirement, or to keep an external database in sync with Force.com. The Streaming API is a scalable alternative to polling for changes or writing triggers with callouts.

This section provides an introduction to Streaming API in two parts, described here:

1. Overview—Learn the key concepts involved in the Streaming API.

2. Getting started with Streaming API—Construct a working example that uses the Streaming API within a Visualforce page.


For more information about the Streaming API, consult the Streaming API Developer’s Guide from Salesforce.


Streaming notifications in Force.com are best understood in terms of publishers and subscribers. Force.com can be configured to publish notifications when something interesting happens with a database object. This publishing configuration is expressed through a PushTopic. The PushTopic defines the database object to monitor, a public name that subscribers can reference called a Channel, and guidance on what conditions in the database object must be satisfied to create a notification. The subscriber is a program inside or outside of Force.com that uses the Bayeux protocol (CometD implementation) to register interest in and receive the streaming notifications.

PushTopics are ordinary database records, but contain four components that are critical to properly configuring your streaming notifications, described in the following list:

1. Channel name—This is the name that client applications will use to subscribe to the streaming notifications on this PushTopic. It must be 25 characters or fewer and be unique in your organization.

2. SOQL query—The SOQL query defines the database object and fields that you are monitoring for changes, plus optionally the criteria used to determine whether a change is worthy of a notification. To receive notifications, the subscriber must have at least read access to the object, field-level security to the fields in the WHERE clause, and visibility to the records causing the notifications via sharing rules.

3. NotifyForOperations—By default, notifications are sent on the Channel when matching records are created or updated (All). Use this field to limit notifications to only creation (Create) or only modification (Update) of records.

4. NotifyForFields—This setting instructs the Channel on what fields in the SOQL query are considered changes and trigger a notification. Any filters in a WHERE clause are always evaluated first. By default, it is set to Referenced, which means all fields in the query are factored into the decision. Other valid values are All (all fields in the object, even those not in SELECT or WHERE), Select (fields in a SELECT clause only), and Where (fields in a WHERE clause only).
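To make the interaction of these settings concrete, the following Python sketch models the NotifyForFields decision described above. It is only a model of the documented rules, not Salesforce code; the field names are examples.

```python
def should_notify(changed_fields, select_fields, where_fields,
                  matches_where, notify_for_fields):
    """Model of the NotifyForFields decision (illustrative only)."""
    # Filters in the WHERE clause are always evaluated first.
    if not matches_where:
        return False
    if notify_for_fields == "All":
        # Any field on the object counts, even outside SELECT and WHERE.
        return len(changed_fields) > 0
    watched = {
        "Referenced": select_fields | where_fields,
        "Select": select_fields,
        "Where": where_fields,
    }[notify_for_fields]
    return bool(changed_fields & watched)

# Query: SELECT Id, Name FROM Timecard__c WHERE Status__c = 'Open'
# An update touching only Status__c notifies under Referenced, not Select.
print(should_notify({"Status__c"}, {"Id", "Name"}, {"Status__c"},
                    True, "Referenced"))  # True
print(should_notify({"Status__c"}, {"Id", "Name"}, {"Status__c"},
                    True, "Select"))      # False
```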

As soon as a PushTopic is created, it is instantly available to subscribers. Likewise, when it is modified, the new definition takes effect immediately. You can delete a PushTopic record to stop its notifications, or set IsActive to false to disable it temporarily.

Each organization has a limit of 20 PushTopics, and there are per-edition limits on subscribers per topic and notifications per day. The SOQL query used in a PushTopic is also subject to a number of limitations, described next:

Subset of objects—All custom objects are supported, but only a handful of standard objects: Account, Campaign, Case, Contact, Lead, Opportunity, and Task.

Subset of query features—Aggregate queries, semi-joins and anti-joins, count, limit, relationship fields, order by, group by, and formula fields are not supported.

Required fields—The query must include the Id field.

Maximum length—The query cannot exceed 1,300 characters.

Getting Started with Streaming API

A simple way to experiment with the Streaming API is to create a Visualforce page to serve as the subscriber. You can then visually see notifications as they arrive. Figure 11.1 shows a sample Visualforce page to do this. The button on the top starts and stops notifications by creating and deleting a PushTopic record. The table below it displays notifications as they arrive from Force.com, in response to the creation and modification of Timecard records.


Figure 11.1 Streaming API example

To try this example in your own Salesforce organization, create the controller class in Listing 11.1. Then download the CometD 2.2.0 library from the CometD project site. Uncompress it and extract the following files:

cometd-2.2.0/cometd-javascript/common/target/org/Cometd.js

cometd-2.2.0/cometd-javascript/jquery/src/main/webapp/jquery/jquery-1.5.1.js

cometd-2.2.0/cometd-javascript/jquery/src/main/webapp/jquery/json2.js

cometd-2.2.0/cometd-javascript/jquery/src/main/webapp/jquery/jquery.cometd.js

Place them into a zip file and upload it as a static resource named cometd. Now you can create the Visualforce page given in Listing 11.2.

Listing 11.1 Visualforce Controller for Streaming API Example

public with sharing class MyPageController11_1 {
    public Boolean started { get; set; }
    private static final String TOPIC_NAME = 'TimecardUpdates';
    public MyPageController11_1() {
        started = 1 == [ SELECT count() FROM PushTopic
                         WHERE Name = :TOPIC_NAME ];
    }
    public PageReference stop() {
        List<PushTopic> topics = [ SELECT Id FROM PushTopic
                                   WHERE Name = :TOPIC_NAME LIMIT 1 ];
        if (!topics.isEmpty()) {
            delete topics;
        }
        started = false;
        return null;
    }
    public PageReference start() {
        PushTopic p = new PushTopic();
        p.Name = TOPIC_NAME;
        p.Query = 'SELECT Id, Name, Status__c FROM Timecard__c';
        p.ApiVersion = 28.0;
        p.NotifyForOperations = 'All';
        p.NotifyForFields = 'Referenced';
        insert p;
        started = true;
        return null;
    }
}

Listing 11.2 Visualforce Page for Streaming API Example

<apex:page controller="MyPageController11_1">
<apex:form id="form">
<apex:includeScript value="{!URLFOR($Resource.cometd, 'jquery-1.5.1.js')}" />
<apex:includeScript value="{!URLFOR($Resource.cometd, 'json2.js')}" />
<apex:includeScript value="{!URLFOR($Resource.cometd, 'Cometd.js')}" />
<apex:includeScript value="{!URLFOR($Resource.cometd, 'jquery.cometd.js')}" />
<apex:sectionHeader title="Force.com Streaming API Example" />
<br />
<apex:commandButton action="{!start}" value="Start"
  rerender="form" rendered="{!NOT(started)}" />
<apex:commandButton action="{!stop}" value="Stop"
  rerender="form" rendered="{!started}" />
<apex:outputPanel id="comet" rendered="{!started}">
<script type="text/javascript">
(function($) {
  $(document).ready(function() {
    // Connect to the Streaming API endpoint using the current session.
    $.cometd.init({
      url: window.location.protocol + '//' + window.location.hostname +
        '/cometd/28.0/',
      requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
    });
    // Append a table row for every notification received on the channel.
    $.cometd.subscribe('/topic/TimecardUpdates', function(message) {
      $('#content').append(
        '<tr><td>' + JSON.stringify(message.channel) + '</td>' +
        '<td>' + JSON.stringify(message.data.sobject.Name) + '</td>' +
        '<td>' + JSON.stringify(message.data.sobject.Status__c) + '</td>' +
        '<td>' + JSON.stringify(message.data.event.type) + '</td>' +
        '<td>' + JSON.stringify(message.data.event.createdDate) +
        '</td></tr>');
    });
  });
})(jQuery);
</script>
<p />
<table id="content" width="80%"><tr><th>Channel</th><th>Name</th>
  <th>Status</th><th>Event</th><th>Date</th></tr></table>
</apex:outputPanel>
</apex:form>
</apex:page>

Working with the Bulk API

The Bulk API allows the import or export of large quantities of records, split into units of work called batches. Up to 20 million records per 24-hour period can be imported into Force.com. Both REST and SOAP versions of the API are provided.

This section focuses on hands-on examples with the REST flavor of the Bulk API. The examples require a tool named cURL, available free for every platform.

This section provides an introduction to Bulk API in three parts, described here:

1. Overview—Get to know the terminology and workflow of the Bulk API, and prepare to use it by authenticating using OAuth.

2. Importing records—Walk through API usage examples of creating a job to import records and verify its successful completion.

3. Exporting records—In a series of API calls, submit a SOQL query for a bulk export and retrieve the results.


For a comprehensive look at the Bulk API, refer to the Bulk API Developer’s Guide from Salesforce.


The Bulk API operates in terms of a two-tier system of containers to track units of data movement work. Each tier is described here:

Batch—A batch is a set of records to be imported. The records are represented in CSV or XML format. For import jobs, a batch cannot exceed 10,000 records. Batches are not applicable to export jobs, which use result files that cannot exceed 1GB.

Job—A job is a list of batches. The job specifies the type of operation that will be performed in the batches, such as insert or query.
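Since an import batch cannot exceed 10,000 records, a client typically chunks its data before creating batches. A minimal Python sketch of that chunking follows; the helper name is hypothetical, and only the 10,000-record limit comes from the API.

```python
def split_into_batches(records, batch_size=10000):
    """Split records into Bulk API-sized batches (at most batch_size each)."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# 25,000 records become two full batches plus one partial batch.
rows = [{"Name": "Project%d" % n} for n in range(25000)]
batches = split_into_batches(rows)
print(len(batches), len(batches[-1]))  # 3 5000
```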


Bulk REST API calls require authentication to Force.com. Use the username-password OAuth flow, which accepts a username and password, to establish an authenticated session. Listing 11.3 provides a sample request and response.

Listing 11.3 Sample Password Authentication Request and Response

curl https://login.salesforce.com/services/oauth2/token \
  -d "grant_type=password" -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" \
  -d "username=$USERNAME" -d "password=$PASSWORD"

{
  "id": "...",
  "issued_at": "1374386510993",
  "instance_url": "https://na15.salesforce.com",
  "signature": "...",
  "access_token": "..."
}

The value in the response’s access_token field is needed to run all of the examples in this section. To get one yourself, set the $USERNAME environment variable to your Salesforce username and $PASSWORD to your Salesforce password with your security token appended. The variables $CLIENT_ID and $CLIENT_SECRET are your OAuth Consumer Key and Consumer Secret. These come from a Connected App, which you can reuse from Chapter 10, “Integration with Force.com.”
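Under the hood, the token request in Listing 11.3 simply posts form-encoded parameters to the OAuth token endpoint. The following Python sketch assembles the same request body; the credential values here are placeholders.

```python
from urllib.parse import urlencode

def password_grant_body(client_id, client_secret, username, password):
    """Build the form-encoded body for the username-password OAuth flow."""
    return urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        # Salesforce password with the security token appended
        "password": password,
    })

body = password_grant_body("MY_CLIENT_ID", "MY_CLIENT_SECRET",
                           "user@example.com", "passwordTOKEN")
print(body)
```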

Now that you have obtained an OAuth access token, you are ready to try the Bulk API examples. Set the access token as the environment variable $TOKEN. Also, be sure to replace na15 in the following examples with your own Force.com instance. To identify your instance, look at the instance_url field in the response to the OAuth username-password flow, or the URL in your Web browser when you log in to Salesforce.

Importing Records

To import records, an authenticated user creates an import job, adds batches of data to it, closes the job, checks for completion, and then retrieves the results. The results are provided per batch and indicate the status of each imported record. Examples of each step in this process are provided in the remainder of this subsection.
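Before diving into the individual listings, it helps to see the whole import as the sequence of REST calls it produces. The Python sketch below lists the method and URL of each step; na15 and the identifiers are placeholders, and the listings that follow perform these calls with cURL.

```python
BASE = "https://na15.salesforce.com/services/async/28.0"

def import_workflow(job_id="JOB_ID", batch_id="BATCH_ID"):
    """Return the (method, URL) sequence for a Bulk API import."""
    return [
        ("POST", BASE + "/job"),                       # create the job
        ("POST", "%s/job/%s/batch" % (BASE, job_id)),  # add a batch of records
        ("POST", "%s/job/%s" % (BASE, job_id)),        # close the job
        ("GET",  "%s/job/%s" % (BASE, job_id)),        # check job status
        ("GET",  "%s/job/%s/batch/%s/result"
                 % (BASE, job_id, batch_id)),          # per-record results
    ]

for method, url in import_workflow():
    print(method, url)
```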

Listing 11.4 creates a bulk import job. It specifies that the records in the job are to be inserted into the Project custom object from a CSV file.

Listing 11.4 Creating a Bulk Import Job

echo '<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>insert</operation>
  <object>Project__c</object>
  <contentType>CSV</contentType></jobInfo>' |\
curl -X POST -H 'Content-type: application/xml' \
  -H "X-SFDC-Session: "$TOKEN -d @- \
  https://na15.salesforce.com/services/async/28.0/job


To adapt the command in Listing 11.4 and other listings in this chapter to run in Windows Command Prompt, remove the single quotation mark characters (') in the echo statement, replace the single quotation mark characters around the Content-type header with double quotation mark characters ("), remove the backslash (\) line-continuation characters and concatenate the lines into a single line, and replace $TOKEN with %TOKEN%.

Make a note of the job identifier, in the id field of the XML response. It is used in all of the requests that follow. In Listing 11.5, JOB_ID is a placeholder for the job identifier returned from the import creation request. Replace it with your own. The records in the batch are sent in the body of the request, composed of three Project records with unique names.

Listing 11.5 Adding Records to Bulk Import Job

echo 'Name
Project1
Project2
Project3' |\
curl -X POST -H 'Content-type: text/csv' \
  -H "X-SFDC-Session: "$TOKEN --data-binary @- \
  https://na15.salesforce.com/services/async/28.0/job/JOB_ID/batch

Save the batch identifier that is returned. You will need it to check for the results of the batch.

You can add more batches to the job by repeating the request. When you’re done adding batches, send the request in Listing 11.6 to close the job, again setting the job identifier to your own. Closing the job signals to Force.com that it can begin processing the job.

Listing 11.6 Closing the Bulk Import Job

echo '<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <state>Closed</state></jobInfo>' |\
curl -X POST -H 'Content-type: application/xml' \
  -H "X-SFDC-Session: "$TOKEN -d @- \
  https://na15.salesforce.com/services/async/28.0/job/JOB_ID

Job processing is asynchronous, so requests complete immediately but processing continues in the background. To check for the status of the job, send the request in Listing 11.7 with your job identifier.

Listing 11.7 Checking the Status of the Bulk Import Job

curl https://na15.salesforce.com/services/async/28.0/job/JOB_ID \
  -H "X-SFDC-Session: "$TOKEN

When the job is complete, you can retrieve the results of its batches. Each batch result indicates the success or failure of every record within the batch. Listing 11.8 shows a sample request to retrieve the batch status. Replace the job identifier and batch identifier (BATCH_ID) with your own.

Listing 11.8 Retrieving Results of the Bulk Import Job

curl https://na15.salesforce.com/services/async/28.0/\
job/JOB_ID/batch/BATCH_ID/result \
  -H "X-SFDC-Session: "$TOKEN

Exporting Records

The Bulk API can also be used to query Force.com to export large numbers of records in CSV or XML format. First a bulk export job is created; then a batch is added to the job containing a SOQL statement. The SOQL cannot contain relationship fields, nested queries, or the aggregate and grouping constructs COUNT, ROLLUP, SUM, or GROUP BY CUBE. Next, the status of the job is checked, and, finally, the results are retrieved in files, each up to 1GB in size.
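A client can pre-screen a query against these restrictions before submitting the batch. The Python sketch below does a naive substring scan; it is purely illustrative, not a real SOQL parser, and a real client should still rely on the server’s error responses.

```python
def check_export_soql(soql):
    """Naive pre-flight check for the bulk-export SOQL restrictions.

    Illustrative only: substring matching is not a real SOQL parser."""
    upper = soql.upper()
    problems = [c for c in ("COUNT(", "ROLLUP", "SUM(", "GROUP BY CUBE")
                if c in upper]
    # A parenthesized SELECT suggests a nested (semi-join) query.
    if "(SELECT" in upper.replace(" ", ""):
        problems.append("nested query")
    return problems

print(check_export_soql("SELECT Id, Name FROM Project__c"))  # []
print(check_export_soql("SELECT COUNT() FROM Project__c"))   # ['COUNT(']
```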

To begin, create a bulk export job using the request in Listing 11.9.

Listing 11.9 Creating the Bulk Export Job

echo '<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Project__c</object>
  <contentType>CSV</contentType></jobInfo>' |\
curl -X POST -H 'Content-type: application/xml' \
  -H "X-SFDC-Session: "$TOKEN -d @- \
  https://na15.salesforce.com/services/async/28.0/job

Keep track of the job identifier returned in the response. Create a batch within the job, specifying the SOQL statement. In Listing 11.10, the names and identifiers of the Project records will be exported. Replace JOB_ID with your job identifier.

Listing 11.10 Creating the Bulk Export Batch

echo 'SELECT Id, Name FROM Project__c' |\
curl -X POST -H 'Content-type: text/csv' \
  -H "X-SFDC-Session: "$TOKEN --data-binary @- \
  https://na15.salesforce.com/services/async/28.0/job/JOB_ID/batch

Make a note of the batch identifier. Use the request in Listing 11.11 to check the status of your export job.

Listing 11.11 Checking the Status of the Bulk Export Job

curl https://na15.salesforce.com/services/async/28.0/\
job/JOB_ID/batch/BATCH_ID \
  -H "X-SFDC-Session: "$TOKEN

When the job is complete, the results are ready to retrieve. This is a two-step process. First, retrieve the list of result identifiers. Then, for each result identifier, make a request to retrieve the actual results. Listing 11.12 is an example of the first step. Be sure to replace the JOB_ID and BATCH_ID placeholders with your own values.

Listing 11.12 Retrieving Result Identifiers of the Bulk Export Job

curl https://na15.salesforce.com/services/async/28.0/\
job/JOB_ID/batch/BATCH_ID/result \
  -H "X-SFDC-Session: "$TOKEN

The last step in the process is shown in Listing 11.13. In addition to job and batch identifiers, replace RESULT_ID with one of the result identifiers from the prior request.

Listing 11.13 Retrieving Results of the Bulk Export Job

curl https://na15.salesforce.com/services/async/28.0/\
job/JOB_ID/batch/BATCH_ID/result/RESULT_ID \
  -H "X-SFDC-Session: "$TOKEN

Getting Started with Canvas

Force.com Canvas allows you to integrate with custom applications, located outside of Force.com, at the user interface level. It consists of a flexible content “container” located in Force.com, plus code libraries (JavaScript and Java) that augment your custom application to take advantage of the Canvas. The libraries provide functionality around security, sizing of the content container, and communication between Canvas applications and the container.

This section provides an introduction to Canvas in two parts, described here:

1. Overview—Learn the basic components of the Canvas and how they work.

2. Getting started with the Canvas—Walk through an example of a Canvas application hosted on your local computer.


Canvas is a complex and relatively new area of Force.com, with many ways to implement it. Consult the Canvas Developer’s Guide from Salesforce for the most current and complete information on this feature.


Canvas integrates applications at the user interface level, through the Web browser. The typical scenario for an integrated user interface is mashing up Force.com data with data from an external system. In this scenario, the external system can maintain its own database and processes, but leverage Force.com data opportunistically on behalf of the currently logged-in user. The alternative is typically heavier-weight integration whereby the servers of the external application attempt to stay synchronized with data from Force.com.

The two most important features of Canvas are authentication and cross-domain XMLHttpRequest (XHR). These are described in the following list:

Authentication—Authentication enables your external Web application to verify that it is truly hosted inside a Force.com organization, with an authenticated user at the helm. It does this in one of two ways: by allowing the Web user to authenticate to Force.com via OAuth, or via Signed Request. OAuth is no different from OAuth in other contexts. Signed Request is a method whereby the platform digitally signs a request to your application’s Web server. The request includes the identity and session information of the authenticated user. If the request is decoded and the signature verified, you can trust that it originated from Force.com, and you can use the session to make subsequent requests to Force.com. The Canvas Java SDK provides code for verifying data sent by the Signed Request authentication method.

Cross-domain XHR—Because your Web application is being served inside an IFRAME, it is subject to cross-domain scripting limitations enforced by the standard security policies of Web browsers. This means JavaScript in your Web pages cannot call out to servers other than the one serving the parent Web page. Because a common scenario with mashups is to include data from Force.com, the Canvas JavaScript SDK provides API calls to proxy your requests back to Salesforce.
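To make the Signed Request idea concrete, the following Python sketch models the verification step: the request carries a Base64-encoded payload plus an HMAC-SHA256 signature computed with the Connected App’s consumer secret. This is an illustration of the concept, not the Canvas Java SDK; the secret and payload here are fabricated.

```python
import base64
import hashlib
import hmac
import json

def verify_signed_request(signed_request, consumer_secret):
    """Verify a signed request of the form '<signature>.<payload>'.

    Illustrative sketch: the platform signs the Base64-encoded payload
    with the consumer secret; we recompute and compare the HMAC."""
    signature_b64, payload_b64 = signed_request.split(".", 1)
    expected = hmac.new(consumer_secret.encode(),
                        payload_b64.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(signature_b64), expected):
        raise ValueError("signature mismatch: request is not trusted")
    return json.loads(base64.b64decode(payload_b64))

# Fabricated round trip: sign a payload, then verify it.
secret = "MY_CONSUMER_SECRET"
payload = base64.b64encode(
    json.dumps({"client": {"oauthToken": "..."}}).encode()).decode()
signature = base64.b64encode(hmac.new(
    secret.encode(), payload.encode(), hashlib.sha256).digest()).decode()
print(verify_signed_request(signature + "." + payload, secret))
```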

Getting Started with Canvas

Because so much of a Canvas application resides outside of Force.com by definition, it is a challenge to provide a generic, widely accessible example without pulling in many other technologies. This section walks through an example that leverages a local Web server and two static HTML pages to demonstrate OAuth authentication and cross-domain XHR requests.

The purpose of the example is to highlight the most common features of Canvas, and to do so without requiring an application server. In a more realistic application of Canvas, the OAuth process would originate on the Web server so the authorizations can be stored and managed securely rather than forcing the user to authenticate every time the page is rendered. Alternatively, Signed Request could be used to provide a transparent single sign-on process for the user, whereby the session is shared securely with the external Web application.

Figure 11.2 shows the sample application running within the Canvas App Previewer. The Login link has been clicked, prompting the user with an OAuth authorization pop-up. When authorization is complete, a callback Web page closes the pop-up and refreshes the parent window. The access token obtained during this process is displayed. The user can then click the My Chatter Profile link, which makes a cross-domain XHR request using the Canvas JavaScript SDK to the Chatter REST endpoint to get the current user’s Chatter profile and display the raw JSON response.


Figure 11.2 Canvas App in Canvas App Previewer

The following steps describe the process for getting the example up and running:

1. Create Connected App—In the App Setup area, go to Create, Apps and create a new Connected App. Set the Name, API Name, and Contact Email fields. Check Enable OAuth Settings. Provide a Callback URL, and add “Access and Manage Your Data (api)” to the Selected OAuth Scopes list. In the Supported App Types section, check Canvas. For the Canvas App URL, provide the URL to your local Web server and the path you are using to host the Canvas App pages. For Access Method, select OAuth Webflow (GET). For Locations, select Chatter Tab and Visualforce Page, and then click the Save button. Figure 11.3 shows an example of this configuration.


Figure 11.3 Connected App configuration

2. Set up local Web server with SSL—Get a Web server running on your machine to host the Canvas App. Make sure you have enabled SSL, using a self-signed certificate if necessary. Test the SSL configuration with your browser before proceeding. If there are any untrusted or invalid certificate errors, the Canvas App will fail to load or function properly.

3. Add Canvas App pages—Create the two pages in Listing 11.14 and Listing 11.15 within a directory on your Web server, naming them index.html and callback.html, respectively. In the examples here, they are located in a directory called chapter11, but you can put them anywhere as long as they match the settings in your Connected App.

4. Configure Canvas App pages—In your version of Listing 11.14, replace REDIRECT_URI and CLIENT_ID with the Callback URL and Consumer Key, respectively, from your Connected App configuration. Also update the instance URL in the SCRIPT tag used to load the Canvas JavaScript SDK to match your organization.

5. Preview the Canvas App—You should now be able to see the Canvas App in the App Setup area, Canvas App Previewer. You can also see it in the Chatter tab. If there are issues, use your Web browser’s debugging facility to troubleshoot.

Listing 11.14 Main HTML Page for Canvas Example

<!DOCTYPE html>
<html>
<head>
<!-- Load the Canvas JavaScript SDK from your instance (replace na15). -->
<script type="text/javascript"
  src="https://na15.salesforce.com/canvas/sdk/js/28.0/canvas-all.js"></script>
<script type="text/javascript">
function profileHandler(e) {
  // Request the current user's Chatter profile via the SDK's XHR proxy.
  var profileUrl = Sfdc.canvas.oauth.instance() +
    "/services/data/v28.0/chatter/users/me";
  Sfdc.canvas.client.ajax(profileUrl, {
    client: Sfdc.canvas.oauth.client(),
    failure: function(data) {
      alert("Request failed: " + data.status);
    },
    success: function(data) {
      if (data.status === 200) {
        Sfdc.canvas.byId("chatter_profile").innerHTML =
          JSON.stringify(data.payload);
      }
    }
  });
  return false;
}
function loginHandler(e) {
  var uri;
  if (!Sfdc.canvas.oauth.loggedin()) {
    uri = Sfdc.canvas.oauth.loginUrl();
    Sfdc.canvas.oauth.login({
      uri: uri,
      params: {
        response_type : "token",
        client_id : "CLIENT_ID",
        redirect_uri : encodeURIComponent("REDIRECT_URI")
      }
    });
  }
  return false;
}
Sfdc.canvas(function() {
  var login = Sfdc.canvas.byId("login");
  var loggedIn = Sfdc.canvas.oauth.loggedin();
  if (loggedIn) {
    // Display the OAuth access token obtained during login.
    Sfdc.canvas.byId("oauth").innerHTML = Sfdc.canvas.oauth.token();
  }
  var profile = Sfdc.canvas.byId("profile");
  profile.onclick = profileHandler;
  login.onclick = loginHandler;
});
</script>
</head>
<body>
<h1>Force.com Canvas Example</h1>
<textarea id="oauth" rows="2" cols="80" disabled="true"></textarea><br/>
<a id="login" href="#">Login</a><br/>
<a id="profile" href="#">My Chatter Profile</a><br />
<textarea id="chatter_profile" rows="20" cols="80"></textarea>
</body>
</html>

Listing 11.15 Callback HTML Page for Canvas Example

<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
<head>
<!-- Load the Canvas JavaScript SDK from your instance (replace na15). -->
<script type="text/javascript"
  src="https://na15.salesforce.com/canvas/sdk/js/28.0/canvas-all.js"></script>
<script type="text/javascript">
// Hand the OAuth response in the URL fragment back to the parent
// window, then close the pop-up.
try {
  Sfdc.canvas.oauth.childWindowUnloadNotification(self.location.hash);
} catch (ignore) {}
self.close();
</script>
</head>
<body></body>
</html>

Introduction to the Tooling API

The Tooling API enables the creation of developer productivity tools for the Force.com platform. With the Tooling API, features of tools such as the Force.com IDE are accessible to your own programs. This includes the ability to compile code, perform code completion in an editor, set breakpoints for debugging, and retrieve trace log results.

This section provides an introduction to Tooling API in two parts, described here:

1. Overview—Examine the high-level features of the Tooling API.

2. Getting started with the Tooling API—Build a working example of the Tooling API that allows you to edit and compile an Apex class within a Visualforce page.


For more detail, consult the Tooling API Developer’s Guide from Salesforce.


The Tooling API is available in both REST and SOAP forms. This section focuses on Apex class deployment; however, the Tooling API also provides the following services:

Code—Check the syntax of Apex classes, triggers, Visualforce pages, and Visualforce components.

Deployment—Commit code changes to your organization.

Debugging—Set heap dump markers and overlay Apex code or SOQL statements on an Apex execution. Set checkpoints to generate log files. Access debug log and heap dump files.

Custom fields—Manage custom fields on custom objects.

Getting Started with Tooling API

The power of the Tooling API can be demonstrated using a basic Visualforce page that calls the Tooling API’s REST endpoint from its Apex controller. Figure 11.4 shows the sample user interface. On the left side are the Apex classes available in the organization, accessible with an ordinary SOQL query on ApexClass. On the upper-right side is the body of the selected Apex class. Below it is a Save button, which deploys changes to the class body.


Figure 11.4 Result of Save button click

The process for deploying Apex code or other types of logic is to create a MetadataContainer, add to it the wrapper object corresponding to the type of artifact to be deployed (in this case, ApexClassMember), create a ContainerAsyncRequest, and track the progress of the request using a specialized Tooling API query service.

Below the Save button are two fields that illustrate the internal state of the deployment: the ContainerId and RequestId. These are maintained both to check the status of the deployment (via the Refresh Status button), and to properly clean up (by deleting the MetadataContainer) when the user clicks the Start Over button.

To use the example, click Edit beside the class you’d like to edit. Make a change to the class body and click Save. You should see two successful JSON responses concatenated in the log output box, and the other buttons in the user interface should become enabled.

Figure 11.5 shows the results of clicking the Refresh Status button. According to the JSON response, the deployment is complete and without compiler errors. Click the Start Over button. You should see your changes to the selected Apex class reflected in the user interface and anywhere that Apex code is visible.


Figure 11.5 Result of Refresh Status button click

The code in Listing 11.16 and Listing 11.17 provides an implementation of the controller and page for the Tooling API example. The controller makes extensive use of HTTP callouts and the built-in JSON parsing support.


For the sample code to work, you must add a Remote Site setting to allow requests to the Tooling API endpoint. The endpoint is the root of your instance URL, for example, https://na15.salesforce.com.

Listing 11.16 Visualforce Controller for Tooling API Example

public class MyPageController11_16 {
    public String editBody { get; set; }
    public String editClassId { get; set; }
    public String containerId { get; set; }
    public String requestId { get; set; }
    public String log { get; set; }
    public List<ApexClass> getClasses() {
        return [ SELECT Id, Name, IsValid FROM ApexClass
                 ORDER BY Name ];
    }
    public PageReference edit() {
        editBody = [ SELECT Body FROM ApexClass
                     WHERE Id = :editClassId LIMIT 1 ][0].Body;
        return null;
    }
    public PageReference save() {
        log = '';
        // Create MetadataContainer
        HttpRequest req = newRequest('/sobjects/MetadataContainer', 'POST');
        Map<String, Object> args = new Map<String, Object>();
        args.put('Name', 'ClassContainer');
        String result = sendRequest(req, args);
        containerId = null;
        try {
            containerId = getResultId(result);
        } catch (Exception e) {
            log += result;
            return null;
        }
        // Create ApexClassMember
        req = newRequest('/sobjects/ApexClassMember', 'POST');
        args = new Map<String, Object>();
        args.put('ContentEntityId', editClassId);
        args.put('Body', editBody);
        args.put('MetadataContainerId', containerId);
        log += sendRequest(req, args);
        // Create ContainerAsyncRequest
        req = newRequest('/sobjects/ContainerAsyncRequest', 'POST');
        args = new Map<String, Object>();
        args.put('IsCheckOnly', 'false');
        args.put('MetadataContainerId', containerId);
        result = sendRequest(req, args);
        log += result;
        requestId = getResultId(result);
        return null;
    }
    public PageReference reset() {
        // Delete the MetadataContainer so the class can be edited again
        if (String.isNotBlank(containerId)) {
            cleanup(containerId);
        }
        editClassId = '';
        requestId = '';
        containerId = '';
        log = '';
        editBody = '';
        return null;
    }
    public PageReference refresh() {
        String soql = 'SELECT Id, State, CompilerErrors, ErrorMsg FROM ' +
            'ContainerAsyncRequest WHERE Id = \'' + requestId + '\'';
        HttpRequest req = newRequest('/query/?q=' +
            EncodingUtil.urlEncode(soql, 'UTF-8'), 'GET');
        log = sendRequest(req, null);
        return null;
    }
    public static void cleanup(String containerId) {
        sendRequest(newRequest('/sobjects/MetadataContainer/' + containerId,
            'DELETE'), null);
    }
    private static HttpRequest newRequest(String toolingPath,
            String method) {
        HttpRequest req = new HttpRequest();
        req.setMethod(method);
        req.setHeader('Authorization',
            'Bearer ' + UserInfo.getSessionID());
        req.setHeader('Content-Type', 'application/json');
        req.setHeader('X-PrettyPrint' , '1');
        req.setEndpoint(getInstanceUrl() +
            '/services/data/v28.0/tooling' + toolingPath);
        return req;
    }
    private static String sendRequest(HttpRequest req,
            Map<String, Object> args) {
        Http h = new Http();
        if (args != null) {
            req.setBody(JSON.serialize(args));
        }
        HttpResponse res = h.send(req);
        return res.getBody();
    }
    private static String getInstanceUrl() {
        String url = System.URL.getSalesforceBaseUrl().toExternalForm();
        url = url.replace('visual.force', 'salesforce');
        url = url.replace('c.', '');
        return url;
    }
    private static Id getResultId(String body) {
        Map<String, Object> result = (Map<String, Object>)
            JSON.deserializeUntyped(body);
        return (Id)result.get('id');
    }
}

Listing 11.17 Visualforce Page for Tooling API Example

<apex:page controller="MyPageController11_16">
<apex:form id="form">
<apex:pageBlock title="Force.com Tooling API Example">
<apex:pageBlockSection columns="2">
<apex:pageBlockTable value="{!classes}" var="c">
<apex:column >
  <apex:commandLink value="Edit" action="{!edit}" rerender="editor">
    <apex:param name="editClassId"
      assignTo="{!editClassId}" value="{!c.Id}" />
  </apex:commandLink>
</apex:column>
<apex:column value="{!c.Name}" />
<apex:column value="{!c.IsValid}" />
</apex:pageBlockTable>
<apex:outputPanel id="editor">
<apex:inputTextArea id="editBody" rows="15" cols="90"
  value="{!editBody}" disabled="{!editClassId == NULL}" />
<p/><apex:commandButton value="Save" action="{!save}"
  disabled="{!editClassId == NULL}" rerender="editor" />
<p/>ContainerId: {!containerId},
RequestId: {!requestId}<br />
<apex:commandButton value="Refresh Status" action="{!refresh}"
  disabled="{!requestId == NULL}" rerender="editor" />
<apex:commandButton value="Start Over" action="{!reset}"
  disabled="{!containerId == NULL}" rerender="form" />
<p/><textarea disabled="true" rows="10" cols="90">{!log}</textarea>
</apex:outputPanel>
</apex:pageBlockSection>
</apex:pageBlock>
</apex:form>
</apex:page>

Understanding the Metadata API

The Metadata API allows the direct manipulation of objects, page layouts, tabs, and most of the other configurable features in Force.com. By using the Metadata API, you can automate many of the click-intensive tasks commonly performed in the Force.com IDE or in the native Web user interface, such as the creation of database objects and fields.

This section provides an introduction to the Metadata API in two parts, described here:

1. Overview—The Metadata API is different from the Enterprise API in two major ways. First, it can operate on objects in memory or using zip files containing many objects represented as XML files. Second, its operations are asynchronous, returning immediately with a result identifier to use for follow-up calls to check the status.

2. Getting started with the Metadata API—Walk through a sample of calling the Metadata API to create a new object using Java.


The details of how the Metadata API operates on each type of metadata in Force.com are outside the scope of this book. Consult the Metadata API Developer’s Guide for the latest information and detailed descriptions of all the available methods of the Metadata API. Salesforce continues to expand the reach of the Metadata API in every release.


The Metadata API consists of two types of services: file-based and object-based. These service types are summarized next:

Image File-based services—The file-based services are deploy and retrieve. The deploy service takes a Base64-encoded zip file containing the components to deploy into the organization. The zip file must contain a manifest file named package.xml at its root to describe the contents of the zip. The retrieve service downloads metadata from Force.com and returns it as a zip file, complete with a package.xml manifest. Its input is a RetrieveRequest object specifying the types of metadata to download. Both services can operate on up to 1,500 metadata objects per call.

Image Object-based services—The object-based services are create, update, and delete. To invoke create or delete, pass an array of Metadata objects. The Metadata object is the superclass of a wide array of objects that contain metadata for specific features of Force.com. For example, the CustomObject class represents a custom database object, and Layout represents a page layout. Unlike data records, in which a unique identifier (Id) field is the key, metadata uniqueness comes from the combination of its type and fullName field. The update service takes an array of UpdateMetadata objects, each containing a Metadata object and the current name of the object to replace.
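The file-based services above are driven by the package.xml manifest. As a minimal sketch (Project__c is a hypothetical custom object name used only for illustration), a manifest asking the retrieve service to download a single custom object might look like this; the version matches the v28.0 API endpoints used elsewhere in this chapter:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Project__c</members>
        <name>CustomObject</name>
    </types>
    <version>28.0</version>
</Package>
```

The same format describes the contents of a zip file passed to the deploy service.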

Note

Force.com’s documentation uses the term declarative to describe its file-based services, and CRUD (for create, read, update, and delete) to describe its object-based services.

All Metadata API services are asynchronous, returning immediately with an AsyncResult object. This object contains a unique identifier for tracking the status of the asynchronous operation. For object-based services, the service to check status is called checkStatus. For the file-based service deploy, the status service is checkDeployStatus, and for retrieve, it’s checkRetrieveStatus.
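This poll-until-done pattern is the same for every service; only the status call differs. As a minimal, standalone sketch in plain Java (the AsyncPoller class and its names are illustrative, not part of any Salesforce API), the caller repeatedly runs a status check and doubles the wait between attempts, just as the checkStatus loop in Listing 11.18 does:

```java
import java.util.function.Supplier;

public class AsyncPoller {
    // Polls checkStatus until it reports done, doubling the wait between
    // attempts (exponential backoff). Returns the number of checks made,
    // or -1 if maxChecks is exhausted before the operation completes.
    public static int pollUntilDone(Supplier<Boolean> checkStatus,
                                    long initialWaitMillis, int maxChecks) {
        long wait = initialWaitMillis;
        for (int checks = 1; checks <= maxChecks; checks++) {
            if (checkStatus.get()) {
                return checks;
            }
            try {
                Thread.sleep(wait);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return -1;
            }
            wait *= 2; // back off so long-running deploys aren't hammered
        }
        return -1;
    }
}
```

In Listing 11.18, the Supplier's role is played by a call to checkStatus followed by AsyncResult.isDone().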

Getting Started with the Metadata API

To get started with the Metadata API, follow these steps:

1. In the App Setup area, click Develop, API.

2. Right-click the Download Metadata WSDL link and save it on your local file system. You’ll need this plus the Enterprise WSDL in order to call the Metadata API.

3. Generate stub code from the WSDL (for example, by using WSC as described in Chapter 10) and add it to your project.

Listing 11.18 demonstrates usage of the Metadata API in Java by creating a new database object given a name and its plural name. The code assumes the existence of a member variable called sessionId, previously populated from the login call’s LoginResult. It prepares the minimum set of metadata required to call the create service, which is a custom object name, full name, label, deployment status, sharing model, and name field. After invoking the asynchronous create service, it loops to check the status using the checkStatus service until the invocation is complete.

Listing 11.18 Java Fragment for Creating Object

public void createObject(String name, String pluralName) {
  try {
    ConnectorConfig config = new ConnectorConfig();
    // sessionId is a member variable previously populated
    // from the login call's LoginResult
    config.setSessionId(sessionId);
    MetadataConnection connection = new MetadataConnection(config);
    CustomObject obj = new CustomObject();
    obj.setFullName(name + "__c");
    obj.setLabel(name);
    obj.setPluralLabel(pluralName);
    obj.setDeploymentStatus(DeploymentStatus.Deployed);
    obj.setSharingModel(SharingModel.ReadWrite);
    CustomField nameField = new CustomField();
    nameField.setType(FieldType.Text);
    nameField.setLabel(name + " Name");
    obj.setNameField(nameField);
    AsyncResult[] result = connection.create(
      new Metadata[] { obj });
    if (result == null) {
      System.out.println("create failed");
      return;
    }
    boolean done = false;
    AsyncResult[] status = null;
    long waitTime = 1000;
    while (!done) {
      Thread.sleep(waitTime);
      status = connection.checkStatus(
        new String[] { result[0].getId() });
      if (status != null) {
        done = status[0].isDone();
        if (status[0].getStatusCode() != null) {
          System.out.println("Error: " +
            status[0].getStatusCode() + ": " +
            status[0].getMessage());
        }
        waitTime *= 2;
        System.out.println("Current state: " +
          status[0].getState());
      }
    }
    System.out.println("Created object: " + obj.getFullName());
  } catch (Throwable t) {
    t.printStackTrace();
  }
}

Sample Application: Database Integration

This section explores a common integration scenario using the Services Manager sample application. It describes the scenario and the implementation strategy and ends with sample code.

Integration Scenario

Force.com applications often require the use of data that is stored in other enterprise systems. This information can initially be pushed to Force.com through Data Loader or another data migration tool. But when Force.com is not the system of record for this information and updates occur, Force.com is left with stale data.

Updated data could be reloaded into Force.com through data migration tools, scheduled to run at regular time intervals, but this approach can quickly become impractical. This is especially true when there are requirements for real-time updates, integration with multiple systems, intricate data mappings, or complex business rules governing the updates.

Imagine that the company using your Services Manager application has a human resources system containing the names, addresses, and other core information about employees. This employee information is duplicated in Force.com in the Contact standard object. Because Force.com is not the system of record for these fields, they should be set to read-only on their page layouts to maintain data integrity between Force.com and the human resources system. But when the human resources system is updated, Force.com must also be updated. This is the goal of the integration.

Implementation Strategy

To retrieve changes from the human resources system, you could call out from Force.com using HTTP or a REST Web service call, as described in Chapter 10. But when you would do this is not clear, because Force.com does not receive notifications when the human resources system is updated. Polling the external system for changes would be inefficient and would quickly hit governor limits on Web service callouts.

Instead, use the Enterprise API to connect to Force.com and upsert the modified records. Begin by updating a single field called Active__c, indicating whether the employee is active. After you get this field working, move on to support additional fields such as the address and phone fields of the Contact record.

The first problem is finding a common key for employees in both systems. Assume that the human resources system cannot be changed, and focus on adapting Force.com to maintain the mapping between the two systems. Create a new field named Resource ID (API name of Resource_ID__c) on the Contact object to store the employee identifiers used by the human resources system. For this example, make it a Number type, six digits in length, required, unique, and an external ID.
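The external ID field is what gives upsert its insert-or-update behavior: a record whose Resource_ID__c matches an existing Contact updates that Contact, and an unmatched ID creates a new one. A toy sketch of that semantic in plain Java (the UpsertDemo class is illustrative only, with a Map standing in for the Contact table):

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertDemo {
    // Simulates upsert keyed on an external ID: records whose key is
    // already present are updated; unknown keys cause an insert.
    public static String upsert(Map<Integer, String> table,
                                int externalId, String value) {
        boolean existed = table.containsKey(externalId);
        table.put(externalId, value);
        return existed ? "updated" : "inserted";
    }
}
```

The real upsert call in Listing 11.19 does the same matching server-side, one API round-trip for the whole batch.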


Remember that you need to regenerate the client code from the Enterprise WSDL after you add this new field; otherwise, it will not be available to your program.

Sample Implementation

The code in Listing 11.19 is a sample Java implementation of the integration. It assumes that you’ve already generated the Java stub code from the Enterprise WSDL using the WSC. It expects a file named import.json to be located in the working directory. This is a JSON-encoded file containing an array of Contact records to update. Listing 11.20 is an example of the file format expected by the program.


The sample implementation uses the org.json JSON library.

Listing 11.19 Sample Java Implementation of Integration Scenario

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import com.sforce.soap.enterprise.Connector;
import com.sforce.soap.enterprise.EnterpriseConnection;
import com.sforce.soap.enterprise.UpsertResult;
import com.sforce.soap.enterprise.sobject.Contact;
import com.sforce.soap.enterprise.sobject.SObject;
import com.sforce.ws.ConnectionException;
import com.sforce.ws.ConnectorConfig;

public class Listing11_19 {
  EnterpriseConnection connection;

  public void login(String user, String pass, String securityToken) {
    ConnectorConfig config = new ConnectorConfig();
    config.setUsername(user);
    config.setPassword(pass + securityToken);
    try {
      connection = Connector.newConnection(config);
    } catch (ConnectionException e) {
      e.printStackTrace();
    }
  }

  public void processImportFile(String jsonFile) {
    List<SObject> changes = new ArrayList<SObject>();
    try {
      String json = readFileAsString(jsonFile);
      JSONArray array = new JSONArray(json);
      for (int i = 0; i < array.length(); i++) {
        changes.add(importResource(array.getJSONObject(i)));
      }
      if (changes.size() > 0) {
        UpsertResult[] results = connection.upsert("Resource_ID__c",
          changes.toArray(new SObject[changes.size()]));
        int line = 0;
        for (UpsertResult result : results) {
          line++;
          System.out.print(line + ": ");
          if (!result.isSuccess()) {
            for (com.sforce.soap.enterprise.Error e
              : result.getErrors()) {
              System.out.println(e.getStatusCode() + ": " +
                e.getMessage());
            }
          } else {
            System.out.println(result.getId());
          }
        }
      }
    } catch (Throwable t) {
      t.printStackTrace();
    }
  }

  private Contact importResource(JSONObject rec)
    throws JSONException {
    Contact result = new Contact();
    result.setResource_ID__c(rec.getDouble("ResourceID"));
    result.setActive__c(rec.getBoolean("Active"));
    return result;
  }

  private static String readFileAsString(String filePath)
    throws IOException {
    StringBuffer fileData = new StringBuffer(1000);
    BufferedReader reader = new BufferedReader(
      new FileReader(filePath));
    char[] buf = new char[2048];
    int numRead = 0;
    while ((numRead = reader.read(buf)) != -1) {
      fileData.append(buf, 0, numRead);
    }
    reader.close();
    return fileData.toString();
  }

  public static void main(String[] args) {
    Listing11_19 demo = new Listing11_19();
    demo.login("user@example.com", "password", "securityToken");
    demo.processImportFile("import.json");
  }
}

Listing 11.20 Sample JSON Input File

[
  {
    "ResourceID": 100000,
    "Active": false
  },
  {
    "ResourceID": 100001,
    "Active": false
  }
]

Before running the program, change the Resource ID values in the file to match your contacts, and the arguments of the login method to your user credentials.

Note that the only field updated by the sample implementation is Active__c. As a challenge, enhance the program to support updates to additional fields of the Contact object, or related objects like User.


This chapter has provided the basics of Force.com’s Streaming, Bulk, Canvas, Tooling, and Metadata APIs. Consider the following points for review as you move on to the next chapter:

Image The Streaming API allows you to get extremely granular and timely notifications about your data, at the level of changes to individual fields. On the other end of the spectrum, the Bulk API is optimized to move millions of records at a time in and out of the platform.

Image Canvas is a container technology for displaying your Web user interface within Force.com, providing integration of security context and other services that go well beyond what is possible with a raw IFRAME.

Image With the Metadata and Tooling APIs, you can build tools that automate development tasks, such as creating and modifying database objects and code. You can also use them to back up your entire organization’s configuration or replicate it to a new account.