Ndss2023 f96 Paper
Abstract—The Internet is a major distribution platform for web applications, but there are no effective transparency and audit mechanisms in place for the web. Due to the ephemeral nature of web applications, a client visiting a website has no guarantee that the code it receives today is the same as yesterday, or the same as other visitors receive. Despite advances in web security, it is thus challenging to audit web applications before they are rendered in the browser. We propose Accountable JS, a browser extension and opt-in protocol for accountable delivery of active content on a web page. We prototype our protocol, formally model its security properties with the TAMARIN Prover, and evaluate its compatibility and performance impact with case studies including WhatsApp Web, AdSense and Nimiq.

Accountability is beginning to be deployed at scale, with Meta's recent announcement of Code Verify available to all 2 billion WhatsApp users, but there has been little formal analysis of such protocols. We formally model Code Verify using the TAMARIN Prover and compare its properties to our Accountable JS protocol. We also compare Code Verify's and the Accountable JS extension's performance impacts on WhatsApp Web.

Network and Distributed System Security (NDSS) Symposium 2023, 27 February - 3 March 2023, San Diego, CA, USA. ISBN 1-891562-83-5. https://dx.doi.org/10.14722/ndss.2023.24096. www.ndss-symposium.org

I. INTRODUCTION

Over the years, the web has transformed from an information system into a decentralised software distribution platform. Websites are programs that are freshly fetched whenever accessed, and web browsers are runtime environments. This design implies that when a user opens a website, they have no reason to trust it will run the same program that it did yesterday or the same program that other users receive. Instead, the application loaded may vary over time, and different users may receive different code.

The majority of web pages, and even web applications, have neither specified security goals nor the need to establish them. Nevertheless, for some websites, maintaining trust between developers and users is part of the business model:

• a private email provider might wish to reassure users that it will always encrypt their messages,
• a cryptocurrency wallet might wish to guarantee that it has no access to users' funds, or
• a tracking pixel might wish to prove that it only receives data that is explicitly sent to it.

Some academic proposals for secure protocols implemented for browsers include TrollThrottle [1] and JavaScript Zero [2]; industry proposals include payment platforms such as Stripe and Square, chat protocols such as WhatsApp Web, Facebook Messenger and Matrix's Hydrogen client, and encrypted cloud storage such as MEGA or SpiderOak. A concrete example is Nimiq, an entirely web-based digital currency managing private keys in the browser. It is challenging for such websites to make verifiable guarantees to their users: a compromised or malicious web server can precisely target classes of users: the email provider might disable encryption on a specific IP range, the cryptocurrency wallet might redirect payments made in some countries, or the tracking pixel might exfiltrate data only for certain users.

Auditing: A common risk mitigation strategy is auditing: a developer who wishes to build trust appoints external auditors to inspect the client code. This can include both vulnerability research (e.g. via bug bounties) and commissioned security audits. Audits work well where it is possible for a user to verify that the code they are running is the same code that was audited, for example when binaries are received via third-party package repositories or app stores that control the distribution and targeting. App stores do not usually permit developers to deliver different code to different users for the same app, except in a restricted set of circumstances such as beta testing new features.

However, auditing does not work for web applications: a compromised or malicious web server can simply choose at load time to deliver unaudited code to a user. No matter how careful the audit or even verification of the web application, users cannot know that they are receiving the audited code. Large parts of modern web security thus depend on techniques like sandboxing or access control to critical resources like cameras, but fail to capture properties defined in the context of the application (e.g. authorisation of transactions in a payment system).

Accountability: A second risk mitigation strategy is accountability, where developers can be held accountable for the applications which they publish. In curated software repositories such as Debian GNU/Linux or the Apple App Store, developers' code is reviewed and malicious or compromised code is linked to their identities. Developers who repeatedly publish malicious code may face consequences such as loss of user trust or banning from the repositories. For example, a package mirror which publishes malicious code may be removed from future lists of mirrors, or a developer who takes over a browser extension and publishes a malicious version [3] may be blocked from publishing future code updates.

Again, web applications fail to have accountability. A malicious or compromised web server may publish malicious code to certain users, but there is no public record of the code
which it serves, and thus no way for users to hold the server accountable.

Summarising, it is difficult to establish trust in the web as a software distribution mechanism because it lacks auditability (the means for anyone to inspect the code being distributed to others) and accountability (the means to hold a developer accountable for the code they publish).

In this paper, we propose an opt-in transparency protocol that aims to establish more rigorous trust relations between browsers and web applications, and provide the foundation for a more secure web. Using our standard for accountable delivery of active content, an efficient and easy-to-use code-signing technique, and public transparency logs, websites can convince users that they are trustworthy in an economical way. At a high level, we propose that web application developers, who choose to opt in, provide a signed manifest enumerating all the active content in their applications.

The manifest files in our proposal are stored in publicly readable transparency logs. When a browser requests a URL and downloads the resulting HTML document from the web server, the web server also provides the corresponding manifest for this URL. The browser checks that the active content provided by the server matches the manifest entry, that the manifest is correctly signed, and that the provided manifest is consistent with the transparency logs.

Moreover, our proposal aims to reinforce the communication between the browser and the web server by adding non-repudiation to the HTTP request-response procedure. By itself, Transport Layer Security (TLS) does not provide evidence that what was delivered actually originated from the web server. Using digital signatures, we show how HTTP requests can be extended to provide a proof of origin.

From the signed manifest, the transparency logs, and the non-repudiation mechanism, the protocol establishes that:

• The code a user executes is the same for all users of the plugin within a certain timeframe, depending on the validity of the manifest and when a new manifest is signed.
• On the client side, the code is bound to interact with third-party code according to what the developer declared in the manifest. This includes the order of execution, the trust relation to third-party code, and the use of sandboxing.
• If the code's execution is inconsistent with the manifest, the browser can provide a claim that can be verified by the public.

Our proposal can be implemented by changes in the server configuration only, without the need to modify the served web content (assuming that the web page already makes use of Subresource Integrity hashes) and without changes to the HTML standard.

To sum up, our contributions are as follows:

1) We propose Accountable JS, a protocol to enable auditability and accountability for web apps.
2) We formally model Accountable JS with the TAMARIN Prover and prove the desired properties in the presence of an active adversary.
3) We implement Accountable JS in a browser extension that obtains the signed manifest, verifies its signature, and both statically and dynamically ensures that the active content on a web page agrees with the manifest. We also provide a code-signing mechanism for the developers.
4) We evaluate the deployment of this technology and the performance overhead for the client in six case studies, including real-world applications: Google AdSense, Nimiq and WhatsApp.
5) We model Meta's Code Verify protocol and compare its properties with Accountable JS.

Relationship to Meta's Code Verify protocol: In [4], Meta (formerly Facebook) proposed Code Verify, likewise implementing a mechanism to enforce accountability via transparency for active content on the web. Our present proposal goes beyond Code Verify and provides a superset of its functionality, most notably the ability to delegate trust to third parties. On the other hand, our browser extension is an academic prototype and thus not ready for productive use. The protocol has the same message flow, but chooses a different signature scheme and encodings. We elaborate on these differences in Section X-A. An initial draft of the present proposal was shared with Meta's WhatsApp team in 2022. The protocol, manifest file format and browser extension we present in this work are academic developments by the authors and not endorsed by Meta in any way.

II. BACKGROUND

Web pages are delivered via HTTP or HTTPS. In the latter case, a secure and authenticated TLS channel tunnels the HTTP protocol. Typically, the initiator of the TLS connection, i.e. the web browser, is not authenticated (at the communication layer; authentication may be implemented at the application layer), whereas the responder, i.e. the web server, is identified with their public key and a certificate linking the public key to the domain.

The authentication guarantees of TLS exclude non-repudiation of origin, i.e. a communication party cannot prove to a third party that they received a certain message. This property is an important building block for accountability and can be achieved, e.g. using digital signatures. After the shared keys are established in TLS, any messages exchanged could be produced by either party. Roughly speaking, the party providing the evidence has enough information to forge it. Ritzdorf et al. [5] proposed a TLS extension that provides non-repudiation, but it has not been deployed in the wild.

Browsers typically parse the HTML document describing the web page into a tree of HTML elements called the Document Object Model (DOM). Some HTML elements have active content, which includes Flash or Silverlight, but we will focus on JavaScript (JS) in this work. Active content can be inline, i.e. hard-coded in <script> tags or event handlers, external, i.e. referring to an external JS file by URL, or via iframe, i.e. the web page contains an iframe that refers to an HTML file which, again, contains active content.

As in the case of app stores, we distinguish the roles of the website, which distributes the web application, and the developer, who is the author of the web application. This
allows us to view the website as a distribution mechanism that is necessarily online and publicly visible, as opposed to the developer, who can be offline most of the time. We distinguish the following roles:

• The web application developer (short: developer) creates the active content and has a secure connection to the web server. It is not active all the time.
• The web server (short: server) delivers code provided by the developer to the client. The website and the developer are associated with a domain, but the client is anonymous.
• The web browser (short: client) requests a URL from the website.

A transparency log (short: ledger) provides a publicly accessible database. It typically has the property of being append-only (for consistency), auditable and verifiable, and it hinders equivocation. Hence, for the data in the logs, all parties are convinced that it is a public record and that everyone sees the same version of it. We use the ledger to store manifest files for each URL. Having public records of the manifest files allows us to reason about accountability.

A. Threat Model

Dolev-Yao attacker: We consider a Dolev-Yao style adversary, i.e. cryptography is assumed perfect (i.e. cryptographic operations do not leak any information unless their secret keys are exposed), but the attacker has full control over the network. This is formalised in our SAPiC [6] model in Appendix A. Informally, we assume hash functions to behave like random oracles, signature schemes to be unforgeable, and TLS to implement an authentic and confidential communication channel. We also rely on an intact public-key infrastructure.

Corruption scenarios: We assume honest parties follow the protocol specification and dishonest parties are controlled by the attacker. The parties considered honest are determined by the property of interest:

• Accountability and Authentication of Origin: An honest client wants to be sure that code is executed only if it was made public and transparent, i.e. inserted into logs by the developer; here the developer and web server are assumed dishonest.
• Non-repudiation of Reception: A dishonest client may want to present false evidence for having received some JS code. Here we assume the public is trusted and runs a specified procedure (detailed in Appendix B) to check the evidence, and the web server to behave honestly, i.e. not to help the client provide false claims of reception, which are against the web server's interest.
• Accountability of Latest Version: An honest client receives a version of the code and wants to ensure it is the latest version. We assume an honest global clock that helps compare the time of the code reception and the latest version at that time, and consider a dishonest developer and web server.

Target websites: We target developers that aim at establishing user trust or pretend to do so. Hence we assume, for honest developers, that active content changes infrequently, e.g. multiple times per day, and that their code facilitates the audit. Dishonest developers may counteract, but, due to accountability and authentication of origin, this is publicly recorded. Therefore, while our formal security arguments make no assumption on how often the code changes or how obfuscated it is, we assume that, due to accountability and authentication of origin, code obfuscation attacks or microtargeting are practically disincentivised.

Browser features & transparency log: We assume the current browser security features, specifically the sandbox attribute of the iframe tag, to be implemented correctly. Furthermore, the transparency log is trusted, efficient, available, append-only and provides non-equivocation (i.e. the same information is served to everyone). Many strategies are available to implement such a log. For example, Trillian [7] and CONIKS [8] use data structures that can be distributed over multiple parties and allow append operations to be proven efficiently. Misbehaviour can thus be detected by trusted public auditors or by honest logs distributing such proofs (called gossiping). See [9] for a survey of different mechanisms.

III. USE CASES

We introduce several types of web applications that will benefit from our protocol. We will revisit these examples later and show how our approach can be applied to them.

A. Self-Contained Application

Perhaps the simplest possible web application is a one-page HTML document with active content that simply prints 'Hello World' into the developer console. Upon loading this website, a user can manually check that its sole behaviour was to print 'Hello World', but they have no guarantees about subsequent page loads: a server could easily decide to provide different behaviour to certain users, or to insert malware based on IP address or browser fingerprint. For this simple example, the consequences of a malicious or compromised server are relatively limited, although we remark that cryptojacking (malicious JS which secretly mines cryptocurrencies in unsuspecting users' browsers) is a growing trend [10].

WhatsApp Web is a large real-world self-contained web application: its source code is bundled using WebPack and served to all users; personalisation is implemented through local storage and dynamic data fetching. We will show how our protocol can be applied.

B. Trusted Third-Party Code

Many websites rely not just on their own content but on resources served by a third party. This may be a Content Delivery Network (CDN) serving common JS libraries, embedded content such as photos or videos, analytics and measurement libraries, tracking pixels, fraud detection libraries, or many other options. For example, the following code loads the jQuery JS library from a CDN, and uses it to display a 'Hello World' message.
<html><head>
<script src="https://googleapis../jquery-3.6.1.min.js" integrity="sha384-i6..."/></head><body>
<script>$("body").html("Hello World")</script>
</body></html>
Listing 1: Trusted third party code

As before, users are supposed to always receive the same code from the server. This time, there is an additional avenue for compromise, though: even if the first-party server is honest, it is possible for the CDN to perform targeted attacks. The developer, however, wants to pin the third-party code to the precise version that they inspected or trust.

C. Delegate Trust to Third Parties

The application uses third-party code that its developer cannot vouch for. This can be the case if the code is too complex to inspect or if the application developer wants to always use the latest version. The third-party developer, however, is willing to vouch for their code. An example of this is Nimiq's Wallet, a web application for easy payment with Nimiq's cryptocurrency. This application can be embedded by first-party applications that provide, e.g. a web shop, who are willing to trust Nimiq, but only given that they make themselves accountable for the code they deliver.

<html><body>
<script type="text/javascript">
function addTransaction () {
window.postMessage({'id': '123', 'amount': '10n', 'from':'abc'}, 'https://wallet.nimiq.com/');}
</script>
<iframe src="https://wallet.nimiq.com/" onload="addTransaction()"></iframe>
</body></html>
Listing 2: Delegate trust to third party

D. Untrusted Third-Party Code

For web technologies, consecutive deployability is a must. Hence, in this use case, the application developer cannot audit the code, and the third party does not use Accountable JS. The application developer needs to blindly trust the third party, but using sandboxing techniques, it can restrict the access that the possibly malicious script provided by the third party can have.

A particularly important instance of this problem is ad bidding. The third party is an ad provider that decides online which ad is actually served. Because they cannot review the ads that they distribute, which may contain active content, they are not willing to vouch for the code they distribute. This is the case for Google AdSense, used by over 38.3 million websites. Cases where ads were misused to distribute malicious code are well documented [11].

E. Code Compartmentalisation

The application that the developer provides can be compartmentalised so that the most sensitive information is guarded by a component that is easy to review and changes rarely. The other components, which are user-facing and change more often, are separated from this component using sandboxing. The developer wishes to reflect this structure and make themselves accountable for the whole code, but also separately commit to keeping the secure core component small and auditable. For example, Nimiq's Wallet components follow a similar structure.

IV. APPROACH: ACCOUNTABLE JS

We propose a cryptographic protocol between the client, the server, the developer, and a distributed network of public transparency logs. The protocol's objective is to hold the developer accountable for the code executed by the browser. The protocol provides four main functionalities:

• The server provides a manifest declaring the active content and trust relationships of the web application, which the client compares with a published version on the transparency logs.
• The client measures and compares the active content received by collecting active elements, e.g. JS, in the HTML document delivered by the web server.
• Developers and clients submit manifests to a public append-only log to verify that everybody receives the same active content.
• The server signs a nonce as non-repudiable proof of origin for the JS that the client receives.

Website Manifests: Website developers may provide a signed manifest for each publicly accessible URL in their website (excluding the query string). The signed manifest comprises a manifest and a signature block over it. A manifest describes the webpage, including, besides the active content, its URL and a version number. The active content is described in a custom format. We elaborate on the manifest directives in the supplementary material [12]. The developer's identity is distinct from the server's, but their certificates must share the same Common Name (CN) to prevent unauthorised manifest deployments. The browser validates the authenticity of the developer's public key in the same way, using the existing Public Key Infrastructure (PKI) and its built-in root Certificate Authority (CA) certificates.

Accountable JS is an opt-in mechanism. The website declares the signed manifest using an experimental HTTP response header field called x-acc-js-link. Henceforth, the client expects the website to provide a valid manifest for this URL in any case.

Client Measurement: The client measures the active content inside the HTML document delivered in the response body, collecting information about each active element in the document and validating it against the corresponding manifest block in a manifest file. Elements that cannot be matched trigger an error, and the user is warned about this error. The current extension is not preventive, but in the future, with pervasive developer support, browsers may choose to halt execution if delivered code is inconsistent with the boundaries drawn by the manifest. The active content is measured with a so-called mutation observer, starting with the first request. The measurement procedure that we developed listens to the observer's collected mutations that concern active elements in a list. In Section VII, we explain the process in more detail.

Manifest Logs: While a signed manifest may prove the integrity and authenticity of the manifest, it cannot prevent equivocation, i.e. it cannot prove that the same signed manifest
is delivered to every request by the web server. To this end, we propose to use transparency logs. A manifest file declares a version number, and the version number is unique per manifest file. The developer publishes their signed manifest in a publicly accessible, auditable, append-only log like the Certificate Transparency (CT) protocol [13], which provides logs for TLS certificates. Clients may verify that a version they receive is the latest online, or use a mechanism like Online Certificate Status Protocol (OCSP) stapling [14] to check that a version they receive was the latest version a short time ago. Any client that encounters a signed manifest that is not yet in the log can submit it to the log. We discuss the transparency log considerations in more detail in Section XI.

Non-Repudiation of Origin: We propose a simple non-repudiation mechanism for the client's web requests, so that in case a developer distributes damaging active content, a client can prove that they have received that content from a web server. The client transmits a nonce via a request header and the server signs this nonce along with the signed manifest (cf. Section IX).

V. MANIFEST FILE

In the manifest, the developer declares the active elements a web application is bound to execute during its run time. The run time starts with the web request and ends with the window's close or a new web request. For Single Page Applications (SPAs) (e.g. Nimiq), the run time for the web page ends when the page is refreshed, its URL is changed or the window is closed.

The manifest file represents the active elements and their relevant metadata as a collection of attribute-value pairs in the JSON format. The metadata expresses the trust relations w.r.t. third-party content and settings for sandboxing. The top-level properties in the manifest, also called the manifest header, contain descriptive information about the web page: its URL, its version number, and optional metadata, e.g. the developer's email address. The domain within the URL determines which keys can be used to sign the manifest, namely, the common name of the signature key's certificate has to match that domain (the query component of the URL can be excluded, since the browser extension discards that part in the measurement). The developer can decide on any numbering scheme for the version, but version numbers must be strictly increasing with each new manifest published.

A manifest file is accepted if it is syntactically correct, i.e. follows the schema (see the manifest manual in the supplementary material [12] for details), complete, i.e. it contains enough information about the web application and its active elements to enable evaluation, and, most importantly, consistent with the delivered resource, i.e. evaluation succeeds.

A. Execution Order

Active content is considered dynamic if it is added after the window's load event; otherwise, it is static. The manifest specifies elements as either static or dynamic using the dynamic attribute. SPAs in particular download or preload resources during navigation, rewriting the DOM on the fly depending on how the user navigates.

For static elements, the sequence number seq specifies in which order they must appear after the browser renders the delivered HTML. It starts from 0 and repetitions are not allowed. Dynamic content is only measured if it is present in the web page, i.e. it is allowed to be injected, but not required to be. This mechanism can also be used to declare region-specific active content. The order is ignored for dynamic content. The measurement procedure will check that the elements appear in the order listed in the manifest, except for elements that are dynamically added to the DOM. Elements may be removed dynamically, but only if the attribute persistent is set to false.

TABLE I: Trust Relationships by Type of Active Element

  type                             assert  blind-trust  delegate  sandbox
  inline                             •         ◦           ◦        ◦
  event handler                      •         ◦           ◦        ◦
  external                           •         •           •        ◦
  iframe with src_type=external      •         •           •        •
  iframe with src_type=srcdoc        •         ◦           ◦        •
  iframe with src_type=script        •         ◦           ◦        •

B. Trust and Delegation

With the manifest, the developer provides assurance for the active content in their application. Third-party components, e.g. JS libraries, bootstrappers, advertisements or ad-analytics tools, play a significant role in most modern web applications, which are thus a mixture of first-party code and code from multiple third parties. In the manifest, we enable developers to decide the trust level of each active element imported into their web applications. For instance, they can take responsibility and provide assurance (i.e. with a cryptographic hash) for first-party elements, while for external elements they may declare a valid source and delegate the trust to the developers of those resources.

We thus require each block in the manifest to have a trust declaration. There are three options to declare the trust level:

• assert: The developer provides the hash of the expected active content and asserts it is behaving as intended. It is computed using the standard Subresource Integrity hash generation method [15], i.e. it comprises the hash algorithm used, followed by a dash and the base64-encoded hash value.
• delegate: The developer refers the trust to the third party providing this element. Now the third party takes responsibility for this code and provides a manifest whose location is either declared in the first-party manifest, or delivered in the headers of the third party's response. The third-party manifest can likewise delegate trust, thereby constructing a chain of trust delegations.
• blind-trust: The developer blindly trusts the third party, without identifying the code they trust. This should only be used with the sandbox attribute.

C. Types of Active Elements

The developer describes the manifest blocks for each active element by their resource type type (e.g. javascript, iframe), trust policy trust (e.g. assert, delegate, blind-trust), whether
they are dynamic or static and, in case they are static, their sequence number seq. There are mandatory and optional directives for writing a manifest, and these directives may depend on the resource type. If the developer declares a manifest section including an optional directive, that does not mean this directive is ignored in the evaluation; it is still part of the evaluation. For instance, the crossorigin directive is optional for the external resource type, but if the developer declares a crossorigin attribute, then it has to match the active content information. Not all resource types support all trust policies (see Table I). We will discuss them one by one:

• inline: Inline scripts are script elements without the src attribute, i.e. the JS code is included in the HTML document. Therefore, trust can only be assert and may be omitted. The cryptographic hash covers the included JS code, i.e. the textContent value of the script element.
• event handler: Event handlers are active content included in attributes such as onClick that are executed on HTML events. Like inline scripts, trust must be assert and can be omitted.
• external: A script element can be outsourced by specifying its URL in the src attribute. An external script can originate from a different origin (cross-origin) or from the same origin. Trust can be set to assert and delegate – as sandboxing is not supported for external scripts, blind-trust would give little assurance.
• iframe: An iframe embeds another document within the current document. There are three ways this can happen, which the manifest file represents using the attribute src_type. The most common is to specify a URL (src_type = external). The other ways (src_type = srcdoc and src_type = script) are explained in the full version [16]. This type of content can be declared with any trust value.

D. Sandboxing

Besides, iframes permit the use of sandboxing via the attribute of the same name [17]. A sandboxed iframe is considered a cross-origin resource, even if its URL points to the same-origin website. Hence, because of the browser's same-origin policy, the parent window and the iframe are isolated, and they cannot access each other's DOM. Furthermore, sandboxing blocks the execution of JS, the submission of forms, and more. These restrictions can, however, be lifted using an allow list in the HTML tag.

As we will see in the next section, security-critical websites need to use sandboxing to protect data from other browsing contexts; hence we reflect the sandbox feature in the manifest file. The measurement procedure ensures that the active element has an equally strict or stricter sandboxing policy than

A. 'Hello World' Application

We begin with the basic 'Hello World' website example, and add a reference to the manifest in its meta tags.

<html><head>
<meta charset="utf-8" name="x-acc-js-link"
content="http://www.helloworld.com/manifest.sxg">
</head><body>
<script>console.log("Hello World")</script>
</body></html>
Listing 3: First example: Hello World.

Alternatively, the manifest can be provided as an HTTP response header. The manifest file provides the URL and version of the website and lists the base64-encoded SHA-256 hash of the inline script.

{ "url": "http://www.helloworld.com/",
  "manifest_version": "v0",
  "contents": [
    { "seq": 0,
      "type": "inline",
      "load": "sync",
      "trust": "assert",
      "hash": "sha256-AfuyZ600rk..."}]}
Listing 4: Manifest for first example.

B. Self-Contained Web Applications

Web applications can be completely self-contained. This may be for security or because they follow the recent serverless computing paradigm (e.g. Amazon Lambda). In serverless computing, a web application developer may only write static user-side code and delegate all the server-side logic to a cloud service provider.

The application of Accountable JS is straightforward in this case: as part of our prototype, we developed our deployment tool generate manifest, which computes the hash values of all active contents in the browser and produces a manifest file that asserts their trustworthiness. The developer can then sign this manifest file.

We tested this methodology on a popular example, the WhatsApp Web client, and provide the manifest file in the supplementary material [12]. It lists nine external and four inline scripts.

C. Trusted Third-Party Code

The developer can use the manifest file to identify the included third-party code by hash and set the order of execution. This expresses that the developer vouches for the third-party code. We add the following attribute to the header of the 'Hello World' example from Section III-B and we declare it in the manifest file with trust = assert.
described in the manifest. An allow list is stricter if it is a <script src="https://googleapis../jquery-3.6.1.min
subset of the other. .js" integrity="sha384-i6..."></script>
6
D. Delegate Trust to Third Parties

This expresses that the main developer vouches for the third party to be trustworthy, but demands that the third party itself take responsibility for its code by providing its own signed manifest. The website's manifest below (Listing 5) specifies some inline scripts with trust = assert (omitted) and an iframe with trust = delegate. The browser now expects the response to the query for the iframe's content (https://wallet.nimiq.com) to point to a URL with a signed manifest.

{ "url": "https://www.example-shop.com/",
  "manifest_version": "v2",
  "contents": [
    [inline script manifests omitted]
    { "seq": 2,
      "type": "iframe",
      "src_type": "link",
      "src": "https://wallet.nimiq.com/",
      "sandbox": "allow-scripts",
      "dynamic": false,
      "trust": "delegate" }]}
Listing 5: Manifest is delegated to a trusted third party.

E. Untrusted Third-Party Code

High-security applications may want to rely on third-party code they cannot vouch for, e.g. when including ads that are dynamically chosen by an ad-bidding process. We developed a small web application that uses Google AdSense and sandboxed this code, but noticed that AdSense and many other ad providers require access to the top-level window [18] for fraud detection, e.g. to detect invalid clicks.

We therefore needed to turn the relationship between the secure code and the untrusted code around. We sandboxed the secure code with trust set to assert, protecting it from the potentially insecure AdSense code, which is not sandboxed and declared blind-trust. Now the AdSense code cannot access the secure document in the iframe. The manifest file is shown in Listing 6. It includes thirteen active elements (six external, seven iframe) related to AdSense, along with Nimiq's Wallet (seq = 6), for which trust is delegated.

{"url": "https://www.helloworld.com/",
 "manifest_version": "v3",
 "contents": [
   [six external scripts for AdSense with trust = blind-trust]
   { "seq": 6,
     "type": "iframe",
     "src_type": "link",
     "src": "https://wallet.nimiq.com/",
     "sandbox": "allow-same-origin allow-scripts",
     "dynamic": false,
     "trust": "delegate" // See Listing 7
   },
   [six more iframes for AdSense with blind-trust]]}
Listing 6: Untrusted AdSense and the delegated Nimiq Wallet at manifest section sequence number '6'.

F. Compartmentalisation of Code and Development Process

Fig. 1: Structure of Nimiq Ecosystem.

We further expand on Nimiq's Wallet application, this time as an example for compartmentalising the code and the signing process. Nimiq's Wallet application at no point has direct access to the users' private keys. It is treated the same way as any other third-party application interacting with the Nimiq ecosystem (see Fig. 1). It embeds the Hub, which acts as an interface to the user's addresses and can trigger actions on the private keys. Access to the users' private keys is only possible through the Hub and pre-specified APIs. The Hub will then forward any request that needs to access the private keys to the KeyGuard component, which upon user input can decrypt the locally stored keys, perform the requested action, and return the result to the Hub.

The procedure generate_manifest produces the following manifest for Nimiq's Wallet. Observe that it heavily employs sandboxing. Both included iframes have the sandbox attribute set empty, meaning no exceptions are defined.

{"url": "https://wallet.nimiq.com/",
 "manifest_version": "v0",
 "contents": [
   [five external scripts]
   { "seq": 3,
     "type": "iframe",
     "src_type": "link",
     "src": "https://hub.nimiq.com/iframe.html",
     "sandbox": "",
     "dynamic": true,
     "trust": "assert",
     "manifest": [[seven external scripts],
       { "seq": 7,
         "type": "iframe",
         "src_type": "link",
         "src": "https://keyguard.nimiq.com/",
         "sandbox": "",
         "dynamic": true,
         "trust": "delegate"}]}]}
Listing 7: Delegated content in Nimiq Wallet's manifest.

The Wallet's manifest includes hub.nimiq.com in an iframe, containing, among other elements, the KeyGuard, which has a separate origin and thus exclusive access to the user's keys. For transactions, the Hub redirects to the KeyGuard. The KeyGuard is trusted, easy to audit, does not depend on any third-party code and changes rarely. The KeyGuard manifest is as follows.

{"url": "https://keyguard.nimiq.com/",
 "manifest_version": "v0",
 "contents": [
   { "seq": 0,
     "type": "external",
     "link": "https://keyguard../web-offline.js",
     "hash": "sha256-L8NMxOGkIW...",
     "load": "defer",
     "dynamic": false,
     "trust": "assert"
   },
   [two external scripts w/ same dynamic/trust.]]}
Listing 8: Nimiq Keyguard depends on its own content.
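Nested manifests like the one in Listing 7 can be processed mechanically. The sketch below (our own helper, not part of the prototype) walks a manifest and collects the iframe URLs declared with trust = delegate, i.e. the origins from which the browser must request a separately signed manifest:

```javascript
// Sketch: collect all iframe URLs with trust = "delegate" from a (possibly
// nested) manifest. Entries with trust = "assert" may inline a sub-manifest
// under "manifest", as the Hub does in Listing 7.
function delegatedUrls(manifest) {
  const urls = [];
  for (const entry of (manifest.contents || []).flat(Infinity)) {
    if (entry.type === "iframe" && entry.trust === "delegate") {
      urls.push(entry.src);
    }
    if (Array.isArray(entry.manifest)) {
      urls.push(...delegatedUrls({ contents: entry.manifest }));
    }
  }
  return urls;
}

// Mirroring Listing 7: the Wallet asserts the Hub, which delegates the KeyGuard.
const wallet = {
  url: "https://wallet.nimiq.com/",
  contents: [
    { seq: 3, type: "iframe", src: "https://hub.nimiq.com/iframe.html",
      sandbox: "", trust: "assert",
      manifest: [{ seq: 7, type: "iframe",
                   src: "https://keyguard.nimiq.com/", sandbox: "",
                   trust: "delegate" }] },
  ],
};
console.log(delegatedUrls(wallet)); // [ 'https://keyguard.nimiq.com/' ]
```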
The Wallet manifest file reflects the web application's compartmentalisation: every component – Wallet, Hub and KeyGuard – runs on a different domain, hence locally stored information like the wallet key is inaccessible to the Hub or Wallet due to the same-origin policy.

With this setup, it is easy to compartmentalise the development process, too. A separate developer key could be used for the KeyGuard code, given that it is already bound to a second domain. New KeyGuard releases would need to be signed by that key, which, internally, can be assigned additional oversight requirements. Without requesting a new key from the PKI, any bypassing of this procedure would either end up with code that cannot access the user's key or be provable with the signed manifest for the Wallet.

VII. MEASUREMENT PROCEDURE

We present a practical active content measurement procedure that can be used to identify active elements and collect their metadata, allowing the client to check whether the web application follows the provided manifest. In development mode, the same procedure can be used to automatically generate a manifest file from an HTML document.

The measurement procedure is depicted in Fig. 2. The browser's rendering engine parses the raw HTML document and creates the DOM, observing the DOM for mutations, e.g. elements that are added at run time. Whenever an active element is appended, edited or removed from the DOM, the metadata agent will be triggered, which keeps a list of the active elements and their metadata.

Fig. 2: Manifest file generation and metadata collection.

The extension obtains access to the DOM by defining a content script, a script that runs in the context of the current page. This includes all pages loaded in top-level browser windows (e.g. tabs), but also iframes within those. Content scripts running at the top level are responsible for collecting metadata on all active elements in their context. For nested iframes, they can only collect the metadata about the iframe itself, like the attributes src_type, src and sandbox, but cannot inspect the document inside this iframe. The same-origin policy forbids this in many cases. We therefore use the iframe's content script to gather information: if the content script recognises that it is not at the top level, it runs statelessly, collecting the metadata as usual, but reporting it to the parent window's content script via postMessage.

The metadata agent distinguishes script and iframe elements by their HTML tags. A script that has a src attribute is external, otherwise it is inline. For external scripts, Subresource Integrity (SRI) hashes and the crossorigin and load attributes are collected. For inline scripts, the hash is computed on the script and the load attribute is collected. Event handlers are searched for inside all DOM elements by checking if their attributes contain any of the global event attributes, e.g. onclick, in a list [19]. For event handlers, the hash is computed on the value of the event attribute. For iframes with src_type = external, the metadata agent in the parent window collects the crossorigin and sandbox attributes and gathers the metadata about the document inside the iframe from its content script. Also, for each active element, boolean dynamic and persistent scores are assigned by the metadata agent. An active content is considered dynamic if it is added after the window's load event; otherwise, it is static. Elements that get removed from the DOM are marked as non-persistent, but are still kept in the active content list for evaluation.

If the web page opted in, i.e. it has sent the x-acc-js-link header in the past and provided a valid manifest, then the metadata collector compares the metadata list with the list of active elements in the manifest. If the web page violates the protocol, the extension reports this to the user.

In developer mode, a failure to comply triggers the manifest generator to collect and generate metadata for the active elements. The generate_manifest procedure then produces a manifest file with trust = assert for each active element based on the collected information, which can be easily adapted to other trust settings. This manifest represents the most restrictive manifest functional for this web application.

VIII. SIGNING AND DELIVERING A MANIFEST

A valid signature on the manifest proves that the manifest was created by a known origin, i.e. a developer publicly associated with the website, and that it was not tampered with in transit. To sign manifests, we adopt the Signed HTTP Exchanges (SXG) standard. SXG is an emerging technology that makes websites portable. With SXG, a website can be served by other, by default untrusted, intermediaries (e.g. a CDN or a cache server), whereas the browser can still assure that its content was not tampered with and that it originated from the website that the client requested. This allows decoupling the web developer from the web host and nicely fits our view of websites as software distribution mechanisms. The SXG scheme allows signing this exchange with an X.509 certificate that is basically a TLS certificate with the 'CanSignHttpExchanges' extension. Browsers will reject certificates with this extension if they are used in a TLS exchange, ensuring key separation. SXG certificates are validated using the PKI, allowing Accountable JS to be used with the existing infrastructure, although, currently, Digicert is the only CA that provides SXG certificates.
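The classification rules of the metadata agent described in Section VII can be sketched over plain objects standing in for DOM nodes (the attribute list is an excerpt and the helper names are ours):

```javascript
// Sketch of the metadata agent's classification: scripts with a src
// attribute are external, scripts without one are inline; other elements
// count as active content if they carry a global event attribute.
// (Excerpt of the global event attribute list [19]; plain objects stand in
// for DOM nodes.)
const GLOBAL_EVENT_ATTRIBUTES = ["onclick", "onload", "onerror", "onmouseover"];

function classify(el) {
  if (el.tag === "script") return el.attrs.src ? "external" : "inline";
  if (el.tag === "iframe") return "iframe";
  if (GLOBAL_EVENT_ATTRIBUTES.some((a) => a in el.attrs)) return "event handler";
  return null; // not active content
}

// An element is "dynamic" if it entered the DOM after the window's load event.
function isDynamic(addedAtMs, loadEventAtMs) {
  return addedAtMs > loadEventAtMs;
}
```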
The lifespan of an SXG certificate is at most 90 days, limiting the impact of key leaks.

An SXG signature includes the HTTP request, as well as the corresponding response headers and body from the server. The signature is thus bound to the requested URL, in our case the manifest file's URL. It also includes signature validation parameters like the start and end of the validity period and the certificate URL. If the current time is outside the validity period, SXG permits fetching a new signature from a URL. This URL is also contained in the (old) signature's validation parameters. These features provide a solid foundation for Accountable JS's signed manifests, allowing manifests to be cached during the validity period and enabling dynamic re-fetching and safe key renewals.

A web application in compliance with Accountable JS must deliver the signed manifest. If it is small enough, it can be transmitted directly via the HTTP response header (using the directive x-acc-js-man). Alternatively, the response includes the URL of the SXG file, using the HTML meta-tag or the HTTP response header x-acc-js-link. The signature in this file covers the manifest file (as the HTTP response body) and the manifest URL (part of the HTTP request). Also, the browser needs to check that the URL value in the manifest corresponds to the web application's URL (excluding the query part of the URL).

Providing a signed manifest indicates that the website (i.e. the URL) opted into the protocol. From then on, the extension will expect an accountability manifest until the user explicitly chooses to opt out.

Apart from the manifest generation, the signing operation and uploading the signature to the ledger can also be automated thanks to existing tool support for SRI and SXG. We stress that the signatures need only be computed if the JS code changes. Techniques like microtargeting are disincentivised by accountability (see Section II-A), hence the performance of the signature generation is of secondary concern.

IX. PROTOCOL

In this section, we present the Accountable JS protocol. The end-to-end goal is to hold the developer accountable for the active content the client receives. Clients can compare this code with the manifest; hence, for honest clients, we can reformulate this task as follows:

• Clients should only run active content that follows the manifest. This is a setup assumption.
• Any manifest the client accepts needs to originate from the developer, even if the developer or server is dishonest. This follows from the non-repudiation of origin property of the signature scheme. A signed manifest was either signed by the developer, or the developer leaked their key.
• Whenever two clients accept a manifest with the same version number, that manifest must be the same, or they can provide non-repudiable proof that this was not the case. This is achieved by including a transparency log that gathers all manifest files with valid signatures.
• Whenever a client accepts a manifest with some version number, this version was the latest version in some client-defined time frame. This is achieved by a timestamping mechanism like OCSP-Stapling [20].
• A client can provide non-repudiable proof that they received a manifest from the web server. This is achieved by signing a client-provided nonce.

The developer of the website generates a manifest file for the web page that is identified with a URL, signs the manifest and publishes it in one or more public transparency logs (see Fig. 3, before t). The signature proves to the client that the developer takes responsibility for the manifest.

The CodeStapling protocol ensures that, whenever the client accepted a manifest, the developer can be held accountable for publishing it. Nevertheless, the developer cannot be held accountable for delivering it to a client, as there is no proof of that. We thus define the CodeDelivery protocol for non-repudiable code delivery (in Fig. 3, after t). With the HTTP GET request, the client C sends a nonce n signed with its signing key sk_C. The web server W responds with a signature on the HTTP response HTML, the client nonce n, and the signed log timestamp sig_L. The client validates the log's signature and the developer's signature within. Should one of these checks fail, the client aborts and displays an error message. Then, the client compares the active content in the HTML with the manifest; if they are consistent, the browser decides the web page adheres to the protocol.

X. PROTOCOL VERIFICATION

We analysed Accountable JS with Tamarin [21], considering the protocol's security w.r.t. a Dolev-Yao adversary that can manipulate messages in the network and corrupt other processes to impersonate them. Using Tamarin's built-in stateful applied-π calculus [6], we could model global state such as that represented by the transparency log.

The protocol comprises five processes running in parallel:

!P_Developer | !P_Webserver | !P_Client | !P_Log | P_Pub

The first three processes model the roles of the developer, web server and client, outputting and accepting messages as specified in Figure 3. The developer, web server and client are under replication to account for an unbounded number of parties acting in each role. Any party except the log and the public process can become dishonest. This is modelled by giving control to the adversary, but only after emitting a Corrupted event, which can be used to distinguish the party's corruption status in the security property. A corrupted party remains dishonest for the rest of the protocol execution.

The process P_Log models an idealised append-only log using insert and lookup operations on a global store [6]. Moreover, the built-in lock and unlock commands are used to ensure atomicity of the operations. Finally, the process P_Pub makes the public's ability to validate a client's claim explicit. Upon obtaining a claim (from the client), this process: (1) reads, from the log, the information that concerns the URL mentioned in the claim, (2) verifies the signatures in the claim, and (3) matches the signed values against those in the log.

Using Tamarin, we prove the following properties, which are detailed in Appendix A.

• Authentication of origin: The client executes active content only if the corresponding manifest was generated by the honest developer, unless the adversary corrupts the developer.
• Transparency: If the client executes code, then its manifest is present in a transparency log in a sufficiently recent entry.
• Accountability: When the public accepts a claim, then even if the client was corrupted, the code must exist in the logs and the server must have sent that data (either honestly or dishonestly via the adversary).
• End-to-end guarantee: Only by corrupting the developer is it possible to distribute malicious code.

[Fig. 3: Message flow between Client, Web Server, Developer and Ledger: the client draws n ←R {0,1}^λ and sends sign(sk_C, ⟨n, URL⟩); the web server responds with sig_W := sign(sk_W, ⟨HTML′, n, sig_L⟩).]

A. Code Verify Protocol

Meta's Code Verify [4] was published in March 2022 and made available as an extension. As of now, it is deployed only by WhatsApp Web. Intuitively, WhatsApp Web (the developer) submits a hash of their JavaScript along with a version number to Cloudflare, which Cloudflare then publishes to the end user. The end user's browser extension computes a hash of the JavaScript delivered from WhatsApp Web and compares it against the hash published by Cloudflare. Given that the manifest is hashed instead of signed, Cloudflare is trusted for authenticity and thus constitutes a trusted third party, replacing the log. Moreover, users' IP addresses are sent to Cloudflare instead of to WhatsApp Web.

We likewise modelled Code Verify in Tamarin, considering the following five processes:

!P_Developer | !P_Webserver | !P_Client | !P_Cloudflare | P_Pub

Again, we assume the developer is separate from the web server. The protocol does not have a public log and does not include independent auditors. Instead, Cloudflare records the hashes for each version. To our knowledge, Cloudflare does not provide information about the history of submitted versions or which is most recent. As the public cannot inspect how often versions have changed, it relies on Cloudflare to implement countermeasures against microtargeting. Publicly available information [4] does not describe such measures in Meta's deployment.

Under these considerations, we analysed the same properties, except for transparency, which, due to the lack of a public log, could not apply. We highlight the differences to our original properties below.

• Authentication of origin: The client executes active content only if the corresponding manifest was generated by the honest developer, unless the developer or Cloudflare is corrupted.
• Non-Accountability: The data provided to the client is not sufficient to prove they received certain content from the web server, even if the web server and Cloudflare are honest.
• End-to-end guarantee: Only by corrupting the developer or Cloudflare is it possible to distribute malicious code. In a separate lemma we show that the developer by itself can indeed distribute malicious content.

The latter property indicates that Cloudflare's role as a trusted party is not fully exploited yet. At least as far as we know [4], Cloudflare neither promises to ensure the code is harmless, nor does it guarantee to collect information to provide transparency or accountability. Nevertheless, the current message flow can be extended to provide such guarantees by having Cloudflare act as a transparency log. Accountability can likewise be achieved by simply deploying signatures instead of a hashing scheme.

XI. LOGGING MECHANISM

We would like clients to verify that they received the latest and the same version of the code as any other user. To this end, we assume a public append-only log to provide a public record of the software published and to prevent equivocation attacks. The log does not determine which JS is considered malicious, but it provides proof of receipt and origin, and allows identifying malicious versions.

Such a public log is realistic to deploy: CT logs [13] are used in the modern internet infrastructure. These logs store certificates, which are signed by CAs. In contrast, our logs need to store manifests signed by the developers. It is thus impossible to reuse the existing CT infrastructure, but we can closely follow the structure and properties of CT.

Websites that offer security-conscious services have an incentive to retain their reputation. Similar to how CT logs operate, our log can be run by a party that wants to support such webpages. Third-party monitors can keep the log honest, and we allow third parties to submit signed manifests they observe in the wild.
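The append-only behaviour expected of such a log (idealised by P_Log's insert and lookup operations) can be sketched as a simple in-memory map from URL to manifest history; this is an illustrative stand-in, not a production design:

```javascript
// Sketch of an idealised append-only transparency log: entries are only
// ever appended, never overwritten, and the full per-URL history stays
// queryable so equivocation can be detected after the fact.
class TransparencyLog {
  constructor() {
    this.entries = new Map(); // URL -> list of signed manifests, in order
  }

  append(url, signedManifest) {
    if (!this.entries.has(url)) this.entries.set(url, []);
    this.entries.get(url).push(signedManifest); // append only
  }

  latest(url) {
    const history = this.entries.get(url) || [];
    return history[history.length - 1];
  }

  history(url) {
    return [...(this.entries.get(url) || [])];
  }
}

const log = new TransparencyLog();
log.append("https://wallet.nimiq.com/", { version: "v1", sig: "..." });
log.append("https://wallet.nimiq.com/", { version: "v2", sig: "..." });
console.log(log.latest("https://wallet.nimiq.com/").version); // v2
```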
When implemented naively, a logging mechanism can bring significant privacy implications: to confirm that other clients receive the same manifest, the client would need to consult the log on each request and reveal the URL to the log. We can mitigate these privacy issues by allowing the web server, which learns each request anyway, to include a signed and timestamped inclusion statement from the log instead. This is similar to OCSP-Stapling for certificate revocation status requests [14]. While it mitigates the privacy issues of consulting the log, it instead requires the user to trust the specific log selected by the web server. We outline other approaches to solve the trade-off between trust and privacy in Section XIV.

Overall, our transparency log needs to provide interfaces to, at a minimum:
• store the signed manifest file (including its version number) bound to a URL,
• query the latest signed manifest file for a URL in the logs,
• form a signed response for a query that can be pre-fetched by the web server to staple it to each request from the clients.

A possible implementation of this functionality could be based on Verifiable Log-Based Maps [22]. An implementation of this structure for Trillian [7], the software running Google's CT server, is currently in progress [23], with the goal of supporting transparency in certificate revocation.

Availability, scalability and the size of the transparency logs are other implications. Be it submitting a new manifest to the log or collecting the latest version of a manifest for a URL, low latency to access the network of transparency logs can be achieved by eliminating the single point of failure, i.e. by adding multiple logs that provide load balancing. The mechanism proposed for query privacy will also decrease the number of requests to the logs, since the web server will provide the stapled result in most cases.

Websites that frequently update their active contents can create a significant burden on the log size. We calculate approximately how many times each log can be updated for a limited time and space. We assume the overhead of a non-leaf node is approximately 100 bytes, and for leaf nodes it is 700 bytes (signature 600 bytes + 100 bytes). If a log provider has 100 TB of space for 5 years, it can contain 137 billion signatures in total. To make sense of this number, take the following example. We start with a log of 10M URLs with eight updates per month on average. The number of URLs also increases exponentially at a rate of 1% with each update (i.e. also eight times per month).5 This number would be well below 137 billion signatures.

5 E.g. after the first update, 10M updates along with 100k new URLs are appended to the existing 10M, resulting in a total of 20.1M.

XII. EVALUATION

We implemented Accountable JS in a Chrome extension [12, folder accjs-extension] for demonstration and prototyping. Ideally, the measurement procedure should be part of the browser's rendering engine, since it can access the response body and observe mutations to elements first-hand. Our measurements here can thus be seen as (promising) upper bounds. We elaborate on the technical limitations imposed by the extension SDK in Section XIII.

We come back to the use cases from Section VI and measure how the extension affects the following metrics: 1) the number of additional requests, 2) the size of additional traffic, 3) the time until the browser paints the first pixel / the largest visible image or text block6 / until the web page is fully responsive, and 4) the total blocking time, i.e. the time during which the web page cannot process user input. We consider differences below 100 ms to be imperceptible to users, differences of 100-300 ms barely noticeable, and differences above 300 ms noticeable.7

6 More precisely: the 'largest contentful paint'.
7 We derive these performance categories from the RAIL model [24]. According to RAIL, users feel the result is immediate if < 100 ms and feel they are freely navigating between 100-1000 ms (see also [25]). However, we found this gap is too wide to ignore, and split the category at 300 ms for an unusually common delay in web apps due to the 'double tap to zoom' feature on iPhone Safari [26].

Evaluation environment: Measurement took place on a MacBook Pro with a 2 GHz Intel Quad-Core i5, 16 GB RAM and macOS Monterey 12.5.1, running Google Chrome 107.0.5304.121. The results are compiled in Table II. We measured the number of additional requests and traffic using Chrome's developer tools, and the rendering metrics using Lighthouse (set to 'desktop simulated throttling'). Unfortunately, WhatsApp Web is incompatible with Lighthouse, so we instead computed the combined duration of all tasks performed by the browser using Puppeteer page metrics [27]. We automated this process using Puppeteer and NodeJS and performed n = 200 trials per website and configuration to minimise the impact of network latency on page loads.

Configurations: For the performance evaluation, we compare the CSP built into the browser with the Code Verify and Accountable JS extensions as follows:
1) Baseline: disabled CSP and extensions.
2) CSP: CSP active, no extension.
3) Accountable JS: CSP inactive, only the Accountable JS extension active.
4) Code Verify: CSP inactive, only the Code Verify extension active. This configuration only applies to WhatsApp Web, as Code Verify currently only supports Meta websites.

Experiments: We consider the examples from Section VI: Hello World, WhatsApp Web, Trusted Third-Party, Delegate Trust to Third Parties (Nimiq A), and Untrusted Third Party (Google AdSense and Nimiq B). For the compartmentalisation experiment on Nimiq's Wallet, we use a different baseline that we will discuss below. For the CSP measurement, we defined CSP headers for each website that listed all active content in the Accountable JS manifest files. We collected all valid sources of external scripts and hashes for the external and inline scripts in CSP's script-src directive, hashes for event handlers in script-src-attr, and sources for iframes in child-src. For the Accountable JS experiment, we first navigate to the target website and wait ten seconds for the page to load.
TABLE II: Evaluation results on case studies. The second and third columns show the number and total size of additional requests made by the extension, i.e. the signed manifest and certificate. Each subsequent block provides Lighthouse performance metrics for rendering time and the total time that the browser spends unresponsive. For each metric, we compare the baseline (no Content Security Policy (CSP), no Accountable JS) with the overhead incurred by enabling CSP and by enabling the Accountable JS extension (leaving CSP disabled). For compartmentalisation, the baseline is with the extension activated but the same signing key for all Nimiq components. All time values are averages over n = 200 runs and are given in milliseconds. The additional traffic (kB) value is affected by the size of the signature and SXG certificate. Signatures are generated on uncompressed manifest JSON files.

Each timing cell reads: baseline, +CSP overhead, +Accountable JS overhead.

case study               | requests | traffic (kB) | time to first pixel | time to largest element | time to reactive | blocking time
Hello World              | 2        | 2.06         | 196 +1 +20          | 197 +0 +23              | 196 +1 +24       | 0 +0 +0
Trusted Third-Party      | 2        | 2.46         | 462 +0 +21          | 462 +0 +21              | 462 +0 +21       | 0 +0 +0
Delegate Trust (Nimiq A) | 3        | 9.93         | 262 +3 -10          | 262 +3 -10              | 5591 -29 -144    | 172 +4 +87
AdSense + Nimiq B        | 3        | 15.62        | 747 +2 +91          | 901 +5 +68              | 6034 +1 -82      | 159 +3 +77
Compartmentalisation     | 2+2      | 8.66 +1.10   | 2200 -17            | 4675 +20                | 5321 +115        | 212 +7
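The perceptibility thresholds used to interpret the overhead columns of Table II (below 100 ms imperceptible, 100-300 ms barely noticeable, above 300 ms noticeable) can be expressed as a small helper:

```javascript
// Thresholds from Section XII, derived from the RAIL model [24]: overheads
// below 100 ms are imperceptible, 100-300 ms barely noticeable, and
// anything above 300 ms noticeable.
function perceptibility(overheadMs) {
  if (overheadMs < 100) return "imperceptible";
  if (overheadMs <= 300) return "barely noticeable";
  return "noticeable";
}

// E.g. the +24 ms Accountable JS overhead on Hello World's time-to-reactive:
console.log(perceptibility(24)); // imperceptible
```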
in the extension, we download the manifest file and self-sign it using the gen-signedexchange command line tool [28]. For Nimiq A+B and AdSense, we changed the trust attribute of the external element(s) to delegate before signing. We publish this signed manifest via a local web server and configure the web server to provide a response header pointing to its URL. We also ensure the website provides SRI tags for external scripts. The evaluation procedure for each case study is elaborated in the full version [16, Appendix C].

Results: The CSP configurations show an imperceptible overhead in all case studies. This is hardly surprising, as CSP is built into the browser and can validate resources during rendering. The Accountable JS configurations likewise have an imperceptible overhead in all case studies. Moreover, the traffic requirements are modest and incur only modest blocking time. For Nimiq A, the traffic requirements are about 9.9 kB for the additional signature. In terms of performance, the overheads of CSP and Accountable JS are comparable, except that the total blocking time of Accountable JS is slightly higher. Furthermore, the time-to-interactive value unexpectedly decreases more with Accountable JS than with CSP. The difference is minimal, however, and could be explained by (a) network latency, (b) side effects of the browser's just-in-time compilation or scheduling, or (c) a side effect of the former two on how Lighthouse evaluates the reactive metric. Nimiq is a complex web application heavily dependent on external data, in particular the remote blockchain it connects to.

Discussion: The Accountable JS configurations have an imperceptible overhead which is slightly higher than that of the CSP configurations. Recall that CSP is built into the rendering engine, whereas Accountable JS runs as a browser extension. Accountable JS has to perform signature validation, metadata collection and a final compliance check. The prototype achieves good performance by measuring all elements simultaneously and combining their results. The browser extension panel displays the results instantaneously, while the evaluation is in progress, although the evaluation is usually too quick for the user to notice. Moreover, the traffic requirements are modest and incur little blocking time.

For AdSense + Nimiq B, the network overhead is slightly higher than for Nimiq A. This is due to the larger size of the manifest, which now also includes AdSense. We again observe an imperceptible impact on performance with Accountable JS.

The difference between Code Verify (220 ms) and Accountable JS (244 ms) on WhatsApp Web is small. This is remarkable, because Code Verify only applies SRI checks to external scripts, but not to event handlers or iframes. In contrast to Accountable JS, the order of active elements is ignored, attributes are not checked (e.g. load='async' for scripts), and a short hash value is downloaded from Cloudflare rather than a signature.

Compartmentalisation: For compartmentalisation, we evaluate the impact of the additional signing key. We signed Nimiq Keyguard, which is embedded in Nimiq Wallet, with a different signing key and set the Keyguard's trust attribute to delegate in the Wallet's manifest. The baseline therefore also has the Accountable JS extension activated, but uses the same signing key on all Nimiq components. The Wallet's manifest includes the Hub's manifest, and the Hub's manifest declares the Keyguard with trust = delegate in its manifest section. Thus a separate manifest is required for the Keyguard, and this time a separate signing key is used for the Keyguard manifest. For the baseline performance, we inline the Keyguard's manifest as an entry for its iframe in the Wallet's manifest, thus having one manifest and one signing key, and activate the extension.

In the compartmentalisation evaluation, we observe two additional round trips and a slightly higher traffic overhead (about the overhead of Accountable JS, w.r.t. the overall page traffic of 4.6 MB). This is due to downloading the extra SXG certificate and manifest for the Keyguard. The effect on the rendering metrics is small; the barely noticeable increase in the time-to-reactive value can again be explained by network latency and the side effects described above, since the delegated manifest can be validated in parallel with rendering, whereas in the baseline scenario it is inlined and thus validated in sequence.

Due to stapling, the overhead for clients to verify that they received the latest version of the code (and thus the same version as any other user) is negligible. The web server staples a query result, i.e. the log's signature on the signed manifest, to each request. The signatures use 2048-bit RSA keys and are 256 bytes long.
XIII. LIMITATIONS OF PROTOTYPE

The browser extension is a prototype to evaluate the performance and applicability of the approach. The advantage of an extension (as opposed to modifying the browser's source or writing a developer plugin) is that users can easily experiment with its code. On the other hand, extensions cannot interrupt the browser's rendering engine. Thus we inject a content script that can apply the client-side operations of the protocol to the browser window. The content script runs in the same context as the web page; hence it can observe changes to the DOM via the MutationObserver. Since the extension cannot access the browser's rendering engine, some active elements can be added within a small time frame before the MutationObserver is registered. This race condition is a limitation of using an extension and is fixable by closer integration into the browser.

Another limitation is that other browser extensions may interfere with the measurement by injecting active content into the web page. Since extensions cannot distinguish website code from code that other extensions injected into the web page, this can break the measurement. This is the correct behaviour, as the website developer cannot attest to every possible modification of the active content by other extensions. However, there are various client-side solutions: (a) closer integration into the browser could distinguish active content injected by websites, (b) the extension could provide an API for third-party extensions to register their modifications, or (c) an allowlist for the most common extensions could give a warning to the user.

XIV. RELATED WORK

We first discuss how Accountable JS relates to other (proposed) web standards with seemingly similar goals, before discussing related academic proposals.

CSP was introduced to counter Cross-Site Scripting (XSS) attacks. CSPs specify runtime restrictions for the browser, typically the set of allowed sources for scripts, iframes, stylesheets, etc., including eventual requirements for sandboxing. Like accountability manifests, CSPs can specify which sources are allowed and, combined with SRI, fix their content. This is comparable to a manifest file that includes types with trust set to either assert (if SRI is employed) or blind-trust (otherwise). By contrast, CSPs cover neither the order of active contents nor their possible nesting (e.g. an iframe within an iframe). Mixed ordering of active content may create malicious activity: a site loading script A before script B may mean something different from loading B before A. A site that only uses CSP cannot catch that behaviour, whereas Accountable JS takes the order into account. Most importantly, in CSP there is no means of delegating trust and no comparable distinction between trust levels. CSP and Accountable JS are orthogonal and can be combined; it is possible to generate a CSP from a manifest file.

[Fig. 4 (schematic): a Web Package contains a Web App Manifest; URLs are mapped to HTTP responses.]

The Web Package proposal (currently in draft status [30], see Fig. 4) aims at packaging web applications for offline use. Web Packages provide a declaration of the web application's metadata via Web App Manifests [31], a serialisation of its content via Web Bundles [32], and authenticity via SXG [33]. We likewise employ SXG to provide authenticity of origin via signatures. SXG, like Accountable JS, decouples the web developer from the website hoster. Web App Manifests, despite their name, are only superficially related: they contain startup parameters like language settings, entry points and the application icon, e.g. for an 'installable web application' displayed in a smartphone's launcher. Web Bundles are a serialisation format for pairs of URLs8 and HTTP responses. They represent a web application as a whole, but a signature on a web bundle would change with every modification of a web page's markup. Web Packages are thus not competing with Accountable JS; instead, both standards are compatible. A web bundle can contain x-acc-js-link in the header of its entry point's HTTP response, triggering the browser to validate the manifest. The manifest is specified via a URL that is also included in the web bundle; this URL maps to an HTTP response that contains the manifest in its content part.

Signature-based SRI [35] proposes more easily maintainable SRI tags to protect against script injections, by including signature keys instead of hashes. These enable validating the provider of a third-party script instead of its content, like the trust relationships expressed with trust = delegate. The tags are part of the HTML code instead of the manifest file; signing the HTML files themselves is impractical, as they change frequently.

Service Workers [36] are network proxies programmable via JavaScript, often used to perform URL response caching separate from the browser cache. Theoretically, a compliance check like our measurement could be implemented in a service worker, but (a) the service worker would itself need to be delivered correctly and (b) service workers lack access to the DOM and thus to information about how active elements are used.

We will now discuss related academic work. Accountability in the web requires non-repudiable proof. For static assets, this can, in principle, be provided by digital signatures (e.g. via SXG and web bundles, see above), but recreating the signature for each exchange is costly. We solve this via

8 More precisely, HTTP representations [34].
a simple challenge-response mechanism. Ritzdorf et al. [5] provide a full-fledged solution, giving non-repudiation for the entire communication, optionally hiding sensitive data. The statement we prove is that the client has obtained certain active content, not that they execute it. Ensuring that a remote partner runs certain software is the goal of remote code attestation (e.g. [37]). Outside embedded systems, this is typically based on a trusted execution environment (e.g. TPM, SGX). While the browser (and, for that matter, our extension) could provide a trusted execution, establishing trust in the correctness of the browser is the crux.

Our work relies on a transparency log. As mentioned before, Trillian's [7] verifiable log-based maps would fit the bill, but there are many ways to implement such a store. The most interesting aspect is privacy. We propose an approach based on stapling, an established method for revocation management [20], but other techniques promise privacy, too. CONIKS [8] provides a log mapping user identities to keys, keeping the list of all user identities in the system private. This is not useful in our case: the URLs (the domain of our mapping) are not secret, but which URL a user accesses is. Multiparty protocols for Private Information Retrieval [38], Private Set Intersection (e.g. [39]) or ORAM [40] lack efficient database updates or mechanisms to efficiently update precomputation steps, or only preserve k-anonymity for URLs. K-anonymity is often not enough if we consider that an attacker, e.g. a censor, tries to punish access to a few critical URLs, each of which may end up in a bucket with uncritical, but also not frequently visited, URLs. Finally, Accountable JS may be an enabler for formal verification of web applications, as users are potentially able to link the code they receive to published verification results. Various static and dynamic analyses target JavaScript already [41], [42].

Although we showcased only a single approach to code compartmentalisation (as it is being deployed by our real-world example), other approaches are also compatible with Accountable JS. Language-based isolation methods like BrowserShield [43] rewrite JavaScript into a safer version, preventing or mediating access to critical operations like createElement or eval. If the code is rewritten on the client (typically using JavaScript), the developer declares in the manifest file the wrapper that fetches the code and deals with the code rewriting. If the code is rewritten on the server, the developer declares the transformed JavaScript code that will be delivered to the user. Frame-based isolation methods (e.g. AdJail [44]) that isolate third-party code inside an iframe are also compatible with our proposal; see the use case for untrusted third-party code in Section VI-E.

XV. DISCUSSION

We provide a solution that allows users to detect if they are microtargeted by developers and to prove this to the public if it is the case. Sending different code to classes of users might not be outlawed in many countries, but sending malicious code is. Our solution neither provides a code audit tool nor proposes a framework, legal or otherwise, for the punishment of malicious code distribution. It provides, however, verifiable data that authorities can use to evaluate which code was published and whether that code was delivered to a user. Moreover, the protocol provides users with a claim that includes the delivered code and the identity of the developer.

The transparency logs can be used as a point of reference for the public code, for auditing and evaluation. Honest developers aim to make their code easy to audit; dishonest developers thus risk loss of reputation if they microtarget users (as frequent updates are visible on the transparency logs), silently opt out of the system (as this will be caught by users that received a previous opt-in), or provide obfuscated code (due to the log).

Honest developers will benefit from a good reputation and their ability to provide proofs for any efforts they make toward independent audit or formal verification. Clients, who often debate a website's reputation in a public forum (e.g. the case of ProtonMail or Lavabit), obtain data to substantiate positive and negative claims.

We stress that accountable code delivery is necessary to connect auditing results to the code users actually run, but does not by itself guarantee the safety of this code. Realistically, it will take some time until software analyses are mature enough to handle this at scale. Assuming, however, that such analyses may not necessarily run at each browser independently, authentic code delivery appears to be a necessary first step.

Moreover, Accountable JS only authenticates the active content, leaving it exposed to data-only attacks, e.g. modified button labels or form URLs. A signature on the full content of a web application could be achieved by building on Web Packages/Web Bundles (which we discussed in Section XIV); however, this approach would be too static and inflexible for the requirements of the current web ecosystem. Thanks to accountability, the developer would take responsibility for the active content that they published, in this case for code that is vulnerable to data-only attacks. Realistically, there would not be consequences, because they can plausibly point to the dire state of verification of JavaScript, which is at least partially because users could thus far not be sure to receive the verified code anyway. Accountable JS's choice to validate only the active content is a compromise and a possible starting point for future work, as we discuss in the next section.

XVI. CONCLUSION

With Accountable JS, we provide a basis for the accountable delivery of web applications, and thus a first step towards re-establishing the trust between a user and the web application code they run on their computers. How to achieve security (via audit, code analysis or formal verification) is a question that we left open intentionally. Accountable delivery is, nevertheless, a requirement for any non-instantaneous analysis.

A key question for verification and audit is how to relate the web page's user interface to the active content. As some desirable security properties concern user input, we would like to give guarantees about, e.g., form fields. We can account for the JavaScript code that addresses them by ID, but those IDs are invisible to the user. Future work may investigate how to establish stronger ties between the manifest and the user interface.
Acknowledgements: This project was partially funded by the ERC Synergy Grant IMPACT (grant agreement number 610150) and by a research award for privacy-preserving technologies from Meta Research, specifically for the "Transparency.js, transparency for active content" initiative.

REFERENCES

[1] I. Esiyok, L. Hanzlik, R. Künnemann, L. M. Budde, and M. Backes, "TrollThrottle — Raising the Cost of Astroturfing," in Applied Cryptography and Network Security, 2020.
[2] M. Schwarz, M. Lipp, and D. Gruss, "JavaScript Zero: Real JavaScript and Zero Side-Channel Attacks," in Proceedings 2018 Network and Distributed System Security Symposium, 2018.
[3] C. Cimpanu, Chrome extension caught hijacking users' search engine results, 2019. [Online]. Available: https://www.zdnet.com/article/chrome-extension-caught-hijacking-users-search-engine-results/.
[4] R. Hansen and V. Silveira, "Code Verify: An open source browser extension for verifying code authenticity on the web," 2022. [Online]. Available: https://engineering.fb.com/2022/03/10/security/code-verify/ (visited on 03/10/2022).
[5] H. Ritzdorf, K. Wüst, A. Gervais, G. Felley, and S. Capkun, "TLS-N: Non-repudiation over TLS Enabling Ubiquitous Content Signing," in Proceedings 2018 Network and Distributed System Security Symposium, 2018.
[6] S. Kremer and R. Künnemann, "Automated analysis of security protocols with global state," in Proceedings of the 2014 IEEE Symposium on Security and Privacy, 2014.
[7] "Trillian," 2021. [Online]. Available: https://github.com/google/trillian (visited on 09/20/2021).
[8] M. S. Melara, A. Blankstein, J. Bonneau, E. W. Felten, and M. J. Freedman, "CONIKS: Bringing key transparency to end users," in 24th USENIX Security Symposium (USENIX Security 15), 2015.
[9] S. Meiklejohn, J. DeBlasio, D. O'Brien, C. Thompson, K. Yeo, and E. Stark, "SoK: SCT Auditing in Certificate Transparency," PoPETs, no. 3, 2022.
[10] D. Carlin, J. Burgess, P. O'Kane, and S. Sezer, "You could be mine(d): The rise of cryptojacking," IEEE Secur. Priv., no. 2, 2020.
[11] "AdSense program policies," 2021. [Online]. Available: https://support.google.com/adsense/answer/48182?amp;stc=aspe-1pp-en (visited on 07/19/2021).
[12] I. Esiyok, P. Berrang, K.-C. Gordon, and R. Künnemann, Supplementary material, 2023. [Online]. Available: https://github.com/iesiyok/accountable-js.
[13] B. Laurie, A. Langley, and E. Kasper, Certificate Transparency, RFC Editor, 2013. [Online]. Available: https://www.rfc-editor.org/rfc/rfc6962.txt.
[14] Y. N. Pettersen, The Transport Layer Security (TLS) Multiple Certificate Status Request Extension, 2013. [Online]. Available: https://rfc-editor.org/rfc/rfc6961.txt.
[15] "Using subresource integrity," 2021. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity#using_subresource_integrity (visited on 11/03/2021).
[16] I. Esiyok, P. Berrang, K.-C. Gordon, and R. Künnemann, Accountable JS full version, 2023. [Online]. Available: https://arxiv.org/abs/2202.09795.
[17] A. Eicholz, S. Moon, A. Danilo, T. Leithead, and S. Faulkner, "Sandboxing," W3C Recommendation, 2021. [Online]. Available: https://www.w3.org/TR/2021/SPSD-html52-20210128/browsers.html.
[18] "Is it allowed to use iframe," 2020. [Online]. Available: https://support.google.com/adsense/thread/24384322/is-it-allowed-to-use-iframe?hl=en (visited on 11/05/2021).
[19] HTML Living Standard: Event handlers on elements, Document objects, and Window objects. [Online]. Available: https://html.spec.whatwg.org/#event-handlers-on-elements,-document-objects,-and-window-objects.
[20] D. Eastlake et al., "Transport Layer Security (TLS) extensions: Extension definitions," RFC 6066, 2011. [Online]. Available: https://www.rfc-editor.org/rfc/rfc6066.txt.
[21] S. Meier, B. Schmidt, C. Cremers, and D. Basin, "The TAMARIN prover for the symbolic analysis of security protocols," in Computer Aided Verification, 2013.
[22] A. Eijdenberg, B. Laurie, and A. Cutter, "Verifiable Data Structures," 2015. [Online]. Available: https://github.com/google/trillian/blob/b7ea8d2ca870e5b8ae1c05e9d2a33c4fdcca4580/docs/papers/VerifiableDataStructures.pdf (visited on 11/05/2021).
[23] "Trillian – experimental Beam Map Generation," 2021. [Online]. Available: https://github.com/google/trillian/tree/b7ea8d2ca870e5b8ae1c05e9d2a33c4fdcca4580/experimental/batchmap (visited on 11/05/2021).
[24] Measure performance with the RAIL model, 2020. [Online]. Available: https://web.dev/rail/.
[25] J. Nielsen, Response times: The 3 important limits, 1993. [Online]. Available: https://www.nngroup.com/articles/response-times-3-important-limits/.
[26] T. VanToll, What exactly is..... the 300ms click delay, 2013. [Online]. Available: https://www.telerik.com/blogs/what-exactly-is.....-the-300ms-click-delay.
[27] Page.metrics method. [Online]. Available: https://pptr.dev/api/puppeteer.page.metrics/.
[28] "Signed exchange generation," 2021. [Online]. Available: https://github.com/WICG/webpackage/tree/master/go/signedexchange (visited on 11/05/2021).
[29] M. Steffens, M. Musch, M. Johns, and B. Stock, "Who's Hosting the Block Party? Studying Third-Party Blockage of CSP and SRI," in Proceedings 2021 Network and Distributed System Security Symposium, 2021.
[30] J. Yasskin, Use Cases and Requirements for Web Packages. [Online]. Available: https://datatracker.ietf.org/doc/draft-yasskin-wpack-use-cases/.
[31] M. Cáceres, K. R. Christiansen, M. Lamouri, A. Kostiainen, M. Giuca, and A. Gustafson, Web App Manifest. [Online]. Available: https://www.w3.org/TR/appmanifest/.
[32] J. Yasskin, Web Bundles. [Online]. Available: https://wicg.github.io/webpackage/draft-yasskin-wpack-bundled-exchanges.html#name-semantics.
[33] J. Yasskin, "Signed HTTP exchanges," 2021. [Online]. Available: https://wicg.github.io/webpackage/draft-yasskin-http-origin-signed-responses.html (visited on 04/13/2021).
[34] R. Fielding, M. Nottingham, and J. Reschke, HTTP Semantics. [Online]. Available: https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-semantics-15.txt#section-8.
[35] M. West, mikewest/signature-based-sri, 2020. [Online]. Available: https://github.com/mikewest/signature-based-sri (visited on 07/20/2021).
[36] A. Russell, J. Song, J. Archibald, and M. Kruisselbrink, Service Workers 1. [Online]. Available: https://www.w3.org/TR/service-workers/.
[37] L. Gu, X. Ding, R. H. Deng, B. Xie, and H. Mei, "Remote attestation on program execution," in Proceedings of the 3rd ACM Workshop on Scalable Trusted Computing, 2008.
[38] L. Fortnow, "Private information Retrieval Survey."
[39] B. Pinkas, M. Rosulek, N. Trieu, and A. Yanai, "SpOT-Light: Lightweight Private Set Intersection from Sparse OT Extension," in Advances in Cryptology – CRYPTO 2019, 2019.
[40] O. Goldreich and R. Ostrovsky, "Software protection and simulation on oblivious RAMs," J. ACM, no. 3, 1996.
[41] D. Park, A. Stefănescu, and G. Roşu, "KJS: A complete formal semantics of JavaScript," in Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2015.
[42] J. F. Santos, P. Maksimović, T. Grohens, J. Dolby, and P. Gardner, "Symbolic Execution for JavaScript," in Proceedings of the 20th International Symposium on Principles and Practice of Declarative Programming, 2018.
[43] C. Reis, J. Dunagan, H. J. Wang, O. Dubrovsky, and S. Esmeir, "BrowserShield: Vulnerability-driven filtering of dynamic HTML," ACM Trans. Web, no. 3, 2007.
[44] M. T. Louw, K. T. Ganesh, and V. Venkatakrishnan, "AdJail: Practical enforcement of confidentiality and integrity policies on web advertisements," in 19th USENIX Security Symposium (USENIX Security 10), 2010.

APPENDIX A: VERIFICATION OF SECURITY PROPERTIES

By default, Tamarin assumes that the adversary controls the network. Our model allows the adversary to impersonate the untrusted parties in the protocol and thereby access their secrets. This is logged with a Corrupted(p) event in the trace, with p an identifier for the corrupted principal.

We model the principals in the following structure using the applied-π calculus:

    in($p);
    (
      ( event Corrupted($p);
        out(sk($p)) )
      | out(pk(sk($p)))
      | (
          /* process goes here */
        )
    )

The shortcut $p denotes that the term p is a public value. The attacker, by inputting the public value $p, can pick some identifier for the party. Then, if the attacker corrupts the party, a corruption event is emitted and the attacker gets access to the secret key. The public (verification) key is emitted so that other parties can use it to verify the signed messages of p. We exemplify this structure with the process PDeveloper as follows:

    in($D);
    (
      [..]
      | (
          in(<$manifest, $url, $v>);
          event DUploads($D, $url, φ);
          out(<'update', $D, $manifest, $url, $v, φ>)
          ...
        )
    )

This code snippet includes the interaction with the network via in and out, which is mediated by the attacker. The attacker hence inputs the public values manifest, url and version number v; the developer process then computes a signature φ over these values and sends an update message, including all this information, to the log. Events are annotations associated with parts of the processes that enable us to define restrictions and security properties. In this example, before sending the update message to the log, the developer logs a DUploads event in the trace, annotating the developer's new code update request to the transparency logs.

The process PLog represents the transparency log as a protocol party that can receive and send messages and, in addition, apply insert and lookup operations to an append-only global store. The applied-π calculus provides constructs for modelling the manipulation of a global store. The code snippet below includes an insert and a lookup operation:

    insert <$D, $L, 'version', $url>, $v;
    ...
    lookup <$D, $L, 'manifest', $url> as $manifest
      in P else Q

The insert construct associates the value $v with the key, which is the tuple <$D, $L, 'version', $url>; successive inserts overwrite old values. The lookup construct retrieves the value associated with the key <$D, $L, 'manifest', $url> and assigns it to the variable $manifest. If the lookup was successful, it proceeds with process P, otherwise with Q. $D stands for the developer's identity, whereas $L stands for the log's identity. Since there is an unbounded number of developers and logs, we associate the values stored in the global store with the URL and the identities of the related developer and log for uniqueness.

Our model also includes lock and unlock, which the stateful applied-π calculus defines for exclusive access to the global store in the concurrent setting. The code snippet below shows an example of the lock and unlock operations used in our protocol:

    lock $url;
    insert <..., $url>, ...;
    ...
    unlock $url;
When a $url is locked, any subsequent attempt to lock the same $url will block until it is unlocked. We provide exclusive access based on the $url when the log attempts to insert a new value into the global store. This is an over-approximation: if a lock requires exclusive access for every write (independent of the URL), our model correctly captures this behaviour too. We do not require locks for reads, which also increases generality.

Security properties and restrictions are first-order formulas over the annotated events and time points. Universal quantification (meaning: for all) and existential quantification (meaning: there exists) are used to check whether the security property formula (lemma) holds for all examples in the domain, or whether there exists at least one example that satisfies the formula, respectively. If the lemma holds in the former case, the Tamarin Prover shows that it is proven, whereas in the latter case a satisfying example is presented to the user. The time points enable accounting for event order in the trace, where e.g. E@i means that event E was emitted at index i in the trace. We prove that the following security properties hold in our protocol.

Theorem 1 (Authentication of origin). Intuitively, the client will only execute active content code (signified by the event CExec with url and manifest φ) if the code was uploaded by the honest developer D (who logged the event DUploads), or the developer was corrupted. The KU event is emitted whenever the attacker (who is acting on behalf of the corrupted party D) constructs a message. We simplify the formula as follows:

    CExec($D, $url, φ) =⇒ DUploads($D, $url, φ) ∨
        (Corrupted($D) ∧ KU($url) ∧ KU(φ))

Formally, the lemma is: for all CExec events, there exists either an earlier DUploads event, or there exist a Corrupted($D) event and KU events before the CExec event.

Theorem 2 (Transparency). If the client executes JS code c for url with timestamp ts (CExec′), then there is a corresponding log entry (Log) and it was deemed recent (CRecent) by the client. The session identifier sid binds the moment when the client checks that the timestamp is recent (CRecent) to the moment it executes (CExec′) the code.

    CExec′($url, sid, c, ts) =⇒ Log($url, c, ts) ∧ CRecent(sid, ts)

Authentication of origin and transparency describe the proactive behaviour of the extension. The following theorems cover the reactive behaviour. We first establish that a claim that a client submits to the public is non-repudiable, i.e. that a corrupted client cannot forge false evidence to implicate honest parties.

Theorem 3 (Accountability). When the public accepts a claim (identified by server id, url, manifest, client nonce and log timestamp), then, even if the client was corrupted, the code must exist in the logs (Log′), and the server must have sent that data, either honestly, or dishonestly via the adversary.

    PAccept($W, $url, φ, n, ts) =⇒ Log′($url, φ, ts) ∧
        (WSend($W, $url, φ, n) ∨
         (Corrupted($W) ∧ KU($W, $url, φ, n)))

Here, the event WSend is emitted by W (who is honest) right before it sends the signed tuple sig_W to C in Fig. 3.

Theorem 4 (End-to-end guarantee). When the client executes malicious code, a corrupted developer is necessary to distribute it.

    CExec($D, $url, 'malicious') =⇒ Corrupted($D)

Theorem 5 (End-to-end non-guarantee). When the client executes malicious code, a corrupted developer is sufficient to distribute it.

    Ex. CExec($D, $url, 'malicious') =⇒
        (All x. Corrupted(x) =⇒ (x = $D))

Tamarin reports these results within 3 hours on a 16-core computer with 2.6 GHz Intel Core i5 processors and 64 GB of RAM. The proof is fully automatic, but relies on a so-called 'sources' lemma to specify where certain messages can originate from. We specified this lemma manually, but it is verified automatically. The full protocol can be found in the supplementary material [12].

APPENDIX B: CLAIM VERIFICATION

The public runs a procedure to verify the claim generated by a client that was allegedly targeted by a website. As shown in Appendix A, Theorem 3, a claim is identified by server name, URL, manifest, request nonce and the timestamp that was set for the manifest by the ledger. The signatures on the request and the response data are verified, and the request nonce is checked against the server nonce for authenticity. Next, the behaviour of the delivered content is checked against the manifest using the measurement procedure. Then, the public evaluates whether the manifest is the latest version on the ledger, using the timestamp. If the evaluation fails in any of these steps, the claim is accepted.

APPENDIX C: GLOSSARY

CA      Certificate Authority
CDN     Content Delivery Network
CSP     Content Security Policy
CT      Certificate Transparency
DOM     Document Object Model
JS      JavaScript
OCSP    Online Certificate Status Protocol
PKI     Public Key Infrastructure
SPA     Single Page Application
SRI     Subresource Integrity
SXG     Signed HTTP Exchanges
TLS     Transport Layer Security
XSS     Cross-Site Scripting