Mobile Security Testing Guide
Table of Contents
Introduction 1.1
Frontispiece 1.2
Overview
Introduction to the Mobile Security Testing Guide 2.1
Mobile App Taxonomy 2.2
Mobile App Security Testing 2.3
General Mobile App Testing Guide
Testing Code Quality 3.1
Cryptography in Mobile Apps 3.2
Mobile App Authentication Architectures 3.3
Testing Network Communication 3.4
Android Testing Guide
Platform Overview 4.1
Setting up a Testing Environment for Android Apps 4.2
Testing Data Storage on Android 4.3
Android Cryptographic APIs 4.4
Local Authentication on Android 4.5
Android Network APIs 4.6
Android Platform APIs 4.7
Code Quality and Build Settings for Android Apps 4.8
Tampering and Reverse Engineering on Android 4.9
Android Anti-Reversing Defenses 4.10
iOS Testing Guide
Platform Overview 5.1
Setting up a Testing Environment for iOS Apps 5.2
Data Storage on iOS 5.3
iOS Cryptographic APIs 5.4
Local Authentication on iOS 5.5
iOS Network APIs 5.6
iOS Platform APIs 5.7
Code Quality and Build Settings for iOS Apps 5.8
Tampering and Reverse Engineering on iOS 5.9
iOS Anti-Reversing Defenses 5.10
Appendix
Testing Tools 6.1
Suggested Reading 6.2
Foreword
Welcome to the OWASP Mobile Security Testing Guide. Feel free to explore
the existing content, but do note that it may change at any time. New APIs and
best practices are introduced in iOS and Android with every major (and minor)
release, and new vulnerabilities are found every day.
https://www.github.com/OWASP/owasp-mstg/
squirrel (noun plural): Any arboreal sciurine rodent of the genus Sciurus,
such as S. vulgaris (red squirrel) or S. carolinensis (grey squirrel), having a
bushy tail and feeding on nuts, seeds, etc.
Because this isn't a normal security book, the introduction doesn't list impressive
facts and data proving the importance of mobile devices in this day and age. It also
doesn't explain how mobile application security is broken, and why a book like
this was sorely needed, and the authors don't thank their wives and friends
without whom the book wouldn't have been possible.
We do have a message to our readers however! The first rule of the OWASP
Mobile Security Testing Guide is: Don't just follow the OWASP Mobile Security
Testing Guide. True excellence at mobile application security requires a deep
understanding of mobile operating systems, coding, network security,
cryptography, and a whole lot of other things, many of which we can only touch
on briefly in this book. Don't stop at security testing. Write your own apps,
compile your own kernels, dissect mobile malware, learn how things tick. And
as you keep learning new things, consider contributing to the MSTG yourself!
Or, as they say: "Do a pull request".
Frontispiece
OWASP thanks the many authors, reviewers, and editors for their hard work in
developing this guide. If you have any comments or suggestions on the Mobile
Security Testing Guide, please join the discussion around MASVS and MSTG in
the OWASP Mobile Security Project Slack Channel. You can sign up for the
Slack channel at http://owasp.herokuapp.com/.
Copyright and License
Authors
Bernhard Mueller
Bernhard is a cyber security specialist with a talent for hacking systems of all
kinds. During more than a decade in the industry, he has published many zero-
day exploits for software such as MS SQL Server, Adobe Flash Player, IBM
Director, Cisco VOIP, and ModSecurity. If you can name it, he has probably
broken it at least once. BlackHat USA commended his pioneering work in
mobile security with a Pwnie Award for Best Research.
Sven Schleier
Sven is an experienced web and mobile penetration tester who has assessed
everything from historic Flash applications to progressive apps built on the
MEAN stack. He is also a security engineer who has supported many projects
end-to-end during the SDLC to "build security in". He has spoken at local and
international meetups and conferences and conducts hands-on workshops
about web application and mobile app security.
Co-Authors
Co-authors have consistently contributed quality content and have at least 2,000
additions logged in the GitHub repository.
Romuald Szkudlarek
Romuald is a passionate cyber security & privacy professional with over 15
years of experience in the web, mobile, IoT and cloud domains. During his
career, he has been dedicating his spare time to a variety of projects with the goal
of advancing the sectors of software and security. He is teaching regularly at
various institutions. He holds CISSP, CCSP, CSSLP, and CEH credentials.
Jeroen Willemsen
Jeroen is a full-stack developer specializing in IT security at Xebia with a
passion for mobile and risk management. Driven by a love for explaining
technical subjects, he began as a PHP teacher to undergrad students before
moving on to discussing security, risk management, and programming issues to
anyone willing to listen and learn.
Top Contributors
Top contributors have consistently contributed quality content and have at least
500 additions logged in the GitHub repository.
Pawel Rzepa
Francesco Stillavato
Andreas Happe
Alexander Anthuk
Henry Hoggard
Wen Bin Kong
Abdessamad Temmar
Bolot Kerimbaev
Slawomir Kosowski
Contributors
Contributors have contributed quality content and have at least 50 additions
logged in the GitHub repository.
Jin Kung Ong, Sjoerd Langkemper, Gerhard Wagner, Michael Helwig, Pece
Milosev, Denis Pilipchuk, Ryan Teoh, Dharshin De Silva, Anatoly Rosencrantz,
Abhinav Sejpal, Daniel Ramirez Martin, Claudio André, Enrico Verzegnassi,
Yogesh Sharma, Dominique Righetto, Raul Siles, Prathan Phongthiproek, Tom
Welch, Luander Ribeiro, Dario Incalza, Akanksha Bana, Oguzhan Topgul,
Carlos Holguera, David Fern, Pishu Mahtani, Anuruddha E.
Reviewers
Reviewers have consistently provided useful feedback through GitHub issues
and pull request comments.
Sjoerd Langkemper
Anant Shrivastava
Editors
Heaven Hodges
Caitlin Andrews
Nick Epson
Anita Diamond
Anna Szkudlarek
Others
Many other contributors have committed small amounts of content, such as a
single word or sentence (less than 50 additions). The full list of contributors is
available on GitHub:
https://github.com/OWASP/owasp-mstg/graphs/contributors
Older Versions
The Mobile Security Testing Guide was initiated by Milan Singh Thakur in
2015. The original document was hosted on Google Drive. Guide development
was moved to GitHub in October 2016.
This protection from injection and memory management issues doesn't mean that
app developers can get away with writing sloppy code. Following security best
practices results in hardened (secure) release builds that are resilient against
tampering. Free security features offered by compilers and mobile SDKs help
increase security and mitigate attacks.
2. The Android Testing Guide covers mobile security testing for the Android
platform, including security basics, security test cases, reverse engineering
techniques and prevention, and tampering techniques and prevention.
3. The iOS Testing Guide covers mobile security testing for the iOS platform,
including an overview of the iOS OS, security testing, reverse engineering,
and anti-reversing.
4. The Appendix contains additional technical test cases that are OS-
independent, such as authentication and session management, network
communications, and cryptography. We also include a methodology for
assessing software protection schemes.
Mobile App Taxonomy
The term "mobile app" refers to a self-contained computer program designed to
execute on a mobile device. Today, the Android and iOS operating systems
cumulatively comprise more than 99% of the mobile OS market share.
Additionally, mobile Internet usage has surpassed desktop usage for the first
time in history, making mobile browsing and apps the most widespread kind of
Internet-capable applications.
In this guide, we'll use the term "app" as a general term for referring to any
kind of application running on popular mobile OSes.
In a basic sense, apps are designed to run either directly on the platform for
which they're built, on top of a smart device's mobile browser, or using a
mix of the two. Throughout the following chapter, we will define the characteristics
that qualify an app for its respective place in the mobile app taxonomy and
discuss the differences between the variations.
Native App
Mobile operating systems, including Android and iOS, come with a Software
Development Kit (SDK) for developing applications specific to the OS. Such
applications are referred to as native to the system for which they have been
developed. When discussing an app, the general assumption is that it is a native
app implemented in a standard programming language for the respective
operating system - Objective-C or Swift for iOS, and Java or Kotlin for Android.
Native apps inherently have the capability to provide the fastest performance
with the highest degree of reliability. They usually adhere to platform-specific
design principles (e.g. the Android Design Principles), which tends to result in a
more consistent user interface (UI) compared to hybrid or web apps. Due to their
close integration with the operating system, native apps can directly access
almost every component of the device (camera, sensors, hardware-backed key
stores, etc.).
Some ambiguity exists when discussing native apps for Android as the platform
provides two development kits - the Android SDK and the Android NDK. The
SDK, which is based on the Java programming language, is the default for
developing apps. The NDK (or Native Development Kit) is a C/C++
development kit used for developing binary library components that can
directly access lower-level APIs (such as OpenGL). These libraries can be
included in regular apps built with the SDK. Therefore, we say that Android
native apps (i.e. built with the SDK) may have native code built with the NDK.
The most obvious downside of native apps is that they target only one specific
platform. To build the same app for both Android and iOS, one needs to
maintain two independent code bases, or introduce often complex development
tools to port a single code base to two platforms (e.g. Xamarin).
Web App
Mobile web apps (or simply, web apps) are websites designed to look and feel
like a native app. These apps run on top of a device’s browser and are usually
developed in HTML5, much like a modern webpage. Launcher icons may be
created to parallel the same feel of accessing a native app; however, these icons
are essentially the same as a browser bookmark, simply opening the default web
browser to load the referenced web page.
Web apps have limited integration with the general components of the device as
they run within the confines of a browser (i.e. they are “sandboxed”) and usually
lack in performance compared to native apps. Since web apps typically target
multiple platforms, their UIs do not follow some of the design principles of a
specific platform. The biggest advantage is reduced development and
maintenance costs associated with a single code base as well as enabling
developers to distribute updates without engaging the platform-specific app
stores. For example, a change to an app's HTML file can serve as a viable,
cross-platform update, whereas an update to a store-based app requires
considerably more effort.
Hybrid App
Hybrid apps attempt to fill the gap between native and web apps. A hybrid app
executes like a native app, but a majority of the processes rely on web
technologies, meaning a portion of the app runs in an embedded web browser
(commonly called “web view”). As such, hybrid apps inherit both pros and cons
of native and web apps.
Popular hybrid app frameworks include:
Apache Cordova
Framework 7
Ionic
jQuery Mobile
Native Script
Onsen UI
React Native
Sencha Touch
Throughout the guide, we use "mobile app security testing" as a catchall phrase
to refer to the evaluation of mobile app security via static and dynamic analysis.
Terms such as "mobile app penetration testing" and "mobile app security review"
are used somewhat inconsistently in the security industry, but these terms refer to
roughly the same thing. A mobile app security test is usually part of a larger
security assessment or penetration test that encompasses the client-server
architecture and server-side APIs used by the mobile app.
In this guide, we cover mobile app security testing in two contexts. The first is
the "classical" security test completed near the end of the development life cycle.
In this context, the tester accesses a nearly finished or production-ready version
of the app, identifies security issues, and writes a (usually devastating) report.
The other context is characterized by the implementation of requirements and
the automation of security tests from the beginning of the software development
life cycle onwards. The same basic requirements and test cases apply to both
contexts, but the high-level method and the level of client interaction differ.
Principles of Testing
White-box Testing versus Black-box Testing
Let's start by defining the concepts:
We strongly advise that you request the source code so that you can use the
testing time as efficiently as possible. The tester's code access obviously doesn't
simulate an external attack, but it simplifies the identification of vulnerabilities
by allowing the tester to verify every identified anomaly or suspicious behavior
at the code level. A white-box test is the way to go if the app hasn't been tested
before.
Even though decompiling on Android is straightforward, the source code may be
obfuscated, and de-obfuscating it can be so time-consuming that the effort
becomes impractical. Time constraints are therefore another reason for the tester
to have access to the source code.
Dynamic analysis involves examining the app from the outside while executing
it. This type of analysis can be manual or automatic. It usually doesn't provide
the information that static analysis provides, but it is a good way to detect
interesting elements (assets, features, entry points, etc.) from a user's point of
view. It may be similar to black-box testing. OWASP provides information about
Dynamic Analysis that may help you understand how to analyze apps.
Now that we have defined static and dynamic analysis, let's dive deeper.
Vulnerability Analysis
Vulnerability analysis is usually the process of looking for vulnerabilities in an
app. Although this may be done manually, automated scanners are usually used
to identify the main vulnerabilities. Static and dynamic analysis are types of
vulnerability analysis.
Static Analysis
During static analysis, the mobile app's source code is analyzed to ensure
appropriate implementation of security controls. In most cases, a hybrid
automatic/manual approach is used. Automatic scans catch the low-hanging
fruit, and the human tester can explore the code base with specific usage
contexts in mind.
In contrast to automatic code analysis, manual code review is very good for
identifying vulnerabilities in the business logic, standards violations, and design
flaws, especially when the code is technically secure but logically flawed. Such
scenarios are unlikely to be detected by any automatic code analysis tool.
A manual code review requires an expert code reviewer who is proficient in both
the language and the frameworks used for the mobile application. Full code
review can be a slow, tedious, time-consuming process for the reviewer,
especially given large code bases with many dependencies.
Automatic Code Analysis
Automated analysis tools can be used to speed up the review process of Static
Application Security Testing (SAST). They check the source code for
compliance with a predefined set of rules or industry best practices, then
typically display a list of findings or warnings and flags for all detected
violations. Some static analysis tools run against the compiled app only, some
must be fed the original source code, and some run as live-analysis plugins in the
Integrated Development Environment (IDE).
Although some static code analysis tools incorporate a lot of information about
the rules and semantics required to analyze mobile apps, they may produce many
false positives, particularly if they are not configured for the target environment.
A security professional must therefore always review the results.
The chapter "Testing tools", at the end of this book, includes a list of static
analysis tools.
Dynamic Analysis
The focus of dynamic analysis (also called DAST, or Dynamic Application
Security Testing) is the testing and evaluation of apps via their real-time
execution. The main objective of dynamic analysis is finding security
vulnerabilities or weak spots in a program while it is running. Dynamic analysis
is conducted both at the mobile platform layer and against the back-end services
and APIs, where the mobile app's request and response patterns can be analyzed.
Dynamic analysis is usually used to check for security mechanisms that provide
sufficient protection against the most prevalent types of attack, such as
disclosure of data in transit, authentication and authorization issues, and server
configuration errors.
The ability to entice the logged-in user to open a malicious link in the web
browser used to access the vulnerable site.
The client (browser) must automatically add the session cookie or other
authentication token to the request.
Mobile apps don't fulfill these requirements: even if Webviews and cookie-based
session management are used, any malicious link the user clicks opens in the
default browser, which has a separate cookie store.
Stored Cross-Site Scripting (XSS) can be an issue if the app includes Webviews,
and it may even lead to command execution if the app exports JavaScript
interfaces. However, reflected cross-site scripting is rarely an issue for the reason
mentioned above (even though whether such vulnerabilities should exist at all is
arguable; escaping output is simply a best practice).
In any case, consider exploit scenarios when you perform the risk assessment;
don't blindly trust your scanning tool's output.
Preparation
The security level at which the app will be tested must be decided before testing.
The security requirements should be decided at the beginning of the project.
Different organizations have different security needs and resources available for
investing in test activities. Although the controls in MASVS Level 1 (L1) are
applicable to all mobile apps, walking through the entire checklist of L1 and
Level 2 (L2) MASVS controls with technical and business stakeholders is a
good way to decide on a level of test coverage.
All involved parties must agree on the decisions and the scope in the checklist
because these will define the baseline for all security testing.
Depending on the scope of the engagement, this approach may not be possible.
Requesting both production and debug builds for a white-box test will help you
complete all test cases and clearly state the app's security maturity. The client
may prefer that black-box tests be focused on the production app and the
evaluation of its security controls' effectiveness.
The scope of both types of testing should be discussed during the preparation
phase. For example, whether the security controls should be adjusted should be
decided before testing. Additional topics are discussed below.
There are three general states from which data may be accessible:
Intelligence Gathering
The organization's goals for the app. Functionality shapes users' interaction
with the app and may make some surfaces more likely than others to be
targeted by attackers.
The relevant industry. Different industries may have different risk profiles.
Stakeholders and investors; understanding who is interested in and
responsible for the app.
Internal processes, workflows, and organizational structures. Organization-
specific internal processes and workflows may create opportunities for
business logic exploits.
Architectural Information
The mobile app: How the app accesses data and manages it in-process,
how it communicates with other resources and manages user sessions, and
whether it detects itself running on jailbroken or rooted phones and reacts to
these situations.
The Operating System: The operating systems and OS versions the app
runs on (including Android or iOS version restrictions), whether the app is
expected to run on devices that have Mobile Device Management (MDM)
controls, and relevant OS vulnerabilities.
Network: Usage of secure transport protocols (e.g., TLS), usage of strong
keys and cryptographic algorithms (e.g., SHA-2) to secure network traffic
encryption, usage of certificate pinning to verify the endpoint, etc.
Remote Services: The remote services the app consumes and whether their
being compromised could compromise the client.
Once the security tester has information about the app and its context, the next
step is mapping the app's structure and content, e.g., identifying its entry points,
features, and data.
Exploitation
Unfortunately, time or financial constraints limit many pentests to application
mapping via automated scanners (for vulnerability analysis, for example).
Although vulnerabilities identified during the previous phase may be interesting,
their relevance must be confirmed with respect to five axes:
Damage potential - the damage that can result from exploiting the
vulnerability
Reproducibility - ease of reproducing the attack
Exploitability - ease of executing the attack
Affected users - the number of users affected by the attack
Discoverability - ease of discovering the vulnerability
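These five axes make up the DREAD classification. As a minimal illustration, risk is often computed by rating each axis on a 0-10 scale and averaging; note that this scale and the helper below are a common convention, not something prescribed by this guide:

```python
# Hypothetical DREAD scoring helper: each axis is rated 0-10 and the
# overall risk is the arithmetic mean. Names and scale are illustrative.
DREAD_AXES = ("damage", "reproducibility", "exploitability",
              "affected_users", "discoverability")

def dread_score(ratings):
    """Return the mean of the five DREAD axis ratings (0-10 each)."""
    missing = [axis for axis in DREAD_AXES if axis not in ratings]
    if missing:
        raise ValueError("missing axes: {}".format(missing))
    return sum(ratings[axis] for axis in DREAD_AXES) / len(DREAD_AXES)

# Example finding: easy to reproduce and exploit, wide impact.
finding = {"damage": 8, "reproducibility": 10, "exploitability": 7,
           "affected_users": 10, "discoverability": 9}
print(dread_score(finding))  # 8.8
```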
Contrary to expectations, some vulnerabilities may not be exploitable and may
lead to only minor compromises, if any. Other vulnerabilities may seem harmless
at first sight, yet prove very dangerous under realistic test conditions. Testers
who carefully go through the exploitation phase support pentesting by
characterizing vulnerabilities and their effects.
Reporting
The security tester's findings will be valuable to the client only if they are clearly
documented. A good pentest report should include information such as, but not
limited to, the following:
an executive summary
a description of the scope and context (e.g., targeted systems)
methods used
sources of information (either provided by the client or discovered during
the pentest)
prioritized findings (e.g., vulnerabilities that have been structured by
DREAD classification)
detailed findings
recommendations for fixing each defect
Many pentest report templates are available on the internet: Google is your
friend!
SDLC Overview
General Description of SDLC
SDLCs always consist of the same steps (the overall process is sequential in the
Waterfall paradigm and iterative in the Agile paradigm):
Based on the project's general risk profile, you may simplify (or even skip) some
artifacts, and you may add others (formal intermediary approvals, formal
documentation of certain points, etc.). Always remember two things: an SDLC
is meant to reduce risks associated with software development, and it is a
framework that helps you set up controls to that end. This is a generic
description of the SDLC; always tailor this framework to your projects.
Test strategies specify the tests that will be performed during the SDLC as well
as testing frequency. Test strategies are used to make sure that the final software
product meets security objectives, which are generally determined by clients'
legal/marketing/corporate teams. The test strategy is usually created during the
Secure Design phase, after risks have been clarified (during the Initiation phase)
and before code development (the Secure Implementation phase) begins. The
strategy requires input from activities such as Risk Management, previous
Threat Modeling, and Security Engineering.
Basically, a secure SDLC doesn't mandate the use of any particular development
life cycle: it is safe to say that security can (and must!) be addressed in any situation.
Waterfall methodologies were popular before the 21st century. The most famous
application is called the "V model," in which phases are performed in sequence
and you can backtrack only a single step. The testing activities of this model
occur in sequence and are performed as a whole, mostly at the point in the life
cycle when most of the app development is complete. This activity sequence
means that changing the architecture and other factors that were set up at the
beginning of the project is hardly possible even though code may be changed
after defects have been identified.
Security Testing for Agile/DevOps and DevSecOps
DevOps refers to practices that focus on a close collaboration between all
stakeholders involved in software development (generally called Devs) and
operations (generally called Ops). DevOps is not about merging Devs and Ops.
Development and operations teams originally worked in silos, and pushing
developed software to production could take a significant amount of time. When
development teams adopted Agile and needed to deliver to production more
frequently, operations teams had to speed up to match the pace. DevOps
is the necessary evolution of the solution to that challenge in that it allows
software to be released to users more quickly. This is largely accomplished via
extensive build automation, the process of testing and releasing software, and
infrastructure changes (in addition to the collaboration aspect of DevOps). This
automation is embodied in the deployment pipeline with the concepts of
Continuous Integration and Continuous Delivery (CI/CD).
People may assume that the term "DevOps" represents collaboration between
development and operations teams only, however, as DevOps thought leader
Gene Kim puts it: "At first blush, it seems as though the problems are just
between dev and ops, but test is in there, and you have information security
objectives, and the need to protect systems and data. These are top-level
concerns of management, and they have become part of the DevOps picture."
However, DevSecOps is not just a linear process oriented towards delivering the
best possible software to operations; it is also a mandate that operations closely
monitor software that's in production to identify issues and fix them by forming
a quick and efficient feedback loop with development. DevSecOps is a process
through which Continuous Improvement is heavily emphasized.
The human aspect of this emphasis is reflected in the creation of cross-functional
teams that work together to achieve business outcomes. This section is focused
on necessary interactions and integrating security into the development life cycle
(which starts with project inception and ends with the delivery of value to users).
What Agile and DevSecOps Are and How Testing Activities Are Arranged
Overview
The following sections provide more details about these three points.
Infrastructure as Code
Devs better understand infrastructure from a familiar point of view and can
prepare resources that the running application will require.
Ops operate an environment that better suits the application, and they share
a language with Devs.
The main tools in this domain are Puppet, Terraform, Chef and Ansible.
Deployment
The deployment pipeline's sophistication depends on the maturity of the project
organization or development team. In its simplest form, the deployment pipeline
consists of a commit phase. The commit phase usually involves running simple
compiler checks and the unit test suite as well as creating a deployable artifact of
the application. A release candidate is the latest version that has been checked
into the trunk of the version control system. Release candidates are evaluated by
the deployment pipeline for conformity to standards they must fulfil for
deployment to production.
Security
At this point, the big question is: now that other activities required for delivering
code are completed significantly faster and more effectively, how can security
keep up? How can we maintain an appropriate level of security? Delivering
value to users more often with decreased security would definitely not be good!
Once again, the answer is automation and tooling: by implementing these two
concepts throughout the project life cycle, you can maintain and improve
security. The higher the expected level of security, the more controls and
checkpoints will be put in place. The following are examples:
Static Application Security Testing can take place during the development
phase, and it can be integrated into the Continuous Integration process with
more or less emphasis on scan results. You can establish more or less
demanding Secure Coding Rules and use SAST tools to check the
effectiveness of their implementation.
Dynamic Application Security Testing may be automatically performed
after the application has been built (e.g., after Continuous Integration has
taken place) and before delivery, again, with more or less emphasis on
results.
You can add manual validation checkpoints between consecutive phases,
for example, between delivery and deployment.
Scanning should take place regularly (at both the infrastructure and
application level).
Pentesting may take place regularly. (The version of the application used in
production is the version that should be pentested, and the testing should
take place in a dedicated environment and include data that's similar to the
production version data. See the section on Penetration Testing for more
details.)
Active monitoring should be performed to identify issues and remediate
them as soon as possible via the feedback loop.
Testing Code Quality
Mobile app developers use a wide variety of programming languages and
frameworks. As such, common vulnerabilities such as SQL injection, buffer
overflows, and cross-site scripting (XSS) may manifest in apps when secure
programming practices are neglected.
The same programming flaws may affect both Android and iOS apps to some
degree, so we'll provide an overview of the most common vulnerability classes
in the general section of the guide. In later sections, we will cover
OS-specific instances and exploit mitigation features.
Injection Flaws
An injection flaw describes a class of security vulnerability occurring when user
input is inserted into back-end queries or commands. By injecting
metacharacters, an attacker can execute malicious code that is inadvertently
interpreted as part of the command or query. For example, by manipulating a
SQL query, an attacker could retrieve arbitrary database records or manipulate
the content of the back-end database.
For example, while an app might query a local SQLite database, such databases
usually do not store sensitive data (assuming the developer followed basic
security practices). This makes SQL injection a non-viable attack vector.
Nevertheless, exploitable injection vulnerabilities sometimes occur, meaning
proper input validation is a necessary best practice for programmers.
SQL Injection
A SQL injection attack involves integrating SQL commands into input data,
mimicking the syntax of a predefined SQL command. A successful SQL
injection attack allows the attacker to read or write to the database and possibly
execute administrative commands, depending on the permissions granted by the
server.
Apps on both Android and iOS use SQLite databases as a means to control and
organize local data storage. Assume an Android app handles local user
authentication by storing the user credentials in a local database (a poor
programming practice we’ll overlook for the sake of this example). Upon login,
the app queries the database to search for a record with the username and
password entered by the user:
SQLiteDatabase db;
String sql = "SELECT * FROM users WHERE username = '" + username +
    "' AND password = '" + password + "'";
Cursor c = db.rawQuery(sql, null);
return c.getCount() != 0;
Let's further assume an attacker enters the following values into the "username"
and "password" fields:
username = 1' or '1' = '1
password = 1' or '1' = '1
Because the condition '1' = '1' always evaluates as true, this query returns all
records in the database, causing the login function to return "true" even though
no valid user account was entered.
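The flaw and its fix can be reproduced with Python's built-in sqlite3 module (a hedged illustration outside Android; the table and data are made up for the demo). The concatenated query matches every row because of the injected tautology, while a parameterized query treats the payload as a literal string:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# The attacker-controlled "credentials" from the example above.
username = password = "1' or '1' = '1"

# Vulnerable: user input is concatenated directly into the SQL statement.
query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
         % (username, password))
rows_vuln = db.execute(query).fetchall()
print(len(rows_vuln))  # 1 -- the tautology matches every record, "login" succeeds

# Safe: placeholders pass the input as data, never as SQL syntax.
rows_safe = db.execute(
    "SELECT * FROM users WHERE username = ? AND password = ?",
    (username, password)).fetchall()
print(len(rows_safe))  # 0 -- no account has that literal username/password
```

On Android, the equivalent fix is passing the user input through the selectionArgs parameter of rawQuery instead of concatenating it into the SQL string.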
The following script drives adb through a small Flask app to probe the content
provider of the Yahoo Weather Android app (visible in the URI) for blind SQL
injection via the sort parameter:

import subprocess
from flask import Flask, request

app = Flask(__name__)
URI = "com.yahoo.mobile.client.android.weather.provider.Weather/locations/"

@app.route("/")
def hello():
    method = request.values['method']
    sort = request.values['sort']
    # Inject a boolean condition into the sort clause (blind SQL injection).
    sort = "_id/**/limit/**/(SELECT/**/1/**/FROM/**/sqlite_master/**/WHERE/**/1={})".format(sort)
    #sort = "_id/**/limit/**/({})".format(sort)
    p = subprocess.Popen(
        ["adb", "shell", "content", method,
         "--uri", "content://{}".format(URI),
         "--sort", '"{}"'.format(sort)],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    o, e = p.communicate()
    print("[*]SORT:{}".format(sort))
    print("[*]OUTPUT:{}".format(o))
    return "<html><div class='output'>{}</div></html>".format(o)

if __name__ == "__main__":
    app.run()
XML Injection
In an XML injection attack, the attacker injects XML metacharacters to
structurally alter XML content. This can be used either to compromise the logic
of an XML-based application or service, or to exploit the operation of the
XML parser processing the content.
A classic example is an external entity definition that references the local file
/dev/random, which returns an endless stream of bytes when opened, potentially
causing a denial of service in the parsing application.
There are two general approaches to identifying injection flaws during a
source code review:
Identifying possible entry points for untrusted input, then tracing from those
locations to see whether the destination contains potentially vulnerable functions.
Identifying known, dangerous library / API calls (e.g. SQL queries) and
then checking whether unchecked input successfully interfaces with the
respective queries.
During a manual security review, you should employ a combination of both
techniques. In general, untrusted inputs enter mobile apps through the following
channels:
IPC calls
Custom URL schemes
QR codes
Input files received via Bluetooth, NFC, or other means
Pasteboards
User interface
We will cover details related to input sources and potentially vulnerable APIs for
each mobile OS in the OS-specific testing guides.
Memory Corruption Bugs
Android apps are, for the most part, implemented in Java, which is inherently
safe from memory corruption issues by design. However, apps utilizing native
libraries via JNI are susceptible to this kind of bug.
To identify potential buffer overflows, look for uses of unsafe string functions
( strcpy , strcat , other functions beginning with the “str” prefix, etc.) and
potentially vulnerable programming constructs, such as copying user input into a
limited-size buffer. The following should be considered red flags for unsafe
string functions:
- `strcat`
- `strcpy`
- `strncat`
- `strlcat`
- `strncpy`
- `strlcpy`
- `sprintf`
- `snprintf`
- `gets`
When using integer variables for array indexing, buffer length calculations,
or any other security-critical operation, verify that unsigned integer types
are used and that precondition tests are performed to prevent the
possibility of integer wrapping.
To prevent memory corruption, verify the following best practices:
The app does not use unsafe string functions such as strcpy, most other
functions beginning with the "str" prefix, sprintf, vsprintf, gets, etc.;
If the app contains C++ code, ANSI C++ string classes are used;
iOS apps written in Objective-C should use the NSString class. C code on iOS
should use CFString, the Core Foundation representation of a string.
No untrusted data is concatenated into format strings.
Static Analysis
Static code analysis of low-level code is a complex topic that could easily fill its
own book. Automated tools such as RATS combined with limited manual
inspection efforts are usually sufficient to identify low-hanging fruits. However,
memory corruption conditions often stem from complex causes. For example, a
use-after-free bug may actually be the result of an intricate, counter-intuitive race
condition not immediately apparent. Bugs manifesting from deep instances of
overlooked code deficiencies are generally discovered through dynamic analysis
or by testers who invest time to gain a deep understanding of the program.
Dynamic Analysis
Memory corruption bugs are best discovered via input fuzzing: an automated
black-box software testing technique in which malformed data is continually
sent to an app to survey for potential vulnerability conditions. During this
process, the application is monitored for malfunctions and crashes. Should a
crash occur, the hope (at least for security testers) is that the conditions creating
the crash reveal an exploitable security flaw.
Fuzz testing techniques or scripts (often called "fuzzers") will typically generate
multiple instances of structured input in a semi-correct fashion. Essentially, the
values or arguments generated are at least partially accepted by the target
application, yet also contain invalid elements, potentially triggering input
processing flaws and unexpected program behaviors. A good fuzzer exposes a
substantial amount of possible program execution paths (i.e. high coverage
output). Inputs are either generated from scratch ("generation-based") or derived
from mutating known, valid input data ("mutation-based").
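A mutation-based fuzzer can be sketched in a few lines (a toy illustration of the principle only; the parse_record target is a made-up stand-in for real input-processing code, and an exception stands in for a crash):

```python
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Derive a test case by flipping a few random bytes of a valid input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] ^= random.randrange(1, 256)
    return bytes(data)

def parse_record(data: bytes) -> bytes:
    """Hypothetical target: the first byte is a length prefix for the payload."""
    length = data[0]
    if length > len(data) - 1:
        raise ValueError("length prefix exceeds buffer")  # stand-in for a crash
    return data[1:1 + length]

seed = bytes([4]) + b"ABCD"   # known-valid input: length 4, payload "ABCD"
crashes = []
for _ in range(1000):
    case = mutate(seed)
    try:
        parse_record(case)
    except Exception:
        crashes.append(case)  # record inputs that trigger malfunctions

print(f"{len(crashes)} crashing inputs found")
```

Because mutated inputs stay close to a valid seed, most of them pass the early parsing stages and exercise deeper code paths, which is exactly the "semi-correct" property described above.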
In the context of native apps, XSS risks are far less prevalent for the simple
reason that these kinds of applications do not rely on a web browser. However,
apps using WebView components, such as UIWebView on iOS and WebView on
Android, are potentially vulnerable to such attacks.
An older but well-known example is the local XSS issue in the Skype app for
iOS, first identified by Phil Purviance. The Skype app failed to properly encode
the name of the message sender, allowing an attacker to inject malicious
JavaScript to be executed when a user views the message. In his proof-of-
concept, Phil showed how to exploit the issue and steal a user's address book.
Static Analysis
Take a close look at any WebViews present and investigate for untrusted input
rendered by the app.
XSS issues may exist if the URL opened by WebView is partially determined by
user input. The following example is from an XSS issue in the Zoho Web
Service, reported by Linus Särud.
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
    if (url.substring(0, 11).equalsIgnoreCase("yourscheme:")) {
        // parse the URL object and execute functions
    }
    return false;
}
Sergey Bobrov was able to take advantage of this in the following HackerOne
report. Any input to the html parameter would be trusted in Quora's
ActionBarContentActivity. Payloads were successful using adb, clipboard data
via ModalContentActivity, and Intents from 3rd party applications.
ADB
adb shell
am start -n com.quora.android/com.quora.android.ActionBarContentActivity -e url 'http://test/test' -e html 'XSS<script>alert(123)</script>'
Clipboard Data
am start -n com.quora.android/com.quora.android.ModalContentActivity -e url 'http://test/test' -e html '<script>alert(QuoraAndroid.getClipboardData());</script>'
3rd party Intent
Character  Escaped
&          &amp;
<          &lt;
>          &gt;
"          &quot;
'          &#x27;
/          &#x2F;
For a comprehensive list of escaping rules and other prevention measures, refer
to the OWASP XSS Prevention Cheat Sheet.
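Most languages provide a library routine implementing these rules. In Python, for instance, the standard library's html.escape covers all of the characters above except the forward slash (a quick illustration):

```python
import html

# Each metacharacter is translated into its HTML entity.
samples = {
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#x27;",
}
for raw, expected in samples.items():
    assert html.escape(raw) == expected

# A typical payload is neutralized before being rendered into HTML:
print(html.escape("<script>alert(123)</script>"))
# &lt;script&gt;alert(123)&lt;/script&gt;
```

Note that html.escape does not escape "/", so consult the cheat sheet for contexts where that character matters.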
Dynamic Analysis
XSS issues can be best detected using manual and/or automated input fuzzing,
i.e. injecting HTML tags and special characters into all available input fields to
verify the web application denies invalid inputs or escapes the HTML meta-
characters in its output.
A reflected XSS attack refers to an exploit where malicious code is injected via
a malicious link. To test for these attacks, automated input fuzzing is considered
an effective method. For example, the Burp Scanner is highly effective in
identifying reflected XSS vulnerabilities. As always with automated analysis,
ensure all input vectors are covered with a manual review of testing parameters.
References
CWE
CWE-20 - Improper Input Validation
Key Concepts
The goal of cryptography is to provide constant confidentiality, data integrity,
and authenticity, even in the face of an attack. Confidentiality involves ensuring
data privacy through the use of encryption. Data integrity deals with data
consistency and detection of tampering and modification of data. Authenticity
ensures that the data comes from a trusted source.
Encryption algorithms convert plaintext data into cipher text that conceals the
original content. Plaintext data can be restored from the cipher text through
decryption. Encryption can be symmetric (secret-key encryption) or
asymmetric (public-key encryption). In general, encryption operations do not
protect integrity, but some symmetric encryption modes also feature that
protection.
Symmetric-key encryption algorithms use the same key for both encryption
and decryption. This type of encryption is fast and suitable for bulk data
processing. Since everybody who has access to the key is able to decrypt the
encrypted content, this method requires careful key management. Public-key
encryption algorithms operate with two separate keys: the public key and the
private key. The public key can be distributed freely while the private key
shouldn't be shared with anyone. A message encrypted with the public key can
only be decrypted with the private key. Since asymmetric encryption is several
times slower than symmetric operations, it's typically only used to encrypt small
amounts of data, such as symmetric keys for bulk encryption.
Hashing isn't a form of encryption, but it does use cryptography. Hash functions
deterministically map arbitrary pieces of data into fixed-length values. It's easy
to compute the hash from the input, but very difficult (i.e. infeasible) to
determine the original input from the hash. Hash functions are used for integrity
verification, but don't provide an authenticity guarantee.
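For example, an integrity check with SHA-256 from Python's hashlib: any modification of the input changes the digest, but the digest alone cannot tell you who produced the data (the message shown is made up for the demo):

```python
import hashlib

original = b"transfer 100 EUR to account 12345"
digest = hashlib.sha256(original).hexdigest()

# Any tampering with the message produces a different digest.
tampered = b"transfer 900 EUR to account 12345"
assert hashlib.sha256(tampered).hexdigest() != digest

# Deterministic: the same input always maps to the same fixed-length value.
assert hashlib.sha256(b"abc").hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")
```

To also get authenticity, the hash must be combined with a secret, e.g. via an HMAC.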
Verify that cryptographic algorithms are up to date and in-line with industry
standards. Vulnerable algorithms include outdated block ciphers (such as DES),
stream ciphers (such as RC4), hash functions (such as MD5), and broken random
number generators (such as Dual_EC_DRBG). Note that even algorithms that
are certified (for example, by NIST) can become insecure over time. A
certification does not replace periodic verification of an algorithm's soundness.
Algorithms with known weaknesses should be replaced with more secure
alternatives.
Additionally, you should always rely on secure hardware (if available) for
storing encryption keys, performing cryptographic operations, etc.
For more information on algorithm choice and best practices, see resources
such as the NIST and BSI recommendations on cryptographic algorithms and key lengths.
First, ensure that no keys or passwords are stored within the source code. Note
that hard-coded keys are problematic even if the source code is obfuscated since
obfuscation is easily bypassed by dynamic instrumentation.
If the app is using two-way SSL (both server and client certificates are
validated), make sure that the client certificate and its corresponding private
key are stored securely.
If the app relies on an additional encrypted container stored in app data, check
how the encryption key is used. If a key-wrapping scheme is used, ensure that
the master secret is initialized for each user or that the container is re-encrypted
with a new key. If you can use the master secret or a previous password to decrypt
the container, check how password changes are handled.
A user-supplied password shouldn't be used directly as an encryption key, for
the following reasons:
If the password is smaller than the key, the full key space isn't used. The
remaining space is padded (spaces are sometimes used for padding).
A user-supplied password will realistically consist mostly of displayable
and pronounceable characters. Therefore, only some of the possible 256
ASCII characters are used and entropy is decreased by approximately a
factor of four.
Ensure that passwords aren't directly passed into an encryption function. Instead,
the user-supplied password should be passed into a KDF to create a
cryptographic key. Choose an appropriate iteration count when using password
derivation functions. For example, NIST recommends an iteration count of at
least 10,000 for PBKDF2.
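The recommendation translates directly to code. A minimal sketch using Python's built-in PBKDF2 (the password and parameters are illustrative):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # unique random salt per user

# Derive a 256-bit key; the iteration count slows down offline guessing.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 10_000, 32)
assert len(key) == 32

# The same password and salt always yield the same key...
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 10_000, 32)
# ...while a different salt yields an unrelated key.
assert key != hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 10_000, 32)
```

The derived key (not the raw password) is then handed to the encryption routine.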
Carefully inspect all the cryptographic methods used within the source code,
especially those that are directly applied to sensitive data. All cryptographic
operations should use standard cryptographic APIs for Android and iOS (we'll
write about those in more detail in the platform-specific chapters). Any
cryptographic operations that don't invoke standard routines from known
providers should be closely inspected. Pay close attention to standard algorithms
that have been modified. Remember that encoding isn't the same as encryption!
Always investigate further when you find bit manipulation operators like XOR
(exclusive OR).
ECB (Electronic Codebook) divides the input into fixed-size blocks that are
encrypted separately using the same key. If multiple divided blocks contain the
same plaintext, they will be encrypted into identical ciphertext blocks, which
makes patterns in data easier to identify. In some situations, an attacker might
also be able to replay the encrypted data.
Verify that Cipher Block Chaining (CBC) mode is used instead of ECB. In CBC
mode, plaintext blocks are XORed with the previous ciphertext block. This
ensures that each encrypted block is unique and randomized even if blocks
contain the same information.
When storing encrypted data, we recommend using a block mode that also
protects the integrity of the stored data, such as Galois/Counter Mode (GCM).
GCM has the additional benefit that the algorithm is mandatory for every
TLSv1.2 implementation, and thus is available on all modern platforms.
For more information on effective block modes, see the NIST guidelines on
block mode selection.
CBC mode requires the first plaintext block to be combined with an initialization
vector (IV). The IV doesn't have to be kept secret, but it shouldn't be predictable.
Make sure that IVs are generated using a cryptographically-secure random
number generator. For more information on IVs, see Crypto Fail's initialization
vectors article.
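In Python, for instance, the secrets module provides a CSPRNG suitable for IV generation (a minimal sketch; the 16-byte size assumes an AES block):

```python
import secrets

AES_BLOCK_SIZE = 16  # bytes; a CBC IV must match the cipher's block size

# Generate an unpredictable IV from the OS CSPRNG.
iv = secrets.token_bytes(AES_BLOCK_SIZE)
assert len(iv) == AES_BLOCK_SIZE

# A fresh IV must be generated for every encryption operation;
# two calls are (overwhelmingly) never equal.
assert secrets.token_bytes(AES_BLOCK_SIZE) != iv
```

The IV is then stored or transmitted alongside the ciphertext; only its unpredictability, not its secrecy, matters.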
Cryptographic APIs on Android and iOS
While the same basic cryptographic principles apply independent of the particular
OS, each operating system offers its own implementation and APIs. Platform-
specific cryptographic APIs for data storage are covered in greater detail in the
Testing Data Storage on Android and Testing Data Storage on iOS chapters.
Encryption of network traffic, especially Transport Layer Security (TLS), is
covered in the Testing Network Communication chapter.
References
OWASP Mobile Top 10
M6 - Broken Cryptography
OWASP MASVS
V3.1: "The app does not rely on symmetric cryptography with hardcoded
keys as a sole method of encryption."
V3.2: "The app uses proven implementations of cryptographic primitives."
V3.3: "The app uses cryptographic primitives that are appropriate for the
particular use-case, configured with parameters that adhere to industry best
practices."
V3.4: "The app does not use cryptographic protocols or algorithms that are
widely considered deprecated for security purposes."
CWE
Most mobile apps implement some kind of user authentication. Even though part
of the authentication and state management logic is performed by the back end
service, authentication is such an integral part of most mobile app architectures
that understanding its common implementations is important.
Since the basic concepts are identical on iOS and Android, we'll discuss
prevalent authentication and authorization architectures and pitfalls in this
generic guide. OS-specific authentication issues, such as local and biometric
authentication, will be discussed in the respective OS-specific chapters.
You can also use the OWASP Mobile AppSec Verification Standard as a
guideline. The MASVS defines baseline authentication requirements for
non-critical apps ("Level 1"); for sensitive apps ("Level 2"), it adds the following:
A second factor of authentication exists at the remote endpoint and the 2FA
requirement is consistently enforced.
Step-up authentication is required to enable actions that deal with sensitive
data or transactions.
Supplementary Authentication
Authentication schemes are sometimes supplemented by passive contextual
authentication, which can incorporate:
Geolocation
IP address
Time of day
Testing Authentication
When testing authentication and authorization, verify that every check is
enforced at the remote endpoint, not just inside the client.
Consider the following example from the OWASP Web Testing Guide. In the
example, a web resource is accessed through a URL, and the authentication state
is passed through a GET parameter:
http://www.site.com/page.asp?authenticated=no
The client can arbitrarily change the GET parameters sent with the request.
Nothing prevents the client from simply changing the value of the
authenticated parameter to "yes," effectively bypassing authentication.
Although this is a simplistic example that you probably won't find in the wild,
programmers sometimes rely on "hidden" client-side parameters, such as
cookies, to maintain authentication state. They assume that these parameters
can't be tampered with. Consider, for example, the following classic
vulnerability in Nortel Contact Center Manager. The administrative web
application of Nortel's appliance relied on the cookie "isAdmin" to determine
whether the logged-in user should be granted administrative privileges.
Consequently, it was possible to get admin access by simply setting the cookie
value as follows:
isAdmin=True
Password Complexity
The password must meet at least three out of the following four complexity
rules:
at least one uppercase character (A-Z)
at least one lowercase character (a-z)
at least one digit (0-9)
at least one special character
Regular Expressions are often used to enforce password rules. For example, the
JavaScript implementation by NowSecure uses regular expressions to test the
password for various characteristics, such as length and character type. The
following is an excerpt of the code:
function(password) {
    if (password.length < owasp.configs.minLength) {
        return 'The password must be at least ' + owasp.configs.minLength + ' characters long.';
    }
},

function(password) {
    if (!/[a-z]/.test(password)) {
        return 'The password must contain at least one lowercase letter.';
    }
},
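The same kind of checks can be sketched in Python. The following is an illustrative "three out of four" rule; the threshold and minimum length are assumptions for the demo, not values from the library above:

```python
import re

# One pattern per complexity rule: lowercase, uppercase, digit, special.
RULES = [
    re.compile(r"[a-z]"),
    re.compile(r"[A-Z]"),
    re.compile(r"[0-9]"),
    re.compile(r"[^a-zA-Z0-9]"),
]

def password_ok(password: str, min_length: int = 10) -> bool:
    """Accept passwords meeting the length rule plus 3 of 4 complexity rules."""
    if len(password) < min_length:
        return False
    return sum(1 for rule in RULES if rule.search(password)) >= 3

print(password_ok("P@ssword1"))    # False: shorter than 10 characters
print(password_ok("P@ssword123"))  # True: length ok, all four classes present
```

Remember that such checks belong on the server; client-side enforcement alone can be bypassed.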
For more details, check the OWASP Authentication Cheat Sheet. zxcvbn is a
common library for estimating password strength; it is available for many
programming languages.
When you launch the attack in Burp Intruder, a new window will open. Site
requests are sent sequentially, each request corresponding to a password from
the list. Information about the response (length, status code, ...) is provided for
each request, allowing you to distinguish successful and unsuccessful attempts:
In this example, you can identify the successful attempt by length (password =
"P@ssword1").
Tip: Append the correct password of your test account to the end of the
password list. The list shouldn't have more than 25 passwords. If you can
complete the attack without locking the account, that means the account isn't
protected against brute force attacks.
Login Throttling
Check the source code for a throttling procedure: a counter for logins attempted
in a short period of time with a given user name and a method to prevent login
attempts after the maximum number of attempts has been reached. After an
authorized login attempt, the error counter should be reset.
Additional brute force mitigation techniques are described on the OWASP page
Blocking Brute Force Attacks.
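The throttling procedure described above can be sketched as a simple counter (an in-memory toy; a real implementation needs persistent, per-user state enforced server-side, and the constants are illustrative):

```python
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 300

failed = {}  # username -> (attempt_count, first_failure_timestamp)

def allow_login_attempt(username, now=None):
    now = time.time() if now is None else now
    count, first = failed.get(username, (0, now))
    if now - first > LOCKOUT_SECONDS:  # window expired, reset the counter
        failed.pop(username, None)
        return True
    return count < MAX_ATTEMPTS

def record_failure(username, now=None):
    now = time.time() if now is None else now
    count, first = failed.get(username, (0, now))
    failed[username] = (count + 1, first)

def record_success(username):
    failed.pop(username, None)  # reset after an authorized login

for _ in range(5):
    record_failure("bob")
print(allow_login_attempt("bob"))  # False: locked out after 5 failures
record_success("bob")
print(allow_login_attempt("bob"))  # True: counter was reset
```

An exponential back-off (doubling the lockout on each repeated failure window) is a common refinement.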
When OTP authentication is used, consider that most OTPs are short numeric
values. An attacker can bypass the second factor by brute-forcing the value
space within the lifespan of the OTP if the account isn't locked after N
unsuccessful attempts at this stage. The probability of finding a match for 6-digit
values with a 30-second time step within 72 hours is more than 90%.
1. The app sends a request with the user's credentials to the backend server.
2. The server verifies the credentials. If the credentials are valid, the server
creates a new session along with a random session ID.
3. The server sends to the client a response that includes the session ID.
4. The client sends the session ID with all subsequent requests. The server
validates the session ID and retrieves the associated session record.
5. After the user logs out, the server-side session record is destroyed and the
client discards the session ID.
When sessions are improperly managed, they are vulnerable to a variety of
attacks that may compromise the session of a legitimate user, allowing the
attacker to impersonate the user. This may result in lost data, compromised
confidentiality, and illegitimate actions.
Rather than implementing session handling from scratch, use the session
handlers built into popular frameworks, such as:
Spring (Java)
Struts (Java)
Laravel (PHP)
Ruby on Rails
Session Timeout
In most popular frameworks, you can set the session timeout via configuration
options. This parameter should be set according to the best practices specified in
the framework documentation. The recommended timeout may be between 10
minutes and two hours, depending on the app's sensitivity.
Spring (Java)
Ruby on Rails
PHP
ASP.Net
Dynamic Analysis
You can use dynamic analysis to verify that authorization is consistently
enforced on all remote endpoints. First, manually or automatically crawl the
application to make sure that all privileged actions and data are secure and to
determine whether a valid session ID is required. Record the requests in your
proxy.
Then, replay the crawled requests while manipulating the session IDs as follows:
Invalidate the session ID (for example, append characters to the session ID, or
delete the session ID from the request).
Log out and log back in to see whether the session ID has changed.
Try to re-use a session ID after logging out.
Use the app extensively (going through all UI flows) while using an interception
proxy to capture the requests sent to remote endpoints. Next, replay requests to
endpoints that require 2FA (e.g., performing a financial transaction) while using
a token or session ID that hasn't yet been elevated via 2FA or step-up
authentication. If an endpoint is still sending back requested data that should
only be available after 2FA or step-up authentication, authentication checks
haven't been properly implemented at that endpoint.
Consult the OWASP Testing Guide for more information about testing session
management.
A JSON Web Token (JWT) consists of three Base64Url-encoded parts separated
by dots, as in the following example:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ
The header typically consists of two parts: the token type, which is JWT, and the
hashing algorithm being used to compute the signature. In the example above,
the header decodes as follows:
{"alg":"HS256","typ":"JWT"}
The second part of the token is the payload, which contains so-called claims.
Claims are statements about an entity (typically, the user) and additional
metadata. For example:
{"sub":"1234567890","name":"John Doe","admin":true}
The signature is created by applying the algorithm specified in the JWT header
to the encoded header, encoded payload, and a secret value. For example, when
using the HMAC SHA256 algorithm the signature is created in the following
way:
HMACSHA256(base64UrlEncode(header) + "." + base64UrlEncode(payload), secret)
Note that the secret is shared between the authentication server and the back end
service - the client does not know it. This proves that the token was obtained
from a legitimate authentication service. It also prevents the client from
tampering with the claims contained in the token.
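The signing construction can be reproduced with Python's standard library (a minimal sketch for illustration only; use a vetted JWT library in production):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64Url encoding without padding, as used by JWT."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." +
                     b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: bytes) -> bool:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "1234567890", "name": "John Doe", "admin": True},
                 b"secret")
print(verify_jwt(token, b"secret"))        # True
print(verify_jwt(token, b"wrong-secret"))  # False: wrong key or tampering
```

Because only the holder of the secret can compute a matching signature, any change to the header or payload invalidates the token.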
Static Analysis
Identify the JWT library that the server and client use. Find out whether the JWT
libraries in use have any known vulnerabilities.
Verify that the HMAC is checked for all incoming requests containing a
token;
Verify the location of the private signing key or HMAC secret key. The key
should remain on the server and should never be shared with the client. It
should be available for the issuer and verifier only.
Verify that no sensitive data, such as personal identifiable information, is
embedded in the JWT. If, for some reason, the architecture requires
transmission of such information in the token, make sure that payload
encryption is being applied. See the sample Java implementation on the
OWASP JWT Cheat Sheet for Java.
Make sure that replay attacks are addressed with the jti (JWT ID) claim,
which gives the JWT a unique identifier.
Verify that tokens are stored securely on the mobile phone, with, for
example, KeyChain (iOS) or KeyStore (Android).
A well-known attack abuses the "none" algorithm: the attacker alters the token
and, using the 'none' keyword, changes the hashing algorithm to indicate that
the integrity of the token has already been verified. Some libraries treated
tokens signed with the none algorithm as if they were valid tokens with verified
signatures, so the application will trust altered token claims.
With the java-jwt library, for example, the expected algorithm is pinned
explicitly when building the verifier, so tokens carrying a different alg value
are rejected:
JWTVerifier verifier = JWT.require(Algorithm.HMAC256(keyHMAC)).build();
Once signed, a stateless authentication token is valid forever unless the signing
key changes. A common way to limit token validity is to set an expiration date.
Make sure that the tokens include an "exp" expiration claim and the back end
doesn't process expired tokens.
For apps that handle sensitive data, make sure that the refresh token expires after
a reasonable period of time. The following example code shows a refresh token
API that checks the refresh token's issue date. If the token is not older than 14
days, a new access token is issued. Otherwise, access is denied and the user is
prompted to login again.
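A sketch of the described check follows (the original example code is not reproduced here; the field names and return values are made up for illustration):

```python
from datetime import datetime, timedelta

MAX_REFRESH_TOKEN_AGE = timedelta(days=14)

def refresh_access_token(refresh_token: dict, now: datetime = None):
    """Issue a new access token only if the refresh token is recent enough."""
    now = now or datetime.utcnow()
    issued_at = refresh_token["iat"]  # hypothetical issue-date claim
    if now - issued_at > MAX_REFRESH_TOKEN_AGE:
        return None  # deny: the user must log in again
    return {"access_token": "new-token", "expires_in": 3600}

fresh = {"iat": datetime(2024, 1, 10)}
stale = {"iat": datetime(2023, 12, 1)}
now = datetime(2024, 1, 20)
print(refresh_access_token(fresh, now) is not None)  # True: 10 days old
print(refresh_access_token(stale, now) is not None)  # False: 50 days old
```

In a real deployment, the issue date must come from the signed token itself, not from client-supplied data.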
Dynamic Analysis
Investigate the following JWT vulnerabilities while performing dynamic
analysis:
Modify the alg attribute in the token header: delete HS256, set it to none, and
use an empty signature (e.g., signature = ""). Use this token and replay it in a
request. Some libraries treat tokens signed with the none algorithm as valid
tokens with verified signatures, which allows attackers to create their own
"signed" tokens.
Failing to destroy the server-side session is one of the most common logout
implementation errors. This error keeps the session or token alive,
even after the user logs out of the application. An attacker who gets valid
authentication information can continue to use it and hijack a user account.
Many mobile apps don't automatically log users out, because stateless
authentication makes persistent logins convenient for customers. The
application should still have a logout function, implemented according to best
practices: the access and refresh tokens must be destroyed on both the client
and the server. Otherwise, authentication can be bypassed as long as the refresh
token remains valid.
Verifying Best Practices
If server code is available, make sure the logout functionality terminates the
session. This verification will depend on the technology. Here are examples of
session termination for proper server-side logout:
If access and refresh tokens are used with stateless authentication, they should be
deleted from the mobile device. The refresh token should be invalidated on the
server.
Dynamic Analysis
Use an interception proxy for dynamic application analysis. To check whether
logout is implemented properly, log in, send a request that requires
authentication, log out, and then resend the same request. If the server still
responds with data that should only be available to an authenticated user, the
session has not been properly terminated.
OAuth2 is commonly used for:
Getting permission from the user to access an online service using their
account.
Authenticating to an online service on behalf of the user.
Handling authentication errors.
Note: The API fulfills both the Resource Owner and Authorization Server roles.
Therefore, we will refer to both as the API.
User agent:
The user should have a way to visually verify trust (e.g., Transport Layer
Security (TLS) confirmation, website mechanisms).
To prevent man-in-the-middle attacks, the client should validate the server's
fully qualified domain name with the public key the server presented when
the connection was established.
Type of grant:
Shared secrets should not be used to prove the client's identity because the
client could be impersonated ("client_id" already serves as proof). If they
do use client secrets, be sure that they are stored in secure local storage.
End-User credentials:
Tokens:
The user's credentials are never directly exposed to the app. This guarantees
that the app cannot obtain the credentials during the login process
("credential phishing").
On the negative side, there is no way to control the behavior of the browser (e.g.
to activate certificate pinning).
For apps that operate within a closed ecosystem, embedded authentication is the
better choice. For example, consider a banking app that uses OAuth2 to retrieve
an access token from the bank's authentication server, which is then used to
access a number of micro services. In that case, credential phishing is not a
viable scenario. It is likely preferable to keep the authentication process in the
(hopefully) carefully secured banking app, instead of placing trust on external
components.
OWASP MASVS
V4.1: "If the app provides users access to a remote service, some form of
authentication, such as username/password authentication, is performed at
the remote endpoint."
V4.2: "If stateful session management is used, the remote endpoint uses
randomly generated session identifiers to authenticate client requests
without sending the user's credentials."
V4.3: "If stateless token-based authentication is used, the server provides a
token that has been signed with a secure algorithm."
V4.4: "The remote endpoint terminates the existing stateful session or
invalidates the stateless session token when the user logs out."
V4.5: "A password policy exists and is enforced at the remote endpoint."
V4.6: "The remote endpoint implements an exponential back-off or
temporarily locks the user account when incorrect authentication credentials
are submitted an excessive number of times."
V4.8: "Sessions and access tokens are invalidated at the remote endpoint
after a predefined period of inactivity."
V4.9: "A second factor of authentication exists at the remote endpoint, and
the 2FA requirement is consistently enforced."
V4.10: "Sensitive transactions require step-up authentication."
CWE
CWE-287: Improper Authentication
CWE-307: Improper Restriction of Excessive Authentication Attempts
CWE-308: Use of Single-factor Authentication
CWE-521: Weak Password Requirements
CWE-613: Insufficient Session Expiration
Tools
Several free and commercial proxy tools are available. Here are some of the
most popular:
Burp Suite
OWASP ZAP
Charles Proxy
To use the interception proxy, you'll need to run it on your PC/Mac and
configure the mobile app to route HTTP(S) requests to your proxy. In most
cases, it is enough to set a system-wide proxy in the network settings of the
mobile device - if the app uses standard HTTP APIs or popular libraries such as
okhttp, it will automatically use the system settings.
Using a proxy breaks SSL certificate verification and the app will usually fail to
initiate TLS connections. To work around this issue, you can install your proxy's
CA certificate on the device. We'll explain how to do this in the OS-specific
"Basic Security Testing" chapters.
Some apps, however, ignore the system proxy settings. In these cases, you need
to monitor and analyze the network traffic first in order to decide what to do
next. Luckily, there are several options for redirecting and intercepting network
communication:
Route the traffic through the host machine. You can set up your Mac/PC as
the network gateway, e.g. by using the built-in Internet Sharing facilities of
your operating system. You can then use Wireshark to sniff any Internet-
bound traffic from the mobile device;
Use ettercap to redirect network traffic from the mobile device to your host
machine (see below);
On iOS, you can create a "Remote Virtual Interface" instead. We'll describe
this method in the chapter "Basic Security Testing on iOS".
For a full dynamic analysis of a mobile app, all network traffic should be
intercepted. Several preparation steps are required before you can intercept the
messages.
Ettercap Installation
Ettercap is available for all major Linux and Unix operating systems and should
be part of their respective package installation mechanisms. You need to install it
on your machine that will act as the MITM. On macOS it can be installed by
using brew.
Wireshark offers a GUI and is more straightforward if you are not used to the
command line. If you are looking for a command line tool you should either use
TShark or tcpdump. All of these tools are available for all major Linux and Unix
operating systems and should be part of their respective package installation
mechanisms.
Network Setup
Once ettercap is running as the MITM, you should see startup output similar to
the following:
Listening on:
en0 -> AC:BC:32:81:45:05
192.168.0.105/255.255.255.0
fe80::c2a:e80c:5108:f4d3/64
33 plugins
42 protocol dissectors
57 ports monitored
20388 mac vendor fingerprint
1766 tcp OS fingerprint
2182 known services
* |=========================================>| 100.00 %
If that's the case, you are now able to see the complete network traffic that is sent
and received by the mobile phone. This also includes DNS, DHCP, and any other
form of communication, so it can be quite "noisy". You should therefore
know how to use display filters in Wireshark, or how to filter in tcpdump,
to focus only on the traffic that is relevant for you.
As an example, we will now redirect all requests from a Xamarin app to our
interception proxy in the next section.
When testing a Xamarin app and when you are trying to set the system proxy in
the WiFi settings you won't be able to see any HTTP requests in your
interception proxy, as the apps created by Xamarin do not use the local proxy
settings of your phone. There are two ways to resolve this:
Add a default proxy to the app by adding the following code to the
OnCreate() or Main() method and re-creating the app:
WebRequest.DefaultWebProxy = new WebProxy("192.168.11.1", 8080);
Use ettercap to redirect the traffic to your machine (see above) and forward
port 443 to your interception proxy with a port forwarding rule, for example
with pf on macOS:
$ echo "
rdr pass inet proto tcp from any to any port 443 -> 127.0.0.1 port 8080
" | sudo pfctl -ef -
The interception proxy needs to listen on the port specified in the port
forwarding rule above (8080).
CA Certificates
If not already done, install your interception proxy's CA certificate on the
mobile device; this will allow you to intercept HTTPS requests.
Intercepting Traffic
Start using the app and trigger its functions. You should see HTTP messages
showing up in your interception proxy.
Overview
One of the core mobile app functions is sending/receiving data over untrusted
networks like the Internet. If the data is not properly protected in transit, an
attacker with access to any part of the network infrastructure (e.g., a Wi-Fi
access point) may intercept, read, or modify it. This is why plaintext network
protocols are rarely advisable.
The vast majority of apps rely on HTTP for communication with the backend.
HTTPS wraps HTTP in an encrypted connection (the acronym HTTPS originally
referred to HTTP over Secure Sockets Layer (SSL), the deprecated
predecessor of TLS). TLS allows authentication of the backend service and
ensures confidentiality and integrity of the network data.
Ensuring proper TLS configuration on the server side is also important. SSL is
deprecated and should no longer be used. TLS v1.2 and v1.3 are considered
secure, but many services still allow TLS v1.0 and v1.1 for compatibility with
older clients.
When both the client and server are controlled by the same organization and
used only for communicating with one another, you can increase security by
hardening the configuration.
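As an illustrative sketch of such hardening (plain Java using the standard JSSE API rather than any Android-specific class; the class and method names are ours), a client can be restricted to a modern TLS version like this:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TlsHardening {
    // Returns SSLParameters restricted to a modern TLS version only.
    // The protocol name is a standard JSSE identifier.
    public static SSLParameters modernTlsParameters() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters params = ctx.getDefaultSSLParameters();
        // Allow only TLS 1.2; "TLSv1.3" can be added on runtimes that support it.
        params.setProtocols(new String[] { "TLSv1.2" });
        return params;
    }

    public static void main(String[] args) throws Exception {
        for (String proto : modernTlsParameters().getProtocols()) {
            System.out.println(proto);
        }
    }
}
```

The resulting parameters can then be applied to an SSLSocket or SSLEngine, so connections negotiating older protocol versions are refused.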
For example, the popular Android networking library okhttp uses the following
preferred set of cipher suites, but these are only available on Android versions
7.0 and later:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
To support earlier versions of Android, it adds a few ciphers that are considered
less secure, for example, TLS_RSA_WITH_3DES_EDE_CBC_SHA .
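Because the available cipher suites depend on the platform version, it can be useful to check programmatically which of the preferred suites the local TLS stack supports. The following plain-Java sketch (the class name and suite list are ours, not okhttp's API) does this with the standard JSSE API:

```java
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLContext;

public class CipherSuiteCheck {
    // A subset of the preferred cipher suites listed above.
    static final List<String> PREFERRED = Arrays.asList(
        "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384");

    // Returns true if the local TLS stack supports the given suite.
    public static boolean isSupported(String suite) throws Exception {
        String[] supported = SSLContext.getDefault()
            .getSupportedSSLParameters().getCipherSuites();
        return Arrays.asList(supported).contains(suite);
    }

    public static void main(String[] args) throws Exception {
        for (String suite : PREFERRED) {
            System.out.println(suite + " supported: " + isSupported(suite));
        }
    }
}
```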
Similarly, the iOS ATS (App Transport Security) configuration requires one of
the following ciphers:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
Static Analysis
Identify all API/web service requests in the source code and ensure that no plain
HTTP URLs are requested. Make sure that sensitive information is sent over
secure channels by using HttpsURLConnection or SSLSocket (for socket-level
communication using TLS).
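As a sketch of the kind of check involved (the helper name is our own), a URL found during static analysis can be classified by its scheme; http and ws are plaintext, while https and wss are TLS-protected:

```java
import java.net.URI;

public class PlaintextUrlCheck {
    // Flags URLs that use a plaintext scheme. "ws" is the unencrypted
    // WebSocket scheme; "wss" and "https" are the TLS-protected variants.
    public static boolean isPlaintext(String url) {
        String scheme = URI.create(url).getScheme();
        return "http".equalsIgnoreCase(scheme) || "ws".equalsIgnoreCase(scheme);
    }
}
```

A check like this can be scripted over the URL strings extracted from the decompiled app to quickly spot endpoints that bypass TLS.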
Verify that the server is configured according to best practices. The
configuration file of the web server or reverse proxy at which the HTTPS
connection terminates is required for this part of the analysis. See also the
OWASP Transport Layer Protection cheat sheet and the Qualys SSL/TLS
Deployment Best Practices.
Dynamic Analysis
Intercept the tested app's incoming and outgoing network traffic and make sure
that this traffic is encrypted. You can intercept network traffic in any of the
following ways:
Capture all HTTP and Websocket traffic with an interception proxy like
OWASP ZAP or Burp Suite Professional and make sure all requests are
made via HTTPS instead of HTTP.
Interception proxies like Burp and OWASP ZAP will show HTTP traffic only.
You can, however, use Burp plugins such as Burp-non-HTTP-Extension and
mitm-relay to decode and visualize communication via XMPP and other
protocols.
Some applications may not work with proxies like Burp and ZAP because of
Certificate Pinning. In such a scenario, please check "Testing Custom
Certificate Stores and SSL Pinning". Tools like Vproxy can be used to
redirect all HTTP(S) traffic to your machine to sniff and investigate it for
unencrypted requests.
Capture all network traffic with tcpdump. Consider this when Burp or
OWASP ZAP do not recognize the protocol in use (e.g., XMPP). You can begin
live capturing via a command such as:
$ tcpdump -i <interface> -w dump.pcap
You can display the captured traffic in a human-readable format with Wireshark.
Figure out which protocols are used and whether they are unencrypted.
Capturing all traffic (TCP and UDP) is important, so you should execute all
functions of the tested application while the capture is running.
Making Sure that Critical Operations Use Secure
Communication Channels
Overview
For sensitive applications like banking apps, OWASP MASVS introduces
"Defense in Depth" verification levels. The critical operations (e.g., user
enrollment and account recovery) of such applications are some of the most
attractive targets to attackers. This requires implementation of advanced security
controls, such as additional channels (e.g., SMS and e-mail) to confirm user
actions.
Static Analysis
Review the code and identify the parts that refer to critical operations. Make sure
that additional channels are used for such operations. Examples of additional
verification channels are SMS, e-mail, and one-time passcodes or tokens.
Dynamic Analysis
Identify all of the tested application's critical operations (e.g., user enrollment,
account recovery, and money transfer). Ensure that each critical operation
requires at least one additional channel (e.g., SMS, e-mail, or token). Make sure
that directly calling the function does not bypass usage of these channels.
Remediation
Make sure that critical operations enforce the use of at least one additional
channel to confirm user actions. These channels must not be bypassed when
executing critical operations. If you're going to implement an additional factor to
verify the user's identity, consider Infobip 2FA library or one-time passcodes
(OTP) via Google Authenticator.
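To illustrate how such one-time passcodes work under the hood (this is not the Infobip library's API), here is a minimal HOTP sketch per RFC 4226; the TOTP scheme used by Google Authenticator is HOTP with the counter derived from the current Unix time:

```java
import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hotp {
    // Computes an RFC 4226 HOTP value: HMAC-SHA1 over the 8-byte counter,
    // dynamic truncation, then reduction to the desired number of digits.
    public static int generate(byte[] secret, long counter, int digits) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        int offset = hash[hash.length - 1] & 0x0f;            // dynamic truncation
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % (int) Math.pow(10, digits);
    }
}
```

Because client and server derive the same value from a shared secret, the passcode acts as an additional channel only if the secret is provisioned out of band.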
References
OWASP MASVS
V5.1: "Data is encrypted on the network with TLS. The secure channel is
used consistently throughout the app."
V5.5: "The app doesn't rely on a single insecure communication channel (e-
mail or SMS) for critical operations such as enrollment and account
recovery."
CWE
CWE-308: Use of Single-factor Authentication
CWE-319: Cleartext Transmission of Sensitive Information
Tools
Tcpdump - http://www.androidtcpdump.com/
Wireshark - https://www.wireshark.org/
OWASP ZAP -
https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
Burp Suite - https://portswigger.net/burp/
Vproxy - https://github.com/B4rD4k/Vproxy
Android Platform Overview
This section introduces the Android platform from the architecture point of view.
Visit the official Android developer documentation website for more details
about the Android platform.
Android apps are usually written in Java and compiled to Dalvik bytecode,
which is somewhat different from the traditional Java bytecode. Dalvik bytecode
is created by first compiling the Java code to .class files, then converting the
JVM bytecode to the Dalvik .dex format with the dx tool.
The current version of Android executes this bytecode on the Android runtime
(ART). ART is the successor to Android's original runtime, the Dalvik Virtual
Machine. The key difference between Dalvik and ART is the way the bytecode is
executed.
Android apps don't have direct access to hardware resources, and each app runs
in its own sandbox. This allows precise control over resources and apps: for
instance, a crashing app doesn't affect other apps running on the device. At the
same time, the Android runtime controls the maximum number of system
resources allocated to apps, preventing any one app from monopolizing too
many resources.
The API specifications change with every new Android release. Critical bug
fixes and security patches are usually applied to earlier versions as well. The
oldest Android version supported at the time of writing is 4.4 (KitKat), API level
19, and the current Android version is 7.1 (Nougat), API level 25.
On the device, installed app packages are stored at paths like
/data/app/com.google.android.youtube-1/base.apk.
The Android Package Kit (APK) file is an archive that contains the code and
resources required to run the app it comes with. This file is identical to the
original, signed app package created by the developer. It is in fact a ZIP archive
with the following directory structure:
$ unzip base.apk
$ ls -lah
-rw-r--r-- 1 sven staff 11K Dec 5 14:45
AndroidManifest.xml
drwxr-xr-x 5 sven staff 170B Dec 5 16:18 META-INF
drwxr-xr-x 6 sven staff 204B Dec 5 16:17 assets
-rw-r--r-- 1 sven staff 3.5M Dec 5 14:41 classes.dex
drwxr-xr-x 3 sven staff 102B Dec 5 16:18 lib
drwxr-xr-x 27 sven staff 918B Dec 5 16:17 res
-rw-r--r-- 1 sven staff 241K Dec 5 14:45 resources.arsc
Note that unzipping the archive with the standard unzip utility leaves some
files unreadable. AndroidManifest.xml is encoded in a binary XML format
which isn't readable with a text editor, and the app resources are still packaged
into a single archive file. A better way of unpacking an Android app package is
using apktool. When run with default command line flags, apktool automatically
decodes the manifest file to text-based XML format and extracts the file
resources (it also disassembles the .dex files to smali code – a feature that
we'll revisit later in this book).
$ apktool d base.apk
I: Using Apktool 2.1.0 on base.apk
I: Loading resource table...
I: Decoding AndroidManifest.xml with resources...
I: Loading resource table from file:
/Users/sven/Library/apktool/framework/1.apk
I: Regular manifest package...
I: Decoding file-resources...
I: Decoding values */* XMLs...
I: Baksmaling classes.dex...
I: Copying assets and libs...
I: Copying unknown files...
I: Copying original files...
$ cd base
$ ls -alh
total 32
drwxr-xr-x 9 sven staff 306B Dec 5 16:29 .
drwxr-xr-x 5 sven staff 170B Dec 5 16:29 ..
-rw-r--r-- 1 sven staff 10K Dec 5 16:29
AndroidManifest.xml
-rw-r--r-- 1 sven staff 401B Dec 5 16:29 apktool.yml
drwxr-xr-x 6 sven staff 204B Dec 5 16:29 assets
drwxr-xr-x 3 sven staff 102B Dec 5 16:29 lib
drwxr-xr-x 4 sven staff 136B Dec 5 16:29 original
drwxr-xr-x 131 sven staff 4.3K Dec 5 16:29 res
drwxr-xr-x 9 sven staff 306B Dec 5 16:29 smali
cache: This location is used for data caching. For example, the WebView
cache is found in this directory.
code_cache: This is the location of the file system's application-specific
cache directory designed for storing cached code. On devices running
Lollipop or later Android versions, the system will delete any files stored in
this location when the app or the entire platform is upgraded.
databases: This folder stores SQLite database files generated by the app at
run time, e.g., user data files.
files: This folder stores regular files created by the app.
lib: This folder stores native libraries written in C/C++. These libraries can
have one of several file extensions, including .so and .dll (x86 support).
This folder contains subfolders for the platforms the app has native libraries
for, including
armeabi: compiled code for all ARM-based processors
armeabi-v7a: compiled code for all ARM-based processors, version 7
and above only
arm64-v8a: compiled code for all 64-bit ARM-based processors,
version 8 and above only
x86: compiled code for x86 processors only
x86_64: compiled code for x86_64 processors only
mips: compiled code for MIPS processors
shared_prefs: This folder contains an XML file that stores values saved via
the SharedPreferences APIs.
Generally, apps are assigned UIDs in the range 10000 to 99999. Android
apps receive a user name based on their UID. For example, the app with UID
10188 receives the user name u0_a188 . If the permissions an app requested are
granted, the corresponding group ID is added to the app's process. For example,
the user ID of the app below is 10188. It belongs to the group ID 3003 (inet),
which is related to the android.permission.INTERNET permission. The output
of the id command is shown below.
$ id
uid=10188(u0_a188) gid=10188(u0_a188)
groups=10188(u0_a188),3003(inet),9997(everybody),50188(all_a188
) context=u:r:untrusted_app:s0:c512,c768
The relationship between group IDs and permissions is defined in the file
frameworks/base/data/etc/platform.xml
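The UID-to-user-name convention described above can be sketched as a small helper (our own illustration of the AOSP convention, in which each Android user owns a range of 100000 UIDs and app IDs start at 10000):

```java
public class AndroidAppUser {
    // Derives an app's user name from its UID, mirroring the convention
    // described above: UID 10188 for Android user 0 becomes "u0_a188".
    public static String userName(int uid) {
        int userId = uid / 100000;         // Android user (profile) number
        int appId = uid % 100000 - 10000;  // offset into the app UID range
        return "u" + userId + "_a" + appId;
    }
}
```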
Installation of a new app creates a new directory named after the app package—
/data/data/[package-name] . This directory holds the app's data. Linux
directory permissions are set such that the directory can be read from and written
to only with the app's unique UID.
By default, apps are assigned one data directory each and run under different
user accounts. Apps signed with the same certificate can, however, request a
shared user ID in their manifests and thereby run under the same UID:
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.android.nfc"
android:sharedUserId="android.uid.nfc">
Zygote
The Zygote process starts during Android initialization and listens on a socket
for requests to launch new apps. When Zygote
receives a connection, it forks a new process, which then loads and executes the
app-specific code.
App Lifecycle
A service process is a process hosting a service that has been started with
the startService method. Though these processes aren't directly visible
to the user, they are generally things that the user cares about (such as
background network data upload or download), so the system will always
keep such processes running unless there's insufficient memory to retain all
foreground and visible processes.
Manifest
Every app has a manifest file, which embeds content in binary XML format. The
standard name of this file is AndroidManifest.xml. It is located in the root
directory of the app’s APK file.
The manifest file describes the app structure, its components (activities, services,
content providers, and intent receivers), and requested permissions. It also
contains general app metadata, such as the app's icon, version number, and
theme. The file may list other information, such as compatible APIs (minimal,
targeted, and maximal SDK version) and the kind of storage it can be installed
on (external or internal).
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.owasp.myapplication"
android:versionCode="1" >
<uses-sdk android:minSdkVersion="12"
android:targetSdkVersion="22"
android:maxSdkVersion="25" />
<uses-permission android:name="android.permission.INTERNET"
/>
<provider
android:name="com.owasp.myapplication.myProvider"
android:exported="false" />
<application
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/Theme.Material.Light" >
<activity
android:name="com.owasp.myapplication.MainActivity"
>
<intent-filter>
<action
android:name="android.intent.action.MAIN" />
</intent-filter>
</activity>
</application>
</manifest>
The full list of available manifest options is in the official Android Manifest file
documentation.
App Components
Android apps are made of several high-level components. The main components
are:
Activities
Fragments
Intents
Broadcast receivers
Content providers and services
All these elements are provided by the Android operating system, in the form of
predefined classes available through APIs.
Activities
Activities make up the visible part of any app. There is one activity per screen,
so an app with three different screens implements three different activities.
Activities are declared by extending the Activity class. They contain all user
interface elements: fragments, views, and layouts.
Each activity needs to be declared in the app manifest with the following syntax:
<activity android:name="ActivityName">
</activity>
Like apps, activities have their own lifecycle and need to monitor system
changes to handle them. Activities can be in the following states: active, paused,
stopped, and inactive. These states are managed by the Android operating
system. Accordingly, activities can implement the following event managers:
onCreate
onSaveInstanceState
onStart
onResume
onRestoreInstanceState
onPause
onStop
onRestart
onDestroy
An app may not explicitly implement all event managers, in which case default
actions are taken. Typically, at least the onCreate manager is overridden by the
app developers. This is how most user interface components are declared and
initialized. onDestroy may be overridden when resources (like network
connections or connections to databases) must be explicitly released or specific
actions must occur when the app shuts down.
Fragments
A fragment represents a behavior or a portion of the user interface within an
activity. Because fragments have their own lifecycle, the Fragment class contains
event managers that can be redefined and extended. These event managers
include onAttach, onCreate, onStart, onDestroy, and onDetach. Several others
exist; the reader should refer to the Android Fragment specification for more
details.
Fragments are managed via the FragmentManager, which can be obtained with:
FragmentManager fm = getFragmentManager();
Fragments don't necessarily have a user interface; they can be a convenient and
efficient way to manage background operations pertaining to the app's user
interface. A fragment may be declared persistent so that the system preserves
its state even if its activity is destroyed.
Inter-Process Communication
As we've already learned, every Android process has its own sandboxed address
space. Inter-process communication facilities allow apps to exchange signals and
data securely. Instead of relying on the default Linux IPC facilities, Android's
IPC is based on Binder, a custom implementation of OpenBinder. Most Android
system services and all high-level IPC services depend on Binder.
You can query the list of system services with the service list command.
Intents
Intent messaging is an asynchronous communication framework built on top of
Binder. This framework allows both point-to-point and publish-subscribe
messaging. An Intent is a messaging object that can be used to request an action
from another app component. Although intents facilitate inter-component
communication in several ways, there are three fundamental use cases:
Starting an activity
An activity represents a single screen in an app. You can start a new
instance of an activity by passing an intent to startActivity . The
intent describes the activity and carries necessary data.
Starting a service
A Service is a component that performs operations in the background,
without a user interface. With Android 5.0 (API level 21) and later,
you can start a service with JobScheduler.
Delivering a broadcast
A broadcast is a message that any app can receive. The system delivers
broadcasts for system events, including system boot and charging
initialization. You can deliver a broadcast to other apps by passing an
intent to sendBroadcast or sendOrderedBroadcast .
Intents are components for sending messages between apps and components. An
app can use them to send information to its own components (for instance, to
start a new activity inside the app), to other apps, or to the operating system.
Intents can be used to start Activities and services, run actions on given data, and
broadcast messages to the whole system.
There are two types of intents. Explicit intents name the component that will be
started (the fully qualified class name). For instance:
Intent intent = new Intent(this, MyActivity.class);
Implicit intents are sent to the OS to perform a given action on a given set of
data ("http://www.example.com" in our example below). It is up to the system to
decide which app or class will perform the corresponding service. For instance:
Intent intent = new Intent(Intent.ACTION_VIEW,
Uri.parse("http://www.example.com"));
An intent filter is an expression in app manifest files that specifies the type of
intents the component would like to receive. For instance, by declaring an intent
filter for an activity, you make it possible for other apps to directly start your
activity with a certain kind of intent. Likewise, your activity can only be started
with an explicit intent if you don't declare any intent filters for it.
Here is a short list of intents sent by the operating system. All constants are
defined in the Intent class, and the whole list is in the official Android
documentation:
ACTION_CAMERA_BUTTON
ACTION_MEDIA_EJECT
ACTION_NEW_OUTGOING_CALL
ACTION_TIMEZONE_CHANGED
To improve security and privacy, a Local Broadcast Manager is used to send and
receive intents within an app without having them sent to the rest of the
operating system. This is very useful for ensuring that sensitive and private data
don't leave the app perimeter (geolocation data for instance).
Broadcast Receivers
Broadcast Receivers are components that allow apps to receive notifications
from other apps and from the system itself. With them, apps can react to events
(internal, initiated by other apps, or initiated by the operating system). They are
generally used to update user interfaces, start services, update content, and create
user notifications.
Broadcast Receivers must be declared in the app's manifest file. The manifest
must specify an association between the Broadcast Receiver and an intent filter
to indicate the actions the receiver is meant to listen for. If Broadcast Receivers
aren't declared, the app won't listen to broadcasted messages. However, apps
don’t need to be running to receive intents; the system starts apps automatically
when a relevant intent is raised.
After receiving an implicit intent, Android will list all apps that have registered a
given action in their filters. If more than one app has registered for the same
action, Android will prompt the user to select from the list of available apps.
Content Providers
Android uses SQLite to store data permanently: as with Linux, data is stored in
files. SQLite is a light, efficient, open source relational data storage technology
that does not require much processing power, which makes it ideal for mobile
use. An entire API with specific classes (Cursor, ContentValues,
SQLiteOpenHelper, ContentProvider, ContentResolver, etc.) is available. SQLite
is not run as a separate process; it is part of the app. By default, a database
belonging to a given app is accessible to this app only. However, content
providers offer a great mechanism for abstracting data sources (including
databases and flat files); they also provide a standard and efficient mechanism to
share data between apps, including native apps. To be accessible to other apps, a
content provider needs to be explicitly declared in the manifest file of the app
that will share it. As long as content providers aren't declared, they won't be
exported and can only be called by the app that creates them.
Content providers are implemented through a URI addressing scheme: they all
use the content:// model. Regardless of the type of source (SQLite database, flat
file, etc.), the addressing scheme is always the same, thereby abstracting the
sources and offering the developer a single scheme. Content providers offer all
regular database operations: create, read, update, delete. That means that any app
with proper rights in its manifest file can manipulate the data from other apps.
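The content:// addressing scheme follows the generic URI syntax, so its parts can be pulled apart with standard tooling. In this sketch (class name ours), the authority identifies the provider and the path identifies the data set:

```java
import java.net.URI;

public class ContentUriParts {
    // Splits a content:// URI into the pieces provider routing uses:
    // the authority identifies the provider, the path identifies the data.
    public static String[] parse(String contentUri) {
        URI uri = URI.create(contentUri);
        if (!"content".equals(uri.getScheme())) {
            throw new IllegalArgumentException("not a content:// URI");
        }
        return new String[] { uri.getAuthority(), uri.getPath() };
    }
}
```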
Services
Services are Android OS components (based on the Service class) that perform
tasks in the background (data processing, starting intents, notifications, etc.)
without presenting a user interface. Services are meant to run long-term
processes. Their system priorities are lower than those of active apps and higher
than those of inactive apps. Therefore, they are less likely to be killed when the
system needs resources, and they can be configured to restart automatically
when enough resources become available. Although services execute in the main
app thread by default, they are great candidates for hosting asynchronous tasks.
Permissions
Because Android apps are installed in a sandbox and initially can't access user
information and system components (such as the camera and the microphone),
Android provides a system with a predefined set of permissions for certain tasks
that the app can request. For example, if you want your app to use a phone's
camera, you have to request the android.permission.CAMERA permission. Prior
to Marshmallow (API 23), all permissions an app requested were granted at
installation. From Android Marshmallow onwards, the user must approve some
permissions requests during app execution.
Protection Levels
Android permissions are ranked on the basis of the protection level they offer
and divided into four different categories:
Normal: the lowest level of protection. It gives apps access to isolated
application-level features with minimal risk to other apps, the user, or the
system. It is granted during app installation and is the default protection
level. Example: android.permission.INTERNET
Dangerous: This permission allows the app to perform actions that might
affect the user’s privacy or the normal operation of the user’s device. This
level of permission may not be granted during installation; the user must
decide whether the app should have this permission. Example:
android.permission.RECORD_AUDIO
Signature: This permission is granted only if the requesting app has been
signed with the same certificate as the app that declared the permission. If
the signature matches, the permission is automatically granted. Example:
android.permission.ACCESS_MOCK_LOCATION
SystemOrSignature: This permission is granted only to apps embedded in
the system image or signed with the same certificate used to sign the system
image. Example: android.permission.ACCESS_DOWNLOAD_MANAGER
Requesting Permissions
Apps can request permissions for the protection levels Normal, Dangerous, and
Signature by including <uses-permission /> tags into their manifest. The
example below shows an AndroidManifest.xml sample requesting permission to
read SMS messages:
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.permissions.sample" ...>
<uses-permission
android:name="android.permission.RECEIVE_SMS" />
<application>...</application>
</manifest>
Declaring Permissions
Apps can expose features and content to other apps installed on the system. To
restrict access to its own components, an app can either use any of Android's
predefined permissions or define its own. A new permission is declared with the
<permission> element. The example below shows an app declaring a permission:
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.permissions.sample" ...>
<permission
android:name="com.permissions.sample.ACCESS_USER_INFO"
android:protectionLevel="signature" />
<application>...</application>
</manifest>
A component can then be protected with that permission; for example, the
following receiver only accepts broadcasts from apps that hold
com.permissions.sample.ACCESS_USER_INFO:
<receiver
android:name="com.permissions.sample.AnalyticsReceiver"
android:enabled="true"
android:permission="com.permissions.sample.ACCESS_USER_INFO">
...
</receiver>
Signing Process
During development, apps are signed with an automatically generated certificate.
This certificate is inherently insecure and is for debugging only. Most stores
don't accept this kind of certificate for publishing; therefore, a certificate with
more secure features must be created. When an application is installed on the
Android device, the Package Manager ensures that it has been signed with the
certificate included in the corresponding APK. If the certificate's public key
matches the key used to sign any other APK on the device, the new APK may
share a UID with the pre-existing APK. This facilitates interactions between
applications from a single vendor. Alternatively, specifying security permissions
for the Signature protection level is possible; this will restrict access to
applications that have been signed with the same key.
The original version of app signing (the v1 scheme) implements the signed APK
as a standard signed JAR, which must contain all the entries in META-INF/MANIFEST.MF . All
files must be signed with a common certificate. This scheme does not protect
some parts of the APK, such as ZIP metadata. The drawback of this scheme is
that the APK verifier needs to process untrusted data structures before applying
the signature, and the verifier discards data the data structures don't cover. Also,
the APK verifier must decompress all compressed files, which takes
considerable time and memory.
With the newer APK signature scheme (v2), the complete APK is hashed and
signed, and an APK Signing Block is created and inserted into the APK. During
validation, the v2 scheme checks the signatures of the entire APK file. This form of APK
verification is faster and offers more comprehensive protection against
modification.
Safely storing your secret key and making sure it remains secret during its entire
lifecycle is of paramount importance. Anyone who gains access to the key will
be able to publish updates to your apps with content that you don't control
(thereby adding insecure features or accessing shared content with signature-
based permissions). The trust that a user places in an app and its developers is
based totally on such certificates; certificate protection and secure management
are therefore vital for reputation and customer retention, and secret keys must
never be shared with other individuals. Keys are stored in a binary file that can
be protected with a password; such files are referred to as 'keystores'. Keystore
passwords should be strong and known only to the key creator. For this reason,
keys are usually stored on a dedicated build machine that developers have
limited access to. An Android certificate must have a validity period that's longer
than that of the associated app (including updated versions of the app). For
example, Google Play will require certificates to remain valid until Oct 22nd,
2033 at least.
Signing an Application
The goal of the signing process is to associate the app file (.apk) with the
developer's public key. To achieve this, the developer calculates a hash of the
APK file and encrypts it with their own private key. Third parties can then verify
the app's authenticity (e.g., the fact that the app really comes from the user who
claims to be the originator) by decrypting the encrypted hash with the author’s
public key and verifying that it matches the actual hash of the APK file.
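Conceptually, the sign-and-verify round trip can be sketched with the standard java.security.Signature API (a simplified illustration using a throwaway RSA key, not the actual APK signing code):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SigningDemo {
    // Signs the given bytes with a freshly generated private key, then
    // verifies the signature with the matching public key, mirroring the
    // hash-sign-verify flow described above.
    public static boolean signAndVerify(byte[] data) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keys = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign();           // only the private key can produce this

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(data);
        return verifier.verify(sig);          // anyone with the public key can check it
    }
}
```

If the signed bytes are modified after signing, verification fails, which is exactly the tampering protection the Package Manager relies on.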
Zipalign
The zipalign tool should always be used to align the APK file before
distribution. This tool aligns all uncompressed data (such as images and raw
files) within the APK on 4-byte boundaries, which helps improve memory
management during app run time. zipalign must be used before the APK file is
signed with apksigner.
Publishing Process
Distributing apps from anywhere (your own site, any store, etc.) is possible
because the Android ecosystem is open. However, Google Play is the most well-
known, trusted, and popular store, and Google itself provides it. Amazon
Appstore is the trusted default store for Kindle devices. If users want to install
third-party apps from a non-trusted source, they must explicitly allow this with
their device security settings.
Apps can be installed on an Android device from a variety of sources: locally via
USB, via Google's official app store (Google Play Store) or from alternative
stores.
Whereas other vendors may review and approve apps before they are actually
published, Google will simply scan for known malware signatures; this
minimizes the time between the beginning of the publishing process and public
app availability.
Publishing an app is quite straightforward; the main operation is making the
signed .apk file downloadable. On Google Play, publishing starts with account
creation and is followed by app delivery through a dedicated interface. Details
are available from the official Android documentation at
https://developer.android.com/distribute/googleplay/start.html.
Setting up a Testing Environment for
Android Apps
By now, you should have a basic understanding of the way Android apps are
structured and deployed. In this chapter, we'll talk about setting up a security
testing environment and describe basic testing processes you'll be using. This
chapter is the foundation for the more detailed testing methods discussed in later
chapters.
You can set up a fully functioning test environment on almost any machine
running Windows, Linux, or Mac OS.
By default, the Android SDK is installed at the following locations:
Windows:
C:\Users\<username>\AppData\Local\Android\sdk
MacOS:
/Users/<username>/Library/Android/sdk
Note: On Linux, you need to choose an SDK directory. /opt , /srv , and
/usr/local are common choices.
Rooting (i.e., modifying the OS so that you can run commands as the root user)
is recommended for testing on a real device. This gives you full control over the
operating system and allows you to bypass restrictions such as app sandboxing.
These privileges in turn allow you to use techniques like code injection and
function hooking more easily.
Note that rooting is risky, and three main consequences need to be clarified
before you proceed. Rooting can have the following negative effects:
voiding the device warranty (always check the manufacturer's policy before
taking any action)
"bricking" the device, i.e., rendering it inoperable and unusable
creating additional security risks (because built-in exploit mitigations are
often removed)
You should not root a personal device that you store your private information on.
We recommend getting a cheap, dedicated test device instead. Many older
devices, such as Google's Nexus series, can run the newest Android versions and
are perfectly fine for testing.
You need to understand that rooting your device is ultimately YOUR
decision and that OWASP shall in no way be held responsible for any
damage. If you're uncertain, seek expert advice before starting the rooting
process.
To root a mobile device, first unlock its boot loader. The unlocking procedure
depends on the device manufacturer. However, for practical reasons, rooting
some mobile devices is more popular than rooting others, particularly when it
comes to security testing: devices created by Google and manufactured by
companies like Samsung, LG, and Motorola are among the most popular,
particularly because they are used by many developers. The device warranty is
not nullified when the boot loader is unlocked and Google provides many tools
to support the root itself. A curated list of guides for rooting all major brand
devices is posted on the XDA forums.
Network Setup
The available network setup options must be evaluated first. The mobile device
used for testing and the machine running the interception proxy must be
connected to the same Wi-Fi network. Use either an (existing) access point or
create an ad-hoc wireless network.
Once you've configured the network and established a connection between the
testing machine and the mobile device, several steps remain.
After completing these steps and starting the app, the requests should show up in
the interception proxy.
AppUse
MobSF
Nathan
The following procedure, which works on the Android emulator that ships with
Android Studio 2.x, is for setting up an HTTP proxy on the emulator:
Configure the HTTP proxy in the device's access point settings:
1. Open the Settings Menu.
2. Tap "Wireless & Networks" -> "Cellular Networks" or "Wireless &
Networks" -> "Mobile Networks".
3. Open "Access Point Names".
4. Open the existing APN (e.g., "T-Mobile US").
5. Enter "127.0.0.1" in the "Proxy" field and your proxy port in the "Port"
field (e.g., "8080").
6. Open the menu at the top right and tap "save".
HTTP and HTTPS requests should now be routed over the proxy on the host
machine. If not, try toggling airplane mode off and on.
An easy way to install a CA certificate is to push the certificate to the device and
add it to the certificate store via Security Settings. For example, you can install
the PortSwigger (Burp) CA certificate as follows:
1. Start Burp and use a web browser on the host to navigate to http://burp/,
then download cacert.der by clicking the "CA Certificate" button.
2. Change the file extension from .der to .cer .
3. Push the file to the emulator:
$ adb push cacert.cer /sdcard/
You should then be prompted to confirm installation of the certificate (you'll also
be asked to set a device PIN if you haven't already).
You can create an Android Virtual Device with the AVD manager, which is
available within Android Studio. You can also start the AVD manager from the
command line with the android command, which is found in the tools
directory of the Android SDK:
$ ./android avd
Once the emulator is up and running, you can establish a root connection with
the adb command.
$ adb root
$ adb shell
root@generic_x86:/ $ id
uid=0(root) gid=0(root)
groups=0(root),1004(input),1007(log),1011(adb),1015(sdcard_rw),
1028(sdcard_r),3001(net_bt_admin),3002(net_bt),3003(inet),3006(
net_bw_stats) context=u:r:su:s0
Rooting an emulator is therefore unnecessary; root access can be established
with adb .
There are several downsides to using an emulator. You may not be able to test an
app properly in an emulator if the app relies on a specific mobile network or uses
NFC or Bluetooth. Testing within an emulator is also usually slower, and the
testing itself may cause issues.
Nevertheless, you can emulate many hardware characteristics, such as GPS and
SMS.
Testing Methods
For source code testing, you'll need a setup similar to the developer's,
including the Android SDK and an IDE. Access to either a physical device or an
emulator (for debugging the app) is recommended.
During black box testing, you won't have access to the original form of the
source code. You'll usually have the application package in Android's .apk
format, which can be installed on an Android device or reverse engineered to
help you retrieve parts of the source code.
apkx provides an easy method of retrieving an APK's source code via the
command line. It also packages dex2jar and CFR and automates the
extraction, conversion, and decompilation steps. Install it by cloning the
GitHub repository and running the included install script:
$ git clone https://github.com/b-mueller/apkx
$ cd apkx
$ sudo ./install.sh
This should copy apkx to /usr/local/bin . Run it on the APK that you want
to test as follows:
$ apkx UnCrackable-Level1.apk
Extracting UnCrackable-Level1.apk to UnCrackable-Level1
Converting: classes.dex -> classes.jar (dex2jar)
dex2jar UnCrackable-Level1/classes.dex -> UnCrackable-
Level1/classes.jar
Decompiling to UnCrackable-Level1/src (cfr)
If the application is based solely on Java and doesn't have any native libraries
(C/C++ code), the reverse engineering process is relatively easy and recovers
almost all the source code. Nevertheless, if the code is obfuscated, this process
may be very time-consuming and unproductive. This also applies to applications
that contain a native library. They can still be reverse engineered, but the process
is not automated and requires knowledge of low-level details.
Some static analyzers rely on the availability of the source code; others take the
compiled APK as input. Keep in mind that static analyzers may not be able to
find all problems by themselves even though they can help us focus on potential
problems. Review each finding carefully and try to understand what the app is
doing to improve your chances of finding vulnerabilities.
Configure the static analyzer properly to reduce the likelihood of false
positives, and consider selecting only several vulnerability categories for the
scan. The results generated by static analyzers can otherwise be overwhelming,
and your efforts can be counterproductive if you must manually investigate a
large report.
There are several open source tools for automated security analysis of an APK.
QARK
Androbugs
JAADAS
For enterprise tools, see the section "Static Source Code Analysis" in the chapter
"Testing Tools."
Dynamic Analysis
Unlike static analysis, dynamic analysis is performed while executing the mobile
app. The test cases range from investigating the file system to monitoring
communication.
Several tools support the dynamic analysis of applications that rely on the
HTTP(S) protocol. The most important tools are the so-called interception
proxies; OWASP ZAP and Burp Suite Professional are the best known. An
interception proxy gives the tester a man-in-the-middle position, which is
useful for reading and/or modifying all app requests and endpoint responses
while testing authorization, session management, and so on.
Drozer
On Linux:
Pre-built packages for many Linux distributions are available on the Drozer
website. If your distribution is not listed, you can build Drozer from source as
follows:
On Mac:
Create a project directory to work in; you'll download several files into it.
Navigate into the newly created directory and run the command virtualenv
drozer . This creates a "drozer" folder, which contains the Python executables:
$ virtualenv drozer
$ source drozer/bin/activate
(drozer) $
You're now ready to install the required version of pyOpenSSL and build it
against the OpenSSL headers installed previously. A typo in the source of the
pyOpenSSL version that Drozer requires prevents successful compilation, so
you'll need to fix the source before compiling. Fortunately, ropnop has figured
out the necessary steps and documented them in a blog post. Run the following
commands:
$ wget
https://pypi.python.org/packages/source/p/pyOpenSSL/pyOpenSSL-
0.13.tar.gz
$ tar xzvf pyOpenSSL-0.13.tar.gz
$ cd pyOpenSSL-0.13
$ sed -i '' 's/X509_REVOKED_dup/X509_REVOKED_dupe/'
OpenSSL/crypto/crl.c
$ python setup.py build_ext -L/usr/local/opt/openssl/lib -
I/usr/local/opt/openssl/include
$ python setup.py build
$ python setup.py install
With that out of the way, you can install the remaining dependencies.
Finally, download and install the Python .egg from the MWR labs website:
$ wget
https://github.com/mwrlabs/drozer/releases/download/2.3.4/droze
r-2.3.4.tar.gz
$ tar xzf drozer-2.3.4.tar.gz
$ easy_install drozer-2.3.4-py2.7.egg
Drozer agent is the software component that runs on the device itself. Download
the latest Drozer Agent here and install it with adb.
Starting a Session:
You should now have the Drozer console installed on your host machine and the
Agent running on your USB-connected device or emulator. Now you need to
connect the two to start exploring.
Open the Drozer application in the running emulator and click the OFF button at
the bottom of the app to start an Embedded Server.
The server listens on port 31415 by default. Use adb to forward this port to the
localhost interface, then run Drozer on the host to connect to the agent:
$ adb forward tcp:31415 tcp:31415
$ drozer console connect
Use the "list" command to view all Drozer modules that can be executed in the
current session.
To list all the packages installed on the emulator, execute the following
command:
dz> run app.package.list
To find the package name of a specific app, pass "-f" and a search
string:
dz> run app.package.list -f (string to be searched)
Using Modules:
Out of the box, Drozer provides modules for investigating various aspects of the
Android platform and a few remote exploits. You can extend Drozer's
functionality by downloading and installing additional modules.
Finding Modules:
The official Drozer module repository is hosted alongside the main project on
GitHub. This is automatically set up in your copy of Drozer. You can search for
modules with the module command:
dz> module search (keyword)
For more information about a module, pass the -d option to view the module's
description:
dz> module search (keyword) -d
Installing Modules:
You can install modules with the module install command:
dz> module install (module name or keyword)
This will install any module that matches your query. Newly installed modules
are dynamically loaded into the console and are available immediately.
Network Monitoring/Sniffing
Remotely sniffing all Android traffic in real-time is possible with tcpdump,
netcat (nc), and Wireshark. First, make sure that you have the latest version of
Android tcpdump on your phone. Here are the installation steps:
# adb root
# adb remount
# adb push /wherever/you/put/tcpdump /system/xbin/tcpdump
If execution of adb root returns the error adbd cannot run as root in
production builds , install tcpdump as follows:
$ adb push /wherever/you/put/tcpdump /data/local/tmp/tcpdump
$ adb shell
$ su
$ mount -o rw,remount /system
$ cp /data/local/tmp/tcpdump /system/xbin/
$ chmod 755 /system/xbin/tcpdump
Execute tcpdump once to see if it works. Once a few packets have come in, you
can stop tcpdump by pressing CTRL+c.
# tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full
protocol decode
listening on wlan0, link-type EN10MB (Ethernet), capture size
262144 bytes
04:54:06.590751 00:9e:1e:10:7f:69 (oui Unknown) > Broadcast,
RRCP-0x23 reply
04:54:09.659658 00:9e:1e:10:7f:69 (oui Unknown) > Broadcast,
RRCP-0x23 reply
04:54:10.579795 00:9e:1e:10:7f:69 (oui Unknown) > Broadcast,
RRCP-0x23 reply
^C
3 packets captured
3 packets received by filter
0 packets dropped by kernel
To remotely sniff the Android phone's network traffic, first execute tcpdump
and pipe its output to netcat (nc):
$ tcpdump -i wlan0 -s0 -w - | nc -l -p 11111
With the pipe ( | ), we send all output from tcpdump to netcat, which opens a
listener on port 11111. You'll usually want to monitor the wlan0 interface. If
you need another interface, list the available options with the command $ ip
addr .
To access port 11111, you need to forward the port to your machine via adb:
$ adb forward tcp:11111 tcp:11111
The following command connects you to the forwarded port via netcat and pipes
the data to Wireshark:
$ nc localhost 11111 | wireshark -k -S -i -
Wireshark should start immediately (-k). It gets all data from stdin (-i -) via
netcat, which is connected to the forwarded port. You should see all the
phone's traffic from the wlan0 interface.
Firebase/Google Cloud Messaging (FCM/GCM)
Firebase Cloud Messaging (FCM), the successor to Google Cloud Messaging
(GCM), is a free service offered by Google that allows you to send messages
between an application server and client apps. The server and client app
communicate via the FCM/GCM connection server, which handles downstream
and upstream messages.
Downstream messages (push notifications) are sent from the application server
to the client app; upstream messages are sent from the client app to the server.
FCM is available for Android, iOS, and Chrome. FCM currently provides two
connection server protocols: HTTP and XMPP. As described in the official
documentation, these protocols are implemented differently. The following
example demonstrates how to intercept both protocols.
Preparation
FCM can use either XMPP or HTTP to communicate with the Google backend.
HTTP
FCM uses the ports 5228, 5229, and 5230 for HTTP communication. Usually,
only port 5228 is used.
Configure local port forwarding for the ports used by FCM. The following
example applies to Mac OS X:
$ echo "
rdr pass inet proto tcp from any to any port 5228 -> 127.0.0.1
port 8080
rdr pass inet proto tcp from any to any port 5229 -> 127.0.0.1
port 8080
rdr pass inet proto tcp from any to any port 5230 -> 127.0.0.1
port 8080
" | sudo pfctl -ef -
The interception proxy must listen to the port specified in the port
forwarding rule above (port 8080).
XMPP
For XMPP communication, FCM uses ports 5235 (Production) and 5236
(Testing).
Configure local port forwarding for the ports used by FCM. The following
example applies to Mac OS X:
$ echo "
rdr pass inet proto tcp from any to any port 5235 -> 127.0.0.1
port 8080
rdr pass inet proto tcp from any to any port 5236 -> 127.0.0.1
port 8080
" | sudo pfctl -ef -
The interception proxy must listen to the port specified in the port
forwarding rule above (port 8080).
Intercepting Messages
Read the chapter "Testing Network Communication" and the test case "Man-in-
the-middle (MITM) attacks" for further preparation and instructions for running
ettercap.
Your testing machine and the Android device must be connected to the same
wireless network. Start ettercap with the following command, replacing the IP
addresses below with the IP addresses of the wireless network's gateway and
your Android device:
$ ettercap -T -i en0 -M arp:remote /192.168.0.1// /192.168.0.105//
Start the app and trigger a function that uses FCM. You should see HTTP
messages in your interception proxy.
Interception proxies such as Burp and OWASP ZAP won't show this traffic
because they aren't capable of decoding it properly by default. There are,
however, Burp plugins that visualize XMPP traffic, such as Burp-non-HTTP-
Extension and Mitm-relay.
Potential Obstacles
Discuss with your project team the possibility of providing a debug build for the
following security controls, which may be implemented in the app you're about
to test. A debug build provides several benefits for a (white box) test by allowing
a more comprehensive analysis.
Certificate Pinning
If the app implements certificate pinning, X.509 certificates provided by an
interception proxy will be declined and the app will refuse to make any
requests through the proxy. To perform an efficient white box test, use a debug
build with deactivated certificate pinning.
There are several ways to bypass certificate pinning for a black box test, for
example, SSLUnpinning and Android-SSL-TrustKiller. Certificate pinning can be
bypassed within seconds, but only if the app uses the API functions covered by
these tools. If the app implements SSL pinning with a framework or library that
those tools don't yet support, the SSL pinning must be manually patched and
deactivated, which can be time-consuming.
Deactivating SSL Pinning satisfies the prerequisites for dynamic analysis, after
which the app's communication can be investigated.
See the test case "Testing Custom Certificate Stores and Certificate Pinning" for
more details.
Root Detection
An app that implements root detection may refuse to run on a rooted device,
which interferes with dynamic analysis. To perform an efficient white box test,
use a debug build with root detection deactivated. For a black box test, the
root detection logic must first be identified and then bypassed (e.g., by
patching the app or hooking the relevant functions), which can be
time-consuming.
Testing Data Storage on Android
The guidelines for saving data can be summarized quite easily: Public data
should be available to everyone, but sensitive and private data must be protected,
or, better yet, kept out of device storage.
Note that the meaning of "sensitive data" depends on the app that handles it.
Data classification is described in detail in the "Identifying Sensitive Data"
section of the chapter "Mobile App Security Testing."
Overview
Conventional wisdom suggests that as little sensitive data as possible should be
stored on permanent local storage. In most practical scenarios, however, some
type of user data must be stored. For example, asking the user to enter a very
complex password every time the app starts isn't a great idea in terms of
usability. Most apps must locally cache some kind of authentication token to
avoid this. Personally identifiable information (PII) and other types of sensitive
data may also be saved if a given scenario calls for it.
Sensitive data is vulnerable when it is not properly protected by the app that is
persistently storing it. The app may be able to store the data in several places, for
example, on the device or on an external SD card. When you're trying to exploit
these kinds of issues, consider that a lot of information may be processed and
stored in different locations. Identifying at the outset the kind of information
processed by the mobile application and input by the user is important.
Identifying information that may be valuable to attackers (e.g., passwords, credit
card information, PII) is also important.
Storing data is essential for many mobile apps. For example, some apps use data
storage to keep track of user settings or user-provided data. Data can be stored
persistently in several ways. The following storage techniques are widely used
on the Android platform:
Shared Preferences
SQLite Databases
Realm Databases
Internal Storage
External Storage
The following code snippets demonstrate bad practices that disclose sensitive
information. They also illustrate Android storage mechanisms in detail. For more
information, check out the Security Tips for Storing Data in the Android
developer's guide.
Shared Preferences
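The original snippet is not reproduced here; the following reconstruction sketch (standard Android API; the file name "key" and the credentials are assumptions) shows the kind of code that produces such a file with world-readable permissions:

```java
// Inside an Activity: stores credentials in clear text in a
// world-readable shared preferences file (key.xml) -- bad practice.
SharedPreferences sharedPref = getSharedPreferences("key", MODE_WORLD_READABLE);
SharedPreferences.Editor editor = sharedPref.edit();
editor.putString("username", "administrator");
editor.putString("password", "supersecret");
editor.commit();
```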
Once the activity has been called, the file key.xml will be created with the
provided data. This code violates several best practices.
root@hermes:/data/data/sg.vp.owasp_mobile.myfirstapp/shared_pre
fs # ls -la
-rw-rw-r-- u0_a118 170 2016-04-23 16:51 key.xml
SQLite Databases (Unencrypted)
SQLite is an SQL database engine that stores data in .db files. The Android
SDK has built-in support for SQLite databases. The main package used to
manage the databases is android.database.sqlite . You may use the following
code to store sensitive information within an activity:
SQLiteDatabase notSoSecure =
openOrCreateDatabase("privateNotSoSecure",MODE_PRIVATE,null);
notSoSecure.execSQL("CREATE TABLE IF NOT EXISTS
Accounts(Username VARCHAR, Password VARCHAR);");
notSoSecure.execSQL("INSERT INTO Accounts
VALUES('admin','AdminPass');");
notSoSecure.close();
Once the activity has been called, the database file privateNotSoSecure will be
created with the provided data and stored in the clear text file
/data/data/<package-name>/databases/privateNotSoSecure .
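On a rooted device, such a database can simply be pulled and read with any SQLite client. The following sketch recreates the situation locally (assumption: the sqlite3 command-line tool is installed on the testing machine):

```shell
# Recreate the unprotected database as the app would
sqlite3 privateNotSoSecure "CREATE TABLE IF NOT EXISTS Accounts(Username VARCHAR, Password VARCHAR); INSERT INTO Accounts VALUES('admin','AdminPass');"

# Dump the clear text credentials, as an attacker with file access would
sqlite3 privateNotSoSecure "SELECT * FROM Accounts;"
```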
The database's directory may contain several files besides the SQLite database:
Journal files: These are temporary files used to implement atomic commit
and rollback.
Lock files: The lock files are part of the locking and journaling feature,
which was designed to improve SQLite concurrency and reduce the writer
starvation problem.
SQLite Databases (Encrypted)
Encrypted SQLite databases can be created with the SQLCipher library, for
example:
SQLiteDatabase secureDB =
SQLiteDatabase.openOrCreateDatabase(database, "password123",
null);
secureDB.execSQL("CREATE TABLE IF NOT EXISTS Accounts(Username
VARCHAR,Password VARCHAR);");
secureDB.execSQL("INSERT INTO Accounts
VALUES('admin','AdminPassEnc');");
secureDB.close();
If encrypted SQLite databases are used, determine whether the password is hard-
coded in the source, stored in shared preferences, or hidden somewhere else in
the code or filesystem. Secure ways to retrieve the key include:
Asking the user to decrypt the database with a PIN or password once the
app is opened (weak passwords and PINs are vulnerable to brute force
attacks)
Storing the key on the server and allowing it to be accessed from a web
service only (so that the app can be used only when the device is online)
Realm Databases
The Realm Database for Java is becoming more and more popular among
developers. The database and its contents can be encrypted with a key stored in
the configuration file.
//the getKey() method either gets the key from the server or
//from a KeyStore, or derives it from a password.
RealmConfiguration config = new RealmConfiguration.Builder()
.encryptionKey(getKey())
.build();
If the database is not encrypted, you should be able to obtain the data. If the
database is encrypted, determine whether the key is hard-coded in the source or
resources and whether it is stored unprotected in shared preferences or some
other location.
Internal Storage
You can save files to the device's internal storage. Files saved to internal storage
are containerized by default and cannot be accessed by other apps on the device.
When the user uninstalls your app, these files are removed. The following code
would persistently store sensitive data to internal storage:
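The referenced snippet is not shown; a reconstruction sketch with the standard Android file API follows (the file name and content are placeholders):

```java
// Inside an Activity: writes sensitive data to internal storage.
// MODE_PRIVATE keeps the file inaccessible to other apps.
String sensitiveData = "sensitive information";
FileOutputStream fos = openFileOutput("sensitive_info.txt", MODE_PRIVATE);
fos.write(sensitiveData.getBytes());
fos.close();
```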
You should check the file mode to make sure that only the app can access the
file. You can set this access with MODE_PRIVATE . Modes such as
MODE_WORLD_READABLE (deprecated) and MODE_WORLD_WRITEABLE (deprecated) may pose
a security risk because they allow any other app on the device to read from or
write to the file.
Search for the class FileInputStream to find out which files are opened and
read within the app.
External Storage
It's also worth knowing that files stored outside the application folder
( data/data/<package-name>/ ) will not be deleted when the user uninstalls the
application.
Static Analysis
Local Storage
Check the source code for keywords and API calls that are used to store
data:
File permissions, such as:
MODE_WORLD_READABLE or MODE_WORLD_WRITABLE : You should check whether
files are created with permissions that make them readable or writable by
any other app on the device.
The following example shows a hard-coded encryption key in the source code:
this.db =
localUserSecretStore.getWritableDatabase("SuperPassword123");
Obtaining the key is trivial because it is contained in the source code and
identical for all installations of the app. Encrypting data this way is not
beneficial. Look for hard-coded API keys/private keys and other valuable data;
they pose a similar risk. Encoded/encrypted keys represent another attempt to
make it harder but not impossible to get the crown jewels.
The algorithm for decoding the original key might be something like this:
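As an illustration, the following self-contained sketch (entirely hypothetical; real apps vary) decodes a key that was obfuscated with Base64 and a fixed XOR mask. Both steps are trivially reversible by anyone who extracts the mask from the APK:

```java
import java.util.Base64;

public class KeyDecoder {
    private static final byte MASK = 0x5A; // fixed mask shipped inside the APK

    // Reverses the obfuscation: Base64-decode, then undo the XOR
    static byte[] decodeKey(String obfuscated) {
        byte[] raw = Base64.getDecoder().decode(obfuscated);
        for (int i = 0; i < raw.length; i++) raw[i] ^= MASK;
        return raw;
    }

    public static void main(String[] args) {
        // Simulate the build step that obfuscated the key
        byte[] key = "My_Secret_Key".getBytes();
        byte[] obf = key.clone();
        for (int i = 0; i < obf.length; i++) obf[i] ^= MASK;
        String stored = Base64.getEncoder().encodeToString(obf);

        // The app (and the attacker) recover the key the same way
        System.out.println(new String(decodeKey(stored))); // prints My_Secret_Key
    }
}
```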
Example:
<resources>
<string name="app_name">SuperApp</string>
<string name="hello_world">Hello world!</string>
<string name="action_settings">Settings</string>
<string name="secret_key">My_Secret_Key</string>
</resources>
Example:
buildTypes {
debug {
minifyEnabled true
buildConfigField "String", "hiddenPassword",
"\"${hiddenPassword}\""
}
}
KeyStore
The Android KeyStore supports relatively secure credential storage. As of
Android 4.3, it provides public APIs for storing and using app-private keys. An
app can create a new private/public key pair, encrypt application secrets with
the public key, and decrypt them with the private key.
You can protect keys stored in the Android KeyStore with user authentication.
The user's lock screen credentials (pattern, PIN, password, or fingerprint) are
used for authentication.
1. Users are authorized to use keys for a limited period of time after
authentication. In this mode, all keys can be used as soon as the user
unlocks the device. You can customize the period of authorization for each
key. You can use this option only if the secure lock screen is enabled. If the
user disables the secure lock screen, all stored keys will become
permanently invalid.
2. Users are authorized to use a specific cryptographic operation that is
associated with one key. In this mode, each operation that involves the key
must be individually authorized by the user.
HMAC keys are insecurely stored on several devices despite private keys being
correctly stored on the secure hardware.
Older Android versions don't include KeyStore, but they do include the
KeyStore interface from JCA (Java Cryptography Architecture). You can use
KeyStores that implement this interface to ensure the secrecy and integrity of
keys stored with KeyStore; BouncyCastle KeyStore (BKS) is recommended. All
implementations are based on the fact that files are stored on the filesystem; all
files are password-protected. To create one, you can use the
KeyStore.getInstance("BKS", "BC") method, where "BKS" is the KeyStore name
(BouncyCastle Keystore) and "BC" is the provider (BouncyCastle).
Be aware that not all KeyStores properly protect the keys stored in the KeyStore
files.
KeyChain
The KeyChain class is used to store and retrieve system-wide private keys and
their corresponding certificates (chain). The user will be prompted to set a lock
screen pin or password to protect the credential storage if something is being
imported into the KeyChain for the first time. Note that the KeyChain is system-
wide—every application can access the materials stored in the KeyChain.
Make sure that the app is using the Android KeyStore and Cipher
mechanisms to securely store encrypted information on the device. Look
for the patterns import java.security.KeyStore , import
javax.crypto.Cipher , import java.security.SecureRandom , and
corresponding usages.
Use the store(OutputStream stream, char[] password) function to store
the KeyStore to disk with a password. Make sure that the password is
provided by the user, not hard-coded.
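The store call can be exercised with plain JCA, as in this sketch ("JKS" is used here so the example runs on a desktop JVM; on Android you would request "BKS" with the "BC" provider, and the password must come from the user rather than a literal):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

public class KeyStoreDemo {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        // Placeholder only -- in a real app this comes from a user prompt
        char[] password = "user-supplied-password".toCharArray();
        ks.load(null, password); // initialize an empty store

        // Persist the password-protected keystore file
        try (FileOutputStream fos = new FileOutputStream("demo.keystore")) {
            ks.store(fos, password);
        }

        // Reload it; a wrong password here would throw an IOException
        KeyStore loaded = KeyStore.getInstance("JKS");
        try (FileInputStream fis = new FileInputStream("demo.keystore")) {
            loaded.load(fis, password);
        }
        System.out.println("entries: " + loaded.size()); // prints entries: 0
    }
}
```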
Dynamic Analysis
Install and use the app, executing all functions at least once. Data can be
generated when entered by the user, sent by the endpoint, or shipped with the
app. Then complete the following:
Identify development files, backup files, and old files that shouldn't be
included with a production release.
Determine whether SQLite databases are available and whether they
contain sensitive information. SQLite databases are stored in
/data/data/<package-name>/databases .
Avoid using Shared Preferences and other mechanisms that can't protect
data when you are storing sensitive information. Shared Preferences is
insecure and unencrypted by default. You can use secure-preferences to
encrypt the values stored in Shared Preferences, but the Android KeyStore
should be your first choice for storing data securely.
Check the permissions of the files in /data/data/<package-name> . Only
the user and group created when you installed the app (e.g., u0_a82) should
have user read, write, and execute permissions ( rwx ). Other users should
not have permission to access files, but they may have execute permissions
for directories.
Determine whether a Realm database is available in /data/data/<package-
name>/files/ , whether it is unencrypted, and whether it contains sensitive
information. By default, the file extension is realm and the file name is
default . Inspect the Realm database with the Realm Browser.
Check external storage for data. Don't use external storage for sensitive data
because it is readable and writeable system-wide.
Files saved to internal storage are by default private to your application; neither
the user nor other applications can access them. When users uninstall your
application, these files are removed.
Log Class
Logger Class
Use a centralized logging class and mechanism and remove logging statements
from the production release because other applications may be able to read them.
Static Analysis
Check the app's source code for logging mechanisms by searching for the
following keywords:
android.util.Log
Logger
System.out.print | System.err.print
logfile
logging
logs
While preparing the production release, you can use tools like ProGuard
(included in Android Studio) to delete logging-related code. To determine
whether all the android.util.Log class' logging functions have been removed,
check the ProGuard configuration file (proguard-project.txt) for the following
options:
-assumenosideeffects class android.util.Log
{
  public static boolean isLoggable(java.lang.String, int);
  public static int v(...);
  public static int i(...);
  public static int w(...);
  public static int d(...);
  public static int e(...);
  public static int wtf(...);
}
Note that the example above only ensures that calls to the Log class' methods
will be removed. If the string that will be logged is dynamically constructed,
the code that constructs the string may remain in the bytecode. For example,
the following code issues an implicit StringBuilder to construct the log
statement:
Log.v("Private key tag", "Private key [byte format]: " + key);
ProGuard guarantees removal of the Log.v method call. Whether the rest of the
code ( new StringBuilder ... ) will be removed depends on the complexity of
the code and the ProGuard version.
This is a security risk because the (unused) string leaks plain text data into
memory, which can be accessed via a debugger or memory dumping.
Unfortunately, no silver bullet exists for this issue, but a few options are
available:
Remove logs at the source level instead of at the compiled bytecode level.
Below is a simple Gradle task that comments out all log statements,
including any inline string builders:
afterEvaluate {
    project.getTasks().findAll { task ->
        task.name.contains("compile") && task.name.contains("Release")
    }.each { task ->
        task.dependsOn('removeLogs')
    }
}

task removeLogs() {
    doLast {
        fileTree(dir: project.file('src')).each { File file ->
            def out = file.getText("UTF-8").replaceAll("((android\\.util\\.)*Log\\.([ewidv]|wtf)\\s*\\([\\S\\s]*?\\)\\s*;)", "/*\$1*/")
            file.write(out);
        }
    }
}
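The regular expression used by the task can be checked in isolation. This sketch applies the same pattern and replacement to a sample log statement:

```java
public class LogStripCheck {
    public static void main(String[] args) {
        // Same pattern and replacement as in the Gradle task above
        String pattern = "((android\\.util\\.)*Log\\.([ewidv]|wtf)\\s*\\([\\S\\s]*?\\)\\s*;)";
        String code = "Log.v(\"tag\", \"Private key: \" + key);";
        // Wraps the whole statement in a block comment
        System.out.println(code.replaceAll(pattern, "/*$1*/"));
    }
}
```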
Dynamic Analysis
Use all the mobile app functions at least once, then identify the application's
data directory and look for log files ( /data/data/<package-name> ). Check the
application logs to determine whether log data has been generated; some mobile
applications create and store their own logs in the data directory.
A thorough log analysis strategy must include all output generated while the
application is starting, running, and closing. To determine what data is
directly printed by System.out.println or printStackTrace , you can use
Logcat .
Overview
You can embed third-party services in apps. These services can implement
tracker services, monitor user behavior, sell banner advertisements, improve the
user experience, and more.
The downside is a lack of visibility: you can't know exactly what code third-
party libraries execute. Consequently, you should make sure that only necessary,
non-sensitive information will be sent to the service.
Static Analysis
Check the source code for API calls and third-party library functions or SDKs.
Review code changes for security best practices.
Review loaded libraries to determine whether they are necessary and whether
they are out of date or contain known vulnerabilities.
Dynamic Analysis
Check all requests to external services for embedded sensitive information. To
intercept traffic between the client and server, you can perform dynamic analysis
by launching a man-in-the-middle (MITM) attack with Burp Suite Professional
or OWASP ZAP. Once you route the traffic through the interception proxy, you
can try to sniff the traffic that passes between the app and server. All app
requests that aren't sent directly to the server on which the main function is
hosted should be checked for sensitive information, such as PII in a tracker or ad
service.
Determining Whether the Keyboard Cache Is
Disabled for Text Input Fields
Overview
When users type in input fields, the software automatically suggests data. This
feature can be very useful for messaging apps. However, the keyboard cache
may disclose sensitive information when the user selects an input field that takes
this type of information.
Static Analysis
In the layout definition of an activity, you can define TextViews that have
XML attributes. If the XML attribute android:inputType is given the value
textNoSuggestions , the keyboard cache will not be shown when the input field
is selected.
The code for all input fields that take sensitive information should include this
XML attribute to disable the keyboard suggestions:
<EditText
    android:id="@+id/KeyBoardCache"
    android:inputType="textNoSuggestions"/>
Dynamic Analysis
Start the app and click in the input fields that take sensitive data. If strings are
suggested, the keyboard cache has not been disabled for these fields.
Finding Sensitive Data on the Clipboard
Overview
While users are typing data in input fields, they can use the clipboard to copy
and paste data. The device's apps share the clipboard, so malicious apps can
misuse it to access sensitive data.
Static Analysis
Identify input fields that take sensitive information and countermeasures that
mitigate the risk of clipboard access. Overwriting input field functions is a
general best practice that disables the clipboard for those functions. Long clicks,
which open the copy/paste menu, can also be disabled for a view in the layout
definition:
android:longClickable="false"
Dynamic Analysis
Start the app and click in the input fields that take sensitive data. If you are
shown the copy/paste menu, the clipboard functionality has not been disabled for
these fields.
You can use the Drozer module post.capture.clipboard to extract data from
the clipboard:
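Assuming an active drozer session on the device, the invocation looks like this:

```
dz> run post.capture.clipboard
```

Whatever the user last copied (for example a password pasted from a password manager) will be returned.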
Overview
As part of Android's IPC mechanisms, content providers allow an app's stored
data to be accessed and modified by other apps. If not properly configured, these
mechanisms may leak sensitive data.
Static Analysis
The first step is to look at AndroidManifest.xml to detect content providers
exposed by the app. You can identify content providers by the <provider>
element. Complete the following steps:
Determine whether the provider is exported ( android:exported ). A
protection level of "signature" makes the data accessible only by apps
from the same enterprise (i.e., signed with the same key). To
make the data accessible to other apps, apply a security policy with the
<permission> element and set a proper android:protectionLevel . If
you use android:permission , other applications must declare
corresponding <uses-permission> elements in their manifests to
interact with your content provider.
Inspect the source code to understand how the content provider is meant to be
used. Search for the following keywords:
android.content.ContentProvider
android.database.Cursor
android.database.sqlite
.query
.update
.delete
To avoid SQL injection attacks within the app, use parameterized query
methods, such as query , update , and delete . Be sure to properly
sanitize all method arguments; for example, the selection argument could
lead to SQL injection if it is made up of concatenated user input.
<provider
android:authorities="com.mwr.example.sieve.DBContentProvider"
android:exported="true" android:multiprocess="true"
android:name=".DBContentProvider">
<path-permission android:path="/Keys"
android:readPermission="com.mwr.example.sieve.READ_KEYS"
android:writePermission="com.mwr.example.sieve.WRITE_KEYS"/>
</provider>
<provider
android:authorities="com.mwr.example.sieve.FileBackupProvider"
android:exported="true" android:multiprocess="true"
android:name=".FileBackupProvider"/>
Here we see that there are actually two paths, "/Keys" and "/Passwords", and the
latter is not being protected in the manifest and is therefore vulnerable.
When the Passwords/ path is queried via a content URI, the query statement
returns all passwords. We will address this in the "Dynamic Analysis" section and show
the URI that is required.
Dynamic Analysis
Testing Content Providers
Once you have a list of accessible content providers, try to extract data from
each provider with the app.provider.query module:
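For the Sieve app shown in the static analysis, a query might look like this (the content URI is derived from the android:authorities value in the manifest, and the Passwords path is the one left unprotected):

```
dz> run app.provider.query content://com.mwr.example.sieve.DBContentProvider/Passwords/ --vertical
```

If the provider is not properly protected, this returns the stored records.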
Insert record
Update record
Delete record
The Android platform promotes SQLite databases for storing user data. Because
these databases are based on SQL, they may be vulnerable to SQL injection. You
can use the Drozer module app.provider.query to test for SQL injection by
manipulating the projection and selection fields that are passed to the content
provider:
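For example, passing a single quote in the projection or selection should trigger a visible SQL error if the input is concatenated into the underlying query (URI as in the Sieve example above):

```
dz> run app.provider.query content://com.mwr.example.sieve.DBContentProvider/Passwords/ --projection "'"
dz> run app.provider.query content://com.mwr.example.sieve.DBContentProvider/Passwords/ --selection "'"
```

A returned SQLiteException indicates that the provider is vulnerable to injection.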
Content providers can provide access to the underlying filesystem. This allows
apps to share files (the Android sandbox normally prevents this). You can use the
Drozer modules app.provider.read and app.provider.download to read and
download files, respectively, from exported file-based content providers. These
content providers are susceptible to directory traversal, which allows otherwise
protected files in the target application's sandbox to be read.
dz> run app.provider.download content://com.vulnerable.app.FileProvider/../../../../../../../../data/data/com.vulnerable.app/database.db /home/user/database.db
Written 24488 bytes
Overview
Many apps require users to enter several kinds of data to, for example, register
an account or make a payment. Sensitive data may be exposed if the app doesn't
properly mask it while displaying it in clear text.
Static Analysis
To make sure an application is masking sensitive user input, check for the
following attribute in the definition of EditText:
android:inputType="textPassword"
With this setting, dots (instead of the input characters) will be displayed in the
text field, preventing the app from leaking passwords or PINs to the user
interface.
Dynamic Analysis
To determine whether the application leaks any sensitive information to the user
interface, run the application and identify components that either show such
information or take it as input.
If the information is masked by, for example, replacing input with asterisks or
dots, the app isn't leaking data to the user interface.
Stock Android has built-in USB backup facilities. When USB debugging is
enabled, you can use the adb backup command to create full data backups
and backups of an app's data directory.
Google provides a "Back Up My Data" feature that backs up all app data to
Google's servers.
Auto Backup for Apps: With Android 6.0 (API level 23 and above), Google
added the "Auto Backup for Apps" feature. This feature automatically
syncs at most 25 MB of app data with the user's Google Drive account.
OEMs may provide additional options. For example, HTC devices have a
"HTC Backup" option that performs daily backups to the cloud when
activated.
Static Analysis
Local
Android provides an attribute called allowBackup to back up all your
application data. This attribute is set in the AndroidManifest.xml file. If the
value of this attribute is true, the device allows users to back up the application
with Android Debug Bridge (ADB) via the command $ adb backup .
To prevent the app data backup, set the android:allowBackup attribute to "false".
When this attribute is unavailable, the allowBackup setting is enabled by default,
and backup must be manually deactivated.
Note: If the device was encrypted, then the backup files will be encrypted as
well.
Check the AndroidManifest.xml file for the following flag:
android:allowBackup="true"
If the flag value is "true", determine whether the app saves any kind of sensitive
data (check the test case "Testing for Sensitive Data in Local Storage").
Cloud
Regardless of whether you use key/value backup or auto backup, you must
determine the following:
If you don't want to share files with Google Cloud, you can exclude them
from Auto Backup. Sensitive information stored at rest on the device should
be encrypted before being sent to the cloud.
Auto Backup: You configure Auto Backup via the boolean attribute
android:allowBackup within the application's manifest file. Auto Backup
is enabled by default for applications that target Android 6.0 (API Level
23). You can use the attribute android:fullBackupOnly to activate auto
backup when implementing a backup agent, but this attribute is available
for Android versions 6.0 and above only. Other Android versions use
key/value backup instead.
Auto backup includes almost all the app files and stores up to 25 MB of them per
app in the user's Google Drive account. Only the most recent backup is stored;
the previous backup is deleted.
To check for key/value backup implementations, look for the following
keywords in the source code:
android:backupAgent
BackupAgent
BackupAgentHelper
Dynamic Analysis
After executing all available app functions, attempt to back up via adb . If the
backup is successful, inspect the backup archive for sensitive data. Open a
terminal and run the following command:
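A typical invocation looks like this (the package name is a placeholder for the app under test):

```
$ adb backup -apk -nosystem <package-name>
```

The -apk flag includes the APK in the archive; -nosystem excludes system app data.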
Approve the backup from your device by selecting the Back up my data option.
After the backup process is finished, the .ab file will be in your working
directory. Run the following command to convert the .ab file to tar format.
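One common approach, assuming an unencrypted backup file named backup.ab, strips the 24-byte Android backup header and inflates the zlib stream:

```
$ dd if=backup.ab bs=24 skip=1 | python -c "import sys,zlib;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))" > backup.tar
```

This does not work for password-protected backups, which are encrypted in addition to being compressed.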
The Android Backup Extractor is an alternative backup tool. To make the tool
work, you have to download the Oracle JCE Unlimited Strength Jurisdiction
Policy Files for JRE7 or JRE8 and place them in the JRE lib/security folder. Run
the following command to convert the .ab file to tar:
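Assuming the extractor jar is named abe.jar, the unpack command might look like this:

```
$ java -jar abe.jar unpack backup.ab backup.tar
```

If the backup is password-protected, the password can be supplied as an additional argument.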
Static Analysis
A screenshot of the current activity is taken when an Android app goes into
background and displayed for aesthetic purposes when the app returns to the
foreground. However, this may leak sensitive information.
To determine whether the application may expose sensitive information via the
app switcher, find out whether the FLAG_SECURE option has been set. You
should find something similar to the following code snippet:
getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
WindowManager.LayoutParams.FLAG_SECURE);
setContentView(R.layout.activity_main);
If the option has not been set, the application is vulnerable to screen capturing.
Dynamic Analysis
While black-box testing the app, navigate to any screen that contains sensitive
information and click the home button to send the app to the background, then
press the app switcher button to see the snapshot. As shown below, if
FLAG_SECURE is set (right image), the snapshot will be empty; if the flag has not
been set (left image), a snapshot of the activity will be shown.
Overview
Analyzing memory can help developers identify the root causes of several
problems, such as application crashes. However, it can also be used to access
sensitive data. This section describes how to check for data disclosure via
process memory.
Static Analysis
For an overview of possible sources of data exposure, check the documentation
and identify application components before you examine the source code. For
example, sensitive data from a backend may be in the HTTP client, the XML
parser, etc. You want all these copies to be removed from memory as soon as
possible.
However, if you need to expose sensitive data in memory, you should make sure
that your app is designed to expose as few data copies as possible as briefly as
possible. In other words, you want the handling of sensitive data to be
centralized (i.e., with as few components as possible) and based on primitive,
mutable data structures.
The latter requirement gives developers direct memory access. Make sure that
they use this access to overwrite the sensitive data with dummy data (typically
zeroes). Examples of preferable data types include byte [] and char [] , but
not String or BigInteger . Whenever you try to modify an immutable object
like String , you create and change a copy of the object.
Mutable types such as StringBuffer store data in a modifiable structure. To
access such a type's value, however, you would use the toString method,
which would create an immutable copy of the data. There are several ways to
use these data types without creating an immutable copy, but they require more
effort than simply using a primitive array. Safe memory management is one
benefit of using types like StringBuffer , but this can be a two-edged sword.
If you try to modify the content of one of these types and the copy exceeds the
buffer capacity, the buffer size will automatically increase. The buffer content
may be copied to a different location, leaving the old content without a reference
you can use to overwrite it.
Unfortunately, few libraries and frameworks are designed to allow sensitive data
to be overwritten. For example, destroying a key, as shown below, doesn't really
remove the key from memory:
The RSA key pair is based on the BigInteger type and therefore resides in
memory after its first use outside the AndroidKeyStore . Some ciphers (such as
the AES Cipher in BouncyCastle ) do not properly clean up their byte-arrays.
The following section describes pitfalls of data leakage in memory and best
practices for avoiding them.
Make sure to overwrite the content of the critical object once the object is no
longer needed. Overwriting the content with zeroes is one simple and very
popular method.
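A minimal sketch of this zeroing approach (the class name is illustrative):

```java
import java.util.Arrays;

public class KeyWiper {
    // Overwrites the key material with zeroes once it is no longer needed,
    // so that a later memory dump does not reveal the original bytes.
    public static void wipe(byte[] key) {
        Arrays.fill(key, (byte) 0);
    }
}
```

As the following paragraphs explain, this simple form has caveats: the compiler may remove the overwrite, and Arrays.fill is an easy hooking target.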
This doesn't, however, guarantee that the content will be overwritten at run time.
While optimizing the bytecode, the compiler may determine that the data will
not be used again and remove the overwrite as a dead store (i.e., an unnecessary
operation). Even if the overwrite survives in the compiled DEX, it may still be
optimized away during just-in-time or ahead-of-time compilation in the VM.
There is no silver bullet for this problem because different solutions have
different consequences. For example, you may perform additional calculations
(e.g., XOR the data into a dummy buffer), but you'll have no way to know the
extent of the compiler's optimization analysis. On the other hand, using the
overwritten data outside the compiler's scope (e.g., serializing it in a temp file)
guarantees that it will be overwritten but obviously impacts performance and
maintenance.
Note also that using Arrays.fill to overwrite the data is a bad idea because the
method is an obvious hooking target (see the chapter "Tampering and Reverse
Engineering on Android" for more details).
The final issue with the above example is that the content was overwritten with
zeroes only. You should try to overwrite critical objects with random data or
content from non-critical objects. This will make it really difficult to construct
scanners that can identify sensitive data on the basis of its management.
For more information, take a look at Securely Storing Sensitive Data in RAM.
public byte[] getEncoded() {
    return key.clone();
}

/** Overwrites the key with dummy data to ensure this copy
    is no longer present in memory. */
public void destroy() {
    if (isDestroyed()) {
        return;
    }
    this.key = null;
    System.gc();
}
User-provided secret data is the final type of sensitive information usually found
in memory. This is often managed by implementing a custom input method, for
which you should follow the recommendations given here. However, Android
allows information to be partially erased from EditText buffers via a custom
Editable.Factory .
Dynamic Analysis
Static analysis will help you identify potential problems, but it can't provide
statistics about how long data has been exposed in memory, nor can it help you
identify problems in closed-source dependencies. This is where dynamic
analysis comes into play.
There are basically two ways to analyze the memory of a process: live analysis
via a debugger and analyzing one or more memory dumps. Because the former is
more of a general debugging approach, we will concentrate on the latter.
For rudimentary analysis, you can use Android Studio's built-in tools. They are
on the Android Monitor tab. To dump memory, select the device and app you
want to analyze and click Dump Java Heap. This will create a .hprof file in the
captures directory, which is on the app's project path.
To navigate through class instances that were saved in the memory dump, select
the Package Tree View in the tab showing the .hprof file.
For more advanced analysis of the memory dump, use the Eclipse Memory
Analyzer (MAT). It is available as an Eclipse plugin and as a standalone
application.
To analyze the dump in MAT, use the hprof-conv platform tool, which comes
with the Android SDK.
MAT (Memory Analyzer Tool) provides several tools for analyzing the memory
dump. For example, the Histogram provides an estimate of the number of
captured objects of a given type, and the Thread Overview
shows processes' threads and stack frames. The Dominator Tree provides
information about keep-alive dependencies between objects. You can use regular
expressions to filter the results these tools provide.
The Object Query Language (OQL) studio is a MAT feature that allows you to
query objects from the memory dump with an SQL-like language. The tool also
allows you to transform simple objects by invoking Java methods on them, and
it provides an API for building sophisticated tools on top of MAT.
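As a starting point, the following query (standard MAT OQL syntax) selects all String instances:

```
SELECT * FROM java.lang.String
```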
In the example above, all String objects present in the memory dump will be
selected. The results will include the object's class, memory address, value, and
retain count. To filter this information and see only the value of each string, use
the following code:
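A query of this shape, using MAT's built-in toString function, returns only the values:

```
SELECT toString(object) FROM java.lang.String object
```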
OQL supports primitive data types as well, so you can do something like the
following to access the content of all char arrays:
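For example (again standard MAT OQL, this time over a primitive array class):

```
SELECT toString(arr) FROM char[] arr
```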
Don't be surprised if you get results that are similar to the previous results; after
all, String and other Java data types are just wrappers around primitive data
types. Now let's filter the results. The following sample code will select all byte
arrays that contain the ASN.1 OID of an RSA key. This doesn't imply that a
given byte array actually contains an RSA (the same byte sequence may be part
of something else), but this is probable.
Finally, you don't have to select whole objects. Consider an SQL analogy:
classes are tables, objects are rows, and fields are columns. If you want to find
all objects that have a "password" field, you can do something like the
following:
Repeating tests and memory dumps will help you obtain statistics about the
length of data exposure. Furthermore, observing the way a particular memory
segment (e.g., a byte array) changes may lead you to some otherwise
unrecognizable sensitive data (more on this in the "Remediation" section below).
Overview
Apps that process or query sensitive information should run in a trusted and
secure environment. To create this environment, the app can check the device for
the following:
Static Analysis
To test the device-access-security policy that the app enforces, a written copy of
the policy must be provided. The policy should define available checks and their
enforcement. For example, one check could require that the app run only on
Android Marshmallow (Android 6.0) or a more recent version, closing the app or
displaying a warning if the Android version is less than 6.0.
Check the source code for functions that implement the policy and determine
whether the checks can be bypassed.
Dynamic Analysis
The dynamic analysis depends on the checks enforced by the app and their
expected behavior. Try to bypass each check to validate that it cannot be
circumvented.
References
CWE
CWE-117 - Improper Output Neutralization for Logs
CWE-200 - Information Exposure
CWE-316 - Cleartext Storage of Sensitive Information in Memory
CWE-359 - Exposure of Private Information ('Privacy Violation')
CWE-524 - Information Exposure Through Caching
CWE-532 - Information Exposure Through Log Files
CWE-534 - Information Exposure Through Debug Log Files
CWE-311 - Missing Encryption of Sensitive Data
CWE-312 - Cleartext Storage of Sensitive Information
CWE-522 - Insufficiently Protected Credentials
CWE-530 - Exposure of Backup File to an Unauthorized Control Sphere
CWE-634 - Weaknesses that Affect System Processes
CWE-922 - Insecure Storage of Sensitive Information
Tools
Sqlite3 - http://www.sqlite.org/cli.html
Realm Browser - https://github.com/realm/realm-browser-osx
ProGuard - http://proguard.sourceforge.net/
Logcat - http://developer.android.com/tools/help/logcat.html
Burp Suite Professional - https://portswigger.net/burp/
OWASP ZAP -
https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
Drozer - https://labs.mwrinfosecurity.com/tools/drozer/
Android Backup Extractor - https://github.com/nelenkov/android-backup-
extractor
Memory Monitor -
http://developer.android.com/tools/debugging/debugging-
memory.html#ViewHeap
Eclipse’s MAT (Memory Analyzer Tool) standalone -
https://eclipse.org/mat/downloads.php
Memory Analyzer which is part of Eclipse -
https://www.eclipse.org/downloads/
Fridump - https://github.com/Nightbringer21/fridump
LiME - https://github.com/504ensicsLabs/LiME
Android Cryptographic APIs
In the chapter Cryptography for Mobile Apps, we introduced general
cryptography best practices and described typical flaws that can occur when
cryptography is used incorrectly in mobile apps. In this chapter, we'll go into
more detail on Android's cryptography APIs. We'll show how to identify uses of
those APIs in the source code and how to interpret the configuration. When
reviewing code, make sure to compare the cryptographic parameters used with
the current best practices linked from this guide.
Overview
Android cryptography APIs are based on the Java Cryptography Architecture
(JCA). JCA separates the interfaces and implementation, making it possible to
include several security providers that can implement sets of cryptographic
algorithms. Most of the JCA interfaces and classes are defined in the
java.security.* and javax.crypto.* packages. In addition, there are
Android-specific packages, such as android.security and android.security.keystore .
Below you can find the output of an Android 4.4 emulator with
Google Play APIs, after the security provider has been patched:
For some applications that support older versions of Android, bundling an up-to-
date library may be the only option. Spongy Castle (a repackaged version of
Bouncy Castle) is a common choice in these situations. Repackaging is
necessary because Bouncy Castle is included in the Android SDK. The latest
version of Spongy Castle likely fixes issues encountered in the earlier versions
of Bouncy Castle that were included in Android. Note that the Bouncy Castle
libraries packed with Android are often not as complete as their counterparts
from The Legion of the Bouncy Castle. Lastly, bear in mind that packaging large
libraries such as Spongy Castle will often lead to a multidexed Android
application.
Android SDK provides mechanisms for specifying secure key generation and
use. Android 6.0 (Marshmallow, API 23) introduced the KeyGenParameterSpec
class that can be used to ensure the correct key usage in the application.
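The following sketch shows such a key generation (the key alias and variable names are illustrative):

```java
KeyGenerator keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
keyGenerator.init(new KeyGenParameterSpec.Builder("MyKeyAlias",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setRandomizedEncryptionRequired(true)
        .build());
SecretKey secretKey = keyGenerator.generateKey();
```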
The KeyGenParameterSpec indicates that the key can be used for encryption
and decryption, but not for other purposes, such as signing or verifying. It further
specifies the block mode (CBC), padding (PKCS7), and explicitly specifies that
randomized encryption is required (this is the default). "AndroidKeyStore" is
the name of the cryptographic service provider used in this example.
GCM is another AES block mode that provides additional security benefits over
other, older modes. In addition to being cryptographically more secure, it also
provides authentication. When using CBC (and other modes), authentication
would need to be performed separately, using HMACs (see the Reverse
Engineering chapter). Note that GCM is the only mode of AES that does not
support paddings.
Attempting to use the generated key in violation of the above spec would result
in a security exception.
// byte[] input
Key key = keyStore.getKey(keyAlias, null);
Cipher cipher = Cipher.getInstance(AES_MODE);
cipher.init(Cipher.ENCRYPT_MODE, key);
byte[] iv = cipher.getIV();
byte[] encryptedBytes = cipher.doFinal(input);
Both the IV (initialization vector) and the encrypted bytes need to be stored;
otherwise decryption is not possible.
Here's how that cipher text would be decrypted. The input is the encrypted
byte array and iv is the initialization vector from the encryption step:
// byte[] input
// byte[] iv
Key key = keyStore.getKey(AES_KEY_ALIAS, null);
Cipher cipher = Cipher.getInstance(AES_MODE);
cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
byte[] decryptedBytes = cipher.doFinal(input);
Since the IV is randomly generated each time, it should be saved along with the
cipher text ( encryptedBytes ) in order to decrypt it later.
Prior to Android 6.0, AES key generation was not supported. As a result, many
implementations chose to use RSA and generated a public-private key pair for
asymmetric encryption using KeyPairGeneratorSpec or used SecureRandom to
generate AES keys.
Here's an example of KeyPairGenerator and KeyPairGeneratorSpec used to
create the RSA key pair:
KeyPairGenerator keyPairGenerator =
KeyPairGenerator.getInstance("RSA",
"AndroidKeyStore");
keyPairGenerator.initialize(keyPairGeneratorSpec);
This sample creates the RSA key pair with a key size of 4096-bit (i.e. modulus
size).
Static Analysis
Locate uses of the cryptographic primitives in code. Some of the most frequently
used classes and interfaces:
Cipher
Mac
MessageDigest
Signature
Ensure that the best practices outlined in the "Cryptography for Mobile Apps"
chapter are followed. Verify that the configuration of the cryptographic
algorithms used is aligned with best practices from NIST and BSI and that the
algorithms are considered strong.
Overview
Cryptography requires a secure pseudo-random number generator (PRNG).
Standard Java classes do not provide sufficient randomness and may in fact
make it possible for an attacker to guess the next value that will be generated,
and use this guess to impersonate another user or access sensitive information.
Static Analysis
Identify all the instances of random number generators and look for either
custom implementations or use of the known insecure java.util.Random class.
This class produces an identical sequence of numbers for each given seed value;
consequently, the sequence of numbers is predictable.
The following sample source code shows weak random number generation:
import java.util.Random;
// ...
Random number = new Random(123L);
int n = number.nextInt(21); // predictable for a known seed
The following sample source code shows the generation of a secure random
number:
import java.security.SecureRandom;
import java.security.NoSuchAlgorithmException;
// ...
SecureRandom number = new SecureRandom();
int n = number.nextInt(21); // drawn from a cryptographically strong source
Dynamic Analysis
Once an attacker knows what type of weak pseudo-random number generator
(PRNG) is used, it can be trivial to write a proof of concept that generates the
next random value based on previously observed values, as was done for Java
Random. In the case of a very weak custom random generator, it may be
possible to observe the pattern statistically. The recommended approach,
however, is to decompile the APK and inspect the algorithm (see Static
Analysis).
Overview
Symmetric cryptography provides confidentiality of data based on one basic
cryptographic principle: a given ciphertext can only, in any circumstance, be
decrypted by providing the original encryption key. The security problem is
thereby shifted to securing the key instead of the content, which is now securely
encrypted. Asymmetric cryptography addresses the key-distribution problem by
introducing the concept of a private and public key pair: the public key can be
distributed freely, while the private key is kept secret.
The encryption key can be generated from a passphrase by using the Password-
Based Key Derivation Function 2 (PBKDF2). This key derivation function is
designed to produce keys that are expensive to brute-force. The code
listing below illustrates how to generate a strong encryption key based on a
password.
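The listing below is a sketch of such a derivation (class and method names are illustrative; in a real app the salt would be persisted and reused rather than regenerated on every call):

```java
import java.security.SecureRandom;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class PbkdfSketch {
    // Derives an AES key of keyLength bits from a passphrase using PBKDF2.
    public static SecretKey generateStrongAESKey(char[] password, int keyLength) {
        try {
            // Salt size equals the key length; divide by 8 for the bit-to-byte conversion.
            byte[] salt = new byte[keyLength / 8];
            new SecureRandom().nextBytes(salt);

            // 10000 rounds significantly increase the workload of a brute-force attack.
            PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, keyLength);
            byte[] keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                    .generateSecret(spec).getEncoded();
            return new SecretKeySpec(keyBytes, "AES");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```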
The above method requires a character array containing the password and the
needed key length in bits, for instance a 128 or 256-bit AES key. We define an
iteration count of 10000 rounds to be used by the PBKDF2 algorithm.
This significantly increases the workload for a brute-force attack. We define the
salt size equal to the key length divided by 8 to take care of the bit-to-byte
conversion. We use the SecureRandom class to randomly generate a salt.
Obviously, the salt is something you want to keep constant to ensure the same
encryption key is generated time after time for the same supplied password.
Storing the salt does not require any additional security measures; it can be
stored publicly in the SharedPreferences without any encryption.
Afterwards, the Password-Based Encryption (PBE) key is generated using the
recommended PBKDF2WithHmacSHA1 algorithm.
Now, it is clear that regularly prompting the user for a passphrase is not
something that works for every application. In that case, make sure you use the
Android KeyStore API. This API has been specifically developed to provide
secure storage for key material; only your application has access to the keys that
it generates. Starting from Android 6.0, compatible devices can also provide a
hardware-backed KeyStore, meaning a dedicated cryptography chip or trusted
platform module (TPM) is used to secure the key material.
However, be aware that the KeyStore API has changed significantly
throughout various versions of Android. In earlier versions, the KeyStore API
only supported storing public/private key pairs (e.g., RSA); symmetric key
support was only added at API level 23. As a result, developers must take care
when they want to securely store symmetric keys on different Android API
levels. To securely store symmetric keys on devices running Android API level
22 or lower, we need to generate a public/private key pair. We encrypt the
symmetric key using the public key and store the private key in the
KeyStore . The encrypted symmetric key can then be safely stored in the
SharedPreferences . Whenever the symmetric key is needed, the application
retrieves the private key from the KeyStore and decrypts the symmetric key.
The last two options are using hardcoded encryption keys in the source code and
storing generated keys in public places like /sdcard/ . Obviously, hardcoded
encryption keys are not the way to go: every instance of the application uses the
same encryption key, so an attacker needs only to do the work once to extract
the key from the source code. The attacker can consequently decrypt any other
data that was encrypted by the application. Lastly, storing encryption keys
publicly is also highly discouraged, as other applications can have permission to
read the public partition and steal the keys.
Static Analysis
Locate uses of the cryptographic primitives in reverse engineered or
disassembled code. Some of the most frequently used classes and interfaces:
Cipher
Mac
MessageDigest
Signature
KeyStore
Now that we have a collection of Smali bytecode files, we can search the files
for the usage of the SecretKeySpec class. We do this by simply recursively
grepping on the Smali source code we just obtained. Please note that class
descriptors in Smali start with L and end with ; :
$ grep -r "Ljavax/crypto/spec/SecretKeySpec;"
This will highlight all the classes that use the SecretKeySpec class. We can
then examine each highlighted file and trace which bytes are used to pass the key
material. The figure below shows the result of performing this assessment on a
production-ready application. For the sake of readability, we have reverse
engineered the DEX bytecode to Java code. We can clearly locate the use of a
static encryption key that is hardcoded and initialized in the static byte array
Encrypt.keyBytes .
Dynamic Analysis
Hook cryptographic methods and analyze the keys that are being used. Monitor
file system access while cryptographic operations are being performed to assess
where key material is written to or read from.
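As an illustration of hooking, a hypothetical Xposed module could intercept the SecretKeySpec constructor and dump the raw key material whenever the target app creates a key. The Xposed API calls below are real; the class name CryptoKeyLogger and the log text are made up for this sketch.

```java
import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class CryptoKeyLogger implements IXposedHookLoadPackage {

    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        // Hook the SecretKeySpec(byte[] key, String algorithm) constructor
        // and log the key bytes after each invocation.
        XposedHelpers.findAndHookConstructor(
                "javax.crypto.spec.SecretKeySpec", lpparam.classLoader,
                byte[].class, String.class,
                new XC_MethodHook() {
                    @Override
                    protected void afterHookedMethod(MethodHookParam param) {
                        byte[] key = (byte[]) param.args[0];
                        String algorithm = (String) param.args[1];
                        XposedBridge.log("SecretKeySpec(" + algorithm + "): "
                                + java.util.Arrays.toString(key));
                    }
                });
    }
}
```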
References
[#nelenkov] - N. Elenkov, Android Security Internals, No Starch Press,
2014, Chapter 5.
M6 - Broken Cryptography
OWASP MASVS
V3.1: "The app does not rely on symmetric cryptography with hardcoded
keys as a sole method of encryption."
V3.3: "The app uses cryptographic primitives that are appropriate for the
particular use-case, configured with parameters that adhere to industry best
practices."
V3.5: "The app doesn't reuse the same cryptographic key for multiple
purposes."
V3.6: "All random values are generated using a sufficiently secure random
number generator."
CWE
Overview
Android Marshmallow (6.0) introduced public APIs for authenticating users via
fingerprint. Access to the fingerprint hardware is provided through the
FingerprintManager class. An app can request fingerprint authentication by
instantiating a FingerprintManager object and calling its authenticate()
method. The caller registers callback methods to handle possible outcomes of the
authentication process (i.e. success, failure, or error). Note that this method
doesn't constitute strong proof that fingerprint authentication has actually been
performed - for example, the authentication step could be patched out by an
attacker, or the "success" callback could be called using instrumentation.
Better security is achieved by using the fingerprint API in conjunction with the
Android KeyGenerator class. With this method, a symmetric key is stored in
the Keystore and "unlocked" with the user's fingerprint. For example, to enable
user access to a remote service, an AES key is created which encrypts the user
PIN or authentication token. By calling
setUserAuthenticationRequired(true) when creating the key, it is ensured
that the user must re-authenticate to retrieve it. The encrypted authentication
credentials can then be saved directly to regular storage on the device (e.g.
SharedPreferences ). This design is a relatively safe way to ensure the user
actually entered an authorized fingerprint. Note however that this setup requires
the app to hold the symmetric key in memory during cryptographic operations,
potentially exposing it to attackers that manage to access the app's memory
during runtime.
An even more secure option is using asymmetric cryptography. Here, the mobile
app creates an asymmetric key pair in the Keystore and enrolls the public key on
the server backend. Later transactions are then signed with the private key and
verified by the server using the public key. The advantage of this is that
transactions can be signed using Keystore APIs without ever extracting the
private key from the Keystore. Consequently, it is impossible for attackers to
obtain the key from memory dumps or by using instrumentation.
Static Analysis
Begin by searching for FingerprintManager.authenticate() calls. The first
parameter passed to this method should be a CryptoObject instance which is a
wrapper class for crypto objects supported by FingerprintManager. Should the
parameter be set to null , this means the fingerprint authorization is purely
event-bound, likely creating a security issue.
The creation of the key used to initialize the cipher wrapper can be traced back
from the CryptoObject . Verify that the key was created with the KeyGenerator
class and that setUserAuthenticationRequired(true) was called during
creation of the KeyGenParameterSpec object (see the code samples below).
FingerprintManager fingerprintManager =
    (FingerprintManager) context.getSystemService(Context.FINGERPRINT_SERVICE);
fingerprintManager.isHardwareDetected();
fingerprintManager.hasEnrolledFingerprints();
context.checkSelfPermission(Manifest.permission.USE_FINGERPRINT)
    == PackageManager.PERMISSION_GRANTED;
If any of the above checks fail, the option for fingerprint authentication should
not be offered.
SecretKeyFactory factory = SecretKeyFactory.getInstance(
    getEncryptionKey().getAlgorithm(), ANDROID_KEYSTORE);
KeyInfo secretKeyInfo = (KeyInfo) factory.getKeySpec(yourencryptionkeyhere, KeyInfo.class);
secretKeyInfo.isInsideSecureHardware();
secretKeyInfo.isUserAuthenticationRequirementEnforcedBySecureHardware();
Fingerprint Authentication using a Symmetric Key
generator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, KEYSTORE);
generator.init(new KeyGenParameterSpec.Builder(MY_KEY,
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
    .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
    .setUserAuthenticationRequired(true)
    .build());
generator.generateKey();

if (mode == Cipher.ENCRYPT_MODE) {
    cipher.init(mode, keyspec);
}
Keep in mind, a new key cannot be used immediately - it has to be authenticated
through the FingerprintManager first. This involves wrapping the Cipher
object into FingerprintManager.CryptoObject which is passed to
FingerprintManager.authenticate() before it will be recognized.
public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) {
    cipher = result.getCryptoObject().getCipher();
}
Fingerprint Authentication using an Asymmetric Key Pair
KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(
    KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
keyPairGenerator.initialize(
    new KeyGenParameterSpec.Builder(MY_KEY, KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAlgorithmParameterSpec(new ECGenParameterSpec("secp256r1"))
        .setUserAuthenticationRequired(true)
        .build());
keyPairGenerator.generateKeyPair();
To use the key for signing, you need to instantiate a CryptoObject and
authenticate it through FingerprintManager .
Signature signature = Signature.getInstance("SHA256withECDSA");
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PrivateKey key = (PrivateKey) keyStore.getKey(MY_KEY, null);
signature.initSign(key);
CryptoObject cryptoObject = new FingerprintManager.CryptoObject(signature);
Note that in cases where transactions are signed, a random nonce should be
generated and added to the signed data. Otherwise, an attacker could replay
the transaction.
As long as setInvalidatedByBiometricEnrollment(true) is set
(the default), keys that are valid for fingerprint authentication are irreversibly
invalidated when a new fingerprint is enrolled. This prevents an attacker from
retrieving the key even if they are able to enroll an additional fingerprint.
Dynamic Analysis
Patch the app or use runtime instrumentation to bypass fingerprint authentication
on the client. For example, you could use Frida to call the
onAuthenticationSucceeded callback method directly. Refer to the chapter
"Tampering and Reverse Engineering on Android" for more information.
OWASP MASVS
V4.7: "Biometric authentication, if any, is not event-bound (i.e. using an
API that simply returns "true" or "false"). Instead, it is based on unlocking
the keychain/keystore."
CWE
CWE-287 - Improper Authentication
CWE-604 - Use of Client-Side Authentication
Android Network APIs
Testing Endpoint Identity Verification
Using TLS to transport sensitive information over the network is essential for
security. However, encrypting communication between a mobile application and
its backend API is not trivial. Developers often decide on simpler but less secure
solutions (e.g., those that accept any certificate) to facilitate the development
process, and sometimes these weak solutions make it into the production
version, potentially exposing users to man-in-the-middle attacks.
Make sure that the hostname and the certificate itself are verified correctly.
Examples and common pitfalls are available in the official Android
documentation. Search the code for examples of TrustManager and
HostnameVerifier usage. In the sections below, you can find examples of the
kind of insecure usage that you should look for.
Static Analysis
Verifying the Server Certificate
"TrustManager" is a means of verifying conditions necessary for establishing a
trusted connection in Android. The following conditions should be checked at
this point:
The following code snippet is sometimes used during development; it will
accept any certificate by overriding the functions checkClientTrusted ,
checkServerTrusted , and getAcceptedIssuers . Such implementations
should be avoided and, if they are necessary, they should be clearly separated
from production builds to avoid built-in security flaws.
TrustManager[] trustAllCerts = new TrustManager[] {
    new X509TrustManager() {
        @Override
        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[] {};
        }

        @Override
        public void checkClientTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
        }

        @Override
        public void checkServerTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
        }
    }
};

// SSLContext context = SSLContext.getInstance("TLS");
context.init(null, trustAllCerts, new SecureRandom());
Sometimes applications use a WebView to render the website associated with the
application. This is true of HTML/JavaScript-based frameworks such as Apache
Cordova, which uses an internal WebView for application interaction. When a
WebView is used, the mobile browser performs the server certificate validation.
Ignoring any TLS error that occurs when the WebView tries to connect to the
remote website is a bad practice.
The following code will ignore TLS issues, exactly like the WebViewClient
custom implementation provided to the WebView:
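A sketch of such an insecure WebViewClient follows. The onReceivedSslError override and handler.proceed() are the standard Android APIs involved; the view ID R.id.webview is a placeholder. Proceeding on every SSL error disables certificate validation entirely:

```java
WebView myWebView = (WebView) findViewById(R.id.webview);
myWebView.setWebViewClient(new WebViewClient() {
    @Override
    public void onReceivedSslError(WebView view, SslErrorHandler handler, SslError error) {
        // Insecure: ignore the TLS error and load the page anyway.
        handler.proceed();
    }
});
```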
Also make sure that the app is not debuggable. See the test case "Testing If the App is
Debuggable."
Hostname Verification
The following insecure implementation, sometimes used during development,
disables hostname verification entirely:
HostnameVerifier NO_VERIFY =
    org.apache.http.conn.ssl.SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER;
Make sure that your application verifies the hostname before setting up a trusted
connection.
Dynamic Analysis
Dynamic analysis requires an interception proxy. To test improper certificate
verification, check the following controls:
Self-signed certificate
In Burp, go to the Proxy -> Options tab, then go to the Proxy Listeners
section, highlight your listener, and click Edit . Then go to the Certificate
tab, check Use a self-signed certificate , and click Ok . Now, run your
application. If you're able to see HTTPS traffic, your application is accepting
self-signed certificates.
Accepting invalid certificates
In Burp, go to the Proxy -> Options tab, then go to the Proxy Listeners
section, highlight your listener, and click Edit . Then go to the Certificate
tab, check Generate a CA-signed certificate with a specific hostname ,
and type in the backend server's hostname. Now, run your application. If you're
able to see HTTPS traffic, your application is accepting certificates that are not
signed by a trusted CA.
Accepting wrong hostnames
In Burp, go to the Proxy -> Options tab, then go to the Proxy Listeners
section, highlight your listener, and click Edit . Then go to the Certificate
tab, check Generate a CA-signed certificate with a specific hostname ,
and type in an invalid hostname, e.g., example.org. Now, run your application. If
you're able to see HTTPS traffic, your application is accepting all hostnames.
If you're interested in further MITM analysis or you have problems with the
configuration of your interception proxy, consider using Tapioca. It's a CERT
pre-configured VM appliance for MITM software analysis. All you have to do is
deploy the application to be tested on an emulator and start capturing traffic.
Overview
Certificate pinning is the process of associating the backend server with a
particular X509 certificate or public key instead of accepting any certificate
signed by a trusted certificate authority. After storing ("pinning") the server
certificate or public key, the mobile app will subsequently connect to the known
server only. Withdrawing trust from external certificate authorities reduces the
attack surface (after all, there are many cases of certificate authorities that have
been compromised or tricked into issuing certificates to impostors).
The certificate can be pinned and hardcoded into the app or retrieved at the time
the app first connects to the backend. In the latter case, the certificate is
associated with ("pinned" to) the host when the host is seen for the first time.
This alternative is less secure because attackers intercepting the initial
connection can inject their own certificates.
Static Analysis
Network Security Configuration
To customize their network security settings in a safe, declarative configuration
file without modifying app code, applications can use the Network Security
Configuration (NSC) that Android provides for versions 7.0 and above.
The Network Security Configuration feature can also be used to declaratively pin
certificates to specific domains. If an application uses the NSC feature, two
things should be checked to identify the defined configuration: the
android:networkSecurityConfig attribute of the application tag in the
Android Manifest, and the contents of the referenced NSC XML file (typically
res/xml/network_security_config.xml ).
If an NSC configuration exists, the following event may be visible in the log:
If a certificate pinning validation check has failed, the following event will be
logged:
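For reference, a domain-scoped pin in the NSC file looks roughly like the sketch below. The domain, expiration date, and pin digest are placeholders; the digest is the Base64-encoded SHA-256 hash of the server's public key:

```xml
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2026-01-01">
            <!-- Placeholder pin value -->
            <pin digest="SHA-256">7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```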
TrustManager
InputStream in = resources.openRawResource(certificateRawResource);
keyStore = KeyStore.getInstance("BKS");
keyStore.load(in, password);
Once the KeyStore has been loaded, we can use the TrustManager that trusts the
CAs in our KeyStore:
String tmfAlgorithm =
TrustManagerFactory.getDefaultAlgorithm();
TrustManagerFactory tmf =
TrustManagerFactory.getInstance(tmfAlgorithm);
tmf.init(keyStore);
Create an SSLContext that uses the TrustManager:
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, tmf.getTrustManagers(), null);
Applications that use third-party networking libraries may utilize the libraries'
certificate pinning functionality. For example, okhttp can be set up with the
CertificatePinner as follows:
OkHttpClient client = new OkHttpClient.Builder()
    .certificatePinner(new CertificatePinner.Builder()
        .add("example.com", "sha256/UwQAapahrjCOjYI3oLUx5AQxPBR02Jz6/E2pt0IeLXA=")
        .build())
    .build();
myWebView.setWebViewClient(new WebViewClient() {
    @Override
    public void onLoadResource(WebView view, String url) {
        // From the Android API documentation about "WebView.getCertificate()":
        // Gets the SSL certificate for the main top-level page
        // or null if there is no certificate (the site is not secure).
        //
        // Available information on the SslCertificate class: "Issuer DN",
        // "Subject DN" and validity date helpers
        SslCertificate serverCert = view.getCertificate();
        if (serverCert != null) {
            // Apply check on Issuer DN against expected one
            SslCertificate.DName issuerDN = serverCert.getIssuedBy();
            if (!this.expectedIssuerDN.equals(issuerDN.toString())) {
                // Throw exception to cancel resource loading...
            }
        }
    }
});
For further information, please check the OWASP certificate pinning guide.
Dynamic Analysis
Dynamic analysis can be performed by launching a MITM attack with your
preferred interception proxy. This will allow you to monitor the traffic between
the client (the mobile application) and the backend server. If the proxy is unable
to intercept the HTTP requests and responses, the SSL pinning has been
implemented correctly.
Overview
Android relies on a security provider to provide SSL/TLS-based connections.
The problem with this kind of security provider (one example is OpenSSL),
which comes with the device, is that it often has bugs and/or vulnerabilities. To
avoid known vulnerabilities, developers need to make sure that the application
will install a proper security provider. Since July 11, 2016, Google has been
rejecting Play Store application submissions (both new applications and updates)
that use vulnerable versions of OpenSSL.
Static Analysis
Applications based on the Android SDK should depend on GooglePlayServices.
For example, in the gradle build file, you will find compile
'com.google.android.gms:play-services-gcm:x.x.x' in the dependencies
block. You need to make sure that the ProviderInstaller class is called with
either installIfNeeded or installIfNeededAsync . ProviderInstaller
needs to be called by a component of the application as early as possible.
Exceptions thrown by these methods should be caught and handled correctly. If
the application cannot patch its security provider, it can either inform the API of
its less secure state or restrict user actions (because all HTTPS traffic should be
deemed riskier in this situation).
Here are two examples from the Android Developer documentation that show
how to update Security Provider to prevent SSL exploits. In both cases, the
developer needs to handle the exceptions properly, and reporting to the backend
when the application is working with an unpatched security provider may be
wise.
Patching Synchronously:
try {
    ProviderInstaller.installIfNeeded(getApplicationContext());
} catch (GooglePlayServicesRepairableException e) {
    // Prompt the user to install/update/enable Google Play services.
    GooglePlayServicesUtil.showErrorNotification(e.getConnectionStatusCode(), this);
} catch (GooglePlayServicesNotAvailableException e) {
    // Indicates a non-recoverable error; the ProviderInstaller is not able
    // to install an up-to-date Provider.
}
/**
 * This method is only called if the provider is successfully updated
 * (or is already up-to-date).
 */
@Override
protected void onProviderInstalled() {
    // Provider is up-to-date, app can make secure network calls.
}
/**
 * This method is called if updating fails; the error code indicates
 * whether the error is recoverable.
 */
@Override
protected void onProviderInstallFailed(int errorCode, Intent recoveryIntent) {
    if (GooglePlayServicesUtil.isUserRecoverableError(errorCode)) {
        // Recoverable error. Show a dialog prompting the user to
        // install/update/enable Google Play services.
        GooglePlayServicesUtil.showErrorDialogFragment(
                errorCode,
                this,
                ERROR_DIALOG_REQUEST_CODE,
                new DialogInterface.OnCancelListener() {
                    @Override
                    public void onCancel(DialogInterface dialog) {
                        // The user chose not to take the recovery action
                        onProviderInstallerNotAvailable();
                    }
                });
    } else {
        // Google Play services is not available.
        onProviderInstallerNotAvailable();
    }
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == ERROR_DIALOG_REQUEST_CODE) {
        // Adding a fragment via GooglePlayServicesUtil.showErrorDialogFragment
        // before the instance state is restored throws an error. So instead,
        // set a flag here, which will cause the fragment to delay until
        // onPostResume.
        mRetryProviderInstall = true;
    }
}

/**
 * On resume, check to see if we flagged that we need to reinstall the
 * provider.
 */
@Override
protected void onPostResume() {
    super.onPostResume();
    if (mRetryProviderInstall) {
        // We can now safely retry installation.
        ProviderInstaller.installIfNeededAsync(this, this);
    }
    mRetryProviderInstall = false;
}
Make sure that NDK-based applications bind only to a recent and properly
patched library that provides SSL/TLS functionality.
Dynamic Analysis
When you have the source code:
Run the application in debug mode, then create a breakpoint where the app
will first contact the endpoint(s).
Right click the highlighted code and select Evaluate Expression .
Type Security.getProviders() and press enter.
Check the providers and try to find GmsCore_OpenSSL , which should be the
new top-listed provider.
Use Xposed to hook into the java.security package, then hook into
java.security.Security with the method getProviders (with no
arguments), and determine whether GmsCore_OpenSSL is the first provider
returned.
References
OWASP MASVS
V5.3: "The app verifies the X.509 certificate of the remote endpoint when
the secure channel is established. Only certificates signed by a trusted CA
are accepted."
V5.4: "The app either uses its own certificate store or pins the endpoint
certificate or public key, and subsequently does not establish connections
with endpoints that offer a different certificate or key, even if signed by a
trusted CA."
V5.6: "The app only depends on up-to-date connectivity and security
libraries."
CWE
Overview
Android assigns a distinct system identity (Linux user ID and group ID) to every
installed app. Because each Android app operates in a process sandbox, apps
must explicitly request access to resources and data that are outside their
sandbox. They request this access by declaring the permissions they need to use
system data and features. Depending on how sensitive or critical the data or
feature is, the Android system will grant the permission automatically or ask the
user to approve the request.
Android permissions are classified into four different categories on the basis of
the protection level they offer:
Normal: the lowest level of protection. It gives requesting apps access to
isolated application-level features with minimal risk to other apps, the user,
and the system. It is granted automatically at install time. Example:
android.permission.INTERNET
Dangerous: This permission gives the requesting app expanded access to user
data or control over the device in a way that may affect the user. Because of
the potential risk, the user may be asked to approve the request explicitly.
Example: android.permission.CAMERA
Signature: This permission is granted only if the requesting app was signed
with the same certificate used to sign the app that declared the permission.
If the signature matches, the permission will be granted automatically.
Example: android.permission.ACCESS_MOCK_LOCATION
SystemOrSignature: This permission is granted only to applications
embedded in the system image or signed with the same certificate used to
sign the application that declared the permission. Example:
android.permission.ACCESS_DOWNLOAD_MANAGER
Custom Permissions
Android allows apps to expose their services and components to other apps, and
custom permissions can be used to restrict that access. Each custom permission
is defined with, at a minimum, the two attributes android:name and
android:protectionLevel .
The first code block defines the new permission, which is self-explanatory. The
label tag is a summary of the permission, and the description is a more detailed
version of the summary. You can set the protection level according to the types
of permissions that will be granted. Once you've defined your permission, you
can enforce it by adding it to the application's manifest. In our example, the
second block represents the component that we are going to restrict with the
permission we created. It can be enforced by adding the android:permission
attributes.
<permission android:name="com.example.myapp.permission.START_MAIN_ACTIVITY"
    android:label="Start Activity in myapp"
    android:description="Allow the app to launch the activity of myapp app, any app you grant this permission will be able to launch main activity by myapp app."
    android:protectionLevel="normal" />

<activity android:name="TEST_ACTIVITY"
    android:permission="com.example.myapp.permission.START_MAIN_ACTIVITY">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
Once the permission START_MAIN_ACTIVITY has been created, apps can request it
via the uses-permission tag in the AndroidManifest.xml file. Any
application granted the custom permission START_MAIN_ACTIVITY can then
launch the TEST_ACTIVITY .
<uses-permission android:name="com.example.myapp.permission.START_MAIN_ACTIVITY" />
Static Analysis
Android Permissions
Check permissions to make sure that the app really needs them and remove
unnecessary permissions. For example, the INTERNET permission in the
AndroidManifest.xml file is necessary for an Activity to load a web page into a
WebView.
Go through the permissions with the developer to identify the purpose of every
permission set and remove unnecessary permissions.
Besides going through the AndroidManifest.xml file manually, you can also use
the Android Asset Packaging tool to examine permissions.
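For example, the following command prints the permissions requested by an APK (assuming aapt from the Android build tools is on your PATH; MyApp.apk is a placeholder file name):

```shell
$ aapt dump permissions MyApp.apk
```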
Custom Permissions
Apart from enforcing custom permissions via the application manifest file, you
can also check permissions programmatically. This is not recommended,
however, because it is more error-prone and can be bypassed more easily with,
e.g., runtime instrumentation. Whenever you see code like the following snippet,
make sure that the same permissions are enforced in the manifest file.
int canProcess = checkCallingOrSelfPermission("com.example.perm.READ_INCOMING_MSG");
if (canProcess != PERMISSION_GRANTED)
    throw new SecurityException();
Dynamic Analysis
Permissions for installed applications can be retrieved with Drozer. The
following extract demonstrates how to examine the permissions used by an
application and the custom permissions defined by the app:
Note that this method can't be used for signature level permissions because
Drozer would need to be signed by the certificate used to sign the target
application.
Overview
Both Android and iOS allow inter-app communication via custom URL
schemes. These custom URLs allow other applications to perform specific
actions within the application that offers the custom URL scheme. Custom URIs
can begin with any scheme prefix, and they usually define an action to take
within the application and parameters for that action.
Consider this contrived example:
sms://compose/[email protected]&message=I%20QUIT!&sendImmediately=true .
When a victim clicks such a link on a mobile device, the vulnerable
SMS application will send the SMS message with the maliciously crafted
content. This could lead to
financial loss for the victim if messages are sent to premium services or
disclosure of the victim's phone number if messages are sent to predefined
addresses that collect phone numbers.
Once a URL scheme has been defined, multiple apps can register for any
available scheme. For every application, each of these custom URL schemes
must be enumerated and the actions they perform must be tested.
URL schemes can be used for deep linking, a widespread and convenient way to
launch a native mobile app via a link, which isn't inherently risky.
Nevertheless, data that's processed by the app and comes in through URL
schemes should be validated, as described in the test case "Testing Input
Validation and Sanitization."
Static Analysis
Determine whether custom URL schemes are defined. This can be done in the
AndroidManifest.xml file, inside of an intent-filter element.
<activity android:name=".MyUriActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="myapp" android:host="path" />
    </intent-filter>
</activity>
The example above specifies a new URL scheme called myapp:// . The
browsable category allows the URI to be opened within a browser.
Data can then be transmitted through this new scheme with, for example, the
following URI: myapp://path/to/what/i/want?keyOne=valueOne&keyTwo=valueTwo .
Code in the receiving activity can then be used to retrieve the data. Also verify
the usage of toUri , which may be used in this context.
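A minimal sketch of retrieving such data in the receiving activity (the parameter names follow the example URI above; any data read this way must be treated as untrusted input):

```java
Intent intent = getIntent();
if (Intent.ACTION_VIEW.equals(intent.getAction())) {
    // e.g. myapp://path/to/what/i/want?keyOne=valueOne&keyTwo=valueTwo
    Uri uri = intent.getData();
    if (uri != null) {
        String valueOne = uri.getQueryParameter("keyOne");
        String valueTwo = uri.getQueryParameter("keyTwo");
        // Validate and sanitize valueOne/valueTwo before use.
    }
}
```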
Dynamic Analysis
To enumerate URL schemes within an app that can be called by a web browser,
use the Drozer module scanner.activity.browsable :
You can call custom URL schemes with the Drozer module
app.activity.start :
Defining and using your own URL scheme can be risky in this situation if data is
sent to the scheme from an external party and processed in the app. Therefore
keep in mind that data should be validated as described in "Testing Input
Validation and Sanitization."
Testing for Sensitive Functionality Exposure Through
IPC
Overview
During implementation of a mobile application, developers may apply traditional
techniques for IPC (such as using shared files or network sockets). The IPC
system functionality offered by mobile application platforms should be used
because it is much more mature than traditional techniques. Using IPC
mechanisms with no security in mind may cause the application to leak or
expose sensitive data.
The following is a list of Android IPC Mechanisms that may expose sensitive
data:
Binders
Services
Bound Services
AIDL
Intents
Content Providers
Static Analysis
We start by looking at the AndroidManifest.xml, where all activities, services,
and content providers included in the source code must be declared (otherwise
the system won't recognize them and they won't run). Broadcast receivers can be
declared in the manifest or created dynamically. You will want to identify
elements such as
<intent-filter>
<service>
<provider>
<receiver>
Remember that using the permission tag ( android:permission ) will also limit
other applications' access to a component. If your IPC is intended to be
accessible to other applications, you can apply a security policy with the
<permission> element and set a proper android:protectionLevel . When
android:permission is used in a component declaration, other applications
must declare a corresponding <uses-permission> element in their own
manifests to interact with that component.
For more information about the content providers, please refer to the test case
"Testing Whether Stored Sensitive Data Is Exposed via IPC Mechanisms" in
chapter "Testing Data Storage."
Once you identify a list of IPC mechanisms, review the source code to see
whether sensitive data is leaked when the mechanisms are used. For example,
content providers can be used to access database information, and services can
be probed to see if they return data. Broadcast receivers can leak sensitive
information if probed or sniffed.
In the following, we use two example apps and give examples of identifying
vulnerable IPC components:
"Sieve"
"Android Insecure Bank"
Activities
<activity android:excludeFromRecents="true" android:label="@string/app_name"
    android:launchMode="singleTask" android:name=".MainLoginActivity"
    android:windowSoftInputMode="adjustResize|stateVisible">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.LAUNCHER"/>
    </intent-filter>
</activity>
<activity android:clearTaskOnLaunch="true" android:excludeFromRecents="true"
    android:exported="true" android:finishOnTaskLaunch="true"
    android:label="@string/title_activity_file_select"
    android:name=".FileSelectActivity"/>
<activity android:clearTaskOnLaunch="true" android:excludeFromRecents="true"
    android:exported="true" android:finishOnTaskLaunch="true"
    android:label="@string/title_activity_pwlist"
    android:name=".PWList"/>
By inspecting the PWList.java activity, we see that it offers options to list all
keys, add, delete, etc. If we invoke it directly, we will be able to bypass the
LoginActivity. More on this can be found in the dynamic analysis below.
Services
By reversing the target application, we can see that the service AuthService
provides functionality for changing the password and PIN-protecting the target
app.
Broadcast Receivers
<receiver android:exported="true"
    android:name="com.android.insecurebankv2.MyBroadCastReceiver">
    <intent-filter>
        <action android:name="theBroadcast"/>
    </intent-filter>
</receiver>
The following extract of the target application's source code shows that the
broadcast receiver triggers transmission of an SMS message containing the user's
decrypted password.
@Override
public void onReceive(Context context, Intent intent) {
    // TODO Auto-generated method stub
    if (phn != null) {
        try {
            SharedPreferences settings =
                context.getSharedPreferences(MYPREFS, Context.MODE_WORLD_READABLE);
            final String username = settings.getString("EncryptedUsername", null);
            byte[] usernameBase64Byte = Base64.decode(username, Base64.DEFAULT);
            usernameBase64ByteString = new String(usernameBase64Byte, "UTF-8");
            final String password = settings.getString("superSecurePassword", null);
            CryptoClass crypt = new CryptoClass();
            String decryptedPassword = crypt.aesDeccryptedString(password);
            String textPhoneno = phn.toString();
            String textMessage = "Updated Password from: " + decryptedPassword
                + " to: " + newpass;
            SmsManager smsManager = SmsManager.getDefault();
            System.out.println("For the changepassword - phonenumber: "
                + textPhoneno + " password is: " + textMessage);
            smsManager.sendTextMessage(textPhoneno, null, textMessage, null, null);
BroadcastReceivers should use the android:permission attribute; otherwise,
other applications can invoke them. You can use
Context.sendBroadcast(intent, receiverPermission); to specify the
permission a receiver must hold to read the broadcast. You can also set an
explicit application package name that limits the components this Intent will
resolve to. If left as the default value (null), all components in all applications
will be considered. If non-null, the Intent can match only the components in the
given application package.
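The points above can be sketched as follows (the action, package, and permission names are made-up placeholders):

```java
Intent intent = new Intent("com.example.action.PASSWORD_CHANGED");
// Limit which package the broadcast can resolve to...
intent.setPackage("com.example.trustedreceiver");
// ...and require receivers to hold a specific permission to read it.
context.sendBroadcast(intent, "com.example.permission.READ_PASSWORD_EVENTS");
```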
Dynamic Analysis
You can enumerate IPC components with Drozer. To list all exported IPC
components, use the module app.package.attacksurface :
Content Providers
Content providers with names like "Passwords" and "Keys" are prime suspects
for sensitive information leaks. After all, it wouldn't be good if sensitive keys
and passwords could simply be queried from the provider!
Activities
Since the activity is called directly in this example, the login form protecting the
password manager would be bypassed, and the data contained within the
password manager could be accessed.
Services
To communicate with a service, you must first use static analysis to identify the
required inputs.
Because this service is exported, you can use the module app.service.send to
communicate with the service and change the password stored in the target
application:
Broadcast Receivers
Broadcasts can be enumerated via the Drozer module app.broadcast.info .
The target package should be specified via the -a parameter:
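Against the "Android Insecure Bank" app discussed below, the call and a typical result (receiver name taken from that app) look like this:

```
dz> run app.broadcast.info -a com.android.insecurebankv2
Package: com.android.insecurebankv2
  com.android.insecurebankv2.MyBroadCastReceiver
    Permission: null
```

The null permission confirms that any app on the device may send a matching broadcast.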
In the example app "Android Insecure Bank", one broadcast receiver is exported
without requiring any permissions, indicating that we can formulate an intent to
trigger the broadcast receiver. When testing broadcast receivers, you must also
use static analysis to understand the functionality of the broadcast receiver, as we
did before.
Sniffing Intents
If an Android application broadcasts intents without setting a required
permission or specifying the destination package, the intents can be monitored
by any application that runs on the device.
You can capture broadcast intents with the Drozer module app.broadcast.sniff; the action to monitor is specified with the -a parameter:

Action: theBroadcast
Raw: Intent { act=theBroadcast flg=0x10 (has extras) }
Extra: phonenumber=07123456789 (java.lang.String)
Extra: newpass=12345 (java.lang.String)
Overview
JavaScript can be injected into web applications via reflected, stored, or DOM-
based Cross-Site Scripting (XSS). Mobile apps are executed in a sandboxed
environment and don't have this vulnerability when implemented natively.
Nevertheless, WebViews may be part of a native app to allow web page viewing.
Every app has its own WebView cache, which isn't shared with the native
Browser or other apps. On Android, WebViews use the WebKit rendering engine
to display web pages, but the pages are stripped down to minimal functions, for
example, pages don't have address bars. If the WebView implementation is too
lax and allows usage of JavaScript, JavaScript can be used to attack the app and
gain access to its data.
Static Analysis
The source code must be checked for usage and implementations of the
WebView class. To create and use a WebView, you must create an instance of the
WebView class.
webview.getSettings().setJavaScriptEnabled(true);
To remove all JavaScript source code and locally stored data, clear the WebView's cache with clearCache() when the app closes.
Devices running platforms older than Android 4.4 (API level 19) use a version
of WebKit that has several security issues. As a workaround, the app must
confirm that WebView objects display only trusted content if the app runs on
these devices.
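A hardened setup can make both recommendations explicit. The snippet below is a sketch using the standard WebSettings and Build APIs; the resource ID and asset file name are assumptions, and it presumes the app does not need JavaScript:

```java
WebView webView = (WebView) findViewById(R.id.webview); // R.id.webview is an assumption

// JavaScript is disabled by default; stating it explicitly documents the decision
webView.getSettings().setJavaScriptEnabled(false);

// On devices older than Android 4.4 (API level 19), the bundled WebKit has known
// issues, so restrict the WebView to content shipped with the app
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.KITKAT) {
    webView.loadUrl("file:///android_asset/trusted_page.html");
} else {
    webView.loadUrl("https://example.com/");
}
```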
Dynamic Analysis
Dynamic Analysis depends on operating conditions. There are several ways to
inject JavaScript into an app's WebView:
All functions offered by the endpoint should be free of stored XSS.
Only files that are in the app data directory should be rendered in a
WebView (see test case "Testing for Local File Inclusion in WebViews").
The HTTPS communication must be implemented according to best
practices to avoid MITM attacks. This means:
all communication is encrypted via TLS (see test case "Testing for
Unencrypted Sensitive Data on the Network"),
the certificate is checked properly (see test case "Testing Endpoint Identity Verification"), and/or
the certificate should be pinned (see "Testing Custom Certificate
Stores and SSL Pinning").
Overview
Several default schemas are available for Android URLs. They can be triggered
within a WebView with the following:
http(s)://
file://
tel://
WebViews can load remote content from an endpoint, but they can also load
local content from the app data directory or external storage. If the local content
is loaded, the user shouldn't be able to influence the filename or the path used to
load the file, and users shouldn't be able to edit the loaded file.
Static Analysis
Check the source code for WebView usage. The following WebView settings
control resource access:
setAllowFileAccess : Does or does not allow file access within the WebView. File access is enabled by default. Note that this enables and disables file system access only. Asset and resource access is unaffected and accessible via file:///android_asset and file:///android_res .
setAllowFileAccessFromFileURLs : Does or does not allow JavaScript running in the context of a file scheme URL to access content from other file scheme URLs. The default value is true for API level 15 (Ice Cream Sandwich) and below and false for API level 16 (Jelly Bean) and above.
setAllowUniversalAccessFromFileURLs : Does or does not allow JavaScript running in the context of a file scheme URL to access content from any origin, including other file scheme URLs. The default value is true for API level 15 (Ice Cream Sandwich) and below and false for API level 16 (Jelly Bean) and above.
If one or more of the above methods are activated, you should determine whether they are really necessary for the app to work properly.
If a WebView instance can be identified, find out whether local files are loaded with the loadUrl() method.
webview.loadUrl("file:///" + Environment.getExternalStorageDirectory().getPath() + "filename.html");
The URL specified in loadUrl should be checked for dynamic parameters that can be manipulated; their manipulation may lead to local file inclusion.
Use the following code snippet and best practices to deactivate protocol
handlers, if applicable:
webView.getSettings().setAllowFileAccessFromFileURLs(false);
webView.getSettings().setAllowUniversalAccessFromFileURLs(false);
webView.getSettings().setAllowContentAccess(false);
Create a whitelist that defines local and remote web pages and protocols
that are allowed to be loaded.
Create checksums of the local HTML/JavaScript files and check them while
the app is starting up. Minify JavaScript files to make them harder to read.
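The checksum recommendation can be sketched in plain Java. Class, method, and file names below are illustrative; on Android the expected digests would be computed at build time and shipped inside the APK:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class LocalFileIntegrity {

    // Hex-encoded SHA-256 digest of the given bytes.
    static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available on the JVM/Android
        }
    }

    // Compare a local HTML/JavaScript file against the checksum recorded at build time.
    static boolean verifyFile(Path file, String expectedHex) throws java.io.IOException {
        return sha256Hex(Files.readAllBytes(file)).equals(expectedHex);
    }
}
```

At startup, the app would call verifyFile for each bundled HTML/JavaScript file and refuse to render any file whose digest has changed.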
Dynamic Analysis
To identify the usage of protocol handlers, look for ways to trigger phone calls
and ways to access files from the file system while you're using the app.
Overview
Android offers a way for JavaScript executed in a WebView to call and use
native functions of an Android app: addJavascriptInterface .
Before Android 4.2 Jelly Bean (API Level 17), a vulnerability was discovered in
the implementation of addJavascriptInterface : a reflection that leads to
remote code execution when malicious JavaScript is injected into a WebView.
This vulnerability was fixed by API Level 17, and the access to Java Object methods granted to JavaScript was changed: when addJavascriptInterface is used on API level 17 and above, JavaScript can access only the Java methods annotated with @JavascriptInterface.
Static Analysis
You need to determine whether the method addJavascriptInterface is used,
how it is used, and whether an attacker can inject malicious JavaScript.
myWebView.addJavascriptInterface(jsInterface, "Android");
myWebView.loadUrl("http://example.com/file.html");
setContentView(myWebView);
Context mContext;
@JavascriptInterface
public String returnString () {
return "Secret String";
}
With access to the JavaScript code, via, for example, stored XSS or a MITM
attack, an attacker can directly call the exposed Java methods.
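Putting the fragments above together, a minimal bridge class would look roughly like this (the class name and constructor are assumptions; returnString and the "Android" interface name come from the snippets above):

```java
public class JavaScriptInterface {
    Context mContext;

    JavaScriptInterface(Context context) {
        mContext = context;
    }

    // On API level 17+, only methods carrying this annotation are callable from JavaScript
    @JavascriptInterface
    public String returnString() {
        return "Secret String";
    }
}
```

Because the object was registered under the name "Android", injected JavaScript could simply evaluate window.Android.returnString() to read the value.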
Dynamic Analysis
Dynamic analysis of the app can show you which HTML or JavaScript files are
loaded and which vulnerabilities are present. The procedure for exploiting the
vulnerability starts with producing a JavaScript payload and injecting it into the
file that the app is requesting. The injection can be accomplished via a MITM
attack or direct modification of the file if it is stored in external storage. The
whole process can be accomplished via Drozer and weasel (MWR's advanced
exploitation payload), which can install a full agent, injecting a limited agent
into a running process or connecting a reverse shell as a Remote Access Tool
(RAT).
Overview
Android SDK offers developers a way to present a Preferences activity to
users, allowing the developers to extend and adapt this abstract class.
This abstract class parses the extra data fields of an Intent, in particular, the PreferenceActivity.EXTRA_SHOW_FRAGMENT (:android:show_fragment) and PreferenceActivity.EXTRA_SHOW_FRAGMENT_ARGUMENTS (:android:show_fragment_arguments) fields.
The first field is expected to contain the Fragment class name, and the second
one is expected to contain the input bundle passed to the Fragment .
With this vulnerability, an attacker can call fragments inside the target application or run the code present in other classes' constructors. Any class that's passed in the Intent and does not extend the Fragment class will cause a java.lang.ClassCastException, but the empty constructor will be executed before the exception is thrown, allowing the code present in the class constructor to run.
The default implementation returns true on versions older than Android 4.4
KitKat (API Level 19); it will throw an exception on later versions.
Static Analysis
Steps:
@Override
protected boolean isValidFragment(String fragmentName) {
    return "com.fullpackage.MyPreferenceFragment".equals(fragmentName);
}
MyFragment.class

public class MyFragment extends Fragment {
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }

    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View v = inflater.inflate(R.layout.fragmentLayout, null);
        WebView myWebView = (WebView) v.findViewById(R.id.webview);
        myWebView.getSettings().setJavaScriptEnabled(true);
        myWebView.loadUrl(this.getActivity().getIntent().getDataString());
        return v;
    }
}
To exploit this vulnerable Activity, you can create an application with the
following code:
The Vulnerable App and Exploit PoC App are available for downloading.
Testing Object Persistence
Overview
There are several ways to persist an object on Android:
Object Serialization
An object and its data can be represented as a sequence of bytes. This is done in Java via object serialization. Serialization is not inherently secure; it is just a binary format (or representation) for locally storing data in a .ser file. It is possible to encrypt and sign/HMAC serialized data as long as the keys are stored safely. Deserializing an object requires a class of the same version as the class used to serialize the object. After classes have been changed, the ObjectInputStream can't create objects from older .ser files. A serializable class simply implements the java.io.Serializable marker interface:

import java.io.Serializable;
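A self-contained sketch of writing and reading such a byte stream (class and field names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // Implementing the Serializable marker interface is all that is required.
    static class Credentials implements Serializable {
        private static final long serialVersionUID = 1L;
        String username;
        Credentials(String username) { this.username = username; }
    }

    static byte[] serialize(Object obj) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
        // Note: the resulting bytes contain the field values in the clear;
        // serialization provides no confidentiality on its own.
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Dumping the serialized bytes of such an object shows the string values verbatim, which is why encryption and an HMAC are needed for any sensitive payload.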
JSON
There are several ways to serialize the contents of an object to JSON. Android
comes with the JSONObject and JSONArray classes. A wide variety of
libraries, including GSON or Jackson, can also be used. The main differences
between the libraries are whether they use reflection to compose the object,
whether they support annotations, and the amount of memory they use. Note that
almost all the JSON representations are String-based and therefore immutable.
This means that any secret stored in JSON will be harder to remove from
memory. JSON itself can be stored anywhere, e.g., a (NoSQL) database or a file.
You just need to make sure that any JSON that contains secrets has been
appropriately protected (e.g., encrypted/HMACed). See the data storage chapter
for more details. A simple example (from the GSON User Guide) of writing and
reading JSON with GSON follows. In this example, the contents of an instance
of the BagOfPrimitives is serialized into JSON:
class BagOfPrimitives {
private int value1 = 1;
private String value2 = "abc";
private transient int value3 = 3;
BagOfPrimitives() {
// no-args constructor
}
}
// Serialization
BagOfPrimitives obj = new BagOfPrimitives();
Gson gson = new Gson();
String json = gson.toJson(obj);
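The corresponding read step from the same GSON guide (GSON must be on the classpath; note that the transient field value3 is skipped during serialization):

```java
// json is {"value1":1,"value2":"abc"}
BagOfPrimitives obj2 = gson.fromJson(json, BagOfPrimitives.class);
```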
ORM
There are libraries that provide functionality for directly storing the contents of
an object in a database and then instantiating the object with the database
contents. This is called Object-Relational Mapping (ORM). Libraries that use the
SQLite database include
OrmLite,
SugarORM,
GreenDAO and
ActiveAndroid.
Realm, on the other hand, uses its own database to store the contents of a class.
The amount of protection that ORM can provide depends primarily on whether
the database is encrypted. See the data storage chapter for more details. The
Realm website includes a nice example of ORM Lite.
Parcelable
A Parcelable is a class whose instances can be written to and restored from a Parcel. Parcels are often used to pack a class as part of a Bundle for an Intent; the Android developer documentation includes an example implementation. Because this mechanism that involves Parcels and Intents may change over time, and the Parcelable may contain IBinder pointers, storing data to disk via Parcelable is not recommended.
Static Analysis
If object persistence is used for storing sensitive information on the device, first
make sure that the information is encrypted and signed/HMACed. See the
chapters on data storage and cryptographic management for more details. Next,
make sure that the decryption and verification keys are obtainable only after the
user has been authenticated. Security checks should be carried out at the correct
positions, as defined in best practices.
There are a few generic remediation steps that you can always take:
1. Make sure that sensitive data has been encrypted and HMACed/signed after
serialization/persistence. Evaluate the signature or HMAC before you use
the data. See the chapter about cryptography for more details.
2. Make sure that the keys used in step 1 can't be extracted easily. The user
and/or application instance should be properly authenticated/authorized to
obtain the keys. See the data storage chapter for more details.
3. Make sure that the data within the de-serialized object is carefully validated
before it is actively used (e.g., no exploit of business/application logic).
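Steps 1 and 2 can be sketched with the standard JCA APIs available on Android. The class and method names below are illustrative; in a real app the keys would come from a secure store (e.g., the Android Keystore) after the user has authenticated, not from plain memory:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class ProtectedBlob {

    // Encrypt-then-MAC: AES/CBC for confidentiality, HMAC-SHA256 over IV + ciphertext.
    static byte[] seal(byte[] plaintext, SecretKey encKey, SecretKey macKey) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, encKey, new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(plaintext);

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(iv);
        byte[] tag = mac.doFinal(ct); // 32-byte tag

        byte[] blob = new byte[16 + ct.length + 32];
        System.arraycopy(iv, 0, blob, 0, 16);
        System.arraycopy(ct, 0, blob, 16, ct.length);
        System.arraycopy(tag, 0, blob, 16 + ct.length, 32);
        return blob;
    }

    // Verify the tag before decrypting; return null if the blob was tampered with.
    static byte[] open(byte[] blob, SecretKey encKey, SecretKey macKey) throws Exception {
        byte[] iv = Arrays.copyOfRange(blob, 0, 16);
        byte[] ct = Arrays.copyOfRange(blob, 16, blob.length - 32);
        byte[] tag = Arrays.copyOfRange(blob, blob.length - 32, blob.length);

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(iv);
        if (!MessageDigest.isEqual(mac.doFinal(ct), tag)) {
            return null; // evaluate the HMAC before using the data (step 1)
        }
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, encKey, new IvParameterSpec(iv));
        return cipher.doFinal(ct);
    }
}
```

The serialized object would be passed through seal before being written to disk, and open would reject any bytes whose HMAC no longer matches.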
For high-risk applications that focus on availability, we recommend that you use
Serializable only when the serialized classes are stable. Second, we
the attacker could find the method's signature via the String-based argument
the attacker might be able to manipulate the reflection-based steps to
execute business logic.
Object Serialization

Search the source code for the following keyword:

implements Serializable
JSON
import org.json.JSONObject;
import org.json.JSONArray;
import com.google.gson
import com.google.gson.annotations
import com.google.gson.reflect
import com.google.gson.stream
new Gson();
import com.fasterxml.jackson.core
ORM
When you use an ORM library, make sure that the data is stored in an encrypted
database and the class representations are individually encrypted before storing
it. See the chapters on data storage and cryptographic management for more
details. You can check for the following keywords in the corresponding libraries:
import com.j256.*
import com.j256.dao
import com.j256.db
import com.j256.stmt
import com.j256.table
import com.github.satyan
extends SugarRecord<Type>
import org.greenrobot.greendao.annotation.Convert
import org.greenrobot.greendao.annotation.Entity
import org.greenrobot.greendao.annotation.Generated
import org.greenrobot.greendao.annotation.Id
import org.greenrobot.greendao.annotation.Index
import org.greenrobot.greendao.annotation.NotNull
import org.greenrobot.greendao.annotation.*
import org.greenrobot.greendao.database.Database
import org.greenrobot.greendao.query.Query
ActiveAndroid.initialize(<contextReference>);
import com.activeandroid.Configuration
import com.activeandroid.query.*
import io.realm.RealmObject;
import io.realm.annotations.PrimaryKey;
Parcelable
Make sure that appropriate security measures are taken when sensitive
information is stored in an Intent via a Bundle that contains a Parcelable. Use
explicit Intents and verify proper additional security controls when using
application-level IPC (e.g., signature verification, intent-permissions, crypto).
Dynamic Analysis
There are several ways to perform dynamic analysis:
1. For the actual persistence: Use the techniques described in the data storage
chapter.
2. For reflection-based approaches: Use Xposed to hook into the de-
serialization methods or add unprocessable information to the serialized
objects to see how they are handled (e.g., whether the application crashes or
extra information can be extracted by enriching the objects).
References
OWASP MASVS
V6.3: "The app does not export sensitive functionality via custom URL
schemes, unless these mechanisms are properly protected."
V6.4: "The app does not export sensitive functionality through IPC
facilities, unless these mechanisms are properly protected."
V6.5: "JavaScript is disabled in WebViews unless explicitly required."
V6.6: "WebViews are configured to allow only the minimum set of protocol
handlers required (ideally, only https is supported). Potentially dangerous
handlers, such as file, tel and app-id, are disabled."
V6.7: "If native methods of the app are exposed to a WebView, verify that
the WebView only renders JavaScript contained within the app package."
V6.8: "Object serialization, if any, is implemented using safe serialization
APIs."
CWE
CWE-79 - Improper Neutralization of Input During Web Page Generation
https://cwe.mitre.org/data/definitions/79.html
CWE-749 - Exposed Dangerous Method or Function
Tools
Drozer - https://github.com/mwrlabs/drozer
Code Quality and Build Settings of
Android Apps
Making Sure That the App is Properly Signed
Overview
Android requires all APKs to be digitally signed with a certificate before they
are installed or run. The digital signature is used to verify the owner's identity for
application updates. This process can prevent an app from being tampered with
or modified to include malicious code.
The final release build of an app must be signed with a valid release key. In
Android Studio, the app can be signed manually or via creation of a signing
configuration that's assigned to the release build type.
All app updates on Android need to be signed with the same certificate, so a validity period of 25 years or more is recommended. Apps published on Google Play must be signed with a key that has a validity period ending after October 22nd, 2033.
Two APK signing schemes are available: the JAR signing scheme (v1) and the APK Signature Scheme v2. The v2 signature, which is supported by Android 7.0 and above, offers improved security and performance. Release builds should always be signed via both schemes.
Static Analysis
Make sure that the release build has been signed via both the v1 and v2 schemes
and that the code-signing certificate in the APK belongs to the developer.
The contents of the signing certificate can be examined with jarsigner . Note
that the Common Name (CN) attribute is set to "Android Debug" in the debug
certificate.
The output for an APK signed with a debug certificate is shown below:
Ignore the "CertPath not validated" error. This error occurs with Java SDK 7 and above. Instead of jarsigner, you can rely on apksigner to verify the certificate chain.
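For example (APK name illustrative; apksigner is included in build-tools 24.0.3 and later):

```
$ apksigner verify --verbose example.apk
Verifies
Verified using v1 scheme (JAR signing): true
Verified using v2 scheme (APK Signature Scheme v2): true
Number of signers: 1
```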
In the signing configuration of the build.gradle file, both schemes can be activated:

v1SigningEnabled true
v2SigningEnabled true
Several best practices for configuring the app for release are available in the
official Android developer documentation.
Dynamic Analysis
Static analysis should be used to verify the APK signature.
Overview
The android:debuggable attribute in the Application element that is defined
in the Android manifest determines whether the app can be debugged or not.
Static Analysis
Check AndroidManifest.xml to determine whether the android:debuggable
attribute has been set and to find the attribute's value:
...
<application android:allowBackup="true"
android:debuggable="true" android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme">
…
For a release build, this attribute should always be set to "false" (the default
value).
Dynamic Analysis
Drozer can be used to determine whether an application is debuggable. The
Drozer module app.package.attacksurface also displays information about
IPC components exported by the application.
$ run-as com.vulnerable.app id
uid=10084(u0_a84) gid=10084(u0_a84)
groups=10083(u0_a83),1004(input),1007(log),1011(adb),1015(sdcar
d_rw),1028(sdcard_r),3001(net_bt_admin),3002(net_bt),3003(inet)
,3006(net_bw_stats) context=u:r:untrusted_app:s0:c512,c768
Android Studio can also be used to debug an application and verify debugging
activation for an app.
1. Using adb and jdwp , identify the PID of the active application that you
want to debug:
$ adb jdwp
2355
16346 <== last launched, corresponds to our application
2. Using jdb , attach the debugger to the local communication channel port and start a debug session:
$ jdb -connect
com.sun.jdi.SocketAttach:hostname=localhost,port=55555
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
> help
The tool JADX can be used to identify interesting locations for breakpoint
insertion.
Help with jdb is available here.
If a "the connection to the debugger has been closed" error occurs while jdb is being bound to the local communication channel port, kill all adb sessions and start a single new session.
Overview
Generally, you should provide compiled code with as little explanation as
possible. Some metadata, such as debugging information, line numbers, and
descriptive function or method names, make the binary or byte-code easier for
the reverse engineer to understand, but these aren't needed in a release build and
can therefore be safely omitted without impacting the app's functionality.
Static Analysis
Symbols are usually stripped during the build process, so you need the compiled
byte-code and libraries to make sure that unnecessary metadata has been
discarded.
First, find the nm binary in your Android NDK and export it (or create an
alias).
export NM=$ANDROID_NDK_DIR/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-nm
$ $NM -a libfoo.so
/tmp/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-nm: libfoo.so: no symbols

$ $NM -D libfoo.so
Alternatively, open the file in your favorite disassembler and check the symbol
tables manually.
Dynamic symbols can be stripped via the visibility compiler flag. Adding
this flag causes gcc to discard the function names while preserving the names of
functions declared as JNIEXPORT .
externalNativeBuild {
cmake {
cppFlags "-fvisibility=hidden"
}
}
Dynamic Analysis
Static analysis should be used to verify debugging symbols.
Overview
StrictMode is a developer tool for detecting violations, e.g. accidental disk or
network access on the application's main thread. It can also be used to check for
good coding practices, such as implementing performant code.
Here is an example of StrictMode with policies enabled for disk and network access on the application's main thread:
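A sketch of such a configuration, using the standard StrictMode.ThreadPolicy.Builder and VmPolicy.Builder APIs (typically guarded so it only runs in debug builds):

```java
if (BuildConfig.DEBUG) {
    StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
            .detectDiskReads()
            .detectDiskWrites()
            .detectNetwork()   // flag network access on the main thread
            .penaltyLog()      // write violations to logcat
            .build());
    StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
            .detectLeakedSqlLiteObjects()
            .detectLeakedClosableObjects()
            .penaltyLog()
            .build());
}
```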
Static Analysis
To determine whether StrictMode is enabled, you can look for the StrictMode.setThreadPolicy or StrictMode.setVmPolicy methods. Most likely, they will be in the onCreate method. The detection methods for the thread policy are:

detectDiskWrites()
detectDiskReads()
detectNetwork()
Dynamic Analysis
There are several ways of detecting StrictMode ; the best choice depends on how the policies' roles are implemented. They include

Logcat,
a warning dialog, and
an application crash.
Overview
Android apps can expose functionality through custom URL schemes (which are
a part of Intents). They can expose functionality to
other apps (via IPC mechanisms, such as Intents, Binders, Android Shared
Memory (ASHMEM), or BroadcastReceivers),
the user (via the user interface).
None of the input from these sources can be trusted; it must be validated and/or
sanitized. Validation ensures processing of data that the app is expecting only. If
validation is not enforced, any input can be sent to the app, which may allow an
attacker or malicious app to exploit app functionality.
The following portions of the source code should be checked if any app
functionality has been exposed:
Custom URL schemes. Check the test case "Testing Custom URL Schemes"
as well for further test scenarios.
IPC Mechanisms (Intents, Binders, Android Shared Memory, or
BroadcastReceivers). Check the test case "Testing Whether Sensitive Data
Is Exposed via IPC Mechanisms" as well for further test scenarios.
User interface
You can use ContentProviders to access database information, and you can
probe services to see if they return data. If data is not validated properly, the
content provider may be prone to SQL injection while other apps are interacting
with it. See the following vulnerable implementation of a ContentProvider.
<provider
    android:name=".OMTG_CODING_003_SQL_Injection_Content_Provider_Implementation"
    android:authorities="sg.vp.owasp_mobile.provider.College">
</provider>
@Override
public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) {
    SQLiteQueryBuilder qb = new SQLiteQueryBuilder();
    qb.setTables(STUDENTS_TABLE_NAME);
    switch (uriMatcher.match(uri)) {
        case STUDENTS:
            qb.setProjectionMap(STUDENTS_PROJECTION_MAP);
            break;
        case STUDENT_ID:
            // SQL Injection when providing an ID
            qb.appendWhere(_ID + "=" + uri.getPathSegments().get(1));
            Log.e("appendWhere", uri.getPathSegments().get(1).toString());
            break;
        default:
            throw new IllegalArgumentException("Unknown URI " + uri);
    }
    // run the assembled query ('db' is the SQLiteDatabase opened elsewhere in the provider)
    Cursor c = qb.query(db, projection, selection, selectionArgs, null, null, sortOrder);
    /**
     * register to watch a content URI for changes
     */
    c.setNotificationUri(getContext().getContentResolver(), uri);
    return c;
}
All app functions that process data coming in through the UI should implement
input validation:
Dynamic Analysis
The tester should manually test the input fields with strings like OR 1=1-- if,
for example, a local SQL injection vulnerability has been identified.
On a rooted device, the command content can be used to query the data from a
Content Provider. The following command queries the vulnerable function
described above.
SQL injection can be exploited with the following command. Instead of getting
the record for Bob only, the user can retrieve all data.
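Assuming the provider exposes its student records under a /students path segment (the path and the name column are assumptions based on the code above), the two queries might look like this on the device shell:

```
$ content query --uri content://sg.vp.owasp_mobile.provider.College/students --where "name='Bob'"

$ content query --uri content://sg.vp.owasp_mobile.provider.College/students --where "name='Bob') OR 1=1--''"
```

The second command closes the injected parenthesis and appends OR 1=1, so the WHERE clause matches every row in the table.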
Overview
Exceptions occur when an application gets into an abnormal or error state. Both
Java and C++ may throw exceptions. Testing exception handling is about
ensuring that the app will handle an exception and transition to a safe state
without exposing sensitive information via the UI or the app's logging
mechanisms.
Static Analysis
Review the source code to understand the application and identify how it handles
different types of errors (IPC communications, remote services invocation, etc.).
Here are some examples of things to check at this stage:
Make sure that the application uses a well-designed and unified scheme to
handle exceptions.
Plan for standard RuntimeExceptions (e.g. NullPointerException , IndexOutOfBoundsException , ActivityNotFoundException ) by creating proper null checks, bounds checks, and the like.
byte[] secret;
try{
//use secret
} catch (SPECIFICEXCEPTIONCLASS | SPECIFICEXCEPTIONCLASS2 e) {
// handle any issues
} finally {
//clean the secret.
}
Adding a general exception handler for uncaught exceptions is a best practice for
resetting the application's state when a crash is imminent:
public class MemoryCleanerOnCrash implements Thread.UncaughtExceptionHandler {
    static void init() {
        Thread.setDefaultUncaughtExceptionHandler(new MemoryCleanerOnCrash());
    }
    @Override
    public void uncaughtException(Thread thread, Throwable ex) {
        // clean up sensitive data held in memory here
    }
}
Now the handler's initializer must be called in your custom Application class
(e.g., the class that extends Application ):
@Override
protected void attachBaseContext(Context base) {
super.attachBaseContext(base);
MemoryCleanerOnCrash.init();
}
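The pattern can be exercised in plain Java, since Thread.UncaughtExceptionHandler is a standard JDK interface; the demo class below is illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CrashHandlerDemo {

    static final AtomicBoolean CLEANED = new AtomicBoolean(false);

    // Register a process-wide handler that wipes sensitive state before the thread dies.
    static void init() {
        Thread.setDefaultUncaughtExceptionHandler((thread, ex) -> {
            // wipe key material, clear caches, then optionally chain to the previous handler
            CLEANED.set(true);
        });
    }

    public static void main(String[] args) {
        init();
        Thread worker = new Thread(() -> {
            throw new IllegalStateException("simulated crash");
        });
        worker.start();
        try {
            worker.join(); // the handler runs on the dying thread before join() returns
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("cleanup ran: " + CLEANED.get()); // cleanup ran: true
    }
}
```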
Dynamic Analysis
There are several ways to do dynamic analysis:
Use Xposed to hook into methods and either call them with unexpected
values or overwrite existing variables with unexpected values (e.g., null
values).
Type unexpected values into the Android application's UI fields.
Interact with the application using its intents, its public providers, and
unexpected values.
Tamper with the network communication and/or the files stored by the
application.
The application should never crash; instead, it should

recover from the error or transition into a state in which it can inform the user of its inability to continue,
if necessary, tell the user to take appropriate action (the message should not leak sensitive information), and
not provide any information in logging mechanisms used by the application.
Overview
Because decompiling Java classes is trivial, applying some basic obfuscation to
the release byte-code is recommended. ProGuard offers an easy way to shrink
and obfuscate code and to strip unneeded debugging information from the byte-
code of Android Java apps. It replaces identifiers, such as class names, method
names, and variable names, with meaningless character strings. This is a type of
layout obfuscation, which is "free" in that it doesn't impact the program's
performance.
Since most Android applications are Java-based, they are immune to buffer
overflow vulnerabilities. Nevertheless, a buffer overflow vulnerability may still
be applicable when you're using the Android NDK; therefore, consider secure
compiler settings.
Static Analysis
If source code is provided, you can check the build.gradle file to see whether
obfuscation settings have been applied. In the example below, you can see that
minifyEnabled and proguardFiles are set. Creating exceptions to protect
some classes from obfuscation (with "-keepclassmembers" and "-keep class") is
common. Therefore, auditing the ProGuard configuration file to see what classes
are exempted is important. The getDefaultProguardFile('proguard-android.txt') method gets the default ProGuard settings from the <Android SDK>/tools/proguard folder. The proguard-rules.pro file is where you can define custom ProGuard rules. You can see that many extended classes in our sample proguard-rules.pro file are common Android classes. This should be defined more granularly on specific classes or libraries.
By default, ProGuard removes attributes that are useful for debugging, including
line numbers, source file names, and variable names. ProGuard is a free Java
class file shrinker, optimizer, obfuscator, and pre-verifier. It is shipped with
Android's SDK tools. To activate shrinking for the release build, add the
following to build.gradle:
android {
buildTypes {
release {
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-
android.txt'),
'proguard-rules.pro'
}
}
...
}
proguard-rules.pro
Dynamic Analysis
If source code has not been provided, an APK can be decompiled to determine
whether the codebase has been obfuscated. Several tools are available for
converting dex code to a jar file (e.g., dex2jar). The jar file can be opened with
tools (such as JD-GUI) that can be used to make sure that class, method, and
variable names are not human-readable.
package com.a.a.a;
import com.a.a.b.a;
import java.util.List;
class a$b
extends a
{
public a$b(List paramList)
{
super(paramList);
}
References
OWASP MASVS
V6.2: "All inputs from external sources and the user are validated and if
necessary sanitized. This includes data received via the UI, IPC
mechanisms such as intents, custom URLs, and network sources."
V7.1: "The app is signed and provisioned with a valid certificate."
V7.2: "The app has been built in release mode, with settings appropriate for
a release build (e.g. non-debuggable)."
V7.3: "Debugging symbols have been removed from native binaries."
V7.4: "Debugging code has been removed, and the app does not log
verbose errors or debugging messages."
V7.6: "The app catches and handles possible exceptions."
V7.7: "Error handling logic in security controls denies access by default."
V7.9: "Free security features offered by the toolchain, such as byte-code
minification, stack protection, PIE support and automatic reference
counting, are activated."
CWE
CWE-20 - Improper Input Validation
CWE-215 - Information Exposure through Debug Information
CWE-388 - Error Handling
CWE-489 - Leftover Debug Code
CWE-656 - Reliance on Security through Obscurity
Tools
ProGuard - https://www.guardsquare.com/en/proguard
jarsigner - http://docs.oracle.com/javase/7/docs/technotes/tools/windows/jarsigner.html
Xposed - http://repo.xposed.info/
Drozer - https://labs.mwrinfosecurity.com/assets/BlogFiles/mwri-drozer-user-guide-2015-03-23.pdf
GNU nm - https://ftp.gnu.org/old-gnu/Manuals/binutils-2.12/html_node/binutils_4.html
Tampering and Reverse Engineering on
Android
Android's openness makes it a favorable environment for reverse engineers. In the following chapter, we'll look at some peculiarities of Android reversing and OS-specific tools and processes.
Android offers reverse engineers big advantages that are not available with "the
other" mobile OS. Because Android is open source, you can study its source
code at the Android Open Source Project (AOSP) and modify the OS and its
standard tools any way you want. Even on standard retail devices it is possible to
do things like activating developer mode and sideloading apps without jumping
through many hoops. From the powerful tools shipping with the SDK to the wide range of available reverse engineering tools, there are a lot of niceties to make your life easier.
However, there are also a few Android-specific challenges. For example, you'll
need to deal with both Java bytecode and native code. Java Native Interface
(JNI) is sometimes deliberately used to confuse reverse engineers (to be fair,
there are legitimate reasons for using JNI, such as improving performance or
supporting legacy code). Developers sometimes use the native layer to "hide"
data and functionality, and they may structure their apps such that execution
frequently jumps between the two layers.
Note that we'll use the OWASP Mobile Testing Guide Crackmes as examples for
demonstrating various reverse engineering techniques in the following sections,
so expect partial and full spoilers. We encourage you to have a crack at the
challenges yourself before reading on!
Make sure the following is installed on your system:
The newest SDK Tools and SDK Platform-Tools packages. These packages
include the Android Debug Bridge (ADB) client and other tools that
interface with the Android platform.
The Android NDK. This is the Native Development Kit that contains
prebuilt toolchains for cross-compiling native code for different
architectures.
In addition to the SDK and NDK, you'll also need something to make Java
bytecode more human-readable. Fortunately, Java decompilers generally handle
Android bytecode well. Popular free decompilers include JD, JAD, Procyon, and
CFR. For convenience, we have packed some of these decompilers into our apkx
wrapper script. This script completely automates the process of extracting Java
code from release APK files and makes it easy to experiment with different
backends (we'll also use it in some of the following examples).
Other tools are really a matter of preference and budget. A ton of free and
commercial disassemblers, decompilers, and frameworks with different strengths
and weaknesses exist; we'll cover some of them.
Setting up the Android SDK
Local Android SDK installations are managed through Android Studio. Create
an empty project in Android Studio and select "Tools->Android->SDK
Manager" to open the SDK Manager GUI. The "SDK Platforms" tab lets you
install SDKs for multiple API levels. Installed SDKs are found at the
following locations:
Windows:
C:\Users\<username>\AppData\Local\Android\sdk
macOS:
/Users/<username>/Library/Android/sdk
Note: On Linux, you'll need to pick your own SDK location. /opt , /srv , and
/usr/local are common locations.
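With the SDK installed, it's convenient to make its tools available in your shell. A minimal sketch, assuming the macOS default location shown above (adjust the path for your OS):

```shell
# Assumed install path; use the Windows or Linux location as appropriate.
export ANDROID_HOME="$HOME/Library/Android/sdk"
# Put adb and the other SDK tools on the PATH.
export PATH="$PATH:$ANDROID_HOME/platform-tools:$ANDROID_HOME/tools"
```

Adding these lines to your shell startup script makes the setting permanent.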
One possibility for setting up the build system is exporting the compiler path and
necessary flags as environment variables. To make things easier, however, the
NDK allows you to create a so-called standalone toolchain—a "temporary"
toolchain that incorporates the required settings.
To set up a standalone toolchain, download the latest stable version of the NDK.
Extract the ZIP file, change into the NDK root directory, and run the following
command:
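The exact invocation differs between NDK releases; with NDK r13 and later it is roughly the following sketch (the target architecture, the API level, and the $NDK root variable are all assumptions):

```shell
# Create a standalone toolchain for ARM, API level 24, in /tmp/android-7-toolchain.
# make_standalone_toolchain.py ships with the NDK under build/tools/.
if [ -x "$NDK/build/tools/make_standalone_toolchain.py" ]; then
  "$NDK/build/tools/make_standalone_toolchain.py" \
      --arch arm --api 24 --install-dir /tmp/android-7-toolchain
else
  echo "NDK not found; set NDK to your NDK root directory first"
fi
```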
Create an environment variable that points to your toolchain directory (we'll
be using this in the examples). Run the following command or add it to your
.bash_profile or other startup script:
$ export TOOLCHAIN=/tmp/android-7-toolchain
Once USB debugging is enabled, connected devices can be viewed with the
following command:
$ adb devices
List of devices attached
BAZ5ORFARKOZYDFA device
APKTool is a popular free tool that can extract and disassemble resources
directly from the APK archive and disassemble Java bytecode to Smali format
(Smali/Baksmali is an assembler/disassembler for the Dex format. It's also
Icelandic for "Assembler/Disassembler"). APKTool allows you to reassemble
the package, which is useful for patching and applying changes to the Manifest.
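The typical decode/rebuild round trip can be sketched as follows (the APK name is illustrative, and apktool is assumed to be on your PATH):

```shell
apk=UnCrackable-Level1.apk   # hypothetical input file
if command -v apktool >/dev/null 2>&1 && [ -f "$apk" ]; then
  # Decode: extracts resources and disassembles classes.dex to Smali.
  apktool d "$apk" -o UnCrackable-Level1
  # ...edit the Manifest or Smali files here...
  # Build: reassembles the package into a new (unsigned) APK.
  apktool b UnCrackable-Level1 -o UnCrackable-Repackaged.apk
else
  echo "apktool or the APK is missing; commands shown for reference only"
fi
```

The rebuilt APK must be re-signed before it will install, as described in the "Repackaging" section below.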
You can accomplish more elaborate tasks (such as program analysis and
automated de-obfuscation) with open source reverse engineering frameworks
such as Radare2 and Angr. You'll find usage examples for many of these free
tools and frameworks throughout the guide.
Commercial Tools
Although working with a completely free setup is possible, you should consider
investing in commercial tools. The main advantage of these tools is convenience:
they come with a nice GUI, lots of automation, and end user support. If you earn
your daily bread as a reverse engineer, they will save you a lot of time.
JEB
JEB, a commercial decompiler, packs all the functionality necessary for static
and dynamic analysis of Android apps into an all-in-one package. It is
reasonably reliable and includes prompt support. It has a built-in debugger,
which allows for an efficient workflow—setting breakpoints directly in the
decompiled (and annotated) sources is invaluable, especially with ProGuard-
obfuscated bytecode. Of course, convenience like this doesn't come cheap, and
now that JEB is provided via a subscription-based license, you'll have to pay a
monthly fee to use it.
IDA Pro
IDA Pro is compatible with ARM, MIPS, Java bytecode, and, of course, Intel
ELF binaries. It also comes with debuggers for both Java applications and native
processes. With its powerful scripting, disassembling, and extension capabilities,
IDA Pro works great for static analysis of native programs and libraries.
However, the static analysis facilities it offers for Java code are rather basic—
you get the Smali disassembly but not much more. You can't navigate the
package and class structure, and some actions (such as renaming classes) can't
be performed, which can make working with more complex Java apps tedious.
Reverse Engineering
Reverse engineering is the process of taking an app apart to find out how it
works. You can do this by examining the compiled app (static analysis),
observing the app during run time (dynamic analysis), or a combination of both.
We're looking for a secret string stored somewhere inside the app, so the next
step is to look inside. First, unzip the APK file and look at the content.
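Since an APK is just a ZIP archive, plain unzip does the job; a sketch (the file name is this crackme's, with a guard so the snippet is safe to run as-is):

```shell
apk=UnCrackable-Level1.apk   # hypothetical path to the crackme APK
if [ -f "$apk" ]; then
  unzip -o "$apk" -d UnCrackable-Level1
  # Typical contents: AndroidManifest.xml, classes.dex, resources.arsc,
  # res/, lib/, META-INF/
  ls UnCrackable-Level1
fi
```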
In the standard setup, all the Java bytecode and app data is in the file
classes.dex in the app root directory. This file conforms to the Dalvik Executable Format (DEX).
Once you have a JAR file, you can use any free decompiler to produce Java
code. In this example, we'll use the CFR decompiler. CFR is under active
development, and brand-new releases are available on the author's website. CFR
was released under an MIT license, so you can use it freely even though its
source code is not available.
The easiest way to run CFR is through apkx, which also packages dex2jar
and automates extraction, conversion, and decompilation. With apkx installed, run:
$ apkx UnCrackable-Level1.apk
Extracting UnCrackable-Level1.apk to UnCrackable-Level1
Converting: classes.dex -> classes.jar (dex2jar)
dex2jar UnCrackable-Level1/classes.dex -> UnCrackable-Level1/classes.jar
Decompiling to UnCrackable-Level1/src (cfr)
You should now find the decompiled sources in the directory
Uncrackable-Level1/src. To view the sources, a simple text editor (preferably with syntax
highlighting) is fine, but loading the code into a Java IDE makes navigation
easier. Let's import the code into IntelliJ, which also provides on-device
debugging functionality.
Open IntelliJ and select "Android" as the project type in the left tab of the "New
Project" dialog. Enter "Uncrackable1" as the application name and
"vantagepoint.sg" as the company name. This results in the package name
"sg.vantagepoint.uncrackable1," which matches the original package name.
Using a matching package name is important if you want to attach the debugger
to the running app later on because IntelliJ uses the package name to identify the
correct process.
In the next dialog, pick any API number; you don't actually want to compile the
project, so the number doesn't matter. Click "next" and choose "Add no
Activity," then click "finish."
Once you have created the project, expand the "1: Project" view on the left and
navigate to the folder app/src/main/java . Right-click and delete the default
package "sg.vantagepoint.uncrackable1" created by IntelliJ.
Now, open the Uncrackable-Level1/src directory in a file browser and drag
the sg directory into the now empty Java folder in the IntelliJ project view
(hold the "alt" key to copy the folder instead of moving it).
You'll end up with a structure that resembles the original Android Studio project
from which the app was built.
As soon as IntelliJ has indexed the code, you can browse it just like you'd
browse any other Java project. Note that many of the decompiled packages,
classes, and methods have weird one-letter names; this is because the bytecode
has been "minified" with ProGuard at build time. This is a basic type of
obfuscation that makes the bytecode a little more difficult to read, but with a
fairly simple app like this one it won't cause you much of a headache. When
you're analyzing a more complex app, however, it can get quite annoying.
When analyzing obfuscated code, annotating class names, method names, and
other identifiers as you go along is a good practice. Open the MainActivity
class in the package sg.vantagepoint.uncrackable1 . The method verify is
called when you tap the "verify" button. This method passes user input to a static
method called a.a , which returns a boolean value. It seems plausible that
a.a verifies user input, so we'll refactor the code to reflect this.
public class a {
public static byte[] a(byte[] object, byte[] arrby) {
object = new SecretKeySpec((byte[])object,
"AES/ECB/PKCS7Padding");
Cipher cipher = Cipher.getInstance("AES");
cipher.init(2, (Key)object);
return cipher.doFinal(arrby);
}
}
Now you're getting somewhere: it's simply standard AES-ECB. Looks like the
base64 string stored in arrby1 in check_input is a ciphertext. It is decrypted
with 128-bit AES, then compared with the user input. As a bonus task, try to
decrypt the extracted ciphertext and find the secret value!
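As a quick sanity check of the scheme, the round trip can be reproduced with openssl: AES-128-ECB with PKCS7 padding (openssl's default), with the ciphertext base64-encoded the way the app stores it. The key below is made up; the crackme's real key has to be recovered from the APK.

```shell
key=00112233445566778899aabbccddeeff   # hypothetical 128-bit key (hex)
printf 'I want to believe' > pt.bin
# Encrypt (PKCS7-style padding is applied by default).
openssl enc -aes-128-ecb -K "$key" -in pt.bin -out ct.bin
base64 < ct.bin                        # how the ciphertext looks when stored
# Decrypt, mirroring what a.a() does with the Cipher in decrypt mode.
openssl enc -d -aes-128-ecb -K "$key" -in ct.bin
```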
A faster way to get the decrypted string is to add dynamic analysis—we'll revisit
UnCrackable Level 1 later to show how, so don't delete the project yet!
Android JNI functions are written in native code that has been compiled into
Linux ELF libraries. It's standard Linux fare. However, instead of relying on
widely used C libraries (such as glibc) Android binaries are built against a
custom libc named Bionic. Bionic adds support for important Android-specific
services such as system properties and logging, and it is not fully POSIX-
compatible.
$ wget https://github.com/OWASP/owasp-mstg/raw/master/Samples/Android/01_HelloWorld-JNI/HelloWord-JNI.apk
$ adb install HelloWord-JNI.apk
This app is not exactly spectacular—all it does is show a label with the text
"Hello from C++." This is the app Android generates by default when you create
a new project with C/C++ support— it's just enough to show the basic principles
of JNI calls.
Decompile the APK with apkx . This extracts the source code into the
HelloWorld/src directory.
$ wget https://github.com/OWASP/owasp-mstg/raw/master/Samples/Android/01_HelloWorld-JNI/HelloWord-JNI.apk
$ apkx HelloWord-JNI.apk
Extracting HelloWord-JNI.apk to HelloWord-JNI
Converting: classes.dex -> classes.jar (dex2jar)
dex2jar HelloWord-JNI/classes.dex -> HelloWord-JNI/classes.jar
@Override
protected void onCreate(Bundle bundle) {
super.onCreate(bundle);
this.setContentView(2130968603);
((TextView)this.findViewById(2131427422)).setText((CharSequence)
this.stringFromJNI());
}
So where is the native implementation of this function? If you look into the
lib directory of the APK archive, you'll see eight subdirectories named after
different processor architectures, each containing a build of the native
library. On Linux, you can retrieve the list of symbols with readelf (included
in GNU binutils) or nm. Do this on Mac OS with the greadelf tool, which you
can install via Macports or Homebrew.
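A sketch of the symbol listing on Linux (the library path inside the unpacked APK is an assumption):

```shell
lib=HelloWord-JNI/lib/armeabi-v7a/libnative-lib.so   # assumed path
if command -v readelf >/dev/null 2>&1 && [ -f "$lib" ]; then
  # Dump the dynamic symbol table and pick out the exported JNI functions.
  readelf -W --dyn-syms "$lib" | grep Java_
fi
```

The exported symbol Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI should appear in the output.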
This is the native function that eventually gets executed when the
stringFromJNI native method is called.
To support both older and newer ARM processors, Android apps ship with
multiple ARM builds compiled for different Application Binary Interface (ABI)
versions. The ABI defines how the application's machine code is supposed to
interact with the system at run time. The following ABIs are supported:
armeabi: This ABI is for ARM-based CPUs that support at least the ARMv5TE
instruction set.
armeabi-v7a: This ABI extends armeabi to include several CPU instruction
set extensions.
arm64-v8a: ABI for ARMv8-based CPUs that support AArch64, the new
64-bit ARM architecture.
You can do the same thing with the demo or evaluation version available on the
Hex-Rays website.
Open the file in IDA Pro. In the "Load new file" dialog, choose "ELF for ARM
(Shared Object)" as the file type (IDA should detect this automatically), and
"ARM Little-Endian" as the processor type.
Once the file is open, click into the "Functions" window on the left and press
Alt+t to open the search dialog. Enter "java" and hit enter. This should
highlight the Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI function.
Not a lot of code there, but you should analyze it. The first thing you need to
know is that the first argument passed to every JNI function is a JNI interface pointer. An
interface pointer is a pointer to a pointer. This pointer points to a function table
—an array of even more pointers, each of which points to a JNI interface
function (is your head spinning yet?). The function table is initialized by the Java
VM and allows the native function to interact with the Java environment.
With that in mind, let's have a look at each line of assembly code.
Remember: the first argument (in R0) is a pointer to the JNI function table
pointer. The LDR instruction loads this function table pointer into R2.
This instruction loads into R1 the pc-relative offset of the string "Hello from
C++." Note that this string comes directly after the end of the function block at
offset 0xe84. Addressing relative to the program counter allows the code to run
independently of its position in memory.
This instruction loads the function pointer stored at offset 0x29C in the JNI
function table pointed to by R2. This is the NewStringUTF function. You
can look at the list of function pointers in jni.h, which is included in the Android
NDK. The function prototype looks like this:
jstring (*NewStringUTF)(JNIEnv*, const char*);
The function takes two arguments: the JNIEnv pointer (already in R0) and a
String pointer. Next, the current value of PC is added to R1, resulting in the
absolute address of the static string "Hello from C++" (PC + offset).
ADD R1, PC
BX R2
When this function returns, R0 contains a pointer to the newly constructed UTF
string. This is the final return value, so R0 is left unchanged and the function
returns.
Android apps support two different types of debugging: Debugging on the level
of the Java runtime with the Java Debug Wire Protocol (JDWP), and
Linux/Unix-style ptrace-based debugging on the native layer, both of which are
valuable to reverse engineers.
Dalvik and ART support the JDWP, a protocol for communication between the
debugger and the Java virtual machine (VM) that it debugs. JDWP is a standard
debugging protocol that's supported by all command line tools and Java IDEs,
including JDB, JEB, IntelliJ, and Eclipse. Android's implementation of JDWP
also includes hooks for supporting extra features implemented by the Dalvik
Debug Monitor Server (DDMS).
A JDWP debugger allows you to step through Java code, set breakpoints on Java
methods, and inspect and modify local and instance variables. You'll use a JDWP
debugger most of the time you debug "normal" Android apps (i.e., apps that
don't make many calls to native libraries).
In the following section, we'll show how to solve the UnCrackable App for
Android Level 1 with JDB alone. Note that this is not an efficient way to solve
this crackme—you can do it much faster with Frida and other methods, which
we'll introduce later in the guide. This, however, serves as an introduction to the
capabilities of the Java debugger.
Repackaging
To "convert" a release build into a debuggable build, you need to modify a flag
in the app's manifest file. This modification breaks the code signature, so you'll
also have to re-sign the altered APK archive.
To re-sign, you first need a code-signing certificate. If you have built a project in
Android Studio before, the IDE has already created a debug keystore and
certificate in $HOME/.android/debug.keystore . The default password for this
keystore is "android," and the key is called "androiddebugkey."
The standard Java distribution includes keytool for managing keystores and
certificates. You can create your own signing certificate and key, then add it to
the debug keystore:
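A sketch of the keytool invocation (the alias "signkey" matches the apksigner command used later; the passwords and distinguished name are placeholders, and a scratch keystore path is used here so the snippet can be tried safely):

```shell
# Scratch keystore so the sketch can run anywhere; in practice you would add
# the key to $HOME/.android/debug.keystore (store password "android").
ks="$(mktemp -d)/debug.keystore"
if command -v keytool >/dev/null 2>&1; then
  keytool -genkeypair -v -keystore "$ks" -alias signkey -keyalg RSA \
      -keysize 2048 -validity 10000 -storepass android -keypass android \
      -dname "CN=MSTG Demo"   # placeholder distinguished name
fi
```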
<application android:allowBackup="true"
android:debuggable="true" android:icon="@drawable/ic_launcher"
android:label="@string/app_name" android:name="com.xxx.xxx.xxx"
android:theme="@style/AppTheme">
AndroidManifest file.
$ cd UnCrackable-Level1
$ apktool b
$ zipalign -v 4 dist/UnCrackable-Level1.apk ../UnCrackable-Repackaged.apk
$ cd ..
$ apksigner sign --ks ~/.android/debug.keystore --ks-key-alias signkey UnCrackable-Repackaged.apk
Note: If you experience JRE compatibility issues with apksigner , you can use
jarsigner instead. If you do, zipalign must be called after signing.
The UnCrackable App is not stupid: it notices that it has been run in debuggable
mode and reacts by shutting down. A modal dialog is shown immediately, and
the crackme terminates once you tap "OK."
The adb command line tool, which ships with the Android SDK, bridges the
gap between your local development environment and a connected Android
device. You'll usually debug apps on the emulator or a device connected via
USB. Use the adb devices command to list the connected devices.
$ adb devices
List of devices attached
090c285c0b97f748 device
The adb jdwp command lists the process ids of all debuggable processes
running on the connected device (i.e., processes hosting a JDWP transport). With
the adb forward command, you can open a listening socket on your host
machine and forward this socket's incoming TCP connections to the JDWP
transport of a chosen process.
$ adb jdwp
12167
$ adb forward tcp:7777 jdwp:12167
You're now ready to attach JDB. Attaching the debugger, however, causes the
app to resume, which you don't want. You want to keep it suspended so that you
can explore first. To prevent the process from resuming, pipe the suspend
command into jdb:
$ { echo "suspend"; cat; } | jdb -attach localhost:7777
Initializing jdb ...
> All threads suspended.
>
You're now attached to the suspended process and ready to go ahead with the jdb
commands. Entering ? prints the complete list of commands. Unfortunately,
the Android VM doesn't support all available JDWP features. For example, the
redefine command, which would let you redefine a class's code, is not supported.
Let's revisit the decompiled code from the UnCrackable App Level 1 and think
about possible solutions. A good approach would be suspending the app in a
state where the secret string is held in a variable in plain text so you can retrieve
it. Unfortunately, you won't get that far unless you deal with the root/tampering
detection first.
Review the code and you'll see that the method
sg.vantagepoint.uncrackable1.MainActivity.a displays the "This is unacceptable..." message box.
You can bypass this with a little run time tampering. With the app still
suspended, set a method breakpoint on android.app.Dialog.setCancelable
and resume the app.
main[1] locals
Method arguments:
Local variables:
flag = true
setCancelable(true) was called, so this can't be the call we're looking for.
main[1] resume
Breakpoint hit: "thread=main",
android.app.Dialog.setCancelable(), line=1,110 bci=0
main[1] locals
flag = false
You've now reached a call to setCancelable with the argument false . Set
the variable to true with the set command and resume.
Repeat this process, setting flag to true each time the breakpoint is reached,
until the alert box is finally displayed (the breakpoint will be reached five or six
times). The alert box should now be cancelable! Tap the screen next to the box
and it will close without terminating the app.
Now that the anti-tampering is out of the way, you're ready to extract the secret
string! In the "static analysis" section, you saw that the string is decrypted with
AES, then compared with the string entered into the message box. The method
equals of the java.lang.String class compares the string input with the
secret string. Set a method breakpoint on java.lang.String.equals, enter a
random string into the edit field, and tap "verify". Once the breakpoint is
reached, read the method argument with the locals command:
main[1] locals
Method arguments:
Local variables:
other = "radiusGravity"
main[1] cont
main[1] locals
Method arguments:
Local variables:
other = "I want to believe"
main[1] cont
This is the plaintext string you're looking for!
Setting up a project in an IDE with the decompiled sources is a neat trick that
allows you to set method breakpoints directly in the source code. In most cases,
you should be able to single-step through the app and inspect the state of variables
with the GUI. The experience won't be perfect—it's not the original source code
after all, so you won't be able to set line breakpoints and things will sometimes
simply not work correctly. Then again, reversing code is never easy, and
efficiently navigating and debugging plain old Java code is a pretty convenient
way of doing it. A similar method has been described in the NetSPI blog.
To set up IDE debugging, first create your Android project in IntelliJ and copy
the decompiled Java sources into the source folder as described above in the
"Statically Analyzing Java Code" section. On the device, choose the app as the
"debug app" in the "Developer options" (Uncrackable1 in this tutorial), and make
sure you've switched on the "Wait For Debugger" feature.
Once you tap the Uncrackable app icon from the launcher, it will be suspended
in "wait for a debugger" mode.
Now you can set breakpoints and attach to the Uncrackable1 app process with
the "Attach Debugger" toolbar button.
Note that only method breakpoints work when debugging an app from
decompiled sources. Once a method breakpoint is reached, you'll get the chance
to single step during the method execution.
After you choose the Uncrackable1 application from the list, the debugger will
attach to the app process and you'll reach the breakpoint that was set on the
onCreate() method. The Uncrackable1 app triggers its anti-debugging and
anti-tampering controls within this method.
Next, single-step through the onCreate() method by clicking "Force Step Into"
in Debugger view. The "Force Step Into" option allows you to debug the
Android framework functions and core Java classes that are normally ignored by
debuggers.
Once you "Force Step Into," the debugger will stop at the beginning of the next
method, which is the a() method of the class sg.vantagepoint.a.c .
This method searches for the "su" binary within a list of directories
( /system/xbin and others). Since you're running the app on a rooted
device/emulator, you need to defeat this check by manipulating variables and/or
function return values.
You can see the directory names inside the "Variables" window by clicking "Step
Over" in the Debugger view as you step through the a() method.
Step into the System.getenv method with the "Force Step Into" feature.
After you get the colon-separated directory names, the debugger cursor will
return to the beginning of the a() method, not to the next executable line. This
happens because you're working on the decompiled code instead of the source
code. This skipping makes following the code flow crucial to debugging
decompiled applications. Otherwise, identifying the next line to be executed
would become complicated.
If you don't want to debug core Java and Android classes, you can step out of the
function by clicking "Step Out" in the Debugger view. Using "Force Step Into"
might be a good idea once you reach the decompiled sources and "Step Out" of
the core Java and Android classes. This will help speed up debugging while you
keep an eye on the return values of the core class functions.
After the a() method gets the directory names, it will search for the su
binary within these directories. To defeat this check, step through the detection
method and inspect the variable content. Once execution reaches a location
where the su binary would be detected, modify one of the variables holding
the file name or directory name by pressing F2 or right-clicking and choosing
"Set Value".
Once you modify the binary name or the directory name, File.exists should
return false .
This defeats the first root detection control of Uncrackable App Level 1. The
remaining anti-tampering and anti-debugging controls can be defeated in similar
ways so that you can finally reach the secret string verification functionality.
The secret code is verified by the method a() of class
sg.vantagepoint.uncrackable1.a . Set a breakpoint on method a() and
"Force Step Into" when you reach the breakpoint. Then, single-step until you
reach the call to String.equals . This is where user input is compared with the
secret string.
You can see the secret string in the "Variables" view when you reach the
String.equals method call.
Debugging Native Code
Native code on Android is packed into ELF shared libraries and runs just like
any other native Linux program. Consequently, you can debug it with standard
tools (including GDB and built-in IDE debuggers such as IDA Pro and JEB) as
long as they support the device's processor architecture (most devices are based
on ARM chipsets, so this is usually not an issue).
You'll now set up your JNI demo app, HelloWorld-JNI.apk, for debugging. It's
the same APK you downloaded in "Statically Analyzing Native Code." Use adb
install to install it on your device or on an emulator.
If you followed the instructions at the beginning of this chapter, you should
already have the Android NDK. It contains prebuilt versions of gdbserver for
various architectures. Copy the gdbserver binary to your device:
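The copy step can be sketched as follows (the gdbserver path inside the NDK varies between releases, so treat it as an assumption; a connected device is required):

```shell
NDK=${NDK:-$HOME/android-ndk}   # assumed NDK root
GDBSERVER="$NDK/prebuilt/android-arm/gdbserver/gdbserver"
if [ -f "$GDBSERVER" ] && command -v adb >/dev/null 2>&1; then
  # Push to a world-writable location and make it executable.
  adb push "$GDBSERVER" /data/local/tmp/gdbserver
  adb shell chmod 755 /data/local/tmp/gdbserver
else
  echo "NDK gdbserver or adb not found; commands shown for reference only"
fi
```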
$ adb shell
$ ps | grep helloworld
u0_a164 12690 201 1533400 51692 ffffffff 00000000 S
sg.vantagepoint.helloworldjni
$ su
# /data/local/tmp/gdbserver --attach localhost:1234 12690
Attached; pid = 12690
Listening on port 1234
You'll now use the prebuilt version of gdb included in the NDK toolchain (if
you haven't already, follow the instructions above to install it).
$ $TOOLCHAIN/bin/gdb libnative-lib.so
GNU gdb (GDB) 7.11
(...)
Reading symbols from libnative-lib.so...(no debugging symbols
found)...done.
(gdb) target remote :1234
Remote debugging using :1234
0xb6e0f124 in ?? ()
You have successfully attached to the process! The only problem is that you're
already too late to debug the JNI function stringFromJNI; it only runs once, at
startup. You can solve this problem by activating the "Wait for Debugger"
option. Go to "Developer Options" -> "Select debug app" and pick
HelloWorldJNI, then activate the "Wait for debugger" switch. Then terminate
and re-launch the app. It should be suspended automatically.
Our objective is to set a breakpoint at the first instruction of the native function
Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI before
resuming the app. Unfortunately, this isn't possible at this point in the execution
because libnative-lib.so isn't yet mapped into process memory—it is loaded
dynamically during run time. To get this working, you'll first use JDB to gently
change the process into the desired state.
First, resume execution of the Java VM by attaching JDB. You don't want the
process to resume immediately though, so pipe the suspend command into
JDB:
$ adb jdwp
14342
$ adb forward tcp:7777 jdwp:14342
$ { echo "suspend"; cat; } | jdb -attach localhost:7777
Next, suspend the process where the Java runtime loads libnative-lib.so . In
JDB, set a breakpoint at the java.lang.System.loadLibrary method and
resume the process. After the breakpoint has been reached, execute the step
up command, which will resume the process until loadLibrary() returns. At this point, JDB pauses the process again.
main[1]
Execute gdbserver to attach to the suspended app. This will cause the app to
be suspended by both the Java VM and the Linux kernel (creating a state of
“double-suspension”).
Execute the resume command in JDB to resume execution of the Java runtime
(you're done with JDB, so you can detach it too). You can start exploring the
process with GDB. The info sharedlibrary command displays the loaded
libraries, which should include libnative-lib.so. The info functions command
retrieves a list of all known functions. The JNI function
Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI should be listed among them:
Non-debugging symbols:
0x00000e78
Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI
(...)
0xa3522e78
Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI
(...)
(gdb) b *0xa3522e78
Breakpoint 1 at 0xa3522e78
(gdb) cont
Your breakpoint should be reached when the first instruction of the JNI function
is executed. You can now display a disassembled version of the function with the
disassemble command.
Breakpoint 1, 0xa3522e78 in
Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI()
from libnative-lib.so
(gdb) disass $pc
Dump of assembler code for function
Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI:
=> 0xa3522e78 <+0>: ldr r2, [r0, #0]
0xa3522e7a <+2>: ldr r1, [pc, #8] ; (0xa3522e84
<Java_sg_vantagepoint_helloworldjni_MainActivity_stringFromJNI+
12>)
0xa3522e7c <+4>: ldr.w r2, [r2, #668] ; 0x29c
0xa3522e80 <+8>: add r1, pc
0xa3522e82 <+10>: bx r2
0xa3522e84 <+12>: lsrs r4, r7, #28
0xa3522e86 <+14>: movs r0, r0
End of assembler dump.
From here on, you can single-step through the program, print the contents of
registers and memory, or tamper with them to explore the JNI function (which,
in this case, simply returns a string). Use the help command to get more
information on debugging, running, and examining data.
Execution Tracing
Besides being useful for debugging, the JDB command line tool offers basic
execution tracing functionality. To trace an app right from the start, you can
pause the app with the Android "Wait for Debugger" feature or a kill -STOP
command and attach JDB to set a deferred method breakpoint on any
initialization method. Once the breakpoint is reached, activate method tracing
with the trace go methods command and resume execution. JDB will dump
all method entries and exits from that point onwards.
The Dalvik Debug Monitor Server (DDMS) is a GUI tool included with Android
Studio. It may not look like much, but its Java method tracer is one of the most
awesome tools you can have in your arsenal, and it is indispensable for
analyzing obfuscated bytecode.
To record an execution trace in Android Studio, open the "Android" tab at the
bottom of the GUI. Select the target process in the list and click the little "stop
watch" button on the left. This starts the recording. Once you're done, click the
same button to stop the recording. The integrated trace view will open and show
the recorded trace. You can scroll and zoom the timeline view with the mouse or
trackpad.
Execution traces can also be recorded in the standalone Android Device Monitor.
The Device Monitor can be started within Android Studio (Tools -> Android ->
Android Device Monitor) or from the shell with the ddms command.
To start recording tracing information, select the target process in the "Devices"
tab and click "Start Method Profiling". Click the stop button to stop recording,
after which the Traceview tool will open and show the recorded trace. Clicking
any of the methods in the profile panel highlights the selected method in the
timeline panel.
DDMS also offers a convenient heap dump button that will dump the Java heap
of a process to a .hprof file. The Android Studio user guide contains more
information about Traceview.
Moving down a level in the OS hierarchy, you arrive at privileged functions that
require the powers of the Linux kernel. These functions are available to normal
processes via the system call interface. Instrumenting and intercepting calls into
the kernel is an effective method for getting a rough idea of what a user process
is doing, and often the most efficient way to deactivate low-level tampering
defenses.
Strace is a standard Linux utility that monitors interaction between processes and
the kernel. The utility is not included with Android by default, but can easily be
built from source via the Android NDK. Strace is a very convenient way to
monitor a process' system calls. Strace depends, however, on the ptrace()
system call to attach to the target process, so it only works up to the point at
which anti-debugging measures kick in.
If the Android "stop application at startup" feature is unavailable, you can use a
shell script to launch the process and immediately attach strace (not an elegant
solution, but it works):

$ while true; do pid=$(pgrep 'target_process' | head -1); if [[ -n "$pid" ]]; then strace -s 2000 -e "!read" -ff -p "$pid"; break; fi; done
Ftrace
Ftrace is a tracing utility built directly into the Linux kernel. On a rooted device,
ftrace can trace kernel system calls more transparently than strace can (strace
relies on the ptrace system call to attach to the target process).
The /sys/kernel/debug/tracing directory holds the control and output files
related to ftrace. The most important files are the following:
available_tracers: This file lists the available tracers compiled into the
kernel.
current_tracer: This file sets or displays the current tracer.
tracing_on: Echo 1 into this file to allow/start update of the ring buffer.
Echoing 0 will prevent further writes into the ring buffer.
KProbes
The KProbes interface provides an even more powerful way to instrument the
kernel: it allows you to insert probes into (almost) arbitrary code addresses
within kernel memory. KProbes inserts a breakpoint instruction at the specified
address. Once the breakpoint is reached, control passes to the KProbes system,
which then executes the user-defined handler function(s) and the original
instruction. Besides being great for function tracing, KProbes can implement
rootkit-like functionality, such as file hiding.
Jprobes and Kretprobes are other KProbes-based probe types that allow hooking
of function entries and exits.
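Although KProbes itself is kernel C code, the entry/exit hooking concept is easy to illustrate with a userspace Python analogy (purely an illustration: real KProbes patches breakpoint instructions into kernel memory, and the names probe and openat below are invented for this sketch):

```python
# Userspace analogy of KProbes/Kretprobes-style hooking (illustration only):
# a "probe" runs a pre-handler on function entry and a post-handler on exit,
# then lets the original code run unmodified.

events = []

def probe(pre_handler, post_handler):
    def wrap(func):
        def hooked(*args, **kwargs):
            pre_handler(func.__name__, args)   # like a kprobe entry handler
            ret = func(*args, **kwargs)        # original instruction stream
            post_handler(func.__name__, ret)   # like a kretprobe return handler
            return ret
        return hooked
    return wrap

@probe(lambda name, args: events.append(("enter", name, args)),
       lambda name, ret: events.append(("exit", name, ret)))
def openat(path):
    return 42   # pretend file descriptor

fd = openat("/data/local/tmp/test")
```

The pre- and post-handlers observe every call without changing the hooked function's behavior, which is exactly the property that makes KProbes useful for function tracing.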
The stock Android kernel comes without loadable module support, which is a
problem because KProbes are usually deployed as kernel modules. The strict
memory protection the Android kernel is compiled with is another issue
because it prevents the patching of some parts of kernel memory. Elfmaster's
system call hooking method causes a kernel panic on stock Lollipop and
Marshmallow because the sys_call_table is non-writable. You can, however, use
KProbes in a sandbox by compiling your own, more lenient kernel (more on this
later).
Emulation-based Analysis
The Android emulator is based on QEMU, a generic and open source machine
emulator. QEMU emulates a guest CPU by translating the guest instructions on-
the-fly into instructions the host processor can understand. Each basic block of
guest instructions is disassembled and translated into an intermediate
representation by the Tiny Code Generator (TCG). The TCG block is compiled
into a block of host instructions, stored in a code cache, and executed. After
execution of the basic block, QEMU repeats the process for the next block of
guest instructions (or loads the already translated block from the cache). The
whole process is called dynamic binary translation.
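The translate, cache, and execute loop described above can be sketched as a toy Python model (an illustration only: real QEMU/TCG translates machine code, while the "blocks" here are just Python closures):

```python
# Toy model of dynamic binary translation (NOT QEMU's real implementation):
# each basic block of "guest code" is translated once, cached, and the cached
# translation is reused on subsequent executions.

translation_cache = {}   # maps guest block address -> translated block
translate_calls = 0      # counts how often the (expensive) translator runs

def translate_block(addr, guest_blocks):
    """Stand-in for the TCG: 'compile' a guest block into a host callable."""
    global translate_calls
    translate_calls += 1
    ops = guest_blocks[addr]              # guest instructions of this block
    def host_block(state):
        for op in ops:                    # execute the 'translated' operations
            op(state)
        return state["next"]              # address of the next guest block
    return host_block

def run(entry, guest_blocks):
    """Main DBT loop: look up the cache, translate on miss, then execute."""
    addr = entry
    state = {"acc": 0, "next": None}
    while addr is not None:
        if addr not in translation_cache:          # cache miss: translate
            translation_cache[addr] = translate_block(addr, guest_blocks)
        addr = translation_cache[addr](state)      # run the cached host block
    return state

# A tiny 'guest program': block 0 runs twice before jumping to block 4.
program = {
    0: [lambda s: s.update(acc=s["acc"] + 1),
        lambda s: s.update(next=0 if s["acc"] < 2 else 4)],
    4: [lambda s: s.update(acc=s["acc"] * 10),
        lambda s: s.update(next=None)],
}
final = run(0, program)
```

Note that block 0 executes twice but is translated only once: that is the payoff of the code cache.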
Because the Android emulator is a fork of QEMU, it comes with all QEMU
features, including monitoring, debugging, and tracing facilities. QEMU-specific
parameters can be passed to the emulator with the -qemu command line flag.
You can use QEMU's built-in tracing facilities to log executed instructions and
virtual register values. Starting qemu with the -d command line flag will cause
it to dump the blocks of guest code, micro operations, or host instructions being
executed. With the -d in_asm option, QEMU logs all basic blocks of guest code as
they enter QEMU's translation function. The following command logs all
translated blocks to a file:

$ emulator -show-kernel -avd Nexus_5_API_19 -snapshot default-boot -no-snapshot-save -qemu -d in_asm,cpu 2>/tmp/qemu.log
DroidScope
DroidScope exports instrumentation APIs that mirror the different context levels
(hardware, OS, and Java) of a real Android device. Analysis tools can use these
APIs to query or set information and register callbacks for various events. For
example, a plugin can register callbacks for native instruction start and end,
memory reads and writes, register reads and writes, system calls, and Java
method calls.
All of this makes it possible to build tracers that are practically transparent to the
target application (as long as we can hide the fact that it is running in an
emulator). One limitation is that DroidScope is compatible with the Dalvik VM
only.
PANDA
$ cd qemu
$ ./configure --target-list=arm-softmmu --enable-android
$ make
VxStripper
1. You can't attach a debugger to the app because the android:debuggable flag
is not set to true in the Manifest.
2. You can't intercept HTTPS traffic with a proxy because the app employs
SSL pinning.
In most cases, both issues can be fixed by making minor changes to the app and
then re-signing and re-packaging it. Apps that run additional integrity checks
beyond default Android code-signing are an exception—in these cases, you have
to patch the additional checks as well.
Example: Disabling Certificate Pinning
Certificate pinning is an issue for security testers who want to intercept HTTPS
communication for legitimate reasons. Patching bytecode to deactivate SSL
pinning can help with this. To demonstrate bypassing certificate pinning, we'll
walk through an implementation in an example application.
$ apktool d target_apk.apk
You then locate the certificate pinning checks in the Smali source code.
Searching the code for keywords such as "X509TrustManager" should point you
in the right direction.
To bypass the pinning check, add the return-void opcode to the first line of
each method. This opcode causes the checks to return immediately. With this
modification, no certificate checks are performed, and the application accepts all
certificates.
.method public checkServerTrusted([Ljava/security/cert/X509Certificate;Ljava/lang/String;)V
    .locals 3
    .param p1, "chain"    # [Ljava/security/cert/X509Certificate;
    .param p2, "authType"    # Ljava/lang/String;

    .prologue
    return-void      # <-- OUR INSERTED OPCODE!

    .line 102
    iget-object v1, p0, Lasdf/t$a;->a:Ljava/util/ArrayList;

    invoke-virtual {v1}, Ljava/util/ArrayList;->iterator()Ljava/util/Iterator;

    move-result-object v1

    :goto_0
    invoke-interface {v1}, Ljava/util/Iterator;->hasNext()Z
If the app was developed with the React Native framework, the main
application code is located in the file assets/index.android.bundle . This file
contains the JavaScript code. Most of the time, the JavaScript code in this file is
minified. With the tool JStillery, a human-readable version of the file can be
retrieved, allowing code analysis. The CLI version of JStillery or a local server
should be preferred over the online version; otherwise, the source code is sent to
and disclosed to a third party.
The following approach can be used in order to patch the JavaScript file:
6. Repack the APK archive using apktool and sign it before installing it on
the target device/emulator.
To use Xposed, you first need to install the Xposed framework on a rooted
device. Modifications are deployed in the form of separate apps ("modules"),
which can be toggled on and off in the Xposed GUI.
Let's assume you're testing an app that's stubbornly quitting on your rooted
device. You decompile the app and find the following highly suspect method:
package com.example.a;

public class b {
    public static boolean c() {
        String[] v1 = {"/sbin/", "/system/bin/", "/system/xbin/"};
        int v2 = v1.length;
        boolean v0 = false;
        for (int v3 = 0; v3 < v2; v3++)
            if (new java.io.File(v1[v3] + "su").exists()) v0 = true;
        return v0;
    }
}
This method iterates through a list of directories and returns "true" (device
rooted) if it finds the su binary in any of them. Checks like this are easy to
deactivate: all you have to do is replace the code with something that returns
"false". Method hooking with an Xposed module is one way to do this.
package com.awesome.pentestcompany;

import static de.robv.android.xposed.XposedHelpers.findAndHookMethod;
import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class DisableRootCheck implements IXposedHookLoadPackage {

    public void handleLoadPackage(final LoadPackageParam lpparam) throws Throwable {
        if (!lpparam.packageName.equals("com.example.targetapp"))
            return;

        findAndHookMethod("com.example.a.b", lpparam.classLoader, "c", new XC_MethodHook() {
            @Override
            protected void beforeHookedMethod(MethodHookParam param) throws Throwable {
                XposedBridge.log("Caught root check!");
                param.setResult(false);
            }
        });
    }
}
Just like regular Android apps, modules for Xposed are developed and deployed
with Android Studio. For more details on writing, compiling, and installing
Xposed modules, refer to the tutorial provided by its author, rovo89.
Frida injects a complete JavaScript runtime into the process, along with a
powerful API that provides a lot of useful functionality, including calling and
hooking native functions and injecting structured data into memory. It also
supports interaction with the Android Java runtime.
FRIDA Architecture, source: http://www.frida.re/docs/hacking/
Instantiate Java objects and call static and non-static class methods
Replace Java method implementations
Enumerate live instances of specific classes by scanning the Java heap
(Dalvik only)
Scan process memory for occurrences of a string
Intercept native function calls to run your own code at function entry and
exit
Installing Frida
Your Android device doesn't need to be rooted to run Frida, but a rooted device
is the easiest setup. We assume a rooted device here unless otherwise noted. Download the
frida-server binary from the Frida releases page. Make sure that you download
the right frida-server binary for the architecture of your Android device or
emulator: x86, x86_64, arm or arm64. Make sure that the server version (at least
the major version number) matches the version of your local Frida installation.
Installing Frida via PyPI usually gives you the latest version. If you're unsure
which version is installed, you can check with the Frida command line tool:
$ frida --version
9.1.10
$ wget https://github.com/frida/frida/releases/download/9.1.10/frida-server-9.1.10-android-arm.xz
Or you can run the following command to automatically detect the Frida version and
download the matching frida-server binary:

$ wget https://github.com/frida/frida/releases/download/$(frida --version)/frida-server-$(frida --version)-android-arm.xz
To run frida-server, extract the downloaded archive, copy the binary to the
device, and start it as root:

$ unxz frida-server-9.1.10-android-arm.xz
$ adb push frida-server-9.1.10-android-arm /data/local/tmp/frida-server
$ adb shell "chmod 755 /data/local/tmp/frida-server"
$ adb shell "su -c /data/local/tmp/frida-server &"

With frida-server running, you should now be able to get a list of running
processes with the following command:
$ frida-ps -U
PID Name
----- --------------------------------------------------------
------
276 adbd
956 android.process.media
198 bridgemgrd
1191 com.android.nfc
1236 com.android.phone
5353 com.android.settings
936 com.android.systemui
(...)
To trace specific (low-level) library calls, you can use the frida-trace
command line tool:

$ frida-trace -i "open" -U com.android.chrome
Use the Frida CLI to work with Frida interactively. It hooks into a process and
gives you a command line interface to Frida's API.

$ frida -U com.android.chrome
With the -l option, you can also use the Frida CLI to load scripts, e.g., to load
myscript.js:

$ frida -U -l myscript.js com.android.chrome
Frida also provides a Java API, which is especially helpful for dealing with
Android apps. It lets you work with Java classes and objects directly. Here is a
script to overwrite the onResume function of an Activity class:
Java.perform(function () {
var Activity = Java.use("android.app.Activity");
Activity.onResume.implementation = function () {
console.log("[*] onResume() got called!");
this.onResume();
};
});
The above script calls Java.perform to make sure that your code gets executed
in the context of the Java VM. It instantiates a wrapper for the
android.app.Activity class via Java.use and overwrites the onResume()
method. The new implementation logs a message to the console and then calls the
original onResume() method via this.onResume() so that the app keeps working
normally.
Frida also lets you search for and work with instantiated objects that are on the
heap. The following script searches for instances of android.view.View objects
and calls their toString method. The result is printed to the console:
setImmediate(function() {
console.log("[*] Starting script");
Java.perform(function () {
Java.choose("android.view.View", {
"onMatch":function(instance){
console.log("[*] Instance found: " +
instance.toString());
},
"onComplete":function() {
console.log("[*] Finished heap search")
}
});
});
});
You can also use Java's reflection capabilities. To list the public methods of the
android.view.View class, you could create a wrapper for this class in Frida and
call getMethods() on its class property:
Java.perform(function () {
var view = Java.use("android.view.View");
var methods = view.class.getMethods();
for(var i = 0; i < methods.length; i++) {
console.log(methods[i].toString());
}
});
Frida makes it easy to solve the OWASP UnCrackable Crackme Level 1. You
have already seen that you can hook method calls with Frida.
When you start the App on an emulator or a rooted device, you'll find that the
app presents a dialog box and exits as soon as you press "Ok" because it detected
root:
Let's see how we can prevent this. The main method (decompiled with CFR)
looks like this:
package sg.vantagepoint.uncrackable1;
import android.app.Activity;
import android.app.AlertDialog;
import android.content.Context;
import android.content.DialogInterface;
import android.os.Bundle;
import android.text.Editable;
import android.view.View;
import android.widget.EditText;
import sg.vantagepoint.uncrackable1.a;
import sg.vantagepoint.uncrackable1.b;
import sg.vantagepoint.uncrackable1.c;
Notice the "Root detected" message in the onCreate method and the various
methods called in the preceding if -statement (which perform the actual root
checks). Also note the "This is unacceptable..." message from the first method of
the class, private void a. Obviously, this method displays the dialog box. There is an
alertDialog.onClickListener callback set in the setButton method call,
which is implemented as follows:
class b implements android.content.DialogInterface$OnClickListener {
    final sg.vantagepoint.uncrackable1.MainActivity a;

    b(sg.vantagepoint.uncrackable1.MainActivity a0) {
        this.a = a0;
        super();
    }

    public void onClick(android.content.DialogInterface a0, int i) {
        System.exit(0);
    }
}

It just exits the app. Now intercept it with Frida to prevent the app from exiting
after root detection:
setImmediate(function() {
    Java.perform(function() {
        bClass = Java.use("sg.vantagepoint.uncrackable1.b");
        bClass.onClick.implementation = function(v) {
            console.log("[*] onClick called");
        };
        console.log("[*] onClick handler modified");
    });
});
Wrap your code in the function setImmediate to prevent timeouts (you may or
may not need to do this), then call Java.perform to use Frida's methods for
dealing with Java. Afterwards retrieve a wrapper for the class that implements
the OnClickListener interface and overwrite its onClick method. Unlike the
original, the new version of onClick just writes console output and doesn't exit
the app. If you inject your version of this method via Frida, the app should not
exit when you click the "OK" dialog button.
After you see the "onClick handler modified" message, you can safely press
"OK". The app will not exit anymore.
You can now try to input a "secret string." But where do you get it?
package sg.vantagepoint.uncrackable1;
import android.util.Base64;
import android.util.Log;
public class a {
    public static boolean a(String string) {
        byte[] arrby = Base64.decode((String)"5UJiFctbmgbDoLXmpL12mkno8HT4Lv8dlat8FxR2GOc=", (int)0);
        byte[] arrby2 = new byte[]{};
        try {
            arrby2 = arrby = sg.vantagepoint.a.a.a((byte[])a.b((String)"8d127684cbc37c17616d806cf50473cc"), (byte[])arrby);
        }
        catch (Exception var2_2) {
            Log.d((String)"CodeCheck", (String)("AES error:" + var2_2.getMessage()));
        }
        if (!string.equals(new String(arrby2))) return false;
        return true;
    }
}
Notice the string.equals comparison at the end of the a method and the
creation of the byte array arrby2 in the try block above. arrby2 is the return
value of the function sg.vantagepoint.a.a.a. The string.equals comparison
compares your input with arrby2. So we want the return value of
sg.vantagepoint.a.a.a.
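As a quick sanity check, the two hard-coded strings in the listing are consistent with AES-128: the hex string decodes to a 16-byte (128-bit) key and the Base64 string to a 32-byte ciphertext (two AES blocks), which fits the "AES error" log message. Python's standard library can confirm the sizes:

```python
import base64

# The two literals from the decompiled a() method above.
ciphertext = base64.b64decode("5UJiFctbmgbDoLXmpL12mkno8HT4Lv8dlat8FxR2GOc=")
key = bytes.fromhex("8d127684cbc37c17616d806cf50473cc")
```

A 16-byte key and a ciphertext that is a multiple of 16 bytes are exactly what you would expect from AES with a 128-bit key.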
Instead of reversing the decryption routines to reconstruct the secret key, you can
simply ignore all the decryption logic in the app and hook the
sg.vantagepoint.a.a.a function to catch its return value. Here is the complete
script that prevents exiting on root and intercepts the decryption of the secret
string:
setImmediate(function() {
    console.log("[*] Starting script");

    Java.perform(function() {
        bClass = Java.use("sg.vantagepoint.uncrackable1.b");
        bClass.onClick.implementation = function(v) {
            console.log("[*] onClick called.");
        };
        console.log("[*] onClick handler modified");

        aaClass = Java.use("sg.vantagepoint.a.a");
        aaClass.a.implementation = function(arg1, arg2) {
            retval = this.a(arg1, arg2);
            password = '';
            for(i = 0; i < retval.length; i++) {
                password += String.fromCharCode(retval[i]);
            }

            console.log("[*] Decrypted: " + password);
            return retval;
        };
        console.log("[*] sg.vantagepoint.a.a.a modified");
    });
});
After running the script in Frida and seeing the "[*] sg.vantagepoint.a.a.a
modified" message in the console, enter a random value for "secret string" and
press verify. You should get an output similar to the following:
michael@sixtyseven:~/Development/frida$ frida -U -l uncrackable1.js sg.vantagepoint.uncrackable1

     ____
    / _  |   Frida 9.1.16 - A world-class dynamic instrumentation framework
   | (_| |
    > _  |   Commands:
   /_/ |_|       help      -> Displays the help system
   . . . .       object?   -> Display information about 'object'
   . . . .       exit/quit -> Exit
   . . . .
   . . . .   More info at http://www.frida.re/docs/home/
The hooked function outputted the decrypted string. You extracted the secret
string without having to dive too deep into the application code and its
decryption routines.
Our target program is a simple license key validation program. Granted, you
won't usually find license key validators like this, but the example should
demonstrate the basics of static/symbolic analysis of native code. You can use
the same techniques on Android apps that ship with obfuscated native libraries
(in fact, obfuscated code is often put into native libraries specifically to make de-
obfuscation more difficult).
Installing Angr
Angr is written in Python 2, and it's available from PyPI. With pip, it's easy to
install on *nix operating systems and Mac OS:

$ pip install angr
Symbolic execution is useful when you need to find the right input for reaching a
certain block of code. In the following example, you'll use Angr to solve a
simple Android crackme in an automated fashion. The crackme takes the form of
a native ELF binary that you can download here:
https://github.com/angr/angr-
doc/tree/master/examples/android_arm_license_validation
Running the executable on any Android device should give you the following
output:
So far so good, but you know nothing about what a valid license key looks like.
Where do we start? Fire up IDA Pro to get a good look at what is happening.
The main function is located at address 0x1874 in the disassembly (note that this
is a PIE-enabled binary, and IDA Pro chooses 0x0 as the image base address).
Function names have been stripped, but you can see some references to
debugging strings. The input string appears to be base32-decoded (call to
sub_1340). At the beginning of main , there's a length check at loc_1898. It
makes sure that the length of the input string is exactly 16 characters. So you're
looking for a base32-encoded 16-character string! The decoded input is then
passed to the function sub_1760, which validates the license key.
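The base32 math is easy to sanity-check with Python's standard library (illustrative only, not code from the crackme): base32 packs every 5 input bytes into 8 output characters, so a 16-character string with no padding corresponds to exactly 10 bytes.

```python
import base64

# base32 maps every 5 input bytes to 8 output characters, so a 10-byte
# input yields a 16-character string with no '=' padding.
encoded = base64.b32encode(b"0123456789")   # arbitrary 10-byte input
decoded = base64.b32decode(encoded)
```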
The decoded 16-character input string totals 10 bytes, so you know that the
validation function expects a 10-byte binary string. Next, look at the core
validation function at 0x1760:
.text:00001760 ; =============== S U B R O U T I N E
=======================================
.text:00001760
.text:00001760 ; Attributes: bp-based frame
.text:00001760
.text:00001760 sub_1760 ; CODE
XREF: sub_1874+B0
.text:00001760
.text:00001760 var_20 = -0x20
.text:00001760 var_1C = -0x1C
.text:00001760 var_1B = -0x1B
.text:00001760 var_1A = -0x1A
.text:00001760 var_19 = -0x19
.text:00001760 var_18 = -0x18
.text:00001760 var_14 = -0x14
.text:00001760 var_10 = -0x10
.text:00001760 var_C = -0xC
.text:00001760
.text:00001760 STMFD SP!, {R4,R11,LR}
.text:00001764 ADD R11, SP, #8
.text:00001768 SUB SP, SP, #0x1C
.text:0000176C STR R0, [R11,#var_20]
.text:00001770 LDR R3, [R11,#var_20]
.text:00001774 STR R3, [R11,#var_10]
.text:00001778 MOV R3, #0
.text:0000177C STR R3, [R11,#var_14]
.text:00001780 B loc_17D0
.text:00001784 ; ----------------------------------------------
-----------------------------
.text:00001784
.text:00001784 loc_1784 ; CODE
XREF: sub_1760+78
.text:00001784 LDR R3, [R11,#var_10]
.text:00001788 LDRB R2, [R3]
.text:0000178C LDR R3, [R11,#var_10]
.text:00001790 ADD R3, R3, #1
.text:00001794 LDRB R3, [R3]
.text:00001798 EOR R3, R2, R3
.text:0000179C AND R2, R3, #0xFF
.text:000017A0 MOV R3, #0xFFFFFFF0
.text:000017A4 LDR R1, [R11,#var_14]
.text:000017A8 SUB R0, R11, #-var_C
.text:000017AC ADD R1, R0, R1
.text:000017B0 ADD R3, R1, R3
.text:000017B4 STRB R2, [R3]
.text:000017B8 LDR R3, [R11,#var_10]
.text:000017BC ADD R3, R3, #2
.text:000017C0 STR R3, [R11,#var_10]
.text:000017C4 LDR R3, [R11,#var_14]
.text:000017C8 ADD R3, R3, #1
.text:000017CC STR R3, [R11,#var_14]
.text:000017D0
.text:000017D0 loc_17D0 ; CODE
XREF: sub_1760+20
.text:000017D0 LDR R3, [R11,#var_14]
.text:000017D4 CMP R3, #4
.text:000017D8 BLE loc_1784
.text:000017DC LDRB R4, [R11,#var_1C]
.text:000017E0 BL sub_16F0
.text:000017E4 MOV R3, R0
.text:000017E8 CMP R4, R3
.text:000017EC BNE loc_1854
.text:000017F0 LDRB R4, [R11,#var_1B]
.text:000017F4 BL sub_170C
.text:000017F8 MOV R3, R0
.text:000017FC CMP R4, R3
.text:00001800 BNE loc_1854
.text:00001804 LDRB R4, [R11,#var_1A]
.text:00001808 BL sub_16F0
.text:0000180C MOV R3, R0
.text:00001810 CMP R4, R3
.text:00001814 BNE loc_1854
.text:00001818 LDRB R4, [R11,#var_19]
.text:0000181C BL sub_1728
.text:00001820 MOV R3, R0
.text:00001824 CMP R4, R3
.text:00001828 BNE loc_1854
.text:0000182C LDRB R4, [R11,#var_18]
.text:00001830 BL sub_1744
.text:00001834 MOV R3, R0
.text:00001838 CMP R4, R3
.text:0000183C BNE loc_1854
.text:00001840 LDR R3, =(aProductActivat -
0x184C)
.text:00001844 ADD R3, PC, R3 ;
"Product activation passed. Congratulati"...
.text:00001848 MOV R0, R3 ; char *
.text:0000184C BL puts
.text:00001850 B loc_1864
.text:00001854 ; ----------------------------------------------
-----------------------------
.text:00001854
.text:00001854 loc_1854 ; CODE
XREF: sub_1760+8C
.text:00001854 ;
sub_1760+A0 ...
.text:00001854 LDR R3, =(aIncorrectSer_0 -
0x1860)
.text:00001858 ADD R3, PC, R3 ;
"Incorrect serial."
.text:0000185C MOV R0, R3 ; char *
.text:00001860 BL puts
.text:00001864
.text:00001864 loc_1864 ; CODE
XREF: sub_1760+F0
.text:00001864 SUB SP, R11, #8
.text:00001868 LDMFD SP!, {R4,R11,PC}
.text:00001868 ; End of function sub_1760
You can see a loop with some XOR-magic happening at loc_1784, which
supposedly decodes the input string. Starting from loc_17DC, you can see a
series of decoded values compared with values from further subfunction calls.
Even though this doesn't look like highly sophisticated stuff, you'd still need to
analyze more to completely reverse this check and generate a license key that
passes it. Now comes the twist: dynamic symbolic execution enables you to
construct a valid key automatically! The symbolic execution engine maps a path
between the first instruction of the license check (0x1760) and the code that
prints the "Product activation passed" message (0x1840) to determine the
constraints on each byte of the input string. The solver engine then finds an input
that satisfies those constraints: the valid license key.
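Paraphrased in Python, the decoding loop at loc_1784 appears to compute the following (a sketch based on the disassembly above, not code extracted from the binary): it walks the 10 input bytes pairwise and XORs each pair into one of 5 decoded bytes.

```python
def decode(input_bytes):
    """Paraphrase of loc_1784: XOR adjacent byte pairs into 5 decoded bytes."""
    assert len(input_bytes) == 10
    decoded = []
    i = 0
    for _ in range(5):                      # the loop counter runs while <= 4
        decoded.append(input_bytes[i] ^ input_bytes[i + 1])
        i += 2                              # the input pointer advances 2 bytes
    return decoded

# Each decoded byte is then compared (starting at loc_17DC) against the
# return value of one of the subfunctions (sub_16F0, sub_170C, ...).
result = decode(bytes([0x10, 0x20, 0x01, 0x02, 0xFF, 0x0F, 0xAA, 0x55, 0x00, 0x7F]))
```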
Essentially, all you need to provide is the following information:
An address from which execution will start. Initialize the state with the first
instruction of the serial validation function. This makes the problem
significantly easier to solve because you avoid symbolically executing the
base32 implementation.
The address of the code block you want execution to reach. You need to
find a path to the code responsible for printing the "Product activation
passed" message. This code block starts at 0x1840.
Addresses you don't want to reach. You're not interested in any path that
ends with the block of code that prints the "Incorrect serial" message
(0x1854).
Note that the Angr loader will load the PIE executable with a base address of
0x400000, so you must add this to the addresses above. The solution is:
#!/usr/bin/python

import angr
import claripy
import base64

load_options = {}

b = angr.Project("./validate", load_options = load_options)

# Create the initial state at the start of the validation function
# (0x1760 plus the 0x400000 base address used by the loader).
state = b.factory.blank_state(addr=0x401760)

initial_path = b.factory.path(state)
path_group = b.factory.path_group(state)

# 0x401840 prints "Product activation passed"; 0x401854 prints "Incorrect serial".
path_group.explore(find=0x401840, avoid=0x401854)
found = path_group.found[0]

# Read the solution from (symbolic) memory: the pointer to the decoded
# input buffer is stored at R11 - 0x24 in the found state.
addr = found.state.memory.load(found.state.regs.r11 - 0x24, endness='Iend_LE')
concrete_addr = found.state.se.any_int(addr)
solution = found.state.se.any_str(found.state.memory.load(concrete_addr, 10))

print base64.b32encode(solution)
Note the last part of the program, where the final input string is retrieved—it
appears as if you were simply reading the solution from memory. You are,
however, reading from symbolic memory—neither the string nor the pointer to it
actually exist! Actually, the solver is computing concrete values that you could
find in that program state if you observed the actual program run up to that point.
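To make "finding an input that satisfies the constraints" concrete, the idea can be mimicked for the XOR-pair check by brute force (angr uses a real SMT solver; the target values below are hypothetical, chosen only for this sketch):

```python
# Miniature 'solver' for the XOR-pair constraint system (illustration only;
# the target byte values are hypothetical stand-ins for what the
# subfunctions in the binary would return).
targets = [0x30, 0x03, 0xF0, 0xFF, 0x7F]

solution = []
for t in targets:
    # find any byte pair (a, b) with a ^ b == t
    a, b = next((a, b) for a in range(256) for b in range(256) if a ^ b == t)
    solution.extend([a, b])
```

A real SMT solver does the same job symbolically and scales to far larger constraint systems; brute force only works here because each constraint involves just two bytes.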
$ cat /default.prop
#
# ADDITIONAL_DEFAULT_PROPERTIES
#
ro.secure=1
ro.allow.mock.location=0
ro.debuggable=1
ro.zygote=zygote32
persist.radio.snapshot_enabled=1
persist.radio.snapshot_timer=2
persist.radio.use_cc_names=true
persist.sys.usb.config=mtp
rild.libpath=/system/lib/libril-qc-qmi-1.so
camera.disable_zsl_mode=1
ro.adb.secure=1
dalvik.vm.dex2oat-Xms=64m
dalvik.vm.dex2oat-Xmx=512m
dalvik.vm.image-dex2oat-Xms=64m
dalvik.vm.image-dex2oat-Xmx=64m
ro.dalvik.vm.native.bridge=0
Setting ro.debuggable to 1 makes all running apps debuggable (i.e., the debugger
thread will run in every process), regardless of the value of the
android:debuggable attribute in the app's Manifest. Setting ro.secure to 0 causes
adbd to run as root. To modify initrd on any Android device, back up the original
boot image with TWRP or dump it with the following command:

$ adb shell su -c "dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/data/local/tmp/boot.img"
$ adb pull /data/local/tmp/boot.img
To extract the contents of the boot image, use the abootimg tool as described in
Krzysztof Adamski's how-to :
$ mkdir boot
$ cd boot
$ ../abootimg -x /tmp/boot.img
$ mkdir initrd
$ cd initrd
$ cat ../initrd.img | gunzip | cpio -vid
Note the boot parameters written to bootimg.cfg; you'll need them when booting
your new kernel and ramdisk.
$ cd initrd
$ find . | cpio --create --format='newc' | gzip >
../myinitd.img
Android apps have several ways to interact with the OS. Interacting through the
Android Application Framework's APIs is standard. At the lowest level,
however, many important functions (such as allocating memory and accessing
files) are translated into old-school Linux system calls. On ARM Linux, system
calls are invoked via the SVC instruction, which triggers a software interrupt.
This interrupt calls the vector_swi() kernel function, which then uses the
system call number as an offset into a table (known as sys_call_table on
Android) of function pointers.
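This dispatch mechanism can be modeled in a few lines of Python (a toy illustration, not kernel code; it mirrors the openat file-hiding hook built later in this section, and 322 is openat's syscall number on 32-bit ARM):

```python
# Toy model of sys_call_table dispatch and hooking (illustration only).
NR_OPENAT = 322   # openat's syscall number on 32-bit ARM

def sys_openat(pathname):
    return 3          # pretend to return a file descriptor

sys_call_table = {NR_OPENAT: sys_openat}

def vector_swi(nr, *args):
    """Kernel entry point: index the table with the syscall number."""
    return sys_call_table[nr](*args)

# 'Hooking': save the original pointer and overwrite the table entry.
real_openat = sys_call_table[NR_OPENAT]

def new_openat(pathname):
    if pathname == "/data/local/tmp/nowyouseeme":
        return -2     # -ENOENT: pretend the file doesn't exist
    return real_openat(pathname)

sys_call_table[NR_OPENAT] = new_openat
```

After the overwrite, every dispatch through the table transparently passes through the hook, which is exactly why the real kernel protects sys_call_table against writes.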
The most straightforward way to intercept system calls is to inject your own
code into kernel memory, then overwrite the original function in the system call
table to redirect execution. Unfortunately, current stock Android kernels enforce
memory restrictions that prevent this. Specifically, stock Lollipop and
Marshmallow kernels are built with the CONFIG_STRICT_MEMORY_RWX
option enabled. This prevents writing to kernel memory regions marked as read-
only, so any attempt to patch kernel code or the system call table results in a
segmentation fault and reboot. To get around this, build your own kernel. You
can then deactivate this protection and make many other useful customizations
that simplify reverse engineering. If you reverse Android apps on a regular basis,
building your own reverse engineering sandbox is a no-brainer.
For hacking, I recommend an AOSP-supported device. Google's Nexus
smartphones and tablets are the most logical candidates because kernels and
system components built from the AOSP run on them without issues. Sony's
Xperia series is also known for its openness. To build the AOSP kernel, you need
a toolchain (a set of programs for cross-compiling the sources) and the
appropriate version of the kernel sources. Follow Google's instructions to
identify the correct git repo and branch for a given device and Android version.
https://source.android.com/source/building-kernels.html#id-version
For example, to get kernel sources for Lollipop that are compatible with the
Nexus 5, you need to clone the msm repo and check out one of the android-
msm-hammerhead branches (hammerhead is the codename of the Nexus 5, and
finding the right branch is confusing). Once you have downloaded the sources,
create the default kernel config with the command make
hammerhead_defconfig (replacing "hammerhead" with your target device).
I recommend the following settings, which add loadable module support, enable
KProbes and tracing, and relax the kernel's strict memory protection:

CONFIG_MODULES=Y
CONFIG_STRICT_MEMORY_RWX=N
CONFIG_DEVMEM=Y
CONFIG_DEVKMEM=Y
CONFIG_KALLSYMS=Y
CONFIG_KALLSYMS_ALL=Y
CONFIG_HAVE_KPROBES=Y
CONFIG_HAVE_KRETPROBES=Y
CONFIG_HAVE_FUNCTION_TRACER=Y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=Y
CONFIG_TRACING=Y
CONFIG_FTRACE=Y
CONFIG_KDB=Y
Once you're finished editing, save the .config file and build the kernel.
$ export ARCH=arm
$ export SUBARCH=arm
$ export CROSS_COMPILE=/path_to_your_ndk/arm-eabi-4.8/bin/arm-
eabi-
$ make
You can now create a standalone toolchain for cross-compiling the kernel and
subsequent tasks. To create a toolchain for Android Nougat, run make-
standalone-toolchain.sh from the Android NDK package:
$ cd android-ndk-rXXX
$ build/tools/make-standalone-toolchain.sh --arch=arm --
platform=android-24 --install-dir=/tmp/my-android-toolchain
root@hammerhead:/dev # ls -al
/dev/block/platform/msm_sdcc.1/by-name/
lrwxrwxrwx root root 1970-08-30 22:31 DDR ->
/dev/block/mmcblk0p24
lrwxrwxrwx root root 1970-08-30 22:31 aboot ->
/dev/block/mmcblk0p6
lrwxrwxrwx root root 1970-08-30 22:31 abootb -
> /dev/block/mmcblk0p11
lrwxrwxrwx root root 1970-08-30 22:31 boot ->
/dev/block/mmcblk0p19
(...)
lrwxrwxrwx root root 1970-08-30 22:31 userdata
-> /dev/block/mmcblk0p28
$ abootimg -x boot.img
This should create the files bootimg.cfg, initrd.img, and zImage (your original
kernel) in the local directory.
You can now use fastboot to test the new kernel. The fastboot boot command
allows you to run the kernel without actually flashing it (once you're sure
everything works, you can make the changes permanent with fastboot flash, but
you don't have to). Restart the device in fastboot mode with the following
command:

$ adb reboot bootloader
Then use the fastboot boot command to boot Android with the new kernel.
Specify the kernel offset, ramdisk offset, tags offset, and command line (use the
values listed in your extracted bootimg.cfg) in addition to the newly built kernel
and the original ramdisk.
The system should now boot normally. To quickly verify that the correct kernel
is running, navigate to Settings->About phone and check the "kernel version"
field.
System Call Hooking with Kernel Modules
System call hooking allows you to attack any anti-reversing defenses that
depend on kernel-provided functionality. With your custom kernel in place, you
can now use an LKM to load additional code into the kernel. You also have
access to the /dev/kmem interface, which you can use to patch kernel memory
on-the-fly. This is a classic Linux rootkit technique that has been described for
Android by Dong-Hoon You [1].
The address of sys_call_table can be found in /proc/kallsyms (you may have to
set /proc/sys/kernel/kptr_restrict to 0 first):

$ adb shell "su -c 'echo 0 > /proc/sys/kernel/kptr_restrict'"
$ adb shell "su -c 'cat /proc/kallsyms | grep sys_call_table'"

This is the only memory address you need for writing your kernel module: you
can calculate everything else with offsets taken from the kernel headers
(hopefully, you didn't delete them yet).
In this how-to, we will use a kernel module to hide a file. Create a file on the
device so you can hide it later:

$ adb shell "su -c touch /data/local/tmp/nowyouseeme"
You can find the function prototypes for all system calls in
the kernel header file arch/arm/include/asm/unistd.h. Create a
file called kernel_hook.c with the following code:
```c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/unistd.h>
#include <linux/slab.h>
#include <asm/uaccess.h>

asmlinkage int (*real_openat)(int, const char __user*, int, int);

void **sys_call_table;

int new_openat(int dirfd, const char __user* pathname, int flags, int mode)
{
    char *kbuf;
    size_t len;

    kbuf = (char*)kmalloc(256, GFP_KERNEL);
    len = strncpy_from_user(kbuf, pathname, 255);

    if (strcmp(kbuf, "/data/local/tmp/nowyouseeme") == 0) {
        printk("Hiding file!\n");
        kfree(kbuf);
        return -ENOENT;
    }

    kfree(kbuf);
    return real_openat(dirfd, pathname, flags, mode);
}

int init_module() {
    sys_call_table = (void*)0xc000f984;
    real_openat = (void*)(sys_call_table[__NR_openat]);
    return 0;
}
```
To build the kernel module, you need the kernel sources and a working
toolchain. Since you've already built a complete kernel, you're all set. Create a
Makefile with the following content:
obj-m := kernel_hook.o

all:
	make ARCH=arm CROSS_COMPILE=$(TOOLCHAIN)/bin/arm-eabi- -C $(KERNEL) M=$(shell pwd) CFLAGS_MODULE=-fno-pic modules

clean:
	make -C $(KERNEL) M=$(shell pwd) clean
Run make to compile the code—this should create the file kernel_hook.ko. Copy
kernel_hook.ko to the device and load it with the insmod command. Using the
lsmod command, verify that the module has been loaded successfully.
$ make
(...)
$ adb push kernel_hook.ko /data/local/tmp/
[100%] /data/local/tmp/kernel_hook.ko
$ adb shell su -c insmod /data/local/tmp/kernel_hook.ko
$ adb shell lsmod
kernel_hook 1160 0 [permanent], Live 0xbf000000 (PO)
Now you'll access /dev/kmem to overwrite the original function pointer in
sys_call_table with the address of your newly injected function (this could have
been done directly in the kernel module, but /dev/kmem provides an easy way to
toggle your hooks on and off). I have adapted the code from Dong-Hoon You's
Phrack article for this purpose. However, I used the file interface instead of
mmap() because I found that the latter caused kernel panics. Create a file called
kmem_util.c with the following code:
```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int kmem;

void read_kmem2(unsigned char *buf, off_t off, int sz)
{
    lseek(kmem, off, SEEK_SET);
    read(kmem, buf, sz);
}

void write_kmem2(unsigned char *buf, off_t off, int sz)
{
    lseek(kmem, off, SEEK_SET);
    write(kmem, buf, sz);
}

int main(int argc, char *argv[])
{
    off_t sys_call_table;
    unsigned int addr_ptr, sys_call_number;
    unsigned char buf[4];

    if (argc < 4) {
        printf("Usage: %s <table_address> <syscall_number> <patch_address>\n", argv[0]);
        return 0;
    }
    kmem = open("/dev/kmem", O_RDWR);
    if (kmem < 0) {
        perror("Error opening kmem");
        return 0;
    }
    sys_call_table = strtoul(argv[1], NULL, 16);
    sys_call_number = strtoul(argv[2], NULL, 10);
    addr_ptr = strtoul(argv[3], NULL, 16);

    /* Print the original entry, patch it, then print the new value. */
    read_kmem2(buf, sys_call_table + sys_call_number * 4, 4);
    printf("Original value: %02x%02x%02x%02x\n", buf[3], buf[2], buf[1], buf[0]);
    write_kmem2((unsigned char*)&addr_ptr, sys_call_table + sys_call_number * 4, 4);
    read_kmem2(buf, sys_call_table + sys_call_number * 4, 4);
    printf("New value: %02x%02x%02x%02x\n", buf[3], buf[2], buf[1], buf[0]);
    close(kmem);
    return 0;
}
```
Beginning with Android Lollipop, all executables must be compiled with PIE
support. Build kmem_util.c with the prebuilt toolchain and copy it to the device:

$ /tmp/my-android-toolchain/bin/arm-linux-androideabi-gcc -pie -fpie -o kmem_util kmem_util.c
$ adb push kmem_util /data/local/tmp/
$ adb shell chmod 755 /data/local/tmp/kmem_util
Before you start accessing kernel memory, you still need to know the correct
offset into the system call table. The openat system call is defined in unistd.h,
which is in the kernel sources:

#define __NR_openat (__NR_SYSCALL_BASE+322)
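Since each sys_call_table entry on 32-bit ARM is a 4-byte pointer, the address to be patched follows from simple arithmetic. A small helper (a sketch using the example values from this walkthrough; the function name is illustrative) makes the calculation explicit:

```c
#include <stdint.h>

/* On 32-bit ARM, sys_call_table is an array of 4-byte function pointers,
   so the entry for syscall number nr lives at table_base + nr * 4. */
uint32_t syscall_entry_addr(uint32_t table_base, uint32_t nr) {
    return table_base + nr * 4;
}
```

With the table at 0xc000f984 and __NR_openat = 322, this yields 0xc000fe8c, the word that the patch ultimately overwrites.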
The final piece of the puzzle is the address of your replacement openat. Again,
you can get this address from /proc/kallsyms by grepping for the name of the
hook function your kernel module exports.

Now you have everything you need to overwrite the sys_call_table entry. The
syntax for kmem_util is:

./kmem_util <base address of sys_call_table> <syscall number> <patch address>
The following command patches the openat system call table so that it points to
your new function.
$ adb shell su -c /data/local/tmp/kmem_util c000f984 322
bf000000
Original value: c017a390
New value: bf000000
Assuming that everything worked, /bin/cat shouldn't be able to "see" the file:

$ adb shell su -c cat /data/local/tmp/nowyouseeme

Voilà! The file "nowyouseeme" is now somewhat hidden from all user-mode
processes (note that you need to do a lot more to properly hide a file, including
hooking stat(), access(), and other system calls).
File-hiding is of course only the tip of the iceberg: you can accomplish a lot
using kernel modules, including bypassing many root detection measures,
integrity checks, and anti-debugging measures. You can find more examples in
the "case studies" section of Bernhard Mueller's Hacking Soft Tokens Paper.
Android Anti-Reversing Defenses
Testing Root Detection
Overview
In the context of anti-reversing, the goal of root detection is to make running the
app on a rooted device a bit more difficult, which in turn blocks some of the
tools and techniques reverse engineers like to use. Like most other defenses, root
detection is not very effective by itself, but implementing multiple root checks
that are scattered throughout the app can improve the effectiveness of the overall
anti-tampering scheme.
For Android, we define "root detection" a bit more broadly to include custom
ROM detection, i.e., determining whether the device is a stock Android build or
a custom build.
In the following section, we list some common root detection methods you'll
encounter. You'll find some of these methods implemented in the crackme
examples that accompany the OWASP Mobile Testing Guide.
SafetyNet
SafetyNet is an Android API that provides a set of services and creates profiles
of devices according to software and hardware information. This profile is then
compared to a list of whitelisted device models that have passed Android
compatibility testing. Google recommends using the feature as "an additional
in-depth defense signal as part of an anti-abuse system."
How exactly SafetyNet works is not well documented and may change at any
time. When you call this API, SafetyNet downloads a binary package containing
the device validation code provided by Google, and the code is then
dynamically executed via reflection. An analysis by John Kozyrakis showed that
SafetyNet also attempts to detect whether the device is rooted, but exactly how
that's determined is unclear.
To use the API, an app may call the SafetyNetApi.attest method (which
returns a JWS message with the Attestation Result) and then check the following
fields:
```json
{
  "nonce": "R2Rra24fVm5xa2Mg",
  "timestampMs": 9860437986543,
  "apkPackageName": "com.package.name.of.requesting.app",
  "apkCertificateDigestSha256": ["base64 encoded, SHA-256 hash of the certificate used to sign requesting app"],
  "apkDigestSha256": "base64 encoded, SHA-256 hash of the app's APK",
  "ctsProfileMatch": true,
  "basicIntegrity": true
}
```
Programmatic Detection
Perhaps the most widely used method of programmatic detection is checking for
files typically found on rooted devices, such as package files of common rooting
apps and their associated files and directories, including the following:
/system/app/Superuser.apk
/system/etc/init.d/99SuperSUDaemon
/dev/com.koushikdutta.superuser.daemon/
/system/xbin/daemonsu
Detection code also often looks for binaries that are usually installed once a
device has been rooted. These searches include checking for busybox and
attempting to open the su binary at different locations:
/system/xbin/busybox
/sbin/su
/system/bin/su
/system/xbin/su
/data/local/su
/data/local/xbin/su
Checking whether su is on the PATH also works.
File checks are easy to implement in both Java and native code. For example, a
JNI check (adapted from rootinspector) can invoke the stat system call to
retrieve information about a file and return 1 if the file exists.
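Stripped of the JNI plumbing, the core of such a check is a single stat call (a standalone sketch; file_exists is an illustrative name):

```c
#include <sys/stat.h>

/* Returns 1 if the file at the given path exists, 0 otherwise --
   e.g. file_exists("/system/xbin/su") on a potentially rooted device. */
int file_exists(const char *path) {
    struct stat fileattrib;
    return stat(path, &fileattrib) == 0;
}
```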
Detection code can also inspect the list of running processes for daemons
installed by rooting apps, such as the supersu daemon:

```java
public boolean checkRunningProcesses() {
  boolean returnValue = false;
  // "manager" is an android.app.ActivityManager instance
  List<RunningServiceInfo> list = manager.getRunningServices(300);
  if(list != null){
    String tempName;
    for(int i=0;i<list.size();++i){
      tempName = list.get(i).process;
      if(tempName.contains("supersu") || tempName.contains("superuser")){
        returnValue = true;
      }
    }
  }
  return returnValue;
}
```
You can use the Android package manager to obtain a list of installed packages.
The following package names belong to popular rooting tools:
com.thirdparty.superuser
eu.chainfire.supersu
com.noshufou.android.su
com.koushikdutta.superuser
com.zachspong.temprootremovejb
com.ramdroid.appquarantine
Run execution traces with JDB, DDMS, strace, and/or kernel modules to find
out what the app is doing. You'll usually see all kinds of suspect interactions with
the operating system, such as opening su for reading and obtaining a list of
processes. These interactions are surefire signs of root detection. Identify and
deactivate the root detection mechanisms, one at a time. If you're performing a
black box resilience assessment, disabling the root detection mechanisms is your
first step.
To bypass these checks, you can use several techniques, most of which were
introduced in the "Reverse Engineering and Tampering" chapter:
Effectiveness Assessment
Check for root detection mechanisms, including the following criteria:
Multiple detection methods are scattered throughout the app (as opposed to
putting everything into a single method).
The root detection mechanisms operate on multiple API layers (Java APIs,
native library functions, assembler/system calls).
The mechanisms are somehow original (they're not copied and pasted from
StackOverflow or other sources).
Develop bypass methods for the root detection mechanisms and answer the
following questions:
Testing Anti-Debugging
Overview
Debugging is a highly effective way to analyze run-time app behavior. It allows
the reverse engineer to step through the code, stop app execution at arbitrary
points, inspect the state of variables, read and modify memory, and a lot more.
Anti-JDWP-Debugging Examples
In the chapter "Reverse Engineering and Tampering," we talked about JDWP, the
protocol used for communication between the debugger and the Java Virtual
Machine. We showed that it is easy to enable debugging for any app by patching
its manifest file, and changing the ro.debuggable system property which
enables debugging for all apps. Let's look at a few things developers do to detect
and disable JDWP debuggers.
A simple check queries the ApplicationInfo.FLAG_DEBUGGABLE flag to determine
whether the app has been made debuggable:

```java
public static boolean isDebuggable(Context context){
    return ((context.getApplicationContext().getApplicationInfo().flags &
             ApplicationInfo.FLAG_DEBUGGABLE) != 0);
}
```
isDebuggerConnected
The Android Debug system class offers a static method to determine whether a
debugger is connected. The method returns a boolean value.
Timer Checks

Debug.threadCpuTimeNanos indicates the amount of time that the current thread
has been executing code. Because debugging slows down process execution, you
can use the difference in execution time to guess whether a debugger is attached.
In Dalvik, the global virtual machine state is accessible via the DvmGlobals
structure. The global variable gDvm holds a pointer to this structure.
DvmGlobals contains various variables and pointers that are important for the
functioning of the virtual machine, some of which are also relevant for debugging:

```c
struct DvmGlobals {
    /*
     * Some options that could be worth tampering with :)
     */

    Thread* threadList;

    bool nativeDebuggerActive;
    bool debuggerConnected;      /* debugger or DDMS is connected */
    bool debuggerActive;         /* debugger is making requests */
    JdwpState* jdwpState;
};
```
For example, setting the gDvm.methDalvikDdmcServer_dispatch function
pointer to NULL crashes the JDWP thread:
You can disable debugging by using similar techniques in ART even though the
gDvm variable is not available. The ART runtime exports some of the vtables of
JDWP-related classes as global symbols (in C++, vtables are tables that hold
pointers to class methods). This includes the vtables of the classes
JdwpSocketState and JdwpAdbState , which handle JDWP connections via
network sockets and ADB, respectively. You can manipulate the behavior of the
debugging runtime by overwriting the method pointers in the associated vtables.
One way to overwrite the method pointers is to overwrite the address of the
function jdwpAdbState::ProcessIncoming with the address of
JdwpAdbState::Shutdown . This will cause the debugger to disconnect
immediately.
```cpp
#include <jni.h>
#include <string>
#include <android/log.h>
#include <dlfcn.h>
#include <sys/mman.h>
#include <jdwp/jdwp.h>

#define log(...) __android_log_print(ANDROID_LOG_VERBOSE, "JDWPFun", __VA_ARGS__)

// Vtable layout of JdwpAdbState, as exported by libart.so
struct VT_JdwpAdbState {
    unsigned long x;
    unsigned long y;
    void * JdwpSocketState_destructor;
    void * _JdwpSocketState_destructor;
    void * Accept;
    void * showmanyc;
    void * ShutDown;
    void * ProcessIncoming;
};
```
```cpp
extern "C"
JNIEXPORT void JNICALL
Java_sg_vantagepoint_jdwptest_MainActivity_JDWPfun(JNIEnv *env, jobject /* this */) {

    void* lib = dlopen("libart.so", RTLD_NOW);

    if (lib == NULL) {
        log("Error loading libart.so");
        dlerror();
    } else {
        struct VT_JdwpAdbState *vtable = (struct VT_JdwpAdbState *)
            dlsym(lib, "_ZTVN3art4JDWP12JdwpAdbStateE");

        if (vtable == 0) {
            log("Couldn't resolve symbol '_ZTVN3art4JDWP12JdwpAdbStateE'.\n");
        } else {
            log("Vtable for JdwpAdbState at: %08x\n", vtable);

            // Make the vtable writable, patch the method pointer, then restore
            mprotect(vtable, sizeof(struct VT_JdwpAdbState), PROT_READ | PROT_WRITE);
            vtable->ProcessIncoming = vtable->ShutDown;
            mprotect(vtable, sizeof(struct VT_JdwpAdbState), PROT_READ);
        }
    }
}
```
Anti-Native-Debugging Examples
Most anti-JDWP tricks (with the exception of timer-based checks, which also
work against native debuggers) won't catch classical, ptrace-based debuggers, so
other defenses are necessary. Many "traditional" Linux anti-debugging tricks are
used in this situation.
Checking TracerPid
When the ptrace system call is used to attach to a process, the "TracerPid"
field in the status file of the debugged process shows the PID of the attaching
process. The default value of "TracerPid" is 0 (no process attached).
Consequently, finding anything other than 0 in that field is a sign of debugging
or other ptrace shenanigans.
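A minimal native implementation of this check parses /proc/self/status (a sketch; in an app this would be called from JNI and a nonzero result treated as a tracing indicator):

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Returns the TracerPid value from /proc/self/status, or -1 on error.
   A nonzero result means another process is ptrace-attached to us. */
int get_tracer_pid(void) {
    FILE *fp = fopen("/proc/self/status", "r");
    if (fp == NULL)
        return -1;

    char line[256];
    int tracer = -1;
    while (fgets(line, sizeof(line), fp)) {
        if (strncmp(line, "TracerPid:", 10) == 0) {
            tracer = atoi(line + 10);   /* skip the label, parse the PID */
            break;
        }
    }
    fclose(fp);
    return tracer;
}
```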
On Linux, the ptrace system call is used to observe and control the execution
of a process (the "tracee") and to examine and change that process' memory and
registers. ptrace is the primary way to implement breakpoint debugging and
system call tracing. Many anti-debugging tricks include ptrace , often
exploiting the fact that only one debugger at a time can attach to a process.
You can prevent debugging of a process by forking a child process and attaching
it to the parent as a debugger via code similar to the following simple example:

```c
void fork_and_attach()
{
    int pid = fork();

    if (pid == 0)
    {
        int ppid = getppid();

        if (ptrace(PTRACE_ATTACH, ppid, NULL, NULL) == 0)
        {
            waitpid(ppid, NULL, 0);

            /* Continue the parent process */
            ptrace(PTRACE_CONT, ppid, NULL, NULL);
        }
    }
}
```
Attempting to attach to the parent process with gdbserver fails with an error:
You can easily bypass this failure, however, by killing the child and "freeing" the
parent from being traced. You'll therefore usually find more elaborate schemes,
involving multiple processes and threads as well as some form of monitoring to
impede tampering. Common methods include
Let's look at a simple improvement for the method above. After the initial
fork , we launch in the parent an extra thread that continually monitors the
child's status. Depending on whether the app has been built in debug or release
mode (which is indicated by the android:debuggable flag in the manifest), the
child process should do one of the following things:
In release mode: The call to ptrace fails and the child crashes immediately
with a segmentation fault (exit code 11).
In debug mode: The call to ptrace works and the child should run
indefinitely. Consequently, a call to waitpid(child_pid) should never
return. If it does, something is fishy and we would kill the whole process
group.
The following is the complete code for implementing this improvement with a
JNI function:

```c
#include <jni.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <pthread.h>

static int child_pid;

void *monitor_pid(void *arg) {
    int status;

    waitpid(child_pid, &status, 0);

    /* Child status should never change. If we get here,
       the child has died, so take the parent down with it. */
    _exit(0); // Commit seppuku
}

void anti_debug() {
    child_pid = fork();

    if (child_pid == 0)
    {
        int ppid = getppid();
        int status;

        if (ptrace(PTRACE_ATTACH, ppid, NULL, NULL) == 0)
        {
            waitpid(ppid, &status, 0);
            ptrace(PTRACE_CONT, ppid, NULL, NULL);

            while (waitpid(ppid, &status, 0)) {
                if (WIFSTOPPED(status)) {
                    ptrace(PTRACE_CONT, ppid, NULL, NULL);
                } else {
                    // Process has exited
                    _exit(0);
                }
            }
        }
    } else {
        pthread_t t;

        /* Start the monitoring thread */
        pthread_create(&t, NULL, monitor_pid, (void *)NULL);
    }
}

JNIEXPORT void JNICALL
Java_sg_vantagepoint_antidebug_MainActivity_antidebug(JNIEnv *env, jobject instance) {
    anti_debug();
}
```
Again, we pack this into an Android app to see if it works. Just as before, two
processes show up when we run the app's debug build.
However, if we terminate the child process at this point, the parent exits as well:
To bypass this, we must modify the app's behavior slightly (the easiest ways to
do so are patching the call to _exit with NOPs and hooking the function
_exit in libc.so ). At this point, we have entered the proverbial "arms race":
When dealing with obfuscated apps, you'll often find that developers purposely
"hide away" data and functionality in native libraries. You'll find an example of
this in level 2 of the "UnCrackable App for Android."
At first glance, the code looks like the prior challenge. A class called
CodeCheck is responsible for verifying the code entered by the user, but the
actual check happens in a native library that is loaded when the class is
initialized:

```java
package sg.vantagepoint.uncrackable2;

public class CodeCheck {
    public CodeCheck() {
        super();
    }

    private native boolean bar(byte[] arg2);

    static {
        System.loadLibrary("foo");
    }
}
```

Please see the different proposed solutions for the Android Crackme Level 2 on
GitHub.
Effectiveness Assessment
Check for anti-debugging mechanisms, including the following criteria:
Overview

There are two file-integrity-related topics:

1. Application-source integrity checks: the app verifies, at run time, files in
its own package, such as:
AndroidManifest.xml,
class files (*.dex),
native libraries (*.so).

2. File storage integrity checks: the app verifies the integrity of the data it
stores, e.g., via an HMAC or signature.
One technique for the first category is comparing the CRC checksum of
classes.dex, obtained from its ZIP entry in the APK, with a known-good value
stored elsewhere:

```java
// ze is the ZipEntry for classes.dex; dexCrc holds the expected checksum
if ( ze.getCrc() != dexCrc ) {
    // dex has been modified
    modified = true;
} else {
    // dex not tampered with
    modified = false;
}
```
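ZIP entries store a standard CRC-32 (the value ZipEntry.getCrc() returns), so the same checksum can be recomputed in native code. A from-scratch sketch of the algorithm:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected polynomial 0xEDB88320), the checksum ZIP
   stores per entry, so the result is directly comparable to the value
   returned by java.util.zip.ZipEntry.getCrc(). */
uint32_t crc32_buf(const uint8_t *buf, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}
```

crc32_buf over the raw classes.dex bytes can then be compared with an expected value, e.g. one stored in a resource at build time.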
When providing integrity on the storage itself, you can either create an HMAC
over a given key-value pair (as for the Android SharedPreferences) or create
an HMAC over a complete file that's provided by the file system.

When using an HMAC, you can use a Bouncy Castle implementation or the
AndroidKeyStore to HMAC the given content; when generating the HMAC with the
Android Keystore, it is best to do so only on Android 6.0 (API level 23) and
higher. A Bouncy Castle-based implementation first registers the provider
(Security.addProvider(new BouncyCastleProvider())) and verifies a stored HMAC
by recomputing it over the content and checking that the byte-wise difference
between the two MACs is zero (return diff == 0).
Another way to provide integrity is to sign the byte array you obtained and add
the signature to the original byte array.
1. Retrieve the data from the device, as described in the section on device
binding.
2. Alter the retrieved data and then put it back into storage.
Effectiveness Assessment
For application-source integrity checks
Run the app in an unmodified state and make sure that everything works. Apply
simple patches to classes.dex and any .so libraries in the app package.
Re-package and re-sign the app as described in the "Basic Security Testing" chapter,
then run the app. The app should detect the modification and respond in some
way. At the very least, the app should alert the user and/or terminate. Work on
bypassing the defenses and answer the following questions:
Overview
Reverse engineers use a lot of tools, frameworks, and apps, many of which
you've encountered in this guide. Consequently, the presence of such tools on the
device may indicate that the user is attempting to reverse engineer the app. Users
increase their risk by installing such tools.
Detection Methods
You can detect popular reverse engineering tools that have been installed in an
unmodified form by looking for associated application packages, files,
processes, or other tool-specific modifications and artifacts. In the following
examples, we'll demonstrate different ways to detect the Frida instrumentation
framework, which is used extensively in this guide. Other tools, such as
Substrate and Xposed, can be detected similarly. Note that
DBI/injection/hooking tools can often be detected implicitly, through run time
integrity checks, which are discussed below.
The most obvious method is checking for the frida-server process in the list of
running processes:

```java
public boolean checkRunningProcesses() {
  boolean returnValue = false;
  // "manager" is an android.app.ActivityManager instance
  List<RunningServiceInfo> list = manager.getRunningServices(300);
  if(list != null){
    String tempName;
    for(int i=0;i<list.size();++i){
      tempName = list.get(i).process;
      if(tempName.contains("fridaserver")) {
        returnValue = true;
      }
    }
  }
  return returnValue;
}
```
This works if Frida is run in its default configuration. Perhaps it's also enough to
stump some script kiddies during their first steps in reverse engineering. It can,
however, be easily bypassed by renaming the frida-server binary, so we should
find a better method.
frida-server binds to TCP port 27047 by default, so checking whether this port is
open is another method of detecting the daemon. The following native code
implements this method:
```c
boolean is_frida_server_listening() {
    struct sockaddr_in sa;

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(27047);
    inet_aton("127.0.0.1", &(sa.sin_addr));

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    boolean detected = (connect(sock, (struct sockaddr*)&sa, sizeof(sa)) != -1);
    close(sock);
    return detected;
}
```
Again, this code detects frida-server in its default mode, but the listening port
can be changed via a command line argument, so bypassing this check is a little
too trivial. The situation can be improved with an nmap -sV-style service probe:
frida-server uses the D-Bus protocol to communicate, so we send a D-Bus AUTH
message to every open port and check for an answer, hoping that frida-server
will reveal itself.
```c
/*
 * Mini-portscan to detect frida-server on any local port.
 * (sa is a struct sockaddr_in pre-initialized for 127.0.0.1;
 *  res is a 7-byte response buffer.)
 */
for (i = 0; i <= 65535; i++) {
    sock = socket(AF_INET, SOCK_STREAM, 0);
    sa.sin_port = htons(i);

    if (connect(sock, (struct sockaddr*)&sa, sizeof sa) != -1) {
        __android_log_print(ANDROID_LOG_VERBOSE, APPNAME,
            "FRIDA DETECTION [1]: Open Port: %d", i);
        memset(res, 0, 7);

        /* Send a D-Bus AUTH message; frida-server answers "REJECT". */
        send(sock, "\x00", 1, 0);
        usleep(100);
        send(sock, "AUTH\r\n", 6, 0);
        usleep(100);

        if (recv(sock, res, 6, MSG_DONTWAIT) != -1 && strcmp(res, "REJECT") == 0) {
            /* Frida server detected. Do something… */
        }
    }
    close(sock);
}
```
We now have a fairly robust method of detecting frida-server , but there are
still some glaring issues. Most importantly, Frida offers alternative modes of
operation that don't require frida-server! How do we detect those?
The common theme for all Frida's modes is code injection, so we can expect to
have Frida libraries mapped into memory whenever Frida is used. The
straightforward way to detect these libraries is to walk through the list of loaded
libraries and check for suspicious ones:
```c
char line[512];
FILE* fp;

fp = fopen("/proc/self/maps", "r");

if (fp) {
    while (fgets(line, 512, fp)) {
        if (strstr(line, "frida")) {
            /* Evil library is loaded. Do something… */
        }
    }
    fclose(fp);
} else {
    /* Error opening /proc/self/maps. If this happens,
       something is off. */
}
```
This detects any libraries whose names include "frida." This check works, but
there are some major issues:
void scan() {
if (num_found > 1) {
/* Frida Detected */
}
Note the use of my_openat , etc., instead of the normal libc library functions.
These are custom implementations that do the same things as their Bionic libc
counterparts: they set up the arguments for the respective system call and
execute the swi instruction (see the following code). Using these functions
eliminates the reliance on public APIs, thus making them less susceptible to the
typical libc hooks. The complete implementation is in syscall.S . The
following is an assembler implementation of my_openat .
```
#include "bionic_asm.h"

.text
    .globl my_openat
    .type my_openat,function
my_openat:
    .cfi_startproc
    mov ip, r7
    .cfi_register r7, ip
    ldr r7, =__NR_openat
    swi #0
    mov r7, ip
    .cfi_restore r7
    cmn r0, #(4095 + 1)
    bxls lr
    neg r0, r0
    b __set_errno_internal
    .cfi_endproc
```
Effectiveness Assessment
Launch the app with various apps and frameworks installed. Include at least the
following:
The app should respond in some way to the presence of each of those tools. At
the very least, the app should alert the user and/or terminate the app. Work on
bypassing the detection of the reverse engineering tools and answer the
following questions:
Can the mechanisms be bypassed trivially (e.g., by hooking a single API
function)?
How difficult is identifying the anti-debugging code via static and dynamic
analysis?
Did you need to write custom code to disable the defenses? How much time
did you need?
What is your assessment of the difficulty of bypassing the mechanisms?
Overview
In the context of anti-reversing, the goal of emulator detection is to increase the
difficulty of running the app on an emulated device, which impedes some tools
and techniques reverse engineers like to use. This increased difficulty forces the
reverse engineer to defeat the emulator checks or utilize the physical device,
thereby barring the access required for large-scale device analysis.
You can edit the file build.prop on a rooted Android device or modify it while
compiling AOSP from source. Both techniques will allow you to bypass the
static string checks above.
The next set of static indicators utilizes the TelephonyManager API. All Android
emulators have fixed values that this API can query.
| API | Value | Meaning |
| --- | --- | --- |
| TelephonyManager.getDeviceId() | 0's | emulator |
| TelephonyManager.getLine1Number() | 155552155 | emulator |
| TelephonyManager.getNetworkCountryIso() | us | possibly emulator |
| TelephonyManager.getNetworkType() | 3 | possibly emulator |
| TelephonyManager.getNetworkOperator().substring(0,3) | 310 | possibly emulator |
| TelephonyManager.getNetworkOperator().substring(3) | 260 | possibly emulator |
| TelephonyManager.getPhoneType() | 1 | possibly emulator |
| TelephonyManager.getSimCountryIso() | us | possibly emulator |
| TelephonyManager.getSimSerialNumber() | 89014103211118510720 | emulator |
| TelephonyManager.getSubscriberId() | 310260000000000 | emulator |
| TelephonyManager.getVoiceMailNumber() | 15552175049 | emulator |
Keep in mind that a hooking framework, such as Xposed or Frida, can hook this
API to provide false data.
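Telephony values are easy for a hooking framework to fake, so file system artifacts are often checked as well. The following native sketch probes for paths commonly cited as QEMU emulator leftovers (the path list is an illustrative assumption, not exhaustive):

```c
#include <unistd.h>
#include <stddef.h>

/* Returns the number of well-known QEMU emulator artifacts present on the
   device. Real-world checks use longer, often obfuscated path lists. */
int qemu_artifacts_found(void) {
    static const char *paths[] = {
        "/dev/socket/qemud",
        "/dev/qemu_pipe",
        "/system/lib/libc_malloc_debug_qemu.so",
        "/sys/qemu_trace",
    };
    int found = 0;
    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
        if (access(paths[i], F_OK) == 0)
            found++;   /* emulator artifact present */
    }
    return found;
}
```

A nonzero result on a physical device would be unexpected; like the Telephony checks, this can be defeated by hooking access() or patching the emulator image.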
Effectiveness Assessment
Install and run the app in the emulator. The app should detect that it is being
executed in an emulator and terminate or refuse to execute the functionality
that's meant to be protected.
How difficult is identifying the emulator detection code via static and
dynamic analysis?
Can the detection mechanisms be bypassed trivially (e.g., by hooking a
single API function)?
Did you need to write custom code to disable the anti-emulation feature(s)?
How much time did you need?
What is your assessment of the difficulty of bypassing the mechanisms?
Overview
Controls in this category verify the integrity of the app's memory space to defend
the app against memory patches applied during run time. Such patches include
unwanted changes to binary code, byte-code, function pointer tables, and
important data structures, as well as rogue code loaded into process memory.
Integrity can be verified by
```java
try {
    throw new Exception();
}
catch(Exception e) {
    int zygoteInitCallCount = 0;
    for(StackTraceElement stackTraceElement : e.getStackTrace()) {
        if(stackTraceElement.getClassName().equals("com.android.internal.os.ZygoteInit")) {
            zygoteInitCallCount++;
            if(zygoteInitCallCount == 2) {
                Log.wtf("HookDetection", "Substrate is active on the device.");
            }
        }
        if(stackTraceElement.getClassName().equals("com.saurik.substrate.MS$2") &&
                stackTraceElement.getMethodName().equals("invoked")) {
            Log.wtf("HookDetection", "A method on the stack trace has been hooked using Substrate.");
        }
        if(stackTraceElement.getClassName().equals("de.robv.android.xposed.XposedBridge") &&
                stackTraceElement.getMethodName().equals("main")) {
            Log.wtf("HookDetection", "Xposed is active on the device.");
        }
        if(stackTraceElement.getClassName().equals("de.robv.android.xposed.XposedBridge") &&
                stackTraceElement.getMethodName().equals("handleHookedMethod")) {
            Log.wtf("HookDetection", "A method on the stack trace has been hooked using Xposed.");
        }
    }
}
```
The Global Offset Table (GOT) is used to resolve library functions. During run
time, the dynamic linker patches this table with the absolute addresses of global
symbols. GOT hooks overwrite the stored function addresses and redirect
legitimate function calls to adversary-controlled code. This type of hook can be
detected by enumerating the process memory map and verifying that each GOT
entry points to a legitimately loaded library.
In contrast to GNU ld , which resolves symbol addresses only after they are
needed for the first time (lazy binding), the Android linker resolves all external
functions and writes the respective GOT entries immediately after a library is
loaded (immediate binding). You can therefore expect all GOT entries to point to
valid memory locations in the code sections of their respective libraries during
run time. GOT hook detection methods usually walk the GOT and verify this.
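A building block for such a check is mapping an address back to the module that contains it, by parsing /proc/self/maps (a simplified sketch; a real GOT walk would apply this to every relocated entry and require each to resolve into the expected library):

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if addr lies inside an executable mapping whose backing
   path contains lib_substr -- e.g. a GOT entry resolved to libc should
   map back into libc.so. Returns 0 otherwise. */
int addr_in_library(unsigned long addr, const char *lib_substr) {
    FILE *fp = fopen("/proc/self/maps", "r");
    if (fp == NULL)
        return 0;

    char line[512];
    int found = 0;
    while (fgets(line, sizeof(line), fp)) {
        unsigned long start, end;
        char perms[8];
        char path[256] = "";
        /* maps format: start-end perms offset dev inode [path] */
        if (sscanf(line, "%lx-%lx %7s %*s %*s %*s %255s",
                   &start, &end, perms, path) >= 3) {
            if (addr >= start && addr < end &&
                strchr(perms, 'x') && strstr(path, lib_substr)) {
                found = 1;
                break;
            }
        }
    }
    fclose(fp);
    return found;
}
```

A GOT-integrity check would flag any function pointer that resolves into an anonymous or unexpected mapping instead of the library that should provide the symbol.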
Overview
The goal of device binding is to impede an attacker who tries to both copy an
app and its state from device A to device B and continue executing the app on
device B. After device A has been determined trustworthy, it may have more
privileges than device B. These differential privileges should not change when
an app is copied from device A to device B.
Before we describe the usable identifiers, let's quickly discuss how they can be
used for binding. There are three methods that allow device binding:
Static Analysis
In the past, Android developers often relied on the
Settings.Secure.ANDROID_ID (SSAID) and MAC addresses. However, the
behavior of the SSAID has changed since Android O, and the behavior of MAC
addresses changed with the release of Android N. In addition, there are new
recommendations for identifiers in Google's SDK documentation.
There are a few key terms you can look for when the source code is available:

persist.service.bdroid.bdadd

```java
TelephonyManager tm = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
String IMEI = tm.getDeviceId();
```
Apps for Android O will get the result "UNKNOWN" when they request
Build.Serial .
Dynamic Analysis
There are several ways to test the application binding:
Dynamic Analysis with an Emulator
Google Instance ID
Google Instance ID uses tokens to authenticate the running application instance.
The moment the application is reset, uninstalled, etc., the Instance ID is reset,
meaning that you'll have a new "instance" of the app. Go through the following
steps for Instance ID:
```groovy
dependencies {
    compile 'com.google.android.gms:play-services-gcm:10.2.4'
}
```
4. Generate a token.
5. Make sure that you can handle callbacks from Instance ID in case of
invalid device information, security issues, etc. This requires extending
InstanceIDListenerService and handling the callbacks there:
When you submit the Instance ID (iid) and the tokens to your server, you can use
that server with the Instance ID Cloud Service to validate the tokens and the iid.
When the iid or token seems invalid, you can trigger a safeguard procedure (e.g.,
informing the server of possible copying or security issues or removing the data
from the app and asking for a re-registration).
Google recommends not using these identifiers unless the application is at high
risk.
For devices running Android version O and later, you can request the
READ_PHONE_STATE permission in the manifest and then retrieve the hardware
serial number via Build.getSerial:

<uses-permission android:name="android.permission.READ_PHONE_STATE"/>

```java
String serial = android.os.Build.getSerial();
```
SSAID

Google recommends not using these identifiers unless the application is at high
risk. You can retrieve the SSAID as follows:

```java
String SSAID = Settings.Secure.getString(context.getContentResolver(), Settings.Secure.ANDROID_ID);
```
The behavior of the SSAID has changed since Android O, and the behavior of
MAC addresses changed with the release of Android N. In addition, there are
new recommendations for identifiers in Google's SDK documentation. Because
of this new behavior, we recommend that developers not rely on the SSAID
alone. The identifier has become less stable. For example, the SSAID may
change after a factory reset or when the app is reinstalled after the upgrade to
Android O. There are devices that have the same ANDROID_ID and/or have an
ANDROID_ID that can be overridden.
Effectiveness Assessment
To make sure that the identifiers can be used, check AndroidManifest.xml for
usage of the IMEI and Build.Serial . The manifest should contain the
permission <uses-permission
android:name="android.permission.READ_PHONE_STATE"/> .
Using an Emulator
Testing Obfuscation
Overview
Obfuscation is the process of transforming code and data to make it more
difficult to comprehend. It is an integral part of every software protection
scheme. What's important to understand is that obfuscation isn't something that
can be simply turned on or off. Programs can be made incomprehensible, in
whole or in part, in many ways and to different degrees.
In this test case, we describe a few basic obfuscation techniques that are
commonly used on Android.
Effectiveness Assessment
Attempt to decompile the byte-code, disassemble any included library files, and
perform static analysis. At the very least, the app's core functionality (i.e., the
functionality meant to be obfuscated) shouldn't be easily discerned. Verify that
References
OWASP MASVS
V8.3: "The app detects, and responds to, tampering with executable files
and critical data within its own sandbox."
V8.4: "The app detects, and responds to, the presence of widely used
reverse engineering tools and frameworks on the device."
V8.5: "The app detects, and responds to, being run in an emulator."
V8.6: "The app detects, and responds to, tampering the code and data in its
own memory space."
V8.9: "All executable files and libraries belonging to the app are either
encrypted on the file level and/or important code and data segments inside
the executables are encrypted or packed. Trivial static analysis doesn't
reveal important code or data."
V8.10: "Obfuscation is applied to programmatic defenses, which in turn
impede de-obfuscation via dynamic analysis."
V8.11: "The app implements a 'device binding' functionality using a device
fingerprint derived from multiple properties unique to the device."
V8.13: "If the goal of obfuscation is to protect sensitive computations, an
obfuscation scheme is used that's both appropriate for the particular task
and robust against manual and automated de-obfuscation methods,
considering currently published research. The effectiveness of the
obfuscation scheme must be verified through manual testing. Note that
hardware-based isolation features are preferred over obfuscation whenever
possible."
Tools
frida - https://www.frida.re/
ADB & DDMS
iOS Platform Overview
iOS is a mobile operating system that powers Apple mobile devices, including
the iPhone, iPad, and iPod Touch. It is also the basis for Apple tvOS, which
inherits many functionalities from iOS.
Like the Apple desktop operating system macOS (formerly OS X), iOS is based
on Darwin, an open source Unix operating system developed by Apple. Darwin's
kernel is XNU ("X is Not Unix"), a hybrid kernel that combines components of
the Mach and FreeBSD kernels.
However, iOS apps run in a more restricted environment than their desktop
counterparts do. iOS apps are isolated from each other at the file system level
and are significantly limited in terms of system API access.
To protect users from malicious applications, Apple restricts and controls access
to the apps that are allowed to run on iOS devices. The Apple App Store is the
only official application distribution platform, where developers can offer their
apps and consumers can buy, download, and install them. This distribution style
differs from Android, which supports several app stores and sideloading
(installing an app on a device without using the official app store).
iOS apps are isolated from each other via the Apple sandbox (historically called
Seatbelt), a mandatory access control (MAC) mechanism describing the
resources an app can and can't access. Compared to Android's extensive Binder
IPC facilities, iOS offers very few IPC options, minimizing the potential attack
surface.
In spite of the numerous strengths of iOS, iOS app developers still need to worry
about security. Data protection, Keychain, Touch ID authentication, and network
security still leave a large margin for errors. In the following chapters, we
describe iOS security architecture, explain a basic security testing methodology,
and provide reverse engineering how-tos.
Hardware Security
Secure Boot
Code Signing
Sandbox
Encryption and Data Protection
General Exploit Mitigations
Hardware Security
The iOS security architecture makes good use of hardware-based security
features that enhance overall security. Each iOS device comes with two
built-in Advanced Encryption Standard (AES) 256-bit keys – GID and UID –
that are fused and compiled into the application processor and Secure Enclave
during manufacturing. There's no direct way to read these keys with software or
debugging interfaces such as JTAG. Encryption and decryption operations are
performed by hardware AES crypto-engines that have exclusive access to these
keys.
The GID is a value shared by all processors in a class of devices. It is used to
prevent tampering with firmware files and for other cryptographic tasks not
directly related to the user's private data. UIDs, which are unique to each
device, are used to
protect the key hierarchy that's used for device-level file system encryption.
Because UIDs aren't recorded during manufacturing, not even Apple can restore
the file encryption keys for a particular device.
To allow secure deletion of sensitive data on flash memory, iOS devices include
a feature called Effaceable Storage. This feature provides direct low-level access
to the storage technology, making it possible to securely erase selected blocks.
Secure Boot
When an iOS device is powered on, it reads the initial instructions from the read-
only Boot ROM, which bootstraps the system. The Boot ROM contains
immutable code and the Apple Root CA, which is etched into the silicon die
during the fabrication process, thereby creating the root of trust. Next, the Boot
ROM makes sure that the iBoot bootloader's signature is correct. After the
signature is validated, the iBoot checks the signature of the next boot stage,
which is the iOS kernel. If any of these steps fail, the boot process will terminate
immediately and the device will enter recovery mode and display the "Connect
to iTunes" screen. However, if the Boot ROM fails to load, the device will enter
a special low-level recovery mode called Device Firmware Upgrade (DFU). This
is the last resort for restoring the device to its original state. In this mode, the
device will show no sign of activity; i.e., its screen won't display anything.
This entire process is called the "Secure Boot Chain". Its purpose is ensuring that
the system and its components are written and distributed by Apple. The Secure
Boot chain consists of the kernel, the bootloader, the kernel extension, and the
baseband firmware.
Code Signing
Apple has implemented an elaborate DRM system to make sure that only Apple-
approved code runs on their devices. In other words, you won't be able to run
any code on an iOS device that hasn't been jailbroken unless Apple explicitly
allows it. End users are supposed to install apps through the official Apple app
store only. For this reason (and others), iOS has been compared to a crystal
prison.
Apple has built encryption into the hardware and firmware of its iOS devices
since the release of the iPhone 3GS. Every device has a dedicated hardware-
based crypto engine based on 256-bit AES, working in conjunction with a SHA-1
cryptographic hash function. In addition, there's a unique identifier (UID) built
into each device's hardware with an AES 256-bit key fused into the application
processor. This UID is unique and not recorded elsewhere. At the time of
writing, neither software nor firmware can directly read the UID. Because the
key is burned into the silicon chip, it can't be tampered with or bypassed. Only
the crypto engine can access it.
When data protection is enabled, each data file is associated with a specific
class. Each class supports a different level of accessibility and protects data on
the basis of when the data needs to be accessed. The encryption and decryption
operations associated with each class are based on multiple key mechanisms that
utilize the device's UID and passcode, a class key, a file system key, and a per-
file key. The per-file key is used to encrypt the file's contents. The class key is
wrapped around the per-file key and stored in the file's metadata. The file system
key is used to encrypt the metadata. The UID and passcode protect the class key.
This operation is invisible to users. To enable data protection, the passcode must
be used when accessing the device. The passcode unlocks the device. Combined
with the UID, the passcode also creates iOS encryption keys that are more
resistant to hacking and brute-force attacks. Enabling data protection is the main
reason for users to use passcodes on their devices.
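To make the key-tangling idea concrete, here is a purely illustrative Python sketch. Apple's actual derivation runs inside the hardware AES engine using a proprietary algorithm; the PBKDF2 call below is a stand-in, not the real mechanism.

```python
import hashlib

def derive_passcode_key(passcode: bytes, device_uid: bytes) -> bytes:
    # Illustrative stand-in: the passcode is "tangled" with the device UID,
    # so brute-forcing must happen on the device itself. Apple's real
    # algorithm is hardware-bound and is NOT PBKDF2.
    return hashlib.pbkdf2_hmac("sha256", passcode, device_uid, 100_000)

key = derive_passcode_key(b"123456", b"\x00" * 32)  # dummy UID for the sketch
print(key.hex())
```

Because the UID never leaves the silicon, an attacker who extracts the encrypted file system still can't run this derivation off-device.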
Sandbox
The appsandbox is an iOS access control technology. It is enforced at the kernel
level. Its purpose is limiting system and user data damage that may occur when
an app is compromised.
Sandboxing has been a core security feature since the first release of iOS. All
third-party apps run under the same user ( mobile ), and only a few system
applications and services run as root . Regular iOS apps are confined to a
container that restricts access to the app's own files and a very limited number of
system APIs. Access to all resources (such as files, network sockets, IPCs, and
shared memory) is controlled by the sandbox. These restrictions work as
follows [#levin]: the app process is restricted to its own container directory, and
writable memory pages can't be made executable, preventing processes from
running dynamically generated code.
General Exploit Mitigations
ASLR randomizes the memory location of the program's executable file, data,
heap, and stack every time the program is executed. Because the shared libraries
must be static to be accessed by multiple processes, the addresses of shared
libraries are randomized every time the OS boots instead of every time the
program is invoked. This makes specific function and library memory addresses
hard to predict, thereby preventing attacks such as the return-to-libc attack,
which involves the memory addresses of basic libc functions.
Apps on iOS
iOS apps are distributed in IPA (iOS App Store Package) archives. The IPA file
is a ZIP-compressed archive that contains all the code and resources required to
execute the app.
IPA files have a built-in directory structure. The example below shows this
structure at a high level:
/Payload/ folder contains all the application data. We will come back to
the contents of this folder in more detail.
/Payload/Application.app contains the application data itself (ARM-
compiled code and resources) and the application executable.
/iTunesArtwork is a 512x512 pixel PNG image used as the application's
icon.
/iTunesMetadata.plist contains various bits of information, including
the developer's name and ID, the bundle identifier, copyright information,
genre, the name of the app, release date, purchase date, etc.
/WatchKitSupport/WK is an example of an extension bundle. This specific
bundle contains the extension delegate and the controllers for managing the
interfaces and responding to user interactions on an Apple watch.
A language.lproj folder exists for each language that the application supports. It
contains a storyboard and strings file.
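Since an IPA is just a ZIP archive, you can triage its layout without any iOS tooling. The sketch below uses Python's zipfile module; the file name is a placeholder for an IPA you've obtained.

```python
import os
import zipfile

def list_ipa_contents(ipa_path):
    """Return the entry names of an IPA archive (IPAs are ZIP files)."""
    with zipfile.ZipFile(ipa_path) as ipa:
        return ipa.namelist()

# "App_name.ipa" is a placeholder; point this at a real IPA.
if os.path.exists("App_name.ipa"):
    for name in list_ipa_contents("App_name.ipa"):
        print(name)
```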
Earlier iOS versions stored each application in a folder named with a UUID
(Universal Unique Identifier), a 128-bit number. Static bundle and
application data folders are now stored in separate locations. These folders
contain information that must be examined closely during application security
assessments.
/var/mobile/Containers/Bundle/Application/[UUID]/Application.app
contains the application bundle itself.
/var/mobile/Containers/Data/Application/[UUID]/Documents contains
all the user-generated data. The application end user initiates the creation of
this data.
/var/mobile/Containers/Data/Application/[UUID]/Library contains all
files that aren't user-specific, such as caches, preferences, and cookies.
On the iOS device, the actual installation process is then handled by the installd
daemon, which will unpack and install the application. To integrate app services
or be installed on an iOS device, all applications must be signed with a
certificate issued by Apple. This means that the application can be installed only
after successful code signature verification. On a jailbroken phone, however, you
can circumvent this security feature with AppSync, a package available in the
Cydia store. Cydia is an alternative app store. It contains numerous useful
applications that leverage jailbreak-provided root privileges to execute advanced
functionality. AppSync is a tweak that patches installd, allowing the installation
of fake-signed IPA packages.
The IPA can also be installed directly at the command line with ipainstaller.
After you copy the IPA to the device using, for example, scp (secure copy), you
can execute the ipainstaller with the IPA's filename:
$ ipainstaller App_name.ipa
App Permissions
In contrast to Android apps, iOS apps don't have pre-assigned permissions.
Instead, the user is asked to grant permission during run time, when the app
attempts to use a sensitive API for the first time. Apps that have been granted
permissions are listed in the Settings > Privacy menu, allowing the user to
modify the app-specific setting. Apple calls this permission concept privacy
controls.
iOS developers can't set requested permissions directly—they indirectly request
them with sensitive APIs. For example, when accessing a user's contacts, any
call to CNContactStore blocks the app while the user is being asked to grant or
deny access. Starting with iOS 10.0, apps must include usage description keys
for the types of data they need to access (e.g., NSContactsUsageDescription).
Contacts
Microphone
Calendars
Camera
Reminders
HomeKit
Photos
Health
Motion activity and fitness
Speech recognition
Location Services
Bluetooth sharing
Media Library
Social media accounts
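An app that wants to access any of the data types above must declare a matching usage description key in its Info.plist. The description string below is a hypothetical example; iOS shows it to the user in the permission prompt:

```xml
<key>NSContactsUsageDescription</key>
<string>This app reads your contacts to suggest people you may know.</string>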
Setting up a Testing Environment for
iOS Apps
In the previous chapter, we provided an overview of the iOS platform and
described the structure of iOS apps. In this chapter, we'll introduce basic
processes and techniques you can use to test iOS apps for security flaws. These
basic processes are the foundation for the test cases outlined in the following
chapters.
Unlike the Android emulator, which fully emulates the hardware of an actual
Android device, the iOS SDK simulator offers a higher-level simulation of an
iOS device. Most importantly, simulator binaries are compiled to x86 code
instead of ARM code. Apps compiled for a real device don't run, making the
simulator useless for black box analysis and reverse engineering.
Although you can use a Linux or Windows machine for testing, you'll find that
many tasks are difficult or impossible on these platforms. In addition, the Xcode
development environment and the iOS SDK are only available for macOS. This
means that you'll definitely want to work on a Mac for source code analysis and
debugging (it also makes black box testing easier).
Jailbreaking an iOS Device
You should have a jailbroken iPhone or iPad for running tests. These devices
allow root access and tool installation, making the security testing process more
straightforward. If you don't have access to a jailbroken device, you can apply
the workarounds described later in this chapter, but be prepared for a difficult
experience.
Benefits of Jailbreaking
End users often jailbreak their devices to tweak the iOS system's appearance,
add new features, and install third-party apps from unofficial app stores. For a
security tester, however, jailbreaking an iOS device has even more benefits.
They include, but aren't limited to, the following:
Jailbreak Types
There are tethered, semi-tethered, semi-untethered, and untethered jailbreaks.
Tethered jailbreaks don't persist through reboots, so re-applying jailbreaks
requires the device to be connected (tethered) to a computer during every
reboot. The device may not reboot at all if the computer is not connected.
Semi-tethered jailbreaks allow the device to reboot on its own, but only
into a non-jailbroken state; a computer is required to boot into the
jailbroken state.
Semi-untethered jailbreaks allow the device to boot on its own, but the
kernel patches for disabling code signing aren't applied automatically. The
user must re-jailbreak the device by starting an app or visiting a website.
Untethered jailbreaks are the most popular choice for end users because
they need to be applied only once, after which the device will be
permanently jailbroken.
If you have a jailbroken device that you use for security testing, keep it as is
unless you're 100% sure that you can re-jailbreak it after upgrading to the latest
iOS version. Consider getting a spare device (which will be updated with every
major iOS release) and waiting for a jailbreak to be released publicly. Apple is
usually quick to release a patch once a jailbreak has been released publicly, so
you have only a couple of days to downgrade to the affected iOS version and
apply the jailbreak.
iOS upgrades are based on a challenge-response process. The device will allow
the OS installation only if the response to the challenge is signed by Apple. This
is what researchers call a "signing window," and it is the reason you can't simply
store the OTA firmware package you downloaded via iTunes and load it onto the
device whenever you want to. During minor iOS upgrades, two versions may
both be signed by Apple. This is the only situation in which you can downgrade
the iOS device. You can check the current signing window and download OTA
firmware from the IPSW Downloads website.
The jailbreak Pangu 1.3.0 is available for 64-bit devices running iOS 9.0. If you
have a device that's running an iOS version for which no jailbreak is available,
you can still jailbreak the device if you downgrade or upgrade to the target
jailbreakable iOS version (via IPSW download and iTunes). However, this may
not be possible if the required iOS version is no longer signed by Apple.
The iOS jailbreak scene evolves so rapidly that providing up-to-date instructions
is difficult. However, we can point you to some sources that are currently
reliable.
Can I Jailbreak?
The iPhone Wiki
Redmond Pie
Reddit Jailbreak
Note that OWASP and the MSTG won't be responsible if you end up
bricking your iOS device!
Once you've jailbroken your iOS device and Cydia has been installed (as shown
in the screenshot above), proceed as follows:
Cydia allows you to manage repositories. One of the most popular repositories is
BigBoss. If your Cydia installation isn't pre-configured with this repository, you
can add it by navigating to Sources -> Edit, then clicking "Add" in the top left
and entering the following URL:
http://apt.thebigboss.org/repofiles/cydia/
You may also want to add the HackYouriPhone repository to get the AppSync
package:
http://repo.hackyouriphone.org
The following are some useful packages you can install from Cydia to get
started:
BigBoss Recommended Tools: Installs many useful command line tools for
security testing, including standard Unix utilities that are missing from iOS,
such as wget, unrar, less, and the sqlite3 client.
adv-cmds: Advanced command line. Includes finger, fingerd, last, lsvfs,
md, and ps.
IPA Installer Console: Tool for installing IPA application packages from the
command line. Package name is com.autopear.installipa .
Class Dump: A command line tool for examining the Objective-C runtime
information stored in Mach-O files.
Substrate: A platform that makes developing third-party iOS add-ons easier.
cycript: Cycript is an inlining, optimizing, Cycript-to-JavaScript compiler
and immediate-mode console environment that can be injected into running
processes.
AppList: Allows developers to query the list of installed apps and provides
a preference pane based on the list.
PreferenceLoader: A MobileSubstrate-based utility that allows developers
to add entries to the Settings application, similar to the SettingsBundles that
App Store apps use.
AppSync Unified: Allows you to sync and install unsigned iOS
applications.
an SSH client
an interception proxy. In this guide, we'll be using BURP Suite.
Introspy
Frida
IDB
Needle
Static Analysis
The preferred method of statically analyzing iOS apps involves using the
original Xcode project files. Ideally, you will be able to compile and debug the
app to quickly identify any potential issues with the source code.
Black box analysis of iOS apps without access to the original source code
requires reverse engineering. For example, no decompilers are available for iOS
apps, so a deep inspection requires you to read assembly code. We won't go into
too much detail of assembly code in this chapter, but we will revisit the topic in
the chapter "Reverse Engineering and Tampering on iOS."
The static analysis instructions in the following chapters are based on the
assumption that the source code is available.
Don't shy away from using automated scanners for your analysis; they help you
pick low-hanging fruit and allow you to focus on the more interesting aspects of
analysis, such as the business logic. Keep in mind that static analyzers may
produce false positives and false negatives; always review the findings carefully.
Needle
Needle is an all-in-one iOS security assessment framework. The following
section includes the steps necessary to install and use Needle.
Installing Needle
On Linux
# Unix packages
sudo apt-get install python2.7 python2.7-dev sshpass sqlite3 lib32ncurses5-dev

# Python packages
sudo pip install readline paramiko sshtunnel frida mitmproxy biplist

# Download source
git clone https://github.com/mwrlabs/needle.git
On Mac
# Python packages
sudo -H pip install --upgrade --user readline
sudo -H pip install --upgrade --user paramiko
sudo -H pip install --upgrade --user sshtunnel
sudo -H pip install --upgrade --user frida
sudo -H pip install --upgrade --user biplist
# sshpass
brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb

# mitmproxy
wget https://github.com/mitmproxy/mitmproxy/releases/download/v0.17.1/mitmproxy-0.17.1-osx.tar.gz
tar -xvzf mitmproxy-0.17.1-osx.tar.gz
sudo cp mitmproxy-0.17.1-osx/mitm* /usr/local/bin/

# Download source
git clone https://github.com/mwrlabs/needle.git
Cydia
If the setup process is successful, you'll find the NeedleAgent app on the
home screen.
Start the Framework
Start NeedleAgent
$ python needle.py
Needle v1.0 [mwr.to/needle]
[MWR InfoSecurity (@MWRLabs) - Marco Lancini
(@LanciniMarco)]
[needle] >
The tool has the following global options (list them via the show options
command and set them via the set <option> <value> command):
Usbmuxd is a socket daemon that monitors USB iPhone connections. You can
use it to map the mobile device's localhost listening sockets to TCP ports on your
host machine. This allows you to conveniently SSH into your iOS device
without setting up an actual network connection. When usbmuxd detects an
iPhone running in normal mode, it connects to the phone and begins relaying
requests that it receives via /var/run/usbmuxd .
Running iproxy 2222 22 maps port 22 on the iOS device to port 2222 on
localhost. With the following command, you should be able to connect to the
device:
$ ssh -p 2222 root@localhost
root@localhost's password:
iPhone:~ root#
Bundle
Data
The random string in the URI is the application's GUID. Every app installation
has a unique GUID. There is no relationship between an app's Bundle GUID and
its Data GUID.
You can now simply archive the Data directory and pull it from the device with
scp :
(...)
Generic Password
----------------
Service: myApp
Account: key3
Entitlement Group: RUD9L355Y.sg.vantagepoint.example
Label: (null)
Generic Field: (null)
Keychain Data: SmJSWxEs
Generic Password
----------------
Service: myApp
Account: key7
Entitlement Group: RUD9L355Y.sg.vantagepoint.example
Label: (null)
Generic Field: (null)
Keychain Data: WOg1DfuH
Note that this binary is signed with a self-signed certificate that has a "wildcard"
entitlement. The entitlement grants access to all items in the Keychain. If you
are paranoid or have very sensitive private data on your test device, you may
want to build the tool from source and manually sign the appropriate
entitlements into your build; instructions for doing this are available in the
GitHub repository.
Installing Frida
Frida is a runtime instrumentation framework that lets you inject JavaScript
snippets or portions of your own library into native Android and iOS apps. If
you've already read the Android section of this guide, you should be quite
familiar with this tool.
If you haven't already done so, you need to install the Frida Python package on
your host machine:
$ pip install frida
To connect Frida to an iOS app, you need a way to inject the Frida runtime into
that app. This is easy to do on a jailbroken device: just install frida-server
through Cydia. Once it has been installed, the Frida server will automatically run
with root privileges, allowing you to easily inject code into any process.
Start Cydia and add Frida's repository by navigating to Manage -> Sources ->
Edit -> Add and entering https://build.frida.re. You should then be able to find
and install the Frida package.
Connect your device via USB and make sure that Frida works by running the
frida-ps command with the -U flag. This should return the list of processes
running on the device:
$ frida-ps -U
PID Name
--- ----------------
963 Mail
952 Safari
416 BTServer
422 BlueTool
791 CalendarWidget
451 CloudKeychainPro
239 CommCenter
764 ContactsCoreSpot
(...)
We'll demonstrate a few more uses for Frida below.
In the following examples, we'll assume that you are working on a jailbroken
device. If that's not the case, you first need to follow the steps outlined in the
previous section to repackage the Safari app.
Run Safari on the device and make sure the device is connected via USB. Then
start frida-trace, tracing (for example) the NSURLRequest methods:
$ frida-trace -U -m "-[NSURLRequest *]" Safari
Next, navigate to a new website in Safari. You should see traced function calls
on the frida-trace console. Note that the initWithURL: method is called to
initialize a new URL request object.
/* TID 0xc07 */
20313 ms -[NSURLRequest _initWithCFURLRequest:0x1043bca30 ]
20313 ms -[NSURLRequest URL]
(...)
21324 ms -[NSURLRequest initWithURL:0x106388b00 ]
21324 ms | -[NSURLRequest initWithURL:0x106388b00
cachePolicy:0x0 timeoutInterval:0x106388b80
We can look up the declaration of this method on the Apple Developer Website:
- (instancetype)initWithURL:(NSURL *)url;
The method is called with a single argument of type NSURL . According to the
documentation, the NSURL class has a property called absoluteString ,
whose value should be the absolute URL represented by the NSURL object.
We now have all the information we need to write a Frida script that intercepts
the initWithURL: method and prints the URL passed to the method. The full
script is below. Make sure you read the code and inline comments to understand
what's going on.
import sys
import frida

# JavaScript to be injected into the target process
frida_code = """

// Obtain a reference to the initWithURL: method of the NSURLRequest class
var initWithURL = ObjC.classes.NSURLRequest["- initWithURL:"];

// Intercept calls to the method
Interceptor.attach(initWithURL.implementation, {
    onEnter: function (args) {
        // args[2] contains the first Objective-C argument (the NSURL)
        var pool = ObjC.classes.NSAutoreleasePool.alloc().init();
        var NSLog = new NativeFunction(
            Module.findExportByName("Foundation", "NSLog"),
            "void", ["pointer", "..."]);
        NSLog(args[2].absoluteString_());
        pool.release();
    }
});
"""

# Called for messages sent from the injected script
def message_callback(message, data):
    print(message)

process = frida.get_usb_device().attach("Safari")
script = process.create_script(frida_code)
script.on('message', message_callback)
script.load()
sys.stdin.read()
Start Safari on the iOS device. Run the above Python script on your connected
host and open the device log (we'll explain how to open device logs in the
following section). Try opening a new URL in Safari; you should see Frida's
output in the logs.
Of course, this example illustrates only one of the things you can do with Frida.
To unlock the tool's full potential, you should learn to use its JavaScript API.
The documentation section of the Frida website has a tutorial and examples for
using Frida on iOS.
1. Launch Xcode.
2. Connect your device to your host computer.
3. Choose Devices from the window menu.
4. Click on your connected iOS device in the left section of the Devices
window.
5. Reproduce the problem.
6. Click the triangle-in-a-box toggle located in the lower left-hand corner of
the Devices window's right section to view the console log's contents.
To save the console output to a text file, go to the bottom right and click the
circular downward-pointing-arrow icon.
The tweak "SSL Kill Switch 2" is one way to disable certificate pinning. It can
be installed via the Cydia store. It will hook on to all high-level API calls and
bypass certificate pinning.
The Burp Suite app "Mobile Assistant" can also be used to bypass certificate
pinning.
In some cases, certificate pinning is tricky to bypass. Look for the following
when you can access the source code and recompile the app:
If you don't have access to the source, you can try binary patching or runtime
manipulation:
Certificate pinning is a good security practice and should be used for all
applications that handle sensitive information. EFF's Observatory lists the root
and intermediate CAs that major operating systems automatically trust. Please
refer to the map of the roughly 650 organizations that are Certificate Authorities
Mozilla or Microsoft trust (directly or indirectly). Use certificate pinning if you
don't trust at least one of these CAs.
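For reference, a pin is commonly computed as the base64-encoded SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo (SPKI). Assuming you've already extracted the SPKI bytes (for example with OpenSSL), the digest step looks like this:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64-encoded SHA-256 digest of a DER-encoded SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder input; a real pin is computed over the SPKI bytes of the
# certificate you intend to pin.
print(spki_pin(b"placeholder-spki-bytes"))
```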
If you want to get more details about white box testing and typical code patterns,
refer to "iOS Application Security" by David Thiel. It contains descriptions and
code snippets illustrating the most common certificate pinning techniques.
To get more information about testing transport security, please refer to the
section "Testing Network Communication."
Network Monitoring/Sniffing
You can remotely sniff all traffic in real-time on iOS by creating a Remote
Virtual Interface for your iOS device. First make sure you have Wireshark
installed on your macOS machine. The interface is then created with the rvictl
utility that ships with Xcode:
$ rvictl -s <UDID of the device>
Every file stored on the iOS file system is encrypted with its own per-file key,
which is contained in the file metadata. The metadata is encrypted with the file
system key and wrapped with the class key corresponding to the protection class
the app selected when creating the file.
Files can be assigned to one of four different protection classes, which are
explained in more detail in the iOS Security Guide:
All class keys except NSFileProtectionNone are encrypted with a key derived
from the device UID and the user's passcode. As a result, decryption can happen
only on the device itself and requires the correct passcode.
Since iOS 7, the default data protection class is "Protected Until First User
Authentication."
The Keychain
The iOS Keychain can be used to securely store short, sensitive bits of data, such
as encryption keys and session tokens. It is implemented as an SQLite database
that can be accessed through the Keychain APIs only.
On macOS, every user application can create as many Keychains as desired, and
every login account has its own Keychain. The structure of the Keychain on iOS
is different: only one Keychain is available to all apps. Access to the items can
be shared between apps signed by the same developer via the access groups
feature of the attribute kSecAttrAccessGroup . Access to the Keychain is
managed by the securityd daemon, which grants access according to the app's
Keychain-access-groups , application-identifier , and application-
group entitlements.
SecItemAdd
SecItemUpdate
SecItemCopyMatching
SecItemDelete
Data stored in the Keychain is protected via a class structure that is similar to the
class structure used for file encryption. Items added to the Keychain are encoded
as a binary plist and encrypted with a 128-bit AES per-item key in
Galois/Counter Mode (GCM). Note that larger blobs of data aren't meant to be
saved directly in the Keychain; that's what the Data Protection API is for. You
can configure data protection for Keychain items by setting the
kSecAttrAccessible key in the call to SecItemAdd or SecItemUpdate . The
following configurable accessibility values are available:
kSecAttrAccessibleAfterFirstUnlock : The data in the Keychain item
can't be accessed after a restart until the device has been unlocked once by
the user.
kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly : The data in the
Keychain item can't be accessed after a restart until the device has been
unlocked once by the user. Items with this attribute do not migrate to a new
device. Thus, after restoring from a backup of a different device, these
items will not be present.
kSecAttrAccessibleWhenUnlocked : The data in the Keychain item can be
accessed only while the device is unlocked by the user.
kSecAttrAccessibleWhenUnlockedThisDeviceOnly : The data in the
Keychain item can be accessed only while the device is unlocked by the
user. The data won't be included in an iCloud or iTunes backup.
kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly : The data in the
Keychain item can be accessed only while the device is unlocked and a
passcode is set on the device. Items with this attribute never migrate to a
new device.
Keys can also be secured in the Secure Enclave: the Keychain holds a token
only, not the actual key. The key resides in the Secure Enclave.
Starting with iOS 9, you can do ECC-based signing operations in the Secure
Enclave. In that scenario, the private key and the cryptographic operations reside
within the Secure Enclave. See the static analysis section for more info on
creating the ECC keys. iOS 9 supports only 256-bit ECC. Furthermore, you need
to store the public key in the Keychain because it can't be stored in the Secure
Enclave. After the key is created, you can use the kSecAttrKeyType attribute to
indicate the type of algorithm you want to use the key with.
python needle.py
[needle] > use storage/data/keychain_dump
[needle] > run
{
"Creation Time" : "Jan 15, 2018, 10:20:02 GMT",
"Account" : "username",
"Service" : "",
"Access Group" : "ABCD.com.test.passwordmngr-test",
"Protection" : "kSecAttrAccessibleWhenUnlocked",
"Modification Time" : "Jan 15, 2018, 10:28:02 GMT",
"Data" : "testUser",
"AccessControl" : "Not Applicable"
},
{
"Creation Time" : "Jan 15, 2018, 10:20:02 GMT",
"Account" : "password",
"Service" : "",
"Access Group" : "ABCD.com.test.passwordmngr-test",
"Protection" : "kSecAttrAccessibleWhenUnlocked",
"Modification Time" : "Jan 15, 2018, 10:28:02 GMT",
"Data" : "rosebud",
"AccessControl" : "Not Applicable"
}
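Dumps like the one above can be triaged automatically for weak protection classes. The sketch below uses inlined sample entries modeled on the output shown; field names may vary between tool versions.

```python
import json

# Sample entries modeled on the dump above (not real device data).
dump = json.loads("""
[
  {"Account": "password", "Data": "rosebud",
   "Protection": "kSecAttrAccessibleWhenUnlocked"},
  {"Account": "token", "Data": "abc123",
   "Protection": "kSecAttrAccessibleAlways"}
]
""")

# Items readable regardless of lock state deserve a closer look.
weak = [e for e in dump if "Always" in e["Protection"]]
for entry in weak:
    print(entry["Account"], entry["Protection"])
```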
There's no iOS API that developers can use to force wipe data when an
application is uninstalled. Instead, developers should take the following steps to
prevent Keychain data from persisting between application installations:
Static Analysis
When you have access to the source code of an iOS app, try to spot sensitive
data that's saved and processed throughout the app. This includes passwords,
secret keys, and personally identifiable information (PII), but it may also
include other data identified as sensitive by industry regulations, laws, and
company policies. Look for this data being saved via any of the local storage
APIs listed below. Make sure that sensitive data is never stored without
appropriate protection. For example, authentication tokens should not be saved
in NSUserDefaults without additional encryption.
The encryption must be implemented so that the secret key is stored in the
Keychain with secure settings, ideally
kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly . This ensures the usage
of hardware-backed storage mechanisms. Make sure that the access control
flags are set according to the security policy of the keys in the KeyChain.
Generic examples of using the KeyChain to store, update, and delete data can be
found in the official Apple documentation. The official Apple documentation
also includes an example of using Touch ID and passcode protected keys.
When creating keys in Swift, note the attribute kSecAttrTokenID as String:
kSecAttrTokenIDSecureEnclave ; this indicates that the key should be created
and used directly in the Secure Enclave.
When checking an iOS app for insecure data storage, consider the following
ways to store data because none of them encrypt data by default:
NSUserDefaults
File system
NSData : creates static data objects, while NSMutableData creates
dynamic data objects. NSData and NSMutableData are typically used for
data storage, but they are also useful for distributed objects applications, in
which data contained in data objects can be copied or moved between
applications. The following are methods used to write NSData objects:
NSDataWritingWithoutOverwriting
NSDataWritingFileProtectionNone
NSDataWritingFileProtectionComplete
NSDataWritingFileProtectionCompleteUnlessOpen
NSDataWritingFileProtectionCompleteUntilFirstUserAuthentication
NSFileManager : lets you examine and change the contents of the file
system. You can use createFileAtPath to create a file and write to it.
The following example shows how to create a securely encrypted file using the
createFileAtPath method:
[[NSFileManager defaultManager] createFileAtPath:[self filePath]
    contents:[@"secret text" dataUsingEncoding:NSUTF8StringEncoding]
    attributes:[NSDictionary dictionaryWithObject:NSFileProtectionComplete
    forKey:NSFileProtectionKey]];
CoreData
Core Data is a framework for managing the model layer of objects in your
application.
SQLite Databases
The SQLite 3 library must be added to an app if the app is to use SQLite. This
library is a C library that provides an API for executing SQLite commands.
Realm databases
Realm Objective-C and Realm Swift aren't supplied by Apple, but they are still
worth noting. They store everything unencrypted, unless the configuration has
encryption enabled.
YapDatabase
YapDatabase is a collection/key/value store built on top of SQLite.
Dynamic Analysis
One way to determine whether sensitive information (like credentials and keys)
is stored insecurely without leveraging native iOS functions is to analyze the
app's data directory. Triggering all app functionality before the data is analyzed
is important because the app may store sensitive data only after specific
functionality has been triggered. You can then perform static analysis for the
data dump according to generic keywords and app-specific data.
The following steps can be used to determine how the application stores data
locally on a jailbroken iOS device:
3. Execute grep with the data that you've stored, for example: grep -iRn
"USERID" .
4. If the sensitive data is stored in plaintext, the app fails this test.
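As a self-contained illustration of the grep step, here is a minimal sketch; the directory layout, file name, and contents are fabricated for demonstration:

```shell
# Simulate a dumped app data directory (names and contents are fabricated)
mkdir -p appdata/Library/Preferences
printf 'userid=USERID12345\n' > appdata/Library/Preferences/com.example.app.plist

# Search the dump recursively and case-insensitively, printing line numbers
grep -iRn "USERID" appdata
```

Any hit on a known piece of sensitive input indicates plaintext storage and should be investigated further.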
You can analyze the app's data directory on a non-jailbroken iOS device by using
third-party applications, such as iMazing.
Note that tools like iMazing don't copy data directly from the device. They
try to extract data from the backups they create. Therefore, getting all the
app data that's stored on the iOS device is impossible: not all folders are
included in backups. Use a jailbroken device or repackage the app with
Frida and use a tool like objection to access all the data and files.
If you added the Frida library to the app and repackaged it as described in
"Dynamic Analysis on Non-Jailbroken Devices" (from the "Basic Security
Testing" chapter), you can use objection to transfer files directly from the app's
data directory or read files in objection.
AppName.app
This app's bundle contains the app and all its resources.
This directory is visible to users, but users can't write to it.
Content in this directory is not backed up.
Documents/
Use this directory to store user-generated content.
Visible to users and users can write to it.
Content in this directory is backed up.
The app can disable paths by setting NSURLIsExcludedFromBackupKey .
Library/
This is the top-level directory for all files that aren't user data files.
iOS apps usually use the Application Support and Caches
subdirectories, but you can create custom subdirectories.
Library/Caches/
Contains semi-persistent cached files.
Invisible to users and users can't write to it.
Content in this directory is not backed up.
The OS may delete this directory's files automatically when the app is
not running and storage space is running low.
Library/Application Support/
Contains persistent files necessary for running the app.
Invisible to users and users can't write to it.
Content in this directory is backed up.
The app can disable paths by setting NSURLIsExcludedFromBackupKey .
Library/Preferences/
Used for storing properties, objects that can persist even after an
application is restarted.
Information is saved, unencrypted, inside the application sandbox in a
plist file called [BUNDLE_ID].plist.
All the key/value pairs stored using NSUserDefaults can be found in
this file.
tmp/
Use this directory to write temporary files that need not persist
between app launches.
Contains non-persistent cached files.
Invisible to users.
Content in this directory is not backed up.
The OS may delete this directory's files automatically when the app is
not running and storage space is running low.
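To illustrate why Library/Preferences is interesting: everything an app stores via NSUserDefaults ends up unencrypted in the [BUNDLE_ID].plist file, so a simple search is often enough to spot leaked secrets. The plist below is a fabricated example:

```shell
# Fabricated NSUserDefaults plist as it would appear in Library/Preferences/
cat > com.example.app.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>authToken</key>
    <string>FAKE-TOKEN-VALUE</string>
</dict>
</plist>
EOF

# Keys stored via NSUserDefaults are plaintext, so a simple search finds them
grep -i -A1 "token" com.example.app.plist
```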
/private/var/Keychains/keychain-2.db
On a non-jailbroken device, you can use objection to dump the Keychain items
created and stored by the app.
On a jailbroken device, you can use the iOS security assessment framework
Needle to find vulnerabilities caused by the application's data storage
mechanism.
iOS applications often store binary cookie files in the application sandbox.
Cookies are binary files containing cookie data for application WebViews. You
can use Needle to convert these files to a readable format and inspect the data.
Use the following Needle module, which searches for binary cookie files stored
in the application container, lists their data protection values, and gives the user
the options to inspect or download the file:
iOS applications often store data in property list (plist) files that are stored in
both the application sandbox and the IPA package. Sometimes these files contain
sensitive information, such as usernames and passwords; therefore, the contents
of these files should be inspected during iOS assessments. Use the following
Needle module, which searches for plist files stored in the application container,
lists their data protection values, and gives the user the options to inspect or
download the file:
iOS applications can store data in cache databases. These databases contain data
such as web requests and responses. Sometimes the data is sensitive. Use the
following Needle module, which searches for cache files stored in the
application container, lists their data protection values, and gives the user the
options to inspect or download the file:
iOS applications typically use SQLite databases to store data required by the
application. Testers should check the data protection values of these files and
their contents for sensitive data. Use the following Needle module, which
searches for SQLite databases stored in the application container, lists their data
protection values, and gives the user the options to inspect or download the file:
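Independently of Needle, once you have pulled a database file from the application container you can do a first plaintext check with standard tools. The file name and contents below are fabricated:

```shell
# Fabricated database file containing plaintext credentials
printf 'SQLite format 3\000user=alice password=hunter2' > app.db

# grep -a treats the binary file as text; -o prints only the matching part
grep -a -o "password=[A-Za-z0-9]*" app.db
```

If credentials or tokens show up in such a scan, inspect the database properly (e.g., with the sqlite3 CLI) and check its data protection class.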
NSLog Method
printf-like function
NSAssert-like function
Macro
Static Analysis
Use the following keywords to check the app's source code for predefined and
custom logging statements:
#ifdef DEBUG
#define NSLog(...) NSLog(__VA_ARGS__)
#else
#define NSLog(...)
#endif
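A quick way to search a source tree for such statements is grep; the source file below is fabricated to show what a finding looks like:

```shell
# Fabricated source file containing a problematic logging statement
mkdir -p src
cat > src/ViewController.m <<'EOF'
- (void)login {
    NSLog(@"user password is %@", password);
}
EOF

# Search the source tree for predefined and custom logging keywords
grep -rn -e "NSLog" -e "NSAssert" -e "fprintf" src
```

Each hit should be reviewed to determine whether sensitive data could end up in the log output.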
Dynamic Analysis
Navigate to a screen that displays input fields that take sensitive user
information. Two methods apply to checking log files for sensitive data:
2. Connect your iOS device via USB and launch Xcode. Navigate to Window
> Devices and Simulators, select your device and then the Open Console
option (as of Xcode 9).
After starting either method one or two, fill in the input fields. If sensitive data is
displayed in the output, the app fails this test.
To capture the logs of an iOS application, you can monitor log files with Needle:
The downside is that developers don't know in detail what code is executed via
third-party libraries, and therefore give up visibility. Consequently, make sure
that no more information than necessary is sent to the service and that no
sensitive information is disclosed.
Most third-party services are implemented in two ways:
Static Analysis
To determine whether API calls and functions provided by the third-party library
are used according to best practices, review their source code.
Dynamic Analysis
All requests made to external services should be analyzed for embedded
sensitive information. By using an interception proxy, you can investigate the
traffic between the app and the third party's endpoints. When the app is in use,
all requests that don't go directly to the server that hosts the main function should
be checked for sensitive information that's sent to a third party. This information
could be PII in a request to a tracking or ad service.
enables autocorrection.
var secureTextEntry: BOOL determines whether text copying and text
caching are disabled and hides the text being entered for UITextField .
The default value of this property is "NO."
Static Analysis
Search through the source code for similar implementations, such as
textObject.autocorrectionType = UITextAutocorrectionTypeNo;
textObject.secureTextEntry = YES;
Open xib and storyboard files in the Interface Builder of Xcode and
verify the states of Secure Text Entry and Correction in the
Attributes Inspector for the appropriate object.
The application must prevent the caching of sensitive information entered into
text fields. You can prevent caching by disabling it programmatically, using the
textObject.autocorrectionType = UITextAutocorrectionTypeNo directive in
the desired UITextFields, UITextViews, and UISearchBars. For data that should
be masked, such as PINs and passwords, set textObject.secureTextEntry to
"YES."
Dynamic Analysis
If a jailbroken iPhone is available, execute the following steps:
1. Reset your iOS device keyboard cache by navigating to Settings > General
> Reset > Reset Keyboard Dictionary.
2. Use the application and identify the functionalities that allow users to enter
sensitive data.
3. Dump the keyboard cache file dynamic-text.dat from the following
directory (which might be different for iOS versions before 8.0):
/private/var/mobile/Library/Keyboard/
4. Look for sensitive data, such as username, passwords, email addresses, and
credit card numbers. If the sensitive data can be obtained via the keyboard
cache file, the app fails this test.
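Once the cache file has been copied to your machine, the search can be scripted. Here is a minimal local simulation; the file contents are fabricated and the real file lives at /private/var/mobile/Library/Keyboard/dynamic-text.dat on the device:

```shell
# Simulate a copied keyboard cache file (contents fabricated)
printf 'DynamicDictionary-5\000\001alicesecret\000accountnumber' > dynamic-text.dat

# -a treats the binary cache as text so cached words can be searched
grep -a -o "alicesecret" dynamic-text.dat
```

A match on a value that was typed into a sensitive field means the keyboard cache was not disabled for that input.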
Overview
When users type data into input fields, the clipboard can be used to copy and
paste data. The clipboard is accessible system-wide and is therefore shared by
apps. A malicious app can misuse this sharing to obtain sensitive data that has
been stored in the clipboard.
Before iOS 9, a malicious app might monitor the pasteboard in the background
while periodically retrieving [UIPasteboard generalPasteboard].string . As
of iOS 9, pasteboard content is accessible to apps in the foreground only.
Static Analysis
Search the source code for subclasses of UITextField .
@interface NoSelectTextField : UITextField
@end
@implementation NoSelectTextField
- (BOOL)canPerformAction:(SEL)action withSender:(id)sender {
if (action == @selector(paste:) ||
action == @selector(cut:) ||
action == @selector(copy:) ||
action == @selector(select:) ||
action == @selector(selectAll:) ||
action == @selector(delete:) ||
action ==
@selector(makeTextWritingDirectionLeftToRight:) ||
action ==
@selector(makeTextWritingDirectionRightToLeft:) ||
action == @selector(toggleBoldface:) ||
action == @selector(toggleItalics:) ||
action == @selector(toggleUnderline:)
) {
return NO;
}
return [super canPerformAction:action withSender:sender];
}
@end
You can use Needle to check for sensitive data written to the clipboard on
jailbroken devices. Launch the following Needle module to start passively
monitoring the clipboard (all clipboard data will be written to the specified
output file):
Overview
Inter Process Communication (IPC) allows processes to send each other
messages and data. For processes that need to communicate with each other,
there are different ways to implement IPC on iOS:
XPC Services: XPC is a structured, asynchronous library that provides
basic interprocess communication. It is managed by launchd . It is the
most secure and flexible implementation of IPC on iOS and should be the
preferred method. It runs in the most restricted environment possible:
sandboxed with no root privilege escalation and minimal file system access
and network access. Two different APIs are used with XPC Services:
NSXPCConnection API
XPC Services API
Mach Ports: All IPC communication ultimately relies on the Mach Kernel
API. Mach Ports allow local communication (intra-device communication)
only. They can be implemented either natively or via Core Foundation
(CFMachPort) and Foundation (NSMachPort) wrappers.
NSFileCoordinator: The class NSFileCoordinator can be used to
manage and send data to and from apps via files that are available on the
local file system to various processes. NSFileCoordinator methods run
synchronously, so your code will be blocked until they stop executing.
That's convenient because you don't have to wait for an asynchronous block
callback, but it also means that the methods block the running thread.
Static Analysis
The following section summarizes keywords that you should look for to identify
IPC implementations within iOS source code.
XPC Services
NSXPCConnection
NSXPCInterface
NSXPCListener
NSXPCListenerEndpoint
You can set security attributes for the connection. The attributes should be
verified.
Check for the following two files in the Xcode project for the XPC Services API
(which is C-based):
xpc.h
connection.h
Mach Ports
mach_port_t
mach_msg_*
CFMachPort
CFMessagePort
NSMachPort
NSMessagePort
NSFileCoordinator
NSFileCoordinator
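The keyword lists above translate directly into a source-tree search; the source file below is fabricated to demonstrate a hit:

```shell
# Fabricated source file using an XPC connection
mkdir -p src
cat > src/Service.m <<'EOF'
NSXPCConnection *conn = [[NSXPCConnection alloc]
    initWithServiceName:@"com.example.service"];
EOF

# Search for the IPC-related keywords listed above
grep -rn -e "NSXPCConnection" -e "CFMessagePort" -e "NSFileCoordinator" \
    -e "mach_port_t" src
```

Review each match to verify that security attributes are set on the connection and that no sensitive data is exposed to other processes.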
Dynamic Analysis
Verify IPC mechanisms with static analysis of the iOS source code. No iOS tool
is currently available to verify IPC usage.
Overview
Entering sensitive information when, for example, registering an account or
making payments, is an essential part of using many apps. This data may be
financial information such as credit card data or user account passwords. The
data may be exposed if the app doesn't properly mask it while it is being typed.
Masking sensitive data (by showing asterisks or dots instead of clear text) should
be enforced.
Static Analysis
A text field that masks its input can be configured in two ways:
Source Code If the text field is defined in the source code, make sure that the
option isSecureTextEntry is set to "true." This option obscures the text input by
showing dots.
sensitiveTextField.isSecureTextEntry = true
Dynamic Analysis
To determine whether the application leaks any sensitive information to the user
interface, run the application and identify components that either show such
information or take it as input.
If the information is masked by, for example, asterisks or dots, the app isn't
leaking data to the user interface.
Overview
iOS includes auto-backup features that create copies of the data stored on the
device. On iOS, backups can be made through iTunes or the cloud (via the
iCloud backup feature). In both cases, the backup includes nearly all data stored
on the device except highly sensitive data such as Apple Pay information and
Touch ID settings.
Since iOS backs up installed apps and their data, an obvious concern is whether
sensitive user data stored by the app might accidentally leak through the backup.
The answer to this question is "yes," but only if the app insecurely stores
sensitive data in the first place.
decrypted only if the backup is restored to the backed up device. Someone trying
to extract this Keychain data from the backup couldn't decrypt it without access
to the crypto hardware inside the originating device.
Static Analysis
Therefore, avoid storing sensitive data in plaintext within any of the files or
folders that are in the app's private directory or subdirectories.
You can use the NSURLIsExcludedFromBackupKey and
CFURLIsExcludedFromBackupKey file system properties to exclude files and
directories from backups. An app that needs to exclude many files can do so by
creating its own subdirectory and marking that directory excluded. Apps should
create their own directories for exclusion instead of excluding system-defined
directories.
Both file system properties are preferable to the deprecated approach of directly
setting an extended attribute. All apps running on iOS version 5.1 and later
should use these properties to exclude data from backups.
The following is sample Objective-C code for excluding a file from a backup on
iOS 5.1 and later:
- (BOOL)addSkipBackupAttributeToItemAtPath:(NSString *)filePathString
{
    NSURL *URL = [NSURL fileURLWithPath:filePathString];
    assert([[NSFileManager defaultManager] fileExistsAtPath:[URL path]]);

    NSError *error = nil;
    BOOL success = [URL setResourceValue:[NSNumber numberWithBool:YES]
                                  forKey:NSURLIsExcludedFromBackupKey
                                   error:&error];
    if (!success) {
        NSLog(@"Error excluding %@ from backup: %@",
              [URL lastPathComponent], error);
    }
    return success;
}
Dynamic Analysis
After the app data has been backed up, review the data that's in the backed up
files and folders. The following directories should be reviewed for sensitive data:
Documents/
Library/Application Support/
Library/Preferences/
Refer to the overview of this section for more on the purpose of each of these
directories.
Overview
Manufacturers want to provide device users with an aesthetically pleasing effect
when an application is started or exited, so they introduced the concept of saving
a screenshot when the application goes into the background. This feature can
pose a security risk because screenshots (which may display sensitive
information such as an email or corporate documents) are written to local
storage, where they can be recovered by a rogue application with a sandbox
bypass exploit or someone who steals the device.
Static Analysis
While analyzing the source code, look for the fields or screens that take or
display sensitive data, and determine whether the application sanitizes the
screen (for example, by overlaying a UIImageView ) before being backgrounded.
The following is a sample remediation method that will set a default screenshot:
- (void)applicationDidEnterBackground:(UIApplication *)application {
    UIImageView *myBanner = [[UIImageView alloc]
        initWithImage:[UIImage imageNamed:@"overlayImage.png"]];
    self.backgroundImage = myBanner;
    [self.window addSubview:myBanner];
}
Dynamic Analysis
Navigate to an application screen that displays sensitive information, such as a
username, an email address, or account details. Background the application by
hitting the Home button on your iOS device. Connect to the iOS device and
navigate to the following directory (which may be different for iOS versions
below 8.0):
/var/mobile/Containers/Data/Application/$APP_ID/Library/Caches/Snapshots/
If the application caches the sensitive information in a screenshot, the app fails
this test.
Overview
Analyzing memory can help developers identify the root causes of problems
such as application crashes. However, it can also be used to access sensitive
data. This section describes how to check a process's memory for data disclosure.
First, identify the sensitive information that's stored in memory. Sensitive assets
are very likely to be loaded into memory at some point. The objective is to make
sure that this info is exposed as briefly as possible.
To investigate an application's memory, first create a memory dump.
Alternatively, you can analyze the memory in real time with, for example, a
debugger. Regardless of the method you use, this is a very error-prone process
because dumps provide the data left by executed functions and you might miss
executing critical steps. In addition, overlooking data during analysis is quite
easy to do unless you know the footprint of the data you're looking for (either its
exact value or its format). For example, if the app encrypts according to a
randomly generated symmetric key, you're very unlikely to spot the key in
memory unless you find its value by other means.
Static Analysis
Before looking into the source code, checking the documentation and identifying
application components provide an overview of where data might be exposed.
For example, while sensitive data received from a backend exists in the final
model object, multiple copies may also exist in the HTTP client or the XML
parser. All these copies should be removed from memory as soon as possible.
Understanding the application's architecture and its interaction with the OS will
help you identify sensitive information that doesn't have to be exposed in
memory at all. For example, assume your app receives data from one server and
transfers it to another without needing any additional processing. That data can
be received and handled in encrypted form, which prevents exposure via
memory.
However, if sensitive data does need to be exposed via memory, make sure that
your app exposes as few copies of this data as possible for as little time as
possible. In other words, you want centralized handling of sensitive data, based
on primitive and mutable data structures.
Such data structures give developers direct access to memory. Make sure that
this access is used to overwrite the sensitive data with dummy data (which is
typically zeroes). Examples of preferable data types include char [] and int
[] , but not NSString or String . Whenever you try to modify an immutable
object, such as a String , you actually create a copy and change the copy.
Avoid Swift data types other than collections regardless of whether they are
considered mutable. Many Swift data types hold their data by value, not by
reference. Although this allows modification of the memory allocated to simple
types like char and int , handling a complex type such as String by value
involves a hidden layer of objects, structures, or primitive arrays whose memory
can't be directly accessed or modified. Certain types of usage may seem to create
a mutable data object (and even be documented as doing so), but they actually
create a mutable identifier (variable) instead of an immutable identifier
(constant). For example, many think that the following results in a mutable
String in Swift, but it is actually an example of a variable whose complex
value can be replaced rather than a mutable data object in memory.
Notice that the base address of the underlying value changes with each string
operation. Here is the problem: To securely erase the sensitive information from
memory, we don't want to simply change the value of the variable; we want to
change the actual content of the memory allocated for the current value. Swift
doesn't offer such a function.
Swift collections ( Array , Set , and Dictionary ), on the other hand, may be
acceptable if they collect primitive data types such as char or int and are
defined as mutable (i.e., as variables instead of constants), in which case they are
more or less equivalent to a primitive array (such as char [] ). These
collections provide memory management, which can result in unidentified
copies of the sensitive data in memory if the collection needs to copy the
underlying buffer to a different location to extend it.
As we've seen so far, using Swift or Objective-C data types requires a deep
understanding of the language implementation. Furthermore, there has been
some core refactoring between major Swift versions, resulting in many data
types' behavior being incompatible with that of other types. To avoid these
issues, we recommend using primitive data types whenever data needs to be
securely erased from memory.
Unfortunately, few libraries and frameworks are designed to allow sensitive data
to be overwritten. Not even Apple considers this issue in the official iOS SDK
API. For example, most of the APIs for data transformation (parsers, serializers,
etc.) operate on non-primitive data types. Similarly, regardless of whether you
flag some UITextField as Secure Text Entry or not, it always returns data in
the form of a String or NSString .
In summary, when performing static analysis for sensitive data exposed via
memory, you should
try to identify application components and map where the data is used,
make sure that sensitive data is handled with as few components as
possible,
make sure that object references are properly removed once the object
containing sensitive data is no longer needed,
make sure that highly sensitive data is overwritten as soon as it is no longer
needed,
not pass such data via immutable data types, such as String and
NSString ,
avoid non-primitive data types (because they might leave data behind),
overwrite the value in memory before removing references,
pay attention to third-party components (libraries and frameworks). Having
a public API that handles data according to the recommendations above is a
good indicator that developers considered the issues discussed here.
Dynamic Analysis
Several approaches and tools are available for dumping an iOS app's memory.
On a non-jailbroken device, you can dump the app's process memory with
objection and Fridump. To take advantage of these tools, the iOS app must be
repackaged with FridaGadget.dylib and re-signed. A detailed explanation of
this process is in the section "Dynamic Analysis on Non-Jailbroken Devices," in
the chapter "Basic Security Testing."
With objection it is possible to dump all memory of the running process on the
device.
_ _ _ _
___| |_ |_|___ ___| |_|_|___ ___
| . | . | | | -_| _| _| | . | |
|___|___|_| |___|___|_| |_|___|_|_|
|___|(object)inject(ion) v0.1.0
After the memory has been dumped, executing the command strings with the
dump as argument will extract the strings.
$ strings memory > strings.txt
The original version of Fridump is no longer maintained and works only with
Python 2. Recent Frida releases require Python 3, so Fridump doesn't work out
of the box. If you get the following error message even though your iOS device
is connected via USB, check out the Fridump fork with the fix for Python 3.
______ _ _
| ___| (_) | |
| |_ _ __ _ __| |_ _ _ __ ___ _ __
| _| '__| |/ _` | | | | '_ ` _ \| '_ \
| | | | | | (_| | |_| | | | | | | |_) |
\_| |_| |_|\__,_|\__,_|_| |_| |_| .__/
| |
|_|
Once Fridump is working, you need the name of the app you want to dump,
which you can get with frida-ps . Afterwards, specify the app name in
Fridump.
When you add the -s flag, all strings are extracted from the dumped raw
memory files and added to the file strings.txt , which is stored in Fridump's
dump directory.
References
Demystifying the Secure Enclave Processor
OWASP Mobile Top 10 2016
M1 - Improper Platform Usage
M2 - Insecure Data Storage
OWASP MASVS
V2.1: "System credential storage facilities are used appropriately to store
sensitive data, such as user credentials or cryptographic keys."
V2.2: "No sensitive data should be stored outside of the app container or
system credential storage facilities."
V2.3: "No sensitive data is written to application logs."
V2.4: "No sensitive data is shared with third parties unless it is a necessary
part of the architecture."
V2.5: "The keyboard cache is disabled on text inputs that process sensitive
data."
V2.6: "The clipboard is deactivated on text fields that may contain sensitive
data."
V2.7: "No sensitive data is exposed via IPC mechanisms."
V2.8: "No sensitive data, such as passwords or pins, is exposed through the
user interface."
V2.9: "No sensitive data is included in backups generated by the mobile
operating system."
V2.10: "The app removes sensitive data from views when backgrounded."
V2.11: "The app does not hold sensitive data in memory longer than
necessary, and memory is cleared explicitly after use."
CWE
CWE-117 - Improper Output Neutralization for Logs
CWE-200 - Information Exposure
CWE-311 - Missing Encryption of Sensitive Data
CWE-312 - Cleartext Storage of Sensitive Information
CWE-359 - "Exposure of Private Information ('Privacy Violation')"
CWE-522 - Insufficiently Protected Credentials
CWE-524 - Information Exposure Through Caching
CWE-532 - Information Exposure Through Log Files
CWE-534 - Information Exposure Through Debug Log Files
CWE-538 - File and Directory Information Exposure
CWE-634 - Weaknesses that Affect System Processes
CWE-922 - Insecure Storage of Sensitive Information
Tools
Fridump
objection
OWASP ZAP
Burp Suite Professional
iOS Cryptography APIs
In the "Cryptography for Mobile Apps" chapter, we introduced general
cryptography best practices and described typical problems that may occur when
cryptography is used incorrectly. In this chapter, we'll detail the cryptography
APIs available for iOS. We'll show how to identify usage of those APIs in the
source code and how to interpret cryptographic configurations. When you're
reviewing code, compare the cryptographic parameters with the current best
practices linked in this guide.
their use. Because iOS constants are numeric, you should determine whether the
constants sent to the CCCrypt function represent an insecure or deprecated
algorithm.
CCCryptorStatus CCCryptorCreate(
    CCOperation op,            /* kCCEncrypt, etc. */
    CCAlgorithm alg,           /* kCCAlgorithmDES, etc. */
    CCOptions options,         /* kCCOptionPKCS7Padding, etc. */
    const void *key,           /* raw key material */
    size_t keyLength,
    const void *iv,            /* optional initialization vector */
    CCCryptorRef *cryptorRef); /* RETURNED */
You can then compare all the enum types to determine which algorithm,
padding, and key material is used. Pay attention to the keying material, noting
whether it comes directly from a password (which is bad) or from a Key
Derivation Function (e.g., PBKDF2). Obviously, your application may use other
non-standard libraries ( openssl , for example), so look for those too.
Search the source code for these constants. iOS cryptography should be based
on the best practices described in the chapter "Cryptography for Mobile Apps."
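A first pass over the code base for insecure or deprecated algorithm constants can be scripted; the source file below is fabricated to show what a finding looks like:

```shell
# Fabricated source file invoking CommonCrypto with a weak algorithm
mkdir -p src
cat > src/Crypto.m <<'EOF'
status = CCCrypt(kCCEncrypt, kCCAlgorithmDES, kCCOptionPKCS7Padding,
                 key, kCCKeySizeDES, iv, pt, ptLen, ct, ctSize, &moved);
EOF

# Flag insecure or deprecated algorithm constants
grep -rn -e "kCCAlgorithmDES" -e "kCCAlgorithm3DES" -e "kCCAlgorithmRC4" src
```

Each match should then be checked in context to confirm which operation, key size, and padding are actually configured.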
References
OWASP MASVS
V3.3: "The app uses cryptographic primitives that are appropriate for the
particular use case, configured with parameters that adhere to industry best
practices."
V3.4: "The app does not use cryptographic protocols or algorithms that are
widely considered depreciated for security purposes."
V3.6: "All random values are generated using a sufficiently secure random
number generator."
CWE
CWE-337 - Predictable Seed in PRNG
CWE-338 - Use of Cryptographically Weak Pseudo Random Number
Generator (PRNG)
Local Authentication on iOS
During local authentication, an app authenticates the user against credentials
stored locally on the device. In other words, the user "unlocks" the app or some
inner layer of functionality by providing a valid PIN, password, or fingerprint,
verified by referencing local data. Generally, this is done so that users can more
conveniently resume an existing session with a remote service or as a means of
step-up authentication to protect some critical function.
authenticate the user via Touch ID. The app can't access any data associated
with the enrolled fingerprint and is notified only whether authentication was
successful.
Security.framework is a lower level API to access Keychain Services.
This is a secure option if your app needs to protect some secret data with
biometric authentication, since the access control is managed at the system
level and cannot easily be bypassed. Security.framework has a C API,
but there are several open source wrappers available, making access to the
Keychain as simple as to NSUserDefaults. Security.framework underlies
LocalAuthentication.framework ; Apple recommends defaulting to the higher-
level APIs whenever possible.
deviceOwnerAuthentication (Swift) or
deviceOwnerAuthenticationWithBiometrics (Swift) or
LAPolicyDeviceOwnerAuthenticationWithBiometrics (Objective-C):
guard context.canEvaluatePolicy(.deviceOwnerAuthentication,
                                error: &error) else {
    // Could not evaluate policy; look at error and present an
    // appropriate message to the user
    return
}
context.evaluatePolicy(.deviceOwnerAuthentication,
    localizedReason: "Please, pass authorization to enter this area")
    { success, evaluationError in
    guard success else {
        // User did not authenticate successfully; look at
        // evaluationError and take appropriate action
        return
    }
    // User authenticated successfully; take appropriate action
}
The iOS Keychain APIs can (and should) be used to implement local
authentication. During this process, the app stores either a secret authentication
token or another piece of secret data identifying the user in the Keychain. In
order to authenticate to a remote service, the user must unlock the Keychain
using their passphrase or fingerprint to obtain the secret data.
The Keychain allows saving items with the special SecAccessControl attribute,
which will allow access to the item from the Keychain only after the user has
passed Touch ID authentication (or passcode, if such fallback is allowed by
attribute parameters).
Swift
// 1. create the SecAccessControl object and 2. define the query
// (see steps 1 and 2 in the Objective-C example below)

// 3. save item
let status = SecItemAdd(query as CFDictionary, nil)
if status == noErr {
    // successfully saved
} else {
    // error while saving
}
Objective-C
// 1. create the AccessControl object, requiring user presence
// and a passcode set on this device
CFErrorRef err = NULL;
SecAccessControlRef sacRef = SecAccessControlCreateWithFlags(kCFAllocatorDefault,
    kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
    kSecAccessControlUserPresence,
    &err);

// 2. define the query, referencing the AccessControl object
// via the kSecAttrAccessControl attribute

// 3. save item
OSStatus status = SecItemAdd((__bridge CFDictionaryRef)query, nil);
if (status == noErr) {
    // successfully saved
} else {
    // error while saving
}
Now we can request the saved item from the Keychain. Keychain Services will
present the authentication dialog to the user and return data or nil depending on
whether a suitable fingerprint was provided or not.
Swift
// 1. define query
var query = [String: Any]()
query[kSecClass as String] = kSecClassGenericPassword
query[kSecReturnData as String] = kCFBooleanTrue
query[kSecAttrAccount as String] = "My Name" as CFString
query[kSecAttrLabel as String] = "com.me.myapp.password" as CFString
query[kSecUseOperationPrompt as String] = "Please authenticate to enter this area" as CFString

// 2. get item
var queryResult: AnyObject?
let status = withUnsafeMutablePointer(to: &queryResult) {
    SecItemCopyMatching(query as CFDictionary, UnsafeMutablePointer($0))
}

if status == noErr {
    let password = String(data: queryResult as! Data, encoding: .utf8)!
    // successfully received password
} else {
    // authorization not passed
}
Objective-C
// 1. define query
NSDictionary *query = @{
    (__bridge id)kSecClass: (__bridge id)kSecClassGenericPassword,
    (__bridge id)kSecReturnData: @YES,
    (__bridge id)kSecAttrAccount: @"My Name1",
    (__bridge id)kSecAttrLabel: @"com.me.myapp.password",
    (__bridge id)kSecUseOperationPrompt: @"Please authenticate to enter this area"
};

// 2. get item
CFTypeRef queryResult = NULL;
OSStatus status = SecItemCopyMatching((__bridge CFDictionaryRef)query, &queryResult);

if (status == noErr) {
    NSData *resultData = (__bridge_transfer NSData *)queryResult;
    NSString *password = [[NSString alloc] initWithData:resultData encoding:NSUTF8StringEncoding];
    NSLog(@"%@", password);
} else {
    NSLog(@"Something went wrong");
}
To verify which frameworks the app links against, inspect the app binary with
otool:

$ otool -L <AppName>.app/<AppName>

If local authentication is used, the output will include both of these
frameworks:

/System/Library/Frameworks/LocalAuthentication.framework/LocalAuthentication
/System/Library/Frameworks/Security.framework/Security
Static Analysis
It is important to remember that the Local Authentication framework is an event-
based procedure and, as such, should not be the sole method of authentication.
Though this type of authentication is effective at the user-interface level, it is
easily bypassed through patching or instrumentation.
Verify that sensitive processes, such as re-authentication of a user who is
triggering a payment transaction, are protected using the Keychain services
method.
Verify that the kSecAccessControlUserPresence policy and the
kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly protection class
are set when items are added to the Keychain.
Dynamic Analysis
On a jailbroken device, tools like Swizzler2 and Needle can be used to bypass
LocalAuthentication. Both tools use Frida to instrument the evaluatePolicy
function so that it returns true even if authentication was not successfully
performed. Follow the steps below to activate this feature in Swizzler2:
Settings->Swizzler
Enable "Inject Swizzler into Apps"
Enable "Log Everything to Syslog"
Enable "Log Everything to File"
Enter the submenu "iOS Frameworks"
Enable "LocalAuthentication"
Enter the submenu "Select Target Apps"
Enable the target app
Close the app and start it again
When the Touch ID prompt appears, tap "Cancel"
If the application flow continues without requiring Touch ID, the bypass
has worked.
Alternatively, you can use objection to bypass Touch ID (this also works on a
non-jailbroken device), patch the app, or use Cycript or similar tools to
instrument the process.
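As a sketch (objection's command names may change between versions, and the app name is a placeholder), the bypass can be run from the objection REPL after attaching to the target app:

```shell
# Attach to the running target app ("TargetApp" is a placeholder)
$ objection --gadget "TargetApp" explore

# Inside the objection REPL, instrument the LAContext evaluatePolicy
# callback so that it always reports success:
ios ui biometrics_bypass
```

With the hook in place, trigger the Touch ID prompt in the app and observe whether the protected flow continues without a valid fingerprint.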
References
OWASP MASVS
V4.7: "Biometric authentication, if any, is not event-bound (i.e. using an
API that simply returns "true" or "false"). Instead, it is based on unlocking
the keychain/keystore."
CWE
CWE-287 - Improper Authentication
iOS Network APIs
Almost every iOS app acts as a client to one or more remote services. As this
network communication usually takes place over untrusted networks such as
public Wi-Fi, classical network-based attacks become a potential issue.
Most modern mobile apps use variants of HTTP-based web services, as these
protocols are well-documented and supported. On iOS, the NSURLConnection
class provides methods to load URL requests asynchronously and synchronously.
Overview
App Transport Security (ATS) is a set of security checks that the operating
system enforces when making connections with NSURLConnection,
NSURLSession, and CFURL to public hostnames. ATS is enabled by default for
applications built with iOS SDK 9 and above. When ATS is enabled, only
connections using TLS version 1.2 and one of the following forward-secrecy
cipher suites are allowed:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
ATS Exceptions
ATS restrictions can be disabled by configuring exceptions in the application's
Info.plist file under the NSAppTransportSecurity dictionary:
NSAppTransportSecurity : Dictionary {
    NSAllowsArbitraryLoads : Boolean
    NSAllowsArbitraryLoadsForMedia : Boolean
    NSAllowsArbitraryLoadsInWebContent : Boolean
    NSAllowsLocalNetworking : Boolean
    NSExceptionDomains : Dictionary {
        <domain-name-string> : Dictionary {
            NSIncludesSubdomains : Boolean
            NSExceptionAllowsInsecureHTTPLoads : Boolean
            NSExceptionMinimumTLSVersion : String
            NSExceptionRequiresForwardSecrecy : Boolean // Default value is YES
            NSRequiresCertificateTransparency : Boolean
        }
    }
}
The following table summarizes the global ATS exceptions. For more
information about these exceptions, please refer to table 2 in the official Apple
developer documentation.
| Key | Description |
| --- | --- |
| NSAllowsArbitraryLoads | Disables ATS restrictions globally, except for individual domains specified under NSExceptionDomains |
The following table summarizes the per-domain ATS exceptions. For more
information about these exceptions, please refer to table 3 in the official Apple
developer documentation.
| Key | Description |
| --- | --- |
| NSIncludesSubdomains | Indicates whether ATS exceptions should apply to subdomains of the named domain |
| NSExceptionAllowsInsecureHTTPLoads | Allows HTTP connections to the named domain, but does not affect TLS requirements |
| NSExceptionMinimumTLSVersion | Allows connections to servers with TLS versions less than 1.2 |
| NSExceptionRequiresForwardSecrecy | Disables perfect forward secrecy (PFS) |
Starting January 1, 2017, Apple App Store review requires justification if one
of the following ATS exceptions is defined:
NSAllowsArbitraryLoads
NSAllowsArbitraryLoadsForMedia
NSAllowsArbitraryLoadsInWebContent
NSExceptionAllowsInsecureHTTPLoads
NSExceptionMinimumTLSVersion
However, this deadline was later extended by Apple, which stated: "To give you
additional time to prepare, this deadline has been extended and we will provide
another update when a new deadline is confirmed."
The following Info.plist configuration disables ATS globally:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
If the source code is not available, the Info.plist file should be either
obtained from a jailbroken device or extracted from the application's IPA file.
Since IPA files are ZIP archives, they can be extracted with any zip utility:

$ unzip app-name.ipa

The Info.plist file can be found in the Payload/<AppName>.app/ directory of
the extracted archive. It's a binary encoded file and has to be converted to a
human-readable format for analysis.

plutil is a tool designed for this purpose; it ships natively with macOS.
The following command shows how to convert the Info.plist file into XML
format:

$ plutil -convert xml1 Info.plist
Once the file is converted to a human-readable format, the exceptions can be
analyzed. The application may have ATS exceptions defined to allow its normal
functionality. For example, the Firefox iOS application has ATS disabled
globally. This exception is acceptable because otherwise the application would
not be able to connect to any HTTP website that does not meet all the ATS
requirements.
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
<key>NSExceptionDomains</key>
<dict>
<key>example.com</key>
<dict>
<key>NSIncludesSubdomains</key>
<true/>
<key>NSExceptionMinimumTLSVersion</key>
<string>TLSv1.2</string>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<false/>
<key>NSExceptionRequiresForwardSecrecy</key>
<true/>
</dict>
</dict>
</dict>
If connections are made to third-party domains (that are not under the control
of the app owner), evaluate which ATS settings are not supported by the
third-party domain and whether they can be deactivated.
If the application opens third-party websites in web views, then from iOS
10 onwards NSAllowsArbitraryLoadsInWebContent can be used to disable
ATS restrictions for the content loaded in web views.
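To see which ATS requirements a given endpoint fails to meet (and therefore which exceptions a developer might claim to need), the nscurl utility that ships with macOS can be used; the domain below is a placeholder:

```shell
# Probe the server against each possible ATS configuration;
# the output marks every combination with PASS or FAIL.
$ nscurl --ats-diagnostics --verbose https://www.example.com
```

A server that fails the default configuration but passes with a given exception key indicates exactly which exception the app would need.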
The certificate can be pinned during development, or at the time the app first
connects to the backend. In the latter case, the certificate is associated with,
or 'pinned' to, the host when it is seen for the first time. This second variant
is slightly less secure, as an attacker intercepting the initial connection
could inject their own certificate.
Static Analysis
Verify that the server certificate is pinned. Pinning can be implemented in
multiple ways:
The code presented below shows how to check whether the certificate provided
by the server matches the certificate stored in the app. The method below
implements the connection authentication: the delegate is told that the
connection will send a request for an authentication challenge, and within this
callback the app can compare the server certificate against a pinned local copy
and perform customary X.509 checks.
- (void)connection:(NSURLConnection *)connection willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    SecTrustRef serverTrust = challenge.protectionSpace.serverTrust;
    SecCertificateRef certificate = SecTrustGetCertificateAtIndex(serverTrust, 0);
    NSData *remoteCertificateData = CFBridgingRelease(SecCertificateCopyData(certificate));
    NSString *cerPath = [[NSBundle mainBundle] pathForResource:@"MyLocalCertificate" ofType:@"cer"];
    NSData *localCertData = [NSData dataWithContentsOfFile:cerPath];

    // The check below verifies whether the certificate received from
    // the server matches the one pinned in the client.
    if ([remoteCertificateData isEqualToData:localCertData]) {
        NSURLCredential *credential = [NSURLCredential credentialForTrust:serverTrust];
        [[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];
    }
    else {
        [[challenge sender] cancelAuthenticationChallenge:challenge];
    }
}
Dynamic Analysis
Server certificate validation
Our test approach is to gradually relax security of the SSL handshake negotiation
and check which security mechanisms are enabled.
1. With Burp set up as a proxy, make sure that there is no certificate added
to the trust store (Settings -> General -> Profiles) and that tools like SSL
Kill Switch are deactivated. Launch your application and check whether you can
see the traffic in Burp. Any failures will be reported under the 'Alerts' tab.
If you can see the traffic, it means that no certificate validation is
performed at all. If, however, you can't see any traffic and you see
information about an SSL handshake failure, follow the next point.
2. Now, install the Burp certificate, as explained in the PortSwigger user
documentation. If the handshake is successful and you can see the traffic in
Burp, it means that the certificate is validated against the device's trust
store, but pinning is not performed.
3. If the instructions from the previous step don't lead to traffic being
proxied through Burp, it means that the certificate is actually pinned and all
security measures are in place. However, you still need to bypass the
pinning in order to test the application. Please refer to the section "Basic
Security Testing" for more information on this.
Client certificate validation
Some applications use a two-way SSL handshake, meaning that the application
verifies the server's certificate and the server verifies the client's
certificate. A few things are worth noting:
1. The client certificate contains a private key that will be used for the key
exchange.
2. Usually the certificate would also need a password to use (decrypt) it.
3. The certificate can be stored in the binary itself, data directory or in the
keychain.
The most common (and improper) way of performing the two-way handshake is to
store the client certificate within the application bundle and hardcode the
password. This obviously does not provide much security, because all clients
will share the same certificate.
A second way of storing the certificate (and possibly its password) is to use
the keychain. Upon first login, the application should download the personal
certificate and store it securely in the keychain.
Sometimes applications have one hardcoded certificate that is used for the
first login, after which the personal certificate is downloaded. In this case,
check whether it's still possible to use the 'generic' certificate to connect
to the server.
Once you have extracted the certificate from the application (e.g. using Cycript
or Frida), add it as client certificate in Burp, and you will be able to intercept the
traffic.
References
OWASP Mobile Top 10 2016
OWASP MASVS
V5.1: "Data is encrypted on the network using TLS. The secure channel is
used consistently throughout the app."
V5.2: "The TLS settings are in line with current best practices, or as close
as possible if the mobile operating system does not support the
recommended standards."
V5.3: "The app verifies the X.509 certificate of the remote endpoint when
the secure channel is established. Only certificates signed by a trusted CA
are accepted."
V5.4: "The app either uses its own certificate store, or pins the endpoint
certificate or public key, and subsequently does not establish connections
with endpoints that offer a different certificate or key, even if signed by a
trusted CA."
CWE
Overview
In contrast to Android's rich Inter-Process Communication (IPC) capability, iOS
offers few options for communication between apps. In fact, there's no way for
apps to communicate directly. Instead, Apple offers two types of indirect
communication: file transfer through AirDrop and custom URL schemes.
Custom URL schemes allow apps to communicate via a custom protocol. An app
must declare support for the scheme and handle incoming URLs that use the
scheme. Once the URL scheme is registered, other apps can open the app that
registered the scheme, and pass parameters by creating appropriately formatted
URLs and opening them with the openURL method.
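As a minimal sketch (the scheme and parameter names are hypothetical), another app can invoke a registered scheme like this:

```swift
import UIKit

// "myURLscheme" and "param" are hypothetical; the receiving app
// must validate both the action and every parameter it receives.
if let url = URL(string: "myURLscheme://dosomething?param=1") {
    // iOS 10+ API; earlier versions used openURL(_:)
    UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
```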
Security issues arise when an app processes calls to its URL scheme without
properly validating the URL and its parameters and when users aren't prompted
for confirmation before triggering an important action.
One example is the following bug in the Skype Mobile app, discovered in 2010:
The Skype app registered the skype:// protocol handler, which allowed other
apps to trigger calls to other Skype users and phone numbers. Unfortunately,
Skype didn't ask users for permission before placing the calls, so any app could
call arbitrary numbers without the user's knowledge.
Attackers exploited this vulnerability by putting an invisible <iframe
src="skype://xxx?call"></iframe> (where xxx was replaced by a premium
number), so any Skype user who inadvertently visited a malicious website called
the premium number.
Static Analysis
The first step in testing custom URL schemes is finding out whether an
application registers any protocol handlers. This information is in the
Info.plist file in the application sandbox folder. To view registered protocol
handlers, simply open the project in Xcode, go to the Info tab, and open the
URL Types section, presented in the screenshot below.
Next, determine how a URL path is built and validated. The method openURL is
responsible for handling user URLs. Look for implemented controls: how URLs
are validated (the input the app accepts) and whether the app asks for the
user's permission when the custom URL scheme is used.
Dynamic Analysis
Once you've identified the custom URL schemes the app has registered, open the
URLs in Safari and observe how the app behaves.
If the app parses parts of the URL, you can perform input fuzzing to detect
memory corruption bugs. For this you can use IDB:
Start IDB, connect to your device and select the target app. You can find
details in the IDB documentation.
Go to the URL Handlers section. In URL schemes , click Refresh , and
on the left you'll find a list of all custom schemes defined in the app being
tested. You can load these schemes by clicking Open , on the right side. By
simply opening a blank URI scheme (e.g., opening myURLscheme:// ), you
can discover hidden functionality (e.g., a debug window) and bypass local
authentication.
To find out whether custom URI schemes contain any bugs, try to fuzz
them. In the URL Handlers section, go to the Fuzzer tab. On the left side
default IDB payloads are listed. The FuzzDB project offers fuzzing
dictionaries. Once your payload list is ready, go to the Fuzz Template
section in the left bottom panel and define a template. Use $@$ to define
an injection point, for example:
myURLscheme://$@$
While the URL scheme is being fuzzed, watch the logs (in Xcode, go to Window
-> Devices -> click on your device -> bottom console contains logs) to
observe the impact of each payload. The history of used payloads is on the right
side of the IDB Fuzzer tab .
Needle can also be used to test custom URL schemes; manual fuzzing can be
performed against the URL scheme to identify input validation and memory
corruption bugs. The following Needle module can be used to perform these
attacks:
[needle] >
[needle] > use dynamic/ipc/open_uri
[needle][open_uri] > show options
Overview
WebViews are in-app browser components for displaying interactive web
content. They can be used to embed web content directly into an app's user
interface.
iOS WebViews support JavaScript execution by default, so script injection and
cross-site scripting attacks can affect them. Starting from iOS version 7.0, Apple
also introduced APIs that allow communication between the JavaScript runtime
in the WebView and the native Swift or Objective-C app. If these APIs are used
carelessly, important functionality might be exposed to attackers who manage to
inject malicious script into the WebView (e.g., through a successful cross-site
scripting attack).
WebViews support different URL schemes, for example tel. Detection of the
tel:// scheme can be disabled in the HTML page, in which case it will not be
interpreted by the WebView.
Static Analysis
Look out for usages of the following components that implement WebViews:
UIWebView, WKWebView, and SFSafariViewController. UIWebView is deprecated and
should not be used; make sure that either WKWebView or SFSafariViewController
is used to embed web content.
JavaScript Configuration
#import "ViewController.h"
#import <WebKit/WebKit.h>
@interface ViewController ()<WKNavigationDelegate,WKUIDelegate>
@property(strong,nonatomic) WKWebView *webView;
@end
@implementation ViewController
- (void)viewDidLoad {
UIWebView
Since iOS 7, the JavaScriptCore framework provides an Objective-C wrapper to
the WebKit JavaScript engine. This makes it possible to execute JavaScript from
Swift and Objective-C, as well as making Objective-C and Swift objects
accessible from the JavaScript runtime.
[webView valueForKeyPath:@"documentView.webView.mainFrame.javaScriptContext"]
Note that only class members defined in the JSExport protocol are made
accessible to JavaScript code.
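For illustration, the following sketch (the class and method names are hypothetical) shows how the JSExport protocol controls visibility:

```swift
import JavaScriptCore

// Only members declared in a JSExport-derived protocol are exposed to JS.
@objc protocol MyObjectExports: JSExport {
    func exposedMethod() -> String
}

class MyObject: NSObject, MyObjectExports {
    func exposedMethod() -> String { return "visible to JS" }  // callable from JS
    func internalMethod() -> String { return "hidden" }        // NOT visible to JS
}
```

After an instance of MyObject is inserted into a JSContext (for example via setObject(_:forKeyedSubscript:)), JavaScript code can call exposedMethod(), while internalMethod is undefined.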
WKWebView
WebViews can load content remotely and locally from the app data directory. If
the content is loaded locally, users should not be able to change the filename or
path from which the file is loaded, and they shouldn't be able to edit the loaded
file.
Check the source code for WebViews usage. If you can identify a WebView
instance, check whether any local files have been loaded ("example_file.html" in
the below example).
- (void)viewDidLoad
{
[super viewDidLoad];
WKWebViewConfiguration *configuration =
[[WKWebViewConfiguration alloc] init];
Check the baseURL for dynamic parameters that can be manipulated (leading to
local file inclusion).
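A minimal sketch of loading a local file into a WKWebView (the file name matches the example above; the exact setup will differ per app):

```swift
import WebKit

let webView = WKWebView(frame: .zero, configuration: WKWebViewConfiguration())

// Load a file bundled with the app; restricting read access to its own
// directory prevents the page from traversing to other local files.
if let url = Bundle.main.url(forResource: "example_file", withExtension: "html") {
    webView.loadFileURL(url, allowingReadAccessTo: url.deletingLastPathComponent())
}
```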
Dynamic Analysis
To simulate an attack, inject your own JavaScript into the WebView with an
interception proxy. Attempt to access local storage and any native methods and
properties that might be exposed to the JavaScript context.
References
CWE
CWE-79 - Improper Neutralization of Input During Web Page Generation
https://cwe.mitre.org/data/definitions/79.html
CWE-939: Improper Authorization in Handler for Custom URL Scheme
Info
[#THIEL] Thiel, David. iOS Application Security: The Definitive Guide for
Hackers and Developers (Kindle Locations 3394-3399). No Starch Press.
Kindle Edition.
Tools
IDB - http://www.idbtool.com/
Code Quality and Build Settings for iOS
Apps
Making Sure that the App Is Properly Signed
Overview
Code signing your app assures users that the app has a known source and hasn't
been modified since it was last signed. Before your app can integrate app
services, be installed on a device, or be submitted to the App Store, it must be
signed with a certificate issued by Apple. For more information on how to
request certificates and code sign your apps, review the App Distribution Guide.
You can retrieve the signing certificate information from the application's .app
file with codesign. Codesign is used to create, check, and display code
signatures, as well as inquire into the dynamic status of signed code in the
system.
After you get the application's .ipa file, re-save it as a ZIP file and decompress
the ZIP file. Navigate to the Payload directory, where the application's .app file
will be.
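For example, assuming the .app bundle extracted above (the bundle name is a placeholder), the signature details can be displayed as follows:

```shell
# Display detailed code-signing information for the app bundle
$ codesign -dvvv Payload/<AppName>.app
```

The output includes the bundle identifier, the signing authority chain, and the hash of the code signature.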
Overview
Generally, as little explanatory information as possible should be provided with
the compiled code. Some metadata (such as debugging information, line
numbers, and descriptive function or method names) makes the binary or byte-
code easier for the reverse engineer to understand but isn't necessary in a release
build. This metadata can therefore be discarded without impacting the app's
functionality.
These symbols can be saved in "Stabs" format or the DWARF format. In the
Stabs format, debugging symbols, like other symbols, are stored in the regular
symbol table. In the DWARF format, debugging symbols are stored in a special
"__DWARF" segment within the binary. DWARF debugging symbols can also
be saved as a separate debug-information file. In this test case, you make sure
that no debug symbols are contained in the release binary itself (in neither the
symbol table nor the __DWARF segment).
Static Analysis
Use gobjdump to inspect the main binary and any included dylibs for Stabs and
DWARF symbols.
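For example (the binary name is a placeholder; gobjdump is the GNU objdump binary from binutils, installable via Homebrew):

```shell
# Dump Stabs entries and DWARF sections; empty output for a
# stripped release binary means no debug symbols are present.
$ gobjdump --stabs --dwarf TargetApp
```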
Dynamic Analysis
Dynamic analysis is not applicable for finding debugging symbols.
Remediation
Make sure that debugging symbols are stripped when the application is being
built for production. Stripping debugging symbols will reduce the size of the
binary and increase the difficulty of reverse engineering. To strip debugging
symbols, set Strip Debug Symbols During Copy to "YES" via the project's
build settings.
Note that a properly configured crash reporter system is still possible,
because crash reporting doesn't require any symbols in the application binary.
Overview
To speed up verification and get a better understanding of errors, developers
often include debugging code, such as verbose logging statements (using
NSLog , println , print , dump , and debugPrint ) about responses from
their APIs and about their application's progress and/or state. Furthermore, there
may be debugging code for "management-functionality," which is used by
developers to set the application's state or mock responses from an API. Reverse
engineers can easily use this information to track what's happening with the
application. Therefore, debugging code should be removed from the
application's release version.
Static Analysis
You can take the following static analysis approach for the logging statements:
1. Import the application's source code into Xcode.
2. Search the code for the printing functions listed above ( NSLog ,
println , print , dump , debugPrint ).
3. When you find one of them, determine whether the developers used a
wrapping function around the logging function for better markup of the
statements to be logged; if so, add that function to your search.
4. For every result of steps 2 and 3, determine whether macros or debug-state
related guards have been set to turn the logging off in the release build.
Please note the change in how Objective-C can use preprocessor macros:
#ifdef DEBUG
// Debug-only code
#endif
The procedure for enabling this behavior in Swift has changed: you need to
either set environment variables in your scheme or set them as custom flags in
the target's build settings. Please note that the following functions (which
allow you to determine whether the app was built in the Swift 2.1 release
configuration) aren't recommended, as Xcode 8 and Swift 3 don't support them:
_isDebugAssertConfiguration
_isReleaseAssertConfiguration
_isFastAssertConfiguration .
Depending on the application's setup, there may be more logging functions. For
example, when CocoaLumberjack is used, static analysis is a bit different.
Dynamic Analysis
Dynamic analysis should be executed on both a simulator and a device because
developers sometimes use target-based functions (instead of functions based on a
release/debug-mode) to execute the debugging code.
1. Run the application on a simulator and check for output in the console
during the app's execution.
2. Attach a device to your Mac, run the application on the device via Xcode,
and check for output in the console during the app's execution in the
console.
For the other "manager-based" debug code: click through the application on both
a simulator and a device to see if you can find any functionality that allows an
app's profiles to be pre-set, allows the actual server to be selected or allows
responses from the API to be selected.
Remediation
As a developer, incorporating debug statements into your application's debug
version is not a problem, as long as you realize that debugging statements
should never end up in the application's release version. In Objective-C,
preprocessor macros can be used to filter out debug code:
#ifdef DEBUG
// Debug-only code
#endif
In Swift 2 (with Xcode 7), you have to set custom compiler flags for every
target, and compiler flags have to start with "-D". So you can use the
following annotations when the debug flag -DMSTG-DEBUG is set:
#if MSTG-DEBUG
// Debug-only code
#endif
In Swift 3 (with Xcode 8), you can set Active Compilation Conditions in Build
settings/Swift compiler - Custom flags. Instead of a preprocessor, Swift 3 uses
conditional compilation blocks based on the defined conditions:
#if DEBUG_LOGGING
// Debug-only code
#endif
Overview
Exceptions often occur after an application enters an abnormal or erroneous
state. Testing exception handling is about making sure that the application will
handle the exception and get into a safe state without exposing any sensitive
information via its logging mechanisms or the UI.
@try {
    // do work here
}
@catch (NSException *exception) {
    // handle the exception
}
@finally {
    // cleanup
}
NSError
NSError is used for all other types of errors. Some Cocoa framework APIs
provide errors as objects in their failure callback in case something goes
wrong; those that don't pass a pointer to an NSError object by reference. It
is good practice to give a method that takes a pointer to an NSError object a
BOOL return type indicating success or failure. If the method has a return
type, make sure to return nil for errors. If "NO" or nil is returned, it
allows you to inspect the error/reason for failure.
Propagate the error from a function to the code that calls that function. In
this situation, there's no do-catch ; there's only a throw throwing the
actual error or a try to execute the method that throws. The method
containing the try also requires the throws keyword:
do {
try functionThatThrows()
defer {
//use this as your finally block as with Objective-c
}
statements
} catch pattern 1 {
statements
} catch pattern 2 where condition {
statements
}
Use the try! expression to assert that the error won't occur.
Static Analysis
Review the source code to understand how the application handles various types
of errors (IPC communications, remote services invocation, etc.). The following
sections list examples of what you should check for each language at this stage.
raise is rarely used (it's used when the program must be terminated
without further warning);
verify that the method that's called with try! can't throw an error.
Dynamic Testing
There are several dynamic analysis methods (for example, providing unexpected
input in the UI or tampering with network and IPC responses). In all cases,
verify that the application can:
recover from the error or enter a state from which it can inform the user that
it can't continue,
provide a message (which shouldn't leak sensitive information) to get the
user to take appropriate action,
withhold information from the application's logging mechanisms.
Remediation
Developers can implement proper error handling in several ways:
Make sure that the application uses a well-designed and unified scheme to
handle errors.
Make sure that all logging is removed or guarded as described in the test
case "Testing for Debugging Code and Verbose Error Logging."
For a high-risk application written in Objective-C: create an exception
handler that removes secrets that shouldn't be easily retrievable. The
handler can be set via NSSetUncaughtExceptionHandler .
Refrain from using try! in Swift unless you're certain that there's no error
in the throwing method that's being called.
Make sure that the Swift error doesn't propagate into too many intermediate
methods.
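A minimal sketch of registering such a handler in Swift (wipeSecrets() is a hypothetical helper that clears sensitive in-memory data):

```swift
import Foundation

// Called for any uncaught NSException before the app terminates.
// The closure must not capture context (it bridges to a C function pointer).
NSSetUncaughtExceptionHandler { exception in
    // wipeSecrets() is hypothetical: remove secrets that shouldn't be
    // easily retrievable from a crashed process or a crash log.
    wipeSecrets()
    NSLog("Uncaught exception: %@", exception)
}
```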
Overview
Although Xcode enables all binary security features by default, it may be
relevant to verify this for an old application or to check for misconfigured
compilation options. The following features are applicable: stack-smashing
protection, PIE (Position Independent Executable) support, and ARC (Automatic
Reference Counting).
Static Analysis
Xcode Project Settings
Stack-smashing protection
1. In Xcode, select your target in the "Targets" section, then click the "Build
Settings" tab to view the target's settings.
2. Make sure that the "-fstack-protector-all" option is selected in the "Other C
Flags" section.
PIE protection
1. In Xcode, select your target in the "Targets" section, then click the "Build
Settings" tab to view the target's settings.
2. Set the iOS Deployment Target to iOS 4.3 or later.
3. Make sure that "Generate Position-Dependent Code" is set to its default
value ("NO").
4. Make sure that "Don't Create Position Independent Executables" is set to its
default value ("NO").
ARC protection
1. In Xcode, select your target in the "Targets" section, then click the "Build
Settings" tab to view the target's settings.
2. Make sure that "Objective-C Automatic Reference Counting" is set to its
default value ("YES").
Below are procedures for checking the binary security features described above.
All the features are enabled in these examples.
PIE:
$ unzip DamnVulnerableiOSApp.ipa
$ cd Payload/DamnVulnerableIOSApp.app
$ otool -hv DamnVulnerableIOSApp
DamnVulnerableIOSApp (architecture armv7):
Mach header
magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
MH_MAGIC ARM V7 0x00 EXECUTE 38 4292 NOUNDEFS DYLDLINK TWOLEVEL
WEAK_DEFINES BINDS_TO_WEAK PIE
DamnVulnerableIOSApp (architecture arm64):
Mach header
magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
MH_MAGIC_64 ARM64 ALL 0x00 EXECUTE 38 4856 NOUNDEFS DYLDLINK
TWOLEVEL
WEAK_DEFINES BINDS_TO_WEAK PIE
stack canary:
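The stack canary can be checked by grepping the imported symbols (the binary name follows the PIE example above); similarly, grepping for _objc_release indicates that ARC is in use:

```shell
# Stack-smashing protection: the canary helper symbols must be imported
$ otool -I -v DamnVulnerableIOSApp | grep stack_chk
# Expected symbols: ___stack_chk_fail and ___stack_chk_guard

# Automatic Reference Counting: ARC runtime calls must be present
$ otool -I -v DamnVulnerableIOSApp | grep _objc_release
```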
With idb
idb automates the process of checking for stack canary and PIE support.
Select the target binary in the idb GUI and click the "Analyze Binary…" button.
References
OWASP MASVS
V7.1: "The app is signed and provisioned with a valid certificate."
V7.4: "Debugging code has been removed, and the app does not log
verbose errors or debugging messages."
V7.6: "The app catches and handles possible exceptions."
V7.7: "Error handling logic in security controls denies access by default."
V7.9: "Free security features offered by the toolchain, such as byte-code
minification, stack protection, PIE support and automatic reference
counting, are activated."
Tools
idb - https://github.com/dmayer/idb
Codesign -
https://developer.apple.com/legacy/library/documentation/Darwin/Referenc
e/ManPages/man1/codesign.1.html
Tampering and Reverse Engineering on
iOS
Swift and Objective-C
Because Objective-C and Swift are fundamentally different, the programming
language in which the app is written affects the possibilities for reverse
engineering it. For example, Objective-C allows method invocations to be
changed at run time. This makes hooking into other app functions (a technique
heavily used by Cycript and other reverse engineering tools) easy. This "method
swizzling" is not implemented the same way in Swift, and the difference makes
the technique harder to execute with Swift than with Objective-C.
The iOS SDK (Software Development Kit), formerly known as the iPhone SDK, is
Apple's software development kit for building native iOS applications. It can
be downloaded from the official Apple website.
Utilities
Class-dump by Steve Nygard "is a command line utility for examining the
Objective-C runtime information stored in Mach-O (Mach object) files. It
generates declarations for the classes, categories, and protocols."
MachOView is a useful visual Mach-O file browser that also allows in-file
editing of ARM binaries.
Reversing Frameworks
Radare2 is a complete framework for reverse engineering and analysis. It is
built on the Capstone disassembler engine, the Keystone assembler, and the
Unicorn CPU emulation engine. Radare2 supports iOS binaries and many useful
iOS-specific features, such as a native Objective-C parser and an iOS debugger.
Commercial Disassemblers
IDA Pro can deal with iOS binaries. It has a built-in iOS debugger. IDA is
widely seen as the gold standard for GUI-based interactive static analysis, but it
isn't cheap. For the more budget-minded reverse engineer, Hopper offers similar
static analysis features.
In this guide, we'll introduce static and dynamic analysis and instrumentation.
Throughout this chapter, we refer to the OWASP UnCrackable Apps for iOS, so
download them from the MSTG repository if you're planning to follow the
examples.
Static Analysis
itms-services://?action=download-manifest&url=https://s3-ap-
southeast-1.amazonaws.com/test-uat/manifest.plist
You can use the ITMS services asset downloader tool to download the IPA from
an OTA distribution URL. Install it via npm:
# itms-services -u "itms-services://?action=download-
manifest&url=https://s3-ap-southeast-1.amazonaws.com/test-
uat/manifest.plist" -o - > out.ipa
You can use Saurik's IPA Installer Console to recover IPAs from apps installed
on the device. To do this, install IPA Installer Console via Cydia. Then, SSH
into the device and look up the bundle ID of the target app. For example through
listing of the available apps:
If the app is available on iTunes, you can recover the IPA on macOS:
# Clutch -i
NOTE: Only applications distributed via the AppStore are protected by FairPlay
DRM. If your application was compiled in and exported directly from Xcode,
you don't need to decrypt it. The easiest way to disassemble is to load the
application into Hopper, which can be used to make sure that it's being correctly
disassembled. You can also check it with otool:
If the output contains cryptoff, cryptsize, and cryptid fields, the binary is
encrypted. If the output of this command is empty, the binary is not encrypted.
Remember to use otool on the binary, not on the IPA file.
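Rather than eyeballing the otool output, you can also read the cryptid field programmatically. The following Python sketch (function and constant names are our own, and it deliberately handles only a thin, little-endian 64-bit Mach-O, not fat binaries) walks the load commands and returns the cryptid from LC_ENCRYPTION_INFO(_64), if present:

```python
import struct

MH_MAGIC_64 = 0xfeedfacf
LC_ENCRYPTION_INFO = 0x21
LC_ENCRYPTION_INFO_64 = 0x2C

def get_cryptid(macho_bytes):
    """Return the cryptid of a thin 64-bit Mach-O, or None if the binary
    carries no LC_ENCRYPTION_INFO(_64) load command at all."""
    magic, = struct.unpack_from("<I", macho_bytes, 0)
    if magic != MH_MAGIC_64:
        raise ValueError("not a thin 64-bit Mach-O")
    # mach_header_64: magic, cputype, cpusubtype, filetype, ncmds, ...
    ncmds, = struct.unpack_from("<I", macho_bytes, 16)
    offset = 32  # sizeof(struct mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", macho_bytes, offset)
        if cmd in (LC_ENCRYPTION_INFO, LC_ENCRYPTION_INFO_64):
            # layout: cmd, cmdsize, cryptoff, cryptsize, cryptid
            cryptid, = struct.unpack_from("<I", macho_bytes, offset + 16)
            return cryptid
        offset += cmdsize
    return None
```

A cryptid of 1 means the binary is FairPlay-encrypted; 0 means the encryption command is present but the segment has been decrypted.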
$ unzip DamnVulnerableiOSApp.ipa
$ cd Payload/DamnVulnerableIOSApp.app
Note the architectures: armv7 (which is 32-bit) and arm64 . This design of a
fat binary allows an application to be deployed on all devices. To analyze the
application with class-dump, we must create a so-called thin binary, which
contains one architecture only:
@interface FlurryUtil : ./DVIA/DVIA/DamnVulnerableIOSApp/DamnVulnerableIOSApp/YapDatabase/Extensions/Views/Internal/
{
}
+ (BOOL)appIsCracked;
+ (BOOL)deviceIsJailbroken;
Note the plus sign, which means that this is a class method that returns a BOOL
type. A minus sign would mean that this is an instance method. Refer to later
sections to understand the practical difference between these.
$ otool -L <binary>
Debugging
Debugging on iOS is generally implemented via Mach IPC. To "attach" to a
target process, the debugger process calls the task_for_pid function with the
process ID of the target process and receives a Mach port. The debugger then
registers as a receiver of exception messages and starts handling exceptions that
occur in the debugger. Mach IPC calls are used to perform actions such as
suspending the target process and reading/writing register states and virtual
memory.
The XNU kernel implements the ptrace system call, but some of the call's
functionality (including reading and writing register states and memory contents)
has been eliminated. Nevertheless, ptrace is used in limited ways by standard
debuggers, such as lldb and gdb . Some debuggers, including Radare2's iOS
debugger, don't invoke ptrace at all.
Using lldb
iOS ships with the console app debugserver, which allows remote debugging via
gdb or lldb. By default, however, debugserver can't be used to attach to arbitrary
processes (it is usually used only for debugging self-developed apps deployed
with Xcode). To enable debugging of third-party apps, the task_for_pid
entitlement must be added to the debugserver executable. An easy way to do this
is to add the entitlement to the debugserver binary shipped with Xcode.
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/<target-iOS-version>/DeveloperDiskImage.dmg
Copy the modified binary to any directory on the test device. The following
examples use usbmuxd to forward a local port through USB.
$ ./tcprelay.py -t 22:2222
$ scp -P2222 debugserver root@localhost:/tmp/
You can now attach debugserver to any process running on the device.
#on iphone
$ wget https://cydia.saurik.com/api/latest/3 -O cycript.zip &&
unzip cycript.zip
$ sudo cp -a Cycript.lib/*.dylib /usr/lib
$ sudo cp -a Cycript.lib/cycript-apl /usr/bin/cycript
$ cycript
cy#
To inject into a running process, we first need to find the process ID (PID).
Running "cycript -p" with the PID injects Cycript into the process. To illustrate,
we will inject into SpringBoard.
We have injected Cycript into SpringBoard. Let's try to trigger an alert message
on SpringBoard with Cycript.
cy# alertView = [[UIAlertView alloc] initWithTitle:@"OWASP MSTG" message:@"Mobile Security Testing Guide" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]
#"<UIAlertView: 0x1645c550; frame = (0 0; 0 0); layer =
<CALayer: 0x164df160>>"
cy# [alertView show]
cy# [alertView release]
Find the document directory with Cycript:
cy# a.delegate
Installing Frida
Frida is a runtime instrumentation framework that lets you inject JavaScript
snippets or portions of your own library into native Android and iOS apps. If
you've already read the Android section of this guide, you should be quite
familiar with this tool.
If you haven't already done so, install the Frida Python package on your host
machine:
To connect Frida to an iOS app, you need to inject the Frida runtime into the
app. This is easy to do on a jailbroken device: just install frida-server through
Cydia. Once it is installed, frida-server will automatically run with root
privileges, allowing you to easily inject code into any process.
Start Cydia and add Frida's repository by navigating to Manage -> Sources ->
Edit -> Add and entering https://build.frida.re . You should then be able to
find and install the Frida package.
Connect your device via USB and make sure that Frida works by running the
frida-ps command. This should return a list of processes running on the
device:
$ frida-ps -U
PID Name
--- ----------------
963 Mail
952 Safari
416 BTServer
422 BlueTool
791 CalendarWidget
451 CloudKeychainPro
239 CommCenter
764 ContactsCoreSpot
(...)
We'll demonstrate a few more uses for Frida below, but let's first look at what
you should do if you're forced to work on a non-jailbroken device.
Manual Repackaging
If you don't have access to a jailbroken device, you can patch and repackage the
target app to load a dynamic library at startup. This way, you can instrument the
app and do pretty much everything you need to do for a dynamic analysis (of
course, you can't break out of the sandbox this way, but you won't often need to).
However, this technique works only if the app binary isn't FairPlay-encrypted
(i.e., it wasn't obtained from the App Store).
We'll use optool , Apple's build tools, and some shell commands. Our method
is inspired by Vincent Tan's Swizzler project. The NCC group has described an
alternative repackaging method.
To reproduce the steps listed below, download UnCrackable iOS App Level 1
from the OWASP Mobile Testing Guide repo. Our goal is to make the
UnCrackable app load FridaGadget.dylib during startup so we can instrument
the app with Frida.
Please note that the following steps apply to macOS only, as Xcode is only
available for macOS.
If you've developed and deployed iOS apps with Xcode before, you already have
your own code-signing certificate installed. Use the security tool to list your
signing identities:
Apple will issue a free development provisioning profile even if you're not a
paying developer. You can obtain the profile via Xcode and your regular Apple
account: simply create an empty iOS project and extract
embedded.mobileprovision from the app container, which is in the Xcode build
directory.
Once you've obtained the provisioning profile, you can check its contents with
the security tool. You'll find the entitlements granted to the app in the profile,
along with the allowed certificates and devices. You'll need these for code-
signing, so extract them to a separate plist file as shown below. Have a look at
the file contents to make sure everything is as expected.
$ security cms -D -i AwesomeRepackaging.mobileprovision >
profile.plist
$ /usr/libexec/PlistBuddy -x -c 'Print :Entitlements'
profile.plist > entitlements.plist
$ cat entitlements.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>application-identifier</key>
<string>LRUD9L355Y.sg.vantagepoint.repackage</string>
<key>com.apple.developer.team-identifier</key>
<string>LRUD9L355Y</string>
<key>get-task-allow</key>
<true/>
<key>keychain-access-groups</key>
<array>
<string>LRUD9L355Y.*</string>
</array>
</dict>
</plist>
Other Preparations
To make our app load an additional library at startup, we need some way of
inserting an additional load command into the main executable's Mach-O header.
Optool can be used to automate this process:
We'll also use ios-deploy, a tool that allows iOS apps to be deployed and
debugged without Xcode:
The last line in both the optool and ios-deploy code snippets creates a symbolic
link and makes the executable available system-wide.
zsh: # . ~/.zshrc
bash: # . ~/.bashrc
To execute the examples below, you need FridaGadget.dylib :
$ curl -O
https://build.frida.re/frida/ios/lib/FridaGadget.dylib
We'll be using standard tools that come with macOS and Xcode in addition to the
tools mentioned above. Make sure you have the Xcode command line developer
tools installed.
$ unzip UnCrackable_Level1.ipa
$ cp FridaGadget.dylib Payload/UnCrackable\ Level\ 1.app/
$ optool install -c load -p
"@executable_path/FridaGadget.dylib" -t Payload/UnCrackable\
Level\ 1.app/UnCrackable\ Level\ 1
Found FAT Header
Found thin header...
Found thin header...
Inserting a LC_LOAD_DYLIB command for architecture: arm
Successfully inserted a LC_LOAD_DYLIB command for arm
Inserting a LC_LOAD_DYLIB command for architecture: arm64
Successfully inserted a LC_LOAD_DYLIB command for arm64
Writing executable to Payload/UnCrackable Level
1.app/UnCrackable Level 1...
Of course, such blatant tampering invalidates the main executable's code
signature, so this won't run on a non-jailbroken device. You'll need to replace the
provisioning profile and sign both the main executable and FridaGadget.dylib
with the certificate listed in the profile.
$ cp AwesomeRepackaging.mobileprovision Payload/UnCrackable\
Level\ 1.app/embedded.mobileprovision
Next, we need to make sure that the BundleID in Info.plist matches the one
specified in the profile because the codesign tool will read the Bundle ID from
Info.plist during signing; the wrong value will lead to an invalid signature.
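Because a BundleID mismatch silently produces an invalid signature, it's worth checking this mechanically. The following illustrative Python sketch (the function name is our own) compares the CFBundleIdentifier against the application-identifier entitlement extracted earlier with PlistBuddy:

```python
import plistlib

def check_bundle_id(info_plist_bytes, entitlements_plist_bytes):
    """Given the raw contents of Info.plist and of the entitlements plist,
    verify that CFBundleIdentifier matches the application-identifier
    entitlement, which is prefixed with the team ID
    (e.g. 'LRUD9L355Y.sg.vantagepoint.repackage')."""
    bundle_id = plistlib.loads(info_plist_bytes)["CFBundleIdentifier"]
    app_id = plistlib.loads(entitlements_plist_bytes)["application-identifier"]
    # drop the leading team-ID component before comparing
    return app_id.split(".", 1)[1] == bundle_id
```

If this returns False, fix Info.plist before running codesign; otherwise the re-signed app will be rejected at install time.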
Finally, we use the codesign tool to re-sign both binaries. You need to use your
signing identity (in this example
8004380F331DCA22CC1B47FB1A805890AE41C938), which you can output
by executing the command security find-identity -p codesigning -v .
If everything went well, the app should start in debugging mode with lldb
attached. Frida should then be able to attach to the app as well. You can verify
this via the frida-ps command:
$ frida-ps -U
PID Name
--- ------
499 Gadget
Troubleshooting
When something goes wrong (and it usually does), mismatches between the
provisioning profile and code-signing header are the most likely causes. Reading
the official documentation helps you understand the code-signing process.
Apple's entitlement troubleshooting page is also a useful resource.
Run Safari on the device and make sure the device is connected via USB. Then
start frida-trace :
Next, navigate to a new website in Safari. You should see traced function calls
on the frida-trace console. Note that the initWithURL: method is called to
initialize a new URL request object.
/* TID 0xc07 */
20313 ms -[NSURLRequest _initWithCFURLRequest:0x1043bca30 ]
20313 ms -[NSURLRequest URL]
(...)
21324 ms -[NSURLRequest initWithURL:0x106388b00 ]
21324 ms | -[NSURLRequest initWithURL:0x106388b00
cachePolicy:0x0 timeoutInterval:0x106388b80
We can look up the declaration of this method on the Apple Developer Website:
- (instancetype)initWithURL:(NSURL *)url;
The method is called with a single argument of type NSURL . According to the
documentation, the NSURL class has a property called absoluteString , whose
value should be the absolute URL represented by the NSURL object.
We now have all the information we need to write a Frida script that intercepts
the initWithURL: method and prints the URL passed to the method. The full
script is below. Make sure you read the code and inline comments to understand
what's going on.
import sys
import frida

# JavaScript to be injected
frida_code = """

    // Obtain a reference to the initWithURL: method of the NSURLRequest class
    var URL = ObjC.classes.NSURLRequest["- initWithURL:"];

    // Intercept the method
    Interceptor.attach(URL.implementation, {
        onEnter: function(args) {
            // Initialize an autorelease pool before interacting with Objective-C APIs
            var pool = ObjC.classes.NSAutoreleasePool.alloc().init();

            // Obtain a reference to NSLog and use it to print the URL value;
            // args[2] refers to the first method argument (NSURL *url)
            var NSLog = new NativeFunction(Module.findExportByName('Foundation', 'NSLog'), 'void', ['pointer', '...']);
            NSLog(args[2].absoluteString_());
            pool.release();
        }
    });
"""

def message_callback(message, data):
    print(message)

process = frida.get_usb_device().attach("Safari")
script = process.create_script(frida_code)
script.on('message', message_callback)
script.load()
sys.stdin.read()
Start Safari on the iOS device. Run the above Python script on your connected
host and open the device log (we'll explain how to open device logs in the
following section). Try opening a new URL in Safari; you should see Frida's
output in the logs.
Of course, this example illustrates only one of the things you can do with Frida.
To unlock the tool's full potential, you should learn to use its JavaScript API.
The documentation section of the Frida website has a tutorial and examples of
Frida usage on iOS.
To identify the exact location of the application folder, you can use the tool
ipainstaller:
Overview
Jailbreak detection mechanisms are added to reverse engineering defenses to
make running the app on a jailbroken device more difficult. This blocks some of
the tools and techniques reverse engineers like to use. Like most other types of
defense, jailbreak detection is not very effective by itself, but scattering checks
throughout the app's source code can improve the effectiveness of the overall
anti-tampering scheme. A list of typical jailbreak detection techniques for iOS
was published by Trustwave.
File-based Checks
Check for files and directories typically associated with jailbreaks, such as
/Applications/Cydia.app
/Applications/FakeCarrier.app
/Applications/Icy.app
/Applications/IntelliScreen.app
/Applications/MxTube.app
/Applications/RockApp.app
/Applications/SBSettings.app
/Applications/WinterBoard.app
/Applications/blackra1n.app
/Library/MobileSubstrate/DynamicLibraries/LiveClock.plist
/Library/MobileSubstrate/DynamicLibraries/Veency.plist
/Library/MobileSubstrate/MobileSubstrate.dylib
/System/Library/LaunchDaemons/com.ikey.bbot.plist
/System/Library/LaunchDaemons/com.saurik.Cydia.Startup.plist
/bin/bash
/bin/sh
/etc/apt
/etc/ssh/sshd_config
/private/var/lib/apt
/private/var/lib/cydia
/private/var/mobile/Library/SBSettings/Themes
/private/var/stash
/private/var/tmp/cydia.log
/usr/bin/sshd
/usr/libexec/sftp-server
/usr/libexec/ssh-keysign
/usr/sbin/sshd
/var/cache/apt
/var/lib/apt
/var/lib/cydia
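On the device these checks are written in Objective-C or Swift; purely to illustrate the logic, here is a language-neutral Python sketch (the path subset and function name are our own):

```python
import os

# illustrative subset of the jailbreak artifacts listed above
JAILBREAK_PATHS = [
    "/Applications/Cydia.app",
    "/Library/MobileSubstrate/MobileSubstrate.dylib",
    "/bin/bash",
    "/etc/apt",
    "/usr/sbin/sshd",
]

def found_artifacts(paths, exists=os.path.exists):
    """Return the artifacts that are present. The `exists` predicate is
    injectable so the same logic can run against any back end (for example
    a file listing collected over SSH)."""
    return [p for p in paths if exists(p)]
```

The injectable predicate also shows why such checks are weak on their own: an attacker who hooks the underlying file-existence API controls the result.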
NSError *error;
NSString *stringToBeWritten = @"This is a test.";
[stringToBeWritten writeToFile:@"/private/jailbreak.txt"
atomically:YES
encoding:NSUTF8StringEncoding error:&error];
if(error==nil){
//Device is jailbroken
return YES;
} else {
//Device is not jailbroken
[[NSFileManager defaultManager]
removeItemAtPath:@"/private/jailbreak.txt" error:nil];
}
You can check protocol handlers by attempting to open a Cydia URL. The Cydia
app store, which practically every jailbreaking tool installs by default, installs
the cydia:// protocol handler.
In the first case, make sure the application is fully functional on non-jailbroken
devices. The application may be crashing or it may have a bug that causes it to
terminate. This may happen while you're testing a preproduction version of the
application.
Let's again look at bypassing jailbreak detection using the Damn Vulnerable iOS
application as an example. After loading the binary into Hopper, you need to
wait until the application is fully disassembled (look at the top bar to check the
status). Then look for the "jail" string in the search box. You'll see two classes:
SFAntiPiracy and JailbreakDetectionVC . You may want to decompile the
functions to see what they are doing and, in particular, what they return.
As you can see, there is a class method ( +[SFAntiPiracy isJailbroken] ) and
an instance method ( -[JailbreakDetectionVC isJailbroken] ). The main
difference is that we can inject Cycript in the app and call the class method
directly, whereas the instance method requires first looking for instances of
the target class. The function choose will look in the memory heap for known
signatures of a given class and return an array of instances. Putting an
application into a desired state (so that the class is indeed instantiated) is
important.
Let's inject Cycript into our process (look for your PID with top ):
As you can see, our class method was called directly, and it returned "true."
Now, let's call the -[JailbreakDetectionVC isJailbroken] instance method.
First, we have to call the choose function to look for instances of the
JailbreakDetectionVC class.
cy# a=choose(JailbreakDetectionVC)
[]
Ooops! The return value is an empty array. That means that there are no
instances of this class registered in the runtime. In fact, we haven't clicked the
second "Jailbreak Test" button, which initializes this class:
cy# a=choose(JailbreakDetectionVC)
[#"<JailbreakDetectionVC: 0x14ee15620>"]
cy# [a[0] isJailbroken]
true
Now you understand why having your application in a desired state is important.
At this point, bypassing jailbreak detection with Cycript is trivial. We can see
that the function returns a boolean; we just need to replace the return value. We
can replace the return value by replacing the function implementation with
Cycript. Please note that this will actually replace the function under its given
name, so beware of side effects if the function modifies anything in the
application:
cy# JailbreakDetectionVC.prototype.isJailbroken=function()
{return false}
cy# [a[0] isJailbroken]
false
In this case we have bypassed the jailbreak detection of the application!
Now, imagine that the application is closing immediately after detecting that the
device is jailbroken. You don't have time to launch Cycript and replace the
function implementation. Instead, you have to use CydiaSubstrate, employ a
proper hooking function like MSHookMessageEx , and compile the tweak. There
are good sources for how to do this; however, we will provide a potentially faster
and more flexible approach.
$ frida-trace -U -f /Applications/DamnVulnerableIOSApp.app/DamnVulnerableIOSApp -m "-[JailbreakDetectionVC isJailbroken]"
Instrumenting functions...
-[JailbreakDetectionVC isJailbroken]: Loaded handler at "./__handlers__/__JailbreakDetectionVC_isJailbroken_.js"
Started tracing 1 function. Press Ctrl+C to stop.
Function [JailbreakDetectionVC isJailbroken] originally
returned:0x1
Changing the return value to:0x0
/* TID 0x303 */
6890 ms -[JailbreakDetectionVC isJailbroken]
Function [JailbreakDetectionVC isJailbroken] originally
returned:0x1
Changing the return value to:0x0
22475 ms -[JailbreakDetectionVC isJailbroken]
Frida is a very powerful and versatile tool. Refer to the documentation for more
details.
Please see below a Python script for hooking Objective-C methods and native
functions:
import frida
import sys

def on_message(message, data):
    print(message)

try:
    session = frida.get_usb_device().attach("Target Process")
except frida.ProcessNotFoundError:
    print("Failed to attach to the target process. Did you launch the app?")
    sys.exit(0)

script = session.create_script("""

    // Hook -[UIApplication canOpenURL:] to log protocol-handler checks
    var canOpenURL = ObjC.classes.UIApplication["- canOpenURL:"];

    Interceptor.attach(canOpenURL.implementation, {
        onEnter: function(args) {
            var url = ObjC.Object(args[2]);
            send("[UIApplication canOpenURL:] " + url.toString());
        },
        onLeave: function(retval) {
            send("canOpenURL returned: " + retval);
        }
    });

    // Hook -[NSFileManager fileExistsAtPath:] and hide typical jailbreak files
    var fileExistsAtPath = ObjC.classes.NSFileManager["- fileExistsAtPath:"];
    var hideFile = 0;

    Interceptor.attach(fileExistsAtPath.implementation, {
        onEnter: function(args) {
            var path = ObjC.Object(args[2]);
            // send("[NSFileManager fileExistsAtPath:] " + path.toString());
            if (path.toString() == "/Applications/Cydia.app" ||
                path.toString() == "/bin/bash") {
                hideFile = 1;
            }
        },
        onLeave: function(retval) {
            if (hideFile) {
                send("Hiding jailbreak file...");
                retval.replace(0);
                hideFile = 0;
            }
        }
    });

    /* If the above doesn't work, hook the low-level file APIs as well:

        var openat = Module.findExportByName("libsystem_c.dylib", "openat");
        var stat = Module.findExportByName("libsystem_c.dylib", "stat");
        var fopen = Module.findExportByName("libsystem_c.dylib", "fopen");
        var open = Module.findExportByName("libsystem_c.dylib", "open");
        var faccessat = Module.findExportByName("libsystem_kernel.dylib", "faccessat");

    */
""")

script.on('message', on_message)
script.load()
sys.stdin.read()
Anti-Debugging Checks
Overview
Debugging and exploring applications are helpful during reversing. Using a
debugger, a reverse engineer can not only track critical variables but also read
and modify memory.
Because debugging can be abused in so many ways, application developers use a
number of techniques to prevent it. These are called anti-debugging techniques.
As discussed in the "Testing Resiliency Against Reverse Engineering" chapter
for Android, anti-debugging techniques can be preventive or reactive.
Preventive techniques prevent the debugger from attaching to the application at
all; reactive techniques detect the presence of a debugger and allow the
application to diverge from expected behavior.
There are several anti-debugging techniques; a few of them are discussed below.
Using ptrace
iOS runs on an XNU kernel. The XNU kernel implements a ptrace system
call that's not as powerful as the Unix and Linux implementations. The XNU
kernel exposes another interface via Mach IPC to enable debugging. The iOS
implementation of ptrace serves an important function: preventing the
debugging of processes. This feature is implemented as the
PT_DENY_ATTACH option of the ptrace syscall. Using
PT_DENY_ATTACH is a fairly well-known anti-debugging technique, so you
may encounter it often during iOS pentests.
PT_DENY_ATTACH
This request is the other operation used by the traced process;
it allows a process that's not currently being traced to deny
future traces by its parent. All other arguments are ignored.
If the process is currently being traced, it will exit with the
exit status of ENOTSUP; otherwise, it sets a flag that denies
future traces. An attempt by the parent to trace a process
which has set this flag will result in the segmentation
violation in the parent.
In other words, using ptrace with PT_DENY_ATTACH ensures that no other
debugger can attach to the calling process; if a debugger attempts to attach, the
process will terminate.
Before diving into the details, it is important to know that ptrace is not part of
the public iOS API. Non-public APIs are prohibited, and the App Store may
reject apps that include them. Because of this, ptrace is not called directly
in the code; instead, a ptrace function pointer is obtained via dlsym and the
call is made through that pointer.
#import <dlfcn.h>
#import <sys/types.h>
#import <stdio.h>
typedef int (*ptrace_ptr_t)(int _request, pid_t _pid, caddr_t
_addr, int _data);
void anti_debug() {
ptrace_ptr_t ptrace_ptr = (ptrace_ptr_t)dlsym(RTLD_SELF,
"ptrace");
ptrace_ptr(31, 0, 0, 0); // PT_DENY_ATTACH = 31
}
Using sysctl
sysctl can also be used to retrieve information about the current process
(such as its process flags). The following snippet fills a kinfo_proc
structure for the current PID; the returned kp_proc.p_flag can then be tested
for the P_TRACED flag:
struct kinfo_proc info;
size_t size;
int mib[4];
int junk;
// Initialize the flag so that, if sysctl fails, we get a predictable result.
info.kp_proc.p_flag = 0;
// Select the process-info branch of the sysctl MIB for our own PID.
mib[0] = CTL_KERN;
mib[1] = KERN_PROC;
mib[2] = KERN_PROC_PID;
mib[3] = getpid();
// Call sysctl.
size = sizeof(info);
junk = sysctl(mib, sizeof(mib) / sizeof(*mib), &info, &size, NULL, 0);
assert(junk == 0);
When the code above is compiled, the disassembled version of the second half of
the code is similar to the following:
After the instruction at offset 0xC13C, MOVNE R0, #1 is patched and changed
to MOVNE R0, #0 ( 0x00 0x20 in byte-code), the patched code is similar to
the following:
You can bypass a sysctl check by using the debugger itself and setting a
breakpoint at the call to sysctl . This approach is demonstrated in iOS Anti-
Debugging Protections #2.
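What the sysctl check ultimately tests is the P_TRACED bit of kp_proc.p_flag (0x00000800 in XNU's sys/proc.h). The bit test itself is trivial; here it is sketched in Python purely for illustration:

```python
P_TRACED = 0x00000800  # "process is being traced", from XNU's sys/proc.h

def is_being_debugged(p_flag):
    """Given the kp_proc.p_flag value filled in by the sysctl call,
    report whether a debugger is currently attached."""
    return bool(p_flag & P_TRACED)
```

This is also why the single-instruction patch described above is sufficient: flipping the result of one bit test defeats the whole check.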
Overview
There are two topics related to file integrity:
1. Application source code integrity checks: In the "Tampering and Reverse
Engineering" chapter, we discussed the iOS IPA application signature
check. We also saw that determined reverse engineers can easily bypass this
check by re-packaging and re-signing an app using a developer or
enterprise certificate. One way to make this harder is to add an internal run-
time check that determines whether the signatures still match at run time.
2. File storage integrity checks: when the application stores files or key-
value pairs (in the keychain, UserDefaults / NSUserDefaults , a SQLite
database, or a Realm database), their integrity should be protected.
Apple takes care of integrity checks with DRM. However, additional controls
(such as in the example below) are possible. The mach_header is parsed to
calculate the start of the instruction data, which is used to generate the signature.
Next, the signature is compared to the given signature. Make sure that the
generated signature is stored or coded somewhere else.
        while(1) {
            // return strcmp(originalSignature, signature) == 0; // verify signatures match
            return 0;
        }
    }
    cmd = (struct load_command *)((uint8_t *)cmd + cmd->cmdsize);
  }
}
When ensuring the integrity of the application storage itself, you can create an
HMAC or signature over either a given key-value pair or a file stored on the
device. The CommonCrypto implementation is best for creating an HMAC. If
you need encryption, make sure that you encrypt and then HMAC as described
in Authenticated Encryption.
When you generate an HMAC with CC:
1. Get the data as NSMutableData .
2. Get the data key (from the keychain, if possible).
3. Calculate the hash value.
4. Append the hash value to the actual data.
5. Store the results of step 4.
Alternatively, you can use NSData for steps 1 and 3, but you'll need to create
a new buffer for step 4.
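On iOS the HMAC itself would come from CommonCrypto ( CCHmac ); the append-and-verify flow can be sketched language-neutrally, with Python's standard hmac / hashlib modules standing in:

```python
import hmac
import hashlib

TAG_LEN = 32  # HMAC-SHA256 output size

def protect(data, hmac_key):
    """Append an HMAC-SHA256 tag to the data before storing it."""
    tag = hmac.new(hmac_key, data, hashlib.sha256).digest()
    return data + tag

def verify(blob, hmac_key):
    """Split stored blob into data and tag, then recompute the tag and
    compare in constant time."""
    data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(hmac_key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("storage integrity check failed")
    return data
```

If the stored value must also be confidential, encrypt first and HMAC the ciphertext, as the Authenticated Encryption reference above prescribes.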
1. Retrieve the data from the device, as described in the section on device
binding.
2. Alter the retrieved data and return it to storage.
Effectiveness Assessment
For the application source code integrity checks: run the app on the device in
an unmodified state and make sure that everything works. Then apply patches to
the executable using optool, re-sign the app as described in the chapter
"Basic Security Testing," and run it. The app should detect the modification
and respond in some way. At the very least, the app should alert the user
and/or terminate. Work on bypassing the defenses and answer the following
questions:
For the storage integrity checks: a similar approach works. Answer the
following questions:
Device Binding
Overview
The purpose of device binding is to impede an attacker who tries to copy an app
and its state from device A to device B and continue the execution of the app on
device B. After device A has been determined trusted, it may have more
privileges than device B. This situation shouldn't change when an app is copied
from device A to device B.
Since iOS 7.0, hardware identifiers (such as MAC addresses) are off-limits. The
ways to bind an application to a device are based on identifierForVendor ,
storing something in the keychain, or using Google's InstanceID for iOS. See the
"Remediation" section for more details.
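As a sketch of the fingerprinting idea (the property names, values, and derivation below are illustrative, not a prescribed scheme), several device-specific properties can be keyed-hashed into one stable fingerprint:

```python
import hashlib
import hmac

def derive_fingerprint(app_key, properties):
    """Keyed-hash several device properties into one stable fingerprint.
    Sorting the keys makes the result independent of insertion order."""
    canonical = "|".join("%s=%s" % (k, properties[k]) for k in sorted(properties))
    return hmac.new(app_key, canonical.encode("utf-8"), hashlib.sha256).hexdigest()
```

Combining several properties (e.g. identifierForVendor plus a keychain-stored nonce) means an attacker must copy every input, not just one identifier, to move the app's state to another device.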
Static Analysis
When the source code is available, there are a few bad coding practices you can
look for, such as
MAC addresses: there are several ways to find the MAC address. When
you use CTL_NET (a network subsystem) or NET_RT_IFLIST (getting the
configured interfaces) or when the mac-address gets formatted, you'll often
see formatting code for printing, such as "%x:%x:%x:%x:%x:%x" .
using the UDID: [[[UIDevice currentDevice] identifierForVendor]
UUIDString]; and UIDevice.current.identifierForVendor?.uuidString
in Swift3.
any keychain- or filesystem-based binding that isn't protected by
SecAccessControlCreateFlags or that uses protection classes such
as kSecAttrAccessibleAlways and
kSecAttrAccessibleAlwaysThisDeviceOnly .
Dynamic Analysis
There are several ways to test the application binding.
Dynamic Analysis with a Simulator
Take the following steps when you want to verify app-binding in a simulator:
ID>/data/Library/Keychains .
4. Start the application on another simulator and find its data location as
described in step 3.
5. Stop the application on the second simulator. Overwrite the existing data
with the data copied in step 3.
6. Can you continue in an authenticated state? If so, then binding may not be
working properly.
We are saying that the binding "may" not be working because not everything is
unique in simulators.
Dynamic Analysis Using Two Jailbroken Devices
Take the following steps when you want to verify app-binding with two
jailbroken devices:
/private/var/mobile/Containers/Data/Application/<Application
uuid> .
SSH into the directory indicated by the given command's output or use
SCP ( scp <ipaddress>:/<folder_found_in_previous_step>
targetfolder ) to copy the folders and their data. You can use an FTP
Swift2). These may not be available after you reinstall the application if no
other applications from the same vendor are installed.
You can store something in the keychain to identify the application's
instance. To make sure that this data is not backed up, use
kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly (if you want to
secure the data and properly enforce a passcode requirement) or
kSecAttrAccessibleWhenUnlockedThisDeviceOnly .
Any scheme based on these methods will be more secure the moment a passcode
and/or Touch ID is enabled and the materials stored in the Keychain or
filesystem are protected with protection classes (such as
kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly and
kSecAttrAccessibleWhenUnlockedThisDeviceOnly ).
References
Dana Geist, Marat Nigmatullin: Jailbreak/Root Detection Evasion Study on
iOS and Android
OWASP MASVS
V8.1: "The app detects, and responds to, the presence of a rooted or
jailbroken device either by alerting the user or terminating the app."
V8.9: "All executable files and libraries belonging to the app are either
encrypted on the file level and/or important code and data segments inside
the executables are encrypted or packed. Trivial static analysis does not
reveal important code or data."
V8.10: "Obfuscation is applied to programmatic defenses, which in turn
impede de-obfuscation via dynamic analysis."
V8.11: "The app implements a 'device binding' functionality using a device
fingerprint derived from multiple properties unique to the device."
V8.13: "If the goal of obfuscation is to protect sensitive computations, an
obfuscation scheme is used that is both appropriate for the particular task
and robust against manual and automated de-obfuscation methods,
considering currently published research. The effectiveness of the
obfuscation scheme must be verified through manual testing. Note that
hardware-based isolation features are preferred over obfuscation whenever
possible."
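The "device binding" fingerprint required by V8.11 above can be sketched as a hash over several device properties. The property names and values below are hypothetical stand-ins; a real iOS app would gather identifiers such as identifierForVendor and a Keychain-stored instance ID through platform APIs:

```python
import hashlib

def device_fingerprint(properties: dict) -> str:
    # Sort the keys so the fingerprint does not depend on insertion order.
    canonical = "|".join(f"{k}={properties[k]}" for k in sorted(properties))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical property values; a real implementation reads these from the device.
fp = device_fingerprint({
    "vendor_id": "A1B2C3D4-E5F6-0000-0000-000000000000",  # e.g. identifierForVendor
    "model": "iPhone9,3",
    "instance_id": "stored-in-keychain",
})
print(fp)
```

Comparing the stored fingerprint against a freshly derived one at startup (or server-side, during authentication) is what makes the two-device test described earlier fail when binding is implemented correctly.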
Tools
Frida - http://frida.re/
Keychain Dumper - https://github.com/ptoomey3/Keychain-Dumper
Appsync Unified - https://cydia.angelxwind.net/?page/net.angelxwind.appsyncunified
Testing Tools
Various tools are available for security testing: they let you manipulate
requests and responses, decompile apps, investigate the behavior of running
apps, and automate many of these test cases.
Interception Proxies
Burp Suite - Burp Suite is an integrated platform for performing security
testing of applications.
OWASP ZAP - The OWASP Zed Attack Proxy (ZAP) is a free security tool
which can help you automatically find security vulnerabilities in your web
applications and web services.
Fiddler - Fiddler is an HTTP debugging proxy server application which
captures HTTP and HTTPS traffic and logs it for the user to review. Fiddler
can also be used to modify HTTP traffic for troubleshooting purposes as it
is being sent or received.
Charles Proxy - HTTP proxy / HTTP monitor / Reverse Proxy that enables
a developer to view all of the HTTP and SSL / HTTPS traffic between their
machine and the Internet.
IDEs
Android Studio - Android Studio is the official integrated development
environment (IDE) for Google's Android operating system, built on JetBrains'
IntelliJ IDEA software and designed specifically for Android development.
IntelliJ - IntelliJ IDEA is a Java integrated development environment (IDE)
for developing computer software.
Eclipse - Eclipse is an integrated development environment (IDE) used in
computer programming, and is the most widely used Java IDE.
Xcode - Xcode is an integrated development environment (IDE) available
only for macOS to create apps for iOS, watchOS, tvOS and macOS.
Suggested Reading
Mobile App Security
Android
Dominic Chell, Tyrone Erasmus, Shaun Colley, Ollie Whitehouse (2015)
The Mobile Application Hacker's Handbook. Wiley. Available at:
http://www.wiley.com/WileyCDA/WileyTitle/productCd-1118958500.html
Joshua J. Drake, Zach Lanier, Collin Mulliner, Pau Oliva, Stephen A.
Ridley, Georg Wicherski (2014) Android Hacker's Handbook. Wiley.
Available at: http://www.wiley.com/WileyCDA/WileyTitle/productCd-
111860864X.html
Godfrey Nolan (2014) Bulletproof Android. Addison-Wesley Professional.
Available at: https://www.amazon.com/Bulletproof-Android-Practical-
Building-Developers/dp/0133993329
iOS
Charlie Miller, Dionysus Blazakis, Dino Dai Zovi, Stefan Esser, Vincenzo
Iozzo, Ralf-Philipp Weinmann (2012) iOS Hacker's Handbook. Wiley.
Available at: http://www.wiley.com/WileyCDA/WileyTitle/productCd-
1118204123.html
David Thiel (2016) iOS Application Security: The Definitive Guide for
Hackers and Developers. No Starch Press. Available at:
https://www.nostarch.com/iossecurity
Jonathan Levin (2017) Mac OS X and iOS Internals. Wiley. Available at:
http://newosxbook.com/index.php
Misc
Reverse Engineering
Bruce Dang, Alexandre Gazet, Elias Bachaalany (2014) Practical Reverse
Engineering. Wiley. Available at:
http://as.wiley.com/WileyCDA/WileyTitle/productCd-
1118787315,subjectCd-CSJ0.html
Snakeninny, Hangcom iOS App Reverse Engineering. Online. Available at:
https://github.com/iosre/iOSAppReverseEngineering/
Bernhard Mueller (2016) Hacking Soft Tokens - Advanced Reverse
Engineering on Android. HITB GSEC Singapore. Available at:
http://gsec.hitb.org/materials/sg2016/D1%20-
%20Bernhard%20Mueller%20-%20Attacking%20Software%20Tokens.pdf
Dennis Yurichev (2016) Reverse Engineering for Beginners. Online.
Available at: https://github.com/dennis714/RE-for-beginners
Michael Hale Ligh, Andrew Case, Jamie Levy, Aaron Walters (2014) The
Art of Memory Forensics. Wiley. Available at:
http://as.wiley.com/WileyCDA/WileyTitle/productCd-1118825098.html