Unpacking Android Security: Part 3 — Insecure Communication | by Ed Holloway-George | Aug, 2022

Photo credit: Clem Onojeghuo

👋 Hi and welcome to the third post in this series where we deep-dive into Android Security. This series focuses on the Top 10 Mobile security threats as determined by The Open Web Application Security Project (OWASP) Foundation, the leading application security community in our field.

Before diving into this post, please consider checking out the previous one, ‘Insecure Data Storage’, which is available on my site and on Medium.

⚠️ Please note that this series is for educational purposes only. Remember to only test on apps where you have permission to do so and most of all, don’t be evil.

Finally, if you enjoy this series or have any feedback, please drop me a message. Thanks!

In this helping of my series on Android Security, we shall take a look into the #3 threat to mobile application security as determined by OWASP, “Insecure Communication”.

When we talk about communication in the context of mobile security, we are actually referring to any technology that can transmit and/or receive data. This may include the device’s internet connection (via WiFi or otherwise), connection to the mobile network, Bluetooth, NFC, and so on. This unfortunately gives us a pretty broad surface area to cover 😅 So let’s address the big one today; perhaps we will revisit the others in the future!

Unless an app has some very bespoke communication functionality or none at all, it’s more than likely it communicates via the internet with one or more services¹. This could take the form of requests to an API, visiting a webpage in an in-app browser, and pretty much everything in-between. The list of possibilities is near-endless, which means there is plenty of attack surface for malicious actors to prey upon.

For the sake of brevity I won’t cover an “endless list”; instead, let us cover some of the blindingly obvious points and answer the critical question: “How should we be making calls over the network?”

At the time of writing, we are already approaching the halfway point of the year 2022. However, believe it or not, we still live in a world where mobile apps occasionally do not transmit data via HTTPS.

“What is HTTPS and why does it even matter?”

HTTPS is the security-enriched use of the Hypertext Transfer Protocol (HTTP), the fundamental protocol (i.e. a pre-defined set of rules) used when communicating on the internet.

“Ok, so the ‘S’ stands for secure, but how does HTTP actually become HTTPS?”

HTTPS utilises Transport Layer Security (TLS)², another protocol, to communicate securely, using cryptography to encrypt data in transit between client and server. This encryption is achieved, in part, through the use of ‘digital certificates’ (more on those shortly). Whenever an application sends data via HTTPS it is encrypted, which protects against threats such as man-in-the-middle (MITM) attacks, where an attacker intercepts HTTP calls, spoofs them, and lets a victim believe they are talking to the legitimate server. It also makes eavesdropping via a compromised network extremely difficult, again thanks to the built-in encryption provided.

From an Android standpoint, if your app targets Android 9 (API level 28) and above, sending data via HTTP (i.e. cleartext) is disabled by default. However, it is possible to re-enable cleartext for all network calls through the use of the network security config file or through the manifest attribute android:usesCleartextTraffic³. In general you should never do this; instead, consider a cleartext ‘allow list’ within your network security config file, as seen below:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">example.com</domain>
    </domain-config>
</network-security-config>

Again, wherever possible, avoid doing this and instead migrate all URLs to use HTTPS. Your users (and conscience) will thank you ✨
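For completeness, the network security config file must also be registered in your AndroidManifest.xml via the android:networkSecurityConfig attribute. A minimal sketch is below; note that the file name network_security_config and the package name are conventional placeholders, not requirements:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <!-- Points Android at res/xml/network_security_config.xml -->
    <application android:networkSecurityConfig="@xml/network_security_config">
        <!-- activities, services, etc. -->
    </application>
</manifest>
```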

As previously mentioned, one key aspect of securely communicating over the internet is the use of digital certificates⁴.

In order to verify the server is actually who they say they are, some form of certificate is required to prove their identity. However, a certificate alone is not sufficient, as there would be nothing to stop an attacker from creating a fraudulent one and passing it off as legitimate. To combat this, we use a shared ‘trusted third party’ known as a ‘certificate authority’ (CA), whose signature proves a certificate is authentic, provided the CA is trusted by both the client and server⁵. Your Android device comes pre-loaded with a common list of ‘trusted’ CAs, which means you can make secure calls to the internet immediately without (usually) any issues.

However, Android also allows users to supply their own ‘trusted’ certificates. The most common use case for user certificates comes through their use with proxy software such as Charles or Fiddler, which allows the capturing of data sent by a device for debugging purposes. These tools will often provide a certificate for a user to add to their device while proxying through the software, allowing secure HTTPS calls to be decrypted and their contents to be freely viewed/modified. For debugging purposes this is often invaluable, but allowing any user certificate in a production app is a huge security lapse.

You might well be vulnerable if your network security config files contain something similar to the example below. In fact, if your app does not target Android 7.0 (API level 24) or above, it will act like this by default⁶. This example would allow user certificates on all configurations of the application, including any production/app store builds.

<!-- DO NOT DO THIS -->
<base-config>
    <trust-anchors>
        <certificates src="system" />
        <certificates src="user" />
    </trust-anchors>
</base-config>

The recommended approach is to allow user certificates to be used by your application only during debugging, i.e. in non-production, non-public-facing builds. Thankfully, this is quite straightforward to implement through the use of debug-overrides:

<debug-overrides>
    <trust-anchors>
        <certificates src="user" />
    </trust-anchors>
</debug-overrides>

Nice. 🥳

Another noteworthy point here: should your server use a self-signed certificate and not one provided by the default CAs, you can also bundle the raw PEM or DER certificate file(s) within your app to allow them to be used when networking. By adding one or more of these files to res/raw/trusted_roots, it becomes possible to import them as a certificate source using @raw/trusted_roots at the base or domain-specific level. The docs are especially helpful in this slightly more niche case, so please give them a read!
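As a sketch, assuming the certificate file(s) have been placed under res/raw/trusted_roots as described above, the trust anchors could look something like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- Trust the bundled self-signed root(s)... -->
            <certificates src="@raw/trusted_roots" />
            <!-- ...alongside the standard pre-installed system CAs -->
            <certificates src="system" />
        </trust-anchors>
    </base-config>
</network-security-config>
```

Omitting the `system` entry would mean trusting only your bundled roots, which will break any calls to other domains, so include it unless that is genuinely your intent.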

On the subject of certificates, to further bolster your security you may wish to consider (or may already have considered) certificate ‘pinning’. For those who aren’t aware, certificate pinning is the act of checking the chain of certificates for a request against an ‘expected certificate hash’ to verify it is present. In reality, this is usually the hash of one (or more) of your certificates’ public keys, hard-coded into your application as part of your network layer or, again, as part of your network security config XML file.

This is a very common approach utilised by apps to ensure they are only communicating with the expected server. However, it has a number of issues, including requiring an app release whenever a pinned certificate rotates, and needing to choose carefully which certificates in the chain to pin.
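For illustration, pinning via the network security config looks something like the sketch below. The domain and both pin values are placeholders; in a real app each pin is the base64-encoded SHA-256 hash of a certificate’s public key:

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <!-- expiration is optional, but stops a forgotten pin breaking networking forever -->
        <pin-set expiration="2023-01-01">
            <!-- Placeholder: hash of the leaf/intermediate public key you chose to pin -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <!-- Always include a backup pin in case the primary certificate is replaced -->
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```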

However, there is another slightly lesser-known but arguably much better option on the table 👀

Certificate Transparency (CT) is a growing alternative approach and addresses some of the pain points that come with working with certificate pinning. CT is achieved by CAs publicly logging when they have issued a new certificate to a log server. Critically, these log servers are ‘append only’, meaning they can only add new data to them. When a new certificate from a CA is issued, the log server issues a signed certificate timestamp (SCT) which can be verified by a client. As there is no longer a reliance on public keys of certificates, the need to release an app when certificates rotate is eliminated and the certificate’s authenticity can be guaranteed.

Matt Dolan’s talk “Move over certificate pinning. Certificate Transparency is here!” gives an excellent insight into how to implement this in your apps, as well as a great overview of both pinning and CT.

You’d be amazed how many times I have downloaded an app to my device, used it for a while, opened logcat and to my horror been greeted by the networking calls I just made staring me in the face. I really wish I was joking. Login credentials, full headers, the works. I’ve seen pretty much everything. Worst of all, I saw it just this month⁷.

Many of us will use OkHttp within our app’s networking layer, and thus potentially also be using the HttpLoggingInterceptor to print networking calls to logcat. There is absolutely nothing wrong with this, so long as this code does not make it into the production/public-facing version of your app.
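One way to keep the interceptor out of release builds is to attach it conditionally. A minimal sketch, assuming OkHttp and its logging-interceptor artifact are on the classpath (buildClient is a hypothetical helper; pass BuildConfig.DEBUG at the call site):

```kotlin
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor

// Hypothetical factory: only attaches the logging interceptor in debug builds.
fun buildClient(isDebug: Boolean): OkHttpClient =
    OkHttpClient.Builder()
        .apply {
            if (isDebug) {
                // BODY logs headers and bodies - never ship this to production
                addInterceptor(HttpLoggingInterceptor().apply {
                    level = HttpLoggingInterceptor.Level.BODY
                })
            }
        }
        .build()
```

Better still, declare the dependency with `debugImplementation` in Gradle so the logging code is never even compiled into release builds.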

You should certainly consider disabling all logging in your production apps by using R8 to strip the Log class of its functionality. This can be achieved by adding the following rule to your obfuscation file:

-assumenosideeffects class android.util.Log {
    public static boolean isLoggable(java.lang.String, int);
    public static int v(...);
    public static int i(...);
    public static int w(...);
    public static int d(...);
    public static int e(...);
}

-assumenosideeffects strips any calls to the provided methods from within your app, as it assumes the return values aren’t used. This will ensure no logs are output by your app.

However, this extreme approach may not work for everyone. Alternatives include using custom logger implementations (through libraries such as Timber) to discard logs when the app is built using a certain flavor, or just do not use Log at all 😅 The choice is very much up to you and your situation!

I do apologise if this sounds condescending, and admittedly this is not just limited to this particular area of security, but often a sprinkling of common sense when working with areas such as networking can make a big difference.

Try to think about the data you send to the server and the impact it might have if it was intercepted.

Here are some example questions you could be asking yourself:

  • Do I need to be sending {XYZ} data in this network call?
    – You should ideally be sending the minimum amount of information required
  • Is {XYZ} dangerous in the wrong hands?
    – What could an attacker do with the information should it be intercepted?
    – What is the worst-case scenario?
  • Does {XYZ} require appropriate authorization and/or authentication to use?
    – Can the networking be locked down further using roles or identity verification?⁸

I hope these examples kick-start a conversation with yourself, your team and hopefully other engineers!

In the upcoming posts within this series, we shall explore more of the OWASP Top 10 for Mobile. Next up is #4: Insecure Authentication.

Thanks as always for reading! I hope you found this post interesting. Please feel free to tweet me any feedback at @Sp4ghettiCode, and don’t forget to clap, like, tweet, share, star etc.

Further Reading


[1] Assuming you remembered to add the android.permission.INTERNET permission… 😬

[2] “The artist formerly known as Secure Sockets Layer (SSL)” 👨‍🎤 — TLS mostly replaced SSL when it was deprecated in 2015

[3] Available as of Android 6.0 (API 23) but ignored in API 24+ if a network security file exists

[4] These certificates are still commonly known by some as ‘SSL Certificates’, despite SSL being deprecated in favor of TLS

[5] Yes, I am afraid to say that in 2022 the entire internet hinges on this trust of third parties. Oh, and it does go wrong. Some nightmare fuel for you there, you’re welcome!

[6] The ‘default network security behavior’ of apps targeting different versions is described in the docs here.

[7] Well, ‘this month’ at the time of writing. It was also responsibly disclosed to the relevant security team of the company

[8] Keep an eye on my blog for more on this in future…
