Google Security Blog

Have an iPhone? Use it to protect your Google Account with the Advanced Protection Program

Google Security Blog - Wed, 01/15/2020 - 9:00am
Posted by Christiaan Brand, Product Manager, Google Cloud and Kaiyu Yan, Software Engineer, Google

Phishing—when an online attacker tries to trick you into giving them your username and password—is one of the most common causes of account compromises. We recently partnered with The Harris Poll to survey 500 high-risk users (politicians and their staff, journalists, business executives, activists, online influencers) living in the U.S. Seventy-four percent of them reported having been the target of a phishing attempt or compromised by a phishing attack.

Gmail automatically blocks more than 100 million phishing emails every day and warns people who are targeted by government-backed attackers. You can further strengthen the security of your Google Account by enrolling in the Advanced Protection Program—our strongest set of security protections, which automatically helps defend against the evolving methods attackers use to gain access to your personal and work Google Accounts and data.

Security keys are an important feature of the Advanced Protection Program, because they provide the strongest protection against phishing attacks. In the past, you had to separately purchase and carry physical security keys. Last year, we built security keys into Android phones—and starting today, you can activate a security key on your iPhone to help protect your Google Account.

Activating the security key on your iPhone with Google’s Smart Lock app
Security keys use public-key cryptography to verify your identity and the URL of the login page, so that an attacker can’t access your account even if they have your username and password. Unlike other two-factor authentication (2FA) methods that try to verify your sign-in, security keys are built to FIDO standards, which provide the strongest protection against automated bots, bulk phishing attacks, and targeted phishing attacks. You can learn more about security keys in our Cloud Next ‘19 presentation.
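To illustrate why this defeats phishing, here is a minimal, self-contained sketch—our own illustration, not Google’s implementation or the real FIDO/WebAuthn message format; the class and method names are invented. The key idea it models is accurate: the signed data includes the origin the browser actually saw, so a signature produced on a look-alike site never verifies for the real one.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;

// Sketch of the FIDO origin-binding idea: the client signs data that names
// the origin it actually connected to, so a phishing page cannot replay
// the assertion against the legitimate site.
public class OriginBoundAssertion {
    // Simplified stand-in for WebAuthn's clientDataJSON, which includes the origin.
    static byte[] clientData(String origin, String challenge) {
        return ("{\"origin\":\"" + origin + "\",\"challenge\":\"" + challenge + "\"}")
                .getBytes(StandardCharsets.UTF_8);
    }

    static byte[] sign(PrivateKey key, byte[] data) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(key);
        s.update(data);
        return s.sign();
    }

    // The server only accepts assertions whose signed payload names its own origin.
    static boolean verify(PublicKey key, String expectedOrigin, String challenge,
                          String claimedOrigin, byte[] sig) throws GeneralSecurityException {
        if (!expectedOrigin.equals(claimedOrigin)) return false;
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initVerify(key);
        s.update(clientData(claimedOrigin, challenge));
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("EC").generateKeyPair();
        String challenge = "random-server-challenge";

        // Legitimate sign-in: the browser reports the real origin.
        byte[] good = sign(kp.getPrivate(), clientData("https://accounts.google.com", challenge));
        System.out.println(verify(kp.getPublic(), "https://accounts.google.com", challenge,
                "https://accounts.google.com", good)); // true

        // Phishing proxy: the signed payload names the attacker's origin, so it fails.
        byte[] bad = sign(kp.getPrivate(), clientData("https://accounts.g00gle.example", challenge));
        System.out.println(verify(kp.getPublic(), "https://accounts.google.com", challenge,
                "https://accounts.g00gle.example", bad)); // false
    }
}
```

Because a password is the same string wherever you type it, it can be phished; the signature above is only valid for the one origin it was produced for.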

Approving the sign-in to a Google Account with Google’s Smart Lock app on an iPhone
On your iPhone, the security key can be activated with Google’s Smart Lock app; on your Android phone, the functionality is built in. The security key in your phone uses Bluetooth to verify your sign-in on Chrome OS, iOS, macOS and Windows 10 devices without requiring you to pair your devices. This helps protect your Google Account on virtually any device with the convenience of your phone.

How to get started

Follow these simple steps to help protect your personal or work Google Account today:
  • Activate your phone’s security key (Android 7+ or iOS 10+)
  • Enroll in the Advanced Protection Program
  • When signing in to your Google Account, make sure Bluetooth is turned on for both your phone and the device you’re signing in on.
We also highly recommend registering a backup security key to your account and keeping it in a safe place, so you can get into your account if you lose your phone. You can get a security key from a number of vendors, including Google, with our own Titan Security Key.

If you’re a Google Cloud customer, you can find out more about the Advanced Protection Program for the enterprise on our G Suite Updates blog.

Here’s to stronger account security—right in your pocket.
Categories: Google Security Blog

Securing open-source: how Google supports the new Kubernetes bug bounty

Google Security Blog - Tue, 01/14/2020 - 12:35pm
Posted by Maya Kaczorowski, Product Manager, Container Security and Aaron Small, Product Manager, GKE On-Prem Security

At Google, we care deeply about the security of open-source projects, as they’re such a critical part of our infrastructure—and indeed everyone’s. Today, the Cloud-Native Computing Foundation (CNCF) announced a new bug bounty program for Kubernetes that we helped create and get up and running. Here’s a brief overview of the program, other ways we help secure open-source projects and information on how you can get involved.

Launching the Kubernetes bug bounty program

Kubernetes is a CNCF project. As part of its graduation criteria, the CNCF recently funded the project’s first security audit, to review its core areas and identify potential issues. The audit identified and addressed several previously unknown security issues. Thankfully, Kubernetes already had a Product Security Committee, including engineers from the Google Kubernetes Engine (GKE) security team, who respond to and patch any newly discovered bugs. But the job of securing an open-source project is never done. To increase awareness of Kubernetes’ security model, attract new security researchers, and reward ongoing efforts in the community, the Kubernetes Product Security Committee began discussions in 2018 about launching an official bug bounty program.

Find Kubernetes bugs, get paid

What kind of bugs does the bounty program recognize? Most of the content you’d think of as ‘core’ Kubernetes is in scope. We’re interested in common kinds of security issues like remote code execution, privilege escalation, and bugs in authentication or authorization. Because Kubernetes is a community project, we’re also interested in the Kubernetes supply chain, including build and release processes that might allow a malicious individual to gain unauthorized access to commits, or otherwise affect build artifacts. This is a bit different from your standard bug bounty, as there isn’t a ‘live’ environment for you to test—Kubernetes can be configured in many different ways, and we’re looking for bugs that affect any of those (except when existing configuration options could mitigate the bug). Thanks to the CNCF’s ongoing support and funding of this new program, you can be rewarded with a bounty anywhere from $100 to $10,000, depending on the bug.
The bug bounty program has been in a private release for several months, with invited researchers submitting bugs to help us test the triage process. And today, the new Kubernetes bug bounty program is live! We’re excited to see what kind of bugs you discover, and we are ready to respond to new reports. You can learn more about the program and how to get involved here.

Dedicated to Kubernetes security

Google has been involved in this new Kubernetes bug bounty from the get-go: proposing the program, completing vendor evaluations, defining the initial scope, testing the process, and onboarding HackerOne to implement the bug bounty solution. Though this is a big effort, it’s part of our ongoing commitment to securing Kubernetes. Google continues to be involved in every part of Kubernetes security, including responding to vulnerabilities as part of the Kubernetes Product Security Committee, chairing the sig-auth Kubernetes special interest group, and leading the aforementioned Kubernetes security audit. We realize that security is a critical part of any user’s decision to use an open-source tool, so we dedicate resources to help ensure we’re providing the best possible security for Kubernetes and GKE.

Although the Kubernetes bug bounty program is new, it isn’t a novel strategy for Google. We have enjoyed a close relationship with the security research community for many years and, in 2010, Google established our own Vulnerability Rewards Program (VRP). The VRP provides rewards for vulnerabilities reported in GKE and virtually all other Google Cloud services. (If you find a bug in GKE that isn’t specific to Kubernetes core, you should still report it to the Google VRP!) Nor is Kubernetes the only open-source project with a bug bounty program. In fact, we recently expanded our Patch Rewards program to provide financial rewards both upfront and after-the-fact for security improvements to open-source projects.

Help keep the world’s infrastructure safe. Report a bug to the Kubernetes bug bounty, or a GKE bug to the Google VRP.

PHA Family Highlights: Bread (and Friends)

Google Security Blog - Thu, 01/09/2020 - 4:01pm
Posted by Alec Guertin and Vadim Kotov, Android Security & Privacy Team
In this edition of our PHA Family Highlights series we introduce Bread, a large-scale billing fraud family. We first started tracking Bread (also known as Joker) in early 2017, identifying apps designed solely for SMS fraud. As the Play Store has introduced new policies and Google Play Protect has scaled defenses, Bread apps were forced to continually iterate to search for gaps. They have at some point used just about every cloaking and obfuscation technique under the sun in an attempt to go undetected. Many of these samples appear to be designed specifically to attempt to slip into the Play Store undetected and are not seen elsewhere. In this post, we show how Google Play Protect has defended against a well organized, persistent attacker and share examples of their techniques.

  • Google Play Protect detected and removed 1.7k unique Bread apps from the Play Store before they were ever downloaded by users
  • Bread apps originally performed SMS fraud, but have largely abandoned this for WAP billing following the introduction of new Play policies restricting use of the SEND_SMS permission and increased coverage by Google Play Protect
  • More information on stats and relative impact is available in the Android Security 2018 Year in Review report
BILLING FRAUD

Bread apps typically fall into two categories: SMS fraud (older versions) and toll fraud (newer versions). Both of these types of fraud take advantage of mobile billing techniques involving the user’s carrier.
SMS Billing

Carriers may partner with vendors to allow users to pay for services by SMS. The user simply needs to text a prescribed keyword to a prescribed number (shortcode). A charge is then added to the user’s bill with their mobile service provider.
Toll Billing

Carriers may also provide payment endpoints over a web page. The user visits the URL to complete the payment and enters their phone number. Verification that the request is coming from the user’s device is completed using two possible methods:
  1. The user connects to the site over mobile data, not WiFi (so the service provider directly handles the connection and can validate the phone number); or
  2. The user must retrieve a code sent to them via SMS and enter it into the web page (thereby proving access to the provided phone number).
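The second method can be sketched with a minimal, self-contained example—our own illustration, not any carrier’s real system; the class and method names are invented:

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Sketch of SMS-code verification for toll billing: the carrier texts a
// one-time code to the phone number, then checks that the code entered on
// the web page matches, proving access to that phone.
public class TollBillingVerifier {
    private final Map<String, String> pendingCodes = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Step 1: the user enters a phone number; the carrier sends a code over SMS.
    public String sendCode(String phoneNumber) {
        String code = String.format("%06d", random.nextInt(1_000_000));
        pendingCodes.put(phoneNumber, code);
        return code; // in reality, delivered via SMS rather than returned
    }

    // Step 2: the code typed into the web page proves access to the phone.
    public boolean verify(String phoneNumber, String enteredCode) {
        return enteredCode != null && enteredCode.equals(pendingCodes.remove(phoneNumber));
    }
}
```

Note that this proves access to the device, not the presence of a human: malware that can read the SMS inbox can complete the loop automatically, which is exactly the weakness described next.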
Fraud

Both of the billing methods detailed above provide device verification, but not user verification. The carrier can determine that the request originates from the user’s device, but does not require any interaction from the user that cannot be automated. Malware authors use injected clicks, custom HTML parsers and SMS receivers to automate the billing process without requiring any interaction from the user.
STRING & DATA OBFUSCATION

Bread apps have used many innovative and classic techniques to hide strings from analysis engines. Here are some highlights.
Standard Encryption

Frequently, Bread apps take advantage of standard crypto libraries in `javax.crypto`. We have discovered apps using AES, Blowfish, and DES as well as combinations of these to encrypt their strings.
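The pattern looks roughly like the following sketch—our own illustration, not code from a real sample; the key, cipher mode, and strings are invented:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of runtime string decryption with javax.crypto: sensitive strings
// ship as Base64-encoded AES ciphertext and are decrypted on first use.
public class StringDecryptor {
    static String decrypt(String base64Ciphertext, byte[] key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"));
        byte[] plain = cipher.doFinal(Base64.getDecoder().decode(base64Ciphertext));
        return new String(plain, StandardCharsets.UTF_8);
    }

    // Included so the sketch round-trips; malware would ship only ciphertext.
    static String encrypt(String plaintext, byte[] key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return Base64.getEncoder().encodeToString(
                cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8)));
    }
}
```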
Custom Encryption

Other variants have used custom-implemented encryption algorithms. Some common techniques include basic XOR encryption, nested XOR, and custom key-derivation methods. Some variants have gone so far as to use a different key for the strings of each class.
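A per-class XOR scheme might look like this sketch—the key-derivation rule here is our own invented example, not one recovered from a sample:

```java
// Sketch of a custom XOR string cipher with per-class key derivation.
public class XorDeobfuscator {
    // Hypothetical derivation: mix the class name into a shared seed, so each
    // class's strings decrypt with a different key.
    static byte[] deriveKey(String className, byte[] seed) {
        byte[] name = className.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] key = new byte[seed.length];
        for (int i = 0; i < seed.length; i++) {
            key[i] = (byte) (seed[i] ^ name[i % name.length]);
        }
        return key;
    }

    // XOR is its own inverse, so one routine both encrypts and decrypts.
    static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }
}
```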
Split Strings

Encrypted strings can be a signal that the code is trying to hide something. Bread has used a few tricks to keep strings in plaintext while preventing basic string matching.
String click_code = new StringBuilder().append(".cli").append("ck();").toString();
Going one step further, these substrings are sometimes scattered throughout the code, retrieved from static variables and method calls. Various versions may also change the index of the split (e.g. “.clic” and “k();”).
Delimiters

Another technique to obfuscate unencrypted strings uses repeated delimiters. A short, constant string of characters is inserted at strategic points to break up keywords:
String js = "javm6voTascrm6voTipt:window.SDFGHWEGSG.catcm6voThPage(docm6voTument.getElemm6voTentsByTm6voTagName('html')[m6voT0].innerHTML);";
At runtime, the delimiter is removed before using the string:
js = js.replaceAll("m6voT", "");
API OBFUSCATION

SMS and toll fraud generally require a few basic behaviors (for example, disabling WiFi or accessing SMS), which are accessible by a handful of APIs. Given that there are a limited number of behaviors required to identify billing fraud, Bread apps have had to try a wide variety of techniques to mask usage of these APIs.
Reflection

Most methods for hiding API usage tend to use Java reflection in some way. In some samples, Bread has simply called the reflection API directly on strings decrypted at runtime.
Class smsManagerClass = Class.forName(
        p.a().decrypt("wI7HmhUo0OYTnO2rFy3yxE2DFECD2I9reFnmPF3LuAc=")); // android.telephony.SmsManager
smsManagerClass.getMethod(
        p.a().decrypt("0oXNjC4kzLwqnPK9BiL4qw=="), // sendTextMessage
        String.class, String.class, String.class, PendingIntent.class, PendingIntent.class)
    .invoke(
        smsManagerClass.getMethod(p.a().decrypt("xoXXrB8n1b0LjYfIYUObrA==")).invoke(null), // getDefault
        addr, null, message, null, null);
JNI

Bread has also tested our ability to analyze native code. In one sample, no SMS-related code appears in the DEX file, but there is a native method registered.
public static native void nativesend(String arg0, String arg1);
Two strings are passed into the call: the shortcode and keyword used for SMS billing (getter methods renamed here for clarity).
JniManager.nativesend(this.get_shortcode(), this.get_keyword());
The native library stores the strings needed to access the SMS API.
The nativesend method uses the Java Native Interface (JNI) to fetch and call the Android SMS API. The following is a screenshot from IDA, with comments showing the strings and JNI functions.
WebView JavaScript Interface

Continuing on the theme of cross-language bridges, Bread has also tried out some obfuscation methods utilizing JavaScript in WebViews. The following method is declared in the DEX.
public void method1(String p7, String p8, String p9, String p10, String p11) {
Class v0_1 = Class.forName(p7);
Class[] v1_1 = new Class[0];
Object[] v3_1 = new Object[0];
Object v1_3 = v0_1.getMethod(p8, v1_1).invoke(0, v3_1);
Class[] v2_2 = new Class[5];
v2_2[0] = String.class;
v2_2[1] = String.class;
v2_2[2] = String.class;
v2_2[3] = PendingIntent.class;
v2_2[4] = PendingIntent.class;
reflect.Method v0_2 = v0_1.getMethod(p9, v2_2);
Object[] v2_4 = new Object[5];
v2_4[0] = p10;
v2_4[1] = 0;
v2_4[2] = p11;
v2_4[3] = 0;
v2_4[4] = 0;
v0_2.invoke(v1_3, v2_4);
}
Without context, this method does not reveal much about its intended behavior, and there are no calls made to it anywhere in the DEX. However, the app does create a WebView and registers a JavaScript interface to this class.
this.webView.addJavascriptInterface(this, "stub");
This gives JavaScript run in the WebView access to this method. The app loads a URL pointing to a Bread-controlled server. The response contains some basic HTML and JavaScript.
In green, we can see the references to the SMS API. In red, we see those values being passed into the suspicious Java method through the registered interface. Now, using these strings method1 can use reflection to call sendTextMessage and process the payment.
PACKING

In addition to implementing custom obfuscation techniques, apps have used several commercially available packers, including Qihoo360, AliProtect and SecShell.
More recently, we have seen Bread-related apps trying to hide malicious code in a native library shipped with the APK. Earlier this year, we discovered apps hiding a JAR in the data section of an ELF file, which they then dynamically load using DexClassLoader.
The figure below shows a fragment of the encrypted JAR stored in the .rodata section of a shared object shipped with the APK, as well as the XOR key used for decryption.
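The unpacking step this describes—locating an XOR-encrypted JAR inside a shared object’s data—can be sketched as follows. This is our own illustration, not the malware’s actual routine; the offsets, key, and scanning strategy are invented:

```java
import java.util.Arrays;

// Sketch: scan a byte blob (standing in for a .rodata section) for an
// XOR-encrypted ZIP/JAR, which begins with the magic bytes "PK\x03\x04"
// once decrypted, then decrypt the payload for dynamic loading.
public class EmbeddedPayloadFinder {
    static final byte[] ZIP_MAGIC = {0x50, 0x4B, 0x03, 0x04};

    // Returns the offset where XOR-decrypting with `key` yields ZIP magic, or -1.
    static int findEncryptedJar(byte[] blob, byte key) {
        for (int off = 0; off + ZIP_MAGIC.length <= blob.length; off++) {
            boolean match = true;
            for (int i = 0; i < ZIP_MAGIC.length; i++) {
                if ((byte) (blob[off + i] ^ key) != ZIP_MAGIC[i]) { match = false; break; }
            }
            if (match) return off;
        }
        return -1;
    }

    // XOR-decrypts `length` bytes starting at `offset`.
    static byte[] decrypt(byte[] blob, int offset, int length, byte key) {
        byte[] out = Arrays.copyOfRange(blob, offset, offset + length);
        for (int i = 0; i < out.length; i++) out[i] ^= key;
        return out;
    }
}
```

On Android, the decrypted bytes would then be written to a file and handed to DexClassLoader, as described above.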
After we blocked those samples, they moved a significant portion of malicious functionality into the native library, which resulted in a rather peculiar back and forth between Dalvik and native code.
COMMAND & CONTROL

Dynamic Shortcodes & Content

Early versions of Bread utilized a basic command and control infrastructure to dynamically deliver content and retrieve billing details. In the example server response below, the green fields show text to be shown to the user. The red fields are used as the shortcode and keyword for SMS billing.
State Machines

Since various carriers implement the billing process differently, Bread has developed several variants containing generalized state machines implementing all possible steps. At runtime, the apps can check which carrier the device is connected to and fetch a configuration object from the command and control server. The configuration contains a list of steps to execute with URLs and JavaScript.
"trankUrl": ""
"params":"function jsFun(){document.getElementsByTagName('a')[1].click()};",
The steps implemented include:
  • Load a URL in a WebView
  • Run JavaScript in WebView
  • Toggle WiFi state
  • Toggle mobile data state
  • Read/modify SMS inbox
  • Solve captchas
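The steps above suggest a simple interpreter loop; here is a minimal sketch of such a state machine. It is our own model of the behavior described, not recovered code—the step names and trace output are invented, and the real apps would drive a WebView and radio APIs rather than return strings:

```java
import java.util.List;

// Sketch of a generalized billing state machine: the app fetches an ordered
// list of steps from the C&C server and dispatches on each step type.
public class BillingStateMachine {
    enum Action { LOAD_URL, RUN_JS, TOGGLE_WIFI, TOGGLE_DATA, READ_SMS, SOLVE_CAPTCHA }

    static class Step {
        final Action action;
        final String params; // URL or JavaScript, depending on the action
        Step(Action action, String params) { this.action = action; this.params = params; }
    }

    // Executes each configured step in order; returns a trace for illustration.
    static List<String> run(List<Step> steps) {
        List<String> trace = new java.util.ArrayList<>();
        for (Step step : steps) {
            switch (step.action) {
                case LOAD_URL:      trace.add("load " + step.params); break;
                case RUN_JS:        trace.add("inject " + step.params); break;
                case TOGGLE_WIFI:   trace.add("wifi toggled"); break;
                case TOGGLE_DATA:   trace.add("mobile data toggled"); break;
                case READ_SMS:      trace.add("sms inbox read"); break;
                case SOLVE_CAPTCHA: trace.add("captcha solved"); break;
            }
        }
        return trace;
    }
}
```

Because the step list comes from the server, the same APK can target any carrier’s billing flow without an update.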
Captchas

One of the more interesting states implements the ability to solve basic captchas (obscured letters and numbers). First, the app creates a JavaScript function to call a Java method, getImageBase64, exposed to WebView using addJavascriptInterface.
The value used to replace GET_IMG_OBJECT comes from the JSON configuration.
"params": "document.getElementById('captcha')"
The app then uses JavaScript injection to create a new script in the carrier’s web page to run the new function.
The base64-encoded image is then uploaded to an image recognition service. If the text is retrieved successfully, the app uses JavaScript injection again to submit the HTML form with the captcha answer.
CLOAKING

Client-side Carrier Checks

In our basic command & control example above, we didn’t address the (incorrectly labeled) “imei” field.
"button": "ยินดีต้อนรับ",
"code": 0,
"content": "F10",
"imei": "52003,52005,52000",
"rule": "Here are all the pictures you need, about
happiness, beauty, beauty, etc., with our most
sincere service, to provide you with the most
complete resources.",
"service": "4219245"
This contains the Mobile Country Code (MCC) and Mobile Network Code (MNC) values that the billing process will work for. In this example, the server response contains several values for Thai carriers. The app checks if the device’s network matches one of those provided by the server. If it does, it will commence with the billing process. If the value does not match, the app skips the “disclosure” page and billing process and brings the user straight to the app content.
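The client-side check described here amounts to a simple membership test against the device’s operator code. A minimal sketch (our own illustration; the method names are invented):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the client-side cloaking check: the server sends a comma-separated
// list of MCC+MNC values (e.g. "52003,52005,52000" for Thai carriers) and the
// app compares them against the device's own network operator code.
public class CarrierCheck {
    // On Android, deviceMccMnc would come from TelephonyManager.getNetworkOperator().
    static boolean isTargetedCarrier(String deviceMccMnc, String serverList) {
        List<String> targets = Arrays.asList(serverList.split(","));
        return targets.contains(deviceMccMnc);
    }
}
```

If the check fails, the app behaves innocuously, which is what makes this an effective cloaking technique against analysts outside the targeted countries.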
In some versions, the server would only return valid responses several days after the apps were submitted.
Server-side Carrier Checks

In the JavaScript bridge API obfuscation example covered above, the server supplied the app with the necessary strings to complete the billing process. However, analysts may not always see the indicators of compromise in the server’s response.
In this example, the app’s requests to the server include an “operator” query parameter carrying the device’s Mobile Country Code and Mobile Network Code. The server can use this information to determine if the user’s carrier is one of Bread’s targets. If not, the response is scrubbed of the strings used to complete the billing fraud.
<a onclick="Sub()">ไปเดี๋ยวนี้</a>
<div style="display:none">
<p id="deviceid">deadbeefdeadbeefdeadbeefdeadbeef</p>
<p id="cmobi"></p>
<p id="deni"></p>
<p id="ssm"></p>
<p id="shortcode"></p>
<p id="keyword"></p>
MISLEADING USERS

Bread apps sometimes display a pop-up to the user that implies some form of compliance or disclosure, showing terms and conditions or a confirm button. However, the actual text would often only display a basic welcome message.
Translation: “This app is a place to be and it will feel like a superhero with this new app. We hope you enjoy it!”
Other versions included all the pieces needed for a valid disclosure message.
When translated the disclosure reads:
“Apply Car Racing Clip \n Please enter your phone number for service details. \n Terms and Conditions \nFrom 9 Baht / day, you will receive 1 message / day. \nPlease stop the V4 printing service at 4739504 \n or call 02-697-9298 \n Monday - Friday at 8.30 - 5.30pm \n”
However, there are still two issues here:
  1. The numbers to contact for cancelling the subscription are not real
  2. The billing process commences even if you don’t hit the “Confirm” button
Even if the disclosure here displayed accurate information, the user would often find that the advertised functionality of the app did not match the actual content. Bread apps frequently contain no functionality beyond the billing process or simply clone content from other popular apps.
VERSIONING

Bread has also leveraged an abuse tactic unique to app stores: versioning. Some apps have started with clean versions, in an attempt to grow user bases and build the developer accounts’ reputations. Only later is the malicious code introduced, through an update. Interestingly, early “clean” versions contain varying levels of signals that the updates will include malicious code later. Some are first uploaded with all the necessary code except the one line that actually initializes the billing process. Others may have the necessary permissions, but are missing the classes containing the fraud code. And others have all malicious content removed, except for log comments referencing the payment process. All of these methods attempt to space out the introduction of possible signals in various stages, testing for gaps in the publication process. However, Google Play Protect does not treat new apps and updates any differently from an analysis perspective.
FAKE REVIEWS

When early versions of apps are first published, many five-star reviews appear with comments like:

“very beautiful”
Later, 1 star reviews from real users start appearing with comments like:
“The app is not honest …”
SUMMARY

Sheer volume appears to be the preferred approach for Bread developers. At different times, we have seen three or more active variants using different approaches or targeting different carriers. Within each variant, the malicious code present in each sample may look nearly identical with only one evasion technique changed. Sample 1 may use AES-encrypted strings with reflection, while Sample 2 (submitted on the same day) will use the same code but with plaintext strings.
At peak times of activity, we have seen up to 23 different apps from this family submitted to Play in one day. At other times, Bread appears to abandon hope of making a variant successful and we see a gap of a week or longer before the next variant. This family showcases the amount of resources that malware authors now have to expend. Google Play Protect is constantly updating detection engines and warning users of malicious apps installed on their device.
SELECTED SAMPLES

Package Name                        SHA-256 Digest
com.rabbit.artcamera                18c277c7953983f45f2fe6ab4c7d872b2794c256604e43500045cb2b2084103f
org.horoscope.astrology.predict     6f1a1dbeb5b28c80ddc51b77a83c7a27b045309c4f1bff48aaff7d79dfd4eb26
com.theforest.rotatemarswallpaper   4e78a26832a0d471922eb61231bc498463337fed8874db5f70b17dd06dcb9f09
com.jspany.temp                     0ce78efa764ce1e7fb92c4de351ec1113f3e2ca4b2932feef46d7d62d6ae87f5
                                    780936deb27be5dceea20a5489014236796a74cc967a12e36cb56d9b8df9bc86
com.rongnea.udonood                 8b2271938c524dd1064e74717b82e48b778e49e26b5ac2dae8856555b5489131
com.mbv.a.wp                        01611e16f573da2c9dbc7acdd445d84bae71fecf2927753e341d8a5652b89a68
                                    b4822eeb71c83e4aab5ddfecfb58459e5c5e10d382a2364da1c42621f58e119b