Android’s defense-in-depth strategy applies not only to the Android OS running on the Application Processor (AP) but also to the firmware that runs on devices. We particularly prioritize hardening the cellular baseband given its unique combination of running with elevated privileges and parsing untrusted inputs that are remotely delivered to the device.

This post covers how to use two high-value sanitizers which can prevent specific classes of vulnerabilities found within the baseband. They are architecture agnostic, suitable for bare-metal deployment, and should be enabled in existing C/C++ code bases to mitigate unknown vulnerabilities. Beyond security, addressing the issues uncovered by these sanitizers improves code health and overall stability, reducing resources spent addressing bugs in the future.

An increasingly popular attack surface

As we outlined previously, security research focused on the baseband has highlighted a consistent lack of exploit mitigations in firmware. Baseband Remote Code Execution (RCE) exploits have their own category in well-known third-party marketplaces, with relatively low payouts. This suggests that baseband bugs may be abundant and/or not too complex to find and exploit, and their prominent inclusion in the marketplace demonstrates that they are useful.

Baseband security and exploitation have been a recurring theme at security conferences for the last decade. Researchers have also made a dent in this area in well-known exploitation contests. Most recently, the area has become prominent enough that practical baseband exploitation trainings are commonly offered at top security conferences.

Acknowledging this trend, combined with the severity and apparent abundance of these vulnerabilities, last year we introduced updates to the severity guidelines of Android’s Vulnerability Rewards Program (VRP). For example, we consider vulnerabilities allowing Remote Code Execution (RCE) in the cellular baseband to be of CRITICAL severity.

Mitigating Vulnerability Root Causes with Sanitizers

Common classes of vulnerabilities can be mitigated through the use of sanitizers provided by Clang-based toolchains, which insert runtime checks against these bug classes. GCC-based toolchains may provide some level of support for these flags as well, but they are not considered further in this post. We encourage you to check your toolchain’s documentation.

Two sanitizers included in Undefined Behavior Sanitizer (UBSan) will be our focus – Integer Overflow Sanitizer (IntSan) and BoundsSanitizer (BoundSan). Both have been widely deployed in Android userspace for years following a data-driven approach, and both are well suited to bare-metal environments such as the baseband: they require no support from the OS or specific architecture features, and so are generally supported for all Clang targets.
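Because the checks can be lowered to plain trap instructions, no sanitizer runtime needs to be ported to the firmware. A minimal sketch of how both sanitizers might be enabled for a bare-metal target (the target triple and file name below are illustrative, not from any real build):

```c
/*
 * Sketch: enabling IntSan and BoundSan in trapping mode for a
 * bare-metal target. Illustrative Clang invocation:
 *
 *   clang --target=arm-none-eabi -O2 \
 *     -fsanitize=signed-integer-overflow,unsigned-integer-overflow,bounds \
 *     -fsanitize-trap=all \
 *     -c modem_parser.c
 *
 * With -fsanitize-trap=all, every failed check lowers to a trap
 * instruction instead of a call into the UBSan runtime, so no OS
 * or libc support is required.
 */
```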

Integer Overflow Sanitizer (IntSan)

IntSan causes signed and unsigned integer overflows to abort execution unless the overflow is made explicit. While unsigned integer overflow is technically defined behavior, it can often lead to unintentional behavior and vulnerabilities – especially when the overflowed value is used to index into an array.
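As an illustration of the failure mode IntSan guards against, here is a sketch of a length computation over untrusted values (hypothetical code, not taken from any real baseband):

```c
#include <stdint.h>

/* Compiled with -fsanitize=signed-integer-overflow,unsigned-integer-overflow,
 * the addition below is checked at runtime. */
uint32_t total_len(uint32_t hdr_len, uint32_t payload_len) {
    /* If the sum wraps past UINT32_MAX, IntSan traps here instead
     * of silently returning a small value that could later cause a
     * buffer to be under-allocated. */
    return hdr_len + payload_len;
}

/* Intentional overflow can be made explicit with checked-arithmetic
 * builtins, which the sanitizer does not flag. */
int total_len_checked(uint32_t a, uint32_t b, uint32_t *out) {
    return __builtin_add_overflow(a, b, out) ? -1 : 0; /* -1 on wrap */
}
```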

As both intentional and unintentional overflows are likely present in most code bases, IntSan may require refactoring and annotating the code base to prevent intentional or benign overflows from trapping (which we consider false positives for our purposes). Overflows that need to be addressed can be uncovered via testing (see the Deploying Sanitizers section).
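Where wrap-around is deliberate, Clang’s no_sanitize function attribute is one way to annotate it so the function does not trap. A sketch using a 32-bit FNV-1a hash, a typical source of benign overflow:

```c
#include <stddef.h>
#include <stdint.h>

/* A deliberately wrapping computation can be exempted from IntSan
 * with the no_sanitize attribute while the rest of the file stays
 * instrumented. */
__attribute__((no_sanitize("unsigned-integer-overflow")))
static uint32_t fnv1a(const uint8_t *p, size_t n) {
    uint32_t h = 2166136261u;  /* FNV-1a offset basis */
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 16777619u;        /* intentional modular multiplication */
    }
    return h;
}
```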

BoundsSanitizer (BoundSan)

BoundSan inserts instrumentation to perform bounds checks around some array accesses. These checks are only added if the compiler cannot prove at compile time that the access is safe, and only if the size of the array is known at runtime so that it can be checked against. Note that this will not cover all array accesses, as the size of the array may not be known at runtime, such as when arrays are passed as function arguments.
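A short sketch of that distinction (hypothetical code): BoundSan can check the first access below because the array’s extent is visible to the compiler, but not the second:

```c
/* Built with -fsanitize=bounds. */
#define NUM_CHANNELS 8
static int gains[NUM_CHANNELS];

int get_gain(int idx) {
    /* The bound NUM_CHANNELS is known, so BoundSan emits a runtime
     * check: any idx outside [0, NUM_CHANNELS) traps. */
    return gains[idx];
}

int first_gain(int *table) {
    /* 'table' is a bare pointer with no known extent, so this
     * access cannot be instrumented. */
    return table[0];
}
```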

As long as the code is correctly written C/C++, BoundSan should produce no false positives. Any violation discovered when first enabling BoundSan is at least a bug, if not a vulnerability. Resolving even those that aren’t exploitable can greatly improve stability and code quality.

Modernize your toolchains

Adopting modern mitigations also means adopting (and maintaining) modern toolchains. The benefits go beyond sanitizers, however. Maintaining an old toolchain is not free and entails hidden opportunity costs: toolchains contain bugs that are addressed in subsequent releases, newer toolchains bring new performance optimizations (valuable in the highly constrained bare-metal environment that basebands operate in), and security issues can even exist in the code generated by out-of-date compilers.

Maintaining a modern, up-to-date toolchain for the baseband entails some maintenance cost, especially at first if the existing toolchain is particularly old, but over time the benefits outlined above outweigh that cost.

Where to apply sanitizers

Both BoundSan and IntSan have a measurable performance overhead. Although we were able to significantly reduce this overhead in the past (for example to less than 1% in media codecs), even very small increases in CPU load can have a substantial impact in some environments.

Enabling sanitizers over the entire codebase provides the most benefit, but enabling them in security-critical attack surfaces – such as the code that parses untrusted inputs delivered over the air – can serve as a first step in an incremental deployment, as sketched below.
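One way such a rollout might be staged is per-object in the build system, instrumenting the most exposed parsing code first. A minimal sketch, assuming a GNU make build (the object names are hypothetical):

```c
/*
 * Sketch of an incremental rollout via per-target flags in a GNU
 * make fragment:
 *
 *   SANITIZE_FLAGS := -fsanitize=bounds,signed-integer-overflow,unsigned-integer-overflow \
 *                     -fsanitize-trap=all
 *
 *   # Instrument only the over-the-air message parsers first...
 *   nas_parser.o rrc_parser.o: CFLAGS += $(SANITIZE_FLAGS)
 *
 *   # ...then widen coverage module by module as the measured
 *   # overhead allows.
 */
```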

Systems such as Gmail, YouTube and Google Play rely on text classification models to identify harmful content including phishing attacks, inappropriate comments, and scams. These types of texts are harder for machine learning models to classify because bad actors rely on adversarial text manipulations to actively attempt to evade the classifiers. For example, they will use homoglyphs, invisible characters, and keyword stuffing to bypass defenses. 

To help make text classifiers more robust and efficient, we’ve developed a novel, multilingual text vectorizer called RETVec (Resilient & Efficient Text Vectorizer) that helps models achieve state-of-the-art classification performance and drastically reduces computational cost. Today, we’re sharing how RETVec has been used to help protect Gmail inboxes.

Strengthening the Gmail Spam Classifier with RETVec

Figure 1. RETVec-based Gmail Spam filter improvements.

Over the past year, we battle-tested RETVec extensively inside Google to evaluate its usefulness and found it to be highly effective for security and anti-abuse applications. In particular, replacing the Gmail spam classifier’s previous text vectorizer with RETVec allowed us to improve the spam detection rate over the baseline by 38% and reduce the false positive rate by 19.4%. Additionally, using RETVec reduced the TPU usage of the model by 83%, making the RETVec deployment one of the largest defense upgrades in recent years. RETVec achieves these improvements by sporting a very lightweight word embedding model (~200k parameters), which allows us to reduce the Transformer model’s size at equal or better performance, and by being able to split the computation between the host and TPU in a network- and memory-efficient manner.

RETVec Benefits

RETVec achieves these improvements by combining a novel, highly compact character encoder, an augmentation-driven training regime, and the use of metric learning. The architecture details and benchmark evaluations are available in our NeurIPS 2023 paper, and we have open-sourced RETVec on GitHub.

Due to its novel architecture, RETVec works out-of-the-box on every language and all UTF-8 characters without the need for text preprocessing, making it the ideal candidate for on-device, web, and large-scale text classification deployments. Models trained with RETVec exhibit faster inference speed due to its compact representation. Having smaller models reduces computational costs and decreases latency, which is critical for large-scale applications and on-device models.

Figure 2. RETVec architecture diagram.

Models trained with RETVec can be seamlessly converted to TFLite for mobile and edge devices, as a result of a native implementation in TensorFlow Text. For web application model deployment, we provide a TensorFlow.js layer implementation that is available on GitHub, and you can check out a demo web page running a RETVec-based model.

Figure 3. Typo resilience of text classification models trained from scratch using different vectorizers.

RETVec is a novel open-source text vectorizer that allows you to build more resilient and efficient server-side and on-device text classifiers. The Gmail spam filter uses it to help protect Gmail inboxes against malicious emails.

If you would like to use RETVec for your own use cases or research, we created a tutorial to help you get started.

This research was conducted by Elie Bursztein, Marina Zhang, Owen Vallis, Xinyu Jia, and Alexey Kurakin. We would like to thank Gengxin Miao, Brunno Attorre, Venkat Sreepati, Lidor Avigad, Dan Givol, Rishabh Seth, and Melvin Montenegro, as well as all the Googlers who contributed to the project.

Nearly half of third parties fail to meet two or more of the Minimum Viable Secure Product controls. Why is this a problem? Because "98% of organizations have a relationship with at least one third-party that has experienced a breach in the last 2 years."

In this post, we're excited to share the latest improvements to the Minimum Viable Secure Product (MVSP) controls. We'll also shed light on how adoption of MVSP has helped Google improve its security processes, and we hope this example will motivate third parties to increase their adoption of MVSP controls and thus improve product security across the industry.

About MVSP

In October 2021, Google publicly launched MVSP alongside launch partners. Our original goal remains unchanged: to provide a vendor-neutral application security baseline, designed to eliminate overhead, complexity, and confusion in the end-to-end process of onboarding third-party products and services. It covers themes such as procurement, security assessment, and contract negotiation.

What is Minimum Viable Secure Product (MVSP)? MVSP is a list of fundamental application security controls that should be integrated into enterprise-ready products and services. The controls are designed to be simple to implement and to provide a good foundation for building secure and resilient systems and services.

Improvements since launch

As part of MVSP’s annual control review, and our core philosophy of evolution over revolution, the working group sought input from the broader security community to ensure MVSP maintains a balance between security and achievability.

As a result of these discussions, we launched updated controls. Key changes include expanded guidance around external vulnerability reporting to protect bug hunters, and discouraging additional costs for access to basic security features – in line with CISA's "Secure-by-Design" principles.

In 2022, we developed guidance on build process security based on SLSA, to reflect the importance of supply chain security and integrity.

From an organizational perspective, in the two years since launching, we've seen the community around MVSP continue to expand. The working group has grown to over 20 global members, helping to diversify voices and broaden expertise. We've also had the opportunity to present and discuss the program with a number of key groups, including an invitation to present at the United Nations International Computing Centre – Common Secure Conference.

Google at the UNICC conference in Valencia, Spain

How Google uses MVSP

Since its inception, Google has looked to integrate improvements to our own processes using MVSP as a template. Two years later, we can clearly see the impact through faster procurement processes, streamlined contract negotiations, and improved data-driven decision making.

Highlights

  • After implementing MVSP into key areas of Google's third-party life-cycle, we've observed a 68% reduction in the time required for third parties to complete the assessment process.

  • By embedding MVSP into select procurement processes, Google has increased data-driven decision making in earlier phases of the cycle.

  • Aligning our Information Protection Addendum’s safeguards with MVSP has significantly improved our third-party privacy and security risk management processes.

68% time reduction observed after implementing MVSP

You can use MVSP to enhance your software or procurement processes by reviewing common use cases and adopting them into your third-party risk management and/or contracting workflows.

What's next?

Security Maturity Levels - Minimum, Basic, Advanced, and Expert

We're invested in helping the industry manage risk posture through continuous improvement, while increasing the minimum bar for product security across the industry.

By making MVSP available to the wider industry, we are helping to create a solid foundation for growing the maturity level of products and services. Google has benefited from driving security and safety improvements through the use of leveled sets of requirements. We expect the same to be true across the wider industry.

We've seen success, but there is still work to be done. Based on initial observations, as mentioned above, 48% of third parties fail to meet two or more of the Minimum Viable Secure Product controls.

48% of third parties fail to meet two or more MVSP controls

As an industry, we can't stand still when it comes to product security. Help us raise the minimum bar for application security by adopting MVSP and ensuring that, as an industry, we don't accept anything less than a strong security baseline.

Acknowledgements

Google and the MVSP working group would like to thank those who have supported and contributed since its inception. If you'd like to get involved or provide feedback, please reach out.

Thank you to Chris John Riley, Gabor Acs-Kurucz, Michele Chubirka, Anna Hupa, Dirk Göhmann and Kaan Kivilcim from the Google MVSP Group for their contributions to this post.

To help give users a simplified view of which apps have undergone an independent security validation, we’re introducing a new Google Play store banner for specific app types, starting with VPN apps. We’ve launched this banner beginning with VPN apps due to the sensitive and significant amount of user data these apps handle. When a user searches for VPN apps, they will now see a banner at the top of Google Play that educates them about the “Independent security review” badge in the Data Safety Section. Users also have the ability to “Learn More”, which redirects them to the App Validation Directory, a centralized place to view all VPN apps that have been independently security reviewed. Users can also discover additional technical assessment details in the App Validation Directory, helping them to make more informed decisions about what VPN apps to download, use, and trust with their data.

VPN providers such as NordVPN, Google One, ExpressVPN, and others have already undergone independent security testing and publicly declared the badge showing their good standing with the Mobile App Security Assessment (MASA) program. We encourage additional VPN app developers to undergo independent security testing, and we anticipate many will, bringing even more transparency to users. If you are a VPN developer and interested in learning more about this feature, please submit this form.

Our Commitment to App Security and Privacy Transparency on Google Play

By encouraging independent security testing and displaying security badges for validated apps, we highlight developers who prioritize user safety and data transparency. We also provide developers with app security and privacy best practices – through Play PolicyBytes videos, webinars, blog posts, the Developer Help Community, and other resources – in accordance with our developer policies that help keep Google Play safe. We are continually working on improvements to our app review process, policies, and programs to keep users safe and to help developers navigate our policies with ease. To learn more about how we give developers the tools to succeed while keeping users safe on Google Play, read the Google Safety Center article.

Our efforts to prioritize security and privacy transparency on Google Play are aligned with the needs and expectations we’ve heard from both users and developers. We believe that by prioritizing initiatives that bolster user safety and trust, we can foster a thriving app ecosystem where users can make more informed app decisions and developers are encouraged to uphold the highest standards of security and privacy.

New AI innovations and applications are reaching consumers and businesses on an almost-daily basis. Building AI securely is a paramount concern, and we believe that Google’s Secure AI Framework (SAIF) can help chart a path for creating AI applications that users can trust. Today, we’re highlighting two new ways to make information about AI supply chain security universally discoverable and verifiable, so that AI can be created and used responsibly. 

The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations. In particular, the software supply chains for components specific to AI development, such as machine learning models, need to be secured against threats including model tampering, data poisoning, and the production of harmful content.

Even as machine learning and artificial intelligence continue to evolve rapidly, some supply chain protections are already within reach of ML creators. We're building on our prior work with the Open Source Security Foundation to show how ML model creators can and should protect against ML supply chain attacks by using SLSA and Sigstore.

Supply chain security for ML

For supply chain security of conventional software (software that does not use ML), we usually consider questions like:

  • Who published the software? Are they trustworthy? Did they use safe practices?
  • For open source software, what was the source code?
  • What dependencies went into building that software?
  • Could the software have been replaced by a tampered version following publication? Could this have occurred during build time?

All of these questions also apply to the hundreds of free ML models that are available for use on the internet. Using an ML model means trusting every part of it, just as you would any other piece of software. This includes concerns such as:

  • Who published the model? Are they trustworthy? Did they use safe practices?
  • For open source models, what was the training code?
  • What datasets went into training that model?
  • Could the model have been replaced by a tampered version following publication? Could this have occurred during training time?

We should treat tampering of ML models with the same severity as we treat injection of malware into conventional software. In fact, since models are programs, many allow the same types of arbitrary code execution exploits that are leveraged for attacks on conventional software. Furthermore, a tampered model could leak or steal data, cause harm from biases, or spread dangerous misinformation. 

Inspection of an ML model is insufficient to determine whether bad behaviors were injected. This is similar to trying to reverse engineer an executable to identify malware. To protect supply chains at scale, we need to know how the model or software was created to answer the questions above.

Solutions for ML supply chain security

In recent years, we’ve seen how providing public and verifiable information about what happens during different stages of software development is an effective method of protecting conventional software against supply chain attacks. This supply chain transparency offers protection and insights with:

  • Digital signatures, such as those from Sigstore, which allow users to verify that the software wasn’t tampered with or replaced
  • Metadata such as SLSA provenance that tell us what’s in software and how it was built, allowing consumers to ensure license compatibility, identify known vulnerabilities, and detect more advanced threats

Together, these solutions help combat the enormous uptick in supply chain attacks that have turned every step in the software development lifecycle into a potential target for malicious activity.

We believe transparency throughout the development lifecycle will also help secure ML models, since ML model development follows a lifecycle similar to that of regular software artifacts:

Similarities between software development and ML model development

An ML training process can be thought of as a “build”: it transforms some input data to some output data. Similarly, training data can be thought of as a “dependency”: it is data that is used during the build process. Because of the similarity in the development lifecycles, the same software supply chain attack vectors that threaten software development also apply to model development:

Attack vectors on ML through the lens of the ML supply chain

Based on the similarities in development lifecycle and threat vectors, we propose applying the same supply chain solutions from SLSA and Sigstore to ML models to similarly protect them against supply chain attacks.

Sigstore for ML models

Code signing is a critical step in supply chain security. It identifies the producer of a piece of software and prevents tampering after publication. But code signing is normally difficult to set up: producers need to manage and rotate keys, set up infrastructure for verification, and instruct consumers on how to verify. Secrets are also often leaked, since security is hard to get right during the process.

We suggest bypassing these challenges by using Sigstore, a collection of tools and services that make code signing secure and easy. Sigstore allows any software producer to sign their software by simply using an OpenID Connect token bound to either a workload or developer identity—all without the need to manage or rotate long-lived secrets.

So how would signing ML models benefit users? By signing models after training, we can assure users that they have the exact model that the builder (aka “trainer”) uploaded. Signing models discourages model hub owners from swapping models, addresses the issue of a model hub compromise, and can help prevent users from being tricked into using a bad model. 

Model signatures make attacks similar to PoisonGPT detectable. The tampered models will either fail signature verification or can be directly traced back to the malicious actor. Our current work to encourage this industry standard includes:

  • Having ML frameworks integrate signing and verification in the model save/load APIs
  • Having ML model hubs add a badge to all signed models, thus guiding users towards signed models and incentivizing signatures from model developers
  • Scaling model signing for LLMs 

SLSA for ML Supply Chain Integrity

Signing with Sigstore provides users with confidence in the models that they are using, but it cannot answer every question they have about the model. SLSA goes a step further to provide more meaning behind those signatures. 

SLSA (Supply-chain Levels for Software Artifacts) is a specification for describing how a software artifact was built. SLSA-enabled build platforms implement controls to prevent tampering and output signed provenance describing how the software artifact was produced, including all build inputs. This way, SLSA provides trustworthy metadata about what went into a software artifact.

Applying SLSA to ML could provide similar information about an ML model’s supply chain and address attack vectors not covered by model signing, such as compromised source control, compromised training process, and vulnerability injection. Our vision is to include specific ML information in a SLSA provenance file, which would help users spot an undertrained model or one trained on bad data. Upon detecting a vulnerability in an ML framework, users can quickly identify which models need to be retrained, thus reducing costs.

We don’t need special ML extensions for SLSA. Since an ML training process is a build (shown in the earlier diagram), we can apply the existing SLSA guidelines to ML training. The ML training process should be hardened against tampering and should output provenance just like a conventional build process. More work on SLSA is needed to make it fully useful and applicable to ML, particularly around describing dependencies such as datasets and pretrained models. Most of these efforts will also benefit conventional software.

For model training pipelines that do not require GPUs/TPUs, using an existing SLSA-enabled build platform is a simple solution. For example, Google Cloud Build, GitHub Actions, and GitLab CI are all generally available SLSA-enabled build platforms. Running an ML training step on one of these platforms makes the built-in supply chain security features developed for conventional software available to ML as well.

How to do model signing and SLSA for ML today

By incorporating supply chain security into the ML development lifecycle now, while the problem space is still unfolding, we can jumpstart work with the open source community to establish industry standards to solve pressing problems. This effort is already underway and available for testing.  

Our repository of tooling for model signing and experimental SLSA provenance support for smaller ML models is available now. Our future ML framework and model hub integrations will be released in this repository as well. 

We welcome collaboration with the ML community and are looking forward to reaching consensus on how to best integrate supply chain protection standards into existing tooling (such as Model Cards). If you have feedback or ideas, please feel free to open an issue and let us know.