
The Cybersecurity and Infrastructure Security Agency (CISA), alongside the FBI, NSA, and key international cybersecurity partners, has released a new guide calling on the software industry to transition toward memory-safe programming languages (MSLs) as a critical step in reducing software vulnerabilities.
The guide, titled “Memory Safe Languages: Reducing Software Vulnerabilities in Modern Software Development,” emphasizes that a significant portion of today’s exploitable security flaws stem from memory safety issues in software written with non–memory-safe languages like C and C++.
For decades, memory safety vulnerabilities such as buffer overflows, use-after-free bugs, and data races have plagued software systems. These bugs arise primarily in languages like C and C++, which grant low-level control over memory but offer few built-in safety guarantees. The risk is far from theoretical. As the guide highlights:
“Heartbleed affected over 800,000 of the most visited websites… BadAlloc impacted embedded devices, industrial control systems, and over 195 million vehicles, demonstrating how memory vulnerabilities threaten national security and critical infrastructure.”
Google’s Project Zero found that in 2021, 67% of in-the-wild zero-days were memory safety vulnerabilities, a statistic that vividly illustrates the urgent need for systemic change.
Memory-safe languages such as Rust, Go, Java, and Swift enforce memory safety by default, drastically reducing the potential for such vulnerabilities. Unlike traditional languages that rely on developer discipline, MSLs embed safety mechanisms directly into the language:
“MSLs offer built-in safeguards that shift safety burdens from developers to the language and the development environment.”
These safeguards include features like bounds checking, strict ownership and borrowing rules (as in Rust), garbage collection (in languages like Go and Java), and runtime safety checks. These measures prevent common errors like buffer overflows and use-after-free access that are often the entry points for exploits.
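Neither the guide nor this article includes code, but a minimal Rust sketch makes the bounds-checking safeguard concrete (the buffer contents and names below are purely illustrative): an out-of-bounds read is caught and handled rather than silently returning whatever bytes sit next to the buffer, as an unchecked C read would.

```rust
fn main() {
    // A buffer of record lengths; illustrative data only.
    let lengths = vec![512u32, 1024, 2048];
    let requested = 3; // one past the end of the buffer

    // In an unchecked language, reading lengths[3] would quietly return
    // adjacent memory. Rust bounds-checks every access: `get` returns an
    // Option, forcing the out-of-range case to be handled explicitly.
    match lengths.get(requested) {
        Some(len) => println!("record length: {len}"),
        None => println!("index {requested} is out of bounds; request rejected"),
    }

    // Even a direct index cannot corrupt memory: `lengths[requested]`
    // would panic with a clear error instead of reading past the allocation.
}
```

The same principle applies to the other safeguards the guide lists: ownership rules and garbage collection remove whole categories of lifetime errors before they can become exploit entry points.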
One of the guide’s most compelling examples comes from Android. In 2019, 76% of its vulnerabilities stemmed from memory safety issues. Recognizing this, Google made a strategic shift:
“The Android team made a strategic decision to prioritize MSLs, specifically Rust and Java, for all new development… By 2024, memory safety vulnerabilities had plummeted to 24% of the total.”
A key message in the guide is practicality. While a complete overhaul of existing codebases may not be feasible, integrating MSLs into new projects and high-risk components is both achievable and impactful. The guidance encourages:
- Using MSLs for new development.
- Prioritizing high-risk components like network-facing services and file parsers.
- Employing modular designs to integrate MSLs alongside legacy code via well-defined APIs (see the sketch below).
As the guide explains:
“Starting MSL adoption is not currently practical in all circumstances or solution areas; additional investments may be necessary to reduce memory safety bugs.”
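The guide does not prescribe a particular interop mechanism, but a common pattern for the modular approach above is to expose new memory-safe components to legacy code through a C-compatible API. The Rust sketch below is an illustration only; the function name `parse_record_length` and its signature are invented for this example.

```rust
// lib.rs -- a hypothetical memory-safe parser exposed to legacy C/C++ code.
// Built as a library (crate-type = ["staticlib", "cdylib"]) and linked in.

use std::slice;

/// Parses a 4-byte big-endian length field from an untrusted buffer.
/// Returns the length, or -1 if the pointer is null or the buffer is too short.
#[no_mangle]
pub extern "C" fn parse_record_length(data: *const u8, len: usize) -> i64 {
    if data.is_null() {
        return -1;
    }
    // Trusting the caller's pointer/length pair is the only unsafe step;
    // everything past this line is bounds-checked, memory-safe Rust.
    let bytes = unsafe { slice::from_raw_parts(data, len) };
    match bytes.get(0..4) {
        Some(header) => {
            u32::from_be_bytes([header[0], header[1], header[2], header[3]]) as i64
        }
        None => -1,
    }
}
```

On the legacy side, the C or C++ caller sees only an ordinary declaration such as `int64_t parse_record_length(const uint8_t *data, size_t len);`, so high-risk parsing logic can move into an MSL without rewriting the rest of the codebase.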
MSLs not only boost security—they also enhance system reliability and developer productivity. By eliminating entire classes of bugs and supporting better debugging through runtime checks, they reduce downtime and enable faster innovation cycles.
“Early error detection during compilation or runtime testing accelerates debugging, reduces troubleshooting time, and minimizes the risk of costly incidents.”
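Compile-time detection is easiest to see with a use-after-free, the same class of bug behind many of the exploits cited earlier. In the minimal Rust sketch below (names are illustrative, not from the guide), the dangerous read simply does not compile:

```rust
fn main() {
    let token_view;
    {
        let token = String::from("session-abc123"); // heap-allocated buffer
        token_view = token.as_str();                // borrow into that buffer
        println!("token while alive: {token_view}");
    } // `token` is dropped here and its buffer is freed

    // Reading `token_view` now would be a use-after-free. The borrow checker
    // refuses to compile it, so the bug never reaches testing or production:
    // println!("token after free: {token_view}");
    // error[E0597]: `token` does not live long enough
}
```

An equivalent C or C++ program would compile and only fail, or be exploited, at runtime.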
CISA and NSA recommend that organizations publish Memory Safety Adoption Roadmaps and align with frameworks like the NIST Secure Software Development Framework (SSDF). They also stress the importance of collaboration among industry, government, and academia in cultivating MSL awareness and skills.
Ultimately, this guide is a call to action:
“Strategic MSL adoption is an investment in a secure software future. By defining memory safety roadmaps and leading the adoption of best practices, organizations can significantly improve software resilience and help ensure a safer digital landscape.”