Sometimes enterprise adoption happens quietly, then all at once.
This week, AWS promoted Rust from "experimental" to generally available on Lambda, backed by their full SLA. Google published data showing Rust code in Android has 1000x fewer memory vulnerabilities than C/C++, while simultaneously improving development velocity by every meaningful metric. And JetBrains explored why Rust and Python are becoming partners rather than competitors.
These aren't just announcements; they're data points in a larger story: Rust is crossing the production chasm. From serverless functions to mobile operating systems, the "memory-safe future" isn't theoretical anymore. It's shipping at scale, backed by SLAs, and proving that safer development is also faster development.
Let's look at the numbers.
Improve your Rust build times [sponsored]
Building the Zed code editor reveals just how much build times can vary. We cut build time from 43 minutes to 26 by using Depot's GitHub Actions runners. Let's speed up your Rust builds.
AWS Lambda Officially Supports Rust with Production SLA
AWS Lambda has promoted Rust from "experimental" to Generally Available status, complete with AWS Support and the full Lambda availability SLA. The announcement marks a fundamental shift: Rust is now officially backed for business-critical serverless applications across all AWS Regions, including GovCloud and China.
Rust functions run on Lambda's OS-only runtimes (provided.al2023 on Amazon Linux 2023, or provided.al2 on Amazon Linux 2), since Rust compiles to native machine code rather than requiring a language-specific managed runtime. You'll use the lambda_runtime crate to handle Lambda event processing, and AWS recommends Cargo Lambda for streamlined development, testing, and deployment workflows.
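If you want a feel for the programming model, here's a minimal handler sketch built on the lambda_runtime crate. The Request and Response types are illustrative rather than taken from the announcement, and the sketch assumes serde and tokio alongside lambda_runtime in Cargo.toml:

```rust
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde::{Deserialize, Serialize};

// Illustrative event and response shapes; a real function would match the
// JSON payload of whatever triggers it (API Gateway, SQS, etc.).
#[derive(Deserialize)]
struct Request {
    name: String,
}

#[derive(Serialize)]
struct Response {
    message: String,
}

// The handler is an ordinary async function; lambda_runtime deserializes the
// incoming event and serializes the return value for you.
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    Ok(Response {
        message: format!("Hello, {}!", event.payload.name),
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(handler)).await
}
```

From there, Cargo Lambda covers the workflow: cargo lambda watch for local testing, cargo lambda build --release to cross-compile the native binary, and cargo lambda deploy to ship it.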
What changed? This isn't a new technical capability; Rust has worked on Lambda for years. What's new is AWS putting its name behind it with SLA guarantees and roadmap commitments. That's the difference between "experimentally supported" and "bet your production workload on it."
For teams evaluating Rust for serverless architectures, the calculation just shifted. You're no longer early adopters. You're using a GA service with enterprise support.
Google Security: Rust in Android Delivers 1000x Fewer Vulnerabilities
Here's the data that settles the memory safety debate. Google's Android Security team published comprehensive metrics on Rust adoption in Android, and the numbers are staggering. Memory safety vulnerabilities in Rust code occur at a rate of 0.2 per million lines of code. In C/C++? Around 1,000 per million lines. That's not 10% better or 2x better; it's a reduction of more than 1000x in vulnerability density.
But here's what makes this story remarkable: Rust isn't just safer. It's faster to develop.
Google measured development velocity using the DORA framework and found that Rust changes require 20% fewer revisions than equivalent C++ code, spend 25% less time in code review, and have a 4x lower rollback rate for medium and large changes. Their conclusion: "Secure code development is simultaneously faster development."
Android now has roughly 5 million lines of Rust in production, powering everything from the Nearby Presence protocol to MLS encryption for RCS, Chromium parsers for PNG and JSON, and even the first production Rust driver in Linux kernel 6.12 (an Arm GPU driver collaboration).
The article includes a fascinating near-miss case study: CVE-2025-48530, a linear buffer overflow in the CrabbyAVIF library caught before release. The vulnerability would have been exploitable in C/C++, but Android's Scudo hardened allocator made it non-exploitable. Defense in depth works, but starting with memory-safe code is the better strategy.
The takeaway? Memory safety vulnerabilities have dropped below 20% of total Android vulnerabilities for the first time in 2025. This isn't incremental progress. It's a sea change in platform security, backed by data across millions of lines of production code.
JetBrains Analysis: Rust vs Python—Partners, Not Competitors
The Python vs Rust debate misses the point. They're increasingly complementary, not competitive.
JetBrains' analysis highlights the philosophical divide: Python prioritizes rapid iteration and accessibility (57% developer adoption, up from 32% in 2017). Rust prioritizes compile-time safety and performance (11% adoption with 80%+ retention for 9 consecutive years as "Most Admired").
The technical differences are fundamental:
- Memory: Rust's ownership rules prevent null pointers and dangling references at compile time. Python relies on garbage collection with unpredictable pauses.
- Concurrency: Rust enables true multicore parallelism (see the sketch after this list). Python's GIL limits CPU-bound concurrency.
- Type systems: Rust catches errors before execution with static typing. Python trades early detection for flexibility with dynamic typing.
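As a small illustration of that concurrency point (my example, not from the JetBrains piece): scoped threads let safe Rust fan CPU-bound work across cores, while the borrow checker verifies at compile time that the shared data outlives every thread.

```rust
use std::thread;

fn main() {
    let data: Vec<u64> = (1..=1_000_000).collect();

    // Split the slice into four chunks and sum each on its own thread.
    // No locks, no GIL: the compiler proves the borrows are safe.
    let total: u64 = thread::scope(|s| {
        data.chunks(data.len() / 4)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect::<Vec<_>>() // spawn all threads before joining any
            .into_iter()
            .map(|handle| handle.join().unwrap())
            .sum()
    });

    assert_eq!(total, 1_000_000 * 1_000_001 / 2);
    println!("total = {total}");
}
```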
But here's the trend that matters: Rust is powering high-performance components inside Python applications. Tools like Polars (DataFrames), Ruff (linting), and uv (package management) deliver "modest to dramatic speed-ups" by compiling Rust to native machine code with zero-cost abstractions and no GC overhead.
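That interop story typically runs through the PyO3 crate (which Polars, for example, builds on). As a rough sketch, assuming PyO3 0.21+ with its extension-module feature, a cdylib crate type, and the maturin build tool, exposing a native function to Python looks something like this; the module and function names here are made up for illustration:

```rust
use pyo3::prelude::*;

// A CPU-heavy helper implemented in Rust: count how many values in a list
// fall below a threshold. Python just sees an ordinary function.
#[pyfunction]
fn count_below(values: Vec<f64>, threshold: f64) -> PyResult<usize> {
    Ok(values.iter().filter(|&&v| v < threshold).count())
}

// Declares the extension module; after `maturin develop`, Python can
// `import fast_ext` and call `fast_ext.count_below([...], 0.5)`.
#[pymodule]
fn fast_ext(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(count_below, m)?)?;
    Ok(())
}
```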
The future isn't "Rust or Python." It's "Python for rapid development, Rust where performance matters," often within the same application. That's not a compromise; it's choosing the right tool for each layer of the stack.
The State of SIMD in Rust: Safer, More Mature, Still Evolving
SIMD in Rust crossed a major usability threshold: most intrinsics are no longer unsafe to call as of Rust 1.87.
Shnatsel's 2025 analysis breaks down four approaches to SIMD in ascending order of effort: autovectorization, intrinsics, portable SIMD abstractions, and manual assembly.
The decision framework is straightforward:
- Zero dependencies, minimal hassle? → Autovectorization (let LLVM handle it; see the sketch after this list)
- Porting C code or targeting specific hardware? → Intrinsics (now much safer)
- Everything else? → Portable SIMD abstractions
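Here's a hedged sketch of the zero-dependency route: ordinary safe Rust that LLVM is free to autovectorize in release builds. Whether SIMD instructions are actually emitted depends on the optimization level and the target CPU features (e.g. RUSTFLAGS="-C target-cpu=native"), so it's worth checking the assembly for hot loops.

```rust
// No unsafe, no extra crates: these loops are written so LLVM can vectorize
// them on its own with `cargo build --release`.

// Element-wise float math autovectorizes readily because no reordering of
// additions is required.
pub fn scale_in_place(values: &mut [f32], factor: f32) {
    for v in values.iter_mut() {
        *v *= factor;
    }
}

// Integer reductions also vectorize well, since integer addition can be
// reassociated freely (unlike a floating-point sum).
pub fn sum_of_squares(values: &[i32]) -> i64 {
    values.iter().map(|&v| v as i64 * v as i64).sum()
}

fn main() {
    let mut xs = vec![1.5_f32; 1024];
    scale_in_place(&mut xs, 2.0);
    println!("{} {}", xs[0], sum_of_squares(&[1, 2, 3, 4]));
}
```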
The portable SIMD library ecosystem is maturing but still fragmented. pulp offers built-in multiversioning and powers the faer linear algebra library, with support for NEON, AVX2, and AVX-512. macerator (a pulp fork) adds better generic programming support and expanded instruction sets. wide is noted as mature and complete for broader use cases.
The barrier to entry dropped significantly with safer intrinsics, but choosing the right abstraction layer still requires understanding your performance constraints and target platforms. SIMD in Rust is production-ready—just know which tool matches your needs.
Snippets
- Rust 1.91.1 Released: Point release fixing critical WebAssembly linker failures and Cargo file locking on illumos systems.
- Rust Foundation Announces Maintainers Fund: A new fund for long-term support of Rust maintainers, starting with $100K focused on the Compiler and Language teams.
- Just Call Clone (or Alias): Memory optimization insights from Rust language team member Niko Matsakis on when cloning is the right choice.
- Engineering a Rust Optimization Quiz: Fasterthanlime's deep dive into compiler optimizations, with interactive examples.
- Patterns for Defensive Programming in Rust: A comprehensive guide to writing robust, error-resistant Rust code.
We are thrilled to have you as part of our growing community of Rust enthusiasts! If you found value in this newsletter, don't keep it to yourself — share it with your network and let's grow the Rust community together.
👉 Take Action Now:
Share: Forward this email to share this newsletter with your colleagues and friends.
Engage: Have thoughts or questions? Reply to this email.
Subscribe: Not a subscriber yet? Click here to never miss an update from Rust Trends.
Cheers,
Bob Peters
Want to sponsor Rust Trends? We reach thousands of Rust developers biweekly. Get in touch!