
Rustls: memory safety for TLS


By Jake Edge
May 4, 2021

The movement toward using memory-safe languages, and Rust in particular, has picked up a lot of steam over the past year or two. Removing the possibility of buffer overflows, use-after-free bugs, and other woes associated with unmanaged pointers is an attractive feature, especially given that the majority of today's vulnerabilities stem from memory-safety issues. On April 20, the Internet Security Research Group (ISRG) announced a funding initiative targeting the Rustls TLS library in order to prepare it for more widespread adoption—including by ISRG's Let's Encrypt project.

Google has provided the funds that allowed ISRG to contract Dirkjan Ochtman to make some improvements to the library. Two of the items listed in the announcement are aimed at making Rustls integrate more easily with the large body of C code in use today; most of those programs use the C-based OpenSSL library for their TLS needs. As might be expected, ISRG and its executive director, Josh Aas, who authored the announcement, are rather excited by the possibilities of Rust and Rustls going forward:

Rustls is an excellent alternative to OpenSSL and similar libraries. Much of its critical code is written in Rust so it’s largely memory-safe without sacrificing performance. It has been audited and found to be a high quality implementation. Here’s one of our favorite lines from the report:

“Using the type system to statically encode properties such as the TLS state transition function is just one example of great defense-in-depth design decisions.”

[...] We currently live in a world where deploying a few million lines of C code on a network edge to handle requests is standard practice, despite all of the evidence we have that such behavior is unsafe. Our industry needs to get to a place where deploying code that isn’t memory safe to handle network traffic is widely understood to be dangerous and irresponsible. People need memory safe software that suits their needs to be available to them though, and that’s why we’re getting to work.

The Rustls project—the name is pronounced "rustles" incidentally—has been around since 2016, but the pace of releases has picked up over the last year or so. It is based on the ring cryptographic library, which is written in Rust, C, and assembly, and the related webpki library for certificate verification that is also written in Rust. The Rustls project is making a serious effort to leave mistakes in earlier TLS implementations behind (including outdated protocol versions and ciphers) and to forge ahead with a robust library that avoids many of the pitfalls that exist for code of this nature, which handles untrusted input from the internet at large.

According to the announcement, ISRG has already started work on integrating Rustls into curl and into the Apache web server. For the latter, ISRG said that having a full web server in a memory-safe language would be beneficial, but would take an enormous effort; instead, allowing the TLS handling to be done by Rustls is a good incremental step. Google has provided funding for Stefan Eissing to do the work on the web server, while ISRG is funding curl author and maintainer Daniel Stenberg to work on making curl more memory safe. Beyond allowing the use of Rustls, the curl refit is also adding support for the hyper Rust-based HTTP library. As can be seen, there is a growing ecosystem of memory-safe libraries for these kinds of tasks.

ISRG is targeting several different improvements to Rustls as part of Ochtman's work, including working to eliminate functions that panic, which effectively cause the program to crash. As the Rust documentation points out, using panic!() precludes callers from recovering from an error, which is probably not appropriate in a lot of Rust code. The presence of panics in the Rust code making its way toward the Linux kernel was highlighted as a problem by Linus Torvalds as part of his review. The Rustls project is working toward eliminating panics in its own code as well as eliminating calls into library routines that might panic, so that it provides a panic-free API for applications.

The C API for Rustls is also a focus of the work being sponsored. Since so much of today's TLS-using landscape is written in C, it is important for the library to interface well with that language. The announcement mentions moving the API into the main Rustls repository, though the GitHub issue tracking the plan shows no overwhelming preference for making that move. The developers do seem to be leaning in that direction, however.

Another improvement targeted by ISRG is support for IP addresses in Subject Alternative Name (SAN) fields in certificates. SAN is an extension to the X.509 format used for public-key certificates (such as SSL/TLS certificates used by web sites). Some environments use IP addresses, rather than domain names, because there is no DNS being used by the hosts on the network.

The final feature listed is adding support for configuring the server connection based on information provided by the client. This is used for handling the Server Name Indication (SNI) and Application-Layer Protocol Negotiation (ALPN) TLS extensions that allow clients to indicate their preferences to the server.

Handling user-controlled (or attacker-controlled) input coming across the network is obviously one of the bigger areas of concern from a security perspective. One of the earliest "attacks" on the internet, the Morris worm in 1988, relied in part on a buffer overflow in a network service; many other flaws of that sort have been seen in the decades since. Those kinds of problems can be avoided by using a memory-safe language like Rust, though there will still be opportunities for mistakes, as a look at the C API shows.

The mismatch between how C and Rust handle memory is still a potential problem area when the two languages are being used together. For example, the Rustls C API provides a mechanism to allocate and free structures for use in calling into the library, but there are some significant caveats on their use. Programs need to ensure that only one thread is accessing the structure at any given time (by use of a mutex or other exclusion mechanism) and that the structure is freed only once. Those kinds of restrictions are not a surprise for C programmers, but they do show the (probably obvious) limits of the memory-safety guarantees when calling Rust from C.

Over the course of the next few years, we will get multiple opportunities to see how well this hybrid approach to dealing with the lack of memory safety in C-language programs plays out. There is a lot of promise in the approach, and the Rust language has garnered a fair bit of optimism for being a major part of our, hopefully, memory-safe future. It will be interesting to see if the reality is as grand as the vision—and, to a certain extent, hype.


Index entries for this article
Security: Memory safety
Security: TLS



Rustls: memory safety for TLS

Posted May 4, 2021 21:36 UTC (Tue) by djc (subscriber, #56880) [Link] (1 response)

The idea of removing the calls to panic!() and unwrap() was actually initiated because of the issues in safely handling unwinds across language boundaries, although it is, of course, useful even if you just use the library from Rust code.


These lines seem to be somewhat contradictory. While the ISRG initially leaned towards moving the C API into the main rustls repository, it currently looks like ~all of the developers involved prefer a separate repository in the rustls GitHub org.


This is actually already possible to a certain extent. The `ResolvesServerCert` trait gives the implementation the `ClientHello` data (including SNI, ALPN) so it can select a preferred certificate, but the goal in https://github.com/ctz/rustls/issues/89 is to decouple this more, allowing callers to asynchronously look up the preferred certificate.

Happy to answer any questions folks might have.

Rustls: memory safety for TLS

Posted May 5, 2021 20:21 UTC (Wed) by rvolgers (guest, #63218) [Link]


That could in theory be solved by compiling with the option which aborts the process on panic instead of unwinding.

I'm a little worried about setting "no panics" as a hard goal, to the degree that there is nothing in the binary which calls the panic machinery. A lot of panics are simply things which, in C, would be assert()s, hard process kills triggered by various sanitizers, or segfaults.

Often the Rust compiler is already able to prove that a certain panic can never happen and eliminate it. But going to great lengths to remove the remaining ones seems likely to make the code more confusing or even less safe.

Rustls: memory safety for TLS

Posted May 4, 2021 21:39 UTC (Tue) by randomguy3 (subscriber, #71063) [Link] (13 responses)

I find the discussion about panics particularly interesting. On the one hand, a lot of things that would just end up as undefined behaviour in C code become panics in Rust, which allows for doing some sort of handling (even if it's just giving up in a controlled way) rather than continuing in a bad state and doing who knows what. But it's also clear that the standard library's reliance on panics to handle unexpected situations, as well as the potential for panics to have unpleasant knock-on effects like mutex poisoning, is causing issues for projects like this.

And I can't help but be reminded of the pain of exception-safety in C++ - a property C++ developers are aware is important in principle, but often ignore in practice because it seems more trouble than it's worth a lot of the time.

Rustls: memory safety for TLS

Posted May 4, 2021 22:12 UTC (Tue) by Bigos (subscriber, #96807) [Link] (12 responses)

Yeah, panics in Rust seem to be the "necessary evil". There are many useful "safe" APIs that just panic on unusual input (like trying to allocate more than there is virtual memory), which makes the surrounding code less noisy but loses a lot of predictability - we just assume certain classes of errors are "so rare" that they don't deserve proper handling, and let them fail miserably: they either bring the whole process down (not acceptable in a kernel FWIW) or can be caught selectively with numerous caveats (not stable yet, IIRC?).

It is quite unfortunate and should really be brought up when discussing language safety. It is not without precedent, though, as panics usually replace C/C++ assertions or std::terminate(), which just (in 99% of cases) abort() the whole process without any second thought.

That's why the vast majority of errors should be returned by std::Result (or similar) and the callers should decide whether to unwrap()/expect() it or handle it differently, so that we can implement libraries at varying levels of error handling. Nevertheless, the fact that a piece of software uses Rust doesn't mean it handles errors correctly, or "sufficiently enough" for a given use case. Unfortunately, a language cannot guard programmers from doing bad things (unless the language is very restricted). Like many things, it's just a tool to make programs less error-prone (at least that is the hope) and potentially more verifiable.

Rustls: memory safety for TLS

Posted May 5, 2021 3:34 UTC (Wed) by josh (subscriber, #17465) [Link] (11 responses)

Assertions are definitely the closest analogue. But also, in many cases there are non-panicking analogues, and the panicking version is a convenience for the common case.

As an example, one common source of panics is indexing: if you write `x[12]` and x only has 4 elements, you'll get a panic. You can write the same thing without a panic, by writing `x.get(12)`, which returns an Option<T> containing None in that case. But if you're writing an algorithm that's intending to only index in-bounds, a panic on out-of-bounds indexing may be what you want. On the other hand, if you're effectively accepting an index from user input, you might want to use `get` and handle the error.

There are efforts underway to make sure there are non-panicking analogues of more standard library functions (notably for memory allocations). Beyond that, it'll be up to applications and libraries to call the versions that fit their use cases.

Rustls: memory safety for TLS

Posted May 5, 2021 7:10 UTC (Wed) by epa (subscriber, #39769) [Link] (7 responses)

...and I guess this is why the exception model of error handling has its virtues. Rather than panicking the whole program, an array out of bounds can throw an exception. The immediate caller doesn't have to be crufted up with explicitly handling this case, but can let it propagate to a higher level which takes some action such as failing the individual http request (or whatever) while allowing the program as a whole to continue.

Rustls: memory safety for TLS

Posted May 5, 2021 7:51 UTC (Wed) by josh (subscriber, #17465) [Link] (4 responses)

You can follow that model with Rust too, if you want. Try `std::panic::catch_unwind`.

Rustls: memory safety for TLS

Posted May 5, 2021 10:53 UTC (Wed) by epa (subscriber, #39769) [Link] (3 responses)

Ah cool, so panic in Rust is not panic in the kernel sense: a totally unrecoverable error where the safest thing is just to halt the system immediately. It can be safely caught and handled.

Rustls: memory safety for TLS

Posted May 5, 2021 12:31 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 response)

It kinda is. In fact, some Rust systems just abort() on panic.

Rust allows recovering from panics, but it suffers from all the usual C++ issues with exceptions. There are no guarantees that the system will return to an internally consistent state afterwards.

In particular, panics permanently poison locked mutexes during unwinding: https://doc.rust-lang.org/std/sync/struct.Mutex.html

Rustls: memory safety for TLS

Posted May 7, 2021 1:38 UTC (Fri) by anp (guest, #130817) [Link]

FWIW lock poisoning is more of an advisory API -- you can always get the guarded value with https://doc.rust-lang.org/std/sync/struct.PoisonError.htm... and the parking_lot crate is quite popular and does not implement poisoning. Lock poisoning isn't necessary to protect memory safety; it's more of a guard rail against garden-variety partial state invalidation bugs.

Rustls: memory safety for TLS

Posted May 5, 2021 13:30 UTC (Wed) by farnz (subscriber, #17727) [Link]

In kernel terms, Rust panic is basically BUG - it can be recovered from, and will be in a kernel case, but it represents a programmer error.

Rustls: memory safety for TLS

Posted May 5, 2021 11:11 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 response)

Rust panics are the exception model, and handled as such - by default, catch_unwind exists to let you catch panics and handle them, and panic::set_hook exists as a way to customize Rust's equivalent of C++'s std::terminate.

Just like C++ exceptions, though, there's an equivalent of GCC's -fno-exceptions for Rust, which changes panics to a direct abort that you can't catch, instead of being something you can catch and handle. Rust's stdlib and documentation are just designed so that you expect the -fno-exceptions behaviour, where C++'s stdlib expects you to have exception handling.

So in this sense, it's about emphasis - Rust emphasises that the exception model is for "the programmer has made a mistake, and really we need a code fix to resolve this" and Result<T, E> and Option<T> types are the preferred model for "this is a potential error case that code can sensibly handle". The default for panics in Rust is hence abort (with a stack trace unless you turn that off) because an abort will result in a restart back to a known sensible state, but there's room to change that where you have a way to recover sanely (e.g. in an OS kernel, where a reboot is not necessarily the best option).

Rustls: memory safety for TLS

Posted May 7, 2021 1:40 UTC (Fri) by anp (guest, #130817) [Link]

Small nit: std::panic::set_hook's callback still executes with panic=abort in Rust, you just can't do anything after it returns before the abort is called.

Rustls: memory safety for TLS

Posted May 5, 2021 11:58 UTC (Wed) by ibukanov (guest, #3942) [Link] (2 responses)

The problem with the current Rust is that there is no way to say to the compiler: allow me to use array[index], not array.get(index), but only if you can see that this is safe. Thus if one wants to code panic-free, one ends up with a lot of array.get() and all the source bloat that it entails, even if the compiler knows that this is unnecessary.

Rustls: memory safety for TLS

Posted May 5, 2021 13:18 UTC (Wed) by roc (subscriber, #30627) [Link]

That would make array bounds check elimination part of the language definition. That would add a lot of complexity to the language.

Rustls: memory safety for TLS

Posted May 5, 2021 19:29 UTC (Wed) by Gaelan (guest, #145108) [Link]

I've seen some hacky implementations that go something like this:

    extern "C" {
        fn undefined_symbol() -> !;
    }

    #[inline(always)]
    fn compile_time_checked_get(slice: &[u8], idx: usize) -> u8 {
        match slice.get(idx) {
            Some(val) => *val,
            None => unsafe { undefined_symbol() },
        }
    }

When compiled with optimizations, if the compiler can prove the call will be in bounds, everything works fine; otherwise, it emits a call to undefined_symbol(), causing a linker error.

This is, admittedly, a huge hack.

Rustls: memory safety for TLS

Posted May 4, 2021 22:03 UTC (Tue) by gspr (guest, #91542) [Link] (8 responses)

I really love Rust, and projects like this are impressive and important. But I do feel a bit dismayed when I see notices like this (from ring, the cryptographic library that underlies rustls):


This approach seems pervasive in the Rust community. I'm in no position to complain — I'm not the one doing the super impressive and important work — but it does seem like a worrisome way to do things for libraries that are meant to be foundational.

Rustls: memory safety for TLS

Posted May 4, 2021 22:17 UTC (Tue) by JoeBuck (subscriber, #2330) [Link] (5 responses)

I would expect that at some point, things will be stable enough so that a long-term support version would be branched off, but perhaps it's too early for that to happen yet.

Rustls: memory safety for TLS

Posted May 4, 2021 23:00 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (1 response)

IMHO this depends on the nature of the library. If you always maintain extremely strong backwards compatibility guarantees, don't plan on doing a lot of heavy feature development very often (e.g. no new features except for implementing new versions of TLS if/when they are standardized and widely supported), and the library will almost always need to handle untrusted input across a privilege boundary, then:

1. It's reasonable to demand that clients update frequently (for their own security), and
2. Maintaining a separate stable branch may not be worth the effort (how would it differ from the latest release in the first place?).

But this is all very project-specific, of course.

Rustls: memory safety for TLS

Posted May 5, 2021 17:00 UTC (Wed) by mcatanzaro (subscriber, #93033) [Link]

It's a totally fine approach, as long as you are OK with your software not making any headway in Linux distributions, and don't mind this defeating your goal of improving the safety of network services....

Rustls: memory safety for TLS

Posted May 5, 2021 9:27 UTC (Wed) by ceplm (subscriber, #41334) [Link] (2 responses)

Absolutely, but if it is too early to be stable, it is probably too early to use in a production environment (where `git pull` as a method of updating security software is just lunacy).

Rustls: memory safety for TLS

Posted May 5, 2021 13:21 UTC (Wed) by roc (subscriber, #30627) [Link] (1 response)

"stable" there refers solely to the rate of change, not the quality of releases.

Rustls: memory safety for TLS

Posted May 6, 2021 10:01 UTC (Thu) by riking (subscriber, #95706) [Link]

I've seen this around a lot. Some people see a reference to "stable as in stasis" and think it means "stable as in done".

Rustls: memory safety for TLS

Posted May 7, 2021 4:53 UTC (Fri) by ilammy (subscriber, #145312) [Link] (1 response)

ring is more of an outlier here. Its previous maintenance policy even included yanking all releases except for the tip, making building libraries on top of ring much more difficult: every time upstream made a release, you had to scramble to update your broken code and builds.

Rustls: memory safety for TLS

Posted May 7, 2021 14:24 UTC (Fri) by gspr (guest, #91542) [Link]

Oh dear. This sounds more like a modern art project than it does software development. I really hope you're right in this being an outlier. I do see a worrying amount of "just 'rustup' to the latest version of everything" as a response to any kind of problem in the Rust community. It's a shame, and I hope it gets better over time as more and more things are built with Rust.

The language and the community are otherwise top notch though, gotta say.

webpki

Posted May 5, 2021 1:07 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

One thing that the related Hacker News article https://news.ycombinator.com/item?id=26875551 touched on:

webpki prefers to focus on the core Trusted/Not question rather than risk introducing exciting errors in less-tested codepaths for figuring out exactly why we can't trust this. That's fine as far as it goes, but it's problematic that today, for any "Not trusted" situation, you get UntrustedIssuer as your error. Is this because the issuer is, in fact, untrusted? Um, no. https://github.com/briansmith/webpki/issues/221

I was reminded of this more recently when a goof by IdenTrust left the CRL for their DST Root CA X3 unavailable. Initial reports about this (to the Let's Encrypt community site, because IdenTrust itself no longer really uses DST Root CA X3) often involved claims that somehow previously working certificates now lacked an OCSP responder. This makes no sense - the Authority Information Access section of the certificate containing the name of its OCSP responder is immutable. But it turns out that (popular implementations of) the Java JDK give a misleading error about this data being missing if anything goes wrong in their revocation code, including if the CRL isn't available. IdenTrust has a responsibility (as condition of root trust) to ensure that CRL exists and is accessible, but if the Java errors actually said what was wrong it might have been diagnosed and so fixed considerably sooner.

Whenever the computer says "FooBongled" the obvious first two thoughts are "I'm pretty sure I didn't bongle it, did I?" and then "Huh, I guess the Foo is Bongled, how did that happen?", but of course it's always possible some idiot copy-pasted the FooBongled error into code that has nothing whatsoever to do with whether your foo is bongled.

Anyway, one of the nice things about Rust as a programming language is the helpful compiler errors. But right now one of the unhelpful things about webpki as a replacement for OpenSSL is that your diagnostics may be unhelpful at best.

There's Caddy

Posted May 5, 2021 9:34 UTC (Wed) by tekNico (subscriber, #22) [Link] (11 responses)

Garbage-collected, compiled languages like Go are memory-safe too, and Go is heavily used for Internet-facing network services.


effort already expended to create Caddy:

https://caddyserver.com/

a production-grade web server written in Go with state-of-the-art TLS support. I'd use Caddy rather than Apache or nginx, if at all possible.

There's Caddy

Posted May 5, 2021 12:04 UTC (Wed) by ibukanov (guest, #3942) [Link] (8 responses)

Go is not memory safe! It relies on slices and fat pointers without ensuring that modifications to those are atomic. Thus two goroutines modifying the same slice can corrupt memory. In practice such usage is very rare and is strongly discouraged, but still, Go is not Java, where the language guarantees memory safety.

There's Caddy

Posted May 5, 2021 13:13 UTC (Wed) by athei (guest, #121603) [Link] (5 responses)

Java is memory safe? AFAIK there is nothing stopping me from accessing a variable from two different threads without any synchronisation, or is there?

There's Caddy

Posted May 5, 2021 13:20 UTC (Wed) by roc (subscriber, #30627) [Link] (4 responses)

There is nothing stopping that, but that isn't allowed to corrupt memory.

There's Caddy

Posted May 5, 2021 14:31 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

If I'm understanding correctly, the JVM doesn't disallow such behaviors, but it does guarantee that if one thread writes "A" and another writes "B" to the same location, the result is that either "A" or "B" is observed (not some mix of "A" and "B")?

There's Caddy

Posted May 5, 2021 19:03 UTC (Wed) by jezuch (subscriber, #52988) [Link] (2 responses)

Write tearing is definitely a thing in Java. The JVM specification allows it with 64-bit values, and will allow it even more with primitive classes in the near future. So it's not safe by default, and you have to use explicit synchronization.

There's Caddy

Posted May 5, 2021 19:20 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 response)

Tearing can happen only with primitive types. Java pointer writes are always atomic on all real architectures; they are not fat pointers, so they fit into one machine word. This is what allows Java to stay memory safe with unsynchronized multithreaded access.

There's Caddy

Posted May 5, 2021 22:07 UTC (Wed) by roc (subscriber, #30627) [Link]

That's right.

There's Caddy

Posted May 5, 2021 15:23 UTC (Wed) by dottedmag (subscriber, #18590) [Link] (1 response)

Could you please point to the issue in the Go issue tracker, mailing list discussion, or any other material that confirms this?

There's Caddy

Posted May 5, 2021 16:10 UTC (Wed) by randomguy3 (subscriber, #71063) [Link]

Go's memory model doesn't make the consequences of data races explicit, but the data race detector documentation mentions that races "can lead to crashes and memory corruption". There's also this delve into using data races to break through Go's memory safety mechanisms.

There's Caddy

Posted May 5, 2021 14:46 UTC (Wed) by djc (subscriber, #56880) [Link] (1 response)

As I understand it, organizations the ISRG talked to expressed a feeling that Go code did not cut it in some scenarios due to performance concerns.

There's Caddy

Posted May 5, 2021 22:13 UTC (Wed) by roc (subscriber, #30627) [Link]

Go's GC means you buy into the iron triangle of trading off throughput, pause times, and memory overhead. IIUC Go mostly optimizes for pause times and throughput and imposes significant memory overhead.

There's also the issue that pulling in GC adds complexity to the runtime and complicates FFI.

Side channels

Posted May 5, 2021 17:14 UTC (Wed) by floppus (guest, #137245) [Link] (13 responses)

How does Rust compare to C or other languages when it comes to side channels?

For example, is it easy to ensure that some function runs in constant time? If sensitive data is stored in stack or heap variables, can it be reliably erased when it's deallocated?

C, of course, is full of pitfalls, though I feel like those pitfalls are somewhat well understood by now. Rust being a somewhat higher-level language, I don't have a feeling for whether it's generally better or worse.

Side channels

Posted May 5, 2021 17:22 UTC (Wed) by josh (subscriber, #17465) [Link] (1 response)

Neither C nor Rust currently has a mechanism to run functions in constant time. There are libraries like https://crates.io/crates/subtle-ng that do so on current Rust compilers, but the most reliable way to do so is still assembly.

Supporting constant-time operations would require some work in LLVM first.

Side channels

Posted May 5, 2021 19:20 UTC (Wed) by wahern (subscriber, #37304) [Link]

The ultimate problem is that Intel doesn't guarantee the required semantics. See DJB's 2014 post "Some small suggestions for the Intel instruction set", http://blog.cr.yp.to/20140517-insns.html

It's especially problematic for a compiler to provide high-level constructs for promises that it fundamentally can't keep. What's ultimately needed is for vendors to offer a small set of guaranteed constant-time instructions. That way if timings change it's crystal clear who is at fault. Such ISAs would also help define the scope of the compiler/language constructs. No doubt there's plenty of prior art, but you really need Intel to make a commitment so you know what your baseline is--GCD(Intel, EverybodyElse). Any ISA commitments might require establishing long-term architectural tradeoffs wrt cache and speculation isolation for those operations.

Side channels

Posted May 5, 2021 18:07 UTC (Wed) by matthias (subscriber, #94967) [Link] (6 responses)


Use a datatype that overwrites its data in the destructor. In Rust, the compiler enforces that the destructor is run when the object is deallocated (goes out of scope).

Side channels

Posted May 5, 2021 18:47 UTC (Wed) by danobi (subscriber, #102249) [Link] (1 response)

I don't believe that's true. It's safe in rust to skip destructors:
https://doc.rust-lang.org/stable/std/mem/fn.forget.html

Side channels

Posted May 5, 2021 18:52 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Yes, but that's an explicit decision (and it's as easy to grep for that function as for `unsafe`). You probably also want to watch for https://doc.rust-lang.org/std/mem/struct.ManuallyDrop.html while you're at it too.

Side channels

Posted May 5, 2021 18:50 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (3 responses)

In fact, there's a crate for this :) .

https://docs.rs/zeroize/1.3.0/zeroize/index.html

You can either manually zero with:


or ensure that data will be zero'd at the end of the variable's lifetime with:



There are other crates that do similar things as well, but this is one that I know of at least.

Side channels

Posted May 6, 2021 3:21 UTC (Thu) by ncm (guest, #165) [Link] (2 responses)

There is a crate for it. But that does not mean it works. Optimizers are just as happy to eliminate writes to dying volatiles and atomics as to other dying objects, particularly dying objects on the stack.

Rust, being currently wed to LLVM, might be able to bind its volatile or atomic apparatus to internal machinery in LLVM. But that is fragile, and the GCC version might not get such access.

Side channels

Posted May 6, 2021 10:41 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (1 response)

I would expect that such a crate has a test suite to test such behaviors (one such way would be to check the bytes from across an FFI boundary). Of course, any such consumer of it should test that as well, but IMO it's far nicer to get such tricky behavior as a well-tested dependency rather than to screw it up themselves.

Side channels

Posted May 6, 2021 23:09 UTC (Thu) by ncm (guest, #165) [Link]

Packaged facilities are always nice, but it is devilishly hard to make a test that is convincingly reliable enough for intended cryptographic use, in situ, where it may really matter.

A promise to achieve what the crate docs say within the defined bounds of the language, with no "asm", is even harder to deliver on reliably--even when the language has an actual Standard definition. Without, it shades toward impossible.

It might be that, today, someone walks all the possible code paths in LLVM and attests that the goal is achieved, for this release. But one small improvement can change LLVM's behavior in ways too complicated to preserve the attestation; the analysis has to be started again from scratch.

Actually using the crate, you might *instead* inspect the assembly language output for your particular use case, for all your targets and all supported compiler releases, and again for each subsequent compiler release; but that doesn't help the next project at all, and is a chore bound to be soon neglected.

All of the above assumes the physical hardware (or emulated environment!) actually implements the instructions the way you hope, an assumption we have had our noses rubbed in quite a lot in recent years. Generally, equipment manufacturers make the kind of promise we need only about instructions that compilers never emit--when they make any promises at all. (Microcontrollers are often enough sold "as-is": it does what you see it do.)

Zeroing cryptographically sensitive memory contents *reliably* is a Hard Problem under ideal conditions, under full control of target hardware. Any promise to deliver it reliably, working only within the Abstract Machine, should inspire deepest skepticism. No suite of unit tests can be large enough.

I hope we can agree that an empty promise is worse than none. The more confidence the docs try to inspire, the less we should trust them.

Side channels

Posted May 5, 2021 20:03 UTC (Wed) bycesarb (subscriber, #6266) [Link] (1 responses)


Same as C, unfortunately.


You have to use the same tricks you would use in C. Which includes using empty inline assembly statements at key points to defeat the optimizer.


Unfortunately, no; as in C, the compiler is allowed to spill sensitive data to the stack and registers (but you can do the same ugly tricks as in C to try to zeroise the stack). As for the heap, Rust is actually slightly worse than C, since it's move-happy; transferring ownership through moves is a standard Rust idiom, but this does not erase the former place of the data, and unlike C++ there are no move constructors to override this mechanism.


It's generally better IMO. It's slightly harder to make sure sensitive data is reliably erased, but it's a lot easier to make sure leftover sensitive data is not accessed. As for constant time computations, it's basically the same.

Side channels

Posted May 6, 2021 10:09 UTC (Thu) byriking (subscriber, #95706) [Link]

The great news is that Rust has specified semantics for inline assembly (essentially, "anything a function could do. In fact, it might be a function.") so tricks using it will be significantly more portable across compilers than in C.

Side channels

Posted May 6, 2021 3:05 UTC (Thu) byncm (guest, #165) [Link]

Rust, like C++, provides facilities to clean up objects whose lifetime has ended, on a deterministic and easily understood schedule.

HOWEVER: Clearing memory in a destructor or Drop whatsit may, and *probably will*, be elided by the compiler. This includes both assigning to members, and memset.

To ensure memory is wiped, you generally need to call a function that the compiler *cannot* see into: thus, not inline; and placed where it will compile into some other ".o" file not accessible to "link-time optimization". It used to be that inserting an asm block would suffice, but compilers sometimes look into those nowadays. Sometimes the asm code may be marked "volatile" to prevent this, but that is even less portable than the asm code.

A great deal of work has gone into making it very hard to reliably clear memory in dying objects, mostly under the banner of dead-code elimination. That does make programs faster, sometimes crucially so. But it makes it increasingly hard to zero memory in objects that are going away.

I am beginning to doubt that seL4 gets this right.
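For concreteness, here is a hedged sketch of the Drop-based approach being criticized above (our own code): the wipe is done with volatile stores rather than plain assignments, since a plain `self.0.fill(0)` in Drop is exactly the kind of dead store the optimizer may remove. Even so, this says nothing about registers or stack spills--only the bytes the object itself owns.

```rust
use core::ptr;
use core::sync::atomic::{compiler_fence, Ordering};

struct Secret(Vec<u8>);

impl Secret {
    // Volatile stores count as observable side effects, so
    // dead-store elimination has to keep them.
    fn wipe(&mut self) {
        for b in self.0.iter_mut() {
            unsafe { ptr::write_volatile(b, 0) };
        }
        // Keep later code from being reordered before the wipe.
        compiler_fence(Ordering::SeqCst);
    }
}

impl Drop for Secret {
    fn drop(&mut self) {
        self.wipe();
    }
}

fn main() {
    let s = Secret(vec![0x42; 16]);
    drop(s); // bytes are wiped before the allocation is freed
    println!("dropped");
}
```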

Side channels

Posted May 6, 2021 16:32 UTC (Thu) byzlynx (guest, #2285) [Link]

The Ring library is used by at least some Rust crates; I know because it fails to build on the Power CPU architecture.

You cannot always trust the CPU to execute the machine code exactly as you send it, but it is the closest thing you can get. It's good for avoiding side channels and for making sure memory is properly set to zero.

Rustls: memory safety for TLS

Posted May 5, 2021 20:13 UTC (Wed) byangdraug (subscriber, #7487) [Link] (1 responses)

Rustls is based on Ring, which is derived from BoringSSL, which is a fork of OpenSSL, and in part falls under the GPL-incompatible OpenSSL license.

Considering how essential and ubiquitous the TLS protocol is (it's even baked directly into HTTP/3 now, thanks to QUIC), how is it not a bigger concern that it's currently impossible to write GPL-compatible secure networking software in Rust?

Rustls: memory safety for TLS

Posted May 5, 2021 20:43 UTC (Wed) byjosh (subscriber, #17465) [Link]

I'm hoping that once OpenSSL finally releases 3.0 under Apache 2.0, it'll be possible to rebase atop that and use Apache 2.0, which is GPLv3-compatible.

Rustls: memory safety for TLS

Posted May 6, 2021 6:20 UTC (Thu) byrhdxmr (guest, #44404) [Link]

I used C in my professional job for 5 years. Nowadays I prefer Rust and am contributing to an open-source library written in Rust. But I am still a little confused when I deal with C APIs in Rust. I have to double-check that I am doing things correctly, because the Rust compiler does not guarantee memory safety inside the `unsafe` blocks needed for FFI.
I guess the majority of Rust users only deal with safe Rust, and only a small number of people understand `unsafe` Rust perfectly. So I think that making a perfect C binding for rustls is not an easy task.
But it will be fine if most of the program, including the core, is written in safe Rust, and only a relatively small part for the C interface is written in unsafe Rust and C. Good isolation of error propagation between the core and the unsafe C interface part is needed, though.
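The usual idiom for the isolation described above is to keep the `unsafe` FFI surface as small as possible and wrap it in a safe API that upholds the invariants the C side expects. A minimal sketch, using the C library's `strlen` as a stand-in for a real binding (nothing here is rustls code):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // Declared from the C standard library, which is linked in by
    // default on typical platforms.
    fn strlen(s: *const c_char) -> usize;
}

// The only `unsafe` block lives here; callers get a safe signature.
// CString guarantees a valid, NUL-terminated pointer, which is the
// contract strlen requires.
pub fn safe_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(safe_strlen("rustls"), 6);
    println!("ok");
}
```

The point is that callers of `safe_strlen` cannot violate the C contract, so the audit burden is confined to the one `unsafe` block.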

Rustls: memory safety for TLS

Posted May 14, 2021 13:37 UTC (Fri) bykrizhanovsky (guest, #112444) [Link]

Looks like a bad joke: a crypto library cannot be fast without a serious amount of assembly code, so assembly is good, but C isn't safe enough due to raw pointers.

There are also many security problems besides problems with raw memory, e.g. side channels. While OOB access in most cases can be easily caught with tools like KASAN, the side channel vulnerabilities are quite hard to find. Even mature libraries like WolfSSL might have them: https://github.com/wolfSSL/wolfssl/issues/3184

The Rust compiler is immature and provides very restricted performance features in comparison with the GCC/Clang C compilers: I discussed several of them in http://tempesta-tech.com/blog/fast-programming-languages-c-cpp-rust-assembly .


Copyright © 2021, Eklektix, Inc.
This article may be redistributed under the terms of theCreative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds
