Rustls: memory safety for TLS
The movement toward using memory-safe languages, and Rust in particular, has picked up a lot of steam over the past year or two. Removing the possibility of buffer overflows, use-after-free bugs, and other woes associated with unmanaged pointers is an attractive feature, especially given that the majority of today's vulnerabilities stem from memory-safety issues. On April 20, the Internet Security Research Group (ISRG) announced a funding initiative targeting the Rustls TLS library in order to prepare it for more widespread adoption—including by ISRG's Let's Encrypt project.
Google has provided the funds that allowed ISRG to contract Dirkjan Ochtman to make some improvements to the library. Two of the items listed in the announcement are aimed at making Rustls integrate more easily with the large body of C code in use today; most of those programs use the C-based OpenSSL library for their TLS needs. As might be expected, ISRG and its executive director, Josh Aas, who authored the announcement, are rather excited by the possibilities of Rust and Rustls going forward:
Rustls is an excellent alternative to OpenSSL and similar libraries. Much of its critical code is written in Rust so it's largely memory-safe without sacrificing performance. It has been audited and found to be a high quality implementation. Here's one of our favorite lines from the report: "Using the type system to statically encode properties such as the TLS state transition function is just one example of great defense-in-depth design decisions."
[...] We currently live in a world where deploying a few million lines of C code on a network edge to handle requests is standard practice, despite all of the evidence we have that such behavior is unsafe. Our industry needs to get to a place where deploying code that isn't memory safe to handle network traffic is widely understood to be dangerous and irresponsible. People need memory safe software that suits their needs to be available to them though, and that's why we're getting to work.
The Rustls project—the name is pronounced "rustles", incidentally—has been around since 2016, but the pace of releases has picked up over the last year or so. It is based on the ring cryptographic library, which is written in Rust, C, and assembly, and the related webpki library for certificate verification that is also written in Rust. The Rustls project is making a serious effort to leave the mistakes in earlier TLS implementations behind (including outdated protocol versions and ciphers) and to forge ahead with a robust library that avoids many of the pitfalls that exist for code of this nature, which handles untrusted input from the internet at large.
According to the announcement, ISRG has already started work on integrating Rustls into curl and into the Apache web server. For the latter, ISRG said that having a full web server in a memory-safe language would be beneficial, but would take an enormous effort; instead, allowing the TLS handling to be done by Rustls is a good incremental step. Google has provided funding for Stefan Eissing to do the work on the web server, while ISRG is funding curl author and maintainer Daniel Stenberg to work on making curl more memory safe. Beyond allowing the use of Rustls, the curl refit is also adding support for the hyper Rust-based HTTP library. As can be seen, there is a growing ecosystem of memory-safe libraries for these kinds of tasks.
ISRG is targeting several different improvements to Rustls as part of Ochtman's work, including working to eliminate functions that panic, which effectively causes the program to crash. As the Rust documentation points out, using panic!() precludes callers from recovering from an error, which is probably not appropriate in a lot of Rust code. The presence of panics in the Rust code making its way toward the Linux kernel was highlighted as a problem by Linus Torvalds as part of his review. The Rustls project is working toward eliminating panics in its own code as well as eliminating calls into library routines that might panic, so that it provides a panic-free API for applications.
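The distinction being drawn here can be seen in miniature in a small sketch. The function names below (parse_port and friends) are invented for illustration and have nothing to do with Rustls itself; the point is only the shape of a panicking API versus a panic-free one:

```rust
use std::num::ParseIntError;

// A panicking API: unwrap() crashes the whole program if the
// input is not a valid port number. The caller has no say.
fn parse_port_panicking(s: &str) -> u16 {
    s.parse().unwrap()
}

// A panic-free API: the error is returned as a value, and the
// caller decides how to recover from it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse()
}

fn main() {
    // The Result-based version lets the application degrade gracefully.
    match parse_port("not-a-port") {
        Ok(p) => println!("port {}", p),
        Err(e) => eprintln!("bad port: {}", e),
    }
    // parse_port_panicking("not-a-port") would abort the program here;
    // with valid input it behaves identically to the Result version.
    let p = parse_port_panicking("443");
    println!("port {}", p);
}
```

A "panic-free API" in the sense the article describes is one where every exported function has the second shape, and no internal call path can reach the first.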
The C API for Rustls is also a focus of the work being sponsored. Since so much of today's TLS-using landscape is written in C, it is important for the library to interface well with that language. The announcement mentions moving the API into the main Rustls repository, though the GitHub issue tracking the plan shows no overwhelming preference for making that move. The developers do seem to be leaning in that direction, however.
Another improvement targeted by ISRG is support for IP addresses in Subject Alternative Name (SAN) fields in certificates. SAN is an extension to the X.509 format used for public-key certificates (such as the SSL/TLS certificates used by web sites). Some environments use IP addresses, rather than domain names, because there is no DNS being used by the hosts on the network.

The final feature listed is adding support for configuring the server connection based on information provided by the client. This is used for handling the Server Name Indication (SNI) and Application-Layer Protocol Negotiation (ALPN) TLS extensions that allow clients to indicate their preferences to the server.
Handling user-controlled (or attacker-controlled) input coming across the network is obviously one of the bigger areas of concern from a security perspective. One of the earliest "attacks" on the internet, the Morris worm in 1988, relied in part on a buffer overflow in a network service; many other flaws of that sort have been seen in the decades since. Those kinds of problems can be avoided by using a memory-safe language like Rust, though there will still be opportunities for mistakes, as a look at the C API shows.
The mismatch between how C and Rust handle memory is still a potential problem area when the two languages are being used together. For example, the Rustls C API provides a mechanism to allocate and free structures for use in calling into the library, but there are some significant caveats on their use. Programs need to ensure that only one thread is accessing the structure at any given time (by use of a mutex or other exclusion mechanism) and that the structure is freed only once. Those kinds of restrictions are not a surprise for C programmers, but they do show the (probably obvious) limits of the memory-safety guarantees when calling Rust from C.
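The allocate/free pattern at issue can be sketched from the Rust side. This is a generic illustration of the common Box::into_raw()/Box::from_raw() idiom for exposing Rust objects to C; the names (demo_config_new, demo_config_free, DemoConfig) are invented and are not the actual Rustls C API symbols:

```rust
// An opaque object the C caller only ever sees as a pointer.
pub struct DemoConfig {
    server_name: String,
}

// Hand ownership of a heap allocation to the C caller.
#[no_mangle]
pub extern "C" fn demo_config_new() -> *mut DemoConfig {
    Box::into_raw(Box::new(DemoConfig {
        server_name: String::new(),
    }))
}

/// # Safety
/// `cfg` must have come from demo_config_new() and must not be used
/// again after this call. Freeing it twice, using it after free, or
/// touching it from two threads at once is undefined behavior — the
/// compiler cannot check any of this for the C caller, which is
/// exactly the limit of the guarantees the article describes.
#[no_mangle]
pub unsafe extern "C" fn demo_config_free(cfg: *mut DemoConfig) {
    if !cfg.is_null() {
        // Reconstituting the Box runs the normal Rust destructor.
        drop(Box::from_raw(cfg));
    }
}

fn main() {
    // Simulating the C caller's obligations from Rust:
    let cfg = demo_config_new();
    unsafe { demo_config_free(cfg) };
}
```

Inside Rust, ownership and borrowing enforce these rules at compile time; once the pointer crosses into C, they become documentation that the caller must follow by hand.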
Over the course of the next few years, we will get multiple opportunities to see how well this hybrid approach to dealing with the lack of memory safety in C-language programs plays out. There is a lot of promise in the approach, and the Rust language has garnered a fair bit of optimism for being a major part of our, hopefully, memory-safe future. It will be interesting to see if the reality is as grand as the vision—and, to a certain extent, hype.
| Index entries for this article | |
|---|---|
| Security | Memory safety |
| Security | TLS |
Posted May 4, 2021 21:36 UTC (Tue) by djc (subscriber, #56880) [Link] (1 responses)

These lines seem to be somewhat contradictory. While the ISRG initially leaned towards moving the C API into the main rustls repository, it currently looks like ~all of the developers involved prefer a separate repository in the rustls GitHub org.

This is actually already possible to a certain extent. The `ResolvesServerCert` trait gives the implementation the `ClientHello` data (including SNI, ALPN) so it can select a preferred certificate, but the goal in https://github.com/ctz/rustls/issues/89 is to decouple this more, allowing callers to asynchronously look up the preferred certificate.

Happy to answer any questions folks might have.

Posted May 5, 2021 20:21 UTC (Wed) by rvolgers (guest, #63218) [Link]

That could in theory be solved by compiling with the option which aborts the process on panic instead of unwinding.

I'm a little worried about setting "no panics" as a hard goal, to the degree that there is nothing in the binary which calls the panic machinery. A lot of panics are simply things which would be assert()s, hard process kills triggered by various sanitizers, or segfaults in C. Often the Rust compiler is already able to prove that a certain panic can never happen and eliminate it. But going to great lengths to remove the remaining ones seems likely to make the code more confusing or even less safe.

Posted May 4, 2021 21:39 UTC (Tue) by randomguy3 (subscriber, #71063) [Link] (13 responses)

And I can't help but be reminded of the pain of exception-safety in C++ - a property C++ developers are aware is important in principle, but often ignore in practice because it seems more trouble than it's worth a lot of the time.

Posted May 4, 2021 22:12 UTC (Tue) by Bigos (subscriber, #96807) [Link] (12 responses)

It is quite unfortunate and should really be brought up when discussing language safety.
It is not something without precedent, though, as panics usually replace C/C++ assertions or std::terminate(), which just (in 99% of cases) abort() the whole process without a second thought. That's why the vast majority of errors should be returned via Result (or similar) and the callers should decide whether to unwrap()/expect() the value or handle it differently, so that we can implement libraries at varying levels of error handling.

Nevertheless, the fact that a piece of software uses Rust doesn't mean it handles errors correctly, or "sufficiently enough" for a given use case. Unfortunately, a language cannot guard programmers from doing bad things (unless the language is very restricted). Like many things, it's just a tool to make programs less error-prone (at least that is the hope) and potentially more verifiable.

Posted May 5, 2021 3:34 UTC (Wed) by josh (subscriber, #17465) [Link] (11 responses)

As an example, one common source of panics is indexing: if you write `x[12]` and x only has 4 elements, you'll get a panic. You can write the same thing without a panic, by writing `x.get(12)`, which returns an Option<T> containing None in that case. But if you're writing an algorithm that's intending to only index in-bounds, a panic on out-of-bounds indexing may be what you want. On the other hand, if you're effectively accepting an index from user input, you might want to use `get` and handle the error.

There are efforts underway to make sure there are non-panicking analogues of more standard library functions (notably for memory allocations). Beyond that, it'll be up to applications and libraries to call the versions that fit their use cases.
Posted May 5, 2021 7:10 UTC (Wed) by epa (subscriber, #39769) [Link] (7 responses)

Posted May 5, 2021 7:51 UTC (Wed) by josh (subscriber, #17465) [Link] (4 responses)

Posted May 5, 2021 10:53 UTC (Wed) by epa (subscriber, #39769) [Link] (3 responses)

Posted May 5, 2021 12:31 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Rust allows recovering from panics, but it suffers all the regular C++ issues with exceptions. There are no guarantees that the system will return to an internally consistent state afterwards. In particular, panics permanently poison locked mutexes during unwinding: https://doc.rust-lang.org/std/sync/struct.Mutex.html

Posted May 7, 2021 1:38 UTC (Fri) by anp (guest, #130817) [Link]

Posted May 5, 2021 13:30 UTC (Wed) by farnz (subscriber, #17727) [Link]

In kernel terms, a Rust panic is basically BUG - it can be recovered from, and will be in a kernel case, but it represents a programmer error.

Posted May 5, 2021 11:11 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

Rust panics are the exception model, and handled as such - by default, catch_unwind exists to let you catch panics and handle them, and panic::set_hook exists as a way to customize Rust's equivalent of C++'s std::terminate. Just like C++ exceptions, though, there's an equivalent of GCC's -fno-exceptions for Rust, which changes panics to a direct abort that you can't catch, instead of being something you can catch and handle.

Rust's stdlib and documentation is just designed so that you expect the -fno-exceptions behaviour, where C++'s stdlib expects you to have exception handling. So in this sense, it's about emphasis - Rust emphasises that the exception model is for "the programmer has made a mistake, and really we need a code fix to resolve this" and Result<T, E> and Option<T> types are the preferred model for "this is a potential error case that code can sensibly handle".
The default for panics in Rust is hence abort (with a stack trace unless you turn that off) because an abort will result in a restart back to a known sensible state, but there's room to change that where you have a way to recover sanely (e.g. in an OS kernel, where a reboot is not necessarily the best option).

Posted May 7, 2021 1:40 UTC (Fri) by anp (guest, #130817) [Link]

Posted May 5, 2021 11:58 UTC (Wed) by ibukanov (guest, #3942) [Link] (2 responses)

Posted May 5, 2021 13:18 UTC (Wed) by roc (subscriber, #30627) [Link]

Posted May 5, 2021 19:29 UTC (Wed) by Gaelan (guest, #145108) [Link]

I've seen some hacky implementations that go something like this: when compiled with optimizations, if the compiler can prove the call will be in bounds, everything works fine; otherwise, it emits a call to undefined_symbol(), causing a linker error. This is, admittedly, a huge hack.

Posted May 4, 2021 22:03 UTC (Tue) by gspr (guest, #91542) [Link] (8 responses)

This approach seems pervasive in the Rust community. I'm in no position to complain — I'm not the one doing the super impressive and important work — but it does seem like a worrisome way to do things for libraries that are meant to be foundational.

Posted May 4, 2021 22:17 UTC (Tue) by JoeBuck (subscriber, #2330) [Link] (5 responses)

Posted May 4, 2021 23:00 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (1 responses)

1. It's reasonable to demand that clients update frequently (for their own security), and

But this is all very project-specific, of course.
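A minimal sketch of the standard-library machinery farnz describes above, using catch_unwind and panic::set_hook; nothing here is Rustls-specific:

```rust
use std::panic;

fn main() {
    // set_hook is Rust's analogue of customizing C++'s
    // std::terminate handler; here it silences the default
    // "thread panicked" report.
    panic::set_hook(Box::new(|_info| {}));

    // catch_unwind stops the unwind at this boundary and turns
    // the panic into an Err value the caller can inspect.
    let result = panic::catch_unwind(|| {
        let v: Vec<u8> = Vec::new();
        v[0] // out-of-bounds indexing: panics
    });
    assert!(result.is_err());
    println!("recovered from the panic");

    // With `panic = "abort"` in Cargo.toml (the -fno-exceptions
    // analogue), the unwinding machinery is removed entirely:
    // catch_unwind cannot intercept anything and the process
    // simply aborts.
}
```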
Posted May 5, 2021 17:00 UTC (Wed) by mcatanzaro (subscriber, #93033) [Link]

Posted May 5, 2021 9:27 UTC (Wed) by ceplm (subscriber, #41334) [Link] (2 responses)

Posted May 5, 2021 13:21 UTC (Wed) by roc (subscriber, #30627) [Link] (1 responses)

Posted May 6, 2021 10:01 UTC (Thu) by riking (subscriber, #95706) [Link]

Posted May 7, 2021 4:53 UTC (Fri) by ilammy (subscriber, #145312) [Link] (1 responses)

Posted May 7, 2021 14:24 UTC (Fri) by gspr (guest, #91542) [Link]

The language and the community are otherwise top notch though, gotta say.

Posted May 5, 2021 1:07 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

webpki prefers to focus on the core Trusted/Not question rather than risk introducing exciting errors in less tested codepaths for figuring out exactly why we can't trust this. That's fine as far as it goes, but it's problematic that today for any "Not trusted" situation you get UntrustedIssuer as your error. Is this because the issuer is, in fact, untrusted? Um, no. https://github.com/briansmith/webpki/issues/221

I was reminded of this more recently when a goof by IdenTrust left the CRL for their DST Root CA X3 unavailable. Initial reports about this (to the Let's Encrypt community site, because IdenTrust itself no longer really uses DST Root CA X3) often involved claims that somehow previously working certificates now lacked an OCSP responder. This makes no sense - the Authority Information Access section of the certificate containing the name of its OCSP responder is immutable. But it turns out that (popular implementations of) the Java JDK give a misleading error about this data being missing if anything goes wrong in their revocation code, including if the CRL isn't available. IdenTrust has a responsibility (as a condition of root trust) to ensure that CRL exists and is accessible, but if the Java errors actually said what was wrong it might have been diagnosed and fixed considerably sooner.
Whenever the computer says "FooBongled" the obvious first two thoughts are "I'm pretty sure I didn't bongle it, did I?" and then "Huh, I guess the Foo is Bongled, how did that happen?", but of course it's always possible some idiot copy-pasted the FooBongled error into code that has nothing whatsoever to do with whether your foo is bongled.

Anyway, one of the nice things about Rust as a programming language is the helpful compiler errors. But right now one of the unhelpful things about webpki as a replacement for OpenSSL is that your diagnostics may be unhelpful at best.

Posted May 5, 2021 9:34 UTC (Wed) by tekNico (subscriber, #22) [Link] (11 responses)

effort already expended to create Caddy: a production-grade web server written in Go with state-of-the-art TLS support. I'd use Caddy rather than Apache or nginx, if at all possible.

Posted May 5, 2021 12:04 UTC (Wed) by ibukanov (guest, #3942) [Link] (8 responses)

Posted May 5, 2021 13:13 UTC (Wed) by athei (guest, #121603) [Link] (5 responses)

Posted May 5, 2021 13:20 UTC (Wed) by roc (subscriber, #30627) [Link] (4 responses)

Posted May 5, 2021 14:31 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Posted May 5, 2021 19:03 UTC (Wed) by jezuch (subscriber, #52988) [Link] (2 responses)

Posted May 5, 2021 19:20 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Posted May 5, 2021 22:07 UTC (Wed) by roc (subscriber, #30627) [Link]

Posted May 5, 2021 15:23 UTC (Wed) by dottedmag (subscriber, #18590) [Link] (1 responses)

Posted May 5, 2021 16:10 UTC (Wed) by randomguy3 (subscriber, #71063) [Link]

Posted May 5, 2021 14:46 UTC (Wed) by djc (subscriber, #56880) [Link] (1 responses)

Posted May 5, 2021 22:13 UTC (Wed) by roc (subscriber, #30627) [Link]

There's also the issue that pulling in GC adds complexity to the runtime and complicates FFI.

Posted May 5, 2021 17:14 UTC (Wed) by floppus (guest, #137245) [Link] (13 responses)

For example, is it easy to ensure that some function runs in constant time?
If sensitive data is stored in stack or heap variables, can it be reliably erased when it's deallocated?

C, of course, is full of pitfalls, though I feel like those pitfalls are somewhat well understood by now. Rust being a somewhat higher-level language, I don't have a feeling for whether it's generally better or worse.

Posted May 5, 2021 17:22 UTC (Wed) by josh (subscriber, #17465) [Link] (1 responses)

Supporting constant-time operations would require some work in LLVM first.

Posted May 5, 2021 19:20 UTC (Wed) by wahern (subscriber, #37304) [Link]

It's especially problematic for a compiler to provide high-level constructs for promises that it fundamentally can't keep. What's ultimately needed is for vendors to offer a small set of guaranteed constant-time instructions. That way if timings change it's crystal clear who is at fault. Such ISAs would also help define the scope of the compiler/language constructs. No doubt there's plenty of prior art, but you really need Intel to make a commitment so you know what your baseline is - GCD(Intel, EverybodyElse). Any ISA commitments might require establishing long-term architectural tradeoffs with respect to cache and speculation isolation for those operations.

Posted May 5, 2021 18:07 UTC (Wed) by matthias (subscriber, #94967) [Link] (6 responses)

Use a datatype that overwrites its data in the destructor. In Rust, the compiler enforces that the destructor is run when the object is deallocated (goes out of scope).

Posted May 5, 2021 18:47 UTC (Wed) by danobi (subscriber, #102249) [Link] (1 responses)

Posted May 5, 2021 18:52 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Posted May 5, 2021 18:50 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (3 responses)

https://docs.rs/zeroize/1.3.0/zeroize/index.html

You can either manually zero with the Zeroize trait's zeroize() method, or ensure that data will be zero'd at the end of the variable's lifetime with the Zeroizing wrapper. There are other crates that do similar things as well, but this is one that I know of at least.
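A minimal, standard-library-only sketch of the "datatype that overwrites its data in the destructor" idea matthias describes. Real crates like zeroize do this properly and also worry about pitfalls this sketch ignores (moves, reallocation, and the compiler-elision issues discussed below); the names here are invented for illustration:

```rust
use std::ptr;

// Volatile writes are treated as observable side effects, so the
// optimizer is not supposed to remove them as dead stores the way
// it can with a plain loop or memset.
fn wipe(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        unsafe { ptr::write_volatile(b, 0) };
    }
}

struct Secret {
    bytes: Vec<u8>,
}

impl Drop for Secret {
    // Rust guarantees the destructor runs when the value goes out
    // of scope, so the buffer is wiped before the allocation is
    // returned to the heap.
    fn drop(&mut self) {
        wipe(&mut self.bytes);
    }
}

fn main() {
    let key = Secret { bytes: vec![0xAA; 32] };
    // ... use key.bytes as secret material ...
    drop(key); // volatile-writes zeros over the buffer
}
```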
Posted May 6, 2021 3:21 UTC (Thu) by ncm (guest, #165) [Link] (2 responses)

Rust, being currently wed to LLVM, might be able to bind its volatile or atomic apparatus to internal machinery in LLVM. But that is fragile, and the GCC version might not get such access.

Posted May 6, 2021 10:41 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (1 responses)

Posted May 6, 2021 23:09 UTC (Thu) by ncm (guest, #165) [Link]

A promise to achieve what the crate docs say within the defined bounds of the language, with no "asm", is even harder to deliver on reliably--even when the language has an actual Standard definition. Without one, it shades toward impossible. It might be that, today, someone walks all the possible code paths in LLVM and attests that the goal is achieved, for this release. But one small improvement can change LLVM's behavior in ways too complicated to preserve the attestation; the analysis has to be started again from scratch.

Actually using the crate, you might *instead* inspect the assembly language output for your particular use case, for all your targets and all supported compiler releases, and again for each subsequent compiler release; but that doesn't help the next project at all, and is a chore bound to be soon neglected.

All of the above assumes the physical hardware (or emulated environment!) actually implements the instructions the way you hope, an assumption we have had our noses rubbed in quite a lot in recent years. Generally, equipment manufacturers make the kind of promise we need only about instructions that compilers never emit--when they make any promises at all. (Microcontrollers are often enough sold "as-is": it does what you see it do.)

Zeroing cryptographically sensitive memory contents *reliably* is a Hard Problem under ideal conditions, under full control of target hardware. Any promise to deliver it reliably, working only within the Abstract Machine, should inspire deepest skepticism. No suite of unit tests can be large enough.
I hope we can agree that an empty promise is worse than none. The more confidence the docs try to inspire, the less we should trust them.

Posted May 5, 2021 20:03 UTC (Wed) by cesarb (subscriber, #6266) [Link] (1 responses)

Same as C, unfortunately. You have to use the same tricks you would use in C, which includes using empty inline assembly statements at key points to defeat the optimizer.

Unfortunately, no; as in C, the compiler is allowed to spill sensitive data to the stack and registers (but you can do the same ugly tricks as in C to try to zeroise the stack). As for the heap, Rust is actually slightly worse than C, since it's move-happy; transferring ownership through moves is a standard Rust idiom, but this does not erase the former location of the data, and unlike C++ there are no move constructors to override this mechanism.

It's generally better, IMO. It's slightly harder to make sure sensitive data is reliably erased, but it's a lot easier to make sure leftover sensitive data is not accessed. As for constant-time computations, it's basically the same.

Posted May 6, 2021 10:09 UTC (Thu) by riking (subscriber, #95706) [Link]

Posted May 6, 2021 3:05 UTC (Thu) by ncm (guest, #165) [Link]

HOWEVER: Clearing memory in a destructor or Drop whatsit may, and *probably will*, be elided by the compiler. This includes both assigning to members, and memset. To ensure memory is wiped, you generally need to call a function that the compiler *cannot* see into: thus, not inline; and placed where it will compile into some other ".o" file not accessible to "link-time optimization". It used to be that inserting an asm block would suffice, but compilers sometimes look into those nowadays. Sometimes the asm code may be marked "volatile" to prevent this, but that is even less portable than the asm code. A great deal of work has gone into making it very hard to reliably clear memory in dying objects, mostly under the banner of dead-code elimination.
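For illustration of the constant-time point, here is the common branch-free comparison idiom, sketched in Rust. As the comments in this thread stress, nothing in the language guarantees the optimizer preserves the timing behavior; production code (the subtle crate, for instance) adds optimization barriers on top of this shape:

```rust
// Compare two byte strings without an early exit: the loop does the
// same amount of work wherever the first mismatch occurs, unlike
// `a == b` on slices, which may return as soon as bytes differ.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is usually public, so this branch is fine
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences without branching
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret-mac", b"secret-mac"));
    assert!(!ct_eq(b"secret-mac", b"secret-mad"));
    println!("ok");
}
```

An early-exit comparison leaks, through timing, how long the matching prefix is; that is exactly the class of side channel a MAC check must avoid.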
That does make programs faster, sometimes crucially so. But it makes it increasingly hard to zero memory in objects that are going away. I am beginning to doubt that seL4 gets this right.

Posted May 6, 2021 16:32 UTC (Thu) by zlynx (guest, #2285) [Link]

You cannot always trust the CPU to execute the machine code exactly as you send it, but it is the closest thing you can get. It's good for avoiding side channels and for making sure memory is properly set to zero.

Posted May 5, 2021 20:13 UTC (Wed) by angdraug (subscriber, #7487) [Link] (1 responses)

Considering how essential and ubiquitous the TLS protocol is (it's even baked directly into HTTP/3 now, thanks to QUIC), how is it not a bigger concern that it's currently impossible to write GPL-compatible secure networking software in Rust?

Posted May 5, 2021 20:43 UTC (Wed) by josh (subscriber, #17465) [Link]

Posted May 6, 2021 6:20 UTC (Thu) by rhdxmr (guest, #44404) [Link]

Posted May 14, 2021 13:37 UTC (Fri) by krizhanovsky (guest, #112444) [Link]

There are also many security problems besides problems with raw memory, e.g. side channels. While OOB access in most cases can be easily caught with tools like KASAN, side-channel vulnerabilities are quite hard to find. Even mature libraries like WolfSSL might have them: https://github.com/wolfSSL/wolfssl/issues/3184

The Rust compiler is immature and provides very restricted performance features in comparison with the GCC/Clang C compilers: I discussed several of them in http://tempesta-tech.com/blog/fast-programming-languages-c-cpp-rust-assembly .
extern "C" {
    fn undefined_symbol() -> !;
}

#[inline(always)]
fn compile_time_checked_get(slice: &[u8], idx: usize) -> u8 {
    match slice.get(idx) {
        Some(val) => *val,
        None => unsafe { undefined_symbol() },
    }
}
I would expect that at some point, things will be stable enough so that a long-term support version would be branched off, but perhaps it's too early for that to happen yet.
2. Maintaining a separate stable branch may not be worth the effort (how would it differ from the latest release in the first place?).
Go's memory model doesn't make the consequences of data races explicit, but the data race detector mentions that races "can lead to crashes and memory corruption". There's also this delve into using data races to break through Go's memory safety mechanisms.
https://doc.rust-lang.org/stable/std/mem/fn.forget.html
I guess the majority of Rust users only deal with safe Rust, and only a small number of people understand `unsafe` Rust perfectly. So I think that making a perfect C binding for rustls is not an easy task.

But it will be fine if most of the program, including the core, is written in safe Rust and a relatively small part for the C interface is written in unsafe Rust and C. Excellent isolation of error propagation between the core and the unsafe C interface part would be needed, though.
Copyright © 2021, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds