Although the original post was tongue-in-cheek, cap-std would disallow things like totally-safe-transmute (discussed at the time), since the caller would need a root capability to access /proc/self/mem (no more sneaking filesystem calls inside libraries!).
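As a concrete sketch of what that looks like with cap-std (the directory path and file names here are invented for illustration), the only operation that needs ambient authority is minting the initial Dir capability, and everything opened through it must stay beneath it:

```rust
use cap_std::ambient_authority;
use cap_std::fs::Dir;

fn main() -> std::io::Result<()> {
    // The single use of ambient authority: turning a path into a capability.
    let data = Dir::open_ambient_dir("/var/app/data", ambient_authority())?;

    // Code holding `data` can open files beneath that directory...
    let _config = data.open("config.toml")?;

    // ...but absolute paths and `..` traversal are rejected at runtime,
    // so /proc/self/mem is unreachable without a root capability.
    assert!(data.open("/proc/self/mem").is_err());
    assert!(data.open("../../proc/self/mem").is_err());
    Ok(())
}
```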
Having the entire standard library work with capabilities would be a great thing. Pony (and Monte too, I think) uses capabilities extensively in the standard library, which allows users to trust third-party packages: if a package doesn’t use FFI (the compiler can check this) and doesn’t request the appropriate capabilities, it won’t be able to do much: no printing to the screen, no filesystem access, no connecting to the network.
Yes. While Rust cannot be capability-safe (as explored in a sibling thread), this sort of change to a library is very welcome, because it prevents many common sorts of bugs from even being possible for programmers to write. This is the process of taming, and a tamed standard library is a great idea for languages which cannot guarantee capability-safety. The /proc/self/mem hole from the Monte conversation still exists, but it is only a partial compromise of security, since filesystem access is privileged by default.
Pony and Monte are capability-safe; they treat every object reference as a capability. Pony uses compile-time guarantees to make modules safe, while Monte uses runtime auditors to prove that modules are correct. The main effect of this, compared to Rust, is to remove the need for a tamed standard library. Instead, Pony and Monte tame the underlying operating system API directly. This is a more monolithic approach, but it removes the possibility of unsafe artifacts in standard-library code.
Yeah, I reckon capabilities would have helped with the security issues surrounding procedural macros too. I hope more new languages take heed of this; it’s a nice approach!
It can’t help with proc macros, unless you run the macros in a (Rust-agnostic) process-wide sandbox like WASI. Rust is not a sandbox/VM language, and has no way to enforce this by itself.
In Rust, the programmer is always on the trusted side. Rust’s safety features are for protecting programs from malicious external inputs and/or programmer mistakes when the programmer is cooperating. They’re ill-suited for protecting a program from intentionally malicious parts of that same program.
We might trust the compiler while compiling proc macros, though, yes? And the compiler could prevent calling functions that use ambient authority (along with unsafe rust). That would provide capability security, no?
No, we can’t trust the compiler. It hasn’t been designed to be a security barrier. It also sits on top of LLVM and C linkers that also historically assumed that the programmer is trusted and in full control.
Rust will allow the programmer to break and bypass the language’s rules. There are obvious officially-sanctioned holes, like #[no_mangle] (this works in Rust too) and linker options. There are less obvious holes, like hash collisions of TypeId, and a few known soundness bugs. Since security within the compiler was never a concern (these are bugs on the same side of the airtight hatchway), there are likely many, many more.
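A minimal sketch of the #[no_mangle] hole (my own example, assuming a typical Linux target where std resolves getpid against libc at link time):

```rust
// No `unsafe` anywhere in this program. Exporting an unmangled symbol that
// collides with libc's getpid makes the linker resolve calls to `getpid` --
// including the one inside std::process::id() on Unix -- to this function.
#[no_mangle]
pub extern "C" fn getpid() -> i32 {
    42
}

fn main() {
    // Prints 42 instead of the real process id on typical Linux targets.
    println!("process id: {}", std::process::id());
}
```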
It’s like a difference between a “Do Not Enter” sign and a vault. Both keep people out, but one is for stopping cooperating people, and the other is against determined attackers. It’s not easy to upgrade a “Do Not Enter” sign to be a vault.
You can disagree with the premise of trusting the compiler, but I think the argument is still valid. If the compiler can be trusted, then we could have capability security for proc macros.
Whether to trust the compiler is a risk that some might accept, others would not.
But this makes the situation entirely hypothetical. If Rust was a different language, with different features, and a different compiler implementation, then you could indeed trust that not-Rust compiler.
The Rust language as it exists today has many features that intentionally bypass the compiler’s protections if the programmer wishes so.
Between “Do Not Enter” signs and vaults, a lot of business gets done with doors, even with the known risk that the locks can be picked.
You seem to argue that there is no such thing as safe Rust, or that there are no norms for denying unsafe Rust.
Rust’s safety is already often misunderstood. fs::remove_dir_all("/") is safe by Rust’s definition. I really don’t want to give people the idea that you could ban a couple of features and give Rust the safety properties of JavaScript in a browser. Rust has an entirely different threat model. The “safe” subset of Rust is not a complete language, and it’s closer to being a linter for undefined behavior than a security barrier.
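To pin down that definition: the following compiles under #![forbid(unsafe_code)], because “safe” promises the absence of undefined behavior, not the absence of damage (the nonexistent directory is a harmless stand-in for “/”):

```rust
#![forbid(unsafe_code)]

fn main() {
    // Pointing this at "/" would be just as "safe" in Rust's sense; a
    // nonexistent path is used here so the demo destroys nothing.
    let _ = std::fs::remove_dir_all("/nonexistent-demo-dir");
}
```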
Security promises in computing are often binary. What does it help if a proc macro can’t access the filesystem through std::fs, but can by making a syscall directly? It’s a few lines of code extra for the attacker, and a false sense of security for users.
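To make “a few lines of code extra” concrete, here is a sketch (my own example, x86-64 Linux only) that opens a file with a raw SYS_openat and never touches std::fs; note that it needs unsafe and inline asm, a point taken up a few replies below:

```rust
use std::arch::asm;
use std::ffi::CStr;

// Open a file via a raw syscall; no std::fs, no std::io. Any capability
// discipline layered over the standard library never sees this call.
fn raw_open(path: &CStr) -> isize {
    const SYS_OPENAT: isize = 257; // x86-64 Linux syscall number
    const AT_FDCWD: isize = -100;  // resolve relative to the cwd
    const O_RDONLY: usize = 0;
    let fd: isize;
    unsafe {
        asm!(
            "syscall",
            inlateout("rax") SYS_OPENAT => fd,
            in("rdi") AT_FDCWD,
            in("rsi") path.as_ptr(),
            in("rdx") O_RDONLY,
            out("rcx") _, // clobbered by `syscall`
            out("r11") _,
            options(nostack),
        );
    }
    fd // >= 0 on success, negated errno on failure
}

fn main() {
    let path = CStr::from_bytes_with_nul(b"/etc/hostname\0").unwrap();
    println!("fd: {}", raw_open(path));
}
```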
Ok, let’s talk binary security properties. Object Capability security consists of:
1. memory safety
2. encapsulation
3. no ambient authority
There are plenty of formal proofs of the security properties that follow… patterns for achieving cooperation without vulnerability. See peer-reviewed articles in https://github.com/dckc/awesome-ocap
This cap-std work aims to address #3. For example, with compiler support to deny ambient authority, it addresses std::fs.
Safe rust, especially run on wasm, is memory safe much like JS, yes? i.e. safe modulo bugs. Making a syscall requires using asm, which is not in safe rust.
Rust’s encapsulation is at the module level rather than object level, but it’s there.
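A tiny sketch of that flavor of encapsulation (the names are illustrative): a module’s private constructor makes values of its type unforgeable outside the module, which is the building block that capability patterns need:

```rust
mod cap {
    // The private `()` field means a Token can only be constructed here;
    // holding one is proof the caller went through `grant`.
    pub struct Token(());

    pub fn grant() -> Token {
        Token(())
    }
}

fn privileged_op(_proof: &cap::Token) {
    // Only reachable by callers that were handed a Token.
}

fn main() {
    let token = cap::grant();
    privileged_op(&token);
    // cap::Token(()) here fails: the tuple-struct constructor is private.
}
```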
While cap-std and the tools to deny ambient authority are not as mature as std, I do want to give people the idea that this is a good approach to building scalable secure systems.
I grant that the relevant threat model isn’t emphasized around rust the way it is around JS, but I don’t see why rust would have to be a different language to shift this emphasis.
I see plenty of work on formalizing safe rust. Safety problems seem to be considered serious bugs, not intentional design decisions.
In the presence of malicious code, Rust on WASM is exactly as safe as C on WASM. All of the safety comes from the WASM VM, not from anything that Rust does.
Safe Rust formalizations assume the programmer won’t try to exploit bugs in the compiler, and the Rust compiler has exploitable bugs. For example, symbol mangling uses a hash that has a 1 in 2⁶⁴ chance of colliding (fewer attempts are needed with a birthday attack). I haven’t heard of anyone running into this by accident, but a determined attacker could easily compute a collision that makes their cap-approved innocent_foo() actually link to the code of evil_bar() and bypass whatever formally-proven safety the compiler tried to have.
This is very interesting. I’m currently building an ocaps version of the OCaml standard library (as part of moving everything to use effects - https://github.com/ocaml-multicore/eio). I’ll see what techniques I can copy from this!
Are you familiar with this previous effort to adapt ocap ideas to OCaml? https://www.hpl.hp.com/techreports/2006/HPL-2006-116.html It used a source code verifier to limit programs to an ocap-safe subset.
Yes; that paper is linked from the README ;-) I’ve never actually used Emily though. I got the impression from the paper that it was very much a prototype.
Oooh, very cool that you are using effects and handlers for this! This is something I think they’d be really great at, and would love to see more languages take advantage of! See also: Why Is Crochet Object Oriented?
One question about how effects work in Multicore OCaml: I don’t seem to see the effects appearing in the type. E.g. lib_eunix/eunix.mli gives a type for alloc, but the implementation in lib_eunix/eunix.ml performs an Alloc effect. Shouldn’t Alloc appear somewhere in the type of alloc, or is that not tracked by Multicore OCaml?
Yes, eventually. There’s a separate effort to get effects into the type system (see https://www.janestreet.com/tech-talks/effective-programming/). But that isn’t ready yet, so we’re using a version of the compiler with only the runtime effects for now.
Yeah, that’s the video I was thinking of! And yeah, this explains why I’ve been confused reading the Multicore OCaml examples, so thanks for filling in the gaps in my understanding! Cool that the work can be decoupled at least (the runtime and library stuff alone seems like a massive effort).
Reminds me of this explanation of why Fuchsia doesn’t have dotdot in the filesystem:
“Child processes on Fuchsia are only capable of accessing the resources provided to them – this is an essential idea encompassing microkernels and other ‘capability-based’ systems.”
I really hope there’s gonna be wider adoption of this. Would make it so easy to sandbox things with Capsicum :)
I think the first example does a really poor job explaining the library’s purpose.
But what if the path passed in is ../../home/me/.ssh/id_dsa.pub?
You wouldn’t even need to do that, /home/me/.ssh/id_dsa.pub suffices, because “if path is absolute, it replaces the current path”.
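Both behaviors are easy to confirm against std (the base directory here is made up; the quoted sentence is from the Path::join documentation):

```rust
use std::path::{Path, PathBuf};

fn main() {
    let base = Path::new("/srv/uploads");

    // std joins textually and never normalizes `..`, so the result escapes
    // the base directory once the OS resolves it.
    assert_eq!(
        base.join("../../home/me/.ssh/id_dsa.pub"),
        PathBuf::from("/srv/uploads/../../home/me/.ssh/id_dsa.pub")
    );

    // And "if path is absolute, it replaces the current path":
    assert_eq!(
        base.join("/home/me/.ssh/id_dsa.pub"),
        PathBuf::from("/home/me/.ssh/id_dsa.pub")
    );
}
```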
So the first step to solve this would be to stop standard library devs from creating APIs that are CVEs waiting to happen.
the best API is the one you already know.
Rust already has a standard library API.
Then the author goes on to introduce their own replacement type for the broken Path/PathBuf ones? Now I’m confused.
And the cap-directories library is pretty much an (outdated) adaptation of my own library for cache/config/user dirs, which only exists because the standard library API couldn’t even implement home_dir correctly.
Dir looks interesting, though I don’t like that it doesn’t seem to be possible to compose paths without immediately opening/creating/… them. I experimented with a different design years ago (AbsolutePath/RelativePath) that I think accomplishes this without restricting path composition.
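A rough sketch of the design I take that to describe (the type names come from the comment above; the API itself is a guess, not the commenter’s actual library): composing paths stays a pure operation, and nothing touches the filesystem until the caller explicitly opens something.

```rust
use std::path::PathBuf;

// Illustrative only: absolute and relative paths get distinct types, and
// `join` merely builds a new path -- no syscall, no open, no create.
pub struct AbsolutePath(PathBuf);
pub struct RelativePath(PathBuf);

impl AbsolutePath {
    pub fn join(&self, rel: &RelativePath) -> AbsolutePath {
        AbsolutePath(self.0.join(&rel.0))
    }
}

impl RelativePath {
    pub fn join(&self, rel: &RelativePath) -> RelativePath {
        RelativePath(self.0.join(&rel.0))
    }
}
```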
Their efforts deserve praise, but their messaging is really confusing.
(And I would have appreciated it if they had asked before changing the license and removing author information.)
You wouldn’t even need to do that, /home/me/.ssh/id_dsa.pub suffices, because “if path is absolute, it replaces the current path”.
Read with even the slightest bit of charity, one assumes that the library properly handles absolute paths too, and if not that’s “just” a bug.
Then the author goes on to introduce their own replacement type for the broken Path/PathBuf ones? Now I’m confused.
They… didn’t? A Path/PathBuf is just a string (encoded properly for the current OS); a Dir is an object giving you the ability to access files below it. You pass a Path to a Dir to specify which file/dir below it… I don’t see anything suggesting they’re replacing Path/PathBuf at all, just adding a new type (Dir) which gives you filesystem rights.
I’m sort of skeptical about the idea of running a system call to create a Dir too, but nothing in their messaging strikes me as confusing like you suggest…