I forgot: there’s no reason to make a language in the 2000s without named arguments as default.
I don’t want to disagree, but I want to share my experience. Before Rust, my primary language was Python, which is built around named arguments. I also used a lot of Kotlin, which has named arguments as well (although not as enshrined as in Python). I would expect that I’d miss named arguments in Rust, but this just doesn’t happen. In my day-to-day programming, I feel like I personally just never need either overloading or named parameters.
I come from Swift, and I also (again, surprisingly to me) feel this way.
Well, Swift has its own weird parameter naming that it inherited from ObjC.
It’s definitely weird. But once you get used to writing function signatures and calls in that labeled style, it starts to feel pretty natural. My expectation was that the transition to Rust’s label-free calls would feel really unnatural. But somehow (in my experience) other aspects of Rust’s design come together to mean that I don’t miss the added clarity of those parameter labels.
I find the Swift approach appealing in theory, but it’s hard to do well in practice. Maybe it’s just me, but I can never wrap my head around the conventions. I can read this 100 times and still struggle with my own functions: https://swift.org/documentation/api-design-guidelines/#parameter-names
For example, why isn’t your example func setKey<K, V>(_ key: K, to value: V)?
When we get to the final bullet point in that guideline “label all other arguments”, how should we label them? So that the call site reads like a sentence? That doesn’t seem possible. So just name them what they represent? Is this right: func resizeBox(_ box: Box, x: Int, y: Int, z: Int)? Then the call site is way less cool than your example: resizeBox(box, x: 1, y: 2, z: 3).
After reading the API guidelines again, I think you’re right, it should have been that :)
Yes and yes, as far as I know.
I agree that, in practice, I don’t often miss either feature. But it does happen sometimes.
Every once in a while I do miss overloading/default-args. Sometimes you have a function that has (more than one!) very common, sane default values, and it sucks to make 2^N differently named functions that all call the same “true” function just because you have N parameters that would like a nice default.
Then there’s the obvious case for named parameters where you have multiple parameters of the same type but different meaning, such as x, y, z coordinates, or, way worse, spherical coordinates: theta and phi, since mathematicians and physicists already confuse each other on which dimension theta and phi actually represent.
It’s not terrible to not have them, but it seems like it would be nice to do it the Kotlin way. Default params are always a code smell to me, but just because it smells doesn’t mean it’s always wrong and I do use them in rare circumstances.
I’d like to disagree, I really think functions should be as simple as possible, so that you can easily write higher order functions that wrap/transform other functions without worrying about how the side-channels like argument names will be affected. I really like the Haskellism where your function arguments are unnamed, but you recover the ergonomics of named arguments by defining a new record just for a single function, along with potentially multiple ways of constructing that record with defaults.
I’m not saying this pattern would work for every language, I just want to point out that named arguments aren’t necessarily a good thing.
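For what it’s worth, the closest Rust spelling of that record-with-defaults pattern might look like the following minimal sketch. The ResizeOpts/resize_box names are hypothetical, echoing the resizeBox example from earlier in the thread:

#[derive(Debug)]
struct ResizeOpts {
    x: i32,
    y: i32,
    z: i32,
}

impl Default for ResizeOpts {
    // The Default impl plays the role of the default argument values.
    fn default() -> Self {
        ResizeOpts { x: 1, y: 1, z: 1 }
    }
}

fn resize_box(opts: ResizeOpts) {
    println!("resizing to {:?}", opts);
}

fn main() {
    // Struct-update syntax recovers "named arguments with defaults":
    resize_box(ResizeOpts { z: 3, ..Default::default() });
}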
Named arguments make renaming parameters a breaking change; this is why C# didn’t support them until version 4. If I ever design a language, I’ll add named arguments after the standard library is finalized.
Swift has ABI-stable named parameters; it just defaults to the local parameter name being the same as the public API name.
Yeah, I think Swift nailed it. Its overloading/default args aren’t even a special calling convention or kwargs object. They are merely part of the function’s name, cleverly split into pieces. init(withFoo:f andBar:b) is something like init_withFoo_andBar_(f, b) under the hood.
There should be formal semantics for the borrow checker.
Rust’s module system seems overly complex for the benefit it provides.
Stop releasing every six weeks. Feels like a treadmill.
The operator overload for assignment requires generating a mutable reference, which makes some useful assignment scenarios difficult or impossible…not that I have a better suggestion.
A lot of things should be in the standard library and not separate crates.
Some of the “standard” crates are more difficult to use than they should be. I still can’t figure out how to embed an implements-the-RNG-trait value in a struct.
Async is a giant tar pit. The immense complexity it adds doesn’t seem to be worth it, IMHO.
Add varargs and default argument values.
I genuinely do not understand what people find complex about the module system. It’s literally just “we have a tree of namespaces”.
“We have a tree of namespaces. Depending on how you declare it the namespace names a file or it doesn’t. Namespaces nest, but you need to be explicit about importing from outer namespaces. Also, there’s crates which are another level of namespacing with workspaces.”
Versus something like Python: There is one namespace per file.
(Python does let you write custom importers and such but that’s truly deep magic that is extremely rarely used.)
I’m not saying there aren’t benefits to the way Rust does it. I’m saying I don’t feel like the juice is worth the squeeze.
EDIT: @kornel said it better: https://lobste.rs/s/j7zv69/if_you_could_re_design_rust_from_scratch#c_3hsii6
I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of rust to me
Depending on how you declare it the namespace names a file or it doesn’t.
New file means a new namespace (module), new namespace (module) doesn’t mean a new file.
I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of rust to me
It was the opposite for me, for whatever reason; it feels like there’s active friction between my mental model of namespaces and the way Rust does it. It’s weird.
You know, I kinda got the same mental friction feeling with namespaces in Tcl. I couldn’t tell you why. Maybe I just hate nested namespaces…
I’ve over and over and over again heard from beginners that the docs do a notably bad job communicating how it works, in particular those that are the easiest to get your hands on as a beginner (the rust book and by example). They deal almost exclusively with submodules within a file (i.e. mod {}), since it’s difficult to denote multiple interrelated files in the “text, playground example, text, playground example” idiom they decided to use.
When they briefly do try to explain how the external file / directory thing works, they say something like “you used to need a file named mod.rs in another directory, but now in Rust 2018 you can just make a file named (the name of the module).rs”, which is a really poor explanation of how that works and is also literally incorrect. Like, you can go without mod.rs, but if you want to arrange your code into a directory structure you still need mod.rs.
There have been issues on the GitHub for the rust book about making the explanation coherent (or, more trivially, making it actually true), but the writers couldn’t comprehend that it isn’t immediately intuitive to beginners and have refused to make very basic changes, like having it just say something like “when you write mod foo, the compiler looks in the current directory for either foo.rs or foo/mod.rs”.
A lot of the problem here is the mod.rs -> modname.rs addition. It’s an intuitive QOL improvement to people already familiar with the modules system, but starting from no understanding of the modules system it makes it infinitely more difficult for newbies to understand.
Hmm, I feel like the following set of statements covers the way the module system works:
We have a tree of namespaces, which is called a crate
Declaring a module…
…with just a name refers to a file in a defined location relative to the one containing the declaration
…with a set of curly braces refers to the content of those curly braces
You have to explicitly import anything from outside the current module (file or mod {} block)
In practice, modules are almost always declared in separate files except for test modules, so it ends up being “there is one namespace per file” most of the time anyway.
I don’t really see what about that is all that complicated.
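To make those statements concrete, here’s a tiny sketch (file paths in the comments; all names hypothetical):

// src/lib.rs: the root of the crate's namespace tree
mod foo;        // by name: the compiler loads src/foo.rs (or src/foo/mod.rs)
mod bar {       // with curly braces: the module body is written inline
    pub fn hello() {}
}

pub fn demo() {
    bar::hello();         // items outside the current module need a path...
    crate::foo::hello();  // ...or an explicit `use` import
}

// src/foo.rs: becomes the module `foo` because of the declaration above
pub fn hello() {}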
As someone who just dabbles with rust, it still confuses me. I know I’d get it if I used it more consistently, but for whatever reason it just isn’t intuitive to me.
For me, I think the largest problem is that it’s kind of the worst of both worlds of being neither an entirely syntactic construct nor being filesystem based. Rather, it requires both annotating files in certain ways and places, and also putting them in certain places in the file system.
By contrast, Python and Javascript lean more heavily on the filesystem. You put code here and you just import it by specifying the relative file path there.
On the other end of the spectrum you have Elixir, where it doesn’t matter where you put your files. You configure your project to look in “lib”, and it will recursively load up any file ending in .ex, read the names of the modules defined in there, and determine the dependency graph among them. As a developer I pop open a new text file anywhere in my project, type defmodule Foo, and know that any other module anywhere can simply, e.g., import Foo. For my money, Elixir has the most intuitive system out there.
Bringing it back to rust, it’s like, if I have to put these files specifically right here, why do I need any further annotation in my code to use those modules? I know they’re there, the compiler knows they’re there, shouldn’t that be enough? Or conversely, if I’m naming this module, then why do I have to put it anywhere in particular? Shouldn’t the compiler know it by name, and then shouldn’t I be able to use it anywhere?
I’m also not too familiar with C or C++ which is what it seems to be based on. I get that there’s this ambient sense of compilation units, and using a module is almost like a fancy macro that text substitutes this other file into this one, but that’s not really my mental model of how compilation has to work.
Hey, thanks, this is some interesting food for thought!
I’m also not too familiar with C or C++ which is what it seems to be based on.
I think they’re actually based on ML modules. They’re not really similar to C/C++… I’d actually describe it as more similar to python than C/C++ (but somewhere in the middle between them).
and using a module is almost like a fancy macro that text substitutes this other file into this one,
I think the mod module_name; syntax is actually exactly a fancy macro that does the equivalent of text substitution (up to error messages and line numbers). Of course, it substitutes into the mod module_name { module_src } form, so module_src is still wrapped in a module.
Rust’s module model conceptually is very simple. The problem is that it’s different from what other languages do, and the difference is subtle, so it just surprises new users that it doesn’t work the way they imagine it would.
Being different, but not significantly better, makes it hard to justify learning yet another solution.
Do I need to declare my new mod in main.rs or in lib.rs? What about tests? Why am I being warned about unused code here, when I use it? Why can I import this thing here but not elsewhere?
I think all the explicit declaration stuff is really unnerving coming from Python’s “if there’s a file there you can import it” strategy. Though I’m more comfortable with it now, I still wouldn’t be confident about answering questions about its rules.
What benefit is there to releasing less often?
Another user on here (forgive me, I can’t remember who) said it well: if I cut my pizza into 12 slices or 36 slices, it’s the same amount of pizza but one takes more effort to eat.
Every six weeks I have to read release notes, decide if what’s changed matters to me, if what counts as “idiomatic” is different now, etc. 90% of the changes will be inconsequential, but I still gotta check.
Bigger, less frequent releases give me the changes in a more digestible form.
Note that this is purely a matter of opinion: obviously a lot of people like the more frequent releases, but the frequent release schedule is a common complaint from more than just me.
Rust tried to do it with the “Edition Guide” for 2018 which — confusingly — was not actually describing features exclusive to the new 2018 parsing mode, but was a summary of the previous couple of years of small Rust releases.
The big edition guide freaked some people out, because it gave the impression that Rust had suddenly changed a lot of things, and that there were two different Rusts now. I think Rust is damned here no matter what it does.
I don’t remember exactly what I was doing but I ended up running into this:
for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
Point is, I got to that point trying to have an Rng in a struct and gave up. :)
My solution was to put it in a Box, but that didn’t work for one of the Rng traits (whichever one includes the seed functions), which is what I wanted.
Either way, I obviously need to do more research. Thanks.
Box is an owned pointer; despite being featured so prominently, it doesn’t have many uses. It’s basically good for:
Making unsized things (typically trait objects) sized
Making recursive structs (otherwise they’re infinite sized)
Efficiency (moving big values off of the stack)
C ffi
(Probably a few things I forgot, but the above should be the common cases)
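A small illustration of the first two bullets (hypothetical types):

trait Animal { fn speak(&self) -> String; }
struct Dog;
impl Animal for Dog { fn speak(&self) -> String { "woof".into() } }

// Without the Boxes this enum would be infinitely sized.
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

fn main() {
    let a: Box<dyn Animal> = Box::new(Dog); // unsized trait object made sized
    let t = Tree::Node(Box::new(Tree::Leaf(1)), Box::new(Tree::Leaf(2)));
    let _ = (a.speak(), matches!(t, Tree::Node(..)));
}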
RefCell is a single-threaded rw-lock, except it panics where a lock would block, because blocking on a single-threaded lock would always be a deadlock. Its purpose in life is to move the borrow checker’s uniqueness checks from compile time to runtime.
Yup, I used RefCell here because I don’t think the changing internal state of the random number generator is relevant to the users of the CharacterMaker, so I preferred make to be callable without a mutable reference, but that’s an API design choice.
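A minimal sketch of that arrangement, assuming rand 0.8; CharacterMaker is the name from the comment above, the rest is hypothetical:

use std::cell::RefCell;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

struct CharacterMaker {
    // Concrete RNG type: SeedableRng is not object-safe, so a seedable
    // Box<dyn ...> doesn't work, but a concrete StdRng does.
    rng: RefCell<StdRng>,
}

impl CharacterMaker {
    fn new(seed: u64) -> Self {
        CharacterMaker { rng: RefCell::new(StdRng::seed_from_u64(seed)) }
    }

    // RefCell hides the RNG's internal state change, so make() takes &self.
    fn make(&self) -> u8 {
        self.rng.borrow_mut().gen_range(1..=20)
    }
}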
I know that there are use cases for unwind-on-panic, just like there are use cases for move constructors, green threads, and fork(). But all of these impose a peanut-butter cost: every generic algorithm, optimization pass, and formal proof has to put in work to support them. It doesn’t carry its weight.
Hm, I think I disagree relatively strongly, with two points:
I don’t think panics impose unavoidable cost. There’s an option to require that panic aborts, so, e.g., formal proofs can just require that the code is compiled with panic=abort.
I also feel that supporting unwinding is important for a large class of important applications. In most long-running stateful programs you need catch_panic around internal boundaries for reliability. rust-analyzer relies heavily on this ability, as does any HTTP server.
I can see that, if we restrict Rust to pure systems software (and write our IDE and Web stuff in Kotlin), then no unwinding has a big appeal, but Rust is much more than just systems programming.
In my experience writing a couple of long running stateful programs, catching panics never seems like a good idea.
Inevitably there is some shared state somewhere, and that state is negotiated via mutex locks (or similar).
Panics will just leave the entire thing in a big unknown with potentially poisoned locks.
I might be missing the point, but it seems to me rust panics are quite similar to some Java practices where one just turns an irrecoverable problem into a RuntimeException and hopes the problem is solved somewhere else.
I don’t think it’s universally inevitable. Off the top of my head, here are some examples where panic recovery seems to work well in practice:
Erlang
web servers
IntelliJ ides
rust-analyzer (*)
So it seems that panic recovery works for some things and doesn’t work for others? This seems like a reasonable hypothesis to me.
Just slapping catch_panic everywhere won’t magically make a program reliable, quite the contrary. First, you need to architect it in a way that it actually has reliability boundaries. And this is primarily about state management.
For example, if you make state updates transactional, then only the code that applies transaction writes can’t recover after a panic, and that’s a tiny fraction of the code. For example, rust-analyzer’s state is guarded by a single mutex. If a panic happens in a place where we .write it, directly on the main loop, then the analyzer just crashes. But literally any part of the compiler can panic, and this won’t corrupt state or crash the process, because the compiler only calculates derived data.
(*): there was a single case where our catch_panic bit us in the back rather painfully, but that was due to a compiler bug. Initially, we had our catch_panic and all was good. Then, at the Rust all hands, incremental compilation started to non-deterministically fail. The failure was traced to the trait system’s co-inductive reasoning for auto-traits like UnwindSafe. So, to work around this bug, we added AssertUnwindSafe as a way to short-circuit the compiler’s trait inference. After that, chalk started to cache state which actually wasn’t UnwindSafe. So we spent some time debugging mysterious runtime failures which could have been caught at compile time, but alas.
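A minimal sketch of that reliability-boundary shape (all names hypothetical):

use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::Mutex;

struct State { answers: Vec<String> }

fn handle_request(state: &Mutex<State>, input: &str) {
    // Derived-data computation may panic freely inside the boundary...
    let derived = catch_unwind(AssertUnwindSafe(|| format!("analyzed: {}", input)));
    // ...while the write to shared state stays outside it, uncorrupted.
    if let Ok(answer) = derived {
        state.lock().unwrap().answers.push(answer);
    }
}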
In one form it is, and falls under “internal boundaries for reliability”.
But in a strict sense (as in, “kernel drivers should never die on OOM”) you probably want to just use fallible (Result-returning) allocations everywhere. For that, you don’t need unwinding.
I was under the impression (potentially incorrectly) that much of the standard library doesn’t have the ability to use fallible allocations, but just panics on OOM.
You are mostly correct. It’s even worse than that: at the moment, std aborts (rather than unwinds) on OOM.
std is just written with the assumption of a global infallible allocator. That’s a reasonable assumption for the things std is intended for. The current custom-allocator work will make this a bit more flexible, but won’t change the basic APIs.
If you need to handle alloc failure 100%, you need different APIs which expose allocations to the caller.
The compiler always drops things at the end of their scope, and that can’t be changed because it would break code relying on drop order. Allowing the compiler to choose to drop things any time after last use (including last use of things that are borrowed) would mean it could automatically solve more borrow checking issues.
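For example (the Logger type here is hypothetical), a borrow held by a value with a Drop impl lives until the end of scope, not until its last use:

struct Logger<'a> { sink: &'a mut Vec<String> }

impl Drop for Logger<'_> {
    fn drop(&mut self) { self.sink.push("logger dropped".into()); }
}

fn main() {
    let mut log = Vec::new();
    let logger = Logger { sink: &mut log };
    // `logger` is never used again below, but because it has a Drop impl the
    // compiler keeps it (and its &mut borrow of `log`) alive until the end of
    // scope; without this explicit drop, the next line would not compile.
    drop(logger);
    log.push("written directly".into());
}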
Types as expressions:
Currently rust has two different “expression” languages that evaluate syntax trees to values: the one for types, and the one for expressions. This feels like a mistake. For instance, trying to glue the value-syntax into the type syntax makes const generics harder and more verbose (you need to wrap some things in {expr} to avoid ambiguity; see the sketch below). It means you need to learn both syntaxes. Etc.
Moreover if we make types more first class, runtime reflection becomes natural at minimal complexity cost. Rust has very limited support for runtime reflection today, but I’d like to see it expand (if just for debugging purposes, like printing type names).
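The {expr} wrapping mentioned above, on stable Rust:

struct Buf<const N: usize>([u8; N]);

fn main() {
    let _a: Buf<4> = Buf([0; 4]);          // a plain literal needs no braces
    let _b: Buf<{ 2 + 2 }> = Buf([0; 4]);  // an expression must be braced
}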
Mixed tuple/record syntax:
Currently rust structs are either struct TupleLike(Without, Named, Fields) or struct RecordLike{with: Only, named: Fields}. I’d like to see these unified so we could have only one type of struct Struct(UnnamedField, named: Field). Probably require unnamed fields to either all come first, or all come last.
Anonymous types:
Currently every type in Rust is named, I can’t do fn foo() -> enum {Bar, Baz}, only enum FooReturn {Bar, Baz}; fn foo() -> FooReturn. This makes error handling more painful.
We already have type Foo = Bar for type aliases, and I think we could improve on this to make this the only form of naming types for consistency, so type FooReturn = enum { Bar, Baz} if you really want to name the enum. Some thought would have to be put into visibility.
Unnamed types that are structurally identical should be treated as equivalent. Unnamed types should coerce to named types if the entirety of the named type is visible in the namespace… Named types shouldn’t coerce to each other (which is a break from the current behavior of type Foo = Bar in rust, but IMO a good one).
Integrate structs into unions better:
Currently a common pattern in rust is struct Foo{ ... }; struct Bar { ... }; enum Baz { Foo(Foo), Bar(Bar) }. Where you have an enum that just contains one of a list of structs, but you end up repeating yourself a lot to access those structs. This should be improved to something like enum Baz { *Foo, *Bar } to minimize repetition. Likewise on the pattern matching side things like if let Foo{ ... } = baz_value {} should work, instead of needing to do if let Baz::Foo(Foo{ ... }) = baz_value {}.
Named arguments:
When I have some function, I don’t know, draw_rectangle(ctx, position, border_size, brush, target) remembering what argument goes where is unnecessarily difficult, named arguments make it not a problem.
Weird, I definitely disagree on expanding this. It just wastes horizontal space and keystrokes for no benefit.
Maybe I32 and Str instead of i32 and str, but abbreviations for commonly used things are good. You’re not even getting rid of the abbreviation, Int is after all short for Integer.
I agree with this (I think lowercase would be fine, too, though).
I think that Rust overdoes it a little bit on the terseness.
I understand that Rust is a systems language and that Unix greybeards love only typing two or three characters per thing, but there’s something to be said for being descriptive.
Examples of very terse things that might be confusing to a non-expert programmer:
Vec
fn
i32, u32, etc
str
foo.len() for length/size
mod for module
mut - this keyword is wrong anyway
impl
None of the above bothered me when I learned Rust, but I already had lots of experience with C++ and other languages, so I knew that Vec was short for “vector” immediately. But what if I had come from a language with “lists” rather than “vectors”? It might be a bit confusing.
And I’m not saying I would change all/most of the above, either. But maybe we could tolerate a few of them being a little more descriptive. I’d say i32 -> int32, Vec -> Vector, len() -> count() or length() or size(), and mut -> uniq or something.
For the context of those who aren’t familiar, &mut pointers are really more about guaranteeing uniqueness than mutability. The property &mut pointers guarantee is that there is only one pointing at a given object at a time, and that nothing accesses that object except through them while they exist.
Mut isn’t really correct, because you can have mutability through a & pointer using Cell types. You can have nearly no mutability through a &mut pointer by just not implementing any mutable methods on the type (though you can’t stop people from doing *mut_ptr = new_value()).
The decision to call this mut was to be similar to let mut x = 3… I’m still unconvinced by that argument.
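Concretely, the Cell point above looks like this:

use std::cell::Cell;

fn main() {
    let counter = Cell::new(0);
    let shared: &Cell<i32> = &counter; // a plain shared reference...
    shared.set(shared.get() + 1);      // ...through which we mutate anyway
    assert_eq!(counter.get(), 1);
}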
Not to mention the holy war over whether let mut x = 3 should even exist, or if every binding is inherently a mutable binding since you aren’t actually prevented from turning a non-mutable binding into a mutable one:
let x = 3;
let mut x = x;
// mutate the ever living crap out of x
For an example, check out some Swift code. Swift more or less took Rust’s syntax and made it a little more verbose. fn became func, the main integer type is Int, sequence length is .count, function arguments idiomatically have labels most of the time, and so on. The emphasis is on clarity, particularly clarity at the point of use of a symbol — a function should make sense where you find a call to it, not just at its own declaration. Conciseness is desirable, but after clarity.
Yep. I also work with Swift and I do like some of those choices. I still think the function param labels are weird, though. But that’s another topic. :)
I think this mostly doesn’t matter - I doubt anyone would first-try Rust, given its complexity, so it’s not really that much of an issue. Keywords are all sort of arbitrary anyway, and you’re just gonna have to learn them. Who’d think go would spawn a thread?
I, for one, think these are pretty nice - many people will learn Python so they expect len and str, and fn and mod are OK abbreviations. I think the terseness makes Rust code look nice (I sorta like looking at Rust code).
Though I’d agree on mut (quite misleading) and impl (implement what?).
I don’t care about the exact naming conventions, as long as it is consistent. (This is in fact exactly how I named types in my project though, what a coincidence. :-D)
In general the random abbreviations of everything, everywhere are pretty annoying.
Lowercase types are primitive types, while CamelCase ones are library types. One has special support from the compiler and usually maps to the machine instruction set, while the other could be implemented as a 3rd-party library.
Because they are stack-allocated primitive types that implement Copy, unlike the other types which are not guaranteed to be stack-allocated and are definitely not primitive types.
How does anything convey anything? It’s a visual signal that the type is a primitive, stack-allocated type with copy semantics. As much as I hate to defend Java, it’s similar to the int/Integer dichotomy. If they were Int and Integer, it wouldn’t be quite so clear that one is a primitive type and the other is a class.
Total outsider here, but my understanding is that Rust newcomers struggle with satisfying the compiler. That seems necessary because of the safety you get, so OK, and the error messages have a great reputation. I would want to design in possible fixes for each error which would compile, and a way to apply them back to source code given your choice. If that’s a tractable problem, I think it could help cut trial and error down to one step and give you meaningful examples to learn from.
Actually, a lot of the error messages do offer suggestions for fixes and they often (not always) do “just work”. It’s really about as pleasant as I ever would’ve hoped for from a low-level systems language.
Yeah, it seems to be. I often use Emacs with lsp-mode and “rust-analyzer” as the LSP server and IIRC, I can hit the “fix it” key combo on at least some errors and warnings. I’m sure that’s less true the more egregious/ambiguous the compile error is.
My imaginary version that keeps most of the existing language would:
make vararg support first class
nuke macros, which would no longer be necessary in my heavily restricted usage with const generics and varargs
nuke async. worse performance for literally no benefit, just a zombie ecosystem that exists to bandaid its theoretically unfixable performance and correctness problems forever
generally make abstraction more painful (largely achieved through the macro abolition above), pushing at least a couple steps toward a better C away from its current place as a better C++
make it impossible for people to create their own traits. std has more than enough as it is, and custom traits are a huge documentation hazard that destroys the onboarding experience of any library. bwahahaha
make it impossible for people to create their own traits. std has more than enough as it is, and custom traits are a huge documentation hazard that destroys the onboarding experience of any library.
(I’m not really sure if this comment is serious; I read it as sarcastic. Just in case you are, though…)
I don’t really think it’s a good idea to allow the standard library of a language to do more than the language’s users can do. I mean, just look at Elm and the drama that has happened there.
Rust will always take advantage of a large number of features that, it has been judged, normal users will not be able to use effectively, for one reason or another. In my judgement, after seeing how the community tends to use various features over the 7 years that I’ve been participating, I would prefer to go without several of them. Stating these perspectives is the point of this thread.
Async in Rust is wonderful. It has solved real performance and reliability issues in my formerly thread-channel-spaghetti programs.
Result objects instead of exceptions enabled separation of error handling from function calls themselves (result-as-object allows building abstractions naturally using regular language features).
Similarly, Future separates function execution from function calls, and adds a higher-level way of capturing and controlling the program’s flow as an object. It’s a very powerful feature, and very well designed given how low-level and low-overhead it is in Rust.
It’s definitely an improvement over non-async Rust. Compared to other languages it’s pretty good too:
Rust’s model naturally supports cancellation of async operations. If you drop a Future, it stops executing. This is amazingly easy compared to JS’s Promise model, which has no room for cancellation and needs manually managed AbortControllers. (A sketch follows this list.)
Rust’s Future encapsulates the entire call tree, not just a single operation. This allows async calls to be inlined and optimized to almost nothing. In many cases this is much more efficient than the heap-object-per-call model used in JS and C#.
.await as a suffix turns out to be quite convenient in practice. foo().await.bar().await rather than await (await foo()).bar().
It’s so brilliantly simple compared to C++ coroutines. At the lowest level, Rust’s model boils down to just calling Future::poll() to completion. The C++ coroutine spec is more stateful, and has many, many more sharp edges and details to handle.
Async calls are separate from async execution (Future objects are passive), so you can execute them however you want. For example, Dropbox runs their async code through a custom test harness that fuzzes it for race conditions, because it can control externally what and when runs in what order.
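A sketch of the drop-as-cancellation point, assuming tokio 1.x (with the macros and time features) as the executor:

use std::time::Duration;

#[tokio::main]
async fn main() {
    let work = async {
        tokio::time::sleep(Duration::from_secs(10)).await;
        println!("work finished");
    };
    // If the timeout wins, `work` is dropped, and dropping it is the
    // cancellation; no AbortController-style bookkeeping required.
    if tokio::time::timeout(Duration::from_secs(1), work).await.is_err() {
        println!("timed out; work cancelled by drop");
    }
}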
There’s some Rust-specific awkwardness:
Async functions can’t be recursive without some explicit syntax gymnastics (that’s because an async function’s flow is expressed as a struct, and a struct can’t be infinitely recursive without an indirection; see the sketch after this list)
Rust doesn’t bless any particular async executor (there’s no built-in implicit event loop like in Node or golang), which means there are multiple 3rd party options to choose from, which splits the ecosystem.
Rust demands being precise about memory management and thread safety, and async is no exception. People coming from GC languages with async can’t just switch to Rust’s async without learning the hard parts of Rust first.
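The recursion workaround from the first point above looks roughly like this (std only; the countdown function is hypothetical):

use std::future::Future;
use std::pin::Pin;

// The indirection: box the recursive call so the generated state-machine
// struct has a finite size.
fn countdown(n: u32) -> Pin<Box<dyn Future<Output = ()> + Send>> {
    Box::pin(async move {
        if n > 0 {
            countdown(n - 1).await;
        }
    })
}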
Rewriting any codebase makes it significantly better. Async is strictly worse in every measurable way. I recommend measuring your workloads and avoiding the severe ergonomic, reliability, throughput, and compile time hits of async.
I shall add that in my case the “rewrite” was strictly a replacement of rayon and various thread::spawns with tokio, not a ground-up rewrite.
It has been a big ergonomic improvement for timeouts and cancellation. I had to have if should_stop_now() {return;} all over the code, and no easy way to abort sync network requests in progress. Now wrapping things in timeout(async {}) is trivial.
It has been a huge improvement in throughput, because previously network-bound calls were clogging my rayon threadpool. With async it’s a non-issue automatically, and I can use semaphores to control concurrency of any section of the code, while having appropriate number of threads for the CPU.
It has been a huge improvement for reliability. Async has async-compatible mutexes. Previously I’ve had trouble with inability to use rayon while holding a mutex (it’s a recipe for a deadlock), so I couldn’t easily parallelize some important init-once computations.
Async may not work for your use-cases, but “strictly worse in every measurable way” is demonstrably false, hyperbole, and implies that everyone who designed it and uses it is clueless.
While you might have had completely broken code before, your code is still worse than it could be without async. Throughput is strictly worse due to the scheduling decisions made by having a user-space scheduler at all. You will always have more bugs due to accidental blocking, which is exactly the same issue you had with your previous broken rayon deadlock, but it happens only in production instead of immediately on your laptop. And your compile times exploded. Come to the other side and your life and your programs will be clearly better.
Not all performance problems are in the raw throughput, and winning microbenchmarks doesn’t always make better programs.
I’ve just explained to you how my perf and reliability problems were higher-level (deadlocks, stuck tasks, difficulty of separating I/O-bound and CPU-bound work to control their concurrency separately).
That could have been totally my fault, and self-inflicted failure of the “completely broken code” I wrote, but I haven’t got magically smarter by switching to async, and yet with the async abstraction I was able to fix the issues plaguing my codebase and improve actual real-world performance I could measure.
I’ve had similar rewrites of projects from sync before to async after which made it easier to understand what was going on. I do not agree with the above comment that all async in rust is strictly worse.
I’m pretty sure spacejam’s answer is “don’t”, but…
Technically Async isn’t doing anything but automatically creating enums and structures for you, and handling lifetimes.
An async function is basically just a plain function returning a state machine that implements Future (note that I’m formulating the signature in a different but approximately equivalent way to actual rust for simplicity).
You can write this all out by hand if you really want to.
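For instance, here is a trivial hand-rolled future, roughly what the compiler generates for an async block that is immediately ready (names hypothetical):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Roughly what `async { 42 }` expands to: a struct holding the state,
// plus a poll() that advances it.
struct Ready(i32);

impl Future for Ready {
    type Output = i32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        Poll::Ready(self.0)
    }
}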
Long ago before rust had async I also had some fun implementing stackful coroutines with a bit of asm, which allows writing more go like async code, boost does the same in C++…
I think I am missing the point here. My current approach to a highly distributed service that runs on a cluster of nodes would be Erlang/Elixir + Rust (aka the Discord architecture). I have tried to implement Async code in Rust but half of the time I have no idea what a typical type signature means by looking at it, another chunk is spent on why the borrow checker is unhappy.
With Erlang I never had this problem with async code. OTP and even raw Erlang makes it super easy for the reader to understand what is happening. Combine that with binary pattern matching of raw socket data and you have an easy time.
I understand the downsides. Slow numerical computation, GC and so on. You could not write a game engine that runs on a single computer in Erlang. This is where Rust comes very handy. Have a super safe and fast data structure in Rust and let Erlang do the async part.
Some additional things. I was surprised how well Erlang worked on a Raspberry Pi running a network service (DNS). It was sufficiently fast to be the replacement of a service written in C performance wise. Anyways, I need to look into that Async alternative that was mentioned in this thread.
I’m not sure whose point you don’t understand here. As I understand spacejam’s point (one I don’t fully agree with), it’s “just use OS threads and blocking IO”, the claim being that the performance difference between a lot of threads, and a lot of async tasks running simultaneously, is low (a claim I don’t think is always true).
My point on async was a bit more nebulous really, really just about the details of how async rust works. This blog post is probably still the best resource I know of for explaining that in more detail (the gist of the blog is still correct, though various minor things have changed since then). I think the only way to really effectively use async today is to understand it at a deep level too, so hopefully that helps out with that portion of your post too.
I can’t say I’ve used Erlang at all. My opinion on async rust these days is it’s really good if you know what it’s doing, really foot-gunny if you don’t, and the ecosystem around it is really immature and also really foot-gunny. Using erlang as a fancy async executor… doesn’t sound sensible, but I don’t pretend to understand erlang enough to say that with confidence.
I would leave the built-in derives but I would remove user-definable proc macros entirely. They destroy compile times and reduce the composability of the language overall as I’ve experienced them.
serde should be built-in and first class, like a more useful Rustc{En,De}codable. It’s not like this is some cutting-edge problem that languages (or rust itself) have never handled before.
This is what rustc_serialize is. This is what you’d get as the exclusive built-in option if Rust wasn’t extensible.
The same goes for std::mpsc vs crossbeam-channel. The third-party ecosystem is free to experiment, to try (and fail at) many things, until something really good emerges. std has only one shot due to backwards-compat constraints. The Rust libs team is great, but you can’t expect them to write the best-in-the-world implementation of everything, especially on the first try.
Not much, to be honest. Although I’m finding that Rust and my own values are slowly diverging (as I get older I want simpler tools, and I’m looking longingly at Zig), I think that Rust accomplishes what it set out to do really well. There are some weird warts here and there, e.g., the rules about what can be a trait object, or two closures with the same type signatures having different types, that sometimes bite in practice; when reading the explanation, they make some sense, but they can still be a bit annoying. But overall, I think that Rust really nailed most things.
One thing I would probably like is if indexing in vectors/arrays was allowed on more types (e.g., u8, u16, u32) and that could let the compiler know whether it can omit bounds checks. (If an array has a length of 256 and you index with a u8, you cannot go out of bounds.)
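For example (the widening cast is what you have to write today; in this particular shape the optimizer can usually remove the check already):

fn lookup(table: &[u16; 256], i: u8) -> u16 {
    // A u8 can never reach 256, so the bounds check is logically redundant.
    table[usize::from(i)]
}

fn main() {
    let t = [0u16; 256];
    assert_eq!(lookup(&t, 255), 0);
}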
Reminder that I need a proper motivation document for Garnet, my “what if Rust but simpler” language. Though I confess I don’t have solutions right now for some of the trickier things, such as being generic over mut/non-mut and owned/borrowed versions of functions.
Though it is tempting, we will NOT do arbitrary-precision integer types such as being able to define an integer via an arbitrary range such as [-1, 572) or arbitrary size such as i23. Maybe later.
Is this for, like, bit packing, or what’s the temptation that draws you to arbitrary-precision integers?
It’s called bikeshedding, and it always happens with complicated problems/questions. It’s easiest to argue surface-level stuff rather than the really difficult stuff.
I think it’s an utterly fascinating aspect of human nature!
It’s like a bike shed, it’s easy to have an opinion on how to paint it/change syntax, hard to have an opinion on how to improve the internal structure.
Change the internal structure too much and it’s no longer “rust”. Obviously many people prefer python to rust, but saying “clone python” isn’t an interesting answer.
Most of rust’s warts are, at least in my opinion, in half finished features. But like the above it’s uninteresting to say “well just hurry up and fix const generics, GATs, custom allocators, self referential structs, generators, making everything object safe, etc”. Those aren’t really “redesigns”, they’re just “well keep at it and we’ll see what we end up with”.
Rust is a great language, but there’s a lot I plan to do differently in Dawn. The next post at https://www.dawn-lang.org will talk about how fine-grained capabilities / effects will work. And I’m working my way towards a post about an alternative to the borrow checker that I believe will be easier for users to understand and use.
Reading the comments here, I’m so happy you can’t re-design Rust from scratch today :). Regarding opinions on I32 and Vec, shouldn’t you first learn the language you’re going to use?
I like most things about Rust; maybe I’m biased, as the only languages I’ve learned and written in are: python, c, go, javascript, lua and some haskell/ocaml.
Still, I’m wondering how rust would look if it were whitespace-sensitive and without curly brackets. And how we could improve the async world.
btw, if anyone is starting or thinking of working on source-to-source translators, interpreters or compilers, count me in, I’m hungry to learn.
Fix the broken Eq/PartialEq, Ord/PartialOrd design.
Throw out the useless semicolon rules.
Replace -> in function declarations with :.
Remove ! syntax for macros (creates the wrong incentives).
Runner ups:
Fix/remove the horribly broken Path::join. (Or rather replace Path/PathBuf with an abstraction where you don’t have to decide how to implement the method, because it simply won’t compile.)
I seem to remember you and I have debated this before, but I don’t think it makes any sense to call the semicolon rules useless. Semicolons convert expressions into statements, and quite apart from the fact that there needs to be some syntax to do that, because Rust is an expression-based language, the visual distinction that you get from a line ending in a semicolon or not allows you to tell at a glance whether something is an expression or a statement.
What do you think you’re accomplishing here? Toddlerized argument via repetition pollutes conversations, demonstrates bad faith, and literally makes this site less valuable to others. I’d suggest that you follow the moderators’ advice: step away from the keyboard for a while until you can engage in mutually respectful discussions.
The issues are not made-up. Please accept that people may disagree with your opinions for other reasons than losing grip on reality.
Other languages have just chosen different trade offs in complexity of parsing, flexibility of the syntax, and risk of semantic changes caused by line wrapping.
I don’t have 5 complaints about Brainfuck. I have one: Lack of semantic richness.
Its syntax is actually pretty good, considering what its semantics are. Other languages with the same semantics and different syntax exist, and they also suck.
In Verona, we’re doing the same thing as Pony: [] is for generics, () is call syntax, array access uses call syntax on array objects. It’s a slightly odd decision in C-family languages to have separate syntax for them because C doesn’t have anything that is both indexable and callable (you can’t use array indexing on function pointers), but it’s a necessary hack if you want to write a single-pass compiler that runs on a system with 128 KiB of RAM. A lot of earlier languages used the same syntax because an array is, at the abstract level, just a map from integers to some other value. C doesn’t think of the abstraction like this and thinks of an array as syntactic sugar for pointer arithmetic. C++ keeps this distinction for standard library collection types, but I don’t think I’ve ever seen a C++ class that overloaded both operator() and operator[].
The big issue with <> for generics (or anything else) is that < and > are both allowed in your program as stand-alone operators. This means that you can’t spot a missing angle bracket until you’ve done type checking, whereas you can spot a missing other kind of bracket / brace after tokenisation, before you even parse.
it’s not overloading array indexing, because [] for generics is in types, and [] for indexing is in expressions. As far as I can tell, rust grammar is sufficiently well designed that these are never ambiguous.
edit: that was snarky. It’s not worse than <>, we’d just have the turbo hammershark or something like that instead of turbofish, but you do have a point. D uses foo!(bar)(baaz) to separate template and regular arguments, for example.
I might be wrong, but I think the main motivation behind using [ rather than < is exactly to get rid of the need to disambiguate expressions vs types via ::
If we are fine with replacing the turbo-fish with a turbo-snail, then yes, just switching the sigils works.
Ok, foo isn’t an array, but indexing works for all sorts of types in rust, not just arrays. It’s fetching the Xth element of the collection foo, where foo happens to be a generic-function collection type.
Yeah, using the same grammar for type references and expressions and disambiguating on semantic level might work. But that’s a way bigger change than just using [].
I wonder if optionality of type arguments would prevent this? Like, foo[()][()], where the first indexing with type () seems like it can be ambiguous. As in, is foo[()] a type argument, or a value argument with an inferred type argument?
My gut feeling is that this might not be a huge problem. It seems like the “syntax unambiguously classifies the role of an expression” property is already lost in Rust in a minor way (the prime example would be constant ambiguity in patterns). So perhaps treating the language as a homogeneous tree of things, and classifying them according to semantics later, would actually be a clearer model?
The issue is that < and > are already used for less-than and greater-than signs, so Foo<bar>(baz) can either be ((Foo less-than bar) greater-than (baz)) or (Foo instantiated with bar, applied to baz).
Overloading indexing is fine, because it really is a kind of indexing. Indexing is a function (Container, Index) -> Value, and this is a function (GenericType, ConcreteType) -> ConcreteType. I.e. Container = Type and Index, Value = ConcreteType. As a result it doesn’t lead to the same kind of ambiguities.
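The classic instance of that ambiguity in today’s Rust:

fn main() {
    // In expression position, Vec<i32>::new() would parse as a chain of
    // comparisons, roughly (Vec < i32) > ::new(), so Rust rejects it and
    // requires the turbofish ::<> to disambiguate:
    let v = Vec::<i32>::new();
    assert!(v.is_empty());
}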
Wouldn’t that make AST construction dependent on types, and thus a chicken-egg problem similar to what C has with typedefs?
Forbidding uses of types in expression contexts would be limiting; e.g., you couldn’t “simply” use static methods or associated constants, because they’re expressions, but start with a type name.
It would [1] if you need to represent Type[Type] and Expr[Expr] as different types of ast nodes. I don’t believe that you actually need to though.
[1] You could probably hack around this by checking what symbols are in scope too, but that’s ugly and would probably lead to poor handling of special cases and poor error messages.
The argument I’ve heard is that it makes the grammar more complicated, and leads to some ambiguity issues, which results in things like Rust’s turbofish syntax (::<> iirc) or C++’s .template syntax.
A different approach: Ada uses neither <> nor [], and instead requires associating a new name with a function-call-like explicit instantiation of a type or generic function. It seems like it leads to a proliferation of types, but it’s an approach I haven’t heard much about.
No, that’s my job. Take a break from this thread and come back way kinder in the future. Feeling like you’re right doesn’t make it OK to heap scorn on people.
What makes using [] for array/map syntax an “abuse”? What distinguishes it from literally any other syntax choice? I could claim that using () to surround function arguments is an abuse of those symbols, but I have absolutely nothing to back that up.
On the flip side, though, if macros aren’t distinguished with a !, then if the author of a macro does do something that isn’t “well-behaved”, the user has even less warning that it’s going to do that.
Which comment? I’ve read all of your replies in this thread and I haven’t seen anything that refutes the assertion that removing ! would just make the developer experience worse if a macro does choose to do crazy things.
Instead, macros should be so well-behaved that users shouldn’t even need to know whether something is a macro or not.
Imagine that Rust users would keep filing bugs about stupid macros until the issue is resolved, just like they successfully did with unnecessary unsafe blocks.
So your solution to this problem is to make Rust developers waste a bunch of extra time filing issues? Instead of the current state of play, which is that every user of these hypothetical macros has at least a measure of pre-warning that the thing they’re calling is a macro and therefore might perform some crazy AST transformation on its input? I haven’t run a survey, but I would hazard a guess that this “macros do arbitrary stuff” problem is not actually a real problem that Rust developers have, partly because the intention of macros is to allow you to do arbitrary stuff.
Rust macro syntax is informed by experience from C where macros have a function-like syntax, and still do surprising arbitrary stuff.
In C this syntax hasn’t stopped people from doing weird stuff with macros, but it stresses users, who can’t be sure that everything that looks like a function call is just a function call.
It’s also worth noting that in C it’s considered a good practice to use all-caps for macro names, because even though macros could look just like functions, users don’t want that.
I would focus on “inline” metaprogramming (D style: pervasive CTFE + compile time reflection) early on.
Having to write a proc macro feels awful compared to what you can do in D (example). Rust now has const fn but not even anything like C++’s if constexpr to use them.
I forgot: there’s no reason to make a language in the 2000s without named arguments as default.
I don’t want to disagree, but I want to share my experience. Before Rust, my primary language was Python, which is build around named arguments. I also used a lot of Kotlin, which has named arguments as well (although not as enshrined as in Python). I would expect that I’d miss named arguments in Rust, but this just doesn’t happen. In my day-to-day programming, I feel like I personally just never need neither overloading nor named parameters.
I come from Swift, and I also (again, surprisingly to me) feel this way.
Well, Swift has its own weird parameter naming that it inherited from ObjC.
It’s definitely weird. But once you get used to writing function signatures and calls like
…it starts to feel pretty natural. My expectation was that the transition to
…would feel really unnatural. But somehow (in my experience) other aspects of Rust’s design come together to mean that I don’t miss the added clarity of those parameter labels.
I find the Swift approach appealing in theory, but it’s hard to do well in practice. Maybe it’s just me, but I can never wrap my head around the conventions. I can read this 100 times and still struggle with my own functions: https://swift.org/documentation/api-design-guidelines/#parameter-names
For example, why isn’t your example
func setKey<K, V>(_ key: K, to value: V)
?When we get to the final bullet point in that guideline “label all other arguments”, how should we label them? So that the call site reads like a sentence? That doesn’t seem possible. So just name them what they represent? Is this right:
func resizeBox(_ box: Box, x: Int, y: Int, z: Int)
? Then the call site is way less cool than your example:resizeBox(box, x: 1, y: 2, z: 3)
.After reading the API guidelines again, I think you’re right, it should have been that :)
Yes and yes, as far as I know.
I agree that, in practice, I don’t often miss either feature. But it does happen sometimes.
Every once in a while I do miss overloading/default-args. Sometimes you have a function that has (more than one!) very common, sane, default values and it sucks to make NxN differently named functions that all call the same “true” function just because you have N parameters that would like a nice default.
Then there’s the obvious case for named parameters where you have multiple parameters of the same type, but different meaning, such as x, y, z coordinates, or- way worse- spherical coordinates: theta and phi, since mathematicians and physicists already confuse each other on which dimension theta and phi actually represent.
It’s not terrible to not have them, but it seems like it would be nice to do it the Kotlin way. Default params are always a code smell to me, but just because it smells doesn’t mean it’s always wrong and I do use them in rare circumstances.
I’d like to disagree, I really think functions should be as simple as possible, so that you can easily write higher order functions that wrap/transform other functions without worrying about how the side-channels like argument names will be affected. I really like the Haskellism where your function arguments are unnamed, but you recover the ergonomics of named arguments by defining a new record just for a single function, along with potentially multiple ways of constructing that record with defaults.
I’m not saying this pattern would work for every language, I just want to point out that named arguments aren’t necessarily a good thing.
Named arguments make renaming parameters a breaking change; this is why C# didn’t support them until version 4. If I ever design a language, I’ll add named arguments after the standard library is finalized.
Swift has abi stable named parameters, it just defaults to the local parameter name being the same as the public api name
Yeah, I think Swift nailed it. Its overloading/default args aren’t even a special calling convention or
kwargs
object. They are merely part of function’s name cleverly split into pieces.init(withFoo:f andBar:b)
is something likeinit_withFoo_andBar_(f, b)
under the hood.There should be formal semantics for the borrow checker.
Rust’s module system seems overly complex for the benefit it provides.
Stop releasing every six weeks. Feels like a treadmill.
The operator overload for assignment requires generating a mutable reference, which makes some useful assignment scenarios difficult or impossible…not that I have a better suggestion.
A lot of things should be in the standard library and not separate crates.
Some of the “standard” crates are more difficult to use than they should be. I still can’t figure out how to embed an implements-the-RNG-trait in a struct.
Async is a giant tar pit. The immense complexity it adds doesn’t seem to be worth it, IMHO.
Add varargs and default argument values.
I genuinely do not understand what people find complex about the module system. It’s literally just “we have a tree of namespaces”.
“We have a tree of namespaces. Depending on how you declare it the namespace names a file or it doesn’t. Namespaces nest, but you need to be explicit about importing from outer namespaces. Also, there’s crates which are another level of namespacing with workspaces.”
Versus something like Python: There is one namespace per file.
(Python does let you write custom importers and such but that’s truly deep magic that is extremely rarely used.)
I’m not saying there aren’t benefits to the way Rust does it. I’m saying I don’t feel like the juice is worth the squeeze.
EDIT: @kornel said it better: https://lobste.rs/s/j7zv69/if_you_could_re_design_rust_from_scratch#c_3hsii6
I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of rust to me
New file means a new namespace (module), new namespace (module) doesn’t mean a new file.
It was the opposite for me, for whatever reason; it feels like there’s active friction between my mental model of namespaces and the way Rust does it. It’s weird.
You know, I kinda got the same mental friction feeling with namespaces in Tcl. I couldn’t tell you why. Maybe I just hate nested namespaces…
I’ve over and over and over again heard from beginners that the docs do a notably bad job communicating how it works, in particular those that are the easiest to get your hands on as a beginner (the rust book and by example). They deal almost exclusively with submodules within a file (i.e.
mod {}
), since it’s difficult to denote multiple interrelated files in the text, playground example, text, playground example idiom they decided to use.When they briefly do try to explain how the external file / directory thing works they say something like “you used to need a file named
mod.rs
in another directory but now in Rust 2018 you can just make a file named(the name of the module).rs
” which is a really poor explanation of how that works and is also literally incorrect. Like, you can go withoutmod.rs
but if you want to arrange your code into a directory structure you still needmod.rs
. There have been issues on the Github for the rust book about making the explanation coherent (or more trivially making it actually true) but the writers couldn’t comprehend that it isn’t immediately intuitive to beginners and have refused to make very basic changes like having it just say something like “when you writemod foo
, the compiler looks in the current directory for eitherfoo.rs
orfoo/mod.rs
”. A lot of the problem here is the mod.rs -> modname.rs addition. It’s an intuitive QOL improvement to people already familiar with the modules system but starting from no understanding of the modules system it makes it infinitely more difficult for newbies to understand.Hmm, I feel like the following set of statements covers the way the module system works:
mod {}
block)In practice, modules are almost always declared in separate files except for test modules, so it ends up being “there is one namespace per file” most of the time anyway.
I don’t really see what about that is all that complicated.
As someone who just dabbles with rust, it still confuses me. I know I’d get it if I used it more consistently, but for whatever reason it just isn’t intuitive to me.
For me, I think the largest problem is that it’s kind of the worst of both worlds: neither an entirely syntactic construct nor entirely filesystem-based. Rather, it requires both annotating files in certain ways and places, and also putting them in certain places in the file system.
By contrast, Python and Javascript lean more heavily on the filesystem. You put code here and you just import it by specifying the relative file path there.
On the other end of the spectrum you have Elixir, where it doesn’t matter where you put your files. You configure your project to look in “lib”, and it will recursively load up any file ending in `.ex`, read the names of the modules defined in there, and determine the dependency graph among them. As a developer I pop open a new text file anywhere in my project, type `defmodule Foo`, and know that any other module anywhere can simply, e.g., `import Foo`. For my money, Elixir has the most intuitive system out there.

Bringing it back to Rust, it’s like: if I have to put these files specifically right here, why do I need any further annotation in my code to use those modules? I know they’re there, the compiler knows they’re there; shouldn’t that be enough? Or conversely, if I’m naming this module, then why do I have to put it anywhere in particular? Shouldn’t the compiler know it by name, and then shouldn’t I be able to use it anywhere?
I’m also not too familiar with C or C++, which is what it seems to be based on. I get that there’s this ambient sense of compilation units, and using a module is almost like a fancy macro that text-substitutes this other file into this one, but that’s not really my mental model of how compilation has to work.
Hey, thanks, this is some interesting food for thought!
I think they’re actually based on ML modules. They’re not really similar to C/C++… I’d actually describe it as more similar to python than C/C++ (but somewhere in the middle between them).
I think the `mod module_name;` syntax is actually exactly a fancy macro that does the equivalent of text substitution (up to error messages and line numbers). Of course, it substitutes into the `mod module_name { module_src }` form, so `module_src` is still wrapped in a module.

Rust’s module model is conceptually very simple. The problem is that it’s different from what other languages do, and the difference is subtle, so it just surprises new users that it doesn’t work the way they imagine it would.
Being different, but not significantly better, makes it hard to justify learning yet another solution.
Do I need to declare my new `mod` in `main.rs` or in `lib.rs`? What about tests? Why am I being warned about unused code here, when I use it? Why can I import this thing here but not elsewhere?

I think the way all the explicit declaration stuff works is really unnerving coming from Python’s “if there’s a file there you can import it” strategy. Though I’m more comfortable with it now, I still wouldn’t be confident about answering questions about its rules.
What benefit is there to releasing less often?
Another user on here (forgive me, I can’t remember who) said it well: if I cut my pizza into 12 slices or 36 slices, it’s the same amount of pizza but one takes more effort to eat.
Every six weeks I have to read release notes, decide if what’s changed matters to me, if what counts as “idiomatic” is different now, etc. 90% of the changes will be inconsequential, but I still gotta check.
Bigger, less frequent releases give me the changes in a more digestible form.
Note that this is purely a matter of opinion: obviously a lot of people like the more frequent releases, but the frequent release schedule is a common complaint from more than just me.
This would be purely aesthetic, but would bundling release notes together and publishing those every 2 or 3 releases help?
Rust tried to do this with the 2018 “Edition Guide”, which, confusingly, did not actually describe features exclusive to the new 2018 parsing mode, but was a summary of the previous couple of years of small Rust releases.
The big edition guide freaked some people out, because it gave the impression that Rust had suddenly changed a lot of things, and that there were two different Rusts now. I think Rust is damned here no matter what it does.
Not sure what issue you’ve hit with embedding something that implements the `Rng` trait in a struct. Here’s an example that does just that without issue.
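Something like this minimal sketch (assuming the `rand` crate’s 0.8 API; the names echo ones used later in the thread):

```rust
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

struct CharacterMaker {
    rng: StdRng, // a concrete, seedable type that implements Rng
}

impl CharacterMaker {
    fn new(seed: u64) -> Self {
        CharacterMaker { rng: StdRng::seed_from_u64(seed) }
    }

    // &mut self because generating numbers advances the RNG's state
    fn make(&mut self) -> u8 {
        self.rng.gen_range(0..=100)
    }
}

fn main() {
    let mut maker = CharacterMaker::new(42);
    println!("rolled: {}", maker.make());
}
```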
Replying again just for future reference. I don’t remember exactly what I was doing but I ended up running into this:
Point is, I got to that point trying to have an `Rng` in a struct and gave up. :)

My solution was to put it in a `Box`, but that didn’t work for one of the Rng traits (whichever one includes the seed functions), which is what I wanted. Either way, I obviously need to do more research. Thanks.
Thank you, I appreciate that. My problem boils down to not knowing when to use `Box` and when to use `Cell`, apparently.

`Box` is an owned pointer; despite being featured so prominently, it doesn’t have many uses. It’s basically good for a few things, like trait objects and recursive types.
`RefCell` is a single-threaded rw-lock, except it panics where a lock would block, because blocking on a single-threaded lock would always be a deadlock. Its purpose in life is to move the borrow checker’s uniqueness checks from compile time to runtime.
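A tiny sketch of that runtime checking:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    let shared = cell.borrow();  // shared borrow, counted at runtime
    // cell.borrow_mut();        // would panic here: already borrowed
    drop(shared);                // release the shared borrow

    *cell.borrow_mut() += 1;     // fine now: unique borrow
    assert_eq!(*cell.borrow(), 6);
}
```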
In this case, you don’t really need either. We can just modify the example so that `make` takes a mutable reference, and get rid of the `RefCell`. See here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=6f64a7192a1680181200bf577c285b9d

Yup, I used `RefCell` here because I don’t think the changing internal state of the random number generator is relevant to the users of the `CharacterMaker`, so I preferred `make` to be callable without a mutable reference, but that’s an API design choice.

[Comment removed by author]
Panicking should not unwind. Just abort. Always.
I know that there are use cases for unwind-on-panic, just like there are use cases for move constructors, green threads, and `fork()`. But all of these impose a peanut-butter cost that every generic algorithm, optimization pass, and formal proof has to put in work to support. It doesn’t carry its weight.

Hm, I think I disagree relatively strongly, with two points:
I don’t think panics impose unavoidable cost. There’s an option to require that panic aborts, so, e.g., formal proofs can just require that the code is compiled with panic=abort.
I also feel that supporting unwinding is important for a large class of important applications. In most long-running stateful programs you need catch_panic around internal boundaries for reliability. rust-analyzer relies heavily on this ability, as does any HTTP server.
I can see that, if we restrict Rust to pure systems software (and write our IDEs and Web stuff in Kotlin), then no unwinding has a big appeal, but Rust is much more than just systems programming.
In my experience writing a couple of long running stateful programs, catching panics never seems like a good idea.
Inevitably there is some shared state somewhere, and that state is negotiated via mutex locks (or similar).
Panics will just leave the entire thing in a big unknown with potentially poisoned locks.
I might be missing the point, but it seems to me Rust panics are quite similar to some Java practices where one just turns an irrecoverable problem into a RuntimeException and hopes the problem is solved somewhere else.
I don’t think it’s universally inevitable. Off the top of my head, here are some examples where panic recovery seems to work well in practice:
So it seems that panic recovery works for some things and doesn’t work for others? This seems like a reasonable hypothesis to me.
Just slapping `catch_panic` everywhere won’t magically make a program reliable, quite the contrary. First, you need to architect it in a way that it actually has reliability boundaries. And this is primarily about state management.

For example, if you make state updates transactional, then only the code that applies transaction writes can’t recover after a panic, and that’s a tiny fraction of the code. For example, rust-analyzer’s state is guarded by a single mutex. If a panic happens in a place where we `.write` it, directly on the main loop, then the analyzer just crashes. But literally any part of the compiler can panic, and this won’t corrupt state or crash the process, because the compiler only calculates derived data.

(*): there was a single case where our catch_panic bit us in the back rather painfully, but that was due to a compiler bug. Initially, we had our catch_panic and all was good. Then, at a Rust all-hands, incremental compilation started to non-deterministically fail. The failure was traced to the trait system’s co-inductive reasoning for auto-traits like unwind-safe. So, to work around this bug, we added AssertUnwindSafe as a way to short-circuit the compiler’s trait inference. After that, chalk started to cache state which actually wasn’t UnwindSafe. So we spent some time debugging mysterious runtime failures which could have been caught at compile time, but alas.
Isn’t panic-handling required to “gracefully” handle out-of-memory situations?
In one form it is, and falls under “internal boundaries for reliability”.
But in a strict sense (as in, “kernel drivers should never die on OOM”), you probably want to just use fallible (Result-returning) allocations everywhere. For that, you don’t need unwinding.
I was under the impression (potentially incorrectly) that much of the standard library doesn’t have the ability to use fallible allocations, but just panics on OOM.
Again, I don’t know that I’m correct.
You are mostly correct. It’s even worse than that: at the moment, std aborts (rather than unwinds) on OOM.
std is just written with the assumption of a global infallible allocator. That’s a reasonable assumption for the things std is intended for. The in-progress custom-allocator work will make this a bit more flexible, but won’t change the basic APIs.
If you need to handle alloc failure 100%, you need different APIs which expose allocations to the caller.
Intelligent drop placement:
The compiler always drops things at the end of their scope, and that can’t be changed, because it would break code relying on drop order. Allowing the compiler to choose to drop things any time after their last use (including the last use of things that are borrowed) would mean it could automatically solve more borrow-checking issues, as in the sketch below.
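For instance (a sketch of the current behavior; the types are made up):

```rust
struct Guard<'a>(&'a mut Vec<i32>);

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        self.0.clear(); // some cleanup that uses the borrow
    }
}

fn main() {
    let mut v = vec![1, 2, 3];
    let g = Guard(&mut v);
    // `g` is never used again, but since it has a Drop impl, the borrow
    // it holds lives until the end of scope, so this line would error:
    // v.push(4); // error[E0499]: cannot borrow `v` as mutable more than once
    drop(g); // the manual workaround today: drop it explicitly
    v.push(4);
}
```

If the compiler were free to place the drop right after `g`’s last use, the commented-out line would just work.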
Types as expressions:
Currently Rust has two different “expression” languages that evaluate syntax trees to values: the one for types, and the one for expressions. This feels like a mistake. For instance, trying to glue the value syntax into the type syntax makes const generics harder and more verbose (you need to wrap some things in `{expr}` to avoid ambiguity, as in the sketch below). It means you need to learn both syntaxes. Etc.

Moreover, if we make types more first-class, runtime reflection becomes natural at minimal complexity cost. Rust has very limited support for runtime reflection today, but I’d like to see it expand (if just for debugging purposes, like printing type names).
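For example, today’s const generics need braces around anything but the simplest arguments (a small sketch):

```rust
struct Buf<const N: usize>([u8; N]);

fn main() {
    let _plain: Buf<4> = Buf([0; 4]);          // literal argument: fine
    let _braced: Buf<{ 2 + 2 }> = Buf([0; 4]); // general expression: needs { }
}
```

With a single expression language for types and values, the braces (and the ambiguity they work around) would disappear.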
Mixed tuple/record syntax:
Currently Rust structs are either `struct TupleLike(Without, Named, Fields)` or `struct RecordLike { with: Only, named: Fields }`. I’d like to see these unified so we could have only one kind, of the form `struct Struct(UnnamedField, named: Field)`. Probably require unnamed fields to either all come first, or all come last.

Anonymous types:
Currently every type in Rust is named. I can’t do `fn foo() -> enum { Bar, Baz }`, only `enum FooReturn { Bar, Baz }; fn foo() -> FooReturn`. This makes error handling more painful.

We already have `type Foo = Bar;` for type aliases, and I think we could improve on this to make it the only form of naming types, for consistency, so `type FooReturn = enum { Bar, Baz };` if you really want to name the enum. Some thought would have to be put into visibility.

Unnamed types that are structurally identical should be treated as equivalent. Unnamed types should coerce to named types if the entirety of the named type is visible in the namespace… Named types shouldn’t coerce to each other (which is a break from the current behavior of `type Foo = Bar;` in Rust, but IMO a good one).

Integrate structs into unions better:
Currently a common pattern in Rust is `struct Foo { ... }; struct Bar { ... }; enum Baz { Foo(Foo), Bar(Bar) }`, where you have an enum that just contains one of a list of structs, but you end up repeating yourself a lot to access those structs. This should be improved to something like `enum Baz { *Foo, *Bar }` to minimize repetition. Likewise, on the pattern-matching side, things like `if let Foo { ... } = baz_value {}` should work, instead of needing to do `if let Baz::Foo(Foo { ... }) = baz_value {}` (see the sketch of today’s repetition below).
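A sketch of today’s repetition (made-up types):

```rust
struct Circle { r: f64 }
struct Rect { w: f64, h: f64 }

enum Shape {
    Circle(Circle),
    Rect(Rect),
}

fn area(s: &Shape) -> f64 {
    match s {
        // the struct name gets repeated inside the enum name at every use:
        Shape::Circle(Circle { r }) => std::f64::consts::PI * r * r,
        Shape::Rect(Rect { w, h }) => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect(Rect { w: 2.0, h: 3.0 })));
}
```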
Named arguments:

When I have some function, I don’t know, `draw_rectangle(ctx, position, border_size, brush, target)`, remembering what argument goes where is unnecessarily difficult; named arguments make it a non-problem.

Int32 instead of i32.
Weird, I definitely disagree on expanding this. It just wastes horizontal space and keystrokes for no benefit.
Maybe I32 and Str instead of i32 and str, but abbreviations for commonly used things are good. You’re not even getting rid of the abbreviation; Int is, after all, short for Integer.
I agree with this (I think lowercase would be fine, too, though).
I think that Rust overdoes it a little bit on the terseness.
I understand that Rust is a systems language and that Unix greybeards love only typing two or three characters per thing, but there’s something to be said for being descriptive.
Examples of very terse things that might be confusing to a non-expert programmer:
None of the above bothered me when I learned Rust, but I already had lots of experience with C++ and other languages, so I knew that Vec was short for “vector” immediately. But what if I had come from a language with “lists” rather than “vectors”? It might be a bit confusing.
And I’m not saying I would change all/most of the above, either. But maybe we could tolerate a few of them being a little more descriptive. I’d say i32 -> int32, Vec -> Vector, len() -> count() or length() or size(), and mut -> uniq or something.
Definitely this!
For context, for those who aren’t familiar: `&mut` pointers are really more about guaranteeing uniqueness than mutability. The property `&mut` pointers guarantee is that there is only one pointing at a given object at a time, and that nothing accesses that object except through them while they exist.

“Mut” isn’t really correct, because you can have mutability through a `&` pointer using `Cell` types. And you can have nearly no mutability through a `&mut` pointer by just not implementing any mutating methods on the type (though you can’t stop people from doing `*mut_ptr = new_value()`).

The decision to call this `mut` was to be similar to `let mut x = 3`… I’m still unconvinced by that argument.

Not to mention the holy war over whether `let mut x = 3` should even exist, or whether every binding is inherently a mutable binding, since you aren’t actually prevented from turning a non-mutable binding into a mutable one.

My favorite is `{x}` allowing mutability, because now you’re not accessing `x`, but a temporary value returned by `{}`.
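Presumably something like this (a sketch; the vec is just for illustration):

```rust
fn main() {
    let v = vec![1, 2, 3]; // not declared `mut`
    // v.push(4);          // error: cannot borrow `v` as mutable

    let mut v = v;         // ...but nothing stops rebinding it mutably
    v.push(4);

    // and the `{x}` trick: `{v}` moves the value into a temporary,
    // and temporaries can be mutated (parens so this parses as an
    // expression rather than a block statement):
    ({v}).truncate(1);
}
```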
I never knew about that one! Cute.
For an example, check out some Swift code. Swift more or less took Rust’s syntax and made it a little more verbose.
`fn` became `func`, the main integer type is `Int`, sequence length is `.count`, function arguments idiomatically have labels most of the time, and so on. The emphasis is on clarity, particularly clarity at the point of use of a symbol: a function should make sense where you find a call to it, not just at its own declaration. Conciseness is desirable, but after clarity.

Yep. I also work with Swift and I do like some of those choices. I still think the function param labels are weird, though. But that’s another topic. :)
I think this mostly doesn’t matter - I doubt anyone would first-try Rust, given its complexity, so it’s not really that much of an issue. Keywords are all sort of arbitrary anyway, and you’re just gonna have to learn them. Who’d think `go` would spawn a thread?

I, for one, think these are pretty nice - many people will learn Python, so they expect `len` and `str`, and `fn` and `mod` are OK abbreviations. I think the terseness makes Rust code look nice (I sorta like looking at Rust code).

Though I’d agree on `mut` (quite misleading) and `impl` (implement what?).

Oh, true.
I don’t care about the exact naming conventions, as long as it is consistent. (This is in fact exactly how I named types in my project though, what a coincidence. :-D)
In general the random abbreviations of everything, everywhere are pretty annoying.
It’s the consistency, yes. Why should some types be written with minuscules?
Lowercase types are primitive types, while CamelCase ones are library types. One has special support from the compiler and usually maps to the machine instruction set, while the other could be implemented as a third-party library.
[Comment removed by moderator pushcx: Seriously, stop posting in this thread. Take a break and lose the tedious scorn.]
Because they are stack-allocated primitive types that implement `Copy`, unlike the other types, which are not guaranteed to be stack-allocated and are definitely not primitive types.

And how does the lower-case letter convey this fact?

How does anything convey anything? It’s a visual signal that the type is a primitive, stack-allocated type with copy semantics. As much as I hate to defend Java, it’s similar to the `int`/`Integer` dichotomy. If they were `Int` and `Integer`, it wouldn’t be quite so clear that one is a primitive type and the other is a class.

I just searched for “Rust stack allocation lower-case” but couldn’t find anything. Do you have a link that explains the connection?
Because they are used more than anything else :)
(and really it should be s32 to match u32)
Not a good reason. Code is read way more often than it is written.
I don’t see how i32 is less readable. It makes the code overall more readable by making lines shorter and looks better.
Total outsider here, but my understanding is that Rust newcomers struggle with satisfying the compiler. That seems necessary because of the safety you get, so OK, and the error messages have a great reputation. I would want to design in possible fixes for each error, each of which would compile, and a way to apply your chosen fix back to the source code. If that’s a tractable problem, I think it could help cut trial and error down to one step and give you meaningful examples to learn from.
Maybe add a rusty paperclip mascot…
Actually, a lot of the error messages do offer suggestions for fixes and they often (not always) do “just work”. It’s really about as pleasant as I ever would’ve hoped for from a low-level systems language.
That’s great! Is it exposed well enough to, say, click a button to apply the suggestion in an editor?
In some cases, yes. See https://rust-analyzer.github.io/
In a lot of cases, actually. It becomes too easy sometimes, because I don’t bother trying to figure out why it works.
Yeah, it seems to be. I often use Emacs with lsp-mode and “rust-analyzer” as the LSP server and IIRC, I can hit the “fix it” key combo on at least some errors and warnings. I’m sure that’s less true the more egregious/ambiguous the compile error is.
There is this but it doesn’t seem to have a logo, someone should make one!
My imaginary version that keeps most of the existing language would:
(I’m not really sure if this comment is serious; I read it as sarcastic. Just in case you are, though…)
I don’t really think it’s a good idea to allow the standard library of a language to do more than the language’s users can do. I mean, just look at Elm and the drama that has happened there.
I also hate it, but it seems to be very common:
Rust will always take advantage of a large number of features that it has been judged that normal users will not be able to effectively use, for one reason or another. In my judgement, after seeing how the community tends to use various features for the 7 years that I’ve been participating, I would prefer to go without several of them. Stating these perspectives is the point of this thread.
Async in Rust is wonderful. It has solved real performance and reliability issues in my formerly thread-channel-spaghetti programs.
`Result` objects instead of exceptions enabled separation of error handling from the function calls themselves (result-as-object allows building abstractions naturally using regular language features). Similarly, `Future` separates function execution from function calls, and adds a higher-level way of capturing and controlling the program’s flow as an object. It’s a very powerful feature, and very well designed given how low-level and low-overhead it is in Rust.

You mean wonderful compared to how it was in Rust before async/await, or wonderful compared to other existing designs as well?
It’s definitely an improvement over non-async Rust. Compared to other languages it’s pretty good too:
- Rust’s model naturally supports cancellation of async operations. If you drop a `Future`, it stops executing. This is amazingly easy compared to JS’s `Promise` model, which has no room for cancellation and needs manually managed `AbortController`s.
- Rust’s `Future` encapsulates the entire call tree, not just a single operation. This allows async calls to be inlined and optimized to almost nothing. In many cases this is much more efficient than the heap-object-per-call model used in JS and C#.
- `.await` as a suffix turns out to be quite convenient in practice: `foo().await.bar().await` rather than `await (await foo()).bar()`.
- It’s so brilliantly simple compared to C++ coroutines. At the lowest level, Rust’s model boils down to just calling `Future::poll()` to completion. The C++ coroutine spec is more stateful, and has many, many more sharp edges and details to handle.
- Async calls are separate from async execution (`Future` objects are passive), so you can execute them however you want. For example, Dropbox runs their async code through a custom test harness that fuzzes it for race conditions, because it can control externally what runs in what order.

There’s some Rust-specific awkwardness:
- Async functions can’t be recursive without some explicit syntax gymnastics (that’s because an async function’s flow is expressed as a `struct`, and a struct can’t be infinitely recursive without an indirection).
- Rust doesn’t bless any particular async executor (there’s no built-in implicit event loop like in Node or golang), which means there are multiple third-party options to choose from, which splits the ecosystem.
- Rust demands being precise about memory management and thread safety, and async is no exception. People coming from GC languages with async can’t just switch to Rust’s async without learning the hard parts of Rust first.
Rewriting any codebase makes it significantly better. Async is strictly worse in every measurable way. I recommend measuring your workloads and avoiding the severe ergonomic, reliability, throughput, and compile time hits of async.
I shall add that in my case the “rewrite” was just a strict replacement of `rayon` and various `thread::spawn`s with `tokio`, not a ground-up rewrite.

It has been a big ergonomic improvement for timeouts and cancellation. I used to have `if should_stop_now() { return; }` all over the code, and no easy way to abort sync network requests in progress. Now wrapping things in `timeout(async { … })` is trivial, as in the sketch below.
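A sketch of the kind of change meant here (assuming tokio; the function is made up):

```rust
use std::time::Duration;
use tokio::time::timeout;

async fn slow_request() -> &'static str {
    tokio::time::sleep(Duration::from_secs(10)).await;
    "response"
}

#[tokio::main]
async fn main() {
    // If the inner future doesn't finish in one second it is dropped,
    // which cancels it; no hand-rolled should_stop_now() checks needed.
    match timeout(Duration::from_secs(1), slow_request()).await {
        Ok(body) => println!("got: {}", body),
        Err(_) => println!("timed out; request cancelled"),
    }
}
```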
It has been a huge improvement in throughput, because previously network-bound calls were clogging my rayon threadpool. With async it’s a non-issue automatically, and I can use semaphores to control the concurrency of any section of the code, while having an appropriate number of threads for the CPU.
It has been a huge improvement for reliability. Async has async-compatible mutexes. Previously I had trouble with the inability to use `rayon` while holding a mutex (it’s a recipe for a deadlock), so I couldn’t easily parallelize some important init-once computations.

Async may not work for your use cases, but “strictly worse in every measurable way” is demonstrably false, a hyperbole, and implies that everyone who designed it and uses it is clueless.
While you might have had completely broken code before, your code is still worse than it could be without async. Throughput is strictly worse due to the scheduling decisions made by having a user-space scheduler at all. You will always have more bugs due to accidental blocking, which is exactly the same issue you had with your previous broken rayon deadlock, except it happens only in production instead of immediately on your laptop. And your compile times exploded. Come to the other side, and your life and your programs will be clearly better.
This was not my experience. I think it’s more nuanced than this.
Measure and see.
Not all performance problems are in the raw throughput, and winning microbenchmarks doesn’t always make better programs.
I’ve just explained to you how my perf and reliability problems were higher-level (deadlocks, stuck tasks, difficulty of separating I/O-bound and CPU-bound work to control their concurrency separately).
That could have been totally my fault, a self-inflicted failure of the “completely broken code” I wrote, but I haven’t gotten magically smarter by switching to `async`, and yet with the `async` abstraction I was able to fix the issues plaguing my codebase and improve actual real-world performance I could measure.

I’ve had similar rewrites of projects, from sync before to async after, which made it easier to understand what was going on. I do not agree with the above comment that all async in Rust is strictly worse.
How should one write async code in Rust without Async?
I’m pretty sure spacejam’s answer is “don’t”, but…
Technically Async isn’t doing anything but automatically creating enums and structures for you, and handling lifetimes.
An async function is basically just the following (note that I’m formulating the signature in a different but approximately equivalent way to actual Rust, for simplicity):
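Roughly like this (a hand-written equivalent of a trivial `async fn`; names made up, and the real compiler output differs in detail):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// This:
async fn add_one(x: u32) -> u32 {
    x + 1
}

// is approximately this: a plain function returning a state machine
// that implements Future.
fn add_one_by_hand(x: u32) -> impl Future<Output = u32> {
    struct AddOne {
        x: u32,
    }

    impl Future for AddOne {
        type Output = u32;

        fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
            // no `.await` in the body, so we finish immediately; each
            // `.await` in a real body becomes another state of the machine
            Poll::Ready(self.x + 1)
        }
    }

    AddOne { x }
}

fn main() {
    // both are inert values that any executor could drive to completion
    let _fut_a = add_one(1);
    let _fut_b = add_one_by_hand(1);
}
```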
You can write this all out by hand if you really want to.
Long ago, before Rust had async, I also had some fun implementing stackful coroutines with a bit of asm, which allows writing more Go-like async code; Boost does the same in C++…
I think I am missing the point here. My current approach to a highly distributed service that runs on a cluster of nodes would be Erlang/Elixir + Rust (aka the Discord architecture). I have tried to write async code in Rust, but half of the time I have no idea what a typical type signature means just by looking at it, and another chunk of time goes into figuring out why the borrow checker is unhappy.
With Erlang I never had this problem with async code. OTP and even raw Erlang makes it super easy for the reader to understand what is happening. Combine that with binary pattern matching of raw socket data and you have an easy time.
I understand the downsides. Slow numerical computation, GC and so on. You could not write a game engine that runs on a single computer in Erlang. This is where Rust comes very handy. Have a super safe and fast data structure in Rust and let Erlang do the async part.
Some additional things. I was surprised how well Erlang worked on a Raspberry Pi running a network service (DNS). It was sufficiently fast to be the replacement of a service written in C performance wise. Anyways, I need to look into that Async alternative that was mentioned in this thread.
I’m not sure whose point you don’t understand here. As I understand spacejam’s point (one I don’t fully agree with), it’s “just use OS threads and blocking IO”, the claim being that the performance difference between a lot of threads, and a lot of async tasks running simultaneously, is low (a claim I don’t think is always true).
My point on async was a bit more nebulous really, really just about the details of how async rust works. This blog post is probably still the best resource I know of for explaining that in more detail (the gist of the blog is still correct, though various minor things have changed since then). I think the only way to really effectively use async today is to understand it at a deep level too, so hopefully that helps out with that portion of your post too.
I can’t say I’ve used Erlang at all. My opinion on async rust these days is it’s really good if you know what it’s doing, really foot-gunny if you don’t, and the ecosystem around it is really immature and also really foot-gunny. Using erlang as a fancy async executor… doesn’t sound sensible, but I don’t pretend to understand erlang enough to say that with confidence.
Pretty hardcore!
I like that it’s a better C++, so I disagree with your vision, but I get what you’re going for.
Would you nuke all macros or just proc macros? Would you leave the built-in ones (#[derive(Debug)]) like you’d leave the standard library traits?
I would leave the built-in derives but I would remove user-definable proc macros entirely. They destroy compile times and reduce the composability of the language overall as I’ve experienced them.
How do you do things like json, protobufs, etc in a reasonable manner then?
serde should be built-in and first-class, like a more useful Rustc{En,De}codable. It’s not like this is some cutting-edge problem that languages (or Rust itself) have never handled before.
This is what `rustc_serialize` is. This is what you’d get as the exclusive built-in option if Rust wasn’t extensible.

The same goes for `std::mpsc` vs `crossbeam-channel`. The third-party ecosystem is free to experiment, try (and fail at) many things, until something really good emerges. `std` has only one shot, due to backwards-compat constraints. The Rust libs team is great, but you can’t expect them to write the best-in-the-world implementation of everything, especially on the first try.

Stating things with hindsight is the point of this thread.
Yeah. If the language had varargs, 80% of macro uses would lose their raison d’être.
Rust is like a gifted child, idiosyncrasies and all. I don’t feel qualified enough to judge what needs changing because it might ruin the recipe.
Not much, to be honest. Although I’m finding that Rust and my own values are slowly diverging (as I get older I want simpler tools, and I’m looking longingly at Zig), I think that Rust accomplishes what it set out to do really well. There are some weird warts here and there, e.g., the rules about what can be a trait object, or two closures with the same type signature having different types, that sometimes bite in practice; when you read the explanation they make some sense, but they can still be a bit annoying. But overall, I think that Rust really nailed most things.
One thing I would probably like is if indexing into vectors/arrays were allowed with more types (e.g., u8, u16, u32), which could let the compiler know when it can omit bounds checks. (If an array has a length of 256 and you index with a u8, you cannot go out of bounds; see the sketch below.)
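You can get a taste of this today by widening a `u8` index into a 256-entry table; the bounds check is provably satisfied, and the optimizer can drop it (a sketch):

```rust
fn class_of(table: &[u8; 256], byte: u8) -> u8 {
    // usize::from(byte) is always < 256, so the check can be elided
    table[usize::from(byte)]
}

fn main() {
    let table = [0u8; 256];
    println!("{}", class_of(&table, b'a'));
}
```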
Reminder that I need a proper motivation document for Garnet, my “what if Rust but simpler” language. Though I confess I don’t have solutions right now for some of the trickier things, such as being generic over mut/non-mut and owned/borrowed versions of functions.
This looks very cool.
Is this for, like… bit packing? Or what’s the temptation that draws you to arbitrary-precision integers?
I find it quite interesting most people are suggesting surface-level changes to syntax. I wonder why that is?
Note: this observation is called Wadler’s law: https://wiki.haskell.org/Wadler's_Law
It’s called bikeshedding, and it always happens with complicated problems/questions. It’s easiest to argue about surface-level stuff rather than the really difficult stuff.
I think it’s an utterly fascinating aspect of human nature!
A few reasons I think
Rust is a great language, but there’s a lot I plan to do differently in Dawn. The next post at https://www.dawn-lang.org will talk about how fine-grained capabilities / effects will work. And I’m working my way towards a post about an alternative to the borrow checker that I believe will be easier for users to understand and use.
Reading the comments here, I’m so happy you can’t re-design Rust from scratch today. :) Regarding the opinions on I32 and Vec: shouldn’t you first learn the language you’re going to use?
I like most things about Rust. Maybe I’m biased, as the only languages I’ve learned and written in are Python, C, Go, JavaScript, Lua, and some Haskell/OCaml.
Still, I’m wondering how Rust would look if it were whitespace-sensitive and without curly brackets. And how we could improve the async world.
BTW, if anyone is starting or thinking of working on a source-to-source translator, an interpreter, or a compiler, count me in; I’m hungry to learn.
Probably my five largest pain points:
- Useless semicolons.
- `<>` is wrong and broken (use `[]`).
- The `Eq`/`PartialEq`, `Ord`/`PartialOrd` design.
- `->` in function declarations (replace it with `:`).
- `!` syntax for macros (creates the wrong incentives).

Runner-ups:

- `Path::join`. (Or rather, replace `Path`/`PathBuf` with an abstraction where you don’t have to decide how to implement the method, because it simply won’t compile.)
- `env::home_dir`.

I seem to remember you and I have debated this before, but I don’t think it makes any sense to call the semicolon rules useless. Semicolons convert expressions into statements, and quite apart from the fact that there needs to be some syntax to do that, because Rust is an expression-based language, the visual distinction that you get from a line ending in a semicolon or not allows you to tell at a glance whether something is an expression or a statement.
The problem with this kind of argument is that languages exist that work without mandatory `;` and have none of those made-up issues.

With semicolons this works fine:
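(Illustrative; `compute` stands in for any function.)

```rust
fn compute() -> i32 { 10 }

fn main() {
    // fine: the semicolon marks where the statement ends,
    // so the expression can span lines freely
    let x = compute()
        - 3;
    println!("{}", x);
}
```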
But this wouldn’t:
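(Sketching a hypothetical semicolon-free Rust.)

```rust
// is this one expression `compute() - 3`?
// or `let x = compute()` followed by the statement `- 3`?
let x = compute()
- 3
```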
Because `- 3` is a perfectly valid expression on its own. This is just one of the simplest examples I can imagine where “no semicolons” falls apart.

JavaScript has similar issues when semicolons aren’t used, which can lead to unexpected behaviour, so the problem isn’t new. Python side-steps this issue by requiring you either to be within
`(...)` or to add a `\` at EOL.

> The problem with this kind of argument is that languages exist that work without mandatory `;` and have none of those made-up issues.
You’re an asshole. I’ve never seen you argue languages with people without acting like an asshole.
What do you think you’re accomplishing here? Toddlerized argument via repetition pollutes conversations, demonstrates bad faith, and literally makes this site less valuable to others. I’d suggest that you follow the moderators’ advice: step away from the keyboard for a while until you can engage in mutually respectful discussions.
The issues are not made-up. Please accept that people may disagree with your opinions for other reasons than losing grip on reality.
Other languages have just chosen different trade offs in complexity of parsing, flexibility of the syntax, and risk of semantic changes caused by line wrapping.
If 4 out of 5 of your top 5 complaints are just syntax, that’s pretty damn good IMHO.
That’s what they say about BF too
I don’t have 5 complaints about Brainfuck. I have one: Lack of semantic richness.
Its syntax is actually pretty good, considering what its semantics are. Other languages with the same semantics and different syntax exist, and they also suck.
[Comment removed by moderator pushcx: Same as previous.]
I’ve heard this said generally, but I never found out what the issue is. Isn’t this overloading array/map indexing?
In Verona, we’re doing the same thing as Pony: `[]` is for generics, `()` is call syntax, and array access uses call syntax on array objects. It’s a slightly odd decision in C-family languages to have separate syntax for them, because C doesn’t have anything that is both indexable and callable (you can’t use array indexing on function pointers), but it’s a necessary hack if you want to write a single-pass compiler that runs on a system with 128 KiB of RAM. A lot of earlier languages used the same syntax because an array is, at the abstract level, just a map from integers to some other value. C doesn’t think of the abstraction like this and treats an array as syntactic sugar for pointer arithmetic. C++ keeps this distinction for standard library collection types, but I don’t think I’ve ever seen a C++ class that overloaded both `operator()` and `operator[]`.

The big issue with `<>` for generics (or anything else) is that `<` and `>` are both allowed in your program as stand-alone tokens (the comparison operators). This means that you can’t spot a missing angle bracket until you’ve done type checking, whereas you can spot a missing bracket or brace of any other kind after tokenisation, before you even parse.
are both allowed in your program as stand-alone identifiers. This means that you can’t spot a missing angle bracket until you’ve done type checking, whereas you can spot a missing other kind of bracket / brace after tokenisation, before you even parse.it’s not overloading array indexing, because
[]
for generics is in types, and[]
for indexing is in expressions. As far as I can tell, rust grammar is sufficiently well designed that these are never ambiguous.Counterexample:
foo[X]()
. Is this calling a generic function foo with X as a type argument, or is this calling the Xth element of the foo array?surely you mean
foo::[X]()
? :-)edit: that was snarky. It’s not worse than
<>
, we’d just have the turbo hammershark or something like that instead of turbofish, but you do have a point. D usesfoo!(bar)(baaz)
to separate template and regular arguments, for example.I might be wrong, but I think the main motivation behind using [ rather than < is exactly to get rid of the need to disambiguate expressions vs types via ::
If we are fine with replacing the turbo-fish with a turbo-snail, then yes, just switching the sigils works.
Both!
Ok, foo isn’t an array, but indexing works for all sorts of types in Rust, not just arrays. It’s calling the Xth element of the collection foo, where foo happens to be a generic-function collection type.
Yeah, using the same grammar for type references and expressions and disambiguating on semantic level might work. But that’s a way bigger change than just using [].
I wonder if optionality of type arguments would prevent this? Like, `foo[()][()]`, where the first indexing with type `()` seems like it can be ambiguous. As in, is `foo[()]` a type argument, or a value argument with an inferred type argument?

My gut feeling is that this might not be a huge problem. It seems like the “syntax unambiguously classifies the role of an expression” property is already lost in Rust in a minor way (the prime example would be constant ambiguity in patterns). So perhaps treating the language as a homogeneous tree of things, and classifying them according to semantics later, would actually be a clearer model?
The issue is that `<` and `>` are already used as the less-than and greater-than signs, so `Foo<bar>(baz)` can either be `(Foo < bar) > (baz)` or `Foo` instantiated with `bar`, applied to `baz`.

Overloading indexing is fine, because it really is a kind of indexing. Indexing is a function (Container, Index) -> Value, and this is a function (GenericType, ConcreteType) -> ConcreteType. I.e., Container = Type, and Index and Value = ConcreteType. As a result it doesn’t lead to the same kind of ambiguities.
Wouldn’t that make AST construction dependent on types, and thus a chicken-egg problem similar to what C has with typedefs?
Forbidding uses of types in expression contexts would be limiting; e.g., you couldn’t “simply” use static methods or associated constants, because they’re expressions but start with a type name.
It would [1] if you needed to represent `Type[Type]` and `Expr[Expr]` as different kinds of AST nodes. I don’t believe that you actually need to, though.

[1] You could probably hack around this by checking what symbols are in scope too, but that’s ugly and would probably lead to poor handling of special cases and poor error messages.
The argument I’ve heard is that it makes the grammar more complicated and leads to some ambiguity issues, which results in things like Rust’s turbofish syntax (`::<>`, IIRC) or C++’s `.template` syntax.

A different approach: Ada uses neither `<>` nor `[]`, and instead requires associating a new name with a function-call-like explicit instantiation of a type or generic function. It seems like it leads to a proliferation of types, but it’s an approach I haven’t heard much about.
No. A side benefit of using `[]` for types is that it makes it unlikely that `[]` will be abused for array/map syntax.

Why do you frame everything in such antagonistic hyperbole?
You may not like `[]` for array indexing, but calling it “abuse” just brings the discussion down to shit-slinging level.

Do you have an actual argument? (Because tone policing isn’t one.)
No, that’s my job. Take a break from this thread and come back way kinder in the future. Feeling like you’re right doesn’t make it OK to heap scorn on people.
What makes using `[]` for array/map syntax an “abuse”? What distinguishes it from literally any other syntax choice? I could claim that using `()` to surround function arguments is an abuse of those symbols, but I have absolutely nothing to back that up.

[Comment removed by moderator pushcx: Stop posting in this thread.]
What are the right incentives?

`!` gives macro authors carte blanche to do arbitrary stuff, because “the user sees it’s a macro, right?”.

Instead, macros should be so well-behaved that users shouldn’t even need to know whether something is a macro or not.
On the flip side, though, if macros aren’t distinguished with a `!`, then if the author of a macro does do something that isn’t “well-behaved”, the user has even less warning that it’s going to do that.

See comment above.
Which comment? I’ve read all of your replies in this thread and I haven’t seen anything that refutes the assertion that removing `!` would just make the developer experience worse if a macro does choose to do crazy things.

Imagine that Rust users would keep filing bugs about stupid macros until the issue is resolved, just like they successfully did with unnecessary `unsafe` blocks.

So your solution to this problem is to make Rust developers waste a bunch of extra time filing issues? Instead of the current state of play, which is that every user of these hypothetical macros has at least a measure of pre-warning that the thing they’re calling is a macro and therefore might perform some crazy AST transformation on its input? I haven’t run a survey, but I would hazard a guess that this “macros do arbitrary stuff” problem is not actually a real problem that Rust developers have, partly because the intention of macros is to allow you to do arbitrary stuff.
Rust macro syntax is informed by experience from C where macros have a function-like syntax, and still do surprising arbitrary stuff.
In C this syntax hasn’t stopped people from doing weird stuff with macros, but it stresses users, who can’t be sure that everything that looks like a function call is just a function call.
It’s also worth noting that in C it’s considered a good practice to use all-caps for macro names, because even though macros could look just like functions, users don’t want that.
Yes, except macros do things that functions literally cannot, such as ye olde `try!()`.

Worked out great, didn’t it?
You’re advocating for `try()` syntax, not removal of macros.

I would focus on “inline” metaprogramming (D style: pervasive CTFE + compile-time reflection) early on.
Having to write a proc macro feels awful compared to what you can do in D (example). Rust now has `const fn`, but not even anything like C++’s `if constexpr` to use them.

If I needed a modern C++, Rust would be pretty much perfect. But I don’t think I need that.