Nice project Av! On the same vibe, I’m using the s3db extension in a couple of hobby projects; very useful, slower but cheaper and simpler than a rqlite cluster or even libsql/turso. Btw, I’m aware that you work there; I like the project and the idea of local replicas, but unfortunately I was affected by the free-tier incident while trying to use the platform in a PoC. I’ll eventually try it again…
As someone who never again wants to write a k8s yaml file, nor a yaml file to give to the cloud provider to stand up a k8s cluster to consume additional yaml files, is learning a BEAM-based language for me? Or do these BEAM VM apps just get deployed to k8s pods defined with yaml files?
I feel since I’m a distributed systems guy I should probably just learn Erlang or Gleam for the vibes alone.
Learning Erlang* / OTP is definitely worth it if you do distributed systems things. It has a lot of good ideas. Monitors, live code updates, and so on are all interesting. Even if you never use them, some of the ideas will apply in other contexts.
If someone told me I’d built an Erlang, I’d consider it praise, so the article doesn’t quite work for me.
*Or probably Elixir now. It didn’t exist when I learned Erlang. Also, I am the person who looks at Prolog-like syntax and is made happy.
Another huge benefit is that it gives you a new way to think about and solve problems. The main application I maintain at work is a high-throughput, soft-real-time imaging/ML pipeline. In a previous life I worked on a pretty big Elixir-based distributed system, and this imaging system is definitely highly Elixir/OTP-inspired despite being written in C++. All message passing, shared nothing except for a few things for performance (we have shared_ptrs that point to immutable image buffers, as an example). Let It Die for all of the processing threads, with supervisors that restart pipeline stages if they crash. It’s been running in production for close to 6 years now and it continually amazes me how absolutely robust it has been.
All message passing, shared nothing except for a few things for performance (we have shared_ptrs that point to immutable image buffers, as an example).
Funny thing, Erlang does exactly the same thing with large binaries.
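To make the pattern concrete, here’s a minimal Rust sketch of the same idea (the `Frame` type is hypothetical): stages communicate purely by message passing, and the only “shared” state is a refcounted pointer to an immutable buffer, much like Erlang’s refcounted large binaries.

```rust
use std::sync::{mpsc, Arc};
use std::thread;

// Hypothetical image buffer: immutable once built, shared only by refcount.
struct Frame {
    pixels: Vec<u8>,
}

fn main() {
    // Each pipeline stage owns nothing but its channel endpoints.
    let (tx, rx) = mpsc::channel::<Arc<Frame>>();

    let stage = thread::spawn(move || {
        // Only the refcounted pointer crosses the channel; the pixel data
        // itself is never copied, exactly like Erlang's large binaries.
        let frame = rx.recv().unwrap();
        frame.pixels.len()
    });

    let frame = Arc::new(Frame { pixels: vec![0u8; 1024] });
    tx.send(Arc::clone(&frame)).unwrap();
    assert_eq!(stage.join().unwrap(), 1024);
}
```

The buffer stays immutable by convention here; in the C++ system described above, `shared_ptr<const T>` plays the same role.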
Given your interest in formal methods, I can recommend having a look at Joe Armstrong’s thesis. I think he makes a very good case for why Erlang behaviours make sense from a testing and formal verification point of view (even though the thesis itself doesn’t contain any verification).
Erlang’s behaviours are interfaces that give you nice building blocks once you implement them, e.g. there’s a state machine behaviour, once you implement it using your sequential business logic, you get a concurrent server (all the concurrency is hidden away in the OTP implementation which is programmed against the interface).
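A rough sketch of that split in Rust (all names are made up, this is not OTP’s actual API): the user implements a purely sequential `Behaviour` trait, and a generic server loop supplies all the concurrency.

```rust
use std::sync::mpsc;
use std::thread;

// Rough analogue of an OTP behaviour: the user supplies only sequential
// logic; all concurrency lives in the generic server loop below.
trait Behaviour: Send + 'static {
    type Request: Send + 'static;
    type Reply: Send + 'static;
    fn handle_call(&mut self, req: Self::Request) -> Self::Reply;
}

// Generic "gen_server": spawns a thread, owns the state, serializes requests.
fn start<B: Behaviour>(mut b: B) -> mpsc::Sender<(B::Request, mpsc::Sender<B::Reply>)> {
    let (tx, rx) = mpsc::channel::<(B::Request, mpsc::Sender<B::Reply>)>();
    thread::spawn(move || {
        for (req, reply_to) in rx {
            let _ = reply_to.send(b.handle_call(req));
        }
    });
    tx
}

// Purely sequential business logic: a counter.
struct Counter(u64);
impl Behaviour for Counter {
    type Request = u64;
    type Reply = u64;
    fn handle_call(&mut self, n: u64) -> u64 {
        self.0 += n;
        self.0
    }
}

fn main() {
    let server = start(Counter(0));
    let (reply_tx, reply_rx) = mpsc::channel();
    server.send((5, reply_tx.clone())).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 5);
    server.send((2, reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 7);
}
```

The `Counter` code never mentions threads or channels, which is the point: like a gen_server callback module, it can be tested (or verified) as plain sequential code.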
The thesis is quite special in that he was 50+ years old when he wrote it, with something like 20 years of distributed systems programming experience already, compared to most theses, which are written by 20-somethings.
This reminds me of the efforts of a couple of projects years ago to run BEAM on the cloud, like Erlang on Xen (ling) or the rumprun unikernel; there are also other things like ergo (“golang OTP”).
Yes! I was in the exact same situation as you and learned Elixir. The processes API is straightforward and relatively simple, and the Elixir documentation on the topic is superb.
On top of that, Elixir is a great language: functional and very ergonomic. Erlang is fine too, but Elixir has that quick-to-write syntactic sugar comparable to Python or Ruby.
I have seen at least 4 different K8s-based teams.
Not only do they spend more on infrastructure in terms of bills, but the human cost of running a K8s setup is at least 4-8 hours a week of a mid-level engineer’s time.
What’s the alternative?
Google cloud run, AWS fargate, render.com, Railway app and several others.
Unless you are an infrastructure company or have 50+ engineers, K8s is a distraction.
Exactly, couldn’t agree more… This post reminds me of the pre-k8s era: AWS struggling to deliver ECS, everyone creating tooling to handle containers, every mid-size company creating their own PaaS… “Let’s use CoreOS… try Deis… damn, wiped the entire environment, Rancher will solve everything now..!” etc. My team at the time was happy using the old drone.io, deploying on hardened EC2 instances, relying on a couple of scripts handling the daemons and health checking: faster deployments, high availability… Most of the other apps ran on Heroku.
Now containers are everywhere, we have infinite tools to manage them, and k8s became the cloud de facto standard; consulting companies are happy and devs are sad… There is a huge space between a couple of scripts and k8s, where we need to analyze the entire context. IMHO, if you don’t have the budget for an entire SRE/DevOps team, weeks or months for planning/provisioning, and you just run stateless apps, even managed k8s (EKS, GKE, AKS) does not make any sense. As you said, you can achieve high availability using ultra-managed solutions that run over k8s (also App Engine flex, fly.io) or a combination of other things; we also have nice tools for self-hosting like Kamal and Coolify, and other options like NanoVMs or Unikraft.
btw, I’m a former SRE from a payment gateway/acquirer in Brazil (valued at $2.15bi), responsible for hundreds of thousands of users and PoS terminals connected to App Engine clusters compliant with PCI-DSS, managed by a small team. So, yes, I have some idea of what I’m talking about…
I looked at App Runner, but it would be hell to have 20-30 ‘services’ talk to each other over that, and then we’d be stuck in a dead end. We are moving everything to Kubernetes because with a vanilla EKS setup it just works (a surprisingly low amount of headache), and it offers customisability into the far heavens for when we need it.
It’s hard to find a good reason to use C in the last 10 years or so for mainstream development, but so many alternatives just seem like “one opinionated guy’s take for people that don’t like Rust”. Rust has its problems but it’s a genuine leap forward with outstanding package management, modern type system, extreme performance.
Odin and others feel like C with some Go syntax and the GC dial in a different position. They’re almost all anti package management due to the authors being disappointed with Go modules.
This project originally started with C because I wanted to learn C. But I didn’t like it in the long run.
I chose Odin because, yeah, I don’t really like Rust for gamedev. It makes prototyping difficult due to expecting absolute correctness always. That’s good for a final product, but in gamedev, where the project is being planned out as it’s still being made and is in a constant state of flux, Rust just didn’t work for me.
Odin is used in production gamedev anyway; the language is used in a lot of visual effects work. You can see it on their site: https://odin-lang.org/
edit: I admit I find it amusing that Rust is mentioned nowhere in the post, yet someone managed to bring it up. :P
Funny, I thought exactly the same: “C mentioned? Whatever, let’s talk about Rust.” Now anything moving away from C must be Rust, and any other opinion is wrong or flawed… You’ve tried Rust and are leaving it? Heresy… C’mon people, respect and peace lol… The first time I heard about Odin was watching Bill Hall’s interview on DeveloperVoices, so I’m interested in your experience, because I have friends moving from Go to GDScript for gamedev and I follow slimsag blogging about Mach (a Zig engine), so it’s nice to see Odin as an alternative. But I’m biased (I started my career on C and Pascal); your comment reminds me of Jonathan Blow’s opinion [1][2] on Rust, which is interesting because he also comes from/works with C++ alongside his Jai language. As @xyproto commented about the lower cognitive load: your motivation and joy for hacking is what matters most, I think. Keep it up.
Mentioning different languages in an article about another language is fairly common. I’m really put off by specifically anti-Rust rhetoric, which usually, strangely, involves calling the language a religion and referring to its many users as cultists, because, for whatever reason, people who don’t know Rust are incapable of seeing it as anything else. Not only is it not productive, it’s actively sabotaging projects like Linux.
It’s a popular technology, and it’s often compared to C because it’s deliberately designed to overcome C’s issues. It’s not a religion, and it’s not meant to belittle anyone who uses another language. Stop making straw men out of people trying to discuss IT. If it sounds like I’m getting sick of this, you should see the poor folks behind Rust for Linux and what they have to put up with.
This project originally started with C because I wanted to learn C. But I didn’t like it in the long run.
Neither of us is against Rust; that would be silly. I’ve used it, I’m still trying it in a couple of projects, and I will use it in the future. We are programmers; technology changes. So no anti-Rust, but also no silver bullet. The point of @Aks’s post, for me, was clearly about learning: we must be open to learning new stuff. We like and miss features in the languages we are using, or don’t have enough time to try things; sometimes a language is not the best fit for the context, or we don’t enjoy the process or the result. And we have different opinions, and that’s ok; they can change over time too… IMHO we don’t need to bully or evangelize others. Today you’re totally into Rust, and that’s ok; one day you’ll use another language, or you’ll start working in another area with different tech. That’s how things work…
Regarding the Rust for Linux case: it’s clear that Rust is a better option than C, but there are a bunch of things involved and a complex context. It would be awesome if the job were only about technical stuff, but what about leadership and communication skills? How do you gain respect from the old developers? How do you manage the entire thing? Afaik most devs don’t remain on the kernel team for long, so how do you gain their confidence? The Rust team must be prepared to face all these non-technical challenges; otherwise it would be easier to start a new Rust kernel project from zero in parallel (or just focus on RedoxOS) than to endure the frustration of dealing with old grumpy C devs. But well, it’s just my humble opinion.
If it helps, I am not anti-Rust, and I hope to see more adoption for Rust in Linux kernel. Like I say in the post, hating programming languages is silly.
But I meant what I said: It was just amusing to me to see someone mention it so quickly. :)
Just because a language shows up in many places doesn’t make it fair to frame all the alternatives as “one opinionated guy’s take for people that don’t like Rust”. There’s a pretty annoying undertone from some Rust “advocacy” of making sure everyone knows that not using it is a moral failure. Realistically, everyone in such discussions is aware Rust exists, just as they are aware C++ exists; neither needs mentioning everywhere.
your motivation and joy for hacking is what matters most I think, keep it up.
Indeed, this is the key point I wanted to bring out with this blog post. I am personally not against Rust, I have tinkered with it and I can see why people like it. But when it comes to gamedev, which I do as a hobby just for fun, it’s maybe not for me.
I don’t really like Rust for gamedev. It makes prototyping difficult due to expecting absolute correctness always.
I understand that, and I even feel it fairly often and usually wish Rust had more dynamic features. But I feel like it helps me prototype just as much as it hinders me. I guess you can say the same thing about any functional language. The issue with an idea in your head is that it almost never meshes with reality, and the sooner the language tells you “this won’t work”, the better, for me. Of course you can turn composability and TDD into a religious practice, but it’s even nicer if the language itself follows the laws of physics.
I don’t think this is a good approach to Rust advocacy, and probably just adds fuel to the fire. Regarding some aspects, C can be seen as a rather low bar and moving to many languages (Odin, Ada, even Pascal) could be seen as “progress” from a Rust POV.
So even if one thinks that Rust is the (currently? general?) pinnacle here, it might be better to regard this as a step towards that mountain top, not a denial of self-evident truths.
Their point is if you don’t want dependencies in the first place, you’re not going to add any. Thus the number of transitive dependencies is irrelevant.
IME Rust crates are generally good at exposing “features” to opt in/out of dependencies. The ecosystem is far from reaching the is-even point, and I regularly see efforts being made towards keeping dependencies in check.
It could definitely be better, but there’s a huge gap between NodeJS and Rust.
For random numbers specifically, not having them in std is likely wiser than it might appear: Go has both math/rand and math/rand/v2 now.
I would like to see more crates decide their 0.x is good enough for 1.0 after a while, and for some, move parts to std. That does happen, but it’s closer to C/C++ speed than Go/Python. It’s all tradeoffs anyway, and Rust can be conservative without inflicting too much dependency pain, which I’d say is a decent situation.
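For what it’s worth, opting out of a crate’s optional transitive dependencies usually looks like this in `Cargo.toml` (the crate and feature names below are made up for illustration):

```toml
[dependencies]
# default-features = false drops the optional extras (and their transitive
# deps); "features" then re-enables only the backend you actually need.
some-http-client = { version = "1", default-features = false, features = ["rustls-tls"] }
```

Whether this keeps the dependency tree small depends entirely on how carefully the crate author gated things behind features.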
I specifically remember that I wanted to create a simple client for an API, tried to install a simple HTTP client library, and found out that I had run out of storage space because Cargo had produced over a gigabyte of files.
It’s not irrelevant, because your matrix of possible incompatibilities is n × m, and if there’s a security problem in a transitive library maintained as a one-person project, it might get overlooked or never patched.
This is kinda hard to measure, but compare with C++: if you include Boost, there’s a certain standard, and you can be reasonably sure it works, even with several of the libs working together. You only put trust in “the Boost team” and not in 20 random individuals who never talk to each other or read each other’s code.
I’m not saying it’s better, but it’s not irrelevant. Or maybe take Rails. If you only depend on Rails then you need to watch one dependency, because they are watching their dependencies - and there are many people, many of them doing this as a day job. If you use Joe Random’s framework that’s a different story.
[EDIT: This is in response to @xigoi; lobsters is being very buggy and I can’t find out who this post is responding to right now, and no amount of deleting and reposting is fixing it]
People can scream this from the heavens and engrave it into stone; it’s still never going to be as bad as everyone rolling their own buggy, half-working solutions to everything and creating a renaissance of Greenspun’s Tenth Rule. Besides NPM, which has a dozen security flaws baked into its architecture that Cargo does not (and which is therefore only valuable as a criticism of NPM specifically, not of package management as such), none of this has ever been a significant problem.
And to address each of these points:
This is conditional. What dependencies are you compiling? Are they little stuff like serialization? Or big stuff like a database? You can’t simply argue that all packages add significant or even noticeable build times and storage space. This is as moot as the “Java is bloated” pseudo-argument people like spouting when they mean they simply dislike Java.
Do you know your compiler’s source code? Your operating system’s? Do you know the TCP stack you’re working with from the OS API to the driver itself? If you do, I applaud you, you’re very dedicated. This is a non-problem for everyone else, because most people have no problem building off the shoulders of giants. However, I’ll also counter this point by saying that, if you can’t read the code in the very language you’re using, you’re probably not using a good language. I have no problem reading the source code of my dependencies, and, in fact, often do.
This is a genuine concern and can’t really be spoken of in isolation, as it is a social problem first and foremost (the whole xz thing was due to a significant social engineering campaign). However, at least for Rust, there is the RustSec advisory database, which plugs into Cargo (via cargo audit), meaning the package manager ecosystem itself helps alleviate this issue. It’s definitely not a solved issue, but it’s an example of this argument being turned on its head: package management can actually help combat malicious injection.
This is not available in Italy or California. I guessed from the submitter’s domain it would be available in Brazil, and indeed it was. Here’s what the page says.
Note that this is about messages shared with or sent to Meta AI, not all your messages (which are end-to-end encrypted). I think the submitted title is misleading, and I suggested changing it to the title of the form. Also, I flagged as off-topic because it feels more relevant to pitchforks than computing.
Object to Your Information Being Used for AI at Meta (WhatsApp)
You have the right to object to Meta using your messages with AIs on WhatsApp to develop and improve generative AI models for AI at Meta. You can submit this form to exercise that right.
Messages with AIs on WhatsApp include messages you:
Share with Meta AI
Send to Meta AI
As always, your personal messages and calls remain end-to-end encrypted, meaning not even WhatsApp or Meta can see or listen to them.
AI at Meta is our collection of generative AI features and experiences, like Meta AI and AI Creative Tools, along with the models that power them.
To identify the right account for your objection, we need the phone number associated with your WhatsApp account, including country code and area code. If you change the phone number associated with your account, you’ll need to submit a new form with the updated phone number.
Your objection will be honored if you provide a valid phone number associated with your WhatsApp account. If your objection is honored, we won’t use your messages with AIs on WhatsApp for future development and improvement of generative AI models for AI at Meta.
This form only applies to your messages with AIs on WhatsApp. If you also use Facebook or Instagram and want to submit an objection request for either app, you will need to log into your account and submit a separate form. You can review the forms for Facebook and Instagram for more information.
You can learn more about how we develop and improve generative AI models for AI at Meta on Privacy Center.
To learn more about your rights related to Meta Products and services, visit our Privacy Policy.
Whatsapp conversations are end-to-end encrypted, both individual and group conversations. It’s all documented here. I believe they do this so they don’t have to faff around with police requests for users’ chat logs.
It is; from the vague things one can glean from news reports, it seems the opt-out for WhatsApp is specifically about messages you’ve sent to their AI chatbots (vs your public posts for all the other Meta products)?
I also find it strange how the author only claims Twitter was a propaganda machine after the Musk acquisition, instead of simply always being the case. Twitter has always been the most egregious example of a political battlefield.
Just self host your own blog. There is zero reason to write any content on a blog platform.
In fact, this is literally the cure to technofeudalism. Just don’t use these platforms. The web is open, anyone can publish there. You can market via other channels than SEO via search engines.
I can think of plenty of reasons not to self host.. I think the golden rule should simply be; use your own domain name. And if you use a service you don’t own, frequently export your content.
I agree that self-hosting is a good thing! But I also think it’s not that easy to set up and maintain for everyone; and more importantly, threads like this one right now make it look like self-hosting is a prerequisite for writing about technofeudalism (or any other topic, actually). It would be a shame if people held back from writing only because they think they don’t publish their content in the “appropriate” way. Sure, a self-hosted website might be nicer. But publishing on Medium is still much better than not publishing at all.
& I agree with @amw-zero here, we could probably just reword it to “educating the masses to ‘host’ their own blog is a practical cure to technofeudalism” for some spectrum of ‘host’
When you say “self host”, did you mean buy a domain and set up a VPS with a web server? Or point DNS to a server on your personal internet connection? Or sign up with some free hosting service?
I’m curious about the easiest way to get started without relying on some large entity. Where should the line be drawn?
Yeah, for sure, but I decided to recommend write.as as a “middle option” after seeing his linked profile on about.me. Maybe he is waking up to this (hopefully not cognitive dissonance); he is a professor and researcher, and I have friends like him who don’t have the time, or the inclination, to use a static site generator and self-host it yet…
Can someone who is knowledgeable in both Zig and Rust speak to which would be “better” (not even sure how to define that for this case) to learn for someone who knows Bash, Python and Go, but isn’t a software developer by trade? I’m an infrastructure engineer, but I do enjoy writing software (mostly developer tooling) and I’m looking for a new language to dip my toes into.
I second this and will also add that Zig’s use of “explicit” memory allocation (i.e. requiring an Allocator object anytime you want to allocate memory) will train you to think about memory allocation patterns in ways no other language will. Not everyone wants to think about this of course (there’s a reason most languages hide this from the user), but it’s a useful skill for writing high performance software.
I think the “both” answer is kinda right, which annoys me a little, because it is a lot to learn. But I can accept that we’ll have more and more languages in the future: more heterogeneity, with little convergence, because computing itself is getting bigger and more diverse.
e.g. I also think Mojo adds significant new things – not just the domain of ML, but also the different hardware devices, and some different philosophies around borrow checking, and close integration with Python
And that means there will be combinatorial amounts of glue. Some of it will be shell, some will be extern "C" kind of stuff … Hopefully not combinatorial numbers of build systems and package managers, but probably :-)
Whichever the case, you need to learn to appraise software yourself, otherwise you will have to depend on marketing pitches forever.
Try both. I usually recommend giving Rust a week and Zig a weekend (or any length of time you deem appropriate with a similar ratio), and making up your own mind.
If you’re new to low-level programming in general then Rust will almost certainly be easier for you – not easy, but easier.
Zig is a language designed by people who love the feeling of writing in C, but want better tooling and the benefit of 50 years of language design knowledge. If Rust is an attempt at “C++ done right”, Zig is maybe the closest there is right now to “C done right”. The flip side to that is part of the C idiom they cherish is being terse to the point of obscurity, and having relatively fewer places where the compiler will tell you you’re doing something wrong.
IMO the best ordering is Rust to learn the basics, C to learn the classics, and then Zig when you’ve written enough C to get physically angry at the existence of GNU Autotools.
I would also recommend “Learn Rust the Dangerous Way” once you know C (even if you already know Rust by then), to learn how to go from “C-like” code to idiomatic Rust code without losing any performance (in fact, gaining). It’s quite enlightening to see how you can literally write C code in Rust, then slowly improve it.
The quote doesn’t say that he intends it to replace C++, just that he wants to use it for problems he previously used C++ for
That is a very important distinction, because I’m very sure there are lots of C++ programmers who like programming with more abstraction and syntax than Zig will provide. They’ll prefer something closer to Rust
I’m more on the side of less abstraction for most things, i.e. “plain code”, Rust being fairly elaborate, but people’s preferences are diverse.
BTW Rob Pike and team “designed Go to replace C++” as well. They were writing C++ at Google when they started working on Go, famously because the compile times were too long.
That didn’t end up happening – experienced C++ programmers often don’t like Go, because it makes a lot of decisions for them, whereas C++ gives you all the knobs.
I was asked a few weeks ago, “What was the biggest surprise you encountered rolling out Go?” I knew the answer instantly: Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.
Some people understand “replacement” to mean, “it can fill in the same niche”, while others mean, “it works with my existing legacy code”.
I always interpreted it to mean the former, so to me Zig is indeed a C++ replacement. As in, throw C++ in the garbage can, stop using it forever, and use Zig instead. Replace RAII with batch operations.
To the world: Your existing C++ code is not worth saving. Your C code might be OK.
As a university student, I prefer Zig. Zig is easier to learn (it depends), and for me, I can understand some concepts more deeply when writing Zig code and using Zig libraries. Rust has a higher level of abstraction, which prevents you from touching some of the deeper concepts to some extent. Zig’s principle is to let the user have direct control over the code they write. Currently Zig’s documentation isn’t detailed, but the code in the std library is very straightforward; you can read it without enabling the zls language server, or use a text editor with only syntax highlighting and still have a comfortable code-reading experience.
I am not an expert in Zig, but there was a thread here about Rust and Zig by the person maintaining the Linux kernel driver, written in Rust, for the new Apple hardware:
More specifically, if you’re coming from Python and Go in particular, I think you will enjoy Rust’s RAII and lifetime semantics more. Those are roughly equivalent to Python’s reference counting at compile time (or at runtime if you need to use Rc/Arc). It all ends up being a flavor of automatic memory management, which is broadly comparable to Go’s GC too. And Rust gives you the best of both worlds: 100% safe code by default (like Python, in fact, even stronger since Python lets you write “high-level, memory safe” data races without thinking but Rust makes it more explicit) and equal or higher performance than Go, with fast threading.
Zig sounds more aimed towards folks that come from C, and don’t want to jump into the “let the compiler take care of things for me” world. That said, I’m not experienced with Zig by any means, so you might want to hear from someone who is.
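A small sketch of what that buys you in practice: `Drop` gives deterministic cleanup at scope exit (the RAII part), and `Arc` is the explicit, compile-time-checked cousin of Python’s reference counting. The `Connection` type here is hypothetical.

```rust
use std::sync::Arc;
use std::thread;

// RAII sketch: the guard releases its resource at scope exit, and Arc is
// reference counting made explicit, freed when the last clone is dropped.
struct Connection(&'static str);

impl Drop for Connection {
    fn drop(&mut self) {
        println!("closing {}", self.0); // runs automatically, no manual call
    }
}

fn main() {
    {
        let _conn = Connection("db"); // acquired here...
    } // ...dropped (and "closed") right here, deterministically

    // Shared ownership across threads: the Vec is freed only when the last
    // Arc clone goes away, with no garbage collector involved.
    let shared = Arc::new(vec![1, 2, 3]);
    let clone = Arc::clone(&shared);
    let handle = thread::spawn(move || clone.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);
    assert_eq!(Arc::strong_count(&shared), 1); // worker's clone already dropped
}
```

The difference from Python/Go is that the compiler tracks who may still be holding a reference, instead of a runtime doing it for you.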
Regarding the original post, what if de-initialization can fail? I always found RAII to be relatively limited for reasons like that
It shouldn’t always be silent/invisible.
And I feel like if RAII actually works, then your resource problem was in some sense “easy”.
I’m not sure if RAII still works with async in Rust, but it doesn’t with C++. Once you write (manual) async code you are kind of on your own. You’re back to managing resources manually.
I googled and found some links that suggest there are some issues in that area with Rust:
And I feel like if RAII actually works, then your resource problem was in some sense “easy”.
Then why do people fuck up so much?
I’m not sure if RAII still works with async in Rust, but it doesn’t with C++. Once you write (manual) async code you are kind of on your own. You’re back to managing resources manually.
If the resource doesn’t need any asynchronous operations to be freed, works great. Which is to say, 99% of resources will still be handled by RAII.
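A common Rust pattern for exactly the fallible-deinit case, sketched below with a hypothetical `Writer` type: expose an explicit, fallible `close()` for callers who care about the error, and keep `Drop` as a best-effort fallback, since `Drop::drop` cannot return a `Result`.

```rust
use std::env;
use std::fs::File;
use std::io::{self, Write};

// Drop cannot return an error, so fallible teardown is exposed as an explicit
// method; Drop stays as a best-effort fallback for the forgetful path.
struct Writer {
    file: Option<File>,
}

impl Writer {
    // Explicit, fallible close: callers who care about flush/sync errors use this.
    fn close(mut self) -> io::Result<()> {
        if let Some(mut f) = self.file.take() {
            f.flush()?;
            f.sync_all()?;
        }
        Ok(())
    }
}

impl Drop for Writer {
    fn drop(&mut self) {
        // Best effort only: an error here can at most be logged, not returned.
        if let Some(mut f) = self.file.take() {
            let _ = f.flush();
        }
    }
}

fn main() -> io::Result<()> {
    let path = env::temp_dir().join("raii-close-demo.txt");
    let mut w = Writer { file: Some(File::create(&path)?) };
    if let Some(f) = w.file.as_mut() {
        f.write_all(b"hello")?;
    }
    w.close() // surfaces the flush/sync error instead of swallowing it
}
```

The `Option<File>` dance exists because you can’t move fields out of a type that implements `Drop`; `take()` is the idiomatic workaround.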
I read through it, and as someone who has used both, that whole thread is not arguing well for Zig, only for Rust. It has a lot of trolls in it who are probably just after Lina (I know there are multiple). Most of us who prefer Zig to Rust are not deranged loonies like many in that conversation.
This post on why someone rewrote their Rust keyboard firmware in Zig might help you understand some of the differences between the two languages: https://kevinlynagh.com/rust-zig/
You’ll probably get along easier with Rust, but Zig might just bend your mind a little more. You need a bit more tolerance of bullshit with Zig since there’s less tooling, less existing code, and you might get stuck in ways that are new, so your progress will likely be slower. (I have one moderately popular library in Rust, but spend all my “free” time doing Zig, which I think demonstrates the difference nicely!)
I guess I think of what’s involved with learning to write Rust as more of an exercise (learn the rules of the borrow checker to effectively write programs that pass it), whereas imo with Zig there’s some real novelty in expressing things with comptime. It of course depends on your baseline; maybe sum types are new enough to you already.
One of the things I dislike about Rust’s documentation and educational material the most is that it’s structured around learning the rules of the borrow checker to write programs that pass it (effectively or not :-) ), instead of learning the rules of the borrow checker to write programs that leverage it – as you put it, effectively writing programs that pass it.
The “hands-on” approach of a lot of available materials is based on fighting the compiler until you come up with something that works, instead of showing how to build a model structured around borrow-checking from the very beginning. It really pissed me off when I was learning Rust. It’s very difficult to follow, like teaching dynamic memory allocation in C by starting with nothing but null pointers and gradually mallocing and freeing memory until the program stops segfaulting and leaking memory. And it’s really counterproductive: at the end of the day all you’ve learned is how to fix yet another weird cornercase, instead of gaining more fundamental insight into building models that don’t exhibit it.
I hope this will slowly go out of fashion as the Rust community grows beyond its die-hard fan base. I understand why a lot of material from the “current era” of Rust development is structured like this, because I saw it with Common Lisp, too. It’s hard to teach how to build borrow checker-aware models without devoting ample space to explaining its shortcomings, showing alternatives to idioms that the borrow checker just doesn’t deal well with, explaining workarounds for when there’s no way around them and so on. This is not the kind of thing I’d want to cover in a tutorial on my favourite language, either.
I don’t know Zig so I can’t weigh in on the parent question. But with the state of Rust documentation back when I learned it (2020/2021-ish) I am pretty sure there’s no way I could’ve learned how to write Rust programs without ample software development experience. Learning the syntax was pretty easy (prior exposure to functional programming helped :-) ) but learning how to structure my programs was almost completely a self-guided effort. The documentation didn’t cover it too much and asking the community for help was not the most pleasant experience, to put it lightly.
like teaching dynamic memory allocation in C by starting with nothing but null pointers and gradually mallocing and freeing memory until the program stops segfaulting and leaking memory.
That’s a good one! There is a thin line between fearless and thoughtless.
If you like Go, you might like Zig, since both are comparatively simple languages. You can keep all of either language in your head. This means lots of things are not done for you.
Rust is more like Python: both are complicated languages that do more things for you. It’s unlikely you can keep either one fully in your head, but you can keep enough in your head to be useful.
I think this is why many people compare Rust to C++ and Zig to C. C++ is also a complicated language; I’d say it’s one of the most complicated around. Rust is not as bad as C++ yet, since it hasn’t been around long enough to accumulate loads of cruft. Perhaps, given the way Rust is structured around backwards compatibility, it will find a way to keep the complexity reasonable. So far most Rust code-bases have enough in common that you can get along. In C++ you can find code-bases so dissimilar that they don’t even feel like the same language.
It should also be noted that Zig is a lot younger than Rust, so it’s not entirely clear how far down the complicated path Zig will end up, but I’d guess based on their path so far, they won’t go all in on complicated like Rust and C++.
Well, @matklad is already here, but for me, coming from Go and frustrated after trying Rust a couple of times, I was motivated to try Zig by @mitchellh talking with @kristoff about why he chose Zig for Ghostty (his terminal emulator project), and how that matches my experience/profile…
… the reason I personally don’t like working too much in Rust (I have written Rust; I think as a technology it’s a great language, it has great merits), but the reason I personally don’t like writing Rust is that every Rust project I read ends up basically being “chase the trait implementation around”: which file is this trait defined in, which file is the implementation in, how many implementations are there… and I feel like I’m just chasing the traits, and I don’t find that particularly… I don’t know, productive, I should say. I like languages where you start reading on line one, you read to line ten, and that’s exactly what happens. And so I think Zig’s a good fit …
At this point I felt crazy for even considering Rust. In 4 days I had accomplished what took me 16 days in Rust. But more importantly, my abstractions were holding up.
Not speaking to the languages at all, but I’d say to choose the more mature language - Rust. Even after learning Rust, I still told people to just learn C++ if the goal was to learn that kind of language. That’s a trickier choice now (C++ vs Rust) because Rust has reached a tipping point in terms of resources, so it’s easier to recommend. Zig is just way too early and it’s still not a stable language, I wouldn’t spend the time on it unless you have a specific interest.
[…] the software projects I’ve written with htmx are a much better experience for users, and orders of magnitude easier to maintain, than the software projects they replaced. They are likely to remain useful for longer than anything else I’ve ever written (so far).
This is the main reason why I now exclusively use HTMX+Go+Templ in my personal projects. I simply got tired of updating my React dependencies to keep up with security and bug fixes, just to see that my router, state management and query libraries had a major version release that breaks my project. I don’t have any more spare time to fix the mess created by the dependencies update, I just want my toy project to work!
I was introduced to Go + HTMX (the GOTH stack) by memes [1], but I’m considering it seriously now, as I’m equally fed up with keeping up with rewrite-inducing framework updates. Can you link some of your personal projects, if possible? I’d love to see how you ended up building things with that setup.
The repo in which I am currently using that stack is private.
But I can tell you that if you already have some experience with Go, your backend will look pretty similar to any other backend that you have previously developed. The only major difference being that I have a couple of helper functions to render the Templ components, for example:
import (
	"bytes"
	"context"
	"fmt"
	"net/http"

	"github.com/a-h/templ"
)

const initialBufferSize = 4096

func RenderComponent(ctx context.Context, w http.ResponseWriter, component templ.Component) error {
	// Render into a buffer first so a rendering error never
	// leaks a half-written response to the client.
	buf := bytes.NewBuffer(make([]byte, 0, initialBufferSize))
	err := component.Render(ctx, buf)
	if err != nil {
		return fmt.Errorf("an error happened while rendering a component: %w", err)
	}
	_, err = w.Write(buf.Bytes())
	if err != nil {
		return fmt.Errorf("an error happened while writing a view to http.ResponseWriter: %w", err)
	}
	return nil
}
I know, but I would rather prioritize a good UX (user gets an error if the rendering process fails), than some performance gains that I currently don’t need.
I recently dipped my toe into full-stack webdev with Go + HTMX. It’s my first time using Go also, but very smooth sailing converting my static site into a dynamic one. The snazziest feature I’ve added so far is a login modal that doesn’t require reloading the page, but it’s a blog so most of the interactivity is optional.
I’m using the stdlib templates (html/template) and air for auto-reloading.
And what about your professional projects? I believe that server-side HTML and htmx are more than enough for most websites and webapps, no matter how refined the UX is, and no matter how many users you’ve got.
I totally share your opinions. I would love to use this stack at work as well and it would probably fulfill all our requirements.
Nonetheless, it is not 100% up to me to decide what tech stack we use at work. I am going to advocate for Go+HTMX+Templ in future projects.
But, it might be difficult to convince the hardcore JS enthusiasts of the team.
Things which do some server-side lookups, which are a bit annoying to have a full page refresh for. Think dropdowns where you can search a customer name by typing, and the dropdown fills with the matches. But anything more complicated goes to old-school forms.
That example looks nice. When I checked htmx, there was very minimal/bad handling of errors. For example, if the server returns a 500, what happens then? I think there were improvements related to that, but my Svelte stuff works now and is fun to do. As long as it’s small components, it’s all similar enough :)
For example if the server returns a 500, what happens then?
Then htmx triggers a DOM event with the details of the error and you can handle the event. htmx can trigger a bunch of different events in response to different conditions. Here’s a small example https://zettelkit.xyz/static/script.js
Yeah, I feel the same: no time to manage deps and crazy tooling. I’m now using htmx too, but with Zig; a couple of weeks ago I rewrote a CRUD app as HTMX+Zig+ZMPL. I’ll definitely use more htmx in my future projects!
Is it real? I’ve heard folks saying that it is not, but I looked up the chips involved and it at least seems plausible. The YouTube video included in the article seems real sketchy and only shows a single game being played; no DOS or Windows 95 action.
I saw it on the same person’s Mastodon account, but I had not seen the longer video. That’s a bit more convincing than the 15 second clip in the Tom’s Hardware article.
I’m using a ThinkPad T14 Gen 1 (Ryzen 7 4750U, 32 GB) running ZorinOS for my personal projects; definitely the best ThinkPad I’ve ever had. An old ThinkPad T480 (i5-8350U, 16 GB) running Ubuntu for work. Both are on an external Dell P2418HZM 24” full-HD video-conference monitor, with a Keychron K3 keyboard and a Logitech MX Vertical mouse, plus a Pinebook Pro with OpenBSD for fun.
It accreted so many features from other shells, and so many programs, build systems, and autocompletion scripts grew to depend on its features, that it takes tremendous effort to remove…
LIDAR aside, computer vision and a raw video feed is more than enough to have prevented this collision.
Exactly! Engineers designing autonomous cars are required to account for low-visibility conditions, even ones way worse than what this video shows (think hail, rain, dust, etc.). This was an easy case! And yet the car showed no signs of slowing down.
EDIT: twitter comments like this pain me. People need to be educated about the capabilities of autonomous cars:
She is walking across a dark road. No lights even though she has a bike. She is not in a cross walk. Not the car’s fault.
Yes it was the car’s fault. This is shocking, extraordinary behavior for an autonomous car.
In reality, both the pedestrian and the car (and Uber) share some responsibility. You shouldn’t cross a four lane road at night wearing black outside of a crosswalk. A human driver is very unlikely to see you and stop. Not blaming the victim here, just saying it’s easier to stay safe if you don’t do that. However, the promise of autonomous cars with IR and LIDAR and fancy sensors is that they can see better than humans. In this case, they failed. Not to mention the human backup was very distracted, which is really bad.
From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human. It should be better, it should see better, it should react better. Automatic collision avoidance is a solved problem already in mass-market cars today, and Uber failed it big time. Darkness is an excuse for humans, but not for autonomous cars, not in the slightest.
She should still be alive right now. Shame on Uber.
You can’t conclude that someone would not have stopped in time from the video. Not even a little. Cameras aren’t human eyes. They are much much worse in low visibility and in particular with large contrasts; like say those of headlights in the dark. I can see just fine in dark rooms where my phone can’t produce anything aside from a black image. It will take an expert to have a look at the camera and its characteristics to understand how visible that person was and from what distance.
From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human.
Certainly not when distracted by a cell phone. If anything, this just provides more evidence that driving while distracted by a cell phone, even in an autonomous vehicle, is a threat to life, and should be illegal everywhere.
She was driving. The whole point of sitting in the driver’s seat of a TEST self-driving car is for the driver to take over and handle situations like this.
Without this incident, you would soon have seen a TV spot with precisely this scene: a (hot) businesswoman looking at the new photos uploaded to Facebook by her family, with a voice saying something like: “We can bring you to those you Like.”
The fact that she was paid to drive a prototype does not mean she was an experienced software engineer trained to not trust the AI and to keep continuous control of the car.
And indeed the software chose the speed. At that speed, human intervention was impossible.
Also, the software did not deviate, despite the free lane beside it and despite the fact that the victim had to cross that lane first, so there was enough time for a computer to calculate several alternative trajectories, or even simply to alert the victim with light signals or sounds.
So the full responsibility must be tracked back to people at Uber.
The driver was simply fooled into thinking they could trust the AI by a stupidly broken UI.
And indeed the driver/passenger reactions were part of Uber’s test.
Looking at your phone while sitting in the driver’s seat is a crime for a reason. Uber’s AI failed horribly and all their cars should be recalled, but the driver failed too. If the driver had not been looking at their phone, literally any action at all could have been taken to avoid the accident. It’s the responsibility of that driver to stay alert, with attention on the road, not looking at a phone, reading a book, or watching a film. Plane pilots do it every single day. Is their attention much more diminished? Yes, of course it is. Should we expect literally zero attention from the “driver”? Absolutely not.
And indeed I guess that the “driver” behaviour was pretty frequent among the prototypes’ testers.
And I hope somebody will ask Uber to provide in court the recordings of all the tests done so far, to prove whether they really did not know that drivers do not actually drive.
NO. The passenger must not be used as a scapegoat.
This is an engineering issue that was completely avoidable.
The driver behaviour was expected and desired by Uber
You’ve gotta stop doing this black-and-white nonsense. Firstly, stop yelling. I’m not using the passenger as a scapegoat, so I don’t know who you’re talking to. The way the law was written, it’s abundantly clear that this technology is to be treated as semi-autonomous. That does not mean that Uber is not negligent. If you are sitting in a driver’s seat and you’re watching Harry Potter while your car drives through a crowd of people, you should be found guilty of negligence, independent of any charges that come to both the lead engineers and owners of Uber. You have a responsibility to at least take some action to prevent deaths that otherwise may be at no fault of your own. You can’t just lounge back while your car murders people, and in the same respect, when sitting in the driver’s seat your eyes should not be on your phone, period.
Edit: That image is of a fully autonomous car, not a semi-autonomous car. There is actually a difference despite your repeated protestations. Uber still failed miserably here, and I hope their cars get taken off the road. I know better than to hope their executives will receive any punishment except maybe by shareholders.
I guess you are not an engineer, nor a programmer.
This isn’t the first time you’ve pulled statements out of a hat as if they are gospel truth without any evidence and I doubt it will be the last. I think your argument style is dishonest and for me this is the nail in the coffin.
If there is “no way” a human can do this, then we’ve certainly never had astronauts pilot a tiny spacecraft to the moon without being able to physically change position, and we certainly don’t have military pilots in fighter jets continuously concentrating while refueling in air on missions lasting 12 hours or more… or… or…. truck drivers driving on roads with no one for miles…or…
Maybe Uber is at fault here for not adequately psychologically screening, and training its operators for “scenarios of intense boredom.”
You are talking about professionals specifically trained to keep that kind of concentration.
And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he’s trustworthy.
I’m talking about Uber’s actual goal here, which is to build “self driving cars” for the masses.
It’s just a stupid UI design error. A very obvious one to see and to fix.
Do you really need some hints?
Remove the car’s control from the AI and turn it into something that enhances the driver’s senses.
Make it observe the driver’s state and refuse to start if he’s drunk or too tired to drive.
Stop it from starting if any of its parts is not working properly.
This way the responsibility for an incident would lie with the driver, not with Uber’s board of directors (barring factory defects, obviously).
You’re being adversarial just to try to prove your point, which we all understand.
You are talking about professionals specifically trained to keep that kind of concentration. And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he’s trustworthy.
A military pilot isn’t being asked (or trained) to operate an autonomous vehicle. You’re comparing apples and oranges!
I’m talking about the actual Uber’s goal here, which is to build “self driving cars” for the masses.
Yes, the goal of Uber is to build a self-driving car. We know. The goal of Uber is to build a car that is fully autonomous; one that allows all passengers to enjoy doing whatever it is they want to do: reading a book, watching a movie, etc. We get it. The problem is that those goals are just that: goals. They aren’t reality, yet. And there are laws which Uber and its operators must continue to follow in order for any department of transportation to allow these tests to continue, in order to build up confidence that autonomous vehicles are as safe as, or (hopefully) safer than, already licensed motorists. (IANAL, nor do I have any understanding of said laws, so that’s all I’ll say there)
It’s just a stupid UI design error. A very obvious one to see and to fix.
So, your point is that the operator’s driving experience should be enhanced by the sensors, and that the car should never be fully autonomous? I can agree to that, and have advocated for that in the past. But, that’s a different conversation. That’s not the goal of Uber, or Waymo.
The reason a pedestrian is dead is because of some combination of flaws in:
the autonomous vehicle itself
a distracted operator
(apparently) a stretch of road with crosswalks too far apart
a pedestrian jaywalking (perhaps because of the previous point)
a pedestrian not wearing proper safety gear for traveling at night
an extremely ambitious engineering goal of building a fully autonomous vehicle that can handle all of these things safely
… in a world where engineering teams use phrases like, “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic controllers, space craft systems, and missile guidance systems…
… in a world where engineering teams use phrases like, “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic controllers, space craft systems, and missile guidance systems…
Upvoted for this.
I’m not being adversarial to prove a point.
I’m just arguing that Uber’s board of directors are responsible and must be accountable for this death.
Very well-said on all of it. If anyone is wondering, I’ll even add to your last point what kind of processes developers of things like autopilots follow. That’s things like DO-178B, with so many assurance activities and so much independent vetting put into it that those evaluated claim it can cost thousands of dollars per line of code. The methods to similarly certify techniques like deep learning are in the prototype phase, working on simpler instances of the tech. Uber would have had to apply similarly rigorous processes at several times the pace and scale, at a fraction of the cost, of experienced companies… on cutting-edge techniques requiring new R&D just to know how to vet them.
Or they cut a bunch of corners hacking stuff together and misleading regulators to grab a market quickly, like they usually do. And that killed someone who, despite the human factors, should’ve lived if the tech (a) worked at all and (b) was evaluated against common road scenarios that could cause trouble. One or both of these is false.
I don’t know if you can conclude that’s the point. Perhaps the driver is there in case the car says “I’m stuck” or triggers some other alert. They may not be an always on hot failover.
People often say this when they’re partly blaming the victim to not seem overly mean or unfair. We shouldn’t have to when they do deserve partial blame based on one fact: people who put in a bit of effort to avoid common problems/risks are less likely to get hit with negative outcomes. Each time someone ignores one to their peril is a reminder of how important it is to address risks in a way that makes sense. A road with cars flying down it is always a risk. It gets worse at night. Some drivers will have limited senses, be on drugs, or drunk. Assume the worst might happen since it often does and act accordingly.
In this case, it was not only a four-lane road at night that the person crossed: people who live in the area said on HN that it’s a spot noticeably darker than the other dark spots, and it stretches out longer. The implication is that there are other places on that road with more light. When I’m crossing at night, I do two to three things to avoid being hit by a car:
(a) cross somewhere where there’s light
(b) make sure I see or hear no car coming before I cross.
Optionally, (c) cross the first 1-2 lanes, get to the very middle, pause for a double-check of (b), and then cross the next two.
Even with blame mostly on car & driver, the video shows the human driver would’ve had relatively little reaction time even if the vision was further out than video shows. It’s just a bad situation to hit a driver with. I think person crossing at night doing (a)-(c) above might have prevented the accident. I think people should always be doing (a)-(c) above if they value their life since nobody can guarantee other people will drive correctly. Now, we can add you can’t guarantee their self-driving cars will drive correctly.
I just saw a video on that from “Adam Ruins Everything.” You should check that show out if you like that kind of stuff. Far as that point, it’s true that it was originally done for one reason but now we’re here in our current situation. Most people’s beliefs have been permanently shaped by that propaganda. The laws have been heavily reinforced. So, our expectations of people’s actions and what’s lawful must be compatible with those until they change.
That’s a great reason to consider eliminating or modifying the laws on jaywalking. You can bet the cops can still ticket you on it, though.
And every single thing you listed is mitigated by just slowing down.
Camera feed getting fuzzy ? Slow down. Now you can get more images of what’s around you, combine them for denoising, and re-run your ML classifiers to figure out what the situation is.
ML models don’t just classify what’s in your sensor feeds. They also give you numerical measures of how close your feed is to the data they were trained on. When those measures decline, it could be because the sensors are malfunctioning. It could be rain/dust/etc. It could be a novel, untrained situation. Every single one of those things can be mitigated by just slowing down. In the worst case, you come to a full stop and tell the rider he needs to drive.
Nice project, Av! In the same vein, I’m using the s3db extension in a couple of hobby projects: very useful, slower but cheaper and simpler than a rqlite cluster or even libsql/turso. Btw, I’m aware that you work there; I like the project and the idea of local replicas, but unfortunately I was affected by the free-tier incident while trying the platform in a PoC. I’ll eventually try it again…
As someone who never again wants to write a k8s yaml file, nor a yaml file to give to the cloud provider to stand up a k8s cluster to consume additional yaml files, is learning a BEAM-based language for me? Or do these BEAM VM apps just get deployed to k8s pods defined with yaml files?
I feel since I’m a distributed systems guy I should probably just learn Erlang or Gleam for the vibes alone.
Learning Erlang* / OTP is definitely worth it if you do distributed systems things. It has a lot of good ideas. Monitors, live code updates, and so on are all interesting. Even if you never use them, some of the ideas will apply in other contexts.
If someone told me I’d built an Erlang, I’d consider it praise, so the article doesn’t quite work for me.
*Or probably Elixir now. It didn’t exist when I learned Erlang. Also, I am the person who looks at Prolog-like syntax and is made happy.
Another huge benefit is that it gives you a new way to think about and solve problems. The main application I maintain at work is a high-throughput, soft-real-time imaging/ML pipeline. In a previous life I worked on a pretty big Elixir-based distributed system, and this imaging system is definitely highly Elixir/OTP-inspired despite being written in C++. All message passing, shared-nothing except for a few things for performance (we have shared_ptrs that point to immutable image buffers, as an example). Let It Die for all of the processing threads, with supervisors that restart pipeline stages if they crash. It’s been running in production for close to 6 years now, and it continually amazes me how absolutely robust it has been.
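That “let it crash, restart the stage” idea ports to most languages with threads or goroutines. A rough Go sketch of the pattern (the function names and restart policy are invented for illustration; OTP supervisors are far richer than this):

```go
package main

import "fmt"

// supervise runs worker, restarting it whenever it panics, up to
// maxRestarts times -- a crude cousin of an OTP one_for_one supervisor.
func supervise(worker func(), maxRestarts int) (restarts int) {
	runOnce := func() (panicked bool) {
		defer func() {
			if r := recover(); r != nil {
				panicked = true
			}
		}()
		worker()
		return false
	}
	for {
		if !runOnce() {
			return restarts // worker exited cleanly
		}
		if restarts == maxRestarts {
			return restarts // give up and escalate
		}
		restarts++
	}
}

func main() {
	n := 0
	flaky := func() { // crashes twice, then succeeds
		n++
		if n < 3 {
			panic("transient failure")
		}
	}
	fmt.Println("restarts:", supervise(flaky, 5)) // restarts: 2
}
```

Real supervisors also add restart-rate limits and strategies for sibling workers, but even this crude version buys you the core property: a crashing stage doesn’t take the whole pipeline down.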
Funny thing, Erlang does exactly the same thing with large binaries.
The vibes, as you suggested, are a great reason.
That was excellent, thank you!
Given your interest in formal methods, I can recommend having a look at Joe Armstrong’s thesis. I think he makes a very good case for why Erlang behaviours make sense from a testing and formal verification point of view (even though the thesis itself doesn’t contain any verification).
Erlang’s behaviours are interfaces that give you nice building blocks once you implement them, e.g. there’s a state machine behaviour, once you implement it using your sequential business logic, you get a concurrent server (all the concurrency is hidden away in the OTP implementation which is programmed against the interface).
The thesis is quite special in that he was 50+ years old when he wrote it, with like 20 years of distributed systems programming experience already, when compared to most theses that are written by 20-something year olds.
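The behaviour idea sketched above can be mimicked in other languages too. Here is a rough Go analogue of the “write sequential logic, get a concurrent server” split (all names invented; real OTP behaviours handle far more, e.g. timeouts, system messages, and code upgrades):

```go
package main

import "fmt"

// StateMachine is the "behaviour" interface: users supply purely
// sequential logic and never touch concurrency primitives.
type StateMachine interface {
	Handle(event string) (reply string)
}

// serve hides the concurrency: it runs the machine in its own goroutine
// and serializes every event through a channel, like a gen_server loop.
func serve(m StateMachine) (send func(string) string, stop func()) {
	type call struct {
		event string
		reply chan string
	}
	calls := make(chan call)
	done := make(chan struct{})
	go func() {
		for c := range calls {
			c.reply <- m.Handle(c.event)
		}
		close(done)
	}()
	send = func(e string) string {
		r := make(chan string)
		calls <- call{e, r}
		return <-r
	}
	stop = func() { close(calls); <-done }
	return send, stop
}

// A sequential turnstile state machine; note there are no locks anywhere.
type Turnstile struct{ locked bool }

func (t *Turnstile) Handle(ev string) string {
	switch {
	case ev == "coin" && t.locked:
		t.locked = false
		return "unlocked"
	case ev == "push" && !t.locked:
		t.locked = true
		return "locked"
	}
	return "ignored"
}

func main() {
	send, stop := serve(&Turnstile{locked: true})
	fmt.Println(send("coin")) // unlocked
	fmt.Println(send("push")) // locked
	stop()
}
```

This is the part that makes behaviours attractive for testing and verification: the `Turnstile` logic can be exercised as a plain sequential function, while all the concurrency lives in one reusable, well-tested loop.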
This reminds me of the efforts a couple of projects made years ago to run BEAM in the cloud, like Erlang on Xen (LING) or the rumprun unikernel; there are also other things like ergo (a “Golang OTP”).
Yes! I was in the exact same situation as you and learned Elixir. The processes API is straightforward and relatively simple, and the Elixir documentation on the topic is superb. On top of that, Elixir is a great language: functional and very ergonomic. Erlang is fine too, but Elixir has that quick-to-write syntactic sugar comparable to Python or Ruby.
I have seen at least 4 different K8s-based teams. Not only do they spend more on infrastructure in terms of bills, but the human cost of running a K8s infrastructure is at least 4-8 hours a week of a mid-level engineer’s time.
What’s the alternative? Google cloud run, AWS fargate, render.com, Railway app and several others.
Unless you are an infrastructure company or have 50+ engineers, K8s is a distraction.
Exactly, couldn’t agree more… This post reminds me of the pre-k8s era: AWS struggling to deliver ECS, everyone creating tooling to handle containers, every mid-sized company creating their own PaaS… “Let’s use CoreOS… try Deis… damn, wiped the entire environment, Rancher will solve everything now..!” etc. My team at the time was happy using the old drone.io, deploying on hardened EC2 instances and relying on a couple of scripts handling the daemons and health checking; faster deployments, high availability… Most of the other apps ran on Heroku.
Now containers are everywhere, we have infinite tools to manage them, and k8s became the cloud de facto standard; consulting companies are happy and devs are sad… There is a huge space between a couple of scripts and k8s, where we need to analyze the entire context. IMHO, if you don’t have the budget for an entire SRE/DevOps team, weeks or months for planning/provisioning, and you just run stateless apps, even managed k8s (EKS, GKE, AKS) does not make any sense. As you said, you can achieve high availability using ultra-managed solutions which run on top of k8s (also App Engine flex, fly.io) or a combination of other things; we also have nice tools for self-hosting like Kamal and Coolify, and other options like NanoVMs or Unikraft.
btw, I’m a former SRE at a payment gateway/acquirer in Brazil (valued at $2.15B), responsible for hundreds of thousands of users and PoS terminals connected to App Engine clusters compliant with PCI-DSS, managed by a small team. So, yes, I have some idea of what I’m talking about…
Yes, if you’re not pushing more traffic than my M2 laptop could also do on a rainy day, you do not need Kubernetes.
I have served 100qps+ on Google Cloud Run. Not huge by FAANG standards but still more than what most startups need.
I looked at App Runner, but it would be hell to have 20-30 ‘services’ talk to each other over that, and then we’d be stuck in a dead end. We are moving everything to Kubernetes because with a vanilla EKS setup it just works (a surprisingly low amount of headache) and it offers customisability into the far heavens for when we need it.
How will you deal with 20-30 services talking to each other on K8s? Isn’t complexity the same?
It’s hard to find a good reason to use C in the last 10 years or so for mainstream development, but so many alternatives just seem like “one opinionated guy’s take for people that don’t like Rust”. Rust has its problems but it’s a genuine leap forward with outstanding package management, modern type system, extreme performance.
Odin and others feel like C with some Go syntax and the GC dial in a different position. They’re almost all anti package management due to the authors being disappointed with Go modules.
This project originally started with C because I wanted to learn C. But I didn’t like it in the long run.
I chose Odin because, yeah, I don’t really like Rust for gamedev. It makes prototyping difficult by expecting absolute correctness, always. This is good for a final product, but in gamedev, when the project is being planned out while it’s still being made and is in a constant state of flux, Rust just didn’t work for me.
Odin is used in production gamedev anyway; the language is used in a lot of visual-effects work. You can see it on their site: https://odin-lang.org/
edit: I admit I find it amusing that Rust is mentioned nowhere in the post, yet someone managed to bring it up. :P
Funny, I thought exactly the same: “C mentioned, whatever, let’s talk about Rust”. Nowadays anything written in C must become Rust, and any other opinion is wrong or flawed. You tried Rust and are leaving it? Heresy… C’mon people, respect and peace lol… The first time I heard about Odin was watching the Bill Hall interview on Developer Voices, so I’m interested in your experience: I have friends who moved from Go to GDScript for gamedev, and I follow slimsag blogging about Mach (the Zig engine), so it’s nice to see Odin as an alternative. I’m biased, though (I started my career with C and Pascal), and your comment reminds me of Jonathan Blow’s opinion [1][2] on Rust, which is interesting because he also comes from C++ alongside his Jai language. As @xyproto commented, the lower cognitive load, your motivation, and the joy of hacking are what matter most, I think. Keep it up.
Mentioning different languages in an article about another language is fairly common. I’m really put off by specifically anti-Rust rhetoric, which usually, strangely, involves calling the language a religion and referring to its many users as cultists, because, for whatever reason, people who don’t know Rust are incapable of seeing it as anything else. Not only is it not productive, it’s actively sabotaging projects like Linux.
It’s a popular technology, and it’s often compared to C because it’s deliberately designed to overcome C’s issues. It’s not a religion, and it’s not meant to belittle anyone who uses another language. Stop making straw men out of people trying to discuss IT. If it sounds like I’m getting sick of this, you should see the poor folks behind Rust for Linux and what they have to put up with.
but @Aks was clear at the beginning
and then commented
Neither of us is against Rust; that would be silly. I’ve used it, I’m still trying it in a couple of projects, and I’ll use it in the future. We are programmers and technology changes, so: no anti-Rust, but also no silver bullet. The point of @Aks’s post, for me, was about learning: we must be open to learning new stuff. We like and miss features in the languages we use, or don’t have enough time to try things; sometimes a language is not the best fit for the context, or we don’t enjoy the process or the result. We have different opinions, and that’s ok; they can change over time, too… IMHO we don’t need to bully or evangelize others. Today you’re totally into Rust, and that’s ok; one day you’ll use another language, or you’ll start working in another area with different tech. That’s how things work…
Regarding the Rust for Linux case, it’s clear that Rust is a better option than C, but there are a bunch of things involved and a complex context. It would be awesome if the job were only about technical stuff, but what about leadership and communication skills? How do you gain respect from the old developers? How do you manage the entire thing? AFAIK most devs don’t remain on the kernel team for long, so how do you earn their confidence? The Rust team must be prepared to face all these non-technical challenges; otherwise it would be easier to start a new Rust kernel project from zero in parallel (or just focus on RedoxOS) than to endure the frustration of dealing with old grumpy C devs. But well, it’s just my humble opinion.
If it helps, I am not anti-Rust, and I hope to see more adoption for Rust in Linux kernel. Like I say in the post, hating programming languages is silly.
But I meant what I said: It was just amusing to me to see someone mention it so quickly. :)
It should be expected. There are very few languages, if any, that succeed at being in so many places at once. OCaml maybe? C++?
A language being in many places doesn’t make it right to frame every alternative as “one opinionated guy’s take for people who don’t like Rust”. There’s a pretty annoying undertone in some Rust “advocacy” of making sure everyone knows that not using it is a moral failure. Realistically, everyone in such discussions is aware Rust exists, just as they are aware C++ exists; neither needs mentioning everywhere.
Indeed, this is the key point I wanted to bring out with this blog post. I am personally not against Rust, I have tinkered with it and I can see why people like it. But when it comes to gamedev, which I do as a hobby just for fun, it’s maybe not for me.
I understand that, I even feel it fairly often and usually wish Rust had more dynamic features. But I feel like it helps my prototyping just as much as it hinders it. I guess you can say the same thing about any functional language. The issue with an idea in your head is that it almost never meshes with reality, and the sooner the language tells you “this won’t work”, the better, for me. Of course you can turn composability and TDD into a religious practice, but it’s even nicer if the language itself follows the laws of physics.
Have you read https://loglog.games/blog/leaving-rust-gamedev/ ?
I don’t think this is a good approach to Rust advocacy, and probably just adds fuel to the fire. Regarding some aspects, C can be seen as a rather low bar and moving to many languages (Odin, Ada, even Pascal) could be seen as “progress” from a Rust POV.
So even if one thinks that Rust is the (currently? general?) pinnacle here, it might be better to regard this as a step towards that mountain top, not a denial of self-evident truths.
My main gripe with Rust is the npm-style ecosystem where everything has hundreds of dependencies.
This is often brought up as an issue, but if you don’t want to use other people’s code, why is it a problem for you that other people do?
Their point is if you don’t want dependencies in the first place, you’re not going to add any. Thus the number of transitive dependencies is irrelevant.
IME Rust crates are generally good at exposing “features” to opt in/out of dependencies. The ecosystem is far from reaching the `is-even` point, and I regularly see efforts being made towards keeping dependencies in check. It could definitely be better, but there’s a huge gap between NodeJS and Rust.
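For the record, that opting in/out happens through Cargo feature flags. A minimal sketch, with a made-up crate name (many real crates, e.g. HTTP clients, expose features along these lines):

```toml
[dependencies]
# Hypothetical crate: disable its default feature set, then re-enable only
# what you need, so its optional transitive dependencies never get pulled in.
some-http-client = { version = "1", default-features = false, features = ["json"] }
```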
Unfortunately, the Rust standard library is so small that you can’t even get basic stuff like random number generation without external libraries.
You actually can! It’s hacky and not recommended: Cheap random number generator with `std`. (This is meant as a fun fact, not a gotcha.)

For random numbers specifically, not having them in std is likely wiser than it would appear: Go has `math/rand` and `math/rand/v2` now.

I would like to see more crates decide their 0.x is good enough for 1.0 after a while, and, for some, move parts to std. That does happen, but it’s closer to C/C++ speed than Go/Python. It’s all tradeoffs anyway, and Rust can be conservative without inflicting too much dependency pain, which I’d say is a decent situation.
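The hack in question is (I believe) abusing `std`’s hash-map seeding; a sketch of the usual trick, strictly for fun:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

// RandomState is what HashMap uses to resist hash-flooding attacks; it is
// seeded from OS entropy, so finishing an empty hasher yields an
// unpredictable u64. Fine as a party trick, not a real (let alone
// cryptographic) RNG.
fn cheap_random_u64() -> u64 {
    RandomState::new().build_hasher().finish()
}

fn main() {
    // Each call builds a freshly seeded hasher, so the values differ.
    println!("{} {}", cheap_random_u64(), cheap_random_u64());
}
```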
But the “foundational” crates filling the gaps in the standard library don’t tend to have tons of dependencies.
I specifically remember that I wanted to create a simple client for an API, tried to install a simple HTTP client library, and found out that I had run out of storage space because Cargo had produced over a gigabyte of files.
It’s not irrelevant, because your matrix of possible incompatibilities is n × m, and if there’s a security problem in a transitive library maintained as a one-person project, it might get overlooked or never patched.
This is kind of hard to measure, but compare it to C++: if you include Boost, there’s a certain standard and you can be reasonably sure it works, even with several of the libs working together. You only put trust in “the Boost team”, not in 20 random individuals who never talk to each other or read each other’s code.
I’m not saying it’s better, but it’s not irrelevant. Or maybe take Rails. If you only depend on Rails then you need to watch one dependency, because they are watching their dependencies - and there are many people, many of them doing this as a day job. If you use Joe Random’s framework that’s a different story.
[EDIT: This is in response to @xigoi; lobsters is being very buggy and I can’t find out who this post is responding to right now, no amount of deleting and reposting is fixing it]
People can scream this from the heavens and engrave it into stone; it’s still never going to be as bad as everyone rolling their own buggy, half-working solutions to everything and creating a renaissance of Greenspun’s Tenth Rule. Besides NPM, which has a dozen security flaws baked into its architecture that Cargo does not, and which is only valid as a criticism of NPM’s architecture specifically rather than of package management in general, none of this has ever been a significant problem.
And to address each of these points:
This is not available in Italy or California. I guessed from the submitter’s domain it would be available in Brazil, and indeed it was. Here’s what the page says.
Note that this is about messages shared with or sent to Meta AI, not all your messages (which are end-to-end encrypted). I think the submitted title is misleading, and I suggested changing it to the title of the form. Also, I flagged as off-topic because it feels more relevant to pitchforks than computing.
Thanks.
oh i thought whatsapp is end to end encrypted :(
Whatsapp conversations are end-to-end encrypted, both individual and group conversations. It’s all documented here. I believe they do this so they don’t have to faff around with police requests for users’ chat logs.
it is; from the vague things one can glean from news reports, it seems the opt-out for WhatsApp is specifically about messages you’ve sent to their AI chatbots (vs. your public posts for all the other Meta products)
LOL, I chuckled loudly at my desk! That was a good one!
The form apparently is not available in Canada. I wonder where this is available. EU perhaps?
Nope.
Why did it make you chuckle?
Well, it is available here in Brazil thanks to “our GDPR”; earlier, Brazil blocked Meta from using social media posts to train AI.
There’s some irony in a post about “digital feudalism” being posted on Medium…
Also very strange to see Tor lumped in with billionaire-owned Twitter; what a bizarre comparison.
I also find it strange how the author only claims Twitter was a propaganda machine after the Musk acquisition, instead of simply always being the case. Twitter has always been the most egregious example of a political battlefield.
And crossposted to Substack.
I don’t think irony is the right word. Perhaps congruity?
@mhatta would be nice to read your posts in another place like write.as (which is open and support ActivityPub, markdown etc) ; )
Just self host your own blog. There is zero reason to write any content on a blog platform.
In fact, this is literally the cure to technofeudalism. Just don’t use these platforms. The web is open, anyone can publish there. You can market via other channels than SEO via search engines.
“Just”.
That’s a lot of things wrapped up in “just”
The web is open, if you have a credit card that US companies accept.
I can think of plenty of reasons not to self host.. I think the golden rule should simply be; use your own domain name. And if you use a service you don’t own, frequently export your content.
What are the reasons to not self-host?
I think the use of “just” in your post is not a good idea: https://www.tbray.org/ongoing/When/202x/2022/11/07/Just-Dont .
I agree that self-hosting is a good thing! But I also think it not that easy to set up and maintain for everyone; and more importantly, threads like this one right now make it look like self-hosting is a prerequisite for writing about technofeudalism (or any other topic, actually). It would be a shame if people held back with writing only because they think they don’t publish their content in the “appropriate” way. Sure, a self-hosted website might be nicer. But publishing on Medium is still much better than not publishing at all.
see also https://www.todepond.com/wikiblogarden/better-computing/just/
& I agree with @amw-zero here, we could probably just reword it to “educating the masses to ‘host’ their own blog is a practical cure to technofeudalism” for some spectrum of ‘host’
When you say “self host”, did you mean buy a domain and set up a VPS with a web server? Or point DNS to a server on your personal internet connection? Or sign up with some free hosting service?
I’m curious about the easiest way to get started without relying on some large entity. Where should the line be drawn?
Yeah, for sure, but I decided to recommend write.as as a “middle option” after seeing his linked profile on about.me; maybe he is waking up (hopefully not cognitive dissonance). He is a professor and researcher, and I have friends like him who don’t have the time, or the inclination, to use a static site generator and self-host it yet…
Can someone who is knowledgeable in both Zig and Rust speak to which would be “better” (not even sure how to define that for this case) to learn for someone who knows Bash, Python and Go, but isn’t a software developer by trade? I’m an infrastructure engineer, but I do enjoy writing software (mostly developer tooling) and I’m looking for a new language to dip my toes into.
The real answer is both. Rust’s borrow checker is a game changer. But Zig’s `comptime` is a game changer as well.

If you only have space for one, then go with Rust as the boring choice which is already at 1.0.
I second this and will also add that Zig’s use of “explicit” memory allocation (i.e. requiring an Allocator object anytime you want to allocate memory) will train you to think about memory allocation patterns in ways no other language will. Not everyone wants to think about this of course (there’s a reason most languages hide this from the user), but it’s a useful skill for writing high performance software.
Reminds me of Type Checking vs. Metaprogramming; ML vs. Lisp :-) Someone should write Borrow Checking vs Metaprogramming; Rust vs. Zig
I think the “both” answer is kinda right, which annoys me a little, because it is a lot to learn. But I can accept that we’ll have more and more languages in the future – more heterogeneity, with little convergence, because computing itself is getting bigger and more diverse
e.g. I also think Mojo adds significant new things – not just the domain of ML, but also the different hardware devices, and some different philosophies around borrow checking, and close integration with Python
And that means there will be combinatorial amounts of glue. Some of it will be shell, some will be `extern "C"` kind of stuff … Hopefully not combinatorial numbers of build systems and package managers, but probably :-)

Whichever the case, you need to learn to appraise software yourself, otherwise you will have to depend on marketing pitches forever.
Try both, I usually recommend to give Rust a week and Zig a weekend (or any length of time you deem appropriate with a similar ratio), and make up your own mind.
If you’re interested in the more philosophical perspective behind each project, check out this talk from Andrew, creator of Zig https://www.youtube.com/watch?v=YXrb-DqsBNU
I’m sure Rust must have an equivalent talk or two that knowledgeable Rust users could recommend you.
If you’re new to low-level programming in general then Rust will almost certainly be easier for you – not easy, but easier.
Zig is a language designed by people who love the feeling of writing in C, but want better tooling and the benefit of 50 years of language design knowledge. If Rust is an attempt at “C++ done right”, Zig is maybe the closest there is right now to “C done right”. The flip side to that is part of the C idiom they cherish is being terse to the point of obscurity, and having relatively fewer places where the compiler will tell you you’re doing something wrong.
IMO the best ordering is Rust to learn the basics, C to learn the classics, and then Zig when you’ve written enough C to get physically angry at the existence of GNU Autotools.
I would also recommend “Learn Rust the Dangerous Way” once you know C (even if you already know Rust by then), to learn how to go from “C-like” code to idiomatic Rust code without losing any performance (in fact, gaining). It’s quite enlightening to see how you can literally write C code in Rust, then slowly improve it.
https://cliffle.com/p/dangerust/
FWIW, the main author of zig hates this comparison, and intends zig to replace C++ more than C.
(I can’t find him saying that right now, so it’s from memory)
https://mastodon.social/@andrewrk/113229093827385106
Thank you! I scrolled his feed a bit and must have skipped over it
The quote doesn’t say that he intends it to replace C++, just that he wants to use it for problems he previously used C++ for
That is a very important distinction, because I’m very sure there are lots of C++ programmers who like programming with more abstraction and syntax than Zig will provide. They’ll prefer something closer to Rust
I’m more on the side of less abstraction for most things, i.e. “plain code”, Rust being fairly elaborate, but people’s preferences are diverse.
BTW Rob Pike and team “designed Go to replace C++” as well. They were writing C++ at Google when they started working on Go, famously because the compile times were too long for him.
That didn’t end up happening – experienced C++ programmers often don’t like Go, because it makes a lot of decisions for them, whereas C++ gives you all the knobs.
http://lambda-the-ultimate.org/node/4554
Some people understand “replacement” to mean, “it can fill in the same niche”, while others mean, “it works with my existing legacy code”.
I always interpreted it to mean the former, so to me Zig is indeed a C++ replacement. As in, throw C++ in the garbage can, stop using it forever, and use Zig instead. Replace RAII with batch operations.
To the world: Your existing C++ code is not worth saving. Your C code might be OK.
Best five-word argument I’ve ever read against RAII.
the raison d’être for the language is “Focus on debugging your application rather than debugging your programming language knowledge.”
which seems aimed squarely at C++ rather than C
As a university student, I’d prefer Zig. Zig is easier to learn (it depends), and for me, I can understand some concepts more deeply when writing Zig code and using Zig libraries. Rust’s higher level of abstraction prevents you from touching some deeper concepts to some extent. Zig’s principle is to let the user have direct control over the code they write. Currently Zig’s documentation isn’t detailed, but the code in the `std` library is very straightforward; you can read it without enabling the `zls` language server, or use a text editor with only syntax highlighting, and still have a comfortable code-reading experience.

I am not an expert in Zig, but there was a thread about Rust and Zig by the person maintaining the Rust-written Linux kernel driver for the new Apple hardware, here:
https://mastodon.social/@[email protected]/113327856747090187
Read the comments as well, @lina is much more into rust than zig, so those might provide some extra perspective.
More specifically, if you’re coming from Python and Go in particular, I think you will enjoy Rust’s RAII and lifetime semantics more. Those are roughly equivalent to Python’s reference counting at compile time (or at runtime if you need to use Rc/Arc). It all ends up being a flavor of automatic memory management, which is broadly comparable to Go’s GC too. And Rust gives you the best of both worlds: 100% safe code by default (like Python, in fact, even stronger since Python lets you write “high-level, memory safe” data races without thinking but Rust makes it more explicit) and equal or higher performance than Go, with fast threading.
Zig sounds more aimed towards folks that come from C, and don’t want to jump into the “let the compiler take care of things for me” world. That said, I’m not experienced with Zig by any means, so you might want to hear from someone who is.
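To make the comparison above concrete, here is a tiny sketch (my own illustration, not from the thread) of Rust’s deterministic destruction: `Drop` runs exactly when a value leaves scope, much like CPython freeing an object the moment its refcount hits zero, but decided at compile time:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Flag we can observe from outside to see when cleanup ran.
static CLOSED: AtomicBool = AtomicBool::new(false);

struct Connection;

impl Drop for Connection {
    // Called automatically and deterministically when the value goes out
    // of scope: no GC pause, no explicit close() call needed.
    fn drop(&mut self) {
        CLOSED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _conn = Connection;
        // _conn is alive here; drop() has not run yet.
    } // scope ends: drop() runs right here
    assert!(CLOSED.load(Ordering::SeqCst));
    println!("connection closed on scope exit");
}
```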
Regarding the original post, what if de-initialization can fail? I always found RAII to be relatively limited for reasons like that
It shouldn’t always be silent/invisible.
And I feel like if RAII actually works, then your resource problem was in some sense “easy”.
I’m not sure if RAII still works with async in Rust, but it doesn’t with C++. Once you write (manual) async code you are kind of on your own. You’re back to managing resources manually.
I googled and found some links that suggest there are some issues in that area with Rust:
https://internals.rust-lang.org/t/wanted-a-way-to-safely-manage-scoped-resources-in-async-code/14544/4
https://github.com/rust-lang/wg-async/issues/175
Then why do people fuck up so much?
If the resource doesn’t need any asynchronous operations to be freed, works great. Which is to say, 99% of resources will still be handled by RAII.
I don’t know of any evidence that there are more mistakes, compared with, say, `defer`.

Also, please tone it down a bit … some of your comments are low on information, high on emotion
I read through it, and as someone who has used both, that whole thread does not argue well for Zig, only for Rust. It has a lot of trolls in it who are probably just after lina (I know there are multiple). Most of us who prefer Zig to Rust are not deranged loonies like many in that conversation.
The Meatlotion troll admitting they were a script kiddie at the end was the pure catharsis I needed today. Thank you.
This post on why someone rewrote their Rust keyboard firmware in Zig might help you understand some of the differences between the two languages: https://kevinlynagh.com/rust-zig/
Discussed on Lobsters
You’ll probably get along easier with Rust, but Zig might just bend your mind a little more. You need a bit more tolerance of bullshit with Zig since there’s less tooling, less existing code, and you might get stuck in ways that are new, so your progress will likely be slower. (I have one moderately popular library in Rust, but spend all my “free” time doing Zig, which I think demonstrates the difference nicely!)
Oh, I had the impression as an observer that this was the reverse. Doesn’t rust bend the mind enough?
I guess I think of what’s involved with learning to write Rust as more of an exercise (learn the rules of the borrow checker to effectively write programs that pass it), whereas imo with Zig there’s some real novelty in expressing things with comptime. It of course depends on your baseline; maybe sum types are new enough to you already.
One of the things I dislike about Rust’s documentation and educational material the most is that it’s structured around learning the rules of the borrow checker to write programs that pass it (effectively or not :-) ), instead of learning the rules of the borrow checker to write programs that leverage it – as you put it, effectively writing programs that pass it.
The “hands-on” approach of a lot of available materials is based on fighting the compiler until you come up with something that works, instead of showing how to build a model structured around borrow checking from the very beginning. It really pissed me off when I was learning Rust. It’s very difficult to follow, like teaching dynamic memory allocation in C by starting with nothing but `null` pointers and gradually `malloc`ing and `free`ing memory until the program stops segfaulting and leaking memory. And it’s really counterproductive: at the end of the day all you’ve learned is how to fix yet another weird corner case, instead of gaining more fundamental insight into building models that don’t exhibit it.

I hope this will slowly go out of fashion as the Rust community grows beyond its die-hard fan base. I understand why a lot of material from the “current era” of Rust development is structured like this, because I saw it with Common Lisp, too. It’s hard to teach how to build borrow checker-aware models without devoting ample space to explaining its shortcomings, showing alternatives to idioms that the borrow checker just doesn’t deal well with, explaining workarounds for when there’s no way around them, and so on. This is not the kind of thing I’d want to cover in a tutorial on my favourite language, either.
I don’t know Zig so I can’t weigh in on the parent question. But with the state of Rust documentation back when I learned it (2020/2021-ish) I am pretty sure there’s no way I could’ve learned how to write Rust programs without ample software development experience. Learning the syntax was pretty easy (prior exposure to functional programming helped :-) ) but learning how to structure my programs was almost completely a self-guided effort. The documentation didn’t cover it too much and asking the community for help was not the most pleasant experience, to put it lightly.
That’s a good one! There is a thin line between fearless and thoughtless.
If you like Go, you might like Zig, since both are comparatively simple languages. You can keep all of either language in your head. This means lots of things are not done for you.
Rust is more like Python, both are complicated languages, that do more things for you. It’s unlikely you can keep either one fully in your head, but you can keep enough in your head to be useful.
I think this is why many people compare Rust to C++ and Zig to C. C++ is also a complicated language, I’d say it’s one of the most complicated around. Rust is not as bad as C++ yet, since it hasn’t been around long enough to have loads of cruft. Perhaps the way Rust is structured around backwards compatibility it will find a way to keep the complications reasonable. So far most Rust code-bases have enough in common that you can get along. In C++ you can find code-bases that are not similar enough that they even feel like the same language.
It should also be noted that Zig is a lot younger than Rust, so it’s not entirely clear how far down the complicated path Zig will end up, but I’d guess based on their path so far, they won’t go all in on complicated like Rust and C++.
Well, @matklad is already here, but for me, coming from Go and frustrated after trying Rust a couple of times, the thing that motivated me to try Zig was @mitchellh talking with @kristoff about why he chose Zig for Ghostty (his terminal emulator project), and how that matches my experience/profile…
… the reason I personally don’t like working too much in Rust (I have written Rust, and I think as a technology it’s a great language with great merits) is that every Rust project I read ends up basically being “chase the trait implementation around”: what file is this trait defined in, where is the implementation, how many implementations are there… I feel like I’m just chasing the traits, and I don’t find that particularly… I don’t know, productive, I should say. I like languages where you start reading on line one, you read to line ten, and that’s exactly what happened. So I think Zig’s a good fit …
Basically I’m more into the suckless philosophy, I think. I also liked @andrewrk talking about the road to 1.0 and non-profits vs. VC-backed startups, etc… So I recommend creating something real in both, using the refs posted here, some rustlings (plus blessed.rs) and ziglings (or my online version, plus zigistry.dev), to get a feel for which fits you better ; )
At this point I felt crazy for even considering Rust. I had accomplished in 4 days what took me 16 days in Rust. But more importantly, my abstractions were holding up.
(@andrewrk, before Zig, on his progress so far)
Not speaking to the languages at all, but I’d say to choose the more mature language - Rust. Even after learning Rust, I still told people to just learn C++ if the goal was to learn that kind of language. That’s a trickier choice now (C++ vs Rust) because Rust has reached a tipping point in terms of resources, so it’s easier to recommend. Zig is just way too early and it’s still not a stable language, I wouldn’t spend the time on it unless you have a specific interest.
This is the main reason why I now exclusively use HTMX+Go+Templ in my personal projects. I simply got tired of updating my React dependencies to keep up with security and bug fixes, just to see that my router, state management and query libraries had a major version release that breaks my project. I don’t have any more spare time to fix the mess created by the dependencies update, I just want my toy project to work!
I was introduced to Go + HTMX (the GOTH stack) by memes [1], but I consider it seriously now, as I’m equally fed up with keeping up with rewrite-inducing framework updates. Can you link some of your personal projects, if possible? I’d love to see how you ended up building things with that setup.
[1] https://twitter.com/IsaqueFranklin0/status/1812290445676011592
The repo in which I am currently using that stack is private. But I can tell you that if you already have some experience with Go, your backend will look pretty similar to any other backend that you have previously developed. The only major difference being that I have a couple of helper functions to render the Templ components, for example:
The rest of the backend might then look very similar to something like this (I solely work with the stdlib): https://github.com/erodrigufer/raspall
Apart from this, I use `air` for live reloading, so I configured it to build the templ components as well.

You can avoid allocating a buffer by doing:
I know, but I would rather prioritize a good UX (user gets an error if the rendering process fails), than some performance gains that I currently don’t need.
There are pros and cons to either way. If rendering has a non-trivial chance to fail, it’s better to be able to send an error page to the user.
I usually separate “getting the data” from “rendering the data”, so by the time I’m calling the `component.Render()` function, it is not supposed to fail.

I recently dipped my toe into full-stack webdev with Go + HTMX. It’s my first time using Go as well, but it has been very smooth sailing converting my static site into a dynamic one. The snazziest feature I’ve added so far is a login modal that doesn’t require reloading the page, but it’s a blog, so most of the interactivity is optional.
I’m using the stdlib templates (html/template) and air for auto-reloading.
I have an example project demonstrating Go + HTMX + Templ for just this purpose: https://github.com/acaloiaro/hugo-htmx-go-template
The `hugo` part of this example isn’t particularly important. `server.go` in this project is a standard Go server that serves `template/html` and `templ` endpoints, e.g. https://github.com/acaloiaro/hugo-htmx-go-template/blob/main/server.go#L45

There’s obviously a lot more involved in making a production-ready application, but this is a decent starting point.
And what about your professional projects? I believe that server-side HTML and htmx are more than enough for most website and webapps, no matter how refined the UX is, and no matter how many users you’ve got.
I totally share your opinions. I would love to use this stack at work as well and it would probably fulfill all our requirements. Nonetheless, it is not 100% up to me to decide what tech stack we use at work. I am going to advocate for Go+HTMX+Templ in future projects. But, it might be difficult to convince the hardcore JS enthusiasts of the team.
By the way, I just now realize that you are the author of one of the talks that originally motivated me to try out HTMX.
Here is the link for anyone else who might be interested, he talks about using HTMX in production/work: https://www.youtube.com/watch?v=3GObi93tjZI
Similar: some svelte web components + Go templates, works great (and use nix to build the svelte stuff so it’ll keep working).
For what kind of stuff do you prefer to use svelte instead of pure HTML templates?
Things which do some server-side lookups, which are a bit annoying to have a full page refresh for. Think dropdowns where you can search a customer name by typing, and the dropdown fills with the matches. But anything more complicated goes to old-school forms.
I think there is even an example for a very similar use case of active search in the HTMX website: https://htmx.org/examples/active-search/
But, yeah, I can also understand that one would use JS for that :)
That example looks nice. When I checked htmx, there was very minimal/bad handling of errors. For example, if the server returns a 500, what happens then? I think there have been improvements related to that, but my Svelte stuff works now and is fun to do. As long as it’s small components, it’s all similar enough :)
Hooking into events allows you to surface errors to the user. There are several failure messages to handle though, that I wrote about in https://www.xvello.net/blog/htmx-error-handling/
Then htmx triggers a DOM event with the details of the error and you can handle the event. htmx can trigger a bunch of different events in response to different conditions. Here’s a small example https://zettelkit.xyz/static/script.js
Yeah, I feel the same: no time to manage deps and crazy tooling. I’m now using htmx too, but with Zig; a couple of weeks ago I rewrote a CRUD app with HTMX+Zig+ZMPL, and I’ll definitely use more htmx in my future projects…!
Really great times! Here’s my Jornada running JLime with an Orinoco Wi-Fi PCMCIA card attached, in ~2006: https://pub.dgv.dev.br/HP_jornada_(running_JLime_Linux).jpg
Is it real? I’ve heard folks saying that it is not, but I looked up the chips involved and it at least seems plausible. The YouTube video included in the article seems real sketchy and only shows a single game being played, no DOS or Windows 95 action.
I found it on Twitter in @dosnostalgic’s thread, where you can watch a hands-on video and find the related AliExpress link.
I saw it on the same person’s Mastodon account, but I had not seen the longer video. That’s a bit more convincing than the 15 second clip in the Tom’s Hardware article.
I’m using a ThinkPad T14 Gen 1 Ryzen 7 4750U and 32 GB running ZorinOS for my personal projects, definitely the best ThinkPad I ever had, old Thinkpad T480 with i5-8350U and 16 GB running Ubuntu for work, both on external Dell P2418HZM 24” video conference full hd led monitor, keychron k3 keyboard, Logitech MX Vertical mouse and Pinebook PRO with OpenBSD for fun.
Adesso WKB-3150UB Wireless Ergonomic Keyboard with Built-in Removable Trackball and Scroll Wheel, Split Key, Long Battery Life, Small and Portable
I tried Miniflux (v1) for a while, then was a Feedbin user for a couple of years; finally, I’m happy running yarr on a self-hosted instance at BuyVM.
This reminds me sadly of one of the basic laws of what somebody expressed to me as “software thermodynamics”:
plus
Hm, that reminds me of bash too :-/
It accreted so many features from other shells, and so many programs, build systems, and autocompletion scripts grew to depend on its features, that it takes tremendous effort to remove…
To quote another HN comment:
Exactly! Engineers designing autonomous cars are required to account for low-visibility conditions, even ones far worse than what this video shows (think hail, rain, dust, etc.). This was easy! And yet the car showed no signs of slowing down.
EDIT: twitter comments like this pain me. People need to be educated about the capabilities of autonomous cars:
Yes it was the car’s fault. This is shocking, extraordinary behavior for an autonomous car.
EEVblog #1066 - Uber Autonomous Car Accident - LIDAR Failed?
In reality, both the pedestrian and the car (and Uber) share some responsibility. You shouldn’t cross a four-lane road at night, wearing black, outside of a crosswalk. A human driver is very unlikely to see you and stop. I’m not blaming the victim here, just saying it’s easier to stay safe if you don’t do that. However, the promise of autonomous cars with IR and LIDAR and fancy sensors is that they can see better than humans. In this case, they failed. Not to mention that the human backup was very distracted, which is really bad.
From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human. It should be better, it should see better, it should react better. Automatic collision avoidance is a solved problem already in mass-market cars today, and Uber failed it big time. Darkness is an excuse for humans, but not for autonomous cars, not in the slightest.
She should still be alive right now. Shame on Uber.
You can’t conclude from the video that someone would not have stopped in time. Not even a little. Cameras aren’t human eyes: they are much, much worse in low visibility, and in particular with large contrasts, like those of headlights in the dark. I can see just fine in dark rooms where my phone can’t produce anything but a black image. It will take an expert looking at the camera and its characteristics to understand how visible that person was, and from what distance.
Certainly not when distracted by a cell phone. If anything, this just provides more evidence that driving while distracted by a cell phone, even in an autonomous vehicle, is a threat to life, and should be illegal everywhere.
Just for everyone’s knowledge: you’re 8 times as likely to get in an accident while texting. That’s double the rate for drinking and driving.
He was not driving.
He was carried around by a self driving car.
I hope that engineers at Uber (and Google, and…) do not need me to note that the very definition of “self driving car” is a huge UI flaw in itself.
That is obvious to anyone who understands UI, UX, or even just humans!
She was driving. The whole point of sitting in the driver’s seat of a TEST self-driving car is for the driver to take over in situations like this.
No, she was not.
Without this incident, you would soon have seen a TV spot with precisely this: a (hot) business woman looking at the new photos uploaded to Facebook by her family, with a voiceover saying something like: "we can bring you to those you Like".
The fact that she was paid to drive a prototype does not mean she was an experienced software engineer trained not to trust the AI and to keep continuous control of the car.
And indeed the software chose the speed. At that speed, human intervention was impossible.
Also, the software did not deviate, despite the free lane beside it, and despite the fact that the victim had to cross that lane, so there was enough time for a computer to calculate several alternative trajectories, or even simply to alert the victim with lights or sounds.
So the full responsibility must be traced back to people at Uber.
The driver was simply fooled into thinking she could trust the AI, by a stupidly broken UI.
And indeed the driver/passenger reactions were part of Uber’s test.
Looking at your phone while sitting in the driver’s seat is a crime for a reason. Uber’s AI failed horribly and all their cars should be recalled, but the driver failed too. If the driver had not been looking at their phone, some action, any action at all, could have been taken to avoid the accident. It’s that driver’s responsibility to stay alert, with attention on the road, not looking at a phone or reading a book or watching a film; airline pilots do it every single day. Is their attention much more diminished? Yes, of course it is. Should we expect literally zero attention from the “driver”? Absolutely not.
Do you realize that the driver/passenger reactions were part of the test?
This is the sort of self driving car that Uber and friends want to realize and sell worldwide.
And indeed I guess that the “driver” behaviour was pretty common among the prototypes’ testers.
And I hope somebody will ask Uber to provide in court the recordings of all the tests done so far, to prove that they did not know that drivers do not actually drive.
NO. The passenger must not be used as a scapegoat.
This is an engineering issue that was completely avoidable.
The driver behaviour was expected and desired by Uber.
You’ve gotta stop doing this black-and-white nonsense. Firstly, stop yelling. I’m not using the passenger as a scapegoat, so I don’t know who you’re talking to. The way the law was written, it’s abundantly clear that this technology is to be treated as semi-autonomous. That does not mean that Uber is not negligent. If you are sitting in a driver’s seat watching Harry Potter while your car drives through a crowd of people, you should be found guilty of negligence, independent of any charges that come to both the lead engineers and owners of Uber. You have a responsibility to at least take some action to prevent deaths that otherwise may be at no fault of your own. You can’t just lounge back while your car murders people; in the same respect, when riding in the driver’s seat, your eyes should not be on your phone, period.
Edit: That image is of a fully autonomous car, not a semi-autonomous car. There is actually a difference despite your repeated protestations. Uber still failed miserably here, and I hope their cars get taken off the road. I know better than to hope their executives will receive any punishment except maybe by shareholders.
This isn’t the first time you’ve pulled statements out of a hat as if they are gospel truth without any evidence and I doubt it will be the last. I think your argument style is dishonest and for me this is the nail in the coffin.
I’m not sure I understand what you mean…
The UI problem is really evident, isn’t it?
The passenger was not perceiving herself as a driver.
If there is “no way” a human can do this, then we’ve certainly never had astronauts pilot a tiny spacecraft to the moon without being able to physically change position, and we certainly don’t have military pilots in fighter jets continuously concentrating while refueling in air on missions lasting 12 hours or more… or truck drivers driving on roads with no one around for miles… or…
Maybe Uber is at fault here for not adequately psychologically screening, and training its operators for “scenarios of intense boredom.”
You are talking about professionals specifically trained to keep that kind of concentration.
And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he’s trustworthy.
I’m talking about Uber’s actual goal here, which is to build “self driving cars” for the masses.
It’s just a stupid UI design error. A very obvious one to see and to fix.
Do you really need some hints?
This way, the responsibility for an incident would lie with the driver, not with Uber’s board of directors (barring factory defects, obviously).
You’re being adversarial just to try to prove your point, which we all understand.
A military pilot isn’t being asked (or trained) to operate an autonomous vehicle. You’re comparing apples and oranges!
Yes, the goal of Uber is to build a self-driving car. We know. The goal of Uber is to build a car that is fully autonomous; one that allows all passengers to enjoy doing whatever it is they want to do: reading a book, watching a movie, etc. We get it. The problem is that those goals are just that, goals. They aren’t reality, yet. And there are laws which Uber and its operators must continue to follow in order for any department of transportation to allow these tests to continue, in order to build up confidence that autonomous vehicles are as safe as, or (hopefully) safer than, already-licensed motorists. (IANAL, nor do I have any understanding of said laws, so that’s all I’ll say there.)
So, your point is that the operator’s driving experience should be enhanced by the sensors, and that the car should never be fully autonomous? I can agree to that, and have advocated for that in the past. But, that’s a different conversation. That’s not the goal of Uber, or Waymo.
The reason a pedestrian is dead is because of some combination of flaws in:
… in a world where engineering teams use phrases like “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic control systems, spacecraft systems, and missile guidance systems…
Upvoted for this.
I’m not being adversarial to prove a point.
I’m just arguing that Uber’s board of directors are responsible and must be accountable for this death.
Nobody here is arguing that the board of directors should not be held accountable. You’re being adversarial because you’re bored is my best guess.
Very well said, on all of it. If anyone is wondering, I’ll even add to your last point what kind of processes developers of things like autopilots follow. That’s things like DO-178B, with so many assurance activities and so much independent vetting put into it that those evaluated claim it can cost thousands of dollars per line of code. The methods to similarly certify the techniques used in things like deep learning are in the prototype phase, working on simpler instances of the tech. They’d have had to do rigorous processes at several times the pace and size, at a fraction of the cost of experienced companies… on cutting-edge techniques requiring new R&D just to know how to vet them.
Or they cut a bunch of corners, hacking stuff together and misleading regulators to grab a market quickly, like they usually do. And that killed someone who, despite human factors, should’ve lived if the tech (a) worked at all and (b) was evaluated against common road scenarios that could cause trouble. One or both of these is false.
I don’t know if you can conclude that that’s the point. Perhaps the driver is there in case the car says “I’m stuck” or triggers some other alert. They may not be an always-on hot failover.
IMO they should be, since they are testing a high-risk alpha technology that has the potential to kill people.
The car does not share any responsibility, simply because it’s just a thing.
Nor does Uber, which again is a thing, a human artifact like any other.
Indeed, we cannot put the car in jail. Nor Uber.
The responsibility must be traced back to people.
Who is ultimately accountable for the AI driving the car?
I’d say Uber’s CEO, the board of directors, and the stockholders.
If Uber were an Italian company, probably the CEO and the board of directors would be put in jail.
People often say this when they’re partly blaming the victim, so as not to seem overly mean or unfair. We shouldn’t have to when the victim does deserve partial blame, based on one fact: people who put in a bit of effort to avoid common problems/risks are less likely to get hit with negative outcomes. Each time someone ignores one to their peril is a reminder of how important it is to address risks in a way that makes sense. A road with cars flying down it is always a risk. It gets worse at night. Some drivers will have limited senses, be on drugs, or be drunk. Assume the worst might happen, since it often does, and act accordingly.
In this case, it was not just a four-lane road at night that the person crossed: people on HN who live in the area said it’s a spot noticeably darker than the other dark spots, and one that stretches out longer. The implication is that there are other places on that road with more light. When I’m crossing at night, I do two to three things to avoid being hit by a car:
(a) cross somewhere where there’s light
(b) make sure I see or hear no car coming before I cross.
Optionally, (c) cross the first 1-2 lanes, get to the very middle, pause for a double-check of (b), then cross the next two.
Even with blame mostly on the car and driver, the video shows the human driver would’ve had relatively little reaction time even if visibility was better than the video suggests. It’s just a bad situation to throw at a driver. I think a person crossing at night doing (a)-(c) above might have prevented the accident. I think people should always do (a)-(c) above if they value their life, since nobody can guarantee other people will drive correctly. Now we can add that you can’t guarantee self-driving cars will drive correctly either.
Well put. People should always care about their own lives.
And they cannot safely assume that others will care as much.
However, note that Americans learned to blame “jaywalking” through strong marketing campaigns after 1920.
Before, the roads were for people first.
I just saw a video on that from “Adam Ruins Everything.” You should check that show out if you like that kind of stuff. As far as that point goes, it’s true that it was originally done for one reason, but now we’re here in our current situation. Most people’s beliefs have been permanently shaped by that propaganda. The laws have been heavily reinforced. So, our expectations of people’s actions and what’s lawful must be compatible with those until they change.
That’s a great reason to consider eliminating or modifying the laws on jaywalking. You can bet the cops can still ticket you on it, though.
I’ve also seen it argued (convincingly, IMO) that poor civil engineering is also partially responsible.
And every single thing you listed is mitigated by just slowing down.
Camera feed getting fuzzy? Slow down. Now you can get more images of what’s around you, combine them for denoising, and re-run your ML classifiers to figure out what the situation is.
ML models don’t just classify what’s in your sensor feeds. They also give you numerical measures of how close your feed is to the data they were trained on. When those measures decline, it could be because the sensors are malfunctioning. It could be rain/dust/etc. It could be a novel, untrained situation. Every single one of those things can be mitigated by just slowing down. In the worst case, you come to a full stop and tell the rider they need to drive.
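As a toy illustration of that fallback policy: scale the target speed down as model confidence degrades, and stop entirely below a handover floor. Every name and threshold here is made up for the sketch; real driving stacks are vastly more complex:

```javascript
// Illustrative thresholds only, not from any real self-driving stack.
const FULL_CONFIDENCE = 0.9; // at or above this, drive at the normal limit
const HANDOVER_FLOOR = 0.3;  // at or below this, stop and hand off to the rider

// Map classifier confidence (0..1) to a target speed.
function targetSpeed(confidence, speedLimit) {
  if (confidence >= FULL_CONFIDENCE) return speedLimit;
  if (confidence <= HANDOVER_FLOOR) return 0; // full stop, alert the rider
  // In between, scale speed down linearly as confidence degrades.
  const fraction =
    (confidence - HANDOVER_FLOOR) / (FULL_CONFIDENCE - HANDOVER_FLOOR);
  return speedLimit * fraction;
}
```

The point isn’t the exact curve; it’s that degraded confidence maps to a conservative action by construction, instead of the system barreling on at full speed.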
Spectre/Meltdown documentation and resource collection repo
It reminds me of: “If macOS High Sierra shows your password instead of the password hint for an encrypted APFS volume”
related: Reverse Engineering macOS High Sierra Supplemental Update
https://mastodon.social/@dgv
Another happy Fastmail user here. Besides all the things already mentioned here, they have YubiKey support for 2FA.