I think the author makes a good broad point: front-end development is difficult, therefore keep things simple and lightweight, use progressive engagement, make precise engineering decisions about which technologies you use, etc.
But the broader rhetorical device of “don’t use React ever, it’s just bad” seems overplayed. I say this as someone who doesn’t particularly like React. Partway through, there’s a note that says, if you’re building an SPA, choose SolidJS, Svelte, HTMX, etc. I work mainly in SolidJS, and it’s great, but if you wrote a bad app in React, you’ll probably write the same bad app (if not worse) in SolidJS.
I think the author is mixing up two effects. Engaged developers are more likely to make better decisions when building applications, and therefore will build better front-end applications (in general, at least). But they’re also more likely to look further afield at new technologies, and less likely to stick with React. Meanwhile, less engaged developers are more likely to choose the default tools and take the path of least resistance. This will generally produce worse front-end projects, because as the author points out, web development is hard. (This point is probably generally applicable to most software engineering.)
But none of this means that React will make you a bad developer, or that you’ll become a good developer if you avoid React. It just means that the people who tend to choose alternatives to React also tend to spend more time thinking about other decisions, and those other decisions are probably more meaningful.
To be clear, I think there’s plenty of room to criticise React, but I think this post would have been better without that criticism, because React isn’t really the underlying problem here.
You can do SSR rendering on React so I can still view your content with NoScript on, right?
But nobody does it, probably because it requires extra effort. So to me, React is white-page city and it makes me want to never learn about React.
But I will argue one thing: I’m not sure it’s all lazy developers. If your organization pushes you to deliver everything by last week and they don’t care about quality, I think you are limited in what you can do. Obviously, if you are highly skilled and can deliver quality quickly, perhaps it does not matter so much, but I kinda despise that we are forced to be the ones pushing for quality in opposition to the organizations that pay us. (Cue “shitty companies win all the time”.)
You should be thinking about users with cheap phones, not the tiny number of users with JS turned off. If the page displays without JS, but sends a lot of JS anyway, it’s hurting most of its users. Footnote 9 says:
does the tool in question send a copy of a library to support SPA navigations down the wire by default?
This test is helpful, as it shows us that React-based tools like Next.js are wholly unsuitable for this class of site, while React-friendly tools like Astro are appropriate.
Oh, that’s fair. I’m an egotistic human being and I think mostly about myself (I don’t really do any public webdev nowadays.)
In any case, I’m also pretty sure you can make efficient and performant websites using React, even for lowend hardware. In this case, in addition to the “organizations don’t give a f…”, I think the problem is also that even if you deliver something efficient, in most cases the organization will want to load it with trackers and ads, and the end result will be horrible anyway.
With all due respect, I can display most websites just fine even on a freakin’ smart watch.
JS has not been slow, neither loading it, neither executing it. What is slow is bad engineering and bloat, like tracking every single browser event, loading ads, etc. These are business decisions, and they would make even the most optimized AAA game engine crawl to a halt.
Taking minutes to load an app (react or anything else) doesn’t have much to do with the cost of parsing JS. Parsing JS is fast, very fast. If someone decides to ship something that parses 5mb of JS while doing other thousands of things, that will be slow. But the same badly engineered system in the backend would be even less scalable.
I don’t know, vanilla react loads plenty fast, and if you actually look at it from a data usage perspective, then an SPA will perform better, then a server-side rendered solution. Not every interaction should result in a round-trip to the server.
Not every interaction should result in a round-trip to the server.
This isn’t what happens on …. almost any website. I think Facebook is an example, it seems like every keypress there sets off a flurry of network activity. Twitter likes to server round trip when you just wiggle the mouse. But traditional websites don’t do this stuff - the server roundtrips happen to finalize a batch rather than on every interaction. (filling in the form is all client side, submitting it goes to the server)
And in data usage, that depends on a lot of factors too. I made a little shopping list program a few years ago. I actually thought about doing it as a progressive web app to minimize data, but like the javascript to do that was alone the cost of like 30 plain forms, so decided against it. I often see json blobs that are larger than the html they generate because it sends down a bunch of stuff that wasn’t actually necessary!
I often see json blobs that are larger than the html they generate because it sends down a bunch of stuff that wasn’t actually necessary!
A friend of mine worked on a React app (using NextJS) where SSR caused bandwidth problems because they had a large page with a lot of HTML, and the data for that page was being sent twice: once in HTML form and once in JSON form for the rehydration. Compression helped because of the redundancy, but not enough.
I think the important metric here is latency. Modern network infrastructure has become plenty good in terms of payload bearing, so the difference between loading 500 bytes or 10-50 times that is hardly noticeable, it will largely come in one small or big flow. But doing a small interaction that does a non-async roundtrip to the server will always be on the order of human reaction time, ergo, perceptible.
I’m not saying that everything should be an SPA, but for many web applications the added flexibility (e.g. send something back async, do something strictly on client-side, etc) may well be worth it.
To be clear, I think there’s plenty of room to criticise React, but I think this post would have been better without that criticism, because React isn’t really the underlying problem here.
The author just beats their drum about React being bad and working in a React shop, it all sounds incredibly hollow.
Speaking as a dev who remembers when Express and wheeze jQuery were new, this tracks. Save this post for later because a little search and replace will keep it perennial indefinitely!
This is definitely a topic that inspires a lot of strong opinions, but I have trouble reading the same conclusion as the author. I suspect part of this is there are different audiences for Python and they have wildly different expectations for dependencies.
You use Python for ML/AI: You have a million packages, Python is basically the “glue” that sticks your packages together and you should probably use ul or frankly bite the bullet and just switch to only using containers for local and remote work.
You use Python for web stuff. You have “some” packages but this list doesn’t change a lot and the dependencies don’t have an extremely complex relationship with each other. When you need to do something you should probably attempt to do it with the standard library and only if absolutely necessary go get another package.
You use Python for system administration stuff. Your life is pretty much pure pain. You don’t understand virtual environments, you have trouble understanding pip vs using a Linux package manager to go get your dependency, you don’t have a clean environment to work from ever. You should probably switch to a different language.
As someone who uses Python primarily for 2, I’m just not seeing anywhere near as much pain as people are talking about. venv and pip-tools solves most of my problems and for work stuff I just use containers because that’s how the end product is going to be consumed anyway so why bother with a middle-state.
In my experience this actually works pretty well, as long as you never touch a Python package manager. Important Python libraries do tend to be available as distribution packages, a README with “apt install py3-foo py3-bar” isn’t complicated, and you can make your Python into a proper distribution package if you want automatic dependency management. “System administration tasks” tend not to require a gazillion libraries, nor tracking the very latest version of your libraries.
Mixing distribution packages with pip, poetry, uv, … is every bit as painful as you describe, though - I agree that one should avoid that if at all possible!
So what you are describing is how people did it for a long time. You write a python script, you install the Debian package for that script system-wide and you bypass the entire pip ecosystem.
My understanding from Debian maintainers and from Python folks is this is canonically not the right way to do it anymore. If your thing has Python dependencies you should make a virtual environment, install those dependencies inside of that environment. Like if you want to ship Python as a .deb I think you need to end up doing something like this:
To be honest, that’s one of the big reasons I use Rust for “shell scripting” where I used to use Python. Deployment and maintenance across distro release upgrades are just so much easier with a static binary than with a venv.
I can only imagine that’s also why so many people are now using Go for that sort of thing.
(To be honest, I’m hoping that going from Kubuntu 22.04 LTS to 24.04 LTS on this machine will be much less irritating than previous version bumps because of how much I’ve moved from Python to Rust.)
Yeah, used Python professionally for a decade. Never had a single problem. Came to think that people railing against Python dependency management had a “them problem”.
Python packaging has its issues but have you seen Node where there are a handful of wildly incompatible ways to build/include libraries and nobody can push anything forward?
You use Python for system administration stuff. Your life is pretty much pure pain. You don’t understand virtual environments, you have trouble understanding pip vs using a Linux package manager to go get your dependency, you don’t have a clean environment to work from ever. You should probably switch to a different language.
I have been finding success with uv run for these users and either uploading packages to the registry for them to execute with uvx or inline script metadata for dependencies.
I love being able to tell somebody to just go install pipx and then pipx install a tool directly from a git repo. It’s so straightforward! I’m very glad that uv supports this too. It’s quickly becoming the only python dependency tool I need.
It is a nightmare. Venv make no sense and I have yet to understand how to use pip-tools. That is after having written and fixed package managers in othet languages and even being a nixpkgs maintainer.
Even nix is easier to write and use that venv. As long as venv are part of the UX of your solution, this is not going to fly.
Can you clarify how venvs make no sense to you? They seem to be the one part of Python that actually make sense to me, and bring some sanity. (I still prefer Nix, too.)
I’m surprised? This isn’t me attacking you, just genuine curiosity. My primary issue with onboarding interns or junior folks onto python projects has been explaining that venv exists, but once they grasp the basics of it seems to “just work”. Especially with pip-compile and pyproject.toml there doesn’t seem to be a ton of places for it to go wrong, but I’d love to know what about it doesn’t work for you.
In my experience, case 1 usually results in “it works on my machine” issues. I worked for a research company providing software engineering support to scientists for a while, and this was a really common problem, and we spent a lot of time trying to come up with fixes and never really found anything ideal.
Case 2 works better, but usually only if the person initially seeking up the project has enough experience with Python to understand what’s going on. I’ve seen projects where Python was added as a quick-n-easy addition to an existing codebase, where the packaging was just a bunch of scripts and makefile commands wrapping virtualenv. This invariably caused problems, but was again difficult to fix later because of subtle behaviours in the hand-written build system.
Case 3, ironically, feels like the easiest to solve here - put everything in a venv and as long as you never need to update anything and never need to share your code with someone else, you’re golden.
Compare and contrast this with Cargo or NPM: these tools ensure that you do the right thing from the start, without having to know what additional tools to install, and without needing to think particularly hard about how to set this stuff up. I worked on a project with a Python component and a Javascript component, both set up by the same developer who had minimal experience in either ecosystem, and the difference was like night and day. On the Javascript side we had a consistent, reproducible set of packages that we could rely on well, and on the Python side of things we had pretty consistent issues with dependencies the entire time I worked there.
I think that the way we look at the Software Crisis is rooted in revisionism.
I strongly recommend ditching all reference to Boehm. Everytime I tried to get evidence of what he describes, or source for his claims, I have ended in Leprechauns. There is no data out there that shows that fixing problems later cost more in term of time or effort than earlier. If anything, the limited reliably data we have on this aspect of project management shows a really limited impact.
The second aspect is that the “Software Crisis” was… not really a crisis. I strongly recommend looking at the publications by the authors of the NATO conference proceedings, or at the work of historians looking at these events. There are a few floating around, they can be found.
Thirdly, the fabled time in which we had “design documents” and “architects” seems to more come from the 90s, in trends to handle the so called “software crisis”, than in “older time”. A lot of what we have seen of projects from the 60s to the early 90s is practices of constant change. The same way we see that in construction (go ask people that manage a construction project how much the work is applying the blueprint produced by the architect. The reality is far closer to software development, with constant adaptation and change).
I understand that this is not the mainstream narrative in the “SDLC methodology” circles. And there are indeed existing tools and ways to analyze software development as a dynamic system over time, far closer to design. But the “death of the architect” is a trope that need “the time of the architect” to have existed. It is highly dubious, from looking at history, that it ever really existed. At least in practice. It may have existed in theory, just like I have architects in my organisation diagram these days.
But I doubt their plan have any bearing with reality, and that any of our devs does what they tell them.
Note also that we do have architects. They are the OpenSource maintainers. One of the reason we do not do as much “architecting” in software is that it takes a lot of time doing “nothing” (or more precisely, learning about the field and thinking slowly through the problem), and that does not align with the modern structure of corporate work. So it all happens in OpenSource, mostly through hobby time.
The use of “architecture” in software was coined by Fred Brooks and Gerritt Blaauw, who were principal designers for the IBM System/360. While the project was famously difficult (it was one of the precipitating events for the 1968 NATO conference), Brooks and Blaauw strongly believed in the value of their approach. See the 1972 paper by Blaauw referenced in the post for more on that.
I don’t have any numbers for how widely their ideas were followed, but they were definitely used at IBM, and since IBM was responsible for training a large fraction of the early programmer workforce, I’d expect that it was used in a number of projects elsewhere.
Either way, I agree my four-paragraph history of early software methodologies is oversimplified. There’s a reason it begins with “once upon a time.” I’d argue, however, that it captures the trends and ideas that Beck and the other Agile methodologists were responding to, which was the main concern for this post.
I wonder how much this is a case of chasing the metric rather than organic adoption.
Some large consultancies here really push the adoption of assistants onto programmers so that they can boast the customers they’re on the bleeding edge of development. The devs grumble, attend mandatory AI training and for the most part pretend they lean on their now indispensable Copilots. It is possible something like that is happening here. The VP of AI Adoption and their department that Google for sure has counts all the lines from AI enabled editors. This then communicated with a good part of wishful thinking all the way up to the CEO.
Or who knows, maybe Google has a secret model which is not an utter crap for anything non-trivial and just holding it back for competitive advantage. Hope googlers here would let us know!
If you read their report on it, it is definitely chasing for metrics. All their other “AI tool for devs” initiatives have abysmal numbers in term of adoption and results, and they are already saying that all their future growth are in other domains of development. Translation: We are out of things we can easily growth hack.
FWIW, if this is anything like Copilot (which I do use for personal projects because I’ll take anything that’ll fry my brain a little less after 8 PM) it’s not even a particularly difficult metric to chase. I guess about 25% of my code is written by AI, too, as in the default completion that Copilot offers is good enough to cover things like convenience methods in an API, most function prototypes of common classes (e.g. things like constructors/destructors), basic initialization code for common data structures, hardware-specific flags and bitmasks and so on.
It’s certainly useful in that it lets me spend more of my very limited supply of unfried neurons on the harder parts, but also hardly a game changer. That 25% of the code accounts for maybe 1% of my actual mental effort.
I will give it that, though, it’s the one thing that generative AI tools have nailed. I’m firmly in the AI should do the dishes and vacuum so I can write music and poetry, not write music and poetry so I can do the dishes and vacuum camp. This is basically the one application where LLM tools really are doing the dishes so I can do the poetry.
I think there are some valuable ideas in this paper. On the other hand… do we really need to get gender into programming languages? Are we going to have toxic masculinity of language design? Is everything in life about oppression, or do people just build systems in a way that they are useful to them, and maybe a different set of people builds systems in a different way, and in a pluralistic world we can have both and learn from each other?
If I had written this paper, I would not have brought gender/sex into play. You can easily substitute feminism for accessibility or other terms for part of their reasoning, and make this paper useful to programming language designers without evoking a political agenda.
Section 2, titled “Setting the Scene: Why Feminism and PL?” spends 2.5 pages answering your question, and sections 5.1 and 5.2 have more.
To expand on the last paragraph of section 2.4, using feminism allows the authors to build on existing work. There’s dozens of citations from outside of programming, bringing in 50 years of material to hybridize with the familiar.
To your suggested accessibility framing, there’s a recent talk How to Accessibility If You’re Mostly Back-End that hits some similar points, but it’s much more about industry practice than language design. (I saw an unrecorded version at Madison Ruby a few months ago but at a skim this presentation is at least very close.)
To expand on the last paragraph of section 2.4, using feminism allows the authors to build on existing work. There’s dozens of citations from outside of programming, bringing in 50 years of material to hybridize with the familiar.
Yes, yes, this is an essay “escaped from the lab” to justify an academic’s take on the field and to publish a paper.
To expand on the last paragraph of section 2.4, using feminism allows the authors to build on existing work.
The existing work being built upon should arguably be, you know, programming languages and programming, instead of feminist theory. I imagine that’s what many folks will bristle at.
I’ve skimmed this once, and I’ll give it a thorough reading later, but sections like 5.1 and 5.2 emphasize–to me at least–that the target audience of this sort of thing is fellow academics and people already sold on the feminist lens. This is an academic talking to other academics, and we do tend to skew a bit more towards professionals creating artifacts and systems.
I don’t really have high hopes of useful discussion on Lobsters here, since the major reactions I expect to this are either “Oh, neat, a feminist look at programming, okay whatever”, “Hell yes, a paper talking about how unfair we are to non-white non-men”, or “Hell no, why do we have to inject gender into everything?”. To the degree to which any of those are correct, our community’s ability and capacity to discuss them civilly while remaining on topic for the site is suspect.
The idea that feminism is a novel (and somehow intrusive/invasive) political agenda, rather than a lens through which you can view and critique the political agenda we’ve all been raised within, seems to be part of the paper’s basic point. Gender is already implicitly part of programming languages (and all fields of human endeavour), the idea is to recognize it and question if and how (and to what degree) it’s influenced the field. The act of doing so isn’t advancing (or advocating for) a new political agenda, it’s critiquing one that already exists.
BTW, a non-author of this paper swapping “accessibility” for “feminism” here, when the author chose the latter precisely because it is not equivalent to the former, would actually be pretty spot-on example of why adopting a feminist perspective is necessary. Accessibility is about making systems more adoptable to humans with other-than-default access needs irrespective of gender, feminism is about making systems more adoptable to humans with other-than-default gender irrespective of their access needs… they’re literally two topics that don’t overlap except in causing us to critique our “default” way of designing systems; if you think looking at the accessibility bias built into systems is important and/or valuable you probably should think of looking at the gender (and other) bias of those systems is important and/and valuable too.
I have only read the linked article and paper intro (for now), so there might be more, but what seems to be taken from feminism here is the analysis of power structures, social dynamics, shared values and how that all holds back the field by narrowing what research is done.
Reading the intro will provide concrete examples from the authors’ personal experiences.
If the paper was about, say, applying an economic approach to PLT, would you have engaged more deeply to get answers?
I ask this not as a gotcha, but to create an opportunity to reflect on bias.
I personally acknowledge my initial reaction was “why feminism?” but am happy that I also had the reflex of going past that.
I am considerably more willing to believe that feminist critiques are written in good faith than economic ones, and it behooves the reader to understand that neither will represent an “apolitical” perspective.
If the paper was about, say, applying an economic approach to PLT, would you have engaged more deeply to get answers?
Conversely, if it applied a flat earth theory approach, would you engage less? I probably would. Is it wrong to use our past experiences (our biases) with particular fields to determine which lengthy papers we do and don’t read?
Horkheimer described a theory as critical insofar as it seeks “to liberate human beings from the circumstances that enslave them”. – Critical Theory
So the “theory” in the name is already a lie. This is not “theory”, it is politics and ideology.
There is nothing wrong with politics, but please don’t pass off politics as science. And in particular, don’t try to make this non-science the arbiter of all the other sciences. Yeah, I know that this “theory” claims that all the other sciences are actually not scientific because they are just power grabs. Well, that’s just projection.
You, coming to these comments to tear down the work of others published in the same proceedings as your own, by calling it “non-science” and “not far away” from flat earth-ism is demonstrative of the bare misogyny that this paper is asking the audience to start taking notice and accept less of. Stop it.
On the other hand… do we really need to get gender into programming languages?
When we exclude spreadsheet from programming languages, I think we already have to some extent: one big reason it’s excluded is because spreadsheets are perceived as not as prestigious as “actual” programming. And I bet my hat one reason it’s not is because spreadsheets are generally a secretary’s tool. Female secretary most of the time.
There used to be a time where computers were women (or “girls” as the men around them often called them). With the advent of the automatic computer, a good deal of those women turned to programming. And for a time, this role was not that prestigious. Over time it did became so, though. And over time we did see a smaller and smaller share of women going into programming. Coincidence? I think not.
Anyway, giving “programming language” status to spreadsheets would elevate the status of secretaries to programmers, and “real” programmers can’t have that. Hmm “real programmer”. Why do this always conjure an image of a man in my head? You’d have to admit, the XCKD über hacker mom isn’t your stereotypical hacker.
I think the simpler and more correct explanation is that spreadsheets dominate other sectors and industries (engineering, medicine, hospitality) so thoroughly that it simply never occurs to most programmers that they’re a valid programming environment.
This is also why I’ve seen an MBA beat a bunch of programmers’ asses using only pivot tables and sheer stubbornness.
I bet my hat one reason it’s not is because spreadsheets are generally a secretary’s tool.
Spreadsheets, in the context of programming (generally excel) are coded as management tools or more generally “business people”. Not by gender (these are, more often than not, men as well, although probably not quite as male dominated as programming).
When we exclude spreadsheet from programming languages, I think we already have to some extent: one big reason it’s excluded is because spreadsheets are perceived as not as prestigious as “actual” programming. And I bet my hat one reason it’s not is because spreadsheets are generally a secretary’s tool. Female secretary most of the time.
Do we exclude spreadsheets? Microsoft claimed excel was the most popular programming language in the world in 2020.
Excel isn’t present at all in the technology section of the 2024 Stack Overflow survey results. And that isn’t even specifically a list of programming languages; the page has several categories of technologies, including an “other” category. So while Microsoft may rate Excel—and to point out the obvious, they have a financial interest in doing so!—I don’t think that’s necessarily a widespread view.
I think I disagree on the widespread view comment. Anecdotally most people I talk to agree that excel (or spreadsheets more generally) meet the definition of a programming language/environment. I would argue that the community of users on stack overflow is not representative of the broader population and the population choice is the crux of the issue here.
The original question was whether Excel is a PL in the context of PL research. And in that context, I think it’s especially obvious that it is. It has variables (called cells), and loops. It’s Turing complete, and not in a gotcha-kinda way. Excel only has user-defined functions defined in another language, but Google Sheets has user-defined functions defined in the same language. It has control flow, made interesting by cells referencing each other in a DAG that needs to be toposorted before being evaluated. It has compound data: 1D and 2D arrays.
You could absolutely write a small step semantics for it, or a type system for it, and neither would be trivial. In fact I’d like to read such a paper for Google Sheets to understand what ARRAYFORMULA is doing, there’s some quantifier hiding in there but I’m not sure where.
Oh, I do think that Excel is a programming language! I realize that my comment didn’t make that clear at all. I was trying to push back on the claim that spreadsheets are commonly considered to be programming languages. I think Excel is a PL, but my impression is that there aren’t a lot of other people who think that.
Maybe it’s just because I’m young, but the first introduction to “historical” programmers I had was Ada Lovelace, and later then Grace Hopper.
Honestly, I can’t say I even necessarily have a generally positive perspective on the “typical hacker” persona - maybe partially because there are some like RMS who come with a lot of baggage.
That is the fun bit. This totally eclipse the reality of the history of our field.
I recommend a tremendous book, “Programmed Inequality” by Mar Hicks, for a historian work on some of this. It is fascinating and may help shed some light on the topic and the lens.
spreadsheets are generally a secretary’s tool. Female secretary most of the time.
Maybe this is true in your culture, but do you have evidence for this? I have no evidence, only my perceptions.
My perception, in my culture, is that spreadsheets are stereotyped as being for “business people” in general, or for very organized people, or for “numbers-oriented people”.
My perception, in my culture, is that “business people in general” and “numbers-oriented people”, and maybe “very organized people”, are stereotypically male, i.e., that women are stereotyped as less likely than men to be in those groups.
Although secretaries exist in my culture, I perceive their numerousness and cultural prominence as being low now, much decreased since their peak in the 20th Century, I guess because computers and economic downturns made businesses decide that they could do with many fewer secretaries.
When we exclude spreadsheet from programming languages, I think we already have to some extent: one big reason it’s excluded is because spreadsheets are perceived as not as prestigious as “actual” programming. And I bet my hat one reason it’s not is because spreadsheets are generally a secretary’s tool. Female secretary most of the time.
TBH, I figured it was excluded because there is one hyper dominant spreadsheet implementation (Excel) that was created by a much maligned company (Microsoft).
Though I suppose that might be why there is one hyper dominant implementation. If people were more interested in spreadsheets we might have a lot more options and genomic studies would be safe from coercion errors.
Building software is complicated. Build systems are complicated. The complexity is further multiplied by the number of platforms/architectures/OS’s/etc … that need to be supported. And this software is AFAIK the project of one guy who releases it for free.
I’m not intending to have a crack at the author personally, but the general mindset really irks me. The nature of open source software has often felt like something of an outlier to me. How many other examples are there at a similar scale where people spend vast amounts of time working on projects that end up being widely used, often for others’ own commercial gain, and yet are given away for free? And not only is it free, but the “instructions” are too so you can make your own version and modify it as you please. It doesn’t feel like there’s a lot in this world that’s free these days, but open-source software is one such thing.
And yet, people still get annoyed when the thing that was freely given doesn’t work for them, as if the author is in any way obliged to handle their specific configuration. It just feels … unkind?
I get you and I don’t really have a good answer for this. I don’t intend to be unkind. It looks like others have been successful building 7-Zip and I’ve updated the article accordingly.
By making your build system obtuse, you’re asking distro maintainers and people who come after you to do more work to be able to package and use your software.
I really doubt the author deliberately made his build system obtuse. 7zip is software from the late 1990s. Most of it is written in C/C++, and it started on Windows then was eventually ported to Linux. In that context, it’s not even particularly obtuse; it actually doesn’t seem as bad as I remember, having dealt with quite a bit of similar software in the early 2000s.
Supporting both non-cygwin Windows builds in that era and native Linux builds from the same tree was always obtuse. Did you ever try to build the original mozilla suite back then? I remember spending a solid week getting that working. When StarOffice was released as open source, did you try building that? It was hell on wheels. And those were projects with large teams behind them. Not one-developer shows.
I also don’t think the author is asking distro maintainers to do anything, FWIW.
On the plus side, once you script up such a build, it tends to be pretty stable, as there’s a strong incentive not to mess with the build system if it can be avoided at all :-)
I think that this is a problematic way to look at it, and one that hurt opensource.
It is distro maintainers and people that come after me that want to use my software. The onus should be on them to decide if they want to use my gift or not, based on their ressources.
Not on the maintainer to spend ressource and knowledge they may not have or want to, in order to make packagers life easier.
I would go deeper. It is the fundamental element that make FOSS works to invert that usual relationship, and fighting against it is one of the major contributor to burn out and anger in FOSS.
This is something you can definitely optimise for.
“Deletability” is a real quality your code can have and I recommend optimising for it. It is why I recommend against class based OOP. Why I do Elixir, Erlang or Rust. Some environments help you in that direction.
The argument is that most vulnerabilities come from recently-added code, so writing all the new code in a safe language (without touching old code) is effective at reducing the amount of vulnerabilities, because after a few years only safe code has been recently added, and older code is much less likely to still contain vulnerabilities. (More precisely, they claim that vulnerabilities have an exponentially-decreasing lifetime, pointing at experimental findings from previous research.)
I find the claim rather hard to believe, it is implausible and my intuition is that it is completely wrong for many codebases. For example, if I have an unsafe-language codebase that has very few users and does not change often, by the reasoning above we could wait a few years and all bugs would have evaporated on their own? Obviously this is not true, so the claim that vulnerabilities have an exponentially-decreasing lifetime must only hold under certain conditions of usage and scrutiny for the software. Looking at the abstract of the academic publication they use to back their claim, the researchers looked at vulnerability lifetimes in Chromium and OpenSSL. Those are two of the most actively audited codebases for security vulnerability, and the vast majority of software out there does not have this level of scrutiny. Google has setup some automated fuzzing for open source infrastructure software, but is that level of scrutiny enough to get into the “exponential decay” regime?
So my intuition is that the claim should be rephrased as:
if your unsafe-language software gets similar level of security scrutiny as Chromium or OpenSSL
and you start writing all new code in a safe language or idiom
and you keep actively looking for vulnerabilities in the unsafe code
then after a few years most safety vulnerabilities will be gone (or at least very hard to find), even if a large fraction of your codebase remains unsafe
Phrased like this, this starts sounding plausible. It is also completely different from the messaging in the blog post, which makes much, much broader claims.
(The post reads as if Google security people make recommendations to other software entities assuming that everyone has development and security practices similar to Google’s. This is obviously not the case, and it would be very strange if the Google security people believed that. They probably have a much narrower audience in mind, but miscommunicate?)
For example, if I have an unsafe-language codebase that has very few users and does not change often,
I think another difference between Google’s perspective and yours, in addition to that their old code gets vulnerabilities actively hunted, is that they’re focussing on codebases where large amounts of new code are added every year, as they add features to their products.
If the alternative is “keep doing what you’re doing” (and “rewrite everything in a safe language” not being an option), I’m sure everyone’s better off adding new stuff in safe languages, even if the unsafe bits don’t get as much scrutiny as Google’s stuff. Eventually, you’ll probably rewrite bits you have to touch anyway in a safe language because you’ll feel more proficient in it.
Okay, yeah, “your software will be safer if you write new stuff in a safe language” sounds very true. But the claims in the blog post are quite a bit stronger than that. Let me quote the second paragraph:
This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective.
An exponential decline in vulnerabilities is a rather strong claim.
But it’s an extremely realistic claim for any code base that is being actively worked on with bugs being fixed as they are found. That may not apply to your code bases, but I think it’s a very reasonable claim in the context of this blog, which is making something that is widely used much safer.
I don’t find it realistic. Bugs in general, sure: we find bugs by daily usage of the software, report them, and they get fixed over time – the larger the bug, the sooner it is found by a user by chance. But security vulnerabilities? You need people actively looking for those to find them (at least by running automated vuln-finding tools), and most software out there has no one doing that on a regular basis.
I went to look a bit more at the PDF. One selection criterion is:
The project should have a considerable number of reported
CVEs. In order to allow a thorough analysis of all projects,
we limited ourselves to those with at least 100 CVEs to en-
sure meaningful results
How many CVEs have been reported against the software that you are writing? For mine, I believe that the answer is “2” – and it is used by thousands of people.
My intuition is that the experiments in the paper (that claim exponential decay) only apply to specific software development practices that do not generalize at all to how the rest of us write software.
That claim is based on some Google Project Zero work, but it’s not aligned with my experience either. I suspect that it’s an artefact of the following flow:
Find a new vulnerability.
Search the code for similar code patterns.
Fix all of the instances you find.
Imagine that you fix all of the occurrences of bug class A in a codebase. Now you write some new code. A year later, you look for instances of bug class A. They will all be in the new code. In practice, you don’t fix all instances, but you fix a big chunk. Now you’ll see exponential decay.
The converse is also common: Find an instance of a bug class, add a static analyser check for it, never see it in new code that’s committed to the project.
The problem with all of these claims is that there’s no ground truth. If you could enumerate all of the bugs in, say, Linux, then you could (moderately) easily map them back to the commits that introduced them. If you could do this, you could also ship a 100% bug-free version of Linux. In practice, you only have data on the bugs that are found. That tends to be bursty as people find new techniques for identifying bugs.
In the things that we’ve ported to CHERI, I don’t think we’ve seen evidence that memory-safety bugs are more likely to be present in new code. Quite a few of the bugs we’ve found and fixed have been over 20 years old. There is certainly an effect that bugs that cause frequent crashes get fixed quickly, but the more pernicious ones where you’ve got a small out-of-bounds write, or a use-after-free that depends on concurrency and doesn’t trigger deterministically, are much more likely to hide in codebases for a long time.
In the things that we’ve ported to CHERI, I don’t think we’ve seen evidence that memory-safety bugs are more likely to be present in new code. Quite a few of the bugs we’ve found and fixed have been over 20 years old.
Doesn’t this undermine an argument you’ve used for why to use an old TCP stack in C rather than newly written one in Rust? As I recall, the thinking went that the old TCP stack was well tested and studied, and thus likely to be better both in terms of highly visible bugs and in security bugs, than a newly written Rust version.
Doesn’t this undermine an argument you’ve used for why to use an old TCP stack in C rather than newly written one in Rust? As I recall, the thinking went that the old TCP stack was well tested and studied, and thus likely to be better both in terms of highly visible bugs and in security bugs, than a newly written Rust version.
Possibly. I’d like to see a new TCP/IP stack in Rust that we could use (which needs some Rust compiler support first, which is on my list…) but yes, I would expect very new code to be buggy.
I think I expect something less of a decay. Very new code has had little real-world testing. A lot of things tend to be shaken out in the first couple of years. Very old code likely has a lot of bugs hiding in it that no one has looked at properly with more modern tooling. I’m not sure where the sweet spot is.
My main worry with a new TCP/IP stack is not that it’s new code, it’s that the relevant domain knowledge is rare. There’s a big difference between a new project and new code in an existing project. Someone contributing code to an existing TCP/IP stack will have it reviewed by people who have already learned (often the painful way) about many ways to introduce vulnerabilities in network protocol implementations. If these people learned Rust and wrote a new stack, they’d probably do a better job (modulo second system problems) than if they did the same in C. But finding people who are experts in Rust, experts in network stack implementations, and experts in resource-constrained embedded systems is hard. Even any two out of three is pretty tricky.
The most popular Rust TCP stack for embedded is, I think, smoltcp, which was created by someone who I am very sure is an expert in both Rust and resource-constrained embedded systems, but I have no idea how to evaluate their expertise in network stack implementations, nor the expertise of its current maintainers.
It might not be suitable anyway since it is missing a bunch of features.
We use smoltcp at Oxide, so it is at least good enough for production use, if it fits your use case. As you say at the end, it’s possible there are requirements that may make that not work out.
I didn’t really find this too insightful. Main takeaways imo:
He acknowledges that some people take this all very personally but doesn’t really say anything at all about it. He half jokingly says he likes arguments right off the bat.
He thinks it’s good that Rust and C people see things differently, they bring different perspectives to the table.
He thinks Rust will likely succeed but that even if it fails it’ll be fine because they’ll have learned something. Some people seem to think Rust has already failed in the kernel, he doesn’t feel that way.
Kinda just random stuff otherwise, like that C in the kernel is weird and abnormal.
Notably, he doesn’t seem to actually express any kind of disapproval at all or acknowledge any problems brought up by various contributors like Asahi, despite being asked. He doesn’t address the core issue of Rust devs wanting defined semantics either, which is a real shame since I think that’d be an area he could really meaningfully make a call on in his position.
I wish he’d just said “Yeah so my perspective is that Rust people want defined semantics and that blah blah my opinion blah. And also, in terms of how they interact/ how this led to a maintainer resigning, I want to say blah blah blah blah”. I didn’t get that, so I’m a bit disappointed.
Notably, he doesn’t seem to actually express any kind of disapproval at all or acknowledge any problems brought up by various contributors like Asahi, despite being asked.
I think that’s why this is interesting. IMO, it sounds like things are working as expected. People burning out or having strong disagreements are not considered problems. They are considered a sign of energy.
Whoever joins the project will probably need pretty thick skin to push things forward. Not too surprising at the end of the day.
People burning out or having strong disagreements are not considered problems.
It’s considered a pretty big problem. See this LWN article from last year, for instance. They just don’t want to acknowledge the problems that they themselves create by their behavior.
It’s considered a pretty big problem… They just don’t want to acknowledge the problems that they themselves create by their behavior.
Yeah, I know other people consider it to be a problem. I’m saying that I don’t think Linus considers it to be a problem. He’s known for having a hard edge so I don’t think he’s avoiding acknowledging it. He seems to think this is a productive process.
That’s what I find interesting about this. Clearly it’s a bit toxic.
Ted Ts’o is the first maintainer cited in that article, and he is describing some reasons for burnout. Wasn’t he the very aggressive audience member at a recently linked video of a conference who told Rust people he will do nothing to help them and will break their code at will? With that attitude being cited as a key reason for Rust contributors burnout?
(Not that this would add much to the discussion if true; it’s just funny in a sad way)
Whoever joins the project will probably need pretty thick skin to push things forward. Not too surprising at the end of the day.
Personally that sounds sad, if not outright terrible. That good work can’t stand on pure merit. Instead you have to dig in and fight to improve something
It’s any wonder open source developers of all stripes get burned out and leave
But like… He didn’t say anything about the actual problem. There’s two things that happened.
A discussion was had about how to encode kernel semantics into types.
That discussion went horribly.
He had virtually nothing to say on (1), which feels really unfortunate. He had almost nothing to say about (2) other than sort of vaguely saying that some people get upset and argue and that’s okay.
That is just so weird and useless to me. We got nothing about the actual situation.
But like… He didn’t say anything about the actual problem.
I think those two problems are actually symptoms of the culture. Linus seems to think that the situation was somewhat productive. He seems to like the clash of ideas. That’s the real problem, IMO. We should be able to have debates, but with a more moderate intensity.
I find it interesting because the technical issues are clearly locked behind the culture. Given what he said, I don’t think there is going to be any movement socially. Whoever goes into the project will probably need to have really thick skin in order to get anything done.
He doesn’t address the core issue of Rust devs wanting defined semantics either, which is a real shame since I think that’d be an area he could really meaningfully make a call on in his position.
I watched the video really hoping he was going to weigh in on this, in particular.
Whether or not rust in the kernel ever becomes interesting on its own, it would be a big win for everyone if it made those semantics better understood. I was disappointed that he didn’t choose to discuss it.
It’d have been nice if he weighed in on the social issues that pushed the maintainer out, but given his history I’d have been surprised if he did. I really thought he might have something to say about the under-understood semantics, though.
I think it’s naive to expect him to make any strong statement in an interview. Whatever work he might be doing to facilitate interactions between contributors has to happen behind private doors.
If you were involved in a similar thing at your job, would you prefer that your CEO tries to sort things out via small meetings, or by publicly blasting the people that he thinks are not doing a good job in an interview, maybe without even ever speaking to them directly first?
Usually these things are indeed handled publicly on the mailing lists. This is exactly the sort of thing I would expect Linus to address directly, yes.
I’m not involved in the development of Linux but I would be extremely surprised if this kind of situation didn’t have any private communication attached to it.
Regardless, even if it is fully handled via mailing lists, this interview is not a mailing list.
Indeed these sorts of things have historically been handled quite publicly, with Linus weighing in. Even fairly recently, even specifically with regards to maintainer burnout, even specifically specifically with regards to filesystem maintainer burnout - see BCacheFS.
It is definitely the status quo for these things to be handled publicly. It’s also the status quo at companies. When something happens to a company publicly it is not uncommon at all for a CEO to have a company-wide meeting (say, the “town hall”) and to weigh in on it directly, even if they have discussed such things privately.
Regardless, even if it is fully handled via mailing lists, this interview is not a mailing list.
Okay but he hasn’t weighed in on the mailing lists. This is the first time, to my knowledge, that he has talked about this. So yes, I expected him to say at least something relevant - he barely discussed the topic at all.
Okay but he hasn’t weighed in on the mailing lists.
So you’re saying that it is more probable that him and other leadership did absolutely nothing instead of having private conversations. Not what I would bet my money on, sorry.
That is not what I’m saying at all. What I’m saying is that regardless of what has or has not been discussed in private, the normal expectation is for these things to be handled publicly as well.
All the complaints about the Linux Kernel culture sound very immature and naive. They sidestep acknowledging that it’s a hugely successful collaboration - one of the biggest software projects in history that welcomes a huge diversity of contributions.
In itself, accepting the Rust attempt is extraordinarily open minded. What other critical system is willing to experiment like that?
Passionate people are going to burn out and most projects take the easy way out by never letting them contribute. Rather than blaming Linus let’s praise him and appreciate his new more diplomatic approach
In itself, accepting the Rust attempt is extraordinarily open minded. What other critical system is willing to experiment like that?
Many of them? This is broadly speaking the norm - though not every experiment is about rust specifically.
For example, the most similar projects that come to mind are:
Windows, which is rewriting the components of itself including parts of the kernel. in rust.
MacOS/iOS/iPadOS/VisionOS, which has created it’s own entirely new language (swift) and been rewriting parts of itself in it (at least core user-space system components, I don’t know about the kernel specifically though).
If we stop and look at large scale open source projects…
Firefox, literally paid for rust to be created.
Chromium, started integrating rust.
Android (both userspace and the android specific parts of the linux kernel), has been experimenting with rust.
Gnome, has had “hackfests” about integrating with rust for literally years now.
KDE, appears to be experimenting slightly (but less so than the gnome people?)
…
take the easy way out by never letting them contribute.
Maybe it wasn’t your intent, but this statement falsely implies that the people contributing to the rust for linux project don’t/wouldn’t contribute to linux otherwise. Rather it’s led by people with a long history of contributing to linux in C. The most impactful piece of rust code in linux (or really a fork therefore) is probably the Asahi linux graphics driver, and not only was that not only written because the kernel was accepting rust, but the author learned rust in order to write that driver in it instead of C.
Note that of your list, the only items that actually count are Chromium, Gnome and KDE.
All other items are projects that belong to Big Tech companies that have a huge interest in riding hype waves. Rust will most probably provide them with value past hype/marketing/hiring, but there’s absolutely nothing open minded in their behavior.
The criteria was “other critical system” not “other critical system not run by a big tech company”. If anything Gnome and KDE are the least applicable not exactly being critical systems… admittedly I included them anticipating the objection that the most similar projects (windows and apple’s OSes) are run by very different organizations… and while not critical systems they are infrastructure level projects.
If you really want to avoid “big tech company” vibes though, the best example I can think of off the top of my head is curl. Open source project, run by some dude, used literally everywhere, security sensitive. There too, experiments with rust (largely failed, but experiments none the less). I’m not even cherry picking here, because I don’t have to, experimentation with new technology is the norm in most of the programming industry.
Incidentally I’d note that a huge amount of linux development is funded by big tech companies.
but there’s absolutely nothing open minded in their behavior.
Swift really doesn’t have any hype/marketing/hiring benefits beyond what Apple itself created, so I don’t see how this argument stands up as it applies to them.
Microsoft may have adopted rust after public enthusiasm for it, but Microsoft has a long history of programming languages research along the same lines (e.g. see TypeScript, F#, F*, Project Verona, Dafny, Bosque). I don’t think the idea that they were open to this experiment merely because of hype stands up to plain evidence that they have been very interested in languages that provide better correctness guarantees for a long time - and investing huge sums of moneys into trying to make it happen.
PS. Presumably you mean Firefox and not Chromium, Google being a rather larger company than Mozilla?
Note that of your list, the only items that actually count are Chromium, Gnome and KDE. All other items are projects that belong to Big Tech companies
Chromium is controlled entirely by Google…
All other items are projects that belong to Big Tech companies that have a huge interest in riding hype waves. Rust will most probably provide them with value past hype/marketing/hiring, but there’s absolutely nothing open minded in their behavior.
Android is more than experimenting, last I checked they were at at least 30% of new code in a major release being in Rust. This is quickly becoming their main language for the low level stuff.
All the complaints about the Linux Kernel culture sound very immature and naive. They sidestep acknowledging that it’s a hugely successful collaboration - one of the biggest software projects in history that welcomes a huge diversity of contributions.
Are you saying that Linus has been a big success because of, rather than in spite of, the toxic culture that pushes people away? Is accusing contributors of being religious nuts good or bad for Linux?
I think it’s saying neither? Simply, this is the culture that produces the Linux kernel. It may not always be a pretty process, but it’s managed to produce a very successful kernel.
(quote some old adage about not wanting to know how the sausage is made)
Is accusing contributors of being religious nuts good or bad for Linux?
It depends? There are people on both side of that accusation. To me, the connotation of “religious nuts” is that they’re unwilling to compromise or change their opinion when presented with evidence. So this could just be the culture trying to drive away people they find difficult to work with?
There is a lot to say here, as a critic of the Frontend indistry.
I do not buy fully the conspiracy theory, but I think that the “non conspiracy” explanation stand even less the contact with reality. The main reason to use React I see today is “everyone use it, so it is easy to hire”. Nothing else.
I see a blatant lack of discussion on how to integrate with non JS stuff.
Following on that, where is my import map for non JS content, with SRI, like for CSS? Pretty please?
It saddens me to read the negativity of other comments? Why hoping no one replicates this? If anything, the very reason whe ended up where we are is that people don’t do more or this. This is shining a super bright light that we are too far gone on the mindset of everything being disposable and just buy new, don’t even think of fixing it.
This video got over a million views. For the awareness it has risen alone, it has already been worth it.
It has been a long time since I saw an interesting hardware project reclaiming used parts. Good work!
Well, maybe not no one, but I suspect that the vast majority of those million viewers probably weren’t people I’d trust to do this safely without risking turning their interesting hardware project into an improvised incendiary device.
Lithium batteries have a variety of possible failure modes that aren’t necessarily obvious but they all typically end in overheating and possibly a fire. If you’re “lucky” it will happen while you’re working on it and are likely to be able to control it quickly. If you aren’t it will happen later while charging unattended and burn your house down.
Jesus, mate! Take a chill pill. Isn’t the sole reality of the numbers presented in the beginning of the video a human scale tragedy per se already? Why this obsession with the hypothetical case some one replicates it without basic knowledge of it? Surely that would be their fault, their stupidity and ultimately them who would suffer the consequences?
I don’t know how to say this nicely, the mindset of hyper idiot proofing anything is so boring. Just enjoy the video. If anything I am sure it inspires more people to do something constructive and to be more mindful. Why not focusing on that instead?
Several kids have lost hands to DIY with lion batteries already. There is a reason these things are controlled. We do not let people run around with orphan source either.
You do not need to be “stupid” or “idiot” to badly hurt yourself and others with this.
I’m wondering about battery drain with both LiveView Native and LiveView. Doesn’t a persistent connection necessitate keeping the radios on (or at least, to switch them on frequently)? My understanding was that it would result in significant battery drain but maybe that’s outdated.
Mobile devices also have other constraints that could make LiveView less suitable:
spotty internet connectivity in some places
I assume that without internet, a user could scroll within the current view but any button taps would be ignored.
bandwidth limits on data plans
LiveView sends only diffs of the UI. But this could still be significant if the app lets users browse through images and if those images are not cached.
Spotty: yes. Note that this is rarely well handled by native and web app anyway in my experience, so it may not be a big problem
That is… The same as every app? The handling of images and what format you use compared to your connection and bandwidth is not dependant on the framework you use?
… this [bandwidth] could still be significant if the app lets users browse through images and if those images are not cached.
That is… The same as every app?
I was thinking that specifically the “if those images are not cached” condition would be more likely to be true with LiveView Native than with a native app. I know that browsers cache previously-downloaded resources, but I don’t know whether the LiveView Native runtime caches previously-downloaded parts of UI.
Edit: Sorry, I just realized I was mentally comparing LiveView Native with browsers, not with native apps. I don’t actually know if normally-written mobile apps cache resources like browsers do.
On iOS at least you’d only be able to keep the connection open reliably when the app is foregrounded, most apps these days would probably make a bunch of network requests at that point anyway. Although it does optimise these by batching requests together and turning the radio on at once. I wonder how big of a difference it really makes, I guess it really depends on the app and how it’s used. I agree with the sibling comment, there are other reasons this model might not be ideal for mobile.
I recently re-encountered Mark Pilgrim’s argument about why rejecting invalid XHTML is a bad idea which is now over 20 years old. It’s a funny story, a spicy takedown, and a sobering example of how hard it is to ensure valid syntax when everything is stringly typed and textually templated.
I wonder what we have learned in the last 20 years.
There’s a strong emphasis in the web world that (repeating a “joke” I first heard too many years ago to remember the source) “standard” is better than “better”. I kind of both like and hate this aphorism: I think it works as a straightforward truism about how change works in widely-deployed systems; but I hate what it says about the difficulty of improving things. I also hate it because it can be used as a trite way to dismiss efforts to improve things, which is the main reason I don’t repeat it very much. There are lots of specifics about different technologies that can affect adoption, and sometimes there is pent-up demand or a qualitative change that means better really is better enough to displace the standard.
One thing that strikes me now when looking back at the Web 2.0 period is that it wasn’t until after the XML fad started to deflate that the browser began to be treated as a development platform as opposed to a consumer appliance. Partly that was because of ambitious web apps like Google Maps, but also because Firefox and Firebug made it possible to understand how a web app worked (or not) without heroic effort. I wonder if strict XHTML and XML might have been more palatable if that order had been reversed and browsers had grown rich debugging tools first.
Another thing is the persistent popularity of stringy templatey tooling. I loved the LangSec call-to-arms, that we can and should eliminate whole classes of bugs by using parsers and serializers to and from typed internal representations. But it fizzled out because it’s evidently too hard. Even when the IR has the bottom rung complexity of JSON, we still end up with tooling that composes it by textual templating YAML. Good grief.
Having said that, the web is actually getting better in this respect, because there’s a lot more DOM hacking than templating, convenient quasiquoting with JSX, easy CSS-style element selectors. It’s a huge amount of intricate technology. But it has taken a huge amount of time and effort to get the DX to the point where many developers prefer to work with HTML’s IR instead of saying, fuck it I’ll use a template.
The lesson I take away is that the kind of correct-by-construction software required by XHTML and advocated by LangSec requires much more effort and much higher quality tooling than programmers expect (ironically!). It’s achievable for trivial syntaxes like Lisp, hard for even slightly more complicated cases like JSON, only possible for seriously complicated cases like HTML if they are overwhelmingly popular.
I would say that LangSec won, but differently than they had thought.
Rust came through and with it a whole new set of tools for parsing and (at the margin) serialising. And an easier on ramp for people interested in that kind of problems.
So now, we got a ton of Rust based good parsing all around.
I mean, as much as I tried to like XHTML 1.0 strict and managed to get my personal websites to conform, it was just unrealistic to deploy it at scale, even if we ignore any CMS, libraries or whatever. HTML 5 in contrast, I’m just as confused as the author why people don’t validate and fix their errors.
Also another one. Where is the proof that “healthy habits” work?
So far, what seems to have worked, with quantifiable results is
creating tools that make it easy for engineers to understand and fix the problems, case study: Rust
creating a more practical and fast UX for password entry: password managers with autocomplete
easy to use dependency management and update churn handling tools, allowing fast distribution of patches: package management tools
centralising control and implementation of mitigations measures and fixes to a few deeply responsible people at the infrastructure level: FOSS like SQL libraries, webservers for CQRS, spectre mitigations, etc
etc (HIBP, MFA with hardware tokens,…)
None of that seems to be things that business leaders control or have an impact on. And none seems impacted by healthy habits.
Maybe it is time for the infosec crowd to stop navel gazing, look at the world as it is, and rethink their paradigms.
Maybe the problem is not that noone cares. Maybe it is that what you offers is not helpful and does not work because it is not adapted to the problems at hand
In my experience, unfortunately, it does, because both the adoption and the implementation of every solution you’ve listed hinges on office politics to some degree. It’s not a good thing, mind you, but right now it’s one of the things that helps.
For instance, better tools work, but they have to displace worse tools that, more often than not, someone has personally championed and isn’t willing to give up. And if they don’t need to displace anything, they’re still an extra cog, and people will complain. I’ve literally seen a three-year MFA postponement saga that ran on nothing but department heads going “but what if I lose my phone/token when I’m travelling for a conference or to meet a client” and asking if maybe MFA could only be deployed to employees in a non-leadership position and/or who only work from the office.
So in many cases, someone just has to take out the big hammer, and right now someone at the top caring is about the only way that happens.
This sucks, 100%. Nobody other than specialists in a field should “care” about that field. Business leaders also don’t care about e.g. export regulation compliance, and don’t understand its benefits, but their companies nonetheless comply because they understand the one thing they need to understand to make it happen – the company is liable if it doesn’t comply.
That’s what allows a whole array of unhelpful solutions to proliferate. There’s some value in going through the motions, but in the absence of risk, there’s no motivation to either hurry or to improve the status quo. That’s why half the security solutions market right now is either snake oil or weird web-based tools where people click around to eventually produce Excel sheets where half the cells are big scary red and the other half are yellow.
Honestly, I see very little value in “educating” business leaders on this topic (whether in order to get them to care or in order to get them to understand). Half the tech companies out there operate with so much personal data that adequate controls on its disclosure shouldn’t hinge on having someone in the organisation who can explain “business leaders” the benefits of not having it stolen by identity theft rings. Especially with “business leaders” being such a blanket term.
Software (and software security, in particular) isn’t exceptional. If we can get companies ran by people who don’t know the first thing about physics to handle radioactive material correctly for the most part, we can get companies ran by people who don’t know the first thing about software to handle personal data correctly for the most part, too. Like, if all else fails, it should be possible to just tell them it’s radioactive :-).
For every leader that cares, there’s 8 who just have been told to do X because it’s in this big book from ISO and there’s a big bank customer we’re trying to sell a fat support contract to.
All of these are INCREDIBLY good points, and very close to my thesis: People don’t care about security because it is often presented as a set of tasks for the sake of security, not for the specific meaningful benefits they produce.
If someone said “we have to do backups because backups are important”, nobody would do them. No matter how often data was lost. Until someone connects the dots between the activity called “backups” and the easily understandable benefit called “recovering data”, it’s hard to get leadership to justify, care, or pay.
Once folks have that in mind, all the other stuff - easier tools, better organization around healthy security practices, etc - starts to make sense and look good (and reasonable.
Mostly. I have nearly never seen an org with good backups.
In my experience, there are two exceptions to this:
Orgs that lost valuable data in recent memory due to a lack of backups. Note the ‘recent’ here: after a while, people stop caring.
Orgs that have regulatory requirements for data retention. These will typically have a compliance officer who will ask for evidence that they are retaining the required data, and lawyers who will expect a paper trail that valid backups exist.
Aside from this, it’s basically only places where backups are so easy that no one thinks about it. For example, places that store all of their data in cloud things that are automatically backed up.
And I think this is a great analogy. This is why CHERIoT (and CHERI in general) has been so focused on developer ease of use. Writing secure software should be easier than writing insecure software.
Right? But then we should focus most of that infosec budget toward tools to help build secure software and their UX. Not into “healthy habits” or whatever the infosec crowd is sniffing this week? Righr?
The problem is that this needs to start at the bottom of the stack. To do it well, you need changes in the hardware and then in kernels, and then in userspace abstractions, and so on. The companies with the ability to fix it have no commercial incentive to do so.
I dunno, I’ve been doing this for a bit (35 years). A LOT of time spent on backups - both structuring them, executing them, validating them, etc.
To be clear, nobody WANTS backups. They want restores. And when they want them, they want them YESTERDAY. But backups - either on the modular level (of a specific db or codebase) or the global level (a whole constellation of systems around a related service) are done, and done dilligently and regularly. Nestle, National City Bank (then PNC), Cardinal Health, a couple of metro school systems, a hospital or 3… Not to mention the common chatter at DevOpsDays and SRECon type events.
Everyone’s experiences are different, but that’s what I’ve seen.
I think the difference is that for a lot of businesses there’s no way to wave away backups, and they are one specific thing. “Security” especially in the era of “best practices” is super amorphous and ever changing rather than one thing everyone can agree it would be negligent to not have.
I don’t see how this disagrees. These sound like things that make healthy habits possible to actually sustain and/or the habits themselves. The equivalent of having nutrition labels on food, readily accessible gyms, whatever - point being, insisting on healthy habits in a vacuum with no actionable plan, resources, or support is basically just wishful thinking, too.
It does matter if business leaders care because otherwise time and other resources don’t get allocated to any of these things.
Nobody uses Rust because nobody has time or incentive to learn Rust, at least unless we wait long enough for everyone on the team to have learned it in university.
Nobody uses password managers because they can remember their easily-updated formulaic password used everywhere just fine, and if they can’t a text file is easier than a database they could forget the password to.
Nobody handles update churn because the site works fine in prod and they don’t want to risk downtime if an update breaks something.
Nobody gets time set aside to be deeply responsible for including mitigations in core systems used throughout the org; you’re lucky if core systems used throughout the org exist instead of every development team rolling their own.
Nobody integrates HIBP into password updates, nobody gets budget for hardware tokens,…
The infosec crowd does love its navel gazing, but you’ve made a poor case for this being an example. Best case, I think you could argue that it would’ve been better to post this with the actual actionable plans because then it could be fairly criticized, and right now it’s just one or two steps up from tautologically true.
And my point (and the points the actions I’ll be sharing soon) is not WHETHER leadership should care, but WHAT they should care about?
I think we (IT practitioners in general, infosec folx in particular) have been pushing the wrong message. We’re insisting on emphasizing the technical aspects rather than the business aspects; or we’ve been proving business benefits with technical examples.
I think we IT folks can do a better job speaking the language of business, and in so doing, we’ll have a better shot at getting these critical needs addressed.
In the 80s and 90s, there was big push to get programmers to stop writing new code and reuse existing code. I guess now the pendulum is now starting to swing back the other way.
Serious question: isn’t it still called that and isn’t is still portrayed as not good? (I’m genuinely asking. This website aside, have things really changed that much?)
A good balance that I’ve used in some projects (like Apache Superset, eg), is to have an interface to the dependencies, and always use the interface instead of the dependency directly. This way if there’s a CVE or the need to change to a different dependency you only need to do it in one place. In Superset we do that for frontend components, and for JSON in the backend.
The wealth of Open Source code probably plays a part in that. It was so easy to just incorporate code that you didn’t “have to” write. But by now people have realized that they use maybe 20% of any dependency they add and most of them are not even that helpful.
A balance has to be struck somewhere in the middle. Smaller dependencies mean they’re more composable so it’s better in the long run.
Smaller dependencies mean they’re more composable so it’s better in the long run.
The race to decompose everything into its smallest functional components and the compose everything from those components is a fool’s errand whether it is implementing it internally or adopting an external dependency. The left-pad incident was nearly a decade ago and wasn’t just about the risk of third party dependency, but also an abject lesson in over abstraction.
The reality is that we’re left with the axiomatic notion that the “right size” for a module is the “right size” and that the correct number of external dependencies is somewhere between zero and some number much larger than zero but certainly much smaller than what is typical in a contemporary codebase.
A tangential thought I’ve had recently….
I wonder if all the current craze around “AI” could lead to some better fuzzy heuristics that help drive towards smarter static analysis for code smells that are other wise in the category of things which aren’t easy to define objectively but are “I know it when I see it” types of things. I’d certainly welcome some mechanical code review that has the tone of a helpful but critical human saying things like “are you sure you need to pull in a dependency for this?”
I like Linux and everything, but it’s often surprising to me how immature the project seems, both culturally and technically. Culturally, episodes like this Rust for Linux saga reveal the deep, deep toxicity in the Linux community (e.g., deliberately conflating “we would like interfaces documented” with “you’re trying to force your religion on us!”), but even more mundane things like “no one actually knows for sure how the ext4 filesystem code works (much less an actual specification) so people can’t build compliant libraries”.
I’m a little surprised that there aren’t any adults forking the project. I also wonder how the BSD communities compare.
I’d be deeply surprised if you couldn’t point to similar dynamics in any large, multi-decade project of a scope anywhere near the Linux kernel. I’d be willing to bet the only reason you don’t see similar dynamics from, say, Apple or Microsoft, is that they’re behind closed doors.
The FreeBSD project has recently had discussions around Rust and they played out somewhat similarly. (Disclaimer - I’m the author of the story at the link.)
Most of the time Linux folks (and FreeBSD folks) manage to do amazing things together - occasionally there is friction, but it (IMO) is blown out of proportion because “maintainers agree and project XYZ goes off with minimal hassle” is neither interesting, newsworthy, or observed by the majority of people outside the community / company in question. That’s not saying that things shouldn’t improve, but I think some perspective is in order.
Actually, I would think that large long lived projects that are like Linux would be rare unless propped up by powerful forces. Apple and Microsoft must surely have internal documentation for the internals of their software. FreeBSD I am fairly certain is much cleaner and well documented than Linux. And surely the level of immaturity often displayed on the LKML is rare outside of Linux.
Apple documentation might be poor, but I suspect in a functioning organization that lacks documentation, you can at least ask people who work on it how to properly use it. (And hopefully, write documentation with that.)
Same, though there are some exceptions (e.g., the original NT team had good design doc culture, as did the original .NET CLR team). One of the great things about the DoJ settlement was the requirement to document the server and Office protocols and formats, which weren’t documented well even internally. Sometimes it takes a Federal judge to get developers to write stuff down…
Perhaps, but it was somewhat surreal to be told we’d be fined €1.5 million per day for failing to hand over documentation that didn’t exist. I don’t think anyone was acting in bad faith, but just like the earlier comment, under some over-confident assumptions.
The long run result can be seen in things like MS-FSA, MS-FSCC and all the rest. It’s very clearly documentation by developers reading code and transcribing its behavior into a document, not some form of specification of how it was intended to behave.
I’d be deeply surprised if you couldn’t point to similar dynamics in any large, multi-decade project of a scope anywhere near the Linux kernel. I’d be willing to bet the only reason you don’t see similar dynamics from, say, Apple or Microsoft, is that they’re behind closed doors.
OS/360 has, under various names, been under development since the early 1960s. During that time, multiple generations- literally- have worked on it.
More than anything else, this is a documentation issue: if a slowly-changing population of practitioners is to understand something, then it has to be documented.
Even medieval alchemists understood that, and when their ability to describe and explain the behaviour they were observing became inadequate their philosophy was supplanted by something by something more suited to the task.
To be clear, when I said “similar dynamics” I meant that there’s plenty of immaturity and in-fighting in other communities and within companies - not the documentation part, really. I’d freely admit that Linux has documentation problems, one of which surfaced today, because companies like to pay developers but not people to document things. (That has been a hobby horse of mine since at least the first year of Google Summer of Code when I asked the people running it when the Hell they’d sponsor a summer of documentation - which is more sorely needed.)
Anyway - you look at things like Ballmer throwing a chair at somebody or Steve Jobs’ treatment of employees during his time at Apple or find someone willing to talk to you about the skeletons in the closet of any major, long-lived tech company. I’m pretty sure you will find lots of toxicity, some that surpasses anything on LKML, because people are people and the emphasis on kindness and non-toxicity is a relatively new development (pun only slightly intended) in tech circles. It’s a welcome one, but cultures take time to change.
That is very uncharitable take on the Linux project, especially given its success. I’d go out on a limb and state this kind of drama is involved in every single human endeavour, technical or not, government or private.
The article seems to come from someone who is unhappy with the way the Rust saga unfolded, and is simply sounding off his frustration over the internet. There isn’t a lot of information to gain from it, since it a lot of it is personal too (Because Linux is a bunch of communes in a trenchcoat…).
Edited to add: I believe there will be many such articles from the Rust side of the aisle, while none or very few from the Linux side. The S/N ratio is going to favor Linux, than Rust.
My comment isn’t disputing or diminishing the success of the Linux project, but the project has been notoriously plagued with toxicity since its inception. And yes, toxicity exists in all large projects, but toxicity is not a binary and most projects seem to have quite a lower degree of it than is present in the Linux project. It’s worth noting that my remarks are not based solely or even principally off of this article or the larger Rust for Linux fiasco, but also on a lot of lurking on Linux mailing lists and so on. Other projects have their fair share of drama and toxicity (even the Rust project has had some controversy), but Linux stands apart. I don’t think it’s controversial to say that Linus has a reputation for having been fairly toxic, and it doesn’t seem like a flight of fancy that this might have had cultural consequences relative to projects that were founded by more emotionally regulated people.
I’ve heard arguments similar to what you are saying several times in the last two decades. I have to politely disagree, that’s all I’d say. And whether I like it or not, Linux is a successful project, and maybe some of its success is attributed to how it has been run by its founder.
On a lighter note, “emotionally regulated people”, I believe, is a polite speak referring to those who behave according to my mental model of how others should behave ;-)
That’s fine. I’m happy to agree to disagree. But again, no one is disputing that Linux is a successful project, so I’m not sure why you keep focusing on that. I don’t even doubt that Linus’s behavior has contributed to its success; that doesn’t mean there aren’t adverse consequences to that behavior.
On a lighter note, “emotionally regulated people”, I believe, is a polite speak referring to those who behave according to my mental model of how others should behave ;-)
It’s not intended as a euphemism, I just mean “people who can regulate their emotions”. But yes, my mental model for how people should collaborate does frown upon “stop forcing your religion on us!” responses to a polite request for documentation. :)
This probably is a natural effect when a codebase grows so much that even subsets of it cannot be understood by a single human being. By definition, adding to said codebase brings humans often to the limit of their mental capacity, even though I find it even worse with the Chromium codebase. Speaking of the Linux kernel community as a collective working on the same code is probably not a correct representation, and it’s more like a conglomerate of duchies with an unwritten agreement on certain rules to follow. The whole-system approach of the Rust for Linux community is completely diametral to this.
I am a big fan of microkernels because even though you also have duchies, they interoperate much more cleanly: You can have well-defined, tight interfaces, but the revolutionary potential of using a stronger-typed language with more guarantees like Rust, or as I mentioned above even more fittingly Ada, you could probably get away with a monokernel while still keeping these benefits and dropping all the downsides of microkernels at the same time.
I don’t like the drama though. It’s unrealistic to think that one can change the momentum of hundreds of Linux kernel developers, so it’s better to just start fresh. While the kernel has 30 million lines of code, a lot of it is for very old hardware, and going even further, because you would start development in a virtual machine context, using spoof drivers at first would be quite straightforward.
Speaking of the Linux kernel community as a collective working on the same code is probably not a correct representation, and it’s more like a conglomerate of duchies with an unwritten agreement on certain rules to follow.
Can you, by chance, cite just where M. De Voltaire wrote that down? It’s something I’ve quoted mindlessly more than once, and it absolutely sounds like something he’d write. And I think I’ve read damn near every scrap of his oeuvre that one might read without going to some significant trouble to gain access to very rare books. My best guess would be that it’d be somewhere in his Essai sur l’histoire générale (ca 1756) but I’m not spotting it after quickly re-reading (for the 3rd time this week) the chapters most likely to contain such a thing. (I’m thinking ch. 69, 70, 71 are most probable.) I was trying to quote the exact passage just recently, had no luck finding it, and it’s turning into something of a white whale for me.
I know it’s unlikely that I’d find an answer here, but being that you just quoted it, it seems worth asking.
I don’t think it’s apocryphal, and it’s making me crazy trying to find the citation to demonstrate that it’s not.
Thank you. I see it there, and it’s just absent in my edition. In my edition, that’s chapter 58. The paragraph
Les électeurs dont les droits avaient été affermis par la bulle d’or de Charles IV, les firent bientôt valoir contre son propre fils, l’empereur Venceslas, roi de Bohême.
Speaking of the Linux kernel community as a collective working on the same code is probably not a correct representation, and it’s more like a conglomerate of duchies with an unwritten agreement on certain rules to follow. The whole-system approach of the Rust for Linux community is completely diametral to this.
This seems incredibly insightful.
If “grownups” use the Cathedral model, yes, that does achieve great things, though does come with some downsides too.
The Bazaar model has its own upsides and downsides. One significant downside is, to those used to organized projects, the interconnection between parts seems like amateur hour.
I’ve also contributed to Linux and sort of been through some of the “toxicity.”
I saw the episode with the Rust maintainer leaving 2 distinct ways. 1) A young up comer in the community was putting himself out there pointing out some issues as he tries to make the world better, he got some strong pushback while he was on stage and that looks like it feels awful, and it did feel awful, he resigned. 2) I put myself in Ted’s place and someone is giving a presentation on bugs and shortcomings in work that he has done for decades and is relied upon by millions or even billions of people around the world. “Hey, this code is bad, let me give a Ted Talk on it and get detailed about bugs in it, what’s this interface supposed to do?” feels pretty bad too. It seems toxic both ways.
Which criticism is adult to accept in public and which is not?
Which criticism is adult to accept in public and which is not?
I guess I disagree that the kind of constructive criticism of code presented in the talk is toxic in the first place, but it’s certainly less toxic than the personal attacks that followed (“by asking for documentation of interfaces you are literally forcing your religion on us!”). The latter feels like wildly inappropriate behavior.
I guess I disagree that the kind of constructive criticism of code presented in the talk is toxic in the first place, but it’s certainly less toxic than the personal attacks that followed (“by asking for documentation of interfaces you are literally forcing your religion on us!”). The latter feels like wildly inappropriate behavior.
What’s the definition of toxic then? “less toxic” and “constructive criticism” are very subjective. I think it feels very different when it’s code you’ve written and you maintain.
I honestly don’t know any more of the backstory, but a private message asking about inconsistent uses of a function seems like it is potentially constructive. Being called out in a presentation almost seems like an attack.
What’s the definition of toxic then? “less toxic” and “constructive criticism” are very subjective. I think it feels very different when it’s code you’ve written and you maintain.
I don’t purport to know exactly where the boundaries of toxicity lie, but I can say with certainty that constructive criticism lies outside of those boundaries and the “documentation -> forcing religion” lies within it. I agree that constructive criticism feels differently when it’s your code under critique, but I don’t think the impulse toward defensiveness delineates toxicity; accepting constructive criticism is part of being a healthy, mature adult functioning in a collaborative environment. Responding to constructive criticism with a personal attack is not healthy, mature adult behavior IMHO.
Is there a reason /why/ firmware cannot be updated on the YubiKey? The docs only state it as fact that it cannot be updated.
I‘d rather wipe and update for a software issue than shell out money for a new secure key eventually, when mine is no longer trusted for enterprise stuff.
Could just be the “Google is a big company with different teams who don’t necessarily agree with each other” effect.
I remember hearing somewhere that Microsoft suffers from the Windows, Office, and Visual Studio teams having a somewhat adversarial relationship with each other and I think it might have been the story of ispc where I read that Intel’s corporate culture has a somewhat self-sabotaging “make the teams compete for resources and let the fittest ideas win out” element to it.
Microsoft suffers from the Windows, Office, and Visual Studio teams having a somewhat adversarial relationship with each other
It’s less bad than it was. In particular, Visual Studio is no longer regarded as a direct revenue source and is largely driven by ‘Azure Attach’ (good developer tools with Azure integration make people deploy things in Azure). In the ‘90s, I’m told, it was incredibly toxic because they had directly competing incentives.
Windows wanted to ship rich widget sets so that developing on Windows was easy and Windows apps were better than Mac ones and people bought Windows.
The developer division wanted those same widgets to be bundled with VB / VC++ so that developers had to buy them (but could then use them for free in Windows apps), so developing for Windows with MS tools was easy and so people would by MS developer tools.
The Office team wanted to ensure that anything necessary to build anything in the Office suite was a part of MS Office, so creating an Office competitor was hard work. They didn’t want rich widget sets in Windows because then anyone could easily copy the Office look and feel.
This is why there still isn’t a good rich text editing control on Windows. Something like NSTextView (even the version from OPENSTEP) has everything you need to build a simple word processor and makes it fairly easy to build a complex one and the Office team hated that idea.
Another reason Office implemented their own…well, everything…is that they wanted to have the same UX in older versions of Windows, and be able to roll out improvements with nothing but an Office install. In many customer environments Office was upgraded more often than Windows.
For the OG Office folks, I think Windows was regarded as “device drivers for Office”. (Which, to be fair, it basically was in version 1.)
(I used to be on the Windows Shell team, having the same hopeless conversation annually about getting Office to adopt any of our components.)
I guess the browser is in a similar position, but browser vendors seem OK with using (some) native OS components, perhaps because they grew up with a different political relationship with the OSes than Office.
We will be removing the JPEG XL code and flag from Chromium for the following reasons:
Experimental flags and code should not remain indefinitely
There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome
The Google JPEG XL team wants to reverse this decision.
Concerning the “not enough interest from the ecosystem” argument, it’s noted that Safari has since added JPEG XL support (after Chrome dropped it). The team has made progress towards convincing Mozilla to add JPEG XL support, but they need to deliver a Rust implementation. If Mozilla and Safari both have JPEG XL, it means increased ecosystem interest.
Concerning the “maintainance burden” argument, Mozilla is hesitant to add 100,000 lines of concurrent C++ to their codebase, given the cost (increased attack surface of that kind of code), vs the benefit. If the decoder is written in safe Rust, the maintenance burden decreases.
The reference implementation in C++ (libjxl) is developed by people on Google’s payroll. Only one person on the JXL team is outside of Google. They’re at Google Research in Zurich, and don’t follow orders from the Chromium team.
Huawei is getting soft power by picking brilliant researchers all over the world and funding them to work on whatever they want in Huawei’s name. My personal guess is that they don’t care what people do as long as they are famous and it is recognized as good work by the rest of the community. They have an easy time doing this because western countries are chronically under-funding research, far from meeting their commitments in term of GDP percentage, and moving to less and less pleasant research management and bureaucracies to try to hide the lack of support. (As in: concentrate all your money to a few key areas that politicians think are going to deliver breakthrough (for example: AI, quantum computing), and stop funding the rest.) Foreign countries with deep-enough pockets that play the long game can come up, create prestigious research institutes for reasonable amounts of money, and get mathematicians or whatever to work in their name. You can tremendously improve the material working conditions (travel funding, ability to hire students and post-docs, etc.) of a Fields medal for one million euro a year, that’s certainly a good deal.
(Google and Microsoft did exactly the same, hiring famous computer scientists and letting them do whatever they wanted. In several cases they eventually got rid of the teams that were created on this occasion, and people were bitter about it. Maybe China can offer a more permanent situation.)
Lafforgue claims that Huawei is interested in applications of topos theory, and more broadly category theory, to AI and what not. Maybe he is right because brilliant researchers manage to convince themselves and Huawei intermediate managers of potential applications. Maybe he is delusional and Huawei does not give a damn about industrial applications as long as they get recognition out of it.
Rust is something that makes sense as a part of a sound long-term business strategy. It’s a bit too early to tell, but being one step ahead of other companies in Rust may be strongly beneficial in order for the company to build better products and have larger impact. It is a good opportunity for visibility, soft power, but it also directly gives power/leverage to the companies that develop the language. (I view this as a similar investment to being an active participant to the Javascript evolution process, in particular its standardization bodies. Or wasm, etc.) The situation with topos theory is very different, because it is, in terms of practical applications, completely useless just like most contemporary mathematics; I don’t think that anyone in the field expects any kind of industrial applications of topos theory in the next 30 years. Of course we never know, but let’s say it is not more likely than many other sub-fields of mathematics. This is interesting and valuable fundamental research but, from an industrial perspective, a vanity project.
(There is another sub-field of category called “applied category theory” which is more interested in the relation to applications, and may have applications in the future, for example by helping design modelling languages for open systems. Industrial impact is still much farther ahead than most companies would tolerate, and this is not the same sub-field that is being discussed in the article and by Lafforgue.)
I think the author makes a good broad point: front-end development is difficult, therefore keep things simple and lightweight, use progressive engagement, make precise engineering decisions about which technologies you use, etc.
But the broader rhetorical device of “don’t use React ever, it’s just bad” seems overplayed. I say this as someone who doesn’t particularly like React. Partway through, there’s a note that says, if you’re building an SPA, choose SolidJS, Svelte, HTMX, etc. I work mainly in SolidJS, and it’s great, but if you wrote a bad app in React, you’ll probably write the same bad app (if not worse) in SolidJS.
I think the author is mixing up two effects. Engaged developers are more likely to make better decisions when building applications, and therefore will build better front-end applications (in general, at least). But they’re also more likely to look further afield at new technologies, and less likely to stick with React. Meanwhile, less engaged developers are more likely to choose the default tools and take the path of least resistance. This will generally produce worse front-end projects, because as the author points out, web development is hard. (This point is probably generally applicable to most software engineering.)
But none of this means that React will make you a bad developer, or that you’ll become a good developer if you avoid React. It just means that the people who tend to choose alternatives to React also tend to spend more time thinking about other decisions, and those other decisions are probably more meaningful.
To be clear, I think there’s plenty of room to criticise React, but I think this post would have been better without that criticism, because React isn’t really the underlying problem here.
Wish I could upvote harder.
You can do SSR rendering on React so I can still view your content with NoScript on, right?
But nobody does it, probably because it requires extra effort. So to me, React is white-page city and it makes me want to never learn about React.
But I will argue one thing: I’m not sure it’s all lazy developers. If your organization pushes you to deliver everything by last week and they don’t care about quality, I think you are limited in what you can do. Obviously, if you are highly skilled and can deliver quality quickly, perhaps it does not matter so much, but I kinda despise that we are forced to be the ones pushing for quality in opposition to the organizations that pay us. (Cue “shitty companies win all the time”.)
You should be thinking about users with cheap phones, not the tiny number of users with JS turned off. If the page displays without JS, but sends a lot of JS anyway, it’s hurting most of its users. Footnote 9 says:
Oh, that’s fair. I’m an egotistic human being and I think mostly about myself (I don’t really do any public webdev nowadays.)
In any case, I’m also pretty sure you can make efficient and performant websites using React, even for lowend hardware. In this case, in addition to the “organizations don’t give a f…”, I think the problem is also that even if you deliver something efficient, in most cases the organization will want to load it with trackers and ads, and the end result will be horrible anyway.
It is possible but actually atrociously hard. Like next level near impossible.
You completely underestimate the cost of parsing JS
With all due respect, I can display most websites just fine even on a freakin’ smart watch.
JS has not been slow, neither loading it, neither executing it. What is slow is bad engineering and bloat, like tracking every single browser event, loading ads, etc. These are business decisions, and they would make even the most optimized AAA game engine crawl to a halt.
I … Recommend you to go check benchmarks. Like the one the Original Author does every year.
You would be surprised by the really low power of the mass market of smartphone.
I have seen them in action. Taking minutes to load a react website is the default for them.
Could it be made better? Sure. But when 99% of the experience is that, we can tell them that paradise exist. But that doesn’t help them.
Are you sure it’s due to react’s 140 kb? Or the megabytes of tracking and ad bloatware that routinely comes with these websites?
The kind of website you talk about would be absolutely unusable even when they would send a single HTML file on first load.
Taking minutes to load an app (react or anything else) doesn’t have much to do with the cost of parsing JS. Parsing JS is fast, very fast. If someone decides to ship something that parses 5mb of JS while doing other thousands of things, that will be slow. But the same badly engineered system in the backend would be even less scalable.
I don’t know, vanilla react loads plenty fast, and if you actually look at it from a data usage perspective, then an SPA will perform better, then a server-side rendered solution. Not every interaction should result in a round-trip to the server.
This isn’t what happens on …. almost any website. I think Facebook is an example, it seems like every keypress there sets off a flurry of network activity. Twitter likes to server round trip when you just wiggle the mouse. But traditional websites don’t do this stuff - the server roundtrips happen to finalize a batch rather than on every interaction. (filling in the form is all client side, submitting it goes to the server)
And in data usage, that depends on a lot of factors too. I made a little shopping list program a few years ago. I actually thought about doing it as a progressive web app to minimize data, but like the javascript to do that was alone the cost of like 30 plain forms, so decided against it. I often see json blobs that are larger than the html they generate because it sends down a bunch of stuff that wasn’t actually necessary!
A friend of mine worked on a React app (using NextJS) where SSR caused bandwidth problems because they had a large page with a lot of HTML, and the data for that page was being sent twice: once in HTML form and once in JSON form for the rehydration. Compression helped because of the redundancy, but not enough.
I think the important metric here is latency. Modern network infrastructure has become plenty good in terms of payload bearing, so the difference between loading 500 bytes or 10-50 times that is hardly noticeable, it will largely come in one small or big flow. But doing a small interaction that does a non-async roundtrip to the server will always be on the order of human reaction time, ergo, perceptible.
I’m not saying that everything should be an SPA, but for many web applications the added flexibility (e.g. send something back async, do something strictly on client-side, etc) may well be worth it.
I’m jealous of your internet connection for you to have this perspective. I agree Latency is also an important aspect though.
The author just beats their drum about React being bad and working in a React shop, it all sounds incredibly hollow.
Maybe consider what we see from the outside?
Looking at the comments here it does not seem that many people buy this rhetoric.
Speaking as a dev who remembers when Express and wheeze jQuery were new, this tracks. Save this post for later because a little search and replace will keep it perennial indefinitely!
This is definitely a topic that inspires a lot of strong opinions, but I have trouble reading the same conclusion as the author. I suspect part of this is there are different audiences for Python and they have wildly different expectations for dependencies.
You use Python for ML/AI: You have a million packages, Python is basically the “glue” that sticks your packages together and you should probably use
ul
or frankly bite the bullet and just switch to only using containers for local and remote work.You use Python for web stuff. You have “some” packages but this list doesn’t change a lot and the dependencies don’t have an extremely complex relationship with each other. When you need to do something you should probably attempt to do it with the standard library and only if absolutely necessary go get another package.
You use Python for system administration stuff. Your life is pretty much pure pain. You don’t understand virtual environments, you have trouble understanding
pip
vs using a Linux package manager to go get your dependency, you don’t have a clean environment to work from ever. You should probably switch to a different language.As someone who uses Python primarily for 2, I’m just not seeing anywhere near as much pain as people are talking about.
venv
andpip-tools
solves most of my problems and for work stuff I just use containers because that’s how the end product is going to be consumed anyway so why bother with a middle-state.In my experience this actually works pretty well, as long as you never touch a Python package manager. Important Python libraries do tend to be available as distribution packages, a README with “apt install py3-foo py3-bar” isn’t complicated, and you can make your Python into a proper distribution package if you want automatic dependency management. “System administration tasks” tend not to require a gazillion libraries, nor tracking the very latest version of your libraries.
Mixing distribution packages with pip, poetry, uv, … is every bit as painful as you describe, though - I agree that one should avoid that if at all possible!
So what you are describing is how people did it for a long time. You write a python script, you install the Debian package for that script system-wide and you bypass the entire
pip
ecosystem.My understanding from Debian maintainers and from Python folks is this is canonically not the right way to do it anymore. If your thing has Python dependencies you should make a virtual environment, install those dependencies inside of that environment. Like if you want to ship Python as a .deb I think you need to end up doing something like this:
my_python_package/
├── DEBIAN/
│ ├── control
├── usr/
│ └── local/
│ └── my_python_package/
│ ├── venv/ (virtual environment)
│ ├── script.py
│ ├── other_files_or_scripts
To be honest, that’s one of the big reasons I use Rust for “shell scripting” where I used to use Python. Deployment and maintenance across distro release upgrades are just so much easier with a static binary than with a venv.
I can only imagine that’s also why so many people are now using Go for that sort of thing.
(To be honest, I’m hoping that going from Kubuntu 22.04 LTS to 24.04 LTS on this machine will be much less irritating than previous version bumps because of how much I’ve moved from Python to Rust.)
Yeah, used Python professionally for a decade. Never had a single problem. Came to think that people railing against Python dependency management had a “them problem”.
Python packaging has its issues but have you seen Node where there are a handful of wildly incompatible ways to build/include libraries and nobody can push anything forward?
I have been finding success with
uv run
for these users and either uploading packages to the registry for them to execute withuvx
or inline script metadata for dependencies.I love being able to tell somebody to just go install pipx and then
pipx install
a tool directly from a git repo. It’s so straightforward! I’m very glad thatuv
supports this too. It’s quickly becoming the only python dependency tool I need.I use python for 2.
It is a nightmare. Venv make no sense and I have yet to understand how to use pip-tools. That is after having written and fixed package managers in othet languages and even being a nixpkgs maintainer.
Even nix is easier to write and use that venv. As long as venv are part of the UX of your solution, this is not going to fly.
Can you clarify how venvs make no sense to you? They seem to be the one part of Python that actually make sense to me, and bring some sanity. (I still prefer Nix, too.)
I’m surprised? This isn’t me attacking you, just genuine curiosity. My primary issue with onboarding interns or junior folks onto python projects has been explaining that
venv
exists, but once they grasp the basics of it seems to “just work”. Especially withpip-compile
andpyproject.toml
there doesn’t seem to be a ton of places for it to go wrong, but I’d love to know what about it doesn’t work for you.In my experience, case 1 usually results in “it works on my machine” issues. I worked for a research company providing software engineering support to scientists for a while, and this was a really common problem, and we spent a lot of time trying to come up with fixes and never really found anything ideal.
Case 2 works better, but usually only if the person initially seeking up the project has enough experience with Python to understand what’s going on. I’ve seen projects where Python was added as a quick-n-easy addition to an existing codebase, where the packaging was just a bunch of scripts and makefile commands wrapping virtualenv. This invariably caused problems, but was again difficult to fix later because of subtle behaviours in the hand-written build system.
Case 3, ironically, feels like the easiest to solve here - put everything in a venv and as long as you never need to update anything and never need to share your code with someone else, you’re golden.
Compare and contrast this with Cargo or NPM: these tools ensure that you do the right thing from the start, without having to know what additional tools to install, and without needing to think particularly hard about how to set this stuff up. I worked on a project with a Python component and a Javascript component, both set up by the same developer who had minimal experience in either ecosystem, and the difference was like night and day. On the Javascript side we had a consistent, reproducible set of packages that we could rely on well, and on the Python side of things we had pretty consistent issues with dependencies the entire time I worked there.
I think that the way we look at the Software Crisis is rooted in revisionism.
I strongly recommend ditching all reference to Boehm. Everytime I tried to get evidence of what he describes, or source for his claims, I have ended in Leprechauns. There is no data out there that shows that fixing problems later cost more in term of time or effort than earlier. If anything, the limited reliably data we have on this aspect of project management shows a really limited impact.
The second aspect is that the “Software Crisis” was… not really a crisis. I strongly recommend looking at the publications by the authors of the NATO conference proceedings, or at the work of historians looking at these events. There are a few floating around, they can be found.
Thirdly, the fabled time in which we had “design documents” and “architects” seems to more come from the 90s, in trends to handle the so called “software crisis”, than in “older time”. A lot of what we have seen of projects from the 60s to the early 90s is practices of constant change. The same way we see that in construction (go ask people that manage a construction project how much the work is applying the blueprint produced by the architect. The reality is far closer to software development, with constant adaptation and change).
I understand that this is not the mainstream narrative in the “SDLC methodology” circles. And there are indeed existing tools and ways to analyze software development as a dynamic system over time, far closer to design. But the “death of the architect” is a trope that need “the time of the architect” to have existed. It is highly dubious, from looking at history, that it ever really existed. At least in practice. It may have existed in theory, just like I have architects in my organisation diagram these days.
But I doubt their plan have any bearing with reality, and that any of our devs does what they tell them.
Note also that we do have architects. They are the OpenSource maintainers. One of the reason we do not do as much “architecting” in software is that it takes a lot of time doing “nothing” (or more precisely, learning about the field and thinking slowly through the problem), and that does not align with the modern structure of corporate work. So it all happens in OpenSource, mostly through hobby time.
The use of “architecture” in software was coined by Fred Brooks and Gerritt Blaauw, who were principal designers for the IBM System/360. While the project was famously difficult (it was one of the precipitating events for the 1968 NATO conference), Brooks and Blaauw strongly believed in the value of their approach. See the 1972 paper by Blaauw referenced in the post for more on that.
I don’t have any numbers for how widely their ideas were followed, but they were definitely used at IBM, and since IBM was responsible for training a large fraction of the early programmer workforce, I’d expect that it was used in a number of projects elsewhere.
Either way, I agree my four-paragraph history of early software methodologies is oversimplified. There’s a reason it begins with “once upon a time.” I’d argue, however, that it captures the trends and ideas that Beck and the other Agile methodologists were responding to, which was the main concern for this post.
quoting Slim Charles from The Wire, “the thing about the old days, they the old days”
I wonder how much this is a case of chasing the metric rather than organic adoption.
Some large consultancies here really push the adoption of assistants onto programmers so that they can boast the customers they’re on the bleeding edge of development. The devs grumble, attend mandatory AI training and for the most part pretend they lean on their now indispensable Copilots. It is possible something like that is happening here. The VP of AI Adoption and their department that Google for sure has counts all the lines from AI enabled editors. This then communicated with a good part of wishful thinking all the way up to the CEO.
Or who knows, maybe Google has a secret model which is not an utter crap for anything non-trivial and just holding it back for competitive advantage. Hope googlers here would let us know!
If you read their report on it, it is definitely chasing for metrics. All their other “AI tool for devs” initiatives have abysmal numbers in term of adoption and results, and they are already saying that all their future growth are in other domains of development. Translation: We are out of things we can easily growth hack.
FWIW, if this is anything like Copilot (which I do use for personal projects because I’ll take anything that’ll fry my brain a little less after 8 PM) it’s not even a particularly difficult metric to chase. I guess about 25% of my code is written by AI, too, as in the default completion that Copilot offers is good enough to cover things like convenience methods in an API, most function prototypes of common classes (e.g. things like constructors/destructors), basic initialization code for common data structures, hardware-specific flags and bitmasks and so on.
It’s certainly useful in that it lets me spend more of my very limited supply of unfried neurons on the harder parts, but also hardly a game changer. That 25% of the code accounts for maybe 1% of my actual mental effort.
I will give it that, though, it’s the one thing that generative AI tools have nailed. I’m firmly in the AI should do the dishes and vacuum so I can write music and poetry, not write music and poetry so I can do the dishes and vacuum camp. This is basically the one application where LLM tools really are doing the dishes so I can do the poetry.
I think there are some valuable ideas in this paper. On the other hand… do we really need to get gender into programming languages? Are we going to have toxic masculinity of language design? Is everything in life about oppression, or do people just build systems in a way that they are useful to them, and maybe a different set of people builds systems in a different way, and in a pluralistic world we can have both and learn from each other?
If I had written this paper, I would not have brought gender/sex into play. You can easily substitute feminism for accessibility or other terms for part of their reasoning, and make this paper useful to programming language designers without evoking a political agenda.
Section 2, titled “Setting the Scene: Why Feminism and PL?” spends 2.5 pages answering your question, and sections 5.1 and 5.2 have more.
To expand on the last paragraph of section 2.4, using feminism allows the authors to build on existing work. There’s dozens of citations from outside of programming, bringing in 50 years of material to hybridize with the familiar.
To your suggested accessibility framing, there’s a recent talk How to Accessibility If You’re Mostly Back-End that hits some similar points, but it’s much more about industry practice than language design. (I saw an unrecorded version at Madison Ruby a few months ago but at a skim this presentation is at least very close.)
Yes, yes, this is an essay “escaped from the lab” to justify an academic’s take on the field and to publish a paper.
The existing work being built upon should arguably be, you know, programming languages and programming, instead of feminist theory. I imagine that’s what many folks will bristle at.
I’ve skimmed this once, and I’ll give it a thorough reading later, but sections like 5.1 and 5.2 emphasize–to me at least–that the target audience of this sort of thing is fellow academics and people already sold on the feminist lens. This is an academic talking to other academics, and we do tend to skew a bit more towards professionals creating artifacts and systems.
I don’t really have high hopes of useful discussion on Lobsters here, since the major reactions I expect to this are either “Oh, neat, a feminist look at programming, okay whatever”, “Hell yes, a paper talking about how unfair we are to non-white non-men”, or “Hell no, why do we have to inject gender into everything?”. To the degree to which any of those are correct, our community’s ability and capacity to discuss them civilly while remaining on topic for the site is suspect.
So, flag and move on.
Accessibility, and the lack thereof, is also political.
The idea that feminism is a novel (and somehow intrusive/invasive) political agenda, rather than a lens through which you can view and critique the political agenda we’ve all been raised within, seems to be part of the paper’s basic point. Gender is already implicitly part of programming languages (and all fields of human endeavour), the idea is to recognize it and question if and how (and to what degree) it’s influenced the field. The act of doing so isn’t advancing (or advocating for) a new political agenda, it’s critiquing one that already exists.
BTW, a non-author of this paper swapping “accessibility” for “feminism” here, when the author chose the latter precisely because it is not equivalent to the former, would actually be pretty spot-on example of why adopting a feminist perspective is necessary. Accessibility is about making systems more adoptable to humans with other-than-default access needs irrespective of gender, feminism is about making systems more adoptable to humans with other-than-default gender irrespective of their access needs… they’re literally two topics that don’t overlap except in causing us to critique our “default” way of designing systems; if you think looking at the accessibility bias built into systems is important and/or valuable you probably should think of looking at the gender (and other) bias of those systems is important and/and valuable too.
I have only read the linked article and paper intro (for now), so there might be more, but what seems to be taken from feminism here is the analysis of power structures, social dynamics, shared values and how that all holds back the field by narrowing what research is done.
Reading the intro will provide concrete examples from the authors’ personal experiences.
If the paper was about, say, applying an economic approach to PLT, would you have engaged more deeply to get answers?
I ask this not as a gotcha, but to create an opportunity to reflect on bias.
I personally acknowledge my initial reaction was “why feminism?” but am happy that I also had the reflex of going past that.
I am considerably more willing to believe that feminist critiques are written in good faith than economic ones, and it behooves the reader to understand that neither will represent an “apolitical” perspective.
I agree, FWIW I chose economics with the mindset of “it’s also politics but more accepted.”
Conversely, if it applied a flat earth theory approach, would you engage less? I probably would. Is it wrong to use our past experiences (our biases) with particular fields to determine which lengthy papers we do and don’t read?
So you put feminism and critical studies on the same level as flat earthism?
Not exactly, but not far away.
So the “theory” in the name is already a lie. This is not “theory”, it is politics and ideology.
There is nothing wrong with politics, but please don’t pass off politics as science. And in particular, don’t try to make this non-science the arbiter of all the other sciences. Yeah, I know that this “theory” claims that all the other sciences are actually not scientific because they are just power grabs. Well, that’s just projection.
You, coming to these comments to tear down the work of others published in the same proceedings as your own, by calling it “non-science” and “not far away” from flat earth-ism is demonstrative of the bare misogyny that this paper is asking the audience to start taking notice and accept less of. Stop it.
When we exclude spreadsheet from programming languages, I think we already have to some extent: one big reason it’s excluded is because spreadsheets are perceived as not as prestigious as “actual” programming. And I bet my hat one reason it’s not is because spreadsheets are generally a secretary’s tool. Female secretary most of the time.
There used to be a time where computers were women (or “girls” as the men around them often called them). With the advent of the automatic computer, a good deal of those women turned to programming. And for a time, this role was not that prestigious. Over time it did became so, though. And over time we did see a smaller and smaller share of women going into programming. Coincidence? I think not.
Anyway, giving “programming language” status to spreadsheets would elevate the status of secretaries to programmers, and “real” programmers can’t have that. Hmm “real programmer”. Why do this always conjure an image of a man in my head? You’d have to admit, the XCKD über hacker mom isn’t your stereotypical hacker.
I think the simpler and more correct explanation is that spreadsheets dominate other sectors and industries (engineering, medicine, hospitality) so thoroughly that it simply never occurs to most programmers that they’re a valid programming environment.
This is also why I’ve seen an MBA beat a bunch of programmers’ asses using only pivot tables and sheer stubbornness.
Spreadsheets, in the context of programming (generally excel) are coded as management tools or more generally “business people”. Not by gender (these are, more often than not, men as well, although probably not quite as male dominated as programming).
Do we exclude spreadsheets? Microsoft claimed excel was the most popular programming language in the world in 2020.
https://www.theregister.com/AMP/2020/12/04/microsoft_excel_lambda/
I can’t find the references for them but I’ve listened to a number of talks dating back to at least the 2010s that said more or less the same thing.
Excel isn’t present at all in the technology section of the 2024 Stack Overflow survey results. And that isn’t even specifically a list of programming languages; the page has several categories of technologies, including an “other” category. So while Microsoft may rate Excel—and to point out the obvious, they have a financial interest in doing so!—I don’t think that’s necessarily a widespread view.
I think I disagree on the widespread view comment. Anecdotally most people I talk to agree that excel (or spreadsheets more generally) meet the definition of a programming language/environment. I would argue that the community of users on stack overflow is not representative of the broader population and the population choice is the crux of the issue here.
A few references for more widespread acceptance:
https://ieeexplore.ieee.org/abstract/document/7476773
https://dl.acm.org/doi/pdf/10.1145/2814189.2814201
https://onlinelibrary.wiley.com/doi/abs/10.1002/9780470050118.ecse415
The original question was whether Excel is a PL in the context of PL research. And in that context, I think it’s especially obvious that it is. It has variables (called cells), and loops. It’s Turing complete, and not in a gotcha-kinda way. Excel only has user-defined functions defined in another language, but Google Sheets has user-defined functions defined in the same language. It has control flow, made interesting by cells referencing each other in a DAG that needs to be toposorted before being evaluated. It has compound data: 1D and 2D arrays.
You could absolutely write a small step semantics for it, or a type system for it, and neither would be trivial. In fact I’d like to read such a paper for Google Sheets to understand what ARRAYFORMULA is doing, there’s some quantifier hiding in there but I’m not sure where.
EDIT: Clarity.
Oh, I do think that Excel is a programming language! I realize that my comment didn’t make that clear at all. I was trying to push back on the claim that spreadsheets are commonly considered to be programming languages. I think Excel is a PL, but my impression is that there aren’t a lot of other people who think that.
Maybe it’s just because I’m young, but the first introduction to “historical” programmers I had was Ada Lovelace, and later then Grace Hopper.
Honestly, I can’t say I even necessarily have a generally positive perspective on the “typical hacker” persona - maybe partially because there are some like RMS who come with a lot of baggage.
That is the fun bit. This totally eclipse the reality of the history of our field.
I recommend a tremendous book, “Programmed Inequality” by Mar Hicks, for a historian work on some of this. It is fascinating and may help shed some light on the topic and the lens.
Oh I’ve heard of Mar Hicks before! Thanks for the recommendation. I’ll add it to my reading list. :)
Maybe this is true in your culture, but do you have evidence for this? I have no evidence, only my perceptions.
My perception, in my culture, is that spreadsheets are stereotyped as being for “business people” in general, or for very organized people, or for “numbers-oriented people”.
My perception, in my culture, is that “business people in general” and “numbers-oriented people”, and maybe “very organized people”, are stereotypically male, i.e., that women are stereotyped as less likely than men to be in those groups.
Although secretaries exist in my culture, I perceive their numerousness and cultural prominence as being low now, much decreased since their peak in the 20th Century, I guess because computers and economic downturns made businesses decide that they could do with many fewer secretaries.
TBH, I figured it was excluded because there is one hyper dominant spreadsheet implementation (Excel) that was created by a much maligned company (Microsoft).
Though I suppose that might be why there is one hyper dominant implementation. If people were more interested in spreadsheets we might have a lot more options and genomic studies would be safe from coercion errors.
Building software is complicated. Build systems are complicated. The complexity is further multiplied by the number of platforms/architectures/OS’s/etc … that need to be supported. And this software is AFAIK the project of one guy who releases it for free.
I’m not intending to have a crack at the author personally, but the general mindset really irks me. The nature of open source software has often felt like something of an outlier to me. How many other examples are there at a similar scale where people spend vast amounts of time working on projects that end up being widely used, often for others’ own commercial gain, and yet are given away for free? And not only is it free, but the “instructions” are too so you can make your own version and modify it as you please. It doesn’t feel like there’s a lot in this world that’s free these days, but open-source software is one such thing.
And yet, people still get annoyed when the thing that was freely given doesn’t work for them, as if the author is in any way obliged to handle their specific configuration. It just feels … unkind?
I guess I’m feeling a bit melancholy tonight …
I get you and I don’t really have a good answer for this. I don’t intend to be unkind. It looks like others have been successful building 7-Zip and I’ve updated the article accordingly.
By making your build system obtuse, you’re asking distro maintainers and people who come after you to do more work to be able to package and use your software.
I really doubt the author deliberately made his build system obtuse. 7zip is software from the late 1990s. Most of it is written in C/C++, and it started on Windows then was eventually ported to Linux. In that context, it’s not even particularly obtuse; it actually doesn’t seem as bad as I remember, having dealt with quite a bit of similar software in the early 2000s.
Supporting both non-cygwin Windows builds in that era and native Linux builds from the same tree was always obtuse. Did you ever try to build the original mozilla suite back then? I remember spending a solid week getting that working. When StarOffice was released as open source, did you try building that? It was hell on wheels. And those were projects with large teams behind them. Not one-developer shows.
I also don’t think the author is asking distro maintainers to do anything, FWIW.
On the plus side, once you script up such a build, it tends to be pretty stable, as there’s a strong incentive not to mess with the build system if it can be avoided at all :-)
I think that this is a problematic way to look at it, and one that hurt opensource.
It is distro maintainers and people that come after me that want to use my software. The onus should be on them to decide if they want to use my gift or not, based on their ressources.
Not on the maintainer to spend ressource and knowledge they may not have or want to, in order to make packagers life easier.
I would go deeper. It is the fundamental element that make FOSS works to invert that usual relationship, and fighting against it is one of the major contributor to burn out and anger in FOSS.
This is something you can definitely optimise for.
“Deletability” is a real quality your code can have and I recommend optimising for it. It is why I recommend against class based OOP. Why I do Elixir, Erlang or Rust. Some environments help you in that direction.
The argument is that most vulnerabilities come from recently-added code, so writing all the new code in a safe language (without touching old code) is effective at reducing the amount of vulnerabilities, because after a few years only safe code has been recently added, and older code is much less likely to still contain vulnerabilities. (More precisely, they claim that vulnerabilities have an exponentially-decreasing lifetime, pointing at experimental findings from previous research.)
I find the claim rather hard to believe, it is implausible and my intuition is that it is completely wrong for many codebases. For example, if I have an unsafe-language codebase that has very few users and does not change often, by the reasoning above we could wait a few years and all bugs would have evaporated on their own? Obviously this is not true, so the claim that vulnerabilities have an exponentially-decreasing lifetime must only hold under certain conditions of usage and scrutiny for the software. Looking at the abstract of the academic publication they use to back their claim, the researchers looked at vulnerability lifetimes in Chromium and OpenSSL. Those are two of the most actively audited codebases for security vulnerability, and the vast majority of software out there does not have this level of scrutiny. Google has setup some automated fuzzing for open source infrastructure software, but is that level of scrutiny enough to get into the “exponential decay” regime?
So my intuition is that the claim should be rephrased as:
Phrased like this, this starts sounding plausible. It is also completely different from the messaging in the blog post, which makes much, much broader claims.
(The post reads as if Google security people make recommendations to other software entities assuming that everyone has development and security practices similar to Google’s. This is obviously not the case, and it would be very strange if the Google security people believed that. They probably have a much narrower audience in mind, but miscommunicate?)
I think another difference between Google’s perspective and yours, in addition to that their old code gets vulnerabilities actively hunted, is that they’re focussing on codebases where large amounts of new code are added every year, as they add features to their products.
If the alternative is “keep doing what you’re doing” (and “rewrite everything in a safe language” not being an option), I’m sure everyone’s better off adding new stuff in safe languages, even if the unsafe bits don’t get as much scrutiny as Google’s stuff. Eventually, you’ll probably rewrite bits you have to touch anyway in a safe language because you’ll feel more proficient in it.
Okay, yeah, “your software will be safer if you write new stuff in a safe language” sounds very true. But the claims in the blog post are quite a bit stronger than that. Let me quote the second paragraph:
An exponential decline in vulnerabilities is a rather strong claim.
But it’s an extremely realistic claim for any code base that is being actively worked on with bugs being fixed as they are found. That may not apply to your code bases, but I think it’s a very reasonable claim in the context of this blog, which is making something that is widely used much safer.
I don’t find it realistic. Bugs in general, sure: we find bugs by daily usage of the software, report them, and they get fixed over time – the larger the bug, the sooner it is found by a user by chance. But security vulnerabilities? You need people actively looking for those to find them (at least by running automated vuln-finding tools), and most software out there has no one doing that on a regular basis.
Because most software is not critical to safety? At least yet, because there are juicier targets?
They support the claim with real world measurements over several years.
I went to look a bit more at the PDF. One selection criterion is:
How many CVEs have been reported against the software that you are writing? For mine, I believe that the answer is “2” – and it is used by thousands of people.
My intuition is that the experiments in the paper (that claim exponential decay) only apply to specific software development practices that do not generalize at all to how the rest of us write software.
Yeah, that sounds like hyperbole for sure
That claim is based on some Google Project Zero work, but it’s not aligned with my experience either. I suspect that it’s an artefact of the following flow:
Imagine that you fix all of the occurrences of bug class A in a codebase. Now you write some new code. A year later, you look for instances of bug class A. They will all be in the new code. In practice, you don’t fix all instances, but you fix a big chunk. Now you’ll see exponential decay.
The converse is also common: Find an instance of a bug class, add a static analyser check for it, never see it in new code that’s committed to the project.
The problem with all of these claims is that there’s no ground truth. If you could enumerate all of the bugs in, say, Linux, then you could (moderately) easily map them back to the commits that introduced them. If you could do this, you could also ship a 100% bug-free version of Linux. In practice, you only have data on the bugs that are found. That tends to be bursty as people find new techniques for identifying bugs.
In the things that we’ve ported to CHERI, I don’t think we’ve seen evidence that memory-safety bugs are more likely to be present in new code. Quite a few of the bugs we’ve found and fixed have been over 20 years old. There is certainly an effect that bugs that cause frequent crashes get fixed quickly, but the more pernicious ones where you’ve got a small out-of-bounds write, or a use-after-free that depends on concurrency and doesn’t trigger deterministically, are much more likely to hide in codebases for a long time.
Nota bene, one from a code base of similar heritage is about to drop with incredibly wide attack surface.
Doesn’t this undermine an argument you’ve used for why to use an old TCP stack in C rather than newly written one in Rust? As I recall, the thinking went that the old TCP stack was well tested and studied, and thus likely to be better both in terms of highly visible bugs and in security bugs, than a newly written Rust version.
Possibly. I’d like to see a new TCP/IP stack in Rust that we could use (which needs some Rust compiler support first, which is on my list…) but yes, I would expect very new code to be buggy.
I think I expect something less of a decay. Very new code has had little real-world testing. A lot of things tend to be shaken out in the first couple of years. Very old code likely has a lot of bugs hiding in it that no one has looked at properly with more modern tooling. I’m not sure where the sweet spot is.
My main worry with a new TCP/IP stack is not that it’s new code, it’s that the relevant domain knowledge is rare. There’s a big difference between a new project and new code in an existing project. Someone contributing code to an existing TCP/IP stack will have it reviewed by people who have already learned (often the painful way) about many ways to introduce vulnerabilities in network protocol implementations. If these people learned Rust and wrote a new stack, they’d probably do a better job (modulo second system problems) than if they did the same in C. But finding people who are experts in Rust, experts in network stack implementations, and experts in resource-constrained embedded systems is hard. Even any two out of three is pretty tricky.
The most popular Rust TCP stack for embedded is, I think, smoltcp, which was created by someone who I am very sure is an expert in both Rust and resource-constrained embedded systems, but I have no idea how to evaluate their expertise in network stack implementations, nor the expertise of its current maintainers.
It might not be suitable anyway since it is missing a bunch of features.
We use smoltcp at Oxide, so it is at least good enough for production use, if it fits your use case. As you say at the end, it’s possible there are requirements that may make that not work out.
I didn’t really find this too insightful. Main takeaways imo:
He acknowledges that some people take this all very personally but doesn’t really say anything at all about it. He half jokingly says he likes arguments right off the bat.
He thinks it’s good that Rust and C people see things differently, they bring different perspectives to the table.
He thinks Rust will likely succeed but that even if it fails it’ll be fine because they’ll have learned something. Some people seem to think Rust has already failed in the kernel, he doesn’t feel that way.
Kinda just random stuff otherwise, like that C in the kernel is weird and abnormal.
Notably, he doesn’t seem to actually express any kind of disapproval at all or acknowledge any problems brought up by various contributors like Asahi, despite being asked. He doesn’t address the core issue of Rust devs wanting defined semantics either, which is a real shame since I think that’d be an area he could really meaningfully make a call on in his position.
I wish he’d just said “Yeah so my perspective is that Rust people want defined semantics and that blah blah my opinion blah. And also, in terms of how they interact/ how this led to a maintainer resigning, I want to say blah blah blah blah”. I didn’t get that, so I’m a bit disappointed.
I think that’s why this is interesting. IMO, it sounds like things are working as expected. People burning out or having strong disagreements are not considered problems. They are considered a sign of energy.
Whoever joins the project will probably need pretty thick skin to push things forward. Not too surprising at the end of the day.
It’s considered a pretty big problem. See this LWN article from last year, for instance. They just don’t want to acknowledge the problems that they themselves create by their behavior.
Yeah, I know other people consider it to be a problem. I’m saying that I don’t think Linus considers it to be a problem. He’s known for having a hard edge so I don’t think he’s avoiding acknowledging it. He seems to think this is a productive process.
That’s what I find interesting about this. Clearly it’s a bit toxic.
Ted Ts’o is the first maintainer cited in that article, and he is describing some reasons for burnout. Wasn’t he the very aggressive audience member at a recently linked video of a conference who told Rust people he will do nothing to help them and will break their code at will? With that attitude being cited as a key reason for Rust contributors burnout?
(Not that this would add much to the discussion if true; it’s just funny in a sad way)
That is correct.
Personally that sounds sad, if not outright terrible. That good work can’t stand on pure merit. Instead you have to dig in and fight to improve something
It’s any wonder open source developers of all stripes get burned out and leave
You’ve captured the essence so well in so few words! It is like that though, and I suspect it always will be.
But like… He didn’t say anything about the actual problem. There’s two things that happened.
A discussion was had about how to encode kernel semantics into types.
That discussion went horribly.
He had virtually nothing to say on (1), which feels really unfortunate. He had almost nothing to say about (2) other than sort of vaguely saying that some people get upset and argue and that’s okay.
That is just so weird and useless to me. We got nothing about the actual situation.
I think those two problems are actually symptoms of the culture. Linus seems to think that the situation was somewhat productive. He seems to like the clash of ideas. That’s the real problem, IMO. We should be able to have debates, but with a more moderate intensity.
I find it interesting because the technical issues are clearly locked behind the culture. Given what he said, I don’t think there is going to be any movement socially. Whoever goes into the project will probably need to have really thick skin in order to get anything done.
I watched the video really hoping he was going to weigh in on this, in particular.
Whether or not rust in the kernel ever becomes interesting on its own, it would be a big win for everyone if it made those semantics better understood. I was disappointed that he didn’t choose to discuss it.
It’d have been nice if he weighed in on the social issues that pushed the maintainer out, but given his history I’d have been surprised if he did. I really thought he might have something to say about the under-understood semantics, though.
I think it’s naive to expect him to make any strong statement in an interview. Whatever work he might be doing to facilitate interactions between contributors has to happen behind private doors.
If you were involved in a similar thing at your job, would you prefer that your CEO tries to sort things out via small meetings, or by publicly blasting the people that he thinks are not doing a good job in an interview, maybe without even ever speaking to them directly first?
Usually these things are indeed handled publicly on the mailing lists. This is exactly the sort of thing I would expect Linus to address directly, yes.
I’m not involved in the development of Linux but I would be extremely surprised if this kind of situation didn’t have any private communication attached to it.
Regardless, even if it is fully handled via mailing lists, this interview is not a mailing list.
Indeed these sorts of things have historically been handled quite publicly, with Linus weighing in. Even fairly recently, even specifically with regards to maintainer burnout, even specifically specifically with regards to filesystem maintainer burnout - see BCacheFS.
It is definitely the status quo for these things to be handled publicly. It’s also the status quo at companies. When something happens to a company publicly it is not uncommon at all for a CEO to have a company-wide meeting (say, the “town hall”) and to weigh in on it directly, even if they have discussed such things privately.
Okay but he hasn’t weighed in on the mailing lists. This is the first time, to my knowledge, that he has talked about this. So yes, I expected him to say at least something relevant - he barely discussed the topic at all.
So you’re saying that it is more probable that him and other leadership did absolutely nothing instead of having private conversations. Not what I would bet my money on, sorry.
That is not what I’m saying at all. What I’m saying is that regardless of what has or has not been discussed in private, the normal expectation is for these things to be handled publicly as well.
+1
All the complaints about the Linux Kernel culture sound very immature and naive. They sidestep acknowledging that it’s a hugely successful collaboration - one of the biggest software projects in history that welcomes a huge diversity of contributions.
In itself, accepting the Rust attempt is extraordinarily open minded. What other critical system is willing to experiment like that?
Passionate people are going to burn out and most projects take the easy way out by never letting them contribute. Rather than blaming Linus let’s praise him and appreciate his new more diplomatic approach
Many of them? This is broadly speaking the norm - though not every experiment is about rust specifically.
For example, the most similar projects that come to mind are:
If we stop and look at large scale open source projects…
Maybe it wasn’t your intent, but this statement falsely implies that the people contributing to the rust for linux project don’t/wouldn’t contribute to linux otherwise. Rather it’s led by people with a long history of contributing to linux in C. The most impactful piece of rust code in linux (or really a fork therefore) is probably the Asahi linux graphics driver, and not only was that not only written because the kernel was accepting rust, but the author learned rust in order to write that driver in it instead of C.
Note that of your list, the only items that actually count are Chromium, Gnome and KDE.
All other items are projects that belong to Big Tech companies that have a huge interest in riding hype waves. Rust will most probably provide them with value past hype/marketing/hiring, but there’s absolutely nothing open minded in their behavior.
The criteria was “other critical system” not “other critical system not run by a big tech company”. If anything Gnome and KDE are the least applicable not exactly being critical systems… admittedly I included them anticipating the objection that the most similar projects (windows and apple’s OSes) are run by very different organizations… and while not critical systems they are infrastructure level projects.
If you really want to avoid “big tech company” vibes though, the best example I can think of off the top of my head is curl. Open source project, run by some dude, used literally everywhere, security sensitive. There too, experiments with rust (largely failed, but experiments none the less). I’m not even cherry picking here, because I don’t have to, experimentation with new technology is the norm in most of the programming industry.
Incidentally I’d note that a huge amount of linux development is funded by big tech companies.
Swift really doesn’t have any hype/marketing/hiring benefits beyond what Apple itself created, so I don’t see how this argument stands up as it applies to them.
Microsoft may have adopted rust after public enthusiasm for it, but Microsoft has a long history of programming languages research along the same lines (e.g. see TypeScript, F#, F*, Project Verona, Dafny, Bosque). I don’t think the idea that they were open to this experiment merely because of hype stands up to plain evidence that they have been very interested in languages that provide better correctness guarantees for a long time - and investing huge sums of moneys into trying to make it happen.
PS. Presumably you mean Firefox and not Chromium, Google being a rather larger company than Mozilla?
Chromium is controlled entirely by Google…
This is nonsense.
Android is more than experimenting, last I checked they were at at least 30% of new code in a major release being in Rust. This is quickly becoming their main language for the low level stuff.
Are you saying that Linus has been a big success because of, rather than in spite of, the toxic culture that pushes people away? Is accusing contributors of being religious nuts good or bad for Linux?
I think it’s saying neither? Simply, this is the culture that produces the Linux kernel. It may not always be a pretty process, but it’s managed to produce a very successful kernel.
(quote some old adage about not wanting to know how the sausage is made)
It depends? There are people on both side of that accusation. To me, the connotation of “religious nuts” is that they’re unwilling to compromise or change their opinion when presented with evidence. So this could just be the culture trying to drive away people they find difficult to work with?
There is a lot to say here, as a critic of the Frontend indistry.
Well, “easy to hire” goes along with the Fordist explanation the author mentions for the prevalence of React.
Lovely engineering work!
It saddens me to read the negativity of other comments? Why hoping no one replicates this? If anything, the very reason whe ended up where we are is that people don’t do more or this. This is shining a super bright light that we are too far gone on the mindset of everything being disposable and just buy new, don’t even think of fixing it.
This video got over a million views. For the awareness it has risen alone, it has already been worth it.
It has been a long time since I saw an interesting hardware project reclaiming used parts. Good work!
Well, maybe not no one, but I suspect that the vast majority of those million viewers probably weren’t people I’d trust to do this safely without risking turning their interesting hardware project into an improvised incendiary device.
Lithium batteries have a variety of possible failure modes that aren’t necessarily obvious but they all typically end in overheating and possibly a fire. If you’re “lucky” it will happen while you’re working on it and are likely to be able to control it quickly. If you aren’t it will happen later while charging unattended and burn your house down.
In addition, you have the big sponge soaked in liquid nicotine, which is poisonous and absorbs through skin
Jesus, mate! Take a chill pill. Isn’t the sole reality of the numbers presented in the beginning of the video a human scale tragedy per se already? Why this obsession with the hypothetical case some one replicates it without basic knowledge of it? Surely that would be their fault, their stupidity and ultimately them who would suffer the consequences?
I don’t know how to say this nicely, the mindset of hyper idiot proofing anything is so boring. Just enjoy the video. If anything I am sure it inspires more people to do something constructive and to be more mindful. Why not focusing on that instead?
This is not idiot proofing.
Several kids have lost hands to DIY with lion batteries already. There is a reason these things are controlled. We do not let people run around with orphan source either.
You do not need to be “stupid” or “idiot” to badly hurt yourself and others with this.
I’m wondering about battery drain with both LiveView Native and LiveView. Doesn’t a persistent connection necessitate keeping the radios on (or at least, to switch them on frequently)? My understanding was that it would result in significant battery drain but maybe that’s outdated.
Mobile devices also have other constraints that could make LiveView less suitable:
I was thinking that specifically the “if those images are not cached” condition would be more likely to be true with LiveView Native than with a native app. I know that browsers cache previously-downloaded resources, but I don’t know whether the LiveView Native runtime caches previously-downloaded parts of UI.
Edit: Sorry, I just realized I was mentally comparing LiveView Native with browsers, not with native apps. I don’t actually know if normally-written mobile apps cache resources like browsers do.
On iOS at least you’d only be able to keep the connection open reliably when the app is foregrounded, most apps these days would probably make a bunch of network requests at that point anyway. Although it does optimise these by batching requests together and turning the radio on at once. I wonder how big of a difference it really makes, I guess it really depends on the app and how it’s used. I agree with the sibling comment, there are other reasons this model might not be ideal for mobile.
I recently re-encountered Mark Pilgrim’s argument about why rejecting invalid XHTML is a bad idea which is now over 20 years old. It’s a funny story, a spicy takedown, and a sobering example of how hard it is to ensure valid syntax when everything is stringly typed and textually templated.
I wonder what we have learned in the last 20 years.
There’s a strong emphasis in the web world that (repeating a “joke” I first heard too many years ago to remember the source) “standard” is better than “better”. I kind of both like and hate this aphorism: I think it works as a straightforward truism about how change works in widely-deployed systems; but I hate what it says about the difficulty of improving things. I also hate it because it can be used as a trite way to dismiss efforts to improve things, which is the main reason I don’t repeat it very much. There are lots of specifics about different technologies that can affect adoption, and sometimes there is pent-up demand or a qualitative change that means better really is better enough to displace the standard.
One thing that strikes me now when looking back at the Web 2.0 period is that it wasn’t until after the XML fad started to deflate that the browser began to be treated as a development platform as opposed to a consumer appliance. Partly that was because of ambitious web apps like Google Maps, but also because Firefox and Firebug made it possible to understand how a web app worked (or not) without heroic effort. I wonder if strict XHTML and XML might have been more palatable if that order had been reversed and browsers had grown rich debugging tools first.
Another thing is the persistent popularity of stringy templatey tooling. I loved the LangSec call-to-arms, that we can and should eliminate whole classes of bugs by using parsers and serializers to and from typed internal representations. But it fizzled out because it’s evidently too hard. Even when the IR has the bottom rung complexity of JSON, we still end up with tooling that composes it by textual templating YAML. Good grief.
Having said that, the web is actually getting better in this respect, because there’s a lot more DOM hacking than templating, convenient quasiquoting with JSX, easy CSS-style element selectors. It’s a huge amount of intricate technology. But it has taken a huge amount of time and effort to get the DX to the point where many developers prefer to work with HTML’s IR instead of saying, fuck it I’ll use a template.
The lesson I take away is that the kind of correct-by-construction software required by XHTML and advocated by LangSec requires much more effort and much higher quality tooling than programmers expect (ironically!). It’s achievable for trivial syntaxes like Lisp, hard for even slightly more complicated cases like JSON, only possible for seriously complicated cases like HTML if they are overwhelmingly popular.
I would say that LangSec won, but differently than they had thought.
Rust came through and with it a whole new set of tools for parsing and (at the margin) serialising. And an easier on ramp for people interested in that kind of problems.
So now, we got a ton of Rust based good parsing all around.
Oh wow, Mark Pilgrim. I haven’t thought about him in a while. He was wise to get out the game before social media came and made posters of us all.
I mean, as much as I tried to like XHTML 1.0 strict and managed to get my personal websites to conform, it was just unrealistic to deploy it at scale, even if we ignore any CMS, libraries or whatever. HTML 5 in contrast, I’m just as confused as the author why people don’t validate and fix their errors.
Here is my question.
Does it matter if business leader care?
Also another one. Where is the proof that “healthy habits” work?
So far, what seems to have worked, with quantifiable results is
None of that seems to be things that business leaders control or have an impact on. And none seems impacted by healthy habits.
Maybe it is time for the infosec crowd to stop navel gazing, look at the world as it is, and rethink their paradigms.
Maybe the problem is not that noone cares. Maybe it is that what you offers is not helpful and does not work because it is not adapted to the problems at hand
The classic…
Yes. This, exactly.
In my experience, unfortunately, it does, because both the adoption and the implementation of every solution you’ve listed hinges on office politics to some degree. It’s not a good thing, mind you, but right now it’s one of the things that helps.
For instance, better tools work, but they have to displace worse tools that, more often than not, someone has personally championed and isn’t willing to give up. And if they don’t need to displace anything, they’re still an extra cog, and people will complain. I’ve literally seen a three-year MFA postponement saga that ran on nothing but department heads going “but what if I lose my phone/token when I’m travelling for a conference or to meet a client” and asking if maybe MFA could only be deployed to employees in a non-leadership position and/or who only work from the office.
So in many cases, someone just has to take out the big hammer, and right now someone at the top caring is about the only way that happens.
This sucks, 100%. Nobody other than specialists in a field should “care” about that field. Business leaders also don’t care about e.g. export regulation compliance, and don’t understand its benefits, but their companies nonetheless comply because they understand the one thing they need to understand to make it happen – the company is liable if it doesn’t comply.
That’s what allows a whole array of unhelpful solutions to proliferate. There’s some value in going through the motions, but in the absence of risk, there’s no motivation to either hurry or to improve the status quo. That’s why half the security solutions market right now is either snake oil or weird web-based tools where people click around to eventually produce Excel sheets where half the cells are big scary red and the other half are yellow.
Honestly, I see very little value in “educating” business leaders on this topic (whether in order to get them to care or in order to get them to understand). Half the tech companies out there operate with so much personal data that adequate controls on its disclosure shouldn’t hinge on having someone in the organisation who can explain “business leaders” the benefits of not having it stolen by identity theft rings. Especially with “business leaders” being such a blanket term.
Software (and software security, in particular) isn’t exceptional. If we can get companies ran by people who don’t know the first thing about physics to handle radioactive material correctly for the most part, we can get companies ran by people who don’t know the first thing about software to handle personal data correctly for the most part, too. Like, if all else fails, it should be possible to just tell them it’s radioactive :-).
For every leader that cares, there’s 8 who just have been told to do X because it’s in this big book from ISO and there’s a big bank customer we’re trying to sell a fat support contract to.
All of these are INCREDIBLY good points, and very close to my thesis: People don’t care about security because it is often presented as a set of tasks for the sake of security, not for the specific meaningful benefits they produce.
If someone said “we have to do backups because backups are important”, nobody would do them. No matter how often data was lost. Until someone connects the dots between the activity called “backups” and the easily understandable benefit called “recovering data”, it’s hard to get leadership to justify, care, or pay.
Once folks have that in mind, all the other stuff - easier tools, better organization around healthy security practices, etc - starts to make sense and look good (and reasonable.
People do not do backup. Mostly. I have nearly never seen an org with good backups.
Backup are done by engineers in the default stack everyone runs without being defended or asked for permission.
That is my point. You are looking at the wrong problem.
In my experience, there are two exceptions to this:
Aside from this, it’s basically only places where backups are so easy that no one thinks about it. For example, places that store all of their data in cloud things that are automatically backed up.
And I think this is a great analogy. This is why CHERIoT (and CHERI in general) has been so focused on developer ease of use. Writing secure software should be easier than writing insecure software.
Right? But then we should focus most of that infosec budget toward tools to help build secure software and their UX. Not into “healthy habits” or whatever the infosec crowd is sniffing this week? Righr?
The problem is that this needs to start at the bottom of the stack. To do it well, you need changes in the hardware and then in kernels, and then in userspace abstractions, and so on. The companies with the ability to fix it have no commercial incentive to do so.
I dunno, I’ve been doing this for a bit (35 years). A LOT of time spent on backups - both structuring them, executing them, validating them, etc.
To be clear, nobody WANTS backups. They want restores. And when they want them, they want them YESTERDAY. But backups - either on the modular level (of a specific db or codebase) or the global level (a whole constellation of systems around a related service) are done, and done dilligently and regularly. Nestle, National City Bank (then PNC), Cardinal Health, a couple of metro school systems, a hospital or 3… Not to mention the common chatter at DevOpsDays and SRECon type events.
Everyone’s experiences are different, but that’s what I’ve seen.
I think the difference is that for a lot of businesses there’s no way to wave away backups, and they are one specific thing. “Security” especially in the era of “best practices” is super amorphous and ever changing rather than one thing everyone can agree it would be negligent to not have.
Yes, this exactly.
I don’t see how this disagrees. These sound like things that make healthy habits possible to actually sustain and/or the habits themselves. The equivalent of having nutrition labels on food, readily accessible gyms, whatever - point being, insisting on healthy habits in a vacuum with no actionable plan, resources, or support is basically just wishful thinking, too.
It does matter if business leaders care because otherwise time and other resources don’t get allocated to any of these things.
The infosec crowd does love its navel gazing, but you’ve made a poor case for this being an example. Best case, I think you could argue that it would’ve been better to post this with the actual actionable plans because then it could be fairly criticized, and right now it’s just one or two steps up from tautologically true.
And my point (and the points the actions I’ll be sharing soon) is not WHETHER leadership should care, but WHAT they should care about?
I think we (IT practitioners in general, infosec folx in particular) have been pushing the wrong message. We’re insisting on emphasizing the technical aspects rather than the business aspects; or we’ve been proving business benefits with technical examples.
I think we IT folks can do a better job speaking the language of business, and in so doing, we’ll have a better shot at getting these critical needs addressed.
In the 80s and 90s, there was big push to get programmers to stop writing new code and reuse existing code. I guess now the pendulum is now starting to swing back the other way.
Yeah, I remember when this was called “NIH Syndrome” and it was not portrayed as good.
Serious question: isn’t it still called that and isn’t is still portrayed as not good? (I’m genuinely asking. This website aside, have things really changed that much?)
Not for long once they all start to realise the costs
A good balance that I’ve used in some projects (like Apache Superset, eg), is to have an interface to the dependencies, and always use the interface instead of the dependency directly. This way if there’s a CVE or the need to change to a different dependency you only need to do it in one place. In Superset we do that for frontend components, and for JSON in the backend.
There was a very spirited conversation about this recently!
Oh, thanks! I completely missed that!
The wealth of Open Source code probably plays a part in that. It was so easy to just incorporate code that you didn’t “have to” write. But by now people have realized that they use maybe 20% of any dependency they add and most of them are not even that helpful.
A balance has to be struck somewhere in the middle. Smaller dependencies mean they’re more composable so it’s better in the long run.
The race to decompose everything into its smallest functional components and the compose everything from those components is a fool’s errand whether it is implementing it internally or adopting an external dependency. The left-pad incident was nearly a decade ago and wasn’t just about the risk of third party dependency, but also an abject lesson in over abstraction.
The reality is that we’re left with the axiomatic notion that the “right size” for a module is the “right size” and that the correct number of external dependencies is somewhere between zero and some number much larger than zero but certainly much smaller than what is typical in a contemporary codebase.
A tangential thought I’ve had recently…. I wonder if all the current craze around “AI” could lead to some better fuzzy heuristics that help drive towards smarter static analysis for code smells that are other wise in the category of things which aren’t easy to define objectively but are “I know it when I see it” types of things. I’d certainly welcome some mechanical code review that has the tone of a helpful but critical human saying things like “are you sure you need to pull in a dependency for this?”
I like Linux and everything, but it’s often surprising to me how immature the project seems, both culturally and technically. Culturally, episodes like this Rust for Linux saga reveal the deep, deep toxicity in the Linux community (e.g., deliberately conflating “we would like interfaces documented” with “you’re trying to force your religion on us!”), but even more mundane things like “no one actually knows for sure how the ext4 filesystem code works (much less an actual specification) so people can’t build compliant libraries”.
I’m a little surprised that there aren’t any adults forking the project. I also wonder how the BSD communities compare.
I’d be deeply surprised if you couldn’t point to similar dynamics in any large, multi-decade project of a scope anywhere near the Linux kernel. I’d be willing to bet the only reason you don’t see similar dynamics from, say, Apple or Microsoft, is that they’re behind closed doors.
The FreeBSD project has recently had discussions around Rust and they played out somewhat similarly. (Disclaimer - I’m the author of the story at the link.)
Most of the time Linux folks (and FreeBSD folks) manage to do amazing things together - occasionally there is friction, but it (IMO) is blown out of proportion because “maintainers agree and project XYZ goes off with minimal hassle” is neither interesting, newsworthy, or observed by the majority of people outside the community / company in question. That’s not saying that things shouldn’t improve, but I think some perspective is in order.
Actually, I would think that large long lived projects that are like Linux would be rare unless propped up by powerful forces. Apple and Microsoft must surely have internal documentation for the internals of their software. FreeBSD I am fairly certain is much cleaner and well documented than Linux. And surely the level of immaturity often displayed on the LKML is rare outside of Linux.
What makes you think this? Apple barely has documentation for the externals of their software.
Apple documentation might be poor, but I suspect in a functioning organization that lacks documentation, you can at least ask people who work on it how to properly use it. (And hopefully, write documentation with that.)
Assuming any of them are still around…
Microsoft didn’t, when I was there.
Same, though there are some exceptions (e.g., the original NT team had good design doc culture, as did the original .NET CLR team). One of the great things about the DoJ settlement was the requirement to document the server and Office protocols and formats, which weren’t documented well even internally. Sometimes it takes a Federal judge to get developers to write stuff down…
Perhaps, but it was somewhat surreal to be told we’d be fined €1.5 million per day for failing to hand over documentation that didn’t exist. I don’t think anyone was acting in bad faith, but just like the earlier comment, under some over-confident assumptions.
The long run result can be seen in things like MS-FSA, MS-FSCC and all the rest. It’s very clearly documentation by developers reading code and transcribing its behavior into a document, not some form of specification of how it was intended to behave.
But that’s fine. Knowing how something works is more important than knowing how something was intended to work, if you want to use it.
Agreed. And I imagine if the RfL folks manage to pry anything loose about the Linux rules, it might look kinda like that too.
you forgot your /s
OS/360 has, under various names, been under development since the early 1960s. During that time, multiple generations- literally- have worked on it.
More than anything else, this is a documentation issue: if a slowly-changing population of practitioners is to understand something, then it has to be documented.
Even medieval alchemists understood that, and when their ability to describe and explain the behaviour they were observing became inadequate their philosophy was supplanted by something by something more suited to the task.
To be clear, when I said “similar dynamics” I meant that there’s plenty of immaturity and in-fighting in other communities and within companies - not the documentation part, really. I’d freely admit that Linux has documentation problems, one of which surfaced today, because companies like to pay developers but not people to document things. (That has been a hobby horse of mine since at least the first year of Google Summer of Code when I asked the people running it when the Hell they’d sponsor a summer of documentation - which is more sorely needed.)
Anyway - you look at things like Ballmer throwing a chair at somebody or Steve Jobs’ treatment of employees during his time at Apple or find someone willing to talk to you about the skeletons in the closet of any major, long-lived tech company. I’m pretty sure you will find lots of toxicity, some that surpasses anything on LKML, because people are people and the emphasis on kindness and non-toxicity is a relatively new development (pun only slightly intended) in tech circles. It’s a welcome one, but cultures take time to change.
That is very uncharitable take on the Linux project, especially given its success. I’d go out on a limb and state this kind of drama is involved in every single human endeavour, technical or not, government or private.
The article seems to come from someone who is unhappy with the way the Rust saga unfolded, and is simply sounding off his frustration over the internet. There isn’t a lot of information to gain from it, since it a lot of it is personal too (Because Linux is a bunch of communes in a trenchcoat…).
Edited to add: I believe there will be many such articles from the Rust side of the aisle, while none or very few from the Linux side. The S/N ratio is going to favor Linux, than Rust.
My comment isn’t disputing or diminishing the success of the Linux project, but the project has been notoriously plagued with toxicity since its inception. And yes, toxicity exists in all large projects, but toxicity is not a binary and most projects seem to have quite a lower degree of it than is present in the Linux project. It’s worth noting that my remarks are not based solely or even principally off of this article or the larger Rust for Linux fiasco, but also on a lot of lurking on Linux mailing lists and so on. Other projects have their fair share of drama and toxicity (even the Rust project has had some controversy), but Linux stands apart. I don’t think it’s controversial to say that Linus has a reputation for having been fairly toxic, and it doesn’t seem like a flight of fancy that this might have had cultural consequences relative to projects that were founded by more emotionally regulated people.
I’ve heard arguments similar to what you are saying several times in the last two decades. I have to politely disagree, that’s all I’d say. And whether I like it or not, Linux is a successful project, and maybe some of its success is attributed to how it has been run by its founder.
On a lighter note, “emotionally regulated people”, I believe, is a polite speak referring to those who behave according to my mental model of how others should behave ;-)
That’s fine. I’m happy to agree to disagree. But again, no one is disputing that Linux is a successful project, so I’m not sure why you keep focusing on that. I don’t even doubt that Linus’s behavior has contributed to its success; that doesn’t mean there aren’t adverse consequences to that behavior.
It’s not intended as a euphemism, I just mean “people who can regulate their emotions”. But yes, my mental model for how people should collaborate does frown upon “stop forcing your religion on us!” responses to a polite request for documentation. :)
This probably is a natural effect when a codebase grows so much that even subsets of it cannot be understood by a single human being. By definition, adding to said codebase brings humans often to the limit of their mental capacity, even though I find it even worse with the Chromium codebase. Speaking of the Linux kernel community as a collective working on the same code is probably not a correct representation, and it’s more like a conglomerate of duchies with an unwritten agreement on certain rules to follow. The whole-system approach of the Rust for Linux community is completely diametral to this.
I am a big fan of microkernels because even though you also have duchies, they interoperate much more cleanly: You can have well-defined, tight interfaces, but the revolutionary potential of using a stronger-typed language with more guarantees like Rust, or as I mentioned above even more fittingly Ada, you could probably get away with a monokernel while still keeping these benefits and dropping all the downsides of microkernels at the same time.
I don’t like the drama though. It’s unrealistic to think that one can change the momentum of hundreds of Linux kernel developers, so it’s better to just start fresh. While the kernel has 30 million lines of code, a lot of it is for very old hardware, and going even further, because you would start development in a virtual machine context, using spoof drivers at first would be quite straightforward.
Do they take such an approach? Rust interfaces have been added very incrementally on an as-needed basis.
Holy Penguin Empire
It could only rightfully claim to be a holy empire if the kernel was written in Holy C rather than C. :)
The Holy Roman Empire was neither holy, Roman, nor an empire according to Voltaire, and I’m inclined to agree with him.
Can you, by chance, cite just where M. De Voltaire wrote that down? It’s something I’ve quoted mindlessly more than once, and it absolutely sounds like something he’d write. And I think I’ve read damn near every scrap of his oeuvre that one might read without going to some significant trouble to gain access to very rare books. My best guess would be that it’d be somewhere in his Essai sur l’histoire générale (ca 1756) but I’m not spotting it after quickly re-reading (for the 3rd time this week) the chapters most likely to contain such a thing. (I’m thinking ch. 69, 70, 71 are most probable.) I was trying to quote the exact passage just recently, had no luck finding it, and it’s turning into something of a white whale for me.
I know it’s unlikely that I’d find an answer here, but being that you just quoted it, it seems worth asking.
I don’t think it’s apocryphal, and it’s making me crazy trying to find the citation to demonstrate that it’s not.
Chapter 70 of that work, on this page, first paragraph.
Credit to wikiquote for pointing me in the right direction
Thank you. I see it there, and it’s just absent in my edition. In my edition, that’s chapter 58. The paragraph
is there. But the one preceding it is not the same: https://imgur.com/a/ihVkYve
Thank you for confirming I’m not just crazy and for giving me a usable citation, all at once.
This seems incredibly insightful.
If “grownups” use the Cathedral model, yes, that does achieve great things, though does come with some downsides too.
The Bazaar model has its own upsides and downsides. One significant downside is, to those used to organized projects, the interconnection between parts seems like amateur hour.
I like your allegory to the cathedral-bazar-model, which is definitely fitting here.
You might be interested in https://en.m.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar :)
Yeah, I am familiar with that one, which is why I highlighted that I liked your allegory to it! Sorry for the misunderstanding. :)
I believe in the goals of rust.
I’ve also contributed to Linux and sort of been through some of the “toxicity.”
I saw the episode with the Rust maintainer leaving 2 distinct ways. 1) A young up comer in the community was putting himself out there pointing out some issues as he tries to make the world better, he got some strong pushback while he was on stage and that looks like it feels awful, and it did feel awful, he resigned. 2) I put myself in Ted’s place and someone is giving a presentation on bugs and shortcomings in work that he has done for decades and is relied upon by millions or even billions of people around the world. “Hey, this code is bad, let me give a Ted Talk on it and get detailed about bugs in it, what’s this interface supposed to do?” feels pretty bad too. It seems toxic both ways.
Which criticism is adult to accept in public and which is not?
I guess I disagree that the kind of constructive criticism of code presented in the talk is toxic in the first place, but it’s certainly less toxic than the personal attacks that followed (“by asking for documentation of interfaces you are literally forcing your religion on us!”). The latter feels like wildly inappropriate behavior.
What’s the definition of toxic then? “less toxic” and “constructive criticism” are very subjective. I think it feels very different when it’s code you’ve written and you maintain.
I honestly don’t know any more of the backstory, but a private message asking about inconsistent uses of a function seems like it is potentially constructive. Being called out in a presentation almost seems like an attack.
I don’t purport to know exactly where the boundaries of toxicity lie, but I can say with certainty that constructive criticism lies outside of those boundaries and the “documentation -> forcing religion” lies within it. I agree that constructive criticism feels differently when it’s your code under critique, but I don’t think the impulse toward defensiveness delineates toxicity; accepting constructive criticism is part of being a healthy, mature adult functioning in a collaborative environment. Responding to constructive criticism with a personal attack is not healthy, mature adult behavior IMHO.
Is there a reason /why/ firmware cannot be updated on the YubiKey? The docs only state it as fact that it cannot be updated.
I‘d rather wipe and update for a software issue than shell out money for a new secure key eventually, when mine is no longer trusted for enterprise stuff.
Because then you need to have a way to validate that the firmware you are running is the same. Which is also an attack point. Turtles all the way down
I’m curious why Google is writing it, given that they removed JPEG XL support from Chromium. Are they considering adding it back?
Could just be the “Google is a big company with different teams who don’t necessarily agree with each other” effect.
I remember hearing somewhere that Microsoft suffers from the Windows, Office, and Visual Studio teams having a somewhat adversarial relationship with each other and I think it might have been the story of ispc where I read that Intel’s corporate culture has a somewhat self-sabotaging “make the teams compete for resources and let the fittest ideas win out” element to it.
It’s less bad than it was. In particular, Visual Studio is no longer regarded as a direct revenue source and is largely driven by ‘Azure Attach’ (good developer tools with Azure integration make people deploy things in Azure). In the ‘90s, I’m told, it was incredibly toxic because they had directly competing incentives.
Windows wanted to ship rich widget sets so that developing on Windows was easy and Windows apps were better than Mac ones and people bought Windows.
The developer division wanted those same widgets to be bundled with VB / VC++ so that developers had to buy them (but could then use them for free in Windows apps), so developing for Windows with MS tools was easy and so people would by MS developer tools.
The Office team wanted to ensure that anything necessary to build anything in the Office suite was a part of MS Office, so creating an Office competitor was hard work. They didn’t want rich widget sets in Windows because then anyone could easily copy the Office look and feel.
This is why there still isn’t a good rich text editing control on Windows. Something like NSTextView (even the version from OPENSTEP) has everything you need to build a simple word processor and makes it fairly easy to build a complex one and the Office team hated that idea.
Another reason Office implemented their own…well, everything…is that they wanted to have the same UX in older versions of Windows, and be able to roll out improvements with nothing but an Office install. In many customer environments Office was upgraded more often than Windows.
For the OG Office folks, I think Windows was regarded as “device drivers for Office”. (Which, to be fair, it basically was in version 1.)
(I used to be on the Windows Shell team, having the same hopeless conversation annually about getting Office to adopt any of our components.)
I guess the browser is in a similar position, but browser vendors seem OK with using (some) native OS components, perhaps because they grew up with a different political relationship with the OSes than Office.
Yep, after all Google was a primary contributor to JXL, both from PIK and directly.
Different teams.
There are teams at Google that kept telling Chrome they had need for it and budget to do it. In the public bugtracker.
Chrome still yanked it. Seems the teams with the need and budget still have both.
The reasons Chrome gave for removing JPEG XL support were:
The Google JPEG XL team wants to reverse this decision.
The reference implementation in C++ (libjxl) is developed by people on Google’s payroll. Only one person on the JXL team is outside of Google. They’re at Google Research in Zurich, and don’t follow orders from the Chromium team.
Huawei is getting soft power by picking brilliant researchers all over the world and funding them to work on whatever they want in Huawei’s name. My personal guess is that they don’t care what people do as long as they are famous and it is recognized as good work by the rest of the community. They have an easy time doing this because western countries are chronically under-funding research, far from meeting their commitments in term of GDP percentage, and moving to less and less pleasant research management and bureaucracies to try to hide the lack of support. (As in: concentrate all your money to a few key areas that politicians think are going to deliver breakthrough (for example: AI, quantum computing), and stop funding the rest.) Foreign countries with deep-enough pockets that play the long game can come up, create prestigious research institutes for reasonable amounts of money, and get mathematicians or whatever to work in their name. You can tremendously improve the material working conditions (travel funding, ability to hire students and post-docs, etc.) of a Fields medal for one million euro a year, that’s certainly a good deal.
(Google and Microsoft did exactly the same, hiring famous computer scientists and letting them do whatever they wanted. In several cases they eventually got rid of the teams that were created on this occasion, and people were bitter about it. Maybe China can offer a more permanent situation.)
Lafforgue claims that Huawei is interested in applications of topos theory, and more broadly category theory, to AI and what not. Maybe he is right because brilliant researchers manage to convince themselves and Huawei intermediate managers of potential applications. Maybe he is delusional and Huawei does not give a damn about industrial applications as long as they get recognition out of it.
Check the employment of Rust core active devs ;)
Rust is something that makes sense as a part of a sound long-term business strategy. It’s a bit too early to tell, but being one step ahead of other companies in Rust may be strongly beneficial in order for the company to build better products and have larger impact. It is a good opportunity for visibility, soft power, but it also directly gives power/leverage to the companies that develop the language. (I view this as a similar investment to being an active participant to the Javascript evolution process, in particular its standardization bodies. Or wasm, etc.) The situation with topos theory is very different, because it is, in terms of practical applications, completely useless just like most contemporary mathematics; I don’t think that anyone in the field expects any kind of industrial applications of topos theory in the next 30 years. Of course we never know, but let’s say it is not more likely than many other sub-fields of mathematics. This is interesting and valuable fundamental research but, from an industrial perspective, a vanity project.
(There is another sub-field of category called “applied category theory” which is more interested in the relation to applications, and may have applications in the future, for example by helping design modelling languages for open systems. Industrial impact is still much farther ahead than most companies would tolerate, and this is not the same sub-field that is being discussed in the article and by Lafforgue.)