Destructuring has its place. I don’t care much for the usage of itertools.count() in that example — seems ripe for an error related to failing to break while paging, but it’s fine — but the usage of enumerate and zip are normal to me.
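To make the paging worry concrete, here is a hypothetical sketch of such a loop using Rust's open-ended range `0..` (the rough analogue of `itertools.count()`); the names are made up, and the `break` is exactly the part that is easy to get wrong:

```rust
// Stand-in for a real paged API call: in this sketch there are
// only 3 pages of data, after which pages come back empty.
fn fetch_page(n: u32) -> Vec<u32> {
    if n < 3 { vec![n * 10, n * 10 + 1] } else { vec![] }
}

fn main() {
    let mut items = Vec::new();
    // `0..` counts forever, like itertools.count(); without the
    // break below, this loop would never terminate.
    for page in 0.. {
        let batch = fetch_page(page);
        if batch.is_empty() {
            break; // the easy-to-forget exit condition
        }
        items.extend(batch);
    }
    println!("{:?}", items); // prints [0, 1, 10, 11, 20, 21]
}
```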
You wouldn’t want someone to read this and try to shoehorn it into their next PR, but it’s good to know it’s possible for those niche cases where it’s actually the best way to write the code.
First, I don’t work for Google, so I have not had my sense of humor removed.
Still, a much better message would be “E3141 - HODIE NATUS EST RADICI FRATER”. That is, it has a clear, searchable link to where there might be random detailed information, or even ramblings. It is a mistake that each detectable, obscure, arcane error needs a professionally produced tome. The tome can be accumulated if the error ever occurs.
I really would like to add numbered errors to my programs to make them easy to look up. I still wonder what can make it a relatively low-maintenance task, though.
Someone like Oracle can just throw more human-hours at the problem, I can’t.
It doesn’t have to be hard. I just keep a text file that lists all the errors (and their numbers). I picked up the idea from a previous job where we had over 1,000 different numbered errors. Not hard to keep it up once started.
You could represent all errors in your program as variants of a given type, and then implement an error_code() method on that type that forces each error variant to have a unique code. In Rust this might look like:
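A minimal sketch of that idea (the variant names and code numbers here are purely illustrative): the exhaustive `match` is what forces every variant to have a code, because adding a variant without one is a compile error.

```rust
// Hypothetical sketch: all errors as variants of a single type,
// with an error_code() method giving each one a unique, stable code.
#[derive(Debug)]
enum AppError {
    FileNotFound,
    PermissionDenied,
    UserError,
}

impl AppError {
    fn error_code(&self) -> u32 {
        // Exhaustive match: adding a new variant without a code
        // here fails to compile, so no error can slip through.
        match self {
            AppError::FileNotFound => 101,
            AppError::PermissionDenied => 102,
            AppError::UserError => 201,
        }
    }
}

fn main() {
    let err = AppError::FileNotFound;
    // prints "E101: FileNotFound"
    println!("E{}: {:?}", err.error_code(), err);
}
```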
The usual issue is growing that list. Protobuf uses that approach, and it works for something that has to remain stable and should therefore be carefully designed before you release it to other people; error conditions, however, are naturally dynamic.
Then I realize that there can also be permissions issues and now I need to add PermissionDenied. If I add it between FileNotFound and UserError, I’ll have to change all error codes after it, so the error message formatting and the manual will have to change as well. If I add it at the end, it’s no longer grouped with other IO errors.
One possible approach is to leave gaps and make it like:
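Something like this, say (the numbers are purely illustrative): each category owns a block of codes, so a later PermissionDenied can slot into the IO block without renumbering anything.

```rust
// Illustrative gapped numbering: 100-199 for IO errors,
// 200-299 for user errors, with room left inside each block.
#[derive(Debug)]
enum ErrorCode {
    // 100-199: IO errors
    FileNotFound = 100,
    PermissionDenied = 110, // added later, still grouped with IO
    // 200-299: user errors
    UserError = 200,
}

fn main() {
    println!("E{}", ErrorCode::PermissionDenied as u32); // prints "E110"
}
```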
If I add it at the end, it’s no longer grouped with other IO errors.
This is only a problem if your error codes are hierarchical identifiers whose first digits are categorical and whose latter digits are the specific error. But it’s not clear that that’s a desirable property in the first place: errors can fall into multiple categories, and with enough errors you’re likely to end up with a “misc” category anyway.
The point of the error code is to be a direct index to the error. If you want to generate documentation where error codes are grouped by topic, you can do that. You could even list the same error code in multiple sections if it’s relevant to multiple. The code number itself is arbitrary.
I really would like to add numbered errors to my programs to make them easy to look up. I still wonder what can make it a relatively low-maintenance task, though.
I ended up with codes, not numbers: “system-something”. For example: “contact-notfound”. Easy to grep for, and having the code tells me the error already.
I’m so tired of this kind of humor. When an error message comes up, I am already in a stressful situation. The last thing I want in that case is another riddle on top.
The programmer who put a Latin error message in that place apparently cared more about their own fun than about the other people who have to deal with it.
Then again, maybe (probably) I’m complaining unfairly. First of all, I have no idea how often things like Latin error messages were put in the code. And the linked page at least does not indicate that the afflicted technicians appreciated this humor; so maybe everyone else was as tired of this back then as I am today.
Second, that XKCD comic (and many others) shows again that nerds can be humble and self-reflective; and given XKCD’s popularity it probably even educates people toward this nicer behavior. That’s a nice progression from the seventies to today.
I dunno, seems easy enough to say something like “SSTN recovery code had a brother pointer at the bottom of the tree when it was finished, this should never happen”. That’s a longer message to shove into the code, sure, but you could always just give it an error code and print it in the manual.
A good error message for the user should give at least some hints about possible further steps. There’s certainly a “spectrum of helpfulness” depending on how much the program knows about its environment and how many external factors are involved:
No space left on device — free up some space on the device.
Unrecognized option --forbnicate, did you mean --frobnicate — correct the typo.
Unrecognized option --frobnicate — check the docs for option syntax.
File not found — check if you point the program at the right path, or check why the file is not created where you expect it.
ICMP host administratively prohibited — this is a network problem, caused by a policy, you should check the configs or contact the network admin.
Connection reset by peer — uhm, try to reconnect maybe, but at least you can be certain it’s not a problem on your side.
Beyond that, there’s basically the “check the source code or contact the developer” territory. Say, an assertion error: the system has a negative number of logged-in users. The only way it can happen is if there’s a flaw in the program logic.
And then there are problems like the duplicate root pointer. From that article, my impression is that no one understood how and why that condition could occur or what exactly a possible fix could be, so a message that didn’t need translation from Latin wouldn’t bring the user any closer to a solution. In that situation, the only thing people can do is either contact the developer or grep the source code for the error, and for that purpose, any unique message is as good as any other.
Yes, the HODIE NATUS EST RADICI FRATER error message was an “impossible situation” from Multics’ point of view. As it turns out, the problem was in the hardware; specifically, the system had been misconfigured in such a way that the root disk appeared twice as distinct physical devices. Thus the “brother” is a second root disk, a situation which cannot arise as part of the Multics software configuration; and the lack of debugging guidance from Multics is quite understandable.
Obviously here the error can’t say exactly how to fix the issue, but the fact that two teams had to grasp at straws should be all the proof you need that it should state the “impossible” situation clearly. That would’ve narrowed down what they had to look at, just like the phone call did.
And nothing prevents you from having the joke and then a real message.
Somewhat unrelated to this article, but relevant to the trademark policy: I’m just happy they fixed the overt politicization of their trademark policy.
Their proposal (from some time ago) included language banning anyone from using the trademark at a conference which allows guns. And no, it was not a general “conference organizers should take reasonable steps to ensure safety” clause; it was specifically and only about guns (not food safety, not access control, just those).
Imagine living in a world where you have right-wing people “owning” programming languages banning you from using their language in a place where people are allowed to blaspheme about God, because of their personal convictions, and because they feel the need to enforce these convictions onto others.
Not quite a world I want to live in. I’m glad that for the most part, programming is apolitical, and I hope it stays that way.
Events and conferences are a valuable opportunity to grow your network and learning. Please contact us at ‘Where to go for further information’ below if you would like to hold an event using the Marks in the event name. We will consider requests to use the Marks on a case by case basis, but at a minimum, would expect events and conferences using the Marks to be non-profit-making, focused on discussion of, and education on, Rust software, prohibit the carrying of firearms, comply with local health regulations, and have a robust Code of Conduct.
SQLite adopted the standard Mozilla CC after people complained about using the Rule of St Benedict as a CoC. They weren’t actually interested in enforcing the Rule on contributors, they were offering it as an ideal to aim for.
It was also a storm in a teacup because SQLite does not accept outside contributions anyway. It was a box-ticking exercise for one of their customers, which got out of control when the internet found out about it.
Ironically, lots of those person-named things aren’t even named for the original inventor. But I don’t think anyone really named things after themselves — I suppose the advice should be “don’t name stuff after people”.
Few names make my blood boil as much as “Type 1 error” and “Type 2 error.”
Can you imagine putting so little thought into naming something that your name is indistinguishable from the output of a random name generator? Would you do that to your company? Your child?
I find it odd how the article jumps from discussing the issue of naming phenomena to a completely unrelated issue of naming unique objects. Names of projects, people, geographic locations, and similar have completely different requirements.
Most names for humans are essentially random, and really absurd if you start thinking about them:
Inherited from a language where they meant something but no longer understood. I don’t think any English speaker is thinking that “Daniel” is “god is my judge” in Hebrew, or that “Peter” is “a rock” in Greek.
Originating from a language still spoken in the region and, possibly, contradictory to the person’s characteristics or views: how’s “Fiona” (“a blond one” in Irish) for a dark-haired woman or “Abdullah” (“a servant of god” in Arabic) for an atheist? Or a “John Smith” who’s actually a baker?
Human names work well if there aren’t too many people with the same name (“Mary the network admin? or Mary the backend developer?”) but come from a widely-known set that helps them be memorable.
By contrast, if you use phrases like “message queue”, “cache”, “data processor,” someone can get the gist of the conversation without knowing the specific technologies.
If someone needs a connection to a new warehouse, the version for people from different departments will certainly be like “we need a network administrator to set up a secure connection to the new warehouse”, while inside the responsible department that will not even be specific enough — it will need to be like “we need Mary to set up site-to-site IPsec to the new warehouse”.
At least the type N things in this thread so far have usable alternative names. I can never remember what a class II lever is, but I don’t know a sensible name for that, either.
Names for humans in Scandinavia at least used to be locally meaningful, e.g. Odd Evensen Nullsrudlå would be “son of Even, lives in Nullsrudlå” (and if Odd later moved to the farm One in Øygarden, he’d change his name to Odd Evensen One). Even Odd has a meaning (pointy), though first names tended to be inherited from grandparents and such.
The practice of changing names as you moved was outlawed about a century ago, since although it was locally meaningful, it made censuses (ie. tax collection) more difficult for the state.
Names are still mostly meaningful in Iceland, I would say. There’s a list of approved names, and while an obvious meaning is not enforced, it should at least conform to Icelandic grammar, and inflect for case.
Two of the first neuter names that were approved were Frost and Regn (rain).
Many many everyday nouns. And no surnames as such, just Daughter/Son/Child of Parent.
Although given / first names have transparent meanings (everyone in Iceland knows what Örn or Björn means), I’m guessing the name isn’t typically meaningful in the sense of giving information about the person. The -son/-dottir name tells me something about the family relations, place name tells me where they live (and then you have the old British tradition of naming yourself after your trade like “Shepherd”, “Goddard”, “Coward” or “iPhone Repair Guy”). But what does it tell you about the person and their relations that they’re named Hrönn instead of Hrefna?
(Or hm, actually maybe it does tell you a bit about their parents if they’re named Eldhamar instead of Jón…)
Interesting. There’s a local tradition in Dalarna in Sweden where the middle name was a “gårdsnamn” (farm name). It’s the reason a surprising number of men in Sweden have the middle name “Anna”.
I read somewhere a long time ago that it’s in the interest of states to allow as many unique names as possible (especially in countries like Sweden where -son names are extremely prevalent).
Sweden also had “soldier names” / noms de guerre, epithets like “Strong” or “Quick” given to differentiate the many Erik Karlssons and Olaf Larssons https://sv.wikipedia.org/wiki/Soldatnamn Some of these were even passed on to children as last names.
Even more peculiar are the nature-inspired family names that showed up in Sweden some hundreds of years ago. Some say these were inspired by the family names of nobility, e.g. Gyllenstierna. You can’t just start using an existing “noble” name, so if you wanted to improve your social standing you would combine two nature terms to create a new family name, giving things like “Lindberg” (linden hill) or “Bergström” (hill stream), but also impossible combinations like “Granlöf” (spruce leaf) or “Sandgren” (sand branch).
Inherited from a language where they meant something but no longer understood. I don’t think any English speaker is thinking that “Daniel” is “god is my judge” in Hebrew, or that “Peter” is “a rock” in Greek.
I’m guessing you don’t know very many conservative evangelical Christians.
Even then, the reason is usually “I’ll name my child Peter in memory of the Apostle Peter”, not “I’ll name my child Peter because I want him to/think he will be steady like a rock”. The meaning of the name is obscured behind its most famous holder.
Maybe. It’s a unique example because the meaning of the name is plainly and explicitly preserved in the Bible:
So I will call you Peter, which means “a rock.” On this rock I will build my church, and death itself will not have any power over it. -Matthew 16:18
A fair number of evangelical Christians would be able to answer the trivia offhand if you asked them what Peter meant in the original language. But you may be right that any of them naming their child Peter may be doing it as much for the sake of the apostle as the meaning of the name.
I’m not sure “plainly and explicitly preserved” is the right way to think about that passage. The explanation of what it means isn’t in the original text, adding it is something some translators do. The New International Version tries to stay closer to the original wording but provides a footnote.
The original text doesn’t need an explanation because it’s a pun. It’s like saying, I will call you Rose because like a rose you will be beautiful but anyone who grabs you will regret it. I will call you Cliff because like a cliff you will look rugged but be a bit wet and weedy. But it gets broken in translation because πετρος -> Peter but πετρα -> stone
Of course. But the meaning isn’t “plainly and explicitly” preserved for future languages by the Bible. It’s not plain nor explicit. It’s conscious effort on the part of the translator. Sometimes Bible translators don’t translate something, and not even have a footnote, see for example the confusion around the Morning-star, “Lucifer”, from the KJV.
Actually, the KJV is a great example because it doesn’t tell you about the Peter pun in any way. Anyone brought up with the KJV may in fact be entirely ignorant of the meaning even if they’re very familiar with the passage.
Very fair point (I admittedly cherry-picked one of the translations that makes it most plain and explicit). I have no numbers to back it up, but I’d still guess the majority of evangelical Christians would be able to tell you that Peter meant rock based on this passage regardless of which translation they’re most accustomed to.
It’s an entropy problem, right? UUIDs make good identifiers because they are effectively random (and so highly unlikely to collide with other identifiers), and the same applies for the names humans give one another. In a world without any central authority doling out identifiers for people, your best bet is to pick a word out of a hat of words that don’t conflict with commonly used terminology, and hope that nobody else chose the same word.
A little bit of collision is good though because it makes it easier for people to hear and remember your name. You just want it to be unique in context.
While I strongly support this report and shared it on other networks, I flagged it as off-topic as it doesn’t relate to “computing”. It already sparked more bad faith than what I was expecting to see from this community.
Discussion of FSF’s governance is absolutely on-topic for this forum, IMO.
Ultimately, I think the whole report is less about condemning one man (which the report explicitly does) and more about the FSF moving past its problematic leadership and hopefully becoming relevant again.
That’s a fair point. I just don’t think we’ll be able to move past Stallman apologists and have an honest discussion about the FSF, even if we remove the person tag and try to recenter the debate.
So many analogues to the FSF have been founded specifically to enable ignoring RMS, and thriving for that reason, that I don’t think the relevance of the FSF-per-se is even that important any more.
This isn’t really about FSF’s governance though, it’s about the behaviour of someone governing FSF. Would Hans Reiser’s murdering of his wife be on topic at the time he was project lead on ReiserFS?
Topicality: … Some rules of thumb for great stories to submit: Will this improve the reader’s next program? Will it deepen their understanding of their last program? Will it be more interesting in five or ten years?
The Reiser case seems to me to be a “no” on all three questions, although I acknowledge that alone doesn’t necessarily mean it was off-topic.
I disagree on at least 2 here: Knowing that the core maintainer of a major filesystem is in jail with a chance to stay there for decades is useful info when picking the FS for my next project, and it certainly had repercussions 10 years later (ReiserFS isn’t relevant anymore). Of course a lot of things happened in addition to that, but the stability of its leadership was certainly an issue in its decline.
I would agree that following Reiser’s court case would not be on subject here, but the fact that there was something going on around Reiser while he was still in that leadership position — that I’d definitely see as in scope for me as a software developer, as long as the source is reputable and factual.
Discussion of FSF’s governance is absolutely on-topic for this forum, IMO.
I don’t have a stake in this forum, but such discussions should absolutely be off-topic for a technical forum, especially if it involves cancelling people based on politics. This is mostly because the material is triggering, regardless of your opinions about it, and I don’t see a way to filter such crap out.
Furthermore, as an opinion, I couldn’t care less about the FSF. This is an organization that teaches people that proprietary software is immoral. I mean, this is basically the reason for why Open Source happened because, despite all the good that RMS and the FSF did in jump-starting an ecosystem around GPL licensing, the FSF was and will always be a political cult, all the discussions it generates are political, and I have better things to do.
I don’t know what sense exactly oz meant, but here’s what the About page says:
Brigading: Lobsters is not to be used to whip up an outrage mob and direct them at targets, especially individuals and small projects. It always feels righteous at first and becomes an awful tool for abuse. There isn’t a clear-cut line between this and discussing trends and advocating for improvements in the field, so expect frustrating judgement calls.
I guess the definition of “brigading” there is “to whip up an outrage mob and direct them at targets”.
Which is why it probably has no place on Lobsters. It is drama in our community (at least judging by the Pareto distribution of upvotes vs. flags for this post). You’ll be more likely to find people that won’t downplay this report if you go to Reddit/Twitter/etc.
Off-topic or not, I find it too significant to flag it even if it is not strictly about computing.
It refers to a person who has had a profound impact on the shape of the worldwide software community.
It documents thoroughly, with countless references to sources (that is, Stallman’s statements and manifestos on his own site), a concern far too significant to be overlooked, flagged, and deleted, even if for many it’s off-topic, as in “well, paedophilia and sexual harassment are not relevant for me.”
Flagging does more than hide it from your feed. It marks the account. If you get enough flags you get a nice message suggesting you delete your account (not a joke).
I think “off topic” should be used sparingly. It is not an “I disagree with this” button or a downvote substitute.
That’s the one. It is more nuanced. I was going from memory.
Even though it’s couched, the phrase “delete your account” was quite memorable for me. It’s also a little different experience when it shows up as a banner on the page rather than reading an ERB partial on mobile.
While I think the original banner was in place to “gently” remind trolls that their antics were not welcome, it had the unfortunate effect of also being applied to prolific posters who attracted a disproportionate amount of flags.
I think I’m the top / second most flagged poster on here every once in a while and it’s honestly ridiculous. I’ve reached out to the mods twice about it with no response, so the whole “delete your post or contact the mods to discuss” is seemingly not a meaningful choice.
Can’t blame burntsushi for it one bit. I’ve considered deleting my account over it since it literally suggests that you consider just that. At this point I’ve moved past it and don’t really care about the banner at all.
Unfortunately, lobsters has the same problem that every site with a scoring system has. People downvote or flag or whatever because they disagree with you, not because you lack substance. No amount of good faith engagement will ever keep people from flagging you, and there is seemingly no recourse for flagging erroneously.
It seems to me possible to strongly support a thing and yet more strongly support principles that one sees as saying it doesn’t belong here. (I am not commenting on whether it does belong here.)
I don’t even think it has to be “more strongly support” re: principles. It seems entirely consistent to say “this is the most important issue in my life” while also saying “and I will discuss it in the venues where it is appropriate to do so”, even if you consider discussion in appropriate venues to be of lesser importance. One dictates the value of the conversation, one is simply a practical acceptance of where that issue is best discussed.
I don’t think this gotcha! argument really applies here. I thought it important to show that off-topic flags are not (all?) in support of Stallman, because that’s not what they are for. In addition, the discussion immediately derailed with insults against the report’s authors and trolling; the report in itself is not controversial, but Stallman supporters are trying to make it look like it is.
Compare it with the situation on the orange site, where this submission is much more topical: it was flagged to death and never reached the home page. That’s not what happens with off-topic flags on lobste.rs and the link is still #1.
Strip playlist params from YouTube so it doesn’t autoplay more: javascript:(function(){ location.href='https://www.youtube.com/watch?'+(new RegExp("v=[^&]*").exec(location.href)) })();
New reddit to old reddit: javascript:( function(){ location.href='https://old.'+(new RegExp("reddit.com/.*").exec(location.href)) } )();
Open page in Internet Archive: javascript:window.location = "https://web.archive.org/web/*/" + window.location;
Save page in Internet Archive: javascript:window.location = "https://web.archive.org/save/" + window.location;
Decode base64 in place: javascript:function c(){}c.prototype.get=function(){var a="";window.getSelection?a=window.getSelection().toString():document.selection&&"Control"!=document.selection.type&&(a=document.selection.createRange().text);return a};c.prototype.set=function(a){if(window.getSelection){var b=window.getSelection();b.rangeCount&&(b=b.getRangeAt(0),b.deleteContents(),b.insertNode(document.createTextNode(a)))}else document.selection&&document.selection.createRange&&(b=document.selection.createRange(),b.text=a)};try{var d=new c,e=atob(d.get());d.set(e)}catch(a){alert(a.message)};
As an outsider (and at the risk of causing a flamewar), I have to wonder if there’s some connection to the zealotry that goes along with Rust.
Over the years I’ve seen a lot of “militant [insert new language] zealots”, but Rust (in my opinion) takes it further than languages like Haskell. It’s not enough for Rust to interact with an existing library - if that library isn’t written in Rust then they need a new one entirely.
This zealotry’s prevalence is overstated, and when it happens, is often from enthusiastic inexperienced users, not from the folks working on the language itself.
It’s not enough for Rust to interact with an existing library - if that library isn’t written in Rust then they need a new one entirely.
I can’t think of a single person that I know of involved in Rust leadership who thinks like that.
when it happens, is often from enthusiastic inexperienced users, not from the folks working on the language itself.
Partially disagree here.
Indeed the folks working on the Rust language itself are great.
My past experience with “enthusiastic” users, as you call them, is that they are usually strong specific domain experts with enough knowledge in Rust to anticipate that a (re)write in Rust is the right thing to do - even when it may not be.
(The indie dev guy that coded Rust for 3 years - I can’t find the link - made an excellent point; Rust is a language that forces a developer into a path of correctness. But correctness may not always be conducive to success).
This is an idea I’ve had kicking around for a bit but I do think there is a deep connection between the burnout in the blog post, the zealotry you mention, and a handful of other complaints that have surfaced from time to time about the language and its community.
My root observation is that the mindset of the community and the guiding principle of the language and its design is “we’re going to get this one right,” or in other words, the Rust contributors have set a high bar for themselves. This spills out as the burnout that gets brought up from time to time: if the bar is high there are few people, or maybe only a single person, who can “do it right.” That’s not just a feeling a maintainer might have: given the complexity of the compiler or the subtlety of what you’re trying to implement there are just not a lot of people who can do the work! You solve this by forcing people into management positions where they’re required to mentor and delegate but that’s hard to do in practice in a volunteer, open source project. The result is the burnout spiral mentioned in the article: “work doesn’t happen unless you do it personally”, you get tired, you get burnt out.
The “doing it right” also makes the language into fertile grounds for zealotry. Everything’s been built to such a high standard you can look at any use case for Rust and find one place, or several places, where Rust has some advantage over the opposing choice. If Rust’s design nurtures these kinds of arguments you’ll see people who make those arguments come to Rust, find it suitable for their purposes and stay with it.
if the bar is high there are few people, or maybe only a single person, who can “do it right.” That’s not just a feeling a maintainer might have: given the complexity of the compiler or the subtlety of what you’re trying to implement there are just not a lot of people who can do the work!
i think there is a pretty strong disparity between domains where bugs are ~fast to discover and fix, and domains where bugs might not show up for months or years. in the former you can kinda just try things and if it breaks oh well you fix it and it’s fine. in the latter, avoiding bugs requires a lot of domain expertise because you have a really hard time catching them with tests. https://youtu.be/tgaKAF_eiOg?si=P2QKBbZsGVYFAl4k&t=786
My root observation is that the mindset of the community and the guiding principle of the language and its design is “we’re going to get this one right,” or in other words, the Rust contributors have set a high bar for themselves.
This would be a good style of font for terminal. It’s always a little jarring when terminal output includes emojis and they’re in full color when everything else is monochrome.
I once heard someone distinguish between “diploma” work and “toothbrush” work: diploma work is work you do once and then it’s done, whereas toothbrush work is work you have to do continually forever. A lot of information-related things seem like diploma work, because you don’t need to re-discover electromagnetism or re-write libc to be a software engineer, but when you zoom out there’s a lot of toothbrush work needed to maintain them.
screenshots of whole page can be saved (no weird scrolling, fixed panels etc.)
100 times this one. When you print a page to PDF, any floating panels reappear on every page – which means a few lines on every printed page are covered by a floating panel.
Too often, I have used ‘save as HTML’, reopened the page after a few years, and discovered some external Javascript or other asset went break-a-doodle-doo. I started saving web pages as PDFs, only to discover the floating panel problem. Grr.
(Also, my smartphone has a very small screen, VGA resolution, 640x480. A 50-pixel header bar is a solid chunk of my screen. And some data-consent banners are so large the “Reject all” button scrolls offscreen. Thanks for letting me rant.)
I started saving pages using a combination of SingleFile, which basically just embeds all resources as b64 and downloads the result, and plain curl | htmlq if what’s important is just a bit of text. (With SingleFile you can also use your browser’s dev tools to delete parts of the page you don’t want, like consent banners or comment sections, and save the result as it appears in your browser. Highly useful.)
This article isn’t wrong but it also doesn’t add much. Digital media degrade, and require complex machines to read back.
Pretty much the best medium has proved to be clay tablets, baked (usually by accident). The next best medium is any medium with people actively copying it (including Hindu scriptures with multiple redundant versions to memorize).
Pretty much the best medium has proved to be clay tablets, baked (usually by accident). The next best medium is any medium with people actively copying it (including Hindu scriptures with multiple redundant versions to memorize).
Future scholars have to decipher the clay tablets, but the work transmitted by the living community can be re-translated as language changes or transmitted with commentary as to how it should be understood. So, arguably, the living community is a better medium for transmitting information and not merely graphemes that once encoded it.
Are there any examples of deciphering an ancient script without linking it to some known language (Coptic to hieroglyphics, contemporary Mayan, Linear B to Greek, etc.)?
It’s always easier to make a mess than to clean up after one. For those of you playing with LLMs and posting the resulting sludge online, take a long, hard look at yourself and reflect on the damage you’ve already done to the web ecosystem. Doubly so if you’re building the models, or actively helping people to use them in more places.
To quote Dr. Seuss,
Unless someone like you cares a whole awful lot, Nothing is going to get better. It’s not.
I don’t think there’s any reason to think that using a LLM to generate text and then posting it online is harmful in a way people should care about (or be legally/socially compelled to care about). I don’t publish text on the internet based on what makes things nice for people doing corpus linguistics on English text.
All the LLM-generated garbage drowns out the good stuff. I care about the fact that I can’t search for anything without 90% of the results being worthless nonsense spewed out by LLMs.
I don’t think that “All” is fair. People generating garbage to game SEO are doing harm, but there is surely plenty of AI generated content that is neutral at worst.
LLMs produce harmful garbage that is at best a lossy imitation of something they were trained upon.
There are naive users who can’t tell the difference between LLM slop and real information, there are grifters and hustlers who are happy to burn the world down if they can extract a profit, there are “useful idiots” who choose not to consider the externalities of their actions, and there are plenty of folks who fall into multiple such categories at once. The fact that you can personally extract benefit from applying this technology to problems does not change the fact that it is a massive and obvious net-negative for society and the environment. Lots of people found leaded gasoline useful, too!
You say that “LLMs produce harmful garbage” but also “it is a massive and obvious net-negative”.
The latter is a far weaker assertion than the former. If all you want to say is that you think that, overall, the technology causes more harm than it’s worth, that’s radically different from saying that it is useless or strictly harmful garbage.
It doesn’t follow (if you intended this to be a formal syllogism). The people who find AI to produce useful output could be uniformly mistaken.
If you didn’t mean it formally, consider ~lonjil may not have either — there may be extremely small amounts of useful AI generated content on the Internet, little enough to make it, as ~Internet_Janitor says, a massive and obvious net negative.
I could restructure it to address that, but I’m using words like “plausible” for a reason.
1. If people believe that LLMs provide value, then LLMs plausibly provide value.
2. People believe that LLMs provide value.
3. LLM content exists on the internet.
4. It is plausible that some of the content on the internet is content that people find valuable.
This isn’t intended to be a formal syllogism, strictly, it’s just a way of structuring my statements for clarity.
Just because I’m speaking somewhat formally doesn’t mean I assume that they are. But they are making an extremely strong assertion, and they were not ambiguous. If they want to say “there’s little useful content from LLMs” or “the majority of LLM content on the internet is bad”, that’s fine, but I don’t consider that “close enough” to just hand-wave away as harmless hyperbole. I might disagree with those statements, but I doubt I’d disagree with them to the extent that I consider them obviously wrong.
I dislike these extreme, obviously objectionable statements quite a lot. I think they’re a bad way to discuss interesting, complex topics. That’s why I responded in a way that I had hoped would be clear. A response of “that doesn’t follow” is a good response.
If people believe that LLMs provide value then LLMs plausibly provide value
Hm. Here you seem to assert that
“people believe $claim” entails “$claim is plausible”.
I guess I can see that as one definition of “plausible”, but, when lonjil asserted “There is no useful AI generated content on the internet”, you called their assertion “obviously implausible”. How does that reconcile with (1)?
I think it appears that way because I’ve structured my argument in a way that’s much less ambiguous. If you were to structure their argument so that it’s similarly explicit, I suspect you’d find that the two aren’t symmetric in this regard - that is, they would have to add a premise like “I do not believe that there is useful LLM information, therefore it is plausible that there is no useful LLM information”, and you’d have to weigh the arguments against one another by evaluating those premises, understanding “useful”, etc.
Regardless, someone saying “something is useless” is a far stronger assertion than “something may be useful”.
I reject this argument, on the basis that just because something is useful to you, doesn’t mean that it will be useful to anyone else. For example, let’s say someone uses ChatGPT instead of Google because Google never returns useful results anymore. If ChatGPT then gave a useful response to this person, would it be useful to put that ChatGPT output onto the public web? I would say no. The original information is already on the web, and putting more LLM-content on the web just makes it harder to find it.
It should be labeled, at the very least, so that LLMs don’t consume their own excrement thinking it was human-generated. The result would be the textual equivalent of high-pitched feedback from a microphone too close to a speaker.
Okay, I think that’s fine to say but it doesn’t really change things - the people who are causing these problems are not going to label their outputs, but that doesn’t mean that every output from an LLM is bad.
I’m skeptical that anybody on this site is creating vast quantities of slop to game SEO, which is what is actually causing the problem in TFA, versus posting interesting things they did with new technology, which is cool and they shouldn’t feel bad about.
Many of the people who wrote the software used by other(?) people in the SEO industry to generate vast quantities of slop thought what they were doing was just playing with “interesting new technology”, which doesn’t make them any less responsible for the outcomes. Designing bombs for the intellectual thrill isn’t free of ethical implications just because someone else is responsible for choosing where and when to drop them.
Reminds me of the recruiter trying to get me to work for a weapons manufacturer (sorry, “battlefield intelligence”) by telling me that my work would be good because it would help make the weapons more accurate and only kill the people they were aimed at more often and lessen collateral damage.
which doesn’t make them any less responsible for the outcomes
That’s a really strong stance to take and most people probably would and should reject it intuitively. Intent is almost universally understood to be a critical component of moral responsibility.
I’m not totally sure what point you’re trying to make. I think you’re equating my statement that intent factors into moral judgments to the (commonly rejected) argument that Nazis were “just following orders” and are therefore innocent? Or perhaps the “Wernher von Braun” character’s uncaring attitude towards where bombs land?
I don’t think it’s contentious at all to say that intent factors into moral judgments, but it would certainly be contentious to say that Nazis acted without intent.
Designing bombs for the intellectual thrill isn’t free of ethical implications just because someone else is responsible for choosing where and when to drop them.
Which I take to be more or less a reference to Wernher von Braun (who was a real person, not just a character). I’m not arguing that intent is irrelevant to moral culpability, but that intent only takes you so far, as the example of Wernher von Braun illustrates. Sometimes you aim for the stars, but nevertheless hit London.
That doesn’t sound quite right to me. A drunk driver’s intent may be simply to get from A to B, but most people would probably hold them responsible for the consequences of their recklessness.
Of course. Intent is just one factor. Similarly, if I got into my car and intentionally tried to run people over to kill them, people would take that into consideration.
I was under the impression you were claiming it was necessary. If you merely think it’s relevant, then yes, I agree. But I think what matters is the intent to act, not the intent to cause any particular outcome; if you work for FooCorp deliberately because you want them to pay you¹, as opposed to being tricked into working for them or something, IMO you are morally responsible for anything they do that you could reasonably² have predicted.
1. and you don’t consider yourself to be actively defrauding them
2. I realise “reasonably” is extremely woolly, but it’s hard to be quantitative about morality.
The grandposter said “any less responsible”, and you translated that to “any less morally responsible”, which I think is a misread.
If I push a button thinking I will get a coffee, when it actually sets fourteen people and a dog on fire, I may not be morally responsible, but I definitely am responsible–without my action, the subsequent suffering would not have happened.
I guess so. I’m not sure if the word “responsible” is ever used that way. I think if you said “I’m responsible for that” a ton of people would say “no, it’s not your fault”. But sure, they may mean it in the sense that you are still causally prior to it.
You back out of your garage. You do not check your rear view. You kill a small dog with your car.
Regardless of your frame of mind, you are responsible for that act. Not just causally, but also morally and legally. If frame of mind had no impact, then “murder” and “manslaughter” would not be separate crimes*. On the other hand, if “had no ill intent” was moral clearance to do anything, then “manslaughter” wouldn’t be a crime at all.
Your moral framework encourages willful ignorance to remain morally clear. It does not accurately reflect most moral frameworks in our world.
[*] In the US, both are (roughly) “responsible for death”. “Murder” is (again, roughly) for intentional death, while “manslaughter” is for unintentional but still culpable death. Local laws vary, but most jurisdictions have that distinction.
I think you’re misunderstanding me, but I don’t think it really matters, so long as we understand two things:
That I consider “responsible” as “morally responsible”, which you can say is not the case, in which case I simply have misunderstood the poster - but I believe they were speaking morally based on the context.
That intent factors into moral judgments.
As for various cases where intent factors in in a weak way, strong way, or perhaps even not at all, it doesn’t really change those points.
In the effort to understand what you’re saying then, are you suggesting that somebody who is “just playing with new interesting technology” is “manslaughter” responsible, but not “murder” responsible for the destructive results?
I think that whatever they are, they are not “just as” responsible.
For example, I think that AI may at some point be abused in a way that I don’t like. I also pay for ChatGPT, and perhaps in the future my dollar contribution to the company will somehow lead to that “badness” occurring. I do not believe that I am as morally responsible for that as whoever leverages the technology with direct intent to commit some evil deed.
Concretely, I am not convinced at all that AI is radically more “bad” than it is “good”. I am definitely convinced that it provides multiple “goods” and that it has concrete use cases that are neutral or good. It absolutely has some terrible use cases, there’s undoubtedly harm that can be done (I think deepfakes are quite a scary thing). I also think that, for those reasons and others, being a user of AI does not make someone “just as” responsible as someone who explicitly leverages AI to do something terrible.
We don’t have reliable information on language usage by humans for most centuries, and arguably not even for the 20th and 21st centuries, since the Internet is obviously not necessarily representative of all human language use.
Sure, but we have an NLP researcher basically saying
we can’t genuinely discern generative AI content well enough to distinguish it from human produced content
We simply haven’t encountered that before. It’s not the “oh no computers are alive” kind of scary. It’s kind of hard to even put a metaphor to it. I guess it’s something like being an uninformed patient at a hospital staffed with very confident but extremely stupid doctors:
I’m sorry sir but we had to amputate your leg
Wait. But I thought I had a…cold?
Delving quickly, I noticed it was moving to your legs. We had to operate immediately.
A loose analogy might be if the medical community started spraying antibiotics of last resort on every available surface (you’ll have to imagine there was some sort of tenuous potential for short-term profit and career advancement here), and then feigned surprise when untreatable antibiotic-resistant bacteria spread like wildfire. LLM output is very difficult to distinguish from semantically meaningful text with the statistical approaches that have traditionally been applied to spam, so while it is very cheap and easy to create this form of information pollution, we don’t have scalable techniques for eliminating it from a corpus. It is a grim sliver of comedy that the over-enthusiasm for this machine learning technique will assuredly stymie many future efforts at more sophisticated approaches, and leave the open web as useless for machines as it is becoming for humans.
Ah, that’s not good, then. Do you know what the Determinate installer does? I think my Ubuntu used that less than a year ago and it currently has nix 2.18.1.
We ship the latest version that has passed our validation suite. Occasionally this means we hold back, or roll back to prior versions. Yesterday we rolled back to 2.23.3 for the security issue. We’re in the process of rolling out 2.24.6 as I write this.
Precision.
“Nix” as a term is very overloaded (‘nix the complete system’, ‘nix the language’, ‘nix the tool you call to do things to/with the system’,…).
cppnix is a way to refer to the actual c++ based implementation most commonly in use.
Seems like it should be uncontroversial in a growing ecosystem with multiple implementations anyway.
Over in the world of Ruby we often refer to the canonical implementation as CRuby (formerly MRI, but Matz asked the community to move on from that name since the team is so much more than just him). It helps to disambiguate it from JRuby, Truffle Ruby, etc.
I don’t know what “Nix the complete system” is supposed to be. There is a language and a tool. The difference between a language and a tool is easily distinguishable from context. This seems to be an attempt by Lix people to portray Nix as “just another implementation of Nix”, rather than being the official implementation, which exists in contrast to reimplementations and forks.
I find it a clear disambiguating name with a well-defined prefix (“the C++ implementation of Nix”, just like a Rust rewrite might be called “rustnix” (see: https://tvix.dev/) or a future Roc version might be called “rocnix”), so in that sense it doesn’t matter whether that “someone else” has a problem with it or not. You don’t get to pick your nicknames.
I don’t think that is widely used. The distro is NixOS, and calling things around the package manager and language the nix ecosystem is totally reasonable.
cppnix is a way to refer to the actual c++ based implementation most commonly in use.
If it is the most common implementation, why change its name if it is already so common and widely used?
For the same reason you can have “Java” as a generic term for the language, but when speaking of the behaviour of particular implementations you say OpenJDK or Android Runtime or OpenJ9.
Disambiguation puts a precise spotlight on the particular git tree maintained in github.com/nixos/nix. Otherwise it could mean the language itself, or the broader ecosystem.
Just to be clear, there are several versions of Nix in nixpkgs; 2.18.5 is just the current stable version, e.g. pkgs.nixVersions.stable (or something like that, I’m on my phone).
I usually have WebGL turned off, for fingerprinting reasons (which the author even alludes to, funnily). So for me the blog post just ended with “Alright, so we’re dealing with 92 KiB for gzip vs 37 + 71 KiB for Brotli. Umm…” and lots of blank space.
The browser console did reveal the cause of the problem, but still, that dampened quite a lot of the thunder of this post for me. And I appreciate the idea, even! It’s an impressive technical solution! Just… being more thoughtful of people without WebGL, or without JS, I would appreciate as well. Maybe a link to a non-JS fallback page could help here?
It’s impressive as a “look what side channel my blog runs on” tech demo. As an accessibility demo, it’s introducing several potential points of failure into everyone‘s experience, not just people on slow connections, in order to save… 5KiB. I don’t think 83 vs 88 KiB is going to make the difference to anyone. If it is, surely linking the SVG with <i> so clients can choose to download it later or not at all would be better. Heck, even just doing the bar chart in ASCII would work.
I had the same situation (using LibreWolf, which disables Canvas). I literally get a better experience on this site by force disabling JS using uBlock Origin, which possibly loaded a fallback page. I’m sure there’s a joke in here somewhere.
Men do not experience our community the same way women do.
In my experience, tense conversations arise when deeply personal testimonies like this are shared. I believe that much of this tension stems from a reluctance to accept this simple, troubling premise. Accepting this premise requires compassion and empathy.
Men do not experience our community the same way women do.
… Accepting this premise requires compassion and empathy.
It seems to me entirely plausible that someone could accept this premise and just not care as long as it doesn’t affect them personally, which I suspect is not what you would call “compassion and empathy”.
If we wanna quibble over words I might call that “acknowledging” rather than “accepting”. “Yes this is true, but it’s not going to change my behavior.”
It’s not clear to me what problem is being avoided here. The author describes the combinatorial explosion as “unsustainable”, “ridiculous”, and “feels immediately uncomfortable”. The closest thing to an objective, concrete reason is that
if multiple variables are turning the message off, you can’t really say with 100% confidence that the test is passing because both fields are being checked.
But the way you test that “both fields are being checked” is to have tests for each field individually. Testing what happens when both variables disable the message is how you check that you’re not screwing up the logic in a different way, like accidentally XOR-ing the variables instead of AND-ing them.
What seems like the actual problem here is that the article’s testing framework is incredibly verbose and lacks parametrization, requiring them to write out eight test cases individually. In NUnit, for example, you would just write
and it would automatically call the test four times and assert against the expected result. When it takes just one line of code to add a new test case, why wouldn’t you be thorough and add every combination, especially for easy-to-test functions like this one?
NUnit actually makes combinatorics even easier:
public bool ShowMessage([Values] bool loggedIn, [Values] bool greenHair, [Values(0, 1, 2)] int bananaOpinion) { ... }
I’ve written exhaustive tests for things with way more than the number of cases here, so I’m also unsure what the actual problem is. Using code to generate a matrix of test values is a pretty common strategy, and baked into several testing frameworks.
(and when I say “way more than the number of cases here”, I do mean it – I’ve written exhaustive testing for conversion functions that work on 24-bit color values, for example)
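The matrix-of-values strategy mentioned above can be sketched in a few lines. This is a hypothetical example, not the article’s code: the function name, parameters, and rule are all invented for illustration.

```rust
// Hypothetical function under test; the rule itself is invented.
fn show_message(logged_in: bool, green_hair: bool, banana_opinion: i32) -> bool {
    let _ = green_hair; // deliberately ignored by the rule
    logged_in && banana_opinion > 0
}

// Generate every combination of inputs and check each one against an
// oracle stated independently of the implementation.
fn check_all_combinations() -> usize {
    let mut shown = 0;
    for logged_in in [false, true] {
        for green_hair in [false, true] {
            for banana_opinion in 0..=2 {
                let got = show_message(logged_in, green_hair, banana_opinion);
                let expected = logged_in && banana_opinion >= 1;
                assert_eq!(got, expected);
                if got {
                    shown += 1;
                }
            }
        }
    }
    shown
}

fn main() {
    // 2 * 2 * 3 = 12 cases total; 4 of them show the message.
    assert_eq!(check_all_combinations(), 4);
    println!("all 12 combinations checked");
}
```

The loop nest is the whole trick: each new input dimension is one more loop, not a doubling of hand-written cases.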
You are absolutely right about testing all permutations when it’s cheap. If the CPU cost is negligible, and if the cost to the programmer is negligible (concise and easy to write), then why not?
But I can think of occasions when the cost might not be negligible:
The tests are slow, and you can’t fix the whole problem today for whatever reason.
The tests aren’t unit tests, and they consume some resource to run (also, they’re probably slow).
The framework/language sucks, and you can’t switch today.
The thing you’re testing takes something more complex than a list of booleans (or any other data type that has discrete values you’d like to test).
Here’s a contrived example of what I mean by that last point. Suppose I’m modeling the animal kingdom with classes and inheritance—everyone’s favorite starting point for contrived examples. I want to test animal locomotion: being able to walk, swim, and/or fly. Iterating over every possible combination isn’t helpful because Animal is an abstract class: I can’t construct new Animal(canWalk, canSwim, canFly). So if I want to test every permutation, I have to write 8 different blobs of setup code:
FFF: new Sponge()
FFT: new DragonFly("blue")
FTF: new Fish({name: "Gary", color: "gold"}, numScales)
FTT: new FlyingFish({name: "Orville"}, numScales, numWings)
Hey look, it’s our good friend: inheritance! We’re definitely in a contrived example now.
TFF: new Spider(), and so on.
Granted, I think many problems can be decomposed into their dimensions, and I think it’s cool that some frameworks make it easy to test all of them (like the NUnit example you gave). Real life is, most of the time, easier to test than the example immediately above.
But I can think of occasions when the cost might not be negligible
Those are all much better reasons than the ones in the article.
I want to test animal locomotion: being able to walk, swim, and/or fly.
This seems like the kind of domain where I really do want to test every permutation. If I were writing this code, I could already picture the bugfix release note: “Fixed a bug where animals with multiple methods of locomotion would fail to use any of them”. If these aren’t simple booleans, you certainly want exhaustive tests for the details of their mutual interactions.
Deliberate reduction of test cases is apparently used in more high stake scenarios. This article claims one reduction strategy is used in Aerospace.
Modified Condition/Decision Coverage (MC/DC) is a code coverage criterion commonly used in software testing. For example, DO-178C software development guidance in the aerospace industry requires MC/DC for the highest Design Assurance Level (DAL) or Item Development Assurance Level (IDAL).
https://www.rapitasystems.com/mcdc-coverage
The core concern for me is legibility and maintainability. Frameworks do make it easier, or at least more succinct; however, it’s still difficult to derive business logic from a flat list of permutations. spc476 makes some good points above too! As you expand this, it takes time to derive the business logic and determine how it gets expanded.
Automating the permutations is a low-cost upfront, but you pay for it over time with code that’s harder to understand. Testing each one in turn is much more legible and modifiable.
You can improve the legibility of testing code with helper functions to abstract away common setup or code comments to explain how the test works. You can’t improve missing test coverage by anything other than adding tests, and a bug that only emerges when more than one variable is flipped will not be covered if you write fewer tests. I don’t see a way around this except rationalization.
Yeah… it’s a delicate balance. More helper functions and abstractions increase the maintenance cost. It’ll take longer for folks to understand what’s happening and make a useful change. Also, the higher the maintenance cost, the greater the probability of misusing it and not getting the desired coverage.
I really like the simplification the article presents as it reduces the need for more abstractions or helper functions while still preserving coverage. The concept is called many things - one notion is that of Multiple Condition/Decision Coverage, where you’re focusing on the decisions. https://youtu.be/DivaWCNohdw?si=XfWuq7pJOO2Vk5eQ. This was a nice short talk about it.
This feels like a “Cute. Now don’t do that,” sort of thing.
Destructuring has its place. I don’t care much for the usage of itertools.count() in that example — seems ripe for an error related to failing to break while paging, but it’s fine — but the usage of enumerate and zip are normal to me.
You wouldn’t want someone to read this and try to shoehorn it into their next PR, but it’s good to know it’s possible for those niche cases where it’s actually the best way to write the code.
MULTICS has a hard to interpret but descriptive error message for the extremely unlikely situation when the system ended up with two root volumes.
I’m not sure if it could be made actually easier to interpret.
First, I don’t work for Google, so I have not had my sense of humor removed.
Still, a much better message would be “E3141 - HODIE NATUS EST RADICI FRATER”. That is, it has a clear, searchable link to where there might be random detailed information, or even ramblings. It is a mistake that each detectable, obscure, arcane error needs a professionally produced tome. The tome can be accumulated if the error ever occurs.
I really would like to add numbered errors to my programs to make them easy to look up. I still wonder what can make it a relatively low-maintenance task, though.
Someone like Oracle can just throw more human-hours at the problem, I can’t.
It doesn’t have to be hard. I just keep a text file that lists all the errors (and their numbers). I picked up the idea from a previous job where we had over 1,000 different numbered errors. Not hard to keep it up once started.
You could represent all errors in your program as variants of a given type, and then implement an error_code() method on that type that forces each error variant to have a unique code. In Rust this might look like:
And then you’d simply use this Error type pervasively throughout your code to return errors.
The usual issue is growing that list. Protobuf uses that approach, and it works for something that has to remain stable and thus should be carefully designed before you release it to other people; but error conditions are naturally dynamic.
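A minimal sketch of the pattern just described. The variant names and code numbers here are hypothetical, not taken from the thread:

```rust
// Every error in the program is a variant of one type.
#[derive(Debug)]
pub enum Error {
    FileNotFound(String),
    UserError(String),
}

impl Error {
    // The match is exhaustive, so adding a new variant without assigning
    // it a code is a compile-time error.
    pub fn error_code(&self) -> u32 {
        match self {
            Error::FileNotFound(_) => 1,
            Error::UserError(_) => 2,
        }
    }
}

fn main() {
    let err = Error::FileNotFound("settings.toml".to_string());
    // Produces a searchable, numbered message.
    println!("E{}: {:?}", err.error_code(), err);
}
```

The compiler does the bookkeeping: forget a code and the program won’t build, which is the low-maintenance property asked about upthread.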
Suppose I started from something like this:
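A hypothetical reconstruction of such a starting point (the codes are sequential, and any variant beyond FileNotFound and UserError is invented for illustration):

```rust
// Codes assigned sequentially, IO errors first, everything else after.
#[derive(Clone, Copy, Debug)]
enum Error {
    FileNotFound = 1, // IO
    UserError = 2,
    ConfigParse = 3, // invented example of a later error
}

impl Error {
    // A field-less enum with explicit discriminants can be cast directly.
    fn error_code(self) -> u32 {
        self as u32
    }
}

fn main() {
    for e in [Error::FileNotFound, Error::UserError, Error::ConfigParse] {
        println!("E{}: {:?}", e.error_code(), e);
    }
}
```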
Then I realize that there can also be permissions issues and now I need to add PermissionDenied. If I add it between FileNotFound and UserError, I’ll have to change all error codes after it, so the error message formatting and the manual will have to change as well. If I add it at the end, it’s no longer grouped with other IO errors.
One possible approach is to leave gaps and make it like:
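A sketch of the gap-leaving scheme, with hypothetical code ranges reserved per category:

```rust
// Codes grouped by category, with gaps left so new errors can slot into
// the right block without renumbering anything. Ranges are hypothetical.
#[derive(Clone, Copy, Debug)]
enum Error {
    // 100-199 reserved for IO errors
    FileNotFound = 101,
    PermissionDenied = 102, // added later, stays grouped with IO
    // 200-299 reserved for user errors
    UserError = 201,
}

impl Error {
    fn error_code(self) -> u32 {
        self as u32
    }
}

fn main() {
    let e = Error::PermissionDenied;
    println!("E{}: {:?}", e.error_code(), e);
}
```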
Then I can add PermissionDenied as error code 102. Not a zero-maintenance approach, but it might be serviceable enough to try…
This is only a problem if your error codes are hierarchical identifiers whose first digits are categorical and whose latter digits are the specific error. But it’s not clear that that’s a desirable property in the first place: errors can fall into multiple categories, and with enough errors you’re likely to end up with a “misc” category anyway.
The point of the error code is to be a direct index to the error. If you want to generate documentation where error codes are grouped by topic, you can do that. You could even list the same error code in multiple sections if it’s relevant to multiple. The code number itself is arbitrary.
I ended up with codes, not numbers: “system-something”. For example: “contact-notfound”. Easy to grep for, and having the code tells me the error already.
I’m so tired of this kind of humor. When an error message comes up, I am already in a stressful situation. The last thing I want in that case is another riddle on top.
The programmer who put a Latin error message in that place apparently cared more about their own fun than about the other people who have to deal with it.
Then again, maybe (probably) I’m complaining unfairly. First of all, I have no idea how often things like Latin error messages were put in the code. And the linked page at least does not indicate that the afflicted technicians appreciated this humor; so maybe everyone else was as tired of this back then as I am today.
Second, that XKCD comic (and many others) shows again that nerds can be humble and self-reflective; and given XKCDs popularity it probably even educates people to this nicer behavior. That’s a nice progression from the seventies to today.
I dunno, seems easy enough to say something like “SSTN recovery code had a brother pointer at the bottom of the tree when it was finished, this should never happen”. That’s a longer message to shove into the code, sure, but you could always just give it an error code and print it in the manual.
A good error message for the user should give at least some hints about possible further steps. There’s certainly a “spectrum of helpfulness” depending on how much the program knows about its environment and how many external factors are involved:
No space left on device — free up some space on the device.
Unrecognized option --forbnicate, did you mean --frobnicate — correct the typo.
Unrecognized option --frobnicate — check the docs for option syntax.
File not found — check if you point the program at the right path, or check why the file is not created where you expect it.
host administratively prohibited — this is a network problem, caused by a policy; you should check the configs or contact the network admin.
Connection reset by peer — uhm, try to reconnect maybe, but at least you can be certain it’s not a problem on your side.
Beyond that there’s basically the “check the source code or contact the developer” territory. Say, assertion error: the system has a negative number of logged in users. The only way it can happen is if there’s a flaw in the program logic.
And then there are problems like the duplicate root pointer. From that article, my impression is that no one understood how and why that condition could occur or what exactly a possible fix could be, so a message that didn’t need translation from Latin wouldn’t bring the user any closer to a solution. In that situation, the only thing people can do is either contact the developer or grep the source code for the error, and for that purpose, any unique message is as good as any other.
Yes, the HODIE NATUS EST RADICI FRATER error message was an “impossible situation” from Multics’ point of view. As it turns out, the problem was in the hardware; specifically, the system had been mis-configured in such a way that the root disk appeared twice as distinct physical devices. Thus the “brother” is a second root disk, a situation which cannot arise as part of the Multics software configuration; and the lack of debugging guidance from Multics is quite understandable.
Obviously here the error can’t say exactly how to fix the issue but the fact that 2 teams had to grasp at straws should be all the proof you need that it should state the “impossible” situation clearly. That would’ve narrowed what to look at for them, just like the phone call did.
And nothing prevents you from having the joke and then a real message.
Somewhat unrelated to this article, but relevant to the trademark policy: I’m just happy they fixed the overt politicization of their trademark policy.
Their proposal (from some time ago) included language banning anyone from using the trademark at a conference which allows guns. And no, it was not a general “conference organizers should take reasonable steps to ensure safety”; it was specifically and only about guns (not food safety, not access control, just those).
Imagine living in a world where you have right-wing people “owning” programming languages, banning you from using their language in a place where people are allowed to blaspheme against God, because of their personal convictions, and because they feel the need to enforce these convictions onto others.
Not quite a world I want to live in. I’m glad that for the most part, programming is apolitical, and I hope it stays that way.
That’s not exactly what was said:
You ever read the SQLite code of conduct? Not that they are necessarily right wing.
SQLite adopted the standard Mozilla CoC after people complained about using the Rule of St Benedict as a CoC. They weren’t actually interested in enforcing the Rule on contributors; they were offering it as an ideal to aim for.
It was also a storm in a teacup because SQLite does not accept outside contributions anyway. It was a box-ticking exercise for one of their customers, which got out of control when the internet found out about it.
Please take note of the lobsters policy about self-promotion; almost all of your posts are your own blog and so are most of your comments.
Ah, I hadn’t realized there was a formal policy, and was thinking about the volume rather than the ratio. I’ll keep that in mind moving forward.
Ironically, lots of those person-named things aren’t even named for the original inventor. But I don’t think anyone really named things after themselves; I suppose the advice should be “don’t name stuff after people”.
Thankfully, nearly unused: “type 1” (bare metal) and “type 2” (hosted) hypervisors.
I find it odd how the article jumps from discussing the issue of naming phenomena to a completely unrelated issue of naming unique objects. Names of projects, people, geographic locations, and similar have completely different requirements.
Most names for humans are essentially random, and really absurd if you start thinking about them:
Human names work well if there aren’t too many people with the same name (“Mary the network admin? or Mary the backend developer?”) but come from a widely-known set that helps them be memorable.
If someone needs a connection to a new warehouse, the version for people from different departments will certainly be like “we need a network administrator to set up a secure connection to the new warehouse”, while inside the responsible department that will not even be specific enough — it will need to be like “we need Mary to set up site-to-site IPsec to the new warehouse”.
I hate this one so much. I have literally written a book on hypervisor internals and I need to look this one up every time I see it.
At least the type N things in this thread so far have usable alternative names. I can never remember what a class II lever is, but I don’t know a sensible name for that, either.
Names for humans in Scandinavia at least used to be locally meaningful, e.g. Odd Evensen Nullsrudlå would be “son of Even, lives in Nullsrudlå” (and if Odd later moved to the farm One in Øygarden, he’d change his name to Odd Evensen One). Even Odd has a meaning (pointy), though first names tended to be inherited from grandparents and such.
The practice of changing names as you moved was outlawed about a century ago, since although it was locally meaningful, it made censuses (ie. tax collection) more difficult for the state.
Names are still mostly meaningful in Iceland, I would say. There’s a list of approved names, and while an obvious meaning is not enforced, it should at least conform to Icelandic grammar, and inflect for case.
Two of the first neuter names that were approved were Frost and Regn (rain).
Many many everyday nouns. And no surnames as such, just Daughter/Son/Child of Parent.
Although given / first names have transparent meanings (everyone in Iceland knows what Örn or Björn means), I’m guessing the name isn’t typically meaningful in the sense of giving information about the person. The -son/-dottir name tells me something about the family relations, place name tells me where they live (and then you have the old British tradition of naming yourself after your trade like “Shepherd”, “Goddard”, “Coward” or “iPhone Repair Guy”). But what does it tell you about the person and their relations that they’re named Hrönn instead of Hrefna? (Or hm, actually maybe it does tell you a bit about their parents if they’re named Eldhamar instead of Jón…)
Interesting. There’s a local tradition in Dalarna in Sweden where the middle name was a “gårdsnamn” (farm name). It’s the reason a surprising number of men in Sweden have the middle name “Anna”.
I read somewhere a long time ago that it’s in the interest of states to allow as many unique names as possible (especially in countries like Sweden where -son names are extremely prevalent).
Sweden also had “soldier names” / noms de guerre, epithets like “Strong” or “Quick” given to differentiate the many Erik Karlssons and Olaf Larssons https://sv.wikipedia.org/wiki/Soldatnamn Some of these were even passed on to children as last names.
Even more peculiar are the nature-inspired family names that showed up in Sweden some hundreds of years ago. Some say these were inspired by the family names of nobility, e.g. Gyllenstierna. You can’t just start using an existing “noble” name, so if you wanted to improve your social standing you would combine two nature terms to create a new family name, giving things like “Lindberg” (linden hill) or “Bergstrøm” (hill stream), but also impossible combinations like “Granlöf” (spruce leaf) or “Sandgren” (sand branch)
Not to mention the famous musician Thåström (“toe stream”).
I’m guessing you don’t know very many conservative evangelical Christians.
Even then, the reason is usually “I’ll name my child Peter in memory of the Apostle Peter”, not “I’ll name my child Peter because I want him to/think he will be steady like a rock”. The meaning of the name is obscured behind its most famous holder.
Maybe. It’s a unique example because the meaning of the name is plainly and explicitly preserved in the Bible:
A fair number of evangelical Christians would be able to answer the trivia offhand if you asked them what Peter meant in the original language. But you may be right that any of them naming their child Peter may be doing it as much for the sake of the apostle as the meaning of the name.
I’m not sure “plainly and explicitly preserved” is the right way to think about that passage. The explanation of what it means isn’t in the original text, adding it is something some translators do. The New International Version tries to stay closer to the original wording but provides a footnote.
The original text doesn’t need an explanation because it’s a pun. It’s like saying, I will call you Rose because like a rose you will be beautiful but anyone who grabs you will regret it. I will call you Cliff because like a cliff you will look rugged but be a bit wet and weedy. But it gets broken in translation because πετρος -> Peter but πετρα -> stone
Of course. But the meaning isn’t “plainly and explicitly” preserved for future languages by the Bible. It’s neither plain nor explicit. It’s a conscious effort on the part of the translator. Sometimes Bible translators don’t translate something and don’t even add a footnote; see, for example, the confusion around the Morning Star, “Lucifer”, from the KJV.
Actually, the KJV is a great example because it doesn’t tell you about the Peter pun in any way. Anyone brought up with the KJV may in fact be entirely ignorant of the meaning even if they’re very familiar with the passage.
Very fair point (I admittedly cherry-picked one of the translations that makes it most plain and explicit). I have no numbers to back it up, but I’d still guess the majority of evangelical Christians would be able to tell you that Peter meant rock based on this passage regardless of which translation they’re most accustomed to.
It’s an entropy problem, right? UUIDs make good identifiers because they are effectively random (and so highly unlikely to collide with other identifiers), and the same applies to the names humans give one another. In a world without any central authority doling out identifiers for people, your best bet is to pick a word out of a hat of words that don’t conflict with commonly used terminology, and hope that nobody else chose the same word.
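To make the entropy argument concrete, here’s a quick birthday-bound sketch (my own illustration, not from the thread): a version-4 UUID carries 122 random bits, so collisions stay vanishingly unlikely even at enormous scale, while a small shared namespace like human first names collides almost immediately.

```python
import math

def collision_probability(n_items: int, id_bits: int) -> float:
    """Birthday-bound approximation: the chance that at least two of
    n_items uniformly random id_bits-bit identifiers collide."""
    space = 2.0 ** id_bits
    return 1.0 - math.exp(-n_items * (n_items - 1) / (2.0 * space))

# A UUIDv4 has 122 random bits: even a billion of them are
# overwhelmingly unlikely to collide, no central authority needed.
p_uuid = collision_probability(10**9, 122)

# A pool of ~2,000 common first names (about 11 bits of entropy)
# collides constantly among just 100 coworkers, hence
# "Mary the network admin" vs. "Mary the backend developer".
p_names = collision_probability(100, 11)
```

The numbers come out roughly as you’d expect: the UUID collision probability is effectively zero, while the first-name one is over 90%.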
A little bit of collision is good though because it makes it easier for people to hear and remember your name. You just want it to be unique in context.
Signed,
Not Carlotta
While I strongly support this report and shared it on other networks, I flagged it as off-topic as it doesn’t relate to “computing”. It already sparked more bad faith than I was expecting to see from this community.
Discussion of FSF’s governance is absolutely on-topic for this forum, IMO.
Ultimately, I think the whole report is less about condemning one man (which the report explicitly does) and more about the FSF moving past its problematic leadership and hopefully becoming relevant again.
That’s a fair point. I just don’t think we’ll be able to move past Stallman apologists and have an honest discussion about the FSF, even if we remove the person tag and try to recenter the debate.
So many analogues to the FSF have been founded specifically to enable ignoring RMS, and thriving for that reason, that I don’t think the relevance of the FSF-per-se is even that important any more.
It theoretically could be but clearly isn’t in this specific case.
This isn’t really about FSF’s governance though; it’s about the behaviour of someone governing the FSF. Would Hans Reiser’s murder of his wife have been on topic at the time he was project lead on ReiserFS?
Why would Reiser’s wife’s murder not be? It sure seems like something the community should know about.
“something the community should know about” is not Lobsters’s definition of “on topic”:
The Reiser case seems to me to be a “no” on all three questions, although I acknowledge that alone doesn’t necessarily mean it was off-topic.
I disagree on at least 2 here: knowing that the core maintainer of a major filesystem is in jail with a chance of staying there for decades is useful info when picking the FS for my next project, and it certainly had repercussions 10 years later (ReiserFS isn’t relevant anymore). Of course a lot of things happened in addition to that, but the stability of its leadership was certainly an issue in its decline.
I would agree that following Reiser’s court case would not be on topic here, but the fact that something was going on around Reiser while he was still in that leadership position - I’d see that as definitely in scope for me as a software developer, as long as the source is reputable and factual.
I don’t have a stake in this forum, but such discussions should absolutely be off-topic for a technical forum, especially if it involves cancelling people based on politics. This is mostly because the material is triggering, regardless of your opinions about it, and I don’t see a way to filter such crap out.
Furthermore, as an opinion, I couldn’t care less about the FSF. This is an organization that teaches people that proprietary software is immoral. I mean, this is basically the reason why Open Source happened: despite all the good that RMS and the FSF did in jump-starting an ecosystem around GPL licensing, the FSF was and will always be a political cult, all the discussions it generates are political, and I have better things to do.
everything is political, and if you think otherwise, you’re a bloody fool.
Well, it also looks like brigading.
It is important work, but I think I’d rather not read about this here.
In what sense?
I don’t know what sense exactly oz meant, but here’s what the About page says:
I guess the definition of “brigading” there is “to whip up an outrage mob and direct them at targets”.
I also flagged it. I don’t come to lobsters for this kind of drama. Leave it to the orange site.
Respectfully, calling it “drama” downplays decades of clearly documented abuse by the person in question to a rather sickening degree
Which is why it probably has no place on Lobsters. It is drama in our community (at least judging by the Pareto distribution of upvotes vs. flags for this post). You’ll be more likely to find people that won’t downplay this report if you go to Reddit/Twitter/etc.
Off-topic or not, I find it too significant to flag it even if it is not strictly about computing.
It refers to a person who has profound impact on the shape of the worldwide software community.
It documents thoroughly, with countless references to sources (that is, Stallman’s statements and manifestos on his own site), a concern far too significant to be overlooked, flagged, and deleted, even if for many it’s off-topic, as in “well, paedophilia and sexual harassment are not relevant to me”.
Flagging does more than hide it from your feed. It marks the account. If you get enough flags you get a nice message suggesting you delete your account (not a joke).
I think “off topic” should be used sparingly. It is not an “I disagree with this” button or a downvote substitute.
This appears to be more nuanced than what you describe. For reference: app/views/users/standing.html.erb.
That’s the one. It is more nuanced. I was going from memory.
Even though it’s couched, the phrase “delete your account” was quite memorable for me. It’s also a little different experience when it shows up as a banner on the page rather than reading an ERB partial on mobile.
This was discussed extensively 3 years ago: https://lobste.rs/s/zp4ofg
While I think the original banner was in place to “gently” remind trolls that their antics were not welcome, it had the unfortunate effect of also being applied to prolific posters who attracted a disproportionate amount of flags.
I think I’m the top / second most flagged poster on here every once in a while and it’s honestly ridiculous. I’ve reached out to the mods twice about it with no response, so the whole “delete your post or contact the mods to discuss” is seemingly not a meaningful choice.
Can’t blame burntsushi for it one bit. I’ve considered deleting my account over it, since it literally suggests that you consider just that. At this point I’ve moved past it and don’t really care about the banner at all.
Unfortunately, lobsters has the same problem that every site with a scoring system has. People downvote or flag or whatever because they disagree with you, not because you lack substance. No amount of good faith engagement will ever keep people from flagging you, and there is seemingly no recourse for flagging erroneously.
Computing is (still) made by humans. Denying that is neither productive nor beneficial.
How does any software license fit that topic then as well? Computing is about the people as much as it’s about the computers and computing.
you “strongly support” it, but you took action to reduce its reach and visibility. Your actions are at odds with your claim.
It seems to me possible to strongly support a thing and yet more strongly support principles that one sees as saying it doesn’t belong here. (I am not commenting on whether it does belong here.)
I don’t even think it has to be “more strongly support” re: principles. It seems entirely consistent to say “this is the most important issue in my life” while also saying “and I will discuss it in the venues where it is appropriate to do so”, even if you consider discussion in appropriate venues to be of lesser importance. One dictates the value of the conversation; the other is simply a practical acceptance of where that issue is best discussed.
I don’t think this gotcha! argument really applies here. I thought it important to show that off-topic flags are not (all?) in support of Stallman, because that’s not what they are for. In addition, the discussion immediately derailed with insults against the report’s authors and trolling; the report in itself is not controversial, but Stallman supporters are trying to make it look like it is.
Compare it with the situation on the orange site, where this submission is much more topical: it was flagged to death and never reached the home page. That’s not what happens with off-topic flags on lobste.rs and the link is still #1.
Not every alert needs to go out on every channel. It is worth defending the existence of topical distinctions between different discussion spaces.
Strip playlist params from YouTube so it doesn’t autoplay more:
javascript:(function(){location.href='https://www.youtube.com/watch?'+(new RegExp("v=[^&]*").exec(location.href))})();
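A sturdier variant, as a sketch of my own (not from the original comment): parsing the URL with `URL`/`URLSearchParams` instead of a regex handles parameter order and the no-`v=` case.

```javascript
// Keep only the video id; drop list=, index=, and any other params.
function stripPlaylist(href) {
  const url = new URL(href);
  const v = url.searchParams.get("v");
  // If there's no v= parameter, leave the URL alone.
  return v ? "https://www.youtube.com/watch?v=" + v : href;
}
```

As a bookmarklet, the same logic would be wrapped as `javascript:location.href=...` with the function body inlined.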
New reddit to old reddit:
javascript:( function(){ location.href='https://old.'+(new RegExp("reddit.com/.*").exec(location.href)) } )();
Open page in Internet Archive:
javascript:window.location = "https://web.archive.org/web/*/" + window.location;
Save page in Internet Archive:
javascript:window.location = "https://web.archive.org/save/" + window.location;
Decode base64 in place:
javascript:function c(){}c.prototype.get=function(){var a="";window.getSelection?a=window.getSelection().toString():document.selection&&"Control"!=document.selection.type&&(a=document.selection.createRange().text);return a};c.prototype.set=function(a){if(window.getSelection){var b=window.getSelection();b.rangeCount&&(b=b.getRangeAt(0),b.deleteContents(),b.insertNode(document.createTextNode(a)))}else document.selection&&document.selection.createRange&&(b=document.selection.createRange(),b.text=a)};try{var d=new c,e=atob(d.get());d.set(e)}catch(a){alert(a.message)};
As an outsider (and at the risk of causing a flamewar), I have to wonder if there’s some connection to the zealotry that goes along with Rust.
Over the years I’ve seen a lot of “militant [insert new language] zealots”, but Rust (in my opinion) takes it further than languages like Haskell. It’s not enough for Rust to interact with an existing library - if that library isn’t written in Rust then they need a new one entirely.
Seems like a lot of pressure.
This zealotry’s prevalence is overstated, and when it happens, it is often from enthusiastic, inexperienced users, not from the folks working on the language itself.
I can’t think of a single person that I know of involved in Rust leadership who thinks like that.
Partially disagree here.
Indeed the folks working on the Rust language itself are great.
My past experience with “enthusiastic” users, as you call them, is that they are usually strong specific domain experts with enough knowledge in Rust to anticipate that a (re)write in Rust is the right thing to do - even when it may not be.
(The indie dev guy who coded in Rust for 3 years - I can’t find the link - made an excellent point: Rust is a language that forces a developer into a path of correctness. But correctness may not always be conducive to success.)
You mean this one? https://lobste.rs/s/nyikhk/lessons_learned_after_3_years_fulltime
This is an idea I’ve had kicking around for a bit but I do think there is a deep connection between the burnout in the blog post, the zealotry you mention, and a handful of other complaints that have surfaced from time to time about the language and its community.
My root observation is that the mindset of the community and the guiding principle of the language and its design is “we’re going to get this one right,” or in other words, the Rust contributors have set a high bar for themselves. This spills out as the burnout that gets brought up from time to time: if the bar is high there are few people, or maybe only a single person, who can “do it right.” That’s not just a feeling a maintainer might have: given the complexity of the compiler or the subtlety of what you’re trying to implement there are just not a lot of people who can do the work! You solve this by forcing people into management positions where they’re required to mentor and delegate but that’s hard to do in practice in a volunteer, open source project. The result is the burnout spiral mentioned in the article: “work doesn’t happen unless you do it personally”, you get tired, you get burnt out.
The “doing it right” also makes the language into fertile grounds for zealotry. Everything’s been built to such a high standard you can look at any use case for Rust and find one place, or several places, where Rust has some advantage over the opposing choice. If Rust’s design nurtures these kinds of arguments you’ll see people who make those arguments come to Rust, find it suitable for their purposes and stay with it.
i think this is very true, along with the rest of the pattern you’ve identified, but it’s not specific to rust. http://rhaas.blogspot.com/2024/05/hacking-on-postgresql-is-really-hard.html
i think there is a pretty strong disparity between domains where bugs are ~fast to discover and fix, and domains where bugs might not show up for months or years. in the former you can kinda just try things and if it breaks oh well you fix it and it’s fine. in the latter, avoiding bugs requires a lot of domain expertise because you have a really hard time catching them with tests. https://youtu.be/tgaKAF_eiOg?si=P2QKBbZsGVYFAl4k&t=786
This also explains the perpetual 0.x versioning.
This seems to be a thin wrapper over an AWS service.
This would be a good style of font for a terminal. It’s always a little jarring when terminal output includes emojis and they’re in full color while everything else is monochrome.
I once heard someone distinguish between “diploma” work and “toothbrush” work: diploma work is work you do once and then it’s done, whereas toothbrush work is work you have to do continually forever. A lot of information-related things seem like diploma work, because you don’t need to re-discover electromagnetism or re-write libc to be a software engineer, but when you zoom out there’s a lot of toothbrush work needed to maintain them.
100 times this one. When you print a page to PDF, any floating panels reappear on every page – which means a few lines on every printed page are covered by a floating panel.
Too often, I have used ‘save as HTML’, reopened the page after a few years, and discovered some external Javascript or other asset went break-a-doodle-doo. I started saving web pages as PDFs, only to discover the floating panel problem. Grr.
(Also, my smartphone has a very small screen, VGA resolution, 640x480. A 50-pixel header bar is a solid chunk of my screen. And some data-consent banners are so large the “Reject all” button scrolls offscreen. Thanks for letting me rant.)
I started saving pages using a combination of SingleFile, which basically just embeds all resources as b64 and downloads the result, and plain
curl | htmlq
if what’s important is just a bit of text. (With SingleFile you can also use your browser’s dev tools to delete parts of the page you don’t want, like consent banners or comment sections, and save the result as it appears in your browser. Highly useful.)

This article isn’t wrong, but it also doesn’t add much. Digital media degrade, and require complex machines to read back.
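The embed-everything idea behind SingleFile mentioned above can be sketched in a few lines. This is my own toy illustration; the regex, the fixed image type, and the file names are all simplifying assumptions (a real tool uses a proper HTML parser and handles every resource type).

```python
import base64
import re

def inline_images(html: str, read_bytes) -> str:
    """Replace <img src="...png"> references with base64 data URIs,
    in the spirit of SingleFile's embed-everything strategy."""
    def repl(match):
        data = base64.b64encode(read_bytes(match.group(1))).decode("ascii")
        return 'src="data:image/png;base64,' + data + '"'
    return re.sub(r'src="([^"]+\.png)"', repl, html)

# Toy example with an in-memory "filesystem" (hypothetical names).
resources = {"logo.png": b"\x89PNG fake bytes"}
page = '<p>Archived page</p><img src="logo.png">'
saved = inline_images(page, lambda path: resources[path])
```

The resulting page no longer references any external file, which is exactly why it survives link rot when reopened years later.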
Pretty much the best medium has proved to be clay tablets, baked (usually by accident). The next best medium is any medium with people actively copying it (including Hindu scriptures with multiple redundant versions to memorize).
Future scholars have to decipher the clay tablets, but the work transmitted by the living community can be re-translated as language changes or transmitted with commentary as to how it should be understood. So, arguably, the living community is a better medium for transmitting information and not merely graphemes that once encoded it.
Are there any examples of deciphering an ancient script without linking it to some known language (Coptic to hieroglyphics, contemporary Mayan, Linear B to Greek, etc.)?
It’s always easier to make a mess than to clean up after one. For those of you playing with LLMs and posting the resulting sludge online, take a long, hard look at yourself and reflect on the damage you’ve already done to the web ecosystem. Doubly so if you’re building the models, or actively helping people to use them in more places.
To quote Dr. Seuss,
I don’t think there’s any reason to think that using an LLM to generate text and then posting it online is harmful in a way people should care about (or be legally/socially compelled to care about). I don’t publish text on the internet based on what makes things nice for people doing corpus linguistics on English text.
All the LLM-generated garbage drowns out the good stuff. I care about the fact that I can’t search for anything without 90% of the results being worthless nonsense spewed out by LLMs.
I don’t think that “All” is fair. People generating garbage to game SEO are doing harm, but there is surely plenty of AI generated content that is neutral at worst.
There is no useful AI generated content on the internet. Even the best AI generated content competes with real content.
Okay, well, that’s a pretty extreme assertion and I think it’s at least obviously implausible.
There are people who find AI to be useful, and to produce useful output (this is a fact, I am one of them)
Some of that content is likely to end up on the internet
Ergo there is probably useful AI content on the internet.
Unless our definition of “useful” is radically divergent I just don’t see how what you’ve said could be plausible.
LLMs produce harmful garbage that is at best a lossy imitation of something they were trained upon.
There are naive users who can’t tell the difference between LLM slop and real information, there are grifters and hustlers who are happy to burn the world down if they can extract a profit, there are “useful idiots” who choose not to consider the externalities of their actions, and there are plenty of folks who fall into multiple such categories at once. The fact that you can personally extract benefit from applying this technology to problems does not change the fact that it is a massive and obvious net-negative for society and the environment. Lots of people found leaded gasoline useful, too!
You say that “LLMs produce harmful garbage” but also “it is a massive and obvious net-negative”.
The latter is a far weaker assertion than the former. If all you want to say is that you think that, overall, the technology causes more harm than it’s worth, that’s radically different from saying that it is useless or strictly harmful garbage.
It doesn’t follow (if you intended this to be a formal syllogism). The people who find AI to produce useful output could be uniformly mistaken.
If you didn’t mean it formally, consider ~lonjil may not have either — there may be extremely small amounts of useful AI generated content on the Internet, little enough to make it, as ~Internet_Janitor says, a massive and obvious net negative.
I could restructure it to address that, but I’m using words like “plausible” for a reason.
This isn’t intended to be a formal syllogism, strictly, it’s just a way of structuring my statements for clarity.
But just because I’m speaking somewhat formally doesn’t mean that I assume that they are. But they are making an extremely strong assertion, and they were not ambiguous. If they want to say “there’s little useful content from LLMs” or “the majority of LLM content on the internet is bad” that’s fine but I don’t consider that to be “close enough” to just hand wave it away as harmless hyperbole. I might disagree with those statements but I doubt I’d disagree with them to the extent that I consider them obviously wrong.
I dislike these extreme, obviously objectionable statements quite a lot. I think they’re a bad way to discuss interesting, complex topics. That’s why I responded in a way that I had hoped would be clear. A response of “that doesn’t follow” is a good response.
Hm. Here you seem to assert that
I guess I can see that as one definition of “plausible”, but, when lonjil asserted “There is no useful AI generated content on the internet”, you called their assertion “obviously implausible”. How does that reconcile with (1)?
I think it appears that way because I’ve structured my argument in a way that’s much less ambiguous. If you were to structure their argument such that it’s similarly expressed, I suspect you’d find that they’re not symmetric in this regard - that is, they would have to add a premise like “I do not believe that there is useful LLM information, therefore it is plausible that there is no useful LLM information”, and you’d have to weigh the arguments against one another by evaluating those premises, understanding “useful”, etc.
Regardless, someone saying “something is useless” is a far stronger assertion than “something may be useful”.
I reject this argument, on the basis that just because something is useful to you, doesn’t mean that it will be useful to anyone else. For example, let’s say someone uses ChatGPT instead of Google because Google never returns useful results anymore. If ChatGPT then gave a useful response to this person, would it be useful to put that ChatGPT output onto the public web? I would say no. The original information is already on the web, and putting more LLM-content on the web just makes it harder to find it.
It should all be labeled.
It could be a start, but essentially just an evil bit
I don’t follow.
It should be labeled, at the very least, so that LLMs don’t consume their own excrement thinking it was human-generated. The result would be the textual equivalent of high-pitched feedback from a microphone too close to a speaker.
Okay, I think that’s fine to say but it doesn’t really change things - the people who are causing these problems are not going to label their outputs, but that doesn’t mean that every output from an LLM is bad.
I’m skeptical that anybody on this site is creating vast quantities of slop to game SEO, which is what is actually causing the problem in TFA, versus posting interesting things they did with new technology, which is cool and they shouldn’t feel bad about.
Many of the people who wrote the software used by other(?) people in the SEO industry to generate vast quantities of slop thought what they were doing was just playing with “interesting new technology”, which doesn’t make them any less responsible for the outcomes. Designing bombs for the intellectual thrill isn’t free of ethical implications just because someone else is responsible for choosing where and when to drop them.
Reminds me of the recruiter trying to get me to work for a weapons manufacturer (sorry, “battlefield intelligence”) by telling me that my work would be good because it would help make the weapons more accurate and only kill the people they were aimed at more often and lessen collateral damage.
That’s a really strong stance to take and most people probably would and should reject it intuitively. Intent is almost universally understood to be a critical component of moral responsibility.
Nazi, Schmazi, says Wernher von Braun!
I’m not totally sure what point you’re trying to make. I think you’re equating my statement that intent factors into moral judgments with the (commonly rejected) argument that Nazis were “just following orders” and are therefore innocent? Or perhaps the “Wernher von Braun” character’s uncaring attitude towards where bombs land?
I don’t think it’s contentious at all to say that intent factors into moral judgments, but it would certainly be contentious to say that nazis acted without intent.
It’s more about ~Internet_Janitor’s phrase:
Which I take to be more or less a reference to Wernher von Braun (who was a real person, not just a character). I’m not arguing that intent is irrelevant to moral culpability, but that intent only takes you so far, as the example of Wernher von Braun illustrates. Sometimes you aim for the stars, but nevertheless hit London.
Thanks, that clears things up re: Wernher von Braun.
There’s no question that intent is just one factor. I just reject this:
That doesn’t sound quite right to me. A drunk driver’s intent may be simply to get from A to B, but most people would probably hold them responsible for the consequences of their recklessness.
Of course. Intent is just one factor. Similarly, if I got into my car and intentionally tried to run people over to kill them, people would take that into consideration.
I was under the impression you were claiming it was necessary. If you merely think it’s relevant, then yes, I agree. But I think what matters is the intent to act, not the intent to cause any particular outcome; if you work for FooCorp deliberately because you want them to pay you¹, as opposed to being tricked into working for them or something, IMO you are morally responsible for anything they do that you could reasonably² have predicted.
1. and you don’t consider yourself to be actively defrauding them
2. I realise “reasonably” is extremely woolly, but it’s hard to be quantitative about morality.
The grandposter said “any less responsible”, and you translated that to “any less morally responsible”, which I think is a misread.
If I push a button thinking I will get a coffee, when it actually sets fourteen people and a dog on fire, I may not be morally responsible, but I definitely am responsible: without my action, the subsequent suffering would not have happened.
I guess so. I’m not sure the word “responsible” is ever used that way. I think if you said “I’m responsible for that”, a ton of people would say “no, it’s not your fault”. But sure, they may mean it in the sense that you are still causally prior to the outcome.
Another illustrative hypothetical:
You back out of your garage. You do not check your rear view. You kill a small dog with your car.
Regardless of your frame of mind, you are responsible for that act. Not just causally, but also morally and legally. If frame of mind had no impact, then “murder” and “manslaughter” would not be separate crimes*. On the other hand, if “had no ill intent” was moral clearance to do anything, then “manslaughter” wouldn’t be a crime at all.
Your moral framework encourages willful ignorance as a way to remain morally in the clear. It does not accurately reflect most moral frameworks in our world.
[*] In the US, both are (roughly) “responsible for death”. “Murder” is (again, roughly) for intentional death, while “manslaughter” is for unintentional but still culpable death. Local laws vary, but most jurisdictions have that distinction.
I think you’re misunderstanding me, but I don’t think it really matters, so long as we understand two things:
That I consider “responsible” as “morally responsible”, which you can say is not the case, in which case I simply have misunderstood the poster - but I believe they were speaking morally based on the context.
That intent factors into moral judgments.
As for the various cases where intent factors in weakly, strongly, or perhaps not at all, they don’t really change those points.
In the effort to understand what you’re saying then, are you suggesting that somebody who is “just playing with new interesting technology” is “manslaughter” responsible, but not “murder” responsible for the destructive results?
I think that whatever they are, they are not “just as” responsible.
For example, I think that AI may at some point be abused in a way that I don’t like. I also pay for ChatGPT, and perhaps in the future my dollar contribution to the company will somehow lead to that “badness” occurring. I do not believe that I am as morally responsible for that as whoever leverages the technology with direct intent to commit some evil deed.
Concretely, I am not convinced at all that AI is radically more “bad” than it is “good”. I am definitely convinced that it provides multiple “goods” and that it has concrete use cases that are neutral or good. It absolutely has some terrible use cases, there’s undoubtedly harm that can be done (I think deepfakes are quite a scary thing). I also think that, for those reasons and others, being a user of AI does not make someone “just as” responsible as someone who explicitly leverages AI to do something terrible.
This is scary.
We don’t have reliable information on language usage by humans for most centuries, and arguably not even for the 20th and 21st centuries, since the Internet is obviously not necessarily representative of all human language use.
Sure, but we have an NLP researcher basically saying
We simply haven’t encountered that before. It’s not the “oh no computers are alive” kind of scary. It’s kind of hard to even put a metaphor to it. I guess it’s something like being an uninformed patient at a hospital staffed with very confident but extremely stupid doctors:
Wait. But I thought I had a…cold?
Um well, ok great. Glad to live another day.
A loose analogy might be if the medical community started spraying antibiotics of last resort on every available surface (you’ll have to imagine there was some sort of tenuous potential for short-term profit and career advancement here), and then feigned surprise when untreatable antibiotic-resistant bacteria spread like wildfire. LLM output is very difficult to distinguish from semantically meaningful text with the statistical approaches that have traditionally been applied to spam, so while it is very cheap and easy to create this form of information pollution, we don’t have scalable techniques for eliminating it from a corpus. It is a grim sliver of comedy that the over-enthusiasm for this machine learning technique will assuredly stymie many future efforts at more sophisticated approaches, and leave the open web as useless for machines as it is becoming for humans.
“Delving”. I see what you did there.
Note that the Nix version packaged in nixpkgs is still 2.18.4.
2.24.0 was tagged on Aug 1, so unless you’ve specifically opted into using bleeding-edge Nix, you’re not vulnerable.
Unfortunately, the version shipped by the official installer is 2.24.5, which will apply to almost all non-NixOS users.
Ah, that’s not good, then. Do you know what the Determinate installer does? I think my Ubuntu used that less than a year ago and it currently has nix 2.18.1.
We ship the latest version that has passed our validation suite. Occasionally this means we hold back, or roll back to prior versions. Yesterday we rolled back to 2.23.3 for the security issue. We’re in the process of rolling out 2.24.6 as I write this.
See: https://status.determinate.systems/incidents/1js0r53719f4
Update: done.
detsys’ installer bumped to cppnix 2.24 only a few weeks ago: https://github.com/DeterminateSystems/nix-installer/commit/6d0bd3ef6b9887ab4e4b3d5c51f721fa505b82d2. There have been some back-and-forth rollbacks since.
What’s the purpose of calling it “cppnix”?
Precision. “Nix” as a term is very overloaded (‘nix the complete system’, ‘nix the language’, ‘nix the tool you call to do things to/with the system’,…).
cppnix is a way to refer to the actual C++-based implementation most commonly in use.
Seems like it should be uncontroversial in a growing ecosystem with multiple implementations anyway.
Over in the world of Ruby we often refer to the canonical implementation as CRuby (formerly MRI, but Matz asked the community to move on from that name, since the team is so much more than just him). It helps to disambiguate it from JRuby, TruffleRuby, etc.
Feels normal to see the same happen here.
I don’t know what “Nix the complete system” is supposed to be. There is a language and a tool. The difference between a language and a tool is easily distinguishable from context. This seems to be an attempt by Lix people to portray Nix as “just another implementation of Nix”, rather than being the official implementation, which exists in contrast to reimplementations and forks.
The name “CppNix” predates Lix by at least several months, if not years. I think Tvix folks coined it originally? See for example this commit message from last year (just a random example I found): https://code.tvl.fyi/commit/?id=625643559100c15afe79b799c3a76ef880a07210
This is not coining a name.
I’m under the impression that you’re trying to police others’ usage of language.
I’m hoping to hear convincing arguments for this apparent attempt at renaming someone else’s project.
I find it a clear, disambiguating name with a well-defined prefix (“the C++ implementation of Nix”), just like a Rust rewrite might be called “rustnix” (see: https://tvix.dev/) or a future Roc version might be called “rocnix”. So in that sense it doesn’t matter whether that “someone else” has a problem with it or not. You don’t get to pick your own nickname.
I don’t think that is widely used. The distro is NixOS, and calling the things around the package manager and language “the Nix ecosystem” is totally reasonable.
If it is the most common implementation, why change its name if it is already so common and widely used?
For the same reason you can have “Java” as a generic language term, but when speaking of the behaviour of particular implementations you say OpenJDK, Android Runtime, or OpenJ9.
Same idea as “CPython”.
They named their project that themselves: https://github.com/python/cpython
Yep! And perhaps in a few years the CppNix team will rename themselves as well. Community pressure can steer a core team towards better choices.
It’s disambiguation, putting a precise spotlight on the particular Git tree maintained at github.com/nixos/nix. Otherwise “Nix” could mean the language itself, or the broader ecosystem.
IMO it is silly.
2.24.X is not to be considered bleeding edge. It is just the latest version. Bleeding edge is the git variant.
Just to be clear, there are several versions of Nix in nixpkgs; 2.18.5 is just the current stable version, e.g. pkgs.nixVersions.stable (or something like that, I’m on my phone). Heck, even in unstable, pkgs.nix is at 2.18.5. (This might’ve been recently changed, but if so I doubt they would’ve rolled back that far!)

I usually have WebGL turned off, for fingerprinting reasons (which the author even alludes to, funnily). So for me the blog post just ended with “Alright, so we’re dealing with 92 KiB for gzip vs 37 + 71 KiB for Brotli. Umm…” and lots of blank space.
The browser console did reveal the cause of the problem, but still, that dampened quite a lot of the thunder of this post for me. And I appreciate the idea, even! It’s an impressive technical solution! Just… being more thoughtful of people without WebGL, or without JS, I would appreciate as well. Maybe a link to a non-JS fallback page could help here?
It’s impressive as a “look what side channel my blog runs on” tech demo. As an accessibility demo, it’s introducing several potential points of failure into everyone’s experience, not just people on slow connections, in order to save… 5 KiB. I don’t think 83 vs 88 KiB is going to make the difference to anyone. If it is, surely linking the SVG with <i> so clients can choose to download it later or not at all would be better. Heck, even just doing the bar chart in ASCII would work.

I had the same situation (using LibreWolf, which disables Canvas). I literally get a better experience on this site by force-disabling JS using uBlock Origin, which possibly loaded a fallback page. I’m sure there’s a joke in here somewhere.
There was a meta refresh for browsers w/o JS, just not one for browsers without WebGL. I fixed that a couple of hours ago.
Nice, thanks!
I highly recommend following the first link of this essay: What Happens to Us Does Not Happen to Most of You (sigarch.org). This phrase stands out to me as particularly incisive:
In my experience, tense conversations arise when deeply personal testimonies like this are shared. I believe that much of this tension stems from a reluctance to accept this simple, troubling premise. Accepting this premise requires compassion and empathy.
Thank you for posting this @carlana.
It seems to me entirely plausible that someone could accept this premise and just not care as long as it doesn’t affect them personally, which I suspect is not what you would call “compassion and empathy”.
If we wanna quibble over words I might call that “acknowledging” rather than “accepting”. “Yes this is true, but it’s not going to change my behavior.”
Or, worse, someone could accept this premise and infer from it that their community is not meant for women who cannot experience it the way they do.
It’s not clear to me what problem is being avoided here. The author describes the combinatorial explosion as “unsustainable”, “ridiculous”, and “feels immediately uncomfortable”. The closest thing to an objective, concrete reason is that
But the way you test that “both fields are being checked” is to have tests for each field individually. Testing what happens when both variables disable the message is how you check that you’re not screwing up the logic in a different way, like accidentally XOR-ing the variables instead of AND-ing them.
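To make that failure mode concrete, here’s a minimal sketch (the `hidden` functions are hypothetical, not from the article) of why single-variable tests can’t tell OR apart from XOR:

```python
# Hypothetical check: the message is hidden if either flag disables it.
# The correct logic is OR; the buggy version accidentally uses XOR.
def hidden(disable_a: bool, disable_b: bool) -> bool:
    return disable_a or disable_b

def hidden_buggy(disable_a: bool, disable_b: bool) -> bool:
    return disable_a != disable_b  # XOR: subtly wrong

# Tests that flip one flag at a time pass for BOTH implementations...
for impl in (hidden, hidden_buggy):
    assert impl(False, False) is False
    assert impl(True, False) is True
    assert impl(False, True) is True

# ...and only the both-flags case tells them apart.
assert hidden(True, True) is True
assert hidden_buggy(True, True) is False  # the bug, caught
```

So dropping the combined case is exactly dropping the one input that distinguishes the two.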
What seems like the actual problem here is that the article’s testing framework is incredibly verbose and lacks parametrization, requiring them to write out eight test cases individually. In NUnit, for example, you would just write
and it would automatically call the test four times and assert against the expected result. When it takes just one line of code to add a new test case, why wouldn’t you be thorough and add every combination, especially for easy-to-test functions like this one?
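The same one-line-per-case idea, sketched in plain Python (`show_message` is a hypothetical stand-in for the article’s function; pytest’s `@pytest.mark.parametrize` and NUnit’s `[TestCase]` express the same table declaratively):

```python
# Table-driven sketch: the message is shown unless BOTH flags disable it.
# show_message is a hypothetical stand-in for the function under test.
def show_message(disable_a: bool, disable_b: bool) -> bool:
    return not (disable_a and disable_b)

CASES = [
    # disable_a, disable_b, expected
    (False, False, True),
    (True,  False, True),
    (False, True,  True),
    (True,  True,  False),  # adding a case costs one line
]

for a, b, expected in CASES:
    assert show_message(a, b) is expected, (a, b)
```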
NUnit actually makes combinatorics even easier:
I’ve written exhaustive tests for things with way more than the number of cases here, so I’m also unsure what the actual problem is. Using code to generate a matrix of test values is a pretty common strategy, and baked into several testing frameworks.
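A sketch of that matrix-generation strategy in Python, using `itertools.product` to enumerate the full truth table (the three-flag `hidden` predicate is hypothetical):

```python
from itertools import product

# Hypothetical predicate: hidden iff both disable-flags are set and
# the override flag is not.
def hidden(a: bool, b: bool, override: bool) -> bool:
    return a and b and not override

# Generate the full matrix of inputs in one line, a strategy some
# frameworks bake in (NUnit's combinatorial mode, pytest's stacked
# parametrize decorators).
for a, b, override in product([False, True], repeat=3):
    expected = a and b and not override  # oracle written out independently
    assert hidden(a, b, override) is expected
```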
(and when I say “way more than the number of cases here”, I do mean it – I’ve written exhaustive testing for conversion functions that work on 24-bit color values, for example)
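A sketch of what that exhaustive style looks like, shrunk to an 8-bit domain so it runs instantly; a 24-bit version is the same loop over `range(1 << 24)`. Both conversion functions here are hypothetical stand-ins:

```python
# Exhaustive round-trip test over a complete input domain, shrunk to
# 8 bits; the 24-bit color version is the same loop, just bigger.
# Both functions are hypothetical stand-ins for real converters.
def to_float(c: int) -> float:
    return c / 255.0

def to_byte(f: float) -> int:
    return round(f * 255.0)

for c in range(256):
    assert to_byte(to_float(c)) == c, c
```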
You are absolutely right about testing all permutations when it’s cheap. If the CPU cost is negligible, and if the cost to the programmer is negligible (concise and easy to write), then why not?
But I can think of occasions when the cost might not be negligible:
Here’s a contrived example of what I mean by that last point. Suppose I’m modeling the animal kingdom with classes and inheritance (everyone’s favorite starting point for contrived examples). I want to test animal locomotion: being able to walk, swim, and/or fly. Iterating over every possible combination isn’t helpful because Animal is an abstract class: I can’t construct new Animal(canWalk, canSwim, canFly). So if I want to test every permutation, I have to write 8 different blobs of setup code: new Sponge(), new DragonFly("blue"), new Fish({name: "Gary", color: "gold"}, numScales), new FlyingFish({name: "Orville"}, numScales, numWings), new Spider(), and so on.

Granted, I think many problems can be decomposed into their dimensions, and I think it’s cool that some frameworks make it easy to test all of them (like the NUnit example you gave). Real life is, most of the time, easier to test than the example immediately above.
Those are all much better reasons than the ones in the article.
This seems like the kind of domain where I really do want to test every permutation. If I were writing this code, I could already picture the bugfix release note: “Fixed a bug where animals with multiple methods of locomotion would fail to use any of them”. If these aren’t simple booleans, you certainly want exhaustive tests for the details of their mutual interactions.
Deliberate reduction of test cases is apparently used in higher-stakes scenarios, too. This article claims one reduction strategy is used in aerospace.
The core concern for me is legibility and maintainability. Frameworks do make it easier and more succinct; however, it’s still difficult to derive business logic from a flat list of permutations. spc476 makes some good points above, too! As such a list expands, it takes time to work out the business logic and to determine how the list should grow. Automating the permutations is low-cost upfront, but you pay for it over time with code that’s harder to understand. Testing each condition in turn is much more legible and modifiable.
You can improve the legibility of testing code with helper functions to abstract away common setup or code comments to explain how the test works. You can’t improve missing test coverage by anything other than adding tests, and a bug that only emerges when more than one variable is flipped will not be covered if you write fewer tests. I don’t see a way around this except rationalization.
Yeah… it’s a delicate balance. More helper functions and abstractions increase the maintenance cost: it’ll take longer for folks to understand what’s happening and make a useful change. And the greater the maintenance cost, the greater the probability of misusing the helpers and not getting the desired coverage. I really like the simplification the article presents, as it reduces the need for more abstractions or helper functions while still preserving coverage. The concept goes by many names; one is Modified Condition/Decision Coverage (MC/DC), where you’re focusing on the decisions. https://youtu.be/DivaWCNohdw?si=XfWuq7pJOO2Vk5eQ is a nice short talk about it.
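For a two-input AND, the MC/DC idea can be sketched like this: keep only the cases where flipping a single condition flips the decision, which needs n + 1 cases instead of 2**n (the `decision` function is a toy example, not from the article):

```python
# Toy MC/DC sketch for decision(a, b) = a and b. Each retained pair of
# cases shows one condition independently flipping the decision, so n
# conditions need only n + 1 cases instead of 2**n.
def decision(a: bool, b: bool) -> bool:
    return a and b

mcdc_cases = [(True, True), (False, True), (True, False)]  # (False, False) dropped

# Flipping only `a` (cases 0 vs 1) flips the decision:
assert decision(*mcdc_cases[0]) != decision(*mcdc_cases[1])
# Flipping only `b` (cases 0 vs 2) flips the decision:
assert decision(*mcdc_cases[0]) != decision(*mcdc_cases[2])
```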