Many websites don’t support a + in email addresses either, and only support some custom subset of characters instead. I’ve already collected more than 25 cases personally, which is extremely annoying.
I had one that allowed + in the email for signup, but the password reset form did not accept emails with a +.
I’ve seen this same problem with unsubscribe forms. Absolutely ridiculous. Straight to the spam bin.
six months later, you have a pile of shell scripts that do not work—breaking every time there’s a slight shift in the winds of production.
This screams of projecting lack of skill onto others. Perhaps your shell scripts don’t work. Where does the conclusion that mine won’t work come from?
This is the same mindset as the common myth of “don’t write your own SQL, Hibernate developers can write much better SQL than you”. Yeah, how did that work out?
Your bash scripts are probably fine, but recognise that rolling your own requires maintenance.
I think it’s a bit lost in the format but for me, the takeaway from this article is that you should be conscious of the point at which maintenance cost starts to outstrip value.
You may well not be there and it’s definitely easy to fall into the trap of adopting technology X too early and eating cost and complexity as a result.
But I have worked at enough places where the world is run by collections of well-meaning and increasingly stretched shell scripts marinating in Historical Context. There’s a trap there as well.
I am a heavy k8s user at work. I can confidently say my bash scripts require less maintenance than k8s.
Your bash scripts are probably fine, but recognise that rolling your own requires maintenance.
Rolling my own what? The reason I disagree with this post is that it is vague and fails to detail what exactly these shell scripts would entail, and how much work it takes to set up deployments on Kubernetes even with an already working cluster. Frankly speaking, I find the amount of painfully complicated YAML it takes to set up a service more involved than a simple deployment script.
it is vague and fails to detail what exactly these shell scripts would entail
It kinda isn’t and doesn’t, though. It even provides a handy summary of the set of things that, if you need them, maybe you should consider k8s, after all:
standard config format, a deployment method, an overlay network, service discovery, immutable nodes, and an API server.
Invariably, I’ve seen collections of shell scripts spring up around maintaining kubernetes. The choice isn’t between bash scripts or kubernetes. The choice is around how you bin-pack the services onto servers and ship the code there. All options come with a large side of scripting to manage them.
Not my shell scripts! They’re perfect. Perfect in every way. And my SQL — well escaped and injection proof! My memory? I bounds-checked it myself with a slide rule, not a single byte will be read out of bounds.
Other people have skill issues but you and me? We’re in a league of our own :)
Oh yeah, the old shortcut to fake humility: “we all make mistakes, I’m not perfect, neither are you”.
That argumentative position is useless. So we completely relativize any and every bug? Are they all the same? All code has bugs… How many bugs are reasonable? Is it acceptable that a single 10-line shell script has 4 bugs? What about 8 bugs?
And what about Kubernetes manifests? Are they magically bug free because we just say they are?
Can we try to keep the discussion fruitful and somewhat technical?
Yes, I am claiming that the author sounds like they are not too familiar with shell scripts and dismisses them as something that attracts bugs and is difficult to maintain. What is the technical foundation of such claims?
Your example is a good one. SQL injection was a problem from the old PHP era, when lots of people jumped into using relational databases without any prior programming knowledge. It is rather trivially avoided and is virtually nonexistent nowadays. I think everyone expects it to be a non-problem, and if people go about assembling SQL by string concatenation without proper escaping, that will certainly not fall under the “we all make bugs” category.
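For illustration, a minimal sketch of the trivially-avoided version using SQLite’s C API; the users table and its columns are invented for the example, and the point is simply that the value is bound as data rather than concatenated into the SQL text:

```c
/* Sketch: a parameterized query with SQLite's C API. The "users"
   table and its columns are invented for illustration. */
#include <sqlite3.h>

int find_user_id(sqlite3 *db, const char *last_name) {
    sqlite3_stmt *stmt;
    int id = -1;

    /* The ? placeholder is compiled as part of the statement... */
    if (sqlite3_prepare_v2(db,
            "SELECT id FROM users WHERE last_name = ?",
            -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    /* ...and the value is bound as data, so a last name such as
       "Null' OR '1'='1" cannot change the query's structure. */
    sqlite3_bind_text(stmt, 1, last_name, -1, SQLITE_TRANSIENT);

    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);

    sqlite3_finalize(stmt);
    return id;
}
```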
Well, my Kubernetes manifests are bug-free. I’m an expert.
OTOH do avoid writing your own database instead of using an off the shelf one. The durability on power loss alone would consume you for weeks. ;)
Yeah, “skill issue” arguments in the domain of software engineering never cease to tickle me.
“You aren’t writing your own database and operating system on bespoke hardware? Skill issue!”
😂
It’s a categorical difference. It requires dramatically more “skill” (I would argue that it becomes functionally impossible to do this at any but the most trivial scales but maybe you’re the rare genius who could be curing cancer but prefers to use bash to address already-solved problems?) to write correct, idempotent shell scripts than to describe your desired state and let a controller figure out how to update it.
Declarative programming sounds good, but the effect is that you have an application whose runtime control-flow-relevant state is “your entire system”.
Even if you think you are capable of writing immaculate scripts that can do everything you need and maintaining them, can you not conceive of a world where other people have to maintain them when you’re not around? In other words, even if you are perfect, if the baseline skill required to operate a shell-based deployment method is so high, aren’t you basically arguing against it?
Like, there are plenty of technical arguments against kubernetes, and there are great alternatives that are less complex. You can even argue about whether some of these things, like rolling deploys, are even required for most people. Skipping all of that and calling someone else a bad programmer because they’d rather use Kubernetes is just mean-spirited. Just this week another user was banned for (among other things) saying “skill issue”, but if you rephrase that to “lack of skill” it sits at +22?
This is the same mindset as the common myth of “don’t write your own SQL, Hibernate developers can write much better SQL than you”. Yeah, how did that work out?
Most teams converge on using an ORM. Developers who can’t deal with ORMs and feel the need to constantly break out into SQL are a code smell.
Seriously? The Vietnam of computing?
This is largely untrue, and the peak has passed long ago, with ORM libraries that promised to take over the world up to around 2010 being pretty much all dead.
The explosion of popularity of postgrest, supabase, and the like seems unstoppable at this moment.
I don’t see it for teams or for anybody who’s hiring/managing teams. Raw SQL is fun if you’re 1-2 developers, but the fun wears off quickly.
Also, yes, most ORMs suck (with “suck” I mean they’re not the Django ORM).
My experience has been by and large the inverse, with teams bemoaning ORMs systematically because they’d gotten bitten by Weird ORM Bugs more than once. Not saying that raw SQL is more fun, but I derive no fun from ORMs either (nor have I seen many teams having fun with ORMs). Of course, this is also anecdata.
I love this. Inside but also outside of the context that surrounds this comment.
I can’t believe I’m taking Google’s side here, but this is ludicrous. The motivation and proposed corrective measures are too ill-conceived; they’re just going to hurt the ecosystem instead. There is a right way to do this but this ain’t it.
Change and unknowns versus keeping the status quo.
Is there a right way of breaking monopolies? They design themselves to make breaking them up look as unattractive as possible.
The government could fund Firefox or core tech in FF or otherwise contribute to those projects, thus weakening Google’s hold over the company. US gov pours billions into tech startups and companies, seems perfectly reasonable for them to do so here.
Maybe a dim view, but I don’t think I would wish government funding on Firefox. I can only imagine them getting drawn into political fights, spending time justifying their work to the American people, and getting baroque requirements from the feds.
Government funding comes in all shapes and sizes. Most of it has nothing to do with politics. The air force and DoD are constantly investing or pouring money into startups. I myself had a government investor at my startup. No justification to the US needed.
If it’s government funding with few or no strings attached, that would be great. I just wouldn’t want to see Firefox become a political football.
Most government funding for tech has no strings attached, or they just own stock, which is ideal for everyone.
I feel like this would open a whole new can of worms and actually wouldn’t be good for Firefox in the longer term.
I don’t think they care particularly much about Google’s hold over Mozilla. They care about Google using their simultaneous ownership of Google Search, Chrome, and all their ad-related stuff to unfairly enrich themselves, and they see Google’s payments to Apple as a method to defend that monopoly power. If Mozilla had an alternate source of funding, it wouldn’t really change anything except maybe make the browser that 5% of people use have a different default search engine. It probably wouldn’t help Firefox to become more popular, and it’d be a much smaller difference than whatever happens with Safari.
It would reduce Google’s ability to exercise this monopolist power over Mozilla.
If the money were spent well, I think it absolutely could.
Nationalizing natural monopolies has not been a popular approach in the US, unfortunately.
Regardless, it seems very likely to me that neither Chrome nor Firefox would survive this. But who knows, maybe that’s a good thing. Maybe that will pave the way for consumers paying for their browsers instead of paying through subjecting themselves to advertisement and propaganda. Doesn’t sound too bad since it’s probably the ad economy that turned the world into the propaganda wasteland that it is today.
Every time OpenBSD crashes, and it happens very often for me when using it as a desktop
I am curious about that. I think OpenBSD might have never crashed on me. Windows (also post-XP), macOS, Linux, FreeBSD, NetBSD, DragonFly (though that was hardware related) all did multiple times though.
Is this due to development? Is it hardware related?
Just surprised about that particular one. All the others feel like “sure, if that’s what you want then OpenBSD is probably not a good choice”.
I am curious about that. I think OpenBSD might have never crashed on me. Windows (also post-XP), macOS, Linux, FreeBSD, NetBSD, DragonFly (though that was hardware related) all did multiple times though.
Very similar experience. Don’t think I’ve ever had a crash. Her experience might also have something to do with solene being an OpenBSD dev, so testing untested stuff possibly leads to some instability? That’s an assumption on my part.
I have grievances against the OpenBSD file system. Every time OpenBSD crashes, and it happens very often for me when using it as a desktop, it ends with corrupted or lost files. This is just not something I can accept.
!!!
For comparison, ext3, with journaling, was merged into Linux mainline in 2001.
I developed embedded devices whose bootloader read and wrote ext4 files, including symlinks. There is really no excuse not to have a journaling file system on a system that’s larger than a fingernail.
For OpenBSD it may be a similar reason to why they got rid of Bluetooth. Nobody was maintaining that code. It got old, stale. So they got rid of it. I generally actually like this approach; sadly you lose functionality, but it keeps the entire codebase “clean” and maintained.
My guess is that they need people to work on the FS issue. Or rather, they don’t have anyone on board who cares enough about it to actually write the code in a way that is conducive to OpenBSD’s “style”. Could be wrong, but that’s my assumption.
For comparison, FreeBSD merged soft updates (a variant of journaling) in the 1990s, and in 2008 announced ZFS support.
OpenBSD had soft updates, but they recently pulled it.
The pragmatic thing to do would be to grab WAPBL from NetBSD since NetBSD and OpenBSD are still relative kin. Kirk McKusick still maintains UFS on FreeBSD so the SU+J works well there but it would be a lot of work to pull up OpenBSD’s UFS.
IIRC WAPBL still has some issues and is not enabled by default.
Its primary purpose is not to make the filesystem more robust but rather to offer faster performance.
I’ve never experimented with SU+J, but I’d like to hear more feedback on it (:
I’m not sure how deep WAPBL goes, but a journal helps you to close some corruption events on an otherwise non-atomic FS by being more strict with the sync and flush events and without killing performance. You can also journal data which is an advantage of taking this approach, although I don’t know that WAPBL offers this currently. Empirically NetBSD UFS+WAPBL seems fairly reliable in my use.
SU+J orders operations in an ingenious way to avoid the same issues and make metadata atomic at the expense of code complexity, the J is just for pending unlinks which otherwise have to be garbage collected by fsck. A large, well known video streamer uses SU+J so it is well supported. Empirically SU+J is a little slower than other journaling filesystems but this might be as much implementation and not algorithm.
Thanks for the feedback. I’ve been wanting to check out SU+J for a while, you got me hyped to dig into the concepts and the code!
Nice, I had not seen that thread. I think the data journaling they are discussing would be important for OpenBSD, as for instance on FreeBSD UFS is used in specific scenarios like embedded devices or fail-in-place content servers, and ZFS is used anywhere data integrity is paramount. WAPBL was created in response to the complexity of SU+J, which it seems OpenBSD was also impacted by. For that and easier code sharing I would be inclined to go the WAPBL direction, but there may be other merits to the SU+J direction in terms of syncing FFS and UFS against FreeBSD.
This is really surprising. I thought this was one of those instances of OP “handling things wrong”, but actually it doesn’t seem OpenBSD natively supports anything other than FFS and FFS2 (and without soft updates, as mentioned above).
…Web Browsers from ports that are patched to implement pledge(2) and unveil(8). […] FreeBSD 14.1, AFAIK, does not implement such feature.
I suppose the idiomatic method for securing processes on FreeBSD is capsicum(4). And at least for Firefox, it looks like someone has been working on adding support but they ran into some tricky cases that apparently aren’t well supported.
I guess for retrofitting huge, complex code bases, pledge and unveil or jails are probably easier to get working. I wonder how this affects things like file picker dialogs for choosing uploads, etc. If they’re implemented in-process, I guess they get to “see” the veiled or jailed file system - which means you don’t get any mysterious permission issues, but you also can’t upload arbitrary files. If they were implemented via IPC to some desktop environment process, the user could see the usual file system hierarchy to select arbitrary files, and if those files were sent back to the browser process as file descriptors, it would actually work. (I think the latter is how sandboxed apps are permitted to read and write arbitrary files on macOS, with Apple-typical disregard for slightly more complex requirements than just picking one file.)
I guess for retrofitting huge, complex code bases, pledge and unveil or jails are probably easier to get working.
In regard to pledge+unveil, it’s extremely simple to actually implement in code (though considerations for where+what in the code would be more complex) and the predefined promises make it pretty easy.
I wonder how this affects things like file picker dialogs for choosing uploads, etc.
For pledge+unveil, from memory, the dialog just cannot browse/see outside of the unveil’d paths. For browsers there are a few files /etc/<browser>/unveil.* that list the paths each kind of browser process is allowed to access. Included here is ~/Downloads and common XDG-dirs for example, which allows for most file picking stuff to work fine for most users.
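For readers who haven’t seen the API, a minimal sketch of the pledge+unveil pattern; the path and promise strings here are invented, not the real contents of a port’s unveil.* files:

```c
/* Sketch: the pledge+unveil pattern (OpenBSD). The path and the
   promise strings are invented; the browser ports read theirs from
   /etc/<browser>/unveil.* as described above. */
#include <err.h>
#include <unistd.h>

int main(void) {
    /* Expose only ~/Downloads (read/write/create); the rest of the
       filesystem becomes invisible to this process. */
    if (unveil("/home/me/Downloads", "rwc") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)   /* lock the unveil list */
        err(1, "unveil");

    /* Drop to a small set of promises; a system call outside of
       these kills the process. */
    if (pledge("stdio rpath wpath cpath", NULL) == -1)
        err(1, "pledge");

    /* ... sandboxed work here ... */
    return 0;
}
```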
In regard to pledge+unveil, it’s extremely simple to actually implement in code (though considerations for where+what in the code would be more complex) and the predefined promises make it pretty easy.
One advantage is also that you can progressively enhance this over time. You can keep adding restrictions every time you fix instances of code which would previously have been violating a pledge. Capsicum would appear to require a more top-down approach. (I can’t help but wonder if you could add a kind of compatibility mode where it allows open() and similar as long as the provided path traverses a directory for which the process holds an appropriate file descriptor through which you’d normally be expected to call openat(). Or maybe that already exists, I really need to get hands-on with this one day.)
Included here is ~/Downloads and common XDG-dirs for example, which allows for most file picking stuff to work fine for most users.
I can see that the alternative would probably require a more holistic approach at the desktop environment level to implement well, but defining these directories statically up front seems like an awkward compromise. Various XDG directories contain precisely the kind of data you’d want to protect from exfiltration via a compromised process.
Capsicum lets you handle things like the downloads directory by passing a file descriptor with CAP_CREATE. This can be used with openat with the O_CREAT flag, but doesn’t let you open existing files in that directory. This is all visible in the code, so you don’t need to cross reference external policy files.
If you run a Capsicum app with ktrace, you can see every system call that Capsicum blocks, so it’s easy to fix them. With a default-deny policy and no access to global namespaces, it’s easy to write least-privilege software with Capsicum. I have not had that experience with any of the other sandboxing frameworks I’ve tried.
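A minimal sketch of that downloads-directory pattern; the path, file name, and exact rights set are invented for the example and may differ from what a real application grants:

```c
/* Sketch: the CAP_CREATE download-directory pattern (FreeBSD
   Capsicum). Path, file name, and the rights set are invented. */
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>

int main(void) {
    /* Acquire the directory descriptor before entering capability mode. */
    int dirfd = open("/home/me/Downloads", O_DIRECTORY);
    if (dirfd == -1)
        err(1, "open");

    /* Allow lookups plus creating/writing files under the directory,
       but not opening existing files for read (no CAP_READ). */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_LOOKUP, CAP_CREATE, CAP_WRITE);
    if (cap_rights_limit(dirfd, &rights) == -1)
        err(1, "cap_rights_limit");

    if (cap_enter() == -1)
        err(1, "cap_enter");

    /* Works: create a new file relative to the capability. */
    int out = openat(dirfd, "download.bin",
                     O_WRONLY | O_CREAT | O_EXCL, 0644);

    /* Fails with ECAPMODE: global namespaces are gone. */
    int bad = open("/etc/passwd", O_RDONLY);

    (void)out;
    (void)bad;
    return 0;
}
```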
Capsicum lets you handle things like the downloads directory by passing a file descriptor with CAP_CREATE. This can be used with openat with the O_CREAT flag, but doesn’t let you open existing files in that directory. This is all visible in the code, so you don’t need to cross reference external policy files.
The download side is easier to handle as long as you’re keeping to a single download directory. I expect uploads to be somewhat more annoying UX wise: it’s rather unusual to have an “uploads” directory where the user would first copy any files they want to upload to a website, then select them in the “Browse…” dialog.
One slightly less annoying option is to agree on blanket read access to a bunch of stuff under $HOME, which appears to be what’s used here in practice, but it leaves you vulnerable to data exfiltration, which surely is one of the attack scenarios this whole exercise is trying to defend against.
Anything more comprehensive I can come up with will necessarily be a multi-process arrangement where the less-sandboxed process sends file descriptors of what the user selects to the heavily sandboxed one. And where drag & drop of a file sends (a) file descriptor(s) rather than just (a) path(s).
To be clear, I’m not saying this would be a bad system! I think it’d be great to have this in a desktop environment. Just that it’s a little tricky to retrofit onto a giant ball of code you’ve never even looked inside before.
If you run a Capsicum app with ktrace, you can see every system call that Capsicum blocks, so it’s easy to fix them. With a default-deny policy and no access to global namespaces, it’s easy to write least-privilege software with Capsicum. I have not had that experience with any of the other sandboxing frameworks I’ve tried.
That’s good to know - in contrast, dealing with the Sandbox on Apple’s platforms is super annoying as you’re mostly reduced to reading system log tea leaves when things aren’t working - macOS dtrace is falling apart more and more with every release and sometimes requires disabling the very security features you’re trying to debug.
But tracing only just begins to address the stated problem of retrofitting a large existing code base. If everything including dependencies including transitive ones is using open rather than openat, but open is completely non-functional, I suspect that might be rather a chore to get fixed. I mean it’s feasible if you actually “own” most of the project, but modifying something as big and unknown as a web browser in this way is quite an undertaking even if you can eventually get it all upstreamed.
The download side is easier to handle as long as you’re keeping to a single download directory. I expect uploads to be somewhat more annoying UX wise: it’s rather unusual to have an “uploads” directory where the user would first copy any files they want to upload to a website, then select them in the “Browse…” dialog.
Capsicum was designed to support this via the powerbox model (just as on macOS: it was designed to be able to more cleanly support the sandboxing model Apple was developing at the time). When you want to upload a file, the file dialog runs as a service in another process that has access to anything and gives file descriptors to selected files. You can also implement the same thing on top of a drag and drop protocol.
Alex Richardson did some Qt / KDE patches to support this and they worked well. Not sure what happened to them.
But tracing only just begins to address the stated problem of retrofitting a large existing code base. If everything including dependencies including transitive ones is using open rather than openat, but open is completely non-functional, I suspect that might be rather a chore to get fixed.
Alex Richardson did some Qt / KDE patches to support this and they worked well. Not sure what happened to them.
Good to know it’s been done and the code presumably is still out there somewhere. Something to keep note of in case I end up doing any UNIX desktop work. (And it sounds like this was done as part of an academic research project, so probably worth trying to get hold of any other published artifacts from that - perhaps part of this project?)
The nice thing about this is that open is a replaceable symbol. For example, in one project where I want to use some existing libraries in a Capsicum sandbox I simply replace open with something that calls openat with the right base depending on the prefix.
Providing this fallback compatibility wrapper as a user space libc override is a nifty technique, thanks for sharing!
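A minimal sketch of that interposition trick; HOME_PREFIX and g_home_fd are invented for illustration, and a real project would presumably handle several prefixes and more of open()’s flag space:

```c
/* Sketch: interposing open() so legacy callers are routed through
   openat(). HOME_PREFIX and g_home_fd are invented for illustration. */
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

#define HOME_PREFIX "/home/me/"

extern int g_home_fd;   /* directory fd opened before sandboxing */

/* Defining open in the program shadows the libc symbol. */
int open(const char *path, int flags, ...) {
    mode_t mode = 0;
    if (flags & O_CREAT) {          /* mode is only passed with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    if (strncmp(path, HOME_PREFIX, strlen(HOME_PREFIX)) == 0)
        return openat(g_home_fd, path + strlen(HOME_PREFIX), flags, mode);

    /* No base descriptor for this prefix: fail the way capability
       mode would. */
    errno = ECAPMODE;
    return -1;
}
```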
I can see that the alternative would probably require a more holistic approach at the desktop environment level to implement well, but defining these directories statically up front seems like an awkward compromise.
I kind of agree. It did feel unintuitive to me at first. Worth noting these files are owned by root and normal users cannot write to them by default.
Various XDG directories contain precisely the kind of data you’d want to protect from exfiltration via a compromised process.
That’s right – from what I remember it was pretty well locked down though.
While you do not need to provide the type attribute, I do this to clarify its intended behavior.
It has an effect. If you omit type="button" and the button is inside a form, it will act as a submit button.
(I say nearly because there are some subtle differences between the two elements. Most notable is that links typically have different cursors than buttons.)
You can and should make the cursor behavior identical with CSS.
You can and should make the cursor behavior identical with CSS.
I’ve never been a huge fan of buttons using cursor: pointer on the web. It runs contrary to how UIs work on all desktop platforms, where buttons do not in fact change the cursor to a hand.
At first I thought that the pointer cursor was meant to indicate navigating to a different page, but it seems like web design practices perverted this into “clicking this will perform any arbitrary action, which may take you to another page, but may also not do that.”
For me it depends. It feels pretty natural on websites, but feels off in web apps (Spotify or Figma come to mind; they default to cursor: default on interactive elements pretty heavily.) In web apps a pointer cursor feels like it should always perform an action on a single click—similar to how this is handled in file browsers when switching between single- and double-click-to-open mode.
I just don’t think we should expect an HTTP(S) link aggregator/discussion forum to link to anything that requires third-party software to run. We don’t do .onion links here either, and I don’t even think sites with opennic TLDs, for example, would be appropriate.
While I can kind of agree with your sentiment, this is just not the forum for this kind of thing, in my view.
This is the right decision and it has nothing to do with “US law” as some of the lwn people seem to be talking about. Russia is a dictatorship with sophisticated state-powered cyberwarfare capabilities. Regardless of whether a Russian-based maintainer has malicious intent towards the Linux kernel, it’s beyond delusional to think that the Russian government isn’t aware of their status as kernel developers or would hesitate to force them to abuse their position if it was of strategic value to the Russian leadership. Frankly it’s a kindness to remove them from that sort of position and remove that risk to their personal safety.
It may or may not have been the right decision, but it was definitely the wrong way to go about it. At the very least there should have been an announcement and a reason provided. And thanks for their service so far. Not this cloak and dagger crap.
Indeed, this was quite an inhumane way to let maintainers with hundreds of contributions go; this reply on the ML phrases it pretty well:
There is the form and there is the content – about the content one cannot do much, when the state he or his organization resides in gives an order.
But about the form one can indeed do much. No "Thank you!", no "I hope we can work together again once the world has become sane(r)"... srsly, what the hell.
Edit: There is another reply now with more details on which maintainers were removed, i.e. people whose employer is subject to an OFAC sanctions program - with a link to a list of specific companies.
I hope we can work together again once the world has become sane(r)
This would be a completely inappropriate response because it mischaracterizes the situation at hand: if the maintainers want to continue working on Linux, they only have to quit their jobs at companies producing weapons and parts used to kill Ukrainian children. It has nothing to do with the world being (in)sane, and everything to do with sanctions levied against companies complicit in mass murder.
Yes, the decision is reasonable whether or not it is right, but the communication and framing is terrible. “Sorry, but we’re forced to remove you due to US law and/or executive orders. Thanks for your past contributions” would have been the better approach.
This is true of quite a few governments, including those you think are friendly, and it is a huge blind spot to believe otherwise. Dictatorship doesn’t have anything to do with it, it isn’t as though these decisions are made right at the top.
Do you have the same reaction to contributions from US-based companies that have military contracts? While the US isn’t a dictatorship, the security and foreign policy apparatuses are very distant from democratic feedback.
Regardless of whether a Russian-based maintainer has malicious intent towards the Linux kernel, it’s beyond delusional to think that the Russian government isn’t aware of their status as kernel developers or would hesitate to force them to abuse their position if it was of strategic value to the Russian leadership.
It’s hard to single out Russia for this in a post-Snowden world. Not to mention that if maintainers can be forced to do something nefarious, then they can do the same thing of their own will or for their own benefit.
Frankly it’s a kindness to remove them from that sort of position and remove that risk to their personal safety.
The Wikimedia Foundation has taken similar action by removing Wikipedia administrators from e.g. Iran as a protective measure (sorry, don’t have links offhand), but even if that’s the reason, the Linux actions seem to have a major lack of compassion for the people affected.
It wasn’t xenophobia. The maintainers who were removed all worked for companies on a list of companies that US organizations and/or EU organizations are prohibited from “trading” with.
The message could have (and should have) been wrapped in a kinder envelope, but the rationale for the action was beyond the control of Linus & co.
Here’s what Linus has said, and it’s more than just “sanction.”
Moreover, we have to remove any maintainers who come from the following countries or regions, as they are listed in Countries of Particular Concern and are subject to impending sanctions:
Burma, People’s Republic of China, Cuba, Eritrea, Iran, the Democratic People’s Republic of Korea, Nicaragua, Pakistan, Russia, Saudi Arabia, Tajikistan, and Turkmenistan.
Algeria, Azerbaijan, the Central African Republic, Comoros, and Vietnam.
For People’s Republic of China, there are about 500 entities that are on the U.S. OFAC SDN / non-SDN lists, especially HUAWEI, which is one of the most active employers from versions 5.16 through 6.1, according to statistics. This is unacceptable, and we must take immediate action to address it, with the same reason
The same could be said of US contributors to Linux, even moreso considering the existence of National security letters. The US is also a far more powerful dictatorship than the Russian Federation, and is currently aiding at least two genocides.
The Linux Foundation should consider moving its seat to a country with more Free Software friendly legislation, like Iceland.
In other words, refusing to comply with international sanctions. This is in fact an incredibly high bar to clear for Iceland. It would require the country to dissociate itself from the Nordic Council, the EEA, and NATO.
a kernel dev quoted in the Phoronix article wrote:
Again, we’re really sorry it’s come to this, but all of the Linux infrastructure and a lot of its maintainers are in the US and we can’t ignore the requirements of US law. We are hoping that this action alone will be sufficient to satisfy the US Treasury department in charge of sanctions and we won’t also have to remove any existing patches.
that made me think it was due to US (not international) sanctions and that the demand was made by a US body without international jurisdiction. what am I missing?
Without a citation of which sanction they’re referencing it’s really hard to say. I assumed this sanction regime was one shared by the US and the EU, and that Iceland would follow as a member of NATO and the EEA. If it is specific to the US, like their continued boneheaded sanctions against Cuba, then basing the Linux foundation in another country would prevent this specific instance (a number of email addresses removed from a largely ceremonial text file in an open source project) from happening again.
Note however that Icelandic law might impose other restrictions on the foundation’s work. The status of taxation as a non-profit is probably different.
even if it has to do with international sanctions, their interpretation and enforcement seems to have been particular to the US. it reeks of “national security” with all the jackbootery that comes with it.
There were, however, also some users who noticed that the site was a scam and used credentials like NoHacker123 to protest. Another person tried to convert the sinning scammers to Christianity by sending them the password repentnowcauseJesuslovesu.
These could also be just real, actual passwords these people used, right?
Their username was also a variation of that or just “test”, so I find it unlikely.
But it’s a fun thought of someone trying to repel hackers through a magic password…
It’s not much to do with privacy and it’s all to do with security. So in that regard:
It stops phishing.
That’s the biggest “need” for it. Which is a pretty good one if you ask me. And it is completely capable of this on paper. It’s clear there seem to be issues across the front end and back end of it: massive UX issues and massive implementation issues.
Thunderbird has one of the worst user experiences I’ve ever seen. It takes seconds to delete an email from my inbox, the UI thread hangs all the time, if I click on a notification, it makes a black window because the filter moved the email while the notification was up and they didn’t track it by its ID or some basic mistake like that. Every update the UI gets clunkier and slower. Searching has a weird UI and fails to find matches. I could go on and on, there are so many UX issues with this worthless software. I have no idea what’s going on over at Mozilla. I think the org just needs to be burned to the ground.
I use it as my daily driver for now, but I feel like I’m on a sinking ship surrounded by nothing but ocean.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
Yep, same boat. 100k+ emails, lots of filters, etc., it just works honestly. Thunderbird has only gotten better for me since Mozilla stopped supporting them.
Out of curiosity, are you using POP or IMAP? I imagine the performance characteristics would be very different, given their different network patterns.
I run Dovecot as an IMAP server on a Thinkpad which was first sold in 2010 with an SSD in it. I keep thinking I should change the hardware but it just keeps trucking & draws less than 10W so it never seems worth the effort.
It’s stuck behind a 100Mbit ethernet connection (for power saving reasons) which is roughly equivalent to my Internet connection but the latency is probably lower than it would be to an IMAP server on the wider Internet.
Having exclusive use of all that SSD bandwidth probably helps too of course.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
Branding is a powerful concept. You think Outlook on iOS or Android shares anything with the desktop app? Nope, it’s also a rebranded acquisition. It is kind of funny the same happened with Thunderbird.
Which leads to funny things where Outlook for mobile gets features before the desktop version (unified inbox and being able to see the sender’s email address as well as their name come to mind).
I don’t have it in front of me to double check but yeah the message UI is weird. It shows their name and if you hover over it then it pops up a little contact card that also doesn’t show the actual email address. IIRC hitting reply helps because it’s visible in the compose email UI.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
I thought the idea in Mozilla’s head would be more like “okay, we have this really good base for an email app (K-9), let’s support it, add our branding to it, and ship it as Thunderbird”.
Thunderbird does a lot of disk I/O on the main thread. On most platforms, this responds in, at most, one disk seek time (<10ms even for spinning rust, much less for SSDs, even less for things in the disk cache) so typically doesn’t hurt responsiveness. On Windows, Windows Defender will intercept these things and scan them, which can add hundreds of milliseconds or even seconds of latency, depending on the size of the file.
This got better when Thunderbird moved from mbox to Maildir by default, but I don’t think it migrates automatically. If you have a 1 GB mbox file, Windows Defender will scan the whole thing before letting Thunderbird read one 1 KiB email from it. This caused pause times of 20-30 seconds in common operations. With Maildir, it will just scan individual emails. This can still be slow for things with big attachments, but it rarely causes stutters of more than a second.
Thunderbird was originally a refactoring of Mozilla Mail and News into a stand-alone app. Mozilla Mail and Newsgroups was the open-source version of Netscape Mail and Newsgroups (both ran in the same process as the browser, so a browser crash took out your mail app, and browser crashes happened a few times a day back then). Netscape Mail and Newsgroups was released in 1995.
It ran on Windows 3.11, Windows 95, Classic MacOS, and a handful of *NIX systems. Threading models were not present on all of them, and did not have the same semantics on the ones that did. Doing I/O on the UI thread wasn’t a thing, doing I/O and UI work on the one thread in the program was.
It’s been refactored a lot since those days, but there’s still a lot of legacy code. Next year, the codebase will be 30 years old.
I really agree. I’m a big fan of K-9 Mail and hearing that Thunderbird was taking it over did not sound like good news at all.
I’ll disagree with one of your points though – deleting an email happens instantly and usually by accident. Hitting undo and waiting for there to be any sign in the UI that it heard you, now that takes forever.
Backwards compatibility with Node.js and npm, allowing you to run existing Node applications seamlessly
I am not a TS, JS, Deno, or Node dev, but I don’t feel like this is conducive to the original “goal” of Deno in the first place? It strikes me as a very weird decision, though I have not been following Deno much since the early days.
I guess my overall point is that I thought Deno NOT being Node was the whole point of it in the first place. Someone mentions further up that “no Node compat” was pitched as a selling point in the announcement.
But yeah, I probably should go back and read years worth of blog posts before having an opinion.
> But yeah, I probably should go back and read years worth of blog posts before having an opinion.
Can you stop doing this? I now retroactively have the feeling I wasted my time replying to you, as the whole question wasn’t in good faith anyway. Until that part I would have tried to give you a response, but I don’t think you even want that.
Edit: something glitched out - I saw this as a reply to my comment?!
Yeah not sure what’s happened. I think there has been a few issues with comments being reparented to different threads or something recently. I definitely didn’t reply to you with that lol :)
Edit: The comment I originally replied to has been removed by mods and the user has also apparently been banned. Could have something to do with it?
Oh boy, this is the classic OS/2 paradox. Or maybe you can just group it in with the Osborne effect. Now nobody has to worry about Deno because they can just write Node software and be content that it’ll work on both.
I think one of the biggest boons to the Go ecosystem is that C interop is just slow enough to tempt people to rewrite it in Go, but not so slow to be impractical for most purposes.
So it’s not just me? I’m always saddened and surprised every time a project that was “interesting because it’s different” makes this pivot. Usually it’s in the form of “now with POSIX” but that’s just due to my taste for wacky operating systems.
I don’t think so? The explicit goal of WSL has always been to be able to run unmodified Linux software. They tried first with a translation layer with WSL 1, but then they switched that out for running the Linux kernel proper with WSL 2. The result is better Linux software compatibility (allegedly, I don’t use Windows so I can’t confirm), which is an unambiguous success. The value of WSL has never been that it’s an interesting new kind of OS API, it has always been to implement Linux as faithfully as possible.
I thought they were referring to the architecture of WSL1 as being “interesting” as opposed to the now “no longer interesting” VM approach of WSL2?
WSL1 was an actual Windows “subsystem” and had a much more interesting architecture than WSL2 (which is just a VM + 9P). (edit: Didn’t see you already mentioned this)
You have it backwards. I don’t have to worry about Node and all its bullshit ecosystems anymore because I can just run deno [whatever] and do anything I need to do. I’m so sick of installing the same pkgs over and over for every project.
To clarify, my comparison to OS/2 went deeper than that; everyone can agree OS/2 was the superior product, but NT beat it because IBM double whammy’d itself by making OS/2 a less appealing product while also only being able to sell itself as being able to do what Windows does (in some ways better, in some ways worse). I feel like Deno is following down the same path: it’s technically superior, but also doesn’t do enough to compete with its predecessor (and this is a common view; I’m sure you’ve seen similar comments in other Deno news articles).
To clarify, my comparison to OS/2 went deeper than that; everyone can agree OS/2 was the superior product
I know this is a meme, but it’s just not accurate. I might grant that OS/2 was superior to Windows 3, which is probably where that story comes from, but NT? NT was a multiuser system with all the security benefits that entails, could run multiple DOS boxes at once (OS/2 could only do one), had a vastly more pleasant API, had true concurrency in its event queue (while OS/2 was preemptively multitasked, the event queue was cooperative, so an errant program could lock the GUI), had better Unicode support, had a better file system…it was honestly superior in almost every way.
I’m saying all that not to be pedantic, but rather because I think OS/2 is brought out too often as “proof” that having a compatible API is a death knell, whereas I think it failed for a whole pile of other reasons. If you want a better analogy, you might look at how Proton and WINE have largely killed native Linux gaming, or how that plus the Game Porting Toolkit is arguably killing Vulkan.
I think you have the DOS thing backwards, it was NT that could only run one DOS application at one time and OS/2 that could run multiple. It was one of the biggest bragging points about OS/2.
And, yes, I do agree that OS/2’s failure was multi-faceted, which I alluded to in my post. OS/2’s failure certainly can’t be attributed only to its Windows compatibility; that wasn’t even really a major reason why it failed. But it was a significant straw that broke the camel’s back. That was my point: relying on compatibility with a competing project isn’t the only reason a project can fail, but it’s definitely a bad omen.
it was NT that could only run one DOS application at one time and OS/2 that could run multiple. It was one of the biggest bragging points about OS/2.
Didn’t know that. However, NT’s architecture was fundamentally different from DOS, so if I’m not mistaken it couldn’t actually run DOS applications “natively”. Instead, it would virtualize DOS…and in that sense, I believe it was capable of “running multiple DOS applications at one time”, but it wasn’t actually doing any kind of DOS-level multi-tasking, so I assume this was less efficient. Probably didn’t matter for Windows users at the time (I certainly never noticed, but I really only used the DOS mode to play old video games).
It’s kinda funny to me how IBM marketed this as a “bragging point”, because it unintentionally suggests that OS/2 is simply DOS with a new coat of paint, whereas NT was a completely new green field OS that solved a lot of the problems people had with DOS/UNIX/etc. If I was a developer around that time, I’d probably be a lot more interested in NT as well.
I think you have the DOS thing backwards, it was NT that could only run one DOS application at one time and OS/2 that could run multiple. It was one of the biggest bragging points about OS/2.
No? NT could always run multiple NTVDMs. OS/2 was limited to a single DOS session (“the coffin”) until OS/2 2.0 when it became a 386 thing.
OS/2 wasn’t a superior product (the reliability guarantees are far worse than NT, i.e. a Presentation Manager that’s easy to wedge), but it made sense for the systems and compromises of the time (systems with less than 16 MB of RAM, more DOS/Win16 apps than apps designed for a 32-bit world). As those compromises became less relevant, there was less reason to run OS/2 instead of something like 95 or NT.
I actually haven’t heard this. My experience is the opposite. Deno’s tooling, additional standard library, and security features make it well worth it on its own from my perspective!
I propose a better solution, systemd-pathd.
It’s simple: systemd will offload all this logic and keep track of both user and system binary paths. All we need in .profile is:
tbh it’s not that bad, it usually just adds your nix profiles, which are symlinked directories much like /usr/bin (and you usually have only one or two at the same time). And there’s no /usr/bin on NixOS (unless you directly add store paths to $PATH, then that’s nightmare fuel).
Using my mystical powers of prediction, I reckon this will be a total nothingburger, simply because of the unserious behavior of the person originating it (Simone Margaritelli).
Also, much less serious prediction, but I’ll guess that the problem is somewhere in CUPS. Especially some old decrepit part of CUPS that no one uses anymore.
This act of hyping vulnerabilities before public disclosure gives me the movie trailer vibe. We really can’t help but put ads on and monetize everything nowadays, can we? The next step will be to pair it with a videogame-like pre-order model.
The hindsight seems to be that the vendor was uncoöperative and needed some “social massaging” (bullying campaign for the greater good) to get the fixes in.
I mean, regardless of the reporter, “the CVE turned out to be significantly less severe than its severity rating claimed” would be true, what, like 95% of the time anyway?
Sort of related: Inspired by the IOCCC, instead of doing school work in high school, I would often play around in C with #defines and typedefs to make weird and obscure syntaxes. It was really fun to mess about with.
Along with this: don’t use booleans as “flags” in your database. Use some sort of timestamp instead. Now you know when it was set, which you’ll suddenly find useful down the line.
Dates make a lot of sense for things where a date is relevant to the actual thing - a publish date, a modification date, a “sale starts”/ “sale ends” field.
The fields where I’m using a boolean in a database, I want to be able to express two, or possibly three actual states (a nullable boolean): “on; off” or “on; off; no preference aka inherit default”.
A date gives you at best “yes, as of/until <timestamp>; no, or maybe inherit default”.
If you want to know when some value changed, you want an audit log, which is more useful for auditing anyway because it isn’t limited to storing the last time it was changed, and it can store who changed it, what else was changed at the same time, etc.
When you do this, do you use null as the default and a timestamp to mean “set to non-default at this time”? And if someone turns off a setting after turning it on, do you just set it back to null?
Good question, I guess it depends on what you’re using as a “flag” here, but I guess I should have specified for things you’re unlikely to toggle back. I guess once again, a kind of “status”, except not a linear status but various conditions. First one that comes to mind is a “soft delete” or an “archive”.
Why is it that every article attempting to explain how null is some unspeakable horror includes a tale about some application somewhere comparing to the string 'null' and ensuing chaos?
You might as well say “don’t use Boolean, someone might compare it with the string 'false'”.
Why is it that every article attempting to explain how null is some unspeakable horror includes a tale about some application somewhere comparing to the string ‘null’ and ensuing chaos?
Dereferencing a boolean (or any other properly initialized value) won’t cause a program to crash. And comparing a boolean to a string will result in a compile-time error. I’m guessing you already know that and are trolling at this point.
Funny story. I was working at a company that was building a high-visibility student transcript system. During a code review, I saw that someone had hard-coded a default value for the last name of a student to the literal string “null”. I brought this to the attention of another developer stating that if a student actually had the last name of Null, they wouldn’t be able to use the system. He went away came back a half hour later and said, “You’re right, that’s a bug.”
That would not have been a fun bug to track down; Hoare strikes again.
Another issue, of course, is that a null sentinel allows comparing two different classes of objects to the same value. (It explicitly excludes the sentinel value from the set of values that are valid elements.)
There are many, many reasons to make software both immutable and null-hostile, which is why the industry is slowly moving in that direction.
Dereferencing a boolean (or any other properly initialized value) won’t cause a program to crash.
You haven’t answered the actual question. Why are people hardcoding a string literal 'null' when comparing to an actual null?
Also, for the record - comparing a string to a boolean is a perfectly valid operation in any number of languages. It will equate to false if you’re doing things properly. I’m guessing you already knew that and are trolling at this point.
someone had hard-coded a default value for the last name of a student to the literal string “null”
Yes, that is a bug, because they’re treating a literal null and the string null as the same thing. If your language or database doesn’t distinguish between null and 'null', pick a better language/database before you start telling everyone else on the planet that they shouldn’t use literal null, because someone somewhere is stupid enough to compare it with a string 'null'.
As long as you have sane auditing you can always go look. In your version you know the what and when. With sane auditing you get all the W’s, well, perhaps not the why, unless you go ask the who ;)
It sounds like what they’re suggesting is that instead of having is_active, you’d have activated_at, where it’s null | timestamp, null being not activated.
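A minimal sketch of that shape using SQLite’s C API; the accounts schema is invented for the example. The boolean question is answered with an IS NOT NULL test, and the activation time comes for free when someone asks for it later:

```c
/* Sketch: a nullable timestamp in place of a boolean flag, via
   SQLite's C API. The "accounts" schema is invented.

   Instead of   is_active INTEGER   the table stores:
     activated_at INTEGER  -- unix epoch seconds, or NULL = inactive */
#include <sqlite3.h>

int is_active(sqlite3 *db, int account_id) {
    sqlite3_stmt *stmt;
    int active = 0;

    if (sqlite3_prepare_v2(db,
            "SELECT activated_at IS NOT NULL FROM accounts WHERE id = ?",
            -1, &stmt, NULL) != SQLITE_OK)
        return 0;

    sqlite3_bind_int(stmt, 1, account_id);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        active = sqlite3_column_int(stmt, 0);  /* 1 when set, 0 when NULL */

    sqlite3_finalize(stmt);
    return active;
}
```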
Many websites don’t support a
+
in email addresses either, and only support some custom subset of characters instead. I’ve already collected more than 25 cases personally, which is extremely annoying.I had one that allowed + in the email for signup, but the password reset form did not accept emails with a +
I’ve seen this same problem with unsubscribe forms. Absolutely ridiculous. Straight to the spam bin.
This screams of projecting lack of skill into others. Perhaps your shellscripts don’t work. Where does the conclusion that mine won’t work comes from?
This is the same mindset as the common myth of “don’t write your own SQL, Hibernate developers can write much better SQL than you”. Yeah, how did that work out?
Your bash scripts are probably fine, but recognise that rolling your own requires maintenance.
I think it’s a bit lost in the format but for me, the takeaway from this article is that you should be conscious of the point at which maintenance cost starts to outstrip value.
You may well not be there and it’s definitely easy to fall into the trap of adopting technology X too early and eating cost and complexity as a result.
But I have worked at enough places where the world is run by collections of well-meaning and increasingly stretched shell scripts marinating in Historical Context. There’s a trap there as well.
I am a heavy k8s user at work. I can confidently say my bash scripts require less maintenance than k8s.
Rolling my own what? The reason why I disagree with this post is because it is vague and fails to detail what exactly would this shellscripts entail and what work does it go to set up deployments on Kubernetes even with an already working cluster. Frankly speaking, I find the amount of painfully complicated yaml it takes to set up a service is more evolved than a simple deployment script.
It kinda isn’t and doesn’t, though. It even provides a handy summary of the set of things that, if you need them, maybe you should consider k8s, after all:
Invariably, I’ve seen collections of shell scripts spring up around maintaining kubernetes. The choice isn’t between bash scripts or kubernetes. The choice is around how you bin-pack the services onto servers and ship the code there. All options come with a large side of scripting to manage them.
Not my shell scripts! They’re perfect. Perfect in every ways. And my SQL — well escaped and injection proof! My memory? I bounds-checked it myself with a slide rule, not a single byte will by read out of bounds.
Other people have skill issues but you and me? We’re in a league of our own :)
Oh yeah, the old shortcut to fake humbleness “we all make mistakes, I’m not perfect, neither are you”.
That argumentative position is useless. So we completely relativize any and every bug? Are they all the same? All code has bugs… How many bugs are reasonable? Is it acceptable that a single 10 line shell script has 4 bugs? What about 8 bugs?
And what about Kubernetes manifests? Are they magically bug free because we just say they are?
Can we try to keep the discussion fruitful and somewhat technical?
Yes, I am claiming that the author sounds like they are not too familiar with shell scripts and dismiss them as something that attracts bugs and is difficult to maintain. What at is the technical foundation of such claims?
Your example being a good one. SQL injection was a problem from the old PHP era when lots of people jumped in using relational database without any prior programming knowledge. It is rather trivially avoided and it is virtually non existent nowadays. I think everyone expects it to be non problem and if people go about assembling SQL by string concatenation without proper escaping, that will certainly not fall under the “we all make bugs” category.
Well, my Kubernetes manifests are bug-free. I’m an expert.
OTOH do avoid writing your own database instead of using an off the shelf one. The durability on power loss alone would consume you for weeks. ;)
Yeah, “skill issue” arguments in the domain of software engineering never cease to tickle me.
“You aren’t writing your own database and operating system on bespoke hardware? Skill issue!”
😂
It’s a categorical difference. It requires dramatically more “skill” (I would argue that it becomes functionally impossible to do this at any but the most trivial scales but maybe you’re the rare genius who could be curing cancer but prefers to use bash to address already-solved problems?) to write correct, idempotent shell scripts as opposed to describing your desired state and letting a controller figure out how to update it.
Declarative programming sounds good but the effect is that you have an application whose runtime control flow relevant state is “your entire system”.
Even if you think you are capable of writing immaculate scripts that can do everything you need and maintaining them, can you not conceive of a world where other people have to maintain them when you’re not around? In other words, even if you are perfect, if the baseline skill required to operate a shell-based deployment method is so high, aren’t you basically arguing against it?
Like, there’s plenty of technical arguments against kubernetes, and there’s great alternatives that are less complex. You can even argue about whether some of these things, like rolling deploys, are even required for most people. Skipping all of that and calling someone else a bad programmer because they’d rather use Kubernetes is just mean spirited. Just this week another user was banned for (among other things) saying “skill issue”, but if you rephrase that to “lack of skill” it sits at +22?
Most teams converge on using an ORM. Developers who can’t deal with ORMs and feel the need to constantly break out into SQL are a code smell.
Seriously? The Vietnam och computing?
This is largely untrue and the peak gas passed long ago, with ORM libraries that promised to take over the world up to around 2010 being pretty much all dead.
The explosion of popularity of postgrest, supabase, and the like seems unstoppable at this moment.
I don’t see it for teams or for anybody who’s hiring/managing teams. Raw SQL is fun if you’re 1-2 developers but the fun wears off quickly.
Also, yes most ORMs suck (with “suck” I mean, they’re not the Django ORM).
My experience has been by and large the inverse, with teams bemoaning ORMs systematically because they’d gotten bitten by Weird ORM Bugs more than once. Not saying that raw SQL is more fun, but I derive no fun from ORMs either (nor have I seen many teams having fun with ORMs). Of course, this is also anecdata.
I love this. Inside but also outside of the context that surrounds this comment.
I can’t believe I’m taking google’s side here but this is ludicrous. The motivation and proposed correctional measures are too ill-conceived; they’re just going to hurt the ecosystem instead. There is a right way to do this but this ain’t it.
Change and unknowns versus keeping the status quo.
Is there a right way of breaking monopolies? They design themselves to make breaking them up look as unattractive as possible.
The government could fund Firefox or core tech in FF or otherwise contribute to those projects, thus weakening Google’s hold over the company. US gov pours billions into tech startups and companies, seems perfectly reasonable for them to do so here.
Maybe a dim view, but I don’t think I would wish government funding on Firefox. I can only imagine them getting drawn into political fights, spending time justifying their work to the American people, and getting baroque requirements from the feds.
Government funding comes in all shapes and sizes. Most of it has nothing to do with politics. The air force and DoD are constantly investing or pouring money into startups. I myself had a government investor at my startup. No justification to the US needed.
If it’s government funding with few or no strings attached that would be great. I just wouldn’t want to see Firefox become a political football.
Most government funding for tech has no strings attached, or they just own stock, which is ideal for everyone.
I feel like this would open a whole new can of worms and actually wouldn’t be good for Firefox in the longer term.
I don’t think they care particularly much about Google’s hold over Mozilla. They care about Google using their simultaneous ownership of Google Search, Chrome, and all their ad-related stuff to unfairly enrich themselves, and they see Google’s payments to Apple as a method to defend that monopoly power. If Mozilla had an alternate source of funding, it wouldn’t really change anything except maybe make the browser that 5% of people use have a different default search engine. It probably wouldn’t help Firefox to become more popular, and it’d be a much smaller difference than whatever happens with Safari.
It would reduce Google’s ability to exercise this monopolist power over Mozilla.
If the money were spent well, I think it absolutely could.
Nationalizing natural monopolies has not been a popular approach in the US, unfortunately.
Regardless, it seems very likely to me that neither Chrome nor Firefox would survive this. But who knows, maybe that’s a good thing. Maybe that will pave the way for consumers paying for their browsers instead of paying through subjecting themselves to advertisement and propaganda. Doesn’t sound too bad since it’s probably the ad economy that turned the world into the propaganda wasteland that it is today.
I am curious about that. I think OpenBSD might have never crashed on me. Windows (also post-XP), macOS, Linux, FreeBSD, NetBSD, DragonFly (though that was hardware related) all did multiple times though.
Is this due to development? Is this hardware related?
Just surprised about that particular one. All the others feel like “sure, if that’s what you want then OpenBSD is probably not a good choice”.
Very similar experience. Don’t think I’ve ever had a crash. Her experience might also have something to do with solene being an OpenBSD dev, so testing untested stuff possibly leads to some instability? That’s an assumption on my part.
!!!
For comparison, ext3, with journaling, was merged into Linux mainline in 2001.
I developed embedded devices whose bootloader read and wrote ext4 files, including symlinks. There is really no excuse not to have a journaling file system on a system that’s larger than a fingernail.
For OpenBSD it may be as similar reason as why they got rid of Bluetooth. Nobody was maintaining that code. It got old, stale. So they got rid of it. I generally actually like this approach, sadly you lose functionality, but it keeps the entire codebase “clean” and maintained.
My guess is that they need people to work on the FS issue. Or, rather, they don’t have anyone on board who cares enough about it to actually write the code in a way that is conducive to OpenBSD’s “style”. Could be wrong, but that’s my assumption.
For comparison, FreeBSD merged soft updates (an alternative to journaling) in the 1990s, and in 2008 announced ZFS support.
OpenBSD had soft updates, but they recently pulled it.
The pragmatic thing to do would be to grab WAPBL from NetBSD since NetBSD and OpenBSD are still relative kin. Kirk McKusick still maintains UFS on FreeBSD so the SU+J works well there but it would be a lot of work to pull up OpenBSD’s UFS.
IIRC WAPBL still has some issues and is not enabled by default. Its primary purpose is not to make the filesystem more robust but rather to offer faster performance.
I’ve never experimented with SU+J, but I’d like to hear more feedback on it (:
I’m not sure how deep WAPBL goes, but a journal helps you close some corruption events on an otherwise non-atomic FS by being more strict with the sync and flush events, without killing performance. You can also journal data, which is an advantage of taking this approach, although I don’t know that WAPBL offers this currently. Empirically, NetBSD UFS+WAPBL seems fairly reliable in my use.
SU+J orders operations in an ingenious way to avoid the same issues and make metadata atomic at the expense of code complexity, the J is just for pending unlinks which otherwise have to be garbage collected by fsck. A large, well known video streamer uses SU+J so it is well supported. Empirically SU+J is a little slower than other journaling filesystems but this might be as much implementation and not algorithm.
Thanks for the feedback. I’ve been wanting to check out SU+J for a while, you got me hyped to dig into the concepts and the code!
Re WAPBL: an interesting thread on netbsd-tech-kern
Nice, I had not seen that thread. I think the data journaling they are discussing would be important for OpenBSD: on FreeBSD, for instance, UFS is used in specific scenarios like embedded devices or fail-in-place content servers, and ZFS is used anywhere data integrity is paramount. WAPBL was created in response to the complexity of SU+J, which it seems OpenBSD was also bitten by. For that, and for easier code sharing, I would be inclined to go the WAPBL direction, but there may be other merits to the SU+J direction in terms of syncing FFS and UFS against FreeBSD.
OpenBSD FFS had soft updates for a very long time too.
Soft updates have been removed in Feb 2024: https://marc.info/?l=openbsd-cvs&m=171489385310956&w=2
This is really surprising. I thought this was one of those instances of OP “handling things wrong”, but actually it doesn’t seem OpenBSD natively supports anything other than FFS and FFS2 (and without soft updates, as noted above).
I suppose the idiomatic method for securing processes on FreeBSD is capsicum(4). And at least for Firefox, it looks like someone has been working on adding support but they ran into some tricky cases that apparently aren’t well supported.
I guess for retrofitting huge, complex code bases, pledge and unveil or jails are probably easier to get working. I wonder how this affects things like file picker dialogs for choosing uploads, etc. If they’re implemented in-process, I guess they get to “see” the veiled or jailed file system - which means you don’t get any mysterious permission issues, but you also can’t upload arbitrary files. If they were implemented via IPC to some desktop environment process, the user could see the usual file system hierarchy to select arbitrary files, and if those files were sent back to the browser process as file descriptors, it would actually work. (I think the latter is how sandboxed apps are permitted to read and write arbitrary files on macOS, with Apple-typical disregard for slightly more complex requirements than just picking one file.)
In regard to pledge+unveil, it’s extremely simple to actually implement in code (though considerations for where+what in the code would be more complex) and the predefined promises make it pretty easy.
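For anyone who hasn’t seen the API, here’s a minimal sketch of the kind of thing involved (OpenBSD-specific; the path and the promise strings are made up for illustration, not any browser’s actual policy):

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h> /* pledge(2) and unveil(2) on OpenBSD */

int main(void) {
	/* Expose only one directory, read-only (hypothetical path). */
	if (unveil("/home/me/Downloads", "r") == -1)
		err(1, "unveil");
	/* A NULL/NULL call locks the unveil list; no further paths
	 * can be exposed for the lifetime of the process. */
	if (unveil(NULL, NULL) == -1)
		err(1, "unveil lock");
	/* Restrict syscalls to stdio plus read-only filesystem access. */
	if (pledge("stdio rpath", NULL) == -1)
		err(1, "pledge");

	/* From here on, anything outside the promises kills the process. */
	puts("sandboxed");
	return 0;
}
```

The hard part, as you say, isn’t the calls themselves but deciding where in a big codebase to put them and what the process genuinely needs.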
For pledge+unveil, from memory, the dialog just cannot browse/see outside of the unveil’d paths. For browsers there are a few files, `/etc/<browser>/unveil.*`, that list the paths each kind of browser process is allowed to access. Included here are `~/Downloads` and common XDG dirs, for example, which allows most file-picking stuff to work fine for most users.

One advantage is also that you can progressively enhance this over time. You can keep adding restrictions every time you fix instances of code which would previously have been violating a pledge. Capsicum would appear to require a more top-down approach. (I can’t help but wonder if you could add a kind of compatibility mode where it allows `open()` and similar as long as the provided path traverses a directory for which the process holds an appropriate file descriptor through which you’d normally be expected to call `openat()`. Or maybe that already exists, I really need to get hands-on with this one day.)

I can see that the alternative would probably require a more holistic approach at the desktop environment level to implement well, but defining these directories statically up front seems like an awkward compromise. Various XDG directories contain precisely the kind of data you’d want to protect from exfiltration via a compromised process.
Capsicum lets you handle things like the downloads directory by passing a file descriptor with `CAP_CREATE`. This can be used with `openat` with the `O_CREAT` flag, but doesn’t let you open existing files in that directory. This is all visible in the code, so you don’t need to cross-reference external policy files.

If you run a Capsicum app with ktrace, you can see every system call that Capsicum blocks, so it’s easy to fix them. With a default-deny policy and no access to global namespaces, it’s easy to write least-privilege software with Capsicum. I have not had that experience with any of the other sandboxing frameworks I’ve tried.
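If it helps, roughly what that pattern looks like in code (a FreeBSD sketch of my own; the directory path and file name are invented):

```c
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
	/* Open the downloads directory while global namespaces still exist. */
	int dir = open("/home/me/Downloads", O_DIRECTORY);
	if (dir == -1)
		err(1, "open");

	/* Limit the descriptor: lookups, creating and writing files,
	 * but no opening of existing files for reading. */
	cap_rights_t rights;
	cap_rights_init(&rights, CAP_LOOKUP, CAP_CREATE, CAP_WRITE);
	if (cap_rights_limit(dir, &rights) == -1)
		err(1, "cap_rights_limit");

	/* Enter capability mode: global namespaces are now off-limits. */
	if (cap_enter() == -1)
		err(1, "cap_enter");

	/* Creating a new download works... */
	int fd = openat(dir, "download.bin", O_WRONLY | O_CREAT, 0644);
	if (fd == -1)
		err(1, "openat");
	/* ...but openat(dir, "existing.txt", O_RDONLY) would now fail with
	 * ENOTCAPABLE, and a plain open() fails with ECAPMODE. */
	close(fd);
	return 0;
}
```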
The download side is easier to handle as long as you’re keeping to a single download directory. I expect uploads to be somewhat more annoying UX wise: it’s rather unusual to have an “uploads” directory where the user would first copy any files they want to upload to a website, then select them in the “Browse…” dialog.
One slightly less annoying option is to agree on blanket read access to a bunch of stuff under $HOME, which appears to be what’s used here in practice, but it leaves you vulnerable to data exfiltration, which surely is one of the attack scenarios this whole exercise is trying to defend against.
Anything more comprehensive I can come up with will necessarily be a multi-process arrangement where the less-sandboxed process sends file descriptors of what the user selects to the heavily sandboxed one. And where drag & drop of a file sends (a) file descriptor(s) rather than just (a) path(s).
To be clear, I’m not saying this would be a bad system! I think it’d be great to have this in a desktop environment. Just that it’s a little tricky to retrofit onto a giant ball of code you’ve never even looked inside before.
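The plumbing for the fd-handoff part already exists on any Unix: pass descriptors over a Unix-domain socket with `SCM_RIGHTS`. The sender side looks roughly like this (my own sketch; the function name and one-byte framing are invented, not from any real powerbox implementation):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one open file descriptor to a peer over a Unix-domain socket.
 * The receiver gets its own descriptor for the same open file and
 * never needs to see (or be allowed to open) the path. */
int send_fd(int sock, int fd)
{
	char byte = 0; /* must send at least one byte of real data */
	struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

	union { /* aligned control buffer sized for one descriptor */
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} ctrl;
	memset(&ctrl, 0, sizeof(ctrl));

	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = ctrl.buf,
		.msg_controllen = sizeof(ctrl.buf),
	};

	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS; /* "the payload is descriptors" */
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```

The tricky part is, as you say, not the mechanism but teaching a giant ball of code to use descriptors instead of paths everywhere.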
That’s good to know. In contrast, dealing with the Sandbox on Apple’s platforms is super annoying, as you’re mostly reduced to reading system log tea leaves when things aren’t working; macOS `dtrace` is falling apart more and more with every release and sometimes requires disabling the very security features you’re trying to debug.

But tracing only just begins to address the stated problem of retrofitting a large existing code base. If everything, dependencies (transitive ones included) and all, is using `open` rather than `openat`, but `open` is completely non-functional, I suspect that might be rather a chore to get fixed. I mean, it’s feasible if you actually “own” most of the project, but modifying something as big and unknown as a web browser in this way is quite an undertaking even if you can eventually get it all upstreamed.

Capsicum was designed to support this via the powerbox model (just as on macOS: it was designed to be able to more cleanly support the sandboxing model Apple was developing at the time). When you want to upload a file, the file dialog runs as a service in another process that has access to everything and gives file descriptors to the selected files. You can also implement the same thing on top of a drag-and-drop protocol.
Alex Richardson did some Qt / KDE patches to support this and they worked well. Not sure what happened to them.
The nice thing about this is that `open` is a replaceable symbol. For example, in one project where I want to use some existing libraries in a Capsicum sandbox, I simply replace `open` with something that calls `openat` with the right base depending on the prefix.

It would be fairly easy to do something similar for the XDG paths and have pre-opened file descriptors for each, with a sensible set of permissions.
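Roughly like this, I assume (a sketch of my own; the prefixes and globals are invented, and it assumes your toolchain’s link order lets this definition shadow libc’s `open`):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>

/* Directory descriptors opened (and rights-limited) before entering
 * the sandbox; hypothetical names for illustration. */
int g_downloads_fd = -1; /* ~/Downloads */
int g_config_fd    = -1; /* ~/.config   */

/* Our definition replaces libc's open(), so existing library code
 * transparently goes through openat() on the pre-opened bases. */
int open(const char *path, int flags, ...)
{
	mode_t mode = 0;
	if (flags & O_CREAT) { /* mode is only passed with O_CREAT */
		va_list ap;
		va_start(ap, flags);
		mode = (mode_t)va_arg(ap, int);
		va_end(ap);
	}

	/* Route known prefixes to the matching pre-opened directory. */
	const char *dl = "/home/me/Downloads/";
	const char *cf = "/home/me/.config/";
	if (strncmp(path, dl, strlen(dl)) == 0)
		return openat(g_downloads_fd, path + strlen(dl), flags, mode);
	if (strncmp(path, cf, strlen(cf)) == 0)
		return openat(g_config_fd, path + strlen(cf), flags, mode);

	/* Everything else simply doesn't exist from in here. */
	errno = ENOENT;
	return -1;
}
```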
Good to know it’s been done and the code presumably is still out there somewhere. Something to keep note of in case I end up doing any UNIX desktop work. (And it sounds like this was done as part of an academic research project, so probably worth trying to get hold of any other published artifacts from that - perhaps part of this project?)
Providing this fallback compatibility wrapper as a user space libc override is a nifty technique, thanks for sharing!
I kind of agree. It did feel unintuitive to me at first. Worth noting these files are owned by root and normal users cannot write to them by default.
That’s right – from what I remember it was pretty well locked down though.
It has an effect. If you omit `type="button"` and the button is inside of a form, it will act as a submit button. You can and should make the cursor behavior identical with CSS.

I’ve never been a huge fan of buttons using `cursor: pointer` on the web. It runs contrary to how UIs work on all desktop platforms, where buttons do not in fact change the cursor to a hand. At first I thought that the `pointer` cursor was meant to indicate navigating to a different page, but it seems like web design practices perverted this into “clicking this will perform any arbitrary action, which may take you to another page, but may also not do that.”

I’ve seen it enough now that not having `cursor: pointer` on a button feels off to me.

For me it depends. It feels pretty natural on websites, but feels off in web apps (Spotify or Figma come to mind; they default to `cursor: default` on interactive elements pretty heavily). In web apps a `pointer` cursor feels like it should always perform an action on a single click, similar to how this is handled in file browsers when switching between single- and double-click-to-open mode.

The Post button on this comment box has a hand cursor.
I just don’t think we should expect a HTTP(S) link aggregator/discussion forum to link to anything that requires third-party software to run. We don’t do .onion links here either, and I don’t even think sites with opennic TLDs, for example, would be appropriate.
While I can kind of agree with your sentiment, this is just not the forum for this kind of thing, in my view.
This is the right decision and it has nothing to do with “US law” as some of the lwn people seem to be talking about. Russia is a dictatorship with sophisticated state-powered cyberwarfare capabilities. Regardless of whether a Russian-based maintainer has malicious intent towards the Linux kernel, it’s beyond delusional to think that the Russian government isn’t aware of their status as kernel developers or would hesitate to force them to abuse their position if it was of strategic value to the Russian leadership. Frankly it’s a kindness to remove them from that sort of position and remove that risk to their personal safety.
It may or may not have been the right decision, but it was definitely the wrong way to go about it. At the very least there should have been an announcement and a reason provided. And thanks for their service so far. Not this cloak and dagger crap.
Indeed this was quite the inhumane way to let maintainers with hundreds of contributions go, this reply on the ML phrases it pretty well:
Edit: There is another reply now with more details on which maintainers were removed, i.e. people whose employer is subject to an OFAC sanctions program - with a link to a list of specific companies.
This would be a completely inappropriate response because it mischaracterizes the situation at hand: if the maintainers want to continue working on Linux, they only have to quit their jobs at companies producing weapons and parts used to kill Ukrainian children. It has nothing to do with the world being (in)sane, and everything to do with sanctions levied against companies complicit in mass murder.
it has everything to do with sanity or lack thereof, when such a standard is applied so unevenly
Yes, the decision is reasonable whether or not it is right, but the communication and framing is terrible. “Sorry, but we’re forced to remove you due to US law and/or executive orders. Thanks for your past contributions” would have been the better approach.
This is true of quite a few governments, including those you think are friendly, and it is a huge blind spot to believe otherwise. Dictatorship doesn’t have anything to do with it, it isn’t as though these decisions are made right at the top.
Dictator, you say? I chuckled. Linus is literally a “BDFL”.
Maybe we’ll eventually see an official BRICS fork of the Linux kernel? Pretty sure China has been working on it.
Do you have the same reaction to contributions from US-based companies that have military contracts? While the US isn’t a dictatorship, the security and foreign policy apparatuses are very distant from democratic feedback.
much more distant than russia’s in fact
It’s hard to single out Russia for this in a post-Snowden world. Not to mention that if maintainers can be forced to do something nefarious, then they can do the same thing of their own will or for their own benefit.
Did you hear this from the affected parties?
The Wikimedia Foundation has taken similar action by removing Wikipedia administrators from e.g. Iran as a protective measure (sorry, don’t have links offhand), but even if that’s the reason, the Linux actions seem to have a major lack of compassion for the people affected.
It wasn’t xenophobia. The maintainers who were removed all worked for companies on a list of companies that US organizations and/or EU organizations are prohibited from “trading” with.
The message could have (and should have) been wrapped in a kinder envelope, but the rationale for the action was beyond the control of Linus & co.
Thank you for the explanation; it makes sense, as it is common and consistent with sanctions on other countries. I was replying to the comment above, mostly.
This is what Hangton Chen had to say about this:
Hi James,
Here’s what Linus has said, and it’s more than just “sanction.”
Moreover, we have to remove any maintainers who come from the following countries or regions, as they are listed in Countries of Particular Concern and are subject to impending sanctions:
Burma, People’s Republic of China, Cuba, Eritrea, Iran, the Democratic People’s Republic of Korea, Nicaragua, Pakistan, Russia, Saudi Arabia, Tajikistan, and Turkmenistan. Algeria, Azerbaijan, the Central African Republic, Comoros, and Vietnam. For People’s Republic of China, there are about 500 entities that are on the U.S. OFAC SDN / non-SDN lists, especially HUAWEI, which is one of the most active employers from versions 5.16 through 6.1, according to statistics. This is unacceptable, and we must take immediate action to address it, with the same reason
Did you just deliberately ignore the fact that Huawei is covered by a special exemption in the sanctions?
The same could be said of US contributors to Linux, even moreso considering the existence of National security letters. The US is also a far more powerful dictatorship than the Russian Federation, and is currently aiding at least two genocides.
The Linux Foundation should consider moving its seat to a country with more Free Software friendly legislation, like Iceland.
I’m Icelandic and regret I only have two eyebrows to raise at that.
it’s an incredibly low bar that Iceland has to clear, as this story demonstrates
Please expand on how Iceland would act to be seen as a more FLOSS friendly place, as opposed to for example the United States.
not mandating the removal of maintainers
In other words, refusing to comply with international sanctions. This is in fact an incredibly high bar to clear for Iceland. It would require the country to dissociate itself from the Nordic Council, the EEA, and NATO.
a kernel dev quoted in the Phoronix article wrote:
that made me think it was due to US (not international) sanctions and that the demand was made by a US body without international jurisdiction. what am I missing?
Without a citation of which sanction they’re referencing it’s really hard to say. I assumed this sanction regime was one shared by the US and the EU, and that Iceland would follow as a member of NATO and the EEA. If it is specific to the US, like their continued boneheaded sanctions against Cuba, then basing the Linux Foundation in another country would prevent this specific instance (a number of email addresses removed from a largely ceremonial text file in an open source project) from happening again.
Note however that Icelandic law might impose other restrictions on the foundation’s work. The status of taxation as a non-profit is probably different.
even if it has to do with international sanctions, their interpretation and enforcement seems to have been particular to the US. it reeks of “national security” with all the jackbootery that comes with it.
Looks like Tvix is powered by https://tvl.su. Neat.
Also found https://russiaishiring.com in their repository.
Hm wonder why Russian interests could be unusually eager to recruit foreign software engineers lately
I guess it’s a rhetorical question, but https://en.wikipedia.org/w/index.php?title=Russian_emigration_during_the_Russian_invasion_of_Ukraine&oldid=1252369612#Impact
huh… That Russia is hiring page refuses to load for me on mobile data. Just a blank page. Curious.
Edit: Can’t be a coincidence. It’s working via Tor.
Maybe they’re trying to hire someone to fix their cdn
These could also be just real, actual passwords these people used, right?
Having seen a bunch of password dumps, 100% yes, absolutely, these are very likely to be.
Yeah exactly! I’ve seen very similar in password dumps+word lists before.
Their username was also a variation of that or just “test”, so I find it unlikely. But it’s a fun thought of someone trying to repel hackers through a magic password…
What’s a passkey again, and why is it needed, besides “big tech privacy” arguments?

It’s not much to do with privacy and all to do with security. So in that regard:

It stops phishing.

That’s the biggest “need” for it. Which is a pretty good one, if you ask me. And it is completely capable of this on paper. It’s clear there are issues across the front end and back end of it: massive UX issues and massive implementation issues.
In the general sense, it’s just adding a new way to use public/private key pairs for authentication in browser contexts.
Using public/private key pairs in browser has already been done at least 2 other times:
Thunderbird has one of the worst user experiences I’ve ever seen. It takes seconds to delete an email from my inbox, the UI thread hangs all the time, if I click on a notification, it makes a black window because the filter moved the email while the notification was up and they didn’t track it by its ID or some basic mistake like that. Every update the UI gets clunkier and slower. Searching has a weird UI and fails to find matches. I could go on and on, there are so many UX issues with this worthless software. I have no idea what’s going on over at Mozilla. I think the org just needs to be burned to the ground.
I use it as my daily driver for now, but I feel like I’m on a sinking ship surrounded by nothing but ocean.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
I have an inbox with > 20k emails in it & deletions happen instantly in Thunderbird.
Likewise dialog boxes appear & disappear instantaneously.
Are you sure there isn’t something up with your system?
Yep, same boat. 100k+ emails, lots of filters, etc., it just works honestly. Thunderbird has only gotten better for me since Mozilla stopped supporting them.
Also for me. 157k mails, everything feels snappy.
I like it significantly better than any web client too, as those are usually pretty laggy.
OP, could it be slow hardware or no compaction?
Out of curiosity, are you using POP or IMAP? I imagine the performance characteristics would be very different, given their different network patterns.
IMAP.
I run Dovecot as an IMAP server on a Thinkpad which was first sold in 2010 with an SSD in it. I keep thinking I should change the hardware but it just keeps trucking & draws less than 10W so it never seems worth the effort.
Ah, IMAP, but on the local network? That’s likely to be much faster than IMAP over the internet, which I think is a very common use case.
It’s stuck behind a 100Mbit ethernet connection (for power saving reasons) which is roughly equivalent to my Internet connection but the latency is probably lower than it would be to an IMAP server on the wider Internet.
Having exclusive use of all that SSD bandwidth probably helps too of course.
Branding is a powerful concept. You think Outlook on iOS or Android shares anything with the desktop app? Nope, it’s also a rebranded acquisition. It is kind of funny the same happened with Thunderbird.
Which leads to funny things where Outlook for mobile gets features before the desktop version (unified inbox and being able to see the sender’s email address as well as their name come to mind).
Wait what?
Be thankful if you’ve never had to use desktop Outlook…
I don’t have it in front of me to double check but yeah the message UI is weird. It shows their name and if you hover over it then it pops up a little contact card that also doesn’t show the actual email address. IIRC hitting reply helps because it’s visible in the compose email UI.
I thought the idea in Mozilla’s head would be more like “okay, we have this really good base for an email app (K-9), let’s support it, add our branding to it, and ship it as Thunderbird”.
I think it’s more like “K-9, now known as Thunderbird”
Are you on Windows, by any chance?
Thunderbird does a lot of disk I/O on the main thread. On most platforms, this responds in, at most, one disk seek time (<10ms even for spinning rust, much less for SSDs, even less for things in the disk cache), so it typically doesn’t hurt responsiveness. On Windows, Windows Defender will intercept these things and scan them, which can add hundreds of milliseconds or even seconds of latency, depending on the size of the file.

This got better when Thunderbird moved from mbox to Maildir by default, but I don’t think it migrates automatically. If you have a 1 GB mbox file, Windows Defender will scan the whole thing before letting Thunderbird read one 1 KiB email from it. This caused pause times of 20-30 seconds in common operations. With Maildir, it will just scan individual emails. This can still be slow for things with big attachments, but it rarely causes stutters of more than a second.
It happens on any platform as long as you have enough mail. IDK why we still have GUI apps in $CURRENTYEAR doing work on the UI thread.
Thunderbird was originally a refactoring of Mozilla Mail and News into a stand-alone app. Mozilla Mail and Newsgroups was the open-source version of Netscape Mail and Newsgroups (both ran in the same process as the browser, so a browser crash took out your mail app, and browser crashes happened a few times a day back then). Netscape Mail and Newsgroups was released in 1995.
It ran on Windows 3.11, Windows 95, Classic MacOS, and a handful of *NIX systems. Threading models were not present on all of them, and did not have the same semantics on the ones that did. Doing I/O on the UI thread wasn’t a thing, doing I/O and UI work on the one thread in the program was.
It’s been refactored a lot since those days, but there’s still a lot of legacy code. Next year, the codebase will be 30 years old.
No, I’m on KDE. I also experienced this behavior on XFCE.
I really agree. I’m a big fan of K-9 Mail and hearing that Thunderbird was taking it over did not sound like good news at all.
I’ll disagree with one of your points though – deleting an email happens instantly and usually by accident. Hitting undo and waiting for there to be any sign in the UI that it heard you, now that takes forever.
I am not a TS/JS/Deno/Node dev, but I don’t feel like this is conducive to the original “goal” of Deno in the first place? It strikes me as a very weird decision, though I haven’t followed Deno much since the early days.
Looks like incremental and seamless transition of existing code as well as integrating the NPM ecosystem turned out to be a higher priority.
Rust without C FFI would be equally dead in the water. Even with all the C-isms you need to handle in unsafe.
I guess my overall point is that I thought Deno NOT being Node was the whole point of it in the first place. Someone mentions further up that “no node compat” was poised as a selling point in the announcement.
But yeah, I probably should go back and read years worth of blog posts before having an opinion.
> But yeah, I probably should go back and read years worth of blog posts before having an opinion.

Can you stop doing this? I now retroactively have the feeling I wasted my time replying to you, as the whole question wasn’t in good faith anyway. Until that part I would have tried to give you a response, but I don’t think you even want that.

Edit: something glitched out - I saw this as a reply to my comment?!
Yeah not sure what’s happened. I think there has been a few issues with comments being reparented to different threads or something recently. I definitely didn’t reply to you with that lol :)
Edit: The comment I originally replied to has been removed by mods and the user has also apparently been banned. Could have something to do with it?
Hey thanks for understanding. I am able to reproduce this 100% with the threads view 0. Reported it to pushcx.
Weird yeah I can see what you mean from the threads view. From there it really does look like I replied to you.
Oh boy, this is the classic OS/2 paradox. Or maybe you can just group it in with the Osborne effect. Now nobody has to worry about Deno because they can just write Node software and be content that it’ll work on both.
I think one of the biggest boons to the Go ecosystem is that C interop is just slow enough to tempt people to rewrite it in Go, but not so slow to be impractical for most purposes.
So it’s not just me? I’m always saddened and surprised every time a project that was “interesting because it’s different” makes this pivot. Usually it’s in the form of “now with POSIX” but that’s just due to my taste for wacky operating systems.
Reminded of WSL and WSL2.
I don’t think so? The explicit goal of WSL has always been to be able to run unmodified Linux software. They tried first with a translation layer with WSL 1, but then they switched that out for running the Linux kernel proper with WSL 2. The result is better Linux software compatibility (allegedly, I don’t use Windows so I can’t confirm), which is an unambiguous success. The value of WSL has never been that it’s an interesting new kind of OS API, it has always been to implement Linux as faithfully as possible.
It was “interesting because it’s different” to me, but it is no longer that. Not really a matter of opinion, here.
What did the API differences between Linux proper and WSL do that was interesting, other than just breaking some software?
I thought they were referring to the architecture of WSL1 as being “interesting” as opposed to the now “no longer interesting” VM approach of WSL2?
WSL1 was an actual Windows “subsystem” and had a much more interesting architecture than WSL2 (which is just a VM + 9P). (edit: Didn’t see you already mentioned this.)

MS thought they could do it in reverse. I wonder if it eventually bites them in the rear?
You have it backwards. I don’t have to worry about Node and all its bullshit ecosystems anymore, because I can just run `deno [whatever]` and do anything I need to do. I’m so sick of installing the same pkgs over and over for every project.

To clarify, my comparison to OS/2 went deeper than that; everyone can agree OS/2 was the superior product, but NT beat it because IBM double-whammied itself by making OS/2 a less appealing product while also only being able to sell itself as being able to do what Windows does (in some ways better, in some ways worse). I feel like Deno is following down the same path where it’s technically superior, but also doesn’t do enough to compete with its predecessor (and this is a common view; I’m sure you’ve seen similar comments in other Deno news articles).
I know this is a meme, but it’s just not accurate. I might grant that OS/2 was superior to Windows 3, which is probably where that story comes from, but NT? NT was a multiuser system with all the security benefits that entails, could run multiple DOS boxes at once (OS/2 could only do one), had a vastly more pleasant API, had true concurrency in its event queue (while OS/2 was preemptively multitasked, the event queue was cooperative, so an errant program could lock the GUI), had better Unicode support, had a better file system…it was honestly superior in almost every way.
I’m saying all that not to be pedantic, but rather because I think OS/2 is brought out too often as “proof” that having a compatible API is a death knell, whereas I think it failed for a whole pile of other reasons. If you want a better analogy, you might look at how Proton and WINE have largely killed native Linux gaming, or how that plus the Game Porting Toolkit is arguably killing Vulkan.
I think you have the DOS thing backwards, it was NT that could only run one DOS application at one time and OS/2 that could run multiple. It was one of the biggest bragging points about OS/2.
And, yes, I do agree that OS/2’s failure was multi-faceted, which I alluded to in my post. It’s certainly not that OS/2’s failure is only attributable to its Windows compatibility; it wasn’t even really a major reason why it failed. But it was a significant straw on the camel’s back. That was my point: relying on compatibility with a competing project isn’t the only reason a project can fail, but it’s definitely a bad omen.
Didn’t know that. However, NT’s architecture was fundamentally different from DOS, so if I’m not mistaken it couldn’t actually run DOS applications “natively”. Instead, it would virtualize DOS…and in that sense, I believe it was capable of “running multiple DOS applications at one time”, but it wasn’t actually doing any kind of DOS-level multi-tasking, so I assume this was less efficient. Probably didn’t matter for Windows users at the time (I certainly never noticed, but I really only used the DOS mode to play old video games).
It’s kinda funny to me how IBM marketed this as a “bragging point”, because it unintentionally suggests that OS/2 is simply DOS with a new coat of paint, whereas NT was a completely new green field OS that solved a lot of the problems people had with DOS/UNIX/etc. If I was a developer around that time, I’d probably be a lot more interested in NT as well.
No? NT could always run multiple NTVDMs. OS/2 was limited to a single DOS session (“the coffin”) until OS/2 2.0 when it became a 386 thing.
OS/2 wasn’t a superior product (the reliability guarantees are far worse than NT’s, e.g. a Presentation Manager that’s easy to wedge), but it made sense for the systems and compromises of the time (systems with less than 16 MB of RAM, more DOS/Win16 apps than apps designed for a 32-bit world). As those compromises became less relevant, there was less reason to run OS/2 instead of something like 95 or NT.
I actually haven’t heard this. My experience is the opposite. Deno’s tooling, additional standard library, and security features make it well worth it on its own from my perspective!
I propose a better solution, systemd-pathd.
It’s simple: systemd will offload all this logic and keep track of both user and system binary paths. All we need in .profile is:
(/s)
Welcome to Nix? (/s??)
tbh it’s not that bad, it usually just adds your nix profiles, which are symlinked directories much like `/usr/bin` (and you usually have only one or two at the same time). And there’s no `/usr/bin` on NixOS (unless you directly add store paths to `$PATH`, then that’s nightmare fuel).

Yeah, I guess that was my point! :)
Using my mystical powers of prediction, I reckon this will be a total nothingburger, simply because of the unserious behavior of the person originating it (Simone Margaritelli).
Also, much less serious prediction, but I’ll guess that the problem is somewhere in CUPS. Especially some old decrepit part of CUPS that no one uses anymore.
you were spot on, btw. it was CUPS: https://lobste.rs/s/nqjmcy/attacking_unix_systems_via_cups_part_i
This act of hyping vulnerabilities before public disclosure gives me movie-trailer vibes. We really can’t help putting ads on and monetizing everything nowadays, can we? The next step will be to pair it with a videogame-like pre-order model.
The hindsight seems to be that the vendor was uncoöperative and needed some “social massaging” (a bullying campaign for the greater good) to get the fixes in.
I mean, regardless of the reporter, “the CVE turned out to be significantly less severe than its severity rating claimed” would be true, what, like 95% of the time anyway?
Sort of related: inspired by the IOCCC, instead of doing school work in high school, I would often play around in C with #defines and typedefs to make weird and obscure syntaxes. It was really fun to mess about with.
Along with this: don’t use booleans as “flags” in your database. Use some sort of timestamp instead. Now you know when it was set, which you’ll suddenly find useful down the line.
Dates make a lot of sense for things where a date is relevant to the actual thing - a publish date, a modification date, a “sale starts”/ “sale ends” field.
The fields where I’m using a boolean in a database, I want to be able to express two, or possibly three actual states (a nullable boolean): “on; off” or “on; off; no preference aka inherit default”.
A date gives you at best “yes, as of/until some time; no; or maybe inherit default”.
If you want to know when some value changed, you want an audit log, which is more useful for auditing anyway because it isn’t limited to storing the last time it was changed, and it can store who changed it, what else was changed at the same time, etc.
GP meant mutable, hacky state flags, not immutable boolean properties of a conceptual relational entity.
If you have a boolean column and go around changing the value, you are very likely doing it wrong. Model your changes instead.
Check out Rich Hickey’s take on how state is handled in Clojure for a straightforward explanation of state vs. data.
Checked around but got some ambiguous results - could you share the specific take by Rich Hickey?
https://clojure.org/about/state
It’s less detailed than I remember, but it does a good job of setting values and state apart.
probably this one: https://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey/
When you do this, you use `null` as the default and a `timestamp` as “set to non-default at timestamp”, or? If someone turns off a setting after turning it on, you just set it back to `null`?

Good question. I guess it depends on what you’re using as a “flag” here, but I should have specified: this is for things you’re unlikely to toggle back. Once again, a kind of “status”, except not a linear status but various conditions. The first one that comes to mind is a “soft delete” or an “archive”.
Use 5th or 6th normal form and eliminate `null` altogether.

https://dave.autonoma.ca/blog/2019/06/06/web-of-knowledge/
Why is it that every article attempting to explain how `null` is some unspeakable horror includes a tale about some application somewhere comparing it to the string `'null'` and the ensuing chaos? You might as well say “don’t use Boolean, someone might compare it with the string `'false'`”.

Dereferencing a boolean (or any other properly initialized value) won’t cause a program to crash. And comparing a boolean to a string will result in a compile-time error. I’m guessing you already know that and are trolling at this point.
https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/
Funny story. I was working at a company that was building a high-visibility student transcript system. During a code review, I saw that someone had hard-coded a default value for the last name of a student to the literal string “null”. I brought this to the attention of another developer stating that if a student actually had the last name of Null, they wouldn’t be able to use the system. He went away came back a half hour later and said, “You’re right, that’s a bug.”
That would not have been a fun bug to track down; Hoare strikes again.
https://www.wired.com/2015/11/null/
Another issue, of course, is that a `null` sentinel allows comparing two different classes of objects to the same value. (It explicitly excludes the sentinel value from the set of values that are valid elements.)

There are many, many reasons to make software both immutable and null-hostile, which is why the industry is slowly moving in that direction.

https://github.com/google/guava/wiki/UsingAndAvoidingNullExplained
You haven’t answered the actual question: why are people hardcoding a string literal `'null'` when comparing to an actual `null`?

Also, for the record: comparing a string to a boolean is a perfectly valid operation in any number of languages. It will equate to false if you’re doing things properly. I’m guessing you already knew that and are trolling at this point.
Yes, that is a bug, because they’re treating a literal null and the string null as the same thing. If your language or database doesn’t distinguish between `null` and `'null'`, pick a better language/database before you start telling everyone else on the planet that they shouldn’t use a literal `null` because someone somewhere is stupid enough to compare it with the string `'null'`.

As long as you have sane auditing, you can always go look. In your version you know the what and the when. With sane auditing you get all the W’s, well, perhaps not the why, unless you go ask the who ;)
So instead of `is_active` being `true | false`, it’s either `null` (for “false”/unset) or a timestamp? Am I understanding you right?

It sounds like what they’re suggesting is that instead of having `is_active`, you’d have `activated_at` with a timestamp instead, where it’s `null | timestamp`, `null` being not activated.

Which is quite the opposite of the advice given in the featured article.