The reason you don’t need them is that Linux always treats \n as the newline in line processing, and with icrnl enabled (the default), if your keyboard emits \r you will get \n before line processing.
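For anyone who wants to poke at it, here’s a rough sketch (mine, not from the comment above) of inspecting and setting ICRNL with termios; stty icrnl / stty -icrnl does the same thing from the shell:

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t;
    if (tcgetattr(STDIN_FILENO, &t) != 0)
        return 1;
    printf("ICRNL is %s\n", (t.c_iflag & ICRNL) ? "on" : "off");

    t.c_iflag |= ICRNL;                   /* map incoming \r to \n before line editing */
    tcsetattr(STDIN_FILENO, TCSANOW, &t);
    return 0;
}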
Once upon a time, ASCII keyboards could often send a different character for the return keys on the number pad versus the “main” keyboard, or have a special “submit” or “send” key (instead of sending an escape sequence). Unix’s kernel-side line editor didn’t have much use for those keys, so users might configure stty to treat them as newlines rather than reconfigure their terminal (which they might use to connect to non-unix systems or to run interactive applications).
Ah, the digraphs you sometimes see mentioned in older C books (never encountered them in actual code, though I do remember a good deal of K&R-style code bases).
It’s interesting how early languages interacted with the available key codes. The most obvious being systems where you only had upper case. The Algol family with its begin and end didn’t have the brace issue, but if I remember correctly, some early examples used the left-pointing arrow (←) for assignment instead of “:=” and couldn’t use underscore because that’s how early ASCII worked (which means that viewed only a few years later, some people probably wondered why an underscore is used for assignment).
(Not that begin/end was that easy with Algol’s “stropping”, but that’s an entirely different tale)
Not that we don’t still have this issue today. We could use all kinds of more semantically meaningful Unicode characters, but a lack of an easy way to enter them prevents us from doing it. And not just in programming languages, the lack of easily accessible dashes and proper apostrophes and quotation marks also leads to less ideal substitutes being way too common (I’ve yet to see a proper DIN 2137 E1 keyboard here in Germany).
It’s not keycodes or codepoints but keys and keyboards.
Early typewriters absolutely had a _: just take a close look at the remington 2 if you don’t believe me - you would use this with overstrike to underline some text on a page.
Early teletypes, on the other hand, didn’t make as much use of the overstrike feature, so they often replaced the underscore with a left arrow, but this wasn’t universal: plenty of teletypes of this era had underscores.
But neither teletypes nor typewriters had so many brackets and braces, and whilst modern keyboards in the US and UK often have them, here in Portugal we definitely need keys for º and ª and ç and lots of other é ñ and so on, so my macbook pro’s keyboard doesn’t have [ or ] keys. I think the mac german layout is similar but maybe you don’t have a ç key. In any event I’m lucky I learned C on a teletype because I still don’t have those keys!
You wouldn’t know it looking at my code because I’ve got this in my vimrc:
imap <% {
imap %> }
imap <: [
imap :> ]
APL programmers use a lot of symbols too, including ← for assignment, but I have to press so many keys to get it on the mac with the default layout that I just don’t bother on my laptop.
It absolutely was character sets. See page 21 of the rationale. A lot of the motivation was due to the way ISO 646 international ASCII used the higher punctuation character codes for extra letters, so {} could not be represented at all.
Incorrect: That document you referred to isn’t “the rationale” for digraphs (or possibly anything), and that’s not the only mistake in it.
The digraphs were introduced in ISO/IEC 9899:1990/AMD1 (1995), and it makes it very clear that it’s about internationalisation.
From the very first page, I quote:
Use of these features can help promote international portability of C programs. … Subclauses 6.1.5 and 6.1.6 of ISO/IEC 9899: 1990 are adjusted to include the following six additional tokens. In all aspects of the language, these six tokens
<: :> <% %> %: %:%:
behave, respectively, the same as these existing six tokens
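For illustration (my own toy example, not part of the quoted amendment): the six digraphs correspond to [ ] { } # ## respectively, so this is legal C95-and-later:

%:include <stdio.h>

int main(void)
<%
    int a<:3:> = <%1, 2, 3%>;   /* same as: int a[3] = {1, 2, 3}; */
    printf("%d\n", a<:0:>);
    return 0;
%>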
It’s usually faster and easier to press two keys on opposite sides of the keyboard than to chord two (or three) keys on the same side of the keyboard. The option key is also in an awkward place, requiring more hand travel.
The most influential language that used ← for assignment was Smalltalk — which was weird because as a 1970s language it came after ASCII replaced ← with _. Dunno why Xerox PARC used out-of-date ASCII.
Anyway, Smalltalk was a wordy language with a preference for longer identifiers than had been typical in older languages. Because they lacked _ they used camelCase to separate words.
Object-oriented programmers carried the Smalltalk camelCase identifier style to later languages that had _ and could have supported less typographically ugly styles.
I think at that time ASCII wasn’t as standardized as one would assume. Stanford had their own version, and I think this was also used by the Algol-60 variant Knuth used to write the first TeX version.
Every Smalltalk I’ve used has used := for assignment and ^ for return, but rendered them as ← and ↑. I didn’t know that they were characters you could type on the Alto!
My old tweet with this fun fact was surprisingly popular :-) It was prompted by Allen Wirfs-Brock saying, “Very few realize how much the software development world has been influenced over the last 20 years by the infiltration of former Smalltalk programmers into key communities.”
With the understanding that I might just have been lucky, and / or working for a company seen as a more valuable customer … in a previous role I was a director of engineering for a (by Australian standards) large tech company. We had two teams run into difficulties, one with AWS, and another with GCP.
One team reached out to Amazon, and our AWS TAM wound up giving us some great advice around how we’d be better off using different AWS tech, and put us in touch with another AWS person who provided some deeper technical guidance. We wound up saving a tonne of money on our AWS bill with the new implementation, and delighting our internal customers.
The other team reached out to Google, and Google wouldn’t let a human speak to us until we could prove our advertising spend, and as it wasn’t high enough, we never got a straight answer to our question.
Working with Amazon feels like working with a company that actually values its customers; working with Google feels like working with a company that would prefer to deal with its customers solely through APIs and black-box policies.
I’m still confused. What do you have to do in AWS that you don’t have to do in GCP or is significantly easier? I have very little experience with GCP and a lot with AWS, so interested to learn more about how GCP compares.
“Trust me bro” is a difficult sell; but I am here to back up the sentiment. There are simply too many little things that by themselves are not deal breakers at all - but make the experience much more frustrating.
GCP feels like if engineers made a cloud platform for themselves with nearly infinite time and budget (and some idiots tried to bolt crud on the side, like oracle, vmware and various AI stuff).
AWS feels like if engineers were forced to write adjacently workable services without talking to each other and on an obscenely tight deadline - by people who didn’t really know what they wanted, and then accidentally made something everyone started to use.
I’m going to write my own blog post about this; I used to have a list of all my minor frustrations so I could flesh it out.
I’m still working on it, there’s some missing points and I want to bring up a major point about cloud being good for prototyping, but here: https://blog.dijit.sh/gcp-the-only-good-cloud/
The thing you are describing I should have only needed to learn once for each cloud vendor and put in a script: The developer-time should amortise to basically zero for either platform as long as the cloud vendor doesn’t change anything.
The developer-time should amortise to basically zero for either platform as long as the cloud vendor doesn’t change anything.
Yes and no.
Some constructs you don’t even have to worry about on GCP, so, it is best for fast deployments.
However, if you spend 500+ hours a year doing cloud work, then your point stands.
Indeed, if you are doing 5 hours a week of cloud infra work, your application is likely a full-time job or equivalent. I do believe you made the right choice with AWS.
No.
If you are spending 5 hours a week on infrastructure, then your application would be worth spending 40 hours on. Or is infra the only component of your application?
Can you do an actual comparison of the work? I’d be curious. Setting up a web service from a Docker image is a few minutes. Env vars are no extra work. Secrets are trivial. A cert would take maybe 10 minutes?
Altogether, someone new should be able to do this in maybe 30 minutes to an hour. Someone who’s done it before could likely get it done in 10 minutes or less, some of that being downtime while you wait for provisioning.
You’ve posted a lot in this thread to say, roughly ‘they’re different, AWS is better, but I can’t tell you why, trust me’. This doesn’t add much to the discussion. It would help readers (especially people like me, who haven’t done this on either platform) if you could slow down a bit and explain the steps in AWS and the steps in GCP and why there’s more friction with the latter.
No, please don’t trust me.
Here’s an exercise for yourself.
Assuming you control a domain domain1.com.
Deploy a dockerized web service (pull one from the Internet if you don’t know how to write one).
Deploy it on gcp1.domain1.com,
Deploy another on aws1.domain1.com.
Compare how many steps it takes and how long it takes.
Here’s how I deploy it on GCP. I never open-sourced my AWS setup but I am happy to see a faster one.
As I said, I have not used AWS or GCP so I have no idea how many steps it requires on either platform. I can’t judge whether your command is best practice for GCP and have no idea what the equivalent is on AWS (I did once do this on Azure and the final deploy step looked similar, from what I recall, but there was some setup to create the Azure Container thingy instance, and I can’t tell from your example if this is not needed on GCP or if you’ve simply done it already). If I tried to do it on AWS, I’d have no idea if I were doing the most efficient thing or some stupid thing that most novices would manage to avoid.
You have, apparently, done it on both. You are in a position to tell me what steps are needed on AWS but not on GCP. Or what abstractions are missing on AWS but are better on GCP. Someone from AWS might even read your message and improve AWS. But at the moment I just see that a thing on GCP is one visible step plus at least zero setup steps, whereas on AWS it is at least one step.
You’ve posted ten times so far in this story to say that GCP is better, but not articulated how or why it’s better. As someone with no experience with either, nothing you have said gives me any information to make that comparison. At least one of the following is true:
GCP has more efficient flows than AWS.
You are more familiar with GCP than AWS and so you are comparing an efficient flow with GCP that you’ve learned to use to an inefficient flow with AWS.
You are paid by GCP to market for them (not very likely).
It sounds as if you believe the first is true and the second two are not but (again) as someone reading your posts who understands the problem but is not familiar with either of the alternatives in any useful level of detail, I cannot judge for myself from your posts.
If you wrote ‘GCP has this set of flows / abstractions that have no equivalent on AWS’ then someone familiar with AWS could say ‘actually, it has this, which is as good’ or ‘you are correct, this is missing and it’s annoying’. But when you write:
I have done it for myself and I’m speaking from experience.
That doesn’t help anyone reading the thread understand what is better or why.
I find AWS to be better organized and documented but much larger than any Google product I’ve seen. There is more to learn because it’s a large, modular system. And it’s trivial to do easy things the hard way.
I don’t have much direct experience with Google’s hosting but if any of their service products are similar, the only advantage is they do less, which means you can skimp on organization and documentation without too many problems.
In terms of big players I would recommend GCP still, but only because I mostly work with Kubernetes and it’s best there. Among the smaller players, Fly.io actually works well for me.
Why are you excited? I have no idea, because upon loading the site, literally all of my screen is filled with advertising, a disgusting AI slop image, and an admittedly well-intentioned 1/3 height popup about kindness.
There is literally no content visible. It’s just awful slop, advertising, and pop-ups as far as the eye can see.
Hell, you didn’t even have to use AI slop for the image of the elephant … I did a quick search and found loads of Creative Commons licensed photos of real elephants.
Here’s what ellama-summarize-webpage gives me (I’m running Emacs integrated with an LLM running locally on my desktop):
The author shares their experience of reducing query times significantly for a Ruby on Rails app
that utilizes PostgreSQL, thanks to a proposed patch by Peter Geoghegan that improves B-tree
indexing. The app’s response time improved from 110ms to just 10ms with the new patch and PostgreSQL
17 beta. The author encourages upgrading to PostgreSQL 17 for further improvements, as well as
delving deeper into database optimization. They also recommend sharing feedback and benchmarks on
the PostgreSQL hackers mailing list. Overall, the author highlights how the proposed patch can lead
to faster response times, particularly in challenging scenarios where query times are high.
Sorry if my original reply seemed harsh - my issue was with dev.to, not at all with you or your article :) I’ve re-submitted your blog post to lobste.rs on the new URL :)
tl;dr? multiple IN expressions in a query now use indexes; you don’t have to write (a=? OR a=? OR a=?) you can just write a IN (?,?,?)
I haven’t used postgres seriously in a decade or more, so I’m unlikely to go change the one app I made once upon a time that still uses postgres to take advantage of this optimisation, but I clicked the link hoping to learn something anyway…
That seems to be it, and seems to be little more than a syntactic convenience when compared like this, but it could be significant if the new optimisation covers dynamic cases as well, like an ANY expression with a data-dependent array. Though even if it doesn’t work on dynamic cases, you could still say that a pitfall got ironed out.
That said, even though this post only mentions this IN optimisation, Postgres 17 does indeed seem to come with quite a lot of other optimiser improvements, seems significantly more than usual.
It may seem irrelevant to his technical qualifications that Curtis Yarvin is a racist. Someone can be a good plumber but a bad electrician, so why not a good programmer but a bad human being? But I think it really is relevant in this case because once you realize that Yarvin is just as dumb and racist as your racist uncle at Thanksgiving and he’s just better at writing twenty thousand word blog posts, it helps you see all of his other ideas in a similar light. Is it a good idea or just a lot of words surrounding a bad idea?
Is Kelvin versioning a good idea? It’s an interesting idea on the surface, and it’s interesting to think about converging on a final design rather than evolving forever, but once you allow the version to go from v1 to v0.1 to v0.01 you realize that it’s actually the same as a normal version that always counts up, it’s just way less convenient to work with and it requires twenty thousand words of fog to cover it up.
Yarvin is the human equivalent of an LLM: there’s nothing of substance there, just a lot of hot air surrounding a core of absolute stupidity.
“Racist” is putting it mildly. Comparing Yarvin to a garden-variety racist uncle is like comparing Darth Vader to a stormtrooper. He’s explicitly a feudalist who believes democracy is bad and the world should be run by an elite.
But I think calling him stupid is a cop-out. I’ve read through some of the Urbit docs and there are some fascinating, but convoluted and impractical, ideas in there. I would rather call him evil. I just hope his plans for world domination are as unlikely to succeed as Urbit is.
It’s not irrelevant: people who can be racist have other bad ideas, so you are right to be suspicious!
Version number schemes are also a fetishism: they don’t make your code faster or smaller or more correctly handle inputs. When a racist tells you their fetish, why do you listen?
Wow. Before this I thought PuTTY’s configuration dialogs were just lazy, that maybe there wasn’t even a “design decision” to make them that way, but the idea that the author likes this layout seems very strange to me. I hope this is not what they mean.
But I like the idea of symbiosisware anyway; most of my business is a bunch of perl scripts…
Heh, I somewhat agree with you but there’s some interesting contrasts there as well. I couldn’t tell you when I started using PuTTY, but I deeply appreciate how it works exactly the same today as it did when I started using it. There’s never been a moment where I’ve installed PuTTY on a new machine and couldn’t immediately use it without having to hunt around for “that thing that moved”. And… I just checked on a PuTTY process that I’ve connected to a USB-Serial terminal right now, and Windows is reporting that it’s using 0.1MB of RAM!
I didn’t read it that way, since the article only says it was designed for one user, not that it only has one user. Once upon a time, PuTTY only had one user, so yeah, I don’t agree it is clearly not symbiosisware.
Further, I don’t think “small screens” justifies just how weird PuTTY’s configuration/session interface is: People, including myself (and almost certainly including the author) get “used to it”, but I also watch new users who need an SSH client on Windows struggle to translate their familiarity with other Windows applications to that configuration screen, and I sometimes find myself wondering why it works that way.
but 15 items shows you how likely it is JavaScript will not be available, or available in a limited fashion
No it doesn’t. Across a few million daily page loads, less than 0,05% of my traffic is without javascript, and it’s usually curl or LWP or some other scraper doing something silly. Your traffic might be different, so it’s important to measure it, then see what you want to do about it. For me, at such a small number, the juice probably isn’t worth the squeeze, but I have other issues with this list:
A browser extension has interfered with the site
possible but so what? ad blockers are designed to block ads. offer scripts help users find better prices than yours. my experience is this goes wrong almost never because that makes the user more likely to uninstall the extension.
A spotty connection hasn’t loaded the dependencies correctly
don’t depend on other people’s network. If my server is up, it can serve all of its assets. “spotty” connections don’t work any other way.
Internal IT policy has blocked dependencies
again, don’t depend on other people’s network
WIFI network has blocked certain CDNs
again, don’t depend on other people’s network. bandwidth is so freaking cheap you should just serve it. don’t let some third-party harvest your web visitor data to save a few pennies. They might block your images or your css as well. Your brand could look like shit for pennies.
A user is viewing your site on a train which has just gone into a tunnel
possible but so what? the user knows they are in a tunnel and will hit the refresh button on the other side.
A device doesn’t have enough memory available
possible but I don’t know anyone with a mobile phone older than 10 years, and that’s still iPhone 6-era performance, maybe HTC One? Test with an old Android and see. This isn’t affecting me; I don’t even see it in the page requests.
There’s an error in your JavaScript
you must be joking. have you met me?
An async fetch request wasn’t fenced off in a try catch and has failed
ha.
A user has a JavaScript toggle accidentally turned off
i don’t believe this. turn it off and ask your mom to try ten of her favourite websites. if she doesn’t ask what’s wrong with your computer, she’s not going to buy anything I sell.
A user uses a JavaScript toggle to prevent ads loading
possible but so what? that’s what they are designed to do.
An ad blocker has blocked your JavaScript from loading
possible but unlikely. I test with a few different adblockers and most of them are pretty good about not blocking things that aren’t ads.
A user is using Opera Mini
possible. something like 5^-6% of my page loads are some kind of Opera, so maybe when I start making a million dollars a day, fixing Opera will be worth a dollar a day, but this is not my reality, and heck, even at Google’s revenue this can’t be worth more than a grand a day to them.
A user has data saving turned on
possible but so what? i do this too and it seems fine. Try it.
Rogue, interfering scripts have been added by Google Tag Manager
don’t depend on other people’s network. Google Tag Manager is trash; I don’t use it and I don’t recommend anyone use it. I don’t sympathise with anyone having any variant of this problem.
The browser has locked up trying to parse your JS bundle
possible, which is why I like to compare js-loads against telemetry that the JS sends me.
99,67% of my js page loads include the telemetry response; I don’t believe spending any time on a js-free experience is worth anything to the business, but I appreciate it is possible it could be to someone, so I would like to understand more things to check and try, not more things that could go wrong (but won’t or don’t).
I’m not sure how to put this politely, but I seriously doubt your numbers. Bots that aren’t running headless browsers should, by themselves, account for more than what you estimated.
I’d love to know how you can be so certain of your numbers? What tools do you use or how do you measure your traffic?
In our case our audience are high school students and schools WILL occasionally block just certain resource types. Their incompetence doesn’t make it less of your problem.
My pages will load a.js, then load b.txt or c.txt based on what happened in a.js. Then, because I know basic math, I can compare the number of times a.js loads against the number of times b.txt and c.txt load, according to my logfiles.
What tools do you use or how do you measure your traffic?
Tools I build.
I buy media to these web pages and I am motivated to understand every “impression” they want to charge me for.
In our case our audience are high school students and schools WILL occasionally block just certain resource types
I think it’s important to understand the mechanism by which the sysadmin at the school makes a decision to do anything;
If you’ve hosted an old version of jquery that has some XSS vector, you’ve got to expect someone is going to block it with jquery regexes, even if you’ve updated the version underneath. That’s life.
The way I look at it is this: I find if people can get to Bing or Google and not get to me, that’s my problem, but if they can’t get to Bing or Google either, then they’re going to sort that out. There’s a lot that’s under my control.
A spotty connection hasn’t loaded the dependencies correctly
don’t depend on other people’s network. If my server is up, it can serve all of its assets. “spotty” connections don’t work any other way.
Can I invite you to a train ride in a German train from, say, Zurich to Hamburg? Some sites work fine the whole ride, some sites really can’t deal with the connection being, well, spotty.
Some sites work fine the whole ride, some sites really can’t deal with the connection being, well, spotty.
Yeah, and I think those sites can do something about it. Third-party resources are probably the number one issue for a lot of sites. Maybe they should try the network simulator in the browser developer tools once in a while. My point is that the javascriptness isn’t as big a problem as the people writing the javascript.
So someone points out an issue in some code, provides a patch with clear explanations, and the maintainer writes:
This function is a joke. Don’t you have better things to do? It’s not worth
arguing about and so to safe me time I added a modified version of the patch.
And later:
Stop wasting people’s time! Nobody cares about this crap.
Guess he didn’t go to the “stop being an asocial basement nerd” class…
I realize now he has a wikipedia page which boils down to: “he’s a good coder, but a bigger jerk.”…
Guess he didn’t go to the “stop being an asocial basement nerd” class…
Can I suggest an alternative way to think about this?
The human being in this case is paid to maintain it, and in order to get paid they need to tell their employer what they were spending their money on, and they don’t want to say “I spent that money on a joke function zero of your customers care about” – because seriously, who would?
Every single person commenting on this thread is putting that relationship at risk… for a joke. This is their life!
Can you even imagine how frustrating it must be for someone to put your fucking life at risk for a joke and then get mad at you for not getting the joke?
On the other hand, Drepper’s behaviour caused Debian and other major distributions to drop glibc in favour of eglibc, and subsequently glibc maintainership was taken over by a committee without Drepper. It wasn’t the bug report that lost him his job, it was the way he handled it (and many others).
At some point in the ticket they mention something along the lines of: “if it’s a joke, remove it. Otherwise, if it’s left in the codebase, what’s wrong with a correct implementation?”.
Also, merging such a patch would take no more than a few minutes. I doubt his employers are tracking his work to the minute, nor judging if an applied patch is “worth it”.
put your fucking life at risk
Do you seriously think that merging a fix provided by a very competent developer puts “his life at risk”? Do you think the employer will be like: “hey, you spent 5 minutes merging this fix, but it’s for a joke function, you’re fired!!!”.
If that’s your idea of employment, you must have had very shitty bosses…
I’m curious - why not a database? Maildir/MH/mbox all have gross tradeoffs between performance and reliability, whereas SQLite would cut the gordian knot of those. Of course, schema design could be a problem, but even just stuffing the whole message without special columns for indexing would probably end up working better.
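(For concreteness, a minimal sketch of the “just stuff the whole message in” idea with the sqlite3 C API; the table and column names are made up, and error reporting is skipped:)

#include <sqlite3.h>

int store_message(sqlite3 *db, const void *raw, int len)
{
    sqlite3_stmt *st;
    /* one blob per message; indexes or derived tables can be layered on later */
    if (sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS messages(id INTEGER PRIMARY KEY, raw BLOB NOT NULL)",
            NULL, NULL, NULL) != SQLITE_OK)
        return -1;
    if (sqlite3_prepare_v2(db, "INSERT INTO messages(raw) VALUES (?)", -1, &st, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_blob(st, 1, raw, len, SQLITE_STATIC);
    int rc = (sqlite3_step(st) == SQLITE_DONE) ? 0 : -1;
    sqlite3_finalize(st);
    return rc;
}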
I think archival purposes used to be a really good reason, but SQLite is now a LOC recommendation for long-term storage, so I am not really convinced of it anymore.
On the subject of schemas: Once upon a time I added SQLite support to dbmail; If you only do IMAP/SMTP with the occasional ad-hoc query maybe that’s fine for you. The general architecture is good enough for archival purposes but then derived tables are used to make IMAP suck less and are sometimes useful for ad-hoc queries. Obviously you can add your own triggers and build up your own cache tables as well…
For a server or general backing storage for e.g. a GUI client, I agree. This is aimed at people who enjoy having stuff stored as regular files that can be inspected with regular command line tools, like grepping for something etc.
Even for the local MUA people, I think command line tools that abstract SQL queries so you can use a more robust format than plain text would be worthwhile.
classical HTTPS certificates to authenticate your SSH3 server. This mechanism is more secure than the classical SSHv2 host key mechanism
I like the idea of trying something new, but between making claims like this without linking to full explanation and calling the thing SSH3 like it’s officially a new version… I’m not happy how they approach things.
It makes me wonder though, who has the right to name something SSH-3. Not legally, but rather who can we agree is the right set of people to assign that name. At least http3 existed under a different name and got adopted by a standards body later.
I mean the name ssh is not really protection worthy; in the end the original implementation, sshv1, apparently eventually became proprietary itself.
Though when it comes to the process, that’s some good feedback nevertheless. Perhaps someone should open an issue on their GitHub.
The IETF’s initial working name for ssh-2 was “Secsh”, and it only later became ssh-2, so one might expect the same of anyone attempting to create a new ssh version.
In the end, I’m surprised there aren’t more attempts to lift ssh to a newer standard, a lot has moved. Now with ssh-3, but also with mosh and the rise of the web, networked microcontrollers and so on, there is definitely a great opportunity to do so.
At the same time, OpenSSH is like the cathedral on top of a mountain on an island. Carefully engineered, slowly crafted with a lot of thoughts spent on very important aspects (private keys encrypted in memory comes to mind), and whatever comes next should hopefully be on par with that later on.
In the end, I’m surprised there aren’t more attempts to lift ssh to a newer standard, a lot has moved
QUIC is the first time it was really worthwhile. The SSH protocol itself is fairly modular. You can plug in new encryption schemes, different hash schemes, and even different key types pretty trivially. Most of the evolution has been in that direction. We’ve moved from RSA to ECDH fairly smoothly over the last few years and you can store keys in TPMs or U2F tokens easily. SSH supports multiple channels and different out-of-band message types, so it’s easy to extend.
About the only thing that would require changes would be replacing TCP. SCTP never took off, so QUIC is the first thing that looks like a plausible replacement. QUIC bakes in a lot of the security at a lower level, so just running SSHv2 over QUIC is probably not useful and you need different encapsulation.
At the same time, OpenSSH is like the cathedral on top of a mountain on an island
It also doesn’t support the newer versions of the various protocols. I recently implemented an sftp server and was quite surprised to discover that OpenSSH doesn’t support anything newer than v4 (the latest RFCs are for v7).
That said, it has an impressive security track record. There are a lot of CVEs but most of them are either in niche features or possible sandbox escapes that are only a problem if you couple them with an arbitrary-code execution vulnerability in the rest of the code. Few other things have the same track record.
I will recommend checking back on this project in a few years and see if issues continue to be uncovered and get better understood, or if this turns out to be another dead-end.
I mean the name ssh is not really protection worthy; in the end the original implementation, sshv1, apparently eventually became proprietary itself.
I think it is protection worthy in the sense that any word with commonly understood meaning and connotations should be protected so that it can continue to be a useful word.
People understand SSH to be a standardized protocol that has been gradually developed over decades and is now ubiquitous on Unix machines. The transition from SSH-1 to SSH-2 respected people’s expectations in a way that SSH3 does not.
If a project’s name subverts expectations and creates the possibility for confusion, it’s not grounds for a lawsuit, but it reflects poorly on the project and will turn off a lot of people.
For example, this recently happened when blockchain bros tried to run off with the name “web 3”. That was the first thing I thought of when I saw that this new project was trying to land-grab the “ssh 3” name. It would have been a better first impression to pick a new name and let the SSH community figure it out.
The differentiation between a TUI and a GUI doesn’t hinge on whether it operates within a terminal, but rather, its design centered on interactivity. This can be exemplified through the use of modes, keybindings, panes, and so forth, with lazygit serving as a prime example. In my opinion, this project leans more towards the GUI side.
That’s an interesting way of thinking about it. I always assumed that TUIs were any text-based user interface as the name suggests, regardless of the interactivity. For example, vim / neovim are TUIs even though they are much more complex than simple GUI applications (for example minesweeper).
I always assumed that TUIs were any text-based user interface as the name suggests, regardless of the interactivity
Yes. Text-based. Line characters and VT220 escape sequences are not text though, any more than a base64-encoded png is text. Text can be composed and manipulated with lots of different tools (not just humans), so interactivity and composability really are the main things you will have with a Text-based user interface.
https://en.wikipedia.org/wiki/Impulse_Tracker runs in what is called “text mode”, and I don’t think it’s a TUI. I don’t think line art and box-drawing characters are “text” just because they’re conveniently located and sized in a font block and there’s special circuitry to translate a buffer of characters and attributes into pixels instead of working with pixels directly. You can’t run this in a terminal, you can’t pipe it to anything, and keyboard input is controlled by each control individually (like every GUI you have ever used). Minesweeper is a GUI even when it runs in a terminal for exactly the same reason: https://minesweeper.mia1024.io
http://acme.cat-v.org on the other hand, only ever works with a raster display (graphical pixels), and yet I think it’s undeniably a TUI because there is literally no other metaphor the user is exposed to except in terms of text. There are no widgets, and every control and button always works the same way, and you most certainly can pipe other programs into and out of acme (it’s way more powerful than :!/% on vi)
I usually refer to emacs and vim as a GUI, especially when talking about pull-down menus, tool-bar clickable buttons and the protected forms interfaces used in configuration or in some vimscripts, but ex (the editor that vi is built on) is absolutely a TUI, and the level of composition that is possible in emacs and vi is high enough that they can be used reasonably as a TUI, so I don’t think it’s worth quibbling too much; but surely Microsoft Notepad.exe is a GUI, and so if composition isn’t as important as I think, maybe vim/neovim are GUIs as well.
Ok, I might have not gotten your point first time reading it. So, it’s more about what (text, both ‘plain’ and commands) you interact with, than how (acme via mouse interactions)?
Would it be fair to call Excel a TUI, by the same reasoning? I mean editing text in cells, writing formulas and such, not the graphing things. Maybe pre-ribbon versions of it.
I think it depends exactly on what you’re talking about: visicalc and (better) sc are both TUI spreadsheets, and I would agree that “editing text in cells, writing formulas and such” is a big part of it, so maybe it’s possible to talk about the TUI-ness of Excel, or the TUI that’s inside Excel. Some ancient MS-DOS version of Excel might’ve been a TUI.
But every piece of marketing collateral I’ve ever seen about Excel shows, front and center, graphing and a rich set of programmable interface controls and widgets, so I think it would otherwise be very strange for someone to refer to it as a TUI.
It seems like the Conservancy lawyers say: website/internet opinions that the GPL is a license and not a contract are not lawyers’ or judges’ opinions, and are therefore not relevant to the lawsuit.
But then the Conservancy lawyers also argue that the community (the same that believes GPL is a license, not a contract), can sue Vizio for violation of contract.
Either way, “license, not contract” now indisputably stands between Conservancy and a potential legal ruling that anyone in the United States who wants GPL source code can sue for it, contributor or not. The old mantra’s gotta go, even as it still rings in our ears and echoes down deep caverns of the Web.
The two statements aren’t in-conflict, but require context to properly understand. I’m not so certain this article explains it very well.
This court can hear about contracts. Vizio says “a bunch of non-lawyers” think this is a “license not a contract” so this court cannot hear this case; Conservancy agrees with the first part, but says “those same bunch of non-lawyers” think this court can hear this case (which is what the second statement is).
The main thing is that “the contract” they are talking about isn’t clearly written down, and the reason it isn’t clear is because “a bunch of non-lawyers” did the writing, but like any verbal contract, all a contract really requires is agreement.
So Conservancy argues: If the terms are understood, and Vizio acts in accordance with those terms, then Vizio automatically enters into contract with everyone who could be harmed by Vizio failing to implement those terms. This is a contract, and so therefore, “this court” can hear the case and implement a cure.
Vizio wants this to be a “license”, because if it’s a license, we just hear Conservancy’s argument, but it’s moot, because this court can’t hear it, and we won’t need to hear Vizio explain what they think their rights are.
The Judge (in California) is elected, so they need to see press like this to understand that if they make a mistake, the ramifications will be huge. They probably would be: If they refuse to hear the case, and Conservancy can’t get another court to hear it, then companies are probably going to start ignoring other files called “license” and this Judge will get blamed.
They probably would be: If they refuse to hear the case, and Conservancy can’t get another court to hear it, then companies are probably going to start ignoring other files called “license” and this Judge will get blamed.
If he gets blamed, he will also get credit from the concentrations of wealth that want him to rule that way.
Note that it is not that simple. Concentrations of wealth also know that killing the open source golden goose would be highly detrimental and costly to their own wealth-concentration machinery.
Sure. 75% to 90% of their code is open source from external coders. And that is the case across the industry. We actually have numbers about this. Killing the thing that attracts maintainers to share the code they write would … basically kill the software industry at this point.
Note that this is not only about copyleft. There are other things in licenses that you want to enforce. And in general, if you start using code under a licence without respecting its terms, the fact that it is copyleft does not change the fact that we cannot trust you with a non-copyleft one.
It’s not that simple. The argument for the GPL being a contract and everyone else being third-party beneficiaries doesn’t extend to permissive licenses, so I don’t think this ruling will have much bearing on that.
Regardless, are you then saying that maintainers’/developers’ willingness to share code depends on third parties being able to enforce the attribution requirements of permissive licenses?
I am saying that anything that will make maintainers feel like the open source/free “social contract” is abused far more than it already is will make a lot of them rethink their involvement, yes. We already have a pretty bad burnout problem in that domain, which is starting to really scare governments and corporations at the edge.
Wait why say “yes” if that’s not what I asked. Does that mean “yes, maintainers’/developers’ willingness to share code depends on third parties being able to enforce attribution requirements of permissive licenses?”
I mean yes, because otherwise they have to do it themselves, which no one has the money and time for. The US court system is based on third parties enforcing laws for you.
Surely 75-90% of code used by commercial entities is not copyleft licensed, right? I would be surprised if this was the case. My understanding is that companies strongly prefer permissively licensed OSS.
I understand the issue for the specific lawsuit mentioned. But I didn’t follow why there was a preexisting preference for GPL to be a license and not a contract.
Anyone know what the perceived benefit was for arguing that GPL is a license?
My third-hand and long-after-the-fact impression is that the idea was that while a contract restricts the user’s rights, a license only grants more rights. So by arguing that it was a license they were trying to argue “this is strictly beneficial (more liberal in user rights) compared to proprietary software”.
I believe the key thing in the USA is that a contract must provide something to both parties. This is why a bunch of contracts have the weird nominal dollar thing.
The GPL does not provide anything to the copyright owner. The recipient of the GPL’d work receives the right to use the work. Customers of the recipient receive a set of rights. The author receives nothing as a direct result of the GPL (they may subsequently get code back if someone downstream happens to share it, but the GPL does not require this).
It’s quite surprising to see the defendant arguing that this should be treated as copyright infringement because the statutory penalties are much higher in that case, especially with the precedent that the RIAA set that each copy distributed counts as a separate incident and triggers the punitive damages again.
It’s quite surprising to see the defendant arguing that this should be treated as copyright infringement because the statutory penalties are much higher in that case, especially with the precedent that the RIAA set that each copy distributed counts as a separate incident and triggers the punitive damages again.
My suspicion (don’t quote me on this) is that a copyright claim would have to go through federal court, which lacks California’s rule allowing the SFC to sue as a third-party beneficiary.
Anyone know what the perceived benefit was for arguing that GPL is a license?
It has to do with standing: This is a contract-court.
Vizio wants to argue it is a license (and so it deals with e.g. copyright infringement) in the legal sense of the word so that this court cannot hear the case.
As mentioned, under license law the SFC has no standing, as they are a third party, a concept that is not accepted in license court since the SFC does not represent the copyright owner.
Conservancy can represent several of the relevant copyright holders, but they cannot use a copyright case to ask for source code, nor would that get them a precedent that all other USA software users also have standing to sue.
Vizio argued that the lawsuit is really a copyright infringement lawsuit, and therefore belongs in federal court, not state court. Painting Conservancy’s legal claim as really about copyright could also help them avoid the whole issue of third-party beneficiaries, a contracts-law concept. So naturally, Vizio’s lawyers went online and dug up a bunch of places where Free Software people, including FSF and Conservancy people, wrote that the GPLs are licenses, not contracts, and that only copyright holders can enforce them
There are actually multiple relevant paragraphs, but this seems the most explicit about the issue
Painting Conservancy’s legal claim as really about copyright could also help them avoid the whole issue of third-party beneficiaries, a contracts-law concept.
which requires a leap of inference to get to “SFC has no standing under license law.” It seems plausible, but it was certainly not mentioned in the article. Are we supposed to know that the contracts-law “third-party beneficiary” concept is the only legal device that could give the SFC standing to sue?
Would it really be that hard for the SFC to find a Linux/bash/glibc contributor to sign on to the suit, if that’s even necessary?
which requires a leap of inference to get to “SFC has no standing under license law.” It seems plausible, but it was certainly not mentioned in the article. Are we supposed to know that the contracts-law “third-party beneficiary” concept is the only legal device that could give the SFC standing to sue?
Again, the article says that, it goes into great detail on the matter. It does not have a single sentence saying “by making this a copyright license case, Vizio is forcing this into federal courts where there is no concept of a third party beneficiary, and so SFC would have no legal standing to bring the case and it would be dismissed.” It does have numerous paragraphs that together make this point. e.g. one paragraph details how contract vs license suits are preempted by federal law, one details how third-party standing differs between contracts and licenses in CA vs federal law, one explains how everything combines to completely remove the SFCs right to sue.
Would it really be that hard for the SFC to find a Linux/bash/glibc contributor to sign on to the suit, if that’s even necessary?
In principle anyone could have done that already. The whole point is SFC wants to do this unilaterally without working with actual authors because that’s more expensive. My not at all a lawyer take is the issue is this:
If SFC has no standing, all they can do is provide lawyers. But lawsuits take time and money for the non-lawyer person as well: you have to attend depositions, for which you may have to travel, and that can take tens of hours. So you not only need someone whose copyright was violated, but they have to have the time and money to be able to handle the case workload. These cases are generally about forcing source code to be released, not extracting monetary damages, so the end result is you are fundamentally out of pocket (you can get compensated for, say, hotel time, but not for vacation time, etc.); e.g. the end result may still be the contributor being out of pocket.
The other side is that developers working for companies that contribute to open source (e.g. the ones that might be able to afford the time) are likely not the copyright owners - e.g. for the last 15 or 20 years all my open source code belongs to large corporations, so for that code I would not have standing either. So you actually need to convince the company to be part of the suit, not the individual developers there.
Again, the article says that, it goes into great detail on the matter.
I now realize you are not the original person I asked about this, so why are you saying “again”? Did you already say the article says SFC has no standing under license law? I can address the rest of your comment but want to make sure I’m not having an aneurysm. What’s with the “again”?
Also of note: the image on the Canvas GFX website appears to be either squished horizontally or stretched vertically.
The Mac’s resolution was 512x342, but a 4:3 projection would imply 384 pixels vertically to be square on screen, so unless Canvas compensated for this (which seems unlikely) it would be off by about 11%.
I don’t actually have the ability to see the video and vectors to verify that myself, but maybe that’s worth checking for?
hostname+pid+counter is guaranteed unique in unix.
using an lfsr (rand) to generate “unique” ids will fail eventually. i would just compose the name out of those three parts instead of trying to compress it into a single numeric token.
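Something like this, roughly, is the “compose the name out of those three parts” idea (a sketch; not thread-safe because of the static buffer and counter):

#include <stdio.h>
#include <unistd.h>

const char *unique_name(void)
{
    static char buf[320];
    static unsigned long counter;
    char host[256] = "localhost";

    gethostname(host, sizeof host - 1);
    snprintf(buf, sizeof buf, "%s.%ld.%lu", host, (long)getpid(), counter++);
    return buf;
}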
In fact, when I argued that users should be allowed to vote in bugzilla, I got permanently banned with no explanation.
If you really got no explanation, here it is: You went, and told someone else you wanted voting rights over their life. In this case: what they program on, but this isn’t ok! Not ever! It’s wrong to bully people, and you should be ashamed of doing wrong things.
Please don’t do this: animations that never happen still waste electricity, which harms the environment and reduces battery life.
You can use a CSS animation on a desired property and hook the animation events so your code only runs when something happens instead of every single frame.
Of course I mean something else. Where did I speak of “performance”?
All that statement means is that JavaScript can query a property on every node in a set of a dozen or so sixty times a second on multi-gigahertz machines.
This paper reads like sour grapes. I think fork() is great, but maybe some other metaphors would be great as well.
Once upon a time, I was working on a DNS server, and after finding it a bit slow, did a:
fork(),fork();
after binding the sockets, and immediately got it running multicore. Yes, I could have rewritten things to allow main() to accept the socket to share, or to have a server process pass out threads, and then I could have used spawn, but I have a hard time being mad at those 14 bytes.
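For readers who haven’t seen the trick, this is roughly the shape being described (a sketch, not the actual server; the port number and packet handling are made up):

#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    unsigned char pkt[512];
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5353) };
    struct sockaddr_in peer;
    socklen_t plen;

    bind(s, (struct sockaddr *)&addr, sizeof addr);

    fork(), fork();   /* one process becomes four, all sharing the bound socket */

    for (;;) {
        plen = sizeof peer;
        ssize_t n = recvfrom(s, pkt, sizeof pkt, 0, (struct sockaddr *)&peer, &plen);
        if (n < 0)
            continue;
        /* ... parse the query and sendto(s, reply, ..., (struct sockaddr *)&peer, plen) ... */
    }
}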
Yes this is a great paper. It’s a perspective from kernel implementers, and the summary is that fork() is only good for shells these days, not other apps like servers :)
It reminds me that I bookmarked a comment from the creator of Ninja about using posix_spawn() on OS X since 2016 for performance:
I think basically Ninja can do it because it only spawns simple processes like the compiler.
It doesn’t do anything between fork() and exec(), which is how a shell sets up pipelines, redirects, and does job control (just implemented by Melvin in Oil, with setpgid() etc. )
Also, anything like OCI / Docker / systemd-nspawn does a bunch of stuff between fork() and exec()
It would be cool if someone writes a demo of that functionality without fork(), makes a blog post, etc.
As a way to help alternative kernel implementers, I’d be open to making https://www.oilshell.org optionally use some different APIs, but I’m not sure of the details right now
It doesn’t do anything between fork() and exec(), which is how a shell sets up pipelines, redirects, and does job control (just implemented by Melvin in Oil, with setpgid() etc. )
Most of those things can be done with posix_spawn_file_actions_t.
posix_spawn_file_actions_t a, b;
int p[6]; pid_t c[2];
pipe(p), pipe(p+2), pipe(p+4);
posix_spawn_file_actions_init(&a);
posix_spawn_file_actions_init(&b);
posix_spawn_file_actions_adddup2(&a, p[0], 0);   /* a's stdin reads from the first pipe */
posix_spawn_file_actions_adddup2(&a, p[3], 1);   /* a's stdout writes into the second pipe */
posix_spawn_file_actions_adddup2(&b, p[2], 0);   /* b's stdin reads from the second pipe */
posix_spawn_file_actions_adddup2(&b, p[5], 1);   /* b's stdout writes into the third pipe */
posix_spawn(c,   "a", &a, NULL, ...);
posix_spawn(c+1, "b", &b, NULL, ...);
// write to p[1] to feed the head of the pipeline (a); read the tail of the pipeline (b) from p[4]
// waitpid on *c (a) and c[1] (b) for the status codes.
You can get setpgid with POSIX_SPAWN_SETPGROUP, and the signal mask with posix_spawnattr_setsigmask.
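For completeness, a fragment of the attribute side (a sketch; you’d pass &attr as posix_spawn’s fourth argument):

#include <signal.h>
#include <spawn.h>

posix_spawnattr_t attr;
sigset_t mask;

posix_spawnattr_init(&attr);
posix_spawnattr_setpgroup(&attr, 0);          /* child becomes its own process group */
sigemptyset(&mask);
posix_spawnattr_setsigmask(&attr, &mask);     /* child starts with an empty signal mask */
posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETPGROUP | POSIX_SPAWN_SETSIGMASK);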
I guess this kind of thing is what I’m looking for
4/ Linux does posix_spawn in user space, on top of fork - but sometimes vfork (an even faster, more dangerous fork). The vfork usage is what can make posix_spawn faster on Linux.
The “why” was largely political, IMO: The spawn() family of functions were introduced into POSIX so that Windows NT could get a POSIX certification and win government contracts, since it had spawn() and not fork().
Tanenbaum’s Modern Operating Systems was written around this time, and you might find its discussion of process-spawning APIs interesting: He doesn’t mention performance, and indeed Linux’s fork+exec was faster than NT’s CreateProcess, so I find it incredibly unlikely NT’s omission of fork() was for performance; more likely it was to simplify other parts of the NT design.
I guess this kind of thing is what I’m looking for
The suggestion to run a subprocess that calls tcsetpgrp before exec isn’t a bad one, and while it may negate some of the performance benefits you get from posix_spawn, it might not be so bad because that subprocess can be a really simple tiny static binary that does what it needs to and calls exec(). One day maybe we won’t have to worry about this.
Another option is to just wait+WIFSTOPPED and then kill+SIGCONT it if it’s supposed to be in the foreground.
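Roughly this (a sketch, assuming pid is the spawned child and it was made its own process group leader):

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int status;
waitpid(pid, &status, WUNTRACED);
if (WIFSTOPPED(status)) {
    tcsetpgrp(STDIN_FILENO, pid);   /* hand the tty to the child's process group */
    kill(pid, SIGCONT);             /* then wake it back up */
}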
Very strange claim. AFAIK it was possible to implement fork on top of Windows NT native API right from the beginning. (I can try to find links.) And early Windows POSIX subsystems (including Interix) actually implemented fork. (This was long before WSL happened.) And Interix actually directly implemented fork on top of Windows NT native API, as opposed to very hacky Cygwin’s fork implementation.
Also IIRC the very first Windows POSIX subsystem happened before posix_spawn was added to POSIX. (Windows had a lot of different official POSIX subsystems authored by Microsoft, WSL is the last one.)
AFAIK it was possible to implement fork on top of Windows NT native API right from the beginning.
I think you’re thinking of ZwCreateSection, but I don’t think this was a win32 API call (or even well-documented), and it takes a careful reading to see how fork could be implemented with it, so I don’t think this is the same as having fork() – after all, there’s got to be lots of ways to get a fork-like API out of other things, including:
as opposed to very hacky Cygwin’s fork implementation.
I remember they claimed they had reasons for not using ZwCreateSection, but I don’t know enough to say what problems they ran into.
He doesn’t mention performance, and indeed Linux’s fork+exec was faster than NT’s CreateProcess, so I find it incredibly unlikely NT’s omission of fork() was for performance; more likely it was to simplify other parts of the NT design.
It’s not quite that clear cut. Modern versions of NT have a thing called a picoprocess, which was originally added for Drawbridge but is now used for WSL. These are basically processes that start (almost) empty. Creating a new process with CreateProcess[Ex] creates a new process very quickly, but then maps a huge amount of stuff into it. This is the equivalent of execve + ld-linux.so running and that’s what takes almost all of the time.
Even on Linux, vfork instead of fork is faster (especially on pre-Milan x86, where fork needs to IPI other cores for TLB synchronisation).
XNU has a lot of extensions to POSIX spawn that make it almost usable. Unfortunately, they’re not implemented anywhere else. The biggest problem with the API is that they constrained it to permit userspace implementations. As such, it is strictly less expressive than vfork + execve. That said, vfork isn’t actually that bad an API. It would be even better if the execve simply returned and vfork didn’t return twice. Then the sequence would be simply vfork, setup, execve, cleanup.
With a language like C++ that supports RAII, you can avoid the footguns of vfork by doing
pid_t pid = vfork();
if (pid == 0)
{
    {
        // Set up the child
    }
    execve(…);
    pid = -1;
}
This ensures that anything that you created in between the setup is cleaned up. I generally use a std::vector for the execve arguments. This must be declared in the enclosing scope, so it’s cleaned up in the parent. It’s pretty easy to wrap this in a function that takes a lambda and passes it a reference to the argv and envp vectors, and executes it before the execve. This ensures that you get the memory management right. As a caller, you just pass a lambda that does any file descriptor opening and so on. The wrapper that I use also takes a vector of file descriptors to inherit, so you can open some files before spawning the child, then do arbitrary additional setup in the child context (including things like entering capability mode or attaching to a jail).
I don’t understand how any of those use cases break. In between vfork and execve, you are running in the child’s context, just as you are now. You can drop privileges, open / close / dup file descriptors, and so on. The only difference is that you wouldn’t have the behaviour where execve effectively longjmps back to the vfork, you’d just return to running in the parent’s context after you started the child.
At the point of vfork, the kernel creates two processes using the same page tables. One is suspended (parent), and the other isn’t (child).
The child continues running until execve. At that point the child process image is replaced with the executable loaded by exec. The parent process is then resumed but with the register/stack state of the child.
Exactly. The vfork call just switches out the kernel data structures associated with the running thread but leaves everything else in place, the execve would switch back and doesn’t do any of the saving and restoring of register state that makes vfork a bit exciting. The only major change would be that execve would return to the parent process’ kernel state even in case of failure.
Actually that tweet thread by the author of fish is very good.
This is exactly what Melvin just added, so it looks like we can’t use it? On what platforms?
8/ What’s missing? One hole is the syscall you’ve never heard of: tcsetpgrp(), which hands-off tty ownership. The “correct” usage with fork is a benign race, where both the parent (tty donator) and child (tty inheritor) request that the child own the tty.
9/ There is no “correct” tcsetpgrp usage with posix_spawn: no way to coax the child into claiming the tty. This means, when job control is on, you may start a process which immediately stops (SIGTTIN) or otherwise. Here’s ksh getting busted: https://mail-archive.com/ast-developers
The way I’m reading this is that ksh has a bug due to using posix_spawn() and lack of tcsetpgrp(). And they are suggesting that the program being CALLED by the shell can apply a workaround, not the shell itself!
This seems very undesirable.
I think we could use posix_spawn() when job control is off, but not when it’s on.
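For reference, the benign race described in the tweet looks roughly like this with fork() (a hedged sketch with made-up names; it assumes the calling shell already ignores SIGTTOU, as interactive shells normally do, so the child’s tcsetpgrp() doesn’t stop it):
#include <signal.h>
#include <unistd.h>

void start_foreground_job(int tty_fd, char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0)
    {
        pid_t self = getpid();
        setpgid(self, self);        // child: move itself into a new process group
        tcsetpgrp(tty_fd, self);    // child: claim the tty for that group
        signal(SIGTTOU, SIG_DFL);   // then restore default job-control signals
        signal(SIGTTIN, SIG_DFL);
        execvp(argv[0], argv);
        _exit(127);
    }
    // Parent: the same two calls, racing benignly with the child; whichever
    // side runs first, the child's group ends up owning the tty.
    setpgid(pid, pid);
    tcsetpgrp(tty_fd, pid);
}
With posix_spawn there is no point at which the child can run those two lines for itself, which is the hole being described.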
And I actually wonder if this is the best way to port to Windows? Not sure
Linux does posix_spawn in user space, on top of fork - but sometimes vfork
Glibc and musl add more and more optimizations over time, allowing posix_spawn to use vfork (as opposed to fork) in more and more cases. It is quite possible that recent versions of glibc and musl call vfork in all cases.
I would like to do some container-like stuff without Docker, e.g. I’ve looked at bubblewrap, crun, runc, etc. a little bit
We have a bunch of containers and I want to gradually migrate away from them
I also wonder if, on Linux at least, a shell should distribute a bubblewrap-like tool! Although I think security takes a long time to get right on Linux, so we probably don’t want that responsibility.
It is public domain. I’m sorry for Russian comments.
Compile with gcc -o asjail asjail.c. My program is x86 Linux specific. Run like so: asjail -p bash.
The program has options -imnpuU, which correspond to unshare(1) options (see its manpage). (Also, the actual unshare has an -r option; I have no such cool option.)
My program usually requires root privileges, but you can specify the -U flag, which creates a user namespace. So you can run asjail -pU bash as a normal user; this will create a new user namespace and then create a PID namespace inside it. (Again: unshare -pr bash is even better.)
But user namespaces have to be enabled in the kernel. In some distros they are enabled by default, in others not.
I wrote this util nearly 10 years ago. Back then I wanted some lightweight container solution and was not aware of the unshare(1) util, which fully subsumes mine. (Don’t confuse it with the unshare(2) syscall.) Also, unshare(1) is a low-level util; it’s lower-level than bubblewrap, runc, etc.
I don’t remember some details of this code; for example, I don’t remember why I needed the signal mask manipulations.
Today I use docker and I’m happy with it. Not only does docker provide isolation, it also allows you to write Dockerfiles. Partial results in Dockerfiles are cached, so you can edit some line in a Dockerfile and docker will rebuild exactly what is needed and no more. Dockerfiles are perfect for bug reports, i.e. you can simply send a Dockerfile instead of a “steps to reproduce” section. The only problem with docker is the inability to run systemd inside it. I have read that this is solved by podman, but I didn’t test it. Also, Dockerfiles are not quite reproducible, because they often rely on downloading something from the internet. I have read that the proper solution is Nix, but I didn’t test it.
You need to add -- to make sure options are processed as part of the target command, not asjail itself, i.e. asjail -p -- bash -c 'echo ok'.
asjail was written by careful reading of the clone(2) manual page.
asjail is not a complete solution. It doesn’t do chrooting or mounting of needed directories (/proc, /dev, etc.). So back then, 10 years ago, I wrote a bash script called asjail-max-1, which does these additional steps. asjail-max-1 was written by careful reading of http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface and so asjail-max-1 (together with asjail) can run systemd in a container! asjail-max-1 is a screen-sized bash script.
But, unfortunately, to make all this work, you also need a C program which emulates a terminal using forkpty(3). So I wrote such a program and called it pty. It is a few screens of C code. Together, asjail, asjail-max-1 and pty give you a complete small solution for running containers, in something like 5 screens of code.
I can post all this code.
But all this is not needed, because all this is subsumed by existing tools. asjail is subsumed by unshare. And asjail-max-1+asjail+pty is subsumed by systemd-nspawn.
Today I don’t use any of these tools of mine. When I need to run a container I use docker. If I need to run an existing file tree I use systemd-nspawn.
Also, all the tools I discussed so far are designed for maximal isolation, i.e. to prevent the container from accessing host resources. But sometimes you have the opposite task, i.e. you want to run some program and give it access to host resources, for example the X server connection, the sound card, etc. So I have another script, a simple wrapper around chroot, which does exactly this: https://paste.gg/p/anonymous/84c3685e200347299cac0dbd23d31bf3
I want to write my own shell some day. And I don’t want to use fork there; I will use posix_spawn instead. But this will be tricky. How do you launch a subshell? Using /proc/self/exe? What if /proc is not mounted? I will possibly try to send a patch to the Linux kernel to make /proc/self/exe available even if /proc is not mounted. Busybox docs contain (or contained in the past) such a patch, so theoretically I can simply copy it from there. I can try to find a link.
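For what it’s worth, the /proc/self/exe idea would look something like this (a hypothetical sketch, Linux-specific, and it only works while /proc is actually mounted, which is exactly the caveat above):
#include <spawn.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

int run_subshell(char *const argv[])
{
    pid_t pid;
    // Re-spawn our own binary as the subshell, without fork().
    int err = posix_spawn(&pid, "/proc/self/exe", NULL, NULL, argv, environ);
    if (err != 0)
        return -1;
    int status;
    waitpid(pid, &status, 0);
    return status;
}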
Also, anything like OCI / Docker / systemd-nspawn does a bunch of stuff between fork() and exec()
Surprisingly, we can combine posix_spawn speed with the needs of docker and systemd-nspawn. Let me tell you how.
First of all, let’s notice that mere fork is not enough for systemd-nspawn. You also need to put the child in new mount/UTS/etc. namespaces. For this you need clone or unshare(2).
Fortunately, clone has the CLONE_VFORK flag, which allows us to get vfork-like behavior, i.e. our program will be faster than with fork.
So, to summarize, we have two options for combining posix_spawn speed with the features systemd-nspawn needs (either one will be enough):
Use clone with all namespacing flags we need (such as CLONE_NEWNS) and CLONE_VFORK
Create new process using usual posix_spawn and then call unshare to put the process into new namespace
I didn’t test any of this, so it’s possible something will go wrong; a rough sketch of the first option follows.
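An untested sketch of the first option (hypothetical names; the namespace flags are only an example set, and this needs root or a user namespace):
// needs _GNU_SOURCE (e.g. -D_GNU_SOURCE) so <sched.h> exposes clone()
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static int child_main(void *arg)
{
    char **argv = (char **)arg;
    // By the time this runs, we are already inside the new namespaces.
    execvp(argv[0], argv);
    _exit(127);
}

pid_t spawn_in_namespaces(char **argv)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = (char *)malloc(stack_size);
    int flags = CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWUTS   // example namespaces
              | CLONE_VFORK                                 // parent sleeps only until the exec
              | SIGCHLD;                                    // so the parent can wait() on the child
    // clone() wants the top of the stack on architectures where it grows down.
    return clone(child_main, stack + stack_size, flags, argv);
}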
Also: I’m the author of my own analog of the unshare(1) util (don’t confuse it with the unshare(2) syscall). My util doesn’t do any speed-up tricks; I just use plain clone without CLONE_VFORK.
Also, I wrote a Rust program which spawns a program using posix_spawn, redirects its stdin, stdout and stderr using posix_spawn_file_actions_adddup2, collects its stdout and stderr, waits for it to finish and reports its status.
Would like to see some more technical exposition to understand why the DNS issue “can only happen in Kubernetes” and if it’s the fault of musl, or kubernetes, or the DNS nodes that for some reason require TCP. Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.
I also wonder where the author gets the confidence to say “if your application requires CGO_ENABLED=1, you will obviously run into issue with Alpine.”
Not sure if it’s fixed, but there were other gotchas in musl related to shared libraries last time I looked. Their dlclose implementation is a no-op, so destructors will not run when you think you’ve unloaded a library, which can cause subtly wrong behaviour including memory leaks and state corruption. I hacked a bit on musl for another project a couple of years ago and it felt like a perfect example of the 90:10 rule: they implement the easy 90% without really understanding why the remaining difficult 10% is there and why people need it.
Oh, and on x86 platforms they ship a spectacularly bad assembly memcpy that performs worse than a moderately competent C one on any size under about 300 bytes (around 90% of memcpy calls, typically).
Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.
Expecting getaddrinfo to work reliably isn’t a GNUism, it’s a POSIXism. Code that uses it to look up hosts that require DNS over TCP to resolve will work on GNU/Linux, Android, Darwin, *BSD, and Solaris.
They happen in Kubernetes if you use DNS for service discovery.
On the Internet, DNS uses UDP. RFC 1123 was really clear about that. It could use TCP, but Internet hosts typically didn’t because DNS responses that don’t fit in one packet require more than one packet, and that takes more time leading to a lower-quality experience, so people just turned it off. How much time depends mostly on the speed of light and the distance the packets need to travel, so we can use a random domain name to measure circuit length:
$ time host foiioj.google.com
Host foiioj.google.com not found: 3(NXDOMAIN)
real 0m0.103s
user 0m0.014s
sys 0m0.015s
Once being “off” was ubiquitous, DNS client implementations started showing up that didn’t bother with the TCP code that they would never use, and musl is one of these.
Kubernetes (ab)uses the DNS protocol for service discovery in most reference implementations, but the distance between nodes is typically much less than 1000 miles or so, so you aren’t going to notice the time-delay so much between one packet and five. As a result, when something goes wrong, people blame the wrong thing: whatever isn’t in most of those reference implementations (in this case, musl).
I use /etc/hosts for service discovery (and a shell script that builds it for all the containers from the output of kubectl get …) which is faster still, and reduces the number of partitions which can make tracking down some intermittent problems easier.
Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.
This is a good point: If your application calls gethostbyname or something, what’s it going to do with more than 512 bytes of output? The most common reason seems to be people who use DNS to get everything implementing a service or sharing a label. Some of those are just displaying the list (on say a service dashboard), and for them, why not just ask the Kubernetes REST API? Who knows.
But others are doing this because they don’t know any better: If you get five responses and are only going to connect() to one you’ve made a design mistake and you might not notice unless you use Alpine!
depends mostly on the speed of light and the distance the packets need to travel
This reminded me of this absolute gem and must-read story from ancient computer history, of how people couldn’t send emails to people more than 520 miles away.
https://web.mit.edu/jemorris/humor/500-miles
You can use TCP_INFO to extract the RTT between the parts of the TCP handshake and use it to make firewall rules that block connections from too far away.
This works well for me since I generally know (geographically) where I am and where I will be, but people attacking my systems are going to be anywhere, probably on a VPN which hides their location (and makes their RTT longer)
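The TCP_INFO part might look roughly like this on Linux (a sketch; tcpi_rtt is in microseconds, and the 20 ms cutoff is just an example value, not a recommendation):
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

bool too_far_away(int conn_fd)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);
    if (getsockopt(conn_fd, IPPROTO_TCP, TCP_INFO, &info, &len) != 0)
        return false;                  // can't tell, so let the connection through
    return info.tcpi_rtt > 20 * 1000;  // more than 20 ms round trip: "too far away"
}
A firewall rule (or simply a close()) can then be driven from the accepted socket before any application data is exchanged.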
I wonder why there is an eol AND an eol2, and why both of them are undef. The stty man page says … where the * before eol2 indicates non POSIX settings; but it does not elaborate any more :(
The reason you don’t need them is because Linux always treats \n as newline in line processing, and with icrnl enabled (by default), if your keyboard emits \r you will get \n before line processing.
Once upon a time, ASCII keyboards could often send a different character for the return keys on the number pad versus the “main” keyboard, or have a special “submit” or “send” key (instead of sending an escape sequence). Unix’s kernel-side line editor didn’t have much use for those keys, so users might configure stty to treat them as newlines rather than reconfigure their terminal (which they might use to connect to non-unix systems or to run interactive applications).
Ah makes sense, it is for backwards compatibility for very old nonstandard keyboards. Thanks :D
Ah, the digraphs you sometimes see mentioned in older C books (never encountered them in actual code, ’though I do remember a good deal of K&R style code bases).
It’s interesting how early languages interacted with the available key codes. The most obvious being systems where you only had upper case. The Algol family with its begin and end didn’t have the brace issue, but if I remember correctly, some early examples used the left-pointing arrow (←) for assignment instead of “:=” and couldn’t use underscore because that’s how early ASCII worked (which means that viewed only a few years later, some people probably wondered why an underscore is used for assignment).
(Not that begin/end was that easy with Algol’s “stropping”, but that’s an entirely different tale)
Not that we don’t still have this issue today. We could use all kinds of more semantically meaningful Unicode characters, but a lack of an easy way to enter them prevents us from doing it. And not just in programming languages, the lack of easily accessible dashes and proper apostrophes and quotation marks also leads to less ideal substitutes being way too common (I’ve yet to see a proper DIN 2137 E1 keyboard here in Germany).
It’s not keycodes or codepoints but keys and keyboards.
Early typewriters absolutely had a _: just take a close look at the remington 2 if you don’t believe me - you would use this with overstrike to underline some text on a page.
Early teletypes, on the other hand, didn’t make as much use of the overstrike feature, so they often replaced the underscore with a left arrow, but this wasn’t universal: plenty of typewriters of this era had underscores.
But neither teletypes nor typewriters had so many brackets and braces, and whilst modern keyboards in the US and UK often have them, here in Portugal we definitely need keys for º and ª and ç and lots of other é ñ and so on, so my macbook pro’s keyboard doesn’t have [ or ] keys. I think the mac german layout is similar but maybe you don’t have a ç key. In any event I’m lucky I learned C on a teletype because I still don’t have those keys!
You wouldn’t know it looking at my code because I’ve got this in my vimrc:
imap <% {
imap %> }
imap <: [
imap :> ]
APL programmers use a lot of symbols too, including ← for assignment, but I have to press so many keys to get it on the mac with the default layout that I just don’t bother on my laptop.
It absolutely was character sets. See page 21 of the rationale. A lot of the motivation was due to the way ISO 646 international ASCII used the higher punctuation character codes for extra letters, so {} could not be represented at all.
Incorrect: That document you referred isn’t “the rationale” for digraphs (or possibly anything), and that’s not the only mistake in it.
The digraphs were introduced in ISO/IEC 9899-1990-AMD1 (1995) and it makes it very clear it’s about internationalisation.
From the very first page, I quote:
I don’t see where the contradiction is?
Internationalization -> national variants of ISO646 without {} -> trigraphs, but trigraphs are super ugly -> digraphs
The issue of keyboards is tied up with internationalization but it wasn’t the prime reason for trigraphs or digraphs.
Is there any specific reason you prefer to use the digraphs instead of option key to access the brackets?
It’s usually faster and easier to press two keys on opposite sides of the keyboard than to chord two keys (or three) on the same side of the keyboard; The option key is also in an awkward place requiring more hand travel.
The most influential language that used ← for assignment was SmallTalk, which was weird because as a 1970s language it came after ASCII replaced ← with _. Dunno why Xerox PARC used out-of-date ASCII.
Anyway, SmallTalk was a wordy language with a preference for longer identifiers than had been typical in older languages. Because they lacked _ they used camelCase to separate words.
Object-oriented programmers carried the SmallTalk camelCase identifier style to later languages that had _ and could have supported less typographically ugly styles.
I think at that time ASCII wasn’t as standardized as one would assume. Stanford had their own version, and I think this was also used by the Algol-60 variant Knuth used to write the first TeX version.
Every Smalltalk I’ve used has used := for assignment and ^ for return, but rendered them as ← and ꜛ. I didn’t know that they were characters you could type on the Alto!
My old tweet with this fun fact was surprisingly popular :-) It was prompted by Allen Wirfs-Brock saying, “Very few realize how much the software development world has been influenced over the last 20 years by the infiltration of former Smalltalk programmers into key communities.”
It is hard to recommend [anything but AWS]
sad but true for large cloud providers
more niche cloud providers like cloudflare and fly.io and the like are making inroads, though
AWS is reliable but too coarse compared to the Google Cloud Platform. The engineering productivity cost is much higher with AWS than with GCP.
With the understanding that I might just have been lucky, and / or working for a company seen as a more valuable customer … in a previous role I was a director of engineering for a (by Australian standards) large tech company. We had two teams run into difficulties, one with AWS, and another with GCP.
One team reached out to Amazon, and our AWS TAM wound up giving us some great advice around how we’d be better off using different AWS tech, and put us in touch with another AWS person who provided some deeper technical guidance. We wound up saving a tonne of money on our AWS bill with the new implementation, and delighting our internal customers.
The other team reached out to Google, and Google wouldn’t let a human speak to us until we could prove our advertising spend, and as it wasn’t high enough, we never got a straight answer to our question.
Working with Amazon feels like working with a company that actually values its customers; working with Google feels like working with a company that would prefer to deal with its customers solely through APIs and black-box policies.
Indeed. If you think you will need a human to get guidance, then GCP is an inferior option by a huge margin.
Where in my experience, guidance can include “why is your product behaving in this manner?”.
What does “too coarse” mean?
I’m not sure I believe you that the “engineering productivity cost” is higher with AWS: How exactly did you measure that?
I have a Docker image (Python/Go/static HTML) that I want to deploy as a web server or a web service.
GCP is far superior to AWS on this measure.
I’m still confused. What do you have to do in AWS that you don’t have to do in GCP or is significantly easier? I have very little experience with GCP and a lot with AWS, so interested to learn more about how GCP compares.
I have a good enough experience with both platforms so this is something you have to try and you will see the difference.
GCP primitives are different and much better than AWS.
“Trust me bro” is a difficult sell; but I am here to back up the sentiment. There are simply too many little things that by themselves are not deal breakers at all - but make the experience much more frustrating.
GCP feels like if engineers made a cloud platform for themselves with nearly infinite time and budget (and some idiots tried to bolt crud on the side, like oracle, vmware and various AI stuff).
AWS feels like if engineers were forced to write adjacently workable services without talking to each other and on an obscenely tight deadline - by people who didnt really know what they wanted, and then accidentally made something everyone started to use.
I’m going to write my own blog post about this; I used to have a list of all my minor frustrations, so I could flesh it out.
Thanks, I would love to link it.
I’m still working on it, there’s some missing points and I want to bring up a major point about cloud being good for prototyping, but here: https://blog.dijit.sh/gcp-the-only-good-cloud/
Thanks for sharing.
If it helps I have the same professional experience with AWS and GCP. GCP is easier to work with but an unreliable foundation.
Correct me if I’m wrong but you can do all of the above with Elastic Beanstalk yes? Maybe ECS as well?
The trickiest part would be using AWS Secrets Manager to store/fetch the keys, which has a friendly enough UI through their web, or CLI.
You can definitely do all of this with EKS easily, but that requires k8s knowledge which is a whole other bag of worms.
The thing you are describing is something I should only have needed to learn once for each cloud vendor and put in a script: the developer time should amortise to basically zero for either platform as long as the cloud vendor doesn’t change anything.
GCP however, likes to change things, so…
Yes and no. Some constructs you don’t even have to worry about on GCP, so, it is best for fast deployments. However, if you spend 500+ hours a year doing cloud work, then your point stands.
500 hours a year is only about five hours a week.
My application has run for over ten years. It paid for my house.
You’re damn right my point stands.
Indeed, if you are doing 5 hours a week of cloud infra work, your application is likely a full-time job or equivalent. I do believe you made the right choice with AWS.
Five hours a week is a full-time job?
You’ve got some strange measures friend…
read the post you replied to again, it does not say that.
No. If you are spending 5 hours a week on infrastructure, then your application would be worth spending 40-hours on. Or is infra the only component of your application?
Can you do an actual comparison of the work? I’d be curious. Setting up a web service from a Docker image is a few minutes. Env vars are no extra work. Secrets are trivial. A cert would take maybe 10 minutes?
Altogether, someone new should be able to do this in maybe 30 minutes to an hour. Someone who’s done it before could likely get it done in 10 minutes or less, some of that being downtime while you wait for provisioning.
I have done it for myself and I’m speaking from experience.
You’ve posted a lot in this thread to say, roughly ‘they’re different, GCP is better, but I can’t tell you why, trust me’. This doesn’t add much to the discussion. It would help readers (especially people like me, who haven’t done this on either platform) if you could slow down a bit and explain the steps in AWS and the steps in GCP and why there’s more friction with the former.
No, please don’t trust me. Here’s an exercise for yourself.
Assuming you control a domain domain1.com: deploy a dockerized web service (pull one from the Internet if you don’t know how to write one). Deploy it on gcp1.domain1.com, deploy another on aws1.domain1.com. Compare how many steps it takes and how long it takes.
Here’s how I deploy it on GCP. I never open-sourced my AWS setup but I am happy to see a faster one.
As I said, I have not used AWS or GCP so I have no idea how many steps it requires on either platform. I can’t judge whether your command is best practice for GCP and have no idea what the equivalent is on AWS (I did once do this on Azure and the final deploy step looked similar, from what I recall, but there was some setup to create the Azure Container thingy instance, but I can’t tell from your example if this is not needed on GCP or if you’ve simply done it already). If I tried to do it on AWS, I’d have no idea if I were doing the most efficient thing or some stupid thing that most novices would manage to avoid.
You have, apparently, done it on both. You are in a position to tell me what steps are needed on AWS but not on GCP. Or what abstractions are missing on AWS but are better on GCP. Someone from AWS might even read your message and improve AWS. But at the moment I just see that a thing on GCP is one visible step plus at least zero setup steps, whereas on AWS it is at least one step.
You’ve posted ten times so far in this story to say that GCP is better, but not articulated how or why it’s better. As someone with no experience with either, nothing you have said gives me any information to make that comparison. At least one of the following is true:
It sounds as if you believe the first is true and the second two are not but (again) as someone reading your posts who understands the problem but is not familiar with either of the alternatives in any useful level of detail, I cannot judge for myself from your posts.
If you wrote ‘GCP has this set of flows / abstractions that have no equivalent on AWS’ then someone familiar with AWS could say ‘actually, it has this, which is as good’ or ‘you are correct, this is missing and it’s annoying’. But when you write:
That doesn’t help anyone reading the thread understand what is better or why.
Given my experience with AWS, my vote is #2.
I find AWS to be better organized and documented but much larger than any Google product I’ve seen. There is more to learn because it’s a large, modular system. And it’s trivial to do easy things the hard way.
I don’t have much direct experience with Google’s hosting but if any of their service products are similar, the only advantage is they do less, which means you can skimp on organization and documentation without too many problems.
IMO, it’s hard to recommend AWS as well.
So what do you recommend then? 🙂
In terms of big players I would recommend GCP still, but only because I mostly work with Kubernetes and it’s best there. Among the smaller players, Fly.io actually works well for me.
Why is it “best” there? I use EKS on aws and have had no issues with it…?
in Kubernetes territory, Google is the elder. Also cleaner and more intuitive UI.
not shocking. aren’t they the original contributors?
I don’t touch EKS’s ui much/ever so I honestly don’t really care about that. Usually use aws via terraform/pulumi.
Why are you excited? I have no idea, because upon loading the site, literally all of my screen is filled with advertising, a disgusting AI slop image, and an admittedly well-intentioned 1/3 height popup about kindness.
There is literally no content visible. It’s just awful slop, advertising, and pop-ups as far as the eye can see.
Hell, you didn’t even have to use AI slop for the image of the elephant … I did a quick search and found loads of Creative Commons licensed photos of real elephants.
https://postimg.cc/8sm6hvMV
I agree, it’s not ideal how modern websites are just piles of AI elephant dung. Anyway, why were they interested?
Because of this:
With a fix that made it into 17, an example query (ab)using multiple query parameters went from over 8 seconds to 0.11 seconds.
That’s literally it.
I mean, that’s exciting as anything if you happen to run a site that uses PostgreSQL in that way :)
Can an AI summarizer summarize this as well as you have? I wonder: what happens if we run the article through an AI summarizer?
Here’s what ellama-summarize-webpage gives me (I’m running Emacs integrated with an LLM running locally on my desktop):
I kind of agree. Maybe dev.to should be a banned domain here.
I’m okay with that idea :)
On phone it takes up all of the screen. And clicking “okay” took me to the signup page. 😅
Hello. I am the author. I am sorry. I thought dev.to was a good platform for posting. Since then, I made a copy as a blog post without any advertising. https://benoittgt.github.io/blog/postgres_17_rails/
Sorry if my original reply seemed harsh - my issue was with dev.to, not at all with you or your article :) I’ve re-submitted your blog post to lobste.rs on the new URL :)
Thanks !
tl;dr? multiple IN expressions in a query now use indexes; you don’t have to write (a=? OR a=? OR a=?) you can just write a IN (?,?,?)
I haven’t used postgres seriously in a decade or more, so I’m unlikely to go change the one app I made once upon a time that still uses postgres to take advantage of this optimisation, but I clicked the link hoping to learn something anyway…
That seems to be it, and seems to be little more than a syntactic convenience when compared like this, but it could be significant if the new optimisation covers dynamic cases as well, like an ANY expression with a data-dependent array. Though even if it doesn’t work on dynamic cases, you could still say that a pitfall got ironed out.
That said, even though this post only mentions this IN optimisation, Postgres 17 does indeed seem to come with quite a lot of other optimiser improvements, seemingly more than usual.
But how many of them had a baby-elephant sized ruby gemstone behind them?
It may seem irrelevant to his technical qualifications that Curtis Yarvin is a racist. Someone can be a good plumber but a bad electrician, so why not a good programmer but a bad human being? But I think it really is relevant in this case because once you realize that Yarvin is just as dumb and racist as your racist uncle at Thanksgiving and he’s just better at writing twenty thousand word blog posts, it helps you see all of his other ideas in a similar light. Is it a good idea or just a lot of words surrounding a bad idea?
Is Kelvin versioning a good idea? It’s an interesting idea on the surface, and it’s interesting to think about converging on a final design rather than evolving forever, but once you allow the version to go from v1 to v0.1 to v0.01 you realize that it’s actually the same as a normal version that always counts up, it’s just way less convenient to work with and it requires twenty thousand words of fog to cover it up.
Yarvin is the human equivalent of an LLM: there’s nothing of substance there, just a lot of hot air surrounding a core of absolute stupidity.
“Racist” is putting it mildly. Comparing Yarvin to a garden-variety racist uncle is like comparing Darth Vader to a stormtrooper. He’s explicitly a feudalist who believes democracy is bad and the world should be run by an elite.
But I think calling him stupid is a cop-out. I’ve read through some of the Urbit docs and there are some fascinating, but convoluted and impractical, ideas in there. I would rather call him evil. I just hope his plans for world domination are as unlikely to succeed as Urbit is.
I dunno, saying Bill Ayers wrote Obama’s first book is pretty uncle-level stupid.
It’s not irrelevant: people who can be racist have other bad ideas, so you are right to be suspicious!
Version number schemes are also a fetishism: they don’t make your code faster or smaller or more correctly handle inputs. When a racist tells you their fetish, why do you listen?
Not trying to kink-shame here, more kink-ask-why.
For more context:
Wow. Before this I thought PuTTY’s configuration dialogs were just lazy, that maybe there wasn’t even a “design decision” to make them that way, but the idea that the author likes this layout seems very strange to me. I hope this is not what they mean.
But I like some idea of symbiosisware anyway; Most of my business is a bunch of perl scripts…
Heh, I somewhat agree with you but there are some interesting contrasts there as well. I couldn’t tell you when I started using PuTTY, but I deeply appreciate how it works exactly the same today as it did when I started using it. There’s never been a moment where I’ve installed PuTTY on a new machine and couldn’t immediately use it without having to hunt around for “that thing that moved”. And… I just checked on a PuTTY process that I’ve connected to a USB-Serial terminal right now and Windows is reporting that it’s using 0.1MB of RAM!
PuTTY is clearly not symbiosisware: it has a lot more than 1 user, so obviously this article is not a guide to how Simon thinks about PuTTY.
But the PuTTY documentation discusses its dialog box design constraints
I didn’t read it that way, since the article only says it was designed for one user, not that it only has one user. Once upon a time, PuTTY only had one user, so yeah, I don’t agree it is clearly not symbiosisware.
Further, I don’t think “small screens” justifies just how weird PuTTY’s configuration/session interface is: People, including myself (and almost certainly including the author) get “used to it”, but I also watch new users who need an SSH client on Windows struggle to translate their familiarity with other Windows applications to that configuration screen, and I sometimes find myself wondering why it works that way.
No it doesn’t. Across a few million daily page loads, less than 0,05% of my traffic is without javascript, and it’s usually curl or LWP or some other scraper doing something silly. Your traffic might be different, so it’s important to measure it, then see what you want to do about it. For me, with such a small number, the juice probably isn’t worth the squeeze, but I have other issues with this list:
99,67% of my js page loads include the telemetry response; I don’t believe spending any time on a js-free experience is worth anything to the business, but I appreciate it is possible it could be to someone, so I would like to understand more things to check and try, not more things that could go wrong (but won’t or don’t).
I’m not sure how to put this politely, but I seriously doubt your numbers. Bots not running headless browsers themselves should be more than what you estimated.
I’d love to know how you can be so certain of your numbers. What tools do you use, or how do you measure your traffic?
In our case our audience are high school students and schools WILL occasionally block just certain resource types. Their incompetence doesn’t make it less of your problem.
phantomjs/webdriver (incl. extra-stealth) is about 2,6% by my estimate. They load the javascript just fine.
A page that has in the <body> some code like this: … will load a.js, then load b.txt or c.txt based on what happened in a.js. Then, because I know basic math, I can compare the number of times a.js loads and the number of times b.txt and c.txt load according to my logfiles.
Tools I build.
I buy media to these web pages and I am motivated to understand every “impression” they want to charge me for.
I think it’s important to understand the mechanism by which the sysadmin at the school makes a decision to do anything;
If you’ve hosted an old version of jquery that has some XSS vector, you’ve got to expect someone is going to block jquery by regex, even if you’ve updated the version underneath. That’s life.
The way I look at it is this: I find if people can get to Bing or Google and not get to me, that’s my problem, but if they can’t get to Bing or Google either, then they’re going to sort that out. There’s a lot that’s under my control.
Can I invite you to a train ride in a German train from, say, Zurich to Hamburg? Some sites work fine the whole ride, some sites really can’t deal with the connection being, well, spotty.
If you can host me I’m happy to come visit.
Yeah, and I think those sites can do something about it. Third-party resources are probably the number one issue for a lot of sites. Maybe they should try the network simulator in the browser developer tools once in a while. My point is the javascriptness isn’t as big a problem as the people writing the javascript.
So someone points out an issue in some code, provides a patch with clear explanations, and the maintainer writes:
And later:
Guess he didn’t go to the “stop being an asocial basement nerd” class…
I realize now he has a wikipedia page which boils down to: “he’s a good coder, but a bigger jerk.”…
Can I suggest an alternative way to think about this?
The human being in this case is paid to maintain it, and in order to get paid they need to tell their employer what they were spending their money on, and they don’t want to say “I spent that money on a joke function zero of your customers care about” – because seriously, who would?
Every single person commenting on this thread is putting that relationship at risk… for a joke. This is their life!
Can you even imagine how frustrating it must be for someone to put your fucking life at risk for a joke and then get mad at you for not getting the joke?
On the other hand, Drepper’s behaviour caused Debian and other major distributions to drop glibc in favour of eglibc, and subsequently glibc maintainership was taken over by a committee without Drepper. It wasn’t the bug report that lost him his job, it was the way he handled it (and many others).
At some point in the ticket they mention something along the lines of: “if it’s a joke, remove it. Otherwise, if it’s left in the codebase, what’s wrong with a correct implementation?”
Also, merging such a patch would take no more than a few minutes. I doubt his employers are tracking his work to the minute, nor judging if an applied patch is “worth it”.
Do you seriously think that merging a fix provided by a very competent developer puts “his life at risk”? Do you think the employer will be like: “hey, you spent 5 minutes merging this fix, but it’s for a joke function, you’re fired!!!”.
If that’s your idea of employment, you must have had very shitty bosses…
I’m curious - why not a database? Maildir/MH/mbox all have gross tradeoffs between performance and reliability, whereas SQLite would cut the gordian knot of those. Of course, schema design could be a problem, but even just stuffing the whole message without special columns for indexing would probably end up working better.
I think archival purposes used to be a really good reason, but SQLite is now a LOC recommendation for long-term storage, so I am not really convinced of it anymore.
On the subject of schemas: Once upon a time I added SQLite support to dbmail; If you only do IMAP/SMTP with the occasional ad-hoc query maybe that’s fine for you. The general architecture is good enough for archival purposes but then derived tables are used to make IMAP suck less and are sometimes useful for ad-hoc queries. Obviously you can add your own triggers and build up your own cache tables as well…
For a server or general backing storage for e.g. a GUI client, I agree. This is aimed at people who enjoy having stuff stored as regular files that can be inspected with regular command line tools, like grepping for something etc.
Even for the local MUA people, I think command line tools that abstract SQL queries so you can use a more robust format than plain text would be worthwhile.
I like the idea of trying something new, but between making claims like this without linking to full explanation and calling the thing SSH3 like it’s officially a new version… I’m not happy how they approach things.
It makes me wonder, though, who has the right to name something SSH-3. Not legally, but rather: who can we agree is the right set of people to assign that name? At least http3 existed under a different name and got adopted by a standards body later.
I mean, the name ssh is not really protection-worthy; in the end the original implementation, sshv1, apparently eventually became proprietary itself.
Though when it comes to the process, that’s some good feedback nevertheless. Perhaps someone should open an issue on their GitHub.
The IETF’s initial working name for ssh-2 was “Secsh”, and it only later became ssh-2, so one might expect the same from anyone attempting to create a new ssh version.
In the end, I’m surprised there aren’t more attempts to lift ssh to a newer standard; a lot has moved. Now with ssh-3, but also with mosh and the rise of the web, networked microcontrollers and so on, there is definitely a great opportunity to do so.
At the same time, OpenSSH is like the cathedral on top of a mountain on an island. Carefully engineered, slowly crafted with a lot of thoughts spent on very important aspects (private keys encrypted in memory comes to mind), and whatever comes next should hopefully be on par with that later on.
QUIC is the first time it was really worthwhile. The SSH protocol itself is fairly modular. You can plug in new encryption schemes, different hash schemes, and even different key types pretty trivially. Most of the evolution has been in that direction. We’ve moved from RSA to ECDH fairly smoothly over the last few years and you can store keys in TPMs or U2F tokens easily. SSH supports multiple channels and different out-of-band message types, so it’s easy to extend.
About the only thing that would require changes would be replacing TCP. SCTP never took off, so QUIC is the first thing that looks like a plausible replacement. QUIC bakes in a lot of the security at a lower level, so just running SSHv2 over QUIC is probably not useful and you need different encapsulation.
It’s also not supporting the new versions of the various protocols. I recently implemented an sftp server and was quite surprised to discover that OpenSSH doesn’t support anything newer than v4 (the latest RFCs are for v7).
That said, it has an impressive security track record. There are a lot of CVEs but most of them are either in niche features or possible sandbox escapes that are only a problem if you couple them with an arbitrary-code execution vulnerability in the rest of the code. Few other things have the same track record.
I’m not convinced they’re equivalent. There’s a lot of stuff suggesting QUIC is weak against things TLS 1.3 is strong against, e.g. https://link.springer.com/article/10.1007/s00145-021-09389-w and https://link.springer.com/chapter/10.1007/978-981-15-4474-3_51, and it seems possible that this sort of change actually creates a difficult-to-understand vulnerability.
I will recommend checking back on this project in a few years and see if issues continue to be uncovered and get better understood, or if this turns out to be another dead-end.
I think it is protection worthy in the sense that any word with commonly understood meaning and connotations should be protected so that it can continue to be a useful word.
People understand SSH to be a standardized protocol that has been gradually developed over decades and is now ubiquitous on Unix machines. The transition from SSH-1 to SSH-2 respected people’s expectations in a way that SSH3 does not.
If a project’s name subverts expectations and creates the possibility for confusion, it’s not grounds for a lawsuit, but it reflects poorly on the project and will turn off a lot of people.
For example, this recently happened when blockchain bros tried to run off with the name “web 3”. That was the first thing I thought of when I saw that this new project was trying to land-grab the “ssh 3” name. It would have been a better first impression to pick a new name and let the SSH community figure it out.
Old news is old, but “ssh” is a registered trademark: https://www.linux.com/news/ylonen-we-own-ssh-trademark-heres-proposal/
The differentiation between a TUI and a GUI doesn’t hinge on whether it operates within a terminal, but rather, its design centered on interactivity. This can be exemplified through the use of modes, keybindings, panes, and so forth, with lazygit serving as a prime example. In my opinion, this project leans more towards the GUI side.
That’s an interesting way of thinking about it. I always assumed that TUIs were any text-based user interface as the name suggests, regardless of the interactivity. For example, vim / neovim are TUIs even though they are much more complex than simple GUI applications (for example minesweeper).
Yes. Text-based. Line characters and VT220 escape sequences are not text though, any more than a base64-encoded png is text. Text can be composed and manipulated with lots of different tools (not just humans), so interactivity and composability really are the main things you will have with a Text-based user interface.
https://en.wikipedia.org/wiki/Impulse_Tracker runs in what is called “text mode”, and I don’t think it’s a TUI. I don’t think line art and box-drawing characters are “text” just because they’re conveniently-located and sized in a font-block and there’s special circuitry for translating a buffer of characters and attributes into pixels instead of working with pixels directly. You can’t run this in a terminal, you can’t pipe it to anything, and keyboard input is controlled by each control individually (like every GUI you have ever used). Minesweeper is a GUI even when it runs in a terminal for exactly the same reason: https://minesweeper.mia1024.io
http://acme.cat-v.org on the other hand, only ever works with a raster display (graphical pixels), and yet I think it’s undeniably a TUI because there is literally no other metaphor the user is exposed to except in terms of text. There are no widgets, and every control and button always works the same way, and you most certainly can pipe other programs into and out of acme (it’s way more powerful than :!/% on vi)
I usually refer to emacs and vim as a gui, especially when talking about pull-down menus, tool-bar clickable buttons and the protected forms interfaces used in configuration or in some vimscripts, but ex (the editor that vi is built on) is absolutely a TUI, and the level of composition that is possible in emacs and vi is high enough that they can be used-reasonably as TUI, so I don’t think it’s worth quibbling too much, but surely Microsoft Notepad.exe is a GUI, and so if composition isn’t as important as I think, maybe vim/neovim are GUI as well.
There are lots of examples on wikipedia that includes interactive elements and terminals’ features: https://en.wikipedia.org/wiki/Text-based_user_interface
What is your definition based on?
This article needs additional citations for verification. Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed.
I just explained that.
Ok, I might have not gotten your point first time reading it. So, it’s more about what (text, both ‘plain’ and commands) you interact with, than how (acme via mouse interactions)?
Would it be fair to call Excel a TUI, by the same reasoning? I mean editing text in cells, writing formulas and such, not the graphing things. Maybe pre-ribbon versions of it.
I think it depends exactly on what you’re talking about: visicalc and (better) sc are both TUI spreadsheets, and I would agree that “editing text in cells, writing formulas and such” is a big part of it, so maybe it’s possible to talk about the TUI-ness of Excel, or the TUI that’s inside Excel. Some ancient MS-DOS version of Excel might’ve been a TUI.
But every piece of marketing collateral I’ve ever seen about Excel shows, front and center, the graphing and a rich set of programmable interface controls and widgets, so I think it would otherwise be very strange for someone to refer to it as a TUI.
In my opinion the Oberon System is also a TUI, even though it is graphical. It definitely follows similar interactivity patterns as TUIs.
It seems like the Conservancy lawyers say: website/internet opinions that the GPL is a license and not a contract are not lawyers’ or judges’ opinions, and are therefore not relevant to the lawsuit.
But then the Conservancy lawyers also argue that the community (the same that believes GPL is a license, not a contract), can sue Vizio for violation of contract.
The two statements aren’t in conflict, but they require context to properly understand. I’m not so certain this article explains it very well.
This court can hear about contracts. Vizio says “a bunch of non-lawyers” think this is a “license not a contract” so this court cannot hear this case; Conservancy agrees with the first part, but says “those same bunch of non-lawyers” think this court can hear this case (which is what the second statement is).
The main thing is that “the contract” they are talking about isn’t clearly written down, and the reason it isn’t clear is because “a bunch of non-lawyers” did the writing, but like any verbal contract, all a contract really requires is agreement.
So Conservancy argues: If the terms are understood, and Vizio acts in accordance with those terms, then Vizio automatically enters “into contract” with everyone who could be harmed by Vizio failing to implement those terms. This is a contract, and so therefore, “this court” can hear the case and implement a cure.
Vizio wants this to be a “license”, because if it’s a license, we just hear Conservancy’s argument, but it’s moot, because this court can’t hear it, and we won’t need to hear Vizio explain what they think their rights are.
The Judge (in California) is elected, so they need to see press like this to understand that if they make a mistake, the ramifications will be huge. They probably would be: if they refuse to hear the case, and Conservancy can’t get another court to hear it, then companies are probably going to start ignoring other files called “license” and this Judge will get blamed.
If he gets blamed, he will also get credit from the concentrations of wealth that want him to rule that way.
Note that it is not that simple. Concentration of wealth also knows that killing the open source golden goose would be highly detrimental and costly to their own wealth concentration machinery.
Can you sketch out how hobbling copyleft would be detrimental to them?
Sure. 75% to 90% of their code is open source from external coders. And that is the case across the industry. We actually have numbers about this. Killing the thing that attracts maintainers to share the code they write would … basically kill the software industry at this point.
Are you saying that maintainers’/developers’ willingness to share code depends on copyleft being enforceable?
Separately, I would be interested to see where those numbers come from.
Note that this is not only about copyleft. There are other things in licenses that you want to enforce. And in general if you start using code with licence and not respecting the terms, the fact it is copyleft does not change the fact we cannot trust you for non copyleft one.
For these numbers, see https://www.synopsys.com/content/dam/synopsys/sig-assets/reports/rep-ossra-2022.pdf
It’s not that simple. The argument for the GPL being a contract and everyone else being third-party beneficiaries doesn’t extend to permissive licenses, so I don’t think this ruling will have much bearing on that.
Regardless, are you then saying that maintainers’/developers’ willingness to share code depends on third parties being able to enforce the attribution requirements of permissive licenses?
I am saying that anything that will make maintainers feel like the open source/free “social contract” is abused far more than it already is will make a lot of them rethink their involvement, yes. We are already having a pretty bad burnout problem in that domain, which is starting to really scare governments and corporations at the edge.
Wait why say “yes” if that’s not what I asked. Does that mean “yes, maintainers’/developers’ willingness to share code depends on third parties being able to enforce attribution requirements of permissive licenses?”
I mean yes, because otherwise they have to do it themselves, which no one has the money and time for. The US court system is based on third parties enforcing laws for you.
We disagree on what motivates people to write and share code, but there isn’t much data to draw on. I just think it’s a very strange position.
Surely 75-90% of code used by commercial entities is not copyleft licensed, right? I would be surprised if this was the case. My understanding is that companies strongly prefer permissively licensed OSS.
Going by lines of code, the Linux kernel may make up that percentage. I’m just not convinced Linux would collapse if GPL were not enforceable.
I understand the issue for the specific lawsuit mentioned. But I didn’t follow why there was a preexisting preference for GPL to be a license and not a contract.
Anyone know what the perceived benefit was for arguing that GPL is a license?
My third-hand and long-after-the-fact impression is that the idea was that while a contract restricts the user’s rights, a license only grants more rights. So by arguing that it was a license, they were trying to argue “this is strictly beneficial (more liberal in user rights) compared to proprietary software”.
I believe the key thing in the USA is that a contract must provide something to both parties. This is why a bunch of contracts have the weird nominal dollar thing.
The GPL does not provide anything to the copyright owner. The recipient of the GPL’d work receives the right to use the work. Customers of the recipient receive a set of rights. The author receives nothing as a direct result of the GPL (they may subsequently get code back if someone downstream happens to share it, but the GPL does not require this).
It’s quite surprising to see the defendant arguing that this should be treated as copyright infringement because the statutory penalties are much higher in that case, especially with the precedent that the RIAA set that each copy distributed counts as a separate incident and triggers the punitive damages again.
In legal terms, this is called a peppercorn.
My suspicion (don’t quote me on this) is that a copyright claim would have to go through federal court, which lacks California’s rule allowing the SFC to sue as a third-party beneficiary.
Yes, the OP mentions it…
It has to do with standing: This is a contract-court.
Vizio wants to argue it is a license (and so it deals with e.g. copyright infringement) in the legal sense of the word so that this court cannot hear the case.
So why did SFC make this a state contract suit rather than a federal copyright suit in the first place? Federal judiciary too right wing?
As mentioned, under license law the SFC has no standing, as they are a third party, a concept that is not accepted in license court, since the SFC does not represent the copyright owner.
Conservancy can represent several of the relevant copyright holders, but they cannot use a copyright case to ask for source code, nor would that get them a precedent that all other USA software users also have standing to sue.
As mentioned where?
In the OP….
You are mistaken.
From the OP:
There are actually multiple relevant paragraphs, but this seems the most explicit about the issue
Yeah, and the closest thing is this:
which requires a leap of inference to get to “SFC has no standing under license law.” It seems plausible, but it was certainly not mentioned in the article. Are we supposed to know that the contracts-law “third-party beneficiary” concept is the only legal device that could give the SFC standing to sue?
Would it really be that hard for the SFC to find a Linux/bash/glibc contributor to sign on to the suit, if that’s even necessary?
Again, the article says that, it goes into great detail on the matter. It does not have a single sentence saying “by making this a copyright license case, Vizio is forcing this into federal courts where there is no concept of a third party beneficiary, and so SFC would have no legal standing to bring the case and it would be dismissed.” It does have numerous paragraphs that together make this point. e.g. one paragraph details how contract vs license suits are preempted by federal law, one details how third-party standing differs between contracts and licenses in CA vs federal law, one explains how everything combines to completely remove the SFCs right to sue.
In principle anyone could have done that already. The whole point is SFC wants to do this unilaterally without working with actual authors because that’s more expensive. My not at all a lawyer take is the issue is this:
If SFC has no standing all they can do is provide lawyers. But lawsuits take time and money for the non-lawyer person as well: you have to attend depositions, for which you may have to travel, that can take 10s of hours. So you not only need someone who’s copyright was violated, but they have to have the time and money to be able to handle the case workload. These cases are generally about forcing source code to be released, not extracting monetary damages, so the end result is you are fundamentally out of pocket (you can get compensated for say hotel time, but not for vacation time, etc). e.g. the end result may still be the contributor being out of pocket.
The other side is developers working for companies that contribute to open source (e.g. the ones that might be able to afford the time) are likely not the copyright owners - e.g. for the last 15 or 20 years all my open source code belongs to large corporations, so for that code I would not have standing either. So you actually need to convince the company to be part of the suit not the individual developers there.
You don’t think the discussion is worth continuing?
I now realize you are not the original person I asked about this, so why are you saying “again”? Did you already say the article says the SFC has no standing under license law? I can address the rest of your comment but want to make sure I’m not having an aneurysm. What’s with the “again”?
Wild guess: MIT and BSD licenses already existed, so it’s less of an adoption hurdle to think about GPL as a license.
Otherwise users would be confused about putting contracts on their code.
TBH I don’t think this article was very well written - it doesn’t give references or definitions
The Mac’s resolution was 512x342, but a 4:3 projection would imply 384 pixels vertical for pixels to be square on screen, so unless Canvas compensated for this (which seems unlikely) it would be off by about 11%.
I don’t actually have the ability to see the video and vectors to verify that myself, but maybe that’s worth checking for?
Interesting hypothesis. I’ll look into it when I get a chance.
hostname+pid+counter is guaranteed unique on Unix.
Using an LFSR (rand) to generate “unique” IDs will fail eventually. I would just compose the name out of those three parts instead of trying to compress it into a single numeric token.
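A minimal sketch of that suggestion in C (the function name and format string are illustrative, not from the comment):

    /* Compose a name from hostname + pid + a per-process counter.
     * Illustrative sketch; not thread-safe because of the static counter. */
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef HOST_NAME_MAX
    #define HOST_NAME_MAX 255
    #endif

    /* Writes something like "myhost-12345-7" into buf. */
    int make_unique_name(char *buf, size_t buflen)
    {
        static unsigned long counter;               /* unique within this process */
        char host[HOST_NAME_MAX + 1] = "unknown-host";
        gethostname(host, sizeof host - 1);         /* unique across machines */
        return snprintf(buf, buflen, "%s-%ld-%lu",
                        host, (long)getpid(), counter++);
    }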
If you really got no explanation, here it is: you went and told someone else you wanted voting rights over their life (in this case, what they program on), and that is not OK! Not ever! It’s wrong to bully people, and you should be ashamed of doing wrong things.
Please don’t do this: animations that never happen still waste electricity, which harms the environment and reduces battery life.
You can use a CSS animation on a desired property and hook the animation events so your code only runs when something happens instead of every single frame.
In a tweet the OP linked on the main post, they say that there is no perceivable performance degradation - I’m curious if you mean something else?
Of course I mean something else. Where did I speak of “performance”?
All that statement means is that JavaScript can query a property on every node in a set of a dozen or so sixty times a second on multi-gigahertz machines.
This paper reads like sour grapes. I think fork() is great, but maybe some other metaphors would be great as well.
Once upon a time, I was working on a DNS server, and after finding it a bit slow, did a:
after binding the sockets and immediately got it running multicore. Yes, I could have rewritten things to allow main() to accept a socket to share, or to have a server process pass out threads, and then I could have used spawn, but I have a hard time being mad at those 14 bytes.
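A rough sketch of the trick being described (illustrative, not the original snippet; it assumes a UDP socket that all the forked workers read from, and the port number is made up):

    /* Bind one UDP socket, then fork a few workers that all read from it.
     * The kernel delivers each datagram to exactly one of the processes. */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5353);               /* illustrative port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;

        fork(); fork();                            /* one process becomes four */

        for (;;) {
            char buf[512];
            struct sockaddr_in peer;
            socklen_t plen = sizeof peer;
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &plen);
            if (n < 0)
                break;
            /* ... parse the query and sendto() a reply ... */
        }
        return 0;
    }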
Yes this is a great paper. It’s a perspective from kernel implementers, and the summary is that fork() is only good for shells these days, not other apps like servers :)
It reminds me that I bookmarked a comment from the creator of Ninja about using posix_spawn() on OS X since 2016 for performance:
https://news.ycombinator.com/item?id=30502392
I think basically Ninja can do it because it only spawns simple processes like the compiler.
It doesn’t do anything between fork() and exec(), which is how a shell sets up pipelines, redirects, and does job control (just implemented by Melvin in Oil, with setpgid() etc.)
Also, anything like OCI / Docker / systemd-nspawn does a bunch of stuff between fork() and exec()
It would be cool if someone writes a demo of that functionality without fork(), makes a blog post, etc.
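For context, a rough sketch (not Oil’s actual code) of the kind of setup a shell does between fork() and exec() for one leg of a pipeline with job control; this is the part that is awkward to express with posix_spawn alone:

    /* Illustrative sketch: spawn the left side of `cmd1 | cmd2` with job control. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    void spawn_left_of_pipe(char *const argv[], int pipe_write_fd, pid_t pgid)
    {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return; }
        if (pid == 0) {
            /* Child: the code run here is what posix_spawn cannot express directly. */
            setpgid(0, pgid ? pgid : 0);          /* join (or create) the job's process group */
            dup2(pipe_write_fd, STDOUT_FILENO);   /* stdout -> pipe */
            close(pipe_write_fd);
            /* redirections like 2>err.log and signal resets would also go here */
            execvp(argv[0], argv);
            perror("execvp");
            _exit(127);
        }
        /* Parent: set the pgid here too, to avoid a race, then drop its pipe end. */
        setpgid(pid, pgid ? pgid : pid);
        close(pipe_write_fd);
    }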
Related from the Hacker News thread:
As a way to help alternative kernel implementers, I’d be open to making https://www.oilshell.org optionally use some different APIs, but I’m not sure of the details right now
Most of those things can be done with posix_spawn_file_actions_t.
You can get setpgid with POSIX_SPAWN_SETPGROUP, and the signal mask with posix_spawnattr_setsigmask.
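A hedged sketch of how those pieces fit together (the wrapper name, fds, and choices like an empty signal mask are illustrative):

    /* Redirect an fd onto stdout, put the child in a new process group,
     * and set its signal mask, all via posix_spawn. */
    #include <signal.h>
    #include <spawn.h>
    #include <sys/types.h>
    #include <unistd.h>

    extern char **environ;

    int spawn_with_setup(pid_t *pid, const char *path, char *const argv[], int out_fd)
    {
        posix_spawn_file_actions_t fa;
        posix_spawnattr_t attr;
        sigset_t mask;

        posix_spawn_file_actions_init(&fa);
        posix_spawn_file_actions_adddup2(&fa, out_fd, STDOUT_FILENO);

        posix_spawnattr_init(&attr);
        sigemptyset(&mask);
        posix_spawnattr_setsigmask(&attr, &mask);   /* child starts with nothing blocked */
        posix_spawnattr_setpgroup(&attr, 0);        /* 0 = new group, pgid == child's pid */
        posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETPGROUP | POSIX_SPAWN_SETSIGMASK);

        int err = posix_spawn(pid, path, &fa, &attr, argv, environ);

        posix_spawn_file_actions_destroy(&fa);
        posix_spawnattr_destroy(&attr);
        return err;   /* 0 on success, an errno value on failure */
    }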
Oh cool, I didn’t know that .. Are there any docs or books with some history / background on posix_spawn()?
When I google I get man pages, but they sort of tell “what” and not “why”
And random tweets, the poorest medium for technical information :-( https://twitter.com/ridiculous_fish/status/1232889391531491329?lang=en
I guess this kind of thing is what I’m looking for
The “why” was largely political, IMO: The spawn() family of functions was introduced into POSIX so that Windows NT could get a POSIX certification and win government contracts, since it had spawn() and not fork().
Tanenbaum’s Modern Operating Systems was written around this time, and you might find its discussion of process-spawning APIs interesting. He doesn’t mention performance, and Linux’s fork+exec was in fact faster than NT’s CreateProcess, so I find it incredibly unlikely that NT omitted fork() for performance; more likely it was to simplify other parts of the NT design.
The suggestion to run a subprocess that calls tcsetpgrp before exec isn’t a bad one, and it may give up some of the performance benefit you get from posix_spawn, but that might not be so bad, because the subprocess can be a really simple, tiny static binary that does what it needs to and calls exec(). One day maybe we won’t have to worry about this.
Another option is to just wait+WIFSTOPPED and then kill+SIGCONT it if it’s supposed to be in the foreground.
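A rough sketch of that alternative, assuming the shell knows the child’s pid and the terminal fd (the helper name is illustrative):

    /* Wait with WUNTRACED; if the child stopped (e.g. SIGTTIN/SIGTTOU because it
     * isn't the foreground process group yet), hand it the terminal and resume it. */
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void wait_and_foreground(pid_t pid, int tty_fd)
    {
        int status;
        while (waitpid(pid, &status, WUNTRACED) == pid) {
            if (WIFSTOPPED(status)) {
                tcsetpgrp(tty_fd, getpgid(pid));   /* make it the foreground job */
                kill(pid, SIGCONT);                /* and let it continue */
                continue;
            }
            break;   /* exited or was killed */
        }
    }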
Very strange claim. AFAIK it was possible to implement fork on top of the Windows NT native API right from the beginning. (I can try to find links.) And early Windows POSIX subsystems (including Interix) actually implemented fork. (This was long before WSL happened.) And Interix actually implemented fork directly on top of the Windows NT native API, as opposed to Cygwin’s very hacky fork implementation.
Also IIRC the very first Windows POSIX subsystem happened before posix_spawn was added to POSIX. (Windows had a lot of different official POSIX subsystems authored by Microsoft; WSL is the last one.)
I think you’re thinking of ZwCreateSection, but I don’t think this was a Win32 API call (or even well documented), and it takes a careful reading to see how fork could be implemented with it, so I don’t think this is the same as having fork(). After all, there have got to be lots of ways to get a fork-like API out of other things. I remember they claimed they had reasons for not using ZwCreateSection, but I don’t know enough to say what problems they ran into.
It’s not quite that clear cut. Modern versions of NT have a thing called a picoprocess, which was originally added for Drawbridge but is now used for WSL. These are basically processes that start (almost) empty. Creating a new process with CreateProcess[Ex] creates a new process very quickly, but then maps a huge amount of stuff into it. This is the equivalent of execve + ld-linux.so running, and that’s what takes almost all of the time.
Even on Linux, vfork instead of fork is faster (especially on pre-Milan x86, where fork needs to IPI other cores for TLB synchronisation).
XNU has a lot of extensions to POSIX spawn that make it almost usable. Unfortunately, they’re not implemented anywhere else. The biggest problem with the API is that they constrained it to permit userspace implementations. As such, it is strictly less expressive than vfork + execve. That said, vfork isn’t actually that bad an API. It would be even better if the execve simply returned and vfork didn’t return twice. Then the sequence would be simply vfork, setup, execve, cleanup.
With a language like C++ that supports RAII, you can avoid the footguns of vfork by doing the setup inside a small wrapper. This ensures that anything that you created in between the setup is cleaned up. I generally use a std::vector for the execve arguments. This must be declared in the enclosing scope, so it’s cleaned up in the parent. It’s pretty easy to wrap this in a function that takes a lambda and passes it a reference to the argv and envp vectors, and executes it before the execve. This ensures that you get the memory management right. As a caller, you just pass a lambda that does any file descriptor opening and so on. The wrapper that I use also takes a vector of file descriptors to inherit, so you can open some files before spawning the child, then do arbitrary additional setup in the child context (including things like entering capability mode or attaching to a jail).
So, a few very common cases that would break:
Redirecting stdin/stdout/stderr. How do you preserve the parent’s stdin/stdout/stderr for the cleanup step without also passing it to the child?
Changing UID/GID. Whoops, the parent is now no longer root, and can’t change back.
Entering a jail/namespace. Again, the parent is now in that jail, so it can’t break out without also leaving the child with an escape hatch.
Basically anything that locks down the child in some way will also affect the parent now.
I don’t understand how any of those use cases break. In between vfork and execve, you are running in the child’s context, just as you are now. You can drop privileges, open / close / dup file descriptors, and so on. The only difference is that you wouldn’t have the behaviour where execve effectively longjmps back to the vfork; you’d just return to running in the parent’s context after you started the child.
Ok, so I think I see what you mean now.
At the point of vfork, the kernel creates two processes using the same page tables. One is suspended (parent), and the other isn’t (child).
The child continues running until execve. At that point the child process image is replaced with the executable loaded by exec. The parent process is then resumed but with the register/stack state of the child.
That would actually work pretty nicely.
Exactly. The vfork call just switches out the kernel data structures associated with the running thread but leaves everything else in place, the execve would switch back and doesn’t do any of the saving and restoring of register state that makes vfork a bit exciting. The only major change would be that execve would return to the parent process’ kernel state even in case of failure.
That’s pretty elegant. If I’m ever arsed writing a hobby OS-kernel again, I’m definitely going to try implementing this.
Actually that tweet thread by the author of fish is very good.
This is exactly what Melvin just added, so it looks like we can’t use it? On what platforms?
https://github.com/ksh93/ksh/blob/dev/src/lib/libast/comp/spawnveg.c is a good one to look at; you can see how to use posix_spawn_file_actions_addtcsetpgrp_np and if/when POSIX_SPAWN_TCSETPGROUP shows up you can see how it could be added.
Hm in the Twitter thread by ridiculous_fish he points to this thread:
https://www.mail-archive.com/[email protected]/msg00718.html
The way I’m reading this is that ksh has a bug due to using posix_spawn() and lack of tcsetpgrp(). And they are suggesting that the program being CALLED by the shell can apply a workaround, not the shell itself!
This seems very undesirable.
I think we could use posix_spawn() when job control is off, but not when it’s on.
And I actually wonder if this is the best way to port to Windows? Not sure
posix_spawn is documented in POSIX: https://pubs.opengroup.org/onlinepubs/9699919799/ , specifically here: https://pubs.opengroup.org/onlinepubs/9699919799/functions/posix_spawn.html . That manpage contains a big “RATIONALE” section, and a “SEE ALSO” section with the posix_spawnattr_* functions.
Glibc and musl add more and more optimizations over time, allowing posix_spawn to use vfork (as opposed to fork) in more and more cases. It is quite possible that recent versions of glibc and musl call vfork in all cases.
AFAIK this glibc bug report https://sourceware.org/bugzilla/show_bug.cgi?id=10354 was resolved by a patchset that makes glibc always use vfork/CLONE_VFORK.
Using vfork is not simple. This article explains how hard it is: https://ewontfix.com/7/
I can post the code of my unshare(1) analog. It seems I can simply add the CLONE_VFORK option to the list of clone options and everything will work.
That would be cool! What do you use it for?
I would like to do some container-like stuff without Docker, e.g. I’ve looked at bubblewrap, crun, runc, etc. a little bit
We have a bunch of containers and I want to gradually migrate away from them
I also wonder if on Linux at least a shell should distribute a bubblewrap-like tool ! Although I think security takes a long time to get right on Linux, so we probably don’t want that responsibility
Here is my util, I call it “asjail” for “Askar Safin’s jail”:
https://paste.gg/p/anonymous/4d26975181eb4223b10800911255c951
It is public domain. I’m sorry for the Russian comments.
Compile with gcc -o asjail asjail.c. My program is x86 Linux specific. Run like so: asjail -p bash.
The program has options -imnpuU, which correspond to unshare(1) options (see its manpage). (Also, the actual unshare has an -r option; I have no such cool option.)
My program usually requires root privileges, but you can specify the -U flag, which creates a user namespace. So you can run asjail -pU bash as a normal user; this will create a new user namespace and then create a PID namespace inside it. (Again: unshare -pr bash is even better.)
But user namespaces need to be enabled in the kernel. In some distros they are enabled by default, in others not.
I wrote this util nearly 10 years ago. Back then I wanted some lightweight container solution. I was not aware of the unshare(1) util. unshare(1) fully subsumes my util. (Don’t confuse it with the unshare(2) syscall.) Also, unshare(1) is a low-level util; it’s lower-level than bubblewrap, runc, etc.
Today I use docker and I’m happy with it. Not only does docker provide isolation, it also allows you to write Dockerfiles. And partial results in Dockerfiles are cached, so you can edit some line in a Dockerfile, and docker will rebuild exactly what is needed and no more. Dockerfiles are perfect for bug reports, i.e. you can simply send a Dockerfile instead of a “steps to reproduce” section. The only problem with docker is the inability to run systemd inside it. I have read that this is solved by podman, but I didn’t test it. Also, Dockerfiles are not quite reproducible, because they often rely on downloading something from the internet. I have read that the proper solution is Nix, but I didn’t test it.
Additional comments on asjail (and other topics).
You need to add -- to make sure options are processed as part of the target command, not asjail itself, i.e. asjail -p -- bash -c 'echo ok'.
asjail was written by careful reading of the clone(2) manual page.
asjail is not a complete solution. It doesn’t do chrooting or mounting needed directories (/proc, /dev, etc). So back then, 10 years ago, I wrote a bash script called asjail-max-1, which does these additional steps. asjail-max-1 was written by careful reading of http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface and so asjail-max-1 (together with asjail) can run systemd in a container! asjail-max-1 is a screen-sized bash script.
But, unfortunately, to make all this work, you also need a C program which emulates a terminal using forkpty(3). So I wrote such a program and I called it pty. It is a few screens of C. Together, asjail, asjail-max-1 and pty give you a complete small solution for running containers, in something like 5 screens of code.
I can post all this code. But all this is not needed, because it is subsumed by existing tools. asjail is subsumed by unshare. And asjail-max-1+asjail+pty is subsumed by systemd-nspawn.
Today I don’t use any of these tools of mine. When I need to run a container I use docker. If I need to run an existing file tree I use systemd-nspawn.
Also, all these tools I discussed so far are designed for maximal isolation, i.e. to prevent the container from accessing host resources. But sometimes you have the opposite task: you want to run some program and give it access to host resources, for example the X server connection, the sound card, etc. So I have another script, a simple wrapper for chroot, which does exactly this: https://paste.gg/p/anonymous/84c3685e200347299cac0dbd23d31bf3
Also a good paper about old POSIX APIs not being a great fit for modern applications (Android and OS X both use non-POSIX IPC throughout, etc.)
POSIX Abstractions in Modern Operating Systems: The Old, the New, and the Missing (2016)
https://roxanageambasu.github.io/publications/eurosys2016posix.pdf
https://news.ycombinator.com/item?id=11652609 (51 comments)
https://lobste.rs/s/jhyzvh/posix_has_become_outdated_2016 (good summary comment)
https://lobste.rs/s/vav0xl/posix_abstractions_modern_oses_old_new (1 comment 6 years ago)
This is not true. Ninja redirects the output of each command so that the outputs of multiple parallel commands don’t mix together.
I want to write my own shell some day. And I don’t want to use fork there; I will use posix_spawn instead. But this will be tricky. How do you launch a subshell? Using /proc/self/exe? What if /proc is not mounted? I will possibly try to send a patch to the Linux kernel to make /proc/self/exe available even if /proc is not mounted. Busybox docs contain (or contained in the past) such a patch, so theoretically I can simply copy it from there. I can try to find a link.
BTW my reading of this thread above
https://lobste.rs/s/smbsd5/fork_road#c_qlextq
is that if you want a shell to have job control (and POSIX requires job control), you should probably use fork() when it’s on.
Also I’m interested in whether posix_spawn() is the best way to port a shell to Windows or not …. not sure what APIs bash uses on Windows.
Is posix_spawn() built on top of Win32? How do they do descriptors, etc. ?
Surprisingly, we can combine posix_spawn speed with the needs of docker and systemd-nspawn. Let me tell you how.
First of all, let’s notice that fork alone is not enough for systemd-nspawn. You also need to put the child in new mount/UTS/etc. namespaces. For this you need clone or unshare(2).
Fortunately, clone has the flag CLONE_VFORK, which allows us to get vfork-like behavior, i.e. our program will be faster than with fork.
So, to summarize, we have two options (either one will be enough) to combine posix_spawn speed with the features systemd-nspawn needs:
1. clone with all the namespacing flags we need (such as CLONE_NEWNS) plus CLONE_VFORK
2. create the new process using plain posix_spawn and then call unshare to put the process into the new namespaces
I didn’t test any of this, so it’s possible something will go wrong.
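A hedged sketch of the first option using the glibc clone() wrapper; the flags, stack size, and error handling are illustrative, and like the comment says, this exact form is untested:

    /* Spawn a child in new mount + UTS namespaces, vfork-style, via clone(2).
     * Needs root (or CLONE_NEWUSER added to the flags). Linux/glibc specific. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024] __attribute__((aligned(16)));

    static int child_main(void *arg)
    {
        char **argv = arg;
        execvp(argv[0], argv);   /* the exec releases the CLONE_VFORK-suspended parent */
        perror("execvp");
        _exit(127);
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s cmd [args...]\n", argv[0]);
            return 2;
        }
        /* CLONE_VFORK: parent sleeps until the child execs or exits (vfork-like speed).
         * CLONE_NEWNS / CLONE_NEWUTS: new mount and UTS namespaces for the child. */
        int flags = CLONE_NEWNS | CLONE_NEWUTS | CLONE_VFORK | SIGCHLD;
        pid_t pid = clone(child_main, child_stack + sizeof child_stack, flags, &argv[1]);
        if (pid < 0) { perror("clone"); return 1; }

        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }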
Also: I’m the author of my own unshare(1) analog (don’t confuse it with the unshare(2) syscall). My util doesn’t do any speed-up tricks; I just use plain clone without CLONE_VFORK.
Also, I wrote a Rust program which spawns a program using posix_spawn, redirects its stdin, stdout and stderr using posix_spawn_file_actions_adddup2, collects its stdout and stderr, waits for it to finish, and reports the status.
Would like to see some more technical exposition to understand why the DNS issue “can only happen in Kubernetes” and if it’s the fault of musl, or kubernetes, or the DNS nodes that for some reason require TCP. Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.
I also wonder where the author gets the confidence to say “if your application requires CGO_ENABLED=1, you will obviously run into issue with Alpine.”
My application requires CGO_ENABLED=1, and I ran into this issue with Alpine: https://github.com/golang/go/issues/13492
TLDR: Cgo + musl + shared objects = a bad time
That’s really more of a reliance on glibc rather than a problem with musl. musl is explicitly not glibc.
Not sure if it’s fixed, but there were other gotchas in musl related to shared libraries last time I looked. Their dlclose implementation is a no-op, so destructors will not run when you think you’ve unloaded a library, which can cause subtly wrong behaviour including memory leaks and state corruption. I hacked a bit on musl for another project a couple of years ago and it felt like a perfect example of the 90:10 rule: they implement the easy 90% without really understanding why the remaining difficult 90% is there and why people need it.
Oh, and on x86 platforms they ship a spectacularly bad assembly memcpy that performs worse than a moderately competent C one on any size under about 300 bytes (around 90% of memcpy calls, typically).
The result is the same; my app, which really isn’t doing anything unusual in its Go or C bits, can’t be built on a system that uses musl.
Yes, but I suspect you could more accurately say that it doesn’t work on a system that doesn’t use glibc.
It works fine on macOS, no glibc there.
Expecting getaddrinfo to work reliably isn’t a GNUism, it’s a POSIXism. Code that uses it to look up hosts that require DNS over TCP to resolve will work on GNU/Linux, Android, Darwin, *BSD, and Solaris.
So is it in the POSIX standard?
They happen in Kubernetes if you use DNS for service discovery.
On the Internet, DNS uses UDP. RFC 1123 was really clear about that. It could use TCP, but Internet hosts typically didn’t, because responses that don’t fit in one packet require more than one, and that takes more time, leading to a lower-quality experience, so people just turned it off. How much time depends mostly on the speed of light and the distance the packets need to travel, so we can use a random domain name to measure circuit length:
Once being “off” was ubiquitous, DNS client implementations started showing up that didn’t bother with the TCP code that they would never use, and musl is one of these.
Kubernetes (ab)uses the DNS protocol for service discovery in most reference implementations, but the distance between nodes is typically much less than 1000 miles or so, so you aren’t going to notice the time delay between one packet and five as much. As a result, when something goes wrong, people blame whatever isn’t in most of those reference implementations (in this case, musl).
I use /etc/hosts for service discovery (and a shell script that builds it for all the containers from the output of kubectl get …) which is faster still, and reduces the number of partitions which can make tracking down some intermittent problems easier.
This is a good point: if your application calls gethostbyname or something, what’s it going to do with more than 512 bytes of output? The most common reason seems to be people who use DNS to get everything implementing a service or sharing a label. Some of those are just displaying the list (on, say, a service dashboard), and for them, why not just ask the Kubernetes REST API? Who knows.
But others are doing this because they don’t know any better: if you get five responses and are only going to connect() to one, you’ve made a design mistake, and you might not notice unless you use Alpine!
This reminded me of that absolute gem and must-read story from ancient computer history, of how people couldn’t send emails to people more than 500 miles away. https://web.mit.edu/jemorris/humor/500-miles
That’s a fun story.
You can use TCP_INFO to extract the RTT between the parts of the TCP handshake and use it to make firewall rules that block connections from too far away.
This works well for me since I generally know (geographically) where I am and where I will be, but people attacking my systems are going to be anywhere, probably on a VPN which hides their location (and makes their RTT longer)
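A hedged sketch of the TCP_INFO part on Linux (the helper name and the example threshold are illustrative, not from the comment):

    /* Read the kernel's smoothed RTT estimate for an accepted connection and
     * decide whether the peer looks "too far away". Linux-specific. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Returns 1 if the connection's RTT exceeds max_rtt_us, 0 otherwise, -1 on error. */
    int too_far_away(int connfd, unsigned max_rtt_us)
    {
        struct tcp_info ti;
        socklen_t len = sizeof ti;
        if (getsockopt(connfd, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0)
            return -1;
        /* tcpi_rtt is the smoothed round-trip time in microseconds. */
        return ti.tcpi_rtt > max_rtt_us;
    }

    /* Example: light in fiber covers very roughly 1000 miles round trip in ~16 ms,
     * so a much larger RTT suggests the peer is not nearby (or is behind a VPN). */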