The trouble with GnuPG wasn’t primarily the C part. It’s that OpenPGP is like those comedy swiss army knives. You open one blade and it’s an umbrella, the next one is a neck massager, the next a rabbit and so on.
It doesn’t even attempt to do many things badly: it does nebulous things, badly, dating back to the 90s, when people thought encryption was a far less diverse field than it really is.
Email encryption? Just forget it.
Signing software packages? signify
Backups? Borgbackup
Secure comms? Signal
Encrypt files? age
I’m also surprised to hear that. Unless you explicitly look for troll channels, my experience has either been quiet (but quick to answer) or constantly active, and on topic.
I can’t say I’ve seen the things that the grandparent comment mentioned, but they definitely wouldn’t be on Freenode. If you limit yourself to Freenode, IRC is a very safe and well-moderated experience, especially on some exemplary channels like the Haskell one.
I have accidentally wandered into uncomfortable conversations and much worse things on some of the other popular IRC networks, of which quite a few still exist: https://netsplit.de/networks/top100.php
The same thing is true of sketchy Discord servers as well; it’s not like IRC is unique in this regard.
A year or two back, Supernets was spamming hard on IRC networks. I forgot if Freenode was affected, but I know a lot of the smaller networks I was on were.
Not OP, but I spend my time on IRCnet and EFnet since my IRC use is just to stay in touch with friends. Anyway, last year I was DDoS’d pretty hard because someone wanted my nick on EFnet.
Sometimes I miss #C++ on EFnet, not enough to go back on EFnet, but I do miss it – a lot of wonderful people were there in the late 90s. Freenode feels a lot more sane in terms of management and tools for the system operators. Cloaks and nickname registration go a long way.
Back in the days when the Internet started gaining wider adoption (1994-2005 roughly), there was a real sense of this tech-tinkerer ethos as we on the frontier of things enjoyed relatively unfettered freedom to mold this new environment. But the rest of the world joined us in enjoying the benefits of this new world and existing power structures reasserted themselves (as in governments - the best book I’ve read about it was written by a sociologist - Zeynep Tufekci’s Twitter and Tear Gas) or emerged under new fault lines (the newly minted big tech companies).
I think it was a huge mistake and a wasted opportunity that we technologists didn’t start to explicitly acknowledge these power structures and politics earlier, and that it took us the last decade to painfully learn that this is not our playground or toy when these technologies are wired into the world, and increasingly remake the world.
We spent decades arguing about open source licenses when people and companies who are used to thinking about and navigating power/politics ran circles around these naive intentions (it’s not sufficient to have the right license, it’s far more important that a project/technology has a healthy community organized around it, a license doesn’t guarantee against a company monopolizing a platform). I think it behooves us tech people to think about power & politics, because we have influence to affect changes - good or bad, and pretending that we don’t (“let’s just stick to technology”) biases outcomes more towards the bad end of the scale.
That’s why I like RFC8890 and Mark’s post: the approach isn’t hopelessly naive, it acknowledges the politics and tension around different interests and plonks down a good approach in its own corner of the tech world. On its own it’s not sufficient, but at least it finally heads in the right direction, and we can build on that.
My partner submitted a patch to OpenBSD a few weeks ago, and he had to set up an entirely new mail client that wouldn’t HTML-ise his email message or otherwise mangle it, just so he could make that one patch. That’s a barrier to entry that’s pretty high for somebody who may want to be a first-time contributor.
I think it was actually Gmail that was a barrier. And he also couldn’t do it from Apple Mail. It is just that the modern mail client has intentionally moved towards HTML.
I am flabbergasted that someone is able to send a patch to OpenBSD but is not able to set the web interface of GMail to send plain-text emails. Or to install Thunderbird, which might have been the solution in that particular case.
I also never used git send-email, but I don’t think it is the barrier to becoming a kernel maintainer.
Actually, that might work as an efficient sieve to select the ones who want to be active contributors from those who are just superficial or ephemeral contributors.
In my opinion, although GitHub supposedly decreases the barrier to first-time contribution, it increases the load on the maintainers, who have to interact with a lot of low-quality implementations, reckless hacks, and a torrent of issues that might not even be real issues. That is my experience in the repositories I visit or have contributed to.
On the other hand, if someone knows how to install OpenBSD, use cvs (or git), program in C, and navigate the kernel’s source code, I suppose they are capable of going to any search engine and finding the answer on how to send plain-text email via the GMail interface.
GMail mangles your “plain-text” messages with hard-wraps and by playing with your indentation. You’d have to install and configure thunderbird or something.
Note that this is anecdotal hearsay. This person is saying their partner had to set things up… they may have misunderstood their partner or not realized their partner was exaggerating.
Also, one might expect the amount of patience and general debugging skill necessary to transfer rather well to the domain of email configuration.
It’s also possible that guiraldelli is assuming it was an accepted OpenBSD kernel patch, whereas we don’t know if the patch was accepted and we don’t know if it was a userland patch. It doesn’t take much skill to get a userland patch rejected.
I am flabbergasted that someone is able to send a patch to OpenBSD but is not able to set the web interface of GMail to send plain-text emails.
You can’t send patches, or indeed any formatted plain text emails through the gmail web interface. If you set gmail to plain text, gmail will mangle your emails. It will remove tabs and hardwrap the text. It’s hopeless.
Think about impediments in terms of probabilities and ease of access.
You wouldn’t believe how many people stop contributing because of tiny papercuts. There is a non-trivial number of people who have the skills to make meaningful contributions, but lack the time.
Lobsters requires an invite. That’s a barrier to entry, so those who make it in are more likely to make good contributions, and have already put in the effort to comply with the social norms.
You may be willing to jump a barrier to entry for entertainment (that is, Lobsters) but not for free work for the benefit of others (because you have already done that work for yourself).
If the project wants to have contributors in the future it might want to consider using modern, user friendly technologies for contribution.
Linux is considering it; a discussion has started, which is positive (for the future of the project) in my opinion. It is a sign of responsive project management, where its conservatism does not hinder progress, only slows it enough not to jump on quickly fading fads.
Other communities are closing their microverse in on themselves, where it is not enough to format your contribution as an N<80 column plain-text email without attachments, with your contribution added to it via some custom non-standard inline encoding (like pasting the patch to the end of the line); you may even need to be ceremonially accepted for contribution by the elders. Also, some such projects still don’t use CI or automated QA. These communities will die soon; actually, they are already dead, especially as they don’t have large corporations backing them, since someone getting paid usually accepts quite a few papercuts a job demands. This is why Linux could actually allow itself to be more retrograde than, for example, the BSD projects, which I think are even more contributor-unfriendly (especially ergonomically).
I tend to agree, however that discussion was started by someone with a very striking conflict of interest and therefore their opinion should be questioned and considered to be insincere at best and maleficent at worst.
That doesn’t matter, the discussion can be done despite that. I think a forge-style solution is the key.
Despite Microsoft having two “forge”-type solutions in its offering (both quite usable in my experience, GitHub being a de-facto standard), I still cannot see a conflict of interest in this topic. There are other software forges available. The current process is simply outdated. A custom solution could also be developed, if deemed necessary, as was the case with git.
Pay attention: my comment talks about maintainers, active contributors, and superficial or ephemeral contributors.
From the article:
a problem recently raised by Linux kernel creator Linus Torvalds, that “it’s really hard to find maintainers.” Maintainers are the gatekeepers who determine what code ends up in the widely used open-source kernel, and who ensure its quality. It is part of a wider discussion about who will oversee Linux when the current team moves on.
And later on…
“We need to set up a better or a different or an additional way to view tools and the work that’s being done in the Linux project as we’re trying to bring in new contributors and maintain and sustain Linux in the future,” she told us.
Picking her words carefully, she said work is being done towards “moving from a more text-based, email-based, or not even moving from, but having a text-based, email-based patch system that can then also be represented in a way that developers who have grown up in the last five or ten years are more familiar with.
“Having that ability and that perspective into Linux is one of the ways we hope we can help bring newer developers into the kernel track.”
So I understood Sarah Novotny is addressing the problem of new contributors, not the one Linus Torvalds sees, of new maintainers.
So, your comment that
how many people stop contributing because of tiny papercuts. There is a non-trivial number of people who have the skills to make meaningful contributions, but lack the time
is not the problem the Linux Foundation has, but the one that Microsoft’s Sarah Novotny wants to solve. Those are two different problems, with two different solutions. On the surface, they might seem the same, but they are not. They might be correlated, but the solution for one problem does not necessarily mean it is the solution for the other.
Therefore, my argument still stands:
[having a plain-text email list as central communication and collaboration to the Linux’s kernel] might work as an efficient sieve to select the ones who want to be active contributors [i.e., maintainers] from those who are just superficial or ephemeral contributors.
If we are going to address, though, that it might hinder new contributors, then I tend to agree with you. :)
This is just my opinion, but I think that while she’s totally missing the mark in suggesting that finding Linux kernel module maintainers has anything at all to do with plain-text email patch submission systems, the general issue she’s speaking to is one that has generated a lot of discussion recently and should probably not be ignored.
Also, in the spirit of the open source meritocracy, I’d prefer to let people’s actions and track records speak more loudly than which company they happen to work for, but then, lots of people consider my employer to be the new evil empire, so my objectivity in this matter could be suspect :)
So I understood Sarah Novotny is addressing the problem of new contributors, not the one Linus Torvalds sees, of new maintainers.
Bingo!
My first thought after reading this was “Does this person ACTUALLY think that having to use plaintext E-mail is even statistically relevant to the problem of finding module maintainers for the Linux kernel?”
In a general sense I believe that the open source community’s reliance on venerable tools that are widely rejected by younger potential contributors is a huge problem.
Martin Wimpress of Ubuntu Desktop fame has spoken about this a lot recently, and has been advocating a new perspective for open source to increase engagement by embracing the communications technologies where the people are, not where we would like them to be.
So he streams his project sessions on Youtube and runs a Discord, despite the fact that these platforms are inherently proprietary, and has reported substantial success at attracting an entirely new audience that would otherwise never have engaged.
is not the problem the Linux Foundation has, but the one that Microsoft’s Sarah Novotny wants to solve. Those are two different problems, with two different solutions. On the surface, they might seem the same, but they are not. They might be correlated, but the solution for one problem does not necessarily mean it is the solution for the other.
GKH even said that there are more than enough contributors in his last AMA, so having contributors is “a priori” not a problem right now.
Also better tooling would be beneficial for everyone.
That is not necessarily true: the introduction of a tool requires a change of mindset and workflow of every single person already involved in the current process, as well as update of the current instructions and creation of new references.
I have seen projects that took months simply to change a continuous integration tool (I have GHC in mind, besides the ones in the companies I worked for). Here, we are talking about the workflow of many (hundreds for sure, but I estimate even thousands of) maintainers and contributors.
If the problem is retrievability of the manual or of specific information, then I think that should be addressed first. But that topic is not brought up in the article.
I am able to deal with email patches, but I hate doing it. I would not enjoy maintaining a project which accepts only email patches.
Actually, that might work as an efficient sieve to select the ones who want to be active contributors from those who are just superficial or ephemeral contributors.
Ephemeral contributors evolve to maintainers, but if that initial step is never made this doesn’t happen. I don’t know if this will increase the number of maintainers by 1%, 10%, or more, but I’m reasonably sure there will be some increase down the line.
Finding reliable maintainers is always hard by the way, for pretty much any project. I did some git author stats on various popular large open source projects, and almost all of them have quite a small group of regular maintainers and a long tail of ephemeral contributors. There’s nothing wrong with that as such, but if you’re really strapped for maintainers I think it’s a good idea to treat every contributor as a potential maintainer.
But like I said, just because you can deal with email patches doesn’t mean you like it.
GMail, even in plain-text mode, tends to mangle inline patches since it has no way of only auto-wrapping some lines and not others.
Not a major issue as you can attach a diff, but from a review-perspective I’ve personally found folks more inclined to even look at my patches if they’re small enough to be inlined into the message. I say this as someone having both submitted patches to OpenBSD for userland components in base as well as having had those patches reviewed and accepted.
I personally gave up fighting the oddities of GMail and shifted to using Emacs for sending/receiving to the OpenBSD lists. I agree with Novotny’s statement that “GMail [is] the barrier.” The whole point of inlining patches like this is that the email body or eml file or whatever can be direct input to patch(1)…if GMail wraps lines of the diff, it’s broken the patch. Text mode or not.
Obviously, this doesn’t mean that, if the Linux kernel maintainers want to change their process, I think they shouldn’t. (Especially if they take a lot of funding from the Foundation…and as a result, Microsoft/GitHub.) OpenBSD is probably always going to take the stance that contributions need to be accessible to those using just the base system…which means even users of mail(1), in my interpretation.
Good point…so basically GMail (the client) doesn’t really work with an email-only contribution system. It’s Gmail SMTP via mutt/emacs/thunderbird/etc. or bust.
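For illustration, here is a minimal sketch of that “SMTP or bust” route: submitting an inline patch through Gmail’s SMTP endpoint with Python’s standard library, so the web UI never gets a chance to re-wrap the diff. The addresses, file name, and app password are placeholders, and this is just one way to do it; mutt, Thunderbird, or git send-email pointed at smtp.gmail.com should accomplish the same thing.

```python
# Minimal sketch: send an inline patch through Gmail's SMTP endpoint so the
# web UI never gets a chance to re-wrap it. Assumes an app password and a
# patch file produced by `git format-patch`; all names here are placeholders.
import smtplib
from email.message import EmailMessage
from pathlib import Path

patch = Path("0001-fix-typo.patch").read_text()

msg = EmailMessage()
msg["From"] = "me@gmail.com"
msg["To"] = "tech@openbsd.org"          # hypothetical recipient list
msg["Subject"] = "ls(1): fix typo in usage string"
# Plain text body; format=flowed is deliberately not used, so receiving MUAs
# should not re-wrap the diff lines.
msg.set_content(patch, subtype="plain", cte="8bit")

with smtplib.SMTP("smtp.gmail.com", 587) as s:
    s.starttls()
    s.login("me@gmail.com", "app-password-here")
    s.send_message(msg)
```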
I have found several subtle Linux kernel bugs related to PCIe/sysfs/vfio/NVMe hotplug. I fixed them locally in our builds, which will ultimately get published somewhere. I don’t know where, but it’s unlikely to ever get pushed out to the real maintainers.
The reason I don’t try to push them out properly? The damn email process.
I did jump through the hoops many years ago to configure everything. I tried it with gmail, and eventually got some Linux email program working well enough that I was able to complete some tutorial. It took me some non-negligible amount of time and frustration. But that was on an older system and I don’t have things set up properly anymore.
I don’t want to go through that again. I have too many things on my plate to figure out how to reconfigure everything and relearn the process. And then I won’t use it again for several months to a year and I’ll have to go through it all again. Sometimes I can get a coworker to push things out for me - and those have been accepted into the kernel successfully. But that means getting him to jump through the hoops for me and takes time away from what he should be doing.
So for now, the patches/fixes get pulled along with our local branch until they’re no longer needed.
I have no idea how dense they need to be to not be able to send text-only email from Apple Mail. It must be anecdotal, because I cannot believe that someone is smart enough to code anything, but too dumb to click Format > Change to text in the menu bar.
I would like to know: did removing XUL (and hence making FF faster) result in a net increase in the userbase, or did the userbase drop because people lost their favorite extensions and many extension writers migrated to Chrome?
The number of people using* extensions is a very small percentage of the FF userbase.
*or broadly speaking only a very small percentage of users ever change any default setting for any widely used software/service - which is why making things opt-out is such a commonly accepted but evil trick
either tech people or people who’ve used ff long enough that they don’t want change. tech people are pissed if their customized application can’t be customized anymore, the long term users are pissed if anything changes. imho, mozilla betting on the casual-user-base and dumbing down firefox wasn’t the smartest move. the quantum engine change was nice, but the things since then were actively against their userbase.
if anything, i hope servo will be an easily embeddable engine so there is some alternative to $webkit. maybe we can have a decent browser again, which isn’t trying to be my nanny.
My impression is that the perfectly spherical average Firefox user is a mythical illusion existing only in the heads of some Mozilla management.
Thinking goes like “Chrome has this much marketshare, if we make our browser more like Chrome, people will come to us!!!” – except that this isn’t how it works.
Instead they gave all the loyal, long-term users the finger. I used Firefox since it was Netscape, but I’ll be gone as soon as Firefox devs kill userChrome.css, because they are more successful in alienating actual users than they are in winning new ones over.
I think this comment from the article gets the point across:
To sum it up, there’s just a lot of old power users out there that are sick and tired of companies chasing mobile and cloud, and telling them that their aesthetics, their desire for fine-grained control over their machine, their desire for freedom over security… those things no longer matter and everything they grew up with is being molded into something designed to suit a brave new world that they feel they have no place in, or at least a much smaller place. It’s like… growing old, but before you’re 30 or 40 in many cases.
I agree, but how many people actively installed FF versus have it installed for them by friends or family? For example, on all machines that I helped set up, I installed FF (plain vanilla); except lately, I am forced to install Chrome (because a lot of Google properties have a degraded performance in FF).
This was how IE lost its market share in the first place right? Devs started recommending and in many cases actively installing the browser they liked in the computers they setup or maintained.
except lately, I am forced to install Chrome (because a lot of Google properties have a degraded performance in FF)
This is why I think the entire argument about XUL addons is a total red herring.
Most people never installed any add-ons. I myself only install two, and I’m a power user by almost any metric. But anyway, if web pages don’t work well in Firefox, then the whole argument is moot. Who cares how customizable Firefox is if the pages that I want or need don’t work well in it!
And Firefox’s greatest competitor isn’t abandonware any more.
I don’t know about that. I did (and do) use a bunch of addons for myself:
Always kill sticky
Cookie Quick Manager & Cookie Autodelete
ublock origin
Print Edit WE
Redirector
Privacy Badger
Tab session bar
and I know that another bunch exist when I need them. I installed FF for my friends and family because I actively use FF. The extensions such as Vimperator and Redirector and Cookie killers are too advanced to install for others. I was tempted to jump ship when they changed the addons, and can imagine that others might have.
The degradation is a recent phenomenon, much later than when the XUL addons got dropped. Noticeable especially recently. In this, however, I think Google is playing the game because they have overwhelming numbers on their side.
The degradation is a recent phenomenon, much later than when the XUL addons got dropped.
Depends on what you consider a degraded experience, I guess. Google has been showing pop-up ads recommending Chrome since 2012 at least. Google Talk worked out of the box in Chrome, but required an add-on for Firefox, until 2017. Similarly, Google Gears gave you offline support that worked out-of-the-box in Chrome but required an add-on in Firefox until around 2011, where Google’s web apps switched to standard HTML5 for their offline support.
You might never have noticed any of these, because you use an ad blocker and would’ve just installed the browser plug-ins if you needed them, but a lot of people were probably convinced to switch. Besides, this is what insiders are saying about it (the thread is from 2019, but the claim is that it had already been happening for years at that point).
Firefox removed XUL addons in 2017. They had 27% market share in 2009. By 2017, they were already down to 14%.
I would like to know: did removing XUL (and hence making FF faster) result in a net increase in the userbase, or did the userbase drop because people lost their favorite extensions and many extension writers migrated to Chrome?
Firefox had 18.70% market share in January 2015, 15.95% in January 2016, and 14.85% in January 2017, was down to 13.04% in October 2017, Firefox 57 (which removed XUL) released in November 2017 at 12.55% market share, by January 2018 they were at 11.87%, and then there was a long, slow bleed until October 2019, when they were at 9.25%.
If anything it seems like they were bleeding users a bit faster before 57 than afterwards. You could read that in a lot of ways; there may have been some accelerated bleed-off of users ahead of Firefox 57 once the plan was announced, to the tune of maybe 1-1.5%, but it doesn’t seem to have been enormous and there definitely wasn’t a disproportionately huge drop afterwards. Or maybe they were bleeding over the perceived performance benefits of Chrome, and dropping XUL helped stem the tide.
A drop in percentage does not indicate a loss of users. They might have been getting more users, but just growing much slower than their competition (which isn’t an absurd claim as the number of internet users worldwide doubled in the last decade: https://ourworldindata.org/grapher/broadband-penetration-by-country ).
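A toy calculation makes the point concrete; the numbers below are made up for illustration, not the figures quoted above:

```python
# Hypothetical numbers, purely illustrative: a falling market share does not
# by itself mean a shrinking user base if the overall market grows faster.
market_then, market_now = 2.0e9, 4.0e9   # assumed total internet users
share_then, share_now = 0.20, 0.15       # assumed browser share

print(market_then * share_then)  # 400 million users then
print(market_now * share_now)    # 600 million users now: share fell, users grew
```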
Remember, Firefox’s WebExtensions are just as powerful as (if not more powerful than) Chrome’s WebExtensions. Nobody migrates off of Firefox to Chrome because Chrome’s extensions are more powerful.
Now, it’s certainly possible that people would stick with an otherwise-inferior browser because the extensions are superior to chrome’s, but Mozilla would probably say that the whole point of the switch is to make Firefox stop being an inferior browser.
Your post starts with the advice that you claim is problematic, but then proceeds to prove it correct. If this were a research paper I’d thank you for publishing it despite finding the opposite that you’ve expected :)
More seriously though, learning that cryptography is advanced technology and not magic takes years of study, and until then they are indistinguishable and best treated with a healthy distance.
There are technologies that exist at the overlap of many different areas, and that makes them particularly difficult. Applied cryptography is one of them. The combination of mathematical formalism, high-stakes security threat modeling, and detailed hardware and compiler knowledge it requires is rare.
What part of today’s political climate would prevent you from voicing a rational, erudite argument on why you disagree with some parts of the analysis?
What would be the benefit, do you think? To me, to you, to Dan, or to the community?
Dan presumably believes his (I think his, please correct me if the pronoun is incorrect) conclusions are correct, and additional argumentation on the point is unlikely to generate a retraction or a significant change.
You might enjoy going over the argument with me and spotting issues, but you might not. I don’t know.
The community doesn’t really gain anything if I point out some things I think Dan has missed.
For me, there’s not really any benefit in pointing out issues with methodology or cited papers or maybe unreasonable comparisons. I’m not getting paid, it won’t get me more dates, it won’t win me more friends, and such points are unlikely important enough to win me the golden Rationalist of the Year fedora or whatever.
And what of the costs?
Dan doesn’t lose much, since his original article is completely reasonable, and it’s not like anybody is realistically going to hold him to the fire for making an incorrect or imperfect argument in support of the zeitgeist of the times.
The community may, in the ensuing discussion, get really ugly. The experience of past threads about Damore’s memo and other things suggests that the odds are high that we’ll just end up with unpleasantness if there is any genuine disagreement. Even assuming we can all have a polite and dispassionate reasoned discussion, it is quite a popular opinion these days that simply speaking of certain things constitutes violence–and I do not wish to accidentally commit violence against fellow Lobsters!
To you, I’d imagine there’s no real cost beyond continuing to burn cycles reading a discussion back and forth. Then again, think of all the creative, productive, or cathartic things you could be doing instead of reading a thread of me trying to well-actually Dan without spawning a dumpsterfire.
To me, it at the very least requires a time commitment in terms of research and writing. I need to go grab counterexamples or additional context for what he’s written about (say, additional statistics about how majors actually turn into careers, background research around why the mid 1980s has that change in CS, and so forth) and make that into a coherent argument. And that’s fine, and if that were all I stand to lose I chalk it up as the opportunity cost of performance art and the good fun of public debate.
Alas, that’s not the potential full cost at all. Even assuming I could put together a critique that cleared some arbitrarily high bar of rationality and erudition, it is entirely probable that it’ll be held against me at some time or another–just look at what happened to RMS, an author who surpasses me in ideological consistency and rationality as much as he differs from me in viewpoint. I may well be held liable for (as @nebkor put it) “garbage opinions” that have no textual or factual basis. This could cost me career opportunities, this could cost me friendships, this could cost me any number of things–and that just isn’t outweighed by the minor joy I get from debating online with people.
(And note: I bear this risk not only for taking a completely opposite position, but for taking a position probably in agreement but quibbling about the details and suggesting different arguments. My experience is that people get grumpier and more vicious over little differences than large ones.)
That’s the sad state of things today. People have made the already-marginal benefits of reasoned and civil public debate much less than the potential costs, and there is almost no goodwill left that one can argue in good faith (or in bad faith but with rigor as polite arguendo). We have lost a great deal of intellectual curiosity, freedom, and frankly ludic capacity–things are too serious and the stakes too high for playing around with different ideas and viewpoints!
Yet, still, it is better to react and run the risk of being ostracised and shunned than it is to remain quiet in fear of retribution. Once enough people start doing this those who want to silence any and all who dare to voice a differing opinion will no longer be able to do so. They will be exposed for what they are, they’ll lose their power over others and with a bit of luck end up as a side note in the history books, taught to children in the same lesson where they learn about book burnings and state propaganda drives. Let’s hope that that is where it ends and that freedom of expression remains the norm.
I have voiced some differing opinions on this board and elsewhere yet I’m still here. For now the worst that will happen is a pink rectangle in the browser telling you that your posts have been flagged a lot recently and a reduction in whatever those karma points are called here. That pink rectangle is easily removed with a uBlock rule and those points don’t matter to begin with.
I don’t know what @friendlysock intended, but from my perspective, one benefit is that it draws attention to the fact that people disagree but for $reasons, don’t want to go into details.
To me, it at the very least requires a time commitment in terms of research and writing. I need to go grab counterexamples or additional context for what he’s written about (say, additional statistics about how majors actually turn into careers, background research around why the mid 1980s has that change in CS, and so forth) and make that into a coherent argument. And that’s fine, and if that were all I stand to lose I chalk it up as the opportunity cost of performance art and the good fun of public debate.
You opened up this thread by claiming that you don’t agree “with all of the analysis”. And yet you reveal here that you lack a “coherent argument”, and have neither counterexamples nor context to inspire your disagreement in the first place. This is one way of defining a bad faith argument. You can’t have it both ways. You either disagree because you’ve got a reasonable analysis yourself, or because you have an existing bias against what has been written. As you’ve expressed in many words, one of these is worth sharing and one is not.
You either disagree because you’ve got a reasonable analysis yourself, or because you have an existing bias against what has been written.
I hold that it is entirely possible to disagree based on a rough analysis or by applying heuristics (for example, asking “what is missing from this chart?” or “are there assumptions being made about which population we’re looking at?”) that are in good faith but which require additional cleanup work if you want to communicate effectively. This is a third option I don’t believe you have accounted for here.
The nastiness in this thread somewhat underscores the importance of arguing coherently and choosing words carefully–I haven’t stated which points I disagree with Dan (nor how much!) and yet look at the remarks some users are making. With such understanding and charitable commentary, no argument that isn’t fully sourced and carefully honed can even be brought up with the hope of a productive outcome. That’s not a function of reasonable arguments not existing, but just being able to read the room and see that any blemish or misstatement is just going to result in more slapfighting.
We don’t have discourse, because people aren’t interested in exploring ideas even if they’re not fully-formed. We don’t have debate, because people are uncivil. What’s left then is argument and bluster, and better to protest than to participate.
We don’t have discourse, because people aren’t interested in exploring ideas even if they’re not fully-formed.
No, we don’t have discourse because in most discussions that even remotely brush up against the intersection of tech with other currents in society, members such as yourself cry “politics” and immediately shut things down. And much more often than not these topics call us to engage in ethical problem solving i.e. what to do about toxic members of the tech community or how to address inequalities in open source. So by shutting down discussion, folks such as yourself – whether you mean to or not – send the message to those affected by these issues that not only are you not interested in solving these problems, but that you’re not even interested in discussing them at all. In this context, who do you think sticks around?
Dan doesn’t lose much, since his original article is completely reasonable, and it’s not like anybody is realistically going to hold him to the fire for making an incorrect or imperfect argument in support of the zeitgeist of the times.
He wrote it in 2014 and updated it recently. I don’t think it’s entirely fair to tag that as “in support of the zeitgeist of the times.” Your turn of phrase makes it sound much more fleeting. That is not to call into question the other reasons you don’t want to differ; but I think the way you stated this does not give the piece enough credit.
I don’t mean to make it sound fleeting–rather that things being what they are right now, I doubt that anybody is going to be terribly upset if there’s some flaw in his argument revealed through discussion at this time. I apologize for any confusion or disrespect that may have been parsed there.
Memcached is definitely not abandonware. It’s a mature project with a narrow scope. It excels at what it does. It’s just not as feature rich as something like Redis.
The HA story is usually provided by smart proxies (twemcache and others).
It’s designed to be a cache, it doesn’t need an HA story. You run many many nodes of it and rely on consistent hashing to scale the cluster. For this, it’s unbelievably good and just works.
I would put it with a little bit more nuance: if you already have Redis in production (which is quite common), there is little reason to add memcached too and add complexity/new software you may not have as much experience with.
Many large tech companies, including Facebook, use Memcached. Some even use both Memcached and Redis: Memcached as a cache, and Redis for its complex data structures and persistence.
Memcached is faster than Redis on a per-node basis, because Redis is single-threaded and Memcached isn’t. You also don’t need “built-in clustering” for Memcached; most languages have a consistent hashing library that makes running a cluster of Memcacheds relatively simple.
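As a rough sketch of what those client libraries do (this is the general idea, not any particular library’s implementation), node selection with a hash ring looks something like the following; the node names and replica count are made up:

```python
# Minimal sketch of consistent hashing for picking a memcached node.
# Real clients use ketama-style rings with many virtual nodes per server;
# this just shows why adding/removing a node only remaps a fraction of keys.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=100):
        self.ring = []                      # sorted list of (point, node)
        for node in nodes:
            for i in range(replicas):       # virtual nodes smooth the spread
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.node_for("user:42"))   # the node this key's get/set should go to
```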
If you want a simple-to-operate, in-memory LRU cache, Memcached is the best there is. It has very few features, but for the features it has, they’re better than the competition.
N-1 processes is better than nothing but it doesn’t usually compete with multithreading within a single process, since there can be overhead costs. I don’t have public benchmarks for Memcached vs Redis specifically, but at a previous employer we did internally benchmark the two (since we used both, and it would be in some senses simpler to just use Redis) and Redis had higher latency and lower throughput.
Yeah agreed, and I don’t mean to hate on Redis — if you want to do operations on distributed data structures, Redis is quite good; it also has some degree of persistence, and so cache warming stops being as much of a problem. And it’s still very fast compared to most things, it’s just hard to beat Memcached at the (comparatively few) operations it supports since it’s so simple.
Real secure messaging software. The standard and best answer here is Signal,
Oh please. They aren’t even close to sharing the same level of functionality. If I want to use Signal, I have to commit to depending on essentially one person (moxie) who is hostile towards anyone who wants to fork his project, and who completely controls the server/infrastructure. And I’d have to severely limit the options I have for interfacing with this service (1 android app, 1 ios app, 1 electron [lol!] desktop app). None of those are problems/restrictions with email.
I don’t know what the federated, encrypted ‘new’ email thing looks like, but it’s definitely not Signal. Signal is more a replacement for XMPP, if perhaps you wanted to restrict your freedom, give away a phone number, and rely on moxie.
Matrix is a federated messaging platform, like XMPP or email. You could definitely support email-style use of the system it’s just that the current clients don’t support that. The protocol itself would be fine for email, mailing lists and git-send-email.
The protocol also gives you the benefits of good end-to-end encryption support without faff, which is exactly what general email use and PGP don’t give you.
Adding patch workflow to Matrix is no different to adding it to XMPP or any other messaging solution. Yes, it is possible but why?
I can understand you like Matrix but it’s not clear how Matrix is getting closer to e-mail replacement with just one almost-stable server implementation and the spec that’s not an IETF standard. I’d say Matrix is more similar to “open Signal” than to e-mail.
If I only knew the future I’d counter-argue that, but given that the future is unknown I can only extrapolate from the current and the past. Otherwise Matrix may be “getting closer” to anything.
Do you have any signs that Matrix is getting e-mail patch workflow?
Mailing lists could move to federated chatrooms. They moved from Usenet before, and in some communities moved to forums before the now common use of Slack.
I’m not saying it would be the best solution, but it’s our most likely trajectory.
I do think, actually, that converting most public mailing lists to newsgroups would have a few benefits:
It’d make their nature explicit.
It’d let us stop derailing designs for end-to-end encryption with concerns that really apply only to public mailing lists.
I could go back to reading them using tin.
Snark aside, I do think the newsgroup model is a better fit for most asynchronous group messaging than email is, and think it’s dramatically better than chat apps. Whether you read that to mean slack or any of the myriad superior alternatives to slack. But that ship sailed a long time ago.
Mailing lists don’t use slack and slack isn’t a mailing list. Slack is an instant messaging service. It has almost nothing in common with mailing lists.
It’s really important to drive this point home. People critical of email have a lot of good points. Anyone that has set up a mail server in the last few years knows what a pain it is. But you will not succeed in replacing something you don’t understand.
Personally I think that GitHub’s culture is incredibly toxic. Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.
I think we’re starting to see a push back from this GitHub/Slack culture at last back to open, federated protocols like SMTP and plain git. Time will tell. Certainly there’s nothing stopping a project from moving to {git,lists}.sr.ht, mirroring their repo on GitHub, and accepting patches via mailing list. Eventually people will realise that this means a lower volume of contributions but with a much higher signal to noise ratio, which is a trade-off some will be happy to make.
Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.
It’s not like you used to have levers for mailing lists, though, that would stop marc.org from archiving them or stop people from linking those marc.org (or kernel.org) threads. And drive-bys happened from that, too. I don’t think I’m disputing your larger point. Just saying that it’s really not related to the message transfer medium, at least as regards toxicity.
Sure, I totally agree with you! Drive-bys happen on any platform. The difference is that (at least until recently) on GitHub you had basically zero control. Most people aren’t going to sign up to a mailing list to send an email. The barrier to sending an email to a mailing list is higher than the barrier to leaving a comment on GitHub. That has advantages and disadvantages. Drive-by contributions and drive-by toxicity are both lessened. It’s a trade-off I think.
I guess I wasn’t considering a mailing list subscription as being meaningfully different than registering for a github account. But if you’ve already got a github account, that makes sense as a lower barrier.
(A separate issue: I gave up on Matrix because its e2e functionality was too hard to use with multiple clients)
and across UA versions. When I still used it I got hit when I realized it derived the key using the browser user agent, so when OpenBSD changed how the browser presented itself I was suddenly not able to read old conversations :)
Functionality is literally irrelevant, because the premise is that we’re talking about secure communications, in cases where the secrecy actually matters.
Of course if security doesn’t matter then Signal is a limited tool, you can communicate in Slack/a shared google doc or in a public Markdown document hosted on Cloudflare at that point.
Signal is the state of the art in secure communications, because even though the project is heavily driven by Moxie, you don’t actually need to trust him. The Signal protocol is open and it’s basically the only one on the planet that goes out of its way to minimize server-side information storage and metadata. The phone number requirement is also explicitly a good design choice in this case: as a consequence Signal does not store your contact graph - that is kept on your phone in your contact store. The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool) or that Signal would have to store the contact graph of every user - which is a way more invasive step than learning your phone number.
even though the project is heavily driven by Moxie, you don’t actually need to trust him
Of course you must trust Moxie. A lot of Signal’s privacy rests on trusting them not to store certain data that they have access to. The protocol allows for the data not to be stored, but it gives no guarantees. Moxie also makes the only client you can use to communicate with his servers, and you can’t build it yourself, at least not without jumping through hoops.
The phone number issue is what’s keeping me away from Signal. It’s viral, in that everyone who has Signal will start using Signal to communicate with me, since the app indicates that they can. That makes it difficult to get out of Signal when it becomes too popular. I know many people that cannot get rid of WhatsApp anymore, since they still need it for a small group, but cannot get rid of the larger group because their phone number is their ID, and you’re either on WhatsApp completely or you’re not. Signal is no different.
And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have. They can also add your non-Signal friends to the graph, since you ask about their numbers too. Maybe you’re right and Moxie does indeed not store this information, but you cannot know for sure.
What happens when Moxie ends up under a bus, and Signal is bought by Facebook/Google/Microsoft/Apple and they suddenly start storing all this metadata?
Signal is a 501c3 non-profit foundation in the US; Moxie does not control it, nor is he able to sell it. In theory every organization can turn evil, but there is still a big difference between non-profits, who are legally not allowed to do certain things, vs corporations, who are legally required to serve their shareholders, mostly by seeking to turn a profit.
And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have.
There are two points here that I’d like to make, one broader and one specific. In a general sense, Signal does not implement a feature until they can figure out how to do it securely, while leaking as little information as possible. This has been the pattern for basically every feature that Signal has. Specifically, phone numbers are the same: the Signal app just sends a cryptographically hashed, truncated version of phone numbers in your address book to the server, and the server responds with the list of hashes that are Signal users. This means that Signal on the server side knows if any one person is a Signal user, but not their contact graph.
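A rough sketch of that lookup flow, with the hash function and truncation length chosen arbitrarily for illustration (this is the idea as described above, not Signal’s actual wire format):

```python
# Rough sketch of truncated-hash contact discovery as described above.
# The hash choice and truncation length are illustrative assumptions only.
import hashlib

def truncated_hash(phone, length=10):
    return hashlib.sha256(phone.encode()).hexdigest()[:length]

# Client side: hash the local address book and send only the short prefixes.
address_book = ["+15551234567", "+15559876543"]
query = [truncated_hash(n) for n in address_book]

# Server side: intersect with registered users' hashes and answer which
# prefixes are known, without seeing raw numbers in the request itself.
registered = {truncated_hash("+15551234567")}
matches = [h for h in query if h in registered]
print(matches)
```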
Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?
The Signal app just sends a cryptographically hashed, truncated version of phone numbers in your address book
These truncated hashes can still be stored server-side, and be used to make graphs. With enough collected data, a lot of these truncated hashes can be reversed. Now I don’t think Signal currently stores this data, let alone do data analysis. But Facebook probably would, given the chance.
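The reversal point follows from the small input space: phone numbers are short enough to enumerate, so a stored truncated hash offers limited protection against an offline sweep. A sketch, again with a made-up number and truncation length:

```python
# Sketch of the reversal argument: the phone-number space is small enough to
# enumerate, so stored truncated hashes can be matched back to numbers
# offline. The number and truncation length here are illustrative.
import hashlib

def truncated_hash(phone, length=10):
    return hashlib.sha256(phone.encode()).hexdigest()[:length]

target = truncated_hash("+15551234567")   # what the server kept

# Enumerate one hypothetical prefix; a real attacker would sweep them all.
for n in range(10_000_000):
    candidate = f"+1555{n:07d}"
    if truncated_hash(candidate) == target:
        print("recovered:", candidate)
        break
```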
Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?
WhatsApp was a for-profit company, 501(c)3 work under quite different conditions. Not saying they can’t be taken over, but this argument doesn’t cut it.
Oh but Signal users can always meet in person to re-verify keys, which would prevent any sim swap attack from working? No, this (overwhelmingly) doesn’t happen. In an era where lots of people change phones every ~1-2yr, it’s super easy to ignore the warning because 99% of the time it’s a false positive.
The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool)
This is a solved problem. I mean, how do you think you got the phone numbers for your contacts in the first place? You probably asked them, and they probably gave it to you. Done.
Many projects are so complex that we need a prototype to explore the options. Correctness is not an important aspect there. Python (JavaScript, Lua, Ruby, TCL,…) are great for prototyping. Never do we throw away the prototype to build the real product. So there is the desire for a language which can do both: Quick prototypes and reliable products.
Rust can do both, about two years ago at work we wrote a ~300LOC middleware to handle redirects.
Then proceeded to put that in front of the full request traffic of the company. This was our first Rust project, but a massive success: it took a reasonable amount of time to put the tool into production, the compiler said no a couple of times, but rightly so. The memory/cpu usage of the middleware were ridiculously low, even though it dealt with millions of redirects. Integrating cuckoo hashing, a LRU cache, and a database backend was easy. Zero production problems afterwards, because this thing just worked once it compiled.
Ok, so 300 LOC is not exactly complex, but that’s not the point. This middleware was a part of a surreally complex project and that’s how prototypes should work: you identify part of a complex project, cordon it off and then prototype and implement it. Parallelize and iterate with everything and your complex project ends up being viable. Rust allowed us to get in, solve a problem, operate it in production and have it work reliably with a low maintenance overhead.
I would say yes, because Rust-without-lifetimes* hits a sweet spot of letting you focus on structure without getting distracted by (as many) runtime bugs.
*Writing fresh code, you can get away with cloning everything and having &mut self being the only borrow-y part.
Lots of clone() and unwrap() and coding is quick, I suppose. Those can be easily searched for to clean it up once the design is stable. This clean up will surely be painful but you just pay off the technical debt accumulated to finish the prototype quickly.
I prototyped some code in Rust that used lots of clone and had an overabundance of Rc. Once I had it correct and a full suite of tests proving its behavior, the compiler guided me on the refactoring. Just removing the Rc usage got me a 6x speedup on benchmarks.
I don’t think using Rust obviates the truism “First make it correct. Then make it fast”. Instead it allows you to use the compiler and the types to guide your make it fast refactoring and know that you didn’t break the code in non-obvious harder to test ways.
That’s basically what I do. It’s actually not too difficult in my experience ’cause the compiler is so good at guiding you. The hard part happens before that, when you structure things correctly.
I wonder if there would be a market for a simpler, runtime-checked, garbage-collected language that’s a subset of Rust syntax, which you could use on a per-module basis for prototyping, but which still keeps the non-safety-related structures and idioms that real Rust code uses, to aid porting to “safe mode” later if you want to keep it?
Sort of like the anti-TypeScript, but instead of adding some type safety, you keep the types but lose the borrow checker, and you pay for it with performance. But not by writing different code where you fill it with calls to cells and ref counts and such; rather, you leave those annotations out and they just get added in during compilation / interpretation.
If you could do it without perverting the syntax much, it’d surely be easier to port from than if you say, just wrote your PoC in Python or JS.
Really depends on what and how much you think you’ll have to refactor. Rewrites that challenge bigger architectural decisions are more costly in my experience with hobby (!) rust projects.
But if your protocol / API is stable, I’d go with rust from the start.
I prototype with Python, and gradually improve reliability with static typing (i.e. mypy) and unit tests. Rust is great, but its correctness guarantees require an upfront cost which will likely penalise rapid prototyping. It all depends though.
On embedded systems, Rust is amazing for prototyping, especially compared to C or C++. It has a proper ecosystem, and debugging Rust’s compiler errors is far, far superior to debugging runtime bugs on the board.
People have pointed out that Google is much less reliant on third-party cookies than the competition, because users are probably on a Google-owned platform already, and if they’re not, they’re almost certainly logged in to Google in their session. This doesn’t really impact Google Analytics or Google’s ads, but it does harm their competitors in the user surveillance and ad targeting business, with the benefit of appearing to care about user privacy.
This definitely isn’t a bad thing for humanity, though. I wouldn’t mind a world without third-party tracking cookies. I also wouldn’t mind a world with fewer user surveillance and ad targeting companies.
Fewer companies mean more unified/combined databases though. That’s not necessarily good news when we’ve been hearing for years that Google is looking to combine medical, financial and government-provided data with their other datasets.
I’m glad there are finally blog posts that mention working remotely isn’t all sunshine and rainbows. It takes a lot of effort to balance home/work life, avoid distraction, and communicate with your coworkers.
I’ve worked for five years remotely and I wouldn’t do it again.
Yes, you need discipline, to avoid distraction, and to have a clear separation between home and work, but the dealbreaker for me is how much less efficient and more isolating remote working is.
Isolating from the “having conversations” perspective. A mind-boggling amount of progress is an outcome of having random conversations and random discussions with people in an office, or attending the right meeting or the right devJF or the right event. It’s also the sad reason why large companies spend so much on flying people around countries.
I’ve pretty much only worked remotely for the past 7 years and it is awfully isolating. I’d disagree with you on efficiency though - on the few occasions where I’d go visit the office I’d spend like 10 hours there and would have 3 hours of real work done. Remote work does make burnout more visible though.
So I think that if you’re in a rut you’ll do less work remotely but otherwise you’ll be much more efficient.
Honestly I can’t imagine working in an office. The days are so short already and the inefficiency of the whole culture would drive me mad.
I think that main issue with remote work (which I do, and love) is that all kinds of things that are implicit in an onsite office need to be communicated differently. The obvious example is whether or not someone is busy, but other things like who is chatting with who, is there a big meeting going on, and after meeting chit chat. All kinds of signals physical proximity provides simply aren’t there in a remote environment.
Both remote and onsite setups work, and they have different strengths, in my experience.
I agree on the conversations, it’s one of the things I miss from office work, but it certainly seems that depends on personality type. There are people who hate that part the most and find it distracting.
Another thing I just remembered: you also effectively need an extra room in your house/apt/etc to dedicate to work. Depending on where you live, going from N rooms to N+1 can be a very expensive proposition.
Yes, let’s replace a system which has been tested and proven and worked on since the 1990s with a random 47-commit project someone just tossed up on GitHub. Because good encryption is easy.
I don’t see the point in sarcasm. PGP does many things and most of them are handled poorly by default. This is not a PGP replacement; it’s a tool with a single purpose: file encryption. It’s not for safe transfers, it’s not for mail. It’s got a mature spec, it’s designed and developed by folks who are in the crypto community, and there are two reference implementations. It does one thing and does it well, which is everything PGP isn’t.
In a cryptography context, “since the 1990s” is basically derogatory. Old crypto projects are infamous for keeping awful insecure garbage around for compatibility, and this has been abused many many times (downgrading TLS to “export grade” primitives anyone?)
Whatever its faults, there are plenty of good reasons to use protobuf not related to scale.
Protobuf is a well known, widely used format with support in many different programming languages. And it’s the default serialization format used by gRPC.
The only more boring choice would be restful JSON. But gRPC is actually easier to use for an internal service. You write a schema and get a client and server in lots of different languages.
And you also get access to an entire ecosystem of tools, like lyft’s envoy, automatic discovery, cli/graphical clients, etc.
Maybe instead of using an atypical serialization format (or god-forbid rolling your own), it would be better to spend your innovation tokens on something more interesting.
I second ASN.1. Although underappreciated by modern tech stacks, it is used quite widely, e.g. in X.509, LDAP, SNMP and very extensively in telecom (SS7, GSM, GPRS, LTE, etc). It is suitable for protocols that need a unique encoding (distinguished encoding rules, DER) and for protocols where you can’t keep all the data in memory and need to stream it (BER).
It has some funny parts that might be better done away with, e.g. the numerous string types that nobody implements or cares about. I find it hilarious to have such things as TeletexString and VideotexString.
Support in today’s languages could be better. I suspect that Erlang has the best ASN.1 support of any language. The compiler, erlc, accepts ASN.1 modules straight up on the command line.
Nobody should use BER ever again, and people should use generated parsers rather than (badly) hand-rolling them. None of the certificate problems I have seen are fundamental to ASN.1; they come from badly hand-rolled implementations of BER.
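As a small illustration of the “use a library or generated parser” point, a struct can be marshalled to and from DER with Go's standard encoding/asn1 package. The message type below is invented for the example; real protocols would come from an ASN.1 module and a compiler.

```go
package main

import (
	"encoding/asn1"
	"fmt"
	"log"
)

// Handshake is a made-up message type; field order defines the SEQUENCE layout.
type Handshake struct {
	Version int
	Name    string `asn1:"utf8"`
	Flags   asn1.BitString
}

func main() {
	msg := Handshake{
		Version: 1,
		Name:    "example",
		Flags:   asn1.BitString{Bytes: []byte{0x80}, BitLength: 1},
	}

	// Marshal produces DER, so the encoding is canonical.
	der, err := asn1.Marshal(msg)
	if err != nil {
		log.Fatal(err)
	}

	// Unmarshal parses the DER back into the struct, no hand-written parser needed.
	var decoded Handshake
	if _, err := asn1.Unmarshal(der, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", decoded)
}
```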
XDR/sunrpc predates even that by ~a decade I believe, and its tooling (rpcgen) is already available on most Linux systems without installing any special packages (it’s part of glibc).
But gRPC is actually easier to use for an internal service. You write a schema and get a client and server in lots of different languages.
Swagger/OpenAPI is so much better than gRPC in that respect that it is borderline embarrassing. No offense intended.
It's human readable and writable. You can include as much detail as you want. For example, you can include only the method signatures, or you can include all sorts of validation rules. You can include docstrings. You get an interactive test GUI out of the box which you don't need to distribute; all anyone needs is the URL of your Swagger spec. There are tools to generate client libraries for whatever languages you fancy, certainly more than gRPC offers, and in some cases multiple library generators per language.
But most importantly, it doesn't force you to distribute anything. There is no compile step necessary. Simply call the API via HTTP; you can even forge your requests by hand.
In a job I had, we replaced a couple of HTTP APIs with gRPC because a couple of Google fanboys thought it was critical to spend time fixing something that just worked with whatever Google claims to be the be-all-end-all solution. The maintenance effort for those APIs easily jumped up an order of magnitude.
gRPC with protobuf is significantly simpler than a full-blown HTTP API. In this regard gRPC is less flexible, but if you don't need those features (i.e. you really are just building an RPC service), it's a lot easier to write and maintain. (I've found Swagger to be a bit overwhelming every time I've looked at it.)
Why was there so much maintenance effort for gRPC? Adding a property or method is a single line of code and you just regenerate the client/server code. Maybe the issue was familiarity with the tooling? gRPC is quite well documented and there are plenty of stack-overflowable answers to questions.
I’ve only ever used gRPC with python and Go. The python library had some issues, but most of them were fixed over time. Maybe you were using a language that didn’t play nice?
Also, this has nothing to do with Google fanboyism. I worked at a company where we used the Redis protocol for our RPC layer, and it had significant limitations. In our case, there was no easy way to transfer metadata along with a request. We needed the ability to pass through a trace ID, and we also wanted support for cancellation and timeouts. You get all of that out of the box with gRPC (in Go you use context). We looked at other alternatives, and they were either missing features we wanted or so esoteric that we were afraid they would present too much of an upfront hurdle for incoming developers.
I guess we could’ve gone with thrift. But gRPC seemed easier to use.
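For what it's worth, the trace-ID and deadline propagation described above is mostly just gRPC metadata plus Go's context. Here is a rough server-side sketch; the "x-trace-id" header name is only an example, not a gRPC convention.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

// tracingInterceptor pulls a trace ID out of the incoming metadata and logs it
// alongside the method name and the remaining deadline for the request.
func tracingInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	traceID := "unknown"
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		if vals := md.Get("x-trace-id"); len(vals) > 0 { // example header name
			traceID = vals[0]
		}
	}
	if deadline, ok := ctx.Deadline(); ok {
		log.Printf("%s trace=%s deadline in %s", info.FullMethod, traceID, time.Until(deadline))
	}
	return handler(ctx, req)
}

func main() {
	// Register the interceptor; service registration from generated code would follow.
	srv := grpc.NewServer(grpc.UnaryInterceptor(tracingInterceptor))
	_ = srv
}
```

On the client side the same metadata is attached with metadata.AppendToOutgoingContext(ctx, "x-trace-id", id), and timeouts propagate automatically from context.WithTimeout.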
And it’s the default serialization format used by gRPC.
We got stuck using protobufs at work, and they’ve been universally reviled by our team as being a pain in the neck to work with, merely because of their close association with gRPC. I don’t think the people making the decision realized that gRPC could have the encoding mechanism swapped out. Eventually we switched to a better encoding, but it was a long, tedious road to get there.
What problems have you had with protobufs? All the problems the original post talks about come from the tool evolving over time while trying to maintain as much format compatibility as possible. While I agree the result is kind of messy, I’ve never seen any of those quirks actually cause significant problems in practice.
The two biggest complaints I’ve heard about gRPC are “Go gRPC is buggy” and “I’m annoyed I can’t serialize random crap without changing the proto schema.” Based on what I know about your personal projects, I can’t imagine you having either problem.
Part of the problem is that the Java API is very tedious to use from Clojure, and part of the problem is that you inherit certain properties of golang’s pants-on-head-stupid type system into the JVM, like having nils get converted into zeroes or the empty string. Having no way to represent UUIDs or Instants caused a lot of tedious conversion. And like golang, you can forget about parametric types.
(This was in a system where the performance implications of the encoding were completely irrelevant; much bigger bottlenecks were present in several other places in the pipeline, so optimizing at the expense of maintainability made no sense.)
But it’s also just super annoying because we use Clojure spec to describe the shape of our data in every other context, which is dramatically more expressive, plus it has excellent tooling for test mocks, and allows us to write generative tests. Eventually we used Spec alongside Protobufs, but early on the people who built the system thought it would be “good enough” to skip Spec because Protobufs “already gives us types”, and that was a big mistake.
Thanks for the detailed reply! I can definitely see how Clojure and protobufs don’t work well together. Even without spec, the natural way to represent data in Clojure just doesn’t line up with protobufs.
Why (practically) no mention of xmpp/jabber? It’s federated, has E2EE support (OMEMO), many FOSS clients and server implementations, and providers generally don’t require any personal info to sign up. The article only mentions that last bit briefly, but instead spends more time focusing on the various walled garden services out there.
It’s not trendy and new? Honestly the only reason I can think why these articles always gloss over it.
From a user point of view, I can see why it struggled. It is old, it wasn’t always great, OMEMO rollout has been slow and steady.
However, if you are writing an article like this you should know that XMPP in 2019 is really good. Services like Conversations make it a program that I use with real people in the real world every day.
Nerds like me use their domain as their ID. Other people just use hosted services. Doesn’t matter, it all works.
Decentralised services are always going to have a branding issue I guess.
It is listed under Worth Mentioning in our Federated section. The reason it is not a main feature is that the client ecosystem is so fragmented, which is due largely to the poor quality of the documentation. Many of the XEPs still remain in draft or proposed status.
However, if you are writing an article like this you should know that XMPP in 2019 is really good. Services like Conversations make it a program that I use with real people in the real world every day.
The issue is that Conversations is the only good client. If there were iOS and desktop clients as good as that, then we would be more likely to make it a main feature.
Nerds like me use their domain as their ID. Other people just use hosted services. Doesn’t matter, it all works.
There is also Quicksy.im by the Conversations author that provides even easier on-boarding for non-nerds but still uses XMPP underneath.
For me the biggest problems with XMPP are lack of good clients for iOS and desktops. There is Dino.im but still in beta and it’s not clear if there will ever be an iOS client with Conversations feature-parity.
It’s not trendy and new? Honestly the only reason I can think why these articles always gloss over it.
I should mention here that that is not the case at all. We look at a number of factors, including client quality, developer documentation quality, and the types of 'footguns' involved, i.e. where a user might expect something to be encrypted when in reality it is not, etc.
You’re being too kind to XMPP, like PGP it’s another example of focusing on things that are trendy in some FOSS circles and meanwhile losing focus on actually providing value where it really matters to users.
It’s trendy to assume that federation is an unequivocal good thing and centralized services are bad, when looking deeper into the topic reveals it’s a mess of tradeoffs. Every time this comes up, Moxie’s “The Ecosystem is Moving” post is looking more and more insightful.
XMPP, like PGP provides a horrible user experience unless you have extensive domain-specific knowledge. In XMPP’s case, federation is partly to blame for that. Another part is that XMPP is very much a “by nerds, for nerds” thing which comes with a very different set of priorities than anything that aims to be used by most people.
For me the biggest problems with XMPP are lack of good clients for iOS and desktops.
For the desktop there is Gajim (gajim.org). It has OMEMO and works very well with Conversations. I have been using this for years and years, although I can only attest to the Linux version.
Yes, I agree. Gajim is fully featured. It's not without flaws: an outdated UI, OMEMO not built in and enabled by default, and apparently no official macOS version (there is https://beagle.im/ for macOS though…).
I guess XMPP's problem no. 1 is software fragmentation, as there is no single company maintaining a full suite of software. It's always mix-and-match depending on what OS/phone is used by one's friends.
For me the biggest problems with XMPP are lack of good clients for iOS and desktops. There is Dino.im but still in beta and it’s not clear if there will ever be an iOS client with Conversations feature-parity.
The issue with that is they have no tagged releases, which means maintainers either ship some ancient random old version or have to keep up to date with every commit. It is unacceptable for something as complex as an instant messenger to have no tagged releases, and we believe this is because the developers are not confident enough in the completeness of the product to make one.
Because it does not solve any privacy, security or resilience problems from the point of view of the individual.
a) Federation is meaningless from a resilience point of view, since XMPP accounts are not transferable; if someone is targeting me, they can take down the server I'm using. A user or programmer caring about the "network being resilient as a whole" is irrational. It should always be about the end-user experience.
b) Until people figure out how to create an open, incentive-aligned cloud messaging platform (a replacement for FCM and APNS), battery life will suck. Having multiple TCP sockets, each with its own heartbeat, for every one of your apps means short battery life. I want one socket with heartbeat values optimized for the network I'm using at the moment.
If you want to figure out how to build an open replacement for FCM/APNS, I would love to help.
Aren't all of these points even worse for the services mentioned in the article? They all depend on a single company, and none of the accounts or services are transferable.
Battery life doesn't 'suck'. My Nexus 5X regularly sees 24hr+ with moderate XMPP usage through Conversations (and no Google Play services installed).
I’ve been using XMPP with OMEMO E2EE for about a year now, after a FOSS enthusiast convinced me to use it. I’m using Gajim (https://gajim.org/) pretty much daily now and am quite happy with the feel and performance of the chat. It even has code highlighting blocks and other goodies and addons, and it stores the history in a sqlite database. Apparently it’s also possible to use multiple clients on the same account and the messages go to all your clients once they’re hooked up, but I’ve never tried it myself.
Apparently it’s also possible to use multiple clients on the same account and the messages go to all your clients once they’re hooked up, but I’ve never tried it myself.
It’s not abundantly clear to the user whether their file transfer was sent with E2EE or not. As for VOIP over Jingle, there’s no E2EE to be found there. We believe all channels should be E2EE and not “some features only”.
I’ve been using XMPP with OMEMO E2EE for about a year now, after a FOSS enthusiast convinced me to use it. I’m using Gajim (https://gajim.org/) pretty much daily now and am quite happy with the feel and performance of the chat.
That is the client we suggested for desktop under our Federated section.
We would like to see documentation for macOS. Pages like https://gajim.org/download/ simply say things like:
Apparently it’s also possible to use multiple clients on the same account and the messages go to all your clients once they’re hooked up, but I’ve never tried it myself.
Yes, and it works very well. I am using Conversations on my mobile and Gajim on the desktop. Both support OMEMO.
See omemo.top for the OMEMO implementation status across a large number of XMPP clients.
This is a bad idea in itself, but let’s be clear: a lot of these ecosystem-impacting changes are questionable because Google has a massive conflict of interest between Chrome and their business.
The ad-blocking changes, the mandatory forced Chrome login, this, the attempt to kill the URL, and others are just facets of that underlying conflict of interest.
We can expect Neqo to be the implementation of HTTP/3/QUIC in Firefox:
Mozilla is developing Neqo - a QUIC and HTTP/3 implementation written in Rust. Neqo is planned to be integrated in Necko (which is a network library used in many Mozilla-based client applications - including Firefox)
Finally, networking/protocol components written in a memory-safe language. As the complexity of these layers increased substantially over the last few years, I don’t think I’d be comfortable with using something written in C/C++ (or at least without half a decade of fuzzing/real world testing) for these.
Edit: curl uses another Rust QUIC implementation - Quiche from Cloudflare. Things are moving in the right direction.
Getting kind of tired of these thinly-veiled off-topic political posts to be quite honest, we’ve had a few of them now. Stick to technology, take your unwanted political views to hacker news.
It’s fine to flag as off-topic and hide the submission so it doesn’t bother you.
While this particular instance and article deals with a current hot-button political issue, the current structure of open source is vulnerable to this sort of disruption. See my comment here, and this comment by @chobeat.
Ah yes, agreed! Technology is the first known example of Plato's Perfect Forms. Technology exists in its own abstract, perfect realm that transcends space and time and has no relevance to anything happening in this physical reality.
Stick to technology, I say! And no funny human business!
The trouble with GnuPG wasn’t primarily the C part. It’s that OpenPGP is like those comedy swiss army knives. You open one blade and it’s an umbrella, the next one is a neck massager, the next a rabbit and so on.
It doesn’t even attempt to do many things badly: it does nebulous things, badly, back from the times in the 90s when people thought encryption was a far less diverse thing than it really is.
Email encryption? Just forget it. Signing software packages? signify Backups? Borgbackup Secure comms? Signal Encrypt files? age
Expectation: a pure text-based chat system, from a more enlightened age
Reality: trolls spamming channels with huge ascii-art dildos and/or swastikas, and ddos
Not in my reality.
I’m also surprised to hear that. Unless you explicitly look for troll channels, my experience has either been quiet (but quick to answer) or constantly active, and on topic.
Never saw anything like that on freenode. Mind me asking - what channels do you visit?
I can’t say I’ve seen the things that the grandparent comment mentioned, but they definitely wouldn’t be on Freenode. If you limit yourself to Freenode, IRC is a very safe and well-moderated experience, especially on some exemplary channels like the Haskell one.
I have accidentally wandered into uncomfortable conversations and much worse things on some of the other popular IRC networks, of which quite a few still exist: https://netsplit.de/networks/top100.php
The same thing is true of sketchy Discord servers as well; it’s not like IRC is unique in this regard.
A year or two back, Supernets was spamming hard on IRC networks. I forgot if Freenode was affected, but I know a lot of the smaller networks I was on were.
Not OP, but I spend my time on IRCnet and EFnet since my IRC use is just to stay in touch with friends. Anyway, last year I was DDoS’d pretty hard because someone wanted my nick on EFnet.
Sometimes I miss #C++ on EFnet, not enough to go back on EFnet, but I do miss it – a lot of wonderful people were there in the late 90s. Freenode feels a lot more sane in terms of management and tools for the system operators. Cloaks and nickname registration go a long way.
I’m in, like, 15 networks, and never saw anything like that either.
This is a good start.
Back in the days when the Internet started gaining wider adoption (1994-2005 roughly), there was a real sense of this tech-tinkerer ethos as we on the frontier of things enjoyed relatively unfettered freedom to mold this new environment. But the rest of the world joined us in enjoying the benefits of this new world and existing power structures reasserted themselves (as in governments - the best book I’ve read about it was written by a sociologist - Zeynep Tufekci’s Twitter and Tear Gas) or emerged under new fault lines (the newly minted big tech companies).
I think it was a huge mistake and a wasted opportunity that we technologists didn’t start to explicitly acknowledge these power structures and politics earlier, and that it took us the last decade to painfully learn that this is not our playground or toy when these technologies are wired into the world, and increasingly remake the world.
We spent decades arguing about open source licenses when people and companies who are used to thinking about and navigating power/politics ran circles around these naive intentions (it’s not sufficient to have the right license, it’s far more important that a project/technology has a healthy community organized around it, a license doesn’t guarantee against a company monopolizing a platform). I think it behooves us tech people to think about power & politics, because we have influence to affect changes - good or bad, and pretending that we don’t (“let’s just stick to technology”) biases outcomes more towards the bad end of the scale.
That’s why I like RFC8890 and Mark’s post, the approach isn’t hopelessly naive, it acknowledges the politics and tension around different interests and plonks down a good approach in its own corner of the tech world. In its own it’s not sufficient, but at least it finally heads in the right direction that we can build on.
I am flabbergasted that someone is able to send a patch to OpenBSD but is not able to set the GMail web interface to send plain-text emails. Or to install Thunderbird, which might have been the solution in that particular case.
I also never used git send-email, but I don't think it is the barrier to becoming a kernel maintainer. Actually, it might work as an efficient sieve, selecting the people who want to be active contributors from those who are just superficial or ephemeral contributors.
In my opinion, although GitHub supposedly lowers the barrier to first-time contributions, it increases the load on the maintainers, who have to deal with a lot of low-quality implementations, reckless hacks and a torrent of issues that might not even be real issues. That is my experience in the repositories I visit or contribute to.
Patching some component or other of OpenBSD is not a skill that transfers well to making some random application do what you want.
I agree with you.
On the other hand, if someone knows how to install OpenBSD, use cvs (or git), program in C and navigate the kernel's source code, I suppose they are capable of going to any search engine and finding the answer on how to send plain-text email via the GMail interface.
GMail mangles your "plain-text" messages with hard wraps and by playing with your indentation. You'd have to install and configure Thunderbird or something.
One of the many reasons why I use Mutt.
Note that this is anecdotal hearsay. This person is saying their partner had to set things up… they may have misunderstood their partner or not realized their partner was exaggerating.
Also, one might expect the amount of patience and general debugging skill necessary to transfer rather well to the domain of email configuration.
It's also possible that guiraldelli is assuming it was an accepted OpenBSD kernel patch, whereas we don't know whether the patch was accepted and we don't know if it was a userland patch. It doesn't take much skill to get a userland patch rejected.
You can’t send patches, or indeed any formatted plain text emails through the gmail web interface. If you set gmail to plain text, gmail will mangle your emails. It will remove tabs and hardwrap the text. It’s hopeless.
Think about impediments in terms of probabilities and ease of access.
You wouldn’t believe how many people stop contributing because of tiny papercuts. There is a non-trivial amount of people who have the skills to make meaningful contributions, but lack the time.
Lobsters requires an invite. That’s a barrier to entry, so those who make it are more likely to make good contributions, and have already extended effort to comply with the social norms.
You may be willing to jump a barrier to entry for entertainment (that is, Lobsters) but not for free work for the benefit of others (because you have already done that work for yourself).
If you want to save yourself compiling and patching your kernel every time there’s an update, you might want to submit it to the project maintainer.
If the project wants to have contributors in the future it might want to consider using modern, user friendly technologies for contribution.
Linux is considering it, and a discussion has started, which is positive (for the future of the project) in my opinion. It is a sign of responsive project management, where conservatism does not hinder progress, only slows it enough not to jump on quickly fading fads.
Other communities are closing their microverse in on themselves, where it is not enough to format your contribution as N<80-column plain-text email without attachments, with your contribution added to it via some custom non-standard inline encoding (like pasting the patch to the end of the line); you may even need to be ceremonially accepted for contribution by the elders. Some such projects also still don't use CI or automated QA. These communities will die soon (actually, they are already dead), especially as they don't have large corporations backing them, since someone getting paid usually accepts quite a few of the papercuts a job demands. This is why Linux can actually allow itself to be more retrograde than, for example, the BSD projects, which I think are even more contributor-unfriendly (especially ergonomically).
I tend to agree, however that discussion was started by someone with a very striking conflict of interest and therefore their opinion should be questioned and considered to be insincere at best and maleficent at worst.
That doesn’t matter, the discussion can be done despite that. I think a forge-style solution is the key.
Despite Microsoft having two "forge"-type solutions on offer (both quite usable in my experience, GitHub being a de facto standard), I still cannot see a conflict of interest in this topic. There are other software forges available. The current process is simply outdated. A custom solution could also be developed, if deemed necessary, as was the case with git.
Pay attention: my comment talks about maintainers, active contributors, and superficial or ephemeral contributors.
From the article:
And later on…
So I understood that Sarah Novotny is addressing the problem of new contributors, not the one Linus Torvalds sees, of new maintainers.
So, your comment that
is not the problem the Linux Foundation has, but the one that Microsoft’s Sarah Novotny wants to solve. Those are two different problems, with two different solutions. On the surface, they might seem the same, but they are not. They might be correlated, but the solution for one problem does not necessarily mean it is the solution for the other.
Therefore, my argument still stands:
If we are going to address, though, that it might hinder new contributors, then I tend to agree with you. :)
Let’s not let Microsoft decide how Linux is developed.
The waning popularity of their own, proprietary kernel is no excuse for telling other projects how they need to be run.
This is just my opinion but I think that while she’s totally missing the mark on finding Linux kernel module maintainers having anything at all to do with plain text email patch submission systems, the general issue she’s speaking to is one that has generated a lot of discussion recently and should probably not be ignored.
Also, in the spirit of the open source meritocracy, I’d prefer to let people’s actions and track records speak more loudly than which company they happen to work for, but then, lots of people consider my employer to be the new evil empire, so my objectivity in this matter could be suspect :)
Bingo!
My first thought after reading this was “Does this person ACTUALLY think that having to use plaintext E-mail is even statistically relevant to the problem of finding module maintainers for the Linux kernel?”
In a general sense I believe that the open source community’s reliance on venerable tools that are widely rejected by younger potential contributors is a huge problem.
Martin Wimpress of Ubuntu Desktop fame has spoken about this a lot recently, and has been advocating a new perspective for open source: increase engagement by embracing the communication technologies where the people are, not where we would like them to be.
So he streams his project sessions on Youtube and runs a Discord, despite the fact that these platforms are inherently proprietary, and has reported substantial success at attracting an entirely new audience that would otherwise never have engaged.
GKH even said that there are more than enough contributors in his last AMA, so having contributors is a priori not a problem right now.
Every maintainer was a new contributor at a point! Also better tooling would be beneficial for everyone.
That is true, but it is not the current problem: see /u/rmpr's reply and his link to the Reddit AMA to verify that.
That is not necessarily true: the introduction of a tool requires a change in the mindset and workflow of every single person already involved in the current process, as well as updating the current instructions and creating new references.
I have seen projects take months simply to change a continuous integration tool (I have GHC in mind, leaving aside the ones at companies I worked for). Here, we are talking about the workflow of many (hundreds for sure, but I estimate even thousands of) maintainers and contributors.
As I already said before, RTFM [1] [2] does not take long for a single individual who wants to become a new contributor; in [1], there is even a section specifically about the GMail web GUI problem.
If the problem is the retrievability of the manual or of specific information, then I think that should be addressed first. But that topic is not brought up in the article.
I am able to deal with email patches, but I hate doing it. I would not enjoy maintaining a project which accepts only email patches.
Ephemeral contributors evolve to maintainers, but if that initial step is never made this doesn’t happen. I don’t know if this will increase the number of maintainers by 1%, 10%, or more, but I’m reasonably sure there will be some increase down the line.
Finding reliable maintainers is always hard by the way, for pretty much any project. I did some git author stats on various popular large open source projects, and almost all of them have quite a small group of regular maintainers and a long tail of ephemeral contributors. There’s nothing wrong with that as such, but if you’re really strapped for maintainers I think it’s a good idea to treat every contributor as a potential maintainer.
But like I said, just because you can deal with email patches doesn’t mean you like it.
GMail, even in plain-text mode, tends to mangle inline patches since it has no way of auto-wrapping only some lines and not others.
Not a major issue as you can attach a diff, but from a review-perspective I’ve personally found folks more inclined to even look at my patches if they’re small enough to be inlined into the message. I say this as someone having both submitted patches to OpenBSD for userland components in base as well as having had those patches reviewed and accepted.
I personally gave up fighting the oddities of GMail and shifted to using Emacs for sending and receiving on the OpenBSD lists. I agree with Novotny's statement that "GMail [is] the barrier." The whole point of inlining patches like this is that the email body or eml file or whatever can be direct input to patch(1); if GMail wraps lines of the diff, it's broken the patch, text mode or not. Obviously, this doesn't mean that I think the Linux kernel maintainers shouldn't change their process if they want to. (Especially if they take a lot of funding from the Foundation… and, as a result, Microsoft/GitHub.) OpenBSD is probably always going to take the stance that contributions need to be accessible to those using just the base system… which means even users of mail(1), in my interpretation.
You can't attach a diff – nearly all mailing lists remove attachments!
Heh, if you add arc to the base system, you can adopt Phabricator without violating that principle :)
Good point… so basically GMail (the client) doesn't really work with an email-only contribution system. It's Gmail SMTP via mutt/emacs/thunderbird/etc. or bust.
I have found several subtle Linux kernel bugs related to PCIe/sysfs/vfio/NVMe hotplug. I fixed them locally in our builds, which will ultimately get published somewhere. I don't know where, but it's unlikely to ever get pushed out to the real maintainers.
The reason I don’t try to push them out properly? The damn email process.
I did jump through the hoops many years ago to configure everything. I tried it with gmail, and eventually got some Linux email program working well enough that I was able to complete some tutorial. It took me some non-negligible amount of time and frustration. But that was on an older system and I don’t have things set up properly anymore.
I don’t want to go through that again. I have too many things on my plate to figure out how to reconfigure everything and relearn the process. And then I won’t use it again for several months to a year and I’ll have to go through it all again. Sometimes I can get a coworker to push things out for me - and those have been accepted into the kernel successfully. But that means getting him to jump through the hoops for me and takes time away from what he should be doing.
So for now, the patches/fixes get pulled along with our local branch until they’re no longer needed.
I have no idea how dense they would need to be to not be able to send text-only email from Apple Mail. It must be anecdotal, because I cannot believe that someone is smart enough to code anything, but not able to click Format > Change to text in the menu bar.
I would like to know: did removing XUL (and hence making FF faster) result in a net increase in the userbase, or did the userbase drop because people lost their favorite extensions and many extension writers migrated to Chrome?
The number of people using* extensions is a very small percentage of the FF userbase.
*or broadly speaking only a very small percentage of users ever change any default setting for any widely used software/service - which is why making things opt-out is such a commonly accepted but evil trick
but who is the ff userbase?
either tech people or someone who’ve used ff long enough that they don’t want change. tech people are pissed if their customized application can’t be customized anymore, the long term users are pissed if anything changes. imho, mozilla betting on the casual-user-base and dumbing down firefox wasn’t the smartest move. the quantum engine change was nice, but the things since then were actively against their userbase.
if anything, i hope servo will be an easily embeddable engine so there is some alternative to $webkit. maybe we can have a decent browser again, one which isn't trying to be my nanny.
My impression is that the perfectly spherical average Firefox user is a mythical illusion existing only in the heads of some Mozilla management.
Thinking goes like “Chrome has this much marketshare, if we make our browser more like Chrome, people will come to us!!!” – except that this isn’t how it works.
Instead they gave all the loyal, long-term users the finger. I have used Firefox since it was Netscape, but I'll be gone as soon as the Firefox devs kill userChrome.css, because they are more successful at alienating actual users than they are at winning new ones over.
I think this comment from the article gets the point across:
I agree, but how many people actively installed FF versus had it installed for them by friends or family? For example, on all the machines I helped set up, I installed FF (plain vanilla); except lately, I am forced to install Chrome (because a lot of Google properties have degraded performance in FF).
This was how IE lost its market share in the first place, right? Devs started recommending, and in many cases actively installing, the browser they liked on the computers they set up or maintained.
This is why I think the entire argument about XUL addons is a total red herring.
Most people never installed any add-ons. I myself only install two, and I’m a power user by almost any metric. But anyway, if web pages don’t work well in Firefox, then the whole argument is moot. Who cares how customizable Firefox is if the pages that I want or need don’t work well in it!
And Firefox’s greatest competitor isn’t abandonware any more.
I don’t know about that. I did (and do) use a bunch of addons for myself:
and I know that another bunch exists for when I need them. I installed FF for my friends and family because I actively use FF. Extensions such as Vimperator, Redirector, and cookie killers are too advanced to install for others. I was tempted to jump ship when they changed the addons, and I can imagine that others might have.
The degradation is a recent phenomenon, much later than the when XUL addons got dropped. Noticeable especially recently. In this, however, I think Google is playing the game because they have overwhelming numbers on their side.
Depends on what you consider a degraded experience, I guess. Google has been showing pop-up ads recommending Chrome since 2012 at least. Google Talk worked out of the box in Chrome, but required an add-on for Firefox, until 2017. Similarly, Google Gears gave you offline support that worked out-of-the-box in Chrome but required an add-on in Firefox until around 2011, where Google’s web apps switched to standard HTML5 for their offline support.
You might never have noticed any of these, because you use an ad blocker and would’ve just installed the browser plug-ins if you needed them, but a lot of people were probably convinced to switch. Besides, this is what insiders are saying about it (the thread is from 2019, but the claim is that it had already been happening for years at that point).
Firefox removed XUL addons in 2017. They had 27% market share in 2009. By 2017, they were already down to 14%.
It’s hard to say https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#StatCounter_(Jan_2009_to_October_2019)
Firefox had 18.70% market share in January 2015, 15.95% in January 2016, and 14.85% in January 2017; it was down to 13.04% in October 2017; Firefox 57 (which removed XUL) was released in November 2017 at 12.55% market share; by January 2018 they were at 11.87%; and then there was a long, slow bleed until October 2019, when they were at 9.25%.
If anything it seems like they were bleeding users a bit faster before 57 than afterwards. You could read that in a lot of ways; there may have been some accelerated bleed-off of users ahead of Firefox 57 once the plan was announced, to the tune of maybe 1-1.5%, but it doesn’t seem to have been enormous and there definitely wasn’t a disproportionately huge drop afterwards. Or maybe they were bleeding over the perceived performance benefits of Chrome, and dropping XUL helped stem the tide.
A drop in percentage does not necessarily indicate a loss of users. They might have been gaining users, just growing much more slowly than their competition (which isn't an absurd claim, as the number of internet users worldwide doubled in the last decade: https://ourworldindata.org/grapher/broadband-penetration-by-country ). To use made-up numbers: 14% of two billion users is 280 million, while 9% of four billion is 360 million, a smaller share but more users.
Yeah, very good point.
Remember, Firefox’s WebExtensions are just as powerful as (if not more powerful than) Chrome’s WebExtensions. Nobody migrates off of Firefox to Chrome because Chrome’s extensions are more powerful.
Now, it’s certainly possible that people would stick with an otherwise-inferior browser because the extensions are superior to chrome’s, but Mozilla would probably say that the whole point of the switch is to make Firefox stop being an inferior browser.
Your post starts with the advice that you claim is problematic, but then proceeds to prove it correct. If this were a research paper I’d thank you for publishing it despite finding the opposite that you’ve expected :)
More seriously though, learning that cryptography is advanced technology and not magic takes years of study, and until then they are indistinguishable and best treated with a healthy distance.
There are technologies that exist at the overlap of many different areas, and that makes them particularly difficult. Applied cryptography is one of them. The need for mathematical formalism, high-stakes security threat modeling, and detailed hardware and compiler knowledge is a rare combination.
I don’t quite agree with all of the analysis, but in the current political climate I don’t think the juice is worth the squeeze.
What part of today’s political climate would prevent you from voicing a rational, erudite argument on why you disagree with some parts of the analysis?
What would be the benefit, do you think? To me, to you, to Dan, or to the community?
Dan presumably believes his (I think his, please correct me if the pronoun is incorrect) conclusions are correct, and additional argumentation on the point is unlikely to generate a retraction or a significant change.
You might enjoy going over the argument with me and spotting issues, but you might not. I don’t know.
The community doesn’t really gain anything if I point out some things I think Dan has missed.
For me, there’s not really any benefit in pointing out issues with methodology or cited papers or maybe unreasonable comparisons. I’m not getting paid, it won’t get me more dates, it won’t win me more friends, and such points are unlikely important enough to win me the golden Rationalist of the Year fedora or whatever.
And what of the costs?
Dan doesn’t lose much, since his original article is completely reasonable, and it’s not like anybody is realistically going to hold him to the fire for making an incorrect or imperfect argument in support of the zeitgeist of the times.
The ensuing discussion may get really ugly for the community. The experience of past threads about Damore's memo and other things suggests that the odds are high we'll just end up with unpleasantness if there is any genuine disagreement. Even assuming we can all have a polite and dispassionately reasoned discussion, it is quite a popular opinion these days that simply speaking of certain things constitutes violence–and I do not wish to accidentally commit violence against fellow Lobsters!
To you, I’d imagine there’s no real cost beyond continuing to burn cycles reading a discussion back and forth. Then again, think of all the creative, productive, or cathartic things you could be doing instead of reading a thread of me trying to well-actually Dan without spawning a dumpsterfire.
To me, it at the very least requires a time commitment in terms of research and writing. I need to go grab counterexamples or additional context for what he’s written about (say, additional statistics about how majors actually turn into careers, background research around why the mid 1980s has that change in CS, and so forth) and make that into a coherent argument. And that’s fine, and if that were all I stand to lose I chalk it up as the opportunity cost of performance art and the good fun of public debate.
Alas, that’s not the potential full cost at all. Even assuming I could put together a critique that cleared some arbitrarily high bar of rationality and erudition, it is entirely probable that it’ll be held against me at some time or another–just look at what happened to RMS, an author who surpasses me in ideological consistency and rationality as much as he differs from me in viewpoint. I may well be held liable for (as @nebkor put it) “garbage opinions” that have no textual or factual basis. This could cost me career opportunities, this could cost me friendships, this could cost me any number of things–and that just isn’t outweighed by the minor joy I get from debating online with people.
(And note: I bear this risk not only for taking a completely opposite position, but for taking a position probably in agreement but quibbling about the details and suggesting different arguments. My experience is that people get grumpier and more vicious over little differences than large ones.)
That’s the sad state of things today. People have made the already-marginal benefits of reasoned and civil public debate much less than the potential costs, and there is almost no goodwill left that one can argue in good faith (or in bad faith but with rigor as polite arguendo). We have lost a great deal of intellectual curiosity, freedom, and frankly ludic capacity–things are too serious and the stakes too high for playing around with different ideas and viewpoints!
Thus, I elect to politely protest.
Yet, still, it is better to react and run the risk of being ostracised and shunned than it is to remain quiet in fear of retribution. Once enough people start doing this those who want to silence any and all who dare to voice a differing opinion will no longer be able to do so. They will be exposed for what they are, they’ll lose their power over others and with a bit of luck end up as a side note in the history books, taught to children in the same lesson where they learn about book burnings and state propaganda drives. Let’s hope that that is where it ends and that freedom of expression remains the norm.
I have voiced some differing opinions on this board and elsewhere yet I’m still here. For now the worst that will happen is a pink rectangle in the browser telling you that your posts have been flagged a lot recently and a reduction in whatever those karma points are called here. That pink rectangle is easily removed with a uBlock rule and those points don’t matter to begin with.
I had to re-read that, because I thought surely you were referring figuratively to the pink triangle. Ironic.
What was the benefit of your original reply, though?
I don’t know what @friendlysock intended, but from my perspective, one benefit is that it draws attention to the fact that people disagree but for $reasons, don’t want to go into details.
Or this 663 word treatise?
You opened up this thread by claiming that you don’t agree “with all of the analysis”. And yet you reveal here that you lack a “coherent argument”, and have neither counterexamples nor context to inspire your disagreement in the first place. This is one way of defining a bad faith argument. You can’t have it both ways. You either disagree because you’ve got a reasonable analysis yourself, or because you have an existing bias against what has been written. As you’ve expressed in many words, one of these is worth sharing and one is not.
I hold that it is entirely possible to disagree based on a rough analysis or by applying heuristics (for example, asking “what is missing from this chart?” or “are there assumptions being made about which population we’re looking at?”) that are in good faith but which require additional cleanup work if you want to communicate effectively. This is a third option I don’t believe you have accounted for here.
The nastiness in this thread somewhat underscores the importance of arguing coherently and choosing words carefully–I haven't stated which points I disagree with Dan on (nor how much!), and yet look at the remarks some users are making. With such understanding and charitable commentary, no argument that isn't fully sourced and carefully honed can even be brought up with the hope of a productive outcome. That's not a function of reasonable arguments not existing, but of being able to read the room and see that any blemish or misstatement is just going to result in more slapfighting.
We don’t have discourse, because people aren’t interested in exploring ideas even if they’re not fully-formed. We don’t have debate, because people are uncivil. What’s left then is argument and bluster, and better to protest than to participate.
No, we don’t have discourse because in most discussions that even remotely brush up against the intersection of tech with other currents in society, members such as yourself cry “politics” and immediately shut things down. And much more often than not these topics call us to engage in ethical problem solving i.e. what to do about toxic members of the tech community or how to address inequalities in open source. So by shutting down discussion, folks such as yourself – whether you mean to or not – send the message to those affected by these issues that not only are you not interested in solving these problems, but that you’re not even interested in discussing them at all. In this context, who do you think sticks around?
He wrote it in 2014 and updated it recently. I don’t think it’s entirely fair to tag that as “in support of the zeitgeist of the times.” Your turn of phrase makes it sound much more fleeting. That is not to call into question the other reasons you don’t want to differ; but I think the way you stated this does not give the piece enough credit.
I don’t mean to make it sound fleeting–rather that things being what they are right now, I doubt that anybody is going to be terribly upset if there’s some flaw in his argument revealed through discussion at this time. I apologize for any confusion or disrespect that may have been parsed there.
That’s fair. Thanks for clarifying.
You are making his point.
The best SRE recommendation around Memcached is not to use it at all:
Don’t use memcached, use redis instead.
(I do SRE and systems architecture)
… there was literally a release yesterday, and the project is currently sponsored by a little company called …[checks notes]…. Netflix.
Does it do everything Redis does? No. Sometimes having simpler services is a good thing.
SRE here. Memcached is great. Redis is great too.
HA has a price (Leader election, tested failover, etc). It’s an antipattern to use HA for your cache.
Memcached is definitely not abandonware. It’s a mature project with a narrow scope. It excels at what it does. It’s just not as feature rich as something like Redis. The HA story is usually provided by smart proxies (twemcache and others).
It’s designed to be a cache, it doesn’t need an HA story. You run many many nodes of it and rely on consistent hashing to scale the cluster. For this, it’s unbelievably good and just works.
seems like hazelcast is the successor of memcached https://hazelcast.com/use-cases/memcached-upgrade/
I would put it with a little bit more nuance: if you already have Redis in production (which is quite common), there is little reason to add Memcached too and take on extra complexity and new software you may not have as much experience with.
i was under the impression that facebook uses it extensively, i guess redis it is.
Many large tech companies, including Facebook, use Memcached. Some even use both Memcached and Redis: Memcached as a cache, and Redis for its complex data structures and persistence.
Memcached is faster than Redis on a per-node basis, because Redis is single-threaded and Memcached isn’t. You also don’t need “built-in clustering” for Memcached; most languages have a consistent hashing library that makes running a cluster of Memcacheds relatively simple.
If you want a simple-to-operate, in-memory LRU cache, Memcached is the best there is. It has very few features, but for the features it has, they’re better than the competition.
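To make the consistent-hashing point concrete, here is a bare-bones hash ring in Go of the kind those client libraries implement; real libraries add virtual nodes, weighting and stronger hash functions, so treat this as a sketch rather than production code.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// Ring maps keys to memcached nodes with consistent hashing, so adding or
// removing a node only remaps a fraction of the keyspace.
type Ring struct {
	hashes []uint32          // sorted node hashes
	nodes  map[uint32]string // hash -> node address
}

func NewRing(nodes []string) *Ring {
	r := &Ring{nodes: make(map[uint32]string)}
	for _, n := range nodes {
		h := crc32.ChecksumIEEE([]byte(n))
		r.hashes = append(r.hashes, h)
		r.nodes[h] = n
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Get returns the node responsible for a key: the first node hash at or after
// the key's hash, wrapping around the ring.
func (r *Ring) Get(key string) string {
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	ring := NewRing([]string{"cache1:11211", "cache2:11211", "cache3:11211"})
	fmt.Println(ring.Get("user:42")) // the same key always lands on the same node
}
```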
Most folks run multiple Redis instances per node (CPU count minus one is pretty common), just as an FYI, so the "single process" thing is probably moot.
N-1 processes is better than nothing, but it doesn't usually compete with multithreading within a single process, since there can be overhead costs. I don't have public benchmarks for Memcached vs Redis specifically, but at a previous employer we did internally benchmark the two (since we used both, and it would be in some senses simpler to just use Redis), and Redis had higher latency and lower throughput.
Yup. Totally. I just didn’t want people to think that there’s all of these idle CPUs sitting out there. Super easy to multiplex across em.
Once you start wanting to do more complex things/structures/caching policies, then it may make sense to move to Redis.
Yeah agreed, and I don’t mean to hate on Redis — if you want to do operations on distributed data structures, Redis is quite good; it also has some degree of persistence, and so cache warming stops being as much of a problem. And it’s still very fast compared to most things, it’s just hard to beat Memcached at the (comparatively few) operations it supports since it’s so simple.
this comment is ridiculous
Oh please. They aren’t even close to sharing the same level of functionality. If I want to use Signal, I have to commit to depending on essentially one person (moxie) who is hostile towards anyone who wants to fork his project, and who completely controls the server/infrastructure. And I’d have to severely limit the options I have for interfacing with this service (1 android app, 1 ios app, 1 electron [lol!] desktop app). None of those are problems/restrictions with email.
I don’t know what the federated, encrypted ‘new’ email thing looks like, but it’s definitely not Signal. Signal is more a replacement for XMPP, if perhaps you wanted to restrict your freedom, give away a phone number, and rely on moxie.
I think Matrix is getting closer to being a technically plausible email and IM replacement.
The clients don’t do anything like html mail, but I don’t think I’d miss that much, and the message format doesn’t forbid it either.
If you can’t send patches to mailing lists with them then they’re not alternatives to email. Email isn’t just IM-with-lag.
Email can be exported as text and re-parsed by Perl or a different email client.
Until that functionality is available, I won’t consider something a replacement for email.
In all fairness: cmcaine says “Matrix is getting closer”.
Matrix is a federated messaging platform, like XMPP or email. You could definitely support email-style use of the system it’s just that the current clients don’t support that. The protocol itself would be fine for email, mailing lists and git-send-email.
The protocol also gives you the benefits of good end-to-end encryption support without faff, which is exactly what general email use and PGP don’t give you.
Adding patch workflow to Matrix is no different to adding it to XMPP or any other messaging solution. Yes, it is possible but why?
I can understand you like Matrix but it’s not clear how Matrix is getting closer to e-mail replacement with just one almost-stable server implementation and the spec that’s not an IETF standard. I’d say Matrix is more similar to “open Signal” than to e-mail.
“Getting closer” is a statement towards the future, yet all of your counter arguments are about the current state.
If I knew the future I'd counter-argue that, but given that the future is unknown I can only extrapolate from the present and the past. Otherwise Matrix may be "getting closer" to anything.
Do you have any signs that Matrix is getting e-mail patch workflow?
Mailing lists could move to federated chatrooms. They moved from Usenet before, and in some communities moved to forums before the now common use of Slack.
I’m not saying it would be the best solution, but it’s our most likely trajectory.
Mailing lists existed in parallel with Usenet.
Both still exist :)
I do think, actually, that converting most public mailing lists to newsgroups would have a few benefits:
Snark aside, I do think the newsgroup model is a better fit for most asynchronous group messaging than email is, and think it’s dramatically better than chat apps. Whether you read that to mean slack or any of the myriad superior alternatives to slack. But that ship sailed a long time ago.
Mailing lists are more useful than Usenet. If nothing else, you have access control to the list.
Correct, and the younger generation unfamiliar with Usenet gravitated towards mailing lists. The cycle repeats.
Mailing lists don’t use slack and slack isn’t a mailing list. Slack is an instant messaging service. It has almost nothing in common with mailing lists.
It’s really important to drive this point home. People critical of email have a lot of good points. Anyone that has set up a mail server in the last few years knows what a pain it is. But you will not succeed in replacing something you don’t understand.
The world has moved on from asynchronous communication for organizing around free software projects. It sucks, I know.
Yeah. Not everyone, though.
Personally I think that GitHub’s culture is incredibly toxic. Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.
I think we’re starting to see a push back from this GitHub/Slack culture at last back to open, federated protocols like SMTP and plain git. Time will tell. Certainly there’s nothing stopping a project from moving to {git,lists}.sr.ht, mirroring their repo on GitHub, and accepting patches via mailing list. Eventually people will realise that this means a lower volume of contributions but with a much higher signal to noise ratio, which is a trade-off some will be happy to make.
It’s not like you used to have levers for mailing lists, though, that would stop marc.org from archiving them or stop people from linking those marc.org (or kernel.org) threads. And drive-bys happened from that, too. I don’t think I’m disputing your larger point. Just saying that it’s really not related to the message transfer medium, at least as regards toxicity.
Sure, I totally agree with you! Drive-bys happen on any platform. The difference is that (at least until recently) on GitHub you had basically zero control. Most people aren’t going to sign up to a mailing list to send an email. The barrier to sending an email to a mailing list is higher than the barrier to leaving a comment on GitHub. That has advantages and disadvantages. Drive-by contributions and drive-by toxicity are both lessened. It’s a trade-off I think.
I guess I wasn’t considering a mailing list subscription as being meaningfully different than registering for a github account. But if you’ve already got a github account, that makes sense as a lower barrier.
Matrix allows sending in the clear, so I suppose this has the “eventually it will leak” property that the OP discussed?
(A separate issue: I gave up on Matrix because its e2e functionality was too hard to use with multiple clients)
and across UA versions. When I still used it I got hit when I realized it derived the key using the browser user agent, so when OpenBSD changed how the browser presented itself I was suddenly not able to read old conversations :)
Oh! I didn’t know that!
Functionality is literally irrelevant, because the premise is that we’re talking about secure communications, in cases where the secrecy actually matters.
Of course if security doesn’t matter then Signal is a limited tool, you can communicate in Slack/a shared google doc or in a public Markdown document hosted on Cloudflare at that point.
Signal is the state of the art in secure communications, because even though the project is heavily driven by Moxie, you don’t actually need to trust him. The Signal protocol is open and it’s basically the only one on the planet that goes out of its way to minimize server-side information storage and metadata. The phone number requirement is also explicitly a good design choice in this case: as a consequence Signal does not store your contact graph - that is kept on your phone in your contact store. The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool) or that Signal would have to store the contact graph of every user - which is a way more invasive step than learning your phone number.
Of course you must trust Moxie. A lot of Signal’s privacy features amount to trusting them not to store certain data that they have access to. The protocol allows for the data not to be stored, but it gives no guarantees. Moxie also makes the only client you can use to communicate with his servers, and you can’t build it yourself, at least not without jumping through hoops.
The phone number issue is what’s keeping me away from Signal. It’s viral, in that everyone who has Signal will start using Signal to communicate with me, since the app indicates that they can. That makes it difficult to get out of Signal when it becomes too popular. I know many people that cannot get rid of WhatsApp anymore, since they still need it for a small group, but cannot get rid of the larger group because their phone number is their ID, and you’re either on WhatsApp completely or you’re not. Signal is no different.
And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have. They can also add your non-Signal friends to the graph, since you ask about their numbers too. Maybe you’re right and Moxie does indeed not store this information, but you cannot know for sure.
What happens when Moxie ends up under a bus, and Signal is bought by Facebook/Google/Microsoft/Apple and they suddenly start storing all this metadata?
Signal is a 501(c)(3) non-profit foundation in the US; Moxie does not control it, nor is he able to sell it. In theory every organization can turn evil, but there is still a big difference between non-profits, which are legally not allowed to do certain things, and corporations, which are legally required to serve their shareholders, mostly by seeking to turn a profit.
There are two points here that I’d like to make, one broader and one specific. In a general sense, Signal does not implement a feature until they can figure out how to do it securely, leaking as little information as possible. This has been the pattern for basically every feature that Signal has. Phone numbers are no different: the Signal app just sends a cryptographically hashed, truncated version of the phone numbers in your address book to the server, and the server responds with the list of hashes that belong to Signal users. This means that Signal on the server side knows whether any one person is a Signal user, but not their contact graph.
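As a rough illustration of that lookup, here is a minimal Rust sketch of hashing and truncating a phone number before sending it anywhere. The hash function, truncation length, and crate choices are assumptions for the example, not Signal’s actual implementation.

```rust
// Minimal sketch of truncated-hash contact discovery, for illustration only.
// Assumes the `sha2` and `hex` crates; the real service's hash choice and
// truncation length differ, and this is not Signal's code.
use sha2::{Digest, Sha256};

fn truncated_hash(phone_e164: &str) -> String {
    let digest = Sha256::digest(phone_e164.as_bytes());
    hex::encode(&digest[..10]) // only a short prefix of the digest is sent
}

fn main() {
    // The client would send these truncated hashes, never the raw address book.
    let contacts = ["+15551234567", "+442071234567"];
    for number in contacts {
        println!("{} -> {}", number, truncated_hash(number));
    }
}
```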
Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?
These truncated hashes can still be stored server-side, and be used to make graphs. With enough collected data, a lot of these truncated hashes can be reversed. Now I don’t think Signal currently stores this data, let alone do data analysis. But Facebook probably would, given the chance.
WhatsApp was a for-profit company; 501(c)(3) organizations work under quite different conditions. Not saying they can’t be taken over, but this argument doesn’t cut it.
No, it’s an absolutely terrible choice, just like it is a terrible choice for “two-factor authentication”.
Oh but Signal users can always meet in person to re-verify keys, which would prevent any sim swap attack from working? No, this (overwhelmingly) doesn’t happen. In an era where lots of people change phones every ~1-2yr, it’s super easy to ignore the warning because 99% of the time it’s a false positive.
This is a solved problem. I mean, how do you think you got the phone numbers for your contacts in the first place? You probably asked them, and they probably gave it to you. Done.
Careful there… you can’t say bad things about electron in here….
Would anybody suggest Rust for prototyping?
Many projects are so complex that we need a prototype to explore the options. Correctness is not an important aspect there. Python (JavaScript, Lua, Ruby, Tcl, …) is great for prototyping. In practice, though, we never throw away the prototype to build the real product, so there is a desire for a language which can do both: quick prototypes and reliable products.
Rust can do both. About two years ago at work we wrote a ~300 LOC middleware to handle redirects.
Then we proceeded to put it in front of the company’s full request traffic. This was our first Rust project, but a massive success: it took a reasonable amount of time to put the tool into production, and the compiler said no a couple of times, but rightly so. The memory/CPU usage of the middleware was ridiculously low, even though it dealt with millions of redirects. Integrating cuckoo hashing, an LRU cache, and a database backend was easy. Zero production problems afterwards, because this thing just worked once it compiled.
Ok, so 300 LOC is not exactly complex, but that’s not the point. This middleware was part of a surreally complex project, and that’s how prototypes should work: you identify part of a complex project, cordon it off, and then prototype and implement it. Parallelize and iterate with everything and your complex project ends up being viable. Rust allowed us to get in, solve a problem, operate it in production, and have it work reliably with low maintenance overhead.
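To give a feel for the shape of that kind of middleware, here is a heavily simplified sketch with hypothetical names, using a plain HashMap where the real code used cuckoo hashing, an LRU cache, and a database fallback; it is not the original project.

```rust
// Minimal sketch of a redirect-lookup middleware (illustrative only).
use std::collections::HashMap;

struct RedirectTable {
    rules: HashMap<String, String>, // source path -> target URL
}

impl RedirectTable {
    fn lookup(&self, path: &str) -> Option<&str> {
        self.rules.get(path).map(String::as_str)
    }
}

fn main() {
    let mut rules = HashMap::new();
    rules.insert("/old-blog".to_string(), "https://example.com/blog".to_string());
    let table = RedirectTable { rules };

    // In the real middleware this lookup sits in front of all request traffic
    // and falls back to a database on a cache miss.
    match table.lookup("/old-blog") {
        Some(target) => println!("301 -> {target}"),
        None => println!("pass through to the origin"),
    }
}
```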
I would say yes, because Rust-without-lifetimes* hits a sweet spot of letting you focus on structure without getting distracted by (as many) runtime bugs.
*Writing fresh code, you can get away with cloning everything and having &mut self be the only borrow-y part.
Lots of clone() and unwrap() and coding is quick, I suppose. Those can be easily searched for to clean things up once the design is stable. The clean-up will surely be painful, but you just pay off the technical debt accumulated to finish the prototype quickly.
I prototyped some code in Rust that used lots of clone and had an overabundance of Rc. Once I had it correct, with a full suite of tests proving its behavior, the compiler guided me through the refactoring. Just removing the Rc usage got me a 6x speedup on benchmarks.
I don’t think using Rust obviates the truism “First make it correct, then make it fast”. Instead it lets you use the compiler and the types to guide your make-it-fast refactoring and know that you didn’t break the code in non-obvious, harder-to-test ways.
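As a concrete (if toy) example of that clone-everything prototype style, here is a sketch with hypothetical types; the clone()/unwrap() calls are easy to grep for later when paying down the debt.

```rust
// Minimal sketch of "clone everything, unwrap everything" prototyping
// (hypothetical types, not anyone's real project).
#[derive(Clone, Debug)]
struct Config {
    name: String,
    retries: u32,
}

struct App {
    config: Config,
    history: Vec<String>,
}

impl App {
    fn handle(&mut self, input: &str) -> String {
        // Prototype style: clone instead of fighting the borrow checker,
        // unwrap instead of designing error types up front.
        let cfg = self.config.clone();
        let parsed: u32 = input.trim().parse().unwrap();
        self.history.push(input.to_string());
        format!("{} handled {} (retries = {})", cfg.name, parsed, cfg.retries)
    }
}

fn main() {
    let mut app = App {
        config: Config { name: "proto".into(), retries: 3 },
        history: Vec::new(),
    };
    println!("{}", app.handle("42"));
}
```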
That’s basically what I do. It’s actually not too difficult in my experience ’cause the compiler is so good at guiding you. The hard part happens before that, when you structure things correctly.
I wonder if there would be a market for a simpler, runtime-checked, GC’d language that’s a subset of Rust syntax, which you could use on a per-module basis for prototyping but which still keeps the non-safety-related structures and idioms that real Rust code uses, to aid porting to “safe mode” later if you want to keep it?
Sort of like the anti-TypeScript: instead of adding some type safety, you keep the types but lose the borrow checker, and you pay for it with performance. But not by writing different code where you fill it with calls to cells and ref counts and such; rather, you leave those annotations out and they just get added in during compilation / interpretation.
If you could do it without perverting the syntax much, it’d surely be easier to port from than if you say, just wrote your PoC in Python or JS.
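Purely as a guess at what such a tool might do under the hood, here is what the desugared form could look like in today’s Rust, with Rc/RefCell standing in for the runtime-checked borrows; none of this is an existing language or compiler.

```rust
// Illustrative only: a guess at what the hypothetical "GC'd Rust subset"
// might desugar shared, mutable access to. In the prototype dialect you would
// write plain `node.count += 1` against two handles to the same value; the
// tool would insert the Rc/RefCell plumbing shown here, so borrow rules are
// checked at runtime instead of compile time.
use std::cell::RefCell;
use std::rc::Rc;

#[derive(Debug)]
struct Counter {
    count: u64,
}

fn main() {
    let a = Rc::new(RefCell::new(Counter { count: 0 }));
    let b = Rc::clone(&a); // two "plain" handles in the prototype dialect

    a.borrow_mut().count += 1;
    b.borrow_mut().count += 1;

    println!("{}", a.borrow().count); // prints 2
}
```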
Rust without lifetimes is OCaml (or ReasonML for those offended by OCaml’s syntax) :)
Really depends on what and how much you think you’ll have to refactor. Rewrites that challenge bigger architectural decisions are more costly in my experience with hobby (!) rust projects.
But if your protocol / API is stable, I’d go with rust from the start.
I prototype with Python, and gradually improve reliability with static typing (i.e. mypy) and unit tests. Rust is great, but its correctness guarantees require an upfront cost which will likely penalise rapid prototyping. It all depends though.
On embedded systems, Rust is amazing for prototyping, especially compared to C or C++. It has a proper ecosystem, and debugging Rust’s compiler errors is far, far superior to debugging runtime bugs on the board.
People have pointed out that Google is much less reliant on third-party cookies than the competition, because users are probably on a Google-owned platform already, and if they’re not, they’re almost certainly logged in to Google in their session. This doesn’t really impact Google Analytics or Google’s ads, but it does harm their competitors in the user surveillance and ad targeting business, with the benefit of appearing to care about user privacy.
This definitely isn’t a bad thing for humanity, though. I wouldn’t mind a world without third-party tracking cookies. I also wouldn’t mind a world with fewer user surveillance and ad targeting companies.
Fewer companies mean more unified/combined databases though. That’s not necessarily good news when we’ve been hearing for years that Google is looking to combine medical, financial and government-provided data with their other datasets.
I’m glad there are finally blog posts that mention working remotely isn’t all sunshine and rainbows. It takes a lot of effort to balance home/work life, avoid distraction, and communicate with your coworkers.
I’ve worked for five years remotely and I wouldn’t do it again.
Yes, you need discipline, to avoid distraction, and to keep a clear separation between home and work, but the dealbreaker for me is how much less efficient and more isolating remote working is.
Isolating from the “having conversations” perspective. A mind-boggling amount of progress is an outcome of having random conversations and random discussions with people in an office, or attending the right meeting or the right devJF or the right event. It’s also the sad reason why large companies spend so much on flying people around countries.
I’ve pretty much only worked remotely for the past 7 years and it is awfully isolating. I’d disagree with you on efficiency though - on the few occasions where I’d go visit the office, I’d spend like 10 hours there and get 3 hours of real work done. Remote work does make burnout more visible, though.
So I think that if you’re in a rut you’ll do less work remotely, but otherwise you’ll be much more efficient. Honestly I can’t imagine working in an office. The days are so short already, and the inefficiency of the whole culture would drive me mad.
I think that main issue with remote work (which I do, and love) is that all kinds of things that are implicit in an onsite office need to be communicated differently. The obvious example is whether or not someone is busy, but other things like who is chatting with who, is there a big meeting going on, and after meeting chit chat. All kinds of signals physical proximity provides simply aren’t there in a remote environment.
Both remote and onsite setups work, and each has different strengths, in my experience.
I agree on the conversations; it’s one of the things I miss from office work, but it certainly seems that depends on personality type. There are people who hate that part the most and find it distracting.
Another thing I just remembered: you also effectively need an extra room in your house/apt/etc to dedicate to work. Depending on where you live, going from N rooms to N+1 can be a very expensive proposition.
This looks great, let’s please replace PGP with it everywhere. :D
Yes, let’s replace a system which has been tested and proven and worked on since the 1990s with a random 47-commit project someone just tossed up on GitHub. Because good encryption is easy.
/s
File encryption is in fact kind of easy, thanks to everything we learned in the last 30 years.
Yes, actually.
I don’t see the point of the sarcasm. PGP does many things, and most of them are handled poorly by default. This is not a PGP replacement; it’s a tool with a single purpose: file encryption. It’s not for safe transfers, it’s not for mail. It has a mature spec, it’s designed and developed by folks in the crypto community, and there are two reference implementations. It does one thing and does it well, which is everything PGP isn’t.
I guess even the author of PGP would be up for that: https://www.vice.com/en_us/article/vvbw9a/even-the-inventor-of-pgp-doesnt-use-pgp
In a cryptography context, “since the 1990s” is basically derogatory. Old crypto projects are infamous for keeping awful insecure garbage around for compatibility, and this has been abused many many times (downgrading TLS to “export grade” primitives anyone?)
I think icefox’s comment was already being sarcastic
Not necessarily, PGP is a trashfire
Why do you say that?
This should answer your question better than I ever will.
https://latacora.micro.blog/2019/07/16/the-pgp-problem.html
Thanks
Whatever its faults, there are plenty of good reasons to use protobuf that aren’t related to scale.
Protobuf is a well known, widely used format with support in many different programming languages. And it’s the default serialization format used by gRPC.
The only more boring choice would be restful JSON. But gRPC is actually easier to use for an internal service. You write a schema and get a client and server in lots of different languages.
And you also get access to an entire ecosystem of tools, like lyft’s envoy, automatic discovery, cli/graphical clients, etc.
Maybe instead of using an atypical serialization format (or god-forbid rolling your own), it would be better to spend your innovation tokens on something more interesting.
ASN.1, anyone? For gods’ sake, it is one of the oldest protocol description formats out there, and for some reason people still overlook it.
I second ASN.1. Although underappreciated by modern tech stacks, it is used quite widely, e.g. in X.509, LDAP, SNMP and very extensively in telecom (SS7, GSM, GPRS, LTE, etc). It is suitable for protocols that need a unique encoding (distinguished encoding rules, DER) and for protocols where you can’t keep all the data in memory and need to stream it (BER).
It has some funny parts that might be better done away with, e.g. the numerous string types that nobody implements or cares about. I find it hilarious to have such things as TeletexString and VideotexString.
Support in today’s languages could be better. I suspect that Erlang has the best ASN.1 support of any language. The compiler, erlc, accepts ASN.1 modules straight up on the command line.
If certs taught people one thing, it is that no one in their right mind should EVER use ASN.1 again.
Nobody should use BER ever again, and people should use generated parsers rather than (badly) hand-rolling them. None of the certificate problems that I have seen are fundamental to ASN.1; they come from badly hand-rolled implementations of BER.
XDR/sunrpc predates even that by ~a decade I believe, and its tooling (rpcgen) is already available on most Linux systems without installing any special packages (it’s part of glibc).
I love ASN.1. I guess other people prefer long text descriptions of objects instead of a dotted numeric notation.
Swagger/OpenAPI is so much better than gRPC in that respect that it is borderline embarrassing. No offense intended. It’s human readable and writable. You can include as much detail as you want: for example, you can include only the method signatures, or you can include all sorts of validation rules. You can include docstrings. You get an interactive test GUI out of the box which you don’t need to distribute; all anyone needs is the URL of your swagger spec. There are tools to generate client libraries for whatever languages you fancy, certainly more than gRPC offers, in some cases multiple library generators per language. But most importantly, it doesn’t force you to distribute anything. There is no compile step necessary. Simply call the API via HTTP; you can even forge your requests by hand.
In a job I had, we replaced a couple of HTTP APIs with gRPC because a couple of Google fanboys thought it was critical to spend time fixing something that just worked, using whatever Google claims to be the be-all-end-all solution. The maintenance effort for those APIs easily jumped up an order of magnitude.
gRPC with protobuf is significantly simpler than a full-blown HTTP API. In this regard gRPC is less flexible, but if you don’t need those features (ie you really are just building an RPC service), it’s a lot easier to write and maintain. (I’ve found swagger to be a bit overwhelming every time I’ve looked at it)
Why was there so much maintenance effort for gRPC? Adding a property or method is a single line of code and you just regenerate the client/server code. Maybe the issue was familiarity with the tooling? gRPC is quite well documented and there are plenty of stack-overflowable answers to questions.
I’ve only ever used gRPC with python and Go. The python library had some issues, but most of them were fixed over time. Maybe you were using a language that didn’t play nice?
Also, this has nothing to do with Google fanboyism. I worked at a company where we used the Redis protocol for our RPC layer, and it had significant limitations. In our case, there was no easy way to transfer metadata along with a request. We needed the ability to pass through a trace ID, and we also wanted support for cancellation and timeouts. You get all that out of the box with gRPC (in Go you use context). We looked at other alternatives, and they were either missing features we wanted or the choice was so esoteric that we were afraid it would present too much of an upfront hurdle for incoming developers.
I guess we could’ve gone with Thrift. But gRPC seemed easier to use.
We got stuck using protobufs at work merely because of their close association with gRPC, and they’ve been universally reviled by our team as a pain in the neck to work with. I don’t think the people making the decision realized that gRPC could have the encoding mechanism swapped out. Eventually we switched to a better encoding, but it was a long, tedious road to get there.
What problems have you had with protobufs? All the problems the original post talks about come from the tool evolving over time while trying to maintain as much format compatibility as possible. While I agree the result is kind of messy, I’ve never seen any of those quirks actually cause significant problems in practice.
The two biggest complaints I’ve heard about gRPC are “Go gRPC is buggy” and “I’m annoyed I can’t serialize random crap without changing the proto schema.” Based on what I know about your personal projects, I can’t imagine you having either problem.
Part of the problem is that the Java API is very tedious to use from Clojure, and part of the problem is that you inherit certain properties of golang’s pants-on-head-stupid type system into the JVM, like having nils get converted into zeroes or the empty string. Having no way to represent UUIDs or Instants caused a lot of tedious conversion. And like golang, you can forget about parametric types.
(This was in a system where the performance implications of the encoding were completely irrelevant; much bigger bottlenecks were present several other places in the pipeline, so optimizing at the expense of maintainability made no sense.)
But it’s also just super annoying because we use Clojure spec to describe the shape of our data in every other context, which is dramatically more expressive, plus it has excellent tooling for test mocks, and allows us to write generative tests. Eventually we used Spec alongside Protobufs, but early on the people who built the system thought it would be “good enough” to skip Spec because Protobufs “already gives us types”, and that was a big mistake.
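For anyone who hasn’t hit this, here is a hand-written Rust stand-in (not generated code) for what those zero-value semantics look like once a proto3-style message is decoded: unset scalar fields simply come back as 0 or the empty string, with no way to tell them apart from explicitly set zero values.

```rust
// Illustrative only: a hand-written stand-in for a generated message type,
// showing the zero-value behavior the parent comment complains about.
#[derive(Debug, Default)]
struct UserRecord {    // hypothetical message type
    id: u64,           // unset on the wire -> 0
    email: String,     // unset on the wire -> ""
}

fn main() {
    let decoded = UserRecord::default(); // what "nothing was sent" decodes to
    println!("{:?}", decoded);           // UserRecord { id: 0, email: "" }
    // There is no way to tell "explicitly zero / empty" from "never set".
}
```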
Thanks for the detailed reply! I can definitely see how Clojure and protobufs don’t work well together. Even without spec, the natural way to represent data in Clojure just doesn’t line up with protobufs.
Why (practically) no mention of xmpp/jabber? It’s federated, has E2EE support (OMEMO), many FOSS clients and server implementations, and providers generally don’t require any personal info to sign up. The article only mentions that last bit briefly, but instead spends more time focusing on the various walled garden services out there.
It’s not trendy and new? Honestly, that’s the only reason I can think of why these articles always gloss over it.
From a user point of view, I can see why it struggled. It is old, it wasn’t always great, OMEMO rollout has been slow and steady.
However, if you are writing an article like this you should know that XMPP in 2019 is really good. Services like Conversations make it a program that I use with real people in the real world every day.
Nerds like me use their domain as their ID. Other people just use hosted services. Doesn’t matter, it all works.
Decentralised services are always going to have a branding issue I guess.
It is listed under Worth Mentioning in our Federated section. The reason it is not a main feature is that the client ecosystem is so fragmented, and this is due largely to the poor quality of the documentation. Many of the XEPs still remain in draft or proposed status.
The issue is that Conversations is the only good client. If there were iOS and desktop clients as good as it, we would be more likely to make it a main feature.
There is also Quicksy.im by the Conversations author that provides even easier on-boarding for non-nerds but still uses XMPP underneath.
For me the biggest problems with XMPP are the lack of good clients for iOS and the desktop. There is Dino.im, but it’s still in beta, and it’s not clear whether there will ever be an iOS client with Conversations feature parity.
Edit: It seems some members of privacytools.io actually like XMPP: https://github.com/privacytoolsIO/privacytools.io/pull/1500#issuecomment-559405853
I should mention here that that is not the case at all. We look at a number of factors, including client quality, developer documentation quality, and the types of ‘footguns’ involved, i.e. where a user might expect something to be encrypted when in reality it is not, etc.
You’re being too kind to XMPP. Like PGP, it’s another example of focusing on things that are trendy in some FOSS circles while losing focus on actually providing value where it really matters to users.
It’s trendy to assume that federation is an unequivocal good thing and centralized services are bad, when looking deeper into the topic reveals it’s a mess of tradeoffs. Every time this comes up, Moxie’s “The Ecosystem is Moving” post is looking more and more insightful.
XMPP, like PGP, provides a horrible user experience unless you have extensive domain-specific knowledge. In XMPP’s case, federation is partly to blame for that. Another part is that XMPP is very much a “by nerds, for nerds” thing, which comes with a very different set of priorities than anything that aims to be used by most people.
For a different perspective on the subject see “An Objection to ‘The ecosystem is moving’”.
I personally like this one: “I don’t trust Signal” by Drew DeVault.
For the desktop there is Gajim (gajim.org). It has OMEMO and works very well with Conversations. I have been using this for years and years, although I can only attest to the Linux version.
Yes, I agree. Gajim is fully featured. It’s not without flaws: outdated UI, OMEMO not built in and enabled by default, and apparently no official MacOS version (there is https://beagle.im/ for MacOS, though…).
I guess XMPP’s problem no. 1 is software fragmentation, as there is no single company maintaining a full suite of software. It’s always mix-and-match, depending on what OS/phone one’s friends use.
Ah, iOS is a big deal. Didn’t realise Conversations didn’t have an app on there.
Yeah. Some people report good results with ChatSecure or Monal or Siskin.im but it seems all of them have minor issues here and there.
The issue with that is they have no tagged releases, which means package maintainers either ship some random ancient version or have to keep up with every commit. It is unacceptable for something as complex as an instant messenger to have no tagged release, and we take it to mean the developers are not confident enough in the completeness of the product to make one.
https://github.com/privacytoolsIO/privacytools.io/pull/1500#discussion_r347156496
Because it does not solve any privacy, security or resilience problems from the point of view of the individual.
a) Federation is meaningless from a resilience PoV since XMPP accounts are not transferable; if someone is targeting me they can take down the server I’m using. A user or programmer giving a damn about the “network being resilient as a whole” is irrational. It should always be about the end-user experience.
b) Until people figure out how to create an Open Incentive-Aligned Cloud Messaging Platform (a replacement for FCM and APNS), battery life will suck. Having a separate TCP socket, each with its own heartbeat, for every one of your apps means short battery life (a rough sketch of this pattern follows below). I want one socket, with heartbeat values optimized for the network I’m using at the moment.
If you want to figure out how to build open replacement for FCM/APNS, I would love to help.
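A rough sketch of the per-app keepalive pattern described in (b), with a hypothetical server address: each app that bypasses FCM/APNS ends up holding a connection like this open, and every extra socket with its own timer wakes the radio and costs battery.

```rust
// Illustrative only: one app's long-lived connection with a fixed-interval
// keepalive. Multiply this by every chat/mail/push-less app on the phone.
use std::io::Write;
use std::net::TcpStream;
use std::thread;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Hypothetical server; XMPP clients typically keep a socket like this open.
    let mut conn = TcpStream::connect("chat.example.org:5222")?;
    loop {
        // Whitespace keepalive (or an XEP-0199 ping) so NATs keep the mapping.
        conn.write_all(b" ")?;
        conn.flush()?;
        thread::sleep(Duration::from_secs(300)); // fixed interval, not tuned to the network
    }
}
```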
Aren’t all of these points even worse for the services mentioned in the article? They all depend on a single company, and none of the accounts or services are transferable.
Battery life doesn’t ‘suck’. My nexus 5x regularly sees 24hr+ with moderate xmpp usage through Conversations (and no Google play services installed)
I’ve been using XMPP with OMEMO E2EE for about a year now, after a FOSS enthusiast convinced me to use it. I’m using Gajim (https://gajim.org/) pretty much daily now and am quite happy with the feel and performance of the chat. It even has code highlighting blocks and other goodies and addons, and it stores the history in a sqlite database. Apparently it’s also possible to use multiple clients on the same account and the messages go to all your clients once they’re hooked up, but I’ve never tried it myself.
Yeah I use it on my phone and desktop, much like one might use whatsapp and whatsapp web. Only your phone doesn’t have to be on for it to work.
Yep, I believe that’s XEP-0280 ‘message carbons’. Many servers/clients support it.
The other issue we have with XMPP is that E2EE is not consistent across features, for example file transfer and VoIP.
https://github.com/privacytoolsIO/privacytools.io/pull/1500#discussion_r351079569
It’s not abundantly clear to the user whether their file transfer was sent with E2EE or not. As for VOIP over Jingle, there’s no E2EE to be found there. We believe all channels should be E2EE and not “some features only”.
That is the client we suggested for desktop under our Federated section.
We would like to see documentation for MacOS. Pages like https://gajim.org/download/ just simply say things like:
Yes, and it works very well. I am using Conversation on my mobile and Gajim on the desktop. Both support OMEMO.
See omemo.top for the OMEMO implementation status across a large number of XMPP clients.
This is a bad idea in itself, but let’s be clear: a lot of these ecosystem-impacting changes are questionable because Google has a massive conflict of interest between Chrome and their business.
The ad-blocking changes, the forced Chrome login, this, the attempt to kill the URL, and others are just facets of the underlying conflict of interest.
Yikes, another of these cultural posts that has nothing to do with technology, now my day is ruined!
/s
That’s an insightful article, and it quite succinctly explains why meaningful values matter in a corporate setting.
We can expect Neqo to be the implementation of HTTP3/QUIC in Firefox:
Source: https://http3-explained.haxx.se/en/proc-status.html
Finally, networking/protocol components written in a memory-safe language. As the complexity of these layers has increased substantially over the last few years, I don’t think I’d be comfortable using something written in C/C++ for them (or at least not without half a decade of fuzzing/real-world testing).
Edit: curl uses another Rust QUIC implementation - Quiche from Cloudflare. Things are moving in the right direction.
Getting kind of tired of these thinly-veiled off-topic political posts to be quite honest, we’ve had a few of them now. Stick to technology, take your unwanted political views to hacker news.
It’s fine to flag as off-topic and hide the submission so it doesn’t bother you.
While this particular instance and article deal with a current hot-button political issue, the current structure of open source is vulnerable to this sort of disruption. See my comment here, and this comment by @chobeat.
Ah yes, agreed! Technology is the first known example of Plato’s Perfect Forms. Technology exists in its own abstract, perfect realm that transcends space and time and has no relevance to anything happening in this physical reality.
Stick to technology, I say! And no funny human business!
Google can’t track you on FTP, and also AMP is not needed there =)
Pretty much anyone can track you on FTP; it’s an unencrypted protocol.
Using cookies? I don’t think so. Unencrypted means your ISP can track you, yes.
I don’t think I can, since I (1) don’t work at your or the server’s ISP and (2) I’m not in the neighbourhood. Feel free to prove me wrong. ;)