RTO is driving a lot of new organising here in Sweden and it’s been a frustration of mine that so far there’s very little appetite for taking up that fight among the decision-makers in the trade union movement here. I have somewhat come around to the idea that it’s strategically sound to prioritise negotiation goals that are possible to coordinate across the entire labour movement over those that are only applicable for white collar roles. I’m still keen to explore ways to move the issue further up the agenda at the key moments when priorities are being defined though.
Very excited about what’s ahead for tech industry unionisation over here. If you’re reading this in Sweden and wishing you had a kollektivavtal at your current gig feel free to reach out ([email protected]) and I’ll be happy to share whatever contacts and experience I have that can help you get started.
You should also join TWC’s Slack. There’s discussion going on to see if there’s enough mass to create a Swedish chapter.
The tech worker stuff I saw back in 2016-2017 was not compelling. When Maciej came through town, he seemed more interested in riling us up to give donations to defeat Trump than to, you know, work on common issues like contracts.
During a stint in Brooklyn in 2019, I attended a TWC event (I think it was them, could’ve been somebody else) and was summarily unimpressed with the organizing efforts–not least the amount of time spent by a Google contractor who, bless her heart, didn’t seem to understand that they weren’t actually there for her but for “real employees”. There was also a bunch of time wasted on the usual progressive genuflections.
For the past several years, I’ve had the pleasure of watching a CWA shop’s bargaining play out from up close, and of seeing the slow disillusionment of the bargaining unit with the process–as a comrade noted after bouncing from the place, “having a union just means you get to help hold the knife when layoffs come”.
If I had to sum up many of my criticisms and observations (and this is heavily anchored in a US background):
In order for unions to be successful, you need broad appeal among your base. For whatever reason, the progressives in the US cannot peacefully coexist and form coalitions with more conservative fellow workers, and that is to the detriment of both factions when placed across from Capital.
For historical reasons, unions in the US are very, very different from unions in places like Germany or elsewhere. They are explicitly very political, and if you read the limits and affordances on them it’s clear a lot of time is spent making sure they’re able to serve their other master: the political party they throw in with. That is not the same cause as helping the workers. (Were I a more cynical sort, I’d suggest that this is by design and a good way of rendering toothless what was once a more threatening hotbed of activism by working-class socialists and anarchists.)
For historical reasons, conservative tech workers are actively propagandized against unionization and do not understand the history of how we got here, the potential benefits and tradeoffs for having or not having a union, and generally that their needs as workers could possibly be handled better via collective action than the luck-of-the-draw of their currently negotiated position.
For historical reasons–I suspect mainly that unions primarily formed in Democrat/progressive-leaning areas due to urbanization and where historical capital/factories were concentrated–the modern union organizers definitely, and the members frequently, have a set of political beliefs that are, let us say, inefficient in achieving the goals of a union. To wit: the purpose of a union is ultimately to get pro-worker treatment via a monopoly on labor…so, it is fundamentally incompatible with any sort of pro-immigrant or pro-globalization stance. This is awkward for tech workers, who as a class lean progressive (arguably due to a previous position of privilege) and who are used to working in a space (as much as it is popular these days to trip over oneself to mock such libertarian ideals) free of the weary giants of flesh and steel that would afford the protectionism that empowers unions.
Nearly all of us rely on open-source software, again something which knows no borders. It is very difficult to be pro-open-source while still maintaining the desired monopoly on labor.
There is negligible cost of reproduction for a lot of our software; if done properly, once a library solves an issue the library maintainer can disappear and we’ll all still benefit. This dynamic is not the same as for, say, a factory worker whose absence will be felt until filled. The issue this presents is: you can’t meaningfully control a labor supply when the outputs of the labor supplant that labor!
Many people who want to enter into a bargaining agreement don’t quite understand that there is actual bargaining going on–once a company’s management enters into these negotiations, they are going to seek concessions on everything possible (because it’s a negotiation, right?). Many first-time union people, bless their hearts, do not understand how negotiations work and will leave things on the table, only to be dismayed later at their absence (I had to explain such fundamental concepts as BATNA to one set of folks, then watch them get screwed on total comp because they didn’t understand how to value their profit share and equity–see the sketch after this list).
Many people in the US who kick off a unionization process do not understand that they are casting Summon Bureaucracy III and that it will be a multi-year journey. Something like two-thirds of bargaining units do not have agreements in place after two years, and other figures I’ve seen put the wait at 400-500 days. It’s a long, drawn-out process, and in an industry where we have (had?) such high turnover, that momentum can be hard to maintain.
For smaller firms, especially SMBs and (god-forbid) startups, the introduction of a bargaining unit creates all kinds of annoying issues for middle management. If you have a low performer, or a malicious worker, or even just somebody who needs a nudge in the right direction to get back on track, you suddenly need to consider how that all fits in with the union dynamics. In some cases–speaking from experience–this greatly complicates the ability to use a lighter touch and keep things out of documentation that could later be used against the employee. Again, the union implementation in the US is a bureaucratic and inherently adversarial one. During the time of negotiations, everything is under an even closer microscope, and that is not always to the workers’ or company’s benefit!
Because a lot of the people in tech come from California, there’s a sort of baseline level of protections they assume and, in their privilege, fail to account for when interacting with folks elsewhere in the country–for example, non-competes are a constant source of worker oppression in our industry, but I do not see them focused on nearly as much. Again as an example, when Maciej came through he failed to address any of the (relatively straightforward, non-partisan) issues we faced as tech workers in Texas around things like non-competes, reasonable exercise windows, and so forth.
The unionization efforts I’ve seen in tech do not account for the fact that seniority in a position does not mean a good fit all the time. There is a constant evolutionary pressure for firms to employ the best and latest techniques, and in small firms and startups failure to do so is often a matter of life-or-death in the marketplace. Demanding that workers with more up-to-date skills be de-prioritized in favor of somebody with tenure works great until the company is eaten alive by other firms. I suspect a sufficiently clever union could address this issue via training requirements or whatever, but in practice I’ve seen this typically just turn into a jobs program. This is by design, but it does not work if there is actual competition from other firms (which there is, indeed there is a global marketplace for tech competition!).
In small tech firms, there is less of a buffer between tech workers and business realities. In a large factory stamping out car frames or windows, your assembly line worker can afford to be ignorant of the company’s position in the marketplace (and even if they couldn’t, there’s not a lot that Glue Station 3 second-shift can do about it). In a small tech company (at least according to our tribal legends) a few developers who really know the business can make all the difference in the world. At the same time, for the reasons listed above, pro-union techies tend to come with a lot of progressive baggage that makes it incredibly difficult to acknowledge fundamental truths about economics and the marketplace they engage in, which tends to have a deleterious effect. I have seen aspiring union workers actively proud of being ignorant of the business concerns that ultimately pay for things like their negotiated healthcare or benefits packages (only to be surprised, surprised! when RIFs happen a year later).
For chrissake, please please please read about the perils of hotshops before you try to organize.
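To make the comp-valuation point above concrete, here’s a back-of-the-envelope sketch (all numbers invented, and the “liquidity odds” discount is my own simplification) of why you can’t compare packages on base salary alone:

```python
# Illustrative sketch (made-up numbers): comparing two packages by expected
# annual value rather than base salary alone. Knowing your BATNA means
# knowing the full value of the deal you already have.

def expected_annual_comp(base, profit_share, equity_grant, vest_years,
                         liquidity_odds):
    """Rough expected yearly value of a comp package.

    equity_grant: paper value of the grant at today's valuation
    liquidity_odds: chance the equity is ever worth its paper value
    """
    equity_per_year = (equity_grant / vest_years) * liquidity_odds
    return base + profit_share + equity_per_year

# Current position (the BATNA): lower base, real profit share, equity.
current = expected_annual_comp(base=140_000, profit_share=15_000,
                               equity_grant=80_000, vest_years=4,
                               liquidity_odds=0.5)

# Negotiated deal: higher base, but the profit share and equity were
# "left on the table" during bargaining.
proposed = expected_annual_comp(base=150_000, profit_share=0,
                                equity_grant=0, vest_years=4,
                                liquidity_odds=0.0)

print(f"current:  ${current:,.0f}/yr")   # current:  $165,000/yr
print(f"proposed: ${proposed:,.0f}/yr")  # proposed: $150,000/yr
```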
If I ever get my blog back up, I could write a small treatise on all of this.
To wit: the purpose of a union is ultimately to get pro-worker treatment via a monopoly on labor…so, it is fundamentally incompatible with any sort of pro-immigrant or pro-globalization stance.
What? Immigrant/offshore workers can act as scabs, of course, but I don’t see why this is a fundamental truth. Immigrants can be union workers too.
It’s not a fundamental truth. It’s cherry-picking of the most dysfunctional traits a union can have to build a strawman that is the easiest to shoot down and feel more intelligent about it.
Typed by an immigrant union organizer in a “Migrant Organizing Unit”.
Are you in the US, or elsewhere (as I gather from your profile)? I was pretty clear I was addressing the US operating regime–if you’re in Germany or whatever you have a different kettle of fish.
The de facto purpose of getting H1-B visas in the US is to have a captive workforce that can work more cheaply than native labor. If they cost the same to employers, I wager they wouldn’t be as attractive to hire. Here, go look for yourself–“some of these things are not like the others”, as one might say.
The de facto purpose of globalization is to export work processes to where it’s cheaper to have them occur, whether that’s due to avoiding expensive environmental regulations, expensive worker compensation, expensive resource extraction, or whatever else. If you are a union, you do not want your employer to get labor elsewhere.
I’m explaining this not to excuse any of it but to make sure anybody (in the US, again) who is looking at solving the puzzle knows more of the pieces and dynamics at play, and to encourage people to look past whatever easy ideology they’re likely to have because they’re (comparatively) wealthy and in tech, because it will fail them as times get rough (and times, they are getting rough).
If you want to make a useful contribution, @chobeat, instead of just sniping at me, perhaps you could share your experience and background with a “Migrant Organizing Unit” and talk about what works, what doesn’t, and where they show up.
Not that I have any more experience with labor organization than you, but unionism breaks into different strains, and your conception of it is called business unionism, where a specific union exists solely to forward the narrow economic self-interest of its then-existing membership. Other conceptions include labor liberalism, where unions exist for workers as a class to more effectively lobby the state (which is the source of the association with Dem party politics you saw), and class struggle unionism, which takes an explicitly combative approach with capital and also involves the sort of “ideological baggage” you talk about. The latter tends to take a “workers of the world unite” type approach and does not inherently view immigrants or workers elsewhere as scabs. Whether this last strain actually makes any sense when operating from within the imperial core can certainly be debated. But anyway, I’m not here to argue for one conception or the other, just to say that your conception of what a union is is not the only one. Most of the ideas in this post come from the book Class Struggle Unionism by Joe Burns.
Excellent pointers on terminology, thank you–will doubtless incorporate that in the essay!
The de facto purpose of getting H1-B visas in the US is to have a captive workforce that can work more cheaply than native labor. If they cost the same to employers, I wager they wouldn’t be as attractive to hire.
This is true, but I’m not sure how you get from there to “unions must be anti-immigrant”.
A union’s goal should be to get a collective agreement that covers immigrant workers too, so they’re no longer cheaper. Maybe that means they hire fewer immigrants (though I honestly don’t think there would be a noticeable effect in tech), but the immigrants they do hire would get much better working conditions; I’d call that pro-immigrant. I think immigrant workers would gladly sign on to that.
We are stronger together than apart. Your point about divisions along political alignment lines applies to the division between citizens and immigrants: if we exclude them, we just set ourselves up to get scabbed on.
An immigrant who is established enough in their new country to be a union worker now also has an economic incentive to oppose further immigration from the place they originally came from, just as native workers in the same union do.
This isn’t true, and the history of labour organising shows how excluding people on the basis of race, sex, nationality, homelessness, etc. makes a union weaker (e.g. the IWW organising “hobos” in a logging industry dispute; listen to Cool People Who Did Cool Stuff for more examples).
You may also be making the “lump of labour fallacy”.
If they only think in terms of narrow, first-order effects, sure.
I acknowledge your stated intentions here about inspiring people to adapt tactics. But this is not a forum where your coparticipants are union insiders with the power to make the kinds of changes you want.
The primary net impact of sucking up all the oxygen in a discussion space with a huge critique post like yours is going to be to dissuade potential new organisers from engaging. Instead of leading to the change you want to see, it’s more likely contributing to further entrenching the structures and strategies you dislike by depriving the labour movement of the new blood it depends on to refresh its thinking.
You seem pretty well read on this general topic. If you ever do get that blog post series written, I’d encourage you as a next step to consider ways of putting the ideas into practice through real organising. The labour movement has plenty of armchair generals who can tell you a million reasons why this or that aspect of how things are done is doomed to failure. As in any other field of endeavour, you earn your influence over the direction things take by the merits of your prior contributions. It’s unlikely that a Substack authored by someone who attended a TWC event in Brooklyn six years ago and knows some people at a place with a union deal is going to produce the outcomes you want to see on its own.
Thanks for typing this up, I would read more if you wrote more.
Once again begging progressive people to learn how money works and read the FT.
And how did you contribute to making this better?
Because of the aforementioned adversarial nature of unions in the US, I can neither confirm nor deny any actions I may or may not have taken that may have facilitated, directly or indirectly, a bargaining unit’s progress.
Nice attempt at a gotcha though! Maybe consider the solidarity part of worker solidarity, friendo. They want us buddyfucking each other.
Less glibly: I believe that by explaining these issues, especially around the dynamics of tech unionization in the US, there’s a better chance to adapt tactics and find a path forward that better serves tech workers. I suspect that something other than the current union framework in the US would actually be more to our benefit, perhaps a guild system or similar. I see too much cheerleading from the latte class who have neither formed, worked with, nor managed members of a bargaining unit in tech, and I have seen that cheerleading mislead and screw workers whose simple mistake was trying to do what they were told would magically solve all their problems.
Union as aesthetic doesn’t work.
I’m not gonna lie, he got me in the first half. At first I thought the article was yet another rant against a strawman from a cranky, dysfunctional senior programmer convinced software development is mostly a technical problem. I didn’t expect it to be an allegory.
Hello. I consult on this stuff for work and… yeah, there aren’t truly good options. Most orgs use Action Network, which is the only really decent option out there, but they often have to operate against the platform and integrate it with custom processes, often involving stuff like Airtable or Notion.
You can consider activist.org, but it’s broader in scope than what you need and does everything kinda poorly. I call it the “Nextcloud of activist software” (derogatory).
Depending on your needs, it might be easier to develop something custom on n8n + NocoDB + ghost.io. They are all supported by YunoHost, so you can have a working setup in like two hours tops.
Of course, no discussion of efficiency can be complete without mention of the Jevons Paradox: https://en.wikipedia.org/wiki/Jevons_paradox
Decreasing power consumption of a program can increase power consumption in aggregate, because then it is more economical to run the program and so it will be run more often by more people. The same phenomenon befalls all attempts to decrease overall resource consumption by focusing on efficiency. This isn’t to say efficiency is bad to pursue or anything. It just depends what your goals are.
Ooh! Kind of like how, due to the increased capacity, building more highways supposedly increases congestion rather than decreasing it!
That phenomenon is called “induced demand”, IIRC.
For years, I’ve meant to write a blog post examining total lifecycle power consumption of programs. It would include analyses like this paper, but would also look at how often the program is run in the world (Relevant XKCD, but for power instead of time) and how much power the human developers consumed in the making of it.
It would be a lot of digging to get real numbers, but roughly speaking, I’d expect that:
Obviously, more-frequently run programs should be better optimized (low-level C utilities used on every cloud box vs one-off shell scripts on your laptop)
Maybe less obviously, infrequently-run programs may be worse for the environment to build in a faster/more efficient language if it requires more devs.
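A minimal sketch of the comparison I have in mind (all figures invented, units kWh):

```python
# Rough lifecycle-energy sketch (invented figures, in kWh): development
# energy is paid once, runtime energy scales with how often the program
# runs in the world.

def lifecycle_energy(dev_kwh, kwh_per_run, total_runs):
    return dev_kwh + kwh_per_run * total_runs

# A one-off script vs a heavily optimized build of the same tool.
quick_script = lifecycle_energy(dev_kwh=50, kwh_per_run=0.010, total_runs=1_000)
optimized    = lifecycle_energy(dev_kwh=5_000, kwh_per_run=0.001, total_runs=1_000)
print(quick_script, optimized)  # 60.0 vs 5001.0: optimizing wasted energy here

# The same programs at cloud scale: a billion runs flips the conclusion.
quick_script = lifecycle_energy(50, 0.010, 1_000_000_000)
optimized    = lifecycle_energy(5_000, 0.001, 1_000_000_000)
print(quick_script, optimized)  # 10000050.0 vs 1005000.0: now optimize
```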
The Jevons Paradox depends a lot on the elasticity of demand, so I never trust it as a general rule. But in this case there’s plenty of evidence that use of computation will tend to grow as much as we want it to.
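To illustrate the dependence: a toy constant-elasticity demand model (numbers invented), where “price” stands for the power cost per unit of work:

```python
# Toy rebound-effect model: demand for compute as a function of its power
# cost, with constant price elasticity. Doubling efficiency halves the
# cost per unit of work; whether total power use falls depends entirely
# on the elasticity.

def total_power(price, elasticity, base_demand=100.0):
    demand = base_demand * price ** elasticity  # constant-elasticity demand
    return demand * price  # total power = units of work x power per unit

for e in (-0.5, -1.0, -2.0):
    before = total_power(price=1.0, elasticity=e)
    after = total_power(price=0.5, elasticity=e)  # 2x efficiency gain
    print(f"elasticity {e}: {before:.0f} -> {after:.0f}")

# elasticity -0.5: 100 -> 71   (efficiency gains reduce total use)
# elasticity -1.0: 100 -> 100  (break even)
# elasticity -2.0: 100 -> 200  (Jevons: total use grows)
```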
I keep remembering a story about some scientist in the 1980’s who did lots of simulations on her dinky little desktop PC, and they would take like 16 hours per run. She finally got funding to replace it with a big beefy workstation, and was like “yessss my simulations are gonna go so fast now” and ported her simulation program to it… and quite rapidly they ended up taking about 16 hours per run, because that was how long she felt like waiting for them. She could start them at the end of the day, go home, and the results would be waiting for her the next morning, and the complexity of the simulations grew until they reached that limit.
This is somewhat counter-balanced now by performance increases coming mainly from performance-per-watt improvements.
Consumer hardware is thermally throttled, and it’s hard to use more energy when you can’t dissipate the heat it generates.
Even large-scale compute is bound by the cost of electricity and cooling.
Totally. Other costs just start intruding and context ends up mattering: it depends on the variable and fixed costs, and you start getting into actual economics stuff.
That’s how we ended up with LLMs consuming like little countries to write text glob.
How does it compare to stuff like Bonfire in terms of features? Is it just a rich wrapper around ActivityPub or does it implement higher-level features?
To use a web development analogy, Bonfire is more like a CMS like WordPress, while Fedify is more like a framework like Rails.
I see. I mean, Bonfire per se is definitely closer to WordPress, but it does offer a framework too, since it was born for that purpose. Anyway your comparison to Rails makes it clear enough, thanks.
Aren’t the towers of abstraction an enormous success? When I was learning to program around 1990, people were still writing think pieces about the lack of software reuse. Now that is a solved problem. If anything, people write think pieces about how there’s too much software reuse!
My response to the mountain metaphor is that a rising tide lifts all boats: our situation is more like Hawaii than the Himalayas. True, there’s a risk of drowning in abstraction and sometimes our mountains of software explode spectacularly. But it’s easier to draw something on a <canvas> now than it was to draw in a window 35 years ago. And new mountains with better abstractions are being built: look to CHERI, Rust, io_uring. Maybe Oxide’s approach to firmware will succeed? I’m optimistic.
When users of all kinds complain about the lack of interoperability of software, data silos, and pervasive mono-cultures that can’t be upended by some random individual working in their garage, I don’t think of success.
When the majority of software re-use is now delegated to shipping containerized binaries because we can’t actually build portable, composable software, I don’t think of success.
When I’m limited to the few, if any, outlets of configuration that a piece of software gives me, apart from what the authors allow, I don’t think of success.
When I think that we’re still in the same general spot as we were 35 years ago, just with the ability to move faster due to the demands of capital and product, I don’t think of success.
When the majority of software re-use is now delegated to shipping containerized binaries because we can’t actually build portable, composable software, I don’t think of success.
I feel like … we know how to build good software and good abstractions. We see this happen a lot in open source projects where people have the freedom to do the right thing without pressures from management and executives breathing down their necks. Tremendous successes abound.
But we don’t know how to incentivize producers of commercial software to build quality products. Sometimes it happens by accident.
Software should be detached from profit and the market economy. There are several fields in which this just works better, like healthcare. Any serious attempt at bringing software under public control, assuming there will ever be enough concentration of political capital to do that before the end of the information age, would be met with incredibly violent resistance by the oligarchs that profit from private software.
If anything, the current trend is going the opposite way: regulations on software are being attacked left and right by the oligarchs, and planes have started falling.
I think the danger with that approach is that it’s difficult to ensure that the correct software gets created. Markets are a very good way of ensuring that resources get allocated relatively efficiently without needing a central planning system, and without having lots of waste. (Waste in this context is having everyone learn how to write COBOL when app developers are necessary, or vice versa.) Markets have a lot of issues and require a lot of careful regulation and interventions, but they are really good at decentralised decision-making, and we should use them for that purpose.
In fairness, I can understand why people might not associate the current software market with efficiency, but we’re talking about a different kind of efficiency here! The goal of the market is to match people with desires and people who can solve those desires. Right now, few people desire fast, efficient software, as hardware is mostly cheap, so it doesn’t get created as often. It might seem counterintuitive, but this is good: it generally takes longer and more resources to write a shorter, faster, more efficient program (in the vein of “I would have written a shorter letter but I didn’t have the time”), and that time and those resources would be wasted if people didn’t actually need the efficiency.
Problems arise where the markets cannot capture some aspect of the “true price” of something. For example, in the discussion on software efficiency, there are environmental issues which don’t get factored into the price of hardware, and there are many groups of people who have needs but don’t have enough buying power for those needs to be properly met. In these cases, we need regulation to “fix” the markets - pricing in environmental impacts to hardware and running costs, and ensuring minimum standards are met for all software that allow people with various disadvantages to still engage with software. However, just because the markets require adjustment doesn’t mean that we should throw them away entirely. Software needs to remain attached to profit and markets to ensure that software gets written that actually serves people’s needs.
I realise we’re in danger of getting off-topic here and I don’t want to derail this discussion too much. But I wanted to provide a short leftist defence of markets in software, and point out ways of solving current issues that don’t involve rejecting markets entirely.
The goal of the market is to match people with desires and people who can solve those desires.
The idea that I could spend time working on software that does things that people actually want is why I write free software outside of a market. It appeals to me specifically because the opportunity to do that is so rare in the industry.
In theory, yes, a company that could do this would do well in the market, but in practice any company that briefly achieves this ability ends up self-sabotaging it away in short order.
Aren’t the towers of abstraction an enormous success? When I was learning to program around 1990, people were still writing think pieces about the lack of software reuse. Now that is a solved problem. If anything, people write think pieces about how there’s too much software reuse!
I think the part that bothers me the most is that a lot of the “modern” abstractions are designed more for plug & play and not for extension. “Frameworks” instead of “libraries”, as I’ve seen the distinction drawn. If what you’re doing fits well into what the authors were expecting you to do, things work really well. And if you try to step anywhere off of that pre-ordained path, things start getting really hairy quickly. I wish I could remember what the project was that I was working on a few months ago… it was UI stuff, and the framework provided a fabulous set of components, but adding a field validator to a text field involved climbing 3 or 4 layers up the abstraction tower, making your own variant of some superclass, and then bringing back a bunch of extra functionality from the subclasses you couldn’t use.
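A sketch of the distinction, with invented names (not the actual framework I was fighting with):

```python
# Hypothetical sketch (invented names) of library-style vs framework-style
# extension for a text field validator.

# Library style: a validator is just a function, so composing one in is easy.
def max_length(n):
    def check(value):
        return len(value) <= n
    return check

class TextField:
    def __init__(self, validators=()):
        self.validators = list(validators)

    def accepts(self, value):
        return all(check(value) for check in self.validators)

field = TextField(validators=[max_length(80), str.isprintable])
print(field.accepts("hello"))  # True

# Framework style: validation is fused into the widget lifecycle, so adding
# one check means subclassing and overriding a hook, then restoring whatever
# superclass behavior the override clobbered.
class FrameworkTextField:  # stand-in for a widget 3-4 layers up the tower
    def on_change(self, value):
        pass  # imagine rendering, state sync, and validation fused in here

class ValidatedTextField(FrameworkTextField):
    def on_change(self, value):
        if len(value) <= 80:              # the one behavior we wanted to add
            super().on_change(value)      # hope nothing else depended on the rest
```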
When the majority of software re-use is now delegated to shipping containerized binaries because we can’t actually build portable, composable software, I don’t think of success.
I 100% agree. I mean… thinking back to the late 90s and early 2000s, I do somewhat appreciate that many of those containerized binaries are going to be talking JSON over HTTP and/or WebSockets, and the languages I use on a regular basis all have really good libraries for those protocols. On the other hand, it’d be really great if a lot of that were a matter of linking a .so and potentially using an FFI binding instead. I’m absolutely exhausted from looking at code that JPEG-encodes an image buffer, takes the JPEG, base64-encodes it, stuffs it in a JSON dict, only to have the whole encoding process reversed on the other side.
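For the record, the roundtrip I mean looks roughly like this (a minimal sketch; I’m assuming Pillow for the JPEG step):

```python
# The roundtrip in question, as a minimal sketch (requires Pillow).
import base64
import io
import json

from PIL import Image

img = Image.new("RGB", (64, 64))  # stand-in for a real image buffer

# Sender: JPEG-encode, base64-encode, stuff into a JSON dict.
buf = io.BytesIO()
img.save(buf, format="JPEG")
payload = json.dumps(
    {"image": base64.b64encode(buf.getvalue()).decode("ascii")}
)

# Receiver: undo all three layers to get the pixels back.
decoded = json.loads(payload)
jpeg_bytes = base64.b64decode(decoded["image"])
img2 = Image.open(io.BytesIO(jpeg_bytes))

# Three encode/decode passes (plus ~33% base64 size inflation) to move an
# image that an FFI call could have handed over as a pointer and a length.
```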
I draw a distinction between abstraction and composition, which is also in the article. It’s not a hard distinction, but I’d say:
Composition means putting parts together to form a working system. Does the result work? Is it correct? Is it fast and secure? (Composition does feel more “horizontal”)
Abstraction means hiding details. Abstracting over Windows and Unix is something that I think is often accidental complexity, or at least a big tradeoff. It saves time for the developer, but it can be a loss to the end user. (Abstraction does feel more “vertical” – and fragile when you get too high)
This person, commenting on the same article, pointed out “shallow and composable” as properties of Unix, and I agree:
So I think shell composes, but it’s actually not very abstract. And this is a major reason I’ve been working on https://www.oilshell.org/
IME, shell gets a lot of work done effectively, without much weight, and is adaptable to new requirements. One person can write a shell script to solve a problem – you don’t have to assemble a big team, and justify its existence.
(Of course something that’s challenging is for that shell script to not become a mess over the long term, and I believe we’re doing something about that)
From the article:
Programming models, user interfaces, and foundational hardware can, and must, be shallow and composable. We must, as a profession, give agency to the users of the tools we produce. Relying on towering, monolithic structures sprayed with endless coats of paint cannot last.
This is generally my preference, but I would say “must” is not true … One thing I learned the hard way is that interoperability is basically anti-incentivized.
Long story, but I think the prevalence of YAML in the cloud is a “factoring” problem, and there’s actually a deeper economic issue at play.
That is, the people on one side of the YAML write code and algorithms, and the people on the other “configure” those lego blocks that don’t actually fit together.
YAML arguably abstracts (it hides details behind an interface)
But it doesn’t compose (when you put things together, they don’t have the properties you want) …
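A tiny sketch of what “doesn’t compose” means here, with Python dicts standing in for two YAML fragments (keys invented):

```python
# Sketch of "abstracts but doesn't compose": two config fragments, each
# fine alone, merged the way most tooling merges them.

base = {
    "resources": {"limits": {"memory": "512Mi"}},
    "env": [{"name": "LOG_LEVEL", "value": "info"}],
}
override = {
    "resources": {"requests": {"cpu": "250m"}},
    "env": [{"name": "FEATURE_FLAG", "value": "on"}],
}

merged = {**base, **override}  # the usual shallow, last-writer-wins merge

# The memory limit and LOG_LEVEL are silently gone: whole subtrees got
# clobbered, so the combination lacks properties each piece had alone.
print(merged["resources"])  # {'requests': {'cpu': '250m'}}
print(merged["env"])        # [{'name': 'FEATURE_FLAG', 'value': 'on'}]
```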
abstracting over OS always feels weird to me, when one of the main purposes of an OS is to abstract over hardware
abstracting over hardware makes sense, because we keep getting better at making hardware, we have different tradeoffs, etc.
but with OSs, it mostly seems like a coordination problem. sometimes an intentional one, because the organizations involved were trying to build a moat
The OS already abstracts over hardware, and then we are piling more abstractions on top of OSes.
Ones that leak – in terms of performance, security, or just making the application behave poorly.
Electron is basically that – it lets you ship faster, but that’s about it
The “tower” or “stack” is often not a good way of building software.
And the funny thing is that OSes are converging, with Windows gaining a Linux kernel in ~2016 (WSL), and then it also gained a Unix terminal some time later!
I guess to argue the other side, Unix was never good at GUIs … so it’s not like Macs or Unix were superfluous or anything. But it’s just that the most basic layer is still in flux, and it is converging on “Unix”, even in 2016 and 2024 …
(running Docker containers seems to require some sort of Linux x86-64 syscall ABI too)
As a thought experiment, I’d say if we knew how to perfectly abstract, we’d be able to write multi-platform GUIs that work perfectly on all targeted platforms.
But I think anyone who works in that area (I don’t) will tell you that it’s a big compromise. You can write something better if you start OS X only, or Windows only.
I think Flutter is something that abstracts over Android-iPhone, and there are many others.
And of course there were many attempts at Windows / OS X abstraction (Qt etc.), but what seems to have happened is that desktop GUIs just got uniformly WORSE since those attempts were made.
Is an Electron app better than a Qt app?
Rust is famously “not GUI yet”, and you can argue that if it had some yet-unknown great powers of abstraction, then it would be.
So you could say it’s an unsolved problem to have “zero-cost abstraction” in that respect (!)
(And yes this is a pun – the cost I’m talking about is in the behavior of the app, not the performance)
To summarize, I think many things are better than they were 20-30 years ago, but many things are worse. Latency is one of the latter - https://danluu.com/input-lag/
Composing software from parts while keeping latency down is another unsolved problem.
Programmers are not famous for awareness of labor dynamics or solidarity with other workers. Factor in the child-like propensity for “novelty over responsibility”, some marketing, and the other qualities of such frameworks, and the deal is closed.
In general, worker control nowadays is not top-down, but presented indirectly, to push workers to take the initiative in undermining their own conditions. Frontal conflict with privileged workers is too expensive for companies: soft control is a much better option when the workers are unaware of their position in the company or in the industry. These frameworks, but also languages like Java or COBOL, could very easily be a case study.
It’s a way to fight boredom - at least you can learn to use a new framework in addition to writing the same old stuff day after day (and that way gain competence that might be useful).
I found this to be a thought-provoking article. The labour arbitrage theory definitely holds water, but I don’t think it’s the full story. For individual developers, commodification of tools can also be empowering.
Where before you would need to either spend a lot of time mastering new tools (e.g. native development with all that entails, like learning Objective-C or Swift for iOS, and Java and maybe C++ for Android) to do a task, or not do it at all, now you can re-use your existing skills. As much as I dislike React, something like React Native allows you to use most of your existing React knowledge and write code for mobile in less than half the time it would take you to write native code for both platforms (even assuming you already know and master the native frameworks). Also, there are good reasons it won out over Cordova/PhoneGap - that stuff is extremely slow, and you have to deal with the frustrations of buggy native webview components on top of that.
So yeah, of course companies will gravitate towards commodified tools - they have people with knowledge X, and instead of having to hire people with knowledge Y in order to take on a new project, they can put people who already have the X knowledge onto the new project. It can be the difference between having to tell a client “no” and being able to accept it. Hiring (or creating) experts for a new project isn’t always feasible.
Of course, the flip side is that these commodified tools are inherently slower and more janky than the native stuff - they’re an additional abstraction layer over what lies underneath. And not knowing the fundamental underpinnings means you’ll end up building in even more inefficiencies and sometimes even reinventing existing things badly. But this has been hashed out and argued to death here already.
I don’t really buy the conclusion about unionization though. If you want to jump on the bandwagon and use the most commodified thing to avoid gaining more specialised knowledge, of course your skills will be worth less on the job market.
I’m afraid you fell for the trap of framing the issue as an individual problem. From an individual perspective there’s a way out where you spend extra personal resources to compete against your peers and have a chance to come out on top, but collectively and economically this makes the situation worse for everybody.
Obviously standardization of tools and practices is a good thing in terms of efficiency in most scenarios, but under our economic system it means less bargaining power for workers and more bargaining power for owners. It doesn’t have to be this way, but in most of the West that’s the case. There’s a technical incentive that runs opposite to the interest of the worker. Unions, as in “industry-wide trans-national unions able to negotiate technology adoption”, can patch up this issue to some degree. Obviously this is a long-term goal of the growing tech workers’ movement, and at the moment this kind of conflict in IT exists only at the company level, but we have to start somewhere. Labor arbitrage can only make our situation worse, and there is still time to use our privileged position as tech workers to build a moat against these dynamics.
I work in the space that is trying to tell people that ChatGPT not only lies, but lies a lot and in dangerous ways. The problem is the millions of dollars in lobbying and propaganda pushed by right-wing think-tanks financed by the orgs that want to deflect attention from human responsibility by giving agency to the “AI”.
It’s not enough to tell people when there’s a media juggernaut on the other side that has been building the opposite narrative for decades.
Generative AIs and LLMs should be heavily regulated, put under public governance, and taken away from corporations, especially extra-reckless ones like OpenAI. The “it’s already open source” argument is bullshit: most of the harm of these tools comes from widespread accessibility, user expectations created by marketers, and cheap computational cost. Yes, with open-source diffusion models or LLMs you will still have malicious actors in NK, Russia, or Virginia making automated deepfake propaganda, but that’s a minor problem compared to the societal harm that these tools are creating right now.
Do models like GPT-3 inadvertently encode the sociopathy of the corporations that create them? Reading through this thread, I have the distinct impression that GPT-3 is yet another form of psychological warfare, just like advertising.
I love Star Trek. And in the original Trek, there were plenty of episodes where some really advanced computer essentially managed to lie and gaslight its way into being the equivalent of god for a society full of people that it then reduced to an agrarian or even lower level of tech.
Return of the Archons, The Apple, For the World Is Hollow and I Have Touched the Sky, probably a couple of others that don’t come to mind right now. In a couple of those cases, it was obvious that the programmers had deliberately encoded their own thinking into the model (Landru from Return of the Archons, the Fabrini oracle from For the World Is Hollow).
And reading through this thread right now, I’m like, maybe these scenarios aren’t so far-fetched.
An excellent point. All of my experience and everything I’ve read tells me that human wetware is full of easily exploitable vulns. People have been exploiting them for a much longer time than digital computers have even been a thing. They’re easier to exploit than to grok and fix. Psychology is a young discipline when compared to rhetoric and sophistry. So yes, the former is much simpler.
All of my experience and everything I’ve read tells me … for a much longer time than digital computers were even a thing. … Psychology is a young discipline when compared to rhetoric and sophistry.
— teiresias
This comment is enhanced by knowing that Teiresias is a mythic character from ancient Greece. :-)
Talon Voice supports voice control and eye control but the learning curve is much longer than two weeks. I suggest you relax, rest, talk to people and go on walks.
I work at a news site. They’re going to hold the election on a certain day whether I’m ready for it or not. Obviously then the goal is to make an actionable design that can be implemented far enough in advance before the election day to do some testing and whatnot. You can call it what you like, but it’s basically a deadline.
Of course, lots of work I do has no deadline. The CMS is upgraded when the CMS is upgraded. So I think it’s worth distinguishing things with real externally imposed deadlines from things with internal goal dates. Goal dates can be shifted. Deadlines are not so easy to move.
I used to work for an edtech company. If features weren’t ready by the start of the school year, we’d miss our chance to get the majority of our teachers properly onboarded. Deadlines matter in software because they matter in the meatworld.
The fact that you have to deliver a specific feature for election day is a choice made by humans, not something inevitable. The decision to develop something new before a given date instead of saying “we develop the feature and at the first useful election we use it” is a choice. Deadlines become inevitable when profit is put before software quality. Deadlines are inevitable when people that care about profit hold more power than people that care about software quality.
Are you saying that deadlines happen when people care more about profit than software quality but not when people care more about revenue than software quality?
It depends on the context and mission of the organization, but in my experience it’s much easier to reason long-term in non-profit orgs than in VC-funded startups or big corpos.
Deadlines are also inevitable when the food is going to spoil unless we have a system for distributing it by a certain date. That doesn’t have anything to do with profits, it has to do with the food spoiling; it would spoil just as fast under any kind of system humans can come up with.
“we develop the feature and at the first useful election we use it”
I know that Politico takes that approach because they cover lots of elections nationwide. For me, I’m only covering Pennsylvania, so there’s not really enough chance to recoup investment. If we miss 2022, there’s no telling if we’ll even be using the same CMS in 2024. It would be like trying to get the kids ready for school on Sunday morning: you can do some things, for sure, but lots of it really can only be done on Monday morning.
I generally agree, it’s part of the system the engineer is operating inside of…they are linked. It’s the reality we often brush off or ignore. Instead, we tell ourselves (and our customers) this software was “made with 💖” and “we’re making the world a better place.” Fine attitude for a pet project you control, but when you work for a company that’s beholden to VC money or the public stock market, good luck.
So I think it’s worth distinguishing things with real externally imposed deadlines from things with internal goal dates.
Failure to do this effectively has been the cause of so many Dilbertesque own-goals - including some of my own - over the several decades I’ve been part of the industry.
It’s doubly important for senior leaders. I’ve seen a “gee it would be nice if we had this by $DATE” turn through a series of retellings into a hard deadline, and then eventually horrify the person who spoke off-the-cuff when she discovered how that comment had been taken.
Distinguishing deadline vs goal date is a good clarification.
If you look at what the article is espousing, it’s exactly what you would want to do with a real deadline: check in regularly on where you are, decide what gets done and what doesn’t get done, and bulldoze everything else out of the way.
Invoking the halting problem’s a bit problematic, because we write many, many programs where we do know when they will finish. We have whole languages (Datalog) that are guaranteed to terminate.
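The classic demonstration is a bottom-up fixpoint over a finite universe of facts: the derived set only grows and is bounded, so evaluation must terminate. A minimal transitive-closure sketch:

```python
# Why Datalog-style evaluation terminates: facts come from a finite
# universe and the derived set only grows, so iteration hits a fixpoint.
# Minimal sketch: transitive closure of an edge relation.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

# path(X, Y) :- edge(X, Y).
# path(X, Z) :- path(X, Y), edge(Y, Z).
paths = set(edges)
while True:
    new = {(x, z) for (x, y) in paths for (y2, z) in edges if y == y2}
    if new <= paths:   # nothing derivable that we don't already have
        break          # fixpoint reached: guaranteed, since paths only
    paths |= new       # grows and is bounded by |nodes|^2

print(sorted(paths))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```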
Though most of what I see in articles like this is what the agile community figured out long ago. I still haven’t figured out how much of what happened to agile was sabotage, misunderstanding, or Gresham’s law.
Sometimes I read something like this and imagine the author got in big trouble and chose to spend a few thousand words explaining why the thing they got in trouble for shouldn’t be a thing they can get in trouble for.
Agreed. The reason deadlines exist is that multiple people need to coordinate. This is much more obvious in physical products where multiple pieces literally have to come together at the right place at the right time, and there are physical costs associated with either storing surplus or waiting for one part to be supplied. But it’s still true in software systems: you need developers building the system, developers building support stuff for the system (specialized tooling, installers, etc), you need devops building and running the infrastructure for it, you need to give testing/QA (if any) sufficient time to hammer on it, customer support time to get trained on it, even marketing (hrk, ptui) needs to know when to put up the posts announcing releases or changes.
Now not all systems need all these things, and in software there’s a fair bit more soft wiggle room around these than if, say, you have a shipping container full of stuff sitting in a port and they’re charging you $X per day for storage. But they still exist. The deadline is not about you, it’s about making all the pieces fit together.
The fact that this seems to be so seldom communicated properly and the feedback and timing on deadlines is so dysfunctional is its own problem, of course.
Or maybe the world is full of people who don’t use deadlines and are appalled by how many still do. Or it’s just a way to stimulate healthier desires in our colleagues, to make the whole industry a bit better.
I tried something similar, asking ChatGPT to produce recipes for different fermented foods. It is similar in the sense that there are specific models implied in the production of the answer: proportions of ingredients, times, temperatures, phases of processing, and so on.
They all looked kinda OK, and it showed that the AI could infer the relevant parts necessary to ferment food; 1 out of 10 was maybe neighbouring correctness. Nonetheless, 9 out of 10 would probably have molded and killed you.
As usual with these generative models, the content keeps looking better and better, but it doesn’t get any more reliable than before. It might be good for filler in a newspaper, the copy on your startup’s website, and stuff like that.
I think this article, while I believe it’s completely real, is very misleading in portraying cherry-picked examples. It’s misleading because it implies a level of trust in the output of the model that shouldn’t be there. Obviously this guy is biased and produces propaganda for his side in order to overcome this need for trust, but as technologists we shouldn’t buy into it.
This doesn’t take anything away from the impressiveness of this parroting device. Just don’t use this stuff in the real world.
RTO is driving a lot of new organising here in Sweden and it’s been a frustration of mine that so far there’s very little appetite for taking up that fight among the decision-makers in the trade union movement here. I have somewhat come around to the idea that it’s strategically sound to prioritise negotiation goals that are possible to coordinate across the entire labour movement over those that are only applicable for white collar roles. I’m still keen to explore ways to move the issue further up the agenda at the key moments when priorities are being defined though.
Very excited about what’s ahead for tech industry unionisation over here. If you’re reading this in Sweden and wishing you had a kollektivavtal at your current gig feel free to reach out ([email protected]) and I’ll be happy to share whatever contacts and experience I have that can help you get started.
you should also join TWC’s slack. There’s discussion going on to see if there’s enough mass to create a Swedish chapter.
The tech worker stuff I saw back in 2016-2017 was not compelling. When Macej came through town, he seemed more interested in riling us up to give donations to defeat Trump than to, you know, work on common issues like contracts.
During a stint in Brooklyn in 2019, I attended a TWC event (I think it was them, could’ve been somebody else) and was summarily unimpressed with the organizing efforts–least of all the amount of time spent by a Google contractor who, bless her heart, didn’t seem to understand that they weren’t actually there for her but for “real employees”. There was also a bunch of time wasted with the usual progressive genuflections.
For the past several years, I’ve had the pleasure of watching a CWA shop play out its bargaining from up close, and seen the slow disillusionment of the bargaining unit with the process–as a comrade noted after bouncing from the place, “having a union just means you get to help hold the knife when layoffs come”.
If I had to sum up many of my criticisms and observations (and this is heavily anchored in a US background):
If I ever get my blog back up, I could write a small treatise on all of this.
What? Immigrant/offshore workers can act as scabs, of course, but I don’t see why this is a fundamental truth. Immigrants can be union workers too.
It’s not a fundamental truth. It’s cherry-picking of the most dysfunctional traits a union can have to build a strawman that is the easiest to shoot down and feel more intelligent about it.
Typed by an immigrant union organizer in a “Migrant Organizing Unit”.
Are you in the US, or elsewhere (as I gather from your profile)? I was pretty clear I was addressing the US operating regime–if you’re in Germany or whatever you have a different kettle of fish.
The de facto purpose of getting H1-B visas in the US is to have a captive workforce that can work more cheaply than native labor. If they cost the same to employers, I wager they wouldn’t be as attractive to hire. Here, go look for yourself–“some of these things are not like the others”, as one might say.
The de facto purpose of globalization is to export work processes to where it’s cheaper to have them occur, whether that’s due to avoiding expensive environmental regulations, expensive worker compensation, expensive resource extraction, or whatever else. If you are a union, you do not want your employer to get labor elsewhere.
I’m explaining this not to excuse any of it but to make sure anybody (in the US, again) who is looking at solving the puzzle knows more of the pieces and dynamics at play, and to encourage people to look past whatever easy ideology they likely to have have because they’re (comparatively) wealthy and in tech because it will fail them as times get rough (and times, they are getting rough).
If you want to make a useful contribution @chobeat instead of just sniping at me perhaps you could share your experience and background with a “Migrant Organizing Unit” and talk about what works, what doesn’t, and where they show up.
Not that I have any more experience with labor organization than yourself but unionism breaks into different strains and your conception of it is called business unionism, where a specific union exists solely to forward the narrow economic self-interest of its then-existing membership. Other conceptions of unionism include labor liberalism, where unions exist for workers as a class to more effectively lobby the state (which is the source of the association with dem party politics you saw) and also class struggle unionism, which takes an explicitly combative approach with capital and also involves the sort of “ideological baggage” you talk about. The latter tends to take a “workers of the world unite” type approach and does not inherently view immigrants or workers elsewhere as scabs. Whether this last strain actually makes any sense when operating from within the imperial core can certainly be debated. But anyway I’m not here to argue for one conception or the other, just to say that your conception of what a union is is not the only one. Most of the ideas in this post come from the book Class Struggle Unionism by Joe Burns.
Excellent pointers on terminology, thank you–will doubtless incorporate that in the essay!
This is true, but I’m not sure how you get from there to “unions must be anti-immigrant”.
A union’s goal should be to get a collective agreement that covers immigrant workers too, so they’re no longer cheaper. Maybe that means they hire fewer immigrants (though I honestly don’t think there would be a noticeable effect in tech), but the immigrants they do hire would get much better working conditions, I’d call that pro-immigrant. I think immigrant workers would gladly sign on to that.
We are stronger together than apart. Your point about divisions along political alignment lines applies to division between citizens and immigrants, if we exclude them, we just set ourselves up to get scabbed on.
An immigrant who is established enough in their new country to be a union worker now also has an economic incentive to oppose further immigration from the place they originally came from, just as native workers in the same union do.
This isn’t true and the history of labour organising shows how excluding people on the basis of race, sex, nationality, homelessness, etc makes a union weaker. (E.g. IWW organising “hobos” in a logging industry dispute. Listen to Cool People Who Did Cool Stuff for more examples).
You may also be making the “lump of labour fallacy”.
If they only think in terms of narrow, first-order effects, sure.
I acknowledge your stated intentions here about inspiring people to adapt tactics. But this is not a forum where your coparticipants are union insiders with the power to make the kinds of changes you want.
The primary net impact of sucking up all the oxygen in a discussion space with a huge critique post like yours is going to be to dissuade potential new organisers from engaging. Instead of leading to the change you want to see, it’s more likely contributing to further entrenching the structures and strategies you dislike by depriving the labour movement of the new blood it depends on to refresh its thinking.
You seem pretty well read on this general topic. If you ever do get that blog post series written, I’d encourage you as a next step to consider ways of putting the ideas into practice through real organising. The labour movement has plenty of armchair generals who can tell you a million reasons why this or that aspect of how things are done is doomed to failure. As in any other field of endeavour, you earn your influence over the direction things take by the merits of your prior contributions. It’s unlikely for a Substack authored by someone who attended a TWC event in Brooklyn six years ago and knows some people at a place with a union deal is going to produce the outcomes you want to see on its own.
Thanks for typing this up, I would read more if you wrote more.
Once again begging progressive people to learn how money works and read the FT.
and how did you contribute to make this better?
Because of the aforementioned adversarial nature of unions in the US, I can neither confirm nor deny any actions I may or may not have taken that may have facilitated, directly or indirectly, a bargaining unit’s progress.
Nice attempt at a gotcha though! Maybe consider the solidarity part of worker solidarity, friendo. They want us buddyfucking each other.
Less glibly: I believe that by explaining these issues, especially around the dynamics of unionization in the US in tech, there’s a better chance to adapt tactics and find a path forward that better serves tech workers. I suspect that something other than the current union framework in the US would actually be more to our benefit, perhaps a guild system or similar. I see too much cheerleading from the latte class who haven’t either formed, worked with, or managed members of a bargaining unit in tech, and that cheerleading I have seen mislead and screw workers who made the simple mistake of trying to do what they were told would magically solve all their problems.
Union as aesthetic don’t work.
I’m not gonna lie, he got me in the first half. At first I thought the article was yet another rant against a strawman from a cranky, dysfunctional senior programmer convinced software development is mostly a technical problem. I didn’t expect it to be an allegory.
Hello. I consult on this stuff for work and… yeah, there aren’t truly good options. Most orgs use actionnetwork that is the only real decent option out there, but often they have to operate against the platform and integrate it with custom processes, often involving stuff like airtable or notion.
You can consider activist.org, but it’s broader in scope than what you need and does everything kinda poorly. I call it the “nextcloud of activist software” (derogatory).
According to your needs, it would be maybe easier to develop something custom on n8n+nocodb+ghost.io. They are all supported by yunohost so you can have a working setup in like 2 hours top.
Of course, no discussion of efficiency can be complete without mention of the Jevons Paradox: https://en.wikipedia.org/wiki/Jevons_paradox
Decreasing power consumption of a program can increase power consumption in aggregate, because then it is more economical to run the program and so it will be run more often by more people. The same phenomenon befalls all attempts to decrease overall resource consumption by focusing on efficiency. This isn’t to say efficiency is bad to pursue or anything. It just depends what your goals are.
Ooh! Kind of like how, due to the increased capacity, building more highways supposedly increases congestion rather than decreasing it!
That phenomenon is called “induced demand”, IIRC
For years, I’ve meant to write a blog post examining total lifecycle power consumption of programs. It would include analyses like this paper, but would also look at how often the program is run in the world (Relevant XKCD, but for power instead of time) and how much power the human developers consumed in the making of it.
It would be a lot of digging to get real numbers, but roughly speaking, the accounting might look like the sketch below.
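A minimal back-of-the-envelope in Python, with every figure invented for illustration:

```python
# Lifecycle energy of a program, all numbers made up:
# total = energy spent developing it + energy spent on every run, worldwide.
dev_hours = 500                  # human + workstation time to build it
dev_watts = 300                  # developer machine plus a share of overheads
joules_per_run = 50              # cost of one execution
runs_worldwide = 100_000_000     # how often it actually gets run

dev_energy = dev_hours * 3600 * dev_watts     # one-time cost, in joules
run_energy = runs_worldwide * joules_per_run  # recurring cost, in joules

print(f"development: {dev_energy:.2e} J  running: {run_energy:.2e} J")
# With numbers like these the run side dominates, which is why per-run
# optimization can repay the one-time development energy many times over.
```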
The Jevons paradox depends a lot on the elasticity of demand, so I never trust it as a general rule. But in this case there’s plenty of evidence that use of computation will tend to grow as much as we want it to.
I keep remembering a story about some scientist in the 1980’s who did lots of simulations on her dinky little desktop PC, and they would take like 16 hours per run. She finally got funding to replace it with a big beefy workstation, and was like “yessss my simulations are gonna go so fast now” and ported her simulation program to it… and quite rapidly they ended up taking about 16 hours per run, because that was how long she felt like waiting for them. She could start them at the end of the day, go home, and the results would be waiting for her the next morning, and the complexity of the simulations grew until they reached that limit.
This is somewhat counter-balanced now by performance increases coming mainly from performance-per-watt improvements.
Consumer hardware is thermally throttled, and it’s hard to use more energy when you can’t dissipate the heat it generates. Even large-scale compute is bound by the cost of electricity and cooling.
Totally; other costs just start intruding and context ends up mattering. It depends on the variable and fixed costs, and you start getting into actual economics.
that’s how we ended up with LLMs consuming energy like small countries to write text glop
How does it compare to stuff like Bonfire in terms of features? Is it just a rich wrapper around ActivityPub, or does it implement higher-level features?
To use a web development analogy, Bonfire is more like a CMS like WordPress, while Fedify is more like a framework like Rails.
I see. I mean, Bonfire per se is definitely closer to WordPress, but it does offer a framework too, since it was born for that purpose. Anyway your comparison to Rails makes it clear enough, thanks.
Aren’t the towers of abstraction an enormous success? When I was learning to program around 1990, people were still writing think pieces about the lack of software reuse. Now that is a solved problem. If anything, people write think pieces about how there’s too much software reuse!
My response to the mountain metaphor is that a rising tide lifts all boats: our situation is more like Hawaii than the Himalayas. True, there’s a risk of drowning in abstraction and sometimes our mountains of software explode spectacularly. But it’s easier to draw something on a <canvas> now than it was to draw in a window 35 years ago. And new mountains with better abstractions are being built: look to CHERI, Rust, io_uring. Maybe Oxide’s approach to firmware will succeed? I’m optimistic.
When users of all kinds complain about the lack of interoperability in software, about data silos, and about pervasive monocultures that can’t be upended by some random individual working in their garage, I don’t think of success.
When the majority of software re-use is now delegated to shipping containerized binaries because we can’t actually build portable, composable software, I don’t think of success.
When I’m limited to the few outlets of configuration, if any, that a piece of software gives me beyond what the authors allow, I don’t think of success.
When I think that we’re still in the same general spot as we were 35 years ago, just with the ability to move faster due to the demands of capital and product, I don’t think of success.
I feel like … we know how to build good software and good abstractions. We see this happen a lot in open source projects where people have the freedom to do the right thing without pressures from management and executives breathing down their necks. Tremendous successes abound.
But we don’t know how to incentivize producers of commercial software to build quality products. Sometimes it happens by accident.
Software should be detached from profit and the market economy. There are several fields where this just works better, like healthcare. Any serious attempt at bringing software under public control, assuming there will ever be enough concentration of political capital to do that before the end of the information age, would be met with incredibly violent resistance by the oligarchs who profit from private software.
If anything, the current trend is going the opposite way: regulations on software are being attacked left and right by the oligarchs, and planes have started falling.
I think the danger with that approach is that it’s difficult to ensure that the correct software gets created. Markets are a very good way of ensuring that resources get allocated relatively efficiently without needing a central planning system, and without having lots of waste. (Waste in this context is having everyone learn how to write COBOL when app developers are necessary, or vice versa.) Markets have a lot of issues and require a lot of careful regulation and interventions, but they are really good at decentralised decision-making, and we should use them for that purpose.
In fairness, I can understand why people might not associate the current software market with efficiency, but we’re talking about a different kind of efficiency here! The goal of the market is to match people with desires and people who can solve those desires. Right now, few people desire fast, efficient software, as hardware is mostly cheap, so it doesn’t get created as often. It might seem counterintuitive, but this is good: it generally takes longer and more resources to write a shorter, faster, more efficient program (in the vein of “I would have written a shorter letter but I didn’t have the time”), and that time and those resources would be wasted if people didn’t actually need the efficiency.
Where problems arise is where the markets cannot capture some aspect of the “true price” of something. For example, in the discussion on software efficiency, there are environmental issues which don’t get factored into the price of hardware, and there are many groups of people who have needs, but don’t have enough buying power for those needs to be properly met. In these cases, we need regulation to “fix” the markets - pricing in environmental impacts to hardware and running costs, and ensuring minimum standards are met for all software that allow people with various disadvantages to still engage with software. However, just because the markets require adjustment, doesn’t mean that we should throw them away entirely. Software needs to remain attached to profit and markets to ensure that software gets written that actually serves people’s needs.
I realise we’re in danger of getting off-topic here and I don’t want to derail this discussion too much. But I wanted to provide a short leftist defence of markets in software, and point out ways of solving current issues that don’t involve rejecting markets entirely.
The idea that I could spend time working on software that does things that people actually want is why I write free software outside of a market. It appeals to me specifically because the opportunity to do that is so rare in the industry.
In theory, yes, a company that could do this would do well in the market, but in practice any company that achieves this ability ends up self-sabotaging it away in short order.
I’m with you.
From the GP:
I think the part that bothers me the most is that a lot of the “modern” abstractions are designed more for plug-and-play than for extension. “Frameworks” instead of “libraries”, as I’ve seen the distinction put before. If what you’re doing fits well into what the authors were expecting you to do, things work really well. And if you try to step anywhere off of that pre-ordained path, things start getting really hairy quickly. I wish I could remember what the project was that I was working on a few months ago… it was UI stuff, and the framework provided a fabulous set of components, but adding a field validator to a text field involved climbing 3 or 4 layers up the abstraction tower, making your own variant of some superclass, and then bringing back a bunch of extra functionality from the subclasses you couldn’t use.
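The shape of the problem, reconstructed from memory as a Python sketch; every class name here is hypothetical, not from the actual framework:

```python
# Hypothetical illustration of the "3 or 4 layers up" problem.
class BaseField:
    """Deep superclass that owns the value plumbing."""
    def __init__(self, label):
        self.label = label
        self.value = ""

    def render(self):
        return f"{self.label}: [{self.value}]"

class TextField(BaseField):
    """The framework's stock component: polished, but validation is sealed in."""
    def set(self, value):
        self.value = value  # no hook here to reject bad input

class ValidatingTextField(BaseField):
    """Your variant: re-derived from BaseField just to add one hook..."""
    def __init__(self, label, validator):
        super().__init__(label)
        self.validator = validator

    def set(self, value):
        if not self.validator(value):
            raise ValueError(f"invalid {self.label}: {value!r}")
        self.value = value
    # ...and now you re-implement whatever TextField gave you for free.

field = ValidatingTextField("email", lambda v: "@" in v)
field.set("user@example.com")
print(field.render())  # email: [user@example.com]
```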
I 100% agree. I mean… thinking back to the late 90s and early 2000s, I do somewhat appreciate that many of those containerized binaries are going to be talking JSON over HTTP and/or WebSockets, and the languages I use on a regular basis all have really good libraries for those protocols. On the other hand, it’d be really great if a lot of that were a matter of linking a .so and potentially using an FFI binding instead. I’m absolutely exhausted from looking at code that JPEG-encodes an image buffer, takes the JPEG, base64-encodes it, and stuffs it in a JSON dict, only to have the whole encoding process reversed on the other side.
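For the curious, the dance looks roughly like this (stand-in bytes instead of a real image buffer):

```python
import base64
import json

# The round trip described above: an already-encoded JPEG buffer gets
# base64-wrapped and stuffed into JSON, then unwrapped on the other side.
jpeg_bytes = b"\xff\xd8\xff\xe0 stand-in for a real JPEG payload"

# Sender: bytes -> base64 text -> JSON (~33% size overhead, extra copies)
payload = json.dumps({"image": base64.b64encode(jpeg_bytes).decode("ascii")})

# Receiver: JSON -> base64 text -> bytes (the whole process in reverse)
decoded = base64.b64decode(json.loads(payload)["image"])
assert decoded == jpeg_bytes
```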
I draw a distinction between abstraction and composition, which is also in the article. It’s not a hard distinction, but I’d say:
Composition means putting parts together to form a working system. Does the result work? Is it correct? Is it fast and secure? (Composition does feel more “horizontal”)
Abstraction means hiding details. Abstracting over Windows and Unix is something that I think is often accidental complexity, or at least a big tradeoff. It saves time for the developer, but it can be a loss to the end user. (Abstraction does feel more “vertical” – and fragile when you get too high)
This person, commenting on the same article, pointed out “shallow and composable” as properties of Unix, and I agree:
https://news.ycombinator.com/item?id=40885635
So I think shell composes, but it’s actually not very abstract. And this is a major reason I’ve been working on https://www.oilshell.org/
IME, shell gets a lot of work done effectively, without much weight, and is adaptable to new requirements. One person can write a shell script to solve a problem – you don’t have to assemble a big team, and justify its existence.
(Of course something that’s challenging is for that shell script to not become a mess over the long term, and I believe we’re doing something about that)
From the article:
This is generally my preference, but I would say “must” is not true … One thing I learned the hard way is that interoperability is basically anti-incentivized.
Long story, but I think the prevalence of YAML in the cloud is a “factoring” problem, and there’s actually a deeper economic issue at play.
That is, the people on one side of the YAML write code and algorithms, and the people on the other “configure” those lego blocks that don’t actually fit together.
YAML arguably abstracts (it hides details behind an interface)
But it doesn’t compose (when you put things together, they don’t have the properties you want) …
Similar to this comment - https://lobste.rs/s/saqp6t/comments_on_scripting_cgi_fastcgi#c_28yzy4
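A toy illustration of the difference, using plain Python dicts standing in for two YAML blocks (the config is invented): each block abstracts its own concern fine, but naively merging them silently drops properties you wanted to keep:

```python
# Two config fragments that are each fine on their own...
base = {"limits": {"cpu": "1", "memory": "512Mi"}, "env": ["A=1"]}
team = {"limits": {"memory": "2Gi"}, "env": ["B=2"]}

# ..."composed" by a shallow merge, as many tools effectively do:
merged = {**base, **team}
print(merged)
# {'limits': {'memory': '2Gi'}, 'env': ['B=2']}
# The cpu limit and A=1 are gone: each piece abstracts, but the result
# doesn't compose - the merged system lacks properties both sides wanted.
```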
abstracting over OS always feels weird to me, when one of the main purposes of an OS is to abstract over hardware
abstracting over hardware makes sense, because we keep getting better at making hardware, we have different tradeoffs, etc.
but with OSs, it mostly seems like a coordination problem. sometimes an intentional one, because the organizations involved were trying to build a moat
Yes, exactly!!
The OS already abstracts over hardware, and then we are piling more abstractions on top of OSes.
Ones that leak – in terms of performance, security, or just making the application behave poorly
Electron is basically that – it lets you ship faster, but that’s about it
The “tower” or “stack” is often not a good way of building software.
And the funny thing is that OSes are converging, with Windows gaining Linux compatibility in ~2016 (WSL, which later got a real Linux kernel in WSL 2), and then it also gained a Unix-style terminal some time later!
I guess to argue the other side, Unix was never good at GUIs … so it’s not like Macs or Windows were superfluous or anything. But it’s just that the most basic layer is still in flux, and it is converging on “Unix”, even in 2016 and 2024 …
(running Docker containers seems to require some sort of Linux x86-64 syscall ABI too)
As a thought experiment, I’d say if we knew how to perfectly abstract, we’d be able to write multi-platform GUIs that work perfectly on all targeted platforms.
But I think anyone who works in that area (I don’t) will tell you that it’s a big compromise. You can write something better if you start OS X only, or Windows only.
I think Flutter is something that abstracts over Android-iPhone, and there are many others.
And of course there were many attempts at Windows / OS X abstraction (Qt, etc.), but what seems to have happened is that desktop GUIs just got uniformly WORSE since those attempts were made.
Is an Electron app better than a Qt app?
Rust is famously “not GUI yet”, and you can argue that if it had some yet-unknown great powers of abstraction, then it would be.
So you could say it’s an unsolved problem to have “zero-cost abstraction” in that respect (!)
(And yes this is a pun – the cost I’m talking about is in the behavior of the app, not the performance)
To summarize, I think many things are better than they were 20-30 years ago, but many things are worse. Latency is another one - https://danluu.com/input-lag/
Composing software from parts while maintaining low latency is another unsolved problem.
On the teams I’ve been on, React and Electron were pushed by the developers, not the managers. How does that play in?
Programmers are not famous for awareness of labor dynamics or solidarity with other workers. Factor in the childlike propensity for “novelty over responsibility”, some marketing, and the other qualities of such frameworks, and the deal is closed.
In general, worker control nowadays is not top-down; it is exerted indirectly, pushing workers to take the initiative in undermining their own conditions. Frontal conflict with privileged workers is too expensive for companies: soft control is a much better option when the workers are unaware of their position in the company or in the industry. These frameworks, but also languages like Java or COBOL, could very easily be a case study.
It’s a way to fight boredom - at least you can learn to use a new framework in addition to writing the same old stuff day after day (and that way gain competence that might be useful).
I found this to be a thought-provoking article. The labour arbitrage theory definitely holds water, but I don’t think it’s the full story. For individual developers, commodification of tools can also be empowering.
Where before you would need either to spend a lot of time mastering new tools (e.g. native development with all that entails, like learning Objective-C or Swift for iOS, and Java and maybe C++ for Android) to do a task, or not do it at all, now you can re-use your existing skills. As much as I dislike React, something like React Native allows you to use most of your existing React knowledge and write code for mobile in less than half the time it would take to write native code for both platforms (even assuming you already know and master the native frameworks). Also, there are good reasons it won out over Cordova/PhoneGap - that stuff is extremely slow, and you have to deal with the frustrations of buggy native webview components on top of that.
So yeah, of course companies will gravitate towards commodified tools - they have people with knowledge X, and instead of having to hire people with knowledge Y in order to take on a new project, they can put people who already have the X knowledge onto the new project. It can be the difference between having to tell a client “no” and being able to accept it. Hiring (or creating) experts for a new project isn’t always feasible.
Of course, the flip side is that these commodified tools are inherently slower and jankier than the native stuff - they’re an additional abstraction layer over what lies underneath. And not knowing the fundamental underpinnings means you’ll end up building in even more inefficiencies, and sometimes reinventing existing things badly. But this has been hashed out and argued to death here already.
I don’t really buy the conclusion about unionization though. If you want to jump on the bandwagon and use the most commodified thing to avoid gaining more specialised knowledge, of course your skills will be worth less on the job market.
I’m afraid you fell for the trap of framing the issue as an individual problem. From an individual perspective there’s a way out where you spend extra personal resources to compete against your peers and have a chance to come out on top, but collectively and economically this makes the situation worse for everybody.
Obviously standardization of tools and practices is a good thing for efficiency in most scenarios, but under our economic system it means less bargaining power for workers and more for owners. It doesn’t have to be this way, but in most of the West that’s the case. There’s a technical incentive that runs opposite to the interests of the worker. Unions, as in “industry-wide, trans-national unions able to negotiate technology adoption”, can patch up this issue to some degree. Obviously this is a long-term goal of the growing tech workers’ movement, and at the moment this kind of conflict in IT exists only at the company level, but we have to start somewhere. Labor arbitrage can only make our situation worse, and we are still in time to spend our privileged position as tech workers on building a moat against these dynamics.
too little, too late, but better than nothing.
I hope this will lead to a consolidation of hardware and a slower pace of change.
I work in the space that is trying to tell people that ChatGPT not only lies, but lies a lot and in dangerous ways. The problem is the millions of dollars in lobbying and propaganda pushed by right-wing think-tanks financed by the orgs that want to deflect attention from human responsibility by giving agency to the “AI”.
It’s not enough to tell people when there’s a media juggernaut on the other side that has been building the opposite narrative for decades.
Generative AIs and LLMs should be heavily regulated, put under public governance, and taken away from corporations, especially extra-reckless ones like OpenAI. The “it’s already open source” argument is bullshit: most of the harm of these tools comes from widespread accessibility, user expectations created by marketeers, and cheap computational cost. Yes, with open-source diffusion models or LLMs you will still have malicious actors in NK, Russia, or Virginia making automated deepfake propaganda, but that’s a minor problem compared to the societal harm these tools are creating right now.
Do models like GPT-3 inadvertently encode the sociopathy of the corporations that create them? Reading through this thread, I have the distinct impression that GPT-3 is yet another form of psychological warfare, just like advertising.
I love Star Trek. And in the original Trek, there were plenty of episodes where some really advanced computer essentially managed to lie and gaslight its way into being the equivalent of god for a society full of people that it then reduced to an agrarian or even lower level of tech. Return of the Archons, The Apple, For the World Is Hollow and I Have Touched the Sky, probably a couple others that don’t come to mind right now. In a couple of those cases, it was obvious that the programmers had deliberately encoded their own thinking into the model. (Landru from Return of the Archons, the Fabrini oracle from For the World Is Hollow). And reading through this thread right now, I’m like, maybe these scenarios aren’t so far-fetched.
Here would be my caveat.
They are psychological warfare not due to marketing. They are psychological warfare because their fundamental goal is to deceive.
They were not tested to be right. They were tested (and rewarded) for making humans feel that they worked.
Now. Question time. Is it simpler to find and abuse human bugs, shortcuts, heuristics, and biases? Or to actually learn to do it right? You have 4h.
But if it is the former, then this is literally a machine trained to deceive, not to help.
An excellent point. All of my experience and everything I’ve read tells me that human wetware is full of easily exploitable vulns. People have been exploiting them for much longer than digital computers have even been a thing. They’re easier to exploit than to grok and fix. Psychology is a young discipline compared to rhetoric and sophistry. So yes, the former is much simpler.
This comment is enhanced by knowing that Teiresias is a mythic character from ancient Greece. :-)
This sort of reads like an ad for a particular note-taking software.
Kind of sort-of definitely turned me off to the entire article, unfortunately.
Did you go past the first few lines? Because most of the article is problematizing Notion
First few lines? Half the article is an ad read.
the article openly advocates for not using Notion. I think it’s quite clear.
Talon Voice supports voice control and eye control but the learning curve is much longer than two weeks. I suggest you relax, rest, talk to people and go on walks.
I work at a news site. They’re going to hold the election on a certain day whether I’m ready for it or not. Obviously then the goal is to make an actionable design that can be implemented far enough in advance before the election day to do some testing and whatnot. You can call it what you like, but it’s basically a deadline.
Of course, lots of work I do has no deadline. The CMS is upgraded when the CMS is upgraded. So I think it’s worth distinguishing things with real externally imposed deadlines from things with internal goal dates. Goal dates can be shifted. Deadlines are not so easy to move.
I used to work for an edtech company. If features weren’t ready by the start of the school year, we’d miss our chance to get the majority of our teachers properly onboarded. Deadlines matter in software because they matter in the meatworld.
The fact that you have to deliver a specific feature for election day is a choice made by humans, not something inevitable. The decision to develop something new before a given date, instead of saying “we develop the feature and use it at the first election where it’s ready”, is a choice. Deadlines become inevitable when profit is put before software quality. Deadlines are inevitable when the people who care about profit hold more power than the people who care about software quality.
Profit is more important than software quality. Without profit, people don’t get paid.
that’s revenue. Profit is what pays for your manager’s yacht. I work at a non-profit and I get paid every month.
Also, it’s not true: in the startup economy it’s not really important to turn a profit. But that’s not the best driver of software quality either.
Are you saying that deadlines happen when people care more about profit than software quality but not when people care more about revenue than software quality?
it depends on the context and mission of the organization but in my experience, in non-profit orgs it’s much easier to reason long term compared to VC-funded startups or big corpos.
Deadlines are also inevitable when the food is going to spoil unless we have a system for distributing it by a certain date. That doesn’t have anything to do with profits, it has to do with the food spoiling; it would spoil just as fast under any kind of system humans can come up with.
It’s quite OT, but the logistics of food greatly affect its spoilage in different ways.
I know that Politico takes that approach because they cover lots of elections nationwide. For me, I’m only covering Pennsylvania, so there’s not really enough chance to recoup investment. If we miss 2022, there’s no telling if we’ll even be using the same CMS in 2024. It would be like trying to get the kids ready for school on Sunday morning: you can do some things, for sure, but lots of it really can only be done on Monday morning.
I generally agree, it’s part of the system the engineer is operating inside of…they are linked. It’s the reality we often brush off or ignore. Instead, we tell ourselves (and our customers) this software was “made with 💖” and “we’re making the world a better place.” Fine attitude for a pet project you control, but when you work for a company that’s beholden to VC money or the public stock market, good luck.
Failure to do this effectively has been the cause of so many Dilbertesque own-goals - including some of my own - over the several decades I’ve been part of the industry.
It’s doubly important for senior leaders. I’ve seen a “gee it would be nice if we had this by $DATE” turn through a series of retellings into a hard deadline, and then eventually horrify the person who spoke off-the-cuff when she discovered how that comment had been taken.
Indeed – this seems to me like the difference between a deadline and good old wishful thinking!
Distinguishing deadline vs goal date is a good clarification.
If you look at what the article is espousing, it’s exactly what you would want to do with a real deadline: check in regularly on where you are, decide what gets done and what doesn’t get done, and bulldoze everything else out of the way.
Invoking the halting problem’s a bit problematic, because we write many, many programs where we do know when they will finish. We have whole languages (Datalog) that are guaranteed to terminate.
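For instance, a naive Datalog-style evaluation is just a fixpoint computation over a finite set of facts, so it always terminates; a toy sketch in Python:

```python
# Naive evaluation of the Datalog rules:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# The set of derivable facts is finite, so the fixpoint loop must terminate.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

path = set(edges)
while True:
    derived = {(x, z) for (x, y) in path for (u, z) in edges if y == u}
    new = derived - path
    if not new:       # fixpoint reached: nothing further can be derived
        break
    path |= new

print(sorted(path))   # includes ("a", "c"), ("a", "d"), ("b", "d"), ...
```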
Though most of what I see in articles like this is what the agile community figured out long ago. I still haven’t figured out how much of what happened to agile was sabotage, how much was misunderstanding, and how much was Gresham’s law.
Sometimes I read something like this and imagine the author got in big trouble and chose to spend a few thousand words explaining why the thing they got in trouble for shouldn’t be a thing they can get in trouble for.
Agreed. The reason deadlines exist is that multiple people need to coordinate. This is much more obvious in physical products where multiple pieces literally have to come together at the right place at the right time, and there are physical costs associated with either storing surplus or waiting for one part to be supplied. But it’s still true in software systems: you need developers building the system, developers building support stuff for the system (specialized tooling, installers, etc), you need devops building and running the infrastructure for it, you need to give testing/QA (if any) sufficient time to hammer on it, customer support time to get trained on it, even marketing (hrk, ptui) needs to know when to put up the posts announcing releases or changes.
Now not all systems need all these things, and in software there’s a fair bit more soft wiggle room around these than if, say, you have a shipping container full of stuff sitting in a port and they’re charging you $X per day for storage. But they still exist. The deadline is not about you, it’s about making all the pieces fit together.
The fact that this seems to be so seldom communicated properly and the feedback and timing on deadlines is so dysfunctional is its own problem, of course.
Or maybe the world is full of people who don’t use deadlines and are appalled by how many still do. Or it’s just a way to stimulate healthier desires in our colleagues and make the whole industry a bit better.
Wait, people use “grindset” seriously? I thought people only used it to make fun of that entire mindset (see “sigma male” and other such memes).
yes they do. Lots of people hate themselves. I came here thinking I would find a post on self-care and instead it was unironic.
“Destroy mental health for more tippy tappy on the keyboard good”
Yeah I’m not sure the author realized that the Urban Dictionary definition they used was intended to be sarcastic.
Excellent article and conclusions, though.
I tried something similar, asking ChatGPT to produce recipes for different fermented foods. It is similar in the sense that there are specific models implied in the production of the answer: proportions of the ingredients, times, temperatures, phases of the processing, etc.
They all looked kinda OK, and it showed that the AI could infer the relevant parts necessary to ferment food; maybe 1 out of 10 was in the neighbourhood of correct. Nonetheless, 9 out of 10 would probably have molded and killed you.
As usual with these generative models, the content keeps looking better and better, but it doesn’t get any more reliable than before. It might be good for filler in a newspaper, the copy on your startup’s website, and stuff like that.
I think this article, while I believe it’s completely real, is very misleading in portraying cherry-picked examples. It’s misleading because it implies a level of trust in the output of the model that shouldn’t be there. Obviously this guy is biased and produces propaganda for his side in order to manufacture that trust, but as technologists we shouldn’t buy into it.
This doesn’t take anything away from the impressiveness of this parroting device. Just don’t use this stuff in the real world.