Re: 42% less unix philosophy
Eleven seconds. My development desktop, on Gentoo and OpenRC, boots from cold, hard power-off to a fully-operational KDE Plasma 6.2 desktop in eleven seconds.
My last conflict with anything `systemd` was when I ran into repeated problems on my development desktop. The box kept falling over at random intervals, and `sudo` also ceased working. `sudo` was failing because `systemd-homed` was falling over, and I couldn't fix anything because, yeah, `sudo` wasn't doing the `do` part at the end. THAT was the final straw. That pushed me to migrate my desktop to (really: "back to") Gentoo, on OpenRC, which is the same thing that runs on all my servers. Alpine runs in all my containers (none of them has an init, though some had `tini` in the past) and no boxen I maintain use `systemd`, today.
Since that point in time, the `xz` Thing came and went and I watched on, bemused and amused. Schadenfreude is apparently a healthy and necessary part of our human psychology. I can't imagine handing `sudo`'s responsibilities to the same supply-chain that let the `xz` Thing happen. I certainly can't imagine replacing the Kernel's SUID mechanics with their crap.
Not only because `sudo` and SUID, in general, do a whole lot more than *just* run commands as user 0. There's a reason why `sudo` became ubiquitous and the fact that `root` passwords no longer need to be shared (or even be set at all!) is only the very beginning of the very tip of an iceberg-sized point.
I read the news about Shepherd 1.0 and I do sigh, a little, and wish more journalists would mention it as an alternative to OpenRC and SysV init instead of holding it up against `systemd`, though. Both OpenRC and SysV init actually are init. systems (and stay in their lanes) and I'm sure that Shepherd is intended to be init., too, not some sprawling, redmondian behemoth that also does a spot of init. by-the-by. (I'd say `systemd` contains an init. system but it seems to contain multiple components that all do some init. stuff, along with multiple mini-inits. Inits-within-inits. It's truly an ouroboros. I'd say it's a misguided, cynical attempt to make an Eierlegende Wollmilchsau – the fabled egg-laying, wool-giving, milk-producing pig – if I didn't love that German phrase too much to sully it with the association.)
So, dear Register: can we anticipate a good technical deep-dive into Shepherd 1.0 in which you tell me how and why it might, or should, one day replace my perfectly-fine-thank-you scripts in `/etc/init.d`? Those perfectly-fine-thank-you scripts just sit there and do their thing at init. time. I haven't had to "maintain" a single one in ages -- at least, not since the last time Wayland fell over and I had to wrangle some things to get dbus+kwin starting, again. I can't really see the appeal of porting them all to something new: they're perfectly fine, thank you.
Maintaining some open source code isn't supposed to carry a punishment. It is this sort of opinion that breaks that deal.
This is why I never publish my work as open-source and only ever push patches up to projects that I really, really love and trust to value my contributions. There are simply far too many who think that "maintainers" should just deal with the nasty side of open-source for whatever reason.
Since they've backported (back-foisted) it upon Windows 10 (it hit me, again, yesterday) I ask this: why not go all the way and foist it right back on Windows 3.1, too?
Do you remember those "Help" menus from 3.1? If you don't, you should go and run that in an emulator, unplug your router and try to work out how to use it, offline, just from the help files that used to ship with programs, back then – accessible by help menus! They were often – albeit with exception – actually useful! They were references and sometimes even told you how to do Things with the software or the operating system.
Also: you'd be building up the user-base to validate my argument that there are surely many running 3.1, even in 2024, and they could be blighted with Copilot, too, just like the rest of us.
This news should come as no surprise to anyone. This is the standard play for venture-capital (or private-equity) bought "Open Source."
What it means – simply – is that the balance has tipped: the suits have determined that the loss of goodwill from those who flee is less than the stakes to be gained by this change. That, in turn, is trivially easy to interpret: "we" – the users, developers and community – are worth less than the Dollars knocking on their door. That follows by definition.
Why would we trust them, then? They're demonstrating contempt for our worth, as a community, so only a fool would expect them to value us!
WordPress. Bitwarden. VSCode. GitHub. Redis. MySQL. (… and I haven't even started to 'think' to find examples, yet.)
Pre-emptive response: "Source Available" does *not* cut it for a web browser, supposed to be a User Agent, with which human beings do things like Internet Banking, personal or intimate messaging and – on occasion – research which conflicts with the prevailing status-quo of the land in which they're living such as searching for certain bears that like honey or for clinics deplored by a certain political lobby for culture-war reasons.
You cannot fork Source Available software if those who publish it change their behaviour in the future. Without the threat of being forked, the corporation behind the software faces no checks and balances from its user base, and being morally upstanding holds no particular utility value. This means that morality necessarily ceases to be a dominant strategy as soon as the Dollar arrives at the door.
Source Available software will have no long-term plans to maintain support for user-first standards like Manifest v2 because its publishers know that user disagreement is impotent. Indeed: Vivaldi say they'll support v2 until some time next year – THAT'S NOT LONG-TERM – and we can't just fork Vivaldi in 2025, either, should they neglect to extend that promise, or simply forget it, before June 2025. (Vivaldi, to be fair, are in an unfortunate position because they surely don't want to maintain v2 in their own fork of Chromium. Again: the problem is in evidence.)
If it is not truly Open Source, it can never be a proper "User Agent". Also, it's Chromium so its very existence exacerbates the monoculture problem, no matter how honest their claims of privacy – which, I guess, we just have to trust because it isn't Open Source. It exacerbates the problems with EME and other Google-mandated "standards", too.
We need a proper User Agent that isn't controlled by Google – directly, or indirectly via their advertising-company proxy: Mozilla. Vivaldi is not that.
That said, this news did impart one positive and novel idea: after reading their response, beginning "Our Dashboard is quite different…", I no longer doubt that LLMs *can* actually replace many humans in the workplace.
For long and long, I've considered the connection between "crypto" and "A.I." to be the fact that both are bubbles of the most ephemeral kind but, today, I think I've been wrong about that: both are about consumption of power – Watts!
Precisely how the megalomaniacs are punting A.I. is pretty obvious but *why* they're so desperate to do so makes little sense. Is there really enough money to be made from rendering artists, authors and many others redundant? Maybe they're set on positioning themselves to capture "human culture" or something by flooding the channels with 'botshit' but even that seems a little too desperate to motivate the scale of the infrastructure investments. (It would also be too dystopian or nefarious: the megalomaniacs are not that inspired.)
But Watts of power? This is all a counter-move to the acceleration and expansion of green energy.
This is about keeping legacy power-plants operational. This is about keeping the oil-economy flowing and ensuring that green sources cannot hope to bring about the decommissioning of fossil-fuel sources in the short term. This is about keeping old establishments like OPEC relevant as puppeteers of the world economy.
This is also a cynical act but cynical, short-termism is right on-brand for the megalomaniac.
Perhaps they know that "A.I." is stupid and a bubble and hated by the masses and laughably prone to hallucinations and not fit for any purpose in the real world. They do not actually care because "A.I." – the entire industry – is just a "cost centre" to them. Bad press, loathing, even infrastructure wastes are just "costs" and their profit lies elsewhere: these maniacs are firmly rooted in traditional economic structures, investments, markets and capitalism and granting the oil-economy a final lease on life serves them, there. Keeping fossil-fuel chains alive keeps their lobby relevant in government and international institutions.
They're seeing Watts of green power becoming available on the grid and so they strive to sink as many Watts as possible into something: not crypto, now, but A.I.
For Satisfactory players, the TL;DR is this: A.I. is just the AWESOME Sink of real life.
The other big question is why does the corpo have any influence on the actions of the .org, anyway?
I've always understood the corpo/.org split for FOSS products to define a boundary: the community retains the .org in return for – you know – actually building the whole product, while the money-making entity is set up as just one service provider, there to profit, employ a core team of developers and, ostensibly, sponsor that community.
It's supposed to be a hedge so that all the FOSS developers don't just immediately jump ship, fork, and refuse to ever touch the original code again – potentially turning directly to their lawyers to ask whether a corporate take-over is even legal. FOSS devs understand that they're getting ripped off (in a way) but also that the split is inevitable as soon as the project becomes successful enough to demand a stable revenue stream – with corporate profiteers looming, inexorably, anyway.
It seems to me that Automattic are proudly and loudly pronouncing the quiet bit: FOSS projects are basically suckers for this abuse. I don't particularly care one jot about Automattic, WPEngine or WordPress but I do care about FOSS (in general) and it seems to me that this story is relevant as a cautionary tale for the whole ecosystem.
This is irrelevant, now. Unity demonstrated to developers that they could and *would* unilaterally fiddle with the pricing model as and whenever they wanted to. This was the final straw and also served to accelerate interest in alternatives like Godot by orders of magnitude.
I did not dig into the details but I'm pretty sure that they have not relinquished their power to alter the deal again, unfavourably, as and whenever they want to.
Nobody trusts them and nobody should ever trust them.
Indeed, any license that explicitly requires attribution is already being expressly violated because no attribution is being granted. This will end up in courts but the fight is nonsense because Open Source and most Creative Commons licenses are unambiguous in their demand for attribution.
I fear that a nonsense battle is actually what the corporations want and foresee two awful but non-exclusive outcomes:
1. They want to argue that attribution-at-scale is just not practical – they would, in essence, have to cite every public web site ever published – and use the impracticality to somehow neuter the very concept of "attribution"
2. They intend to capitulate and strike a compromise: they wreck pre-AI copyright and, in turn, the precedent that AI-generated "content" is uncopyrightable (being built from the ashes of pre-AI copyrighted works) is carved in stone...
But 2 is a trap, not their end-game! Their end-game, then, will be to punt the idea that, given the right model and the right, designed prompt, AI algorithms can output *any* target content – this has already been demonstrated in the lab.
They'll use that to muddy the waters and cast doubt on whether anything was ever human work and push for precedent that basically ends copyright for works post 2023.
The resulting free-for-all would hand a huge advantage to whoever has the "biggest AI". In a world devoid of attribution, there ceases to be any reward for independent artists, authors, musicians or writers or other creatives to participate. Independents will surely persist out of vim and vigour, in their niches, earning a pittance in kudos and currency but AI content can and will be churned out at scale, heretofore unseen, and eclipse their already meagre visibility.
Borking copyright will further exacerbate the imbalance of power that gives big content houses free license to dictate what the vast majority see, hear or read, whether it comes with ads – even at premium tiers – which devices it can be played on, which lands it can be viewed in, etc.
I don't particularly like the idea of Gates, Bezos, Zuck and the rest choosing the landscape of arts and culture.
Always remember the single-paragraph, fundamental truth of the advertising industry: advertisements shoved into faces – "exposure" – exist only so that there is a concept of "ad exposure" to sell to gullible advertisers. Hit rates or conversions to actual transactions are utterly and completely irrelevant.
Advertising has nothing to do with actually selling a product or service. That was once measured as a "hit rate" or something but everyone realised pretty quickly that advertising hit rates always were, and always would be, ridiculously low – well below any kind of noise floor. Advertising is all about selling ad space or potential "exposures".
That's why algorithmic feeds and content farms exist. They aren't there because they provide utility. They're there because they represent potential exposures to sell to fools. "Engagement" metrics exist to make those seem valuable – the more time wasted, the more potential "exposures" there are to sell.
Advertising is the antithesis of a "collective action" problem and that's why ad-blocking went ignored for so long. Ad-blocking represented "collective action" in opposition to advertising by the wrong group of people – the users – and those in power didn't care about that because the only action that would threaten them would be if the idiots paying for the ad space acted against it – which they did not. Google, Microsoft et al can still sell their "exposures" to fools even if the users who might actually click the fools' ads block them. Again, the hit rates and conversion stats were so low and meaningless that the effects of ad-blockers were immeasurable. (YouTube being the one exception to this, admittedly. I honestly don't even...)
I imagine that the up-stream OpenSSH developers do consider unadulterated `sshd` to be perfectly well ring-fenced from attacks against systemd, or `xz`/`liblzma` or – more generally – from the attack surface of essentially unfunded libraries with at-most-one trustworthy maintainer. That's why they don't link those libraries!
The UNIX Principle is what we *need* to be discussing but – frankly – what's the use? It has been long abandoned. Meanwhile, call me a "hater" because, yes, I do hate the very concept of a Linux box that runs an init that scorns the UNIX Principle so extremely that a daemon likely to run as `root` must necessarily be compromised *at build time* for compatibility.
OpenSSH should never need to know of the existence or use of whatever is chosen for init or whatever initiates it as a daemon, let alone be critically compromised via a supply-chain attack targeting that initiator or libraries that may or may not be linked to it.
If, indeed, `libsystemd` is not safe for use then it should not exist at all.
You can only reproduce it if the `sshd` executable was built from compromised sources in the first place. Gentoo – for example – write the following in their advisory notice:
> 2. the backdoor as it is currently understood targets OpenSSH patched to work with systemd-notify support. Gentoo does not support or include these patches
> https://security.gentoo.org/glsa/202403-04
I do not think that systemd is at fault for this particular exploit, in this instance, but rather at fault because it has created the channel through which exploits like this cannot fail to occur. It has normalised the very concept of an overly complex, bloated init.
When I read through the email (https://www.openwall.com/lists/oss-security/2024/03/29/4) in full, it seems apparent that `xz` and `liblzma` play roles only as the attack vectors through which to compromise `sshd` via the vast attack surface that is systemd and `libsystemd`.
This news should really be about how distributions should not be patching trusted sources, init-systems should not be requiring such patches and shouldn't be so bloated in the first place!
1. Debian patches the sources of everyone's most trusted, most critical daemon – `sshd` – to add support for notifying systemd …
2. which exposes everyone's most trusted, most critical daemon – `sshd` – to an attack surface broadened to nothing less than the entire set of libraries linked by `libsystemd` …
3. which, due to bloat and feature-creep, is vast …
4. and `xz` and `liblzma` just happen to constitute vulnerable libraries within it, those salient today.
It could have been anything else; the wider the attack surface, the more vulnerable everyone is.
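For contrast: the "readiness notification" that Debian's patch drags all of `libsystemd` in to provide is, on the wire, a single datagram to the socket named in `$NOTIFY_SOCKET`, as documented in systemd's own `sd_notify(3)` man page. A minimal sketch in Python – no `libsystemd`, and therefore none of its transitive dependencies:

```python
import os
import socket

def sd_notify(message: str = "READY=1") -> bool:
    """Speak systemd's readiness protocol -- one datagram to the socket
    named in $NOTIFY_SOCKET -- without linking libsystemd and therefore
    without inheriting its transitive dependencies (liblzma among them).
    Returns False when no notify socket is present."""
    path = os.environ.get("NOTIFY_SOCKET")
    if not path:
        return False  # not started by a notify-aware init: silently do nothing
    if path.startswith("@"):
        path = "\0" + path[1:]  # abstract-namespace socket address
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("ascii"), path)
    return True
```

A daemon implementing the protocol this way keeps exactly the behaviour the patch wanted while adding precisely zero linked libraries to the attack surface.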
Every distribution is now frantically and reactively patching but the real vulnerability persists – systemd, itself – and every news item mentioning it is either bad news or notice of how its feature-creep progresses apace. As long as *that* attack-surface continues to exist on modern Linux, backdoors such as this one will only become easier and more frequent whether they are detected and reported or not.
Both inference *and* training are just fancy 'tensor'-like product operations performed ad nauseam. Just about anything *could* execute them and nearly everything even has some form of SIMD instruction-set to accelerate them beyond primitive `for`-loops, anyway, and has had for decades. Even libraries which include highly-optimised implementations of the maths exist and have been open-source for ages, now. Particularly for the case of inference (given the weights of a trained model) proprietary and novel advances – alike – only grant a marginal speed-up.
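To make that concrete, here is the entire core of "inference" for a toy two-layer network in plain Python `for`-loops – no accelerator, no vendor library, just repeated products. (The weights are made-up numbers, purely for illustration.)

```python
def matvec(W, x):
    """Matrix-vector product with nothing but for-loops: the core
    operation of both training and inference."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def relu(v):
    """The usual elementwise non-linearity."""
    return [max(0.0, a) for a in v]

# A toy two-layer "model" with invented weights: inference is nothing
# more than these products, applied repeatedly.
W1 = [[1.0, -1.0], [0.5, 0.5]]
W2 = [[1.0, 1.0]]
x = [2.0, 1.0]
y = matvec(W2, relu(matvec(W1, x)))  # y == [2.5]
```

SIMD instructions and "AI accelerators" only make these same loops faster; they do not make anything newly possible.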
The question is only how *fast* some given hardware can run inference and, there, I propose that the answer is entirely meaningless because the vendors behind this fad will *never* allow LLMs to truly be used in anger, offline. (And literally anything can send an HTTP request to "the cloud" to query models.)
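And "literally anything" is barely an exaggeration: querying a cloud-hosted model needs nothing more exotic than an HTTP POST. A sketch using only the Python standard library – the endpoint URL and payload shape here are hypothetical, invented for illustration:

```python
import json
import urllib.request

def build_query(prompt: str,
                endpoint: str = "https://example.invalid/v1/generate"):
    """Build (not send) an HTTP POST to a hypothetical cloud-model
    endpoint. The URL and the JSON payload shape are invented; the point
    is that any networked device can construct and send this."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

No NPU, no "AI-ready" silicon: a decades-old machine with a network stack can do this.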
Being the man-in-the-middle to serve the 'AI' responses – capturing usage and prompts and all the telemetry and metadata – is the very product they're building. Why would they ever let that run offline? Forgoing gatekeeper status would be arse-about-face, for them, because the perceived value in interaction data is their only business case.
Any hardware vendor punting "AI compatible" hardware is pulling a fast one, not just because just about any Turing-complete machine with a product-op *could* execute the algorithms (perhaps slowly) but because users will never truly get to use the capabilities.
Sure, the open-source algorithms and open-source model weights are cute to run, offline, but they are only a curiosity. Although the hardware vendors are punting chips capable of slightly faster execution of these algorithms with those models, nobody with the funding or power to drive this fad forwards intends for those models to find main-stream use – I predict that they will disappear from public consciousness exceedingly quickly.
Think about how many algorithms *could* run locally, on-device, on any modern smartphone but are, instead, served from some cloud data-centre, somewhere, where some corporation receives all the data. Think about how many services *could* operate just perfectly over everyone's LAN, without ever crossing the firewall or being routed outside the subnet. Hardware vendors don't seize upon these 'capabilities' to promote their stuff only because none of these are in the headlines.
Would that be "Germany" as in "*here*, this Germany" ... that I'm living in ... that shut down all its nuclear "Atomkraft" power-plants under pressure largely led by the Greens, who were manipulated into opposing nuclear power, decades ago?
THIS very Germany that had to re-open coal-fired power-plants when the "Ostpolitik" policies were shown to be failures and the pipelines carrying liquefied-dead-dinosaurs from Russia were limited?
THIS Germany that imports electricity from aged and decrepit nuclear plants placed conveniently far outside the borders so as to be immune from the anti-Atomkraft lobby in the Bundestag, yet close enough to annihilate quite a damn lot of it, should they go boom as the misguided fears of the past foretold – the same fears that were over-played by, yes, the Greens, in fighting against Atomkraft and achieving a massive win for Big Coal, back in the day?
Whatever power Microsoft draw to power those data-centres, they have no moral right to do so – certainly not for "A.I." – but it gets worse because they'll surely buy their power wholesale and it will surely also be discounted! On the ground, consumer electricity prices are sky-high and the need to convert to electric heating – gas heating being phased out – will be punishing to many people and families in the very near term. Spending Watts on the stochastic parrots of the already-rich is not going to insulate all the old houses or lower heating costs.
In a sense, this is very much a direct waste of resources on "artificial" algorithms in spite of the real-life humans who need them, and all so that a corporation's profits can increase in line with the latest fad.
Ich bin doch wütend. (I really am angry.)
I'd argue that the remainder of the points would serve to erode the market for Electron, which is essentially Chromium and V8, and that that, too, would serve in the battle to prevent a web browser monoculture from developing.
There are a tonne of things that use Electron-based "native" clients but don't provide any benefit that couldn't be served by Firefox in some kind of "PWA mode" and providing the necessary features to do so would amount to a few new command-line parameters, assuming that the previously mentioned profile isolation features were first class.
All Mozilla have to do to take a serious bite out of not just Chrome and the whole extended family tree of Chromium – INCLUDING Electron – is this:
- Fix the bugs. Seriously, just fix the damn bugs, already. There are so, so many...
- Cut the tracking, telemetry and privacy-snooping features at the source-code level.
- Cut the value-adds that nobody wants nor asked for, starting with Pocket.
- Return to the "principle of least surprise", meaning absolutely no "experiments", no modal popups interrupting the user's flow just to try to sell them on a frivolous new colour-theming gimmick, no surveys, no up-selling of features: zero surprise, it's a browser, just be a browser and always be a browser.
- Make desktop integration a first-class feature: starting with effort to make the look and feel fit with the desktop environment!!{infty.}
- … and make the U.I. font-size match the desktop's font-size and DPI!!{infty.+2}
- … and task-bar/launcher/launchbar/launchbarx integration so that multiple, segregated, privacy-sandboxed profiles can coexist as first-class buttons, thereon. (Without the hacks needed to achieve this, today.)
- Oh, and give us PWA / `--app` support (again, with nice taskbar integration on the host desktop) so we can just use a Firefox instance to kill off Electron apps for all those cases where the available desktop app is just a site wrapped in Electron.
After all of the above, there are a tonne of easy, low-hanging fruit to grab to distinguish Firefox from the rest:
- First class advanced tab management and organisation, multiple selection, copy links to the clipboard, etc...
- Advanced user keyboard shortcuts and re-mappings: closing all tabs, closing tabs to the left or right, closing unpinned tabs, closing duplicate or old tabs...
- Built in RSS support with synchronisation of read articles via the Mozilla account...
- Use of a hardware token to encrypt synchronised passwords and other data.
Some of these are provided, haphazardly, by plug-ins but I REALLY struggle to trust those because they invariably require permissions I don't want to grant and often demand extra permissions because of feature creep. I do still trust Mozilla – technically – and, by building these features right into the browser, Mozilla could begin to make advances, again.
Something tells me that this new management change will not result in a single, solitary one of these obvious and often trivially easy quality-of-life and privacy-related changes ever being made.
Let me get this perfectly straight because it is important to me: this means that, should something disturb my Windows 10 activation on my old hardware or should my disk drive fail, I can never re-install, even though my PC has been running Windows 10 for many years – originally from a Windows 7 upgrade?
I can't upgrade to Windows 11 – I don't have a TPM and my motherboard is rather dated – and, even if I could, I *would* not, because I consider it a step backwards in usability, privacy and openness and, outside of those, it's no advancement in any way that matters.
There's nothing wrong with my old hardware (i7 7700, 1080ti era) and no earthly reason to upgrade it.
Frankly, I'd actually be perfectly happy to DOWNgrade back to 7 except that nothing supports that any more. LOTS of games, software, tools and even modern programming languages and compilers have dropped Windows 7 support simply because they did not want to or could not afford to expend resources maintaining it, given that Windows 10 was a "free" upgrade path.
This is basically a pure, end-game distillation of the EEE tactic, except I'd personally omit the "Extend" part because I honestly can't think of a single Windows 10 feature that "Extended" functionality in a way that I actually wanted or needed. And, basically, this goes to show that Microsoft could put paid to anybody's use of their OS, on their own hardware, at any time – even if one held a formerly valid and legal license. They can choose to alter the deal at any time.
Meh. I've migrated 99% of my use-cases to Linux, anyway. I only boot Windows 10 for a few games, now. But I was planning a fresh re-install to clean out some unwanted stuff – like old Adobe and Apple bloat left over from other abysmal software that I once used, have uninstalled, and will never touch, again – and now I see I can't ever do that, again!
Open Source software exists either to serve a need or curiosity, either of an individual person or concern or of a community of them. That need or the satisfaction of that curiosity is its sole "mission" and purpose and suffices – no more is required.
However, the Open Source movement – as a philosophy – does have a mission and, as is highlighted by Bruce Perens and this article, that mission deserves new consideration in today's climate of LLMs, where former bastions have been acquired by hostile corporations. That movement's mission has nothing to do with data or software or drawing arbitrary, semantic lines between those two. It has nothing to do with usability or portability or anything technical.
The Open Source movement's concern must focus on protecting the communities that bring the needs and curiosities and gather together – or venture forth as individuals – to build whatever the hell piques their interest or whatever the hell they require.
It must focus on protecting the knowledge and the artefacts from those communities such that they, and those that come after, may build upon each other's work as Open Source always has done and without which Open Source simply could not exist – we wouldn't have any compiler to compile it, any libraries to link it against or an operating system to run it on!
The requirements are crystal clear: what is needed, today, is a return to searchability of knowledge and code and safe harbour for code-bases – i.e. never GitHub, again! We need new places to communicate instead of black-holes like Slack, Reddit and Discord which are not to be trusted and already have strictly finite time-to-live on what's previously written, there. We need improvements to the safety and security of our supply chains: NPM and Cargo and PyPI and the like.
The Open Source movement needs to defend itself from the parasitic predators that pervert the good-faith contributions to build for-profit products. These parasites are invariably "anthropomorphised equity" and they suck the blood of living, thriving organisms nurtured by bleeding and breathing people in order to make their own lines go up – that is the antithesis of "Open Source" AND of "community"! It needs to enable its communities to build what they want to build or need to have – serving users and doing whatever one does with "data" is inevitable and will happen in a more or less successful way, as it always has.
Post-script: I think that GitHub is a good case study. It was once a bastion of the Open Source world and, even today, serves to host the vast, vast majority of Open Source projects – both code and issue trackers, discussions, C.I., wikis and documentation. But, in fact, GitHub is largely a cornerstone of the problem and the reason this debate is interesting in 2024. Before Microsoft weaponized GitHub, all Open Source projects were ruled by a LICENSE and that LICENSE was invariably to be served and stored with the sources. On GitHub, the LICENSE is there but the LICENSE no longer makes everyone equal. Instead, Microsoft have become the pigs in the farm house: they are more equal than others, even though they technically hold only the LICENSE that everyone who fetches the code receives.
Any project that is on GitHub hands Microsoft perfect visibility not only into every revision of its code-base but also into every single interaction of every other user, developer or viewer of that code – and that interaction data is probably far more valuable and far more threatening to the communities about which I'm ranting lyrically, today. Not only that: every fork of every code-base exacerbates the problem.
Hosting on GitHub is easy but I wonder if it is not fuelling the machine that will ultimately crush any hope for the future of Open Source as a movement of humans.
It *will* run Linux Mint. I know. I've tried. Also Ubuntu, Arch and Gentoo.
The experience is sub-optimal, however. Sound won't work at all: after days and days of hacking, in which I managed to get some white noise (and software toggles to control that white noise, I suppose), I ended up passing sound down HDMI and out of the headphone jack on my monitor to my real speakers – and that was as good as it got, and certainly unsatisfactory.
The GPU will work for compute tasks and can be bludgeoned into appearing to achieve something akin to desktop compositing but never both compute and presentation at the same time, and the performance is abysmal – resizing a window or scrolling a browser page is insufferably poor. Watching a full-screen, in-browser video is a joke. Full-screen 3D stuff appears fine and renders at very high frame rates but the horizontal tearing apparently can't be solved – any kind of v-sync functionality just doesn't work – presumably because whatever is controlling the GPU isn't playing nicely with the window manager and compositing engine.
I've tried the nVidia official, closed-source drivers and open-source ones and nothing makes it better. I've tried it under Gnome, XFCE, KDE, etc. I've even tried Wayland but, yeah, Wayland + nVidia are/were a match made in hell.
The sound device *appears* as some kind of HD-Audio-esque thing but just defies typical behaviour for such hardware and only produces noise out of any audio jack, whatever the configuration.
There's also the on-board WiFi – I gave up on that, completely, but don't need it, either, so that isn't too much of an issue. (Not right now, anyway. I did need it, recently. It would be nice to know the hardware does work if I should happen to need it, again.)
My "Windows 10" hardware is just a pile of incompatible rubbish – that's what. I gave up fighting with it long ago because I honestly can't be bothered to keep trying. After the days become weeks, once or twice round, one just gives up.
Part of the problem is that everything is on-board and what's not on-board is the GPU and that's just too expensive to simply replace. To avoid having this issue, again, I'll be making sure that my next box has a motherboard that is 110% Linux-friendly (i.e. ALL the on-board stuff works flawlessly, without any need to fight with it) and the GPU is proven good under Linux before the return-window on the part runs out.
Of course I won't just land-fill the old hardware. It will probably work fine as a headless server which never needs to emit sound, connect to WiFi or use the GPU in anything other than compute modes.
I, for one, will be forced to upgrade my PC hardware when Windows 10 ceases to serve my needs because, sadly, it is officially incompatible with Windows 11. I could hack my way around the official requirements – that's easily done – but I don't wish to exert effort defying Microsoft's wishes and so I suppose that I will have to play a role in this up-swing in PC hardware sales, despite my anti-consumerist stance.
Thankfully, once I've replaced my old hardware with stuff that operates properly under Linux – goodbye, nVidia; goodbye, Creative SoundBlaster on-board audio – this coming upgrade-refresh might just be the one to end the cycle.
Never again, Windows.
Now: does anyone have a great hardware review site with a STRONG Linux focus? I.e. one that can be trusted to absolutely lambast any kit that has even minor niggles under Linux – and basically black-list makers whose drivers are rubbish?
An A.I. cannot write anything that is truly of interest to me – or to many of us, perhaps – beyond a scientific curiosity about what the algorithms and the mathematics are capable of. But I fear that they can still write *enough*. The truth is simply that there's a very low bar for content. Enough is easy to achieve.
Consider Netflix as a case study. Today's binge-watchable series are invariably one-trick ponies: they have mastered precisely one of the story-teller's arts – the hook that brings you back. Every episode is a waste of time, meaningless. Characters are not developed, worlds and places are not explored, theories and philosophies are not elaborated, fantasy is not indulged and questions are not confronted. Instead, in the dying minutes of any episode, a hook is placed simply to get the viewer to begin the next episode, in which nothing at all will happen, either.
Ceasing between episodes is consequently uncomfortable but, should one abandon ANY of these "binge-watchable" things at T=10 minutes into an episode, one very quickly realises that there is no real reason – besides boredom – to pick it up again.
Can an A.I. write this? Surely it can or it will be able to, soon – perhaps only two academic papers down the line.
Researchers have studied how free pornography and "tube-sites" exploit the dopamine loop in the brain. If A.I. could reproduce this exploit with matter that is both free of taboo and that does not trigger any interruption by a refractory period, the result could be devastating.
Could A.I. power a pleasure-button that many – like rats – would press until they die? In fact, it will not be necessary to press the button – we've "autoplay" for that and the 60-second video format – close the feedback loop with "telemetry" and any control engineer can tell you what can be built.
Will I then be able to disable the sponsored search engines?
In its current state, iOS Firefox will automatically enable sponsored links on the home page – to Amazon and other evil actors – and require you to turn those off manually. It also includes sponsored search-engine plugins for those same evil actors, whose buttons appear whenever you activate the search/address bar, and I can't find a way to disable them.
Firstly, enabling and supporting evil actors like Amazon should not be done. Secondly, why can't I turn those OFF?
Until Mozilla allow me full control of, and confidence in, my browser once again – or, even better, just stop including those traps and parasites at all, ever – I honestly do not care what rendering or HTML engine is running behind the scenes. Frankly, Mozilla also have bigger problems, elsewhere, including the state of Desktop and ESR Firefox.
The end of the rule of Apple's WebKit is great but it is not the end of the problems browser users (that's everyone) face, today. In fact, I would even doubt it is the most significant!
Could you possibly mean it doesn't do stuff like pulling an emergency-stop in the middle lane of the notoriously narrow-laned Brenner pass (between Austria and Italy) with no reason or other vehicle (apart from the overtakee) in sight, while I'm executing a perfectly tame, considered, pre-meditated over-take?
Someone who works in the automotive industry as a programmer explained it to me: these systems basically only count false negatives. Slamming on the brakes (even causing a pile-up because of inadequate following distance from those behind) doesn't count as a black mark against them but FAILING to slam on the brakes when the driver is incompetent does. So they all just guess that any sensor blip is worth a crash-stop and, even if that actually causes a crash, blame the human anyway.
(It slammed on the brakes and induced so much unexpected under-steer, I ended up half in the lane on the outside of the curve which was thankfully vacant. I can anticipate a lot of things on the road – I drove for decades in South Africa – but who can anticipate the moment when software or sensor bugs will suddenly strike?)
Or GitHub search!
I have a GitHub account but my browser is typically not logged in – see, well, this very article for reasons why one might have set it to delete cookies upon exit – and, my GOD, does it enrage me when I'm punted to the Sign-In screen for searching what is actually Open Source code.
The very essence of GitHub's existence and market dominance stems from the community that wrote the content I am trying to search. For the love of U+FE0E, ...
The *vast*, *overwhelming* majority just click the button that is visually styled, sized and positioned to entice them to click it and most of those who don't just click the button that makes it not appear the next time: same button!
Those of us who even question the fairness of these things are simply outnumbered.
Why do streaming sites not provide a useable interface for finding shows? Because that would present a choice between their current interface and an alternative and their current interface is entirely designed to squeeze addiction and subscription-renewals and binge-watching from everyone else.
It's not because they hate anyone with an organized mind who cares to see content in categories or sensibly-sorted lists or in any logical way, whatsoever – they simply don't give a damn about us and, frankly, they're probably happy when people like me *cancel* their subscription: we're outnumbered and too difficult to bother with.
If Microsoft are doing this, Google either are doing it too or have already done it – perhaps even already rejected the possibilities. Microsoft are certainly not breaking new ground.
Whatever the case, however, none of the actors in the Search-space are doing it in order to improve their Search offerings and thus regain market share.
Their motives will be profit – one way or another – and, sadly, the utility-value of Search is not measured in Shannon Information.
The answers-to-queries ratio is almost anathema to these companies and their exploitation will not change that. We should not expect or even hope for a return to the golden-days of freely accessible, online answers to our questions.
Simultaneously, we should be starting to realise that just expecting our own stake-holders to "search for it" is not reasonable any more, and design our own products and references and documentation and code-libraries and lookup-tables to reflect the fact that our users, too, are deprived of useful search engines. When you next read a nice, ergonomic, friendly-faced, human-language error message in your terminal and know, for certain, that THAT will be useless – even quoted – to search, yearn for those good, old-fashioned codes and numbers and appendices full of tables, from the past.
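That old-fashioned discipline costs almost nothing to apply, either. Here is a minimal sketch of the idea – the code number and message are invented purely for illustration:

```shell
# Minimal sketch: emit a stable, greppable error code alongside the
# friendly prose. The code is what survives copy-paste into a search
# box or a bug report; the prose is for the human at the terminal.
# (The code number and message are invented for illustration.)
fail() {
  code="$1"; shift
  printf 'E%04d: %s\n' "$code" "$*" >&2
}

fail 1042 "could not reach the licence server"
```

A user searching your documentation for "E1042" finds exactly one answer; searching for the friendly prose finds everything and nothing.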
Don't they know that the best way to compete is just to have some content?
Sigh. We've gone and got our month of Netflix for the year. We do this about once a year, in winter, to catch up on "everything" the family cares about. If you take that limited subset, minus what's 'gone from Netflix' (and that includes "Netflix originals" that just weren't), there's literally nothing new after a year of being off the service.
Whole household is complaining that they're out of stuff to watch after a mere three days – it's not just me!
Looking into what's gone, and why former "Netflix originals" could even possibly be gone, I'm learning that the providers are basically fragmented into uselessness, now. Anything for which the rights are sought-after goes to whoever owns those rights – which, from a consumer's point of view, may as well be random – and everything even a little bit older or more esoteric just disappears and goes nowhere because nobody is willing to pay or fight for the rights to serve it. And, when newer stuff gets taken by the lawyers, the back-catalogue is always affected.
The inevitable heuristic is that *if* it is new and in vogue, it's exclusive to a service you don't have. If it is old or niche, it is nowhere at all.
I am honestly struggling to understand why one would ever shower *before* suffering the experience of public transport.
I have neither will nor inclination to impinge upon anyone else's olfactory senses or inflict unhygiene upon them but I have **BEEN** on public transport. A decontamination chamber at the destination is more appropriate than a shower before departure!
A world in which I felt obliged to maintain a sanitary condition in public would be well beyond my wildest dreams – a proper utopia! Perhaps, in that fairy-land, people would respect the personal space of others and refrain from watching videos on their phones with the sound up in restaurants, too – we can but dream, yeah?
As far as offices are concerned: they are often little better! We can all identify with that sinking feeling of despair upon finding someone else's username pre-populated into the login screen, irrefutably explaining the novel greasiness of keyboard and other peripherals.
And remember shared desk-phones? I'll stop, now – writing this post is re-traumatising me.
That's more or less been my experience: performance simply raises the bar for recognition and is never judged relative to the standards of others or to absolute, objective contribution. One is rewarded for exceeding one's own past and the worst thing one can do is set that bar too high to sustain in the future.
After five years at one place, I approached management and told them that I was unhappy because promises to bring my pay-scale up to par with my industry, given my experience, had not been met despite the fact that the entire small business was then pretty much defined by what had been my own, personal prototype – started literally from file-new-project, by me, and grown, by me, into its overwhelmingly dominant position. They said they did not need me any more – they had the product, now – and that they did not care about my concerns. Wouldn't everyone like to earn more? Nobody else was getting bumped so why should I be? Absolute contribution and effort and excellence and the fact that I had stuck through the company's hard times – the very reason I was paid below par, then – were forgotten, as were the all-nighters, the coding-on-holiday, the support-calls-on-weekends and the rest.
After many such experiences, I've learned: one must go in with excessive demands that must be met to the letter and, then, one does the minimum and spares the horses – more is never rewarded but relative decline is never tolerated so excellence is only a route to disadvantage in the future.
Do these employees expect that their going-the-extra-mile, today, will lead to something in their future? If so: they're naive fools – at best, they will be recognised as people who "can" and subsequently find themselves having to compensate for the shortcomings of others who "can't".
It is ironic that I would be less bitter and jaded, today, had I stood up for myself and demanded more from the start of my career instead of trusting the promises of corporations and believing the stupid fallacy that merit, effort and ingenuity were the routes to success.
Enthroned upon the porcelain, a thought occurred to me: that new twenty-nine-American-cent tier makes no sense in today's inflation-driven economy but it absolutely DOES when one goes and re-reads that Reg. article on the Internet of Shite[1].
That's fairly clairvoyant of Apple – nay, Jobs-esque! They've seen Stable Diffusion and ChatGPT and GitHub Co-Pilot on the horizon (nay: already bloody here, too soon) and they know the flood is coming. They do not want bucks from it – cents on volume will do.
I follow Deep Learning Twitter[2] and they are not off the money, here. That flood *is* coming and, if you're a small developer with small dollar-apps on the Store, it is coming for *your* lunch. However innovative you may be, if you are small and independent, sheer grunt-work can replace your efforts. Perhaps the quality and flair and spark might be lost but the new A.I. Store Barons will have essentially limitless grunt power at their hands and that power will be optimised for marketability and store appeal and click-through rate. Most notably, this will all come sans scruple – independent devs and creative types tend to carry those and even occasionally hold Morals.
In this flood, pearls will be lost long before they can even present before the swine; diamonds will be irrelevant in the exponentially-scaled rough.
[1] IoS is definitely Internet of Shite-with-an-E. Of a good dram of Whisk{e}y, one might say, "there's the shit" but, were one to say, "that's properly shite," one's judgement would be conclusively the contrary.
[2] The Bird Site is bad. But, if you're in "tech" and do not follow Deep Learning Twitter – at least lurkingly-passively – you are not doing due diligence. Just note that many of the pioneers and visionaries in the field do rather tend to re-tweet a lot of shite so suppressing their re-tweets is advisable.
... than many in-vogue NFT, for sure.
Safer because buying "I am Rich" MK (n+1) will actually add something to one's Apple ID – albeit something fatuous – and at least that has some precedent for being somewhat secure as a token of "ownership" of digital "properties".
The 30% cut remains iniquitous.
If Epson do know two things, they are, in this order: (1.) that this is not about the environment but rather about their turn-over and bottom line and (2.) that marketing works in the printer market and the consumer does not know better – green-wash the turd and the consumer will lap it right up.
In my opinion, an Ink-Jet printer is basically not a printer. You can't print with it because it is always either empty, dry, clogged or broken or, if it is none of those, then the print job just isn't worth a whole new set of cartridges that will be junked when the time comes for the next job because, by then, one of the first set of states will certainly apply.
Hell, I'd love to say that I lived in a place where laws banned such things from the market. Ink-Jet is a blatant scam – a false product and a con. Sure, I could believe that it is *possible* to build a worth-while Ink-Jet printer but I just can't believe that any sales and profit motivated corporation ever would and LEAST of all: Epson.
Also: what, precisely, is "Mechanical Energy"? That's just BS of the highest order.
Many of those "brands" are actual individual independent artists, musicians and creators – that is, they're *people*.
Sure, there's a lot wrong with Twitter but a lot of actual people have their entire identity and profession tied up in their Twitter profile and following. Whether that was a wise move on their part is entirely irrelevant, today – it is the fact of the moment and he's trampling all over that and enabled to do so just because he is rich.
Perhaps, n-decades down the line, this will not be the case and these artists and creators will have learned not to put their entire profile and identity in a corporate-controlled silo – we can hope – but the fact of the matter is that they stand to lose their livelihoods. As much as I am anti-big-corporation and anti-silo, I also appreciate that many of these creators would probably be doing mundane jobs just to eat if they had never had the opportunity that pre-Musk Twitter afforded them and, since I follow many and enjoy their content, I'm quite angry that their chances are being eroded.
Do you mean Intellectual Property? I trust European courts and am fairly certain that Hong Kong and China hold no jurisdiction over anything I'd be remotely interested in.
Do you mean some cultish, misguided fantasy of "ownership" of something accessible by a URI, the requestor of which is NOT authenticated by the server? Because that's just crazy shit!
I really wish that people would be less accepting of this craziness or else, one day, we might all wake up in a world where someone actually relevant – not I – begins to recognise these claims of "ownership" and "home" and, then, we're properly fucked!
The article fails to mention how the same thinking plagues the Open Source world, too, in MANY, MANY cases. One prime example: Mozilla Firefox.
Sure, it's FOSS but Mozilla appear to be hell-bent on proving that "irrelevance" is achievable by those holding the reins of a FOSS product, too, and, in fact, that they can mimic Google in removing every feature anybody actually wants and adding tonnes and tonnes of cruft that nobody asked for.
All of this – whether in the corporate world or without – stems from the same fundamental truth: nobody, today, achieves a powerful role while retaining a moderate attitude or an understanding that just-being-brilliant-at-something is enough. The ONLY way to the top is to fully subscribe to the cult of owning everything, always increasing everything, beating everything. And, of course, everything must be maximally monetized – just a fair profit is not enough.
This story prompts one primary question: *why* do I consider giving my code and insights to other humans (via GitHub or StackExchange or the like) to be alright if the thought of Microsoft-controlled Co-Pilot exploiting the same is anathema?
I think this is a story about solidarity and, quite simply, I don't hold any solidarity for corporations. For other coders, I can at least try to believe that I'm helping out a human being who may very well be living a similar life to my own – past and present. Their high-functioning thoughts may very well be being exploited and, frankly, every little helps, right?
Had Microsoft said to the open-source world that their A.I. trained on open source *was* itself also open source and, additionally, free to use for free-as-in-freedom work – and not usable for proprietary work, just as proprietary code was precluded from its training data – I expect that the revulsion from the world of coding would be very much different.
Do not break the picket line! Solidarity! No open-source code for corporate parasites!
They don't know how to "manage" in any way other than "presence". These are the people for whom "respect" is a thing you earn by wearing shiny shoes and a wrist watch or for prancing about with a certain outgoing, chummy body language and demeanour -- not something you get for actual graft, skill and ability -- and for whom management means influencing your underlings through "respect" and manipulating any uncooperative ones with psychological tricks and, frequently, abuse.
The irony is that, once, long ago, there actually were things in an office that actually were useful. They were called "whiteboards" and, if you could actually find a meeting room with a clean one and sufficient pens that actually wrote, a team of developers with the right direction and camaraderie *could* actually use them to come up with an idea or a plan and take a 'phone picture of it, afterwards, to put down as "documentation."
Whiteboards, however, don't look very "nice" (I guess...) and so they've been gone for years -- about a decade since I found a useful one!
There's one critical thing that's missing from this article: GitLab, the software, is open-source software!
However much GitLab might try to lean on the fact that GitLab dot com offers some Enterprise Edition features -- not fully open-source -- to free users, the GitLab product stems from an open-source background and the core functionality certainly is still open-source. Many of the supposed freeloaders contributed patches and debugging time and feedback and well-researched issue reports and other input into that product!
It is quite dishonest for GitLab dot com, the commercial entity, to simply sum up the cost of keeping some hard-drives spinning! They also should perform the impossible calculation of how much of their income from actual paying customers should rightly be attributed to work from the community they're now spurning.
I don't think anyone on the open-source side of this equation was or is complaining that GitLab dot com brings in income from exploiting the open-source portion of their code base -- it's within the terms of the license. But, to appreciate exactly *why* this feels like a massive rug-pull to many of us, ask this: would anybody have ever contributed to GitLab open-source, had they known they were just free labour for a corporation that chooses to optimise its bottom-line at the expense of this very community -- pretty much just like any other capitalist corporation?
Prolly not, yeah? Capitalism and community don't mix!
I mean, I'm bitter because I've just had to spend a tonne of my time migrating from self-hosted GitLab to self-hosted Gitea. This, it turns out, was a very good decision but I rather liked GitLab, back in the day, and do somewhat resent the way that they've been treating GitLab CE users as second-class citizens for a while -- pretty much making from-source builds too onerous to bother with, forcing the use of Omnibus or official, bloated Docker images, and pushing U.I. junk into CE that nobody asked for, that can't readily be disabled and that does nothing but plug an EE-only feature.
The writing has rather been on the wall for at least some years!
I have been waiting for this for a LONG time. Finally, I can banish Google Chrome from my PC. (Firefox is, will continue to be and will always be my main browser.)
I downloaded the new non-beta build of Edge-with-Chromium (that's what it calls itself, I believe) and tested it a bit. The headline results are these: the installed `msedge.exe` supports BOTH `--user-data-dir` AND `--app` and that means that I don't need Google Chrome anymore!
I browse with Firefox but I make extensive use of shortcuts that employ these two command-line parameters to create sand-boxed environments for the individual web apps that I use all the time. For example, I have a whole user-data-dir for work-related web apps and that's used to get an app-like experience with GitLab, Jenkins, Trello, Slack, etc. -- each one (launched with --app) gets its own task-bar button and its own window, remembers that window's last location for the next launch, and gets its own entry in the alt-tab list, just like a real app. Then I have another user-data-dir for personal apps and some of those apps are duplicates -- I have two instances of GitLab, for example, and, because of the separation of user-data directories, they behave completely independently -- I can even have them running side-by-side with separate logins.
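For anyone wanting to replicate the setup, the shortcuts boil down to nothing more than those two flags -- they are standard Chromium flags, so they behave identically in Chrome, Chromium and Chromium-based Edge. A rough sketch (the paths and URLs are illustrative and the helper only *prints* the command it would run -- drop the echo to actually launch):

```shell
# Sketch of the per-app sandboxing described above. --user-data-dir
# gives each profile its own isolated cookies, logins and cache;
# --app opens the URL chromeless, with its own task-bar button and
# alt-tab entry. Dry-run: this only prints the command line.
launch_webapp() {
  browser="$1"; profile="$2"; url="$3"
  echo "\"$browser\" --user-data-dir=\"$profile\" --app=\"$url\""
}

# Two GitLab instances, fully independent because of the separate
# user-data directories:
launch_webapp msedge.exe "$HOME/webapps/work"     "https://gitlab.example.com"
launch_webapp msedge.exe "$HOME/webapps/personal" "https://gitlab.example.net"
```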
I've had to keep some form of Chrome around just for this, for years. Now, I don't need that any more because I can use the Windows built-in browser. (Not built-in, yet... but soon.)
Do not misunderstand me: choosing to replace Chrome with Edge-with-Chromium is certainly a choice of the lesser evil. Ultimately, Microsoft are going to bundle a browser in their OS so actually using it costs me nothing as far as installed footprint is concerned. Also, I'm personally extremely anti-Google so, if I have to choose between data spies, I'll be choosing Microsoft any day of the week.
I did experiment, in the past, with builds of Chromium sources that had been patched to remove Google's hooks into the code base but, personally, I found them to be very high-maintenance commitments and ended up returning to Chrome proper and feeling dirty for it. Replacing that with Edge-with-Chromium isn't a perfect solution but it is certainly a step in the right direction.
There shall be NO Google software (or update services) running on my PC.
> "We regret the content of these communications, and apologize to the FAA, Congress, our airline customers, and to the flying public for them."
The fact that "the flying public" are last in that statement and that the VICTIMS of Boing's negligence were not explicitly mentioned speaks volumes about Boing's priorities.