The argument certainly has holes in it, but I think the most important thing to take away from it is this: Consider your threat model before you worry about privilege escalation. There are some situations where it matters, and a lot of situations where it doesn’t matter.
If it doesn’t matter for your threat model, then don’t waste too much time / effort / complexity trying to mitigate it.
My container host at home uses quadlet to run containers directly from systemd via podman. Pretty nice and simple. The only annoyance I have run into is getting it to pull containers from a private registry without TLS. This is not production in the sense that I serve a business from it, but it currently runs about 16 different local services I run at home.
It is exceedingly simple compared to anything having to do with k8s.
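In case it helps anyone, here's a minimal sketch of what a quadlet unit can look like (the image, name, and port are placeholders, not from my actual setup):

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example web service run by podman via quadlet

[Container]
Image=docker.io/traefik/whoami:latest
# publish only on localhost so nothing is exposed to the network
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, quadlet generates a regular whoami.service that you can start, stop, and journal like any other unit.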
We do something similar, but we use docker in systemd. To avoid the iptables problem, we only publish ports on localhost, like -p 127.0.0.1:8001:80. We also run HAProxy directly on the host, which forwards to the correct containers.
I've even made a custom solution for zero-downtime deploys and failovers, where on deploy we just swap out the backend server via HAProxy's admin API socket. For soft failover, we pause the traffic, wait for the other server to start the container, then swap the backend and unpause. (Pausing is done by setting maxconn = 0.)
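For the curious, the runtime-API part looks roughly like this (frontend/backend names and ports here are made up; a sketch of the idea, not our exact config):

```
# haproxy.cfg: expose the admin socket
global
    stats socket /var/run/haproxy.sock mode 600 level admin

# pause traffic: refuse new connections on the frontend
$ echo "set maxconn frontend fe_main 0" | socat stdio /var/run/haproxy.sock

# point the backend server at the freshly started container, then unpause
$ echo "set server be_app/app1 addr 127.0.0.1 port 8002" | socat stdio /var/run/haproxy.sock
$ echo "set maxconn frontend fe_main 1000" | socat stdio /var/run/haproxy.sock
```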
My home setup is also mostly all containerized through the use of quadlet/systemd, and probably the most important thing for me is that it all just works with incredibly minimal babysitting. I also use podman-auto-update so I don’t need to worry about updates and restarts and whatnot.
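In case anyone wants the recipe, wiring up podman-auto-update is basically this (a sketch; the quadlet key is equivalent to the io.containers.autoupdate=registry label):

```
# in the .container file, opt the container into updates:
#   [Container]
#   AutoUpdate=registry

# enable the timer that periodically pulls newer images and restarts units
systemctl --user enable --now podman-auto-update.timer

# see what would be updated without touching anything
podman auto-update --dry-run
```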
I’ve also been doing the podman+systemd thing for 4.5 years. I haven’t moved to quadlet yet, but now that I’m on Podman 5 I should have that option. Even doing it “the hard way” has been very reliable and manageable, but the new way definitely makes getting up and running easier.
On Arch and even Ubuntu-based systems, I have found that Podman regularly breaks after updates. It happened enough to force me to switch to Docker, which seems much more stable.
Am I the only one with this experience? Am I doing it wrong? Or is everyone on Red Hat OSes, where Podman probably works more reliably?
I'm using Podman on macOS and FreeBSD. The macOS version is somewhat cheating: it's actually podman-remote plus a little bit of VM management, so it's really a Fedora CoreOS VM that runs the containers. I've not had problems with either.
No, but there's no reason that containers must have a Linux ABI. OCI has ratified specs for Linux and Windows containers. Solaris uses the Linux ABI in branded zones, but the (not yet final) FreeBSD container spec uses native FreeBSD binaries. I'd like to be able to run Darwin binaries in containers on macOS. Apple had a job ad up recently for kernel engineers to work on OCI support, so hopefully that will come soon. The immutable system image model of macOS, plus APFS providing lightweight CoW snapshots, should mean that a lot of the building blocks are there.
Been doing it on Debian for over 4 years. I think there was one update that cost me a few hours, but no, I haven’t run into any kind of frequent breakage.
On second thought, I was probably using bleeding-edge Podman (even on Ubuntu) because the Debian version was really old and lacking an important feature or bugfix that I needed.
Hmm, my experience was something like that too. Issues with the networking drivers, weird almost-but-not-quite compatibility with Docker Compose (overpromising and under-delivering), etc. (can't remember everything). I tried going back to it twice but finally gave up, as each time I ended up wasting hours over something, whereas Docker has always just worked. Some of my issues could have been a result of using Arch, and perhaps Podman is in a much better state these days, but I've been burned too many times by it at this point to give it another go (I don't trust it anymore), and I don't really feel like I have anything to gain from it now anyway. Rootless Docker works well enough for me.
I used to be the same. However I quickly learned that I am not available enough for my users.
My partner hates maintaining her own system; however, she hates it even more when a printer stops working, or when she needs to install some new software and I'm unavailable.
So even though she does hate maintaining her own system, she still wants one that she can manage herself. While I agree with most of what is said, I wouldn’t say these non-enthusiast-friendly OSes are based on “fundamentally wrong” principles.
I maintain my position that they are based on fundamentally wrong principles, even in the case you described. An intentionally limited, non-enthusiast-oriented OS such as the GNOME OS proposed in the blog post linked from mine would likely not work for your partner, unless she never wishes to install software that isn't part of said OS (or of a third-party software store like Flathub).
A general purpose operating system will have more packages, and can still remain maintainable by someone who’s not a sysadmin by trade. Limiting software availability is not helping non-enthusiasts.
On the NixOS computer I set up for my partner I enabled AppImages (installed/updated through Gear Lever), Flatpak, and the GNOME Software Center, so she can install and update programs without my intervention.
She definitely doesn’t want to maintain the system herself, so stuff like drivers and system software are managed by me and generally shouldn’t break because I handle the updates myself (and even if they do, she can boot from an old generation until I’m available to take a look), but she might need to install new programs or update Chrome every now and then, so I think this is a nice middle ground.
I mean, that sounds like the setup still makes sense, but the user is allowed/able to fix stuff on their own, either using the same method, or just doing it and then having etckeeper or similar running to record the change and codify it afterwards.
I really don't like being overtly cynical to the point of sounding derisive, but I genuinely stopped reading .NET updates years ago, and one of the reasons I never check on them is that I just know they integrated some bullshit APIs for OpenAI or something else equally unreasonable to bake into an API that's supposed to exist for decades, that requires an internet connection to an unsustainable service. And, lo and behold, they proved me right!
This is why I stopped using C#. I learned my lesson about what Microsoft really thinks of its users back when I swore off the language: Microsoft tried to remove hot code reloading from the open-source, cross-platform CLI tooling so they could lock it behind Visual Studio. The people who develop C# are great, but at the end of the day it's owned by Microsoft, and if you're using a Microsoft product, they see you as nothing more than livestock to herd into the next marketing venture, whether you want it or not. Whether you want a Microsoft account or not, whether you want Copilot or not, whether you want Visual Studio or not (I definitely don't).
EEE works well for their bottom line, but like most things optimized for profit it's terrible for the commons. This is all pretty sad, but expected. I'm particularly afraid of the long-term consequences of WSL.
To end on a more positive note, I love tech and will continue investing my time in projects I believe in, and it’s great there’s a vibrant community that feels like that too!
If you read this far, I encourage you to take the time and remove one piece of MS from your life now :)
F# is nice, but it has a ton of sharp edges around interop with C# (no pun intended). And everything on .NET is written in C#. If you're using an F# library, it's probably the only F# library among all your dependencies, even if you're writing F# yourself. That means dealing with everything F# desperately tries to hide from you but still leaves open like a live electrical box, including OOP/inheritance and exception handling, which is much more painful in F# than it ever had any right to be, because you're forced to use them even if you don't want to. So basically F# punishes you for using the OOP-centric platform it deliberately targeted.
In my limited experience, DNSSEC is only usable when your resolver is allowed to downgrade to insecure DNS. It just breaks too often. Strict DNSSEC is a pain.
So for me, SSHFP isn’t really much protection against MITM.
How does “it” break? Can some domains not be resolved because they have some form of brokenness in their DNSSEC setup?
I’m using a DNSSEC verifying resolver on my laptop and servers, and haven’t run into any issues yet.
With extended DNS errors (EDE), you get details about DNS failures, so if an issue arises it should be relatively easy to diagnose. I am a bit surprised at how new EDE is; it seems like a pretty basic requirement for diagnosing issues…
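If you want to see EDE in action, dnssec-failed.org is a deliberately broken test zone, and recent versions of dig print any EDE option found in the reply (a sketch; exact output varies by resolver and dig version):

```
# query a deliberately mis-signed zone through a validating resolver
dig @1.1.1.1 dnssec-failed.org A

# the status comes back SERVFAIL, and dig shows the EDE option from the
# OPT pseudosection, e.g. "EDE: 9 (DNSKEY Missing)" (codes are in RFC 8914)
```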
Good reminder. I'm not using SSHFP, but it's easy enough to set up and use.
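For anyone else setting it up, it's roughly two steps (the hostname is a placeholder):

```
# 1. generate SSHFP records from the host keys and publish them
#    in the (DNSSEC-signed) zone
ssh-keygen -r host.example.org

# 2. tell the ssh client to trust validated SSHFP records,
#    in ~/.ssh/config or /etc/ssh/ssh_config:
#      Host *.example.org
#          VerifyHostKeyDNS yes
```

Note that VerifyHostKeyDNS only skips the host-key prompt when the resolver actually validated DNSSEC (the AD bit), which loops back to the rest of this thread.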
DNSSEC needs support from all resolvers so that signatures are passed along correctly and so that DS records are queried in the parent zone. There are sadly a lot of resolvers that still lack basic support for a 19-year-old standard.
Yeah, I wouldn’t rely on the resolvers received through dhcp on random networks to implement dnssec. I run unbound locally. No other resolvers should be involved then (only the authoritative name servers). A local dnssec resolver also makes it more reasonable for software (that reads /etc/resolv.conf) to trust the (often still unverified) connection to the resolver.
If a network I'm on were to intercept dns requests (from unbound) towards authoritative dns servers and break dnssec-related records, that would cause trouble. I just checked, and it turns out my local unbound forwards to unbound instances on servers, over a vpn (to resolve internal names). Perhaps my experience would be worse when connecting directly to authoritative name servers on random networks.
On servers, I haven’t seen any dns(sec)-related request/response mangling, and would just move elsewhere when that happens.
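For reference, the validating part of a local unbound setup is tiny (a sketch; distro paths differ, and the VPN forwarding part is omitted):

```
# unbound.conf
server:
    interface: 127.0.0.1
    # enable DNSSEC validation and keep the root trust anchor
    # updated automatically (RFC 5011)
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # log validation failures so broken zones are easy to spot
    val-log-level: 2
```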
I’m honestly not sure how it broke; that’s part of the problem. After all my time troubleshooting, I eventually decided to go back to plain-old DNS and get on with my life.
Maybe the tooling is better these days, and next time I set it up, it’ll go more smoothly.
I say pass the ethical judgment call to users. For “summary-only” posts, only show the summary by default. But give users a button (or a setting) to show the scraped article.
That way you are respecting authors’ preferences, but also preserving the user’s right to have a user agent that acts in the best interests of the user.
I’ll keep repeating it until I die: people don’t use desktop environments, people use apps.
Linux desktops have been fine for a long time now. But if a user can’t run Photoshop/MS Office/whatever else they run on their Macs and Windows, there’s no point.
Even as an atypical computer user (professional developer) this has been a blessing. Between Firefox and a reasonable POSIX environment, I have been able to do what I need to do on a computer for the past couple of decades. I do keep some Windows systems around to stay abreast of developments there, but I would be able to function just fine without them or other commercial operating systems like macOS.
The more specialized your pursuits, the more the commercial operating systems are entrenched, specifically with things like gaming or content creation (video, photo, 3D, audio production tend to favor Windows and macOS). Cubase, Ableton, and a bunch of commercial VSTs remain on my Windows machines but that is just an occasional hobby for me.
I tried switching my non-technical partner from Pop!_OS to Linux Mint (GNOME to Cinnamon). She hated it, mostly because text never looked right (too small in some places, too big in others), and we couldn’t figure out how to get it looking right.
So, she never realized she was using a desktop environment, but she certainly cared when the desktop environment became less user-friendly.
Pardon my ignorance but I’m not sure this brings anything new to the table that isn’t already done by Fedora Silverblue - they ship a recent Gnome desktop and apps, it uses Flatpak as the primary method of obtaining software and it’s atomic.
Fedora makes a few… uh, unique decisions that add some unnecessary sharp edges for normies.
Of course many of those decisions can be worked around with a soft fork. You could just base the new OS on Silverblue, but the author's objections to overlaying software make it sound to me like OSTree isn't an option. OSTree is kinda fundamental to Silverblue though, so that takes Silverblue off the table.
This project is kinda going for "Fedora Silverblue for everyone", with a few different flavors for different desktop environments and types of machine. It feels more like something good for Linux-curious gamers than something you could put on anyone's laptop, but it's a step in that direction.
I played with the Xfce flavor a bit before returning to the ego-soothing sharp edges of NixOS.
I’ve seen a lot of these kinds of articles over the years. I’d really like to see an article that uses a bit of very basic threat modeling as the basis for all of the changes that are made.
Let’s teach people to avoid the security cargo cult.
DNS on Linux is a bit of a mess. Fortunately most Linux distros have chosen one way to do things, so troubleshooting DNS problems can be done reliably assuming you know how it all works and you can find the documentation for your current setup.
However with Arch in particular you need to make your own DNS resolution decisions, so it’s that much more important to understand how things are configured currently on your system.
While the author's solution currently works, I'm not confident that it was fixed entirely correctly, and it may still break seemingly at random at some point. We'll see.
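For anyone debugging this themselves, the first question on any distro is which component actually owns /etc/resolv.conf (a sketch; systemd-resolved is only one of several possibilities):

```
# is resolv.conf a plain file, or a symlink into systemd-resolved,
# NetworkManager, openresolv, ...?
ls -l /etc/resolv.conf
cat /etc/resolv.conf

# if systemd-resolved is in play, show the per-link DNS configuration
resolvectl status
```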
NOYB does great work, but it is wrong on this one. Privacy Preserving Attribution is a sensible, well-designed feature and a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads. I wrote more about why at https://alpaca.gold/@Jeremiah/113198664543831802
I don’t know the specifics of NOYB’s complaint, but reading your thoughts, I think you’re missing an important point:
Sending data over the web, at the very least, leaks your IP, possibly your device type as well. It doesn’t matter how anonymized the data contained in the envelope is. Making a web request that sends some data, any data, will always be leaky compared to making no web requests, which means that the user needs to trust the endpoint the browser is communicating with.
And this is also where NOYB's complaint may have merit, because any service would work just fine without those PPA requests. And let's be clear: PPA is relevant for third-party ads, less so for first-party ones. In other words, user data is shared with third parties without the user expecting it as part of the service. Compared with browser cookies, a feature that enables many legitimate uses, PPA is meant only for tracking users. It will be difficult for Mozilla or the advertising industry to claim a legitimate interest here.
Another point is that identifying users as a group is still a privacy violation. Maybe they account for that, maybe people can’t be identified as being part of some minority via this API. But PPA is still experimental, and the feature was pushed to unsuspecting users without notification. Google’s Chrome at least warned people about it when enabling features from Privacy Sandbox. Sure, they used confusing language, but people that care about privacy could make an informed decision.
The fact that Safari already has this feature on doesn’t absolve Firefox. Apple has its issues right now with the EU’s DMA, and I can see Safari under scrutiny for PPA as well.
Don’t get me wrong, I think PPA may be a good thing, but the way Mozilla pushed this experiment, without educating the public, is relatively disappointing.
The reason I dislike Chrome is that it feels adversarial, meaning that I can't trust its updates. Whenever they push a new update, I have to look out for new features and think about how I can get screwed by them. For example, when you log into your Google account, Chrome automatically starts sharing your browsing history for the purpose of improving search, and according to the ToS they can profile you as well, AFAIK.
Trusting Firefox to not screw people over is what kept many of its users from leaping to Chrome, and I had hoped they understood this.
The least they could do is a notification linking to some educational material, instead of surprising people with a scary-looking opt-out checkbox (that may even be problematic under GDPR).
Sending data over the web, at the very least, leaks your IP, possibly your device type as well. It doesn’t matter how anonymized the data contained in the envelope is. Making a web request that sends some data, any data, will always be leaky compared to making no web requests, which means that the user needs to trust the endpoint the browser is communicating with.
The problem with this is that it claims too much. You’re effectively declaring that every web site in existence is in violation of GDPR, because they all need to know your IP address in order to send packets back to you, which makes them recipients and processors of your personal data.
This sort of caricature of GDPR is one reason why basically every site in Europe now has those annoying cookie-consent banners – many of them are almost certainly not legally required, but a generic and wrong belief about all cookies being inherently illegal under GDPR without opt-in, and a desire on the part of industry for malicious compliance, means they’re so ubiquitous now that people build browser extensions to try to automatically hide them or click them away!
The GDPR acknowledges that the IP is sent alongside requests, and that it may be logged for security purposes. That’s a legitimate interest. What needs consent is third-party tracking with the purpose of monetizing ads. How you use that data matters, as you require a legal basis for it.
Cookies don’t need notifications if they are needed for providing the service that the user expects (e.g., logins). And consent is not needed for using data in ways that the user expects as part of the service (e.g., delivering pizza to a home address).
The reason most online services have scary cookie banners in the EU is because they do spyware shit.
Case in point, when you first open Microsoft Edge, the browser, they inform you that they’re going to share your data with over 700 of Microsoft’s partners, also claiming legitimate interest for things like “correlating your devices” for the purpose of serving ads, which you can’t reject, and which is clearly illegal. So Microsoft is informing Edge users, in the EU, that they will share their data with the entire advertising industry.
Well, I, for one, would like to be informed of spyware, thanks.
The GDPR acknowledges that the IP is sent alongside requests, and that it may be logged for security purposes. That’s a legitimate interest. What needs consent is third-party tracking with the purpose of monetizing ads.
Luckily for Mozilla, PPA does not do “third-party tracking with the purpose of monetizing ads”. In fact, kind of the whole point of PPA is that it provides the advertiser with a report that does not include information sufficient to identify any individual or build a tracking profile of an individual. The advertiser gets aggregate reports that tell them things like how many people saw or clicked on an ad but without any sort of identification of who those people were.
This is why the fact that, yes, technically Mozilla does receive your IP address as part of a web request does not automatically imply that Mozilla is doing processing of personal data which would trigger GDPR. If Mozilla does not use the IP address to track you or share it to other entities, then GDPR should not have any reason to complain about Mozilla receiving it as part of the connection made to their servers.
As I’ve told other people: if you want to be angry, be angry. But be angry at the thing this actually is, rather than at a made-up lie about it.
The reason most online services have scary cookie banners in the EU is because they do spyware shit.
No, they do it because (like the other reply points out) they have a compliance department who tells them to do it even if they don't need to, because it's better to do it.
There’s a parallel here to Proposition 65 in the US state of California: if you’ve ever seen one of those warning labels about something containing “chemicals known to the State of California to cause cancer”, that’s a Proposition 65 warning. The idea behind it was to require manufacturers to accurately label products that contain potentially hazardous substances. But the implementation was set up so that:
- If your product is eventually found to cause cancer, and you didn't have a warning, you suffer a huge penalty, but
- If your product does not cause cancer, and you put a warning on it anyway, you suffer no penalty.
So everyone just puts the warning on everything. Even things that have almost no chance of causing cancer, because there’s no penalty for a false cancer warning and if your product ever is found to cause cancer, the fact that you had the warning on it protects you.
Cookie banners are the same way: if you do certain things with data and don’t get up-front opt-in consent, you get a penalty. But if you get the consent and then don’t do anything which required it, you get no penalty. So the only safe thing to do is put the cookie consent popup on everything all the time. This is actually an even more important thing in the EU, because (as Europeans never tire of telling everyone else) EU law does not work on precedent. 1000 courts might find that your use of data does not require consent, but the 1001st court might say “I do not have to respect the precedents and interpretations of anyone else, I find you are in violation” and ruin you with penalties.
This is why the fact that, yes, technically Mozilla does receive your IP address as part of a web request does not automatically imply that Mozilla is doing processing of personal data which would trigger GDPR.
Mozilla does not have a legitimate interest in receiving such reports from me.
Those are fairly useless for this purpose without a lot of cleaning up, and even then I'd say it is impossible to distinguish bots from real visits without actually doing the kind of snooping everyone is against.
You are not allowed to associate a session until you have permission for it, and you don't have permission on first page load unless the visitor agreed on a previous visit.
This whole tracking-through-the-website approach is illegal if you don't have a prior agreement, or if you don't need a session for the pages to even work, and you will have a hard time arguing that browsing a web shop needs one.
Using a third party doesn't solve anything, because you need permission to do this kind of tracking anyway. My argument, however, was that you can't learn how many people saw or clicked an ad from your logs, because some saw it on other people's pages or on a search engine, for which you have no logs, and A LOT of those clicks are fake, and your logs are unlikely to be rich enough to tell which.
What you want to learn about people's behavior goes beyond the above, which I'm sure you'd know if this were actually remotely your job.
I’m not sure anyone here is arguing that these are the same thing and certainly not me.
I'm not sure if you are implying that I am neck-deep in the ad industry, but I certainly never have been. I am, however, also responsible for user experience in our company, and there's significant overlap with needing to understand visitor/user behavior.
We go to great lengths to comply not only with the letter of the law but also with its spirit, which means we have to make a lot of decisions less informed than we'd prefer. I am not complaining about that either, but it does bother me when every attempt to learn ethically is described as either unnecessary or sinister.
If your product is eventually found to cause cancer, …
The condition for requiring a warning label is not "causes cancer" but "exposes users to something that's on this list of 'over 900 chemicals' at levels above the 'safe harbor levels'", which is a narrower condition, although maybe not much narrower in practice. (I also thought that putting unnecessary Prop. 65 warning labels on products had been forbidden (while remaining common), but I don't see that anywhere in the actual law now.)
No, the reason many have them is that every data privacy consultant will beat you over your head if you don’t have an annoying version of it. Speaking as someone on the receiving end of such reports.
No, you must have an annoying version of it because the theory goes, the more annoying it is the higher the chance the users will frustratingly click the first button they see, e.g. the “accept all” button. The job of privacy consultants is to legitimize such practices.
Which part of "Speaking as someone on the receiving end of such reports" was not clear?
Do you think they are trying to persuade us to have more annoying versions so we can collect more information we don't even want, for the benefit of whom, exactly?
My guess is that you don't have much experience working with them or with what those reports actually look like.
Well, what I do know is that the average consent modal you see on the internet is pretty clearly violating the law, which means that either the average company ignores their data privacy consultants, or the data privacy consultants that they hire are giving advice designed to push the limits of the law.
The problem with this is that it claims too much. You’re effectively declaring that every web site in existence is in violation of GDPR, because they all need to know your IP address in order to send packets back to you, which makes them recipients and processors of your personal data.
Yes, IP addresses are personal data and controlled under GDPR, that’s correct. That means each and every HTTP request made needs freely given consent or legitimate interest.
I request a website, the webserver uses my IP address to send me a reply? That’s legitimate interest. The JS on that site uses AJAX to request more information from the same server? Still legitimate interest.
The webserver logs my IP address and the admin posts it on facebook because he thinks 69.0.4.20 is funny? That’s not allowed. The website uses AJAX to make a request to an ad network? That isn’t allowed either.
I type “lobste.rs” into Firefox, and Firefox makes a request to lobsters? Legitimate interest. Firefox makes an additional request to evil-ad-tracking.biz to tell them that I visited lobsters? That’s not allowed.
a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads
Balancing, lol. For years ad providers ignored all data protection laws (in Germany, way before the GDPR) and then the GDPR itself. They were stalking all users without consent. Then the EU forced the ad companies to follow the law and at least ask users whether they want to share private data. The ad companies successfully framed this as bad EU legislation. And now your browser wants to help ad companies stalk you. Framing this as balancing is ridiculous.
All it does is tell a site you have already visited that someone got to the site via an ad without revealing PII.
[…]
which ads worked without knowing who they worked for specifically
Just because there is no nametag on it doesn’t mean it’s not private data.
It’s reasonable for a business to know if their ad worked
Sorry for the bad comparison: but it's also reasonable for a thief to want to break into your house. Still, it's illegal. Processing personal data is illegal, with some exceptions. Yes, there is "legitimate interest", but this has to be balanced against the "fundamental rights and freedoms of the data subject". I would say "I like money" isn't enough to fall under this exception.
Apple enabled Privacy Preserving Attribution by default for iOS and Safari on macOS 3 years ago
"But the other one is also bad" could be an argument, iff you can prove that this was willfully ignored by the others. There are so many vendors pushing such shit onto their paying customers that I would assume this was overlooked. Also, Apple should disable it too, because as far as I can see it's against the law (no, I'm not a lawyer).
And no, I don't say ads are bad or that you shouldn't be allowed to do some sort of customer analysis. But just as the freedom of your fist ends where my nose starts, the freedom of market analysis ends where stalking customers begins. I know it's not easy to define where customer analysis ends and stalking starts, but currently ad companies are miles away from the line. So stop framing them as poor little advertisers.
The thing that makes me and presumably some other people sigh and roll our eyes at responses like this is that we’re talking about a feature which is literally designed around not sending personal data to advertisers for processing! The whole point of PPA is to give an advertiser information about ad views/clicks without giving them the ability to track or build profiles of individuals who viewed or clicked, and it does this by not sending the advertiser information about you. All the advertiser gets is an aggregate report telling them things like how many people clicked on the ad.
If you still want to be angry about this feature, by all means be angry. Just be angry about the actual truth of it rather than whatever you seem to currently believe about it.
The only problem I see is that Mozilla is able to track and build profiles of individuals. To some extent, they’ve always been able to do so, but they’ve also historically been a nonprofit with a good track record on privacy. Now we see two things in quick succession: first, they acquire an ad company, and historically, when a tech company acquires an ad company, it’s being reverse-acquired. Second, they implement a feature for anonymizing and aggregating the exact kind of information that advertising companies want (which they must, in the first place, now collect). PPA clearly doesn’t send this information directly to advertisers. But do we now trust Mozilla not to sell it to them separately? Or to use it for the benefit of their internal ad company?
The only problem I see is that Mozilla is able to track and build profiles of individuals.
Except they aren’t! They’ve literally thought of this and many other problems, and built the whole thing around distributed privacy-preserving aggregation protocols and random injected noise and other techniques to ensure that even Mozilla does not have sufficient information to build a tracking profile on an individual.
And none of this is secret hidden information. None of it is hard to find. That link? I typed "privacy preserving attribution" into my search engine, clicked the Mozilla support page that came up, and read it. This is not buried in a disused lavatory with a sign saying "Beware of the Leopard". There's also a more technical explainer linked from that support doc.
Which is why I feel sometimes like I should be tearing my hair out reading these discussions, and why I keep saying that if someone wants to be angry I just want them to be angry at what this actually is, rather than angry at a pile of falsehoods.
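To make the "random injected noise" part concrete: the textbook version of the idea is differentially private counting, where calibrated noise is added to an aggregate so the total stays useful while any single person's contribution is drowned out. A toy illustration of that mechanism (this is the classic Laplace mechanism, not Mozilla's actual DAP protocol):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Any one user changes the count by at most 1 (sensitivity 1), so
    Laplace noise of scale 1/epsilon statistically hides whether any
    particular individual is in the data.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# the advertiser sees something close to 10000 conversions, but the
# released number doesn't reveal any individual's presence or absence
print(noisy_count(10_000))
```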
Except they aren’t! They’ve literally thought of this and many other problems, and built the whole thing around distributed privacy-preserving aggregation protocols and random injected noise and other techniques to ensure that even Mozilla does not have sufficient information to build a tracking profile on an individual.
How do I actually know that Mozilla’s servers are implementing the protocol honestly?
How do I actually know that Mozilla’s servers are implementing the protocol honestly?
How do you know anything?
Look, I’ve got a degree in philosophy and if you really want me to go deep on whether you can know things and how, I will, but this is not a productive line of argumentation because there’s no answer that will satisfy. Here’s why:
Suppose that there is some sort of verifier which proves that a server is running the code it claims to be; now you can just reply "ah-ha, but how do I trust that the verifier hasn't been corrupted by the evil people", and then you ask how you can know that the verifier for the verifier hasn't been corrupted, and then the verifier for the verifier for the verifier, and thus we encounter what is known, in philosophy, as the infinite regress – we can simply repeat the same question over and over at deeper and deeper levels, so setting up the hundred-million-billion-trillionth verifier-verifier just prompts a question about how you can trust that, and now we need the hundred-million-billion-trillion-and-first verifier-verifier, and on and on we keep going.
This is an excellent question, and frankly the basis of my opposition to any kind of telemetry bullshit, no matter how benign it might seem to you now. I absolutely don't know whether it's safe or unsafe, anonymous or only thought to be anonymous. It turns out you basically can't type on a keyboard without somebody being able to turn a surprisingly shitty audio recording of your keyboard into a pretty accurate transcript of what you typed. There have been so many papers demonstrating that a list of the fonts visible to your browser can often uniquely identify a person. Medical datasets have been de-anonymized just by using different bucketing strategies.
I have zero confidence that this won’t eventually turn out to be similar, so there is zero reason to do it at all. Just cut it out.
If there’s no amount of evidence someone could present to convince you of something, you can just say so and let everyone move on. I don’t like arguing with people who act as if there might be evidence that would convince them when there isn’t.
It’s a perfectly legitimate position to hold that the only valid amount of leaked information is zero. You’re framing it as if that was something unreasonable, but it’s not. Not every disagreement can be solved with a compromise.
I prefer to minimize unnecessary exposure. If I visit a website, then, necessarily, they at a minimum get my IP address. I don’t like it when someone who didn’t need to get data from me, gets data from me. Maybe they’re nice, maybe they’re not nice, but I’d like to take fewer chances.
I like your take on this, insofar as "it's better than what we currently have".
It’s reasonable for a business to know if their ad worked.
I don’t agree with this, it wasn’t even possible to know until about 20 years ago. The old ad-man adage goes that “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Well that’s just the price you pay when producing material that hardly ever is a benefit to society.
Funnily enough there does seem to have been a swing back towards brands and publishers just cutting all middle men out and partnering up. This suggests to me that online ads aren’t working that well.
This to me is so incredibly naive and I’m speaking as someone who doesn’t like ads. How in the world would anyone hear about your product and services without them, especially if they are novel?
Imagining that every company, no matter how small or new, sits on tons of money it can waste on stuff that is ineffective seems unreasonable. Having ads be an option only for companies that are already successful enough doesn't seem particularly desirable from the point of view of the economy.
I'm as much against snooping, profiling, and other abuses as the next guy, but I disagree with seeing all tracking, no matter how privacy-preserving, as inherently bad.
Why? Justify that. What is it about a company requiring advertising that inherently reduces the value of that company to 0 or less? If I have a new product and I have to tell people about it to reach the economic tipping point of viability, my product is worthless? Honestly, I find this notion totally ridiculous - I see no reason to connect these things.
I am fine with ads that are not targeted at me at all and don't transmit any information about me to anyone. For example, if you pay some website to display your ad to all its visitors, that is fine by me. Same as when you pay for a spot in a newspaper or on a billboard. I don't like it, but I'm fine with it.
It’s absolutely naive, and I stand by it because I don’t care if you can’t afford to advertise your product or service. But I do find ads tiresome, especially on the internet. Maybe I’m an old coot but I tend to just buy local and through word of mouth anyway, and am inherently put off by anything I see in an ad.
Imagining that every company, no matter how small or new, sits on tons of money it can waste on stuff that is ineffective seems unreasonable. Having ads be an option only for companies that are already successful enough doesn't seem particularly desirable from the point of view of the economy.
This is pretty much the state of affairs anyway. Running an ad campaign is a money-hole even in the modern age. If I turn adblock off I just get ads for established players in the game. If I want anything novel I have to seek it out myself.
But as I said, I'm not against this feature per se, as an improvement on the current system.
It’s worth repeating, society has no intrinsic responsibility to support business as an aggregated constituent, nor as individual businesses.
One might reasonably argue it’s in everyone’s best interest to do so at certain times, but something else entirely to defend sacrosanct business rights reflexively the moment individual humans try to defend themselves from the nasty side effects of business behavior.
We absolutely have a responsibility to do so in a society where people rely on businesses for like… everything. You’re typing on a computer - who produced that? A business. How do you think most Americans retire? A business. How do new products make it onto the market? Advertising.
I think it’s exactly the opposite situation of what you’re purporting. If you want to paint the “society without successful businesses is fine” picture, you have to do so.
Would it not be fair to suggest that there’s a bit of a gulf between businesses people rely on and businesses that rely on advertising? Perhaps it’s just my own bubble, dunno
How in the world would anyone hear about your product and services without them, especially if they are novel?
Have you heard of shops? It’s either a physical or virtual place where people with money go to purchase goods they need. And sometimes to browse if there’s anything new and interesting that might be useful.
Also, have you heard of magazines? Some of them are dedicated to talking about new and interesting product developments. There are multiple printed (and digital) magazines detailing new software releases and online services that people might find handy.
Do they sometimes suggest products that are not best for the consumer, but rather best for their bottom line? Possibly. But still, they only suggest new products to consumers who ask for it.
Regardless how well PPA works, I think this is crux of the issue:
Mozilla has just bought into the narrative that the advertising industry has a right to track users
Even if PPA is technically perfect in every way, maybe MY personal privacy is preserved. But ad companies need to stop trying to insert themselves into every crack of society. They still have no right to any kind of visibility into consumer traffic, interests, eyeballs, whatever.
PPA does not track users. It tracks that an ad was viewed or clicked and it tracks if an action happened as a result, but the user themself is never tracked in any way. This is an important nuance.
What “visibility into consumer traffic, interests, eyeballs, whatever” do you think PPA provides?
The crux of PPA is literally that an advertiser who runs ads gets an aggregate report with numbers that are not the actual conversion rate (number of times someone who saw an ad later went on to buy the product), but is statistically similar enough to the actual conversion rate to let the advertiser know whether they are gaining business from running the ad.
It does not tell them who saw an ad. It does not give them an identifier for the person who saw the ad. It does not tell them what other sites the person visited. It does not tell them what that person is interested in. It does not give them a behavioral profile of that person. It does not give them any identifiable information at all about any person.
For years, people have insisted that they don’t have a problem with advertising in general, they have a problem with all the invasive tracking and profiling that had become a mainstay of online advertising. For better or worse, Mozilla is taking a swing at eliminating the tracking and profiling, and it’s kind of telling that we’re finding out how many people were not being truthful when they said the tracking was what they objected to.
Personally, while I don’t like seeing ads, and on services that I use enough that offer me the option, I pay them money in exchange for not seeing ads, I also understand that being online costs money and that I don’t want the internet to become a place only for those wealthy enough to afford it without support. So having parts of the web that are paid for by mechanisms like advertising – provided it can be done without the invasive tracking – rather than by the end user’s wallet is a thing that probably needs to exist in order to enable the vibrant and diverse web I want to be part of, and lurking behind all the sanctimoniousness and righteous sneers is, inevitably, the question of how much poorer the web would be if only those people who can pay out of their own pockets are allowed onto and into it.
I’m saying they don’t have the right to “know whether they are gaining business from running the ad.”
It’s not necessarily bad for them to know this, but they are also not entitled to know this. On the contrary: The user is entitled to decide whether they want to participate in helping the advertiser.
Well, in order to even get to the point of generating aggregate reporting data someone has to both see an ad and either click through it or otherwise go to the site and buy something. So the user has already decided to have some sort of relationship with the business. If you are someone who never sees an ad and never clicks an ad and never buys anything from anyone who’s advertised to you, you don’t have anything to worry about.
It does not tell them who saw an ad. It does not give them an identifier for the person who saw the ad. It does not tell them what other sites the person visited. It does not tell them what that person is interested in. It does not give them a behavioral profile of that person. It does not give them any identifiable information at all about any person.
Question: how is the ad that gets displayed selected? With the introduction of PPA, do advertisers plan on no longer using profiling to select ads? Because that part of the ad-tech equation is just as important as measuring conversions.
Fun fact: Mozilla had a proposal a few years back for how to do ad selection in a privacy-preserving way, by having the browser download bundles of ads with metadata about them and do the selection and display entirely on the client side.
Personally, while I don’t like seeing ads, and on services that I use enough that offer me the option, I pay them money in exchange for not seeing ads, I also understand that being online costs money and that I don’t want the internet to become a place only for those wealthy enough to afford it without support. So having parts of the web that are paid for by mechanisms like advertising – provided it can be done without the invasive tracking – rather than by the end user’s wallet is a thing that probably needs to exist in order to enable the vibrant and diverse web I want to be part of, and lurking behind all the sanctimoniousness and righteous sneers is, inevitably, the question of how much poorer the web would be if only those people who can pay out of their own pockets are allowed onto and into it.
The Internet is already a place only for those wealthy enough to pay out of their own pockets for a computer and Internet connection that is fast enough to participate. Without ads, many sites would have to change their business model and may die. But places like Wikipedia and Lobsters would still exist. Do you really think the web would be poorer if websites were less like Facebook and Twitter and more like Wikipedia and Lobsters?
Someone who doesn’t own a computer or a phone can access the internet in many public libraries – free access to browse should be more plentiful but at least exists.
But web sites generally cannot be had for free without advertising involved, because there is no publicly-funded utility providing them.
So you want to preserve ads so that people who rely on public libraries for Internet access can offset hosting costs by putting ads on their personal websites? That still requires some money to set up the site in the first place, and it requires significant traffic to offset even the small hosting cost of a personal website.
Clearly you have something else in mind but I can’t picture it. Most people don’t have the skills to set up their own website anyway, so they use services such as Facebook or Wikipedia to participate on the Internet. Can you clarify your position?
I thought this discussion was getting really interesting, so I'm assuming it fell by the wayside and that you would appreciate me reviving it. Did you want to respond? Or would you rather I stop asking?
Privacy Preserving Attribution is a sensible, well-designed feature and a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads.
There is a very simple question you can ask to discover whether a feature like this is reasonable: if the user had to opt in for it, how many users would do so if asked politely?
This is innovation in the wrong direction. The actual problem is that everyone believes ads are the primary/only economic model of the Web and that there is nothing we can do about it. Fixing that is the innovation we actually need.
We could have non-spyware ads that don’t load down browsers with megabytes of javascript, but no-one believes that it is possible to advertise ethically. Maybe if web sites didn’t have 420 partners collecting personal data there would be fewer rent-seeking middlemen and more ad money would go to the web sites.
Ads. We all know them, we all hate them. They slow down your browser with countless tracking scripts.
Want in on a little secret? It doesn’t have to be this way. In fact, the most effective ads don’t actually have any tracking! More about that, right after this message from our sponsor:
(trying to imitate the style of LTT videos here)
We’ve got non-spyware ads that don’t contain any interactivity or JS. They’re all over video content, often called “sponsorships”. Negotiated directly between creators and brands, integrated into the video itself without any interactivity or tracking, most of the time clearly marked. And they’re a win-win-win. The creator earns more, the brand actually gets higher conversion and more control about the context of their ad, and by nature the ads can’t track the consumer either.
(Note that I’m not the parent poster, I’m just replying here because the question of what data is actually being tracked seems like the crux of the matter, not because I want to take out the pitchforks.)
Reading through the data here, it seems to me like the browser is tracking what ads a user sees. Unfortunately the wording there is kind of ambiguous (e.g. what’s an “ad placement”? Is it a specific ad, or a set of ads?) but if I got this right, the browser locally tracks what ad was clicked/viewed and where, with parameters that describe what counts as a view or a click supplied by the advertiser. And that it can do so based on the website’s requirements, i.e. based on whatever that website considers to be an impression.
Now I get that this report isn’t transmitted verbatim to the company whose products are being advertised, but:
Can whoever gets the reports read them and do the tracking for the third-party website?
If the browser maintains a list of ads (for impression attribution), can it be tracked based on the history of what ads it’s seen? Or as a variant: say I deliver a stream of bogus but unique (as in content, or impression parameters for view/click) ads, so each ad will get an impression just once. Along with that, I deliver the “real” ads, for shoes, hats or whatever. Can I now get a list of (unique bogus ads, real ad) pairs?
I realise this is a hot topic for you, but if you’re bringing up the principle of charity, can we maybe try it here, too? :-) That’s why I prefaced this with a “I’m not the parent poster” note.
That technical explainer is actually the document that I read, and on which my questions are based. I've literally linked to it in the comment you're responding to. I'm guessing it's an internal document of sorts, because it's not "very readable" to someone who doesn't work in the ad industry at all. It also doesn't follow almost any convention for spec documents, so it's not even clear whether this is what's actually implemented or just an early draft, whether the values "suggested" there are actually being used, which features are compulsory, or whether this is the "final" version of the protocol.
My first question straight out comes from this mention in that document:
Our DAP deployment [which processes conversion reports] is jointly run by Mozilla and ISRG. Privacy is lost if the two organizations collude to reveal individual values.
(Emphasis mine).
Charitably, I’m guessing that the support page is glossing over some details in its claim, given that there’s literally a document describing what information about one’s browsing activities is being sent and where. And that either I’m misunderstanding the scope of the DAP processing (is this not used to process information about conversions?) or that you’re glossing over technical details when you’re saying “no”. If it’s the latter, though, this is lobste.rs, I’d appreciate if you didn’t – I’m sure Mozilla’s PR team will be only too happy to gloss over the details for me in their comments section, I was asking you because a) you obviously know more about this than I do and b) you’re not defaulting to “oh, yeah, it’s evil”.
I have no idea what running a DAP deployment entails (which is why I’m asking about it) so I don’t really know the practical details of “the two organizations collude” which, in turn, means I don’t know how practical a concern that is. Which is why I’m asking about it. Where, on the spectrum between “theoretically doable but trivially detected by a third party” and “trivially done by two people and the only way to find out is to ask the actual people who did it”, is it placed?
My second question is also based on that document. I don’t work in the ad industry and I’m not a browser engineer, so much of the language there is completely opaque. Consequently:
I’m obviously aware that only conversions are reported, since that’s the only kind of report described there. But:
The document also says that "a site can register ad impressions [which they do] by generating and dispatching a CustomEvent as follows". Like I said above: not in the ad industry, not a browser engineer, I have no idea what a CustomEvent is. In its simplest form, reading the doc, it sounds like the website is the one generating events. But if that's the case, they can already count impressions; they don't even need to query the local impression database. (The harder variant is that the event is fired locally and you can't hook into it in any way, but it's still based on website-set parameters – see my note in 5. below for that.) I imagine I'm missing something, but what?
The document doesn't explain what impression data is available to websites outside the report. All it says is "the target site cannot query this database directly", which can mean anything between "the JS environment doesn't even know it's there" and "you can't read it directly but there's an API that exposes limited information about it".
The document literally lists “richer impression selection logic” and “ability to distribute that value to multiple impressions” as desirable traits that weren’t in the spec purely due to prototyping concerns, so I’ve certainly treated the “one ad at a time” limitation as temporary. And, in any case, I don’t know if that’s what’s actually being implemented here.
The advertiser budget is obviously tunable, the document only suggests two, doesn’t have an upper cap on the limit, and doesn’t have a cap on how often it can be refreshed, either (it only suggests weekly). It also doesn’t explain who assigns these limits.
was actually the subject of my first question and isn’t directly relevant here, although 5 is
I obviously didn’t miss the part about differential privacy. My whole second question is about whether the target site can use this mechanism as a side-channel to derive user tracking information, not whether they can track users based on the impression report themselves, which they obviously can’t, like, that’s the whole point.
Regarding PPA: if I have DNT on, what questions are still unclear?
Regarding the primary economic model: that's indeed the problem to be solved. Print once had ads without tracking and thrived. An acceptable path is IMO payments, not monetised surveillance. Maybe something similar to https://en.wikipedia.org/wiki/VG_Wort
And regarding opt-in/opt-out: one doesn't earn trust by going the convenient way. Smells.
Once Google had ads without tracking and thrived, enough to buy their main competitor Doubleclick. Sadly, Doubleclick’s user-surveillance-based direct-marketing business model replaced Google’s web-page-contents-based broadcast-advertising business model. Now no-one can even imagine that advertising might possibly exist without invasive tracking, despite the fact that it used to be normal.
It’s funny because not once in my entire life have I ever seen an invasive tracking ad that was useful or relevant to me. What a scam! I have clicked on two ads in my entire life, which were relevant to me, and they were of the kind where the ad is based on the contents of the site you’re visiting.
great illustration of how the impact of ads is disparately allocated. some people click on ads all the time and it drains their bank account forcing them into further subordination to employers. this obviously correlates with lower education and economic status.
The unlinking behavior makes sense from a security perspective, I think. However this is a good example of what happens when security causes usability problems: People will set up hacks to undermine your security measures.
Ironic, since Signal is pretty famous for having both a high level of security and a high level of usability at the same time. Clearly this is an area that needs more work.
First thought: "well, if Signal opens on login to the desktop then it'll stay linked". But if it's not actually used from that desktop, that's basically equivalent to OP's hack, which circumvents the security feature.
Also, what if you simply don't log in to the desktop that often?
Maybe the phone could show a reminder that the desktop app will be unlinked in X days?
At work we use semgrep for this. It does a lot of things, but simple string scans (with good error messages) is the main thing we use.
Combined with pre-commit, you can catch these errors before they even get into the codebase. It also allows the scanning to be restricted to only the code that's changing.
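For anyone who hasn't used it, a rule is just a small YAML file (the id and message here are made up):

```yaml
# .semgrep/no-eval.yaml
rules:
  - id: no-eval
    # a simple pattern match with an actionable error message
    pattern: eval(...)
    message: eval() is banned in this codebase; use ast.literal_eval instead.
    languages: [python]
    severity: ERROR
```

Run it with `semgrep --config .semgrep/`, or hook it into pre-commit (semgrep ships an official pre-commit hook) so it only scans the files in the commit.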
Yeah, I think it doesn’t sound too terrible, depending on the implementation. I think the main issue I have is what seems like “API design smell.”
Ex: I’d rather use QUERY on a widgets endpoint that only returned widgets, than to have a single query endpoint that could return anything. A general-purpose query endpoint allows clients to introduce too much coupling to your underlying data model.
There are use cases and tradeoffs for both ways I guess, which is why I suppose I’m not totally against the idea.
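For the widgets example above, the draft QUERY method would look something like this (a sketch based on the httpbis "safe method with body" draft; the filter syntax is whatever the API defines):

```
QUERY /widgets HTTP/1.1
Host: api.example.com
Content-Type: application/json
Accept: application/json

{"color": "red", "max_price": 20, "limit": 10}
```

Same expressiveness as POSTing to a generic /query endpoint, but the request is explicitly safe and scoped to the widgets resource.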
Yeah, I was incredibly confused by this… wondering if it was being compiled to an ML-like language (e.g. Standard ML or OCaml), or if machine learning was being used somewhere, then remembered what the language was called.
I’m not sure that the presence or lack of SELinux, AppArmor, etc. is enough to determine whether a Linux distro is “secure” or not.
First of all, “secure” is arbitrary. Does Debian protect us from zero-day remote code execution vulnerabilities that lead to privilege escalation? Eh, probably not as well as RHEL does. Does that make Debian “insecure?” Nah.
Most of my Debian boxes are behind VPNs
Every service that accepts Internet traffic runs in a container (which I have audited within reason)
I keep my servers (and containers) up-to-date
From a defense-in-depth perspective, robust and usable MAC would be pretty awesome to have. So would a number of kernel features that are never going to be implemented because (according to experts) kernel security is kind of rubbish.
But it’s good enough for my threat model. SELinux is just one brick in the building.
So, this is a fun topic, and extremely useful if you’re a pen tester, etc. However:
The motivation for these tricks is that you might be a vendor that sells software that runs in a customer’s datacenter (a.k.a. on-premises software), so your software has to run inside of a restricted network environment.
If you are a vendor who is intentionally subverting the firewall rules of your customers, that is a recipe for losing business. You’ll eventually get caught (or deserve to be caught) and insta-banned.
If I give you money to put your stuff on my network, you respect my rules.
Oh hell, and the legal liability… What happens if a malicious actor takes advantage of a mistake YOU made while doing this, and you are the means by which your customer gets popped?
No. Just no. Use this information for other reasons, but the “I’m a vendor” reason is the absolute worst.
this is great, thank you! I’m struggling to find The Docs, but it does look like it’s an official part of podman now, so that definitely bodes well
You saw this article: https://matduggan.com/replace-compose-with-quadlet/ Discussed here: https://lobste.rs/s/ss8oea/replace_docker_compose_with_quadlet_for
I don’t think I did see that, or if I did I don’t remember. I’m sorry, I don’t follow?
It’s a quadlet howto, which seems like what you were interested in.
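To give a flavor of it: a quadlet unit is just an ini file dropped into ~/.config/containers/systemd/ (or /etc/containers/systemd/ for rootful), and podman generates the .service unit from it. A minimal sketch, with the image and port made up:

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Tiny demo web service

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=127.0.0.1:8080:80

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, quadlet generates whoami.service and `systemctl --user start whoami.service` works as usual.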
ah ok cool, thanks!
Docker on macOS is also using sneaky VMs in the background. There’s not really another way to have a Linux ABI, is there?
No, but there’s no reason that containers have to have a Linux ABI. OCI has specs for Linux and Windows containers ratified. Solaris uses the Linux ABI in branded zones, but the (not yet final) FreeBSD container spec uses native FreeBSD binaries. I’d like to be able to run Darwin binaries in containers on macOS. Apple had a job ad up recently for kernel engineers to work on OCI support, so hopefully that will come soon. The immutable system image model of macOS and APFS providing lightweight CoW snapshots should mean that a lot of the building blocks are there.
Do you write the quadlet .service files yourself? Use some generator? Or templates you copy-paste?
I used to be the same. However, I quickly learned that I am not available enough for my users.
My partner hates maintaining her own system, but she hates it even more when a printer stops working, or when she needs to install some new software, and I’m unavailable.
So even though she does hate maintaining her own system, she still wants one that she can manage herself.
While I agree with most of what is said, I wouldn’t say these non-enthusiast-friendly OSes are based on “fundamentally wrong” principles.
I maintain my position that they are based on fundamentally wrong principles, even in the case you described. An intentionally limited, non-enthusiast-oriented OS, such as the GNOME OS proposed in the blog post linked from mine, would likely not work for your partner, unless she does not wish to install software that isn’t part of said OS (nor of a third-party software store like flathub).
A general purpose operating system will have more packages, and can still remain maintainable by someone who’s not a sysadmin by trade. Limiting software availability is not helping non-enthusiasts.
Now that I can agree with. A good clarification.
On the NixOS computer I set up for my partner I enabled appimages (installed/updated through Gear Lever), Flatpak and the GNOME Software Center, so she can install and update programs without my intervention.
She definitely doesn’t want to maintain the system herself, so stuff like drivers and system software are managed by me and generally shouldn’t break because I handle the updates myself (and even if they do, she can boot from an old generation until I’m available to take a look), but she might need to install new programs or update Chrome every now and then, so I think this is a nice middle ground.
That’s clever. I like it!
I mean, that sounds like the setup still makes sense, but the user is allowed/able to fix stuff on their own, either using the same method, or just doing it and then having etckeeper or similar running to record the change and codify it afterwards.
First class OpenAI API middleware 💀
I really don’t like being overtly cynical to the point of sounding derisive, but I genuinely stopped reading .NET updates years ago, and one of the reasons I never check on them is that I just knew they would integrate some bullshit OpenAI APIs, or something else equally unreasonable, into a platform that’s supposed to exist for decades, tying it to an internet connection to an unsustainable service. And, lo and behold, they proved me right!
This is why I stopped using C#. I learned my lesson about what Microsoft really thinks of its users back when I swore off it: Microsoft tried to remove hot code reloading from the open-source, cross-platform CLI tooling so they could lock it behind Visual Studio. The people who develop C# are great, but at the end of the day it’s owned by Microsoft, and if you’re using a Microsoft product, they see you as nothing more than cattle to herd into the next marketing venture, whether you want it or not. Whether you want a Microsoft account or not, whether you want Copilot or not, whether you want Visual Studio or not (I definitely don’t).
They didn’t bake it into .Net itself, it’s just an open source library you can install separately: https://www.nuget.org/packages/Microsoft.Extensions.AI/9.0.0-preview.9.24556.5
Ollama for example doesn’t require an internet connection, and if you can afford the hardware you can probably sustain using it.
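e.g., after a one-time model download, everything runs offline (the model tag here is just an example):

```sh
ollama pull llama3.1           # one-time download while online
ollama run llama3.1 "hello"    # inference afterwards needs no internet connection
```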
Do you?
I guess a better reason not to read .NET updates is that there isn’t that much new stuff coming to .NET. The updates to C# are more interesting. :)
Same thing for F#. It sounds interesting and in theory I’d love to learn it, but I won’t willingly step into Microsoft’s shadow.
Unfortunately I think MS’ actions don’t have too many negative repercussions for them. According to the Stack Overflow survey, most devs still use Windows (~50% professionally), Visual Studio and VS Code are the 2 most popular editors, and Teams is the most used chat software.
(Obviously SO numbers are biased but I expect it’s away from MS…)
EEE works well for their bottom line, but like most things optimized for profit, it’s terrible for the commons. This is all pretty sad, but expected. I’m particularly afraid of the long-term consequences of WSL.
To end on a more positive note, I love tech and will continue investing my time in projects I believe in, and it’s great there’s a vibrant community that feels like that too!
If you read this far, I encourage you to take the time and remove one piece of MS from your life now :)
F# is nice, but it has a ton of sharp edges around interop with C# (no pun intended). And everything on .NET is written in C#. If you’re using an F# library, it’s probably the only one out of all your dependencies, even if you’re writing F#. And that means dealing with everything F# desperately tries to hide from you but still leaves open like a live electrical box, including OOP/inheritance and exception handling, which is much more painful in F# than it ever had any right to be, because you’re forced to use them even if you don’t want to. So basically F# punishes you for using the OOP-centric platform it deliberately targeted.
It’s probably more correct to call OpenAI first class Microsoft middleware.
Don’t forget to update your lobsters profile ;-)
In my limited experience, DNSSEC is only usable when your resolver is allowed to downgrade to insecure DNS. It just breaks too often. Strict DNSSEC is a pain.
So for me, SSHFP isn’t really much protection against MITM.
How does “it” break? Can some domains not be resolved because they have some form of brokenness in their DNSSEC setup? I’m using a DNSSEC verifying resolver on my laptop and servers, and haven’t run into any issues yet.
With Extended DNS Errors (EDE), you get details about DNS failures, so if an issue arises it should be relatively easy to diagnose. I am a bit surprised at how new EDE is; it seems like a pretty basic requirement for diagnosing issues…
Good reminder. I’m not using SSHFP, but it’s easy enough to set up and use.
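If I understand the setup right, it’s roughly this (hostname hypothetical, and the zone has to be DNSSEC-signed for it to protect anything):

```sh
# On the server (or anywhere that has its public host keys): emit SSHFP records
ssh-keygen -r host.example.com

# Publish the printed records in the zone, then connect with DNS verification:
ssh -o VerifyHostKeyDNS=yes user@host.example.com
```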
DNSSEC needs support from all resolvers so that signatures are passed along correctly and so that DS records are queried in the parent zone. There are sadly a lot of resolvers that still lack basic support for a 19-year-old standard.
Yeah, I wouldn’t rely on the resolvers received through dhcp on random networks to implement dnssec. I run unbound locally. No other resolvers should be involved then (only the authoritative name servers). A local dnssec resolver also makes it more reasonable for software (that reads /etc/resolv.conf) to trust the (often still unverified) connection to the resolver.
If a network I’m on would intercept dns requests (from unbound) towards authoritative dns servers, and would break dnssec-related records, then that would cause trouble. I just checked, and it turns out my local unbound forwards to unbound instances on servers, over a vpn (to resolve internal names). Perhaps my experience would be worse when connecting directly to authoritative name servers on random networks. On servers, I haven’t seen any dns(sec)-related request/response mangling, and would just move elsewhere when that happens.
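For reference, the relevant bits of such an unbound setup are small (the forward-zone name and address are made up):

```
# /etc/unbound/unbound.conf
server:
    # validate DNSSEC locally against the root trust anchor
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

# internal names go to the unbound instances on the servers, over the VPN
forward-zone:
    name: "internal.example."
    forward-addr: 10.8.0.2
```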
I’m honestly not sure how it broke; that’s part of the problem. After all my time troubleshooting, I eventually decided to go back to plain-old DNS and get on with my life.
Maybe the tooling is better these days, and next time I set it up, it’ll go more smoothly.
I say pass the ethical judgment call to users. For “summary-only” posts, only show the summary by default. But give users a button (or a setting) to show the scraped article.
That way you are respecting authors’ preferences, but also preserving the user’s right to have a user agent that acts in the best interests of the user.
This is how NetNewsWire does it. I think it’s the best approach.
I’ll keep repeating it until I die: people don’t use desktop environments, people use apps.
Linux desktops have been fine for a long time now. But if a user can’t run Photoshop/MS Office/whatever else they run on their Macs and Windows, there’s no point.
Most things regular people do happen in a browser
Even as an atypical computer user (professional developer), this has been a blessing. Between Firefox and a reasonable POSIX environment, I have been able to do what I need to do on a computer for the past couple of decades. I do keep some Windows systems around to stay abreast of developments there, but I would be able to function just fine without them or other commercial operating systems like macOS.
The more specialized your pursuits, the more the commercial operating systems are entrenched, specifically with things like gaming or content creation (video, photo, 3D, audio production tend to favor Windows and macOS). Cubase, Ableton, and a bunch of commercial VSTs remain on my Windows machines but that is just an occasional hobby for me.
That too. Which is another reason they don’t really care about the desktop environment outside of the browser.
I tried switching my non-technical partner from Pop!_OS to Linux Mint (GNOME to Cinnamon). She hated it, mostly because text never looked right (too small in some places, too big in others), and we couldn’t figure out how to get it looking right.
So, she never realized she was using a desktop environment, but she certainly cared when the desktop environment became less user-friendly.
Pardon my ignorance but I’m not sure this brings anything new to the table that isn’t already done by Fedora Silverblue - they ship a recent Gnome desktop and apps, it uses Flatpak as the primary method of obtaining software and it’s atomic.
Fedora makes a few… uh, unique decisions that add some unnecessary sharp edges for normies.
Of course many of those decisions can be worked around with a soft fork. You could just base the new OS on Silverblue, but the author’s objections against overlaying software makes it sound to me like OSTree isn’t an option. OSTree is kinda fundamental to Silverblue though, so that takes Silverblue off the table.
https://universal-blue.org/
This project is kinda going for “Fedora Silverblue for everyone,” with a few different flavors for different desktop environments and types of machine. It feels more like something good for Linux-curious gamers than something you could put on anyone’s laptop, but it’s a step in that direction.
I played with the Xfce flavor a bit before returning to the ego-soothing sharp edges of NixOS.
I’ve seen a lot of these kinds of articles over the years. I’d really like to see an article that uses a bit of very basic threat modeling as the basis for all of the changes that are made.
Let’s teach people to avoid the security cargo cult.
DNS on Linux is a bit of a mess. Fortunately most Linux distros have chosen one way to do things, so troubleshooting DNS problems can be done reliably assuming you know how it all works and you can find the documentation for your current setup.
However with Arch in particular you need to make your own DNS resolution decisions, so it’s that much more important to understand how things are configured currently on your system.
While the author’s solution currently works, I’m not confident that the problem was fixed entirely correctly, and it may still randomly break at some point. We’ll see.
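When it does, the first thing worth checking is which layer actually owns resolution on the box; a quick sketch (not Arch-specific):

```sh
# Which service manages /etc/resolv.conf? The symlink target usually tells you.
ls -l /etc/resolv.conf

# If it points at systemd-resolved, this shows the per-link DNS servers in use:
resolvectl status
```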
NOYB does great work, but it is wrong on this one. Privacy Preserving Attribution is a sensible, well-designed feature and a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads. I wrote more about why at https://alpaca.gold/@Jeremiah/113198664543831802
I don’t know the specifics of NOYB’s complaint, but reading your thoughts, I think you’re missing an important point:
Sending data over the web, at the very least, leaks your IP, possibly your device type as well. It doesn’t matter how anonymized the data contained in the envelope is. Making a web request that sends some data, any data, will always be leaky compared to making no web requests, which means that the user needs to trust the endpoint the browser is communicating with.
And this is also where NOYB’s complaint may have merit because any service would work just fine without those PPA requests. And let’s be clear, PPA is relevant for third-party ads, less for first-party ones. In other words, user data is shared with third-parties, without the user expecting it as part of the service. Compared with browser cookies, a feature that enables many legitimate uses, PPA is meant only for tracking users. It will be difficult for Mozilla or the advertising industry to claim a legitimate interest here.
Another point is that identifying users as a group is still a privacy violation. Maybe they account for that, maybe people can’t be identified as being part of some minority via this API. But PPA is still experimental, and the feature was pushed to unsuspecting users without notification. Google’s Chrome at least warned people about it when enabling features from Privacy Sandbox. Sure, they used confusing language, but people that care about privacy could make an informed decision.
The fact that Safari already has this feature on doesn’t absolve Firefox. Apple has its issues right now with the EU’s DMA, and I can see Safari under scrutiny for PPA as well.
Don’t get me wrong, I think PPA may be a good thing, but the way Mozilla pushed this experiment, without educating the public, is relatively disappointing.
The reason for why I dislike Chrome is that it feels adversarial, meaning that I can’t trust its updates. Whenever they push a new update, I have to look out for new features and think about how I can get screwed by it. For example, at least when you log into your Google account, Chrome automatically starts sharing your browsing history with the purpose of improving search and according to the ToS, they can profile you as well, AFAIK.
Trusting Firefox to not screw people over is what kept many of its users from leaping to Chrome, and I had hoped they understood this.
The least they could do is a notification linking to some educational material, instead of surprising people with a scary-looking opt-out checkbox (that may even be problematic under GDPR).
The problem with this is that it claims too much. You’re effectively declaring that every web site in existence is in violation of GDPR, because they all need to know your IP address in order to send packets back to you, which makes them recipients and processors of your personal data.
This sort of caricature of GDPR is one reason why basically every site in Europe now has those annoying cookie-consent banners – many of them are almost certainly not legally required, but a generic and wrong belief about all cookies being inherently illegal under GDPR without opt-in, and a desire on the part of industry for malicious compliance, means they’re so ubiquitous now that people build browser extensions to try to automatically hide them or click them away!
Sorry to say this, but this is nonsense.
The GDPR acknowledges that the IP is sent alongside requests, and that it may be logged for security purposes. That’s a legitimate interest. What needs consent is third-party tracking with the purpose of monetizing ads. How you use that data matters, as you require a legal basis for it.
Cookies don’t need notifications if they are needed for providing the service that the user expects (e.g., logins). And consent is not needed for using data in ways that the user expects as part of the service (e.g., delivering pizza to a home address).
The reason most online services have scary cookie banners in the EU is because they do spyware shit.
Case in point, when you first open Microsoft Edge, the browser, they inform you that they’re going to share your data with over 700 of Microsoft’s partners, also claiming legitimate interest for things like “correlating your devices” for the purpose of serving ads, which you can’t reject, and which is clearly illegal. So Microsoft is informing Edge users, in the EU, that they will share their data with the entire advertising industry.
Well, I, for one, would like to be informed of spyware, thanks.
Luckily for Mozilla, PPA does not do “third-party tracking with the purpose of monetizing ads”. In fact, kind of the whole point of PPA is that it provides the advertiser with a report that does not include information sufficient to identify any individual or build a tracking profile of an individual. The advertiser gets aggregate reports that tell them things like how many people saw or clicked on an ad but without any sort of identification of who those people were.
This is why the fact that, yes, technically Mozilla does receive your IP address as part of a web request does not automatically imply that Mozilla is doing processing of personal data which would trigger GDPR. If Mozilla does not use the IP address to track you or share it to other entities, then GDPR should not have any reason to complain about Mozilla receiving it as part of the connection made to their servers.
As I’ve told other people: if you want to be angry, be angry. But be angry at the thing this actually is, rather than at a made-up lie about it.
No, they do it because (like the other reply points out), they have a compliance department who tells them to do it even if they don’t need to, because it’s better to do it.
There’s a parallel here to Proposition 65 in the US state of California: if you’ve ever seen one of those warning labels about something containing “chemicals known to the State of California to cause cancer,” that’s a Proposition 65 warning. The idea behind it was to require manufacturers to accurately label products that contain potentially hazardous substances. But the implementation was set up so that there’s no penalty for an unnecessary warning, while failing to warn when required can be ruinously expensive.
So everyone just puts the warning on everything, even things that have almost no chance of causing cancer, because there’s no penalty for a false cancer warning, and if your product ever is found to cause cancer, the fact that you had the warning on it protects you.
Cookie banners are the same way: if you do certain things with data and don’t get up-front opt-in consent, you get a penalty. But if you get the consent and then don’t do anything which required it, you get no penalty. So the only safe thing to do is put the cookie consent popup on everything all the time. This is actually an even more important thing in the EU, because (as Europeans never tire of telling everyone else) EU law does not work on precedent. 1000 courts might find that your use of data does not require consent, but the 1001st court might say “I do not have to respect the precedents and interpretations of anyone else, I find you are in violation” and ruin you with penalties.
Mozilla does not have a legitimate interest in receiving such reports from me.
They can look at their web server logs?
Those are fairly useless for this purpose without a lot of cleaning up and even then I’d say it is impossible to distinguish bots from real visits without actually doing the kind of snooping everyone is against.
This requires no third party?
You are not allowed to associate a session until you have permission for it, and you don’t have that on first page load if the visitor didn’t agree to it on a previous visit.
This whole described tracking through the website is illegal if you either don’t have a prior agreement or don’t need a session for the pages to even work, and you will have a hard time arguing the latter for browsing a web shop.
Using a third party doesn’t solve anything, because you need permission to do this kind of tracking anyway. My argument, however, was that you can’t learn how many people saw or clicked an ad from your logs, because some saw it on other people’s pages or on a search engine, for which you have no logs, and A LOT of those clicks are fake, and your logs are unlikely to be rich enough to know which.
What you want to learn about people’s behavior is more than the above, which I’m sure you’d know if this were actually remotely your job.
“What you want to learn about people’s behavior” is one thing, “what you should be able to learn about people’s behavior” is something else.
IMHO, it’s not the job of those neck-deep in the industry to set the rules of what’s allowed and not.
I’m not sure anyone here is arguing that these are the same thing and certainly not me.
I’m not sure if you are implying that I am neck-deep in the ad industry, but I certainly never have been. I am, however, responsible also for user experience in our company and there’s a significant overlap in needing to understand visitor/user behavior.
We go to great lengths to comply not only with the letter of the law but also with its spirit, which means we have to make a lot of decisions less informed than we’d prefer. I am not complaining about that either, but it does bother me when every attempt to learn ethically is described as either unnecessary or sinister.
The condition for requiring a warning label is not “causes cancer” but “exposes users to something that’s on this list of ‘over 900 chemicals’ at levels above the ‘safe harbor levels’,” which is a narrower condition, although maybe not much narrower in practice. (I also thought that putting unnecessary Prop. 65 warning labels on products had been forbidden (although remaining common), but I don’t see that anywhere in the actual law now.)
No, the reason many have them is that every data privacy consultant will beat you over your head if you don’t have an annoying version of it. Speaking as someone on the receiving end of such reports.
No, you must have an annoying version of it because the theory goes, the more annoying it is the higher the chance the users will frustratingly click the first button they see, e.g. the “accept all” button. The job of privacy consultants is to legitimize such practices.
Which part of “Speaking as someone on the receiving end of such report” was not clear?
Do you think they are trying to persuade us to have more annoying versions so we could collect more information, even though we don’t want to, for whose benefit exactly?
My guess is that you don’t have much experience working with them or with what those reports actually look like.
Well, what I do know is that the average consent modal you see on the internet is pretty clearly violating the law, which means that either the average company ignores their data privacy consultants, or the data privacy consultants that they hire are giving advice designed to push the limits of the law.
Yes, IP addresses are personal data and controlled under GDPR, that’s correct. That means each and every HTTP request made needs freely given consent or legitimate interest.
I request a website, the webserver uses my IP address to send me a reply? That’s legitimate interest. The JS on that site uses AJAX to request more information from the same server? Still legitimate interest.
The webserver logs my IP address and the admin posts it on facebook because he thinks 69.0.4.20 is funny? That’s not allowed. The website uses AJAX to make a request to an ad network? That isn’t allowed either.
I type “lobste.rs” into Firefox, and Firefox makes a request to lobsters? Legitimate interest. Firefox makes an additional request to evil-ad-tracking.biz to tell them that I visited lobsters? That’s not allowed.
Balancing, lol. For years ad providers ignored all data-protection laws (in Germany, well before the GDPR) and then the GDPR itself. They were stalking all users without consent. Then the EU forced the ad companies to follow the law and at least ask users whether they want to share private data. The ad companies successfully framed this as bad EU legislation. And now your browser wants to help ad companies stalk you. Framing this as balancing is ridiculous.
Just because there is no nametag on it doesn’t mean it’s not private data.
Sorry for the bad comparison: it’s also reasonable for a thief to want to break into your house. But it’s illegal. Processing personal data is illegal, with some exceptions. Yes, there is “legitimate interest,” but it has to be balanced against the “fundamental rights and freedoms of the data subject.” I would say “I like money” isn’t enough to fall under this exception.
“But the other one is also bad” could be an argument if you can prove it was willfully ignored. There are so many vendors pushing such shit to their paying customers that I would assume this was overlooked. Also, Apple should disable it too, because as far as I can see it’s against the law (no, I’m not a lawyer).
And no, I don’t say ads are bad or that you shouldn’t be allowed to do some sort of customer analysis. But just as the freedom of your fist ends where my nose starts, the freedom of market analysis ends where stalking customers begins. I know it’s not easy to define where customer analysis ends and stalking starts, but currently ad companies are miles away from that line. So stop framing this as being about poor little advertisers.
The thing that makes me and presumably some other people sigh and roll our eyes at responses like this is that we’re talking about a feature which is literally designed around not sending personal data to advertisers for processing! The whole point of PPA is to give an advertiser information about ad views/clicks without giving them the ability to track or build profiles of individuals who viewed or clicked, and it does this by not sending the advertiser information about you. All the advertiser gets is an aggregate report telling them things like how many people clicked on the ad.
If you still want to be angry about this feature, by all means be angry. Just be angry about the actual truth of it rather than whatever you seem to currently believe about it.
The only problem I see is that Mozilla is able to track and build profiles of individuals. To some extent, they’ve always been able to do so, but they’ve also historically been a nonprofit with a good track record on privacy. Now we see two things in quick succession: first, they acquire an ad company, and historically, when a tech company acquires an ad company, it’s being reverse-acquired. Second, they implement a feature for anonymizing and aggregating the exact kind of information that advertising companies want (which they must, in the first place, now collect). PPA clearly doesn’t send this information directly to advertisers. But do we now trust Mozilla not to sell it to them separately? Or to use it for the benefit of their internal ad company?
Except they aren’t! They’ve literally thought of this and many other problems, and built the whole thing around distributed privacy-preserving aggregation protocols and random injected noise and other techniques to ensure that even Mozilla does not have sufficient information to build a tracking profile on an individual.
And none of this is secret hidden information. None of it is hard to find. That link? I type “privacy preserving attribution” into my search engine, clicked the Mozilla support page that came up, and read it. This is not buried in a disused lavatory with a sign saying “Beware of the Leopard”. There’s also a more technical explainer linked from that support doc.
Which is why I feel sometimes like I should be tearing my hair out reading these discussions, and why I keep saying that if someone wants to be angry I just want them to be angry at what this actually is, rather than angry at a pile of falsehoods.
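For the curious, the core idea of those aggregation protocols (leaving aside Prio’s validity proofs and the injected noise) is additive secret sharing: each browser splits its report into two random-looking shares, one per aggregator, so that only the combined sums mean anything:

```latex
r^{(i)} = r_1^{(i)} + r_2^{(i)} \bmod p, \qquad
S_j = \sum_i r_j^{(i)} \bmod p, \qquad
S_1 + S_2 \equiv \sum_i r^{(i)} \pmod{p}
```

Each aggregator sees only its own uniformly random shares, so neither can reconstruct an individual report; this is exactly where the non-collusion assumption does the work.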
How do I actually know that Mozilla’s servers are implementing the protocol honestly?
How do you know anything?
Look, I’ve got a degree in philosophy and if you really want me to go deep on whether you can know things and how, I will, but this is not a productive line of argumentation because there’s no answer that will satisfy. Here’s why:
Suppose that there is some sort of verifier which proves that a server is running the code it claims to be; now you can just reply “ah-ha, but how do I trust that the verifier hasn’t been corrupted by the evil people,” and then you ask how you can know that the verifier for the verifier hasn’t been corrupted, and then the verifier for the verifier for the verifier, and thus we encounter what is known, in philosophy, as the infinite regress – we can simply repeat the same question over and over at deeper and deeper levels, so setting up the hundred-million-billion-trillionth verifier-verifier just prompts a question about how you can trust that, and now we need the hundred-million-billion-trillion-and-first verifier-verifier, and on and on we keep going.
This is an excellent question, and frankly the basis of my opposition to any kind of telemetry bullshit, no matter how benign it might seem to you now. I absolutely don’t know whether it’s safe or unsafe, or anonymous or only thought to be anonymous. It turns out you basically can’t type on a keyboard without somebody being able to turn a surprisingly shitty audio recording of your keyboard into a pretty accurate transcript of what you typed. There have been so many papers demonstrating that a list of the fonts visible to your browser can often uniquely identify a person. Medical datasets have been de-anonymised just by using different bucketing strategies.
I have zero confidence that this won’t eventually turn out to be similar, so there is zero reason to do it at all. Just cut it out.
If there’s no amount of evidence someone could present to convince you of something, you can just say so and let everyone move on. I don’t like arguing with people who act as if there might be evidence that would convince them when there isn’t.
It’s a perfectly legitimate position to hold that the only valid amount of leaked information is zero. You’re framing it as if that was something unreasonable, but it’s not. Not every disagreement can be solved with a compromise.
I prefer to minimize unnecessary exposure. If I visit a website, then, necessarily, they at a minimum get my IP address. I don’t like it when someone who didn’t need to get data from me, gets data from me. Maybe they’re nice, maybe they’re not nice, but I’d like to take fewer chances.
trust lost is hard regained. The ad industry is obviously in a hard place here.
The thing that really leaves a bitter taste in my mouth is that it feels like “the ad industry” includes Mozilla now.
shouldn’t be surprising; it’s been their #1 funding source for years…
I like your take on this, insomuch as “it’s better than what we currently have”.
I don’t agree with this, it wasn’t even possible to know until about 20 years ago. The old ad-man adage goes that “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Well that’s just the price you pay when producing material that hardly ever is a benefit to society.
Funnily enough there does seem to have been a swing back towards brands and publishers just cutting all middle men out and partnering up. This suggests to me that online ads aren’t working that well.
This to me is so incredibly naive and I’m speaking as someone who doesn’t like ads. How in the world would anyone hear about your product and services without them, especially if they are novel?
Imagining that every company, no matter how small or new, sits on tons of money it can waste on ineffective measures seems unreasonable. Having ads be an option only for companies that are already successful doesn’t seem particularly desirable from the point of view of the economy.
I’m as much against snooping, profiling and other abuses as the next guy, but I disagree with seeing all tracking, no matter how privacy-preserving, as inherently bad.
If your company can’t survive without ad tech, it should just cease to exist.
Why? Justify that. What is it about a company requiring advertising that inherently reduces the value of that company to 0 or less? If I have a new product and I have to tell people about it to reach the economic tipping point of viability, my product is worthless? Honestly, I find this notion totally ridiculous - I see no reason to connect these things.
I never said anything about advertizing, I said ad tech. Go ahead and advertize using methods that don’t violate my privacy or track me in any way.
Now you’re conflating “ad tech” with tracking. And then what about tracking that doesn’t identify you?
What do you think the ad tech industry is? And I simply do not consent to being tracked.
So if an ad didn’t track you you’d be fine with it? If an ad tech company preserved your privacy, you’d be fine?
I am fine with ads that are not targeted at me at all, and don’t transmit any information about me to anyone. For example, if you pay some website to display your ad to all its visitors, that it fine to me. Same as when you pay for a spot in a newspaper, or billboard. I don’t like it, but I’m fine with it.
It’s absolutely naive, and I stand by it because I don’t care if you can’t afford to advertise your product or service. But I do find ads tiresome, especially on the internet. Maybe I’m an old coot but I tend to just buy local and through word of mouth anyway, and am inherently put off by anything I see in an ad.
This is pretty much the state of affairs anyway. Running an ad campaign is a money-hole even in the modern age. If I turn adblock off I just get ads for established players in the game. If I want anything novel I have to seek it out myself.
But as I said, I’m not against this feature per-se, as an improvement on the current system.
It’s worth repeating, society has no intrinsic responsibility to support business as an aggregated constituent, nor as individual businesses.
One might reasonably argue it’s in everyone’s best interest to do so at certain times, but something else entirely to defend sacrosanct business rights reflexively the moment individual humans try to defend themselves from the nasty side effects of business behavior.
We absolutely have a responsibility to do so in a society where people rely on businesses for like… everything. You’re typing on a computer - who produced that? A business. How do you think most Americans retire? A business. How do new products make it onto the market? Advertising.
I think it’s exactly the opposite of the situation you’re purporting. If you want to paint the “society without successful businesses is fine” picture, you have to actually make that case.
Would it not be fair to suggest that there’s a bit of a gulf between businesses people rely on and businesses that rely on advertising? Perhaps it’s just my own bubble, dunno
I am obligated to read a history book for you?
Advertising predates the internet, and still exists robustly outside web & mobile ad banners.
But even if it didn’t, word of mouth & culture can still inform about products & services.
Have you heard of shops? It’s either a physical or virtual place where people with money go to purchase goods they need. And sometimes to browse if there’s anything new and interesting that might be useful.
Also, have you heard of magazines? Some of them are dedicated to talking about new and interesting product developments. There are multiple printed (and digital) magazines detailing new software releases and online services that people might find handy.
Do they sometimes suggest products that are not best for the consumer, but rather best for their bottom line? Possibly. But still, they only suggest new products to consumers who ask for it.
Regardless of how well PPA works, I think this is the crux of the issue:
Even if PPA is technically perfect in every way, maybe MY personal privacy is preserved. But ad companies need to stop trying to insert themselves into every crack of society. They still have no right to any kind of visibility into consumer traffic, interests, eyeballs, whatever.
PPA does not track users. It tracks that an ad was viewed or clicked and it tracks if an action happened as a result, but the user themself is never tracked in any way. This is an important nuance.
Assuming that’s true (and who can know for sure when your adversary is a well-funded shower of bastards, er, ad company), what I say still stands.
What “visibility into consumer traffic, interests, eyeballs, whatever” do you think PPA provides?
The crux of PPA is literally that an advertiser who runs ads gets an aggregate report with numbers that are not the actual conversion rate (number of times someone who saw an ad later went on to buy the product), but is statistically similar enough to the actual conversion rate to let the advertiser know whether they are gaining business from running the ad.
It does not tell them who saw an ad. It does not give them an identifier for the person who saw the ad. It does not tell them what other sites the person visited. It does not tell them what that person is interested in. It does not give them a behavioral profile of that person. It does not give them any identifiable information at all about any person.
For years, people have insisted that they don’t have a problem with advertising in general, they have a problem with all the invasive tracking and profiling that had become a mainstay of online advertising. For better or worse, Mozilla is taking a swing at eliminating the tracking and profiling, and it’s kind of telling that we’re finding out how many people were not being truthful when they said the tracking was what they objected to.
Personally, while I don’t like seeing ads, and on services that I use enough that offer me the option, I pay them money in exchange for not seeing ads, I also understand that being online costs money and that I don’t want the internet to become a place only for those wealthy enough to afford it without support. So having parts of the web that are paid for by mechanisms like advertising – provided it can be done without the invasive tracking – rather than by the end user’s wallet is a thing that probably needs to exist in order to enable the vibrant and diverse web I want to be part of, and lurking behind all the sanctimoniousness and righteous sneers is, inevitably, the question of how much poorer the web would be if only those people who can pay out of their own pockets are allowed onto and into it.
I’m saying they don’t have the right to “know whether they are gaining business from running the ad.”
It’s not necessarily bad for them to know this, but they are also not entitled to know this. On the contrary: The user is entitled to decide whether they want to participate in helping the advertiser.
Well, in order to even get to the point of generating aggregate reporting data someone has to both see an ad and either click through it or otherwise go to the site and buy something. So the user has already decided to have some sort of relationship with the business. If you are someone who never sees an ad and never clicks an ad and never buys anything from anyone who’s advertised to you, you don’t have anything to worry about.
none of that contradicts the fact that advertisers are not entitled to additional information with the help of the browser.
Question: how is the ad to be displayed selected? With the introduction of PPA, do advertizers plan on not using profiling to select ads anymore? Because that part of the ad tech equation is just as important as measuring conversions.
Fun fact: Mozilla had a proposal a few years back for how to do ad selection in a privacy-preserving way, by having the browser download bundles of ads with metadata about them and do the selection and display entirely on the client side.
People hated that too.
The Internet is already a place only for those wealthy enough to pay out of their own pockets for a computer and Internet connection that is fast enough to participate. Without ads, many sites would have to change their business model and may die. But places like Wikipedia and Lobsters would still exist. Do you really think the web would be poorer if websites were less like Facebook and Twitter and more like Wikipedia and Lobsters?
Someone who doesn’t own a computer or a phone can access the internet in many public libraries – free access to browse should be more plentiful but at least exists.
But web sites generally cannot be had for free without advertising involved, because there is no publicly-funded utility providing them.
So you want to preserve ads so that people who rely on public libraries for Internet access can offset hosting costs by putting ads on their personal websites? That still requires some money to set up the site in the first place, and it requires significant traffic to offset even the small hosting cost of a personal website.
Clearly you have something else in mind but I can’t picture it. Most people don’t have the skills to set up their own website anyway, so they use services such as Facebook or Wikipedia to participate on the Internet. Can you clarify your position?
following up
I thought this discussion was getting really interesting, so I’m assuming it fell by the wayside and that you would appreciate me reviving it. Did you want to respond? Or would you rather I stop asking?
but who views or clicks on the ad? it would have to be a user.
There is a very simple question you can ask to discover whether a feature like this is reasonable: if the user had to opt in for it, how many users would do so if asked politely?
This is innovation in the wrong direction. The actual problem is that everyone believes ads are the primary/only economic model of the Web and that there is nothing we can do about it. Fixing that is the innovation we actually need.
We could have non-spyware ads that don’t load down browsers with megabytes of javascript, but no-one believes that it is possible to advertise ethically. Maybe if web sites didn’t have 420 partners collecting personal data there would be fewer rent-seeking middlemen and more ad money would go to the web sites.
Ads. We all know them, we all hate them. They slow down your browser with countless tracking scripts.
Want in on a little secret? It doesn’t have to be this way. In fact, the most effective ads don’t actually have any tracking! More about that, right after this message from our sponsor:
(trying to imitate the style of LTT videos here)
We’ve got non-spyware ads that don’t contain any interactivity or JS. They’re all over video content, often called “sponsorships”. Negotiated directly between creators and brands, integrated into the video itself without any interactivity or tracking, most of the time clearly marked. And they’re a win-win-win. The creator earns more, the brand actually gets higher conversion and more control about the context of their ad, and by nature the ads can’t track the consumer either.
Maybe if I could give half a cent per page view to a site, they’d make a lot more than they ever made from ads.
Sure, but IMHO this is still not a reason to turn it on by default.
The browser colluding with advertisers to spy on me is, in fact, not sensible.
Please be clear about what “spying” you think is being performed.
For example: list all personally-identifying information you believe is being transmitted to the advertiser by this feature of the browser.
You can read documentation about the feature yourself.
(Note that I’m not the parent poster, I’m just replying here because the question of what data is actually being tracked seems like the crux of the matter, not because I want to take out the pitchforks.)
Reading through the data here, it seems to me like the browser is tracking what ads a user sees. Unfortunately the wording there is kind of ambiguous (e.g. what’s an “ad placement”? Is it a specific ad, or a set of ads?) but if I got this right, the browser locally tracks what ad was clicked/viewed and where, with parameters that describe what counts as a view or a click supplied by the advertiser. And that it can do so based on the website’s requirements, i.e. based on whatever that website considers to be an impression.
Now I get that this report isn’t transmitted verbatim to the company whose products are being advertised, but:
I realise this is a hot topic for you, but if you’re bringing up the principle of charity, can we maybe try it here, too? :-) That’s why I prefaced this with a “I’m not the parent poster” note.
That technical explainer is actually the document that I read, and on which my questions are based. I’ve literally linked to it in the comment you’re responding to. I’m guessing it’s an internal document of sorts, because it’s not “very readable” to someone who doesn’t work in the ad industry at all. It also doesn’t follow almost any convention for spec documents, so it’s not even clear if this is what’s actually implemented or just an early draft, whether the values “suggested” there are actually being used, which features are compulsory, or if this is the “final” version of the protocol.
My first question straight out comes from this mention in that document:
(Emphasis mine).
Charitably, I’m guessing that the support page is glossing over some details in its claim, given that there’s literally a document describing what information about one’s browsing activities is being sent and where. And that either I’m misunderstanding the scope of the DAP processing (is this not used to process information about conversions?) or that you’re glossing over technical details when you’re saying “no”. If it’s the latter, though, this is lobste.rs, I’d appreciate if you didn’t – I’m sure Mozilla’s PR team will be only too happy to gloss over the details for me in their comments section, I was asking you because a) you obviously know more about this than I do and b) you’re not defaulting to “oh, yeah, it’s evil”.
I have no idea what running a DAP deployment entails (which is why I’m asking about it) so I don’t really know the practical details of “the two organizations collude” which, in turn, means I don’t know how practical a concern that is. Which is why I’m asking about it. Where, on the spectrum between “theoretically doable but trivially detected by a third party” and “trivially done by two people and the only way to find out is to ask the actual people who did it”, is it placed?
My second question is also based on that document. I don’t work in the ad industry and I’m not a browser engineer, so much of the language there is completely opaque. Consequently, I’m not sure what a CustomEvent even is. In its simplest form, reading the doc, it sounds like the website is the one generating events. But if that’s the case, they can already count impressions; they don’t even need to query the local impression database. (The harder variant is that the event is fired locally and you can’t hook into it in any way, but it’s still based on website-set parameters – see my note in 5. below for that.) I imagine I’m missing something, but what?
regarding PPA, if I have DNT on, what questions are still unclear?
regarding the primary economic model, that’s indeed the problem to be solved. Once print had ads without tracking and thrived. An acceptable path is IMO payments, not monetised surveillance. Maybe similar https://en.wikipedia.org/wiki/VG_Wort
and regarding opt-in/out: one doesn’t earn trust by going the convenient way. Smells.
Once Google had ads without tracking and thrived, enough to buy their main competitor Doubleclick. Sadly, Doubleclick’s user-surveillance-based direct-marketing business model replaced Google’s web-page-contents-based broadcast-advertising business model. Now no-one can even imagine that advertising might possibly exist without invasive tracking, despite the fact that it used to be normal.
It’s funny because not once in my entire life have I ever seen an invasive tracking ad that was useful or relevant to me. What a scam! I have clicked on two ads in my entire life, which were relevant to me, and they were of the kind where the ad is based on the contents of the site you’re visiting.
great illustration of how the impact of ads is disparately allocated. some people click on ads all the time and it drains their bank account forcing them into further subordination to employers. this obviously correlates with lower education and economic status.
why should the “primary economic model of the Web” be given any weight whatsoever against user control and consent?
The unlinking behavior makes sense from a security perspective, I think. However this is a good example of what happens when security causes usability problems: People will set up hacks to undermine your security measures.
Ironic, since Signal is pretty famous for having both a high level of security and a high level of usability at the same time. Clearly this is an area that needs more work.
First thought “well if Signal opens on login to the desktop then it’ll stay linked” but if it’s not actually used from that desktop then that’s basically equivalent to OP’s hack which circumvents the security feature.
Also, what if you simply don’t log in to the desktop that often.
Maybe the phone could show a reminder that the desktop app will be unlinked in X days?
When I worked at Signal years ago, this was something we discussed but never implemented. I don’t know the idea’s current status.
I think they do it now. I got a notification somewhere that my iPad was about to get unlinked.
At work we use semgrep for this. It does a lot of things, but simple string scans (with good error messages) is the main thing we use.
Combined with pre-commit, you can catch these errors before they even get into the codebase. It also allows the scanning to be restricted to only the code that’s changing.
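For a sense of what that looks like, a rule file is just YAML; this one is a made-up example of the “simple string scan with a good error message” kind:

```yaml
rules:
  - id: no-legacy-http-client
    languages: [python]
    severity: ERROR
    # match any call to the banned helper, whatever its arguments
    pattern: legacy_http_get(...)
    message: >
      legacy_http_get() bypasses the shared retry/auth middleware;
      use http_client.get() instead.
```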
I like Semgrep for this as well. Recently I’ve also been using pyastgrep on my Python projects - see my blog post
Interesting idea, though…
I get why this would be, but feels kinda gross to me (in a very subjective way that I can’t explain yet).
There’s nothing in HTTP and/or REST that says that every method has to return a representation of the targeted resource (obvious example: POST).
In the case of QUERY, we are interested in other resources, not the endpoint itself, so it seems fine to me.
Yeah, I think it doesn’t sound too terrible, depending on the implementation. I think the main issue I have is what seems like “API design smell.”
Ex: I’d rather use QUERY on a widgets endpoint that only returned widgets than have a single query endpoint that could return anything. A general-purpose query endpoint allows clients to introduce too much coupling to your underlying data model.
There are use cases and tradeoffs for both ways I guess, which is why I suppose I’m not totally against the idea.
Interesting.
Unfortunate “ML” abbreviation. My mind went immediately to the other very influential language called ML. And then “machine learning.”
Yeah, I was incredibly confused by this… wondering if it was being compiled to an ML-like language (e.g. Standard ML or OCaml), or if machine learning was being used somewhere, then remembered what the language was called.
Yeah, it’s bound to change in the future. I should have chosen a different acronym from the start, but here we are XD
You could always call it “Operational C Alternative MiniLang”
I think an alternative would be Standard MiniLang to distinguish itself more.
Boooooo
I’m not sure that the presence or lack of SELinux, AppArmor, etc. is enough to determine whether a Linux distro is “secure” or not.
First of all, “secure” is arbitrary. Does Debian protect us from zero-day remote code execution vulnerabilities that lead to privilege escalation? Eh, probably not as well as RHEL does. Does that make Debian “insecure?” Nah:
Most of my Debian boxes are behind VPNs
Every service that accepts Internet traffic runs in a container (which I have audited within reason)
I keep my servers (and containers) up-to-date
From a defense-in-depth perspective, robust and usable MAC would be pretty awesome to have. So would a number of kernel features that are never going to be implemented because (according to experts) kernel security is kind of rubbish.
But it’s good enough for my threat model. SELinux is just one brick in the building.
So, this is a fun topic, and extremely useful if you’re a pen tester, etc. However:
The motivation for these tricks is that you might be a vendor that sells software that runs in a customer’s datacenter (a.k.a. on-premises software), so your software has to run inside of a restricted network environment.
If you are a vendor who is intentionally subverting the firewall rules of your customers, that is a recipe for losing business. You’ll eventually get caught (or deserve to be caught) and insta-banned.
If I give you money to put your stuff on my network, you respect my rules.
Oh hell, and the legal liability… What happens if a malicious actor takes advantage of a mistake YOU made while doing this, and you are the means by which your customer gets popped?
No. Just no. Use this information for other reasons, but the “I’m a vendor” reason is the absolute worst.
I agree with you but Apple doesn’t:
https://mullvad.net/en/blog/2022/4/25/apples-private-relay-can-cause-the-system-to-ignore-firewall-rules
It seems to be (arguably) non-malicious, but intentional.
Eh… The first thing I thought when I saw it was, “Snowflakes? What does that have to do with the Fediverse?”
I’d personally opt for a specifically-designed logo rather than rely on fonts and “claiming” a Unicode code point to do it for us.
I think it fits well, as there are a lot of snowflakes in the fediverse. :P
Was this a joke? I think it was a joke.
I, for one, am comforted that tech companies know me better than my own mother does, and that knowledge will live forever.
Ok that was kind of a joke too. I’m done now.