I tried it when it launched. It’s leaps and bounds superior to all the others. Never really paid for it, as I am not a consumer of RSS feeds. There was a time when most sites provided it. Then came the walled-garden mentality, and I basically find no value in RSS reading anymore.
Good content, but typography is “killing” me. Specifically the choice of Roslindale as the main text typeface. Please consider using something that breathes a little more since even tweaking line-height is not enough.
I think the “Boring” movement, when you read the article and slide deck, is unimpeachable, and a healthy response to the “novelty-driven” development of the late aughts. We were pretty high on Paul Graham essays about Common Lisp and some new language-building technologies (ANTLR simplified parsing, LLVM did a lot for compiled languages), so we thought novel languages in particular would help us. And not for nothing: Ruby on Rails did explode, and Ruby’s weirdness is often credited for DHH’s ability to make that framework, and for people’s ability to extend it.
That said, like “Agile,” the actual ideas behind “Use Boring Technology” got dropped in favor of a thought-terminating cliche around “use what’s popular, no matter what.” So you have early startups running k8s clusters, or a node_modules folder that’s bigger and uses more tech than the entire Flash Runtime did. To borrow a paragraph from the top comment as I’m writing this:
I doubt some foreman on the job just really wanted to use this new gravel (“his buddy on another road project thought it was cool”) so they decided to give it a shot for infill. More like, the properties of the foamed glass gravel had been extensively tested and documented and so it was known that it’s a suitable replacement for rock gravel. Actual engineers gave the sign-off.
People say “Use Boring Technology” to mean “only use Java, Python, Node, Go, or Ruby; only deploy in commercial clouds with Docker” but never investigate alternatives or talk too deeply about the tech. It’s true that the properties of foamed glass gravel were known and extensively tested; what if they just ignored all of that and said “use concrete anyway; I don’t feel like having to think about this new thing I haven’t used recently.”
I’m really looking forward to the boring version of kubernetes. A working cluster is pretty great for deploying software. It’ll be great when there’s something that mostly just works that does much the same thing.
“Choose boring technology” basically means first check if technology you and your team already use and understand well (enough) to be able to support it could perform the task. Look at alternatives only as second step. That’s it.
I’d say novelty-driven development is much younger than the late oughts. At least when it comes to web frontend.
“Choose boring technology” basically means first check if technology you and your team already use and understand well (enough) to be able to support it could perform the task. Look at alternatives only as second step. That’s it.
eh, I don’t think “that’s it.” Maybe it’s just a function of what we mean by “understand well (enough) to be able to support it” and/or what we’ve seen in our various careers, but I’ve worked with a lot of people who didn’t really understand their technologies that well. I wrote a bit about it here:
The thing about most Python Developers™ I’ve worked with is that they don’t even know that much about Python. They couldn’t articulate when and how to use abc vs. just inheriting from object, or why you might prefer namedtuple over dataclasses. They won’t know that import x.y.z will import x/__init__.py and x/y/__init__.py and x/y/z.py (or x/y/z/__init__.py). They couldn’t tell you who GIL is, or why you shouldn’t def send_emails(list_of_recipients, bccs=[]):. This game is especially fun when talking about JavaScript
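To make that last pitfall concrete, here is a minimal sketch of why `bccs=[]` bites; the addresses are made up purely for illustration:

```python
# The default list is created once, when the function is defined,
# and is shared across every call that doesn't pass bccs explicitly.
def send_emails(list_of_recipients, bccs=[]):   # the anti-pattern
    bccs.append("audit@example.com")            # hypothetical address
    return bccs

print(send_emails(["a@example.com"]))  # ['audit@example.com']
print(send_emails(["b@example.com"]))  # ['audit@example.com', 'audit@example.com'] - state leaks between calls

# The usual fix: use None as a sentinel and build a fresh list per call.
def send_emails_fixed(list_of_recipients, bccs=None):
    bccs = [] if bccs is None else list(bccs)
    bccs.append("audit@example.com")
    return bccs
```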
To your point: there’s a learning curve to picking a different tech than you know well enough already. But the point I always felt: these people are never that good at Python (or JavaScript, or Ruby) anyway; if they spent 2-3 days working through a PragProg book or two, they’d know something like Elixir as well as they know Python.
(ofc my ideal would be hire people who are unafraid of learning new technology in the first place, but that’s harder when you don’t control the company, and/or you’re doing the high-growth VC-backed playbook, where hiring fast is a way to inflate your share price)
Again, I think there’s real merit in just picking something you and the team have used before entirely because you and the team have used it before. I think that’s usually fine. I just think the needle is too far in the direction of “safety, boring,” everyone is miserable, and I don’t see it even producing too many innovative, successful, lasting companies.
I also agree with you regarding the shallowness of people’s knowledge regarding platforms. Most often, one starts with a project and just gets the thing done with minimal knowledge of the underlying technology. There’s only a real need to dig in when the shit really hits the fan. By which time the original authors have probably already moved on (using new, different technology). But isn’t that more of a reason to choose “boring” technology? So that you can dig in deeper and get more experience?
The author of this article has a reading comprehension problem. The linked story about the .yu domain does not remotely corroborate his summary. As someone who studied at the University of Ljubljana at that time, right across the street, and never heard anything so wild, which I’m sure I would have, I have difficulty believing that it happened as described.
Which parts are you saying it doesn’t corroborate? There are a lot of assertions of fact in it, and also a bunch of subjective stuff.. from a quick read of both pieces, it does look to me like the most important details of the alleged heist are mentioned in the cited source, but I may be failing to understand the significance of something that seems minor enough that it doesn’t occur to me to doubt it, or something like that…?
It would be insanely crazy for something like that to happen during the ongoing Balkan war, and also completely unnecessary, since the .yu domain had been administered from Ljubljana since its start in 1989. The only “heist” was some colleagues allegedly breaking into her office, copying software and disconnecting her network. All of them worked in the same building and no computers were moved.
On a Sunday in July 1992, Jerman-Blažič told me that ARNES, which included some of her former colleagues, broke into her lab, copied the domain software and data from the server, and cut off the line that connected it to the network. “Both ARNES directors had no knowledge of internet networking and did not know how to run the domain server,” she said. Though they only used the network for email, ARNES secretly kept .yu running for the next two years, ignoring requests from a rival group of scientists in Serbia who needed the domain for their work.
I agree that this doesn’t say anything as to whether the former colleagues thought of what they did as a “heist”, and it quotes Jerman-Blažič’s opinion that the people who took the software didn’t know how to use it without help, but she wouldn’t have been in a position to see what happened afterwards, since, as the piece also says, the next part was secret. However, the last sentence is clear-cut that the domain was in fact run by ARNES, at least according to this publication. That’s the substance of the “heist” claim, surely?
I’m unable to find anything in the Dial piece that speaks to whether these people worked in the same building at the time of the alleged theft, as you suggest. I may not be looking closely enough. You’re right, that does bear on the claims in the Every piece, which says the alleged thieves traveled there in connection with the alleged theft.
Your point that no computers were moved doesn’t seem relevant to me; the Every piece says “On arrival, they broke into the university and stole all the hosting software and domain records for the .yu top-level domain—everything they needed to seize control”, which notably makes no claim about hardware, only software and data. In general, in any sort of computer-related theft, real or fictional, the software is the important payload… I don’t really see what difference the hardware would make to anyone.
I think these authors are describing the same factual claims, while differing substantially on who they view as the “protagonist”. That may account for why it feels like they’re talking about something different? I do think it would be entirely reasonable to point out that there’s very little information about how the participants really felt about the events at the time, or how they understood the purpose of their actions.
I do think the author of the Every piece goes perhaps a little too far in inferring people’s intentions, and leaves out the important caveat that the copy of the domain that ARNES ran was kept secret. I think perhaps the author doesn’t have a deep understanding of DNS, and incorrectly believes that running one copy of a domain somewhere, privately, implies that nobody else could be running it at the same time. That does seem like a material error, and quite unfortunate, though it’s not about the so-called heist per se.
So… I think I see why you object, although I’d stop short of saying the story was invented from whole cloth. Does that help?
I don’t have an issue with The Dial piece. I definitely made a mistake about servers; no idea why. I guess reading comprehension issues on my part as well.
My comment was exclusively about the Every piece, which has a truly awful summary of The Dial piece. I disagree that they are the same factual claims, because to me there’s a huge difference between Slovenian academics traveling to a different country to steal a domain and software, and an essentially internal dispute over who would continue running the .yu domain, which from its beginning until the mid-90s was always administered from Ljubljana in present-day Slovenia.
That makes sense, now that I understand the distinction you’re making. It does seem like an important one. Thanks for talking it through and helping me understand.
Ideally speaking? Sure. But historically? I can’t help but think back to the AC/DC stuff of the late 1800s.
Besides, fan is kind of ambiguous; these days, it informally means something like “mild supporter”, but the word can be weasel’d to mean more narrowly “extreme crusader” to suit &c.
I’m not real convinced by the Walter Sobchak calmer-than-you-are (& sober-er, too) 20/10 hindsight eagle-eyed type attitude, but I don’t really disagree, either. Some things just don’t work, as built.
This aligns well with my thoughts. I would describe myself as a fan of Python because it fitted me like a glove when I discovered it in the 90s, and it was the first (and only) language in which I wrote code of some length that worked correctly the first time, before I became experienced in it.
However, my use of Python does not form part of my identity. There’s plenty I’m not fond of, but it remains one of my favorite tools, with an attachment that is not purely rational.
Somewhat off-topic, but I do feel the world would be in a better place if people’s identities were formed around cores as small as possible.
I’m voting too, but honestly, I’ll eat my hat if the lego team actually considers this.
The designer himself even says how brittle and fragile the design is. I also think it’s using at least some 3D printed parts? But I don’t think the designer says whether those are 3D printed versions of standard bricks or completely custom pieces. I don’t know exactly how much work the designer or the Lego team would put into retooling it for a proper set, but this is very much an early draft, even if it is the fourth iteration. I’d love to see it go all the way, though. I hope that if it gets selected, it’ll be properly re-designed as a more stable contraption.
As far as I know they pretty much always retool and change designs, even when they look perfect and highly polished. So they would definitely redo this one for the reasons you listed, if they pick it up.
I’m not optimistic they’ll pick it either, but it would be so great. I would think we developers are over-represented among adult Lego builders, and that they know this, so here’s me hoping.
I love and appreciate it, but I watched the whole video and was sad not to see an example program, however useless; what was shown was essentially just some bit flipping. I’m in awe of the design, though.
NOYB does great work, but it is wrong on this one. Privacy Preserving Attribution is a sensible, well-designed feature and a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads. I wrote more about why at https://alpaca.gold/@Jeremiah/113198664543831802
I don’t know the specifics of NOYB’s complaint, but reading your thoughts, I think you’re missing an important point:
Sending data over the web, at the very least, leaks your IP, possibly your device type as well. It doesn’t matter how anonymized the data contained in the envelope is. Making a web request that sends some data, any data, will always be leaky compared to making no web requests, which means that the user needs to trust the endpoint the browser is communicating with.
And this is also where NOYB’s complaint may have merit, because any service would work just fine without those PPA requests. And let’s be clear, PPA is relevant for third-party ads, less for first-party ones. In other words, user data is shared with third parties without the user expecting it as part of the service. Compared with browser cookies, a feature that enables many legitimate uses, PPA is meant only for tracking users. It will be difficult for Mozilla or the advertising industry to claim a legitimate interest here.
Another point is that identifying users as a group is still a privacy violation. Maybe they account for that, maybe people can’t be identified as being part of some minority via this API. But PPA is still experimental, and the feature was pushed to unsuspecting users without notification. Google’s Chrome at least warned people about it when enabling features from Privacy Sandbox. Sure, they used confusing language, but people that care about privacy could make an informed decision.
The fact that Safari already has this feature on doesn’t absolve Firefox. Apple has its issues right now with the EU’s DMA, and I can see Safari under scrutiny for PPA as well.
Don’t get me wrong, I think PPA may be a good thing, but the way Mozilla pushed this experiment, without educating the public, is relatively disappointing.
The reason I dislike Chrome is that it feels adversarial, meaning I can’t trust its updates. Whenever they push a new update, I have to look out for new features and think about how I can get screwed by them. For example, at least when you log into your Google account, Chrome automatically starts sharing your browsing history with the purpose of improving search, and according to the ToS they can profile you as well, AFAIK.
Trusting Firefox to not screw people over is what kept many of its users from leaping to Chrome, and I had hoped they understood this.
The least they could do is a notification linking to some educational material, instead of surprising people with a scary-looking opt-out checkbox (that may even be problematic under GDPR).
Sending data over the web, at the very least, leaks your IP, possibly your device type as well. It doesn’t matter how anonymized the data contained in the envelope is. Making a web request that sends some data, any data, will always be leaky compared to making no web requests, which means that the user needs to trust the endpoint the browser is communicating with.
The problem with this is that it claims too much. You’re effectively declaring that every web site in existence is in violation of GDPR, because they all need to know your IP address in order to send packets back to you, which makes them recipients and processors of your personal data.
This sort of caricature of GDPR is one reason why basically every site in Europe now has those annoying cookie-consent banners – many of them are almost certainly not legally required, but a generic and wrong belief about all cookies being inherently illegal under GDPR without opt-in, and a desire on the part of industry for malicious compliance, means they’re so ubiquitous now that people build browser extensions to try to automatically hide them or click them away!
The GDPR acknowledges that the IP is sent alongside requests, and that it may be logged for security purposes. That’s a legitimate interest. What needs consent is third-party tracking with the purpose of monetizing ads. How you use that data matters, as you require a legal basis for it.
Cookies don’t need notifications if they are needed for providing the service that the user expects (e.g., logins). And consent is not needed for using data in ways that the user expects as part of the service (e.g., delivering pizza to a home address).
The reason most online services have scary cookie banners in the EU is because they do spyware shit.
Case in point, when you first open Microsoft Edge, the browser, they inform you that they’re going to share your data with over 700 of Microsoft’s partners, also claiming legitimate interest for things like “correlating your devices” for the purpose of serving ads, which you can’t reject, and which is clearly illegal. So Microsoft is informing Edge users, in the EU, that they will share their data with the entire advertising industry.
Well, I, for one, would like to be informed of spyware, thanks.
The GDPR acknowledges that the IP is sent alongside requests, and that it may be logged for security purposes. That’s a legitimate interest. What needs consent is third-party tracking with the purpose of monetizing ads.
Luckily for Mozilla, PPA does not do “third-party tracking with the purpose of monetizing ads”. In fact, kind of the whole point of PPA is that it provides the advertiser with a report that does not include information sufficient to identify any individual or build a tracking profile of an individual. The advertiser gets aggregate reports that tell them things like how many people saw or clicked on an ad but without any sort of identification of who those people were.
This is why the fact that, yes, technically Mozilla does receive your IP address as part of a web request does not automatically imply that Mozilla is doing processing of personal data which would trigger GDPR. If Mozilla does not use the IP address to track you or share it to other entities, then GDPR should not have any reason to complain about Mozilla receiving it as part of the connection made to their servers.
As I’ve told other people: if you want to be angry, be angry. But be angry at the thing this actually is, rather than at a made-up lie about it.
The reason most online services have scary cookie banners in the EU is because they do spyware shit.
No, they do it because (as the other reply points out) they have a compliance department who tells them to do it even if they don’t need to, because it’s better to do it.
There’s a parallel here to Proposition 65 in the US state of California: if you’ve ever seen one of those warning labels about something containing “chemicals known to the State of California to cause cancer”, that’s a Proposition 65 warning. The idea behind it was to require manufacturers to accurately label products that contain potentially hazardous substances. But the implementation was set up so that:
If your product is eventually found to cause cancer, and you didn’t have a warning, you suffer a huge penalty, but
If your product does not cause cancer, and you put a warning on it anyway, you suffer no penalty.
So everyone just puts the warning on everything. Even things that have almost no chance of causing cancer, because there’s no penalty for a false cancer warning and if your product ever is found to cause cancer, the fact that you had the warning on it protects you.
Cookie banners are the same way: if you do certain things with data and don’t get up-front opt-in consent, you get a penalty. But if you get the consent and then don’t do anything which required it, you get no penalty. So the only safe thing to do is put the cookie consent popup on everything all the time. This is actually an even more important thing in the EU, because (as Europeans never tire of telling everyone else) EU law does not work on precedent. 1000 courts might find that your use of data does not require consent, but the 1001st court might say “I do not have to respect the precedents and interpretations of anyone else, I find you are in violation” and ruin you with penalties.
This is why the fact that, yes, technically Mozilla does receive your IP address as part of a web request does not automatically imply that Mozilla is doing processing of personal data which would trigger GDPR.
Mozilla does not have a legitimate interest in receiving such reports from me.
Those are fairly useless for this purpose without a lot of cleaning up and even then I’d say it is impossible to distinguish bots from real visits without actually doing the kind of snooping everyone is against.
You are not allowed to associate a session until you have permission for it, and you don’t have it on first page load if the visitor didn’t agree to it on a previous visit.
This whole described tracking through the website is illegal if you either don’t have a prior agreement or don’t need a session for the pages to even work, which you will have a hard time arguing for browsing a web shop.
Using a third party doesn’t solve anything, because you need permission to do this kind of tracking anyway. My argument, however, was that you can’t learn how many people saw or clicked an ad from your logs, because some saw it on other people’s pages or on a search engine for which you don’t have logs, and A LOT of those clicks are fake, and your logs are unlikely to be rich enough to tell which.
What you want to learn about people’s behavior is more than the above, which I’m sure you’d know if this was actually remotely your job.
I’m not sure anyone here is arguing that these are the same thing and certainly not me.
I’m not sure if you are implying that I am neck-deep in the ad industry, but I certainly never have been. I am, however, responsible also for user experience in our company and there’s a significant overlap in needing to understand visitor/user behavior.
We go to great lengths to not only comply with the letter of the law, but also with its spirit which means we have to make a lot of decisions less informed as we’d prefer. I am not complaining about that either, but it does bother me describing every attempt to ethically learn as either not necessary or sinister.
If your product is eventually found to cause cancer, …
The condition for requiring a warning label is not “causes cancer” but “exposes users to something that’s on this list of ‘over 900 chemicals’ at levels above the ‘safe harbor levels’”, which is a narrower condition, although maybe not much narrower in practice. (I had also thought that putting unnecessary Prop. 65 warning labels on products was forbidden, although it remains common, but I don’t see that anywhere in the actual law now.)
No, the reason many have them is that every data privacy consultant will beat you over your head if you don’t have an annoying version of it. Speaking as someone on the receiving end of such reports.
No, you must have an annoying version of it because the theory goes, the more annoying it is the higher the chance the users will frustratingly click the first button they see, e.g. the “accept all” button. The job of privacy consultants is to legitimize such practices.
Which part of “Speaking as someone on the receiving end of such report” was not clear?
Do you think they are trying to persuade us to have more annoying versions so we could collect more information even though we don’t want to for benefit of whom exactly?
My guess is that you don’t have much experience working with them, or with what those reports actually look like.
Well, what I do know is that the average consent modal you see on the internet is pretty clearly violating the law, which means that either the average company ignores their data privacy consultants, or the data privacy consultants that they hire are giving advice designed to push the limits of the law.
The problem with this is that it claims too much. You’re effectively declaring that every web site in existence is in violation of GDPR, because they all need to know your IP address in order to send packets back to you, which makes them recipients and processors of your personal data.
Yes, IP addresses are personal data and controlled under GDPR, that’s correct. That means each and every HTTP request made needs freely given consent or legitimate interest.
I request a website, the webserver uses my IP address to send me a reply? That’s legitimate interest. The JS on that site uses AJAX to request more information from the same server? Still legitimate interest.
The webserver logs my IP address and the admin posts it on facebook because he thinks 69.0.4.20 is funny? That’s not allowed. The website uses AJAX to make a request to an ad network? That isn’t allowed either.
I type “lobste.rs” into Firefox, and Firefox makes a request to lobsters? Legitimate interest. Firefox makes an additional request to evil-ad-tracking.biz to tell them that I visited lobsters? That’s not allowed.
a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads
Balancing, lol. For years ad providers ignored all data protection laws (in Germany, way before GDPR) and the GDPR. They were stalking all users without consent. Then the EU forced the ad companies to follow the law and at least ask users whether they want to share private data. The ad companies successfully framed this as bad EU legislation. And now your browser wants to help ad companies stalk you. Framing this as balancing is ridiculous.
All it does is tell a site you have already visited that someone got to the site via an ad without revealing PII.
[…]
which ads worked without knowing who they worked for specifically
Just because there is no nametag on it doesn’t mean it’s not private data.
It’s reasonable for a business to know if their ad worked
Sorry for the bad comparison: it’s also reasonable for a thief to want to break into your house. But it’s illegal. Processing personal data is illegal, with some exceptions. Yes, there is “legitimate interest”, but it has to be balanced against the “fundamental rights and freedoms of the data subject”. I would say “I like money” isn’t enough to fall under this exception.
Apple enabled Privacy Preserving Attribution by default for iOS and Safari on macOS 3 years ago
“But the other one is also bad.” This could be an argument, iff you can prove that this was willfully ignored by others. There are so many vendors pushing such shit to their paying customers that I would assume this was overlooked. Also, Apple should disable it too, because as far as I can see it’s against the law (no, I’m not a lawyer).
And no, I don’t say ads are bad or that you shouldn’t be allowed to do some sort of customer analysis. But just as the freedom of your fist ends where my nose starts, the freedom of market analysis ends when you start stalking customers. I know it’s not easy to define where customer analysis ends and stalking starts, but currently ad companies are miles away from that line. So stop framing this as being about poor little advertisers.
The thing that makes me and presumably some other people sigh and roll our eyes at responses like this is that we’re talking about a feature which is literally designed around not sending personal data to advertisers for processing! The whole point of PPA is to give an advertiser information about ad views/clicks without giving them the ability to track or build profiles of individuals who viewed or clicked, and it does this by not sending the advertiser information about you. All the advertiser gets is an aggregate report telling them things like how many people clicked on the ad.
If you still want to be angry about this feature, by all means be angry. Just be angry about the actual truth of it rather than whatever you seem to currently believe about it.
The only problem I see is that Mozilla is able to track and build profiles of individuals. To some extent, they’ve always been able to do so, but they’ve also historically been a nonprofit with a good track record on privacy. Now we see two things in quick succession: first, they acquire an ad company, and historically, when a tech company acquires an ad company, it’s being reverse-acquired. Second, they implement a feature for anonymizing and aggregating the exact kind of information that advertising companies want (which they must, in the first place, now collect). PPA clearly doesn’t send this information directly to advertisers. But do we now trust Mozilla not to sell it to them separately? Or to use it for the benefit of their internal ad company?
The only problem I see is that Mozilla is able to track and build profiles of individuals.
Except they aren’t! They’ve literally thought of this and many other problems, and built the whole thing around distributed privacy-preserving aggregation protocols and random injected noise and other techniques to ensure that even Mozilla does not have sufficient information to build a tracking profile on an individual.
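For intuition only, here is a toy model of the additive secret-sharing idea behind that kind of split aggregation; this is my own illustration, not the actual DAP deployment Mozilla describes, and the numbers are made up. Each browser sends random-looking shares to two non-colluding aggregators, and only the combined sums reveal the total:

```python
import secrets

P = 2**61 - 1  # a prime modulus, picked arbitrarily for this toy example

def share(value: int) -> tuple[int, int]:
    # Split a value into two additive shares; each share alone looks uniformly random.
    r = secrets.randbelow(P)
    return r, (value - r) % P

conversions = [1, 0, 0, 1, 1]  # hypothetical per-browser "did a conversion happen" bits

shares_a, shares_b = zip(*(share(v) for v in conversions))
sum_a = sum(shares_a) % P   # everything aggregator A ever sees
sum_b = sum(shares_b) % P   # everything aggregator B ever sees

print((sum_a + sum_b) % P)  # 3 -- only the combination reveals the count
```

That is the sense in which no single operator holds the individual values; the privacy rests on the two aggregators not pooling their shares.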
And none of this is secret hidden information. None of it is hard to find. That link? I type “privacy preserving attribution” into my search engine, clicked the Mozilla support page that came up, and read it. This is not buried in a disused lavatory with a sign saying “Beware of the Leopard”. There’s also a more technical explainer linked from that support doc.
Which is why I feel sometimes like I should be tearing my hair out reading these discussions, and why I keep saying that if someone wants to be angry I just want them to be angry at what this actually is, rather than angry at a pile of falsehoods.
Except they aren’t! They’ve literally thought of this and many other problems, and built the whole thing around distributed privacy-preserving aggregation protocols and random injected noise and other techniques to ensure that even Mozilla does not have sufficient information to build a tracking profile on an individual.
How do I actually know that Mozilla’s servers are implementing the protocol honestly?
How do I actually know that Mozilla’s servers are implementing the protocol honestly?
How do you know anything?
Look, I’ve got a degree in philosophy and if you really want me to go deep on whether you can know things and how, I will, but this is not a productive line of argumentation because there’s no answer that will satisfy. Here’s why:
Suppose that there is some sort of verifier which proves that a server is running the code it claims to be; now you can just reply “ah-ha, but how do I trust that the verifier hasn’t been corrupted by the evil people”, and then you ask how you can know that the verifier for the verifier hasn’t been corrupted, and then the verifier for the verifier for the verifier, and thus we encounter what is known, in philosophy, as the infinite regress – we can simply repeat the same question over and over at deeper and deeper levels, so setting up the hundred-million-billion-trillionth verifier-verifier just prompts a question about how you can trust that, and now we need the hundred-million-billion-trillion-and-first verifier-verifier, and on and on we keep going.
This is an excellent question, and frankly the basis of my opposition to any kind of telemetry bullshit, no matter how benign it might seem to you now. I absolutely don’t know that it’s safe or unsafe, or anonymous or only thought to be anonymous. It turns out you basically can’t type on a keyboard without somebody being able to turn a surprisingly shitty audio recording of your keyboard into a pretty accurate transcript of what you typed. There have been so many papers demonstrating that a list of the fonts visible to your browser can often uniquely identify a person. Medical datasets have been de-anonymised just by using different bucketing strategies.
I have zero confidence that this won’t eventually turn out to be similar, so there is zero reason to do it at all. Just cut it out.
If there’s no amount of evidence someone could present to convince you of something, you can just say so and let everyone move on. I don’t like arguing with people who act as if there might be evidence that would convince them when there isn’t.
It’s a perfectly legitimate position to hold that the only valid amount of leaked information is zero. You’re framing it as if that was something unreasonable, but it’s not. Not every disagreement can be solved with a compromise.
I prefer to minimize unnecessary exposure. If I visit a website, then, necessarily, they at a minimum get my IP address. I don’t like it when someone who didn’t need to get data from me, gets data from me. Maybe they’re nice, maybe they’re not nice, but I’d like to take fewer chances.
I like your take on this, insomuch as “it’s better than what we currently have”.
It’s reasonable for a business to know if their ad worked.
I don’t agree with this, it wasn’t even possible to know until about 20 years ago. The old ad-man adage goes that “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Well that’s just the price you pay when producing material that hardly ever is a benefit to society.
Funnily enough there does seem to have been a swing back towards brands and publishers just cutting all middle men out and partnering up. This suggests to me that online ads aren’t working that well.
This to me is so incredibly naive and I’m speaking as someone who doesn’t like ads. How in the world would anyone hear about your product and services without them, especially if they are novel?
Imagining that every company, no matter how small or new, sits on tons of money it can waste on stuff that is ineffective seems unreasonable. Having ads be an option only for companies that are already successful enough doesn’t seem particularly desirable from the point of view of the economy.
I’m as much against snooping, profiling and other abuses as the next guy, but I disagree with seeing every tracking, no matter how much it is privacy preserving, as inherently bad.
Why? Justify that. What is it about a company requiring advertising that inherently reduces the value of that company to 0 or less? If I have a new product and I have to tell people about it to reach the economic tipping point of viability, my product is worthless? Honestly, I find this notion totally ridiculous - I see no reason to connect these things.
I am fine with ads that are not targeted at me at all, and don’t transmit any information about me to anyone. For example, if you pay some website to display your ad to all its visitors, that is fine by me. Same as when you pay for a spot in a newspaper, or a billboard. I don’t like it, but I’m fine with it.
It’s absolutely naive, and I stand by it because I don’t care if you can’t afford to advertise your product or service. But I do find ads tiresome, especially on the internet. Maybe I’m an old coot but I tend to just buy local and through word of mouth anyway, and am inherently put off by anything I see in an ad.
Imagining that every company, no matter how small or new sits on tons of money they can waste on stuff that is ineffective seems unreasonable. Having ads be an option only for companies already successful enough doesn’t seem particularly desirable from point of view of economy.
This is pretty much the state of affairs anyway. Running an ad campaign is a money-hole even in the modern age. If I turn adblock off I just get ads for established players in the game. If I want anything novel I have to seek it out myself.
But as I said, I’m not against this feature per-se, as an improvement on the current system.
It’s worth repeating, society has no intrinsic responsibility to support business as an aggregated constituent, nor as individual businesses.
One might reasonably argue it’s in everyone’s best interest to do so at certain times, but something else entirely to defend sacrosanct business rights reflexively the moment individual humans try to defend themselves from the nasty side effects of business behavior.
We absolutely have a responsibility to do so in a society where people rely on businesses for like… everything. You’re typing on a computer - who produced that? A business. How do you think most Americans retire? A business. How do new products make it onto the market? Advertising.
I think it’s exactly the opposite situation of what you’re purporting. If you want to paint the “society without successful businesses is fine” picture, you have to do so.
Would it not be fair to suggest that there’s a bit of a gulf between businesses people rely on and businesses that rely on advertising? Perhaps it’s just my own bubble, dunno
How in the world would anyone hear about your product and services without them, especially if they are novel?
Have you heard of shops? It’s either a physical or virtual place where people with money go to purchase goods they need. And sometimes to browse if there’s anything new and interesting that might be useful.
Also, have you heard of magazines? Some of them are dedicated to talking about new and interesting product developments. There are multiple printed (and digital) magazines detailing new software releases and online services that people might find handy.
Do they sometimes suggest products that are not best for the consumer, but rather best for their bottom line? Possibly. But still, they only suggest new products to consumers who ask for it.
Regardless how well PPA works, I think this is crux of the issue:
Mozilla has just bought into the narrative that the advertising industry has a right to track users
Even if PPA is technically perfect in every way, maybe MY personal privacy is preserved. But ad companies need to stop trying to insert themselves into every crack of society. They still have no right to any kind of visibility into consumer traffic, interests, eyeballs, whatever.
PPA does not track users. It tracks that an ad was viewed or clicked and it tracks if an action happened as a result, but the user themself is never tracked in any way. This is an important nuance.
What “visibility into consumer traffic, interests, eyeballs, whatever” do you think PPA provides?
The crux of PPA is literally that an advertiser who runs ads gets an aggregate report with numbers that are not the actual conversion rate (number of times someone who saw an ad later went on to buy the product), but is statistically similar enough to the actual conversion rate to let the advertiser know whether they are gaining business from running the ad.
It does not tell them who saw an ad. It does not give them an identifier for the person who saw the ad. It does not tell them what other sites the person visited. It does not tell them what that person is interested in. It does not give them a behavioral profile of that person. It does not give them any identifiable information at all about any person.
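If it helps to see what “statistically similar but not the actual number” can mean in practice, here is a bare-bones Laplace-noise sketch of the idea. It illustrates differential privacy in general, under my own assumptions, not the specific mechanism Mozilla uses:

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample, built as the difference of two exponentials.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_conversion_count(true_count: int, epsilon: float = 1.0) -> float:
    # A counting query has sensitivity 1, so Laplace(1/epsilon) noise gives
    # epsilon-differential privacy for the released aggregate.
    return true_count + laplace_noise(1.0 / epsilon)

# The advertiser sees something near the true total (say 198 or 203 for 200
# real conversions) but cannot tell from the report whether any particular
# person converted.
print(round(noisy_conversion_count(200)))
```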
For years, people have insisted that they don’t have a problem with advertising in general, they have a problem with all the invasive tracking and profiling that had become a mainstay of online advertising. For better or worse, Mozilla is taking a swing at eliminating the tracking and profiling, and it’s kind of telling that we’re finding out how many people were not being truthful when they said the tracking was what they objected to.
Personally, while I don’t like seeing ads, and on services that I use enough that offer me the option, I pay them money in exchange for not seeing ads, I also understand that being online costs money and that I don’t want the internet to become a place only for those wealthy enough to afford it without support. So having parts of the web that are paid for by mechanisms like advertising – provided it can be done without the invasive tracking – rather than by the end user’s wallet is a thing that probably needs to exist in order to enable the vibrant and diverse web I want to be part of, and lurking behind all the sanctimoniousness and righteous sneers is, inevitably, the question of how much poorer the web would be if only those people who can pay out of their own pockets are allowed onto and into it.
I’m saying they don’t have the right to “know whether they are gaining business from running the ad.”
It’s not necessarily bad for them to know this, but they are also not entitled to know this. On the contrary: The user is entitled to decide whether they want to participate in helping the advertiser.
Well, in order to even get to the point of generating aggregate reporting data someone has to both see an ad and either click through it or otherwise go to the site and buy something. So the user has already decided to have some sort of relationship with the business. If you are someone who never sees an ad and never clicks an ad and never buys anything from anyone who’s advertised to you, you don’t have anything to worry about.
It does not tell them who saw an ad. It does not give them an identifier for the person who saw the ad. It does not tell them what other sites the person visited. It does not tell them what that person is interested in. It does not give them a behavioral profile of that person. It does not give them any identifiable information at all about any person.
Question: how is the ad to be displayed selected? With the introduction of PPA, do advertisers plan on not using profiling to select ads anymore? Because that part of the ad-tech equation is just as important as measuring conversions.
Fun fact: Mozilla had a proposal a few years back for how to do ad selection in a privacy-preserving way, by having the browser download bundles of ads with metadata about them and do the selection and display entirely on the client side.
Personally, while I don’t like seeing ads, and on services that I use enough that offer me the option, I pay them money in exchange for not seeing ads, I also understand that being online costs money and that I don’t want the internet to become a place only for those wealthy enough to afford it without support. So having parts of the web that are paid for by mechanisms like advertising – provided it can be done without the invasive tracking – rather than by the end user’s wallet is a thing that probably needs to exist in order to enable the vibrant and diverse web I want to be part of, and lurking behind all the sanctimoniousness and righteous sneers is, inevitably, the question of how much poorer the web would be if only those people who can pay out of their own pockets are allowed onto and into it.
The Internet is already a place only for those wealthy enough to pay out of their own pockets for a computer and Internet connection that is fast enough to participate. Without ads, many sites would have to change their business model and may die. But places like Wikipedia and Lobsters would still exist. Do you really think the web would be poorer if websites were less like Facebook and Twitter and more like Wikipedia and Lobsters?
Someone who doesn’t own a computer or a phone can access the internet in many public libraries – free access to browse should be more plentiful but at least exists.
But web sites generally cannot be had for free without advertising involved, because there is no publicly-funded utility providing them.
So you want to preserve ads so that people who rely on public libraries for Internet access can offset hosting costs by putting ads on their personal websites? That still requires some money to set up the site in the first place, and it requires significant traffic to offset even the small hosting cost of a personal website.
Clearly you have something else in mind but I can’t picture it. Most people don’t have the skills to set up their own website anyway, so they use services such as Facebook or Wikipedia to participate on the Internet. Can you clarify your position?
I thought this discussion was getting really interesting, so I’m assuming it fell by the wayside and that you would appreciate me reviving it. Did you want to respond? Or would you rather I stop asking?
Privacy Preserving Attribution is a sensible, well-designed feature and a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads.
There is a very simple question you can ask to discover whether a feature like this is reasonable: if the user had to opt in for it, how many users would do so if asked politely?
This is innovation in the wrong direction. The actual problem is that everyone believes that ads are the primary/only economic model of the Web and that there is nothing we can do about it. Fixing that is the innovation we actually need.
We could have non-spyware ads that don’t load down browsers with megabytes of javascript, but no-one believes that it is possible to advertise ethically. Maybe if web sites didn’t have 420 partners collecting personal data there would be fewer rent-seeking middlemen and more ad money would go to the web sites.
Ads. We all know them, we all hate them. They slow down your browser with countless tracking scripts.
Want in on a little secret? It doesn’t have to be this way. In fact, the most effective ads don’t actually have any tracking! More about that, right after this message from our sponsor:
(trying to imitate the style of LTT videos here)
We’ve got non-spyware ads that don’t contain any interactivity or JS. They’re all over video content, often called “sponsorships”. Negotiated directly between creators and brands, integrated into the video itself without any interactivity or tracking, most of the time clearly marked. And they’re a win-win-win. The creator earns more, the brand actually gets higher conversion and more control about the context of their ad, and by nature the ads can’t track the consumer either.
(Note that I’m not the parent poster, I’m just replying here because the question of what data is actually being tracked seems like the crux of the matter, not because I want to take out the pitchforks.)
Reading through the data here, it seems to me like the browser is tracking what ads a user sees. Unfortunately the wording there is kind of ambiguous (e.g. what’s an “ad placement”? Is it a specific ad, or a set of ads?) but if I got this right, the browser locally tracks what ad was clicked/viewed and where, with parameters that describe what counts as a view or a click supplied by the advertiser. And that it can do so based on the website’s requirements, i.e. based on whatever that website considers to be an impression.
Now I get that this report isn’t transmitted verbatim to the company whose products are being advertised, but:
Can whoever gets the reports read them and do the tracking for the third-party website?
If the browser maintains a list of ads (for impression attribution), can it be tracked based on the history of what ads it’s seen? Or as a variant: say I deliver a stream of bogus but unique (as in content, or impression parameters for view/click) ads, so each ad will get an impression just once. Along with that, I deliver the “real” ads, for shoes, hats or whatever. Can I now get a list of (unique bogus ads, real ad) pairs?
I realise this is a hot topic for you, but if you’re bringing up the principle of charity, can we maybe try it here, too? :-) That’s why I prefaced this with a “I’m not the parent poster” note.
That technical explainer is actually the document that I read, and on which my questions are based. I’ve literally linked to it in the comment you’re responding to. I’m guessing it’s an internal document of sorts, because it’s not “very readable” to someone who doesn’t work in the ad industry at all. It also doesn’t follow almost any convention for spec documents, so it’s not even clear if this is what’s actually implemented or just an early draft, whether the values “suggested” there are actually being used, which features are compulsory, or if this is the “final” version of the protocol.
My first question straight out comes from this mention in that document:
Our DAP deployment [which processes conversion reports] is jointly run by Mozilla and ISRG. Privacy is lost if the two organizations collude to reveal individual values.
(Emphasis mine).
Charitably, I’m guessing that the support page is glossing over some details in its claim, given that there’s literally a document describing what information about one’s browsing activities is being sent and where. And that either I’m misunderstanding the scope of the DAP processing (is this not used to process information about conversions?) or that you’re glossing over technical details when you’re saying “no”. If it’s the latter, though, this is lobste.rs, I’d appreciate if you didn’t – I’m sure Mozilla’s PR team will be only too happy to gloss over the details for me in their comments section, I was asking you because a) you obviously know more about this than I do and b) you’re not defaulting to “oh, yeah, it’s evil”.
I have no idea what running a DAP deployment entails (which is why I’m asking about it) so I don’t really know the practical details of “the two organizations collude” which, in turn, means I don’t know how practical a concern that is. Which is why I’m asking about it. Where, on the spectrum between “theoretically doable but trivially detected by a third party” and “trivially done by two people and the only way to find out is to ask the actual people who did it”, is it placed?
My second question is also based on that document. I don’t work in the ad industry and I’m not a browser engineer, so much of the language there is completely opaque. Consequently:
I’m obviously aware that only conversions are reported, since that’s the only kind of report described there. But:
The document also says that “a site can register ad impressions [which they do] by generating and dispatching a CustomEvent as follows”. Like I said above: not in the ad industry, not a browser engineer, I have no idea what a CustomEvent is. In its simplest form, reading the doc, it sounds like the website is the one generating events. But if that’s the case, they can already count impressions; they don’t even need to query the local impression database. (The harder variant is that the event is fired locally and you can’t hook into it in any way, but it’s still based on website-set parameters – see my note in 5. below for that.) I imagine I’m missing something, but what?
The document doesn’t explain what impression data is available to websites outside the report. All it says is “the target site cannot query this database directly”, which can mean anything between “the JS environment doesn’t even know it’s there” and “you can’t read it directly but there’s an API that exposes limited information about it”.
The document literally lists “richer impression selection logic” and “ability to distribute that value to multiple impressions” as desirable traits that weren’t in the spec purely due to prototyping concerns, so I’ve certainly treated the “one ad at a time” limitation as temporary. And, in any case, I don’t know if that’s what’s actually being implemented here.
The advertiser budget is obviously tunable, the document only suggests two, doesn’t have an upper cap on the limit, and doesn’t have a cap on how often it can be refreshed, either (it only suggests weekly). It also doesn’t explain who assigns these limits.
was actually the subject of my first question and isn’t directly relevant here, although 5 is
I obviously didn’t miss the part about differential privacy. My whole second question is about whether the target site can use this mechanism as a side-channel to derive user tracking information, not whether they can track users based on the impression report themselves, which they obviously can’t, like, that’s the whole point.
regarding PPA: if I have DNT on, what questions are still unclear?
regarding the primary economic model, that’s indeed the problem to be solved. Print once had ads without tracking and thrived. An acceptable path is IMO payments, not monetised surveillance. Maybe something similar to https://en.wikipedia.org/wiki/VG_Wort
and regarding opt-in/out: one doesn’t earn trust by going the convenient way. Smells.
Once Google had ads without tracking and thrived, enough to buy their main competitor Doubleclick. Sadly, Doubleclick’s user-surveillance-based direct-marketing business model replaced Google’s web-page-contents-based broadcast-advertising business model. Now no-one can even imagine that advertising might possibly exist without invasive tracking, despite the fact that it used to be normal.
It’s funny because not once in my entire life have I ever seen an invasive tracking ad that was useful or relevant to me. What a scam! I have clicked on two ads in my entire life, which were relevant to me, and they were of the kind where the ad is based on the contents of the site you’re visiting.
great illustration of how the impact of ads is disparately allocated. some people click on ads all the time and it drains their bank account forcing them into further subordination to employers. this obviously correlates with lower education and economic status.
I’ve seen many of these happen just at the company I’m currently at. Extensions are especially awful and a constant source of errors. So are mobile browsers themselves injecting their own crap. Users don’t know what the source of the breakage is, even if they cared. I’d say about 90% if not more of the errors we encounter are JavaScript-related, and a similar percentage of those are not caused by code we wrote.
We still use Javascript to improve experience and I don’t see this article arguing against that. Even have a few SPAs around although those are mainly backoffice tools. However we do make sure that main functionality works even if HTML is the only thing you managed to load.
but 15 items shows you how likely it is JavaScript will not be available, or available in a limited fashion
No it doesn’t. In a few million daily page loads, less than 0,05% of my traffic is without JavaScript, and it’s usually curl or LWP or some other scraper doing something silly. Your traffic might be different, so it’s important to measure it, then see what you want to do about it. For me, with such a small number the juice probably isn’t worth the squeeze, but I have other issues with this list:
“A browser extension has interfered with the site”: possible, but so what? Ad blockers are designed to block ads. Offer scripts help users find better prices than yours. In my experience this goes wrong almost never, because breaking sites makes the user more likely to uninstall the extension.
“A spotty connection hasn’t loaded the dependencies correctly”: don’t depend on other people’s network. If my server is up, it can serve all of its assets. “Spotty” connections don’t work any other way.
“Internal IT policy has blocked dependencies”: again, don’t depend on other people’s network.
“WiFi network has blocked certain CDNs”: again, don’t depend on other people’s network. Bandwidth is so freaking cheap you should just serve it. Don’t let some third party harvest your web visitor data to save a few pennies. They might block your images or your CSS as well. Your brand could look like shit for pennies.
“A user is viewing your site on a train which has just gone into a tunnel”: possible, but so what? The user knows they are in a tunnel and will hit the refresh button on the other side.
“A device doesn’t have enough memory available”: possible, but I don’t know anyone with a mobile phone older than 10 years, and that’s still iPhone 6-era performance, maybe HTC One era? Test with an old Android and see. This isn’t affecting me; I don’t even see it in the page requests.
“There’s an error in your JavaScript”: you must be joking. Have you met me?
“An async fetch request wasn’t fenced off in a try/catch and has failed”: ha.
“A user has a JavaScript toggle accidentally turned off”: I don’t believe this. Turn it off and ask your mom to try ten of her favourite websites. If she doesn’t ask what’s wrong with your computer, she’s not going to buy anything I sell.
“A user uses a JavaScript toggle to prevent ads loading”: possible, but so what? That’s what it’s designed to do.
“An ad blocker has blocked your JavaScript from loading”: possible but unlikely. I test with a few different ad blockers and most of them are pretty good about not blocking things that aren’t ads.
“A user is using Opera Mini”: possible. Something like 5^-6% of my page loads are some kind of Opera, so maybe when I start making a million dollars a day, fixing Opera will be worth a dollar a day; but this is not my reality, and heck, even at Google’s revenue this can’t be worth more than a grand a day to them.
“A user has data saving turned on”: possible, but so what? I do this too and it seems fine. Try it.
“Rogue, interfering scripts have been added by Google Tag Manager”: don’t depend on other people’s network. Google Tag Manager is trash; I don’t use it and I don’t recommend anyone use it. I don’t sympathise with anyone having any variant of this problem.
“The browser has locked up trying to parse your JS bundle”: possible, which is why I like to compare JS loads against telemetry that the JS sends me.
99.67% of my JS page loads include the telemetry response. I don’t believe spending any time on a JS-free experience is worth anything to the business, but I appreciate it could be for someone, so I would like to see more things to check and try, not more things that could go wrong (but won’t or don’t).
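Since I mentioned it above, here is roughly what “fencing off” a fetch amounts to; a minimal sketch, with the endpoint made up and console calls standing in for real rendering:

// somewhere in your bundle
async function loadWidget() {
  try {
    const res = await fetch('/api/widget');              // hypothetical endpoint
    if (!res.ok) throw new Error('HTTP ' + res.status);
    const data = await res.json();
    console.log('got data', data);                        // stand-in for actual rendering
  } catch (err) {
    console.warn('fetch failed, leaving the server-rendered HTML in place', err);
  }
}
loadWidget();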
I’m not sure how to put this politely, but I seriously doubt your numbers. Bots that aren’t running headless browsers should by themselves account for more than what you estimated.
I’d love to know how you can be so certain in your numbers? What tools do you use, or how do you measure your traffic?
In our case our audience are high school students and schools WILL occasionally block just certain resource types. Their incompetence doesn’t make it less of your problem.
The page will load a.js, then load b.txt or c.txt based on what happened in a.js. Then, because I know basic math, I can compare the number of times a.js loads with the number of times b.txt and c.txt load, according to my logfiles.
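Roughly, a.js does something like this (a sketch; the file names follow the scheme above, the actual work is a placeholder):

// a.js: only runs if the bundle actually loaded and parsed
(async () => {
  try {
    // ... the page's real JavaScript work goes here ...
    navigator.sendBeacon('/b.txt');   // shows up in the access log as "JS ran fine"
  } catch (err) {
    navigator.sendBeacon('/c.txt');   // shows up as "JS loaded but something failed"
  }
})();

Counting a.js, b.txt and c.txt requests in the logs then gives exactly the comparison described above.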
What tools do you use, or how do you measure your traffic?
Tools I build.
I buy media to these web pages and I am motivated to understand every “impression” they want to charge me for.
In our case our audience are high school students and schools WILL occasionally block just certain resource types
I think it’s important to understand the mechanism by which the sysadmin at the school makes a decision to do anything;
If you’ve hosted an old version of jQuery that has some XSS vector, you’ve got to expect someone is going to block jQuery by regex, even if you’ve updated the version underneath. That’s life.
The way I look at it is this: I find if people can get to Bing or Google and not get to me, that’s my problem, but if they can’t get to Bing or Google either, then they’re going to sort that out. There’s a lot that’s under my control.
“A spotty connection hasn’t loaded the dependencies correctly”: don’t depend on other people’s networks. If my server is up, it can serve all of its assets. “Spotty” connections don’t work any other way.
Can I invite you to a train ride in a German train from, say, Zurich to Hamburg? Some sites work fine the whole ride, some sites really can’t deal with the connection being, well, spotty.
Some sites work fine the whole ride, some sites really can’t deal with the connection being, well, spotty.
Yeah, and I think those sites can do something about it. Third-party resources are probably the number one issue for a lot of sites. Maybe they should try the network simulator in the browser developer tools once in a while. My point is that the JavaScript-ness isn’t as big a problem as the people writing the JavaScript.
Never had a Galaksija, although I read Računari even though it was in Serbian. It had some really excellent writers. Not sure if it was there or in the other Serbian magazine whose name currently escapes me that I encountered a phrase that stayed with me ever since: “Sitnice koje život znače” (roughly, “the little things that mean life”).
Not that I’m a big fan of interview marathons, and I’m no one’s boss right now, but please keep in mind (seeing that you’re in the US) that not all markets and jurisdictions are the same.
Very much summarized: in many European countries it’s:
really hard to get rid of people, so there might be more vetting beforehand
not staying after the 6-month probationary period can be a huge red flag to many. It’s unfair, it’s bullshit, but it’s a fact
These and many other factors seem to point in the direction that hiring (and firing) is very, very slow and bureaucratic compared to what I hear from the US. And as much as we tech people think we’re special snowflakes, some things are just like in other fields.
And so, yes, personally I am also no fan. If they’d just invite me in after 1h of chatting (it has happened before), that’s great, but unless I already know that I really, really, really want to work for this company and know I won’t hate it… several interviews (more like: talking to more different people than just one) also give me more insight.
You should keep in mind that the author is in the US and is reacting to US interviewing practices. Tech firms are notorious for making candidates run a gauntlet of interviews despite not facing European labour regulations. So your theory of why interview processes may be difficult in Europe doesn’t really explain the phenomenon in the US.
I live in France, and have worked for… 9 companies over the course of 17 years. I have interviewed for quite a few more, and not a single one of the companies I interviewed with required more than 2 interviews (one technical, the other not). No one gave me a take-home assignment, and I had to go through a formal test only twice. Twice more they had a standard set of technical questions.
When I’m interviewing candidates for my current company, we do have a 90-minute “coding game” (a quiz plus a couple of simple problems like detecting anagrams), which helps me structure the interview (I systematically review the answers with the candidate), but even that isn’t required (those who don’t do it get quizzed anyway 😈).
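(For reference, the anagram one is about as simple as screening questions get; JavaScript here purely for illustration:)

// Two strings are anagrams if they contain the same letters with the same counts.
const normalize = s => [...s.toLowerCase()].filter(c => c >= 'a' && c <= 'z').sort().join('');
const isAnagram = (a, b) => normalize(a) === normalize(b);

console.log(isAnagram('listen', 'silent'));   // true
console.log(isAnagram('hello', 'world'));     // false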
Despite being in a country in which firing people is not easy. On the contrary, I’ve always been a bit surprised by stupidly long interview processes like the Google Gauntlet.
Then I’d say consider yourself lucky. Just 2 interviews has been the quickest and yet a total outlier in my experience in Germany. Even smaller companies often had screening + interview + meet the team (even informal, still some kind of interview).
Despite firing being legally easier in the USA, major tech companies are too inefficient inside to detect or fire ineffective employees anyway, so it really doesn’t happen fast.
not staying after the 6-month probationary period can be a huge red flag to many. It’s unfair, it’s bullshit, but it’s a fact
I am super curious about this (having just moved to Europe). Could you elaborate at all? D’you mean not staying the full 6 months is a problem? Or leaving too soon after it?
(At least where I’m coming from, it’s a bit of a yellow flag if you have numerous entries in your CV that are a year or less, but one or two isn’t a big deal.)
I mean, I’m no recruiter, and yes, maybe I should have said yellow flag, but every time I was part of some CV screening and you had people with more than one of those very short stints, there was some “Hmmmm”.
But my (maybe badly made) argument was that if all interviews and hiring processes were so “quick”, I have a fear that it might become more widespread. So basically all I have is anecdata and I wouldn’t be overly concerned.
In my experience one such short stint is no issue in reasonable companies. Sometimes things just don’t work out. More can be problematic if they are clustered and recent, but it is basically overall impression that counts (or did when I was more involved in hiring).
The degree of outrage was surprising; I sympathize with the maintainer here.
I’m not sure I would have taken too much notice of it. There is a ton of negative sentiment from tech people, particularly on the orange site. Best to ignore most of it and focus on people who have something constructive to offer.
I usually agree with hindsight being 20/20, but adding AI to a terminal application automatically on a minor software upgrade was not going to be received well. The same would have applied to crypto…
It didn’t add it automatically. You had to complete several steps including providing your own API key for it to start working on your explicit request.
I am the first to object to random AI integration, have no use for this feature, and also have other reasons why I don’t and probably will never again use iTerm2. All of that said:
Although the feature was there in the core, it didn’t work out of the box. You needed to explicitly configure it to use an API key that you provided. I am quite surprised that it would be problematic for people.
Part of me really thinks that firewalls were a mistake. They may make sense for tightly controlled machines like a single server or a single-tenant data center. But things like this, where the university WiFi is firewalled, just break the internet.
It’s worth remembering that the out-of-the-box security of most systems was absolute dogshit well into the 2000s.
Despite what I wrote recently, Cambridge University had fairly relaxed central packet filters, though departments often ran their own local firewalls. The filters were generally justified by the need to avoid wasting computing staff time. For many years the main filters were on Windows NETBIOS-related ports, not just on the border routers but on internal links too (because some departments lacked firewalls). (Looks like it hasn’t changed much, to my mild surprise.)
Security specialists don’t like this kind of reactive stance, but in practice it’s a reasonable pragmatic trade-off for an organization of Cambridge’s size and complexity.
When I was a student, the campus firewall blocked all UDP traffic. At the time, most video conferencing tools used UDP because TCP retransmissions cause too much delay. We wasted hours trying to work out why the Access Grid (the video conferencing solution recommended for universities at the time) did not work.
An ingress firewall may make sense to prevent services that are not intended to be exposed to the Internet from being attacked, but these days perimeter defences like that are mostly security theatre. Someone is going to put a compromised Android device on your network, and at that point the firewall is useless.
The university firewall was a complete waste of time because the entire internal network was a single broadcast domain and so a single machine was able to infect every machine on campus with the Slammer worm in about a minute. We also accidentally broke most of the lab machines by connecting a machine that we didn’t realise was running a DHCP server. It had a very small allow list and so responded very quickly with a denied response to DHCP requests from the Windows machines, which then self-assigned an IP address on the wrong subnet and failed to connect to the server that provided login information and roaming profiles.
I agree with that sentiment. Basic “block all ports that aren’t open anyway” seems of limited value.
One might argue that even on a server you only have certain open ports anyway, the ports that your services listen on. So unless something else is open there is no difference, other than that one might drop packets instead of returning rejects, maybe. However, that’s something that shouldn’t really require a firewall.
However, there are other things that firewalls can be used for, such as limiting source addresses (which might be tricky for UDP, depending on what you are trying to achieve) and sanitizing packets (though I think that could just be a sysctl or something).
And an attacker that can launch a service usually can also just connect back, or use other means that don’t require an additional open port, to exfiltrate information, for example.
Firewalls that try to do this are, in general, really fucking bad at it. QUIC is designed the way it is so that firewalls and other middleboxes cannot “sanitize” its packets.
For example, the Cisco PIX / ASA has an SMTP fixup feature (aka smtp fuxup). One of the things it does is suppress SMTP commands that it doesn’t know about. It does this by replacing the command with XXXX. But it is too shitforbrains to maintain parser state between packets, so if a command (commonly RCPT TO) gets split by a packet boundary it always gets XXXXed.
I once debugged a mysterious mail delivery problem that caused a small proportion of messages to time out and eventually bounce. After much head scratching I worked out that it depended on the length of the recipient list: if it was 513 or 1025 bytes long, the firewall between my mail servers and the department’s mail servers would fuck up its TCP packet resegmentation, the sequence numbers would go out of sync, and the connection would be lost.
QUIC is designed the way it is so that firewalls and other middleboxes cannot “sanitize” its packets.
Yes and that’s good! Not what I meant by that though.
For example, the Cisco PIX / ASA has an SMTP fixup feature (aka smtp fuxup). One of the things it does is suppress SMTP commands that it doesn’t know about.
Talking about a different layer. I think a firewall shouldn’t look into commands. Also SMTP should be encrypted, heh.
if it was 513 or 1025 bytes long, the firewall between my mail servers and the department’s mail servers would fuck up its TCP packet resegmentation
Sounds like a bug? People sadly keep buying shitty products from shitty companies.
What I’ve been talking about, though, is the opposite: incoming stuff being fucked up. For example, OpenBSD’s pf will get rid of IP packets that simply shouldn’t exist, like ones with both a RST flag and a SYN flag set. No, thanks!
Yeah those were old anecdotes, more than 15 years.
On systems I run, I generally only use the packet filters to stop unwanted traffic getting to userland. I don’t see much point in configuring one part of the network stack to defend another part of the network stack that should be equally capable of defending itself.
I agree. I see it as layers here: as a packet enters the network stack, you first check whether it is valid at all, no matter what happens later. I think, though, that in most situations that could just be a switch, if you even need to be able to turn it off, except maybe for some testing or debugging situation where you just wanna look at the incoming packet as it is.
I should have been more precise about what I meant with sanitizing.
Basic “block all ports that aren’t open anyway” seems of limited value.
There’s no single rule of thumb. When you’re designing a server environment, that is pretty much exactly the config you would use for the public facing NICs:
Open only the absolute minimum set of ports (80/443 in, and 80/443/53 out)
Allow for TCP and UDP only as required
Allow for IPv4 and IPv6 only as required
Then (if and when possible) use a different network for ops and administration, with port 22 open, and with SSH configured to never accept passwords (i.e. only accept configured ssh-rsa keys):
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
If you’re terminating SSL at the reverse proxy layer, this style of deployment becomes even more important, as you’ll have plain text within that server subnet.
But firewalls to protect people who are on WiFi? I guess it still makes sense to block incoming traffic in general, since most client apps are designed now to NAT-punch holes in the firewall (and firewalls are designed to support this). But that’s definitely not a network that should also be hosting servers.
(Rereading my way too long post and yours I think we are agreeing anyways, however the main thing I wanna say is “don’t do things just because..”. That can give a false sense of security)
When you’re designing a server environment, that is pretty much exactly the config you would use
Been designing server environment for 25ish years and have been consulting in that area. Thanks. ;)
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
Completely unrelated topic, but okay. I’ll keep PermitRootLogin at its default (prohibit-password). I appreciate the sentiment that anything administrative should, in the best case, be on its own network in the sense of having a general admin interface, but OpenSSH is a bit of an exception for me; in fact I have had setups where it was my admin interface, in the sense that everything admin-related was tunneled through it. At times even with an OpenSSH tun device.
But firewalls to protect people who are on WiFi? I guess it still makes sense to block incoming traffic in general, since most client apps are designed now to NAT-punch holes in the firewall (and firewalls are designed to support this).
Sure thing even though it really doesn’t get you much, because…
As you said there is outgoing traffic through hole punching
Malware connects outbound either simply by creating an outbound connection, or if you want even through hole punching
Both things are sadly sometimes made possible through UPnP, something to always disable if you care at all about network security.
The main thing your home firewall will prevent is exposing stuff by accident. And that’s the point I wanted to make.
But that’s definitely not a network that should also be hosting servers.
Your server should never ever have stuff accidentally exposed. If you run your server like that you have serious security issues.
And just to be clear: Yeah I also have my firewalls turned on and it matches what you meant with the above. Essentially because if some accident happens it might still be caught. It’s cheap enough.
However, I want people to actually think about the security measures they take. Not doing that, and just doing the stuff one always does without thinking, causes problems. Like I wrote, I have been consulting, and it’s always the same picture: people do stuff that is somehow security related, add something without even knowing what it really does, or do something because it was sensible in another environment. People add protections for some super edge case, like your operating system’s ICMP parsing having a bug, sometimes spending tons of money and time on it, only to have gaping vulnerabilities: not patching their systems quickly enough, having stuff exposed that shouldn’t be, having unencrypted connections to databases and so on. Or having “ah, nobody will guess that IP address” situations and then being surprised when scanners browse by. And, like you mention, doing password-based authentication, sometimes even with reused passwords.
Then they buy an expensive product that is essentially snake oil, and they don’t even know what it is supposed to do. And then that very security product ends up being used as the added attack surface.
And of course there are classics like forgetting about IPv6 and UDP. Around ten-ish years ago there was that period when people ran NTP servers that were both badly configured and publicly accessible; while they weren’t hacked per se, they were used to facilitate reflection attacks, and of course that meant they were effectively down too. It’s a classic example of just doing some standard thing without really understanding it: he saw there was an open port for NTP, so he kept it open. I might be mixing things up, but I think he also got errors otherwise.
Anyways, I think a lot of harm is done when people and companies think security is just a product, or that they just have to follow some online guide. There is so much bad advice out there, and it’s really important to actually understand things. There are baselines, such as patching. But as soon as it comes to firewalls and things like that, it feels like an extra thing that adds security, when in fact it’s very, very limited; people see it as the thing that protects them from attackers, which is not something it really does. It can prevent misconfigurations from having a big effect and it can do some special stuff, but it doesn’t prevent much more. If your system is otherwise secure, it should basically be as if it wasn’t there.
It’s a bit like with fail2ban for OpenSSH and the like. It doesn’t really bring a security benefit if your authentication is sane; if it isn’t, you should fix that, because fail2ban will not prevent you from being hacked. As an attacker I really don’t have to be limited to individual IP addresses: I can configure a whole subnet, or multiple, cheaply on a single system, which doesn’t even have to be a physical system. At the same time you now have a log parser parsing logs that an attacker can partly control simply by connecting, and state tables can be filled up. All to guard against the fear that an attacker who cannot easily come by IP addresses guesses a password, when the account should have no password at all and password authentication should not even be allowed.
I agree that it can help in situations like misconfiguration, and the trade-off is on the “well, just go for it” side, but in normal operation it’s not usually the thing that protects you against anything (again, because of outbound connections, which only in rare situations you want to or can block).
If it is your only line of defense it’s a good indicator that you should rethink your security. Firewalls today should be seen as a second layer in case you mess up a config for example. For most other things there is a better option. And keep in mind that the firewall is also something that can be misconfigured, which can result in the very thing you wanna prevent, for example a Denial of Service, if you simply block something you actually needed.
Another anecdote: I’ve once seen a situation where a firewall dropping packets resulted in the service sending the packets going down. The developer had a hard time debugging it, because he would have expected a network error (so rejection packets). I’ve also seen people having confusing issues because of dropping ICMP packets. And at this point it feels a bit like if you cannot trust your system’s handling of ICMP packets, should you really trust that system’s firewall?
It’s important to consider the actual situation. Do a threat model, even if just a very basic one. Only start focusing on specific scenarios (things like “what if the attacker is there, but cannot yet do this”) once you’ve done your general homework. It’s really sad when people wanted a secure system and jumped on one interesting case while having gaping holes. It’s also sad when time and money is invested in setting up security tools only to find out that they don’t do anything for the systems in question, like running a WAF that checks for known security vulnerabilities in WordPress and Joomla when neither is even used.
There are reasons, though, for doing nonsensical stuff. Compliance, for example, as in “if you don’t have that, your insurance won’t pay”. A lot of bad security stems from things like that. It can even make you feel like you did something for security, when in reality you only fulfilled a business/compliance need.
This results in situations where an engineer notices an issue with what is being implemented, and then people look through some contract and say “Oh, that’s fine. We don’t need that according to the contract.” It’s hard to really blame anyone here. It’s just hard to put security into a contract; they are usually written by non-engineers, or with the help of engineers doing their best, but also written for someone else you don’t know internally, often generically for many parties. So one ends up with your virus scanner on the database server.
Completely unrelated topic, but okay. I’ll keep PermitRootLogin at its default (prohibit-password). I appreciate the sentiment that anything administrative should, in the best case, be on its own network in the sense of having a general admin interface, but OpenSSH is a bit of an exception for me; in fact I have had setups where it was my admin interface, in the sense that everything admin-related was tunneled through it.
Yeah, having a separate ops/management network is quite a luxury, so 95% of the time I see SSH going over the same NIC as public web traffic. I guess I was talking about “if I ran the zoo” scenarios; I’d always use a dedicated ops/management network if I could, because it’s much simpler to reason about.
Been designing server environment for 25ish years and have been consulting in that area.
I’m sure most people here have more real experience with it than I do.
Then they buy an expensive product that is essentially snake oil, and they don’t even know what it is supposed to do. And then that very security product ends up being used as the added attack surface.
100% this.
This just happened this year to a company that sells firewalls (Cisco), where it turned out that the firewall was actually the giant security hole that had been exploited for years by … “nation states”.
(I’m not throwing stones. I have been responsible for some horrid security holes in the past. Just reporting the facts.)
So from my own POV, for security: simple is better. Make your front door super simple to get through, but only for the traffic you want (http/https typically). Dead hole everything else. (Dead holing TCP connections will waste a little extra RAM and CPU cycles on the inevitable port scanners.) On the inbound traffic NICs, use ufw to also shut down all inbound traffic except the traffic you want; this gives you two layers of front door protection. (I’d also suggest using a *-BSD for the public firewall, and Linux for the servers, so that you don’t have the same exact potential exploits present in both layers.)
If it is your only line of defense it’s a good indicator that you should rethink your security. Firewalls today should be seen as a second layer in case you mess up a config for example.
I only use dumb firewalls (and reverse proxies). The goal is just to be able to focus on the real threats when the traffic gets to the back end, by eliminating 99% of the noise before it gets there. Application code will have 100s or 1000s of potential exploits, so getting time to focus on that area is important for public facing services.
So one ends up with your virus scanner on the database server.
Dear God, no. Having virus scanners running is one of the biggest security risks out there. You would not believe how badly written the “big names” in virus scanning are.
Also, let me know when you publish your book on this topic 😉 … this was a good intro.
All of me knows that firewalls were necessary in the past and are now a mistake. The modern WAF exists so that CISOs and Directors in megacorps can spend $$$ and say “we follow best security practices” after their next security breach. Firewalls no longer have anything to do with security and have everything to do with compliance for compliance sake. Why think when you can follow a checklist?
Because sometimes other organizations won’t work with you if you don’t have that compliance certificate and your insurance will be higher too (if you can get one).
I’d bet that a huge chunk of the people who are in a position to make this decision (as I am for our small company) often just feel forced to follow some rules, not that we’d prefer not to think.
People don’t want to pay for a browser, especially one that is Open-Source, because people don’t want to pay unless there’s scarcity.
Mozilla is built on Google’s Ads, and Google can currently kill them at any time by just dropping their deal. Which means Firefox can’t actually compete with Google’s Chrome, unless they diversify. When Mozilla tries stuff out, like integrating paid services (e.g., Pocket or VPN) in the browser, people get mad. Also, Ads, for better or worse, have been fuelling OSS work and the open Internet.
So, I’m curious, how exactly do you think should Mozilla keep the lights on? And what’s the thinking process in establishing that “greed” is the cause, or why is that a problem?
I understand this frustration, but it’s irrelevant.
Enumerate all consumer projects that are as complex as a browser, that are developed out of donations and that have to compete with FOSS products from Big Tech. Donations rarely work. Paying for FOSS doesn’t work, unless you’re paying for complements, like support, certification, or extra proprietary features.
It’s a fantasy to think that if only we had a way to pay, Firefox would get the needed funding.
Indeed, like every other company and organization under the sun who doesn’t want to depend on only one successful product. Where else would they get the resources for developing new ones?
And don’t forget: Collecting User Data!
I’m getting a nervous twitch every time I read “Firefox” and “privacy” in the same sentence. Being ever so slightly less bad than the competition doesn’t make you “privacy first”.
tbh that does seem like one of the better attempts at squaring the circle of “telemetry is genuinely useful” and “we really don’t want to know details about individuals”?
I’m not so convinced. You basically have to trust that their anonymization technique is doing the right thing, since you can’t really verify what’s happening on the server side. If it actually does the right thing, then it should be easy to poison the results, and, given the subject matter at hand, certain players would have a massive incentive to do so.
You basically have to trust that their anonymization technique is doing the right thing, since you can’t really verify what’s happening on the server side.
This is an oversimplification of the situation. Yes, you need to trust that the right thing is happening server-side, but you don’t need to trust Mozilla. You need to trust ISRG or Fastly, which are independent actors with their own reputation to uphold. Absolutely not perfect, but significantly better than the picture you’re painting here IMO.
Given that the telemetry is opt-out instead of opt-in, there can be no trust. Trust, for me, involves at a bare minimum consent.
I don’t mind them collecting data, but I don’t want my browser to be adversarial — the reason I stay off Chrome is because I have to check its settings on every release, and think hard about “how is this new setting going to screw me in the future?”
Of all organisations, I hoped Mozilla would have understood this, especially as it caters to the privacy crowd.
I don’t think it’s a problem with organizational understanding. I think Mozilla understands this perfectly well, but they also understand that you’ll suck it up and keep using Firefox because there’s no better option.
I wrote the initial version of Firefox telemetry. My goal was to be different from projects like Chrome, in that we could make the telemetry available on the public web. E.g., I could not make further progress on Firefox perf without data like https://docs.telemetry.mozilla.org/cookbooks/main_ping_exponential_histograms . The hardest part of this was convincing the privacy team that Firefox was going to die without perf data collection. As soon as we shipped that feature, we found a few dozen critical performance problems that we were not able to see in-house.
The other hard part was figuring out the balance between genuinely useful data and not collecting anything too personal. In practice it turns out it’s not useful for perf work to collect anything that resembles tracking. However, it’s a slippery slope; since my days, they have gotten greedy with data.
I gave up trying to “manage” profiles in Firefox, and just run Firefox from multiple Linux-level users. Yes, that is more resource overhead, but it keeps things completely, utterly separate and unshared. It’s not as much UX friction as one might expect.
Yes. Either too many clicks, or too much supposedly-separate data or configuration is shared.
In my current setup, I go to my always-open multi-tab Konsole, click on the persona of choice (shell already signed in as that user), then up arrow, Enter to launch the persona’s Firefox.
What’s the use case for multiple browser profiles for the same person? I’ve always seen those features as bloat, and one-profile per OS user seems like the “right” way to handle it. That’s how all other software works, I don’t see why browsers need to come up with their own way of doing it.
I, personally, have multiple personalities :) I have one for private stuff, one for the company I’m employed by, one for the company I’m actually working for (agency work), one for the legacy company (we’ve been acquired; some tools changed, some remained), and one for education (I teach frontend).
All of these have their own set of tools I need, or maybe the same tool that just struggles to provide a smooth multi-profile experience (looking at you, Microsoft). Sometimes I just don’t want my students to see an accidentally opened tab from my NDA project, or bookmarks, etc.
I almost never cross-open (so these boundaries are actually sensible), but when I need, I use BrowserTamer for that.
For web developers like me, it’s running with a different set of extensions for everyday use, another for development, and one without any extensions at all as a baseline.
I heavily use profiles. Mostly it’s because I set them up before containers and process isolation were a thing; and I never got around to switching to containers.
I like to have tons of tabs open (I unfortunately unlearned to use bookmarks), but I don’t always want/need all of them so I can just close one of the browsers when I don’t need it to free some RAM and start it again later. Do containers allow that?
I don’t see them as bloat because it’s essentially just reading the config from a different directory, while containers probably needed lots of changes to make them work.
That’s how all other software works, I don’t see why browsers need to come up with their own way of doing it.
It’s not that uncommon for software to allow configuring their config/data directory, which is roughly what profiles do.
I find it very useful for task switching. One browser window/profile has technical documentation and such for projects I’m working on, one has my normal daily email/social/rss tabs and general browsing, and one has guides and things for various games I’m playing. I’ve tried various tab grouping extensions, but nothing beats just being able to alt-tab to the browser tabs I want when I switch tasks.
I’ve “fixed” my Firefox profile problem with help from Autohotkey.
I defined multiple profiles and launch different Firefox instances using AutoHotkey keyboard shortcuts. Works well for me.
It’s… just what it says on the tin. man useradd, and go to town with as many personas as you need. They all use the same /usr/bin/firefox, but each has its own ~/.mozilla/firefox. Each one has its own set of tabs and windows, all closeable at once with a single Quit (for that persona, leaving other personas’ Firefoxes still open).
Minor technical detail: you’ll need to ensure your X Windows (or Wayland or whatever) setup can handle different Linux users drawing on the one user’s X display.
My “solution” has been to run multiple versions of Firefox (e.g. the standard version and the Developer edition), each with a different default profile, but using the same Firefox Sync account. They get treated as separate windows/applications, I still have the same history/bookmarks/credentials, and so on. It’s a terrible approach, but it’s worked quite well for me. I really hope their improvements to profiles make things easier.
Or even better, is it smart enough to not send ChatGPT every password I type, or does it just assume the AI is smart enough to know it should forget them?
You have to ask for help in a separate window (Composer). It does nothing automatically even if you have provided an API key without which nothing works anyway.
Return commands suitable for copy/pasting into \(shell) on \(uname). Do NOT include commentary NOR Markdown triple-backtick code blocks as your whole response will be copied into my terminal automatically.
The script should do this: \(ai.prompt)
It’s a very safe prompt, can probably be scried into being a bit better, but realistically it’s probably fine.
tell me if the suggested shell command will delete files
tell me if the suggested shell command is irreversible
tell me if the suggested shell command has side effects that are not immediately clear
Or even more basic: if you use the prompt directly in ChatGPT, it will also explain what every step in the suggested command does. So why not ask it to return the answer in a more structured format, so that the commentary is actually included and I can determine that it is correct.
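Something along these lines, purely as an illustration (the field names and the parsing step are made up here, not anything iTerm2 actually does):

// Ask the model for JSON instead of a bare command, then parse it before showing anything.
const task = 'find all files larger than 1 GB';   // stand-in for the user's request
const prompt =
  'Return only a JSON object with the fields "command", "explanation" and ' +
  '"destructive" (true if the command deletes or overwrites data). ' +
  'The task: ' + task;

function handleModelResponse(responseText) {
  const { command, explanation, destructive } = JSON.parse(responseText);
  if (destructive) console.warn('Careful:', explanation);
  return command;   // only this part would be offered for copy/paste
}

console.log(handleModelResponse(
  '{"command":"find / -size +1G","explanation":"lists files over 1 GB","destructive":false}'
));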
I don’t think it will produce a rm -rf /* if you don’t tell it to. You shouldn’t blindly trust the output it generates anyway; please read it before running it.
How do you know? The prompt doesn’t say anything about avoiding that specific command.
How do you know it will not return rm -rf / or a similar destructive command?
Having seen quite a bit of random and incorrect suggestions from ChatGPT, I don’t think you can confidently say that there is a safeguard built-in that we do not see or that it will not consider destructive or dangerous commands.
I mean, what is your statement based on? It sounds more like wishful thinking :-)
I don’t know anything. But I believe it is likely that “rm -rf /” is not a response that would be considered “suitable for copy pasting”. That isn’t wishful thinking, it’s simply based on the assumption that ChatGPT has been trained on various commands and that the context around “rm -rf /” is that it is destructive and not suitable for being copy pasted.
For the record, I would likely add more safeguards to the prompt. Something like:
“Be sure to make it clear to the user when a command may be destructive”.
From what I’ve read these commands aren’t automatically run though so it doesn’t matter.
# rm -rf /
(@) Woah hang on, that looks dangerous. Are you super sure about this?
(@) This'll delete every file on your drive.
(-) Yeah I've already set up a new system anyway and I just wanna do this for fun.
(@) Alright...
Does anyone know an iPhone app or webapp where I could plug in my OpenAI key to use with ChatGPT? I use ChatGPT infrequently, so $20 a month is a waste if I only use it once or twice a day.
You can use the Playground interface directly on their site to work with their models on a pay-as-you-go basis: https://platform.openai.com/playground - it works fine on mobile web too.
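If you’re comfortable with a few lines of code, the same pay-as-you-go billing also works from anything that can make an HTTP request; a minimal sketch (Node 18+ assumed, model name just an example, key read from an environment variable):

// Minimal pay-as-you-go call with your own key.
const apiKey = process.env.OPENAI_API_KEY;

const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + apiKey,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);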
Supposedly the new GPT-4o model will be available to free users at some point very soon.
I thought OpenAI’s iPhone app is free to use as long as you are fine with limitations that come with it (not using the latest model…) or have I misunderstood what you are after?
No, I want to use the paid pro models, which are not free. Simon just answered my question above. Let me check if the Playground supports multimodal as well.
It sounds like the developer machine will run quite loud and hot for 2 minutes and prevent the developer from doing any further work while waiting for the automated tests to complete. If it’s a pre-commit hook, it’s going to get annoying pretty fast.
That does depend on what you use, on how you write tests, on how you use git, but yes, this definitely does not work for everybody. I think DHH recognized that fact in the post, just not as explicitly.
The impressive part is that their chip manufacturing effort is done by themselves; being a generation or two behind is totally fine for most things, I feel. It’s actually pretty impressive that even with constant sanctions and attacks they can come up with a working chip. Another thing to admire is that no matter how much western media and influencers attack their tech, they keep working on it and perfecting it. We here in the US should manufacture our own stuff too.
For me, the multiple-profile story in Firefox has always been half-baked. It seems to me there are multiple competing features attempting to solve the problem in different ways, and none ever fully jibed with me. For example, about:profiles is nice, but the two instances of Firefox have the exact same icon and exact same name (at least on macOS), so when you are alt-tabbing or picking from the dock it’s a toss-up which one you actually get. Once they’re open it’s fine, since you can theme them.
Then you have tab containers which work to isolate specific tabs but are pretty clunky when opening links that you expect to take you to a logged in experience.
I feel like Chrome actually nailed this experience pretty well with the way they handle profiles. It feels like a first-class experience and isolates things in a way that’s easy to switch between.
Perhaps I’m just being picky, but does anyone have a preferred way of managing this sort of experience in Firefox, where you might be on a work machine but also want to be logged into some personal accounts, kept separate enough that they don’t co-mingle, so that if you shared your screen you wouldn’t suddenly be dealing with a mixture of personal stuff and work tabs?
Then you have tab containers which work to isolate specific tabs but are pretty clunky when opening links that you expect to take you to a logged in experience.
I actually really like the container feature, I don’t even bother with the profiles.
You can specifically request domains open within a specific container to prevent issues with opening links as you mentioned, but that mostly works for sites you can separate out easily; it isn’t perfect for things that you’d like to open multiple of at the same time in different containers.
For example, opening the DNS provider you use at work in the work container, and then needing to open the same site for your personal DNS in your “personal” container. But honestly I still like the separation here. A little clunky, but worth the security.
I use a tile manager and rarely minimize my windows. I prefer to switch desktops (macos), and usually aim to limit myself to 1 work “window” and 1 personal “window”, so I guess I don’t need the additional title context that you use for switching.
I use the bookmark bar pretty heavily for commonly accessed sites so mixing personal + work bookmarks just makes things messy for my workflow. Profiles fix this but the UX there is in my opinion quite sub-par compared to Chrome’s profiles.
I actually really like the container feature, I don’t even bother with the profiles.
I really like it as well. It’s replaced 90% of my old use of the profile feature. Now I only use profiles for cases where I truly need multiple, different, mutually exclusive logins. (e.g. some clients will want me to work on github using an account tied to their organization, and I don’t always want to tie my main account to their org.)
My approach (in Firefox) is to have two windows, one for work and one for the rest. I also use Simple Tab Groups for grouping tabs in way that makes sense to me and use Firefox containers pinned to group (and hence window) to keep things separate. Switching between windows with keyboard is quick and there’s only one icon in the list of opened apps. Before I screenshare, I tend to minimize private window.
I agree that profile support feels half-baked and I never started using it mainly because I don’t feel the need to use a different set of extensions or some important settings. I probably should. If I did though, I’d use a different theme so the windows would be noticeably different.
I don’t like profiles in Chrome at all because they tend to spill over into everything and it is difficult to not suddenly be logged in left and right into stuff.
I don’t like profiles in Chrome at all because they tend to spill over into everything and it is difficult to not suddenly be logged in left and right into stuff.
I am curious what you mean by this? In my experience Chrome profiles are completely isolated from each other, meaning different extensions, bookmarks, etc. Or do you mean that if you click on a link it can sometimes be hit or miss whether it opens in the correct profile or not (I think it’s due to whichever profile window you last had active)?
For me, I personally rely on a separation of bookmarks and extensions for work vs. personal, so I am stuck either using the half-baked Firefox profiles or just using Chrome.
I assume this is me falling for Google’s nudges and not knowing how to avoid what I don’t want, but when I tried to create different profiles, I was pushed into logging in with my Google account, and then I would automatically also be logged into Google services (which I’m normally not), and then when visiting something like Reddit, it would automatically create an account for me if I didn’t notice the notification and prevent it quickly enough.
I’m sure all this can be avoided, because I assume the internet would be on fire if this were a common experience, but I’m also otherwise not a fan of the browser beyond needing it for my (webdev) work, so I didn’t spend too much time figuring it out.
Oh, yes, I see. They definitely push you to use your Google account, but you can create profiles without tying them specifically to a Google account; you lose some features like cross-platform sync, etc., I believe.
After Google Reader I also used Tiny Tiny RSS, but I really disliked maintenance and even more its maintainer.
I now use Bazqux and couldn’t be happier. The web version is great by itself and it also works well with 3rd party clients I use (Feedler Pro).
I tried it when it launched. It’s leaps and bounds superior to all others. Never really paid for it, as I am not a consumer of RSS feeds. There was a time when most sites provided it. Then came the walled-garden mentality, and I basically find no value in RSS reading.
Good content, but typography is “killing” me. Specifically the choice of Roslindale as the main text typeface. Please consider using something that breathes a little more since even tweaking line-height is not enough.
My brain cannot process that, I just see a wall of text. If it wasn’t for reader mode I would have bailed.
I wrote something similar a while back. Or a shorter piece, on “Boring” specifically.
I think the “Boring” movement, when you read the article and slide deck, are unimpeachable, and a healthy response to “novelty-driven” development of the late aughts. We were pretty high on Paul Graham essays about Common Lisp and some new language-building technologies (ANTLR simplified parsing, LLVM did a lot for compiled languages) so we thought novel languages in particular would help us. And not for nothing: Ruby on Rails did explode, and Ruby’s weirdness is often credited for DHH’s ability to make that framework, and for people to extend it.
That said, like “Agile,” the actual ideas behind “Use Boring Technology” got dropped in favor of a thought-terminating cliche around “use what’s popular, no matter what.” So you have early startups running k8s clusters, or a node_modules folder that’s bigger and uses more tech than the entire Flash Runtime did. To borrow a paragraph from the top comment as I’m writing this:
People say “Use Boring Technology” to mean “only use Java, Python, Node, Go, or Ruby; only deploy in commercial clouds with Docker” but never investigate alternatives, or talk too deeply about the tech. It’s true that the properties of foamed glass gravel were known and extensively tested; what if they just ignored all of that and said “use concrete anyway; I don’t feel like having to think about this new thing I haven’t used recently.”
Anyway, glad to see this discussion.
I’m really looking forward to the boring version of kubernetes. A working cluster is pretty great for deploying software. It’ll be great when there’s something that mostly just works that does much the same thing.
“Choose boring technology” basically means first check if technology you and your team already use and understand well (enough) to be able to support it could perform the task. Look at alternatives only as second step. That’s it.
I’d say novelty-driven is much younger than late oughts. At least when it comes to web frontend.
eh, I don’t think “that’s it.” Maybe it’s just a function of what we mean by “understand well (enough) to be able to support it” and/or what we’ve seen in our various careers, but I’ve worked with a lot of people who didn’t really understand their technologies that well. I wrote a bit about it here:
To your point: there’s a learning curve to picking a different tech than you know well enough already. But the point I always felt: these people are never that good at Python (or JavaScript, or Ruby) anyway; if they spent 2-3 days working through a PragProg book or two, they’d know something like Elixir as well as they know Python.
(ofc my ideal would be hire people who are unafraid of learning new technology in the first place, but that’s harder when you don’t control the company, and/or you’re doing the high-growth VC-backed playbook, where hiring fast is a way to inflate your share price)
Again, I think there’s real merit in just picking something you and the team have used before, entirely because you and the team have used it before. I think that’s usually fine. I just think the needle is too far in the direction of “safety, boring”; everyone is miserable, and I don’t see it producing many innovative, successful, lasting companies either.
Thanks for that link, I somehow missed it when it was on lobste.rs.
I also agree with you regarding the shallowness of people’s knowledge regarding platforms. Most often, one starts with a project and just gets the thing done with minimal knowledge of the underlying technology. There’s only a real need to dig in when the shit really hits the fan. By which time the original authors have probably already moved on (using new, different technology). But isn’t that more of a reason to choose “boring” technology? So that you can dig in deeper and get more experience?
The author of this article has a reading comprehension problem. The linked story about the .yu domain does not remotely corroborate his summary. As someone who studied at the University of Ljubljana at that time, right across the street, and never heard anything so wild (which I’m sure I would have), I have difficulty believing that it happened as described.
Which parts are you saying it doesn’t corroborate? There are a lot of assertions of fact in it, and also a bunch of subjective stuff.. from a quick read of both pieces, it does look to me like the most important details of the alleged heist are mentioned in the cited source, but I may be failing to understand the significance of something that seems minor enough that it doesn’t occur to me to doubt it, or something like that…?
Where is the Belgrade heist mentioned?
It would be insanely crazy for something like that to happen during the ongoing Balkan war, and also completely unnecessary, since the .yu domain had been administered from Ljubljana since its start in 1989. The only “heist” was some colleagues allegedly breaking into her office, copying software and disconnecting her network. All of them worked in the same building and no computers were moved.
The Dial piece, cited as the source, says:
I agree that this doesn’t say anything as to whether the former colleagues thought of what they did as a “heist”, and it quotes Jerman-Blažič’s opinion that the people who took the software didn’t know how to use it without help, but she wouldn’t have been in a position to see what happened afterwards, since, as the piece also says, the next part was secret. However, the last sentence is clear-cut that the domain was in fact run by ARNES, at least according to this publication. That’s the substance of the “heist” claim, surely?
I’m unable to find anything in the Dial piece that speaks to whether these people worked in the same building at the time of the alleged theft, as you suggest. I may not be looking closely enough. You’re right, that does bear on the claims in the Every piece, which says the alleged thieves traveled there in connection with the alleged theft.
Your point that no computers were moved doesn’t seem relevant to me; the Every piece says “On arrival, they broke into the university and stole all the hosting software and domain records for the .yu top-level domain—everything they needed to seize control”, which notably makes no claim about hardware, only software and data. In general, in any sort of computer-related theft, real or fictional, the software is the important payload… I don’t really see what difference the hardware would make to anyone.
I think these authors are describing the same factual claims, while differing substantially on who they view as the “protagonist”. That may account for why it feels like they’re talking about something different? I do think it would be entirely reasonable to point out that there’s very little information about how the participants really felt about the events at the time, or how they understood the purpose of their actions.
I do think the author of the Every piece goes perhaps a little too far in inferring people’s intentions, and leaves out the important caveat that the copy of the domain that ARNES ran was kept secret. I think perhaps the author doesn’t have a deep understanding of DNS, and incorrectly believes that running one copy of a domain somewhere, privately, implies that nobody else could be running it at the same time. That does seem like a material error, and quite unfortunate, though it’s not about the so-called heist per se.
So… I think I see why you object, although I’d stop short of saying the story was invented from whole cloth. Does that help?
I don’t have an issue with The Dial piece. I definitely made a mistake about servers; no idea why. I guess reading comprehension issues on my part as well.
My comment was exclusively about the Every piece, which has a truly awful summary of The Dial piece. I disagree that they are the same factual claims, because to me there’s a huge difference between Slovenian academics going to a different country to steal a domain and software, and what was essentially an internal dispute over who would continue running the .yu domain, which from its beginning until the mid-90s had always been administered from Ljubljana, in present-day Slovenia.
That makes sense, now that I understand the distinction you’re making. It does seem like an important one. Thanks for talking it through and helping me understand.
Ideally speaking? Sure. But historically? I can’t help but think back to the AC/DC stuff of the late 1800s.
Besides, “fan” is kind of ambiguous; these days, it informally means something like “mild supporter”, but the word can be weasel’d to mean more narrowly “extreme crusader” to suit, &c.
I’m not real convinced by the Walter Sobchak calmer-than-you-are (& sober-er, too) 20/10 hindsight eagle-eyed type attitude, but I don’t really disagree, either. Some things just don’t work, as built.
This aligns well with my thoughts. I would describe myself as a fan of Python because it fitted me like a glove when I discovered it in the 90s, and it was the first (also only) language in which I wrote code of some length that worked correctly the first time, before I became experienced in it.
However, my use of Python does not form part of my identity. There’s plenty I’m not fond of, but it remains one of my favorite tools, with an attachment that is not purely rational.
Somewhat off-topic, but I do feel the world would be in a better place if people’s identities were formed around cores as small as possible.
Warning: This does not come with a wordless booklet for assembling your own. All you can do is watch an in-depth video and dream.
And vote for it. If it achieves 10k votes, then it gets considered by Lego for production.
I voted because I’d buy one immediately if it became available.
I’m voting too, but honestly, I’ll eat my hat if the lego team actually considers this.
The designer himself even says how brittle and fragile the design is. I also think it’s using at least some 3D-printed parts? But I don’t think the designer says whether those are 3D-printed versions of standard bricks or completely custom parts. I don’t know exactly how much work the designer or the Lego team would put into retooling it for a proper set, but this is very much an early draft, even if it is the fourth iteration. I’d love to see it go all the way, though. I hope that if it gets selected, it’ll be properly re-designed as a more stable contraption.
As far as I know they pretty much always retool and change designs even when they look perfect and highly polished. So they would definitely redo this one for reasons you listed, if they pick it up.
I’m not optimistic they’ll pick it either, but it would be so great. I would think that we developers are over-represented among adult Lego builders, and that they know this, so here’s me hoping.
Aren’t you worried it will come for our jobs?
I love and appreciate it, but I watched the whole video and was sad not to see an example program, however useless; all we get is essentially some bit flipping. I’m in awe of the design, though.
NOYB does great work, but it is wrong on this one. Privacy Preserving Attribution is a sensible, well-designed feature and a giant step in the right direction for balancing privacy and the primary economic model of the Web, ads. I wrote more about why at https://alpaca.gold/@Jeremiah/113198664543831802
I don’t know the specifics of NOYB’s complaint, but reading your thoughts, I think you’re missing an important point:
Sending data over the web, at the very least, leaks your IP, possibly your device type as well. It doesn’t matter how anonymized the data contained in the envelope is. Making a web request that sends some data, any data, will always be leaky compared to making no web requests, which means that the user needs to trust the endpoint the browser is communicating with.
And this is also where NOYB’s complaint may have merit because any service would work just fine without those PPA requests. And let’s be clear, PPA is relevant for third-party ads, less for first-party ones. In other words, user data is shared with third-parties, without the user expecting it as part of the service. Compared with browser cookies, a feature that enables many legitimate uses, PPA is meant only for tracking users. It will be difficult for Mozilla or the advertising industry to claim a legitimate interest here.
Another point is that identifying users as a group is still a privacy violation. Maybe they account for that, maybe people can’t be identified as being part of some minority via this API. But PPA is still experimental, and the feature was pushed to unsuspecting users without notification. Google’s Chrome at least warned people about it when enabling features from Privacy Sandbox. Sure, they used confusing language, but people that care about privacy could make an informed decision.
The fact that Safari already has this feature on doesn’t absolve Firefox. Apple has its issues right now with the EU’s DMA, and I can see Safari under scrutiny for PPA as well.
Don’t get me wrong, I think PPA may be a good thing, but the way Mozilla pushed this experiment, without educating the public, is relatively disappointing.
The reason I dislike Chrome is that it feels adversarial, meaning that I can’t trust its updates. Whenever they push a new update, I have to look out for new features and think about how I can get screwed by them. For example, at least when you log into your Google account, Chrome automatically starts sharing your browsing history for the purpose of improving search, and according to the ToS they can profile you as well, AFAIK.
Trusting Firefox to not screw people over is what kept many of its users from leaping to Chrome, and I had hoped they understood this.
The least they could do is show a notification linking to some educational material, instead of surprising people with a scary-looking opt-out checkbox (which may even be problematic under GDPR).
The problem with this is that it claims too much. You’re effectively declaring that every web site in existence is in violation of GDPR, because they all need to know your IP address in order to send packets back to you, which makes them recipients and processors of your personal data.
This sort of caricature of GDPR is one reason why basically every site in Europe now has those annoying cookie-consent banners – many of them are almost certainly not legally required, but a generic and wrong belief about all cookies being inherently illegal under GDPR without opt-in, and a desire on the part of industry for malicious compliance, means they’re so ubiquitous now that people build browser extensions to try to automatically hide them or click them away!
Sorry to say this, but this is nonsense.
The GDPR acknowledges that the IP is sent alongside requests, and that it may be logged for security purposes. That’s a legitimate interest. What needs consent is third-party tracking with the purpose of monetizing ads. How you use that data matters, as you require a legal basis for it.
Cookies don’t need notifications if they are needed for providing the service that the user expects (e.g., logins). And consent is not needed for using data in ways that the user expects as part of the service (e.g., delivering pizza to a home address).
The reason most online services have scary cookie banners in the EU is because they do spyware shit.
Case in point, when you first open Microsoft Edge, the browser, they inform you that they’re going to share your data with over 700 of Microsoft’s partners, also claiming legitimate interest for things like “correlating your devices” for the purpose of serving ads, which you can’t reject, and which is clearly illegal. So Microsoft is informing Edge users, in the EU, that they will share their data with the entire advertising industry.
Well, I, for one, would like to be informed of spyware, thanks.
Luckily for Mozilla, PPA does not do “third-party tracking with the purpose of monetizing ads”. In fact, kind of the whole point of PPA is that it provides the advertiser with a report that does not include information sufficient to identify any individual or build a tracking profile of an individual. The advertiser gets aggregate reports that tell them things like how many people saw or clicked on an ad but without any sort of identification of who those people were.
This is why the fact that, yes, technically Mozilla does receive your IP address as part of a web request does not automatically imply that Mozilla is doing processing of personal data which would trigger GDPR. If Mozilla does not use the IP address to track you or share it to other entities, then GDPR should not have any reason to complain about Mozilla receiving it as part of the connection made to their servers.
As I’ve told other people: if you want to be angry, be angry. But be angry at the thing this actually is, rather than at a made-up lie about it.
No, they do it because (as the other reply points out) they have a compliance department that tells them to do it even if they don’t need to, because it’s better to just do it.
There’s a parallel here to Proposition 65 in the US state of California: if you’ve ever seen one of those warning labels about something containing “chemicals known to the State of California to cause cancer”, that’s a Proposition 65 warning. The idea behind it was to require manufacturers to accurately label products that contain potentially hazardous substances. But the implementation was set up so that the only real penalty is for failing to warn when you should have.
So everyone just puts the warning on everything. Even things that have almost no chance of causing cancer, because there’s no penalty for a false cancer warning and if your product ever is found to cause cancer, the fact that you had the warning on it protects you.
Cookie banners are the same way: if you do certain things with data and don’t get up-front opt-in consent, you get a penalty. But if you get the consent and then don’t do anything which required it, you get no penalty. So the only safe thing to do is put the cookie consent popup on everything all the time. This is actually an even more important thing in the EU, because (as Europeans never tire of telling everyone else) EU law does not work on precedent. 1000 courts might find that your use of data does not require consent, but the 1001st court might say “I do not have to respect the precedents and interpretations of anyone else, I find you are in violation” and ruin you with penalties.
Mozilla does not have a legitimate interest in receiving such reports from me.
They can look at their web server logs?
Those are fairly useless for this purpose without a lot of cleaning up and even then I’d say it is impossible to distinguish bots from real visits without actually doing the kind of snooping everyone is against.
This requires no third party?
You are not allowed to associate a session until you have permission for it, and you don’t have that on first page load if the visitor didn’t agree to it on a previous visit.
This whole described tracking through a website is illegal unless you either have a previous agreement or you genuinely need the session for the pages to even work, which you will have a hard time arguing for browsing a web shop.
Using a third party doesn’t solve anything, because you need permission to do this kind of tracking anyway. My argument, however, was that you can’t learn how many people saw or clicked an ad from your logs, because some saw it on other people’s pages or on a search engine for which you don’t have logs, and A LOT of those clicks are fake, and your logs are unlikely to be rich enough to tell which.
What you want to learn about people’s behavior is more than the above, which I’m sure you’d know if this were actually remotely your job.
“What you want to learn about people’s behavior” is one thing, “what you should be able to learn about people’s behavior” is something else.
IMHO, it’s not the job of those neck-deep in the industry to set the rules of what’s allowed and not.
I’m not sure anyone here is arguing that these are the same thing and certainly not me.
I’m not sure if you are implying that I am neck-deep in the ad industry, but I certainly never have been. I am, however, responsible also for user experience in our company and there’s a significant overlap in needing to understand visitor/user behavior.
We go to great lengths to comply not only with the letter of the law but also with its spirit, which means we have to make a lot of decisions less informed than we’d prefer. I am not complaining about that either, but it does bother me when every attempt to learn ethically is described as either unnecessary or sinister.
The condition for requiring a warning label is not “causes cancer” but “exposes users to something that’s on this list of ‘over 900 chemicals’ at levels above the ‘safe harbor levels’”, which is a narrower condition, although maybe not much narrower in practice. (I also thought that putting unnecessary Prop. 65 warning labels on products had been forbidden (although it remains common), but I don’t see that anywhere in the actual law now.)
No, the reason many have them is that every data privacy consultant will beat you over the head if you don’t have an annoying version of it. Speaking as someone on the receiving end of such reports.
No, you must have an annoying version of it because the theory goes, the more annoying it is the higher the chance the users will frustratingly click the first button they see, e.g. the “accept all” button. The job of privacy consultants is to legitimize such practices.
Which part of “speaking as someone on the receiving end of such reports” was not clear?
Do you think they are trying to persuade us to have more annoying versions so we could collect more information, even though we don’t want to, for the benefit of whom exactly?
My guess is that you don’t have much experience working with them or with what those reports actually look like.
Well, what I do know is that the average consent modal you see on the internet is pretty clearly violating the law, which means that either the average company ignores their data privacy consultants, or the data privacy consultants that they hire are giving advice designed to push the limits of the law.
Yes, IP addresses are personal data and controlled under GDPR, that’s correct. That means each and every HTTP request made needs freely given consent or legitimate interest.
I request a website, the webserver uses my IP address to send me a reply? That’s legitimate interest. The JS on that site uses AJAX to request more information from the same server? Still legitimate interest.
The webserver logs my IP address and the admin posts it on facebook because he thinks 69.0.4.20 is funny? That’s not allowed. The website uses AJAX to make a request to an ad network? That isn’t allowed either.
I type “lobste.rs” into Firefox, and Firefox makes a request to lobsters? Legitimate interest. Firefox makes an additional request to evil-ad-tracking.biz to tell them that I visited lobsters? That’s not allowed.
Balancing, lol. For years ad providers ignored all data protection laws (in Germany, way before GDPR) and then the GDPR itself. They were stalking all users without consent. Then the EU forced the ad companies to follow the law and at least ask users whether they want to share private data. The ad companies successfully framed this as bad EU legislation. And now your browser wants to help ad companies stalk you. Framing this as balancing is ridiculous.
Just because there is no nametag on it doesn’t mean it’s not private data.
Sorry for the bad comparison, but it’s also reasonable for a thief to want to break into your house. It’s still illegal. Processing personal data is illegal, with some exceptions. Yes, there is “legitimate interest”, but it has to be balanced against the “fundamental rights and freedoms of the data subject”. I would say “I like money” isn’t enough to fall under this exception.
“But the other one is also bad” could be an argument, iff you can prove that this is willfully ignored by others. There are so many vendors pushing such shit to their paying customers that I would assume this was simply overlooked. Apple should disable it too, because as far as I can see it’s against the law (no, I’m not a lawyer).
And no, I don’t say ads are bad or that you shouldn’t be allowed to do some sort of customer analysis. But just as the freedom of your fist ends where my nose starts, the freedom of market analysis ends where stalking customers begins. I know it’s not easy to define where customer analysis ends and stalking starts, but currently ad companies are miles away from that line. So stop framing this as being about the poor little advertisers.
The thing that makes me and presumably some other people sigh and roll our eyes at responses like this is that we’re talking about a feature which is literally designed around not sending personal data to advertisers for processing! The whole point of PPA is to give an advertiser information about ad views/clicks without giving them the ability to track or build profiles of individuals who viewed or clicked, and it does this by not sending the advertiser information about you. All the advertiser gets is an aggregate report telling them things like how many people clicked on the ad.
If you still want to be angry about this feature, by all means be angry. Just be angry about the actual truth of it rather than whatever you seem to currently believe about it.
The only problem I see is that Mozilla is able to track and build profiles of individuals. To some extent, they’ve always been able to do so, but they’ve also historically been a nonprofit with a good track record on privacy. Now we see two things in quick succession: first, they acquire an ad company, and historically, when a tech company acquires an ad company, it’s being reverse-acquired. Second, they implement a feature for anonymizing and aggregating the exact kind of information that advertising companies want (which they must, in the first place, now collect). PPA clearly doesn’t send this information directly to advertisers. But do we now trust Mozilla not to sell it to them separately? Or to use it for the benefit of their internal ad company?
Except they aren’t! They’ve literally thought of this and many other problems, and built the whole thing around distributed privacy-preserving aggregation protocols and random injected noise and other techniques to ensure that even Mozilla does not have sufficient information to build a tracking profile on an individual.
And none of this is secret hidden information. None of it is hard to find. That link? I typed “privacy preserving attribution” into my search engine, clicked the Mozilla support page that came up, and read it. This is not buried in a disused lavatory with a sign saying “Beware of the Leopard”. There’s also a more technical explainer linked from that support doc.
Which is why I feel sometimes like I should be tearing my hair out reading these discussions, and why I keep saying that if someone wants to be angry I just want them to be angry at what this actually is, rather than angry at a pile of falsehoods.
How do I actually know that Mozilla’s servers are implementing the protocol honestly?
How do you know anything?
Look, I’ve got a degree in philosophy and if you really want me to go deep on whether you can know things and how, I will, but this is not a productive line of argumentation because there’s no answer that will satisfy. Here’s why:
Suppose that there is some sort of verifier which proves that a server is running the code it claims to be; now you can just reply “ah-ha, but how do I trust that the verifier hasn’t been corrupted by the evil people”, and then you ask how you can know that the verifier for the verifier hasn’t been corrupted, and then the verifier for the verifier for the verifier, and thus we encounter what is known, in philosophy, as the infinite regress – we can simply repeat the same question over and over at deeper and deeper levels, so setting up the hundred-million-billion-trillionth verifier-verifier just prompts a question about how you can trust that, and now we need the hundred-million-billion-trillion-and-first verifier-verifier, and on and on we keep going.
This is an excellent question, and frankly the basis of my opposition to any kind of telemetry bullshit, no matter how benign it might seem to you now. I absolutely don’t know whether it’s safe or unsafe, or anonymous or only thought to be anonymous. It turns out you basically can’t type on a keyboard without somebody being able to turn a surprisingly shitty audio recording of your keyboard into a pretty accurate transcript of what you typed. There have been so many papers demonstrating that a list of the fonts visible to your browser can often uniquely identify a person. Medical datasets have been de-anonymised just by using different bucketing strategies.
I have zero confidence that this won’t eventually turn out to be similar, so there is zero reason to do it at all. Just cut it out.
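To make the font-list point above concrete, here is a minimal sketch, assuming a browser context, of the classic width-measurement trick fingerprinting scripts use. This is my own illustration, not anything from the comment: the function names and candidate fonts are hypothetical examples.

```ts
// Detect which candidate fonts are installed by comparing rendered text widths
// against generic fallback fonts; the resulting set is a surprisingly identifying
// signal when combined with other "harmless" data points.
function detectFonts(candidates: string[]): string[] {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;
  const probe = "mmmMMMwwwlli01";
  const generics = ["monospace", "serif", "sans-serif"];

  const widthWith = (font: string): number => {
    ctx.font = `72px ${font}`;
    return ctx.measureText(probe).width;
  };
  const baseline = generics.map(widthWith);

  // A candidate counts as installed if it changes the measured width relative
  // to at least one generic fallback.
  return candidates.filter((name) =>
    generics.some((generic, i) => widthWith(`'${name}', ${generic}`) !== baseline[i])
  );
}

// e.g. console.log(detectFonts(["Fira Code", "Comic Sans MS", "Helvetica Neue"]));
```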
If there’s no amount of evidence someone could present to convince you of something, you can just say so and let everyone move on. I don’t like arguing with people who act as if there might be evidence that would convince them when there isn’t.
It’s a perfectly legitimate position to hold that the only valid amount of leaked information is zero. You’re framing it as if that was something unreasonable, but it’s not. Not every disagreement can be solved with a compromise.
I prefer to minimize unnecessary exposure. If I visit a website, then, necessarily, they at a minimum get my IP address. I don’t like it when someone who didn’t need to get data from me, gets data from me. Maybe they’re nice, maybe they’re not nice, but I’d like to take fewer chances.
trust lost is hard regained. The ad industry is obviously in a hard place here.
The thing that really leaves a bitter taste in my mouth is that it feels like “the ad industry” includes Mozilla now.
shouldn’t be surprising; it’s been their #1 funding source for years…
I like your take on this, insomuch as “it’s better than what we currently have”.
I don’t agree with this; it wasn’t even possible to know until about 20 years ago. The old ad-man adage goes that “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Well, that’s just the price you pay for producing material that is hardly ever a benefit to society.
Funnily enough there does seem to have been a swing back towards brands and publishers just cutting all middle men out and partnering up. This suggests to me that online ads aren’t working that well.
This, to me, is so incredibly naive, and I’m speaking as someone who doesn’t like ads. How in the world would anyone hear about your products and services without them, especially if they are novel?
Imagining that every company, no matter how small or new, sits on tons of money it can waste on stuff that is ineffective seems unreasonable. Having ads be an option only for companies that are already successful enough doesn’t seem particularly desirable from the point of view of the economy.
I’m as much against snooping, profiling and other abuses as the next guy, but I disagree with seeing all tracking, no matter how privacy-preserving, as inherently bad.
If your company can’t survive without ad tech, it should just cease to exist.
Why? Justify that. What is it about a company requiring advertising that inherently reduces the value of that company to 0 or less? If I have a new product and I have to tell people about it to reach the economic tipping point of viability, my product is worthless? Honestly, I find this notion totally ridiculous - I see no reason to connect these things.
I never said anything about advertising, I said ad tech. Go ahead and advertise using methods that don’t violate my privacy or track me in any way.
Now you’re conflating “ad tech” with tracking. And then what about tracking that doesn’t identify you?
What do you think the ad tech industry is? And I simply do not consent to being tracked.
So if an ad didn’t track you you’d be fine with it? If an ad tech company preserved your privacy, you’d be fine?
I am fine with ads that are not targeted at me at all and don’t transmit any information about me to anyone. For example, if you pay some website to display your ad to all its visitors, that is fine by me. Same as when you pay for a spot in a newspaper, or a billboard. I don’t like it, but I’m fine with it.
It’s absolutely naive, and I stand by it because I don’t care if you can’t afford to advertise your product or service. But I do find ads tiresome, especially on the internet. Maybe I’m an old coot but I tend to just buy local and through word of mouth anyway, and am inherently put off by anything I see in an ad.
This is pretty much the state of affairs anyway. Running an ad campaign is a money-hole even in the modern age. If I turn adblock off I just get ads for established players in the game. If I want anything novel I have to seek it out myself.
But as I said, I’m not against this feature per-se, as an improvement on the current system.
It’s worth repeating, society has no intrinsic responsibility to support business as an aggregated constituent, nor as individual businesses.
One might reasonably argue it’s in everyone’s best interest to do so at certain times, but something else entirely to defend sacrosanct business rights reflexively the moment individual humans try to defend themselves from the nasty side effects of business behavior.
We absolutely have a responsibility to do so in a society where people rely on businesses for like… everything. You’re typing on a computer - who produced that? A business. How do you think most Americans retire? A business. How do new products make it onto the market? Advertising.
I think it’s exactly the opposite of the situation you’re purporting. If you want to paint the “society without successful businesses is fine” picture, you have to actually make that case.
Would it not be fair to suggest that there’s a bit of a gulf between businesses people rely on and businesses that rely on advertising? Perhaps it’s just my own bubble, dunno
Am I obligated to read a history book for you?
Advertising predates the internet, and still exists robustly outside web & mobile ad banners.
But even if it didn’t, word of mouth & culture can still inform about products & services.
Have you heard of shops? It’s either a physical or virtual place where people with money go to purchase goods they need. And sometimes to browse if there’s anything new and interesting that might be useful.
Also, have you heard of magazines? Some of them are dedicated to talking about new and interesting product developments. There are multiple printed (and digital) magazines detailing new software releases and online services that people might find handy.
Do they sometimes suggest products that are not best for the consumer, but rather best for their bottom line? Possibly. But still, they only suggest new products to consumers who ask for it.
Regardless of how well PPA works, I think this is the crux of the issue:
Even if PPA is technically perfect in every way, maybe MY personal privacy is preserved. But ad companies need to stop trying to insert themselves into every crack of society. They still have no right to any kind of visibility into consumer traffic, interests, eyeballs, whatever.
PPA does not track users. It tracks that an ad was viewed or clicked and it tracks if an action happened as a result, but the user themself is never tracked in any way. This is an important nuance.
Assuming that’s true (and who can know for sure when your adversary is a well-funded shower of bastards, I mean, ad company), what I say still stands.
What “visibility into consumer traffic, interests, eyeballs, whatever” do you think PPA provides?
The crux of PPA is literally that an advertiser who runs ads gets an aggregate report with numbers that are not the actual conversion rate (number of times someone who saw an ad later went on to buy the product), but is statistically similar enough to the actual conversion rate to let the advertiser know whether they are gaining business from running the ad.
It does not tell them who saw an ad. It does not give them an identifier for the person who saw the ad. It does not tell them what other sites the person visited. It does not tell them what that person is interested in. It does not give them a behavioral profile of that person. It does not give them any identifiable information at all about any person.
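As a rough illustration of how a count can be “statistically similar but not exact”, here is a minimal sketch assuming plain Laplace noise is added to a single conversion count. This is not Mozilla’s actual mechanism (the real pipeline splits reports across a distributed aggregation protocol); the function names and the epsilon value are my own illustrative choices.

```ts
// Sample Laplace(0, scale) noise via the inverse CDF.
function laplaceNoise(scale: number): number {
  let u = 0;
  while (u === 0) u = Math.random(); // u in (0, 1)
  return u < 0.5 ? scale * Math.log(2 * u) : -scale * Math.log(2 * (1 - u));
}

// The advertiser sees a noisy total, not the exact count and not any individual.
function noisyAggregate(trueConversions: number, epsilon = 1.0): number {
  return Math.max(0, Math.round(trueConversions + laplaceNoise(1 / epsilon)));
}

// e.g. an advertiser would see something like 1231 or 1237 instead of the true 1234.
console.log(noisyAggregate(1234));
```

The intuition: the more noise you add (smaller epsilon), the less any single person’s participation can be inferred from the report, at the cost of accuracy in the aggregate number.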
For years, people have insisted that they don’t have a problem with advertising in general, they have a problem with all the invasive tracking and profiling that had become a mainstay of online advertising. For better or worse, Mozilla is taking a swing at eliminating the tracking and profiling, and it’s kind of telling that we’re finding out how many people were not being truthful when they said the tracking was what they objected to.
Personally, while I don’t like seeing ads, and on services that I use enough that offer me the option, I pay them money in exchange for not seeing ads, I also understand that being online costs money and that I don’t want the internet to become a place only for those wealthy enough to afford it without support. So having parts of the web that are paid for by mechanisms like advertising – provided it can be done without the invasive tracking – rather than by the end user’s wallet is a thing that probably needs to exist in order to enable the vibrant and diverse web I want to be part of, and lurking behind all the sanctimoniousness and righteous sneers is, inevitably, the question of how much poorer the web would be if only those people who can pay out of their own pockets are allowed onto and into it.
I’m saying they don’t have the right to “know whether they are gaining business from running the ad.”
It’s not necessarily bad for them to know this, but they are also not entitled to know this. On the contrary: The user is entitled to decide whether they want to participate in helping the advertiser.
Well, in order to even get to the point of generating aggregate reporting data someone has to both see an ad and either click through it or otherwise go to the site and buy something. So the user has already decided to have some sort of relationship with the business. If you are someone who never sees an ad and never clicks an ad and never buys anything from anyone who’s advertised to you, you don’t have anything to worry about.
none of that contradicts the fact that advertisers are not entitled to additional information with the help of the browser.
Question: how is the ad to be displayed selected? With the introduction of PPA, do advertisers plan on not using profiling to select ads anymore? Because that part of the ad tech equation is just as important as measuring conversions.
Fun fact: Mozilla had a proposal a few years back for how to do ad selection in a privacy-preserving way, by having the browser download bundles of ads with metadata about them and do the selection and display entirely on the client side.
People hated that too.
The Internet is already a place only for those wealthy enough to pay out of their own pockets for a computer and Internet connection that is fast enough to participate. Without ads, many sites would have to change their business model and may die. But places like Wikipedia and Lobsters would still exist. Do you really think the web would be poorer if websites were less like Facebook and Twitter and more like Wikipedia and Lobsters?
Someone who doesn’t own a computer or a phone can access the internet in many public libraries – free access to browse should be more plentiful but at least exists.
But web sites generally cannot be had for free without advertising involved, because there is no publicly-funded utility providing them.
So you want to preserve ads so that people who rely on public libraries for Internet access can offset hosting costs by putting ads on their personal websites? That still requires some money to set up the site in the first place, and it requires significant traffic to offset even the small hosting cost of a personal website.
Clearly you have something else in mind but I can’t picture it. Most people don’t have the skills to set up their own website anyway, so they use services such as Facebook or Wikipedia to participate on the Internet. Can you clarify your position?
following up
I thought this discussion was getting really interesting, so I’m assuming it fell by the wayside and that you would appreciate me reviving it. Did you want to respond? Or would you rather I stop asking?
but who views or clicks on the ad? it would have to be a user.
There is a very simple question you can ask to discover whether a feature like this is reasonable: if the user had to opt in for it, how many users would do so if asked politely?
This is innovation in the wrong direction. The actual problem is that everyone believes that ads are the primary/only economic model of the Web and that there is nothing we can do about it. Fixing that is the innovation we actually need.
We could have non-spyware ads that don’t load down browsers with megabytes of javascript, but no-one believes that it is possible to advertise ethically. Maybe if web sites didn’t have 420 partners collecting personal data there would be fewer rent-seeking middlemen and more ad money would go to the web sites.
Ads. We all know them, we all hate them. They slow down your browser with countless tracking scripts.
Want in on a little secret? It doesn’t have to be this way. In fact, the most effective ads don’t actually have any tracking! More about that, right after this message from our sponsor:
(trying to imitate the style of LTT videos here)
We’ve got non-spyware ads that don’t contain any interactivity or JS. They’re all over video content, often called “sponsorships”. Negotiated directly between creators and brands, integrated into the video itself without any interactivity or tracking, most of the time clearly marked. And they’re a win-win-win. The creator earns more, the brand actually gets higher conversion and more control about the context of their ad, and by nature the ads can’t track the consumer either.
Maybe if I could give half a cent per page view to a site, they’d make a lot more than they ever made from ads.
Sure, but IMHO this is still not a reason to turn it on by default.
The browser colluding with advertisers to spy on me is, in fact, not sensible.
Please be clear about what “spying” you think is being performed.
For example: list all personally-identifying information you believe is being transmitted to the advertiser by this feature of the browser.
You can read documentation about the feature yourself.
(Note that I’m not the parent poster, I’m just replying here because the question of what data is actually being tracked seems like the crux of the matter, not because I want to take out the pitchforks.)
Reading through the data here, it seems to me like the browser is tracking what ads a user sees. Unfortunately the wording there is kind of ambiguous (e.g. what’s an “ad placement”? Is it a specific ad, or a set of ads?) but if I got this right, the browser locally tracks what ad was clicked/viewed and where, with parameters that describe what counts as a view or a click supplied by the advertiser. And that it can do so based on the website’s requirements, i.e. based on whatever that website considers to be an impression.
Now I get that this report isn’t transmitted verbatim to the company whose products are being advertised, but:
I realise this is a hot topic for you, but if you’re bringing up the principle of charity, can we maybe try it here, too? :-) That’s why I prefaced this with a “I’m not the parent poster” note.
That technical explainer is actually the document that I read, and on which my questions are based. I’ve literally linked to it in the comment you’re responding to. I’m guessing it’s an internal document of sorts, because it’s not “very readable” to someone who doesn’t work in the ad industry at all. It also doesn’t follow almost any convention for spec documents, so it’s not even clear whether this is what’s actually implemented or just an early draft, whether the values “suggested” there are actually being used, which features are compulsory, or whether this is the “final” version of the protocol.
My first question straight out comes from this mention in that document:
(Emphasis mine).
Charitably, I’m guessing that the support page is glossing over some details in its claim, given that there’s literally a document describing what information about one’s browsing activities is being sent and where. And that either I’m misunderstanding the scope of the DAP processing (is this not used to process information about conversions?) or that you’re glossing over technical details when you’re saying “no”. If it’s the latter, though, this is lobste.rs, I’d appreciate if you didn’t – I’m sure Mozilla’s PR team will be only too happy to gloss over the details for me in their comments section, I was asking you because a) you obviously know more about this than I do and b) you’re not defaulting to “oh, yeah, it’s evil”.
I have no idea what running a DAP deployment entails (which is why I’m asking about it) so I don’t really know the practical details of “the two organizations collude” which, in turn, means I don’t know how practical a concern that is. Which is why I’m asking about it. Where, on the spectrum between “theoretically doable but trivially detected by a third party” and “trivially done by two people and the only way to find out is to ask the actual people who did it”, is it placed?
My second question is also based on that document. I don’t work in the ad industry and I’m not a browser engineer, so much of the language there is completely opaque. Consequently, I can’t quite work out what the CustomEvent is. In its simplest form, reading the doc, it sounds like the website is the one generating events. But if that’s the case, they can already count impressions; they don’t even need to query the local impression database. (The harder variant is that the event is fired locally and you can’t hook into it in any way, but it’s still based on website-set parameters – see my note in 5. below for that.) I imagine I’m missing something, but what?
regarding PPA, if I have DNT on, what questions are there still unclear?
regarding the primary economic model, that’s indeed the problem to be solved. Once print had ads without tracking and thrived. An acceptable path is IMO payments, not monetised surveillance. Maybe similar https://en.wikipedia.org/wiki/VG_Wort
and regarding opt-in/out: one doesn’t earn trust by going the convenient way. Smells.
Once Google had ads without tracking and thrived, enough to buy their main competitor Doubleclick. Sadly, Doubleclick’s user-surveillance-based direct-marketing business model replaced Google’s web-page-contents-based broadcast-advertising business model. Now no-one can even imagine that advertising might possibly exist without invasive tracking, despite the fact that it used to be normal.
It’s funny because not once in my entire life have I ever seen an invasive tracking ad that was useful or relevant to me. What a scam! I have clicked on two ads in my entire life, which were relevant to me, and they were of the kind where the ad is based on the contents of the site you’re visiting.
great illustration of how the impact of ads is disparately allocated. some people click on ads all the time and it drains their bank account forcing them into further subordination to employers. this obviously correlates with lower education and economic status.
why should the “primary economic model of the Web” be given any weight whatsoever against user control and consent?
What a depressing comment section here.
I’ve seen many of these happen just at the company I’m currently at. Extensions are especially awful and a constant source of errors. So are mobile browsers themselves injecting their own crap. Users don’t know what the source of breakage is, even if they cared. I’d say about 90%, if not more, of the errors we encounter are Javascript related, and a similar percentage of those are not caused by code we wrote.
We still use Javascript to improve experience and I don’t see this article arguing against that. Even have a few SPAs around although those are mainly backoffice tools. However we do make sure that main functionality works even if HTML is the only thing you managed to load.
No it doesn’t. I see a few million daily page loads, and less than 0,05% of my traffic is without javascript; it’s usually curl or LWP or some other scraper doing something silly. Your traffic might be different, so it’s important to measure it, then see what you want to do about it. For me, at such a small number the juice probably isn’t worth the squeeze, but I have other issues with this list:
99,67% of my js page loads include the telemetry response; I don’t believe spending any time on a js-free experience is worth anything to the business, but I appreciate it is possible it could be to someone, so I would like to understand more things to check and try, not more things that could go wrong (but won’t or don’t).
I’m not sure how to put this politely, but I seriously doubt your numbers. Bots not running headless browsers should alone account for more than what you estimated.
I’d love to know how you can be so certain of your numbers. What tools do you use, and how do you measure your traffic?
In our case our audience are high school students and schools WILL occasionally block just certain resource types. Their incompetence doesn’t make it less of your problem.
phantomjs/webdriver (incl. extra-stealth) is about 2,6% by my estimate. They load the javascript just fine.
A page that has in the <body> some code like this will load a.js, and then load b.txt or c.txt based on what happened in a.js. Then, because I know basic math, I can compare the number of times a.js loads with the number of times b.txt and c.txt load according to my logfiles.
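Since the snippet itself isn’t shown, here is a hedged sketch of what such a bootstrap might look like. The filenames a.js, b.txt and c.txt come from the comment; everything else (the inline bootstrap, the window.__ok convention) is my assumption, not the author’s actual code.

```ts
// Load a.js; depending on whether it ran, request b.txt or c.txt as a log-visible
// beacon. Comparing HTML loads, a.js loads and b/c loads in the server logs shows
// how many visitors execute JavaScript at all, and how far they get.
const s = document.createElement("script");
s.src = "/a.js"; // assume a.js sets (window as any).__ok = true when it runs cleanly
s.onload = () => {
  const ok = (window as any).__ok === true;
  new Image().src = ok ? "/b.txt" : "/c.txt"; // fire-and-forget, shows up in logs
};
s.onerror = () => {
  new Image().src = "/c.txt"; // a.js was blocked or failed to load
};
document.body.appendChild(s);
```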
Tools I build.
I buy media to these web pages and I am motivated to understand every “impression” they want to charge me for.
I think it’s important to understand the mechanism by which the sysadmin at the school makes a decision to do anything;
If you’ve hosted an old version of jquery that has some XSS vector, you’ve got to expect someone is going to block jquery regexes, even if you’ve updated the version underneath. That’s life.
The way I look at it is this: I find if people can get to Bing or Google and not get to me, that’s my problem, but if they can’t get to Bing or Google either, then they’re going to sort that out. There’s a lot that’s under my control.
Can I invite you to a train ride in a German train from, say, Zurich to Hamburg? Some sites work fine the whole ride, some sites really can’t deal with the connection being, well, spotty.
If you can host me I’m happy to come visit.
Yeah, and I think those sites can do something about it. Third-party resources are probably the number one issue for a lot of sites. Maybe they should try the network simulator in the browser developer tools once in a while. My point is that the javascriptness isn’t as big a problem as the people writing the javascript.
Never had a Galaksija, although I read Računari even though it was in Serbian. It had some really excellent writers. Not sure if it was there or in the other Serbian magazine whose name currently escapes me that I encountered a phrase that stayed with me ever since: “Sitnice koje život znače” (“the little things that matter in life”).
Not that I’m a big fan of interview marathons, and I’m no one’s boss right now, but please keep in mind (seeing that you’re in the US) that not all markets and jurisdictions are the same.
Very much summarized: in many European countries there are far more legal and bureaucratic constraints around employment.
These and many other factors seem to point in the direction of hiring (and firing) being very, very slow and bureaucratic compared to what I hear from the US. And as much as we tech people think we’re special snowflakes, some things are just like in other fields.
And so, yes, personally I am also no fan. If they’d just hire me after an hour of chatting (it has happened before), great, but unless I already know that I really, really, really want to work for a company and know I won’t hate it… several interviews (more like: talking to more than one person) also give me more insight.
You should keep in mind that the author is in the US and is reacting to US interviewing practices. Tech firms are notorious for making candidates run a gauntlet of interviews despite not facing European labour regulations. So your theory of why interview processes may be difficult in Europe, doesn’t really explain the phenomenon in the US.
That was basically my whole reply, yes. And it’s not even in the article; I think I checked GitHub because it read that way.
I live in France, and have worked for… 9 companies over the course of 17 years. I have interviewed with quite a few more, and not a single one of the companies I interviewed with required more than 2 interviews (one technical, the other not). No one gave me a take-home assignment, and I had to go through a formal test only twice. Twice more they had a standard set of technical questions.
When I’m interviewing candidates for my current company, we do have a 90-minute “coding game” (quiz + a couple of simple problems like detecting anagrams), which helps me structure the interview (I systematically review the answers with the candidate), but even that isn’t required (those who don’t do it get quizzed anyway 😈).
And this despite being in a country in which firing people is not easy. On the contrary, I’ve always been a bit surprised by stupidly long interview processes like the Google Gauntlet.
Then I’d say consider yourself lucky. Just 2 interviews has been the quickest and yet a total outlier in my experience in Germany. Even smaller companies often had screening + interview + meet the team (even informal, still some kind of interview).
Despite firing being legally easier in USA, major tech companies are too inefficient inside to detect or fire ineffective employees anyway, so it really doesn’t happen fast.
I am super curious about this (having just moved to Europe). Could you elaborate at all? D’you mean not staying the full 6 months is a problem? Or leaving too soon after it?
(At least where I’m coming from, it’s a bit of a yellow flag if you have numerous entries in your CV that are a year or less, but one or two isn’t a big deal.)
I mean, I’m no recruiter, and yes, maybe I should have said yellow flag - but every time I was part of some CV screening and you had people with more than one of those very short stints, it was a bit of a “Hmmmm”.
But my (maybe badly made) argument was that if all interviews and hiring processes were that “quick”, I fear this might become more widespread. So basically all I have is anecdata, and I wouldn’t be overly concerned.
Right! Thanks for your response. :)
In my experience one such short stint is no issue in reasonable companies. Sometimes things just don’t work out. More can be problematic if they are clustered and recent, but it is basically overall impression that counts (or did when I was more involved in hiring).
It boggles my mind they didn’t start with this plugin/add-on implementation but I’m glad to see they listened to their users.
Everything always looks obvious in hindsight though.
The degree of outrage was surprising; I sympathize with the maintainer here.
I’m surprised that you find it surprising, genuinely. Every hype cycle creates a corresponding backlash.
I’m not sure I would have taken too much notice of it. There is a ton of negative sentiment from tech people, particularly on the orange site. Best to ignore most of it and focus on people who have something constructive to offer.
I usually agree with hindsight being 20/20, but adding AI to a terminal application automatically in a minor software upgrade was not going to be received well. The same would have applied to crypto…
It didn’t add it automatically. You had to complete several steps including providing your own API key for it to start working on your explicit request.
I am the first to object to random AI integration, have no use for this feature, and also have other reasons why I don’t and probably will never again use iTerm2. All of that said:
Although the feature was there in the core, it didn’t work out of the box. You needed to explicitly configure it to use an API key that you provided. I am quite surprised that it would be problematic for people.
Part of me really thinks that firewalls were a mistake. They may make sense for tightly controlled machines like a single server or a single-tenant data center. But things like this, where the university WiFi is firewalled, just break the internet.
It’s worth remembering that the out-of-the-box security of most systems was absolute dogshit well into the 2000s.
Despite what I wrote recently, Cambridge University had fairly relaxed central packet filters, though departments often ran their own local firewalls. The filters were generally justified by the need to avoid wasting computing staff time. For many years the main filters were on Windows NETBIOS-related ports, not just on the border routers but on internal links too (because some departments lacked firewalls). (Looks like it hasn’t changed much, to my mild surprise.)
Security specialists don’t like this kind of reactive stance, but in practice it’s a reasonable pragmatic trade-off for an organization of Cambridge’s size and complexity.
When I was a student, the campus firewall blocked all UDP traffic. At the time, most video conferencing tools used UDP because TCP retransmissions cause too much delay. We wasted hours trying to work out why the Access Grid (the video conferencing solution recommended for universities at the time) did not work.
An ingress firewall may make sense to prevent services that are not intended to be exposed to the Internet from being attacked, but these days perimeter defences like that are mostly security theatre. Someone is going to put a compromised Android device on your network, and at that point the firewall is useless.
The university firewall was a complete waste of time because the entire internal network was a single broadcast domain and so a single machine was able to infect every machine on campus with the Slammer worm in about a minute. We also accidentally broke most of the lab machines by connecting a machine that we didn’t realise was running a DHCP server. It had a very small allow list and so responded very quickly with a denied response to DHCP requests from the Windows machines, which then self-assigned an IP address on the wrong subnet and failed to connect to the server that provided login information and roaming profiles.
I agree with that sentiment. Basic “block all ports that aren’t open anyway” seems of limited value.
One might argue that even on a server you only have certain open ports anyway: the ports that your services listen on. So unless something else is open there is no difference - other than that one might drop packets instead of returning rejects, maybe. However, that’s something that shouldn’t really require a firewall.
However, there are other things that firewalls can be used for, such as limiting source addresses (which might be tricky for UDP, depending on what you are trying to achieve) and sanitizing packets (though I think that could just be a sysctl or something).
And an attacker that can launch a service usually can also just connect back out or use other means that don’t require an additional open port to, for example, exfiltrate information.
the_office_no_god_please_no.gif
Firewalls that try to do this are, in general, really fucking bad at it. QUIC is designed the way it is so that firewalls and other middleboxes cannot “sanitize” its packets.
For example, the Cisco PIX / ASA has an SMTP fixup feature (aka smtp fuxup). One of the things it does is suppress SMTP commands that it doesn’t know about. It does this by replacing the command with XXXX. But it is too shitforbrains to maintain parser state between packets, so if a command (commonly RCPT TO) gets split by a packet boundary it always gets XXXXed.
I once debugged a mysterious mail delivery problem that caused a small proportion of messages to time out and eventually bounce. After much head scratching I worked out that it depended on the length of the recipient list: if it was 513 or 1025 bytes long, the firewall between my mail servers and the department’s mail servers would fuck up its TCP packet resegmentation, the sequence numbers would go out of sync, and the connection would be lost.
Just say no to “smart” middleboxes.
Yes and that’s good! Not what I meant by that though.
Talking about a different layer. I think a firewall shouldn’t look into commands. Also SMTP should be encrypted, heh.
Sounds like a bug? People sadly keep buying shitty products from shitty companies.
What I’ve been talking about though is the opposite. Incoming stuff being fucked up. For example OpenBSD’s pf will get rid of IP packets that simply shouldn’t exist like having a RST flag and a SYN flag. No, thanks!
Yeah those were old anecdotes, more than 15 years.
On systems I run, I generally only use the packet filters to stop unwanted traffic getting to userland. I don’t see much point in configuring one part of the network stack to defend another part of the network stack that should be equally capable of defending itself.
I agree. I see it as layers here: you go into the network stack, and the first thing you check is whether that IP packet is valid, no matter what happens later. I think, though, that in most situations that could just be a switch - if you even need to be able to turn it off at all, maybe for some testing or debugging situation, or when you just wanna look at the incoming packet as it is.
I should have been more precise about what I meant with sanitizing.
There’s no single rule of thumb. When you’re designing a server environment, that is pretty much exactly the config you would use for the public-facing NICs: nothing open except the ports you actually serve traffic on.
Then (if and when possible) use a different network for ops and administration, with port 22 open, and with SSH configured to never accept passwords (i.e. only accept configured ssh-rsa keys).
If you’re terminating SSL at the reverse proxy layer, this style of deployment becomes even more important, as you’ll have plain text within that server subnet.
But firewalls to protect people who are on WiFi? I guess it still makes sense to block incoming traffic in general, since most client apps are designed now to NAT-punch holes in the firewall (and firewalls are designed to support this). But that’s definitely not a network that should also be hosting servers.
(Rereading my way too long post and yours I think we are agreeing anyways, however the main thing I wanna say is “don’t do things just because..”. That can give a false sense of security)
Been designing server environments for 25ish years and have been consulting in that area. Thanks. ;)
Completely unrelated topic, but okay. I’ll keep PermitRootLogin at its default (prohibit-password). I appreciate the sentiment that anything administrative should, in the best case, be on its own network in the sense of having a general admin interface, but OpenSSH is a bit of an exception for me, and in fact I have had setups where it was my admin interface, in the sense that everything admin-related was tunneled through it. At times even with an OpenSSH tun device.
Sure thing even though it really doesn’t get you much, because…
The main thing your home firewall will prevent is exposing stuff on accident. And that’s the point I wanted to make.
Your server should never ever have stuff accidentally exposed. If you run your server like that you have serious security issues.
And just to be clear: Yeah I also have my firewalls turned on and it matches what you meant with the above. Essentially because if some accident happens it might still be caught. It’s cheap enough.
However, I want people to actually think about the security measures they take. Not doing that, and just doing the stuff one always does without thinking, always causes problems. Like I wrote, I have been consulting, and it’s always the same picture: people do stuff that is somehow security related, add something without even knowing what it really does, or do something because it was sensible in another environment. They add protections for some super edge case, like their operating system’s ICMP parsing having a bug, sometimes spending tons of money and time on it, only to have gaping vulnerabilities: not patching their systems quickly enough, having stuff exposed that shouldn’t be, having unencrypted connections to databases, and so on. Or having “ah, nobody will guess that IP address” situations and then being surprised when scanners browse by.
Then they buy an expensive product that is essentially snake oil, where they don’t even know what it is supposed to do. And then that very security product ends up being the added attack surface.
And of course there are classics like forgetting about IPv6 and UDP. Around ten-ish years ago there was that time when people ran NTP servers that were both badly configured and publicly accessible, and while they weren’t hacked per se, they were used to facilitate reflection attacks - and of course that meant they were down too. It’s a classic example of just doing some standard thing without really understanding it: the admin in question saw there was an open port for NTP, so he kept it open. I might be mixing things up, but I think he also got errors otherwise.
Anyways, I think a lot of harm is done when people and companies think security is just a product, or that they just have to follow some online guide. There is so much bad advice out there, and it’s really important to actually understand things. There are baselines, such as patching. But as soon as it comes to firewalls and the like, it feels like an extra thing that adds security, when in reality it’s very, very limited; people see it as the thing that protects them from attackers, which is not something it really does. It can prevent misconfigurations from having a big effect and it can do some special stuff, but it doesn’t prevent much more. If your system is otherwise secure, it should basically be as if it wasn’t there.
It’s a bit like with fail2ban for OpenSSH and the like. It doesn’t really bring a security benefit if your authentication is sane, and if it isn’t, you should fix that, because fail2ban will not prevent you from being hacked. As an attacker I really am not limited to individual IP addresses: I can configure a whole subnet, or several, cheaply on a single system, which doesn’t even have to be a physical system. At the same time you have a log parser parsing logs that an attacker can partly control simply by connecting, and state tables can also be filled up. All for fear that an attacker who cannot easily come by IP addresses might guess a password, when the account should have no password at all and password authentication shouldn’t even be allowed.
I agree that it can help in situations like misconfiguration, and the trade-off is on the “well, just go for it” side, but in normal operation it’s not usually the thing that protects you against anything (again, because of outbound connections, which you only rarely want to or can block).
If it is your only line of defense it’s a good indicator that you should rethink your security. Firewalls today should be seen as a second layer in case you mess up a config for example. For most other things there is a better option. And keep in mind that the firewall is also something that can be misconfigured, which can result in the very thing you wanna prevent, for example a Denial of Service, if you simply block something you actually needed.
Another anecdote: I’ve once seen a situation where a firewall dropping packets resulted in the service sending the packets going down. The developer had a hard time debugging it, because he would have expected a network error (so rejection packets). I’ve also seen people having confusing issues because of dropping ICMP packets. And at this point it feels a bit like if you cannot trust your system’s handling of ICMP packets, should you really trust that system’s firewall?
It’s important to consider the actual situation. Do a threat model, even if just a very basic one. Only start focusing on specific scenarios (things like “what if the attacker is already in, but cannot yet do this”) once you have done your general homework. It’s really sad when people wanted a secure system, jumped on one interesting case, and still had gaping holes. It’s also sad when time and money is invested in setting up security tools only to find out that they don’t do anything for the systems in question, like running a WAF that checks for known vulnerabilities in WordPress and Joomla when neither is even used.
There are reasons, though, for doing nonsensical stuff. Compliance, for example, as in “if you don’t have that, your insurance won’t pay”. A lot of bad security stems from things like that. It can even make you feel like you did something for security, when in reality you only fulfilled a business or compliance need.
This results in situations where an engineer notices an issue with what is being implemented, people look through some contract, and the answer is “Oh, that’s fine. We don’t need that according to the contract.” It’s hard to really blame anyone here. Security is just hard to put into a contract; contracts are usually written by non-engineers, or with the help of engineers doing their best, and they are written for another party whose internals you don’t know, often generically for many parties. So one ends up with a virus scanner on the database server.
Yeah, having a separate ops/management network is quite a luxury, so 95% of the time I see SSH going over the same NIC as public web traffic. I guess I was talking about “if I ran the zoo” scenarios, and I’d always use a dedicated ops/management network if I could, because it’s much simpler to reason about.
I’m sure most people here have more real experience with it than I do.
100% this.
This just happened this year to a company that sells firewalls (Cisco), where it turned out the firewall itself was the giant security hole that had been exploited for years by… “nation states”.
(I’m not throwing stones. I have been responsible for some horrid security holes in the past. Just reporting the facts.)
So from my own POV, for security: simple is better. Make your front door super simple to get through, but only for the traffic you want (http/https typically). Dead hole everything else. (Dead holing TCP connections will waste a little extra RAM and CPU cycles on the inevitable port scanners.) On the inbound traffic NICs, use ufw to also shut down all inbound traffic except the traffic you want; this gives you two layers of front door protection. (I’d also suggest using a *-BSD for the public firewall, and Linux for the servers, so that you don’t have the same exact potential exploits present in both layers.)
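For the second layer on the Linux servers, a minimal ufw sketch of “only the traffic you want”; the ports here are just the typical web case, and SSH would ideally be reachable only from a management network:

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    # if SSH must come in on this NIC, scope it (placeholder subnet), e.g.:
    #   sudo ufw allow from <mgmt-subnet> to any port 22 proto tcp
    sudo ufw enable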
I only use dumb firewalls (and reverse proxies). The goal is just to be able to focus on the real threats when the traffic gets to the back end, by eliminating 99% of the noise before it gets there. Application code will have 100s or 1000s of potential exploits, so getting time to focus on that area is important for public facing services.
Dear God, no. Having virus scanners running is one of the biggest security risks out there. You would not believe how badly written the “big names” in virus scanning are.
Also, let me know when you publish your book on this topic 😉 … this was a good intro.
All of me knows that firewalls were necessary in the past and are now a mistake. The modern WAF exists so that CISOs and Directors in megacorps can spend $$$ and say “we follow best security practices” after their next security breach. Firewalls no longer have anything to do with security and have everything to do with compliance for compliance sake. Why think when you can follow a checklist?
Because sometimes other organizations won’t work with you if you don’t have that compliance certificate and your insurance will be higher too (if you can get one).
I’d bet that a huge chunk of the people who are in a position to make this decision (like I am for our small company) often just feel forced to follow some rules, not that we’d prefer not to think.
And also, ads ads ads ads ads! 🎉
It’s sad to watch greed destroy the internet.
Just the internet?
It’s sad to see greed becoming normalized, seeping into everything, destroying all that used to make us human.
People don’t want to pay for a browser, especially one that is Open-Source, because people don’t want to pay unless there’s scarcity.
Mozilla is built on Google’s Ads, and Google can currently kill them at any time by just dropping their deal. Which means Firefox can’t actually compete with Google’s Chrome, unless they diversify. When Mozilla tries stuff out, like integrating paid services (e.g., Pocket or VPN) in the browser, people get mad. Also, Ads, for better or worse, have been fuelling OSS work and the open Internet.
So, I’m curious, how exactly do you think Mozilla should keep the lights on? And what’s the thinking process in establishing that “greed” is the cause, or why is that a problem?
There’s no option to pay for Firefox. None. You can donate to Mozilla, but they will use the money for any and all projects, not just Firefox.
They won’t use that money for Firefox at all, since you’re donating to the foundation, while FF is developed by the corporation.
I understand this frustration, but it’s irrelevant.
Enumerate all consumer projects that are as complex as a browser, that are developed out of donations and that have to compete with FOSS products from Big Tech. Donations rarely work. Paying for FOSS doesn’t work, unless you’re paying for complements, like support, certification, or extra proprietary features.
It’s a fantasy to think that if only we had a way to pay, Firefox would get the needed funding.
Indeed, like every other company and organization under the sun who doesn’t want to depend on only one successful product. Where else would they get the resources for developing new ones?
And don’t forget: Collecting User Data! I’m getting a nervous twitch every time I read “Firefox” and “privacy” in the same sentence. Being ever so slightly less bad than the competition doesn’t make you “privacy first”.
tbh that does seem like one of the better attempts at squaring the circle of “telemetry is genuinely useful” and “we really don’t want to know details about individuals”?
I’m not so convinced. You basically have to trust that their anonymization technique is doing the right thing, since you can’t really verify what’s happening on the server side. And if it actually does the right thing, then it should be easy to poison the results, and given the subject matter at hand, certain players would have a massive incentive to do so.
This is an oversimplification of the situation. Yes, you need to trust that the right thing is happening server-side, but you don’t need to trust Mozilla. You need to trust ISRG or Fastly, which are independent actors with their own reputation to uphold. Absolutely not perfect, but significantly better than the picture you’re painting here IMO.
Given that the telemetry is opt-out instead of opt-in, there can be no trust. Trust, for me, involves at a bare minimum consent.
I don’t mind them collecting data, but I don’t want my browser to be adversarial — the reason I stay off Chrome is because I have to check its settings on every release, and think hard about “how is this new setting going to screw me in the future?”
Of all organisations, I hoped Mozilla would have understood this, especially as it caters to the privacy crowd.
I don’t think it’s a problem with organizational understanding. I think Mozilla understands this perfectly well, but they also understand that you’ll suck it up and keep using Firefox because there’s no better option.
Yeah, agreed, this really doesn’t seem so bad to me. Given the level of trust a browser requires, it still makes me nervous though.
I wrote the initial version of Firefox telemetry. My goal was to be different from projects like Chrome, in that we could make the telemetry available on the public web. E.g., I could not make further progress on Firefox perf without data like https://docs.telemetry.mozilla.org/cookbooks/main_ping_exponential_histograms . The hardest part of this was convincing the privacy team that Firefox was going to die without perf data collection. As soon as we shipped that feature, we found a few dozen critical performance problems that we had not been able to see in-house.
The hardest part was figuring out the balance between genuinely useful data and not collecting anything too personal. In practice it turns out that for perf work it isn’t useful to collect anything that resembles tracking. However, it’s a slippery slope; since my days there, they have gotten greedy with data.
they are an appendage of an advertising company after all
I gave up trying to “manage” profiles in Firefox, and just run Firefox from multiple Linux-level users. Yes, that is more resource overhead, but it keeps things completely, utterly separate and unshared. It’s not as much UX friction as one might expect.
did you try about:profiles or only “multi-account containers”?
Yes. Either too many clicks, or too much supposedly-separate data or configuration is shared.
In my current setup, I go to my always-open multi-tab Konsole, click on the persona of choice (shell already signed in as that user), then up arrow, Enter to launch the persona’s Firefox.
What’s the use case for multiple browser profiles for the same person? I’ve always seen those features as bloat, and one profile per OS user seems like the “right” way to handle it. That’s how all other software works; I don’t see why browsers need to come up with their own way of doing it.
I have the same question. I think in most cases, tab containers and private mode just work for me.
I, personally, have multiple personalities :) I have one for private stuff, one for the company I’m employed by, one for the company I’m actually working for (an agency), one for the legacy company (we’ve been acquired; some tools changed, some remained), and one for education (I teach frontend).
All of these have their own set of tools I need, or maybe the same tool, which just struggles to provide a smooth multi-profile experience (looking at you, Microsoft). Sometimes I just don’t want my students to see an accidentally opened tab from my NDA project, or bookmarks, etc.
I almost never cross-open (so these boundaries are actually sensible), but when I need to, I use BrowserTamer for that.
For web developers like me it is about running with different sets of extensions: one for everyday use, one for development, and one without any extensions at all as a baseline.
I heavily use profiles. Mostly it’s because I set them up before containers and process isolation were a thing; and I never got around to switching to containers.
I like to have tons of tabs open (I unfortunately unlearned to use bookmarks), but I don’t always want/need all of them so I can just close one of the browsers when I don’t need it to free some RAM and start it again later. Do containers allow that?
I don’t see them as bloat because it’s essentially just reading the config from a different directory, while containers probably needed lots of changes to make them work.
It’s not that uncommon for software to allow configuring their config/data directory, which is roughly what profiles do.
I find it very useful for task switching. One browser window/profile has technical documentation and such for projects I’m working on, one has my normal daily email/social/rss tabs and general browsing, and one has guides and things for various games I’m playing. I’ve tried various tab grouping extensions, but nothing beats just being able to alt-tab to the browser tabs I want when I switch tasks.
I’ve “fixed” my Firefox profile problem with help from AutoHotkey. I defined multiple profiles and launch different Firefox instances using AutoHotkey keyboard shortcuts. Works well for me.
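For anyone wanting to copy the idea without AutoHotkey: the shortcuts only need to launch Firefox with its profile flags, roughly like the following (the profile names are made up; create them once via the profile manager):

    # one-time: create/manage named profiles
    firefox -ProfileManager
    # per shortcut: open a given profile in its own instance
    firefox -P "work" --no-remote &
    firefox -P "personal" --no-remote &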
You got a guide or tips about this setup?
It’s… just what it says on the tin. man useradd, and go to town with as many personas as you need. They all use the same /usr/bin/firefox, but each has its own ~/.mozilla/firefox. Each one has its own set of tabs and windows, all closeable at once with a single Quit (for that persona, leaving other personas’ Firefoxes still open).
Minor technical detail: You’ll need to ensure your X Windows (or Wayland or whatever) can handle different Linux users drawing on the one user’s X display.
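As a rough sketch of that detail, assuming plain X11 and a persona user called “work” (an example name; Wayland needs a different mechanism):

    # one-time: create the persona user (example name "work")
    sudo useradd -m work
    # allow that local user to draw on your X display
    xhost +SI:localuser:work
    # launch its own Firefox, which keeps its own ~/.mozilla/firefox
    sudo -u work env DISPLAY=$DISPLAY firefox &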
It is that last part that I have heard is tricky
My “solution” has been to run multiple versions of Firefox (e.g. the standard version and the Developer edition), each with a different default profile, but using the same Firefox Sync account. They get treated as separate windows/applications, I still have the same history/bookmarks/credentials, and so on. It’s a terrible approach, but it’s worked quite well for me. I really hope their improvements to profiles make things easier.
Can this thing hallucinate a rogue rm -rf somewhere in a long complex command? Because I really hope it can.
Or even better, is it smart enough to not send ChatGPT every password I type, or does it just assume the AI is smart enough to know it should forget them?
You have to ask for help in a separate window (Composer). It does nothing automatically, even if you have provided an API key (without which nothing works anyway).
Here’s the prompt:
It’s a very safe prompt; it could probably be tweaked into being a bit better, but realistically it’s probably fine.
How do you come to that conclusion? There are literally zero safeguards in that prompt.
What kind of safeguards are you looking for here?
The prompt could add hints like:
Or even more basic: if you use the prompt directly in ChatGPT, it will also explain what every step in the suggested command does. So why not ask it to return the result in a more structured format that actually includes that commentary, so I can determine whether the command is correct?
I don’t think it will produce an rm -rf /* if you don’t tell it to. You shouldn’t blindly trust the output it generates anyway; please read it before running it.
There’s one safeguard, at least. It will likely not consider “rm -rf /” suitable for copy pasting.
How do you know? The prompt doesn’t say anything about avoiding that specific command.
How do you know it will not return rm -rf / or a similar destructive command?
Having seen quite a few random and incorrect suggestions from ChatGPT, I don’t think you can confidently say that there is a safeguard built in that we do not see, or that it will not consider destructive or dangerous commands suitable for copy pasting.
I mean, what is your statement based on? It sounds more like wishful thinking :-)
I don’t know anything. But I believe it is likely that “rm -rf /” is not a response that would be considered “suitable for copy pasting”. That isn’t wishful thinking, it’s simply based on the assumption that ChatGPT has been trained on various commands and that the context around “rm -rf /” is that it is destructive and not suitable for being copy pasted.
For the record, I would likely add more safeguards to the prompt. Something like:
“Be sure to make it clear to the user when a command may be destructive”.
From what I’ve read these commands aren’t automatically run though so it doesn’t matter.
I imagine the inverse:
(@) Alright... if you're sure, you probably should pass --no-preserve-root. Have fun.
More like, “Alright… if you are sure, you should use dd to wipe the disk or a secure erase tool, not rm. rm will leave traces of data behind.”
How likely is that same rm -rf going to show up on Stack Overflow, or in a curl | sudo sh, or in a post-inst script with bad quotes?
Does anyone know an iPhone app or web app where I could plug in my OpenAI key to use with ChatGPT? My use of ChatGPT is infrequent, so $20 a month is a waste if I only use it once or twice a day.
You can use the Playground interface directly on their site to work with their models on a pay-as-you-go basis: https://platform.openai.com/playground - it works fine on mobile web too.
Supposedly the new GPT-4o model will be available to free users at some point very soon.
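And if the Playground UI ever feels limiting, the same pay-as-you-go key works against the HTTP API directly from anywhere you can run curl; a minimal sketch (the model name may have changed by the time you read this):

    curl https://api.openai.com/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'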
I thought OpenAI’s iPhone app is free to use as long as you are fine with limitations that come with it (not using the latest model…) or have I misunderstood what you are after?
No, I want to use the paid pro models, which are not free. Simon just answered my question above. Let me check if the Playground supports multimodal as well.
You can also use https://openrouter.ai/ which also gives you access to other models.
It sounds like the developer machine will run quite loud and hot for 2 minutes and prevent the developer from doing any further work while waiting for the automated tests to complete. If it’s a pre-commit hook, it’s going to get annoying pretty fast.
Very soon you would see --no-verify being used :)
There are also other hooks, like pre-push, that are probably better matches.
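A minimal sketch of a pre-push hook, with the test command as a placeholder for whatever your project actually uses:

    #!/bin/sh
    # .git/hooks/pre-push: abort the push if the test suite fails
    # (remember to chmod +x .git/hooks/pre-push)
    set -e
    make test   # placeholder: substitute your project's test command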
That does depend on what you use, on how you write tests, on how you use git, but yes, this definitely does not work for everybody. I think DHH recognized that fact in the post, just not as explicitly.
The impressive part is that their chip manufacturing effort is done by themselves; being a generation or two behind is totally fine for most things, I feel. It’s actually pretty impressive that even with constant sanctions and attacks they can come up with a working chip. Another thing to admire is that no matter how much Western media and influencers attack their tech, they keep working on it and perfecting it. We here in the US should manufacture our own stuff too.
You are manufacturing a lot of your own stuff and the sanctions on chips and related technologies are actually fairly recent.
For me, the multiple-profile story in Firefox has always been half-baked. It seems to me there are multiple competing features attempting to solve the problem in different ways, none of which ever fully jibed with me. For example, about:profiles is nice, but the two instances of Firefox have the exact same icon and the exact same name (at least on macOS), so when you’re alt-tabbing or picking from the dock it’s a toss-up which one you actually get. Once they’re open it’s fine, since you can theme them.
Then you have tab containers which work to isolate specific tabs but are pretty clunky when opening links that you expect to take you to a logged in experience.
I feel like Chrome actually nailed this experience pretty well with the way they handle profiles. It feels like a first-class experience and isolates things in a way that’s easy to switch to.
Perhaps I’m just being picky, but does anyone have a preferred way of managing this sort of experience in Firefox? Say you’re on a work machine but also want to be logged into some personal accounts, kept separate enough that they don’t co-mingle, so that if you shared your screen you wouldn’t suddenly be dealing with a mixture of personal stuff and work tabs.
I actually really like the container feature, I don’t even bother with the profiles.
You can specifically request domains open within a specific container to prevent issues with opening links as you mentioned, but that mostly works for sites you can separate out easily; it isn’t perfect for things that you’d like to open multiple of at the same time in different containers.
For example, opening the DNS provider you use at work in the work container and then needing to open the same site for your personal DNS in your “personal” container. Honestly, I still like the separation here; a little clunky, but worth the security.
I use a tiling window manager and rarely minimize my windows. I prefer to switch desktops (macOS), and usually aim to limit myself to one work “window” and one personal “window”, so I guess I don’t need the additional title context that you use for switching.
I use the bookmark bar pretty heavily for commonly accessed sites so mixing personal + work bookmarks just makes things messy for my workflow. Profiles fix this but the UX there is in my opinion quite sub-par compared to Chrome’s profiles.
I use Firefox for work but I see no mention of tab containers or anything like that, where is it at in the UI?
It’s actually an addon/plugin by mozilla themselves: https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/
I actually thought it was built in, I’ve had it installed for so long.
If you right-click on a tab you should see an option “Open in New Container Tab”, which lists whatever containers you have set up.
https://support.mozilla.org/en-US/kb/how-use-firefox-containers
I really like it as well. It’s replaced 90% of my old use of the profile feature. Now I only use profiles for cases where I truly need multiple, different, mutually exclusive logins. (e.g. some clients will want me to work on github using an account tied to their organization, and I don’t always want to tie my main account to their org.)
My approach (in Firefox) is to have two windows, one for work and one for the rest. I also use Simple Tab Groups for grouping tabs in a way that makes sense to me, and use Firefox containers pinned to a group (and hence a window) to keep things separate. Switching between windows with the keyboard is quick, and there’s only one icon in the list of opened apps. Before I screen-share, I tend to minimize the private window.
I agree that profile support feels half-baked and I never started using it mainly because I don’t feel the need to use a different set of extensions or some important settings. I probably should. If I did though, I’d use a different theme so the windows would be noticeably different.
I don’t like profiles in Chrome at all because they tend to spill over into everything and it is difficult to not suddenly be logged in left and right into stuff.
I am curious what you mean by this? In my experience Chrome profiles are completely isolated from each other, meaning different extensions, bookmarks, etc. Or do you mean that if you click on a link it can sometimes be hit or miss whether it opens in the correct profile? (I think that’s due to whichever profile window you last had active.)
For me, I personally rely on a separation of bookmarks and extensions for work vs. personal, so I am stuck either using the half-baked Firefox profiles or just using Chrome.
I assume this is me falling for Google’s nudges and not knowing how to avoid what I don’t want, but when I tried to create different profiles, I was pushed into logging in with my Google account. Then I would automatically be logged into Google services (which I’m normally not), and when visiting something like Reddit, it would automatically create an account for me if I didn’t notice the notification and prevent it quickly enough.
I’m sure all this can be avoided, because I assume the internet would be on fire if this were a common experience, but I’m otherwise not a fan of the browser beyond what I need for my (webdev) work, so I didn’t spend too much time figuring it out.
Oh, yes, I see. They definitely push you to use your Google account, but you can create profiles without tying them specifically to a Google account; you lose some features like cross-platform sync, etc., I believe.
There was an extension to address this (Show Profile), but sadly it could not be updated to WebExtensions when Firefox dropped XUL