08 Feb 2025

the #Eurostack, so hot right now

Is it just me or is it all about Europe right now? Put on some Kraftwerk and follow along I guess.

Fedora Chooses Forgejo! This is GitHub-like project hosting software with version control, issues, pull requests, all the usual stuff. I have a couple of small projects on Codeberg, which is the EU-hosted nonprofit instance, and it works fine as far as I can tell. Also a meissa GmbH presentation at FOSDEM 2025: You know X, Facebook, Xing, SourceForge? What about GitHub? It is time to de-risk OpenSource engagement!

Lots more Europe-hosted SaaS, too. Baldur Bjarnason has more info in Todo notes as a storm approaches

The Sovereign Tech Agency is supporting some Linux plumbing: Arun Raghavan: PipeWire ♥ Sovereign Tech Agency.

The northern German state of Schleswig-Holstein is moving 30,000 PCs from Microsoft Windows and Office to Linux and LibreOffice: LibreOffice at the Univention Summit 2025. I know, I know, government in Germany goes desktop Linux is the hey, Rocky, watch me pull a rabbit out of my hat of IT, but this time they’re not up against Microsoft in its prime. They’re up against a new generation of Microsoft software that can’t open their old files, while LibreOffice can.

They Said It Couldn’t Be Done by Pierre-Carl Langlais, Anastasia Stasenko, and Catherine Arnett. These represent the first ever models trained exclusively on open data, meaning data that are either non-copyrighted or are published under a permissible license. Trained on the Jean Zay supercomputer. Related: Pirate Libraries Are Forbidden Fruit for AI Companies. But at What Cost?

Scott Locklin lists Examples of group madness in technology. One of the worst arguments I hear is that thing X is inevitable because the smart people are doing it. As I’ve extensively documented over the last 15 years on this blog, smart people in groups are not smart and are even more subject to crazes and mob behavior than everyone else.

Not a European product: Framework Laptop’s RISC-V board for open source diehards is available for $199, but there is a Europe angle here. European Union Seeks Chip Sovereignty Using RISC-V - EE Times, RISC-V Summit Europe. RISC-V holds significance for Europe due to its potential to foster innovation, enhance technological sovereignty, and stimulate economic growth within the region. By embracing RISC-V, European countries can reduce their dependency on foreign technologies and proprietary architectures, thereby enhancing their autonomy in critical sectors such as telecommunications, cybersecurity, and data processing.

Also international, not Europe-specific: Postgres full-text search is Good Enough! by Rachid Belaid. (But there is a tech autonomy angle, and an active PostgreSQL Europe, so for practical purposes PostgreSQL is part of the Eurostack.)

Good advice from tante/Jürgen Geuter: Innovation is a distraction. The demand for more Innovation (and sometimes even the request for more research) has become a way to legitimize not doing anything. A way to say the unpleasant solutions we have are not perfect but in the future there might be a magic solution that doesn’t bother us and everyone gets a fucking unicorn.

Marloes de Koning interviews Cristina Caffarra: ‘We have to get to work and put Europe first. But we are late. Terribly late.’ You really don’t have to buy everything in Europe, says the competition expert, who is familiar with the criticism that the American supply is simply superior. But start with 30 percent of your procurement budget in Europe. That already makes a huge difference. (That seems like an easy target. Not only are way more than 30 percent of the European Alternatives up to a serviceable level by now, but unfortunately a lot of the legacy US vendors are having either quality or compliance problems, or both. The risks, technical and otherwise, keep going up.)

Greg Nojeim and Silvia Lorenzo Perez cover Trump’s Sacking of PCLOB Members Threatens Data Privacy Aside from its importance in protecting civil liberties, the PCLOB cannot play its key role in enforcing U.S. obligations under the EU-U.S. Data Privacy Framework (DPF) while it lacks a quorum of members. The European Commission would lose a key oversight tool for which it bargained, and the adequacy decision that it issued to support the DPF could be struck down under review at the Court of Justice of the European Union (CJEU), which struck down two predecessor EU-U.S. data privacy arrangements, the Safe Harbor Agreement and the Privacy Shield.

Karl Bode writes, Apple Has To Pull Its “AI” News Synopses Because They Were Routinely Full Of Shit (If the features unavailable in Europe are problematic anyway…)

Sarah Perez covers Report: Majority of US teens have lost trust in Big Tech. Common Sense says that 64% of surveyed U.S. teens don’t trust Big Tech companies to care about their mental health and well-being and 62% don’t think the companies will protect their safety if it hurts profits. Over half of surveyed U.S. teens (53%) also don’t think major tech companies make ethical and responsible design decisions (think: the growing use of dark patterns in user interface design meant to trick, confuse, and deceive). A further 52% don’t think that Big Tech will keep their personal information safe and 51% don’t think the companies are fair and inclusive when considering the needs of different users. (What if the Eurostack becomes the IT version of those European food brands that sell well in other countries too?)

29 Jan 2025

time to sharpen your pencils, people

Mariana Olaizola Rosenblat covers How Meta Turned Its Back on Human Rights for Tech Policy Press. Zuckerberg announced that his company will no longer work to detect abuses of its platforms other than high-severity violations of content policy, such as those involving illicit drugs, terrorism, and child sexual exploitation. The clear implication is that the company will no longer strive to police its platform against other harmful content, including hate speech and targeted harassment.

Sounds like a brand-unsafe environment. So is another rush of advertiser boycott stories coming? Not this time. Lara O’Reilly reports that brand safety has recently become a political hot potato and been a flash point for some influential, right-leaning figures. In uncertain times, marketing decision-makers are keeping a low profile. Most companies aren’t really set up to take on the open-ended security risk of coming out against hate speech by users with friends in high places. According to the Fraternal Order of Police, the January 6 pardons send a dangerous message, and that message is being heard in marketing departments. The CMOs who boycotted last time are fully aware that stochastic terrorism is a thing, and that rage stories about companies spread quickly in Facebook groups and other extremist media. If an executive makes the news for pulling ads from Meta, they would be putting employees at risk from lone, deniable attacks. So instead of announcing a high-profile boycott, marketers are more likely to follow the example of Federal employees and do the right thing, by the book, and quietly.

Fortunately, big advertisers got some lower-stakes practice with the X (former Twitter) situation. Instead of either (1) staying on there and putting the brand at risk of being associated with material copied out of Henry Ford’s old newspaper or (2) risking getting snarled up in a lawsuit for pulling the X ads entirely, brands got the best of both by cutting way back on the actual money without dropping X entirely or saying much one way or the other.

And it’s possible for advertisers to reduce support for Meta without making a stink or drawing fire. Fortunately, Meta ads are hella expensive, and results can be unrealistic and unsustainable. Like all the Big Tech companies these days, Meta is coping with a slowdown in innovation by tweaking the ad rules to capture more revenue from existing services. As Jakob Nielsen pointed out back in 2006, in Search Engines as Leeches on the Web, ad platforms can even capture the value created by others. A marketer doesn’t have to shout ¡No Pasarán! or anything—just sharpen your best math pencil, quietly go through the numbers, spot something that looks low-ROAS or fraudulent in the Meta column, tweak the budget, repeat. If users can dial down Meta, so can marketers. (Update: Richard Kirk writes, Brands could be spending three times too much on social. You read that right. Read the math, do the math.) And if Meta comes out with something new and risky like the adfraud in the browser thing, Privacy-Preserving Attribution, it’s easy to use the fraud problem as the reason not to do it—you don’t have to stand up and talk politics at work.
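To make the math pencil part concrete, here is a toy version of that budget review as a script. The channel names and numbers are made up for illustration, and real attribution data is messier than this, but the loop is the same: compute return on ad spend (ROAS) per channel, flag the weak entries, tweak the budget, repeat.

// Toy ad budget review: flag channels with weak return on ad spend (ROAS).
// All channel names and numbers here are hypothetical.
const channels = [
  { name: "search", spend: 40000, attributedRevenue: 180000 },
  { name: "meta", spend: 50000, attributedRevenue: 90000 },
  { name: "retailMedia", spend: 30000, attributedRevenue: 120000 },
];
const MIN_ROAS = 2.5; // house rule: investigate anything under $2.50 per $1

for (const c of channels) {
  const roas = c.attributedRevenue / c.spend;
  const verdict = roas < MIN_ROAS ? "trim budget, check for fraud" : "keep";
  console.log(`${c.name}: ROAS ${roas.toFixed(2)} -> ${verdict}`);
}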

From the user side

It’s not that hard to take privacy measures that result in less money for Big Tech. Even if you can’t quit Meta entirely, some basic tools and settings can make an impact, especially if you use both a laptop and a phone, not just a phone. With a few minutes of work, an individual in the USA can, in effect, fine the surveillance business about $50/month.

My list of effective privacy tips is prioritized by how much I think they’ll cost the surveillance business per minute spent. A privacy tips list for people who don’t like doing privacy tips but also don’t like creepy oligarchs. (As they say in the clickbait business, number 9 will shock you: if you get your web browser info from TV and social media, you probably won’t guess which browsers have built-in surveillance and/or fraud features.) That page also has links to more intensive privacy advice for those who want to get into it.

A lawyer question

As an Internet user, I realize I can’t get to Meta surveillance neutral just with my own privacy tools and settings. For the foreseeable future, companies are going to be doing server-to-server tracking of me with Meta CAPI.

So in order to get to a rough equivalent of not being surveilled, I need to balance out their actual surveillance by introducing some free speech into the system. (And yes, numbers can be speech. O, the Tables tell!) So what I’d like to do is write a surrogate script (that can be swapped in by a browser extension in place of the real Meta Pixel, like the surrogate scripts uBlock Origin uses) to enable the user to send something other than valid surveillance data. The user would configure what message the script would send. The surrogate script would then encode the message and pass it to Meta in place of the surveillance data sent by the original Meta script. There is a possible research angle to this, since I think that in general, reducing ad personalization tends to help people buy better products and services. An experiment would probably show that people who mess with cross-context surveillance are happier with their purchases than those who allow surveillance. Releasing a script like that is the kind of thing I could catch hell for, legally, so I’m going to wait to write it until I can find a place to host it and a lawyer to represent me. Anyone?
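In the meantime, here is a rough sketch of the shape a surrogate could take, as an illustration of the idea and not a working release. The entry point and stub properties are modeled on the standard Meta Pixel snippet, which defines a global fbq function; the message handling is the hypothetical part.

// Hypothetical surrogate for the Meta Pixel. It defines the same global
// entry point (fbq) that pages expect, but instead of forwarding the
// page's event data, it sends a message the user configured.
(function () {
  const USER_MESSAGE = "whatever the user configured, not surveillance data";

  function surrogate(command, ...args) {
    // Drop whatever the page wanted to track (log it for debugging).
    console.debug("surrogate fbq called:", command, args);
    // Encode the user's message where event parameters would have gone.
    const payload = new URLSearchParams({ msg: btoa(USER_MESSAGE) });
    navigator.sendBeacon("https://www.facebook.com/tr/?" + payload.toString());
  }
  // Properties the real pixel stub sets, so page code doesn't break:
  surrogate.queue = [];
  surrogate.loaded = true;
  surrogate.version = "2.0";
  window.fbq = surrogate;
})();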

25 Jan 2025

security headers for a static site

This site now has an OPML version (XML) of the blogroll. What can I do with it? It seems like the old Share your OPML site is no more. Any ideas?
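One idea, at least for displaying it: OPML is just XML, so a few lines of browser JavaScript can turn anyone’s published blogroll into a list of links. A sketch, assuming the common blogroll convention of outline elements with text, xmlUrl, and htmlUrl attributes:

// Fetch an OPML blogroll and append its entries to a list element.
async function renderBlogroll(opmlUrl, listElement) {
  const xmlText = await (await fetch(opmlUrl)).text();
  const doc = new DOMParser().parseFromString(xmlText, "text/xml");
  for (const outline of doc.querySelectorAll("outline[xmlUrl]")) {
    const a = document.createElement("a");
    a.href = outline.getAttribute("htmlUrl") || outline.getAttribute("xmlUrl");
    a.textContent = outline.getAttribute("text") || a.href;
    const li = document.createElement("li");
    li.append(a);
    listElement.append(li);
  }
}

Point that at an OPML URL and a ul element and you have a shareable blogroll widget, which gets back a little of what the old site offered.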

Also went through Securing your static website with HTTP response headers by Matt Hobbs and got a clean bill of health from the Security Headers site. Here’s what I have on here as of today:

Access-Control-Allow-Origin "https://blog.zgp.org/"
Cache-Control "max-age=3600"
Content-Security-Policy "base-uri 'self'; default-src 'self'; frame-ancestors 'self';"
Cross-Origin-Opener-Policy "same-origin"
Permissions-Policy "accelerometer=(),autoplay=(),browsing-topics=(),camera=(),display-capture=(),document-domain=(),encrypted-media=(),fullscreen=(),geolocation=(),gyroscope=(),magnetometer=(),microphone=(),midi=(),payment=(),picture-in-picture=(),publickey-credentials-get=(),screen-wake-lock=(),sync-xhr=(self),usb=(),web-share=(),xr-spatial-tracking=()" "expr=%{CONTENT_TYPE} =~ m#text\/(html|javascript)|application\/pdf|xml#i"
Referrer-Policy no-referrer-when-downgrade
Cross-Origin-Resource-Policy same-origin
Cross-Origin-Embedder-Policy require-corp
Strict-Transport-Security "max-age=2592000"
X-Content-Type-Options "nosniff"

(update 2 Feb 2025) This site has some pages with inline styles, so I can’t use that CSP line right now. The inline styles come from the mirrored copies of pages I make with the SingleFile extension, so I need to move those mirrored pages into their own virtual host before I can go back to the version without unsafe-inline.

(update 23 Feb 2025) The Pagefind site search requires ‘unsafe-eval’ in CSP in order to support WASM. This should be wasm-unsafe-eval in the future.

To do WASM and inline styles, the new value for the Content-Security-Policy header is:

"base-uri 'self'; default-src 'self'; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-eval'; frame-ancestors 'self';"

I saved a copy of Back to the Building Blocks: A Path Toward Secure and Measurable Software (PDF). The original seems to have been taken down, but it’s a US Government document so I can keep a copy on here (like the FBI alert that got taken down last year, which I also have a copy of.)

18 Jan 2025

Supreme Court files confusing bug report

I’m still an Internet optimist despite…things…so I was hoping that Friday’s Supreme Court opinion in the TikTok case would have some useful information about how to design online social networking in a way that does get First Amendment protection, even if TikTok doesn’t. But no. Considered as a bug report, the opinion doesn’t help much. We basically got (1) TikTok collects lots of personal info, and (2) Congress gets to decide if and how it’s a national security problem to make personal info available to a foreign adversary, so TikTok is banned. But everyone else doing social software, including collaboration software, is going to have a lot to find out for themselves.

The Supreme Court pretty much ignores TikTok’s dreaded For You Page algorithm and focuses on the privacy problem. So we don’t know if some future ban of some hypothetical future app that somehow fixed its data collection issues would hold up in court just based on how it does content recommendations. (Regulating recommendation algorithms is a big issue that I’m not surprised the Court couldn’t agree on in the short time they had for this case.) We also get the following, on p. 9—TikTok got the benefit of the doubt and received some First Amendment consideration that future apps might or might not get.

This Court has not articulated a clear framework for determining whether a regulation of non-expressive activity that disproportionately burdens those engaged in expressive activity triggers heightened review. We need not do so here. We assume without deciding that the challenged provisions fall within this category and are subject to First Amendment scrutiny.

Page 11 should be good news for anybody drafting a privacy law anyway. Regulating data collection is content neutral for First Amendment purposes—which should be common sense.

The Government also supports the challenged provisions with a content-neutral justification: preventing China from collecting vast amounts of sensitive data from 170 million U. S. TikTok users. That rationale is decidedly content agnostic. It neither references the content of speech on TikTok nor reflects disagreement with the message such speech conveys….Because the data collection justification reflects a purpos[e] unrelated to the content of expression, it is content neutral.

The outbound flow of data from people in the USA is what makes the TikTok ban hold up in court. Prof. Eric Goldman writes that the ban is taking advantage of a privacy pretext for censorship, which is definitely something to watch out for in future privacy laws, but doesn’t apply in this case.

But so far the to-do list for future apps looks manageable.

  • Don’t surveil US users for a foreign adversary

  • Comply with whatever future restrictions on recommendation algorithms turn out to hold up in court. (Disclosure of rules or source code? Allow users to switch to chronological? Allow client-side or peer-to-peer filtering and scoring? Lots of options but possible to get out ahead of.)

Not so fast. Here’s the hard part. According to the Court the problem is not just the info that the app collects automatically and surreptitiously, or the user actions it records, but also the info that users send by some deliberate action. On page 14:

If, for example, a user allows TikTok access to the user’s phone contact list to connect with others on the platform, TikTok can access any data stored in the user’s contact list, including names, contact information, contact photos, job titles, and notes. Access to such detailed information about U. S. users, the Government worries, may enable China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

and in Justice Gorsuch’s concurrence,

According to the Federal Bureau of Investigation, TikTok can access any data stored in a consenting user’s contact list—including names, photos, and other personal information about unconsenting third parties. Ibid. (emphasis added). And because the record shows that the People’s Republic of China (PRC) can require TikTok’s parent company to cooperate with [its] efforts to obtain personal data, there is little to stop all that information from ending up in the hands of a designated foreign adversary.

On the one hand, yes, sharing contacts does transfer a lot of information about people in the USA to TikTok. But sharing a contact list with an app can work a lot of different ways. It can be

  1. covert surveillance (although mobile platforms generally do their best to prevent this)

  2. data sharing that you get tricked into

  3. deliberate, more like choosing to email a copy of the company directory as an attachment

If it’s really a problem to enable a user to choose to share contact info, then that makes running collaboration software like GitHub in China a problem from the USA side. (Git repositories are full of metadata about who works on what, with who. And that information is processed by other users, by the platform itself, and by third-party tools.) Other content creation tools also share the kinds of info on skills and work relationships that would be exactly what a foreign adversary murder robot needs to prioritize targets. But the user, not some surveillance software, generally puts that info there. If intentional contact sharing by users is part of the reason that the USA can ban TikTok, what does that mean for other kinds of user-to-user communication?

Kleptomaniac princesses

There’s a great story I read when I was a kid that I wish I had the citation for. It might be fictional, but I’m going to summarize it anyway because it’s happening again.

Once upon a time there was a country that the UK really, really wanted to maintain good diplomatic relations with. The country was in a critical strategic location and had some kind of natural resources or something, I don’t remember the details. The problem, though, was that the country was a monarchy, and one of the princesses loved to visit London and shoplift. And she was really bad at it. So diplomats had to go around to the stores in advance to tell the manager what’s going on, convince the store to let her steal stuff, and promise to settle up afterwards.

Today, the companies that run the surveillance apps are a lot like that princess. techbros don’t have masculine energy, they have kleptomaniac princess energy. If one country really needs to maintain good relations with another, they’ll allow that country’s surveillance apps to get away with privacy shenanigans. If relations get chillier, then normal law enforcement applies. At least for now, though, we don’t know what the normal laws here will look like, and the Supreme Court didn’t provide many hints yesterday.

15 Jan 2025

How this site uses AI

This site is written by me personally except for anything that is clearly marked up and cited as a direct quotation. If you see anything on here that is not cited appropriately, please contact me.

Generative AI output appears on this site only if I think it really helps make a point and only if I believe that my use of a similar amount and kind of material from a relevant work in the training set would be fair use.

For example, I quote a sentence of generative AI output in LLMs and reputation management. I believe that I would have been within my fair use rights to use the same amount of text from a copyrighted history book or article.

In LLMs and the web advertising business, my point was not only that the Big Tech companies are crooked, but that it’s so obvious. A widely available LLM can easily point out that a site running Big Tech ads—for real brands—is full of ripped-off content. So I did include a short question and answer session with ChatGPT. It’s really getting old that big companies are constantly being shocked to discover infringement and other crimes when their own technology could have spotted it.

Usually when I mention AI or LLMs on here I don’t include any generated content.

More slash pages

11 Jan 2025

Click this to buy better stuff and be happier

Here’s my contender for Internet tip of the year. It’s going to take under a minute, and will not just help you buy better stuff, but also make you happier in general. Ready? Here it is, step by step.

  1. Log in to your Google account if you’re not logged in already. (If you have a Gmail or Google Drive tab open in the browser, you’re logged in.)

  2. Go to My Ad Center.

  3. Find the Personalized ads control. It looks something like this.

     [screenshot: Personalized ads on]

  4. Turn it off.

     [screenshot: Personalized ads off]

  5. That’s it. Unless you have another Google account. If you do have multiple Google accounts (like home, school, and work accounts) do this for each one.

This will affect the ads you get on all the Google sites and apps, including Google Search and YouTube, along with the Google ads on other sites. Google is probably going to show you some message to try to discourage you from doing this. From what I can tell from the outside, it looks like turning off personalized ads will cost Google money. Last time I checked, I got the following message.

Ads may seem less relevant When your info isn’t used for ads, you may see fewer ads for products and brands that interest you. Non-personalized ads on Google are shown to you according to factors like the time of day, device type, your current search or the website you’re visiting, or your current location (based on your IP address or device permissions).

But what they don’t say is anything about how personalized ads will help you buy better products and services. And that’s because—and I’m going out on a limb here data-wise, but a pretty short and solid limb, and I’ll explain why—they just don’t. Choosing to turn off personalized ads somehow makes you a more satisfied shopper and better off.

How does this work?

I still don’t know exactly how this tip works, but so far there have been a few theories.

1. lower fraud risk. It’s possible that de-personalizing the ads reduces the number of scam advertisers who can successfully reach you. Bian et al., in Consumer Surveillance and Financial Fraud, show that Apple App Tracking Transparency, which reduces the ability of apps to personalize ads, tended to reduce fraud complaints to the FTC.

We estimate that the reduction in tracking reduces money lost in all complaints by 4.7% and money lost reported in internet and data security complaints by 40.1%.

That’s a pretty big effect. De-personalizing ads might mean that your employer doesn’t get compromised by an ad campaign that delivers malware targeting a specific company, and that you don’t get fake ads aimed at users of a particular software product. Even if the increase in fraud risk for users who leave personalization on is relatively small, getting scammed has a big impact and can move the average money and happiness metrics a lot.

2. more mindful buying. Another possibility is that people who get fewer personalized ads are making fewer impulse purchases. Jessica Fierro and Corrine Reichert bought a selection of products from those Temu ads that seem to be everywhere, and decided they weren’t worth it. Maybe people without personalized ads are making fewer buying decisions but each one is better thought out.

3. buy more from higher quality vendors. Or maybe companies that put more money into personalized advertising tend to put less into improving product quality. (ICYMI: Product is the P all marketers should strive to influence by Mark Ritson.) In Behavioral advertising and consumer welfare: An empirical investigation, Mustri et al. found that

targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products, compared to competing alternatives found in organic search results

In Why Your Brand Feels Like a Cheap Date: All Flash, No Substance in the World of Performance Marketing, Pesach Lattin writes,

Between 2019 and 2021, brands that focused on brand equity saw a 72% increase in value, compared to just 20% for brands that relied primarily on performance tactics. Ignoring brand-building not only weakens your baseline sales but forces you to spend more and more on performance marketing just to keep your head above water.

Brands that are over-focused on surveillance advertising might be forced to under-invest in product improvements.

4. limited algorithmic and personalized pricing. Personalized ads might be set up to offer the same product at higher prices to some people. The FTC was investigating, but from the research point of view, personalized pricing is really hard to tell apart from dynamic pricing. Even if you get volunteers to report prices, some might be getting a higher price because stock is running low, not because of who the individual is. So it’s hard to show how much impact this has, but hard to rule it out too.

5. it’s just a step on the journey. Another possibility is that de-personalizing the ads is a gateway to blocking ads entirely. What if, without personalization, the ads get gross or annoying enough that people tend to move up to an ad blocker? And, according to Lin et al. in The Welfare Effects of Ad Blocking,

[P]articipants that were asked to install an ad-blocker become less likely to regret recent purchases, while participants that were asked to uninstall their ad-blocker report lower levels of satisfaction with their recent purchases.

Maybe you don’t actually make better buying decisions while ads are on but personalization is off—but it’s a step toward full ad blocking where you do get better stuff and more happiness.

How do I know this works?

I’m confident that this tip works because if turning ad personalization off didn’t help you, Google would have said so a while ago. Remember the 52% paper about third-party cookies? Google made a big deal out of researching the ad revenue impact of turning cookie tracking on or off. And this ad personalization setting also has a revenue impact for Google. According to documents from one of Google’s Federal cases, keeping the number of users with ad personalization off low is a goal for Google—they make more money from you if you have personalization on, so they have a big incentive to try to convince you that personalization is a win-win. So why so quiet? The absence of a PDF about this is just as informative as the actual PDF would be.

And it’s not just Google. Research showing user benefits from personalized ads would be a fairly easy project not just for Google, but for any company that can both check a privacy setting and measure some kind of shopping outcome. Almost as long as Internet privacy tools have been a thing, so has advice from Internet Thought Leaders telling us they’re not a good idea. But for a data-driven industry, they’re bringing surprisingly little data—especially considering that for many companies it’s data they already have and would only need to do stats on, make graphs, and write (or have an LLM write) the abstract and body copy.

Almost any company with a mobile app could do research to show any benefits from ad personalization, too. Are the customers who use Apple iOS and turn off tracking more or less satisfied with their orders? Do banks get more fraud reports from app users with tracking turned on or off? It would be straightforward for a lot of companies to show that turning off personalization or turning on some privacy setting makes you a less happy customer—if it did.

The closest I have found so far is Balancing User Privacy and Personalization by Malika Korganbekova and Cole Zuber. This study simulated the effects of a privacy feature by truncating browsing history for some Wayfair shoppers, and found that people who were assigned to the personalized group and chose a product personalized to them were 10% less likely to return it than people in the non-personalized group. But that’s about a bunch of vendors of similar products that were all qualified by the same online shopping platform, not about the mix of honest and dishonest personalized ads that people get in total. So go back and do the tip if you didn’t already, enjoy your improved shopping experience, and be happy. More: effective privacy tips

05 Jan 2025

ads.txt for a site with no ads

This site does not have programmatic ads on it.

But just in case, since there’s a lot of malarkey in the online advertising business, I’m putting up this file to let the advertisers know that if someone sold you an ad and claimed it ran on here, you got burned.

That’s the ads.txt file for this site. The format is defined in a specification from the IAB Tech Lab (PDF). The important part is the last line. The placeholder is how you tell the tools that are supposed to be checking this stuff that you don’t have ads.
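For reference, the placeholder record from the IAB spec looks like this; placeholder.example.com is the literal string the spec uses, not a real ad system domain:

placeholder.example.com, placeholder, DIRECT, placeholder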

In other news, selling info on US citizens to North Korean murder robots is illegal now so we’ve got that going for us which is nice. See Justice Department Issues Final Rule Addressing Threat Posed by Foreign Adversaries’ Access to Americans’ Sensitive Personal Data

04 Jan 2025

Links for 4 Jan 2025: news from the low-trust society

Aram Zucker-Scharff writes, in Never Forgive Them,

If this year has revealed anything about the tech billionaires it is that they have a very specific philosophy other than just growth and that philosophy is malicious…I don’t think we can really take on the obstacle of, let’s call it more accurately, the scam economy without acknowledging this is all part of the design. They think they are richer than you and therefore you must be stupid and because you are stupid you should be controlled…

Read the whole thing. A lot of tech big shots want to play the rest of us like a real-time strategy game. (Ever notice that the list of skills in the we don’t hire US job applicants because the culture doesn’t value the following skills tweets is the same as the list of skills in the our AI has achieved human-level performance in the following skills tweets?) I predicted that low-trust society will trend in 2025, and I agree with Aram that a big part of that is company decision-makers deliberately making decisions that make it harder to trust others. I’m working on a list of known good companies. (Work in progress, please share yours if you have one.)

And yes, my link collecting tool has queued up a bunch of links about the shift towards a lower-trust society along with ways that people are adapting to it or trying to shift things back.

Opinion: We Need More Consequences for Reckless Driving. But That Doesn’t Mean More Punishment — Streetsblog USA (a lot of this is reactions to reactions to app-driven rat running through neighborhoods. Bollards can be a way to game the algorithm.)

Judge blocks parts of California bid to protect kids from social media (the ban on addictive feeds without consent is still there)

Self-Own (bullshit about economics, explained)

The Cows in the Coal Mine (bullshit about health, only getting worse)

This Year in Worker Conquests

Boeing strike ends after workers vote to accept “life-changing” wage increase

Steinar H. Gunderson: git.sesse.net goes IPv6-only (coping with AI scrapers)

OpenAI’s Board, Paraphrased: ‘To Succeed, All We Need Is Unimaginable Sums of Money’

Namma Yatri is a rideshare app that offers a better deal to drivers. Daily or per-trip flat rates, not a percentage

5 Rideshare Strategies That Are Complete BS

How to block Chrome from signing you into a Google account automatically

Leave Me Alone.

Firefox-maker Mozilla boosted revenue significantly in 2023, but the financial report may also raise concern

Google Cuts Thousands of Workers Improving Search After Search Results Scientifically Shown to Suck (a lot of the bullshit problem is downstream from Google’s labor/management issues)

Why is it so hard to buy things that work well? (imho Mark Ritson still explained it best—companies over-emphasize the promotion P of marketing, trying to find people slightly more likely to buy the product as is, over the product refinements that would tend to get more buyers. George Tannenbaum on destroying brand trust with too much of one P, too little of another: Ad Aged: Leave Me Alone.)

Why Big Business May Wind Up Missing Lina Khan

An ad giant wants to run your next TV’s operating system

Yes, your phone is tracking you via advertising ID, and companies are using it to sell your location and identity to anyone. Protect yourself by disabling this feature on your device.

Meta beats suit over tool that lets Facebook users unfollow everything (I guess now it turns out you can’t unfollow the AI bots anyway?)

Sweet Dreams and Sour Deals: How White-Noise Apps Are Playing Advertisers

NFL Player Uses Pirate Streaming Site to Watch His Own Team

Missouri AG claims Google censors Trump, demands info on search algorithm

Ex-coiner Y Combinator startup bro: ‘dawg i chatgpt’d the license, can’t be bothered with legal’

Steam adds the harsh truth that you’re buying “a license,” not the game itself

31 Dec 2024

predictions for 2025

(looks like I had enough notes for an upcoming event to do A-Z this year…)

Ad blocking will get bigger and more widely reported on. Besides the usual suspects, the current wave of ad blocking is also partly driven by professional, respectable security vendors. Malwarebytes Labs positions their ad blocker as a security tool and certain well-known companies are happy to help them with their content marketing by running malvertising. (example: Malicious ad distributes SocGholish malware to Kaiser Permanente employees) Silent Push is another security vendor helping to make the ads/malware connection. And, according to research by Lin et al., users who installed an ad blocker reported fewer regrets with purchases and an improvement in subjective well-being. Some of those users who installed an ad blocker reluctantly because of security concerns will be hard to convince to turn it off even if the malvertising situation improves.

Bullshit is going to be everywhere, and more of it. In 2025 it won’t be enough to just ignore the bullshit itself. People will also have to ignore what you might think of as a bullshit Smurf attack, where large amounts of content end up amplifying a small amount of bullshit. Some politician is going to tweet something about how these shiftless guys today need to pull up their pants higher, and then a bunch of mainstream media reporters are going to turn in their diligently researched 2000-word think pieces about the effect of higher pants on the men’s apparel market and human reproductive system. And by the time the stories run, the politician has totally forgotten about the pants thing and is bullshitting about something else. The ability to ignore the whole cycle will be key. So people’s content discovery habits are going to change, we just don’t know how.

Chrome: Google will manage to hang on to their browser, as prospective buyers don’t see the value in it. Personally I think there are two logical buyers. The Trade Desk could rip out the janky Privacy Sandbox stuff and put in OpenPass and UID2. Not all users would leave those turned on, but enough would to make TTD the dominant source for user identifiers in web ads. Or a big bank could buy Chrome as a fraud protection play and run it to maximize security, not just ad revenue. At the scale of the largest banks, protecting existing customers from Internet fraud would save the bank enough money to pay for browser development. Payment platform integration and built-in financial services upsell would be wins on top of that.

Both possible Chrome buyers would be better off keeping open-source Chromium open. Google would keep contributing code even if they didn’t control the browser 100%. They would feel the need to hire or sponsor people to participate on a legit open-source basis to support better interoperability with Google services. They wouldn’t be able to get the anticompetitive shenanigans back in, but the legit work would continue—so the buyer’s development budget would be lower than Google’s, long term. But that’s not going to happen. So far, decision makers are convinced that the only way to make money with the browser is with tying to Google services, so they’re going to pass up this opportunity.

Development tools will keep getting more AI in them. It will be easier to test new AI stuff in the IDE than to not test it. But a flood of plausible-looking new code that doesn’t necessarily work in all cases or reflect the unwritten assumptions of the project means a lot more demand for testing and documentation. The difference between a software project that spends 2025 doing self-congratulatory AI productivity win blog posts and one that has an AI code catastrophe is going to be how much test coverage they started with or were able to add quickly.

Environmental issues: we’re in for more fires, floods, and storms. Pretty much everybody knows why, but some people will only admit it when they have to. A lot of homeowners won’t be able to renew their insurance, so will end up selling to investors who are willing to demolish the house and hold the land for eventual resale. More former house occupants will pivot to #vanlife, and 24-hour health clubs will sell more memberships to people who mainly need the showers.

Firefox will keep muddling through. There will be more Internet drama over their ill-advised adfraud in the browser thing, but the core software will be able to keep going and even pick up a few users on desktop because of the ad blocking trend. The search ad deal going away won’t have much effect—Google pays Firefox to exist and limit the amount of antitrust trouble it’s in, not for some insignificant number of search ad clicks. If they can’t pay Firefox for default search engine placement, they’ll find some other excuse to send them enough cash to keep going. Maybe not as high on the hog as they have been used to, but enough to keep the browser usable.

Google Zero, where Google just stops sending traffic to a site, will arrive for a significant minority of sites. But not even insiders at Google know which. (I Attended Google’s Creator Conversation Event, And It Turned Into A Funeral | GIANT FREAKIN ROBOT, Google, the search engine that’s forgotten how to search)

Homeschooling will increase faster because of safety concerns, but parents will feel uncomfortable about social isolation and seek out group activities such as sports, crafts, parent-led classes, and group playdates. Homeschooling will continue to be a lifestyle niche that’s relatively easy to reach with good influencer and content creator connections, but not well-covered by the mainstream media.

Immigration into the USA will continue despite high-profile deportations and associated human rights violations. But whether or not a particular person is going to be able to make it in, or be able to stay, is going to be a lot less predictable. If you know who the person is who might be affected by immigration policy changes, you might be able to plan around it, but what’s more likely from the business decision-making point of view is that the person affected is an employee of some supplier of your supplier, or a family member, and you can’t predict what happens when their life gets disrupted. Any company running in lean or just-in-time mode, and relying on low disruption and high predictability, will be at the biggest disadvantage. Big Tech companies will try to buy their way out of the shitstorm, but heavy reliance on networks of supplier companies will mean they’re still affected in hard-to-predict ways.

Journalism will continue to go non-profit and journalist-owned. The bad news is there’s not enough money in journalism, now or in the near future, to sustain too many levels of managers and investors, and the good news is there’s enough money in it to keep a nonprofit or lifestyle company going. (Kind of like tech conferences. LinuxWorld had to support a big company, so wasn’t sustainable, but Southern California Linux Expo, a flatter organization, is.)

Killfile is the old Usenet word for a blocklist, and I already had something for B. The shared lists that are possible with the Fediverse and Bluesky are too useful not to escape into other categories of software. I don’t know which ones yet, but a shared filter list to help fix the search experience is the kind of thing we’re likely to see. People’s content discovery and shopping habits will have to change, we just don’t know how.

Low-trust society will trend. It’s possible for a country to move from high trust to low, or the other way around, as the Pew Research Center covered in 2008. The broligarchy-dominated political and business environment in the USA, along with the booms in growth hacking and AI slop, will make things a lot easier for corporate crime and scam culture. So people’s content discovery and shopping habits will have to change, we just don’t know how. Multi-national companies that already operate in middle-income low-trust countries will have some advantages in figuring out the new situation, if they can bring the right people in from there to here.

Military affairs, revolution in: If you think AI hype at the office in the USA is intense, just watch the AI hype in Europe about how advanced drones and other AI-enabled defense projects can protect countries from being occupied by an evil dictator without having to restore or expand conscription. Surveillance advertisers and growth hackers in the USA are constantly complaining about restrictions on AI in Europe—but the AI Act over there has an exception for the defense industry. In 2025 it will be clear that the USA is over-investing in bullshit AI and under-investing in defense AI, but it won’t be clear what to do about it. (bonus link: The Next Arsenal of Democracy | City Journal)

Neighborhood organizations: As Molly White recommended in November, more people will be looking for community and volunteer opportunities. The choice to become a joiner and not just a consumer in unpredictable times is understandable and a good idea in general. This trend could enter a positive feedback loop with non-profit and journalist-owned local news, as news sites try more community connections like Cleveland Documenters.

Office, return to: Companies that are doing more crime will tend to do more RTO, because signaling loyalty is more important than productivity or retaining people with desired skills. Companies that continue avoiding doing crimes, even in what’s going to be a crime-friendly time in the USA, will tend to continue cutting back on office space. The fun part is that the company can tell the employee that work from home privileges are a benefit, and not free office space for the employer. Win-win! So the content niche for how-tos on maximizing home (and van) offices will grow.

Prediction markets will benefit from 2024’s 15 minutes of fame to catch on for some niche corporate projects, and public prediction market prices will be quoted in more news stories.

Quality, flight to (not): If I were going to be unrealistically optimistic here, I’d say that the only way for advertisers to deal with the flood of AI slop sites and fake AI users is to go into full Check My Ads mode and just advertise on known legit sites made by and for people. But right now the habits and skills around race-to-the-bottom ad placements are too strong, so there won’t be much change on the advertiser side in 2025. A few forward-thinking advertisers will get good results from quality buying for specific campaigns, but that’s about it.

Research on user behavior will get a lot more important. The AI crapflood and resulting search quality crisis mean that (say the line, Bart) people’s content discovery and shopping habits will have to change, we just don’t know how. Companies that build user research capacity, especially in studying privacy users and the gaps they leave in the marketing data, will have an advantage.

State privacy law season will be spicy again. A few states will get big comprehensive privacy bills through the process again, but the laws to watch will be specific ones on health, protecting teens from the algorithm, social media censorship, and other areas. More states will get laws like Daniel’s Law. (We need a Daniel’s Law for military personnel, their families, and defense manufacturing workers, but we’re probably going to see some states do them for health insurance company employees instead.) update 1 Feb 2025: Compliance issues that came up for AADC will have to get another look.

Troll lawyer letters alleging violations of the California Invasion of Privacy Act (CIPA) and similar laws will increase. Operators of small sites can incur a lot of legal risk now just by running a Big Tech tracking pixel. But Big Tech will continue to ignore the situation, and put all the risks on the small site. (kind of like how Amazon.com uses delivery partner companies to take the legal risks of employing algorithmically micromanaged, overstressed delivery drivers.)

Unemployment and underemployment will trend up, not down, in 2025. Yes, there will be more political pressure on companies here to hire and manufacture locally, but actual job applicants aren’t interchangeable worker units in an RTS game—there’s a lot of mismatch between the qualities that job seekers will have and the qualities that companies will be looking for, which will mean a lot of jobs going unfilled. And employers tend to hire fewer people in unpredictable times anyway.

Virginia’s weak privacy law will continue to be ignored by most companies that process personal data. Companies will treat all the privacy law states as Privacyland, USA, which means basically California.

Why is my cloud computing bill so high? will be a common question. But the biggest item on the bill will be the AI that [employee redacted] is secretly in love with, so you’ll never find it.

X-rated sites will face an unfriendly regulatory environment in many states, so will help drive mass-market adoption of VPNs, privacy technologies, cryptocurrencies, and fintech. The two big results will be that first, after people have done all the work to go underground to get their favorite pr0n site, they might as well use their perceived invisibility to get infringing copies of other content too. And second, a lot of people will get scammed by fake VPNs and dishonest payment services.

Youth privacy laws will drive more investment in better content for kids. (This is an exception to the Q prediction.) We’re getting a bunch of laws that affect surveillance advertising to people under 18. As Tobias Kircher and Jens Foerderer reported, in Ban Targeted Advertising? An Empirical Investigation of the Consequences for App Development, a privacy policy change tended to drive a lot of Android apps for kids out of the Google Play Store, but the top 10 percent of apps did better. If you have ever visited an actual app store, it’s clear that Sturgeon’s law applies, and it’s likely that the top 10 percent of apps account for almost all of the actual usage. All the kids privacy laws and regs will make youth-directed content a less lucrative play for makers of crap and spew who can make anything, leaving more of the revenue for dedicated and high-quality content creators.

ZFS will catch on in more households, as early adopters replace complicated streaming services (and their frequent price increases and disappearing content) with storage-heavy media PCs.

28 Dec 2024

How we get to the end of prediction market winter

Taylor Lorenz writes, in Prediction markets go mainstream,

Prediction markets—platforms where users buy and sell shares based on the probability of future events—are poised to disrupt the media landscape in 2025, transforming not only how news is shared but how it is valued and consumed.

Prediction markets did get some time in the spotlight this year. But the reasons for the long, ongoing prediction market winter are bigger than just prediction markets not being famous. Prediction markets have been around for a long time, and have stubbornly failed to go mainstream.

The first prediction market to get famous was the University of Iowa’s Iowa Electronic Markets, which launched in the late 1980s and has been covered in the Wall Street Journal since at least the mid-1990s. They originally used pre-web software and you had to mail in a paper check (update 4 Jan 2025: paper checks are still the only way to fund your account on there). But IEM wasn’t the first. Prof. Robin Hanson, in Hail Jeffrey Wernick, writes about an early prediction market entrepreneur who started his first one in 1981. (A secretary operated the market manually, with orders coming in by fax.) Prediction markets were famous before Linux or the World Wide Web were. Prediction markets have been around since before stop trying to make fetch happen happened.

So the safe prediction would be that 2025 isn’t going to be the year of prediction markets either. But just like the year of Linux on the desktop never happened because the years of Linux in your pocket and in the data center did, the prediction markets that do catch on are going to be different from the markets that prediction market nerds are used to today. Some trends to watch are:

Payment platforms: Lorenz points out, Prediction markets are currently in legal limbo, but I’d bet against a ban, especially given the new administration. Right now in the USA there is a lot of VC money tied up in fintech, and a lot of political pressure from well-connected people to deregulate everything having to do with money. For most people the biggest result will be more scams and more hassles dealing with transactions that are legal and mostly trustworthy today but that will get enshittified in the new regulatory environment. But all those money-ish services will give prediction markets a lot more options for getting money in and out in a way that enables more adoption.

Adding hedging and incentivization: The prediction markets that succeed probably won’t be pure, ideal prediction markets, but will add on some extra market design to attract and retain traders. Nick Whitaker and J. Zachary Mazlish, in Why prediction markets aren’t popular, write that so far, prediction markets don’t appeal to the kinds of people who play other kinds of markets. People enter markets for three reasons. Savers are trying to build wealth, Gamblers play for thrills, and Sharps enter to profit from less well-informed traders. No category out of the three is well-served by existing prediction markets, because a prediction market is zero-sum, so not a way to build wealth long-term, and it’s too slow-moving and not very thrilling compared to other kinds of gambling. And the sharps need a flow of less well informed traders to profit from, but prediction markets don’t have a good way to draw non-sharps into the market.

Whitaker and Mazlish do suggest hedging as a way to get more market participants, but say

We suspect there is simply very little demand for hedging events like whether a certain law gets passed; there is only demand for hedging the market outcomes those events affect, like what price the S&P 500 ends the month at. Hedging market outcomes already implicitly hedges for not just one event but all the events that could impact financial outcomes.

That’s probably true for hedging in a large public prediction market. An existing oil futures market is more generally useful to more traders than a prediction market on all the events that might affect the price of oil. And certain companies’ stocks today are largely prediction markets on future AI breakthroughs and the future legal status of various corporate crimes. But I suspect that it’s different for a private market for events within a company or organization. For example, a market with sales forecasting contracts on individual large customers could provide much more actionable numbers to management than just trading on predicted total sales.

You could, in effect, pay for a prediction market’s information output by subsidizing it, and Whitaker and Mazlish suggest this. A company that runs an internal prediction market can dump money in and get info out. Like paying for an analyst or consulting firm, but in a distributed way where the sources of expertise are self-selecting by making trade/no trade decisions based on what they know or don’t know. But it’s also possible, usually on the smaller side, for a prediction market to become an incentivization market. To me, the difference is that in an incentivization market, a person with ability to affect the results holds a large enough investment in the market that it influences them to do so. The difference is blurry and the same market can be a prediction market for some traders and an incentivization market for others. But by designing incentives for action in, a market operator can make it drift away from a pure prediction market design to one that tends to produce an outcome. related: The private provision of public goods via dominant assurance contracts by Alexander Tabarrok
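The standard way to dump money in is Robin Hanson’s logarithmic market scoring rule (LMSR), an automated market maker whose worst-case loss, which is the subsidy, is capped at b * ln(number of outcomes) by the liquidity parameter b. A toy sketch:

// Toy logarithmic market scoring rule (LMSR) market maker.
// The operator's maximum loss (the subsidy) is b * ln(numOutcomes).
class LMSR {
  constructor(numOutcomes, b) {
    this.b = b;
    this.q = new Array(numOutcomes).fill(0); // shares sold per outcome
  }
  cost(q) {
    return this.b * Math.log(q.reduce((s, qi) => s + Math.exp(qi / this.b), 0));
  }
  price(i) { // current implied probability of outcome i
    const e = this.q.map((qi) => Math.exp(qi / this.b));
    return e[i] / e.reduce((s, x) => s + x, 0);
  }
  buy(i, shares) { // returns what the trader pays
    const before = this.cost(this.q);
    this.q[i] += shares;
    return this.cost(this.q) - before;
  }
}

// A two-outcome market, say "ships on time" vs. "slips," with b = 100:
const m = new LMSR(2, 100);
console.log(m.price(0).toFixed(2)); // 0.50 at launch
console.log(m.buy(0, 50).toFixed(2)); // a trader's cost for 50 "on time" shares
console.log(m.price(0).toFixed(2)); // about 0.62 after the buy

The bigger the operator sets b, the more it costs traders to move the price, and the more the operator stands to pay out, so b is the knob for how much information output you’re paying for.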

Proof of concept projects can already address specific information needs: A problem that overlaps with the prediction market incentivization problem in interesting ways is the problem of how to pay for information products and services that can be easily copied. How do we fund open source? is a persistent question. And Bruce Perens, original author of what became the Open Source Definition, wants to move on entirely. The problem of funding open source is hard enough that we mainly hear about it when a high-profile security issue makes the news.

As Luis Villa points out,

If you don’t know what’s in the box, you can’t secure it, so it is your responsibility as builders to know what’s in the box. We need better tools, we need better engagement to enable everybody to do that with less effort and less burden on individual volunteer maintainers and non-profits.

Companies that use open source software need to measure and reduce risks. The problem is that the biggest open source risks are related to hard-to-measure human factors like developer turnover and burnout. Developers of open source software can take actions that help companies understand their risks, but they’re not compensated for doing it. A prediction/incentivization market can both help quantify hidden risks and incentivize changes.

If you have an internal market that functions as both a prediction market and an incentivization market, you can subsidize both the information and the desired result by predicting the events that you don’t want to happen. This is similar to how commodities markets and software bug futures markets can work. Some traders are pure speculators, others take actions that can move the market. Farmers can plan which crops to plant based on predicted or contracted prices, companies can allocate money to fuel futures and/or fuel-saving projects, developers can prioritize tasks.

Synergy with AI projects: An old corporate Intranet rule of thumb [citation needed] is that you need five daily active editors to have a useful company or organization Wiki. I don’t know what the number is for a prediction market, but as Prof. Andrew Gelman points out, prediction markets need “dumb money” to create incentives for well-informed traders to play and win.

Noisy, stupid bots are a minus for most kinds of social software, but a win for markets. If only there were some easy way to crank up a bunch of noisy, stupid bots. Oh, wait, there’s a whole AI boom happening. Good timing, right? And AI projects need ways to test their output quality in a scalable way, just as much as prediction markets need extra trading churn. AI projects and prediction market projects solve each other’s problems.

  • Prediction markets need liquidity and dumb money. Bots can already provide both.

  • AI projects need scalable quality checks. Slop is easier to make than to check, so the cost of evaluating the quality of AI output keeps growing relative to the declining costs of everything else. You can start up a lot of bots, fund each with a small stake, and shut down the broke ones. The only humans required are the traders who can still beat the bots. And if at some point the humans lose all their money, you know you won AI. Congratulations, and I for one welcome our bot plutocrat overlords.

Bots can also be run behind a filter to only make offers that, if accepted, would further the market operator’s goals in some way. For example, bots can be set up to be biased to over-invest on predicting unfavorable outcomes (like buying the UNFIXED side of bug futures) to add some incentivization.
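A sketch of what that filter could look like, using the bug futures framing from above; the order shape and field names are hypothetical:

// Hypothetical pre-trade filter: bots may trade freely on the outcomes
// the operator wants to subsidize (like UNFIXED on bugs the operator
// wants fixed) but may only take small positions on the other side.
function allowBotOrder(order, goals) {
  if (!goals.subsidizedIssues.has(order.issueId)) return false;
  if (order.side === "UNFIXED") return true; // over-invest here freely
  return order.amount <= goals.maxFavorableStake; // small stakes only
}

const goals = { subsidizedIssues: new Set(["bug-1234"]), maxFavorableStake: 10 };
allowBotOrder({ issueId: "bug-1234", side: "UNFIXED", amount: 100 }, goals); // true
allowBotOrder({ issueId: "bug-1234", side: "FIXED", amount: 100 }, goals); // false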

Fixing governance by learning from early market experiences: Internal prediction markets at companies tend to go through about the same story arc. First, the market launches with some sponsorship and internal advocacy from management. Second, the market puts up some encouraging results. (Even in 2002 a prediction market was producing more accurate sales forecasts than the official ones at HP.) And for its final act, the prediction market ends up perpetrating the unforgivable corporate sin: accurately calling some powerful executive’s baby ugly. So the prediction market ends up going to live with a nice family on a farm. Read the (imho, classic) paper, Corporate Prediction Markets: Evidence from Google, Ford, and Firm X by Bo Cowgill and Eric Zitzewitz, and, in Professor Hanson’s post, why a VC firm could not get prediction markets into portfolio companies. Wernick blames the ego of managers who think their judgment best, hire sycophants, and keep key org info close to their chests.

The main lesson is that the approval and budget for the prediction market itself needs to be handled as many management levels as possible above the managers that the prediction market is likely to bring bad news to. Either limit the scope of issues traded on, or sell the market to a more highly placed decision maker, or both. The prediction market administrator needs to report to someone safely above the level of the decision-makers for the issues being traded on. The really interesting experiment would be a private equity or VC firm that has its own team drop in and install a prediction market at each company it owns. The other approach is bottom-up: start with limiting the market to predicting small outcomes like the status of individual software bugs, and be disciplined about not trading on more consequential issues until the necessary sponsorship is in place.

So, is 2025 the year of prediction markets? Sort of. A bunch of factors are coming together. Payment platform options, the ability to do proof of concept niche projects, and the good fit as a QA tool for AI will make internal market projects more appealing in 2025. And if market operators can learn from history to avoid what tends to happen to bearers of bad news, this could be the year.