Is it just me or is it all about Europe right now? Put on some Kraftwerk
and follow along I guess.
Fedora
Chooses Forgejo! This is GitHub-like project hosting software with
version control, issues, pull requests, all the usual stuff. I have a
couple of small projects on Codeberg,
which is the (EU) hosted nonprofit instance and it works fine as far as
I can tell. Also: a meissa
GmbH presentation
at FOSDEM 2025. You know X, Facebook, Xing, SourceForge? What
about GitHub? It is time to de-risk OpenSource engagement!
Scott Locklin lists Examples
of group madness in technology. One of the worst arguments I hear
is that thing X is inevitable because the smart people are doing
it. As I’ve extensively documented over the last 15 years on this
blog, smart people in groups are not smart and are even more subject to
crazes and mob behavior than everyone else.
Also international, not Europe-specific: Postgres
full-text search is Good Enough! by Rachid Belaid. (But there is a
tech autonomy angle, and an active PostgreSQL Europe, so for
practical purposes PostgreSQL is part of the Eurostack.)
Good advice from tante/Jürgen Geuter: Innovation
is a distraction. The demand for more Innovation (and sometimes
even the request for more research) has become a way to legitimize not
doing anything. A way to say the unpleasant solutions we have are not
perfect but in the future there might be a magic solution that doesn’t
bother us and everyone gets a fucking unicorn.
Marloes de Koning interviews Cristina Caffarra: ‘We
have to get to work and put Europe first. But we are late. Terribly
late.’ You really don’t have to buy everything in Europe,
says the competition expert, who is familiar with the criticism that the
American supply is simply superior. But start with 30 percent of your
procurement budget in Europe. That already makes a huge
difference. (That seems like an easy target. Not only are way
more than 30 percent of the European Alternatives up to
a serviceable level by now, but unfortunately a lot of the legacy US
vendors are having either quality or compliance problems, or both. The
risks, technical and otherwise, keep going up.)
Greg Nojeim and Silvia Lorenzo Perez cover Trump’s
Sacking of PCLOB Members Threatens Data Privacy. Aside from its
importance in protecting civil liberties, the PCLOB cannot play its key
role in enforcing U.S. obligations under the EU-U.S. Data Privacy
Framework (DPF) while it lacks a quorum of members. The European
Commission would lose a key oversight tool for which it bargained, and
the adequacy decision that it issued to support the DPF could be struck
down under review at the Court of Justice of the European Union (CJEU),
which struck down two predecessor EU-U.S. data privacy arrangements, the
Safe Harbor Agreement and the Privacy Shield.
Sarah Perez covers Report:
Majority of US teens have lost trust in Big Tech. Common Sense
says that 64% of surveyed U.S. teens don’t trust Big Tech companies to
care about their mental health and well-being and 62% don’t think the
companies will protect their safety if it hurts profits. Over half of
surveyed U.S. teens (53%) also don’t think major tech companies make
ethical and responsible design decisions (think: the growing use of dark
patterns in user interface design meant to trick, confuse, and deceive).
A further 52% don’t think that Big Tech will keep their personal
information safe and 51% don’t think the companies are fair and
inclusive when considering the needs of different users. (What if
the Eurostack becomes the IT version of those European food brands that
sell well in other countries too?)
Mariana Olaizola Rosenblat covers How
Meta Turned Its Back on Human Rights for Tech Policy Press.
Zuckerberg announced that his company will no longer work to detect
abuses of its platforms other than high-severity violations of
content policy, such as those involving illicit drugs, terrorism, and
child sexual exploitation. The clear implication is that the company
will no longer strive to police its platform against other harmful
content, including hate speech and targeted harassment.
Sounds like a brand-unsafe environment. So is another rush of
advertiser boycott stories coming? Not
this time. Lara O’Reilly reports that brand safety has recently
become a political hot potato and been a flash point for some
influential, right-leaning figures. In uncertain times, marketing
decision-makers are keeping a low profile. Most companies aren’t really
set up to take on the open-ended security risk of coming out against
hate speech by users with friends in high places. According
to the Fraternal Order of Police, the January 6 pardons send a
dangerous message, and that message is being heard in marketing
departments. The CMOs who boycotted
last time are fully aware that stochastic
terrorism is a thing, and that rage stories about companies spread
quickly in Facebook groups and other extremist media. If an executive
makes the news for pulling ads from Meta, they would be putting
employees at risk from lone, deniable attacks. So instead of
announcing a high-profile boycott, marketers are more likely to follow
the example of Federal
employees and do the right thing, by the book, and quietly.
And it’s possible for advertisers to reduce support for Meta without
making a stink or drawing fire. Fortunately, Meta ads are hella
expensive, and results can be unrealistic
and unsustainable. Like all the Big Tech companies these days, Meta
is coping with a slowdown in innovation by tweaking the ad rules to
capture more revenue from existing services. As Jakob Nielsen pointed
out back in 2006, in Search
Engines as Leeches on the Web, ad platforms can even capture the
value created by others. A marketer doesn’t have to shout ¡No
Pasarán! or anything—just sharpen your best math pencil, quietly go
through the numbers, spot something that looks low-ROAS or fraudulent in
the Meta column, tweak the budget, repeat. If users can dial
down Meta, so can marketers. (Update: Richard Kirk writes, Brands
could be spending three times too much on social. You read that
right. Read the math, do the math.) And if Meta comes out with
something new and risky like the adfraud
in the browser thing, Privacy-Preserving Attribution, it’s easy to
use the fraud problem as the reason not to do it—you don’t have to stand
up and talk politics at work.
From the user side
It’s not that hard to take privacy measures that result in less money
for Big Tech. Even if you can’t quit Meta entirely, some basic tools and
settings can make an impact, especially if you use both a laptop and a
phone, not just a phone. With a few minutes of work, an individual in
the USA can, in effect, fine the surveillance
business about $50/month.
My list of effective privacy
tips is prioritized by how much I think they’ll cost the
surveillance business per minute spent. A privacy tips list for people
who don’t like doing privacy tips but also don’t like creepy oligarchs.
(As they say in the clickbait business, number 9 will shock you: if you
get your web browser info from TV and social media, you probably won’t
guess which browsers have built-in surveillance and/or fraud features.)
That page also has links to more intensive privacy advice for those who
want to get into it.
A lawyer question
As an Internet user, I realize I can’t get to Meta surveillance
neutral just with my own privacy tools and settings. For the foreseeable
future, companies are going to be doing server-to-server tracking of me
with Meta CAPI.
So in order to get to a rough equivalent of not being surveilled, I
need to balance out their actual surveillance by introducing some free
speech into the system. (And yes, numbers can be speech. O, the
Tables tell!) So what I’d like to do is write a surrogate script
(that can be swapped in by a browser extension in place of the real Meta
Pixel, like the
surrogate scripts uBlock Origin uses) to enable the user to send
something other than valid surveillance data. The user would configure
what message the script would send. The surrogate script would then
encode the message and pass it to Meta in place of the surveillance data
sent by the original Meta script. There is a possible research angle to
this, since I think that in
general, reducing ad personalization tends to help people buy better
products and services. An experiment would probably show that people
who mess with cross-context surveillance are happier with their
purchases than those who allow surveillance. Releasing a script like
that is the kind of thing I could catch hell for, legally, so I’m going
to wait to write it until I can find a place to host it and a lawyer to
represent me. Anyone?
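A minimal sketch of what such a surrogate could look like, in TypeScript. This is hypothetical: it assumes pages call the real Pixel’s global `fbq()` entry point, and it only records calls locally. The user-configured message encoding described above is left out.

```typescript
// Hypothetical surrogate for the Meta Pixel script: it exposes the same
// global fbq() entry point that pages call, but instead of transmitting
// surveillance data it just records the calls locally.
type FbqCall = unknown[];

const recordedCalls: FbqCall[] = [];

function fbq(...args: FbqCall): void {
  // A real surrogate would encode a user-configured message here;
  // this sketch only logs the call so the page sees a working fbq().
  recordedCalls.push(args);
}

// Pages sometimes check this property on the real script.
(fbq as any).loaded = true;
(globalThis as Record<string, unknown>).fbq = fbq;

// Example: what a page's tracking call does with the surrogate in place.
fbq("track", "PageView");
```

Like uBlock Origin’s existing surrogates, the point is that the page keeps working while the data flow changes.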
After Big Social.
Dan Phiffer covers the question of where to next. I am going into
this clear-eyed; I’m going to end up losing touch with a lot of people.
For many of my contacts, Meta controls the only connection we have. It’s
a real loss, withdrawing from communities that I’ve built up over the
years (or decades in the case of Facebook). But I’m also finding new
communities with different people on the networks I’m spending more time
in.
(update 2 Feb 2025) This site has some pages with inline
styles, so I can’t use that CSP line right now. This is because I use
the SingleFile
extension to make mirrored copies of pages, so I need to move those into
their own virtual host so I can go back to using the version without the
unsafe-inline.
(update 23 Feb 2025) The Pagefind site search requires ‘unsafe-eval’ in
CSP in order to support WASM. This should be
wasm-unsafe-eval in the future.
To do WASM and inline styles, the new value for the
Content-Security-Policy header is:
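A value along these lines would cover both exceptions (an illustrative sketch, not necessarily this site’s exact header, which may carry more directives):

```
Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-eval'
```

Once wasm-unsafe-eval is widely supported, it can replace 'unsafe-eval' to re-tighten script execution.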
Why
is Big Tech hellbent on making AI opt-out? by Richard Speed.
Rather than asking we’re going to shovel a load of AI services
into your apps that you never asked for, but our investors really need
you to use, is this OK? the assumption instead is that users will be
delighted to see their formerly pristine applications cluttered with AI
features. Customers, however, seem largely dissatisfied. (IMHO if
the EU is really going to throw down and do a software trade war with
the USA, this is the best time to switch to European
Alternatives.
Big-time proprietary software is breaking
compatibility while independent alternatives keep on going. People
lined up for Microsoft Windows 95 in 1995 and Apple iPhones in 2007, and
a trade war with the USA would have been a problem for software users
then, but now the EuroStack is a
thing. The China stack, too, as Prof. Yu Zhou points out: China
tech shrugged off Trump’s ‘trade war’ − there’s no reason it won’t do
the same with new tariffs. I updated generative ai
antimoats with some recent links. Even if the AI boom does catch on
among users, services that use AI are more likely to use predictable
independently-hosted models than to rely on Big Tech APIs that can be
EOLed or nerfed at any time, or just have the price increased.)
California vs
Texas Minimum Wage, 2013-2024 by Barry Ritholtz. [F]or seven
years–from January 2013 to March 2020–[California and Texas
quick-service restaurant] employment moved almost identically, the
correlation between them 0.994. During that seven year period, however,
TX had a flat $7.25/hr minimum wage while CA increased its minimum wage
by 50%, from $8/hr to $12. Related: Is a Big
Mac in Denmark Pricier Than in US?
What’s
happening on RedNote? A media scholar explains the app TikTok users are
fleeing to – and the cultural moment unfolding there Jianqing Chen
covers the Xiaohongshu boom in the USA. This spontaneous convergence
recalls the internet’s original dream of a global village. It’s a
glimmer of hope for connection and communication in a divided world.
(This is such authentic organic social that the Xiaohongshu ToS
hasn’t even been translated into English yet. And not only does nobody
read privacy policies (we knew that) but videos about reuniting with
your Chinese spy from TikTok are a whole trend on there. One
marketing company put up a page of Rules
& Community Guidelines translated into English but I haven’t
cross-checked it. “Practice the core socialist values.” and
“Promote scientific thinking and popularize scientific
knowledge.”)
Bob Sullivan reports Facebook
acknowledges it’s in a global fight to stop scams, and might not be
winning (The bigger global fight they’re in is a labor/management
one, and when moderator jobs get less remunerative or more stressful,
the users get stuck dealing with more crime.) Related: Meta
AI case lawyer quits after Mark Zuckerberg’s ‘Neo-Nazi madness’; Llama
depositions unsealed by Amy Castor and David Gerard. (The direct
mail/database/surveillance marketing business, get-rich-quick schemes,
and various right-wing political movements have been one big overlapping
scene in the USA for quite a while, at least back to the Direct
Mail and the Rise of the New Right days and possibly further. People
in the USA get targeted for a lot of political disinformation and fraud
(one scheme can be both),
so the Xiaohongshu mod team will be in for a shock as scammers, trolls,
and worse will follow the US users onto their platform.)
I’m still an Internet optimist despite…things…so I was hoping that
Friday’s Supreme
Court opinion in the TikTok case would have some useful information
about how to design online social networking in a way that does get
First Amendment protection, even if TikTok doesn’t. But no. Considered
as a bug report, the opinion doesn’t help much. We basically got (1)
TikTok collects lots of personal info (2) Congress gets to decide if and
how it’s a national security problem to make personal info available to
a foreign adversary, and so TikTok is banned. But everyone else
doing social software, including collaboration software, is going to
have a lot to find out for themselves.
The Supreme Court pretty much ignores TikTok’s dreaded For You Page
algorithm and focuses on the privacy problem. So we don’t know if some
future ban of some hypothetical future app that somehow fixed its data
collection issues would hold up in court just based on how it does
content recommendations. (Regulating recommendation algorithms is a big
issue that I’m not surprised the Court couldn’t agree on in the short
time they had for this case.) We also get the following, on p. 9—TikTok
got the benefit of the doubt and received some First Amendment
consideration that future apps might or might not get.
This Court has not articulated a clear framework for determining
whether a regulation of non-expressive activity that disproportionately
burdens those engaged in expressive activity triggers heightened review.
We need not do so here. We assume without deciding that the challenged
provisions fall within this category and are subject to First Amendment
scrutiny.
Page 11 should be good news for anybody drafting a privacy law
anyway. Regulating data collection is content neutral for First
Amendment purposes—which should be common sense.
The Government also supports the challenged provisions with a
content-neutral justification: preventing China from collecting vast
amounts of sensitive data from 170 million U. S. TikTok users. That
rationale is decidedly content agnostic. It neither references the
content of speech on TikTok nor reflects disagreement with the message
such speech conveys….Because the data collection justification reflects
a purpos[e] unrelated to the content of expression, it is
content neutral.
But so far the to-do list for future apps looks manageable.
- Don’t surveil US users for a foreign adversary.
- Comply with whatever future restrictions on recommendation
algorithms turn out to hold up in court. (Disclosure of rules or source
code? Allow users to switch to chronological? Allow client-side or
peer-to-peer filtering and scoring? Lots of options, but possible to get
out ahead of.)
Not so fast. Here’s the hard part. According to the Court the problem
is not just the info that the app collects automatically and
surreptitiously, or the user actions it records, but also the info that
users send by some deliberate action. On page 14:
If, for example, a user allows TikTok access to the user’s phone
contact list to connect with others on the platform, TikTok can access
any data stored in the user’s contact list, including names,
contact information, contact photos, job titles, and notes. Access to
such detailed information about U. S. users, the Government worries, may
enable China to track the locations of Federal employees and
contractors, build dossiers of personal information for blackmail, and
conduct corporate espionage.
and in Justice Gorsuch’s concurrence,
According to the Federal Bureau of Investigation, TikTok can access
any data stored in a consenting user’s contact
list—including names, photos, and other personal information about
unconsenting third parties. Ibid. (emphasis added). And because the
record shows that the People’s Republic of China (PRC) can require
TikTok’s parent company to cooperate with [its] efforts to obtain
personal data, there is little to stop all that information from
ending up in the hands of a designated foreign adversary.
On the one hand, yes, sharing contacts does transfer a lot of
information about people in the USA to TikTok. But sharing a contact
list with an app can work a lot of different ways. It can be:

- covert surveillance (although mobile platforms generally do their
best to prevent this)
- data sharing that you get tricked into
- deliberate, more like choosing to email a copy of the company
directory as an attachment
If it’s really a problem to enable a user to choose to share contact
info, then that makes running collaboration software like GitHub
in China a problem from the USA side. (Git repositories are full of
metadata about who works on what, with who. And that information is
processed by other users, by the platform itself, and by third-party
tools.) Other content creation tools also share the kinds of info on
skills and work relationships that would be exactly what a foreign
adversary murder robot needs to prioritize targets. But the user, not
some surveillance software, generally puts that info there. If
intentional contact sharing by users is part of the reason that the USA
can ban TikTok, what does that mean for other kinds of user to user
communication?
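As a concrete example of the collaboration metadata any Git repository carries, here is a sketch using a throwaway repo (the contributor name and email are made up; in a real checkout you would just run the last two commands as-is):

```shell
# Build a throwaway repo so the example is self-contained.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name="Alice Example" -c user.email="alice@example.com" \
    commit -q --allow-empty -m "initial commit"

# Who works on this project, and how much (commit count, name, email):
git shortlog -sne HEAD

# When they work, and on what:
git log HEAD --format='%ad %an <%ae>: %s' --date=short
```

Every clone of a repository carries this history, which is exactly the who-works-on-what-with-whom information at issue.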
Kleptomaniac princesses
There’s a great story I read when I was a kid that I wish I had the
citation for. It might be fictional, but I’m going to summarize it
anyway because it’s happening again.
Once upon a time there was a country that the UK really, really
wanted to maintain good diplomatic relations with. The country was in a
critical strategic location and had some kind of natural resources or
something, I don’t remember the details. The problem, though, was that
the country was a monarchy, and one of the princesses loved to visit
London and shoplift. And she was really bad at it. So diplomats had to
go around to the stores in advance to tell the manager what’s going on,
convince the store to let her steal stuff, and promise to settle up
afterwards.
Today, the companies that run the surveillance apps are a lot like
that princess. (Techbros don’t have masculine energy,
they have kleptomaniac princess energy.) If one country really
needs to maintain good relations with another, they’ll allow that
country’s surveillance apps to get away with privacy shenanigans. If
relations get chillier, then normal law enforcement applies. At least
for now, though, we don’t know what the normal laws here will look like,
and the Supreme Court didn’t provide many hints yesterday.
In
TikTok v. Garland, Supreme Court Sends Good Vibes for Privacy Laws, But
Congress’s Targeting of TikTok Alone Won’t Do Much to Protect
Privacy by Tom McBrien, EPIC Counsel. The Court’s opinion was
also a good sign for privacy advocates because it made clear that
regulating data practices is an important and content-neutral regulatory
intervention. Tech companies and their allies have long misinterpreted a
Supreme Court case called Sorrell v. IMS Health to mean
that all privacy laws are presumptively unconstitutional under the First
Amendment because information is speech. But the TikTok Court
explained that passing a law to protect privacy is decidedly content
agnostic because it neither references the content of speech…nor
reflects disagreement with the message such speech conveys. In fact,
the Court found the TikTok law constitutional specifically on the
grounds that it was passed to regulate privacy and emphasized how
important the government interest is in protecting Americans’
privacy.
Bonus links
TikTok,
AliExpress, SHEIN & Co surrender Europeans’ data to authoritarian
China. Today, noyb has filed GDPR complaints against TikTok,
AliExpress, SHEIN, Temu, WeChat and Xiaomi for unlawful data transfers
to China….As none of the companies responded adequately to the
complainants’ access requests, we have to assume that this includes
China. But EU law is clear: data transfers outside the EU are only
allowed if the destination country doesn’t undermine the protection of
data.
Total
information collapse by Carole Cadwalladr It was the open society
that enabled Zuckerberg to build his company, that educated his
engineers and created a modern scientific country that largely obeyed
the rules-based order. But that’s over. And, this week is a curtain
raiser for how fast everything will change. Zuckerberg took a smashing
ball this week to eight years’ worth of “trust and safety” work that has
gone into trying to make social media a place fit for humans. That’s
undone in a single stroke.
Baltic
Leadership in Brussels: What the New High Representative Kaja Kallas
Means for Tech Policy | TechPolicy.Press by Sophie L. Vériter.
[O]nline platforms and their users are affected by EU foreign policy
through counter-disinformation regulations aimed at addressing foreign
threats of interference and manipulation. Indeed, technology is
increasingly considered a matter of security in the EU, which means that
the HRVP may well have a significant impact on the digital space within
and beyond the EU.
The
Ministry of Empowerment by danah boyd. This isn’t about
shareholder value. It’s about a kayfabe war between tech demagogues
vying to be the most powerful boy in the room.
This site is written by me personally
except for anything that is clearly marked up and cited as a direct
quotation. If you see anything on here that is not cited appropriately,
please contact me.
Generative AI output appears on this site only if I think it really
helps make a point and only if I believe that my use of a similar amount
and kind of material from a relevant work in the training set would be
fair use.
For example, I quote a sentence of generative AI output in LLMs and
reputation management. I believe that I would have been within my
fair use rights to use the same amount of text from a copyrighted
history book or article.
In LLMs and the web
advertising business, my point was not only that the Big Tech
companies are crooked, but that it’s so obvious. A widely available LLM
can easily point out that a site running Big Tech ads—for real brands—is
full of ripped-off content. So I did include a short question and answer
session with ChatGPT. It’s really getting old that big companies are
constantly being shocked to discover infringement and other crimes when
their own technology could have spotted it.
Usually when I mention AI or LLMs on here I don’t include any
generated content.
Here’s my contender for Internet tip of the year. It’s going to take
under a minute, and will not just help you buy better stuff, but also
make you happier in general. Ready? Here it is, step by step.
Log in to your Google account if you’re not logged in already.
(If you have a Gmail or Google Drive tab open in the browser, you’re
logged in.)
Find the Personalized ads control. It looks something like
this:
[Screenshot: the “Personalized ads” control, switched on]
Turn it off.
[Screenshot: the “Personalized ads” control, switched off]
That’s it. Unless you have another Google account. If you do have
multiple Google accounts (like home, school, and work accounts) do this
for each one.
This will affect the ads you get on all the Google sites and apps,
including Google Search and YouTube, along with the Google ads on other
sites. Google is probably going to show you some message to try to
discourage you from doing this. From what I can tell from the outside,
it looks like turning off personalized ads will cost Google money. Last
time I checked, I got the following message.
Ads may seem less relevant When your info isn’t used
for ads, you may see fewer ads for products and brands that interest
you. Non-personalized ads on Google are shown to you according to
factors like the time of day, device type, your current search or the
website you’re visiting, or your current location (based on your IP
address or device permissions).
But what they don’t say is anything about how personalized ads will
help you buy better products and services. And that’s because—and I’m
going out on a limb here data-wise, but a pretty short and solid limb,
and I’ll explain why—they just don’t. Choosing to turn off personalized
ads somehow makes you a more satisfied shopper and better off.
How does this work?
I still don’t know exactly how this tip works, but so far there
have been a few theories.
1: lower fraud risk. It’s possible that
de-personalizing the ads reduces the number of scam advertisers who can
successfully reach you. Bian et al., in Consumer Surveillance and
Financial Fraud, show that Apple App Tracking Transparency, which
reduces the ability of apps to personalize ads, tended to reduce fraud
complaints to the FTC.
We estimate that the reduction in tracking reduces money lost in all
complaints by 4.7% and money lost reported in internet and data security
complaints by 40.1%.
2: lower quality vendors.
targeted ads are more likely to be associated with lower quality
vendors, and higher prices for identical products, compared to competing
alternatives found in organic search results
3: brand building.
Between 2019 and 2021, brands that focused on brand equity saw a 72%
increase in value, compared to just 20% for brands that relied primarily
on performance tactics. Ignoring brand-building not only weakens your
baseline sales but forces you to spend more and more on performance
marketing just to keep your head above water.
Brands that are over-focused on surveillance advertising might be
forced to under-invest in product improvements.
4. limited algorithmic and personalized pricing.
Personalized ads might be set up to offer the same product at higher
prices to some people. The
FTC was investigating, but from the research point of view,
personalized pricing is really hard to tell apart from dynamic pricing.
Even if you get volunteers to report prices, some might be getting a
higher price because stock is running low, not because of who the
individual is. So it’s hard to show how much impact this has, but hard
to rule it out too.
5. it’s just a step on the journey. Another
possibility is that de-personalizing the ads is a gateway to blocking
ads entirely. What if, without personalization, the ads get gross or
annoying enough that people tend to move up to an ad blocker? And,
according to Lin et al. in The
Welfare Effects of Ad Blocking,
[P]articipants that were asked to install an ad-blocker become less
likely to regret recent purchases, while participants that were asked to
uninstall their ad-blocker report lower levels of satisfaction with
their recent purchases.
Maybe you don’t actually make better buying decisions while ads are
on but personalization is off—but it’s a step toward full ad blocking
where you do get better stuff and more happiness.
How do I know this works?
I’m confident that this tip works because if turning ad
personalization off didn’t help you, Google would have said so a while
ago. Remember the 52%
paper about third-party cookies? Google made a big deal out of
researching the ad revenue impact of turning cookie tracking on or off.
And this ad personalization setting also has a revenue impact for
Google. According to documents
from one of Google’s Federal cases, keeping the number of users with
ad personalization off low is a goal for Google—they make more money
from you if you have personalization on, so they have a big incentive to
try to convince you that personalization is a win-win. So why so quiet?
The absence of a PDF about this is just as informative as the actual PDF
would be.
And it’s not just Google. Research showing user benefits from
personalized ads would be a fairly easy project not just for Google, but
for any company that can both check a privacy setting and measure some
kind of shopping outcome. Almost as long as Internet privacy tools have
been a thing, so has advice from
Internet Thought Leaders telling us they’re not a good idea. But for
a data-driven industry, they’re bringing surprisingly little
data—especially considering that for many companies it’s data they
already have and would only need to do stats on, make graphs, and write
(or have an LLM write) the abstract and body copy.
Almost any company with a mobile app could do research to show any
benefits from ad personalization, too. Are the customers who use Apple
iOS and turn
off tracking more or less satisfied with their orders? Do banks get
more fraud reports from app users with tracking turned on or off? It
would be straightforward for a lot of companies to show that turning off
personalization or turning on some privacy setting makes you a less
happy customer—if it did.
The closest I have found so far is Balancing
User Privacy and Personalization by Malika Korganbekova and Cole
Zuber. This study simulated the effects of a privacy feature by
truncating browsing history for some Wayfair shoppers, and found that
people who were assigned to the personalized group and chose a product
personalized to them were 10% less likely to return it than people in
the non-personalized group. But that’s about a bunch of vendors of
similar products that were all qualified by the same online shopping
platform, not about the mix of honest and dishonest personalized ads
that people get in total. So go back and do the tip if you didn’t
already, enjoy your improved shopping experience, and be happy.
More: effective privacy
tips
Related
You can’t totally turn off ad personalization on Meta sites like
Facebook, but there are settings to limit the flow of targeting data in
or out. See Mad
at Meta? Don’t Let Them Collect and Monetize Your Personal Data by
Lena Cohen at the Electronic Frontier Foundation.
B L O C K in
the U S A Ad blocking is trending up, and for the first time the
people surveyed gave their number one reason as privacy, not annoyance
or performance.
The $16
hack to blocking ads on your devices for life (I don’t know about
the product or the offer, just interesting to see it on a site with ads.
Maybe the affiliate revenue is a much bigger deal than the programmatic
ad revenue?)
personalization
risks In practice, most of the privacy risks related to advertising
are the result not of identifying individuals, but of treating different
people in the same context differently.
Bonus links
Samuel Bendett and David Kirichenko cover Battlefield
Drones and the Accelerating Autonomous Arms Race in Ukraine.
Ukrainian officials started to describe their country as a war lab
for the future—highlighting for allies and partners that, because
these technologies will have a significant impact on warfare going
forward, the ongoing combat in Ukraine offers the best environment for
continuous testing, evaluation, and refinement of [autonomous] systems.
Many companies across Europe and the United States have tested their
drones and other systems in Ukraine. At this point in the conflict,
these companies are striving to gain battle-tested in Ukraine
credentials for their products.
Aram Zucker-Scharff writes, in The
bounty hunter tendency, the future of privacy, and ad tech’s new profit
frontier., The new generation of laws that are authorizing
citizens to become bounty hunters are implicitly tied to the use of
surveillance technology. They encourage the use of citizen vs citizen
surveillance and create a dangerous environment that worsens the
information imbalance between wealthy citizens and everyone else.
(Is this a good argument against private right of action in privacy
laws? It’s likely that troll lawyers will use existing wiretapping laws
against legit news sites, which tend to have long and vulnerable lists
of adtech partners.)
Scharon Harding covers TVs
at CES 2025. On the one hand, TVs are adding far-field
microphones which, um, yikes. But on the other hand, remember how
the Microsoft Windows business and gaming market helped drive down the
costs of Linux-capable workstation-class hardware? What is the big
innovation that developers, designers, and architects will make out of
big, inexpensive screens subsidized by the surveillance business?
But just in case, since there’s a lot of malarkey in the online
advertising business, I’m putting up this file to let the advertisers
know that if someone sold you an ad and claimed it ran on here, you got
burned.
That’s the ads.txt file for this site. The
format is defined in a
specification from the IAB Tech Lab (PDF). The important part is the
last line. The placeholder is how you tell the tools that are supposed
to be checking this stuff that you don’t have ads.
Rachel explains Web page
annoyances that I don’t inflict on you here in a handy list of web
antipatterns. Removing more of these could be a good start to making a
less frustrating, more accessible, higher performing site.
More useful things to check for security and performance: Securing
your static website with HTTP response headers by Matt Hobbs. I have
some of these set already but it’s helpful to have them all in one
place. A browser can do a lot of stuff that a blog like this one won’t
use, so safer to tell it not to.
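A quick way to audit this is to diff a site's response headers against a checklist. The set of headers below is my own hand-picked selection of widely recommended ones, not Matt Hobbs's exact list:

```python
# Sketch: report which common security headers a response is missing.
# RECOMMENDED is an illustrative selection, not a complete checklist.

RECOMMENDED = {
    "strict-transport-security",  # force HTTPS on return visits
    "content-security-policy",    # restrict where scripts etc. may load from
    "x-content-type-options",     # disable MIME sniffing
    "referrer-policy",            # limit referrer leakage to other sites
    "permissions-policy",         # turn off browser features the site won't use
}

def missing_security_headers(headers):
    """headers: dict of header name -> value; names matched case-insensitively."""
    present = {name.lower() for name in headers}
    return sorted(RECOMMENDED - present)

blog_headers = {
    "Content-Type": "text/html; charset=utf-8",
    "Strict-Transport-Security": "max-age=63072000",
    "X-Content-Type-Options": "nosniff",
}
```

Point it at the headers from a `curl -I` of your own site and fix whatever comes back.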
Chris Coyier suggests that a list of Slash Pages
could be a good list of blogging ideas. (That is a good idea. I made a
list at /slashes and will
fill it in. Ads.txt is technically not a page I guess since it’s
just text but I’m counting it.)
Elie Berreby follows up on his search engine that’s forgotten how
to search post with a long look at Search
engines think I plagiarized my own content! My Hacker News Case
Study. One of many parts that interests me about this whole issue is
the problem of how much more money certain companies can make by
returning a page on a sketchy infringing
site than on the original. Typically an original content site can get a
better ad deal, while an infringing site has to settle for
scraps and leave more of the ad revenue for Google.
Simon Willison says, I
still don’t think companies serve you ads based on spying through your
microphone. For the accusation to be true, Apple would need to be
recording those wake word audio snippets and transmitting them back to
their servers for additional processing (likely true), but then they
would need to be feeding those snippets in almost real time into a
system which forwards them onto advertising partners who then feed that
information into targeting networks such that next time you view an ad
on your phone the information is available to help select the relevant
ad. That is so far fetched. He’s totally right if you define your
microphone as the microphone on your cell phone, which has limited
battery energy and bandwidth. But most people own microphones, plural,
and a smart TV or kitchen appliance is typically plugged in so the juice
to process ambient audio for keywords is there.
Dean W. Ball covers the Texas Responsible AI Governance Act in Texas Plows
Ahead. (This bill doesn’t have a national defense exception the way
the EU’s AI Act does, which is strange.)
I’m looking forward to the new Charles Stross novel that past me
thoughtfully pre-ordered from Books Inc. for near future me. In A
Conventional Boy a man was sentenced to prison for playing Dungeons
and Dragons in the 1980s, and many years later he’s putting his escape
plan into action…
If this year has revealed anything about the tech billionaires it is
that they have a very specific philosophy other than just growth and
that philosophy is malicious…I don’t think we can really take on the
obstacle of, let’s call it more accurately, the scam economy without
acknowledging this is all part of the design. They think they are richer
than you and therefore you must be stupid and because you are stupid you
should be controlled…
Read the whole thing. A lot of tech big shots want to play the
rest of us like a real-time strategy game. (Ever notice that the list of
skills in the we don’t hire US job applicants because the culture
doesn’t value the following skills tweets is the same as the list of
skills in the our AI has achieved human-level performance in the
following skills tweets?) I predicted that
low-trust society will trend in 2025, and I agree with Aram that
a big part of that is company decision-makers deliberately making
decisions that make it harder to trust others. I’m working on a list of known good
companies. (Work in progress, please share yours if you have
one.)
And yes, my link collecting tool has queued up a bunch of links about
the shift towards a lower-trust society along with ways that people are
adapting to it or trying to shift things back.
Why is it so hard to buy
things that work well? (imho Mark Ritson still explained
it best—companies over-emphasize the promotion P of
marketing, trying to find people slightly more likely to buy the product
as is, over the product refinements that would tend to get more
buyers. George Tannenbaum on destroying brand trust with too much of one
P, too little of another: Ad Aged:
Leave Me Alone.)
(looks like I had enough notes for an upcoming event to do A-Z this
year…)
Ad blocking will get bigger and more widely reported
on. Besides the usual suspects, the current wave of ad blocking is also
partly driven by professional, respectable security vendors. Malwarebytes Labs
positions their ad blocker as a security tool, and certain
well-known companies are happy to help them with their content marketing
by running malvertising. (example: Malicious
ad distributes SocGholish malware to Kaiser Permanente employees) Silent
Push is another security vendor helping to make the ads/malware
connection. And, according to research by Lin
et al., users who installed an ad blocker reported fewer regrets
with purchases and an improvement in subjective well-being. Some
of those users who installed an ad blocker reluctantly because of
security concerns will be hard to convince to turn it off even if the
malvertising situation improves.
Bullshit is going to be everywhere, and more of it.
In 2025 it won’t be enough to just ignore the bullshit itself. People
will also have to ignore what you might think of as a bullshit Smurf attack,
where large amounts of content end up amplifying a small amount of
bullshit. Some politician is going to tweet something about how
these shiftless guys today need to pull up their pants higher,
and then a bunch of mainstream media reporters are going to turn in
their diligently researched 2000-word think pieces about the effect of
higher pants on the men’s apparel market and human reproductive system.
And by the time the stories run, the politician has totally forgotten
about the pants thing and is bullshitting about something else. The
ability to ignore the whole cycle will be key. So people’s content
discovery habits are going to change, we just don’t know how.
Chrome: Google will manage to hang on to their
browser, as prospective buyers don’t see the value in it. Personally I
think there are two logical buyers. The Trade Desk could rip out the
janky Privacy Sandbox stuff and put in OpenPass and UID2. Not all
users would leave those turned on, but enough would to make TTD the
dominant source for user identifiers in web ads. Or a big bank could buy
Chrome as a fraud protection play and run it to maximize security, not
just ad revenue. At the scale of the largest banks, protecting existing
customers from Internet fraud would save the bank enough money to pay
for browser development. Payment platform integration and built-in
financial services upsell would be wins on top of that.
Both possible Chrome buyers would be better off keeping open-source
Chromium open. Google would keep contributing code even if they didn’t
control the browser 100%. They would feel the need to hire or sponsor
people to participate on a legit open-source basis to support better
interoperability with Google services. They wouldn’t be able to get the
anticompetitive shenanigans back in, but the legit work would
continue—so the buyer’s development budget would be lower than Google’s,
long term. But that’s not going to happen. So far, decision makers are
convinced that the only way to make money with the browser is by tying
it to Google services, so they’re going to pass up this opportunity.
Development tools will keep getting more AI
in them. It will be easier to test new AI stuff in the IDE than to not
test it. But a flood of plausible-looking new code that doesn’t
necessarily work in all cases or reflect the unwritten assumptions of
the project means a lot more demand for testing and documentation. The
difference between a software project that spends 2025 doing
self-congratulatory AI productivity win blog posts and one that has an
AI code catastrophe is going to be how much test coverage they started
with or were able to add quickly.
Environmental issues: we’re in for more fires,
floods, and storms. Pretty much everybody knows why, but
some people will only admit it when they have to. A lot of
homeowners won’t be able to renew their insurance, so will end up
selling to investors who are willing to demolish the house and hold the
land for eventual resale. More former house occupants will pivot to
#vanlife, and 24-hour health clubs will sell more memberships to people
who mainly need the showers.
Firefox will keep muddling through. There will be
more Internet drama over their ill-advised adfraud in
the browser thing, but the core software will be able to keep going
and even pick up a few users on desktop because of the ad blocking
trend. The search ad deal going away won’t have much effect—Google pays
Firefox to exist and limit the amount of antitrust trouble it’s in, not
for some insignificant number of search ad clicks. If they can’t pay
Firefox for default search engine placement, they’ll find some other
excuse to send them enough cash to keep going. Maybe not as high
on the hog as they have been used to, but enough to keep the browser
usable.
Homeschooling will increase faster because of safety
concerns, but parents will feel uncomfortable about social isolation and
seek out group activities such as sports, crafts, parent-led classes,
and group playdates. Homeschooling will continue to be a lifestyle
niche that’s relatively easy to reach with good influencer and content
creator connections, but not well-covered by the mainstream media.
Immigration into the USA will continue despite
high-profile deportations and associated human rights violations. But
whether or not a particular person is going to be able to make it in, or
be able to stay, is going to be a lot less predictable. If you know who
the person is who might be affected by immigration policy changes, you
might be able to plan around it, but what’s more likely from the
business decision-making point of view is the person affected is an
employee of some supplier of your supplier, or a family member, and you
can’t predict what happens when their life gets disrupted. Any company
running in lean or just-in-time mode, and relying on low disruption and
high predictability, will be at the biggest disadvantage. Big
Tech companies will try to buy their way out of the shitstorm, but
heavy reliance on networks of supplier companies will mean they’re still
affected in hard-to-predict ways.
Journalism will continue to go non-profit and journalist-owned.
The bad news is there’s not enough money in journalism, now or in
the near future, to sustain too many levels of managers and
investors, and the good news is there’s enough money in it to keep a
nonprofit or lifestyle company going. (Kind of like tech conferences.
LinuxWorld had to support a big company, so wasn’t sustainable, but Southern California Linux
Expo, a flatter organization, is.)
Killfile is the old Usenet word for a blocklist, and
I already had something for B. The shared lists that are possible with
the Fediverse and Bluesky are too useful not to escape into other
categories of software. I don’t know which ones yet, but a shared filter
list to help fix the
search experience is the kind of thing we’re likely to see. People’s
content discovery and shopping habits will have to change, we just don’t
know how.
Low-trust society will trend. It’s possible for a
country to move from high trust to low, or the other way around, as the
Pew Research Center covered in 2008. The broligarchy-dominated
political and business environment in the USA, along with the booms in
growth hacking
and AI slop, will make things a lot easier for corporate crime and
scam
culture. So people’s content discovery and shopping habits will have
to change, we just don’t know how. Multi-national companies that already
operate in middle-income low-trust countries will have some advantages
in figuring out the new situation, if they can bring the right people in
from there to here.
Military affairs, revolution in: If you think AI
hype at the office in the USA is intense, just watch the AI hype in
Europe about how advanced drones and other AI-enabled defense projects
can protect countries from being occupied by an evil dictator without
having to restore or expand conscription. Surveillance advertisers and
growth hackers in the USA are constantly complaining about restrictions
on AI in Europe—but the AI Act over there has an exception for the
defense industry. In 2025 it will be clear that the USA is
over-investing in bullshit AI and under-investing in defense AI, but it
won’t be clear what to do about it. (bonus link: The
Next Arsenal of Democracy | City Journal)
Neighborhood organizations: As Molly White
recommended in November, more people will be looking for community
and volunteer opportunities. The choice to become a joiner and
not just a consumer
in unpredictable times is understandable and a good idea in general.
This trend could enter a positive feedback loop with non-profit and
journalist-owned local news, as news sites try more community
connections like Cleveland
Documenters.
Office, return to: Companies that are doing more
crime will tend to do more RTO, because signaling
loyalty is more important than productivity or retaining people with
desired skills. Companies that continue avoiding doing crimes, even in
what’s going to be a crime-friendly time in the USA, will tend to
continue cutting back on office space. The fun part is that the company
can tell the employee that work from home privileges are a
benefit, and not free office space for the employer. Win-win! So the
content niche for how-tos on maximizing home (and van) offices will
grow.
Prediction
markets will benefit from 2024’s 15 minutes of fame to
catch on for some niche corporate projects, and public prediction market
prices will be quoted in more news stories.
Quality, flight to (not): If I were going to be
unrealistically optimistic here, I’d say that the only way for
advertisers to deal with the flood of AI
slop sites and fake
AI users is to go into full Check
My Ads mode and just advertise on known legit sites made by and for
people. But right now the habits and skills around race-to-the-bottom ad
placements are too strong, so there won’t be much change on the
advertiser side in 2025. A few forward-thinking advertisers will get
good results from quality buying for specific campaigns, but that’s
about it.
Research on user behavior will get a lot more
important. The AI
crapflood and resulting search quality crisis mean that (say the
line, Bart) people’s content discovery and shopping habits will have to
change, we just don’t know how. Companies that build user research
capacity, especially in studying privacy users and the gaps they leave
in the marketing data, will have an advantage.
State privacy law season will be spicy again. A few
states will get big comprehensive privacy bills through the process
again, but the laws to watch will be specific ones on health, protecting
teens from the algorithm, social media censorship, and
other areas. More states will get laws like Daniel’s
Law. (We need a Daniel’s Law for military personnel, their families,
and defense manufacturing workers, but we’re probably going to see some
states do them for health insurance company employees instead.) Update
1 Feb 2025: Compliance issues that came up for AADC will have to get
another look.
Troll lawyer letters alleging violations of
the California Invasion of Privacy Act (CIPA) and similar laws will
increase. Operators of small sites can incur a lot of legal risk now
just by running a Big Tech tracking pixel. But Big Tech will continue to
ignore the situation, and put all the risks on the small site. (kind of
like how Amazon.com uses delivery partner companies to take the
legal risks of employing algorithmically
micromanaged, overstressed delivery drivers.)
Unemployment and underemployment will trend up, not
down, in 2025. Yes, there will be more political pressure on companies
here to hire and manufacture locally, but actual job applicants aren’t
interchangeable worker units in an RTS game—there’s a lot of mismatch
between the qualities that job seekers will have and the qualities that
companies will be looking for, which will mean a lot of jobs going
unfilled. And employers tend to hire fewer people in unpredictable times
anyway.
Virginia’s weak
privacy law will continue to be ignored by most companies that
process personal data. Companies will treat all the privacy law states
as Privacyland, USA, which basically means California.
Why is my cloud computing bill so high? will be a
common question. But the biggest item on the bill will be the AI that
[employee redacted] is secretly in love with, so you’ll never find
it.
X-rated sites will face
an unfriendly regulatory environment in many states, so will help
drive mass-market adoption of VPNs, privacy technologies,
cryptocurrencies, and fintech. The two big results will be that first,
after people have done all the work to go underground to get their
favorite pr0n site, they might as well use their perceived invisibility
to get infringing copies of other content too. And second, a lot of
people will get scammed by fake VPNs and dishonest payment services.
Youth privacy laws will drive more investment in
better content for kids. (This is an exception to the Q prediction.)
We’re getting a bunch of laws that affect surveillance advertising to
people under 18. As Tobias Kircher and Jens Foerderer reported, in Ban
Targeted Advertising? An Empirical Investigation of the Consequences for
App Development, a privacy policy change tended to drive a lot of
Android apps for kids out of the Google Play Store, but the top 10
percent of apps did better. If you have ever visited an actual app
store, it’s clear that Sturgeon’s law applies, and it’s likely that the
top 10 percent of apps account for almost all of the actual usage. All
the kids privacy laws and regs will make youth-directed content a less
lucrative play for makers of crap and spew who can make anything,
leaving more of the revenue for dedicated and high-quality content
creators.
ZFS
will catch on in more households, as early adopters replace complicated
streaming services (and their frequent price increases and disappearing
content) with storage-heavy media PCs.
Prediction markets—platforms where users buy and sell shares based on
the probability of future events—are poised to disrupt the media
landscape in 2025, transforming not only how news is shared but how it
is valued and consumed.
Prediction markets did get some time in the
spotlight this year. But the reasons for the long, ongoing
prediction market winter are bigger than just prediction markets not
being famous. Prediction markets have been around for a long time, and
have stubbornly failed to go mainstream.
The first prediction market to get famous was the University of
Iowa’s Iowa Electronic
Markets which launched in the late 1980s and has been covered in the
Wall Street Journal since at least the mid-1990s. They
originally used pre-web software and you had to mail in a paper check
(update 4 Jan 2024: paper checks
are still the only way to fund your account on there). But IEM
wasn’t the first. Prof. Robin Hanson, in Hail
Jeffrey Wernick, writes about an early prediction market
entrepreneur who started his first one in 1981. (A secretary operated
the market manually, with orders coming in by fax.) Prediction markets
were more famous than Linux or the World Wide Web before Linux or the
World Wide Web existed. Prediction markets have been around since before stop
trying to make fetch happen happened.
So the safe prediction would be that 2025 isn’t going to be the year
of prediction markets either. But just like the year of Linux on the
desktop never happened because the years of Linux in your pocket and
in the data center did, the prediction markets that do catch on are
going to be different from the markets that prediction market nerds are
used to today. Some trends to watch are:
Payment platforms: Lorenz points out, Prediction
markets are currently in legal limbo, but I’d bet against a ban,
especially given the new administration. Right now in the USA there
is a lot of VC money tied up in fintech, and a lot of political pressure
from well-connected people to deregulate everything having to do with
money. For most people the biggest result will be more scams and more
hassles dealing with transactions that are legal and mostly trustworthy
today but that will get enshittified in the new regulatory environment.
But all those money-ish services will give prediction markets a lot more
options for getting money in and out in a way that enables more
adoption.
Adding hedging and incentivization: The prediction
markets that succeed probably won’t be pure, ideal prediction markets,
but will add on some extra market design to attract and retain traders.
Nick Whitaker and J. Zachary Mazlish, in Why
prediction markets aren’t popular, write that so far, prediction
markets don’t appeal to the kinds of people who play other kinds of
markets. People enter markets for three reasons. Savers
are trying to build wealth, Gamblers play for thrills,
and Sharps enter to profit from less well-informed
traders. None of the three categories is well-served by existing
prediction markets, because a prediction market is zero-sum, so not a
way to build wealth long-term, and it’s too slow-moving and not very
thrilling compared to other kinds of gambling. And the sharps need a
flow of less well informed traders to profit from, but prediction
markets don’t have a good way to draw non-sharps into the market.
Whitaker and Mazlish do suggest hedging as a way to get more market
participants, but say
We suspect there is simply very little demand for hedging events like
whether a certain law gets passed; there is only demand for hedging the
market outcomes those events affect, like what price the S&P 500
ends the month at. Hedging market outcomes already implicitly hedges for
not just one event but all the events that could impact financial
outcomes.
That’s probably true for hedging in a large public prediction market.
An existing oil futures market is more generally useful to more traders
than a prediction market on all the events that might affect the price
of oil. And certain companies’ stocks today are largely prediction
markets on future AI breakthroughs and the future legal status of
various corporate crimes. But I suspect that it’s different for a
private market for events within a company or organization. For example,
a market with sales forecasting contracts on individual large customers
could provide much more actionable numbers to management than just
trading on predicted total sales.
You could, in effect, pay for a prediction market’s information
output by subsidizing it, and Whitaker and Mazlish suggest this. A
company that runs an internal prediction market can dump money in and
get info out. Like paying for an analyst or consulting firm, but in a
distributed way where the sources of expertise are self-selecting by
making trade/no trade decisions based on what they know or don’t know.
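One standard mechanism for dumping money in and getting information out is Hanson's logarithmic market scoring rule (LMSR), where the liquidity parameter `b` caps the operator's worst-case loss, i.e. the subsidy, at `b * ln(number of outcomes)`. A minimal sketch:

```python
import math

# Sketch of Hanson's logarithmic market scoring rule (LMSR), the usual
# way an operator subsidizes a prediction market. The parameter b sets
# liquidity and bounds the operator's worst-case loss at b * ln(n).

def lmsr_cost(quantities, b):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b):
    """Instantaneous price of each outcome. Prices sum to 1 and can be
    read as the market's current probability estimates."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(quantities, outcome, shares, b):
    """What a trader pays the market maker to buy `shares` of one outcome."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

With `b = 100` and a two-outcome contract, the company's subsidy is at most `100 * ln(2)`, about 69 units, in exchange for a continuously updated probability estimate.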
But it’s also possible, usually on the smaller side, for a prediction
market to become an incentivization market. To me, the difference is
that in an incentivization market, a person with ability to affect the
results holds a large enough investment in the market that it influences
them to do so. The difference is blurry and the same market can be a
prediction market for some traders and an incentivization market for
others. But by designing incentives for action in, a market operator can
make it drift away from a pure prediction market design to one that
tends to produce an outcome. related: The private
provision of public goods via dominant assurance contracts by
Alexander Tabarrok
If you don’t know what’s in the box, you can’t secure it, so it is
your responsibility as builders to know what’s in the box. We need
better tools, we need better engagement to enable everybody to do that
with less effort and less burden on individual volunteer maintainers and
non-profits.
Companies that use open source software need to measure and reduce
risks. The problem is that the biggest open source risks are related to
hard-to-measure human factors like developer turnover and burnout.
Developers of open source software can take actions that help companies
understand their risks, but they’re not compensated for doing it. A
prediction/incentivization market can both help quantify hidden risks
and incentivize changes.
If you have an internal market that functions as both a prediction
market and an incentivization market, you can subsidize both the
information and the desired result by predicting the events that
you don’t want to happen. This is similar to how commodities markets and
software bug futures markets can work. Some traders are pure
speculators, others take actions that can move the market. Farmers can
plan which crops to plant based on predicted or contracted prices,
companies can allocate money to fuel futures and/or fuel-saving
projects, developers can prioritize tasks.
Synergy with AI projects: An old corporate Intranet
rule of thumb [citation needed] is that you need five daily active
editors to have a useful company or organization Wiki. I don’t know what
the number is for a prediction market, but as Prof. Andrew Gelman points
out, prediction
markets need “dumb money” to create incentives for well-informed
traders to play and win.
Prediction markets need liquidity and dumb money. Bots can
already do those.
AI projects need scalable quality checks. Slop
is easier to make than to check, so the cost of evaluating AI output
keeps growing relative to the declining
costs of everything else. You can start up a lot of bots, fund each
with a small stake, and shut down the broke ones. The only humans
required are the traders who can still beat the bots. And if at some
point the humans lose all their money, you know you won AI.
Congratulations, and I for one welcome our bot plutocrat overlords.
Bots can also be run behind a filter to only make offers that, if
accepted, would further the market operator’s goals in some way. For
example, bots can be set up to be biased to over-invest on predicting
unfavorable outcomes (like buying the UNFIXED side of bug futures) to
add some incentivization.
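The bot-behind-a-filter idea can be sketched in a few lines. Everything here, the order shape, the "UNFIXED" side, the pass-through knob, is an illustrative made-up design, not any real market's API:

```python
import random

# Sketch: a naive liquidity bot behind an operator-goal filter.
# The order format and the "FIXED"/"UNFIXED" sides are hypothetical.

def random_bot(bugs, rng):
    """Propose a naive order on a random bug. Side and price are noise,
    which is exactly the 'dumb money' a market needs to attract sharps."""
    return {
        "bug": rng.choice(bugs),
        "side": rng.choice(["FIXED", "UNFIXED"]),
        "price": round(rng.uniform(0.05, 0.95), 2),
    }

def goal_filter(order, favored_side="UNFIXED", pass_through=0.25, rng=random):
    """Always pass orders on the side the operator wants over-invested in;
    pass other orders only occasionally. The bot's net effect then
    subsidizes predictions of the outcomes the operator cares about."""
    if order["side"] == favored_side:
        return True
    return rng.random() < pass_through

rng = random.Random(42)
bugs = ["BUG-101", "BUG-202"]
accepted = [o for o in (random_bot(bugs, rng) for _ in range(200))
            if goal_filter(o, rng=rng)]
```

Because UNFIXED offers always pass and FIXED offers mostly don't, the surviving order flow leans toward paying humans who fix bugs, which is the incentivization drift described above.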
Fixing governance by learning from early market
experiences: Internal prediction markets at companies tend to
go through about the same story arc. First, the market launches with
some sponsorship and internal advocacy from management. Second, the
market puts up some encouraging results. (Even in 2002 a prediction
market was producing more accurate sales forecasts than the official
ones at HP.) And for its final act, the prediction market ends up
perpetrating the unforgivable corporate sin: accurately calling some
powerful executive’s baby ugly. So the prediction market ends up going
to live with a nice family on a farm. Read the (imho, classic) paper, Corporate Prediction
Markets: Evidence from Google, Ford, and Firm X by Bo Cowgill and
Eric Zitzewitz, and, in Professor
Hanson’s post, the story of why a VC firm could not get prediction
markets into portfolio companies. Wernick blames the ego of managers who think
their judgment best, hire sycophants, and keep key org info close to
their chests.
The main lesson is that the approval and budget for the prediction
market itself need to be handled as many management levels as possible
above the managers that the prediction market is likely to bring bad
news to. Either limit the scope of issues traded on, or sell the market
to a more highly placed decision maker, or both. The prediction market
administrator needs to report to someone safely above the level of the
decision-makers for the issues being traded on. The really interesting
experiment would be a private equity or VC firm that has its own team
drop in and install a prediction market at each company it owns. The
other approach is bottom-up: start with limiting the market to
predicting small outcomes like the status of individual software bugs,
and be disciplined about not trading on more consequential
issues until the necessary sponsorship is in place.
So, is 2025 the year of prediction markets? Sort of.
A bunch of factors are coming together. Payment platform options, the
ability to do proof of concept niche projects, and the good fit as a QA
tool for AI will make internal market projects more appealing in 2025.
And if market operators can learn from history to avoid what tends to
happen to bearers of bad news, this could be the year.
Conditional
market: The seer.io prediction market supports conditional positions
(that only win or lose if some other position pays off) with an
arbitrary number of nesting levels.
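Nested conditional positions settle in a simple way: if any parent condition fails to resolve the required way, the position is void and the stake comes back; otherwise it wins or loses normally. A sketch under an invented data model (this is not seer.io's actual representation):

```python
# Sketch: settling nested conditional positions. A position carries a
# chain of parent (market, outcome) conditions; it only pays or loses
# if every parent resolved as required, otherwise it is void.
# The dict layout here is my own illustration.

def settle(position, results):
    """position: {'conditions': [(market, outcome), ...],
                  'market': m, 'outcome': o, 'stake': s, 'payout': p}
    results: dict mapping market -> winning outcome.
    Returns the trader's net cash change."""
    for market, outcome in position["conditions"]:
        if results.get(market) != outcome:
            return 0.0  # a parent condition failed: position is void
    if results.get(position["market"]) == position["outcome"]:
        return position["payout"] - position["stake"]  # conditions held, bet won
    return -position["stake"]  # conditions held, bet lost
```

For example, a "S&P 500 up, conditional on a rate cut" position pays out only in the world where the cut actually happened, which is what makes conditional markets useful for comparing policy options.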
The
History Crisis Is a National Security Problem: Democracies such as
the United States rely on the public to set broad strategic priorities
through elections and on civilian leaders to translate those priorities
into executable policies. Fostering historical knowledge in the public
at large is also an important aspect of U.S. competitiveness. (and
we really don’t want to be learning
about history from bots)
Why
the deep learning boom caught almost everyone by surprise: Fei-Fei
Li…created an image dataset that seemed ludicrously large to most of
her colleagues. But it turned out to be essential for demonstrating the
potential of neural networks trained on GPUs.
Developing
a public-interest training commons of books: Currently, AI
development is dominated by a handful of companies that, in their rush
to beat other competitors, have paid insufficient attention to the
diversity of their inputs, questions of truth and bias in their outputs,
and questions about social good and access. Authors Alliance,
Northeastern University Library, and our partners seek to correct this
tilt through the swift development of a counterbalancing
project…