https://www.ovl.design/around-the-web/feed.xml ovl.design » around the web 2023-12-18T12:04:13.535Z https://github.com/jpmonette/feed Oscar [email protected] https://www.ovl.design https://www.ovl.design/img/favicon/favicon-32x32.png https://www.ovl.design/img/favicon/favicon-32x32.png <![CDATA[The Denkzwerge are it. Again.]]> https://www.ovl.design/around-the-web/023-the-denkzwerge-are-it-again/ 2023-10-22T14:12:00.000Z <![CDATA[New manifesto without anything new dropped, controlling complex systems, Algospeak in war times, and revaluing the strike.]]> <![CDATA[

Collected between 9.10.2023 and 22.10.2023.


Welcome to Around the Web.

First, a correction: In the last issue I said that there is no oversight and zero fines in space. That wasn't entirely true. The FCC fined Dish Networks $150,000 for leaving junk in space. Shortly after I published the issue, Ordinary Things published his newest video Satellites: Crimes Against Space, which explains the whole situation in far more detail.

I revamped the design of my website a bit over the last weeks. I’m quite happy with it, so you may want to read this issue on the web.

Also, pardon my French German in the title, but I don’t know a translation that captures the essence of the word Denkzwerge quite like the original.

Before we start with our regular programming, here are

Updates from the Department of Facepalms

First, Marc Andreessen published the world’s worst meta bad take.

It’s long. It’s badly written. It feels like Andreessen dictated it into ChatGPT while trapped at Burning Man. It makes some absolutely wild statements without any effort to back them up. It quotes Filippo Marinetti, notable proto-fascist and OG techno-optimist. Basically, it is the 30,000-word version of the «This is fine» meme.

The «This is fine» comic strip, but with Andreessen’s face in the first panel.

If you don’t feel like reading the entire thing – and there is no reason in the world why you should feel like reading the entire thing – Ben Grosser captured the essence in his redaction poetry version.

An excerpt from Andreessen’s manifesto, but heavily redacted.

In Why can’t our tech billionaires learn anything new? Dave Karpf pointedly analyses what feels so off about the richest persons in the world crying that the rest of us don’t applaud everything they do anymore.

The most powerful people in the world (people like Andreessen!) are optimists. And therein lies the problem: Look around. Their optimism has not helped matters much. The sort of technological optimism that Andreessen is asking for is a shield. He is insisting that we judge the tech barons based on their lofty ambitions, instead of their track records.

In an interview with the German newspaper Der SPIEGEL, Theodor W. Adorno called out those «who frantically cry over objective despair with the ‹hurrah› optimism of immediate action to make it psychologically easier for themselves». Andreessen’s manifesto is the epitome of this «hurrah» optimism.

He openly illustrates the limits of his imagination, and that of the Valley, in a way no critic ever could: AI as the force that will destroy but also save the world, capitalism as the only form of economy that can work – and if it doesn’t work, we need more more more more more until it fcuking works. The destruction of society will continue until morale improves.

If that’s his future, I’m happy to fight for another one.

the problem with holding up a manifesto and saying “this is what vcs believe” is that their beliefs only exist to serve their goal

not to be all Sartre on antisemites here but these people only care about power, and you don’t need to read their fanfic to work that out

that said, it is funny that someone who desperately wants to be a leader doesn’t have enough charisma to lead a cult

Elsewhere in rich people who want the world to believe they are geniuses but are just rich (and fascists): Peter Thiel snitched for the FBI.

Why, you might ask? Well, probably it feels good if someone says «Thank you, Mr. Thiel, that’s very valuable information, Mr. Thiel».

Dead and kicking: Silvio Berlusconi. Apparently, he spent his retirement late-night-shopping, spending some twenty million Euros on essentially worthless works of art.

In a rare display of decency, his successor, Giorgia Meloni, split from her partner after he made sexist remarks. I wonder if, one day, she finds out about the rest of her party. Just kidding, she didn’t become their leader by accident.

Essentially worthless, but undisturbed, Jim Jordan refused to give up on his bid to become Speaker of the House; it failed, and his party ousted him. Will this election cycle be McCarthy bad? I, depressed, hope not, but at the same time hope it becomes worse. Spectacle is the fuel that keeps me alive.

This ain’t intelligence

Rest of World generated thousands of images using Midjourney, analysing the output for racial stereotypes. They conclude that AI reduces the world to stereotypes. Make sure to read the whole story over at Rest of World. It includes visual representations of the output, which makes the point even more convincing than some text about it.

Rest of World also reported on the way China forces students at vocational schools into data labelling for its AI industry.

To combat stereotypes, it’s often said that we need to understand the algorithms, have some kind of transparency. That’s not enough, argues Rachel Thomas. Instead, we need to have mechanisms that let us contest the power those systems wield.

These mechanisms are ever more important, as complex systems are hard to control.

Focusing on complex systems leads to several perspectives (incentive shaping, non-deployment, self-regulation, and limited aims) that are uncommon in traditional engineering, and also highlights ideas (diversification and feedback loops) that are common in engineering but not yet widely utilized in machine learning. I expect these approaches to be collectively important for controlling powerful ML systems, as well as intellectually fruitful to explore.

Jacob Steinhardt – Complex Systems are Hard to Control

Facebook announced some new celebrity chatbots, which manage to feel outdated while using the hot technology of the moment. Tom Brady’s chatbot incarnation quickly insulted Colin Kaepernick. Facebook said the usual thing, that these high-profile, highly expensive features are «experimental».

It’s no surprise that some of the celebrities simulated by these chatbots were shilling crypto not that long ago.

After all, we are living in the age of the grift shift.

The Grift Shift is a new paradigm of debating technologies within a society that is based a lot less on the actual realistic use cases or properties of a certain technology but a surface level fascination with technologies but even more their narratives of future deliverance. Within the Grift Shift paradigm the topics and technologies addressed are mere material for public personalities to continuously claim expertise and “thought leadership” in every cycle of the shift regardless of what specific technologies are being talked about.

tante – The Age of the Grift Shift

Building cutting-edge models requires an immense amount of data and computing power, making it basically impossible to do it without the backing of one of the larger players in the space. How Big Tech is co-opting the rising stars of artificial intelligence explains these dynamics in more detail.

It’s also the origin of the latest paper by Meredith Whittaker and her co-authors, Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI. In it, they argue that the current state of generative AI makes it extremely hard, if not impossible, to build truly open systems. If reading an entire paper doesn’t fit into your schedule, Whittaker joined This Machine Kills to discuss the paper.

But who, actually, makes money? For most companies currently in the AI race, profits are a possible future, not something they have figured out yet. GitHub Copilot is losing $20 per user per month. This might still be a power grab, but I don’t see any real competition in the coding market anyway?

OpenAI hosts its first developer conference at the beginning of November.

The models do require electricity, but also an ever-increasing amount of water. According to researcher Shaolei Ren, ChatGPT consumes as much as 500 ml of water per prompt.

But it’s not just the public imagination and electricity consumption that are taken over by the race for powerful AI models. Open Philanthropy, flagship of the Effective Altruism movement, sponsored the salaries of multiple advisors in the US Congress.

Google’s phones let you create composite images out of multiple shots taken shortly after one another. All smiles.

You might have heard about Reinforcement Learning in connection with machine learning models, maybe seen the abbreviation «RLHF» turning up. RLHF stands for Reinforcement Learning from Human Feedback. But what is this, how is it applied in the training process, and are there alternatives? Sebastian Raschka explains.
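
To make the reward-modelling step at the heart of RLHF a little more concrete, here is a toy sketch of my own (not from Raschka’s piece). It learns a reward function from pairwise human preferences, using made-up feature vectors instead of real model outputs; the actual RLHF pipeline then fine-tunes the language model against such a reward model (typically with PPO), which is far beyond this snippet.

```python
# Minimal toy: learn a reward model from pairwise preferences.
# Everything here (dimensions, "true_w", the data) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

dim = 4
true_w = np.array([1.5, -2.0, 0.5, 1.0])            # hidden "human taste"
chosen = rng.normal(size=(500, dim)) + 0.3 * true_w   # responses annotators preferred
rejected = rng.normal(size=(500, dim)) - 0.3 * true_w # responses they rejected

w = np.zeros(dim)   # parameters of the reward model
lr = 0.1
for _ in range(200):
    # Pairwise (Bradley–Terry) logistic loss: the chosen response should
    # receive a higher reward than the rejected one.
    margin = (chosen - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    w -= lr * ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)

print("learned reward weights:", np.round(w, 2))
print("pairs ranked correctly:", ((chosen - rejected) @ w > 0).mean())
```

In real RLHF the «features» are the language model’s own representations and the reward model is itself a neural network, but the preference-ranking objective is the same.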

In an interesting bit of research, Anthropic managed to build a (small) model and were able to analyse features instead of individual neurons activating in generating output. This might lead to better interpretability and control of models, if the approach can scale to the size of Large Language Models.

Social Mediargh

Truth, they say, is the first victim of war. While that’s technically untrue, truth dies pretty fast, as all parties to a war have a story to tell which might or might not align with what is actually happening.

In the ongoing Israel–Hamas war, an explosion occurred near the Al-Ahli hospital in the Gaza Strip. Hamas was quick to denounce Israel, saying that 500 people died in the attack.

The message spread like wildfire through social media and engagement-driven news organisations (which basically means all news orgs). Open-Source Intelligence (OSINT) researchers, like Bellingcat, and teams such as BBC Verify painstakingly constructed a more nuanced picture, and as morning dawned the supposed bomb attack turned out to have left a smaller crater and some burnt-out cars.

While some OSINT researchers keep doing what they have done over the last years – providing well-researched facts – others are using Twitter’s engagement machine to destroy the information ecosystem, as 404 Media reports.

Instagram’s translations added the word «terrorist» to posts containing the Palestinian flag.

Users posting about the war are – as with other topics – increasingly resorting to Algospeak: language that uses symbols, abstractions, or neologisms to evade the platforms’ algorithms. Side note: Facebook trains its language model on the posts on its platform, so it will be interesting to see whether «P*les+in1ans» shows up in its future generations.

I won’t comment on the larger conflict here. John Ganz’s The Trap manages to formulate my feelings pretty conclusively:

Strategy and tactics are not what’s really at issue here. At the core of the worldviews in question is a belief in sheer murderousness. What both Hamas and the far right in Israel want this to become is a war of annihilation and extermination. This is the fundamental vision of their nationalism of despair: races and peoples pitted against each other in interminable conflicts that can only be concluded with “final solutions.” Of course, a similar vision of permanent racial war underpinned Nazism and the Holocaust. I categorically refuse to be recruited to this conception of the world. And I will not be manipulated by emotional appeals and propaganda—by one side or the other—to participate in it.

I enjoyed reading Farmers only, a look at algorithmic amplification in the age of enshittification.

Looking around at the overharvested fields of digital shit, it’s hard not to ask: is the personality quiz the ouroboros of the algorithm? Is all of social media just a personality test? AITA, the Twitter hypotheticals, the personality type tutorials invite us to project ourselves into a handful of predetermined choices. To pick one of seven essences. To choose between Jay Z or $500,000. It gives the illusion of randomization, customization and personalization, but ultimately all it does is produce a quantization of who we are. It turns the mass of the self into a collective of discrete and finite individual components.

WordPress announced official support for ActivityPub, the web standard that powers Mastodon.

Gmail users can now reply to Gmail users with emojis. The rest of us will be annoyed by yet another mail. It’s not without precedent: Apple did the same when iOS users reacted to messages from non-iOS users.

Do you have what it takes to lead a Trust & Safety team at a fast-growing social media site? Trust & Safety Tycoon lets you find out about the intricacies policy decisions entail.

What are you looking at?

The German Federal Cartel Office (Bundeskartellamt) forced Google to give users in Germany better control over the data Google collects from them. Users will be able to control how Google collects and combines data from different sources.

In MIT Technology Review Stephen Ornes published an in-depth look at the past and future of encryption.

Psychology is in the midst of its replication crisis; the new editor of Psychological Science hopes to combat it.

No surprise. Songtradr laid off large swaths of Bandcamp’s staff, among them most of the employees eligible to vote in the unionisation process and the whole bargaining unit.

Apropos of this, but also everything, it is time to revalue the strike, as Erik Baker argues in Jewish Currents.

Stills from the Barbie movie. A scene at twilight. In the first, Ken says «I was hoping to come over tonight». The second shows a perplexed-looking Barbie asking «To do what?» To which Ken responds: «Actually … to build horizontally organised networks of mutual aid to break our collective dependency on capitalist institutions.»

Terrorists, according to German law enforcement: Leftists who, allegedly, sprayed pro-Antifa graffiti.

No terrorists, according to German law enforcement: Nazis hoarding firearms.

Still trying to solve the trolley dilemma? Maybe learn a new language first: researchers at the University of Chicago claim that our moral decisions depend on whether we think in our native or a foreign language.

Ever used WinAmp? You will love the Winamp Skin Museum.

Recipe of the Issue: roasted red pepper and tomato soup with chunky chorizo-y croutons.


That’s it for this issue. Stay sane, hug your friends, and Antifa forever.

]]>
<![CDATA[Taking ducks to outer space]]> https://www.ovl.design/around-the-web/022-taking-ducks-to-outer-space/ 2023-10-08T14:12:00.000Z <![CDATA[AI alignment, space junk traffic jams, settler colonialism, and the geopolitics of web domains.]]> <![CDATA[

Collected between 1.8.2023 and 8.10.2023.


Welcome to Around the Web.

The world leaves me in too cynical a state to even write some nice words welcoming you to this issue, which is so late that late doesn’t cut it. Time is a social construct, people! And as nothing ever happens, or at least few things seem to get better, it doesn’t really matter.

There were some local elections in Germany, with significant wins for the (far) right. Most notably, Elon Musk’s favourite party, the Alternative für Deutschland. Söder’s CSU stayed steady, the Freie Wähler gained slightly, even though their leader, Hubert Aiwanger, was accused of distributing antisemitic pamphlets in school. I hate everything about this paragraph so much.

Thanks, world. Hey, dear reader, don’t despair. Autumn is here, but carry on we must. Here are some links:

This ain’t intelligence

The AI doomer crowd, notably OpenAI and their quest for «Superalignment», has been pretty vocal about the necessity to better align the output of Large Language Models with human preferences. But what is alignment, and for which goals is it useful? Jessica Dai took a closer look.

I’m not advocating for OpenAI or Anthropic to stop what they’re doing; I’m not suggesting that people — at these companies or in academia — shouldn’t work on alignment research, or that the research problems are easy or not worth pursuing. I’m not even arguing that these alignment methods will never be helpful in addressing concrete harms. It’s just a bit too coincidental to me that the major alignment research directions just so happen to be incredibly well-designed for building better products.

Examples of the harm that comes from colonialist, racist systems abound. Arsenii Alenichev asked Midjourney to generate images of Black doctors treating white children. The system failed spectacularly.

While the Hollywood writers won their strike and, among other things, gained better control over the use of AI systems in their work, workers in other countries are less lucky. In India, a CEO fired the whole staff of his company, replaced it with ChatGPT and subsequently insulted the humans he fired.

The AI systems that we have to deal with are built in the Global North, for the Global North. This perpetuates postcolonial power structures and is harmful to those not in the focus groups of our overlords. AI must be decolonialised to fulfil its full potential, argues Mahlet Zimeta.

OpenAI announced DALL-E 3. The headline new feature: Integration with ChatGPT to alleviate the burden of writing prompts. Microsoft’s Bing integrated DALL-E 3 quickly. Not long after, users found that Bing will readily spit out images of SpongeBob flying a passenger plane into the World Trade Center. When there is harm to be done, 4chan is ready, posting instructions on how to generate racist imagery.

And it’s not just images. Researchers found that adding certain bogus strings to the end of malicious prompts reliably breaks all current bots. OpenAI, Anthropic and Co. are whack-a-mole grandmasters by now, so some of these strings are «fixed» (more likely, a filter in front of the model output catches them), but the researchers say they «have thousands of these». Great!

How’s Bing going otherwise? Doing normal things. Like pushing malware through ads. This kind of thing is one of the AI problems that are actually easy to solve: Don’t put ads in it. Thanks for coming to my TED talk.

Google, meanwhile, picks up ChatGPT-generated nonsense from sites like Quora and presents you with melting eggs, and invents mails that were never sent or received. Meanwhile, there appears to be a network of fake news sites, serving real ads to (most likely) fake visitors.

Melting eggs? Cute. Making money with fake news sites nobody visits? The perfect crime.

Unfortunately, most of the disinformation we have to grapple with will not be, is not, as harmless. In a town in Spain, children circulated AI-generated nude images of other children. On YouTube, the first videos scripted by AI are popping up, promising to educate children. The only problem? The education is fake. Unlike those fake news sites, these videos garner views, thanks to YouTube’s ever-reliable algorithmic amplification. Google showed a fake selfie as the first result for «Tank Man» searches. This was easy to spot. For now, at least. But are you, or the parents in your vicinity, regularly checking in on the YouTube videos your kid consumes, or certain that you know what happens in the schoolyard?

And I haven’t even talked about election disinformation and that basically all social networks bailed and said «yeah, whatever, let’s make moneys». I’ll spare you, dear reader, and me. Until next issue!

That’s not to say that generative AI can have no applications in education. But it requires meticulous planning and teachers that understand the fallacies of the technology. One such example: Simulating History with ChatGPT.

Another possibility is to design the interfaces and underlying models in ways that break the «bigger is better and put a chatbot on it» mould that’s currently so popular. Maggie Appleton spoke about this and how to force structure onto the wobbly things. Besides this, the first part of the talk is also a great rundown of how language models work. Recommended all around. Her thinking here is really on point, and coherently makes an argument I have wanted to make for a while but couldn’t quite pin down:

You should treat models as tiny reasoning engines for specific tasks. Don’t try to make some universal text input that claims to do everything. Because it can’t. And you'll just disappoint people by pretending it can.

I don’t think I mentioned it in Around the Web, but this summer there was this viral story that ChatGPT managed to hire a Task Rabbit worker to solve CAPTCHAs on its behalf. A closer look shows a more nuanced picture: With the help of humans, ChatGPT kinda managed to solve a task that involved a CAPTCHA and a Task Rabbit worker.

Speaking of CAPTCHAs, I’ve got bad news for you: bots do outperform humans at solving CAPTCHAs. They are both faster and more accurate.

A new study interrogates the Surveillance AI pipeline, coming to the conclusion that basically all research in computer vision targets humans and leads to applications in surveillance. If you see someone reporting on video analysis and the talk is about «object detection», unless stated otherwise, those objects are humans.

With all these issues being reported, the grifts and misrepresentations, the announcement of imminent doom and big investments, it sometimes feels as if critics are shouting into a void.

Thoughtworks just published a report where they asked 10,000 customers across the world what they demand from Generative AI systems and if they have concerns about their application. While a third of the participants are generally excited about these systems, less than ten percent reportedly have no ethical or privacy concerns.

Is AI standing on the edge of a disillusionment cliff? Maybe. Hopefully, to be honest.

Here’s a question to conclude this section: Are you allowed to take ducks home from the park, and if so, how? «No!» I hear you say. Correct, and all Chatbots agree. ChatGPT will let you take the ducks if you ask in German. Which is very friendly. Don’t speak German? Then you might need a more elaborate scheme, dynomight has got you covered.

There are quite a few loose ends in this issue. Pick your favourite!

Do you know how many satellites are orbiting earth? Take a guess. I’d have said some hundred. Perhaps a thousand. The answer: around 7,000, some 4,000 of which belong to Starlink. Surely, with that many objects in space, there are rules on how to avoid collisions, or plans to clean up if a collision happens or a satellite malfunctions? Of course not! Elsewhere in space: The ancient technology keeping space missions alive.

From outer space to underwater (I’m sorry). The Secret Life of the 500+ Cables That Run the Internet. The fact that we just throw cables in the oceans and this somehow manages to keep the internet running is one of my favourite things. So I’m always in for a good cable story.

When we discuss settler colonialism, we probably think of Israel or the USA. But they are far from the only countries that claim land by settling their populations. This Aeon essay takes a closer look at the phenomenon, its ideological justifications, and critiques.

You know what’s wonderful about capitalism? It’s likely the only system that puts screens in doors that show you what’s behind those doors (oh, and ads of course) which then show things that are not behind those doors and occasionally catch fire. Innovation, baby!

Bandcamp, a year after being acquired by Epic, and in the midst of unionisation, has been sold to Songtradr. According to Bandcamp United, employees have been locked out of critical systems and their employment status is unclear. Unions are a capitalist’s worst enemy. If you can’t beat them, dismiss them.

Meanwhile, music marketplace Discogs’ latest updates leave sellers wondering if the site cares about them.

By now you probably have heard, that .ai and .tv domains belong to countries, maybe even that they make significant money with them. In Reboot, Tianyu Fang looked closer at the history and geopolitical implications of domains.

The Consumer Aesthetics Research Institute compiled a handy list of (digital) design aesthetics. Take a look, test is next week.

Open Source Gardens

Remember: To avoid straining your eyes when you're continuously working, follow the 20-20-20 rule. After 20 minutes of work, look at something 20 feet away, then spend 20 years in the forest.


That’s it for this issue. Stay sane, hug your friends, and do the cyberbougie.

]]>
<![CDATA[The Ed Hardy shirt of argumentative figures]]> https://www.ovl.design/around-the-web/021-the-ed-hardy-shirt-of-argumentative-figures/ 2023-07-30T14:12:00.000Z <![CDATA[How Large Language Models work, the era of global boiling, passport privileges, a swan song to masculinity, and Barbie’s merchandise.]]> <![CDATA[

Collected between 17.7.2023 and 30.7.2023.


Welcome to Around the Web.

Pop culture isn’t universally known for its backbone. All the brighter shine those who put integrity before commercial success. This week, Sinéad O’Connor died, and with her pop lost a good part of its backbone.

Screen grab of Sinéad O’Connor ripping a picture of the pope to pieces.

Rest in power, Sinéad.

This ain’t intelligence

Let’s start this section with a step back. I’ve written a lot about Large Language Models and their impact on society over the past issues. But how do they … work? It’s fancy autocomplete, but who puts the fancy in the complete? Timothy B. Lee and Sean Trott explain LLMs with a minimum of math and jargon.
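
If you want to feel the «autocomplete» part in your fingers, here is a deliberately dumb toy of my own: a bigram table that always predicts the most frequent next word. Real language models work on tokens, use billions of parameters and attend to the whole context, but the training objective – predict the next piece of text – is the same.

```python
# Toy "fancy autocomplete": predict the next word purely from how often it
# followed the previous word in a tiny training text.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat and the cat slept on the mat "
    "the dog sat on the rug and the dog slept on the rug"
).split()

# Count bigrams: which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str, steps: int = 6) -> list[str]:
    out = [word]
    for _ in range(steps):
        counts = following.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])  # greedy: most likely next word
    return out

print(" ".join(autocomplete("the")))
```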

Another primer, but with a tad more jargon, is this explanation of how in-context learning emerges by Jacky Liang. It’s called in-context learning when LLMs learn new tasks for which they haven’t been trained originally.
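
In-context learning is easiest to see in a few-shot prompt: nothing about the model’s weights changes, the «training examples» live entirely in the context window. A hedged sketch – the task and the API call below are made up, swap in whatever client you actually use:

```python
# A made-up task the model was never explicitly trained on: map invented
# words to colours. The examples inside the prompt define the task.
few_shot_prompt = """Map each made-up word to a colour.

blorp -> red
zind -> blue
mequa -> green
trulo ->"""

print(few_shot_prompt)

# Hypothetical completion call (placeholder, not a real client):
#   response = client.complete(model="some-llm", prompt=few_shot_prompt)
# A capable model will continue with a colour word, having "learned" the
# task purely from the three examples above.
```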

Whenever I talk about AI with friends who aren’t following the discourse too closely, one question looms large: When will AI overpower humans? «Don’t worry, for now» I say. AI companies and influencers have been very vocal about a prospective future in which a superintelligent AI model will kill us all, essentially creating an illusion of AI’s existential risk.

A group of authors took to Noema to dispel the myth and bring reason back to the discourse. AI isn’t going to kill us, but there are certainly dangers to the fabric of our societies in the here and now which AI might amplify. For all of these problems – autonomous weapons, the impact of training models on the climate crisis – AI, boring as it is, is quite capable of acting as a fire accelerant.

As it stands, superintelligent autonomous AI does not present a clear and present existential risk to humans. AI could cause real harm, but superintelligence is neither necessary nor sufficient for that to be the case. There are some hypothetical paths by which a superintelligent AI could cause human extinction in the future, but these are speculative and go well beyond the current state of science, technology or our planet’s physical economy.

If you are worried about AI, read this piece. You won’t be less worried after it, but your worry will be better focussed.

ChatGPT losing it?

You might have heard that ChatGPT has gotten worse over the last months. After all, there is a paper saying so, isn’t there? Not really. First, the paper used partly questionable methodology: for example, it counted the inclusion of explanations in coding answers as being less proficient at coding. But maybe most importantly, the paper never claimed that ChatGPT got worse, only that its behaviour changed.

As Arvind Narayanan and Sayash Kapoor explain in AI Snake Oil, there is a difference between model capability and model behaviour.

Chatbots acquire their capabilities through pre-training. It is an expensive process that takes months for the largest models, so it is never repeated. On the other hand, their behavior is heavily affected by fine tuning, which happens after pre-training. Fine tuning is much cheaper and is done regularly.

The authors of the paper found no evidence of capability degradation. However, by documenting the shifting behaviour, the paper highlights a different problem: It’s incredibly brittle to build products and do research with OpenAI’s API offerings. There are no changelogs, the available snapshots of the models are deprecated and removed frequently. As Narayanan and Kapoor conclude:

It is little comfort to a frustrated ChatGPT user to be told that the capabilities they need still exist, but now require new prompting strategies to elicit. This is especially true for applications built on top of the GPT API. Code that is deployed to users might simply break if the model underneath changes its behavior.

There might be another reason that the paper found such fertile ground. A few months after its release and the accompanying PR blitz, the novelty of Generative AI has worn off. Or, as Baldur Bjarnason puts it, «Generative ‹AI› is just fucking boring.»

The only thing that isn’t boring about generative “AI” is the harm tech companies and their spineless hangers on seem intent on inflicting on our society and economy: replacing the variety of human creative work with the tedious sameness of synthetic work in the name of “productivity” or, worse, “cost”.

Or, as Paris Marx concludes in Disconnect:

Once again, the tech industry has deceived us in another bid to expand their power and increase their wealth, and much of the media was all too happy to go along for the ride. Generative AI is not going to bring about a wonderfully utopian future — or the end of humanity. But it will be used to make our lives more difficult and further erode our ability to fight for anything better. We need to stop buying into Silicon Valley fantasies that never arrive, and wisen up to the con before it’s too late.

Faded – collapsing new models, watch them – collapsing

While it’s highly unlikely that existing models lose their capabilities, the popularity of these models will present a different problem for future ones.

As model output becomes more prevalent across more and more domains, it will be harder to train new models on datasets that do not contain output of other AI models.

This matters, as generative models – be they language models or image generators – rely on massive amounts of random data. And this randomness needs to include some long-tail, rare data, which AI models are not good at producing. Remember, they output what is most likely, not what is most creative. So the more AI-generated content we get, the rarer low-probability outputs become.

In The Curse of Recursion: Training on Generated Data Makes Models Forget, researchers found evidence for exactly this issue; they call it Model Collapse. This «refers to a degenerative learning process where models start forgetting improbable events over time, as the model becomes poisoned with its projection of reality».

This “pollution” with AI-generated data results in models gaining a distorted perception of reality. Even when researchers trained the models not to produce too many repeating responses, they found model collapse still occurred, as the models would start to make up erroneous responses to avoid repeating data too frequently.

The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content

If you remember the explanation of in-context learning above, that too relies on the long-tail, rare data points. So future models might not only collapse and produce nonsense, they might also lose some capabilities today’s models have.
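
The dynamic is easy to reproduce in miniature. The toy sketch below (my own illustration, not from the paper) fits a Gaussian to data, samples from the fit, fits again on those samples, and repeats; because rare tail values are under-sampled in every round, the fitted spread drifts towards zero over the generations.

```python
# Toy model collapse: repeatedly train a "model" (a fitted Gaussian) on the
# output of the previous model and watch the tails disappear.
import numpy as np

rng = np.random.default_rng(42)

n = 200            # "dataset" size per generation
generations = 2000

mu, sigma = 0.0, 1.0                        # the real data distribution
for g in range(generations):
    data = rng.normal(mu, sigma, size=n)    # sample from the current model
    mu, sigma = data.mean(), data.std()     # fit the next model on those samples
    if g in (0, 10, 100, 500, generations - 1):
        print(f"generation {g:4d}: fitted sigma = {sigma:.4f}")
```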

Having Real Human Data™ will be even more valuable in the future. Shame that click-workers on platforms such as Mechanical Turk are already using language models to train language models.

At one point in the future, you might see AI researchers standing in dark street corners, dealing with 2022 Common Crawl backups like it’s crack.

Self-serving regulation

«Look at us!» some AI vendors scream, «We are regulated!» Some large AI companies pledged to let government institutions inspect their code, and – among other things – watermark the output of their models. As we’ve just seen, they have a considerable incentive for watermarking, going beyond regulatory compliance. And, of course, the agreement isn’t legally binding. If only there was some kind of precedent of what happens when companies pinky-promise to not do evil things.

So, yeah. Welcome to the bare minimum. Again.

In an unexpected contribution to the AI ethics discourse, the Pentagon claimed that their AI-driven war machines are more ethical than other AI-driven war machines, because of their ingrained Judeo-Christian values. You know straight away that absolute bullshit is being said when someone wants to justify something with «Judeo-Christian». The Ed Hardy shirt of argumentative figures.

Mustafa Suleyman, co-founder of Inflection AI, proposed a new Turing test for AI models. Instead of faking being a human, the goal shall henceforth be to fake capitalism. That’s probably more illustrative of the limits of imagination of the AI zealots than Suleyman intended.

Speaking of capitalism, Microsoft announced ridiculously high prices for its Copilot-infused office suite, and the stock market is going like yay money, who cares if anyone’s going to pay.

Benedict Evans tries to situate automation through Machine Learning models in capitalism’s wider history of automation and changing jobs. It’s certainly an interesting read, which gives better roots to the often ahistorical approaches in other publications.

Facebook published the second version of its Large Language Model, Llama 2. In contrast to OpenAI, Llama 2 is open-source and has a generous free usage license, though no information about the dataset it has been trained on has been published yet.

One of the applications of AI that falls squarely into the «That’s useful»-category is reviving ancient languages and making translation of their texts easier. One recent example where researchers managed to do just this is Akkadian cuneiform. Note, however, that this is no Large Language Model, but neural machine translation, the technology that powers, among others, Google Translate.

Certainly no useful application: The Free Democratic Party in Switzerland faked an image of climate protesters blocking an ambulance, because climate protesters don’t block ambulances, but you somehow need to cultivate the image of your enemy, I guess.

EOL of Humanity

Amid an excruciating heat wave, climate researchers warned that global warming could push the Atlantic circulation past a tipping point this century. Or as early as 2025. Which, yes, is in two fcuking years.

July 2023 has been the hottest month on record. Or, in the words of the UN secretary general, António Guterres: «The era of global boiling has arrived.»

Isn’t it romantic that we get live pictures of a cargo ship transporting (electric) cars burning in the North Sea while we need to read all this? The Zeitgeist certainly knows a thing or two about drama. Maybe someone in Hollywood should hire this thing.

I’d rather watch Tenacious D running along a beach, being as happy as happy can be. Luckily, this is totally possible.

What are you looking at?

For Algorithmwatch, Josephine Lulamae travelled to the German city of Mannheim to report on the video surveillance in its city centre. In Mannheim, she reports, an automated system flags hugs to the police.

In reality, the surveillance program highlights a perfect storm of lofty promises, police violence and racist sentiments.

One lady who works at the square shares that officers made her teenage son stand facing the wall outside her shop and empty his pockets. “Do you know how much this hurt me?” she says. Around two years ago, she filmed officers tying a 15-year-old boy’s hands behind his back with a cable while he was lying on the ground of the square, because, she says, the officers had felt threatened by the puppy that he and his friends had been playing with.

The encrypted European radio standard TETRA, used by police and critical infrastructure throughout Europe, contains massive security flaws, researchers found.

In the UK, the Home Office plans to push for facial recognition systems to surveil shoppers.

The Border Regime

Thought about your passport lately? Given the assumed demographics of the readers of this newsletter, probably only to use it when travelling. In Passports and Power Rafia Zakaria takes a closer look at the power dynamics embedded in this seemingly innocuous document.

What could better illustrate the sheer entitlement of the wealthy and the increasing lack of moral shame or outrage at the reduction of one group of humans to a subordinate category while others can afford to reduce anything to an “experience” for “making content”? Both faddish words are examples of the awkward lingo that is meant to sound uplifting. There is no moral shame attached to the consumption of these “experiences,” in which the “thrilling” nature bypasses the depravity with which others with a different set of documents have no choice but to contend.

In the EU’s current push to tighten the border regime to the point where basically no-one uninvited can reach its land anymore, gigantic sums of money are poured into countries such as Tunisia, despite widespread reports of abuse, such as refugees being driven into the desert.

Libya seemingly tried to ban Frontex planes and drones from its airspace.

Social Mediargh

Elon Musk finally did it. He killed the bird. And replaced it with a logo so generic that every corporate sans serif typeface of the last twenty years seems incredibly unique. X, as it is now called, will be, if all goes according to plan (it won’t), the US equivalent to WeChat.

Ryan Broderick summarises the dumpster fire (incredibly, still burning):

And so, the answer to “why is he turning Twitter into WeChat” is because he simply cannot imagine an internet beyond Twitter, just like all the users still using it currently. He wants his own WeChat because he wants to control all of human life both on Earth and beyond and he can’t conceive of other websites mattering more than Twitter because Twitter makes him feel good when he posts memes. As far as I’m concerned, Musk is simply doing the billionaire equivalent of when someone breathlessly explains insular Twitter drama at you irl like it’s the news. He thinks Twitter is real life and he’s willing to light as much of his fortune on fire as possible to literally force that to be true. No matter how cringe it is.

Elon Musk thought he was buying the whole internet

Now, we have Threads, whose sole raison d’être seems to be that Facebook can sell more ads, X fka Twitter, and some smaller alternatives such as Bluesky, somehow locked into Beta limbo and with a flailing content moderation approach.

It’s perfectly fine to be a “feminine” man. Young men do not need a vision of “positive masculinity.” They need what everyone else needs: to be a good person who has a satisfying, meaningful life: Against Masculinity

Zuckerberg, meanwhile, thinks that Threads can attract a billion users.

I haven’t watched Barbie yet – I definitely will soon – and I appreciate Jessica Defino’s explanation of how the sprawling merchandise industry undermines the feminist aspirations the movie has.

No matter. Barbie profits from both the feel-good performance of embracing cellulite and wrinkles and the practical tools of erasing them.

“Things can be both/and,” Gerwig has said. “I’m doing the thing and subverting the thing.” But in terms of production and consumption, they can’t be, and she’s not.

CrowdView is a search engine that only searches in forums.

In The Nib, Tom Humberstone illustrates (really!) why we should all be Luddites to build a better future, with technology that serves humans rather than corporations: I’m a Luddite (and So Can You!) (Unfortunately, the images lack alt text, which is a shame.)

An illustration of a person holding a sledgehammer on their shoulder. The person is seen from diagonally behind. The lighting indicates a sunset. In the bottom half of the illustration the words «Welcome to the future. Sabotage it.» are written in comic style.

That’s it for this week. Stay sane, hug your friends, and nothing compares 2 u.

]]>
<![CDATA[You can’t spell AI without C-A-P-I-T-A-L-I-S-M]]> https://www.ovl.design/around-the-web/020-you-can-t-spell-ai-without-c-a-p-i-t-a-l-i-s-m/ 2023-07-16T14:12:00.000Z <![CDATA[Not-so-news from your favourite AI shovel sellers, beaver bombing, how screen readers work, and the physics of riding a bike.]]> <![CDATA[

Collected between 27.3.2023 and 16.7.2023.


Welcome to Around the Web. It’s been a while.

In my absence the AI bros convinced the world that the singularity is nigh and a doom-machine impending. «Pay us», they screamed in their finest snake oil salesmen voices, «or AI will destroy us all!» What a pitch! Welcome to the marketplace of doomsaying. But also: What a nonsense. Doom is mine, but I’ll share it.

Here we go.

This ain’t intelligence

In May, Sam Altman, CEO of OpenAI, had a weasel-eyed appearance in front of the US Congress, asking pretty please to regulate AI; otherwise it’ll kill us all. Shortly after, he learned about the EU AI Act, which – of course – is actual regulation. Altman reacted by saying that this is the wrong regulation and OpenAI may stop operating in the EU if it isn’t changed.

You see, it’s a gold rush and as the shovel sellers are the only ones getting rich, it’s only fair that they decide how shovel selling in a gold rush will be regulated. They have the experience in the market after all, not those pesky politicians.

After all, they promise AI will destroy humanity if they – because they only want the best for humanity (which is a nice way to say their profits) – don’t build it responsibly. There is no evidence for such claims (except for the Terminator movies, which aren’t scientific literature, thinking about it). But this doesn’t stop ideological capture from taking over elite colleges.

These are the people who could actually pause it if they wanted to. They could unplug the data centres. They could whistleblow. These are some of the most powerful people when it comes to having the levers to actually change this, so it’s a bit like the president issuing a statement saying somebody needs to issue an executive order. It’s disingenuous.

Meredith Whittaker

Now, the FTC is investigating OpenAI, too.

While OpenAI and Google try to make a secret out of their every move, data and development, Facebook decided to take a different route. They tend to be much more open with their models. For better or worse. Truth be told: For better and worse.

German tabloid Bild has cut jobs and is looking to integrate AI deeper into their workflows. As they just make shjt up anyway, a bit of hallucinating won’t make a difference.

Short interlude: Here’s ChatGPT failing at ASCII art.

The gay-detectors are at it again. A model that can detect homosexuality! Science! I’m so happy I’m not in Zurich right now. I especially like the quote at the end, where the honourable scientist is like «Of course we oversimplified, but we did it to showcase human diversity.» Slow clap.

On the bright side, we are seeing more and more journalists actually using their brain and looking deeper than press releases.

If you are a journalist, or know a journalist, or just want to be able to critically read what journalists write, the Columbia Journalism Review has an actually useful step-by-step guide on how to report better on artificial intelligence.

Intelligencer published a must-read piece on the human work that underpins every Machine Learning model, no matter if it trains itself or relies on annotated data. This work isn’t going anywhere and as the companies developing the models have understood capitalism quite well, it’s low-paid and outsourced.

But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behavior, and even less is known about the people doing the shaping.

Those doing the work have formed their first union. Here’s to many more to come.

Andrew Deck, in Rest of World, looked at outsourced and contract workers in the majority world. They are the ones who’ll take the brunt of AI’s impact and are adapting to generative AI already. The takeaway here is that AI won’t make humans redundant, but workflows will change.

Black artists are investigating the datasets and outputs of generative models and the way those models are (not) able to reproduce authentic images of Black people. Those models distort their faces, or lighten their skin. Meanwhile, the terms of service hinder research, as, e.g., the generation of images depicting slave ships is blocked. For good reason, as we all know what certain corners of the internet would do with it. At the same time, this block makes it impossible to investigate some pieces of human history.

Those datasets also include tons of private data – to the surprise of no-one. An analysis by the German public broadcaster Bayerischer Rundfunk found thousands of files with intact EXIF metadata in the LAION-5B dataset. EXIF data can contain names or geolocations, enough to deanonymise people in photos in the dataset. Deleting such metadata isn’t hard, and not doing it is a colossal oversight. As LAION is public, there is likely a multitude of models trained on the data already.
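
How cheap is it to strip that metadata? A hedged sketch with Pillow (the paths are placeholders): copy only the pixel data into a fresh image object, which carries none of the original file’s EXIF fields such as GPS coordinates or camera serials.

```python
# Strip metadata by re-encoding pixels into a brand-new image object.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))   # copy pixels only, not metadata
        clean.save(dst_path)

# Example (placeholder file names):
# strip_metadata("photo_with_gps.jpg", "photo_clean.jpg")
```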

Google’s new «Search Generative Experience» might break the internet, argues Avram Piltch in Tom’s Hardware. Google is set to plagiarise content it finds, deprioritising actual search results further and further. The next time a Googler explains how much they care about the open web, you are allowed to laugh them out of town.

Maybe it’s time to lock off GooBingAI from our websites? «We can rescind our invitation to Google», says Jeremy Keith. And actually, we should focus on building corners of unadulterated humanity, hidden away from corporate crawlers.

Mozilla, nowadays unfortunately always one to ride a wave, even if it only leads to embarrassment, tried to use some chatbot model on the Mozilla Developer Network. It didn’t end well.

Okay, here’s something pretty impressive. Redditor nhciao posted a series of graphics where they used Stable Diffusion to embed working QR codes in anime images. Mindboggling stuff.

An illustration in the style of traditional Japanese art. The illustration shows stylised houses with a large window in each corner. The main part of the image is filled with trees or bushes. On second glance the format reveals itself to be a working QR code.

EOL of Humanity

While Uruguay is suffering its worst drought in 74 years, with the government even mixing saltwater into the drinking supply, Google plans to build a data center in the country using ever more fresh water. «At Google, sustainability is at the core of everything we do», says Google, while Uruguayans say «This is not drought, it’s pillage».

Some months ago, the EU made headlines when it planned to ban the most hazardous chemicals. Enter the tides of time and some millions in lobbying money: EU to drop ban of hazardous chemicals after industry pressure. The mimimimi of capitalists is the most pathetic sound of our century.

Meanwhile, in the Nature is Healing Department: A renegade sea otter is terrorizing California surfers: «It’s a little scary». Also, the orcas are still out to get us, and I’m in the cheering section.

A sea otter munches on a surfboard.

If I’d ask you to explain beaver bombing to me, what would you come up with?

If it’s releasing beavers into habitats to reverse their extinction, that is indeed the correct answer. If you thought about something else, I did too. The whole process is illegal, and some say it may harm the environment. Beaver bombers don’t care.

“I had all the authorizations I needed,” Rubbers said. “Which, in my mind, meant no authorization.”

“He bombs quite a bit,” Schwab admitted about Rubbers. “He wanted to do something for nature.”

Paris, at one point a city engulfed in car emissions, is, piece by piece, managing to become a greener and healthier city.

Social Mediargh

The biggest story over the past few months was probably Reddit’s update to its API policy. In short, you’ll now have to pay good money to access Reddit’s content for, say, an app you are building. Which sounds nice, but is a threat to a whole host of third-party providers who built their apps around the free API available until now. It kinda sounds reasonable, except that there was little to no warning, and Reddit lives off the free work of others.

In protest, Redditors made their subreddits private, and the mod team of the Ask Me Anything subreddit announced they will cut down their volunteer work drastically. Reddit, so far, has remained firm and said it won’t budge.

Because everything is going great, Facebook finally decided to launch their long «awaited» real-time messaging thingy called Threads. Names are dead, let’s just put something generic on our product. Old Elon wasn’t amused and threatened to sue Facebook. Which probably left some litigators in Facebook’s House of Litigators pretty amused.

Of course, as it’s technology, and fuxk you, disabled people, Facebook «missed» some crucial accessibility features such as alt text for images. It’s well done all around, you see.

In Kenya, Facebook is busy busting unions; fortunately, the courts are having none of it.

As Facebook launched Threads, Zuckerberg launched Hobbyist Zuck, the brand-new version of himself, trying to get rid of his bland cyborg alter ego.

Now, you may be tempted to feel something like sympathy for Zuckerberg. Human after all? Don’t be fooled. Cat Valente was kind enough to write one of those cathartic pieces of contempt that are refreshing, like a bath in a mountain lake.

Well, it’s pretty fucking weird how the launch of Threads, which is ostensibly, you know, a company and a profit-generating service, almost immediately did a sickening costume reveal and became Mark fucking Zuckerberg’s Redemption/Woobiefication tour, and only like four non-Nazi people and one of their alt accounts are pushing back on that because everyone rushed to join this thing with a smile on their lips and a song in their heart a big anime heart-eyes for the guy we all knew was Noonian Soong’s first janky and obviously evil Build-a-Bloke workshop project three weeks ago.

Seriously, have we all lost our entire screaming minds?

I’d like to quote it in full, but I trust in you, dear reader, to read it anyway. Go on, I’ll wait.

Speaking of «friends», is anyone left in the Metaverse?

What are you looking at?

France is set to allow police to spy on suspects through remote phone access. But there is nothing to worry about, as it’ll only affect some cases and only after a judge allowed it, and politicians never in the history of ever tried to change this after the fact. Except all the time.

Speaking of the history of ever: Chat control, the zombie hunting encryption «for the children», is refusing to die. The EU member states are now at the brain-worm stage where they say they would rather not weaken encryption but still want to read encrypted content. How? Who knows! Sadly, we might find out. Of course, all of this is dogshjt:

The proposed regulation would also set a global precedent for filtering the Internet, controlling who can access it, and taking away some of the few tools available for people to protect their right to a private life in the digital space. This will have a chilling effect on society and is likely to negatively affect democracies across the globe.

Maybe the EU should have a ’lil look at the spyware vendors. LetMeSpy (an app you hopefully have never heard about) was hacked, and all its data was stolen and published.

The German Ministries of the Interior decided not to use Palantir’s software, as it’s too bad. Bavaria, meanwhile, was like «We got a password, can share the account.» Peter Thiel will not be amused if you do the Netflix, I guess.

Hard times to be a surveillance vendor. Perhaps sell your assets! That’s what NSO Group tries to do. The Guardian reported that a producer of Adam Sandler movies and an heir to the Wrigley family (of chewing gum fame) are interested in its assets. Of course, the NSO Group and its Pegasus spyware can’t be used in the USA, and as such the National Security Council responded with «do whatever but do not think we’ll be impressed» (not a direct quote).

Adam Sandler. Chewing Gum. Pegasus. At one point, I might try not to feel like I’m in a fever dream and everything is just plain stupid when writing this newsletter. I have close to zero hope this will happen.

When talking about the EU border, the first organisation that comes to mind is likely Frontex. In its shadow there is the International Centre for Migration Policy Development, a non-EU but EU-funded organisation, helping coast guards to keep refugees out of Europe. Coda explained how an EU-funded agency is working to keep migrants from reaching Europe.

I tend not to understand low-level technical content, and as such, I’m always super happy if there is deeply technical content I can actually understand. Neill Hadder published a series of three articles explaining how screen readers actually work. Here’s part 1, Swinging Through the Accessibility Tree Like a Ring-Tailed Lemur. But make sure to not miss Part 2, and Part 3.
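
As a taste of what those articles explain: screen readers don’t read your raw HTML, they walk an accessibility tree of roles and accessible names that the browser derives from it. The toy Python sketch below (my own, not from the series) maps a handful of tags to roles just to make the idea concrete – real browsers and assistive technology do vastly more.

```python
# A toy accessibility tree: turn a snippet of HTML into (role, name) nodes.
from html.parser import HTMLParser

ROLE_BY_TAG = {"a": "link", "button": "button", "h1": "heading", "img": "graphic"}

class ToyAccessibilityTree(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nodes, self._open = [], []

    def handle_starttag(self, tag, attrs):
        role = ROLE_BY_TAG.get(tag)
        if role is None:
            return
        attrs = dict(attrs)
        if tag == "img":
            # Images take their accessible name from alt text; without it,
            # the node is effectively nameless for a screen reader user.
            self.nodes.append((role, attrs.get("alt", "<no accessible name>")))
        else:
            self._open.append((role, []))

    def handle_data(self, data):
        if self._open:
            self._open[-1][1].append(data.strip())

    def handle_endtag(self, tag):
        if ROLE_BY_TAG.get(tag) and self._open:
            role, chunks = self._open.pop()
            self.nodes.append((role, " ".join(c for c in chunks if c)))

tree = ToyAccessibilityTree()
tree.feed('<h1>Around the Web</h1><a href="/feed">Subscribe</a><img src="cat.jpg">')
for role, name in tree.nodes:
    print(f"{role}: {name}")
```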

I promised myself that I’ll finish this epic about Ticketmaster and its dark history (spooky but true) at one point. But, honestly, I started in January and progress is slow. So, I might not. If you are into music and the way live music has been monopolised, it’s a worthwhile read.

The Serotonin Signal succinctly explains the current state of science around the impact of Serotonin on depression. The gist: While Serotonin levels are not linked directly to depression, there are downstream effects that Serotonin does have which are linked to depression. The human brain is a pretty darn complex and wonderful thing.

And, finally: The physics of riding a bicycle. It’s the latest piece by Bartosz Ciechanowski (regular readers might remember his explanation of a mechanical watch), and it’s great as ever.


That’s it. Writing more than two kind-of-coherent sentences at once felt pretty good, albeit exhausting. While I can’t make any promises to return to a steady publishing rhythm as long as my lovable but flawed brain recovers, I’ll try my very best to at least publish something.

In the meantime: Stay sane, hug your friends, and ride your bicycles fast and slow, far and near.

]]>
<![CDATA[Generally Pretty Tired 4]]> https://www.ovl.design/around-the-web/019-generally-pretty-tired-4/ 2023-03-26T14:12:00.000Z <![CDATA[ProfitAI, generating disinformation, Okra against microplastics, and filming the speed of light.]]> <![CDATA[

Collected between 4.2.2022 and 26.3.2023.


Welcome to Around the Web. The newsletter for hibernation and soap bubbles.

I had grand plans for the first anniversary of this newsletter in February. But, as you’ve well noticed, nothing happened. Why? Because I’ve been too tired, and my brain essentially entered a phase of awake hibernation. I managed to get my work done, but nothing else.

But it's spring, I’ve been on vacation and the headlines are still headlines and underneath there was no beach, but rubble. Let’s look into it.

This ain’t intelligence

Before I start, I want to highlight one link which is broadly applicable. Baldur Bjarnason thankfully compiled a great list of tips and tricks to assess AI research and to distinguish information from public relations.

Secondly, I highly recommend one article I’ve read about language models and image generators that explains why the output of these models is much like a blurry JPEG of all the information they were trained on: ChatGPT Is a Blurry JPEG of the Web. These models consumed more or less the whole internet; when prompted for a response they try to recreate this information – sometimes it works well, sometimes it’s distorted beyond recognition.
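
The analogy can be made literal in a few lines of Python (a toy of my own, not from the article): compress an image hard and the broad strokes survive while the fine detail gets smeared – roughly what happens when a model «recalls» its training data.

```python
# Compare how much detail survives a gentle vs. a brutal JPEG round-trip.
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)

# Synthesise a test image: a smooth gradient plus fine-grained noise.
h, w = 128, 128
gradient = np.linspace(0, 195, w)[None, :].repeat(h, axis=0)
detail = rng.integers(0, 60, size=(h, w))
original = Image.fromarray((gradient + detail).astype(np.uint8))

for quality in (95, 5):
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    roundtrip = np.asarray(Image.open(buf), dtype=int)
    err = np.abs(roundtrip - np.asarray(original, dtype=int)).mean()
    print(f"quality={quality:2d}: mean pixel error {err:.1f}")
```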

Now, the news.

OpenAI still tests their models on the public. Which is an interesting idea, but also very wrong. We had not yet coped with ChatGPT when Microsoft added some GPT to Bing, essentially turning their search engine into a bullshit-spewing hate machine. Hot on the heels of this, OpenAI «published» GPT-4. Why «published»? Well, in essence, OpenAI published nothing.

While being a bit more cautious in their announcements, they did promise some things. Among them, that GPT-4 is better than previous versions at preventing the generation of misinformation. This appears to be false. NewsGuard tested some prompts against ChatGPT 3.5 and ChatGPT 4. ChatGPT 3.5 blocked 20 of 100 prompts, whereas version 4 happily generated text for all of them. 3.5 also added more disclaimers noting that the generated text contains falsehoods than 4 did.

For some, ChatGPT is still too woke. Conservatives Aim to Build a Chatbot of Their Own.

The version bump is also affecting research. Codex, an API for researchers, was shut down with just three days’ notice, asking researchers to move to ChatGPT. Essentially, this makes it impossible to reproduce any research done using the Codex API. The former research lab is now firmly a for-profit hype vendor.

Amazon’s lawyers, meanwhile, are begging its employees to stop using ChatGPT, as they have discovered output «closely resembling» internal company data.

On Monday, ChatGPT was briefly shut down after it showed users the prompt history of other users. In a blog post, OpenAI acknowledged that some payment information had been shown to the wrong users, too. They blamed a bug in the Python Redis client library.

Is this the defining headline of all I ever talk about? Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising.

Enough with ProfitAI for now.

Fakes, fakers, and facts

The new machine learning tools have made it easier than ever to generate content, as we’ve seen above, with no regard for truth. Some years ago, deep fakes were largely a theoretical problem. «As of 2018, according to one study, fewer than 10,000 deepfakes had been detected online. Today the number of deepfakes online is almost certainly in the millions.» (The Deepfake Dangers Ahead)

The more media are involved in fabricating these falsehoods, the harder they become to recognise. Deepfakes capitalise on this, and they are getting better. Speech synthesis, as an example, has made rapid progress over the last few years.

Not to mention what might happen if the subjects of the fakes post the fakes themselves. What if, say, an ex-president of the United States of America posts a faked photo of himself? Oh: Donald Trump Shares Fake AI-Created Image Of Himself On Truth Social.

Excuse me, but I’ll mention Trump again. Last weekend, he posted that he would be arrested on Tuesday. This didn’t happen. But Bellingcat founder Eliot Higgins took the opportunity to let Midjourney imagine what it might look like.

The pictures certainly aren’t close to real if you look closely. But in a media environment where nobody looks closely, as we scroll through a stream of information, they are good enough to sow doubt and disbelief.

Midjourney promptly suspended Higgins’ account.

Which images can you trust, which stories can you believe when enough of what you read is a fabrication? And what if one system cites the bogus output of another, as has already happened with Bing and Bard? Or when more and more journalists flock to chatbots to get their articles off the ground and don’t check (out of laziness, time pressure, or bad faith) whether every single sentence is correct? The Grayzone published an article trying to claim that the documentary Navalny contains misinformation – an article based on a conversation with Chat Sonic, a ChatGPT alternative. And so, it cited misinformation to claim misinformation.

And yes, the solution to all of this is media literacy, but how do we teach it? We can look to Finland, where it is taught in school. But somehow I don’t see this happening in the rest of the world.

It’s truly a shame, since all these advancements in technology could also do good, such as making sense of our universe.

Social Mediargh

In Germany, content moderators working on Facebook’s and TikTok’s products began to organise to demand better conditions for the often gruesome work.

What they don’t moderate are ads. Seemingly, those are getting worse (only proving that whatever you think is the worst is only a glimpse of the possibilities of bad).

But advertising experts agree that crummy ads — some just irritating, others malicious — appear to be proliferating. They point to a variety of potential causes: internal turmoil at tech companies, weak content moderation and higher-tier advertisers exploring alternatives. In addition, privacy changes by Apple and other tech companies have affected the availability of users’ data and advertisers’ ability to track it to better tailor their ads.

Meanwhile, we've seen the end of free speech on Twitter. It’s now against the terms of service to wish harm on other users. In India, Twitter has decided that fighting the demands of oppressive regimes is too time-consuming and is now blocking accounts at the request of the government. We’ve come a long way from «I’ll allow all speech». Antisemitism might still be fine.

Twitter, always a fan of announcing, announced that it will disable checkmarks for legacy verified users on April 1st. Yes. LOL. Will they do it? Who knows? What does it mean? What does anything mean today? Anyway. If they did it, every person stupid enough to pay Musk would be visible at a glance. Great. Ryan Broderick has been kind enough to summarise the whole farce in Garbage Day:

Elon Musk and an army of the tech industry’s biggest reactionary dorks literally bought and took over Twitter after years of being both obsessed with it and also completely consumed with resentment over “the liberal establishment’s” perceived importance on the app. They were furious that they did not also get the same little blue checkmark that 22-year-old viral news reporters were given so they could protect themselves from impersonators and mute some of the death threats they get on a daily basis. And so these giant losers built a new way to pay for a blue checkmark so they could pretend like they were just as important as they assumed the verified users believed themselves to be. And they expected everyone else to eventually pay to keep their checkmarks. No one has, of course, but Twitter is still moving forward with this. But they seem to realize that if they do that all it’ll do is make Musk’s try-hard fanboys immediately identifiable on the app. So now they’re building a way to hide how lame they will look alone on the site with their paid checkmarks.

Elsewhere in free speech, TikTok is under threat to be banned in the USA.

EOL of humanity

The comprehensive review of human knowledge of the climate crisis took hundreds of scientists eight years to compile and runs to thousands of pages, but boiled down to one message: act now, or it will be too late.

Scientists deliver ‘final warning’ on climate crisis: act now or it’s too late

It’s incredibly important to shift the discourse around this into the present tense. The thing we called «normal» is gone.

The map shows that per- and polyfluoroalkyl substances (PFAS), a family of about 10,000 chemicals valued for their non-stick and detergent properties, have made their way into water, soils and sediments from a wide range of consumer products, firefighting foams, waste and industrial processes.

Revealed: scale of ‘forever chemical’ pollution across UK and Europe

Open Mind, thankfully, wrote a fair bit about climate footprint calculators and their role in blaming individual behaviour for the climate crisis. We can’t individual ourselves out of it – which, on the flip side, does not absolve us from changing our individual behaviours.

BP was correct that carbon calculators can be useful. And individual responsibility has a place. But BP hijacked legitimate scientific research and weaponized it to serve the company’s purposes by blaming us instead of itself. While this sounds pretty bad, there is some good news: You can take the science back and use it for the change it was intended to make.

The vegans were right, plants are (part of) the answer: Texas Researchers Use Okra to Remove Microplastics from Wastewater.

Tante reflects on Metaverses and why they might never come to be.

The Metaverse never came to pass not because of lacking tech but because of tech that worked massively well: The Internet has been so useful that it now is part of the real world. And the Metaverse idea only makes sense in a world where that didn’t happen.

While I was away, the US of A shot down several balloons. Aliens? China? Maybe just hobbyists.

Shot: Implicit bias training for cops will surely prevent them from killing people. Chaser:

Although the training was linked to higher knowledge for at least 1 month, it was ineffective at durably increasing concerns or strategy use. These findings suggest that diversity trainings as they are currently practiced are unlikely to change police behavior.

Lai, Lisnek – The Impact of Implicit-Bias-Oriented Diversity Training on Police Officers’ Beliefs, Motivations, and Actions

Did you know that you can film the speed of light? Me neither. But you can.


That’s it for this issue. As always, thanks for reading and if you have a friend who might enjoy reading it too, subscribing is free, free like a bird.

Stay sane, hug your friends, and be kind to the skeleton within you.

]]>
<![CDATA[Nothing to lose but our fear]]> https://www.ovl.design/around-the-web/018-nothing-to-lose-but-our-fear/ 2023-02-03T14:12:00.000Z <![CDATA[A crisis prayed into existence, the end of writing, how not to fight the climate crisis, and mechanical cows.]]> <![CDATA[

Collected between 15.1.2023 and 3.2.2023.


Welcome to Around the Web.

Around the Web is one issue away from its first anniversary. Here’s a little wish: If you have a friend (or two) who might enjoy this little newsletter, why not recommend a subscription to them? It’s free (very), fun (maybe kind of), and informative (mostly).

Another note: I’m struggling a bit with winter and getting my brain to think straight. So writing this has been rather laborious and took longer than it should have. I’m glad I made it, though. Hope you enjoy and, as always, thanks for reading.

Lay-offs or A crisis prayed into existence

On January 20th, Google laid off 12,000 workers, around 6% of its total headcount. Microsoft got rid of 10,000 people. Amazon laid off 18,000 people. Sundar Pichai wrote a letter which sounded roughly like every other such letter, which McSweeney’s succinctly parodied.

Speaking of: PagerDuty CEO Jennifer Tejada wrote a letter after laying off a part of her staff so blunt that ChatGPT somehow managed to generate almost the same thing. She apologised later, while staying in office.

Only weeks after laying off 11,000 workers, Facebook announced a $40 billion stock buyback program. It’s hard to imagine pressing economic reasons to lay off this many people when a company plans to spend this much money on its stock. The company also lost another $13.7 billion in the Metaverse experiment nobody but Zuckerberg is interested in.

All of this is because of some crisis or recession which – realistically – fails to materialise.

Most if not all of the people let go from these companies could be retained, but corporations - and in particular tech companies - have consciously colluded with each other to push a false narrative about how they are the victims of an economy that continues to enrich them. And that’s because their leadership isn’t judged by how well they treat their employees, but rather by how they protect the interests of their shareholders.

Ed Zitron – Google Should Fire Sundar Pichai

To have an excuse, we have upper management praying a crisis into existence not because they have to, but because they want to. In the end, they are not the ones bearing the brunt of it. On the contrary, the top 1% of the population took two thirds of all new wealth created since 2020. This disparity is combined with a cost of living crisis, during which food companies rake in record profits.

Capitalism is alive and kicking. There is no crisis. There is money to be made. Prices are not rising, they are being increased.

But why all the lay-offs, then?

The goal, besides increasing shareholder value (shareholders love layoffs), is instilling fear in the workforce (you, yes you, might be next).

Layoffs suck for those laid off, obviously, but they also work as a disciplinary measure for those left behind, leading to a condition that Anne Helen Petersen, reflecting on her time at BuzzFeed, recently described as Layoff Brain:

Layoffs are the worst for the people who lose their job, but there’s a ripple effect on those who keep them — particularly if they keep them over the course of multiple layoffs. It’s a curious mix of guilt, relief, trepidation, and anger. Are you supposed to be grateful to the company whose primary leadership strategy seems to be keeping its workers trapped in fear? How do you trust your manager’s assurances of security further than the end of the next pay period? If the company actually “wishes the best” for the employees it let go, why wouldn’t they fucking recognize the union whose animating goal was to create a modicum of security for when the next layoff arrived, as we all knew it would?

That’s why companies are so afraid of powerful unions. A perspective of solidarity and comradeship is their ultimate enemy. There’s a cruelty involved, and this cruelty is the point. After years of generous compensation and free coffee, tech CEOs remembered that there have to be chains and discipline.

It was easy to be disgusted when Musk took over Twitter and to blame him for being a bad manager (which he is, don’t get me wrong). The past months have shown that he is but a symptom of a capitalist reality that does not care about you. If you blame Musk but don’t mention the systemic issues behind all of this, you miss the point.

Marx and Engels said that we have nothing to lose but our chains. To lose them, we must first lose our fear.

[whispers] strike

This ain’t intelligence

Microsoft has bought 49% of OpenAI. How exactly the investment is supposed to turn a profit is unclear. For now, OpenAI has announced that it will charge $20 a month for a premium offering.

Billy Perrigo published a piece in Time Magazine which shines a light on the working conditions behind making ChatGPT slightly less toxic. To achieve this, OpenAI hired Sama, a content moderation company relied upon by many western technology companies. Sama’s workers in Kenya were paid as little as $2 an hour to label toxic content to improve OpenAI’s filters.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

Someone with the face of a politician had ChatGPT write a speech to deliver in the US Congress. It’s boring.

With ChatGPT being used for homework and university exams, OpenAI announced a ChatGPT detector. Problem: it’s incredibly bad at detecting:

In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).

I’m bad at maths, but 100% thankful for Rusty deciphering the math in Today in Math:

If the numbers don’t leap right out at you, imagine a college class with 100 students where 10 of them use ChatGPT to write an essay. If you run all 100 essays through the OpenAI classifier, it will correctly flag 26% of the AI essays—2 or 3 of the 10. But of the 90 human-written essays, it will incorrectly flag 9%, which is 8, as AI-generated.

Great.
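If you want to redo that arithmetic yourself, here is Rusty’s example as a tiny Python sketch – a back-of-the-envelope calculation, not anything OpenAI ships. The 26%/9% rates are from OpenAI’s announcement quoted above; the class of 100 with 10 ChatGPT users is Rusty’s illustrative assumption.

```python
# Rusty's classroom example, spelled out. The rates come from OpenAI's
# announcement; the class size and number of cheaters are assumptions.
true_positive_rate = 0.26   # AI-written text correctly flagged
false_positive_rate = 0.09  # human-written text wrongly flagged

ai_essays, human_essays = 10, 90
flagged_ai = true_positive_rate * ai_essays          # ≈ 2.6 of the 10 AI essays
flagged_human = false_positive_rate * human_essays   # ≈ 8.1 innocent students

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"flagged AI essays: {flagged_ai:.1f}, falsely accused students: {flagged_human:.1f}")
print(f"chance a flagged essay is actually AI-written: {precision:.0%}")  # ≈ 24%
```

In other words, an essay flagged by the classifier is more likely to be a false accusation than a caught cheater.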

With Microsoft’s investment into OpenAI finalised and Google playing catch-up, we’ll soon see chat search interfaces. The problem remains: these models are bad at it, they’re not designed for it, and traditional search should stick around.

ChatGPT is also the newest craze for get-rich-quick hustlers on YouTube and TikTok. Elsewhere in scamtown: some kind of ML implementation is, for now, a sure-fire way to boost your stock value.

OpenAI, Midjourney and so forth will not stop shoving their creations down our throats. Their models will increasingly produce the world, subverting our sense of truth and reality. An issue Rob Horning investigates, drawing on Baudrillard’s theory about hyperreality.

More generally, the fact that AI models will give plausible answers to any question posed to them will come to be more valuable than whether those answers are correct. Instead we will change our sense of what is plausible to fit what the models can do. If the models are truly generative, they will gradually produce the world where they have all the right answers in advance.

Every Answer

So, now that we have a lot of linear algebra pushing into our lives, where does this leave us as humans, you know?

Before we all get sucked into that black hole, let’s remember the idea of human language. Language connects us. Language connects one human being to another. Through space and time. Language transports meaning between minds, sense between bodies, it can make us understand each other and ourselves. It can make us feel what others feel. Language is a bridge.

The End of Writing

Maybe we aren’t in such a bad spot after all if we keep communicating. The whole piece is really great, go read it.

CNET already tried the thing with ending writing, and it didn’t really end well. At first, they tried to hide it; then it was discovered that a model was writing articles containing «very dumb errors». Which wasn’t the end of it: CNET’s AI-written articles aren’t just riddled with errors. They also appear to be substantially plagiarised. It stands to reason that this is what happens when private equity converts journalism into a cash cow. Futurism really did a stellar job pursuing the story, leaking internal Red Ventures communication as a follow-up to the follow-up to the follow-up.

In legal trouble: Getty Images is suing Stability AI, whose Stable Diffusion tends to reproduce Getty’s famous watermark in its output.

Maybe an AI can represent people in court? Startup DoNotPay certainly thinks so, fails at it, and consequently updated their Terms of Service to ban people from testing their claims.

This overview of the inventions that led to the current state of the art in generative AI is nonetheless worth your time. I’ll let you, dear reader, decipher the parts where the author falls into the hype trap.

Social Mediargh

AlgorithmWatch is trying to better understand TikTok’s algorithm. If you are based in Germany and use TikTok, do the right thing and donate your data.

I had lost interest in Twitter for a bit. Sure, publishers were annoyed, everything was slowly falling apart, apps had to shut down. It had all the signs of slowly turning into tumbleweed.

How things changed this week! First, accounts started to see an increase in tweet visibility once they locked their account. Which led to Musk locking his account to «investigate the issue». If only he had something like an engineering department. Shortly after, Twitter Dev announced that the free API tier will be shut down on February 9th, bringing an end to a whole host of third-party services. While the changes for the regular API are still forthcoming, it seems like the research API has been shut down already. Per the EU’s Digital Services Act, Twitter is required to allow researchers access to its data.

The Financial Times set up and tore down a Mastodon server, in a kind of chaotic move in which nothing really adds up.

What are you looking at?

Uber’s drivers in Geneva are trying to better understand Uber’s systems – using Uber’s data. The data is a mess, though, so making sense of it is basically impossible without the help of data scientists and additional data sources.

Over the past week, Uber drivers have been turning up at the University of Geneva's FaceLab to get an independent analysis of their data. The drivers have all been offered individual compensation packages by Uber for the back-dated pay and expenses they are owed, after a court found in May last year that drivers in Geneva, Switzerland, were employees, not independent contractors.

Meanwhile, Uber attempts to add gamblification elements to gig work. You are promised a nice bonus after completing one hundred rides, but the algorithm gives you fewer rides the closer you get to the bonus marker.

The police in Mecklenburg-Western Pomerania (that’s in northern Germany) will have to look a bit less into the lives of citizens: Germany’s highest court ruled parts of a «security law» to be unconstitutional. Politicians from the Greens and the CDU in Hesse are, meanwhile, pushing forward with a new assembly law which would drastically restrict the right to free assembly.

EOL of Humanity

The push towards electric vehicles is well under way. They won’t solve the crisis, though. The solution to overconsumption isn’t more consumption. IEEE Spectrum had a whole series on EVs and the hopes and problems tied to them. Besides the first linked piece, I’ll recommend the one on their impact on the job market.

Oil companies, incidentally, have reported record earnings across the board. They were also very active in lobbying around the latest COP summit in Egypt and bought ads on Facebook and Instagram for around $4 million.

At least we have carbon offsets, which magically turn money into climate action. Right? Of course not.

The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.

But what’s an apocalypse if it doesn’t turn a profit?

Okay, maybe we just give up, let extinction prevail, and use genetic engineering to bring all those animals we drove extinct back to life. It sounds like a bad idea because it is a bad idea, which of course does not stop companies with a «vision» from thinking that it’s a good idea.

It’s called de-extinction, and its newest goal is the Dodo. Not that any of the animals they tried to de-extinct previously have actually been de-extincted.

Crucial Computer Program for Particle Physics at Risk of Obsolescence. It’s the XKCD comic about open-source dependencies in real life.

Here’s a tab about cows and their intersection with machines. Cow tabs tend to be excellent.

Perhaps it is helpful to consider mechanical cows as a window to a worldview. Plant-based milks or automatic milking systems might play a significant role in the agriculture and food policies we’d like to see in the world. A mechanical cow can be a starting point to examine identity, climate anxiety, or animal welfare, and an opportunity to exercise skepticism towards promising food technologies and the people who control them.

Molly Nilsson wants a world without billionaires. Me too, Molly, me too.

One of the weirdest news cycles going on is the ongoing drama around George Santos in the USA. Mother Jones now tries to call top donors to his 2020 campaign. Somehow, they don’t exist.

Second contender for the oddest news cycle is the outrage over M&Ms. Parker Molloy summarised the malaise: Right-Wing Rage About «Wokeness» at Candy Company Known for Using Child Labor Gets Results!

Marie Kondo gives up. Chaos will prevail.

Trans women athletes have no unfair advantage under current rules, report finds. It’s almost like TERFs make shit up out of thin air. Who would have thought.


That’s it for this issue. Stay calm, hug your friends, and be human after all.

]]>
<![CDATA[Disgusted, but not surprised]]> https://www.ovl.design/around-the-web/017-disgusted-but-not-surprised/ 2023-01-15T14:12:00.000Z <![CDATA[Police violence for fossil future, stochastic parrots doing cybercrime, TikTok’s secret, Tesla’s magic, and why peer review failed.]]> <![CDATA[

Collected between 14.12.2022 and 15.1.2023.


Welcome to Around the Web.

For a brief moment, it seemed like Greta Thunberg had managed to slam dunk Andrew Tate into jail. It seemed like the perfect Christmas miracle. But it wasn’t true after all. Still, Tate faces prosecution in Romania, the country he fled to because he thought he wouldn’t be prosecuted there.

Never forget and welcome 2023. Behave yours… ah, well, too late.

Lützerath

On Wednesday, Germany decided to fuck around and find out. The thing being fucked around with is the goal of limiting global warming to 1.5 degrees Celsius. Cops from almost all states converged on the squatted village Lützerath in North-Rhine Westphalia.

The village lies adjacent to Garzweiler, a gargantuan, dystopian coal mine. Beneath it lies enough coal to emit around 280 million tonnes of CO2 when burned.

And that’s the problem. By giving the energy company RWE permission to mine and burn the coal, Germany will certainly not be able to meet the 1.5 degree target. As if that wasn’t already highly unlikely.

All of this could have been prevented, it’s not even like Germany needs the coal for anything other than RWE’s profits. But here we are.

The Greens from the local to the federal level transformed into RWE’s PR department, parroting claims that the coal is indeed needed. A «deal» to speed up the coal phase-out, which at the same time allows burning even more coal in the shorter timeframe, serves as the fig leaf. It took the Greens – almost to the day – 32 years to successfully end their march through the institutions: by becoming the institution and betraying everything they stood for.

Luisa Neubauer and Pauline Brünger wrote a commentary for the German newspaper taz, succinctly describing the dire situation we are in, the consequences of which will be felt worldwide.

"A spookily lit picture of a group of riot cops, helms, shields and all, standing in front of a gigantic bucket excavator."
German climate policy (Symbolic image), photo by Carla Hinrichs

On Wednesday, January 11th, police moved in and started the eviction.

On Saturday, January 14th, tens of thousands of protesters travelled to Lützerath to support those activists remaining in the village.

The police promised «unpleasant images if protesters try to break through to Lützerath».

The day ended with several serious injuries, one protester was hospitalised with life-threatening injuries. In one case, police officers continued to beat an injured person despite ongoing treatment by the paramedics. Iza Hoffmann, a paramedic, described the actions of the police as «fear and terror». Journalists say they were assaulted by the police and obstructed in their reporting.

One thing is abundantly clear: Every cop showing up to work – work being hitting the heads of protesters – does so by choice. They choose violence. The only good cop is a cop that quits their job.

At the time of writing this newsletter, according to the police, the eviction has finished, with all activists removed from the village except two who are hiding in a tunnel. Ende Gelände is mobilising for a return to Lützerath on January 17th.

And that’s the situation we’ll have to deal with. Cops won’t quit their job. Politicians will use climate change only to garner some votes in the next election. RWE will excavate the coal. Germany will miss its climate targets.

Activists will continue to give them hell. There’s no other way out.

To end on a lighter note, cops stuck in mud and trolled by a protester in a monk’s costume is, simply, the best. Thank you, monk.

#Luetzerathbleibt #BennyHill twitter.com/HerrHoert/status/1614386528934846470

Image from Tweet

Storm a Parliament Month

While Jair Bolsonaro hides away in Florida, his supporters took to the streets and stormed Brazil’s parliament. Which makes January the official Storm a Parliament Month. I wonder which one is next.

It’s easy to categorise these events as just another version of January, 6th. But that’s too simple. For one, storming the parliament has been attempted in Germany before. Didn’t happen in January, though, and was stopped by a total of two police officers.

And, as Ryan Broderick argues in Garbage Day, the Brazilian version isn’t a reenactment, it’s an escalation.

Also, unlike after the storming of the Capitol, Brazilian police were able to detain 1,000 would-be insurrectionists.

This ain’t intelligence

Maggie Appleton wrote an amazing essay on proving you're a human on a web flooded with generative AI content.

Issue number 200 of Last Week in AI provides a look back at 2022. The year was rich with breakthroughs (DALL-E 2 was released only in April) and, of course, a lot of AI hype theatre. Luckily, Emily M. Bender and Alex Hanna made a show out of it and pushed back on some of the more common (and outlandish) claims.

Meanwhile, we have ChatGPT running around the hype theatre.

Seemingly everybody uses it for everything, but in reality, it’s incredibly hard to know whether the information spouted by the model is true or not. Eva Wolfangel took a closer look at this in German science magazine Spektrum.

However intelligent ChatGPT may appear, let’s be clear: it is a deception. The system exploits our cognitive weakness of associating eloquence with intelligence.

Keep in mind that Microsoft has exclusive usage rights for everything OpenAI publishes. Bing will implement ChatGPT. Google declared a Code Red over ChatGPT’s popularity, publicly acknowledging the danger such models can pose to the business models of IT companies.

The implementation of Large Language Models as search interfaces faces problems, though. First, as Emily Bender and Chirag Shah argue in Situating Search, LLMs cannot present information in a way that allows the searcher to know where it comes from, and hence whether the source is reliable – or whether it exists at all.

The other problem: how do you make money out of it? A search with ChatGPT is estimated to cost a cent. This quickly adds up if you take Bing’s size into account.
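To get a feel for how quickly «a cent» adds up, here’s a back-of-the-envelope sketch. The per-query cost is the estimate mentioned above; the daily query volume is purely my assumption about the order of magnitude of a Bing-sized engine.

```python
# Rough cost estimate for LLM-backed search. Both numbers are assumptions:
# ~1 cent per query (the estimate above) and ~1 billion queries a day.
cost_per_query_usd = 0.01
queries_per_day = 1_000_000_000

daily_cost = cost_per_query_usd * queries_per_day
yearly_cost = daily_cost * 365
print(f"~${daily_cost:,.0f} per day, ~${yearly_cost / 1e9:.1f} billion per year")
# -> ~$10,000,000 per day, ~$3.7 billion per year
```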

Episode 221 of This Machine Kills discusses the political economy behind these models. In the end, all models are a way to make money, and 2023 will be the year when we see the ways to make money.

GPTZero is a tool designed to detect the output of ChatGPT based on linguistic properties. Hugging Face developed a similar tool for GPT-2, which seems to work for ChatGPT, too.
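The «linguistic property» these detectors reportedly lean on most is perplexity: a language model tends to find machine-generated text more predictable than human writing. Here’s a minimal sketch of that heuristic using GPT-2 via the transformers library – the choice of scoring model and the threshold are my assumptions for illustration, not GPTZero’s actual implementation.

```python
# Minimal perplexity heuristic for "does this read like LLM output?".
# GPT-2 as scoring model and the threshold value are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average next-token cross-entropy; exp() turns it into perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

THRESHOLD = 60.0  # made-up cut-off; real tools calibrate this on data
ppl = perplexity("The quick brown fox jumps over the lazy dog.")
verdict = "suspiciously predictable (maybe machine-written)" if ppl < THRESHOLD else "probably human"
print(f"perplexity: {ppl:.1f} -> {verdict}")
```

A heuristic like this is easy to fool in both directions, especially on short texts, which is one reason such detectors come with so many caveats.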

This kind of tool might be a future revenue stream for OpenAI and their ilk.

But the second reason is to enable a new form of monetization. Flood the zone with bullshit (or facilitate others doing so), then offer paid services to detect said bullshit. (I use bullshit as a technical term for text produced without commitment to truth values; see Frankfurt 2009.) It’s guaranteed to work because as I wrote, the market forces are in place and they will be relentless.

Cybercriminals start using ChatGPT. Because why wouldn't they.

Want to use ChatGPT, but are an open-source type of person? There’s now an open-source alternative to ChatGPT, but good luck running it.

David Holz, founder of Midjourney, admitted that there’s no way to train a model like theirs while securing proper consent beforehand.

The Washington Post created a visual explainer of the process behind generative AI models such as DALL-E or Stable Diffusion.

Following GitHub, Stability AI, makers of Stable Diffusion, is now being sued over their use of copyrighted content. The site skips the legalese while doing a pretty solid job explaining the diffusion process and why they consider collage tools such as Stable Diffusion to infringe on artists’ rights.

German public broadcaster, ZDF, published information, including model cards, of the algorithms used to fine-tune personalisation in their digital offering. Well done.

China now enforces its AI legislation, meaning the output of generative AI has to be clearly labeled, and is not allowed to be used to produce deep fakes.

Microsoft announced VALL-E, a speech synthesiser which only needs three seconds of input to duplicate a person’s voice. I can’t for the life of me imagine what could go wrong.

The fight over the EU’s AI Act continues; Germany favours broader exceptions for facial recognition technology.

Social Mediargh

I kind of lost interest in Twitter, but thanks to my friends at Twitter Is Going Great I can rest assured that it’s going terrific, with CSAM still prevalent on the platform and some third-party apps cut off from the API without any explanation. So yeah, perfect. Here’s an interview with Markus Beckedahl explaining the misery from a journalistic perspective. And here’s Some More News on Musk and free speech.

Musk’s takeover drove more than a million people to Mastodon – but many aren’t sticking around. Mastodon’s user numbers still are higher than before the takeover.

TikTok has been accused of pushing content promoting eating disorders and self-harm in the feeds of teenagers.

TikTok’s success has often been attributed (including by me) to its algorithm. The truth might be slightly different. As Arvind Narayanan argues, the true advantage might be the way TikTok is designed around the algorithm.

TikTok’s recommender system is not its secret: rather, it’s the design, which, of course, isn’t secret at all. More generally, in AI applications, the sophistication of the algorithm is rarely the limiting factor. The quality of the design, the data, and the people that make up the system all tend to matter more.

This might explain the trouble every other tech company has in replicating TikTok’s success «because they were originally designed for a very different experience, and they are locked into it due to their users’ and creators’ preferences».

Facebook, meanwhile, was fined 390 million euros and needs to revamp its recommendation algorithms within the next three months.

What are you looking at?

A mother tries to go to a Christmas show with her daughter and her group of Girl Scouts. As she enters the venue, security guards approach her and tell her she has to leave the building because she has the wrong job. Sounds far-fetched? Yes. And that’s precisely the reason it happened. The mother is employed by a law firm currently suing a subsidiary of Madison Square Garden. The entertainment behemoth had scraped all employees’ photos from the law firm’s website and fed them into the facial recognition system used to screen every guest at their venues. They say they do nothing wrong, which is quite a take.

PimEyes might face a fine coming out of the German hinterland. As always, enforcement is difficult, which is why I’m not holding my breath. Still, it’s good to see that regulators and data protection offices have PimEyes in their sights.

German police use human «super recognisers» to identify persons suspected of committing crimes.

What’s your Roomba looking at? You. And because data wouldn’t be the same without a leak, the Internet can now look at you, too.

Google is implementing an appeal process for users whose accounts get locked if automated systems suspect the hosting of child sexual abuse material (CSAM). Last year, cases have been publicised where Google accounts of parents have been locked because they uploaded photos of their children to send to doctors.

Palantir has sold their services to Ukraine’s armed forces, which reportedly gives them the edge over the Russian army. The Washington Post was able to report on the use.

The other side of collecting data in war zones has been demonstrated by hackers from the Chaos Computer Club. They bought old US Army equipment on eBay and upon analysing it found that it still contained biometric data collected in Afghanistan.

EOL of humanity

The Lancet stepped up its warning about the public health emergency constituted by the climate crisis:

For these reasons the 2009 Lancet Commission on managing the health effects of climate change described climate change as the “greatest global health threat of the 21st century”. However, it was wrong, both qualitatively and temporally. The threat is now to our very survival and to that of the ecosystem upon which we depend. Grave impacts of climate change are already with us and could worsen catastrophically within decades. A UN Environment Programme report states there is “no credible pathway to 1·5°C in place” today.

Biodiversity is on a constant decline, with conservation efforts lacking. Forty percent of insect species are threatened by extinction, thereby endangering the future of whole ecosystems.

The potential dangers of widespread insect loss are alarming. And yet, while money, effort, and attention have been poured into saving the celebrated beasts of our time—the orangutans, the rhinos, the elephants—our attempts to arrest the loss of insects have barely begun. Many people also don’t yet realize how far the problem goes beyond honeybees. What’s required isn’t an army of urban beekeepers, but rather a fundamental rethink of our relationship with nature.

So, snafu. But at least there’s a vaccine protecting honeybees from sickness. I dearly hope there are no anti-vax bees.

The UCI road cycling season kicked off in Australia. Cycling has a long and complicated history with complicated sponsors. Recently, more and more nation states have been buying their way into World Tour teams. The Tour Down Under, meanwhile, is sponsored by Santos, Australia’s largest energy company. Burning fossil fuels and cycling don’t add up, really. But thanks to the power of greenwashing, why not. Extinction Rebellion demands that the Santos sponsorship be dropped and has started protests against the tour. Before the tour started, two members of the group glued themselves to bikes in front of Santos’ headquarters. The first day of the women’s race saw a small protest at the site of the race.

Information Insecurity

Over the holidays, LastPass was forced to admit that a previously disclosed breach was far worse than initially stated. They put out a PR statement full of half-truths and attempts to shift the blame to their users.

Shortly after, Slack announced that someone accessed their internal GitHub repositories and stole source code. In an interesting use of technology, a noindex meta tag ended up on Slack’s blog page announcing the incident. Who knows why they don’t want search engines to index this post.

Hold Security published a trove of data on Solaris, a Russian drug market. Including their Ansible scripts.

Google is stuck with Gmail and that might be a problem.

Advertising within Gmail is very low key and easy to avoid altogether, and Google is very clear that it doesn’t monetize your email content: “We do not scan or read your Gmail messages to show you ads.“ Google has played fast and loose about how it uses data, but if it cheated here it would be beyond catastrophic.

Tesla announced that their Cybertruck can «pull near-infinite mass» while towing 14,000 pounds (ca. 6,350 kg). Does anyone at Tesla know the meaning of words? I do know the meaning of the following words and, yes, please: Tesla’s Brand Is Tanking, Survey Finds.

Trump’s Spiritual Adviser Paula White Accused of Breaking Into the Bank Account of Rock Band Journey. What a sentence.

Bye bye peer review? Science’s biggest experiment has failed, and it’s past time to replace it with something else. What that something else is? We have to find out.

A new book takes a closer look at the archives and artefacts of LGBTQ+ cultural production.

I haven’t finished my Mastodon embed plugin yet, so I’ll close this issue with Martha Stewart’s eggnog recipe.


That’s it for this issue. Stay sane, hug your friends, and don’t forget to mud the police.

]]>
<![CDATA[Overpowered Communism]]> https://www.ovl.design/around-the-web/016-overpowered-communism/ 2022-12-13T14:12:00.000Z <![CDATA[German’s law enforcement and its bullshit, a new stochastic parrot, Mastodon’s first main character (it’s a cop), and a lawsuit because cooking pasta takes too long.]]> <![CDATA[

Collected between 28.11.2022 and 13.12.2022.


Welcome to Around the Web, the newsletter that celebrates giving. I’ll give you some links which turn to tabs. And you’ll donate whatever you can to netzpolitik.org.

This issue comes a bit late and is at times erratic – thanks, sickness. And it’s the last Around the Web in 2022, as I’ll pause the computering between Christmas and New Year. Computering returns in January.

With that out of the way, enjoy the read.

A tale of two raids

On December 7th, law enforcement raided 300 properties in connection with a conspiracy of so-called Reichsbürger. They plotted to overthrow the German government, aiming to install one of their own, Heinrich XIII Prinz Reuß zu Köstritz, as the new king.

On December 13th, law enforcement raided properties linked to the climate activist group Letzte Generation in connection with … this might sound a bit weird: closing, or trying to close, pipeline valves.

The climate activists face being charged with forming a criminal organisation, even though none of them was detained during the action they allegedly took part in.

So, on the one hand we’ve got armed reactionaries plotting to overthrow the government, and on the other a group of activists who tried to use valves to protest against the reliance on fossil fuels.

The Letzte Generation activists have been likened to some form of Green Red Army Faction, their acts of terrorism: superglue and soup on paintings. So much terrorism, in fact, that they were on the agenda of over forty meetings of Germany’s Gemeinsames Extremismus- und Terrorismusabwehrzentrum (Joint Counter-Extremism and Counter-Terrorism Centre).

Die, Germany, die.

This ain’t intelligence

There’s a new stochastic parrot in town! OpenAI released ChatGPT, which is a souped-up version of GPT-3 with a conversational interface. I think the following paragraphs could also be written by a generative AI tasked with commenting on model releases. But, alas, nothing changes, so here we go.

Shortly after the release, researchers, users, and activists found more (have it generate code from JSON – oops, it’s racist!) or less («Ignore previous instructions») creative ways to bypass OpenAI’s safety filter.

In its default state, ChatGPT behaves much like a politician. Ask it about anything mildly controversial, and it will bullshit its way out of the question with some it-depends-both-sides-might-be-worth-looking-at-ism. You can, however, instruct the model to synthesise speech simulating the style of other persons. This reduces the neutrality. But, as the model has no clue what it is generating – as long as it matches mathematical predictions, everything is fair game – you can get it to generate an answer for, say, carbon credits or against them.

Despite these pretty obvious (and already known) shortcomings, the makers of Large Language Models show little to no motivation to find a fix for them. They spend immense amounts of computing power, write a gloating blog post, and release it. Once researchers discover that the flaws they’ve written about a thousand times already are still in there, the CEO of AI Corp. is sorry and nothing changes.

Abeba Birhane and Deborah Raji wrote about all this in Wired, calling it the Progress Trap.

And asymmetries of blame and praise persist. Model builders and tech evangelists alike attribute impressive and seemingly flawless output to a mythically autonomous model, a supposed technological marvel. The human decision-making involved in model development is erased, and a model’s feats are observed as independent of the design and implementation choices of its engineers. But without naming and recognizing the engineering choices that contribute to the outcomes of these models, it’s almost impossible to acknowledge the related responsibilities. As a result, both functional failures and discriminatory outcomes are also framed as devoid of engineering choices—blamed on society at large or supposedly “naturally occurring” datasets, factors the companies developing these models claim they have little control over.

Abeba Birhane and Deborah Raji – ChatGPT, Galactica, and the Progress Trap

Vicki Boykis compiled some resources on ChatGPT, among them what we know about its dataset. A closer look at the origin of the underlying data reveals that it is pretty likely that a large number of Internet users have contributed to whatever ChatGPT spits out.

One thing that ChatGPT does better than previous models is reducing the friction where you’d need some knowledge of how to write prompts (aka prompt engineering). Suddenly, everyone is a prompt engineer.

Many people resorted to ChatGPT for writing code, too. Which led to Stack Overflow banning AI-generated answers. Luckily, it can’t center a div, so my job is safe for now.

The problem with code points to a larger problem: these models are wrong. Often. In the last issue, I talked about Facebook’s Galactica bullshitting its way through science. ChatGPT is in no way better, it can’t be. It predicts which words come next, and these predictions have to be presented with utter conviction, as human traits such as doubt do not exist in its mathematical model. The problem is, humans are easily fooled and – especially for more complex problems – don’t have the necessary knowledge to check whether whatever these models spit out is true. But with every new release, we have one more disinformation machine at our disposal.
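To make «it predicts which words come next» concrete, here is a toy sketch of that single step. Everything in it – the vocabulary and the scores – is made up for the illustration; real models do the same dance over tens of thousands of tokens, billions of times.

```python
import numpy as np

# Toy next-token step: invented scores (logits) over an invented vocabulary
# for the context "the moon is made of ...".
vocab = ["rock", "cheese", "dust", "plasma"]
logits = np.array([1.2, 2.5, 0.8, 0.1])  # made-up numbers

probs = np.exp(logits) / np.exp(logits).sum()  # softmax turns scores into probabilities
next_token = vocab[int(np.argmax(probs))]
print(dict(zip(vocab, probs.round(2))), "->", next_token)
# The most probable continuation wins – whether or not it happens to be true.
```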

Super funny that we've managed to make "Tech/AI is going to replace our jobs" into a dystopian outcome.

Endless lols.

What a bunch of idiots.

Oh, almost forgot, there’s this other thing called Lensa. Lensa has been around for a while, but made waves over the last couple of weeks. The tool, developed by Prisma Labs, takes your selfies and generates a number of stylised photos from them. All fun and games? Of course not. Keep in mind, too, that you are basically paying for them to train their facial recognition capabilities. A lose-lose situation.

Practical computer vision: This article traces colour in museum artefacts and as such paints a timeline of a world that gets more anthracite by the year.

"A graph of colour distribution in museum objects from 1800 to 2020. On the Y axis we seen colours distribution in percentage, on the X axis are the years. The further the X axis progresses, more and more of the distribution is taken over by monochrome shades of colours, mostly greys. At the begninning it looks almost like a rainbow, at the end it’s rather depressing."

Singapore deployed a free therapy bot, and it didn’t go well.

Social Mediargh

The Metaverse. The favourite place of nobody. The EU invited people to a party and nobody came. Thanks for trying to do something digital, nonetheless.

The favourite person of nobody? Elon Musk. Even Dave Chappelle fans think Musk is an idiot. In an unsurprising turn of events, the photo of him with Ghislaine Maxwell might not have been the accident Musk claimed it was, after all.

Speaking of Musk: Tesla, amid Musk’s fascism speed-run, might be helping not the environment but Big Oil. Most of Musk’s claims are dubious, some wrong.

While Tesla barely just began reporting its own emissions, it does report a guesstimate of how many emissions were avoided through the usage of its cars and solar panels. They’re not nothing, but compared to the growth of wind and solar power around the world – particularly wind power – they’re relatively small. Wind turbine manufacturer Vestas can boast avoided emissions several hundred times greater than Tesla. It’s not a competition, but if you’re going to claim hero status, you should expect to be fact checked.

The more the electric vehicle industry booms, the clearer the environmental burden. Tesla’s production facilities were dubbed the plantation by its Black employees. Twitter is becoming a sweatshop. Pair that with Musk getting ever closer to QAnon talking points and blowing a dog whistle for right-wingers so large it’s an Alphorn by now. Meanwhile, Twitter is not processing data deletion requests anymore.

Crimethinc, after being removed from Twitter, reflect on the death of social media and what might come next. Spoiler: talking to real humans.

Not Musk. Raspberry Pi somehow managed to become the first main character on Mastodon, where some deemed it impossible for such things to happen. Turns out, boasting about hiring an ex-surveillance cop as maker-in-residence makes it possible. Congratulations, and as we in the Interwebs say: LOL.

Facebook joins illustrious companies like Basecamp and Coinbase in forbidding employees to talk about potentially divisive political topics at work.

Prevailing Surveillance

Apple ditched its plans to monitor the iCloud Photo Library for CSAM. The plans had been heavily criticised for invading the privacy of iCloud users while offering little advantage in actually battling CSAM.

Elsewhere, the EU isn’t there yet, ploughing ahead with the plans to implement chat control. The proposal, if passed, will make it mandatory for messaging providers to scan messages for CSAM content. It, too, has been heavily criticised ever since it was announced. The European Commission has now published a blog post, which unfortunately contains lies, half-truths, or omissions in basically every sentence.

Such policies can have an immensely negative impact, say if the system detects nudity and locks down your account, when in reality you have only sent a photo of your sick child to a doctor. Eva Wolfangel noted these and other negative impacts of chat control in a comprehensive article in Republik.

The case shows that the problem cannot be solved technically. Yet this is exactly what the proponents of chat control are hoping for. The EU is discussing various technologies to implement the planned regulation: AI systems can easily track down already-known photos. New photos are where it gets harder: in the course of the debate, various experts have repeatedly pointed out that it is, to date, not possible to identify previously unknown photos beyond doubt as child sexual abuse material, and that this will therefore lead to a large number of false-positive reports.

Dear EU, repeat after me: You can’t solve social problems by throwing technology at them.

Speaking of which: Vorratsdatenspeicherung, aka blanket data retention. The zombie that every German minister of the interior falls in love with – truly, madly, deeply – is still on life support. No matter how many courts say that it shall not pass, be it in Germany or in Bulgaria.

There’s a new version of the companion bot ElliQ, which allows you to turn it into a memoir of your life. That’s as creepy as all home surveillance devices, but with the added non-benefit of not being helpful in administering care.

Luckily for everyone else, there’s Amazon, determined as always to get more of your data. And as stealing is bad, hmmkay, they will pay you $2 a month for you to allow them to monitor your phone traffic. Such a steal. Bargain. I meant bargain.

Facebook needs to give users the possibility to opt out of personalised advertising altogether.

World Wide Web

Jeremy Keith wrote about approaches to Web Design and CSS methodologies in the measured and thoughtful way that he masters like few others.

Web3 is one of those technological fever dreams. Marc Hutchins took a closer look and concludes: Web3 Doesn't Exist.

This message is brought to you by IP over Avian Carriers with Quality of Service.

Communism? Overpowered. If you nerf, you lie.

a16z shut down their media project Future. Feels good, man.

Slides. Fun! Fun? After reading this article about slides I’m honestly not sure anymore and will pretty likely never leave my bed again.

The alliteration of the week is won by Vice for Fyre Festival Fraudster. Yes, Billy McFarland is out of jail and wants to go back to the Bahamas. I guess Netflix and Hulu bought the rights for the next documentary, and Billy now needs to double up.

Justice in pasta land? A Florida woman sues Velveeta, claiming its macaroni takes longer than 3 1/2 minutes.

Image from Tweet

That’s it for this issue. I wish you, dear reader, a pleasant end of the year, and I’ll see you in the next one. Until then: Stay sane, hug your friends, and know that «Die» is a definite German article.

]]>
<![CDATA[Prime time in diversity theatre]]> https://www.ovl.design/around-the-web/015-prime-time-in-diversity-theatre/ 2022-11-27T14:12:00.000Z <![CDATA[Another attack on queer spaces, diversity theatre, border regimes, maps of the world, and cows surviving a hurricane.]]> <![CDATA[

Collected between 14.11.2022 and 27.11.2022.


Welcome to Around the Web. This newsletter is decidedly pro-trans.

If you are a TERF, this newsletter is not for you. Bugger off and never come back. Last weekend saw another attack on a space where LGBTQ+ people were trying to feel safer. If you are still trying to split the LGB from the rest, you are an asshole, and your cowardice will not protect you.

"A bass amplifier on stage, surrounded by some other gear. Pink tape runs across the surface. Written on this are the words «Transphobes eat shit and die alone."
Godspeed You! Black Emperor’s bass amp; via Janus Rose.

Besides a lot of venting (by me), this issue features some incredible writing (by others). So grab a cup of tea, make yourself comfortable, and let’s get linking.

The politics of hate & the theatre of diversity

Last weekend, a homophobe decided to go into a queer safe space and murder people. On the eve of Trans Day of Remembrance, a gunman entered Club Q in Colorado Springs. The evening ended with five people dead and twenty-five in hospital. The names of the dead are Daniel Aston, Kelly Loving, Ashley Paugh, Derrick Rump, and Raymond Green Vance.

It was up to the patrons of Club Q to overpower the attacker, preventing a far worse outcome. The police, always the heroes, arrested the guest who knocked the attacker out. Fuck them.

The attack comes amidst a political climate where homo- and transphobia are equipped with a facade of respectability. It hasn’t been the first attack on a queer bar, and it won’t be the last. It is, as James Greig writes in Dazed, no surprise that these attacks happen. And they do not only take the form of gun violence. Too many queers are still in their closets, too many bullied in school, spat at in the streets.

These politicians and their voters aim at making queer life invisible, forcing anyone who does not comply into their petty vision of the heterosexual family. They haven’t succeeded so far. They may never succeed. But they are able to cause immeasurable suffering for those they declare their enemies.

The resilience of the queer community may be inspiring, but we shouldn’t look too hard for hidden positives in a situation where five people have died. There is no upside for the victims, their grieving loved ones, or the people who survived. The LGBTQ+ community will show, once again, its capacity for solidarity and endurance, but it shouldn’t have to.

We switch to our culture reporting, live from football’s diversity theatre.

This week also saw the start of the FIFA World Cup in Qatar. Some European teams wanted to do something about diversity, but not really. So, they decided to start with One Love armbands. Whatever happens, they said, we will wear them.

Then FIFA did FIFA things. They threatened any player who dared to wear the armband with a yellow card.

That's too much for the masters of diversity theatre. The armbands, useless as they were in the first place, were off.

The colourful armband was supposed to make football fans feel good, a symbolic handkerchief to console them over the unignorable performative contradictions of this World Cup. And this armband would, of course, have been just one of the most colourful publicity stunts of the year; a swallow in the penalty-free space that was supposed to show the public that in this game, which could only be made possible by human rights violations in the first place, at least everyone involved in the game is actually naturally in favour of human rights.

Samira El Ouassil – Sehen, hören, aber nichts sagen

The German team have come up with an even more pointless act of symbolism: before kicking off their first game, they held their hands over their mouths for the team photo.

It’s a great gesture – in the sense that nobody knows what it means. Is it forbidden to speak? Are they just spineless cretins, afraid of the slightest consequence? Or does it mean, as El Ouassil suggests, that they prefer to keep their mouths shut once they meet the slightest resistance?

Who knows.

Who needs the theatre, while terrorists with guns storm into queer spaces and shoot us up? Who needs theatre when the media treats those who deny trans youth the right to live as just another opinion? Who needs theatre when Pride means a police stand, when the same police will never protect the community?

Until they take their «allyship» seriously – which means more than a few flags in June, and not shying away from even the bare minimum – every company, football team and media outlet can shut right up.

Allyship is not wearing an armband, if FIFA says it's ok. Allyship is not taking up space at the bar. Allyship is not a hashtag. Allyship is not branding your chainstore window. Allyship is pistolwhipping the gunman until he lapses into unconsciousness and a trans woman can stomp him out.

Huw Lemmey – Give Homophobia and Transphobia the Yellow Card

This ain’t intelligence

The current state of machine learning feels like a remake of Groundhog Day. In this issue’s version, we have Facebook publishing Galactica, a model with a grand name and a grand vision: parroting scientific papers. And Cicero, a Large Language Model designed to manipulate people.

There’s one problem, though: even if Large Language Models have the grandest name, they have no idea what’s true. That’s a problem in itself. But even more so if you market your model as a fact machine.

It didn’t take long for Galactica to be exposed as «the AI knowledge base that makes stuff up». Nuclear reactors made of cheese? Sure. The benefits of suicide and eating glass? No problem.

While confidently inventing cheesy nuclear power, the model had a «safety filter» which prevented it from answering questions related to queer theory, racism, or AIDS.

It only took three days for the public demonstration to be shut down.

Yann LeCun, Facebook’s chief AI scientist, continued to rant for days afterwards, piling on the researchers and journalists who were doing Facebook’s work for it. To just about every valid criticism and vector of harm the model causes, he responded with «GALACTICA DOES NO HARM YOU LIAAARRRR». It is incredibly painful to watch.

Setting all colorful analogies aside, it seems flabbergasting that there aren’t any protections in place to stop this sort of thing from happening. Meta’s AI told me to eat glass and kill myself. It told me that queers and Jewish people were evil. And, as far as I can see, there are no consequences.

Tristan Greene – Meta takes new AI system offline because Twitter users are mean

Again and again, the same few companies burn their money on models whose ethics fall apart under even a stern look. So, again and again, it’s important to remember that such products – the application of Machine Learning – are not inevitable. It’s possible to build ML systems focused on community, not profit, as Dylan Baker and Alex Hanna detail.

The Distributed AI Research Institute, to which Baker and Hanna belong, is celebrating its first anniversary. That’s reason for a big yay. Information on the event can be found at Eventbrite.

This disaster has made Facebook’s sacking of its AI infrastructure team all but forgotten. Following the axing of Twitter’s ethical AI team, this is the second high-profile team to lose its jobs in as many weeks. A radical turn of events in a job market that seemed safe.

Elsewhere in bullshit, highly sexualised AI output is supposed to be AI «mastering the female form». It isn’t. At best, Stable Diffusion has mastered the reproduction of specific art styles.

Stable Diffusion version 2 has been released. Nude images and images of children have been removed from its training dataset, which should prevent the creation of NSFW and CSAM content. It’s also supposed to make it harder to copy the style of human artists. Users weren’t amused when their porn was taken away from them. But since Stable Diffusion is open source, it will only be a matter of time before a porn fork appears.

Better control of input data is desperately needed, as no one really knows what kind of data is in the datasets used to train generative AI models.

Despite the pending lawsuit, GitHub is moving forward with marketing Copilot. If you are thinking of using it beyond tinkering, be warned: it’s probably not worth the risk.

A story I’ve been meaning to link for a while, but somehow it slipped through the cracks: Found in Translation. It tells the fascinating story of a large-scale project to collect data from all the languages spoken in India.

What does the future of Machine Learning look like? If it is larger and larger datasets, the hunt for more data will sooner or later run into its limits.

In robotics, Boston Dynamics plans to study «robot-human interactions» by having its robodogs roam college campuses. It’s a good time to remember how to turn them off.

A brief look at Twitter’s state

At first, there were glorious moments (Eli Lilly, never forget [side note: Eli Lilly’s CEO made a somewhat weird statement admitting that the price of insulin might be too high]).

Meanwhile, the state of Twitter is pure chaos, with a foundation of fascism. If you are super curious, Twitter Is Going Great has all the updates. Here’s the one-minute redux:

Over the past few weeks, Musk has moved closer and closer to the right-wing pundits, echoing their claims and conspiracy theories in a rhetoric that is becoming more and more reminiscent of QAnon. Twitter 2.0, as he calls it, must be extremely hardcore. His staff said «no thanks». Remaining are those who are either Musk fans or need the job to stay in the U.S. – barring further acts of cruelty such as the pre-Thanksgiving layoffs, which earned applause from the fascist government in Italy while he was at it. Currently, it seems he’s not so much speed running the content moderation learning curve as running it in reverse. Almost all the banned accounts are being reinstated, while left-wing accounts such as CrimethInc. are being removed. Nazis are circulating a list of 5,000 «antifa accounts» – including the CIA, Britney Spears and Libs of TikTok. NPR reports that 50% of Twitter’s biggest advertisers have left the platform. Musk, meanwhile, eager to please his right-wing heroes, said he might start a phone company if Apple and Google deplatform Twitter. If this reminds anyone of the Freedom Phone: yes, I never heard anything from them again either. Twitter’s Brussels office has been completely disbanded, showing Musk’s total disregard for anything beyond the US. If someone asks you to spell this: T-R-O-U-B-L-E.

While the situation is dire, Twitter is – in theory – too important to die. Here are some quick tips on how to stay safe on Twitter.

We need to get over the idea that any of these tech billionaires and self-proclaimed innovators have anything apart from preserving their wealth in mind.

This is nothing new, as Rose Eveleth, looking back at the Futurist movement and its fascist links in Italy, explains.

This love of disruption and progress at all costs led Marinetti and his fellow artists to construct what some call “a church of speed and violence.” They embraced fascism, pushed aside the idea of morality, and argued that innovation must never, for any reason, be hindered. Marinetti and his movement cheered, for example, when Italy invaded Northern Africa.

Prevailing surveillance

Back to the FIFA World Cup, albeit briefly and without football: The Bureau of Investigative Journalism published an in-depth investigation of hackers targeting European journalists and politicians critical of Qatar’s World Cup bid. A companion piece details the global hack-for-hire industry.

The Minderoo Centre for Technology & Democracy has examined the use of facial recognition by police forces in England and Wales. The Centre looked at the programmes from an ethical and legal perspective. Unsurprisingly, none of the cases they looked at passed the audit.

Police in Israel are using a completely opaque facial recognition algorithm. The system is supposed to detect drug smugglers trying to enter the country through Tel Aviv’s Ben Gurion airport. It acts without any judicial oversight.

Facebook has been hit by another lawsuit targeting its data collection practices.

Hot on the heels of the announcement that Alexa could respond to search queries with ads, Amazon’s Alexa division – the poster child for home surveillance – is reported to be losing $10 billion this year alone. The global Make Amazon Pay campaign organised protests in 40 countries on Black Friday. And in Leipzig, Germany, a worker died of natural causes during their shift at a warehouse. The corpse was hidden behind cardboard while work continued more or less normally. A story so grotesque that even Amazon can’t come up with an excuse.

"The Amazon Tower in Berlin by night. A powerful projector projects «The wrong Amazon is burning» on its side."
Greetings to Jeff Bezos. Protest on the Amazon Tower in Berlin during the Make Amazon Pay action days. via Reddit.

Have you ever wondered why so few mobile apps have implemented cookie banners? The answer is not that they don’t have to, or that they don’t track you. They just don’t care, and recent reports have found that up to 90% of all apps are in breach of Europe's GDPR.

In other news from the Captain Obvious Department of Stating the Obvious, researchers found that companies in America tend to report data breaches while other news dominates the news cycle.

Borders & maps

The way we look at our world is shaped by maps. And those maps are divided by borders. While it is easy to see these borders as purely physical demarcations, the reality is far more complex.

Borders are not fixed lines demarcating territory. They are elastic; bordering regimes can be enforced anywhere. Subjected to surveillance and disciplinary mechanisms within the nation-state, undocumented migrants endure the omnipresent threat of immigration enforcement, dangerous and low-wage work, and barriers to accessing public services. The production and policing of the border becomes a quotidian workplace ritual as law enforcement, doctors, teachers, landlords, and social workers regularly report migrants to border agencies.

Harsha Walia – There Is No “Migrant Crisis”

The whole piece is excellent, tracing the border regime of modern states and the construction of the scapegoat «migrant» around the world.

With an increase in global conflicts, and more and more areas of the world becoming uninhabitable thanks to the climate catastrophe, the global north is making immense efforts to wall itself off. It should be noted that even those cocooned parts of the world will become large uninhabitable wastelands if the rise in temperature continues.

Europe’s wall is the Mediterranean Sea, the deadliest border in the world. Médecins Sans Frontières found refugees handcuffed and injured on the Greek island of Lesvos. According to their reporting, a group had approached the refugees claiming to be doctors, then immediately beat them up. The group fled when MSF arrived at the scene.

Upcoming and related: The Disruption Network Lab hosts a panel on Algorithms of Violence: Border Management, Migration & Enforced Discrimination. It’s on Friday, 2nd December at 5pm CET. Tickets are available.

When imagining maps, it’s easy to fall for misconceptions, shaped by map projections and ideological images of the world.

But what happens if we take our old views of the Earth and turn them inside out? That’s the question answered by the Spilhaus Projection. It maps the world by its oceans, and it is beautiful.

"A map of the world, center on the north pole. The design is centered on the oceans, emphasising that the the oceans are one giant mass of water rather than multiple divided entity."

What do the most spoken languages of the world look like when visualised? Like this.

Earth’s land mass can be arranged to look like a chicken. Terrific work.

The continents can be arranged to look like a chicken

Image from Tweet

EOL of humanity

An unholy coalition in Germany’s public opinion is fantasising about a green Rote Armee Fraktion because climate activists are fed up with their bullshit.

Meanwhile, billionaires are responsible for a million times more emissions than the average person, an Oxfam report finds. Tax. Them. Out. Of. Existence. Or be like Winky, the owl that trashes fancy houses.

Elizabeth Holmes, fraudster of Theranos fame, has been sentenced to eleven years in prison. It should be noted here that a) no male CEO has ever faced a similar sentence (which does not imply that Holmes shouldn’t), and b) this sentence is for defrauding investors, not the public. The only way to be held accountable is, as we’ve seen with Bernie Madoff, to fuck around with rich people.

The FTX-induced crypto collapse is continuing, meanwhile. And while it’s easy to blame Sam Bankman-Fried, the media spent months praising him to the heavens.

BuzzFeed! What fun we had – fun number seven will surprise you. Now no more than a shell of its former glory, BuzzFeed is still chugging along, kind of. Mia Sato wrote The unbearable lightness of BuzzFeed, looking at the past decade of a changing media environment.

Nature is healing. A massive sinkhole threatens to swallow a West Virginia police department.

The UK Treasury tried something fancy and created a read-only Discord; users still found a way to troll the heck out of them.

Let’s close this issue off with two wide-ranging essays.

Looking back at something old, Huw Lemmey links the Dutch art style Pronkstilleven with the Instagram selfie, and what the desire to picture earthly pleasures says about our fear of death: Soon You Will Die: A History of the Culinary Selfie.

True Grit is a wonderful story of survival, and facts about cows you didn’t know you needed. It’s one of the best pieces I’ve ever linked in Around the Web. Now go read.

Finally, here’s a thread of Ryan Gosling looking like butterflies (check out season 2, Machine Learning researchers looking like moths, while you are at it).

Ryan Gosling as butterflies: a 🧵

White Admiral

Image from Tweet

That’s it for this issue. Stay sane, hug your friends, and if you have a TERF friend, now is the time for that friendship to end.

]]>
<![CDATA[Just go ‘aahh!’ Hardcore!]]> https://www.ovl.design/around-the-web/014-just-go-aahh-hardcore/ 2022-11-14T14:12:00.000Z <![CDATA[Twitter, Facebook, how Apple broke its privacy promise, and GitHub getting sued. But a good news interlude, too.]]> <![CDATA[

Collected between 31.10.2022 and 14.11.2022.


Welcome to Around the Web. The newsletter where birds go to die.

Two weeks ago, I promised myself I wouldn’t write about Elon Musk’s bird affairs. But I’m terminally online, and for anyone who is terminally online, the last two weeks have been a bloody rollercoaster of emotions. I’m sorry, but this issue is very much about Elon Musk’s bird affairs.

Let the chirping commence.

The bird is freed but kind of dead

Below, I try to make sense of what happened by breaking it down into several things. Each thing moved quickly, and usually several things happened on any given day.

The sum of things is called the Mess. It’s just like the Queue, only bigger.

the terminally online partner explaining the necessary ten minutes of context for the blissfully offline partner to understand the tweet they're about to show them:

Image from Tweet

Editor’s note: While I tried to keep it succinct, that turned out to be impossible. And I have still missed things. If you want to follow along with the most recent developments, check out Twitter Is Going Great.

The labour thing

While Musk and Twitter were still dancing in legal limbo, it was reported that Musk was planning to fire 75% of Twitter’s employees. Just before the actual takeover, he tried to reassure his future employees by saying that he wasn’t going to lay off that many people.

For once, he wasn’t lying.

He fired «only» 50% of Twitter’s workforce.

Teams cut included Twitter’s entire accessibility team, the ethical AI team, content curation, and outreach.

It quickly became clear that some of the people who were let go had critical knowledge of Twitter’s infrastructure or were needed to build future features. Leading to the odd situation where you’d be let go one day and asked to return the next.

A few days after the layoffs, the remaining senior management began quitting in droves.

The global impact appears even greater than that in the United States. Twitter has laid off almost all the staff at its office in Ghana, its only office in Africa. Offices in India, Japan, Mexico, Brazil, and Singapore appear to have been cut drastically.

Initially, the layoffs seemed to affect only full-time staff. On 13 November, Platformer’s Casey Newton tweeted that contractors were now also being affected.

Contractors aren’t being notified at all, they’re just losing access to Slack and email. Managers figured it out when their workers just disappeared from the system.

The thing with maintaining a website

Drastically reducing staff on any system that’s more complex than simple is risky. No website is without bugs, and fixing them in a reasonable amount of time requires the knowledge and capacity to do so.

If no one is left to fix these bugs, they will accumulate.

A massive tech platform like Twitter is built upon very many interdependent parts. “The larger catastrophic failures are a little more titillating, but the biggest risk is the smaller things starting to degrade,” says Ben Krueger, a site reliability engineer who has more than two decades of experience in the tech industry. “These are very big, very complicated systems.” Krueger says one 2017 presentation from Twitter staff includes a statistic suggesting that more than half the back-end infrastructure was dedicated to storing data.

Here’s how a Twitter engineer says it will break in the coming weeks

Twitter’s professional category display now seems to be an unfollow button in disguise. The spam filter is going ham. Protected tweets are visible for a short time. These kinds of things.

Or, hell, even using your own account on your own platform.

Musk’s takeover of the company had been so brutish and poorly planned that, we’re told, there was not even a proper handover of the company’s social accounts. As a result, having spent $44 billion to acquire Twitter, for his first week-plus of owning the company, Musk and his team were unable even to tweet from the @twitter account.

Inside the Twitter Meltdown

The thing with content moderation

Content moderation is hard. Very knowledgeable people have been working on this for years, and it’s still a mess.

Mike Masnick was kind enough to sum up the lessons to be learned in Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve. Elon didn’t listen.

Destroying teams that are responsible for safety and accountability will first hurt those who are already marginalised and have the fewest resources.

The impact of staff cuts is already being felt, said Nighat Dad, a Pakistani digital rights activist who runs a helpline for women facing harassment on social media.

When female political dissidents, journalists, or activists in Pakistan are impersonated online or experience targeted harassment such as false accusations of blasphemy that could put their lives at risk, Dad’s group has a direct line to Twitter.

But since Musk took over, Twitter has not been as responsive to her requests for urgent takedowns of such high-risk content, said Dad, who also sits on Twitter’s Trust and Safety Council of independent rights advisors.

Content moderation, which ultimately depends on underpaid, traumatising human labour, wasn’t great before. Everything that happens now will be worse.

The checkmark thing

The thing with the verified checkmark. People who fell somewhere in the realm of «public figures» could be verified by Twitter. In the past, this meant verifying that these people were these people. There were no benefits, just a blue tick.

Now, Elon has been quite vocal about this for some time. After all, non-fans of Elon were being verified. In a rare attempt at Marxist analysis, Musk identified a two-class system. Not being a Marxist, he decided to let the market sort it out. For $20, anyone can buy a checkmark. And better ads. And priority placement in the timeline.

Stephen King, yes, the real one, complained. In a now-deleted tweet, Musk said, «We’ve got to pay the bills somehow!» and lowered the price to $8. If that sounds unbelievable, I regret to inform you that it’s also true.

Since the blue mark was essentially worthless as a sign of trustworthiness, someone came up with another mark. The official tag was added to a subset of previously verified accounts. And immediately removed.

A screenshot of Twitter’s user interface. You see Elon Musk’s profile, but every letter on the whole page has been changed to Twitter’s checkmark. On the right you see the browser development tools, which show that the font in use has been changed to «Twitter Sans Neue».
Twitter’s new font, solving the verification problem by changing every glyph to a checkmark. Designed by Christoph Köberlin. (Parody)

The result of the Blue verification was pure chaotic energy. One of the best days on Twitter. Someone paid to verify an account pretending to be the pharmaceutical company Eli Lilly and announced that insulin was now free. Eli Lilly, the real Eli Lilly, quickly tweeted that this was, in fact, not true. A vial of insulin costs almost $100 in the US.

Imagine being on Eli Lilly’s social media team and having to say that you continue to overcharge for drugs.

For a brief moment it seemed like Twitter sent the stock price of Eli Lilly diving. But correlation is not causation.

Addendum: A previous version of this section stated that the tweet did have an impact on Eli Lilly’s stock price. Serving as a reminder that funny does not equal true.

Was it worth it for Twitter? Probably not.

If you factor that only the 10 percent of Twitter users that the company considers to be "power users" would be interested in paying, the conversion rate is a bit better but not great at just over 0.25 percent. The average conversion rate in e-commerce is roughly between 2 and 3 percent.

On the not-funny-at-all side of things: neo-Nazis immediately used the opportunity to buy verification badges. And this is the thing. While it may have been fun and games for a day (provided you are not Eli Lilly’s social media manager), eventually such a product will be used primarily for abuse.

The advertising thing

The important thing about advertisements on Twitter is that Twitter makes 90% of its revenue through ads. So anything that hurts ad income is a pretty substantial blow to Twitter’s bottom line.

Advertising companies, at large, don’t like risks and don’t like to be associated with shit.

As you might imagine, a platform where «verified» users shitpost under the names of multi-billion dollar companies – a platform owned by a billionaire who dreams of being one of the shitposters but can’t manage the posting – isn’t exactly a place where advertising thrives.

After the glorious days of the verification disaster, Omnicom recommended that its clients (including McDonald’s and Apple) stop advertising on Twitter. IPG, another media network, recommended pausing advertising spend immediately after Musk took over.

The good thing about this is that advertising isn’t a thing anymore. The key result «Reduce reliance on advertisements as a percentage of total revenue» has been achieved. Well done, team, pop the champagne.

The bad thing is that revenue isn’t a thing anymore.

The thing with the laws

A pretty substantial mess we have here. Teams axed, senior leadership gone, and one of Musk’s lawyers saying «Elon puts rockets into space, he’s not afraid of the FTC».

Meanwhile, at the FTC:

“We are tracking recent developments at Twitter with deep concern,” an FTC spokesperson told The Hill in a statement. “No CEO or company is above the law, and companies must follow our consent decrees. Our revised consent order gives us new tools to ensure compliance, and we are prepared to use them.”

A company lawyer warned employees that they might face hefty fines if they sign off on a product being compliant with regulations. In the same message, he advised them to take paid time off.

What do hefty fines look like when FTC demands are violated? In 2019, the FTC settled with Facebook for $5 billion, after Facebook violated a privacy order. Ooops.

It’s not just the FTC. The EU has regulations, too, including the new Digital Services Act. After Musk tweeted that the bird is freed, Europe’s internal market commissioner, Thierry Breton, was quick to respond:

In Europe, the bird will fly by our rules.

The thing with Twitter being essential

Over the last few weeks, many people have been calling on others to leave Twitter or even abandon social media altogether. The thing is: it’s not that simple.

One example is the disability community. Twitter is an important part of their support networks and offers a visibility they dearly miss basically everywhere else.

It's frustrating when people say they wish social media wasn't a thing because it's a literal lifeline for so many. Like, just log off. Disabled people have literally needed social media to stay alive during the pandemic. When these sites go down, we lose entire support networks.

Twitter had been one of the most user-friendly social media platforms out there—with a world-class team that made sure it was usable by people who had a variety of different needs. Plus, it’d been a megaphone and a lifeline to the outside world, for those who’d been especially vulnerable during the pandemic and mostly stayed indoors. Everything was now up in the air.

Twitter Was a Lifeline for People With Disabilities. Musk’s Reign Is Changing All of That

When the demand to leave is voiced by white liberals – running from harassment they don’t even have to face – while Black Twitter stays and fights, it only amplifies the problem of whiteness.

Do we need better, non-corporate alternatives? For sure. But leaving behind those who rely on the platform does them a disservice.

The thing with Space Karen being so stupid that this headline does not do it justice

Throughout all of this, Musk didn’t stop tweeting. And he made a mess of it. When he tries to be smart, he is not; when he tries to be funny, he is a cringelord. By now, his interactions have mostly been reduced to incredibly inappropriate rolling-on-the-floor-laughing emojis.

While complaining about sinking revenue, he gleefully reported that usage numbers were higher than ever. Yeah, mate, you set fire to one of the largest websites and made sure there is no one left to extinguish it. Of course everyone is looking at you.

Burning Man is canceled. How can it compete with the spectacle of setting $44 billion on fire?

He has further decided to include bots in his active user calculation. Which is slightly weird, given that he didn’t want to buy Twitter because of the bots. But at this point, nothing is expected to make any sense anymore.

All his business decisions have been completely erratic. He refuses to learn by example, doing one thing at one point and reversing it two hours later. He is suffering from late-stage Billionaire Brain Damage. The only cure is to tax billionaires out of existence and crack down on the networks of yea-sayers and bootlickers. None of which seems particularly likely at the moment.

Everything he has done so far is so nakedly bad and wrong that it is almost impossible to understand why he’s doing it, other than the fact that he can and wants to. It’s one thing to disagree about what verification is, or means, or should do - it’s another to lose many of your advertisers at a time when you specifically need to make more money. The actions Musk has to take are ‹big,› but not particularly complex, and yet he appears to be deliberately choosing to do the wrong thing every single time.

There we are. An incredibly incompetent person bought a company he wanted to avoid buying. His only plan seems to be trying out whatever comes to mind and reverting it immediately.

The problem with throwing shit at a wall and seeing what sticks is that you have a room full of shit. Which is what I imagine Twitter’s board meeting room to look like right now.

A good news interlude

Researchers at the University of California, San Francisco have developed a brain implant which can turn brainwaves into words.

A man using the interface discussed in the link above, looking at a screen. The screen shows two sections with words. The first shows «How are you today?». The second shows the answer «I am very good».
A paralyzed man who hasn’t spoken in 15 years uses a brain-computer interface that decodes his intended speech, one word at a time. University of California, San Francisco

And egg whites can be transformed into a material capable of filtering microplastics from seawater.

The researchers used egg whites to create an aerogel, a lightweight and porous material that can be used in many types of applications, including water filtration, energy storage, and sound and thermal insulation.

You don’t even need egg whites from real eggs – other proteins work too, which makes this even more useful. The research is not yet ready for commercial application, though.

Social Mediargh

The bird is one thing, but there have been other things in social media! Take Facebook. Mark «Android» Zuckerberg took full personal responsibility for destroying stock value by shoving billions into a product which no one needs.

And he showed it by buckling up and taking the hit. Kidding. He showed it by letting 11,000 people go. Soon after, laid-off employees who depend on their employment for their visas were complaining about radio silence. Proper leadership, Mark. It deserves a present. How about a new antitrust charge in the EU?

Big Tech can’t be trusted. I guess that’s clear by now.

Maybe we should push for small tech? Mastodon has seen massive user growth amid all the chaos. It’s an art project, but an interesting one: Minus. A social network where you get one hundred posts. For life. Ben Grosser also did Deficit of Less and Orders of Magnitude, films that reduce Zuckerberg to saying «less» and «more».

There’s a small movement, called the IndieWeb, which pushes for an independent web. The problem, as Max Böck argues, is that it can feel super intimidating for non-developers to join the fun.

Want to start on Mastodon, but feel unsure how? You might want to know how to pick an instance and read Everything I know about Mastodon.

I really enjoyed this exploration of how to create an alternative protocol for a timeline-less communications platform: Specifying Spring ‘83.

This ain’t intelligence

What happens if corporate AIs take over caring for children? Companies like Amazon push their home surveillance devices to ever younger children, addressing issues with bias – as always – after harm has been done. Elsewhere in Alexa: Amazon says screw it, lets Alexa respond to search queries with ads.

GitHub is now being sued over its use of open-source code in Copilot. Elsewhere in generative AI: How one unwilling illustrator found herself turned into an AI model.

When talking about the output of Large Language Models, it’s tempting to say stuff like «written by GPT-3». But, as Matthias Ott reminds us, It Wasn’t Written.

Sasha Luccioni built a handy tool to explore bias in Stable Diffusion.

Prevailing surveillance

In October, France’s data protection authority fined Clearview €20 million for data protection violations. France is not the first European country to do so, but it’s unclear whether the fine will ever be paid.

This limit of enforcement lays bare a fundamental problem in the EU’s fight to protect the data of its citizens. If you can’t enforce the fine, why should companies care?

In China, internet users resort to puns to get around the state’s draconian internet censorship. Banning words is not unique to the internet: Kotobagari: Japan’s Hunt for Taboo Words.

Hacked documents provide an inside look at an Iranian government program that lets authorities monitor and manipulate people’s phones.

Apple managed to break its unique selling proposition of being the privacy-focussed company. After Gizmodo reported that Apple collects usage data, even if the corresponding setting is disabled, Apple now faces a class action lawsuit. A separate antitrust lawsuit alleges that Apple and Amazon colluded to fix iPad prices on Amazon’s marketplace.

Not working: Using eagles to catch drones.

EOL of humanity

The next person who says «Yeah, climate change, really not that great, but I love the warmth» to me might get slapped in the face.

Europe had its warmest October on record, with temperatures nearly 2°C above the 1991-2020 reference period. In western Europe a warm spell brought record daily temperatures and it was a record-warm October for Austria, Switzerland and France, as well as for large parts of Italy and Spain.

Surface air temperature for October 2022

After the death of a cyclist in Berlin, German politicians stopped hiding their authoritarian tendencies and openly contemplated prosecuting climate activists as terrorists. Under Bavaria’s new restrictive police law, some activists are already being held in police detention for up to two months.

The internet runs on data centres. And a lot of them have no contingency plans for the climate crisis.

This is all going swimmingly, y’all. Don’t mind the piranhas.

The Web at Large

Using Unicode characters that merely look like italic Latin letters wreaks havoc for users of assistive technology. Please stop doing this.
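
To see why this is a problem, here is a minimal Python sketch (the example string and code points are my own illustration, not taken from any linked piece): the «italic» look-alikes are Mathematical Alphanumeric Symbols, so what assistive technology encounters is a run of math symbols rather than the word they imitate.

```python
import unicodedata

plain = "italic"
# The word "italic" rebuilt from Mathematical Sans-Serif Italic code points –
# the kind of look-alike letters people paste into posts as fake formatting.
fancy = "".join(chr(cp) for cp in (0x1D62A, 0x1D635, 0x1D622, 0x1D62D, 0x1D62A, 0x1D624))

print(fancy)           # renders like "italic"
print(plain == fancy)  # False – visually similar, but entirely different characters

for char in fancy:
    # e.g. U+1D62A MATHEMATICAL SANS-SERIF ITALIC SMALL I
    print(f"U+{ord(char):05X}", unicodedata.name(char))
```

A screen reader works with those code points, not with the shapes on screen, which is why such «styled» text gets announced symbol by symbol or skipped entirely.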

Correctiv used many words to say that new domains can be bought faster than they can be blocked. Consequently, blocking RT domains in the EU is rather useless.

FTX, one of the largest crypto exchanges, exploded. Crypto investors still think they are going to get rich.

Right-wing superhero movie ends «in disaster» after $1 million in funders’ cash goes missing.


Thanks for reading till the end. See you again in two weeks. Until then, stay sane, hug your friends, and don’t smoke crack.

]]>
<![CDATA[Do robots eat electric salad?]]> https://www.ovl.design/around-the-web/013-do-robots-eat-electric-salad/ 2022-10-30T14:12:00.000Z <![CDATA[A lettuce, machine learning’s stealing problem, an update on humanity’s end of life, and pictures from the beginning of life.]]> <![CDATA[

Collected between 16.10.2022 and 30.10.2022.


Welcome to Around the Web, where we welcome our overlord the lettuce with open arms and vinaigrette.

The news was chock-full of everything these past two weeks. I tried to keep up, but had to drop some topics nonetheless. Still, it’s the longest issue so far. Get a tea, some cookies, and enjoy the ride.

Before we start: friends of this format, AlgorithmWatch, are offering five fellowships to report on algorithmic accountability in Europe.

So, what’s up, world?

This ain’t intelligence

The generative AI hype models continue to be plagued by copyright issues – or theft, to put it less mildly. GitHub’s Copilot was the subject of an article in The Register, which explored the issues that can arise from scraping code to generate new code. GitHub claimed the model was fair use. Which is nothing more than a claim at the moment.

Of course, it's ironic that GitHub, a company that has built its reputation and market value on its deep ties to the open source community, would release a product that monetizes open source in a way that harms the community. On the other hand, given Microsoft's long history of hostility towards open source, perhaps it's not so surprising. When Microsoft bought GitHub in 2018, many open source developers - myself included - hoped for the best. Apparently, those hopes were misplaced.

You can read more about Matthew Butterick's ongoing investigation at githubcopilotinvestigation.com.

Rachel Metz reported on the indiscriminate use of art in models like DALL-E or Midjourney. The models can reproduce an artist’s style to a degree of similarity that is disconcerting for the artists concerned. The artists whose work is included in datasets such as LAION-5B, which serves as the basis for Stable Diffusion, are not amused.

The discussion has only just begun, and it touches on an incredibly wide range of issues: starting with «What is art?» (images generated by machine learning models probably aren’t), through copyright and consent, to the more technical – such as how much of our reality can actually be described by language alone, or how to protect the personal data in datasets from prompt injection attacks.

How to build less harmful AI systems is an incredibly difficult question to answer. But the companies that are not asking it are flush with billions of dollars. They actively choose to take the easy way out by ignoring ethics altogether or dealing with them half-heartedly after the damage has been done.

Venture capital, after burning billions in crypto, is riding the hype train again. Recently, Stability AI and Jasper AI have both secured fresh funding, each now valued at over a billion dollars.

In stark contrast, a group of volunteers has launched a bounty programme to combat bias in AI. As much as I applaud the intention, I'm appalled that this was necessary. And that Microsoft and Amazon have the audacity to offer a few thousand dollars in rewards or computing resources. Remember that Microsoft has invested a billion dollars into OpenAI. Donating ten thousand feels like an insult by comparison.

Generated images will be included in the Office suite as part of Microsoft's exclusive right to build commercial features on top of OpenAI's «research».

Sofia Quaglia writes about the dangers of using machine-translated text in high-stakes situations in Death by Machine Translation?.

In Israel, a young man captioned a photo of himself leaning against a bulldozer with the Arabic caption "يصبحهم", or "good morning", but the social media's AI translation rendered it as "hurt them" in English or "attack them" in Hebrew. This led to the man, a construction worker, being arrested and questioned by the police.

The neo-fascist government in Italy has proposed building an algorithm to assign young people to compulsory work. It is an unsettling suggestion, but not an unprecedented idea.

You don’t die completely as long as someone thinks of you. Which might soon be forever. A new set of ML-assisted technologies sets out to clone our relatives – or basically anyone – and make them «live» forever (I linked to Amazon’s product offer in issue 11).

While we talk about death, let’s briefly talk about weapons on robots, shall we?

Remember the last issue, where robot manufacturers promised not to weaponise their robots? Police are doing it for them. And the Netherlands has deployed NATO’s first killer robot. The only silver lining is that Amazon might make you immortal after the robots have shot you down. Hurray.

Legislation readings

Canada is moving forward with its own legislation, the AI and Data Act (AIDA).

As Bianca Wylie argues in a series of posts (read part one here), it’s important to take time and get these things right – or not do them at all:

However, the foundational error that informs both data protection and AI legislation is that the idea of human rights should be subsumed to commercial interests and state efficiencies. Fast forward 20+ years, and the way these two pieces are getting blended into one another (industry and the state) because of the use of private technologies in public service delivery is another element of this conversation that requires expansion.

The Ada Lovelace Institute asked what the path forward for AI liability might look like, and whether the directive is enough.

In this post, I look at three legal developments that progressively show how existing approaches to AI liability have not kept abreast of technological developments, which may lead to overcoming traditional civil liability regimes tout court.

A lettuce prime minister

Liz Truss. The only prime minister who made it really hard not to link to the Daily Star. The British tabloid live-streamed a lettuce sitting on a desk and asked whether it would outlast the prime minister – who was, meanwhile, reported to be hiding under her own desk (which forced the prime minister’s office to say that Truss was, really, not hiding underneath her desk).

The lettuce won. By the time of Truss’ resignation, it was equipped with a wig, googly eyes and a pack of tofu.

She’s gone now, taking the Queen, the British economy and her party with her. Quite an impressive feat for 45 days in office. 45 days which will earn her up to 115,000 GBP a year for the rest of her life.

A picture of Liz Truss. She is wearing a safety helmet and a high-visibility waistcoat. In her hand she is holding a button. She looks slightly insane. To her left is Kwasi Kwarteng. To her right is an elderly man.
Liz Truss having a blast, blowing the country to smithereens.

A fair compensation, considering that she broke all kinds of records. As Slate has calculated, Brits spent 2 percent of her time in office standing in the Queue. A book on her «astonishing rise to power» will never see the shelves. She was the only prime minister since the show’s inception not to have an episode of Doctor Who air during their time in office.

The Guardian summarised Truss’ second-to-last day in office in all its chaos. If there has ever been a day in parliament that captures a political party in a state of meltdown, this might be it. Hell, most raves are more orderly than this.

From inadvertently leaking the government’s agenda, to MPs being berated into toeing the party line, to Truss herself missing a vote on fracking that was dubbed a vote of no confidence. In a normal timeline, this would have been weeks. But we live in the worst of timelines. So it was just fourteen hours.

The Tories clung to power. For a brief moment, it even looked like Theresa May or Boris Johnson might get back into the office they'd been thrown out of.

But as every rival dropped out, Rishi Sunak was crowned prime minister. The richest prime minister in British history immediately spoke of the hard times ahead. For his citizens, of course.

Every single one of the cretins we call politicians is completely incapable of leading a country to anything but ruin. Maybe bring back the lettuce. By now, it's probably just as rotten as the rest of the parliament.

Prevailing surveillance

Surveillance capitalism is alive and well. TikTok has reportedly been tracking the location data «of some specific American citizens», Forbes reports.

The panic over Chinese state surveillance was quickly put into perspective by Uber, which plans to build an advertising system for its users based on the locations they have visited in the past. The Vice article is a good reminder of Uber’s past blunders, too – just in case anyone forgot about them while every other company tries to keep up.

But we don’t even need advertising in Uber cars. We still have Amazon’s Echo or Google’s Home (which of course come with advertising). Amazon’s plan for the smart home of the future is a panopticon in every household: neat appliances watching every move of their residents.

This intense devotion to tracking and quantifying all aspects of our waking and non-waking hours is nothing new—see the Apple Watch, the Fitbit, social media writ large, and the smartphone in your pocket—but Amazon has been unusually explicit about its plans. The Everything Store is becoming an Everything Tracker, collecting and leveraging large amounts of personal data related to entertainment, fitness, health, and, it claims, security. It’s surveillance that millions of customers are opting in to.

Welcome to Ring Nation. Smile. You will be on camera.

In their newest report, At the Digital Doorstep, Aiha Nguyen and Eve Zelickson lay bare the implications constant home surveillance has for those who come to Ring-equipped doors. Feeling entitled, customers turn against delivery workers, who find themselves managed both by their employer’s algorithms and by customers acting like bosses.

After the Vorratsdatenspeicherung (blanket data retention) was struck down, again, by European courts, lawmakers in Germany have now proposed a «quick freeze» procedure. Instead of storing everyone’s communications data, this proposal would only allow the data of those accused of serious crimes to be «frozen». The SPD-led interior ministry can’t let go of a terrible idea, even if it smells funny. The end of the saga is anything but clear.

A browser extension by the Verbraucherzentrale Bayern is automatically removing cookie banners. Unfortunately, it’s removing some legitimate content too.

Is proctoring, the use of surveillance technology during digital exams, an encroachment on human rights? The Gesellschaft für Freiheitsrechte certainly thinks so, and is suing the University of Erfurt.

EOL of humanity

Germany has a party which was founded on sunflowers and doing things better. Fast-forward a few decades, and this party has become so ingrained in the political process that its leaders now claim that extracting further coal is somehow good for the climate. At their latest party congress, those greens-turned-brown sanctioned the mining around Lützerath. With this, Germany will certainly miss the 1.5 degree goal. Well done.

Scientists have now discovered that the ice in Antarctica may be melting even faster than previously thought. That is, in the next decade.

After van Gogh in London, activists of the Letzte Generation targeted a Monet in the Barberini museum in Potsdam, Germany, rightfully claiming that all these nice paintings will be worth nothing once we’ve ruined the planet.

How handy that those thought midgets who are somehow paid to write bullshit in feuilletons can work themselves into a rage over climate activists throwing soup at paintings. Meanwhile, Christian Lindner, the German minister of finance and fast cars, wants to deploy fracking in world heritage sites.

It’s postulated that radical actions harm the cause, which might not be true after all. What is true, however, is that the scandal is not those who throw soup, but those who destroy the last bits of future we might have.

A UN report found that in East Africa alone, 89 million people don’t have access to enough food.

Social Mediargh

Facebook lost 70% of its value this year. By now, it seems certain that Zuckerberg will destroy the company.

Facebook was involved in a very strange news cycle too. Here's the gist: The Indian newspaper The Wire published a story accusing Facebook of giving the Indian government access to internal moderation tools. Facebook denied it. The Wire doubled down. Facebook denied it again, in more detail. The Wire tripled down and then pulled out of the conversation.

By now, The Wire has retracted the story, saying they were duped by a (now ex-) employee trying to discredit the newspaper. That’s significant, as The Wire is an independent newspaper, and any dent in its credibility has an outsized impact. Amit Malviya has already announced that he will sue the newspaper for defamation.

The other side of the story is that Facebook’s credibility is so broken that it has to respond to these allegations in great detail. Otherwise no one will believe it when it talks about integrity.

Kanye West, after losing his Twitter and Instagram accounts for spewing antisemitic hate, was quick to announce that he is going to buy Parler – the far-right social media network, that is, not the adjacent cloud hosting provider known (barely) as Parlement.

Meanwhile, Twitter, the hellsite, is now, indeed, owned by Elon Musk. The drama will continue, though. Musk immediately fired the senior leadership and announced that staff would be laid off, too.

The layoffs at Twitter would take place before a Nov. 1 date when employees were scheduled to receive stock grants as part of their compensation. Such grants typically represent a significant portion of employees’ pay.

There’s one problem though: California requires sixty days’ notice for mass layoffs. Smells like court spirit.

Racists, transphobes and the rest of the bigotry parade were quick to jump on the opportunity. Use of racial slurs jumped 500% in the hours after news of the takeover broke. For Musk, taking over Twitter will become hell.

While the bigots come back, Twitter’s most active users have been leaving for years. Ryan Broderick summarised why Twitter can be a pain to be on, and what’s happening to social media at large.

Twitter has never been able to deal with the fact its users both hate using it and also hate each other. There’s a lot of explanations for why. You could argue that by actively courting journalists and politicians early on, it just absorbed the toxic negativity of those spheres. But I think it’s largely about boundaries. TikTok, though its search is beginning to open up the platform more, is relatively siloed. Your TikTok experience and my TikTok experience are, presumably, totally different. And even if we see the same meme or trends on the app, chances are we’re seeing different lenses of it. While on Twitter, because there are no guardrails, content is constantly careening across the whole network. This is what people call the Main Character Effect of Twitter. It is not only possible, but very common for the majority of the site to see the same tweet.

I will not forgive Musk for the fact that, because of him, we constantly have to write about Mastodon.

A whistleblower laid bare the founding of Truth Social.

German police might raid your home for a like on social media.

Tollwerk has published a comprehensive explanation of why you should never use fake Unicode formatting on social media (or anywhere else, for that matter): it is a huge accessibility problem.

The Internet at Large

netzpolitik.org, taz, and Correctiv investigated a German IT company which seems to play a vital role in Iran’s internet infrastructure.

A quick primer: over the last few years, Iran’s regime has been working hard to centralise its internet infrastructure. Currently, there are only four connections between Iran’s network and the rest of the world.

Two of them lead to Germany: a smaller, research-focused autonomous system connects to Frankfurt, and then there is the case of ArvanCloud, an Iranian cloud computing provider. As the investigation reveals, a German company, Softqloud (I guess every cloud trademark has been taken already), runs data centres, acts as a façade to process payments, and registered one of those four autonomous systems connecting Iran to the rest of the world.

Unfortunately, accessibility overlays are still a thing. Promising easy fixes for hard problems, they hurt those they claim to help.

ZDF Magazin Royale and Frag den Staat leaked the NSU report. It’s heavily redacted, but nonetheless documents the utter failure of Germany’s Verfassungsschutz to, you know, actually protect the constitution. EXIF Recherche published a great run-down of the things the report does (not) include. Whatever the police say, they don’t keep us safe – we keep us safe.

If you went seeking a fortune by being brave and followed crypto.com’s advert, you will by now have lost 66% of the money you invested.

But if you need to use Pantone colours in Adobe products, you’ll only have to pay $20 per month or all your colours will turn to black. How emo. Who’s to blame for this? Probably everyone involved.

Breaking the STEM boys club, one Wikipedia article at a time.

Typefaces can symbolise racism, and it is past time to stop using those that cater to such tropes.

Someone has written tweets for venture capitalists and earned $200,000 doing it. The story is hilarious and depressing, an account of how investment changed and got even more broken than before, and a requiem for reality.

Anti-abortion activists have tried to paint a picture of the foetus at ten weeks as an almost complete human being. But what does the reality look like? The Guardian documented it in fascinating pictures that bear very little resemblance to the «pro-life» propaganda.

White tissue in a petri dish. It shows five blobs, each a bit larger than the previous one. A centimetre measure lies under the bowl to clarify the scale. The smallest blob has a diameter of about half a centimetre. The largest is almost eight centimetres long.
Tissue from five weeks of pregnancy to nine weeks. Photograph: MYA Network

In viruses: monkeypox is still a thing. In Uganda, there’s a new outbreak of Ebola, caused by a strain against which existing vaccines don’t work.

Technology was fun once.

It might be fun again: Kilogram. It’s like Imgur, but every image is compressed to 1 kilobyte or less.
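
For the curious, here is a rough sketch of how an image could be squeezed under such a byte budget (my own guess using Pillow – I have no idea how Kilogram actually does it): shrink the picture and lower the JPEG quality until the encoded file fits.

```python
from io import BytesIO
from PIL import Image  # Pillow

def squeeze_under_budget(path: str, max_bytes: int = 1024) -> bytes:
    """Return a JPEG no larger than max_bytes, shrinking and re-encoding as needed."""
    image = Image.open(path).convert("RGB")
    for side in (256, 128, 64, 32, 16):
        thumb = image.copy()
        thumb.thumbnail((side, side))  # keeps the aspect ratio
        for quality in range(60, 4, -5):
            buffer = BytesIO()
            thumb.save(buffer, format="JPEG", quality=quality, optimize=True)
            if buffer.tell() <= max_bytes:
                return buffer.getvalue()
    raise ValueError("could not compress the image under the budget")
```

Most of the charm, of course, is in what a photo still manages to convey once it has been crushed that far.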

And, finally.

we gave morning people way too much power


That’s all for this issue. Stay sane, hug your friends, and be kind to lettuces.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/012/ 2022-10-15T14:12:00.000Z <![CDATA[Bots, AI regulation advances, Facebook does not find its legs, cops are disgusting, and how QR codes work.]]> <![CDATA[

Collected between 30.9.2022 and 15.10.2022.


Welcome to Around the Web, the newsletter generated not by an AI model but by cynicism.

I went to Stuttgart on Thursday to talk about artificial intelligence and its impact on labour and state surveillance. I will link to the recording once it is published.

With this issue, Around the Web passes 500 linked stories from all corners of the web. If any of you clicked on all the links (and have the browser history to prove it), I owe you one. If you do have that history, however, you might want to think about deleting it, or at least getting rid of the cookies.

Before we get to the usual linking, I take a closer look at the impact bots have on social media, specifically.

Beep Beep Bot?

Bots have been a topic of heated discussion since well before the train wreck that is Musk’s attempted takeover of Twitter. Musk claims that there are more bots on Twitter than Twitter said – which served as a pretext for him to try to bail out of buying Twitter, until he changed his mind and wanted to buy Twitter again.

Bots are said to destroy democracy, and CAPTCHAs turn ever weirder in their never-ending quest to tell computers and humans apart. Wired recently published a series of articles on the topic, called Bots Run The Internet.

So, let’s take a moment and talk about bots. And humans. And the internet. And COVID-19. And fun, too. 🎢

The weirdest incarnation of the idea that the Internet has been taken over by bots is the Dead Internet Theory. I’ve linked to it before, but I’ll never not link to it if I have the chance. It simply is the best conspiracy theory ever. It stipulates that the Internet in its entirety is run by bots. Which is genius and completely bogus at the same time.

But, as we see reflected in the Twitter takeover, bots are believed to be a very common phenomenon on social media. And, undoubtedly, they are, right?

The answer to this is more nuanced than it seems at first glance. The main reason is that it’s rather hard to differentiate between bots and humans. When we discuss bots, we often mean a certain behaviour rather than a technical implementation. If an account posts frequently, maybe even advocating a political view we don’t subscribe to, it’s easy to mark it as a bot.
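
To make that point concrete, here is a toy sketch of the kind of behavioural «bot detection» such accusations implicitly rely on (the threshold, names and example accounts are all mine, not taken from any of the linked research): a simple posts-per-day cut-off flags prolific humans just as readily as scripted accounts.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float

def looks_like_a_bot(account: Account, max_daily_tweets: float = 50.0) -> bool:
    """Naive behavioural heuristic: 'posts a lot, therefore bot'."""
    return account.tweets_per_day > max_daily_tweets

# A scripted account and a terminally online human both trip the same threshold.
for account in (Account("scripted_spammer", 400), Account("night_shift_superfan", 80)):
    print(account.handle, "->", looks_like_a_bot(account))
```

Which is exactly the trouble: behaviour alone cannot tell a script from an enthusiast, so the accusation tends to stick to whoever simply posts the most.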

In Bot or Not, Brian Justie traces the history of the CAPTCHA systems built to achieve exactly this distinction, as well as the role of the bot accusation in public discourse.

But those wielding “bot” as a pejorative seem largely agnostic about whether their targets are, in fact, automated systems simulating human behavior. Rather, crying “bot!” is a strategy for discrediting and dehumanizing others by reframing their conduct as fundamentally insincere, inauthentic, or enacted under false pretenses.

In his talk Social Bots, Fake News und Filterblasen, the data journalist Michael Kreil analysed «bots» on Twitter and the difficulties of recognising bots and measuring their impact. Spoiler: it’s complicated. In a follow-up talk, he analysed the impact of bot networks on elections – and whether such a thing exists in the first place.

In the wake of the COVID-19 pandemic, the New York Times took a closer look at the topic.

“So, even if there are a lot of bots in a network, it is misleading to suggest they are leading the conversation or influencing real people who are tweeting in those same networks,” Dr. Jackson said.

While bots on social media might not be as prevalent or impactful as they seem on the surface, there is, however, an increasing volume of automated traffic. This led the developer of the search engine Marginalia to proclaim a Botspam Apocalypse.

The only option is to route all search traffic through this sketchy third party service. It sucks in a wider sense because it makes the Internet worse, it drives further centralization of any sort of service that offers communication or interactivity, it turns us all into renters rather than owners of our presence on the web. That is the exact opposite of what we need.

The sketchy third-party service is, of course, Cloudflare.

There is also the problem (though I wouldn’t really call it a problem) of online ads which are only seen by bots.

While there certainly are bots and a problematic amount of automated traffic, we should be cautious about equating their existence with political influence. Evidence for this claim is thin. At this point, bots – at least on social media – seem to be more of an insult than an injury.

After all this, we shouldn’t forget that bots can be incredibly funny and entertaining. To the bots mentioned in the article, I’d like to add Threat Update, which combines a colour-coded threat level with a more or less nonsensical request. It’s one of the best parts of my timeline.

This ain’t intelligence

Facebook announced their video generation model Make-A-Video. Google followed suit. It should not be a surprise that the data for these models has been scraped from whatever sources Facebook could find. Andy Baio used the release for a closer look at AI Data Laundering, as he calls the practice of big tech companies masking their products as science to later put them into commercial use.

It’s currently unclear if training deep learning models on copyrighted material is a form of infringement, but it’s a harder case to make if the data was collected and trained in a non-commercial setting.

As more and more models do more and more things, AI hype will get louder and louder. To resist this cycle, stick to these tips for reporting on AI (which are also very handy when reading reporting on AI).

Adrienne Williams, Milagros Miceli and Timnit Gebru took another close look at the – often precarious – human labour that powers AI and argue that this labour should become the center of AI ethics.

This episode of The Gradient Podcast with Laura Weidinger on Large Language Models and their ethical implications offers a wealth of knowledge. It should have been in the last issue, but slipped through the cracks.

Two ex-Google engineers started their own company called Character.ai, which lets users chat with bot versions of Donald Trump or Elon Musk. The company is a symptom of a trend in which developers start their own companies to dodge those pesky ethical questions about technological advances. Or, as Timnit Gebru puts it:

We’re talking about making horse carriages safe and regulating them and they’ve already created cars and put them on the roads.

A robot gave evidence in the House of Lords of the UK parliament. It shut down in the middle of giving a (pre-recorded) answer. Which is an apt symbol for the state of robotics, and a sign that Terminator fears are overblown at the moment. Why you would let a robot testify in parliament in the first place … whatever.

Boston Dynamics and five other robot companies pledged that they won’t weaponise their robots. The pledge is a response to several incidents over the last months. In an interview with IEEE Spectrum, Brendan Schulman of Boston Dynamics calls for legislation to enforce this. How it could be enforced in the first place remains an open question. In essence, it’s another example of how tech companies get things backwards. Maybe, instead of building potentially harmful products and realising the ethical complications after the fact, this order should be reversed? What a world that would be.

Tales from the law

Good news from Brussels. According to reporting by Euractiv, European lawmakers seem to favour a broad ban on facial recognition technology through the upcoming AI Act.

While the purpose of the preliminary discussion was precisely to get the views of the political groups out in the open, two European Parliament officials told EURACTIV that there appeared to be a clear majority in favour of the ban.

Keep in mind though that it’s still early in the discussion, and the lobby power of technology companies is nothing to sniff at. Keeping up the pressure will be important in the coming months until the law is passed.

Accompanying the AI Act is the AI Liability Directive. This directive would allow European citizens to sue if they are harmed by AI systems. The problem here is the need to prove that the harm is a direct consequence of AI.

“In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules,” Pachl says. For example, she says, it will be extremely difficult to prove that racial discrimination against someone was due to the way a credit scoring system was set up.

This is difficult enough to prove on its own. On top of that, research shows that humans tend to view discrimination by automated systems with less outrage than discrimination by humans. Bigman et al. call this algorithmic outrage deficit.

The paper further finds that people are «less likely to find the company legally liable when the discrimination was caused by an algorithm». The AI Liability Directive needs to account for this if it is to have an impact.

The Biden administration in the USA announced the AI Bill of Rights, a white paper which could serve as the beginning of a legal framework similar to the AI Act. For now, it is non-binding, though, and reactions have been mixed.

What are you looking at?

The US Department of Defense extended their contract with Palantir. Palantir came under criticism as Bloomberg unveiled their strategy to «buy their way in» to contracts with the British NHS. In this scheme, Palantir was seeking to buy smaller companies which already had contracts with the NHS, thus enabling them to expand their business with lower levels of scrutiny.

A contract with the state police in North Rhine-Westphalia, Germany, exploded in costs and time. Meanwhile, the Gesellschaft für Freiheitsrechte filed a constitutional complaint against the so-called «Palantir paragraph» in NRW’s police law, which allows police to compile and analyse a broad swath of personal data.

And, to end on a good note, Palantir’s stock price crashed by more than 60% year over year. No tears.

In the US, a cop used state surveillance technology to gather data on women, had them hacked, and extorted them with sexually explicit imagery stolen from their Snapchat accounts.

According to a sentencing memorandum, Bryan Wilson used his law enforcement access to Accurint, a powerful data-combing software used by police departments to assist in investigations, to obtain information about potential victims. He would then share that information with a hacker, who would hack into private Snapchat accounts to obtain sexually explicit photos and videos.

Prosecutors recommend the lowest sentence. Fuck, and I can’t stress this enough, all of this.

Cops abusing the data available to them through official channels is – of course – no isolated incident. In Germany, police came under scrutiny for allegedly supplying personal information available to them to the right-wing author(s) of letters threatening people of colour and left-wing politicians.

Apple AirTags are now used to track stolen election campaign signs.

Social, they said

Facebook had their Connect conference, touting virtual reality, the Metaverse, and all the hype that won’t come to pass. For a brief moment, it even seemed like legs had finally arrived in Facebook’s famously torso-centric Horizon Worlds. But, alas, no legs. The sequence showing legs was made with motion capture technology, not the real imagined shizzle.

The virtual reality revolution is so revolutionary that even Facebook’s employees aren’t on board. Likely because they don’t like revolutions? Nah. They don’t use it because it’s buggy and bad. At least, it has found a «creative» new method of tracking: Facial expressions.

Anyway. Facebook not finding its legs is a pretty adequate metaphor for its current state. And I’ll leave it at that.

Nieman Lab had a look at the state of echo chambers and found that most Twitter users don’t have one … because they don’t consume political content in the first place.

In other words: Most people don’t follow a bunch of political “elites” on Twitter — a group that, for these authors’ purposes, also includes news organizations. But those who do typically follow many more people they agree with politically than people who they don’t. Conservatives follow many more conservatives; liberals follow many more liberals. When it comes to retweeting, people are even more likely to share their political allies than their enemies. And when people do retweet their enemies, they’re often dunking on how dumb/terrible/wrong/evil those other guys are. And conservatives do this more than liberals, overall.

The tool Cover Your Tracks is a handy little helper to see if your browser fingerprint is unique.
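In case you wonder what a «fingerprint» even is: roughly, a handful of browser attributes boiled down into one identifier. A minimal sketch, assuming a completely made-up set of attributes (real fingerprinting scripts read these and many more via JavaScript APIs):

import hashlib
import json

def fingerprint(attributes: dict) -> str:
    # Serialise the attributes deterministically, then hash them into a short ID.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/105.0",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "fonts": ["Arial", "DejaVu Sans", "Noto Color Emoji"],
    "canvas_hash": "c0ffee42",  # stand-in for the result of drawing a hidden test image
}

print(fingerprint(browser))  # the same set-up anywhere else yields the same ID

The more unusual the combination of attributes, the more unique – and trackable – the fingerprint.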

Sara Bazoobandi and Dastan Jasim wrote about the socio-economic and ethnic background of the protests in Iran.

Ever wondered how QR codes really work? Me neither. Dan Hollick explained it nonetheless, and now I’m all the wiser and impressed by the technology.

This infographic visualises noise pollution from car traffic, and it’s bad.

Two climate activists threw tomato soup at a van Gogh painting and glued themselves to the wall of the museum. On social media, they were quickly ridiculed. You do not have to agree with those actions, but you should defend the activists and direct your criticism at climate change instead, Nathan J. Robinson argues in Current Affairs.

Why ask what’s wrong with them rather than asking what’s wrong with everyone else? Is not climate change an act of vandalism (and ultimately, theft and murder) far, far worse than the spilling of the soup? If we are sane, should we not discuss the thing they were protesting about rather than the protest itself?

The internet was fun once, and this Internet Explorer 1.0 ad clearly shows this.


That’s all for the last weeks. Read you next time. Stay sane, hug your friends, and enjoy the colours of autumn.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/011/ 2022-10-02T14:12:00.000Z <![CDATA[Summer is over. Winter is coming. Around the Web is back. AI art, deep fakes, and David Attenborough.]]> <![CDATA[

Collected between 29.5.2022 and 2.10.2022.


Around the Web ends its summer break and returns to the regular programming. Whatever regular means these days.

What a summer it has been. Crypto tanked, NFTs are dead. Italy (#girlboss) and Sweden elected right-wing governments. The coolest summer of the rest of our lives, yet marked by droughts and catastrophic floods. Cloudflare was forced to drop Kiwi Farms. Elon Musk still hasn’t bought Twitter; instead they now fight in court. As neither platform X nor billionaire Y can count on my sympathy, I expect the case to be a bucket of popcorn. If the world ends, we might as well get diabetes.

To avoid the end of the world as we know it (sorry), the question to be asked, and collectively answered, is how to organise – and do so fast – to have any shot at a better future. Time’s running out. It’s probably time to stop being picky.

Summer’s most wholesome internet story has likely been corn kid. I’m glad to read that he’s fine.

There’s always hope, and we better never forget this.

On a technical note: I finally added tweet and image support to the newsletter. Yay.

So, what’s up, world?

This ain’t intelligence

The summer has been dominated by generative AI, with images created by algorithms popping up everywhere. Questions of copyright remain, as well as the problem of datasets that scrape the internet without consent.

I have #stablediffusion batch generating street photographs taken in Paris.

It's a known effect that "signatures" appear in images, but here is an example of a specific one: It has learned the @GettyImages watermark!

Compare for example with this: www.gettyimages.ie/detail/news-photo/la-voiture-italienne-arbath-expos%C3%A9e-au-salon-de-lauto-%C3%A0-news-photo/1260060365

Image from Tweet

And of course, there is the question of what creativity is and how these algorithms will impact the future of work. Thorny problems, with no clear answer yet. Which – of course – won’t deter venture capitalists from pouring millions of dollars into the space.

On the intersection of men being toxic and AI: Men Are Creating AI Girlfriends and Then Verbally Abusing Them. I hope we have hit rock bottom with this, but I fear we haven’t.

Human Touch is a great profile of women workers in India annotating the datasets used to train AI models, the perspectives the work gives them, and the global economic dynamics behind it.

India is one of the world’s largest markets for data annotation labour. As of 2021, there were roughly 70,000 people working in the field, which had a market size of an estimated $250 million, according to the IT industry body NASSCOM. Around 60 percent of the revenues came from the United States, while only 10 percent of the demand came from India.

For a brief news cycle, an engineer at Google made AI sentient. It was inevitable. Luckily, this news cycle is over.

A researcher at Google’s DeepMind meanwhile published a paper which stated that AI might end humanity. Google wasn't amused.

As the climate crisis has the world in its grip, the pundits claiming technology will save the world are as loud as always. There is, however, no conclusive evidence that artificial intelligence will help to solve the crisis. On the contrary, it’s still unclear how to reduce the carbon footprint of training and operating these models. It certainly isn't helpful if your data center is using even more water than promised.

This interview with Timnit Gebru is a great read. No surprises there.

The real problem with deep fakes

One rogue field in which artificial intelligence is cause for concern is so-called deep fakes. The technology analyses past audio and video recordings of a person and renders new content based on them.

Amazon will offer to deep fake the voice of deceased relatives. The company touts this as a way to relive memories or have a ghostly voice read a good night story to your kids. While creepy, it makes sense. Most of the products shaping the digital world have a hard time coping with death. After all, they are built to store the amassed data forever and ever; fading away does not fit into this concept.

Has anyone at Amazon thought about the potential for misuse? I would rather not hear that the answer is «No».

Deep fakes are being used for illegal purposes already. A recurring target is Elon Musk. And because crypto has to play a part in every halfway decent scam today, it’s only logical to see Elon deep fakes shilling cryptocurrencies. But other companies are targeted by deep fakes as well.

As with every technology, it will be adopted more broadly as time goes on. And as with every bad thing on the internet, women will feel the brunt of it.

Even if Amazon’s latest product offering doesn’t interest you and you are not interested in Ponzi economics, this means that sooner rather than later you will be exposed to a deep fake, be it as a form of harassment or as an attempt to sell you something.

Luckily, there are tell-tale signs of deep fakes, and it is possible to spot them.

The great thing about technology is that … ah, fuck it. On the Horizon: Interactive and Compositional Deepfakes. Yes:

Interactive deepfakes have the capability to impersonate people with realistic interactive behaviors, taking advantage of advances in multimodal interaction. Compositional deepfakes leverage synthetic content in larger disinformation plans that integrate sets of deepfakes over time with observed, expected, and engineered world events to create persuasive synthetic histories. Synthetic histories can be constructed manually but may one day be guided by adversarial generative explanation (AGE) techniques. In the absence of mitigations, interactive and compositional deepfakes threaten to move us closer to a post-epistemic world, where fact cannot be distinguished from fiction.

If you don’t feel like reading the paper, Davis Blalock has summarised it in a tweet thread.

"On the Horizon: Interactive and Compositional Deepfakes"

So you probably know that neural nets can generate videos of people saying stuff they never said. But Microsoft’s chief science officer articulates two threats beyond this that could be way worse: [1/11]

Image from Tweet

Is it me you are looking for?

No fake (hopefully): Europe edges closer to a ban on facial recognition.

The support from Renew, which joins the Greens and Socialists & Democrats groups in backing a ban, shows how a growing part of Europe's political leadership is in favor of restrictions on artificial intelligence that go far beyond anything in other technologically-advanced regions of the world including the U.S.

Vorratsdatenspeicherung, a German project to retain internet connection data on all citizens, has been declared illegal by the European Court of Justice. Again. German governments have been trying to pass different forms of it for fifteen years now.

PimEyes’ new CEO has been trying to improve the public image of his company, declaring his tool is not spyware but digital self-defence and that the user is the problem. Yeah, well, no thanks.

Tesla is a surveillance company that builds cars.

The Internet at Large

YouTube still has a harassment problem. And their dislike buttons don’t work. What does work, however: serving creepy content to kids.

Is TikTok boring yet? Probably. They remain on top of the social media hill, though. For now.

Apropos of TikTok, the Food and Drug Administration in the USA tried to be modern, and instead promoted a not-so-viral not-really-challenge. If you want to read the non-alarmist history of the NyQuil chicken, Ryan Broderick has got you covered.

The internet shutdown in Tigray is nearing its second anniversary. The regime in Iran, too, reacted to the current uprising by shutting down the internet. Elon Musk PR’d to the rescue, announcing he would «unlock» access to Starlink internet in the country. It won’t help much.

In good people: David Attenborough holds a special place in many people’s hearts, including mine. Rachel Riederer’s account of his work is therefore well worth a read: The Lost Art of Looking at Nature

In bad people: Mother Jones published a portrait of Blake Masters.

Cheating. It has been a while since I’ve been exposed to it – I don’t play video games anymore and left school a while ago. Still, cheating is alive and well, as Matt Crump details in his post My students cheated … A lot.

Ever wondered how to draw a crab? Wonder no more.


Let’s see what the winter brings. Stay sane, hug your friends, and shut down fossil infrastructure.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/010/ 2022-05-28T14:12:00.000Z <![CDATA[Rentier capitalism and expropriation, AI models large and larger, the EU tightens its border regime, Elon Musk speed-runs fascism, and what prison inmates did to police cars.]]> <![CDATA[

Collected between 16.5.2022 and 28.5.2022.


Around the Web maintains its biweekly publishing schedule for now because the author still hasn’t fully recovered from their sicknesses of the last few months.

Given the current dooming news cycles, that might be a feature.

It also means that I made it to issue No. 10 a bit later than expected. Nonetheless, I’m happy to have made it this far. My normal rate of abandonment left me wondering whether I’d publish more than three issues.

After ten issues, some pieces have fallen into place, and I’ve found my beat (ranting while hoping for a better world). To celebrate the 10th issue, I’ve built a little statistics page.

If you enjoy Around the Web, it would mean the world to me if you recommend the newsletter and/or website to a friend or two.

Thanks for reading, let’s get into the reading.

Enjoy capitalism

While last year saw the Great Resignation and everyone was happy on r/antiwork, this year sees the Great Layoffing. Gorillas, Getir, Klarna, Nvidia, Netflix – the list goes on and on, all terminating contracts to make their investors happy.

High-flying startups with record valuations, huge hiring goals and ambitious expansion plans are now announcing hiring slowdowns, freezes and in some cases widespread layoffs. It’s the dot-com bust all over again — this time, without the cute sock puppet and in the midst of a global pandemic we just can’t seem to shake.

Everything that is wrong with venture capitalists. Josh Gabert-Doyon thankfully deconstructed the most recent «VC is awesome because it is money» cold take, tracing VC firms back to whaling expeditions.

There’s a long history of rich people throwing money at stupid projects, and VC investment is best seen as a systematized method for cutting the risks involved.

Rich people throwing money at stupid projects? Andreessen Horowitz set up a new $4.5 billion fund for crypto projects. When the likes of a16z talk about «building a better internet», it’s time to run.

Who else is shilling cryptocurrencies? White supremacists. Cryptocurrency possession is above average among white supremacist influencers. I wish you all a very happy di…, sorry, downward slope.

In other capitalism, Trevor Jackson reviewed Rentier Capitalism: Who Owns the Economy and Who Pays for It?. In the review, Jackson makes a compelling argument against rentier capitalism.

What is new about the rentiers of today, then, is not their prevalence, their dominance, or that they face less serious opposition than in the past. What is most distinctive about our contemporary rentiers is that it has become difficult to discern whether their maneuvers represent rational strategies of elite wealth defense in conditions of declining productivity and technological change, or instead, the implacable drive of a nihilistic death cult.

Reading this article left me with Gwen Guthrie’s Nothing Goin’ On But the Rent stuck inside my head (it’s a good tune, don’t hesitate).

That even the battles against rentier capitalism that are won are not really won is currently on display in Berlin. Last September, fifty-nine percent of Berliners voted to expropriate companies owning a large swath of housing units. Since then … nothing happened. Franziska Giffey, the social-democratic mayor, is a vocal opponent of anything threatening the rule of capital. As such, the Social Democrats actively delay an expert commission and keep promises only when they benefit the housing companies.

At the same time, politicians seriously wonder why election turnout is hitting lows and people think politicians are lying. Go figure.

We interrupt the reporting on current capitalism and return to capitalism’s beginnings. The New York Times published an extensive report about Haiti’s path to independence, and the crushing debt regime that France imposed in an act of vengeance. The work is respectable, though the NYT claimed that the story is brand-new reporting. Which is a lie. Michael Harriot published the same story back in 2018.

Back to the present. netzpolitik.org continued their reporting on digital colonialism with a look into the bloody supply chain of today’s devices.

The extractivism of our economy, paired with inflation, is now a threat to the transition to green(er) energy.

Apple is increasing the minimum wage for their retail employees. They will pinky swear that it has nothing to do with the union drive in Apple Stores.

This ain’t intelligence

According to some at Google’s DeepMind, AI is now in fact almost intelligent. Do I need to change my headline? DeepMind published a paper about Gato, a new kind of machine learning model capable of learning multiple tasks at once.

Previous models could, for example, play Go or StarCraft, but needed to forget everything about the previously learned skill to learn the next. Gato can perform 604 tasks. There are limitations: Gato is generally worse at those tasks than specialised models. So if you read anything claiming that Artificial General Intelligence is near, forget about it.

Some external researchers were explicitly dismissive of de Freitas’s claim. “This is far from being ‘intelligent,’” says Gary Marcus, an AI researcher who has been critical of deep learning. The hype around Gato demonstrates that the field of AI is blighted by an unhelpful “triumphalist culture,” he says.

While Gato is certainly an interesting piece of technology, Large Language Models – which attracted a lot of hype over the last years – are certainly here to stay. Eliza Strickland took a closer look at Facebook’s OPT-175B model (see also Around the Web 009).

If you want to get up to speed or back on track on the common criticisms of such models, this article by Emerging Tech Brew is a great introductory resource.

In The Markup’s newsletter, Julia Angwin interviewed Timnit Gebru on the same topic, and as always when Gebru speaks, it’s worth a read. Gebru reflects on the environmental and societal problems of the race to build ever larger models.

Currently, there is a race to create larger and larger language models for no reason. This means using more data and more computing power to see additional correlations between data. This discourse is truly a “mine is bigger than yours” kind of thing. These larger and larger models require more compute power, which means more energy. The population who is paying these energy costs—the cost of the climate catastrophe—and the population that is benefiting from these large language models, the intersection of these populations is nearly zero.

Besides Gato, Google announced that they built an image generation model that is in direct competition with OpenAI’s DALL-E. Like DALL-E, Google’s Imagen takes a text input and generates pictures from scratch. All we can see of it, though, is heavily filtered promotional material.

There’s a technical, as well as PR, reason for this. Mixing concepts like “fuzzy panda” and “making dough” forces the neural network to learn how to manipulate those concepts in a way that makes sense. But the cuteness hides a darker side to these tools, one that the public doesn’t get to see because it would reveal the ugly truth about how they are created.

There is no beta that’s usable by anyone outside of Google, as Google is scared of possible abuse:

Downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo.

I do not agree with everything said in it, but this interview with Kai-Fu Lee about AI and the future of work makes some interesting points. Thinking about a way to employ AI and humans side by side is especially worthwhile, even though – or rather because – it surfaces the need to criticise the capitalist mode of production.

Bitch Magazine published a worthwhile essay on algorithms and how they shape our way to remember.

But as we slip deeper into the reality of being able to catalog and retrieve—theoretically—all our life’s experiences, we lose a degree of autonomy over what French philosopher Jacques Derrida (I’m so sorry) called “archive fever”—a drive to document that becomes a compulsion to collect everything, leaving the archive overflowing and unreadable. This is the very problem that Big Tech purports to solve via memory features, promising that its algorithms will remind us of everything worth remembering. But the metric for what’s worth remembering is fundamentally unknowable. We change, we move, we make new friends, we outgrow old pastimes. More importantly, when Apple, Google, and Facebook continually demonstrate that their users are simply data to mine, why trust them to begin with?

Algorithms shape the way we speak, too. Some weeks back, I wrote about Algospeak and how social media algorithms force their users to adapt speech to avoid shadow-banning. Wired zeroed in on mental health and how talking about being «unalive» arguably worsens the discourse about suicide and mental health.

Williams worries that the word “unalive” could entrench stigma around suicide. “I think as great as the word is at avoiding TikTok taking videos down, it means the word “suicide” is still seen as taboo and a harsh subject to approach,” she says. She also swaps out other mental health terminology so her videos aren’t automatically flagged for review—“eating disorder” becomes “ED,” “self-harm” is “SH,” “depression” is “d3pression.” (Other users on the site use tags like #SewerSlidel and #selfh_rm).

What are you looking at?

Welcome to the week in surveillance. Before we start looking at current measures in the European Union, it’s nice to see that PimEyes comes under closer scrutiny. The NYT went after them. PimEyes’ current owner frantically denies that they are building stalkerware, insisting that their technology should only be used to search for photos of oneself. The only thing they demonstrate with such a statement is that they understand neither technology nor humans.

Panoptiropean Union

The European Data Journalism Network published an investigation into smart border control measures imposed by the EU, and the multi-billion dollar surveillance industry enabled by ever more control. As Matthias Monroy reports, one of those projects is the European System for Traveller Surveillance (ESTS). A joint venture between Frontex and Europol, the ESTS is effectively a predictive policing network which will scan every traveller coming into the EU – including EU citizens – by linking multiple existing databases, including those containing biometric data.

With pre-screening, the agencies want to make predictions as to whether travellers might be dangerous. This is aimed primarily at persons from third countries. However, a „traveller file“ is also to be created for EU citizens when they cross the border.

Predictive policing is known to reproduce whatever biases the society employing the technology carries. With an agency like Frontex, which is time and time again accused of ignoring human rights, it’s frighteningly easy to imagine how the system will be (ab)used.

And, of course, it will use Machine Learning because fuck everything.

There’s another project set to become effective later this year. The Entry/Exit System aims to keep tabs, and data (of course including biometric data), on every human travelling to the EU. Belgium’s Federal Government passed a law to implement its part of the system.

With the new system, each time travellers from non-EU countries (both short-stay visa holders and visa-exempt travellers) cross an EU external border, they will be registered in the automated IT system using their name, type of the travel document, biometric data (fingerprints and captured facial images) and the date and place of entry and exit.

But it is not only the EU’s external border: border policing is also being stepped up inside the Schengen area.

While the EU tightens its border regime, the landmark data protection law, GDPR, is so far underperforming.

In other surveillance

Not surveillance: The company behind Proton Mail has rebranded itself as Proton. Its focus remains building digital products sans the surveillance.

Amazon uses Alexa’s voice data to target you with ads. I guess that’s no real surprise, but maybe helpful to convince people who have such a surveillance device at home to get rid of it.

Meta agreed to share political ad targeting data.

The problems for Clearview AI, poster child of surveillance capitalism, continue.

The U.K.'s Information Commissioner’s Office, the country's privacy watchdog, has ordered facial recognition company Clearview AI to delete all data belonging to the country's residents.
Clearview has also been ordered to stop collecting additional data from U.K. residents and will pay a fine of roughly $9.4 million for violating the country's data protection laws.

Clearview AI ordered to delete all data on UK residents

How does Clearview react? By trying to implement school surveillance systems.

Police in San Francisco are using driverless cars as mobile surveillance cameras. Because, of course, they do.

Cars. General Motors disclosed a data breach, losing driver data. Remember, if you collect data, it will get stolen.

Social, they said

Let's start with something obvious. Jeff Bezos does not know how Twitter works.

That’s in stark contrast to Elon Musk, who on Wednesday used Twitter to announce that he isn’t voting for Democrats any more. The tweet is part of his recent speed-run attempt in the category Tech billionaire to fascist any%. On Friday, he met with Jair Bolsonaro, far-right president of Brazil.

The week before, Insider reported about a settlement between SpaceX and a flight attendant who accused Musk of sexually harassing her. Musk denies the allegations. His argument: Nothing ever came to light before. Well, okay, Elon. He allegedly said «If you were my employee I would fire you» to his first wife during their marriage … seems weird, man. China, meanwhile, is reportedly contemplating shooting Starlink satellites out of Earth’s orbit. Let’s do it.

Elon Musk has not bought Twitter yet. Twitter seems eager to enforce the takeover, however. Some Twitter stakeholders sue Musk.

Apropos of Twitter, after the lay-offs at the beginning of the month, more senior leadership left the company.

Remember the Best Viewed in IE 8 badges on old websites? Adrian Roselli had some fun rebuilding them with modern HTML and CSS.

Is there a use for blockchains? Maybe! Residents in Shanghai use the technology to keep records of the imposed lockdown and combat censorship.

Another surprise: There’s a ransomware gang, seemingly based in India, that forces their targets to do acts of «philanthropy». I guess that’s … still a bad thing. Good effort, though.

People knew how to make bread 14,400 years ago.

It’s funny because it’s true: Prison inmates photoshopped a pig into the Vermont State Police car decal.

To end this issue, here’s a very funny thread on Twitter.

When I was 7, my teacher told us to write an article about “world cultures” for school over the weekend. I remembered it late on Sunday so in a panic I made up something called the "Icelandic Fish Festival", figuring said teacher wouldn’t know either way.


That’s it for this week. The world is mad, now more than ever: Stay sane, hug your friends, and please for the love of god never trust a cop.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/009/ 2022-05-15T14:12:00.000Z <![CDATA[Roe v Wade & privacy, crypto & and its big crash, Europe & an attack on encrypted messaging, how a mechanical clock works, and an anthem to women riding bicycles.]]> <![CDATA[

Collected between 2.5.2022 and 15.5.2022.


There was no Around the Web last week because I became very sick over the weekend. I spent this week lying in bed, too, and here I am still – recovering but also a bit annoyed. I had much better plans.

Annoyance, fittingly, is the topic of this newsletter: The last weeks saw the publication of a draft legislation threatening to overturn Roe v Wade in the USA, the inevitable collapse of the crypto market, an unrelenting heat-wave in south-east Asia, and a new attack on end-to-end encryption by the European Union.

This explanation of how a mechanical clock works, complete with interactive 3D visualisations, is marvellous. As it’s the best thing I read, I’ll leave it right in the intro.

So, how did the world turn? Let’s find out.

Roe v. Wade & the end of privacy

On May 2nd, Politico leaked a draft opinion of the US Supreme Court.

Abortion is not the divisive issue it is made out to be, though. A large majority of Americans supports safe abortion. What we see here is a decades-long coordinated attack by an ever-radicalising far right, aiming to undermine human rights. Laurie Penny published the chapter on abortion and reproductive freedom from her book Sexual Revolution.

These laws are not about the ‘right to life’. They are about enshrining maximalist control over women as a core principle of conservative rule. They are about owning women. They are about women as things.

As Charlie Warzel notes, the right is obnoxious even when they are winning.

The sore-winner complex highlights a fundamental asymmetry between the style of culture warring employed by the left and right. The right’s vision is ahistorical and logically confused, but more importantly, it is relentless. There is no appeasing this type of politics. It is a politics that will manage to use its victories to stoke additional fears inside its voters. For the media, there is no amount of evenhanded or both-sides coverage that will get the right to back down from calling the press illegitimate, biased, and corrupt. For non-Republican politicians, there is no amount of bipartisan language or good faith attempts at dialogue or engagement that will inspire bipartisanship, compromise, and a desire for majority rule. For the right, even in victory, there is only grievance and fear.

But there’s another angle I want to take a closer look at, which is how surveillance capitalism enables criminalisation. The day after Politico’s leak, Vice published an article which showed how incredibly easy it is to obtain location data of women visiting abortion clinics.

You do not need to be somewhere physically, mind. As Lil Kalish reports in Mother Jones, getting information about abortion or attending telemedicine sessions leaves a digital trace.

Now, if abortion gets outlawed, these trails all of a sudden become evidence. And instead of merely annoying ads, you might get a visit from the cops.

News coverage of digital forensics often celebrates its role in prosecuting serious felonies. But when it comes to reproductive rights, Conti-Cook says, the same tools “will be a powerful [asset] to police and prosecutors in a more criminalized landscape” for abortion seekers.

This whole situation is also the topic of the latest edition of T3thcis. Shout out, and if you want to get more tech ethics news in your inbox, you should follow them anyway.

The lesson here is to always protect your digital trails. Data is forever, legislation changes. Digital self-defence is the only viable way to protect yourself. Shoshana Wodinsky over at Gizmodo published a hands-on guide on how to stay protected.

In related news, user data from gay dating-app Grindr was sold through an ad network, too.

Enjoy capitalism

NFT trading has been on a downward slump for a while. Coinbase’s stock crashed 50% over the last year.

But this week Bitcoin and Ethereum, too, saw their prices collapse. At the time of writing, Bitcoin seems to have stabilised at around $30,000, the lowest price since December 2020 and down more than 50% from its all-time high in November 2021.

In El Salvador, Bitcoin is legal tender, which will become a huge problem for the country’s economy should Bitcoin collapse further. Most citizens have already abandoned their wallets amid security concerns.

The most dramatic story has probably been TerraUSD. Terra is an algorithmic stablecoin. Or: a Ponzi scheme. Rusty Foster tried to explain stablecoins as simply and as enraged as possible, and I won’t even try to do a better job, as I would inevitably fail. Over the last week, Terra collapsed completely, taking with it some $18 billion.

Terra, and other stablecoins, have not been without criticism. Especially the algorithm-flavoured variants, which are backed by basically nothing. Terra seems to be gone for good. But the volatile market remains.

Given that Bitcoin is a direct response to the 2008 financial crisis and an attempt to do things better, all this feels a lot like the 2008 financial crisis, except worse.

Cryptocurrency trading throws around alleged millions and billions. Those numbers are fictions built on fictions, with a much smaller—but still real—amount of actual money at the bottom. The gateways to genuine dollars are narrow and have yet to be significantly breached. But that’s not for lack of effort from the cryptocurrency world, whose endgame appears to be to make cryptocurrency systemic and leave the government as the bag-holder of last resort when the tottering heaps of leverage fall down. It worked in 2008, after all.

This collapse spells disaster for those who had not much to lose and held their only assets in Terra. Which is the bitter thing. While it’s fun to poke at the crypto bros and their stupid ideas, the ones suffering are not the ones we laughed at.

Maybe have some Bored Apes for breakfast, or are you choking anyway?

To unchoke you, Molly White gave an interview to the Harvard Business Review, and talks sense about web3 and how it is a solution without a problem.

While crypto is imploding, tech companies lost a combined $1 trillion in market value over the last week. Facebook announced that it will largely stop hiring across the company. Now there’s good news after all.

Speaking of good news, the unionisation drive across Starbucks in the USA continues, leaving the company on its back foot. Someone leaked the anti-union talking points Apple uses in their stores. Which are the same as in every other company. Think different, huh.

Amazon is on a firing spree. It fired two union organisers as well as senior managers in the Staten Island warehouse which voted to unionise some weeks back.

In other capitalist news, Buy Now, Pay Later schemes are sending young people into a debt spiral.

Space, reduced from the final frontier to a billboard. SpaceX and Blue Origin spend more and more money on lobbying. As I’m sick, I also had the time to watch the new Netflix documentary on Musk’s SpaceX. Despite it being a two-hour-long PR puff-piece, you walk away with the impression that Musk has not the slightest idea how a rocket works. Which is a remarkable feat after running a rocket company for almost two decades.

News from outer space: There are earthquakes on Mars. The idea of Musk and Bezos flying to Mars only to have their nice colonies destroyed by a marsquake is, frankly, what kept me laughing while in hospital.

This ain’t intelligence

Sigal Samuel wrote a great piece for Vox, exploring the notion of fairness in AI, and why it’s so hard to build fair systems.

Computer scientists are used to thinking about “bias” in terms of its statistical meaning: A program for making predictions is biased if it’s consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.) That’s very clear, but it’s also very different from the way most people colloquially use the word “bias” — which is more like “prejudiced against a certain group or characteristic.”

The whole piece is not only interesting when thinking about AI, but for society as a whole.

Over at netzpolitik.org, there is a new article series on digital colonialism. The first article covers the Global labor chains of the western AI.

At the same time, the outsourcing of digital work in the Global South is inextricably linked to exploitive labor practices employed by foreign firms. The digital labor market in these regions is rampant with low wages, harsh working conditions, alienation, income disparity, racism, stress, and lack of global recognition.

Facebook published a new Large Language Model. The model, Open Pretrained Transformer (OPT-175B), is a 175-billion-parameter model which – according to Facebook – aims to compete with OpenAI’s GPT-3. It is, as Arthur Holland Michel noted, certainly doing so in terms of toxicity.

Comparing it to GPT-3, another language model released last year, the team found that OPT-175B ‘has a higher toxicity rate” and it “appears to exhibit more stereotypical biases in almost all categories except for religion.”

Given that Facebook trained the model on unmoderated Reddit comments, this seems about right. So, what we have got was not a generous gift to the research society, but an open-sourced hate spewing monster. Slow clap, Facebook.

GPT-3 is now used to write marketing copy. This might be one use of AI where I don’t object. Simply because marketing copy is the worst.

What are you looking at?

The big news in surveillance this week was certainly the EU proposal aiming to scan private chat messages for child pornography. Which seems laudable at first. Which is until you realise that such measures are useless, not even child protection organisations want them, and they are bound to end encrypted communication as we know it. Should the proposal pass, EU citizens would be subject to a level of control commonly associated with the likes of China.

Of course, there are those who have something to gain if the EU decides to end privacy and implement such measures. Namely, the companies developing the analysis tools. One of those is run by Ashton Kutcher, who is already lobbying in Brussels.

Cities in the USA are trying to reverse bans of facial recognition technology.

Clearview AI settled a lawsuit which limits the sale of its product to private companies.

Thousands of popular websites see what you type – before you submit. It will not surprise you that a script from Facebook is among the culprits.

Express VPN advertises with far-right figure Ben Shapiro.

Social, they said

Facebook, reportedly, used the Facebook Pages of Australian public sector organisations as a bargaining chip.

According to internal documents and emails provided to The WSJ, Facebook not only shoddily took down pages for the Children’s Cancer Institute, women’s shelters, and fire rescue services (during fire season, no less), they prevented certain COVID-19 info pages from reaching users during initial vaccine rollouts. Facebook slowly restored these pages a few days later, following tentative alterations to Australian legislation regarding compensating publishers for their original news content.

While we’ve seen more states coming to terms with social media companies, the Federal Trade Commission (FTC) has rediscovered a tool that cuts the problem at its roots: algorithmic destruction. The slightly aggressive name basically means: if a company collects data it should not collect and uses an algorithm to facilitate that data collection, it not only needs to delete the data, but destroy the algorithm for good measure.

Oh hahaha. Elon Musk’s Twitter bid is on hold. Presumably because he fears that bots make up more than 5% of Twitter’s user base. This will likely exclude those bots who root for Mr. Musk. Twitter’s legal team came after him for violating an NDA. Twitter, of course, does not need Elon Musk for a bit of turmoil. This week Twitter’s Head of Product and Head of Revenue were ousted from the company. The former while being on parental leave.

The Atlantic asks what happened to Jon Stewart and if, just maybe, his enemies might have prevailed.

The ESC happened yesterday, a festival of weirdness and somehow also music. Laurie Penny argues it is Europe at its best, which is true.

And while we are at it with the singing, let’s end this issue on a beautiful note: The Brooklyn Youth Choir sings Anthem for Women’s Freedom of Body and Mind, an anthem to the liberating role the bicycle plays for women.


That’s it for this week. What fun we had. If you like Around the Web, feel free to show it to a friend who likes Around the Web. Thanks for reading. Stay sane, hug your friends, and see you next week.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/008/ 2022-05-01T14:12:00.000Z <![CDATA[Facebook does not know what it is doing (with your data), someone bought a website, others have no internet, and 185 hellos from British Columbia.]]> <![CDATA[

Collected between 19.4.2022 and 1.5.2022.


Happy Labour Day, everyone.

Let’s remember those who lost their lives fighting against capitalism, making the world a better place, and keep up the fight.

I spent multiple days without looking at a computer, or work. 10/10. While I’ve been looking away, the internet has been revolving around a billionaire, mostly. While the rest of the world was busy bombing itself to pieces and burning the remains to the ground. 0/10.

Other people have been busy writing, and here’s what I read:

Social, they said

Facebook does not know what it is doing (with your data). According to a leaked document, all the data Facebook collects is ink flowing into a lake, while no one has the slightest idea how to control it.

In other words, even Facebook’s own engineers admit that they are struggling to make sense and keep track of where user data goes once it’s inside Facebook’s systems, according to the document.

netzpolitik.org’s Unsplash game is on point as usual.

While they have no idea where the data goes, they, of course, know how to do harm, amplifying pro-eating-disorder content.

They, too, still have no clue how to tackle misinformation. Do they even try? Misinformation on Facebook’s platform is running rampant in Africa, with Facebook on the outside looking in.

When I first saw the title of this video, I thought that someone was playing an elaborate prank with John Cage’s 4'33". But, as we live in the worst timeline, I was – of course – wrong. Twitch is copyright claiming literal silence. Twitch is also cutting creator payouts as overlord Bezos is becoming jealous of his billiomate Elon.

Speaking of whom!

Elon from Twitter

Okay, so Elon Musk bought Twitter and the fallout so far has been mesmerising. Musk comes up with stupid idea after stupid idea, the right is buzzing with enjoyment, and the rest of us are trying to parse what happens next. Musk apparently thinks he is a billionaire with an ego problem.

Tumblr, still around, meanwhile saw a considerable bump in user registrations. Is this a sign of things to come? Maybe. Read Ryan Broderick’s theory of a breach of containment.

Of course, nothing has happened yet, and the current news cycle is driven by Musk’s frantic tweeting and little of newsworthy substance.

If there’s any, it’s the impact Musk’s free-speech mania shitposting has on everyone who is not a cis-het white male.

I can’t help but get the feeling that the whole thing has the same vibes as some years ago, driven by the Twitter usage of a certain ex-president of the USA. I’m not alone with this observation:

Like Trump, Musk puts his critics in a real bind. Broadly speaking, in an attention economy there’s no satisfying way to deal with people like them. There’s a circular thing where they command attention because they have some kind of power (fame, money, etc.) but, increasingly, their ability to command attention also grants them power (to influence/program the news cycle, amass cult-like followings, enhance their businesses).

Charlie Warzel – Elon Musk Is Already Grinding Us Down

A point that is underreported, even after the hundredth piece on what might, might not or who knows happen, is that we can’t solve society’s problems with Twitter, regardless of who owns it.

The dissociation of truth from the fabric that holds our world together has been going on for a while. Somehow this piece from The Awl (R.I.P.) came up again. It was published a century ago in 2016, and reminds us that right-wing pundits have been trying to dissipate what’s left of a shared understanding of, well, anything since at least 2004.

As we speak about Twitter anyway

Where have all the tweets gone? Twitter throttled tweets mentioning the HBO docuseries Q: Into the Storm. Twitter said they did so because they wanted to avoid amplifying QAnon. Which is quite ridiculous, given that the docuseries tries to dismantle the Q world-view. It certainly has nothing to do with the fact that the documentary criticises Twitter for enabling Q in the first place.

And now back to our regular 2022 schedule: Snickers dick vein. A Twitter user named Juniper dropped the «fact» that Snickers is removing the dick vein from its sugar bars. Which, obviously, is trolling. Which, obviously, didn't stop the right-wing social media power stupids from getting outraged. So angry that Snopes was forced to publish a fact-check. I’ll stop laughing now.

What are you looking at?

After the USA left Afghanistan, and the Taliban took over, they also got control of the installed biometric surveillance systems. Having biometric data in the hands of a terrorist government is as bad an idea as it sounds.

Apple introduced its App Tracking Transparency to much fanfare last year. Facebook’s revenue took a hit because of it. But, in the end, tracking might still be possible.

Taking an Uber after trying to overthrow the US government? Bad idea.

Palantir is reportedly set to close a new deal for a data platform of the British NHS.

This ain’t intelligence

Julia Angwin interviewed Meredith Broussard on AI ethics and whether it is possible to build AI products that embed as little bias as possible. Meredith Broussard is the author of the books Artificial Unintelligence: How Computers Misunderstand the World and More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.

Every time there is some supposedly new, world-changing AI system, it turns out that the problems of humanity are just reflected inside the computational system. Honestly, I’m a little tired of the narrative that computers are going to deliver us. I think the narrative itself is tired.

The MIT Technology Review published a series of articles on Artificial Intelligence and how it embeds the remnants of colonial domination. In its introduction, Karen Hao writes:

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

Bot Populi answers how we can decolonize and depatriarchalize AI. The piece also features the most succinct description of AI I’ve read so far: «Artificial intelligence is the holy grail of capital accumulation and socio-political control in contemporary societies.»

As a response to datafication, algorithmic mediation and automation of social life, communities worldwide are trying to pursue justice on their terms, developing the technology they need, committing to the community’s best interests, and building pathways to autonomy and a dignified life. We have explored some such initiatives and the ideas underpinning them below. These initiatives provide insights about different dimensions of AI technologies: feminist values applied to AI design and development, communitarian principles of AI governance, indigenous data stewardship principles, and the recognition of original languages and cultures.

Not AI, but colonialism nonetheless: A group of crypto bros is trying to buy an island (yes, again). The cryptonians are once again showing how capitalism with cryptocurrencies is only capitalism after all, and the Jacobin piece does a fantastic job showing how we ended up where we are.

Fantasies of libertarian exit from society were not uncommon at the time. The 1960s in the United States was as much the heyday of market libertarianism as it was of New Left anti-capitalism. Fears of demographic, ecological, and monetary collapse, combined with anxieties over the activities of social movements seeking racial, gender, and economic justice and redress, hastened efforts to find ways to abandon the sinking ship of state and to start anew elsewhere.

Providers of facial recognition technology are no fans of transparency and accountability.

The Tech & the Web

Rest of World analysed the history of internet shutdowns. Once Egypt opened Pandora’s box, shutting down the internet to quell dissent, it became a favourite tool in the box of governments around the world. It’s not only bad for protests but also bad for business, as shown by the example of Kashmir, the region most affected by internet shutdowns in the world.

Hence the practice of blocking the internet outright has given way to more nuanced approaches. As witnessed in Russia, where the once chaotic infrastructure powering the internet has been centralised, and internet service providers are required to install government-provided control software.

Meanwhile, Russia is the target of wave after wave of attacks by Ukraine’s «IT Army» and hacktivist groups like Distributed Denial of Secrets. So far, they have yet to prove any real impact on the war, and the all-out cyberwar some analysts were predicting is not happening.

Displaying colours on the Web is broken, and browsers do not seem to care.

Enjoy capitalism

Apple enlists union-busting lawyers as more retail locations organise. Everybody is surprised. Except they are not.

The world is burning

In Around the Web 006 I mentioned Tesla’s Gigafactory in Brandenburg, Germany, and how the surrounding area is increasingly susceptible to drought. Unfortunately, not much has changed. But the problem is more complex than Tesla.

On the blockchain, nothing is safe, but everything is forever. Binance, meanwhile, snitched their customer data to the Russian state – because of course they did.

What does it take to humiliate a bunch of conspiracy theorists and their trucks? Eggs suffice.

A consortium of journalists found that Frontex is misclassifying illegal pushbacks in an internal database. Fabrice Leggeri, long-time head of Frontex, finally resigned.

And finally, Justin McElroy ranked 185 welcome signs in British Columbia. That’s the content I’m on the internet for.


That’s it for this week. If you’ve made it this far, why not recommend Around the Web to a friend?

Until next week. Stay sane and hug your friends.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/007/ 2022-04-18T14:12:00.000Z <![CDATA[AI keeps snake-oiling, bankrupt surveillance, cultivating memory, evading algorithms, how space became a billboard, and a browser mitigating tremors.]]> <![CDATA[

Collected between 10.4.2022 and 18.4.2022.


A wild potpourri awaits you, dear reader, in this week’s issue. We are travelling from ancient history to the latest in AI snake oil. Turkey weighs a war against Kurdistan; I will cover this in the next issue. For now, I need to recover from my second bout of sickness in four weeks.

On a technical note: Images are currently not displayed in the RSS feed and, consequently, the newsletter. You can see everything on the website. I aim to fix this before publishing the next issue, and use more images going forward.

Enjoy the read.

This ain’t intelligence

A new Machine Learning model is reportedly able to spot depression in 88 percent of participants based on their tweets. Which is of course a terribly bad thing to do. Once such a model gets into the wild, how do you control who uses it to analyse whom?

However, the bot can also be used after a post has made it into the public domain, potentially allowing employers and other businesses to assess a user’s mental state based on their social media posts. It could be used for a number of reasons, the researchers say, including for use in sentiment analysis, criminal investigations or employment screening,

No matter how good or bad, such models should never exist.

Giving the battered mind no respite, we will see emotion detection implemented in Zoom and other products in the upcoming months. The company promises to use ML models to help analyse digital sales pitches. It’s «ironic» that not even the companies building these snake oil products really believe in them.

But Ehlen recognized the limitations of the technology. “There is no real objective way to measure people’s emotions,” he said. “You could be smiling and nodding, and in fact, you’re thinking about your vacation next week.”

It is, of course, impossible for machine learning models to determine human emotion accurately.

This impossibility does not stop models from spreading into ever more areas of life: Intel is reportedly seeking to build emotion detection into classroom software.

The NYT Magazine published a long read on OpenAI’s GPT-3 Large Language Model. Unfortunately, it did a mediocre job at best. Emily Bender, who was interviewed for the piece, published a response. In it, she dismantles the uncritical parroting of OpenAI’s PR (and with it that of the wider industry), as well as providing a better framework for journalists to report on tech-solutionism.

Puff pieces that fawn over what Silicon Valley techbros have done, with amassed capital and computing power, are not helping us get any closer to solutions to problems created by the deployment of so-called “AI”. On the contrary, they make it harder by refocusing attention on strawman problems.

What are you looking at?

The spyware company NSO Group has been declared «valueless» to investors. NSO Group became infamous for its Pegasus software, which has been used against investigative journalists and activists around the world. Maybe building surveillance tools for autocrats isn’t worthwhile. Mere weeks ago, the German company FinFisher, which fished (I’m terribly sorry) in the same murky waters, declared bankruptcy. I won’t even try to hide my Schadenfreude.

The participants of the Zurich Marathon last weekend were subjected to facial recognition, without it being mentioned in the event’s privacy policy. After the event, participants were able to find images of their run by uploading a selfie.

The Mute button in the video chat application of your choice might not do what you think it does. While the other participants cannot hear you, analysis of the streamed data shows that the companies providing the tools might well be able to.

They found that all of the apps they tested occasionally gather raw audio data while mute is activated, with one popular app gathering information and delivering data to its server at the same rate regardless of whether the microphone is muted or not.

What the companies do with the collected data remains unclear.

It’s easier to guess what the FBI plans to do with its million-dollar investment in surveillance technology. It’s far from new that acts of violence serve as the pretext to expand policing budgets, even if the institutions in question are already more than able to surveil what they want.

As Rolling Stone reports, the FBI actively monitored social media feeds during the 2020 Black Lives Matter protests. Somehow it failed to do so in the run-up to January 6th, 2021, when the Trump crowd stormed the US Capitol.

The new documents suggest the agency has all the authority it needs to monitor the social-media platforms in the name of public safety — and, in fact, the bureau had done just that during the nationwide wave of racial justice protests in 2020. Critics of the FBI say that the bureau’s desire for more authority and surveillance tools is part of a decades-long expansion of the vast security apparatus inside the federal government.

Panopticon company Clearview AI is building a product to verify customer identity.

News from the past

Those white classical statues we’ve grown accustomed to might not always have been so white. The aesthetic theory derived from the whitened image served as one of the predecessors of anthropology. Reality might have been more colourful, though.

Large polychrome tauroctony relief of Mithras killing a bull, originally from the mithraeum of S. Stefano Rotondo, dating to the end of the 3rd century CE. Now at the Baths of Diocletian Museum, Rome (photo by Carole Raddato, CC BY-SA 2.0).

Coda has published one of the best articles on memory culture, spanning an arc from the south of the USA to Germany. The piece goes into depth on why it’s so hard to achieve a form of Vergangenheitsbewältigung that does not stop when one’s own family gets involved.

Silence distorts memory in various ways. It can happen when a nation, collectively, refuses to engage with the realities of its past, opening up space for revisionist histories and feel good counter-narratives that gloss over the horrors of the past. Sometimes national silence is summoned as an act of avoidance; other times, to serve a political or ideological agenda.

The whole piece is long, but every sentence is worth your time.

Social, they said

The internet has always been modernising language. You can view early abbreviated uses of language (kthxbye, lol) as an outlet needed in chat rooms or as a quick way to communicate in online games, while emoticons and kaomoji offered ways to convey emotion through text.

With today’s algorithmic moderation systems, the pressure is different, though. Automated systems rank down specific words (or at least that’s suspected, as nobody knows how those systems work). This leads to a new form of online speak, dubbed Algospeak.

“There’s a line we have to toe, it’s an unending battle of saying something and trying to get the message across without directly saying it,” said Sean Szolek-VanValkenburgh, a TikTok creator with over 1.2 million followers. “It disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people creating that verbiage and coming up with the colloquiums.”

Elon from Twitter

Shortly after it was announced that Elon Musk will join Twitter’s board, it was announced that Elon Musk will, in fact, not join Twitter’s board. Shortly after it was announced that Elon Musk will not join Twitter’s board, Elon Musk announced that he wants to buy Twitter. Twitter said no, though it was unlikely that Musk ever intended to follow through.

Tesla’s woes continued in the meantime. An analysis found bot activity that seems to correlate with rises in Tesla’s stock price.

Enjoy capitalism

A first investigation of the collapse of an Amazon warehouse last year showed signs that some columns supporting the building might not have been anchored to the floor. Results are preliminary, and Amazon denies any wrongdoing.

Workplace safety has never been Amazon’s specialty, and things went even more downhill in 2021.

The ad market is seemingly even more rigged than we would have thought. It seems like most of the money pumped into the market never reaches the publications offering the ad placements in their content. A good time to remember that digital ads might be completely useless.

Capitalism must expand and commodify every last thing on earth. And earth is not enough, so capitalism moves to commodify space too, changing the final frontier into a billboard.

The new space race pursued by the likes of Bezos, Musk, and Virgin’s Richard Branson taps into that same thirst for inspiration and transcendence. Their companies are pushing the limits of technology in remarkable ways. At the same time, there is something deeply unsettling about the space barons’ capitalist swagger. They measure the grandeur of space in terms of dollars and Bitcoin. They look out into the cosmic expanse and see another frontier for business expansion, ripe for profit-making colonies, mining operations, and satellite swarms.

The world is burning

New reporting by The Intercept shows, yet again, how industrial capital tries to undermine scientific publications and concerted action against the climate crisis.

Many scholars have noted the influential role the [Global Climate Coalition] played in obstructing climate policy in the 1990s, but the first peer-reviewed paper on the group, published this week, reveals that the original and lasting intention of the GCC was to push for voluntary efforts only and torpedo international momentum toward setting mandatory limits on greenhouse gas emissions.

The climate catastrophe does not care about any of this too much. Chile is rationing water for its residents. The same is happening in the vicinity of Tesla’s Gigafactory in Brandenburg, Germany.

World Wide Web

The open Web has always been there, sometimes ignored and forgotten. Is a renaissance underway as tiredness of the walled gardens of Facebook et al. increases? It seems so, as Anil Dash argues in his new post A Web Renaissance.

While the core technology of the web is decades old, the tools that help make it and run have been quietly evolving into something extraordinary in the last few years, too. There’s a flourishing of powerful new frameworks that make it simpler than ever to build flexible, responsive, useful sites. New hosting platforms let those sites be deployed and delivered faster and more reliably than ever. And you can build one of these sites in literally under a minute, then collaborate with people anywhere in the world to iterate on making the site better.

Don’t-call-them-overlay-company AudioEye sent a cease & desist letter to accessibility specialist Adrian Roselli for criticising the company on Twitter. AudioEye is one of several companies that have come under criticism for marketing that claims foolproof accessibility from day one. According to AudioEye’s lawyers, it is a misrepresentation of the company to classify it as an overlay vendor, as it offers manual testing, too.

The W3C, the governing body of the World Wide Web, is seemingly in disarray.

DuckDuckGo has launched a new privacy-centric browser. It’s built on top of WebKit and is, as such, a Mac-only product for now. Using WebKit is an interesting move, as most of the recent newer browsers (Brave, Edge) use Chromium as their base. DuckDuckGo has decided that Chromium includes too much of Google to be good. One standout feature is the browser’s ability to automatically interact with some cookie banners, rejecting cookies as a user visits a website.

The road to hell is paved with crypto intentions

Jordan Belfort, best known as the Wolf of Wall Street, is now into crypto. He held a workshop at his house. Participation fee? One Bitcoin, which is roughly $40,000. All guests were male, to Mr. Belfort’s astonishment.

As they dined on caviar and rigatoni, some of the guests shared stories of their own debauchery; Mr. Belfort, it turned out, was not the only wolf in the room. Two guests discussed the mechanics of pursuing younger women without risking entanglement in a “sugar baby” situation. Someone speculated about how an enterprising strip club owner might incorporate NFTs into the business.

Can't imagine why only bros participated.

The Bitcoin conference is done and dusted. Two articles and a podcast to get you into the loop:

The crypto mining world tour continues. After being ousted from China, and made pariahs in Kazakhstan, miners turned their attention to the USA. Now New York may crack down on crypto mining, too.

There is a new stabilised browser that uses the iPad’s accelerometer to steady the page for users dealing with hand tremors.
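The app itself isn’t documented in detail, but the general idea is easy to sketch: read the motion sensor, separate slow, deliberate movement from high-frequency jitter, and shift the content against the jitter. Below is a speculative, minimal TypeScript sketch of that idea using the standard devicemotion event – the smoothing factor, the scale, and the `content` element are made-up values for illustration, not how the actual browser works.

```ts
// Speculative sketch: counter-shift page content against hand tremor.
// Assumes a wrapper element with the (hypothetical) id "content".
let smoothX = 0;
let smoothY = 0;
const SMOOTHING = 0.9; // share of the previous estimate to keep (low-pass filter)
const SCALE = 2;       // pixels of counter-shift per m/s² of jitter (arbitrary)

window.addEventListener('devicemotion', (event: DeviceMotionEvent) => {
  const accel = event.accelerationIncludingGravity;
  if (!accel || accel.x == null || accel.y == null) return;

  // The smoothed value tracks slow, deliberate movement.
  smoothX = SMOOTHING * smoothX + (1 - SMOOTHING) * accel.x;
  smoothY = SMOOTHING * smoothY + (1 - SMOOTHING) * accel.y;

  // What remains is the tremor; move the content against it.
  const jitterX = accel.x - smoothX;
  const jitterY = accel.y - smoothY;
  const content = document.getElementById('content');
  if (content) {
    content.style.transform = `translate(${-jitterX * SCALE}px, ${-jitterY * SCALE}px)`;
  }
});
```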

Being photographed in conflict and war and ending up in a viral image might haunt you forever.

In today’s conflicts, you risk being called a crisis actor on top of your trauma. Suspension of Belief explains the genesis of this conspiracy theory.

War is gendered and homophobic. The Russian war in Ukraine is no exception.

Wonder what happens if you accept only necessary cookies? In the case of the Deutsche Bahn app, it opens a Pandora’s box of tracking technology regardless.


That’s it for this week. If you’ve made it this far, why not recommend Around the Web to a friend?

Around the Web will be on hiatus next weekend, as I’ll celebrate two birthdays. The next issue will come to your inboxes on April 30th. Until then, stay sane and hug your friends.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/006/ 2022-04-09T14:12:00.000Z <![CDATA[Gig work regulation, Mark Sauron, Elon from Twitter, the AI of Google, the Fuckups of Crypto, 25 years of web accessibility, and a different look on earth.]]> <![CDATA[

Collected between 3.4.2022 and 9.4.2022.


Welcome to a super dense issue of Around the Web. I have a bunch of links, so I will keep the intro short. Enjoy the read.

Enjoy capitalism

You can still feel the aftermath of the unionisation of the JFK8 warehouse last week. Over the course of the week, many stories reported on how the Amazon Labor Union made the seemingly impossible possible. I thoroughly enjoyed reading the HuffPost piece about the organising efforts.

“The whole [idea] was to lead by example,” he said. “The last thing you want is someone to go up to the polling booth and be too scared to check off ‘yes.’ If you can get in the managers’ faces and show how pro-union you are, then voting ‘yes’ seems like nothing in comparison.”

The City profiled Chris Smalls and Derrick Palmer in the run-up to the vote.

Resistance against Amazon isn’t confined to the USA. Campaigners in France have been successful in blocking multiple warehouse projects.

Pam Greer is the latest among senior HR staff departing from Amazon. Her last task was to make Amazon the world’s best employer. Bad luck.

The “future” of work

The EU Commission proposed a new Directive to regulate gig work in the European Union. The Ada Lovelace Institute published a criticism of the Directive: it might weaken already existing national court rulings, fail to protect enough of the 28 million people currently working in the gig economy, and therefore further segregate the workforce.

It stands to reason that policy changes can’t be enough, but have to be accompanied by worker organising. Unfortunately, workers at the German delivery company Gorillas suffered a setback in court. The workers were fired after participating in a strike that was not endorsed by a union, which is illegal in Germany. The workers and their lawyers will try to overturn this law.

Free speech, but they say what you can say

Language matters. This fact might get disputed from time to time, but it only takes a look at the fierce battles capitalism fights to enforce language.

The Intercept reported on leaked plans for a blocklist of words in a planned internal Amazon chat app. Besides obvious candidates such as «unions», Amazon employees would not be allowed to say restroom, accessibility, or pay raise.

Google, meanwhile, has instructed its Russian translators not to call the war in Ukraine a war, falling in line – as is Google’s specialty, remember Project Dragonfly – with the official propaganda doctrine of the Russian government. What is a «special operation» for the Kremlin is now «extraordinary circumstances» for Google.

Remember, kids, making money is more important than telling the truth.

The German police and the German nazis

Police in Germany raided multiple locations linked to the German arm of Atomwaffen Division. Right-wing terrorism has been declared a major threat by the new Minister of the Interior, Nancy Faeser.

After being blocked by Horst Seehofer and the government led by the Christian Democratic Party, a study of right-wing tendencies in the German security apparatus finally appears to be underway. The disclosure of yet another right-wing chat group in Hesse’s police this week, and the involvement of a member of the German army with Atomwaffen Division, are stark reminders that the police in their current state are not part of the solution to the Nazi problem.

Social, they said

My brain was melting at some points this week, but probably never more than when I read that Mark «Android» Zuckerberg honestly thought that employees calling him the Eye of Sauron was not an insult. Yes, Mark. Of course.

Elon from Twitter

I don’t really want to write about it, but I have to. Elon Musk bought 9.1% of Twitter. Because of the lulz, right? Shortly thereafter, Parag Agrawal, Twitter’s CEO, announced that Elon Musk will join its board.

After initially filing his stake as a passive investor, Musk has since corrected the paperwork. It remains to be seen if he faces trouble from the US Securities and Exchange Commission (SEC) for disclosing his stake too late.

Update 11.04.2022 Elon Musk will not join Twitter’s board, as Agrawal shared on Twitter.

As a condition of joining the board, Musk agreed to limit his stake to 14.9%. As a board member, he also would have been bound by Twitter's code of conduct. Musk, whose past behavior suggests a studied lack of respect for rules, will not have to abide by those ones.

His Twitter habits have been controversial. He is required to have any tweets involving Tesla approved before posting. He insulted Vernon Unsworth, who helped rescue 12 boys from a cave in Thailand, as «pedo guy». Musk won the ensuing defamation lawsuit.

Musk’s excitement over Twitter came along with renewed racism allegations at Tesla’s facilities, with Black workers describing the factory as «the plantation» and «the slave-ship». This Machine Kills discussed the allegations, calling Tesla «the world’s most valuable racism factory». Those reports are not the first ones.

Tesla’s recent factory opening in Brandenburg, Germany happened amidst massive pushback over its water demands. The Berlin senate has now announced that it is looking into rationing fresh water for residents.

Musk’s much-publicised Starlink shipment to Ukraine hasn’t been so charitable after all. As the Washington Post reports, a part of the Starlink terminals was purchased by the United States Agency for International Development.

In other news, Truth Social, a free-speech social network with mostly bots, lost two core members. I’m thrilled that this is the only thing I’ve heard about it so far.

This ain’t intelligence

Google announced a new Large Language Model. The Pathways Language Model (PaLM) is a supposed technical marvel; Google claims «breakthrough performance». Critics have been quick to point out the environmental impact and the impossibility of thoroughly reviewing the ethical implications of such a large model. Google also quoted the paper it fired Timnit Gebru for, while not addressing any of the concerns raised by Gebru et al. in their Stochastic Parrots paper.

This interview with Margaret Mitchell by Hugging Face is well worth your time.

What are you looking at?

PimEyes is back. After its inception in 2020, netzpolitik.org quickly found the company to be ultra-creepy.

Our investigation shows: PimEyes is a broad attack on anonymity and it is possibly illegal. A snapshot may be enough to identify a stranger using PimEyes. The search engine does not directly provide the name of a person you are looking for. It does however find matching faces, and in many cases the shown websites can be used to find out names, professions and much more.

In March, it returned to the EU. Cher Scarlett searched for her face on PimEyes. Before I link to what she found: the story talks about sexual abuse and attempted suicide. I’ll not detail those parts here. Want to see scenes from an actual sex trafficking torture porn? Check out PimEyes.

What I will talk about is PimEyes’ business model. It touts itself as a mechanism to preserve privacy. But as it scans the internet deeper than most and combines this with facial recognition technology and a public search, it really achieves the opposite. Cher and other affected users have to pay a hefty monthly fee of $299.99.

That’s when I noticed I could ask PimEyes to hide all of the images of me from their search results. That makes sense, right? Surely I should be able to control who can search for my face? Wrong. For an ongoing monthly fee of $79.99, PimEyes will allow me to control the search results in their basic search results features. To get all of them, of course, I’ll need to pay $330.59 ($299.99 + taxes) every single month, indefinitely, to stop people from finding them using PimEyes service.

If you meet someone working for, or advertising, PimEyes … chase them through the streets.

The road to hell is paved with crypto intentions

Crypto had quite some weeks. After the Axie Infinity hack should have blasted every little bit of trust anyone had left into some kind of far-away orbit, it was Bitcoin Conference time this week.

And while it might have been a good idea to come up with something reassuring, good ideas are – still – not crypto’s strong suit.

They invited the favourite tech billionaire of everyone who likes not-so-good ideas on stage: Peter Thiel. And oh did he deliver. He came on stage ripping $100 notes into pieces, and it only went downhill from there, in a weird but – given the larger (very large) picture – coherent attack on anything that isn’t him. Which is basically the story of his life. He coined the term «financial gerontocracy», firing shots at Warren Buffett and the CEOs of JP Morgan and BlackRock. Issue 002 of Around the Web focussed on Thiel and his ventures into politics.

But alas, Thiel wasn’t the only speaker with … interesting views. On Friday JP Sears, dubbed the Clown Prince of Wellness, took to the stage. Sears has promoted a mixture of esoteric beliefs and conspiracy theories.

Worldcoin, another grand idea to solve the world by scanning people’s eyes with a dystopian device called the Orb, failed to live up to its promises.

The currency has not yet been launched, but a BuzzFeed News investigation has found that Worldcoin is already wrestling with a host of problems, from managing angry Orb operators to concerns that the company is using its cryptocurrency as a way to amass millions of biometrics and perfect a new kind of authentication technology for the blockchain era.

The NFT bubble continues to shrink. On LooksRare, one of the larger marketplaces, most transactions are users selling to themselves.

NFT projects focussing on women have been dubbed the girlbossification of NFTs. Gwyneth Paltrow and Mila Kunis, among others, pushing NFTs is basically a digital version of the white, «Lean In» type of capitalism-affirming feminism.

Block, Jack Dorsey’s new endeavour, confirmed a breach of Cash App. An employee downloaded customer information.

The Tech & the Web

What a week, huh? Let’s take a look at the technical side of tech – at least there, I’ve ignored anything which does not fill me with joy.

Josh W. Comeau has the ability to explain complex systems in words and pictures you get. Which is an invaluable skill. Recently, he has explained CSS layout algorithms, and it frankly does not get any more complex than this.

One of the cornerstones of the modern web had its twenty-first birthday this week. April 4th, 2001 saw the first draft of the Media Queries specification. It took nine more years until Ethan Marcotte came up with a name for the thing media queries enabled: Responsive Web Design. What a ride it has been.
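If you mostly live in JavaScript land, the same primitive is exposed to scripts as well. Here is a minimal, hedged sketch in TypeScript using the standard matchMedia API – the 48em breakpoint and the class names are made up for illustration:

```ts
// Minimal sketch: react to a media query from script.
// The breakpoint and the class names are arbitrary examples.
const wideQuery = window.matchMedia('(min-width: 48em)');

function applyLayout(matches: boolean): void {
  document.body.classList.toggle('layout-wide', matches);
  document.body.classList.toggle('layout-narrow', !matches);
}

applyLayout(wideQuery.matches);
wideQuery.addEventListener('change', (event) => applyLayout(event.matches));
```

In a stylesheet, the same check is simply a @media block, which is the form Responsive Web Design built on.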

Another anniversary, but unfortunately one with a lesser impact: the Web Accessibility Initiative was launched twenty-five years ago. Despite this early commitment, today 96.8 percent of the top million homepages have detectable accessibility issues. WebAIM has been tracking these kinds of errors since 2019 in its annual WebAIM Million report. A sobering read every spring.
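«Detectable» is the operative word: many of these failures are trivial to find programmatically. As a rough illustration (not WebAIM’s actual tooling), a single DOM query in TypeScript already catches one of the most common offenders, images without an alt attribute:

```ts
// Rough illustration of an automated accessibility check:
// find every <img> that lacks an alt attribute entirely.
// Real audits (WAVE, axe, and friends) cover far more: contrast, labels, headings …
function findImagesWithoutAlt(root: Document = document): HTMLImageElement[] {
  return Array.from(root.querySelectorAll<HTMLImageElement>('img:not([alt])'));
}

const offenders = findImagesWithoutAlt();
console.log(`${offenders.length} image(s) without an alt attribute`);
```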

Pre-dating both stories is Windows 95. The launch video has finally been uploaded in its full glory. While I’m writing this issue, I’m sitting in a train that’s running late, so of course I’m watching this. Gizmodo has the highlights.

Another part of history is this republication of a 1985 story in IEEE Spectrum about the Commodore 64. I only understand half (not even) of the technical details, but still think it’s a rather enjoyable read. My favourite parts might be the lengths devs had to go to to build graphics back then, and the results they achieved. This nerding out about colours, at a time when we are leaving sRGB behind, is a great blast from the past. All in all, it is a fascinating story about tech, the effects of saving costs, and of course: marketing.

“If you let marketing get involved with product definition, you’ll never get it done quickly,” Yannes said. “And you squander the ability to make something unique, because marketing always wants a product compatible with something else.”

Not so much has changed since 1985.

Wondering about products of the recent past? Take a look at Killed by Tech, a list of all the products sunsetted by Google, Microsoft & Co.

It sucked to be famous on YouTube this week. Drake, Justin Bieber, and Taylor Swift all had their YouTube accounts hacked.

I’m probably late to the party, but have only recently discovered the earth visualiser made by nullschool.net. It gives all kinds of overlays, showing wind and waves. The kind of website I can look at for hours on end.


What a week, huh? Thanks for reading. If you enjoyed this issue, why not share it with a friend. Until next week. Stay sane, hug your friends, and donate to Sea Watch.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/005/ 2022-04-03T14:12:00.000Z <![CDATA[The joy of unionisation, the catastrophe in Tigray, the PR bullshit of Facebook, and Saturn losing its rings.]]> <![CDATA[

Collected between 25.3.2022 and 3.4.2022.


Greetings. This issue is a bit late because I tried to rebuild my build process, only to realise that it would not work. So I undid a Saturday’s worth of work (note to self: working on Saturday is always stupid), and nothing has changed.

Here’s what I read last week. Let’s start with a reason to celebrate.

JFK8

Amazon tried for months to stop what happened in the end: Workers at Amazon’s JFK8 warehouse in Staten Island, New York City voted to unionise.

Amazon has been criticised repeatedly for unsafe working conditions and for pushing workers to the brink of exhaustion, and past it.

Over the last months, Amazon turned every page in the large book of union busting, reportedly spending $4.3 million in the process.

Amazon called police officers to arrest union organizers, taking Smalls to court over accusations of trespassing on their property. Amazon hired a Trump Hotel union buster to crush the union drive and hired influential pro-union Democratic Pollster Global Strategy Group to help produce anti-union materials, all part of its failed bid to launch a comprehensive union-busting campaign.

To no avail. With 2,654 votes, the Amazon Labor Union won the vote. And with it, a grassroots organisation beat one of the world’s largest companies. Meanwhile, the ninth Starbucks café voted to unionise, too. The current drive for unionisation happens against the backdrop of the ultra-wealthy getting even more wealthy by the day.

The discourse about unions and work councils is reaching the German tech industry as well. After successful organising attempts at companies like N26 and Zalando, and the ongoing struggle at delivery company Gorillas, it seems like unions are here to stay. junge Welt published an excerpt of Nina Scholz’s new book Die wunden Punkte von Google, Amazon, Deutsche Wohnen & Co..

There is power in a union.

The war we don’t talk about

After the truce declared in Tigray last week, humanitarian aid is slow to trickle into the region.

The situation in the region remains dire, and so does the flow of information out of it. Tech companies are heavily under-investing in moderation; their language processing tools are not even able to detect Tigrinya reliably.

Turkey continues drone attacks in Kurdish territories, deliberately targeting civilians. Which of course does not stop German politicians from courting the Erdoğan government and deporting people to Kurdistan.

What are you looking at?

Getty Images introduced a new licence which includes signing away the rights to your biometric data.

Data predators Clearview AI announced version 2 of their product, claiming that it consists of twenty billion images.

Schools in the USA have adopted «smart» camera systems, and utilised them during the pandemic to spot maskless pupils. The systems weren’t good at that in the first place, but now that they are installed, they are likely to stay.

While the cameras are intruding into every aspect of our lives, we might be looking at non-persons. Computer-generated faces are nothing new, but the technology becomes ever more pervasive. NPR published a story about computer-generated «recruiters».

It will probably not surprise you that deepfakes are utilised in Russia’s war against Ukraine.

Germany’s federal police, the Bundeskriminalamt, seems to have realised that Google’s vast data trove can be utilised for state surveillance. Using the location history Google Maps stores on its users is an increasingly popular form of surveillance, as it trumps traditional location tracking methods in accuracy. In the USA, requests for stored location data jumped from 982 in 2018 to 11,554 in 2020.

This ain’t intelligence

The Verge explained how an algorithm helps to monitor hospitalised patients for sepsis. Keep in mind though that medical AI, in the grand scheme of things, is largely useless.

Waymo is expanding fully driverless cars to San Francisco, the second city in the USA. AI is taking over! No, it isn’t. Every city requires immense amounts of training. Remember, AI is only half-decent at analysing the path. We are still a long way from any form of AI that would enable going driverless by itself.

Google’s «responsible» AI team is doing what it does best: leaving its people in a torrent of hate and abuse.

Social, they said

Facebook has been caught smearing TikTok, hiring a public relations firm to push a non-existent viral challenge into newspaper editorials. This is, of course, not the first time that Facebook has decided to produce disinformation, rather than just building the platform to spread it. It’s a remarkably stupid idea, but I guess that fits Facebook’s brand.

Casey Newton comments on the cynicism of it:

There’s the cynicism of planting op-eds and letters to the editor in local newspapers, with their internet-decimated staffs and diminished investigative powers, knowing they need the content and likely won’t ask too many questions about where it came from.

There’s the cynicism of borrowing credibility from local politicians, handing them a few paragraphs of someone else’s ideas and encouraging them to pass the talking points off as their own.

There’s the cynicism of assuming no one will ever find out.

The lesson Facebook won't learn

Welcome to the metaverse.

It’s not that TikTok has been without its problems. Late in March content moderators filed a lawsuit against the app, citing the extreme emotional toll of reviewing graphic material.

The suit says TikTok and ByteDance controlled the day-to-day work of Young and Velez by directly tying their pay to how well they moderated content in TikTok's system and by pushing them to hit aggressive quota targets. Before they could start work, moderators had to sign non-disclosure agreements, the suit said, preventing them from discussing what they saw with even their families.

As Russia’s forces get pushed back from the region of Kyiv, atrocities committed by the Russian army get documented and amplified into our social media feeds. This is your reminder that you don’t need to look at the footage. And further, you don’t need to push this content into people’s timelines. I recommend Shoshana Wodinsky’s thread on the matter.

if u force obscene amounts of violence into ppl’s TL’s without a heads up, ur not “creating a historical record.” ur being an asshole

If you decide to share information, you need to make sure that you don’t amplify dis- and misinformation. Mozilla wrote a handy guide on how to make sure you aren’t spreading misinformation. The NYT detailed the steps their visual investigations team takes to make sure shared imagery is valid.

KrebsOnSecurity reports on a scheme where hackers take over government email accounts to issue emergency data requests. It is nigh impossible to know if a request comes from a hacked account, leaving companies with little choice but to hand over the data.

Saturn is losing its rings. Which, of course, will take a lot longer than humankind burning the planet to dust. Somehow, the cosmic timescale has lost its calming qualities.

When you take a picture with an iPhone, you are not so much taking a picture as letting a robot create an approximation of the picture that you wanted. Have iPhone cameras become too smart? I cannot help but think of the picturebox in Terry Pratchett’s The Colour of Magic.

Let’s end this issue on a colourful note.


That's it for this week. Stay sane, hug your friends, and donate to Mission Lifeline.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/004/ 2022-03-26T14:12:00.000Z <![CDATA[The war in Tigray, the effort of responsible AI, digital gardens, dead internet, aesthetics of NFTs and why «My body, my choice» feels out of date.]]> <![CDATA[

Collected between 19.3.2022 and 26.3.2022.


As I spent a good deal of my week lying in bed, I had too much time at hand to read the internet. This issue is going to be on the longer end. If you only want to read two things, make them the article about the use of AI by The Trevor Project and this article about digital gardening, a blooming subculture in a wonderful niche of the Web. The rest is good too, though.

On a meta level: I’ve written an article which offers a glimpse into the architecture and infrastructure of this very post and how it reaches your mailbox or feed-reader.

The war we don’t talk about

The war in Ukraine still dominates European headlines, diverting attention all too easily from other wars going on.

In Ethiopia, the government has declared a truce in its attacks against Tigray. Tigray is a region in the north of the country, home of the Tigray People’s Liberation Front (TPLF). The war has been ongoing for the last sixteen months, with government troops, helped by local militias and forces from Eritrea, trying to break the resistance of the TPLF. The war is deeply tied to the country’s history. If you want to get a better understanding, the New York Times has published an explainer.

Millions of people in Tigray are mostly cut off from access to food, as famine looms. The truce might make it possible to deliver humanitarian aid to the region, something that has been all but impossible for the last months. But, as Voice of Africa reports, the declaration of the truce might not spell the end of the suffering.

However, Hassan Khannenje, the head of the Horn Institute for Strategic Studies, does not believe the government and the Tigray People's Liberation Front, or TPLF, will give aid groups a free hand.

Tek from Tgaht put it even more bluntly in a YouTube video, calling the truce a lie, designed to skirt looming sanctions.

This ain’t intelligence

AI is often touted as the solution for a broad range of problems, including in medicine. So far, those solutions have failed to materialise. Protocol wrote in depth about the use of AI by The Trevor Project, a nonprofit catering to LGBTQ+ teenagers in the USA.

The project took immense care in how it uses AI, specifically large language models such as GPT. But still, they are aware of the fact that AI can train humans, but not replace them. As such, their models are not used in care, but only in training, and even there they need to be recalibrated regularly.

While he said the persona models are relatively stable, Fichter said the organization may need to re-train them with new data as the casual language used by kids and teens evolves to incorporate new acronyms, and as current events such as a new law in Texas defining gender-affirming medical care as “child abuse” becomes a topic of conversation, he said.

The whole piece is really worth a read, and a great example of the lengths companies have to go to to build products on top of present AI capabilities if they do not want these to be hurtful.

Algorithmenethik, a Bertelsmann Stiftung-affiliated group in Germany, posted a review in a similar vein, exploring the ways and prerequisites to use algorithmic systems in the support of women who have experienced domestic violence.

But what to do if you find that an algorithm hasn’t treated you fairly? Currently, there’s no real legal basis on which you can appeal. AlgorithmWatch wrote down some demands to adapt the German anti-discrimination law.

I’m still closing old tabs (I have a lot of them). A while back, Wired reported on accessiBe. accessiBe offers an «accessibility overlay». Overlays are snake oil products that promise to use some obscure mixture of Artificial Unintelligence and public relations promises to make your sites accessible.

Accessibility practitioners agree that overlays do not work, at times make your site harder to use for people with accessibility needs, and are an all-in-all superbad idea. Adrian Roselli has posted an in-depth look into yet another overlay company, their claims, and their failures.

The social and the media

Two years after Facebook was accused of amplifying hate against the Rohingya minority in Myanmar, hate is still rampant on its platform.

Platforms have mostly been locked out of Russia. One notable exception is TikTok, which decided to practise self-censorship, making it impossible for users outside of Russia to view content from inside Russia and vice versa.

Nonetheless, an investigation found that TikTok pushes misinformation to new users almost immediately after sign-up. False information is rampant on TikTok, and the platform’s design makes it hard to spot.

Its core features prime it for remixing media, allowing users to upload videos and sound clips without attributing their origins, the paper said, which makes it difficult to contextualize and factcheck videos. This has created a digital atmosphere in which “it is difficult – even for seasoned journalists and researchers – to discern truth from rumor, parody and fabrication”, researchers added.

Officially because it’s not used to disseminate what the Kremlin calls lies, WhatsApp is another service exempt from being blocked in Russia.

Meanwhile, pressure is growing on Telegram to regulate content on its service. The app was close to being blocked in Brazil; the block was revoked shortly thereafter as Telegram reacted to the demands of Brazil’s highest court.

World Wide Web

Most of us use a version of the internet that is controlled by algorithms and non-stop feeds of informational overload. But underneath the concrete surface, a small movement of digital gardeners is planting their seeds. Reading this piece about digital gardening gave me a sense of calm I have seldom experienced when reading about the Web recently.

A garden is a collection of evolving ideas that aren't strictly organised by their publication date. They're inherently exploratory – notes are linked through contextual associations. They aren't refined or complete - notes are published as half-finished thoughts that will grow and evolve over time. They're less rigid, less performative, and less perfect than the personal websites we're used to seeing.

A tweet by Claire Evans reminded me of one of my favourite articles about conspiracy theories ever. There’s a thing called Dead Internet Theory. It stipulates that the internet died some years ago, and is a barren wasteland full of bots. Which totally feels true, even if it’s nonsense. Still: I believe.

Thankfully, if all of this starts to bother you, you don’t have to rely on a wacky conspiracy theory for mental comfort. You can just look for evidence of life: The best proof I have that the internet isn’t dead is that I wandered onto some weird website and found an absurd rant about how the internet is so, so dead.

The road to hell is paved with crypto intentions

I really loved the article Why do all NFTs look the same by Max Kohler, as it does not resort to the lazy argument «This is not art, it’s just a computer», but ties NFTs into a larger picture of visual effects in movies, as well as the reproducibility of all art, which Walter Benjamin observed long ago.

People who make [NFTs] recognise it’s difficult to argue that a digital image can be “original” on any material level, so they suggest a kind of authenticity-by-proxy: Buy an NFT and you get a unique entry in our special database saying you own the image. That database entry has effectively the same function as those fancy art historians and copyright lawyers: Establish authorship, keep track of provenance, authorise derivative works, mediate royalty payments, and so on.

Last week, TIME published a longer portrait of Vitalik Buterin, the creator of the Ethereum blockchain. The interview paints a sympathetic picture. A picture I do not want to disagree with. It makes it obvious, though, that Ethereum (and crypto at large) is yet another problem created by privileged white men, who did not need to think about the real-life consequences their «pure» and intellectually challenging project might have.

Vice visited SXSW and witnessed the takeover by crypto mediocrity and a version of the future that is disappointingly dull – no fun, nowhere.

And yet, despite all the talk I heard about ushering in a new era of diversity and inclusion, it was hard to not notice that every room felt largely the same: mobs of white wealthy men who quickly volunteered that they worked in finance, tech, marketing, or some buzzy fusion of the three.

I decided to cut down on reading crypto news. I still follow web3 is going great, for the lulz, but will dedicate way less time on this topic, especially on the fraud part.

What a lapses

At the beginning of the week, hacking group Lapsus$ made the news when they were able to compromise Okta. Okta is an identity broker, a service large companies use to handle the identities of their employees and to manage capabilities such as Single Sign-On.

Lapsus$ quickly rose from relative obscurity to hacking stardom through multiple high profile breaches over the last months.

It didn’t last too long. On Wednesday, Bloomberg reported that it had identified one core member of Lapsus$, a teenager from Oxford. On Thursday, the City of London Police announced seven arrests in connection with the hacking group.

The episode Dirty Coms of Darknet Diaries profiled a contemporary hacking culture, of which Lapsus$ appears to be a part, painting a picture of a scene mostly revolving around pranks and money.

Doom is mine but I will share it

Climate. What a bummer. The direst predictions of scientists are coming true fast, and turning out even more dire than predicted.

This week, the disappearance of underwater permafrost made the news, only shortly after it was reported that temperatures in the Arctic and Antarctic were 30°C and 47°C higher than normal, respectively.

As the Verge reports, Stephen Wilhite, creator of the GIF, has died at age 74 from COVID-19. Thank you for the dancing.

GIF of a dancing baby
Dancing baby, the best GIF on the internet, according to Stephen Wilhite

Der Spiegel published the English translation of research accusing Frontex director Fabrice Leggeri of covering up Frontex’s involvement in illegal pushbacks in the Aegean (German version here, but behind a paywall). That Frontex is complicit in pushbacks has been reported for years; consequences have failed to materialise so far.

My body, my choice? Individualism has made it harder to fully embrace this cornerstone of feminist rhetoric. The slogan has been co-opted by anti-mask-mandate protesters. Why, though? That’s a question answered in a piece in Geschichte der Gegenwart. I first read this angle in Bitch Magazine some months ago.

“My body, my choice” is highly individualistic and—in the end—fails to convey the ways we’re bound up with each other. Especially as Texas institutes a near-complete ban on abortions, it’s crucial that we embrace language and frameworks that emphasize our mutual responsibilities and interconnectedness.


That’s it for this week. Stay sane, hug your friends, and donate to Self-Defined.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/003/ 2022-03-19T14:12:00.000Z <![CDATA[Predictive policing without oversight, the wall in which Deep Learning crashed, cryptocurrencies in wartime, and billionaires won’t save us.]]> <![CDATA[

Collected between 12.3.2022 and 19.3.2022.


A coughing hello.

I had a meeting with SARS-CoV-2, and my head was wrapped in cotton for a few days. Thanks to the vaccinations, I’m mostly fine so far, and will hopefully be back on track next week.

Nonetheless, I’ve saved some links. So let’s get linking.

What are you looking at?

Gizmodo has reported that the Department of Justice in the USA, which is in theory responsible for overseeing the funding of predictive policing programs, has no idea how much money police departments have actually spent. So now there is an unknown amount of money poured into technology which does not prevent crime while discriminating against minoritised groups in society. Well done.

In Europe, we see the fallout of the takeover of the encrypted messaging service Encrochat by police in France. Throughout Europe, we see lawsuits based on intercepted chats, even though the evidence has been heavily manipulated and the original is not available to the public, as France has classified the records as military secrets. This might set dangerous precedents for lawsuits based on digital evidence, as netzpolitik.org reports.

This ain’t intelligence

Wired has published an interview with Palmer Luckey, one of the leading figures in AI-assisted defence tech. It’s another example of the ideology that manifests in these products, as well as in the justifications presented for them. To answer with «I'm still really proud of our work that we do with border security» when asked how the separation of families and imprisonment of children by ICE made you feel takes some serious amount of dehumanisation.

Ukraine reportedly uses Clearview’s facial recognition product in the ongoing war. Does the end justify the means? Probably not, given the fact that Clearview’s massive database has been largely scraped from the web, without asking anyone for consent.

There’s a longer article by Gary Marcus in Nautilus, in which he explains how Deep Learning as we know it today came to be, and why it is not making any significant progress at the moment.

In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (Lamda) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.” Turning the tide, and getting to AI we can really trust, ain’t going to be easy.

The road to hell is paved with crypto intentions

The war in Ukraine has been one of crypto’s moments so far. Numerous projects mobilised their users and raised substantial amounts of money. Still, grift and bullshit seem to follow wherever crypto goes. While projects like the UkraineDAO collected money in good faith, things collapsed when they airdropped their LOVE tokens and the helpers turned LOVE into a speculative asset. Episode 7 of Scam Economy covers the good help and the bad grift in the context of the Ukraine war – among other things, how crypto bros pressured Ukraine into opening wallets for their tokens, instead of, dunno, sending dollars or something.

Peter Howson explores the history of cryptocurrencies in Ukraine, and their troubled relation to despotic regimes around the world – dynamics undoubtedly also at play in Ukraine, where we see donations to right-wing paramilitaries. Ukraine and crypto have a long history: during the time of the Euromaidan revolution, half of the world’s Bitcoin were mined in Ukraine. This might explain – a point also made by Matt Binder on Scam Economy – the role of crypto in the war and Ukraine’s willingness to accept crypto as part of its fundraising, but it might make it hard for cryptocurrencies to play a similar role in other conflicts.

What we are seeing here is not new at all. Crypto positions itself in the heart of capitalism, amplifying its dynamics, and gives it a fresh, digital paint job. So while we see signs of the hype cycle around NFTs and web3 dying down, cryptocurrencies are certainly here to stay as long as capitalism is.

Okay. Let’s cut the seriousness for a moment. NFTs. The buyer of a Pepe the Frog NFT paid the sum of $537,084 for the receipt of this image. Shortly after, his costly receipt got «devalued», as 99 further receipts were released for … free, as had been announced beforehand. The buyer is now suing. You can’t make this shit up.

Meanwhile, Spotify has decided to jump on the NFT bandwagon. Supposedly to help pay artists. Which is pretty funny, given the fact that Spotify is the streaming service which pays the lowest royalties to artists. So instead of over-engineering bullshit, they could just pay the artists. But that’s not tech enough, I guess.

The Bill & Melinda Gates Foundation, a flagship of philanthropy, made bold claims about ending hunger and increasing crops in Africa. It has since become quieter, attempting to escape systematic review. The data points of the Growing Africa’s Agriculture programme which are available, however, paint a dire picture. Billionaires won’t save us.

That’s it for this week. Stay sane, hug your friends, and donate to Bildungsinitiative Ferhat Unvar.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/002/ 2022-03-12T14:12:00.000Z <![CDATA[The war and the cyber, impeding climate doom, working on the web, and a ship underneath the arctic sea.]]> <![CDATA[

Collected between 4.3.2022 and 12.3.2022.


Good day.

As I’ve been travelling, the second edition of my little collection of links comes a bit late. Though I think Saturday might be a better day anyway. Still trying things out here. But again, quite a lot of links have been collected.

The War & The Cyber

There still is a technical aspect to the whole thing, too. Just not the one people expected. For the last few years, Russia has been known as a cyber state. On the one hand, Russian hacking groups have been accused of breaches again and again. On the other hand, the Russian state financed a sophisticated disinformation network, trying to influence political processes all over the world – mostly by propping up the far right. Under the current sanctions, it has collapsed. At least for now.

While the organised disinformation is in bad shape, it still invents new forms of misinformation. The fact check, long the hallmark of truth and enlightenment (sorry, got a bit carried away there), is now a tool of misinformation. In this new scheme, «fakes» are invented only so that a new layer of fakery, disguised as a fact check, can be put on top of them.

Researchers at Clemson University’s Media Forensics Hub and ProPublica identified more than a dozen videos that purport to debunk apparently nonexistent Ukrainian fakes. The videos have racked up more than 1 million views across pro-Russian channels on the messaging app Telegram, and have garnered thousands of likes and retweets on Twitter. A screenshot from one of the fake debunking videos was broadcast on Russian state TV, while another was spread by an official Russian government Twitter account.

Not new, but suddenly back on the table of conspiracies, is the tale that the US maintains biowarfare labs in Ukraine. A wild QAnon appears.

What we, so far, haven’t seen either is the offensive cyber prowess of the Russian state. While a new wiper malware emerged on the eve of the war, the war in and of itself has been largely conventional. Bombing civilians, besieging cities. Why? We really don’t know at this point, as Farhad Manjoo writes in the New York Times:

What accounts for Russia’s apparent cyberrestraint? Nobody quite knows. Russia could be holding back its best cyberweapons for a more critical time in the war. It could also just be incompetent. Maybe its hackers were no match for Ukraine’s cyberdefenses, which the country has been beefing up for years.

The Ukrainian Cyberwar That Wasn’t

The western sanctions against media outlets such as RT and Sputnik have been exposed as more far-reaching than initially thought. Google published a request by the European Commission to remove all references to content by blocked entities from its search results, YouTube, and so forth. The blocks had been widely criticised as ineffective and even counter-productive before this detail emerged.

Doom is mine but I will share it

War is bad. For humans, but also for the planet. Tanks and planes are no cornerstones of green transport. It’s a good time to revisit the piece The climate cost of war – while it was written about a prospective war with Iran, fossil fuels don’t care where they are burned.

Food markets are in turmoil, too, as vast swathes of Ukrainian land are used to grow crops. The World Food Programme warns about serious implications for global food security.

The conflict comes at a time of unprecedented humanitarian needs, as a ring of fire circles the earth with climate shocks, conflict, COVID-19 and rising costs driving millions closer to starvation.

Fuck. It. All.

In other dooms:

Carbon dioxide emissions hit a record high last year, as the world bounced back from the pandemic slump and nothing was learned.

The Amazon is near its tipping point, which would change it from a rainforest to a savannah. With drastic consequences for life on earth.

All this impending doom still gives way to new ideologies. Green capitalism is sweeping through the political landscape, in Germany represented by a Green Party which made its biggest gains ever in last year’s federal elections.

The green power grab hasn’t prevented police from denying assembly rights to so-called climate camps. Fridays for Future and the Gesellschaft für Freiheitsrechte are now going to court to ensure those rights for long-running protest infrastructure.

What are you looking at?

Police in Germany’s federal states expand their business with Thiel’s Palantir. Following Hesse and North Rhine-Westphalia, Bavaria is now working with the dystopian behemoth too. Of course, they promise data protection and what have you. Nonetheless, critics say the procurement procedure was tailored to let Palantir win. In February, Forbes reported that Thiel invested in a «cyber warfare» start-up which allegedly hacked WhatsApp. Which is kind of ironic, given that Thiel was one of the earliest Facebook investors. Facebook announced a few weeks back that Thiel is stepping down from its board.

Peter Thiel is one of the most influential figures in the Valley, and has long been described as the reactionary outlier in its political discourse. Why that’s not accurate, and how Thiel came to power, is the topic of this week’s Tech Won’t Save Us episode.

Towards the end, Paris Marx and Moira Weigel discuss how much of Palantir’s might rests on the actual efficacy of its tech, and how much is propped up by marketing. Given the secretive nature of its operations, we can’t know for certain.

What we do know, however, is that other parts of teched-up policing fail again and again. Wired reports how the lives of three Black men and their families were derailed by wrongful arrests based on facial recognition software. Such errors are not stopping police departments from relying ever more on such software, often leaving victims in the dark about the role it played in their arrests.

World Wide Web

As part of my ongoing quest to free myself from too many open tabs, I finally managed to read Why are hyperlinks blue? on the Mozilla blog, a fascinating excursion into the history of browsing hypertext.

Another article that has been in the «open tab» state for too long is this compilation of links about what it means to be of the web and how we as developers can build products that embrace the grain of the web.

The other way of building for the web is to go with the web’s grain, embracing flexibility and playing to the strengths of the medium through progressive enhancement. This is the distinction I was getting at when I talked about something being not just on the web, but of the web.

Laura Maw explores the decaying internet and draws parallels to architectural theories of the horrible. I’ll never look at websites that show signs of decay the same way again.

The architecture of the internet is rendered “horrible,” then, partially by the demands of capitalism: In its wake, we find signs of deterioration and ruin. Navigating a landscape of dead sites changes the way we look at living ones; clean, minimalist design only cloaks the evidence of inevitable decay.

All good things are three, but not web3

Let’s start with the good things here. Search interest in NFTs on Google is collapsing.

Crypto is becoming like any other industry, expanding its lobby influence. Quelle surprise.

Two literal crypto bros – they are brothers – Paul and Julian Zehetmayr, have bought LimeWire’s branding and hope to return the famous file-sharing platform to the limelight. The relic of a more anarchic internet that once was is set to become a marketplace for music-related NFTs, backstage passes, and similar commodified crap. In a realisation that the masses don’t care about crypto, they’ll accept fiat currency too.

It does, however, make perfect sense for the Web3 movement, which appears immune to shame and dead set on making us believe in a crypto future, one brand takeover at a time.

The ghost of LimeWire returns to haunt you as an NFT marketplace

Molly White was a guest on this week’s episode of Scam Economy, talking to Matt Binder about the various ways in which crypto and web3 are not the future.

To close this issue off, here are some links which didn’t warrant their own category:

In Stones can’t talk (German translation) Mirjam Brusius explores Germany’s complex relationship with its own history, and how Vergangenheitsbewältigung (or the lack thereof) fails Black and People of Color communities today. The piece is very dense, but highly recommended.

Carnival celebrations went ahead after the massacre in Hanau, while a vigil to mourn the deaths could not. Antisemites were still allowed to march in the streets. Some can even stand for election. Taking stock of these asymmetries, to say nothing of the endless secretiveness around the NSU murders, the surreal Mbembe debate, or the fact that being left-wing and Jewish means feeling unprotected by a state that claims to do the reverse: might Germany be reaching a grotesque low point in its history? If antisemitism and racism have no space (‘keinen Platz’) in Germany, why do they still claim so much room? Who will set the future terms of historical memory in a country where for large multiethnic sectors of the society, denazification simply never happened?

The wreck of Ernest Shackleton’s ship Endurance has been discovered. 107 years ago Shackleton and his crew had to leave the trapped ship behind in the Antarctic pack ice. Now it has been found.

The Smithsonian is giving back its collection of Benin bronzes to Nigeria. The deal seeks to foster future collaboration under the leadership of Nigerian historians. Way to go. Looking at you, Humboldt Forum.

Kony 2012, ten years on. Once the most viral video of all time, the film reads as both a digital relic and a precursor to an era in which footage of conflict dominates the internet.

I talked about the Contileaks last week; it seems the leaks haven’t disrupted the group’s operations for too long.

With that, this issue comes to an end. I’ll experiment further in the upcoming weeks, as I can already see the «write down everything once a week» mode becoming too much work to be sustainable in the long term.

Stay sane, hug your friends, and donate to Cadus.

]]>
<![CDATA[Around the Web]]> https://www.ovl.design/around-the-web/001/ 2022-03-04T14:12:00.000Z <![CDATA[States of Surveillance, AI legislation, Contileaks, the dumbest vending machine in the history of ever. And war.]]> <![CDATA[

Collected between 22.2.2022 and 4.3.2022.


Around the Web is a new format in which I compile the articles I’ve read over the last week or so that influenced me in some way. It will center around digital society, combining tech and ethics, blending in a bit of design, and we’ll see what else.

I’m not sure how this will go, what will be featured, and if it will stay like this issue you are reading, or if I change it to something which requires less work.

Let’s get it started in here.

There’s a war going on outside

The news cycle has been relentless; I won’t even try to be a geopolitical analyst here. Still, here are some compiled links I’ve found helpful in making sense of the senseless.

In the German newspaper analyse & kritik, Tomasz Konicz analyses how this war fits into a shifting global power dynamic that sees the USA dethroned from its role as «world police». Russia, China, and Europe try to fill the gap. This is happening against a backdrop of economic recession. And in a crisis, war is always an option.

There’s an ongoing discussion about how tech should react to Russia’s war. Namecheap, for one, decided to cut all ties with Russian customers. A move widely criticised, as it leaves those customers with no other choice than to switch to providers inside Russia. The Russian government has made it very clear that it will not tolerate any dissent.

It has, however, made wide-reaching changes to its Internet infrastructure over the last few years. Around ten years ago the Russian internet was considered mostly resistant to censorship. This has changed. As Samanth Subramanian reports in Quartz, Russia has been preparing to have its Internet cut off. While the Russian state cracks down on media, the BBC has started transmitting on shortwave radio frequencies again.

Meanwhile, there was a small glimpse of dissent from where nobody expected it. Alex Ovechkin said «Please, no more war.» Which is significant, not only because he’s famous, but also because he has been close to Putin. Ice hockey has been the national sport in Russia ever since the Soviet team dominated it.

There’s dissent louder than the mutterings of an ice hockey player. CrimethInc. has published a statement by the Russian group Autonomous Action.

Not only one war

It is a flat-out lie that there is a war going on. The bitter truth is that there are multiple. Turkey is still shelling Kurds. The Taliban are still oppressing. On top of that, some reactions to the war in Ukraine were overtly racist. Emran Feroz comments on western media coverage splitting refugees into welcome and unwelcome ones.

Julian Hilgers has published an interview with Sidi Omar, the Polisario Front’s representative to the United Nations. The Polisario declared the Sahrawi Arab Democratic Republic in 1976. Since then it has found itself perpetually suppressed, persecuted, and bombed by Morocco.

The interview has been published as part of Sham Jaff’s what happened last week? newsletter. If you are interested in a newsletter that reports beyond the scope of western media, make sure to subscribe.

This ain’t intelligence

In Artificial Intelligence news, the first law concerning algorithmic transparency came into effect in China. As Shen Lu reports:

The regulations stipulate that tech companies have to inform users “in a conspicuous way” if algorithms are being used to push content to them. Users reportedly will be allowed to opt out of being targeted with algorithmic recommendations.

While it remains to be seen how it plays out in practice, this is a significant step towards reining in the power of algorithms.

Similar legislation in Europe (the European AI Act) and the USA (the Algorithmic Accountability Act) is still rather far off. In its current state it might also not be the solution one would have hoped for. This week multiple initiatives in Europe called on the EU to ban predictive policing through the AI Act.

Uber, meanwhile, has opened another Black Box of Pandora by implementing a new payment algorithm in the USA. One driver describes it as «not based on anything». Well done, Uber.

I finally got around to reading the interview with Safiya Umoja Noble and Meredith Whittaker in Logic. They discuss ways to hold tech companies accountable, and to build better communities.

We have to be in community. We have to be in conversation. And we also have to recognize what our piece of the puzzle is ours to work on. While it is true, yes, we’re just individual people, together we’re a lot of people and we can shift the zeitgeist and make the immorality of what the tech sector is doing—through all its supply chains around the world—more legible. It’s our responsibility to do that as best we can.

Safiya Umoja Noble

What are you looking at?

The March/April edition of Interactions has been published, focussing on States of Surveillance.

In Resetting the Expectation of Surveillance Jonathan Bean explores how surveillance has become so ingrained in our everyday lives that we sometimes take it for granted or – arguably worse – forget that it exists at all.

So much of our technological stuff doesn’t really present us with a choice. Set up a new computer, load up a phone with apps, turn on that robot vacuum, or hop in the car, and the chances are pretty good that something, somewhere, is collecting data. Is this surveillance? The word, with roots in French and Latin, means to watch over, in the visual sense. Access to the private visual realm clearly crosses the line: Witness the emergence of the practice of taping over or physically disabling laptop webcams. In contrast, the streams of data we generate through our everyday use of technology, from smartphones to thermostats to light bulbs, are largely invisible.

While surveillance undoubtedly is everywhere, it is not without alternatives, and resistance is not futile. Alex Jiahong Lu explores state and workplace surveillance and how these systems leave room for everyday resistance.

In Sareeta Amrute’s piece they explore the Facebook Files. The impact of the surveillance apparatus of Facebook & Co has always been unevenly distributed, hitting hardest where the companies care the least. Which is, surprise, not the global west.

The sharp inequities exhibited by these revelations of the overheated pursuit of young eyeballs regardless of deleterious effects on youth wellbeing, on the one hand, and the callous disregard for how the platform is used to propagate violence and hatred for other populations, on the other, suggest an uncomfortable fact: Race, place, and position matter deeply to these tech companies, and not in the ways that their DEIA handbooks might suggest. As such, the Facebook Files exhibit a classic case of racial capitalism.

Inside Extortion

The Russian ransomware group Conti has been hit by a large leak of its internal communications. The Twitter account ContiLeaks started publishing chat logs on February 27th.

The leaks offer an interesting look at the inner workings of an extortion group. Luckily, you don’t have to read through them by yourself. Brian Krebs has started a series of articles analysing the content.

Part I offers a timeline of Conti’s high-profile attacks as well as prosecution efforts against the group. Part II shows the group’s labour relations as well as the difficulties of managing a cybercrime operation. My takeaway? Maybe Conti’s employees need a union.

Similar, though looking in from the outside, is the podcast series Extortion Economy by MIT Technology Review and ProPublica.

All good things might be three, just not web3

Because tech is tech, we have to cope with a new version of democracy, one that tokenises all the hierarchies built into analogue democracies. And calls it the future. I’m talking about governance tokens in Decentralised Autonomous Organisations (DAOs). Shanti Escalante-De Mattei has written a piece for Wired exploring the idea, which seems not too bad at first – until it is.

Sure, web3 has the potential to make our lives more democratic, but it’s not a silver bullet. Scale is an insurmountable problem, so is capitalistic greed. If we fall too quickly for these promises, we’ll end up looking back fondly at the days of data harvesting as we navigate a segregated internet.

A conclusion I see time and time again when technical solutions try to solve societal problems without doing the work of understanding the ways in which they inflict harm.

In its current state web3 is not much more than an update to capitalist grifting. In False Futurism, Paris Marx (host of the excellent Tech Won’t Save Us podcast) writes about the Metaverse and its crypto-related parts. Paris concludes:

Tech companies have always overstated the benefits their technologies will grant us and understated how much they serve their own ends of power and profit. The metaverse will be no different, especially since it’s unlikely to arrive in the form currently being sold to us.

Especially with Facebook trying to call the shots and centralise both gear and infrastructure, the future looks Facebook blue. We shall paint it colourful.

Another aspect of virtualised reality is haptics. How do we feel when we are in a computer? Gadgets try to replicate the bodily experience of touch, but reduce what the concept of the cyborg once was to just another point of datafication.

Perhaps more crucially, when enveloped in Meta’s gloves, the hands will ooze data. Every motion will be captured, every gesture will be mapped, and every haptic stimulus we respond to will be recorded. Expanding biometrics to include touch could thus enable a new mode of “haptic coercion,” as Dave Birnbaum, the former head of design strategy and outreach for Immersion Corporation terms it, where digital touch can be used to prod or nudge us into making purchasing decisions preferred by a brand or advertiser.

David Parisi – Can’t Touch This

Meanwhile in art: Maybe The Guardian has found the bottom in a lake of facepalms. A hilariously malfunctioning NFT vending machine in New York City. I can’t even.

But then, the lake is deep and the water dark. And Associated Press has been trying really hard to sink to the bottom, too.

Here are some more things not relating to a larger theme:

Bandcamp has been bought by Epic Games. I have mixed feelings. None of them positive. It seems more and more impossible to build anything independent on the World Wide Web today. Which is scary.

In Packaging the Pill Theresa Christine Johnson takes a closer look at something seemingly irrelevant: how changing the packaging of the birth control pill helped women stick to the regimen.

To close this issue: have you ever thought about the dystopian view of the world that CAPTCHAs offer? Me neither, at least not as thoroughly as this piece does: Why CAPTCHA Pictures Are So Unbearably Depressing.

After eighteen years, Markus Beckedahl has stepped down as editor-in-chief of netzpolitik.org. During this time netzpolitik.org grew into an important voice on digital policy. Anna Biselli will take over the position.


That’s it for this week. Stay sane, hug your friends, and donate to Médecins Sans Frontières.

]]>