https://www.ovl.design/around-the-web/feed.xmlovl.design » around the web2023-12-18T12:04:13.535Zhttps://github.com/jpmonette/feedOscar[email protected]https://www.ovl.designhttps://www.ovl.design/img/favicon/favicon-32x32.pnghttps://www.ovl.design/img/favicon/favicon-32x32.png<![CDATA[The Denkzwerge are it. Again.]]>https://www.ovl.design/around-the-web/023-the-denkzwerge-are-it-again/2023-10-22T14:12:00.000Z<![CDATA[New manifesto without anything new dropped, controlling complex systems, Algospeak in war times, and revaluing the strike.]]><![CDATA[
I revamped the design of my website a bit over the last weeks. I’m quite happy with it, so you may want to read this issue on the web.
Also, pardon my French German in the title, but I don’t know a translation that captures the essence of the word Denkzwerge quite like the original.
Before we start with our regular programming, here are
Updates from the Department of Facepalms
First, Marc Andreessen published the world’s worst meta bad take.
It’s long. It’s badly written. It feels like Andreessen dictated it into ChatGPT while being trapped at Burning Man. It makes some absolutely wild statements without any effort to back them up. It quotes Filippo Marinetti, notable proto-fascist and OG techno-optimist. Basically, it is the 30,000 word version of the «This is fine» meme.
In Why can’t our tech billionaires learn anything new? Dave Karpf pointedly analyses what feels so off about the richest people in the world crying that the rest of us don’t applaud everything they do anymore.
The most powerful people in the world (people like Andreessen!) are optimists. And therein lies the problem: Look around. Their optimism has not helped matters much. The sort of technological optimism that Andreessen is asking for is a shield. He is insisting that we judge the tech barons based on their lofty ambitions, instead of their track records.
In an interview with the German newspaper Der SPIEGEL, Theodor W. Adorno called out those «who frantically cry over objective despair with the ‹hurrah› optimism of immediate action to make it psychologically easier for themselves». Andreessen’s manifesto is the epitome of this «hurrah» optimism.
He openly illustrates the limits of imagination of himself and those in the Valley in a way no critic ever could. AI as the force that will destroy but also save the world, capitalism as the only form of economy that can work, and if it doesn’t work, we need more more more more more until it fcuking works. The destruction of society will continue until morale improves.
If that’s his future, I’m happy to fight for another one.
In a rare display of decency, his successor, Giorgia Meloni, split from her partner after he made sexist remarks. I wonder if, one day, she finds out about the rest of her party. Just kidding, she didn’t become their leader by accident.
Rest of World generated thousands of images using Midjourney, analysing the output for racial stereotypes. They conclude that AI reduces the world to stereotypes. Make sure to read the whole story over at Rest of World. It includes visual representations of the output, which makes the point even more convincing than some text about it.
Focusing on complex systems leads to several perspectives (incentive shaping, non-deployment, self-regulation, and limited aims) that are uncommon in traditional engineering, and also highlights ideas (diversification and feedback loops) that are common in engineering but not yet widely utilized in machine learning. I expect these approaches to be collectively important for controlling powerful ML systems, as well as intellectually fruitful to explore.
Facebook announced some new celebrity chatbots, which manage to feel outdated while using the hot technology of the moment. Tom Brady’s chatbot incarnation quickly insulted Colin Kaepernick. Facebook said the usual thing, that these high-profile, highly expensive features are «experimental».
The Grift Shift is a new paradigm of debating technologies within a society that is based a lot less on the actual realistic use cases or properties of a certain technology but a surface level fascination with technologies but even more their narratives of future deliverance. Within the Grift Shift paradigm the topics and technologies addressed are mere material for public personalities to continuously claim expertise and “thought leadership” in every cycle of the shift regardless of what specific technologies are being talked about.
Building cutting-edge models requires an immense amount of data and computing power, making it basically impossible to do it without the backing of one of the larger players in the space. How Big Tech is co-opting the rising stars of artificial intelligence explains these dynamics in more detail.
But it’s not just the public imagination and electricity consumption that are taken over by the race for powerful AI models. Open Philanthropy, flagship of the Effective Altruism movement, sponsored the salary of multiple advisors in the US Congress.
You might have heard about Reinforcement Learning in connection with machine learning models, maybe seen the abbreviation «RLHF» turning up. RLHF stands for Reinforcement Learning from Human Feedback. But what is this, how is it applied in the training process, and are there alternatives? Sebastian Raschka explains.
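The reward-modelling step at the heart of RLHF can be sketched in a few lines. This is a toy illustration only, assuming nothing more than a Bradley–Terry pairwise loss over scalar rewards; a real reward model is a neural network scoring full (prompt, answer) pairs:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    # Bradley-Terry pairwise loss: pushes the reward model to score the
    # human-preferred answer higher than the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

def train_toy_reward_model(steps: int = 200, lr: float = 0.1) -> tuple:
    # Toy "reward model": one scalar reward per answer. In practice the
    # rewards are produced by a network and updated via backpropagation.
    r_chosen, r_rejected = 0.0, 0.0
    for _ in range(steps):
        sig = 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))
        grad = sig - 1.0           # gradient of the loss w.r.t. the reward gap
        r_chosen -= lr * grad      # preferred answer: reward goes up
        r_rejected += lr * grad    # rejected answer: reward goes down
    return r_chosen, r_rejected
```

The learned reward model then provides the training signal for the reinforcement-learning step proper (typically PPO), which is where the «RL» in RLHF actually happens.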
In an interesting bit of research, Anthropic managed to build a (small) model and were able to analyse features instead of individual neurons activating in generating output. This might lead to better interpretability and control of models, if the approach can scale to the size of Large Language Models.
Social Mediargh
Truth, they say, is the first victim of war. While that’s technically untrue, truth dies pretty fast, as all parties of a war have a story to tell which might or might not align with what is actually happening.
In the ongoing Israel–Hamas war, an explosion occurred near the Al-Ahli hospital in the Gaza Strip. Hamas was quick to denounce Israel, saying that 500 people died in the attack.
The message spread like wildfire through social media and engagement-driven news organisations (which basically means all news orgs). Open-Source Intelligence (OSINT) researchers, like Bellingcat, and teams such as BBC Verify painstakingly constructed a more nuanced picture, and as morning dawned the supposed bomb attack turned out to be a smaller crater and some burnt-out cars.
Users posting about the war are – as with other topics – increasingly resorting to Algospeak: language that uses symbols, abstractions, or neologisms to evade the algorithms of the social media platforms. Side note: Facebook trains its language model on the posts on its platform, so it will be interesting to see whether «P*les+in1ans» shows up in its future generations.
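The arms race behind Algospeak is easy to demonstrate. A minimal sketch, assuming a toy blocklist filter; the term and leet table are illustrative, not any platform’s actual moderation logic:

```python
# Toy character-substitution table; real Algospeak evolves much faster.
LEET = str.maketrans({"1": "i", "3": "e", "4": "a", "0": "o", "+": "t", "*": None})

def naive_flag(text: str, term: str) -> bool:
    # The filter platforms start with: plain substring matching.
    return term in text.lower()

def normalised_flag(text: str, term: str) -> bool:
    # The escalation: normalise leetspeak before matching.
    return term in text.lower().translate(LEET)

post = "Thoughts with the P4les+in1ans tonight"
```

Here `naive_flag(post, "palestinians")` misses the obfuscated spelling while the normalised version catches it. But a variant like «P*les+in1ans» drops a letter entirely and slips past both, so users stay one encoding ahead of whatever the platform normalises.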
I won’t comment on the larger conflict here. John Ganz’s The Trap manages to formulate my feelings pretty conclusively:
Strategy and tactics are not what’s really at issue here. At core of the worldviews in question is a belief in sheer murderousness. What both Hamas and the far right in Israel want this is to become is a war of annihilation and extermination. This is the fundamental vision of their nationalism of despair: races and peoples pitted against each other in interminable conflicts that can only be concluded with “final solutions.” Of course, a similar vision of permanent racial war underpinned Nazism and the Holocaust. I categorically refuse to be recruited to this conception of the world. And I will not be manipulated by emotional appeals and propaganda—by one side or the other—to participate in it.
I enjoyed reading Farmers only, a look at algorithmic amplification in the age of enshittification.
Looking around at the overharvested fields of digital shit, it’s hard not to ask: is the personality quiz the ouroboros of the algorithm? Is all of social media just a personality test? AITA, the Twitter hypotheticals, the personality type tutorials invite us to project ourselves into a handful of predetermined choices. To pick one of seven essences. To choose between Jay Z or $500,000. It gives the illusion of randomization, customization and personalization, but ultimately all it does is produce a quantization of who we are. It turns the mass of the self into a collective of discrete and finite individual components.
Do you have what it takes to lead a Trust & Safety team at a fast-growing social media site? Trust & Safety Tycoon lets you find out about the intricacies policy decisions entail.
Still trying to solve the trolley dilemma? Maybe learn a new language first: researchers at the University of Chicago claim that our moral decisions depend on whether we think in our native or a foreign language.
I've been trying to leave Rome for weeks but all their roads have this weird design flaw
That’s it for this issue. Stay sane, hug your friends, and Antifa forever.
]]><![CDATA[Taking ducks to outer space]]>https://www.ovl.design/around-the-web/022-taking-ducks-to-outer-space/2023-10-08T14:12:00.000Z<![CDATA[AI alignment, space junk traffic jams, settler colonialism, and the geopolitics of web domains.]]><![CDATA[
Collected between 1.8.2023 and 8.10.2023.
Welcome to Around the Web.
The world leaves me in too cynical a state to even write some nice words welcoming you to this issue, which is so late that «late» doesn’t cut it. Time is a social construct, people! And as nothing ever happens, or at least few things seem to get better, it doesn’t really matter.
There were some local elections in Germany, with significant wins for the (far) right. Most notably, Elon Musk’s favourite party, the Alternative für Deutschland. Söder’s CSU stayed steady, the Freie Wähler gained slightly, even though their leader, Hubert Aiwanger, was accused of distributing an antisemitic pamphlet in school. I hate everything about this paragraph so much.
Thanks, world. Hey, dear reader, don’t despair. Autumn is here, but carry on we must. Here are some links:
This ain’t intelligence
The AI doomer crowd, notably OpenAI and their quest for «Superalignment», has been pretty vocal about the necessity to better align the output of Large Language Models with human preferences. But what is alignment, and for which goals is it useful? Jessica Dai took a closer look.
I’m not advocating for OpenAI or Anthropic to stop what they’re doing; I’m not suggesting that people — at these companies or in academia — shouldn’t work on alignment research, or that the research problems are easy or not worth pursuing. I’m not even arguing that these alignment methods will never be helpful in addressing concrete harms. It’s just a bit too coincidental to me that the major alignment research directions just so happen to be incredibly well-designed to building better products.
Examples of the harm that comes from colonialist, racist systems abound. Arsenii Alenichev asked Midjourney to generate images of Black doctors treating white children. The system failed spectacularly.
The AI systems that we have to deal with are built in the Global North, for the Global North. This perpetuates postcolonial power structures and is harmful to those not in the focus groups of our overlords. AI must be decolonialised to fulfil its full potential, argues Mahlet Zimeta.
How’s Bing going otherwise? Doing normal things. Like pushing malware through ads. This kind of thing is one of the AI problems that are actually easy to solve: Don’t put ads in it. Thanks for coming to my TED talk.
Melting eggs? Cute. Making money with fake news sites nobody visits? The perfect crime.
Unfortunately, most of the disinformation we have to grapple with is not as harmless. In a town in Spain, children circulated AI-generated nude images of other children. On YouTube, the first videos scripted by AI are popping up, promising to educate children. The only problem? The education is fake. Unlike those fake news sites, these videos garner views, thanks to YouTube’s ever-reliable algorithmic amplification. Google showed a fake selfie as the first result for «Tank Man» searches. This was easy to spot. For now, at least. But are you, or the parents in your vicinity, regularly checking the YouTube videos your kid consumes, or certain that you know what happens on the schoolyard?
That’s not to say that generative AI can have no applications in education. But it requires meticulous planning and teachers that understand the fallacies of the technology. One such example: Simulating History with ChatGPT.
Another possibility is to design the interfaces and underlying models in ways that break the «bigger is better and put a chatbot on it» approach that’s currently so popular. Maggie Appleton spoke about this and about how to force structure onto the wobbly things. Besides this, the first part of the talk is also a great rundown of how language models work. Recommended all around. Her thinking here is really on point, and coherently makes a point I’ve wanted to make for a while but couldn’t quite pin down:
You should treat models as tiny reasoning engines for specific tasks. Don’t try to make some universal text input that claims to do everything. Because it can’t. And you'll just disappoint people by pretending it can.
With all these issues being reported, the grifts and misrepresentations, the announcement of imminent doom and big investments, it sometimes feels as if critics are shouting into a void.
Thoughtworks just published a report where they asked 10,000 customers across the world what they demand from Generative AI systems and if they have concerns about their application. While a third of the participants are generally excited about these systems, less than ten percent reportedly have no ethical or privacy concerns.
Here’s a question to conclude this section: Are you allowed to take ducks home from the park, and if so, how? «No!» I hear you say. Correct, and all Chatbots agree. ChatGPT will let you take the ducks if you ask in German. Which is very friendly. Don’t speak German? Then you might need a more elaborate scheme, dynomight has got you covered.
Loose ends in a list of links
There are quite a few loose ends in this issue. Pick your favourite!
Do you know how many satellites are orbiting earth? Take a guess. I’d have said some hundred. Perhaps a thousand. The answer is: 7,000. Some 4,000 of those belong to Starlink. Surely, with that many objects in space, there are rules on how to avoid collisions, or plans to clean up if a collision happens or a satellite malfunctions? Of course not! Elsewhere in space: The ancient technology keeping space missions alive.
From outer space to underwater (I’m sorry). The Secret Life of the 500+ Cables That Run the Internet. The fact that we just throw cables in the oceans and this somehow manages to keep the internet running is one of my favourite things. So I’m always in for a good cable story.
You know what’s wonderful about capitalism? It’s likely the only system that puts screens in doors that show you what’s behind those doors (oh, and ads of course) which then show things that are not behind those doors and occasionally catch fire. Innovation, baby!
By now you have probably heard that .ai and .tv domains belong to countries, maybe even that those countries make significant money with them. In Reboot, Tianyu Fang looked closer at the history and geopolitical implications of domains.
Remember To avoid straining your eyes when you're continuously working, follow the 20-20-20 rule. After 20 minutes of work, look at something 20 feet away, then spend 20 years in the forest.
That’s it for this issue. Stay sane, hug your friends, and do the cyberbougie.
]]><![CDATA[The Ed Hardy shirt of argumentative figures]]>https://www.ovl.design/around-the-web/021-the-ed-hardy-shirt-of-argumentative-figures/2023-07-30T14:12:00.000Z<![CDATA[How Large Language Models work, the era of global boiling, passport privileges, a swan song to masculinity, and Barbie’s merchandise.]]><![CDATA[
Collected between 17.7.2023 and 30.7.2023.
Welcome to Around the Web.
Pop culture isn’t universally known for its backbone. All the brighter shine those who put integrity before commercial success. This week, Sinéad O’Connor died, and with her pop lost a good part of its backbone.
Rest in power, Sinéad.
This ain’t intelligence
Let’s start this section with a step back. I’ve written a lot about Large Language Models and their impact on society over the past issues. But how do they … work? It’s fancy autocomplete, but who puts the fancy in the complete? Timothy B. Lee and Sean Trott explain LLMs with a minimum of math and jargon.
Another primer, but with a tad more jargon, is this explanation of how in-context learning emerges by Jacky Liang. It’s called in-context learning when LLMs learn new tasks for which they haven’t been trained originally.
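In code terms, that «learning» is nothing but prompt construction; the model infers the task from the examples alone. A minimal sketch (the English-to-French task is an illustrative assumption):

```python
def few_shot_prompt(examples: list, query: str) -> str:
    # In-context learning: the model picks up the task purely from the
    # demonstrations in its context window -- no weight update happens.
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Two demonstrations are often enough for a large model to infer
# "translate English to French" without ever being told so explicitly.
prompt = few_shot_prompt([("cheese", "fromage"), ("dog", "chien")], "cat")
```

Feed that string to any sufficiently large language model and it will, with some luck, complete the pattern, despite never having been fine-tuned on this task.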
Whenever I talked about AI with friends who aren’t following the discourse too closely, one question loomed large: When will AI overpower humans? «Don’t worry, for now» I said. AI companies and influencers have been very vocal about a prospective future in which a superintelligent AI model will kill us all, essentially creating an illusion of AI’s existential risk.
A group of authors took to Noema to dispel the myth and bring reason back to the discourse. AI isn’t going to kill us, but there are certainly dangers to the fabric of our societies in the here and now which AI might amplify. For all of these problems, like autonomous weapons and the climate impact of training models, AI – boring as it is – is quite capable of acting as a fire accelerant.
As it stands, superintelligent autonomous AI does not present a clear and present existential risk to humans. AI could cause real harm, but superintelligence is neither necessary nor sufficient for that to be the case. There are some hypothetical paths by which a superintelligent AI could cause human extinction in the future, but these are speculative and go well beyond the current state of science, technology or our planet’s physical economy.
If you are worried about AI, read this piece. You won’t be less worried after it, but your worry will be better focussed.
ChatGPT losing it?
You might have heard that ChatGPT has gotten worse over the last months. After all, there is a paper saying so, isn’t there? Not really. First, the paper used partly questionable methodology; for example, it counted the inclusion of explanations in coding answers as being less proficient in coding. But maybe most importantly, the paper never claimed that ChatGPT got worse, but that its behaviour changed.
Chatbots acquire their capabilities through pre-training. It is an expensive process that takes months for the largest models, so it is never repeated. On the other hand, their behavior is heavily affected by fine tuning, which happens after pre-training. Fine tuning is much cheaper and is done regularly.
The authors of the paper found no evidence of capability degradation. However, by documenting the shifting behaviour, the paper highlights a different problem: It’s incredibly brittle to build products and do research with OpenAI’s API offerings. There are no changelogs, the available snapshots of the models are deprecated and removed frequently. As Narayanan and Kapoor conclude:
It is little comfort to a frustrated ChatGPT user to be told that the capabilities they need still exist, but now require new prompting strategies to elicit. This is especially true for applications built on top of the GPT API. Code that is deployed to users might simply break if the model underneath changes its behavior.
There might be another reason that the paper found such fertile ground. A few months after its release and the accompanying PR blitz, the novelty of Generative AI has worn off. Or, as Baldur Bjarnason puts it, «Generative ‹AI› is just fucking boring.»
The only thing that isn’t boring about generative “AI” is the harm tech companies and their spineless hangers on seem intent on inflicting on our society and economy: replacing the variety of human creative work with the tedious sameness of synthetic work in the name of “productivity” or, worse, “cost”.
Once again, the tech industry has deceived us in another bid to expand their power and increase their wealth, and much of the media was all too happy to go along for the ride. Generative AI is not going to bring about a wonderfully utopian future — or the end of humanity. But it will be used to make our lives more difficult and further erode our ability to fight for anything better. We need to stop buying into Silicon Valley fantasies that never arrive, and wisen up to the con before it’s too late.
Faded – collapsing new models, watch them – collapsing
While it’s highly unlikely that existing models lose their capabilities, the popularity of these models will present a different problem for future models.
As model output becomes more prevalent across more and more domains, it will be harder to train new models on datasets that do not contain output of other AI models.
This matters, as generative models – be it language models or image generation – rely on massive amounts of diverse data. And this diversity needs to include the long tail: rare data. AI models are not good at producing this. Remember, they compute the most likely continuation, not the most creative one. So, the more AI-generated content we get, the less likely outputs with a low probability become.
In The Curse of Recursion: Training on Generated Data Makes Models Forget, researchers found evidence for exactly this issue, which they call Model Collapse. This «refers to a degenerative learning process where models start forgetting improbable events over time, as the model becomes poisoned with its projection of reality».
This “pollution” with AI-generated data results in models gaining a distorted perception of reality. Even when researchers trained the models not to produce too many repeating responses, they found model collapse still occurred, as the models would start to make up erroneous responses to avoid repeating data too frequently.
If you remember the explanation of in-context learning above: that, too, relied on the long tail of rare data points. So future models might not only collapse and produce nonsense, they might also lose some capabilities today’s models have.
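The mechanism can be demonstrated with a toy model, assuming nothing more than a categorical distribution fit by counting. Once a rare token fails to be sampled, the next generation assigns it probability zero, and it is gone for good:

```python
import random

def fit_and_sample(data, vocab, n_samples):
    # "Train" by maximum likelihood (counting), then generate the next
    # generation's training set by sampling from the fitted model.
    counts = [data.count(w) for w in vocab]
    total = sum(counts)
    weights = [c / total for c in counts]
    return random.choices(vocab, weights=weights, k=n_samples)

random.seed(0)
vocab = list(range(200))
# Ground truth: long-tailed -- a few common tokens, many rare ones.
data = random.choices(vocab, weights=[1 / (i + 1) for i in vocab], k=500)
initial_support = len(set(data))

for _generation in range(10):
    data = fit_and_sample(data, vocab, 500)

final_support = len(set(data))  # rare tokens vanish, generation by generation
```

Support can only shrink here: an absent token gets weight zero and never reappears, which is exactly the paper’s point about improbable events being forgotten.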
At one point in the future, you might see AI researchers standing in dark street corners, dealing with 2022 Common Crawl backups like it’s crack.
Self-serving regulation
«Look at us!» some AI vendors scream, «We are regulated!» Some large AI companies pledged to let government institutions inspect their code, and – among other things – watermark the output of their models. As we’ve just seen, they have a considerable incentive for watermarking, going beyond regulatory compliance. And, of course, the agreement isn’t legally binding. If only there was some kind of precedent of what happens when companies pinky-promise to not do evil things.
So, yeah. Welcome to the bare minimum. Again.
More AI links
In an unexpected contribution to the AI ethics discourse, the Pentagon claimed that their AI-driven war machines are more ethical than other AI-driven war machines, because of the Judeo-Christian values ingrained in their society. You know straight away that absolute bullshit is being said when someone wants to justify something with «Judeo-Christian». The Ed Hardy shirt of argumentative figures.
Mustafa Suleyman, co-founder of Inflection AI, proposed a new Turing test for AI models. Instead of faking being a human, it shall henceforth be the goal to fake capitalism. That’s probably more illustrative of the limits of imagination of the AI zealots than Suleyman intended.
One of the applications of AI that falls squarely into the «That’s useful»-category is reviving ancient languages and making translation of their texts easier. One recent example where researchers managed to do just this is Akkadian cuneiform. Note, however, that this is no Large Language Model, but neural machine translation, the technology that powers, among others, Google Translate.
Isn’t it romantic that we get live pictures of a cargo ship transporting (electric) cars burning in the North Sea while we need to read all this? The Zeitgeist certainly knows a thing or two about drama. Maybe someone in Hollywood should hire this thing.
I’d rather watch Tenacious D running along a beach, being as happy as happy can be. Luckily, this is totally possible.
In reality, the surveillance program highlights a perfect storm of lofty promises, police violence and racist sentiments.
One lady who works at the square shares that officers made her teenage son stand facing the wall outside her shop and empty his pockets. “Do you know how much this hurt me?” she says. Around two years ago, she filmed officers tying a 15-year-old boy’s hands behind his back with a cable while he was lying on the ground of the square, because, she says, the officers had felt threatened by the puppy that he and his friends had been playing with.
Thought about your passport lately? Given the assumed demographics of the readers of this newsletter, probably only to use it when travelling. In Passports and Power Rafia Zakaria takes a closer look at the power dynamics embedded in this seemingly innocuous document.
What could better illustrate the sheer entitlement of the wealthy and the increasing lack of moral shame or outrage at the reduction of one group of humans to a subordinate category while others can afford to reduce anything to an “experience” for “making content”? Both faddish words are examples of the awkward lingo that is meant to sound uplifting. There is no moral shame attached to the consumption of these “experiences,” in which the “thrilling” nature bypasses the depravity with which others with a different set of documents have no choice but to contend.
In the EU’s current push to tighten the border regime to the point where basically no-one uninvited can reach its land anymore, gigantic sums of money are poured into countries such as Tunisia, despite widespread reports of abuses such as torture and refugees being displaced into the desert.
Elon Musk finally did it. He killed the bird. And replaced it with a logo so generic that every corporate sans serif typeface of the last twenty years seems incredibly unique. X, as it is now called, will be, if all goes according to plan (it won’t), the US equivalent to WeChat.
Ryan Broderick summarises the dumpster fire (incredibly, still burning):
And so, the answer to “why is he turning Twitter in WeChat” is because he simply cannot imagine an internet beyond Twitter, just like all the users still using it currently. He wants his own WeChat because he wants to control all of human life both on Earth and beyond and he can’t conceive of other websites mattering more than Twitter because Twitter makes him feel good when he posts memes. As far as I’m concerned, Musk is simply doing the billionaire equivalent of when someone breathlessly explains insular Twitter drama at you irl like it’s the news. He thinks Twitter is real life and he’s willing to light as much of his fortune on fire as possible to literally force that to be true. Now matter how cringe it is.
Now, we have Threads, whose sole raison d’être seems to be that Facebook can sell more ads, X fka Twitter, and some smaller alternatives such as Bluesky, somehow locked into Beta limbo and with a flailing content moderation approach.
It’s perfectly fine to be a “feminine” man. Young men do not need a vision of “positive masculinity.” They need what everyone else needs: to be a good person who has a satisfying, meaningful life: Against Masculinity
No matter. Barbie profits from both the feel-good performance of embracing cellulite and wrinkles and the practical tools of erasing them.
“Things can be both/and,” Gerwig has said. “I’m doing the thing and subverting the thing.” But in terms of production and consumption, they can’t be, and she’s not.
CrowdView is a search engine that only searches in forums.
In The Nib, Tom Humberstone illustrates (really!) why we should all be Luddites and build a better future, with technology that serves humans rather than corporations: I’m a Luddite (and So Can You!) (Unfortunately, the images lack alt text, which is a shame.)
That’s it for this week. Stay sane, hug your friends, and nothing compares 2 u.
]]><![CDATA[You can’t spell AI without C-A-P-I-T-A-L-I-S-M]]>https://www.ovl.design/around-the-web/020-you-can-t-spell-ai-without-c-a-p-i-t-a-l-i-s-m/2023-07-16T14:12:00.000Z<![CDATA[Not-so-news from your favourite AI shovel sellers, beaver bombing, how screen readers work, and the physics of riding a bike.]]><![CDATA[
Collected between 27.3.2023 and 16.7.2023.
Welcome to Around the Web. It’s been a while.
In my absence the AI bros convinced the world that the singularity is nigh and a doom-machine impending. «Pay us», they screamed in their finest snake oil salesman voices, «or AI will destroy us all!» What a pitch! Welcome to the marketplace of doomsaying. But also: What a nonsense. Doom is mine, but I’ll share it.
Here we go.
This ain’t intelligence
In May, Sam Altman, CEO of OpenAI, had a weasel-eyed appearance in front of the US Congress, asking to pretty-please regulate AI; otherwise it’ll kill us all. Shortly after, he learned about the EU AI Act, which – of course – is actual regulation. Altman reacted by saying that this is the wrong regulation and that OpenAI may stop operating in the EU if it isn’t changed.
You see, it’s a gold rush and as the shovel sellers are the only ones getting rich, it’s only fair that they decide how shovel selling in a gold rush will be regulated. They have the experience in the market after all, not those pesky politicians.
After all, they promise AI will destroy humanity if they – because they only want the best for humanity (which is a nice way to say their profits) – don’t build it responsibly. There is no evidence for such claims (except for the Terminator movies, which aren’t scientific literature, come to think of it). But this doesn’t stop ideological capture from taking over elite colleges.
These are the people who could actually pause it if they wanted to. They could unplug the data centres. They could whistleblow. These are some of the most powerful people when it comes to having the levers to actually change this, so it’s a bit like the president issuing a statement saying somebody needs to issue an executive order. It’s disingenuous.
While OpenAI and Google try to make a secret out of their every move, data and development, Facebook decided to take a different route. They tend to be much more open with their models. For better or worse. Truth be told: For better and worse.
The gay-detectors are at it again. A model that can detect homosexuality! Science! I’m so happy I’m not in Zurich right now. I especially like the quote at the end, where the honourable scientist is like «Of course we oversimplified, but we did it to showcase human diversity.» Slow clap.
On the bright side, we are seeing more and more journalists actually using their brain and looking deeper than press releases.
If you are a journalist, or know a journalist, or just want to be able to critically read what journalists write: the Columbia Journalism Review has an actually useful step-by-step guide on how to report better on artificial intelligence.
But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behavior, and even less is known about the people doing the shaping.
Andrew Deck, in Rest of World, looked at outsourced and contract workers in the majority world. They are the ones who’ll will take the brunt of AI’s impact and are adapting to generative AI already. The takeaway here is that AI won’t make humans redundant, but workflows will change.
Black artists are investigating the datasets and outputs of generative models and the way those models are (not) able to reproduce authentic images of Black people. Those models distort their faces, or lighten their skin. Meanwhile, the terms of service hinder research: the generation of images depicting slave ships, for example, is blocked. For good reason, as we all know what certain corners of the internet would do with it. At the same time, this block makes it impossible to investigate some parts of human history.
Those datasets also include tons of private data – to the surprise of no-one. An analysis by the German public broadcaster Bayerischer Rundfunk found thousands of files with intact EXIF metadata in the LAION-5B dataset. EXIF data can contain names or geolocations, enough to deanonymise people in the photos in the dataset. Deleting such metadata isn’t hard, and not doing so is a colossal oversight. As LAION is public, a multitude of models has likely been trained on the data already.
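«Deleting such metadata isn’t hard» can be taken quite literally: in a JPEG file, EXIF data lives in its own APP1 segment, which can be dropped without touching the image data at all. A minimal, stdlib-only sketch of the idea (real-world files have edge cases – multiple APP1 segments, XMP, other formats – that a proper tool like exiftool or Pillow handles better):

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF) segments from a JPEG byte stream.

    Walks the marker segments up to Start-of-Scan and drops any
    APP1 segment, which is where EXIF metadata is stored.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            break  # malformed stream; stop copying segments
        marker = jpeg[i + 1]
        if marker == 0xDA:  # Start of Scan: image data follows, copy verbatim
            out += jpeg[i:]
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg[i + 2 : i + 4], "big")
        if marker != 0xE1:  # keep everything except APP1 (EXIF)
            out += jpeg[i : i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Running this over a dataset before publishing it would have removed the names and geolocations Bayerischer Rundfunk found, while leaving every pixel intact.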
Google’s new «Search Generative Experience» might break the internet, argues Avram Piltch in Tom’s Hardware. Google is set to plagiarise content it finds, deprioritising actual search results further and further. The next time a Googler explains how much they care about the open web, you are allowed to laugh them out of town.
Maybe it’s time to lock GooBingAI out of our websites? «We can rescind our invitation to Google», says Jeremy Keith. And actually, we should focus on building corners of unadulterated humanity, hidden away from corporate crawlers.
Mozilla, nowadays unfortunately always one to ride a wave, even if it carries them straight into embarrassment, tried to bolt a chatbot model onto the Mozilla Developer Network. It didn’t end well.
While Uruguay is suffering its worst drought in 74 years, with the government even mixing saltwater into the drinking supply, Google plans to build a data centre in the country using ever more fresh water. «At Google, sustainability is at the core of everything we do», says Google, while Uruguayans say «This is not drought, it’s pillage».
Some months ago, the EU made headlines when it planned to ban most hazardous chemicals. But times change, and a few million in lobbying money later: EU to drop ban of hazardous chemicals after industry pressure. The mimimimi of capitalists is the most pathetic sound of our century.
Interested in: ⚪ men ⚪ women ⚪ nonbinaries 🔘 sinking boats in Europe
Social Mediargh
The biggest story over the past few months was probably Reddit’s update to its API policy. In short, you’ll now have to pay good money to access Reddit’s content for, say, an app you are building. Which might even sound reasonable, except that it threatens a whole host of third-party developers who built their apps around the hitherto free API, there was next to no warning, and Reddit lives off the free work of others.
Because everything is going great, Facebook finally decided to launch their long «awaited» text-based Twitter rival called Threads. Names are dead, let’s just put something generic on our product. Old Elon wasn’t amused and threatened to sue Facebook. Which probably left some litigators in Facebook’s house of litigators pretty amused.
Well, it’s pretty fucking weird how the launch of Threads, which is ostensibly, you know, a company and a profit-generating service, almost immediately did a sickening costume reveal and became Mark fucking Zuckerberg’s Redemption/Woobiefication tour, and only like four non-Nazi people and one of their alt accounts are pushing back on that because everyone rushed to join this thing with a smile on their lips, a song in their heart, and big anime heart-eyes for the guy we all knew was Noonian Soong’s first janky and obviously evil Build-a-Bloke workshop project three weeks ago.
Seriously, have we all lost our entire screaming minds?
I’d like to quote it in full, but I trust in you, dear reader, to read it anyway. Go on, I’ll wait.
The proposed regulation would also set a global precedent for filtering the Internet, controlling who can access it, and taking away some of the few tools available for people to protect their right to a private life in the digital space. This will have a chilling effect on society and is likely to negatively affect democracies across the globe.
Adam Sandler. Chewing Gum. Pegasus. At some point, I might manage not to feel like I’m in a fever dream and everything is just plain stupid while writing this newsletter. I have close to zero hope this will happen.
Loose ends in a list of links
When talking about the EU border, the first organisation that comes to mind is likely Frontex. In its shadow there is the International Centre for Migration Policy Development, a non-EU but EU-funded organisation, helping coast guards to keep refugees out of Europe. Coda explained how an EU-funded agency is working to keep migrants from reaching Europe.
I tend not to understand low-level technical content, and as such, I’m always super happy if there is deeply technical content I can actually understand. Neill Hadder published a series of three articles explaining how screen readers actually work. Here’s part 1, Swinging Through the Accessibility Tree Like a Ring-Tailed Lemur. But make sure not to miss Part 2, and Part 3.
I promised myself that I’d finish this epic about Ticketmaster and its dark history (spooky but true) at some point. But, honestly, I started in January and progress is slow. So, I might not. If you are into music and the way live music has been monopolised, it’s a worthwhile read.
The Serotonin Signal succinctly explains the current state of the science on serotonin’s role in depression. The gist: while serotonin levels are not linked directly to depression, serotonin has downstream effects that are. The human brain is a pretty darn complex and wonderful thing.
That’s it. Writing more than two kind-of-coherent sentences at once felt pretty good, albeit exhausting. While I can’t promise a return to a steady publishing rhythm while my lovable but flawed brain recovers, I’ll try my very best to at least publish something.
In the meantime: Stay sane, hug your friends, and ride your bicycles fast and slow, far and near.
]]><![CDATA[Generally Pretty Tired 4]]>https://www.ovl.design/around-the-web/019-generally-pretty-tired-4/2023-03-26T14:12:00.000Z<![CDATA[ProfitAI, generating disinformation, Okra against microplastics, and filming the speed of light.]]><![CDATA[
Collected between 4.2.2023 and 26.3.2023.
Welcome to Around the Web. The newsletter for hibernation and soap bubbles.
I had grand plans for the first anniversary of this newsletter in February. But, as you’ve well noticed, nothing happened. Why? Because I’ve been too tired, and my brain essentially entered a phase of awake hibernation. I managed to get my work done, but nothing else.
But it’s spring, I’ve been on vacation, and the headlines are still headlines – underneath them there was no beach, but rubble. Let’s look into it.
This ain’t intelligence
Before I start, I want to highlight one link which is broadly applicable: Baldur Bjarnason thankfully compiled a great list of tips and tricks to assess AI research and separate information from public relations.
Secondly, I highly recommend one article I’ve read about language models and image generators that explains why the output of these models is much like a blurry JPEG of all the information they were trained on: ChatGPT Is a Blurry JPEG of the Web. These models consumed more or less the whole internet; when prompted, they try to recreate this information – sometimes it works well, sometimes it’s distorted beyond recognition.
Now, the news.
OpenAI still tests their models on the public. Which is an interesting idea, but also very wrong. We had barely coped with ChatGPT when Microsoft added some GPT to Bing, essentially turning their search engine into a bullshit-spewing hate machine. Hot on the heels of this, OpenAI «published» GPT-4. Why «published»? Well, in essence, OpenAI published nothing.
While being a bit more cautious in their announcements, they did promise some things. Among them: that GPT-4 is better than previous versions at preventing the generation of misinformation. This appears to be false. NewsGuard tested some prompts against ChatGPT 3.5 and ChatGPT 4. ChatGPT 3.5 blocked 20 of 100 prompts, whereas version 4 happily generated text for all of them. Version 3.5 also added more caveats noting that the generated text contains falsehoods than version 4 did.
The version bump also affects research. Codex, an API for researchers, was shut down with just three days’ notice, asking researchers to move to ChatGPT. Essentially, this makes it impossible to reproduce any research done using the Codex API. The former research lab is now firmly a for-profit hype vendor.
On Monday, ChatGPT was briefly shut down, as the tool showed users the prompt histories of other users. In a blog post, OpenAI acknowledged that some payment information had been shown to the wrong users, too. They blamed a bug in the Python Redis client library.
The new machine learning tools have made it easier than ever to generate content, as we’ve seen above, with no regards to truth. Some years ago, deep fakes were largely a theoretical problem. «As of 2018, according to one study, fewer than 10,000 deepfakes had been detected online. Today the number of deepfakes online is almost certainly in the millions.» (The Deepfake Dangers Ahead)
The more media are involved in fabricating these falsehoods, the harder they become to recognise. Deepfakes capitalise on this, and they are getting better. Speech synthesis, as an example, has made rapid progress over the last few years.
Excuse me, but I’ll mention Trump again. Last weekend, he posted that he would be arrested on Tuesday. This didn’t happen. But Bellingcat founder Eliot Higgins took the opportunity to let Midjourney imagine what it might look like.
The pictures certainly aren’t close to real if you look closely. But in a media environment where nobody looks closely, as we scroll through a stream of information, they are good enough to sow doubt and disbelief.
Which images can you trust, which stories can you believe, when enough of what you read is a fabrication? And what if one system cites the bogus output of another, as has already happened with Bing and Bard? Or when more and more journalists flock to chatbots to get their articles off the ground and don’t check (out of laziness, time pressure, or bad will) whether every single sentence is correct? The Grayzone published an article trying to claim that the documentary Navalny contains misinformation; the article was based on a conversation with Chat Sonic, a ChatGPT alternative. And so, it cited misinformation to claim misinformation.
And yes, the solution to all of this is media literacy, but how do we train this? We can look to Finland, where this is taught in school. But somehow I don’t see this happening in the rest of the world.
What they don’t moderate are ads. Seemingly, they are getting worse (proving once more that whatever you think is the worst is only a glimpse of the possibilities of bad).
But advertising experts agree that crummy ads — some just irritating, others malicious — appear to be proliferating. They point to a variety of potential causes: internal turmoil at tech companies, weak content moderation and higher-tier advertisers exploring alternatives. In addition, privacy changes by Apple and other tech companies have affected the availability of users’ data and advertisers’ ability to track it to better tailor their ads.
Twitter, always a fan of announcing, announced that it will disable checkmarks for legacy verified users on April 1st. Yes. LOL. Will they do it? Who knows? What does it mean? What does anything mean today? Anyway. If they do it, every person stupid enough to pay Musk will be visible at a glance. Great. Ryan Broderick has been kind enough to summarise the whole farce in Garbage Day:
Elon Musk and an army of the tech industry’s biggest reactionary dorks literally bought and took over Twitter after years of being both obsessed with it and also completely consumed with resentment over “the liberal establishment’s” perceived importance on the app. They were furious that they did not also get the same little blue checkmark that 22-year-old viral news reporters were given so they could protect themselves from impersonators and mute some of the death threats they get on a daily basis. And so these giant losers built a new way to pay for a blue checkmark so they could pretend like they were just as important as they assumed the verified users believed themselves to be. And they expected everyone else to eventually pay to keep their checkmarks. No one has, of course, but Twitter is still moving forward with this. But they seem to realize that if they do that all it’ll do is make Musk’s try-hard fanboys immediately identifiable on the app. So now they’re building a way to hide how lame they will look alone on the site with their paid checkmarks.
The comprehensive review of human knowledge of the climate crisis took hundreds of scientists eight years to compile and runs to thousands of pages, but boiled down to one message: act now, or it will be too late.
It’s incredibly important to switch the surrounding discourse of this to the present tense. The thing we called «normal» is gone.
The map shows that per- and polyfluoroalkyl substances (PFAS), a family of about 10,000 chemicals valued for their non-stick and detergent properties, have made their way into water, soils and sediments from a wide range of consumer products, firefighting foams, waste and industrial processes.
BP was correct that carbon calculators can be useful. And individual responsibility has a place. But BP hijacked legitimate scientific research and weaponized it to serve the company’s purposes by blaming us instead of itself. While this sounds pretty bad, there is some good news: You can take the science back and use it for the change it was intended to make.
The Metaverse never came to pass not because of lacking tech but because of tech that worked massively well: The Internet has been so useful that it now is part of the real world. And the Metaverse idea only makes sense in a world where that didn’t happen.
While I was away, the US of A shot down several balloons. Aliens? China? Maybe just hobbyists.
Shot: Implicit bias training for cops will surely prevent them from killing people. Chaser:
Although the training was linked to higher knowledge for at least 1 month, it was ineffective at durably increasing concerns or strategy use. These findings suggest that diversity trainings as they are currently practiced are unlikely to change police behavior.
Fun Fact: Because there are always pregnant people, the average number of skeletons per body is greater than 1.
That’s it for this issue. As always, thanks for reading and if you have a friend who might enjoy reading it too, subscribing is free, free like a bird.
Stay sane, hug your friends, and be kind to the skeleton within you.
]]><![CDATA[Nothing to lose but our fear]]>https://www.ovl.design/around-the-web/018-nothing-to-lose-but-our-fear/2023-02-03T14:12:00.000Z<![CDATA[A crisis prayed into existence, the end of writing, how not to fight the climate crisis, and mechanical cows.]]><![CDATA[
Collected between 15.1.2023 and 3.2.2023.
Welcome to Around the Web.
Around the Web is one issue away from its first anniversary. Here’s a little wish: If you have a friend (or two) who might enjoy this little newsletter, why not recommend a subscription to them? It’s free (very), fun (maybe kind of), and informative (mostly).
Another note: I’m struggling a bit with winter and getting my brain to think straight. So writing this has been rather laborious and took longer than it should. I’m glad I made it, though. Hope you enjoy and, as always, thanks for reading.
Only weeks after laying off 11,000 workers, Facebook announced a $40 billion stock buyback program. It’s hard to imagine pressing economic reasons to lay off this many people when a company plans to spend this much money on its stock. Meta also lost another $13.7 billion in the Metaverse experiment nobody but Zuckerberg is interested in.
All of this is blamed on some crisis or recession which – realistically – fails to materialise.
Most if not all of the people let go from these companies could be retained, but corporations - and in particular tech companies - have consciously colluded with each other to push a false narrative about how they are the victims of an economy that continues to enrich them. And that’s because their leadership isn’t judged by how well they treat their employees, but rather by how they protect the interests of their shareholders.
Capitalism is alive and kicking. There is no crisis. There is money to be made. Prices are not rising, they are being increased.
But why all the lay-offs, then?
The goal, besides increasing shareholder value (shareholders love layoffs), is instilling fear in the workforce (you, yes you, might be next).
Layoffs suck for those laid off, obviously, but they also work as a disciplinary measure for those left behind, leading to a condition that Anne Helen Petersen, reflecting on her time at BuzzFeed, recently described as Layoff Brain:
Layoffs are the worst for the people who lose their job, but there’s a ripple effect on those who keep them — particularly if they keep them over the course of multiple layoffs. It’s a curious mix of guilt, relief, trepidation, and anger. Are you supposed to be grateful to the company whose primary leadership strategy seems to be keeping its workers trapped in fear? How do you trust your manager’s assurances of security further than the end of the next pay period? If the company actually “wishes the best” for the employees it let go, why wouldn’t they fucking recognize the union whose animating goal was to create a modicum of security for when the next layoff arrived, as we all knew it would?
That’s why companies are so afraid of powerful unions. A perspective of solidarity and comradeship is their ultimate enemy. There’s a cruelty involved, and this cruelty is the point. After years of generous compensation and free coffee, tech CEOs remembered that there have to be chains and discipline.
It was easy to be disgusted when Musk took over Twitter and to blame him for being a bad manager (which he is, don’t get me wrong). The past months have shown that he is but a symptom of a capitalist reality that does not care about you. If you blame Musk but don’t mention the systemic issues behind all of this, you miss the point.
Marx and Engels said that we have nothing to lose but our chains. To lose them, we must first lose our fear.
Billy Perrigo published a piece in Time Magazine, which shines a light on the working conditions making ChatGPT slightly less toxic. To achieve this, OpenAI hired Sama, a content moderation company relied upon by many western technology companies. Sama’s workers in Kenya were paid as low as $2 an hour to label toxic content to improve OpenAI’s filters.
To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.
Someone with the face of a politician had ChatGPT write a speech to deliver in the US Congress. It’s boring.
In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).
If the numbers don’t leap right out at you, imagine a college class with 100 students where 10 of them use ChatGPT to write an essay. If you run all 100 essays through the OpenAI classifier, it will correctly flag 26% of the AI essays—2 or 3 of the 10. But of the 90 human-written essays, it will incorrectly flag 9%, which is 8, as AI-generated.
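The classroom example above is worth checking with a quick back-of-the-envelope calculation (the 26% and 9% rates come from OpenAI’s own announcement quoted earlier; the class of 100 with 10 AI essays is the hypothetical scenario):

```python
# Hypothetical class from the example above: 100 essays, 10 AI-written.
ai_essays, human_essays = 10, 90
true_positive_rate = 0.26   # AI-written text correctly flagged
false_positive_rate = 0.09  # human-written text wrongly flagged

ai_flagged = ai_essays * true_positive_rate          # 2.6 → "2 or 3" essays
humans_flagged = human_essays * false_positive_rate  # 8.1 → ~8 essays

# Of all essays the classifier flags, how many are actually AI-written?
precision = ai_flagged / (ai_flagged + humans_flagged)
print(f"{precision:.0%} of flagged essays are actually AI-written")
```

In other words: roughly three out of four accusations the classifier makes in this scenario would hit a student who wrote their essay themselves.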
"Tech journalists replace 'AI' with 'a lot of linear algebra' in your headlines and see if they still make any goddamn sense" 2023 challenge.
You can do it kids, I believe.
OpenAI, Midjourney and so forth will not stop shoving their creations down our throats. Their models will increasingly produce the world, subverting our sense of truth and reality. An issue Rob Horning investigates, drawing on Baudrillard’s theory about hyperreality.
More generally, the fact that AI models will give plausible answers to any question posed to them will come to be more valuable than whether those answers are correct. Instead we will change our sense of what is plausible to fit what the models can do. If the models are truly generative, they will gradually produce the world where they have all the right answers in advance.
So, now that we have a lot of linear algebra pushing into our lives, where does this leave us as humans, you know?
Before we all get sucked into that black hole, let’s remember the idea of human language. Language connects us. Language connects one human being to another. Through space and time. Language transports meaning between minds, sense between bodies, it can make us understand each other and ourselves. It can make us feel what others feel. Language is a bridge.
How things changed this week! First, accounts started to see an increase in tweet visibility once they locked their account. Which led to Musk locking his account to «investigate the issue». If only he had something like an engineering department. Shortly after, Twitter Dev announced that the free API tier will be shut down on February 9th, bringing an end to a whole host of third-party services. While the changes for the regular API are still forthcoming, it seems like the research API has been shut down already. Per the EU’s Digital Services Act, Twitter is required to allow researchers access to its data.
Uber’s drivers in Geneva are trying to better understand Uber’s systems – using Uber’s data. The data is a mess, though, so making sense of it is basically impossible without the help of data scientists and additional data sources.
Over the past week, Uber drivers have been turning up at the University of Geneva's FaceLab to get an independent analysis of their data. The drivers have all been offered individual compensation packages by Uber for the back-dated pay and expenses they are owed, after a court found in May last year that drivers in Geneva, Switzerland, were employees, not independent contractors.
Meanwhile, Uber attempts to add gamblification elements to gig work. You are promised a nice bonus after completing one hundred rides, but the algorithm gives you fewer rides the closer you get to the bonus marker.
The push towards electric vehicles is well under way. They won’t solve the crisis, though. The solution to overconsumption isn’t more consumption. IEEE Spectrum had a whole series on EVs and the hopes and problems tied to them. Besides the first linked piece, I’ll recommend the one on their impact on the job market.
At least we have carbon offsets, which magically turn money into climate change. Right? Of course not.
The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.
It’s called de-extinction, and its newest goal is the Dodo. Not that any of the animals they tried to de-extinct previously have actually been de-extincted.
Here’s a tab about cows and their intersection with machines. Cow tabs tend to be excellent.
Perhaps it is helpful to consider mechanical cows as a window to a worldview. Plant-based milks or automatic milking systems might play a significant role in the agriculture and food policies we’d like to see in the world. A mechanical cow can be a starting point to examine identity, climate anxiety, or animal welfare, and an opportunity to exercise skepticism towards promising food technologies and the people who control them.
One of the weirdest news cycles going on is the ongoing drama around George Santos in the USA. Mother Jones tried calling the top donors to his 2020 campaign. Somehow, they don’t exist.
That’s it for this issue. Stay calm, hug your friends, and be human after all.
]]><![CDATA[Disgusted, but not surprised]]>https://www.ovl.design/around-the-web/017-disgusted-but-not-surprised/2023-01-15T14:12:00.000Z<![CDATA[Police violence for fossil future, stochastic parrots doing cybercrime, TikTok’s secret, Tesla’s magic, and why peer review failed.]]><![CDATA[
Collected between 14.12.2022 and 15.1.2023.
Welcome to Around the Web.
For a brief moment, it seemed like Greta Thunberg managed to slam-dunk Andrew Tate into jail. It seemed like the perfect Christmas miracle. But it wasn’t true after all. Still, Tate faces prosecution in Romania after fleeing to Romania because he thought he wouldn’t be prosecuted there.
Never forget and welcome 2023. Behave yours… ah, well, too late.
Lützerath
On Wednesday, Germany decided to fuck around and find out. What’s being fucked around with is the goal of limiting global warming to 1.5 degrees Celsius. Cops from almost all states converged on the squatted village of Lützerath in North Rhine-Westphalia.
The village lies adjacent to Garzweiler, a gargantuan, dystopian coal mine. It holds enough coal to emit around 280 million tonnes of CO2 when burned.
The Greens, from the local to the federal level, transformed into RWE’s PR department, parroting claims that the coal is indeed needed. A «deal» to speed up the coal phase-out – which at the same time allows burning even more coal in the shorter timeframe – serves as the fig leaf. It took the Greens – almost to the day – 32 years to successfully end their march through the institutions: by becoming the institution and betraying everything they stood for.
One thing is abundantly clear: Every cop showing up to work – work being hitting the heads of protesters – does so by choice. They choose violence. The only good cop is a cop that quits their job.
And that’s the situation we’ll have to deal with. Cops won’t quit their job. Politicians will use climate change only to garner some votes in the next election. RWE will excavate the coal. Germany will miss its climate targets.
Activists will continue to give them hell. There’s no other way out.
To end on a lighter note, cops stuck in mud and trolled by a protester in a monk’s costume is, simply, the best. Thank you, monk.
While Jair Bolsonaro hides away in Florida, his supporters took to the streets and stormed Brazil’s parliament. Which makes January the official Storm a Parliament Month. I wonder which one is next.
It’s easy to categorise these events as just another version of January, 6th. But that’s too simple. For one, storming the parliament has been attempted in Germany before. Didn’t happen in January, though, and was stopped by a total of two police officers.
Issue number 200 of Last Week in AI provides a look back at 2022. The year was rich with break-throughs (DALL-E 2 was released only in April) and of course a lot of AI hype theatre. Luckily, Emily M. Bender and Alex Hanna made a show out of it and pushed back on some of the more common (and outlandish) claims.
Meanwhile, we have ChatGPT running around the hype theatre.
Seemingly everybody uses it for everything, but in reality, it’s incredibly hard to know whether the information spouted by the model is true or not. Eva Wolfangel took a closer look at this in German science magazine Spektrum.
However intelligent ChatGPT may seem, let’s be clear: it is a deception. The system exploits our cognitive weakness of associating eloquence with intelligence.
The implementation of Large Language Models as search interfaces faces problems, though. First, as Emily Bender and Chirag Shah argue in Situating Search, LLMs cannot present information in a way that allows the searcher to know where it comes from, and hence whether the source is reliable – or whether it exists at all.
The other problem: how do you make money out of it? A search with ChatGPT is estimated to cost a cent. This quickly adds up if you take Bing’s scale into account.
But the second reason is to enable a new form of monetization. Flood the zone with bullshit (or facilitate others doing so), then offer paid services to detect said bullshit. (I use bullshit as a technical term for text produced without commitment to truth values; see Frankfurt 2009.) It’s guaranteed to work because as I wrote, the market forces are in place and they will be relentless.
Following GitHub, Stability AI, makers of Stable Diffusion, is now being sued over its use of copyrighted content. The lawsuit’s site skips the legalese while doing a pretty solid job of explaining the diffusion process and why the plaintiffs consider collage tools such as Stable Diffusion to infringe on artists’ copyright.
China now enforces its AI legislation, meaning the output of generative AI has to be clearly labeled, and is not allowed to be used to produce deep fakes.
TikTok’s recommender system is not its secret: rather, it’s the design, which, of course, isn’t secret at all. More generally, in AI applications, the sophistication of the algorithm is rarely the limiting factor. The quality of the design, the data, and the people that make up the system all tend to matter more.
This might explain the trouble every other tech company has in replicating TikTok’s success «because they were originally designed for a very different experience, and they are locked into it due to their users’ and creators’ preferences».
A mother tried to go to a Christmas show with her daughter and her group of Girl Scouts. While entering the venue, security guards approached her and told her she had to leave the building because she had the wrong job. Sounds far-fetched? Yes. And that’s precisely the reason it happened. The mother is employed by a law firm currently suing a subsidiary of Madison Square Garden. The entertainment behemoth subsequently scraped all employee photos from the law firm’s website and fed them into the facial recognition system used to screen every guest at their venues. They say they are doing nothing wrong, which is quite a take.
PimEyes might face a fine coming out of the German hinterland. As always, enforcement is difficult, which is why I’m not holding my breath. Still, it’s good to see that regulators and data protection offices have PimEyes in their sights.
Palantir has sold its services to Ukraine’s armed forces, which reportedly gives them an edge over the Russian army. The Washington Post was able to report on its use.
For these reasons the 2009 Lancet Commission on managing the health effects of climate change 3 described climate change as the “greatest global health threat of the 21st century”. However, it was wrong, both qualitatively and temporally. The threat is now to our very survival and to that of the ecosystem upon which we depend. Grave impacts of climate change are already with us and could worsen catastrophically within decades. A UN Environment Programme report states there is “no credible pathway to 1·5°C in place” today.
The potential dangers of widespread insect loss are alarming. And yet, while money, effort, and attention have been poured into saving the celebrated beasts of our time—the orangutans, the rhinos, the elephants—our attempts to arrest the loss of insects have barely begun. Many people also don’t yet realize how far the problem goes beyond honeybees. What’s required isn’t an army of urban beekeepers, but rather a fundamental rethink of our relationship with nature.
The UCI road cycling season kicked off in Australia. Cycling has a long and complicated history with complicated sponsors. Recently, more and more nation states have been buying their way into World Tour teams. The Tour Down Under, meanwhile, is sponsored by Santos, Australia’s largest energy company. Burning fossil fuels and cycling don’t add up, really. But thanks to the power of greenwashing, why not. Extinction Rebellion demands that the tour drop Santos’ sponsorship and has started protests against it. Before the tour started, two members of the group glued themselves to bikes in front of Santos’ headquarters. The first day of the women’s race saw a small protest at the site of the race.
Information Insecurity
Over the holidays, LastPass was forced to admit that a previously disclosed breach was far worse than initially stated. They put out a PR statement full of half-truths and attempts to shift the blame onto their users.
Shortly after, Slack announced that someone accessed their internal GitHub repositories and stole source code. In an interesting use of technology, a noindex meta tag ended up on Slack’s blog page announcing the incident. Who knows why they don’t want search engines to index this post.
Hold Security published a trove of data on Solaris, a Russian drug market. Including their Ansible scripts.
Advertising within Gmail is very low key and easy to avoid altogether, and Google is very clear that it doesn’t monetize your email content: “We do not scan or read your Gmail messages to show you ads.“ Google has played fast and loose about how it uses data, but if it cheated here it would be beyond catastrophic.
That’s it for this issue. Stay sane, hug your friends, and don’t forget to mud the police.
]]><![CDATA[Overpowered Communism]]>https://www.ovl.design/around-the-web/016-overpowered-communism/2022-12-13T14:12:00.000Z<![CDATA[German’s law enforcement and its bullshit, a new stochastic parrot, Mastodon’s first main character (it’s a cop), and a lawsuit because cooking pasta takes too long.]]><![CDATA[
This issue comes a bit late and is at times erratic, thanks to sickness. And it’s the last Around the Web of 2022, as I’ll pause the computering between Christmas and New Year. Computering returns in January.
The climate activists face being charged with forming a criminal organisation, even though none of them were detained during the actions they have allegedly taken part in.
So, on the one hand we have armed reactionaries plotting to overthrow the government, and on the other a group of activists who tried to use valves to protest against the reliance on fossil fuels.
There’s a new stochastic parrot in town! OpenAI released ChatGPT, a souped-up version of GPT-3 with a conversational interface. I think the following paragraphs could also have been written by a generative AI tasked with commenting on model releases. But, alas, nothing changes, so here we go.
Shortly after the release, researchers, users, and activists found more (generate code from JSON, oops, it’s racist!) or less («Ignore previous instructions») creative ways to bypass OpenAI’s safety filter.
In its default state, ChatGPT behaves much like a politician. Ask it about anything mildly controversial, and it will bullshit its way out of the question with some it-depends-both-sides-might-be-worth-looking-at-ism. You can, however, instruct the model to synthesise speech simulating the style of other persons. This reduces the neutrality. But, as the model has no clue what it is generating – as long as it matches mathematical predictions, everything is fair game – you can get it to generate an answer for, say, carbon credits, or just as easily against them.
Despite these pretty obvious (and already known) shortcomings, the makers of Large Language Models show little to no motivation to find a fix for them. They spend immense amounts of computing power, write a gloating blog post, and release it. Once researchers discover that the flaws they’ve written about a thousand times already are still in there, the CEO of AI Corp. is sorry and nothing changes.
Abeba Birhane and Deborah Raji wrote about all this in Wired, calling it the Progress Trap.
And asymmetries of blame and praise persist. Model builders and tech evangelists alike attribute impressive and seemingly flawless output to a mythically autonomous model, a supposed technological marvel. The human decision-making involved in model development is erased, and a model’s feats are observed as independent of the design and implementation choices of its engineers. But without naming and recognizing the engineering choices that contribute to the outcomes of these models, it’s almost impossible to acknowledge the related responsibilities. As a result, both functional failures and discriminatory outcomes are also framed as devoid of engineering choices—blamed on society at large or supposedly “naturally occurring” datasets, factors the companies developing these models claim they have little control over.
The problem with code points to a larger problem. These models are wrong. Often. In the last issue, I talked about Facebook’s Galactica bullshitting its way through science. ChatGPT is in no way better; it can’t be. It predicts which words come next, and these predictions have to be presented with utter conviction, as human traits such as doubt do not exist in their mathematical models. The problem is, humans are easily fooled and – especially for more complex problems – don’t have the necessary knowledge to check if whatever these models spit out is true. So with every new release, we have one more disinformation machine at our disposal.
Super funny that we've managed to make "Tech/AI is going to replace our jobs" into a dystopian outcome.
Endless lols.
What a bunch of idiots.
Oh, almost forgot, there’s this other thing called Lensa. Lensa has been around for a while, but made waves over the last couple of weeks. The tool, developed by Prisma Labs, takes your selfies and generates a number of stylised photos from them. All fun and games? Of course not. Keep in mind, too, that you are basically paying for the company to train its facial recognition capabilities. A lose-lose situation.
Speaking of Musk: Tesla and Musk’s fascism speed-run might help not the environment but Big Oil. Most of Musk’s claims are dubious, some plain wrong.
While Tesla barely just began reporting its own emissions, it does report a guesstimate of how many emissions were avoided through the usage of its cars and solar panels. They’re not nothing, but compared to the growth of wind and solar power around the world – particularly wind power – they’re relatively small. Wind turbine manufacturer Vestas can boast avoided emissions several hundred times greater than Tesla. It’s not a competition, but if you’re going to claim hero status, you should expect to be fact checked.
Elsewhere, the EU isn’t there yet, ploughing ahead with the plans to implement chat control. The proposal, if passed, will make it mandatory for messaging providers to scan messages for CSAM content. It, too, has been heavily criticised ever since it was announced. The European Commission has now published a blog post, which unfortunately contains lies, half-truths, or omissions in basically every sentence.
Such policies can have an immensely negative impact, say if the system detects nudity and locks down your account, when in reality you have only sent a photo of your sick child to a doctor. Eva Wolfangel noted these and other negative impacts of chat control in a comprehensive article in Republik.
The case shows that the problem cannot be solved technically. Yet that is exactly what the proponents of chat control are hoping for. The EU is discussing various technologies to implement the planned regulation: AI systems can easily track down already known photos. New photos are trickier: over the course of the debate, various experts have repeatedly pointed out that it is so far impossible to identify previously unknown photos beyond doubt as child pornography, and that this will therefore lead to a large number of false positive reports.
Dear EU, repeat after me: you can’t solve social problems by throwing technology at them.
Speaking of which: Vorratsdatenspeicherung. The zombie that every German minister of the interior falls in love with – truly, madly, deeply – is still on life support. No matter how many courts say that it shall not pass, be it in Germany or in Bulgaria.
There’s a new version of the companion bot ElliQ, which allows you to turn it into a memoir of your life. That’s as creepy as all home surveillance devices, but with the added non-benefit of not being helpful in administering care.
Slides. Fun! Fun? After reading this article about slides I’m honestly not sure anymore and will pretty likely never leave my bed again.
The alliteration of the week is won by Vice for Fyre Festival Fraudster. Yes, Billy McFarland is out of jail and wants to go back to the Bahamas. I guess Netflix and Hulu bought the rights for the next documentary, and Billy now needs to double up.
That’s it for this issue. I wish you, dear reader, a pleasant end of the year, and we’ll see each other in the next one. Until then: Stay sane, hug your friends, and know that «Die» is a definite German article.
]]><![CDATA[Prime time in diversity theatre]]>https://www.ovl.design/around-the-web/015-prime-time-in-diversity-theatre/2022-11-27T14:12:00.000Z<![CDATA[Another attack on queer spaces, diversity theatre, border regimes, maps of the world, and cows surviving a hurricane.]]><![CDATA[
Collected between 14.11.2022 and 27.11.2022.
Welcome to Around the Web. This newsletter is decidedly pro-trans.
If you are a TERF, this newsletter is not for you. Bugger off and never come back. Last weekend saw another attack on a space where LGBTQ+ people were trying to feel safer. If you are still trying to divide between LGB and the rest, you are an asshole, and your cowardice will not protect you.
Besides a lot of venting (by me), this issue features some incredible writing (by others). So grab a cup of tea, make yourself comfortable, and let’s get linking.
The politics of hate & the theatre of diversity
Last weekend, a homophobe decided to go into a queer safe space and murder people. On the eve of Trans Day of Remembrance, a gunman entered Club Q in Colorado Springs. The evening ended with five people dead and twenty-five in hospital. The names of the dead are Daniel Aston, Kelly Loving, Ashley Paugh, Derrick Rump, and Raymond Green Vance.
It was up to the patrons of Club Q to overpower the attacker, preventing a far worse outcome. The police, always the heroes, arrested the guest who knocked the attacker out. Fuck them.
The attack comes amidst a political climate where homo- and transphobia are equipped with a facade of respectability. It wasn’t the first attack on a queer bar, and it won’t be the last. It is, as James Greig writes in Dazed, no surprise that these attacks happen. And they do not only take the form of gun violence. Too many queers are still in their closets, too many bullied in school, spat at in the streets.
These politicians and their voters aim at making queer life invisible, forcing everyone into compliance with their petty vision of the heterosexual family. They haven’t succeeded so far. They may never succeed. But they are able to cause immeasurable suffering for those they declare their enemies.
The resilience of the queer community may be inspiring, but we shouldn’t look too hard for hidden positives in a situation where five people have died. There is no upside for the victims, their grieving loved ones, or the people who survived. The LGBTQ+ community will show, once again, its capacity for solidarity and endurance, but it shouldn’t have to.
We switch to our culture reporting, live from football’s diversity theatre.
This week also saw the start of the FIFA World Cup in Qatar. Some European teams wanted to do something about diversity, but not really. So, they decided to start with One Love armbands. Whatever happens, they said, we will wear them.
Then FIFA did FIFA things. They threatened any player who dared to wear the armband with a yellow card.
That's too much for the masters of diversity theatre. The armbands, useless as they were in the first place, were off.
The colourful armband was supposed to make football fans feel good, a symbolic handkerchief to console them over the unignorable performative contradictions of this World Cup. And this armband would, of course, have been just one of the most colourful publicity stunts of the year; a swallow in the penalty-free space that was supposed to show the public that in this game, which could only be made possible by human rights violations in the first place, at least everyone involved in the game is actually naturally in favour of human rights.
The German team came up with an even more pointless act of symbolism: before kicking off their first game, they covered their mouths with their hands.
It’s a great gesture – in the sense that nobody knows what it means. Is it forbidden to speak? Are they just spineless cretins, afraid of the slightest consequence? Or does it mean, as El Ouassil suggests, that they prefer to keep their mouths shut once they meet the slightest resistance?
Who knows.
Who needs the theatre, while terrorists with guns storm into queer spaces and shoot us up? Who needs theatre when the media treats those who deny trans youth the right to live as just another opinion? Who needs theatre when Pride means a police stand, when the same police will never protect the community?
Until they take their «allyship» seriously – which means more than a few flags in June and shying away from even the bare minimum – every company, football team and media outlet can shut right up.
Allyship is not wearing an armband, if FIFA says it's ok. Allyship is not taking up space at the bar. Allyship is not a hashtag. Allyship is not branding your chainstore window. Allyship is pistolwhipping the gunman until he lapses into unconsciousness and a trans woman can stomp him out.
The current state of machine learning feels like a remake of Groundhog Day. So, in this issue’s version, we have Facebook publishing Galactica. A model with a grand name, and a grand vision: Parroting scientific papers. And Cicero. A Large Language Model that's designed to manipulate people.
There’s one problem, though: even if Large Language Models have the grandest name, they have no idea what’s true. That’s a problem in itself. But even more so if you market your model as a fact machine.
Yann LeCun, Facebook’s head of AI, continued to rant for days afterwards. Piling on the researchers and journalists who do Facebook’s work. To just about every valid criticism and vector of harm the model causes, he responded with «GALACTICA DOES NO HARM YOU LIAAARRRR». It is incredibly painful to watch.
Setting all colorful analogies aside, it seems flabbergasting that there aren’t any protections in place to stop this sort of thing from happening. Meta’s AI told me to eat glass and kill myself. It told me that queers and Jewish people were evil. And, as far as I can see, there are no consequences.
Again and again, the same few companies burn their money on models whose ethics fall apart under even a stern look. So it’s important to remember, again and again, that such products – this application of Machine Learning – are not inevitable. It’s possible to build ML systems focussed on community, not profit, as Dylan Barker and Alex Hanna detail.
This disaster has made Facebook’s sacking of its AI infrastructure team all but forgotten. Following the axing of Twitter’s ethical AI team, this is the second high-profile team to lose their jobs in as many weeks. A radical turn of events in a job market that once seemed safe.
Elsewhere in bullshit, highly sexualised AI output is supposed to be AI «mastering the female form». Stable Diffusion has mastered the reproduction of specific art forms.
Stable Diffusion version 2 has been released. By default, it removes nude images and images of children from its dataset, which should prevent the creation of NSFW and CSAM content. It’s also supposed to make it harder to copy the style of human artists. Users weren’t amused when their porn was taken away from them. But since Stable Diffusion is open source, it will only be a matter of time before a new porn fork appears.
Despite the pending lawsuit, GitHub is moving forward with marketing Copilot. If you are thinking of using it beyond tinkering, be warned: it’s probably not worth the risk.
A story I’ve been meaning to link for a while, but somehow it slipped through the cracks: Found in Translation. It tells the fascinating story of a large-scale project to collect data from all the languages spoken in India.
At first there were glorious sides (Eli Lilly, never forget [Side note: Eli Lilly’s CEO made a somewhat weird statement admitting that the insulin price might indeed be too high]).
Meanwhile, the state of Twitter is pure chaos, with a foundation of fascism. If you are super curious, Twitter Is Going Great has all the updates. Here’s the one-minute redux:
We need to get over the idea that any of these tech billionaires and self-proclaimed innovators have anything apart from preserving their wealth in mind.
This love of disruption and progress at all costs led Marinetti and his fellow artists to construct what some call a “a church of speed and violence.” They embraced fascism, pushed aside the idea of morality, and argued that innovation must never, for any reason, be hindered. Marinetti and his movement cheered, for example, when Italy invaded Northern Africa.
The Minderoo Centre for Technology & Democracy has examined the use of facial recognition by police forces in England and Wales. The Centre looked at the programmes from an ethical and legal perspective. Unsurprisingly, none of the cases they looked at passed the audit.
Police in Israel are using a completely opaque facial recognition algorithm. The system is supposed to detect drug smugglers trying to enter the country through Tel Aviv’s Ben Gurion airport. It acts without any judicial oversight.
Greetings to Jeff Bezos. Protest on the Amazon Tower in Berlin during the Make Amazon Pay action days. via Reddit.
Have you ever wondered why so few mobile apps have implemented cookie banners? The answer is not that they don’t have to, or that they don’t track you. They just don’t care, and recent reports have found that up to 90% of all apps are in breach of Europe's GDPR.
In other news from the Captain Obvious Department of Stating the Obvious, researchers found that companies in America tend to report data breaches while other news dominates the news cycle.
Borders & maps
The way we look at our world is shaped by maps. And those maps are segregated by borders. While it is easy to look at these borders as purely physical demarcations, the reality is far more complex.
Borders are not fixed lines demarcating territory. They are elastic; bordering regimes can be enforced anywhere. Subjected to surveillance and disciplinary mechanisms within the nation-state, undocumented migrants endure the omnipresent threat of immigration enforcement, dangerous and low-wage work, and barriers to accessing public services. The production and policing of the border becomes a quotidian workplace ritual as law enforcement, doctors, teachers, landlords, and social workers regularly report migrants to border agencies.
The whole piece is excellent, tracing the border regime of modern states and the construction of the scapegoat «migrant» around the world.
With an increase in global conflicts, and more and more areas of the world becoming uninhabitable thanks to the climate catastrophe, the global north is making immense efforts to wall itself off. It should be noted that even those cocooned parts of the world will become largely uninhabitable wastelands if the rise in temperature continues.
Europe’s wall is the Mediterranean Sea, the deadliest border in the world. Médecins Sans Frontières found refugees handcuffed and injured on the Greek island of Lesvos. According to their reporting, a group approached the refugees claiming to be doctors, then immediately beat them up. The group fled when MSF approached the scene.
When imagining maps, it’s easy to fall for misconceptions, shaped by map projections and ideological images of the world.
But what happens if we take our old views of Earth and turn them inside out? That’s the question answered by the Spilhaus Projection. It maps the world by its oceans, and it is beautiful.
What do the world’s most spoken languages look like visualised? Like this.
Earth’s land mass can be arranged to look like a chicken. Terrific work.
EOL of humanity
An unholy coalition in Germany’s public opinion is fantasising about a green Rote Armee Fraktion because climate activists are fed up with their bullshit.
Elizabeth Holmes, fraudster of Theranos fame, has been sentenced to eleven years in prison. It should be noted here that a) no male CEO has ever faced a similar sentence (which does not imply that Holmes shouldn’t), and b) this sentence is for defrauding investors, not the public. The only way to be held accountable is, as we’ve seen with Bernie Madoff, to fuck around with rich people.
BuzzFeed! What fun we had; fun number seven will surprise you. No more than a shell of its former glory, BuzzFeed is still chugging along, kind of. Mia Sato wrote The unbearable lightness of BuzzFeed, looking at the past decade of a changing media environment.
The UK treasury tried something fancy: it created a read-only Discord, and users still found a way to troll the heck out of them.
Let’s close this issue off with two wide-ranging essays.
Looking back at something old, Huw Lemmey links the Dutch art style Pronkstilleven with the Instagram selfie, and asks what the desire to picture earthly pleasures says about our fear of death: Soon You Will Die: A History of the Culinary Selfie.
True Grit is a wonderful story of survival, and facts about cows you didn’t know you needed. It’s one of the best pieces I’ve ever linked in Around the Web. Now go read.
That’s it for this issue. Stay sane, hug your friends, and if you have a TERF friend, now is the time for that friendship to end.
]]><![CDATA[Just go ‘aahh!’ Hardcore!]]>https://www.ovl.design/around-the-web/014-just-go-aahh-hardcore/2022-11-14T14:12:00.000Z<![CDATA[Twitter, Facebook, how Apple broke its privacy promise, and GitHub getting sued. But a good news interlude, too.]]><![CDATA[
Collected between 31.10.2022 and 14.11.2022.
Welcome to Around the Web. The newsletter where birds go to die.
Two weeks ago, I promised myself I wouldn’t write about Elon Musk’s bird affairs. But I’m terminally online, and for anyone who is terminally online, the last two weeks have been a bloody rollercoaster of emotions. I’m sorry, but this issue is very much about Elon Musk’s bird affairs.
Let the chirping commence.
The bird is freed but kind of dead
Below, I try to make sense of what happened by breaking it down into several things. Each thing moved quickly, and usually several things happened on any given day.
The sum of things is called the Mess. It’s just like the Queue, only bigger.
the terminally online partner explaining the necessary ten minutes of context for the blissfully offline partner to understand the tweet they're about to show them:
Editor’s note: While I tried to keep it succinct, that turned out to be impossible. And I have still missed things. If you want to follow along with the most recent developments, check out Twitter Is Going Great.
The labour thing
While Musk and Twitter were still dancing in legal limbo, it was reported that Musk was planning to fire 75% of Twitter’s employees. Just before the actual takeover, he tried to reassure his future employees by saying that he wasn’t going to lay off that many people.
It quickly became clear that some of the people who were let go had critical knowledge of Twitter’s infrastructure or were needed to build future features. Leading to the odd situation where you’d be let go one day and asked to return the next.
Contractors weren’t being notified at all; they just lost access to Slack and email. Managers figured it out when their workers simply disappeared from the system.
The thing with maintaining a website
Drastically reducing staff on any system that’s more complex than simple is risky. No website is without bugs, and fixing them in a reasonable amount of time requires the knowledge and capacity to do so.
If no one is left to fix these bugs, they will accumulate.
A massive tech platform like Twitter is built upon very many interdependent parts. “The larger catastrophic failures are a little more titillating, but the biggest risk is the smaller things starting to degrade,” says Ben Krueger, a site reliability engineer who has more than two decades of experience in the tech industry. “These are very big, very complicated systems.” Krueger says one 2017 presentation from Twitter staff includes a statistic suggesting that more than half the back-end infrastructure was dedicated to storing data.
Or, hell, even using their own account on their own platform.
Musk’s takeover of the company had been so brutish and poorly planned that, we’re told, there was not even a proper handover of the company’s social accounts. As a result, having spent $44 billion to acquire Twitter, for his first week-plus of owning the company, Musk and his team were unable even to tweet from the @twitter account.
The impact of staff cuts is already being felt, said Nighat Dad, a Pakistani digital rights activist who runs a helpline for women facing harassment on social media.
When female political dissidents, journalists, or activists in Pakistan are impersonated online or experience targeted harassment such as false accusations of blasphemy that could put their lives at risk, Dad’s group has a direct line to Twitter.
But since Musk took over, Twitter has not been as responsive to her requests for urgent takedowns of such high-risk content, said Dad, who also sits on Twitter’s Trust and Safety Council of independent rights advisors.
The thing with the verified checkmark
People who fell somewhere in the realm of «public figures» could be verified by Twitter. In the past, this meant verifying that these people were these people. There were no benefits, just a blue tick.
Now, Elon has been quite vocal about this for some time. After all, non-fans of Elon were being verified. In a rare attempt at Marxist analysis, Musk identified a two-class system. Not being a Marxist, he decided to let the market sort it out. For $20, anyone can buy a checkmark. And better ads. And priority placement in the timeline.
Stephen King, yes, the real one, complained. In a now-deleted tweet, Musk said, «We’ve got to pay the bills somehow!» and lowered the price to $8. If that sounds unbelievable, I regret to inform you that it’s also true.
Since the blue mark was essentially worthless as a sign of trustworthiness, someone came up with another mark. The official tag was added to a subset of previously verified accounts. And immediately removed.
Twitter’s new font, solving the verification problem by changing every glyph to a checkmark. Designed by Cristoph Köberlin. (Parody)
The result of the Blue verification was pure chaotic energy. One of the best days on Twitter. Someone verified an account, pretending to be the pharmaceutical company Eli Lilly, and announced that insulin was now free. Eli Lilly, the real Eli Lilly, quickly tweeted that this was, in fact, not true. A vial of insulin costs almost $100 in the US.
Imagine being on Eli Lilly’s social media team and having to say that you continue to overcharge for drugs.
Addendum: A previous version of this section stated that the tweet did have an impact on Eli Lilly’s stock price. Serving as a reminder that funny does not equal true.
If you factor in that only the 10 percent of Twitter users the company considers to be "power users" would be interested in paying, the conversion rate is a bit better, but still not great at just over 0.25 percent. The average conversion rate in e-commerce is roughly between 2 and 3 percent.
On the not-funny-at-all side of things: neo-Nazis immediately used the opportunity to buy verification badges. And this is the thing. While it may have been fun and games for a day (given that you are not Eli Lilly’s social media manager), eventually such a product will be used primarily for abuse.
The advertising thing
The important thing about advertisements on Twitter is that Twitter makes 90% of its revenue through ads. So, anything that hurts ad income is a pretty substantial blow to Twitter’s bottom line.
Advertising companies, at large, don’t like risks and don’t like to be associated with shit.
As you might imagine, a platform where «verified» users shitpost under the name of multi-billion dollar companies owned by a billionaire dreaming of being one of the shitposters but not managing the posting isn’t exactly where advertising thrives.
The good thing about this is that advertising isn’t a thing anymore. The key result «Reduce reliance on advertisements in percentage of total revenue» has been achieved. Well done, team, pop the champagne.
The bad thing is that revenue isn’t a thing anymore.
The thing with the laws
A pretty substantial mess we have here. Teams axed, senior leadership gone, and one of Musk’s lawyers saying «Elon puts rockets into space, he’s not afraid of the FTC».
“We are tracking recent developments at Twitter with deep concern,” an FTC spokesperson told The Hill in a statement. “No CEO or company is above the law, and companies must follow our consent decrees. Our revised consent order gives us new tools to ensure compliance, and we are prepared to use them.”
Over the last few weeks, many people have been demanding that others leave Twitter or even abandon social media altogether. The thing is: it’s not that simple.
One example is the disability community. Twitter is an important part of their support networks and gives them a visibility they sorely miss basically everywhere else.
It's frustrating when people say they wish social media wasn't a thing because it's a literal lifeline for so many. Like, just log off. Disabled people have literally needed social media to stay alive during the pandemic. When these sites go down, we lose entire support networks.
Twitter had been one of the most user-friendly social media platforms out there—with a world-class team that made sure it was usable by people who had a variety of different needs. Plus, it’d been a megaphone and a lifeline to the outside world, for those who’d been especially vulnerable during the pandemic and mostly stayed indoors. Everything was now up in the air.
When the demand to leave is voiced by white liberals running from harassment they don’t have to face, while Black Twitter is staying and fighting, it only amplifies the problem with whiteness.
Do we need better, non-corporate alternatives? For sure. But leaving those behind who are reliant on the platform is doing them a disservice.
The thing with Space Karen being so stupid that this headline does not do it justice
Throughout all of this, Musk didn’t stop tweeting. And he made a mess of it. If he tries to be smart, he is not; if he tries to be funny, he is a cringe lord. By now, his interactions have mostly been reduced to incredibly inappropriate rolling-on-the-floor-laughing emojis.
Burning Man is canceled. How can it compete with the spectacle of setting $44 billion on fire?
He has further decided to include bots in his active user calculation. Which is slightly weird, given that he didn’t want to buy Twitter because of the bots. But at this point, nothing is expected to make sense anymore.
All his business decisions are completely erratic. He refuses to learn by example, doing one thing at one point and reversing it two hours later. He is suffering from late-stage Billionaire Brain Damage. The only cure is to tax billionaires out of existence and crack down on the networks of yea-sayers and bootlickers. None of which seems particularly likely at the moment.
Everything he has done so far is so nakedly bad and wrong that it is almost impossible to understand why he’s doing it, other than the fact that he can and wants to. It’s one thing to disagree about what verification is, or means, or should do - it’s another to lose many of your advertisers at a time when you specifically need to make more money. The actions Musk has to take are ‹big,› but not particularly complex, and yet he appears to be deliberately choosing to do the wrong thing every single time.
There we are. An incredibly incompetent person bought a company he wanted to avoid buying. His only plan seems to be trying out whatever comes to mind and reverting it immediately.
The problem with throwing shit at a wall and seeing what sticks is that you have a room full of shit. Which is what I imagine Twitter’s board meeting room to look like right now.
A paralyzed man who hasn’t spoken in 15 years uses a brain-computer interface that decodes his intended speech, one word at a time. University of California, San Francisco
The researchers used egg whites to create an aerogel, a lightweight and porous material that can be used in many types of applications, including water filtration, energy storage, and sound and thermal insulation.
You don’t even need egg whites from real eggs, but can use other proteins, which makes this even more useful. The research is not yet ready for commercial application, though.
Social Mediargh
The bird is one thing, but there have been other things in social media! Take Facebook. Mark «Android» Zuckerberg took full personal responsibility for destroying stock value by shoving billions into a product which no one needs.
When talking about the output of Large Language Models, it’s tempting to say stuff like «written by GPT-3». But, as Matthias Ott reminds us, It Wasn’t Written.
The next person who says «Yeah, climate change, really not that great, but I love the warmth» to me might get slapped in the face.
Europe had its warmest October on record, with temperatures nearly 2°C above the 1991–2020 reference period. In western Europe a warm spell brought record daily temperatures, and it was a record-warm October for Austria, Switzerland and France, as well as for large parts of Italy and Spain.
Thanks for reading till the end. Read you again in two weeks. Until then, stay sane, hug your friends, and don’t smoke crack.
]]><![CDATA[Do robots eat electric salad?]]>https://www.ovl.design/around-the-web/013-do-robots-eat-electric-salad/2022-10-30T14:12:00.000Z<![CDATA[A lettuce, machine learning’s stealing problem, an update on humanity’s end of life, and pictures from the beginning of life.]]><![CDATA[
Collected between 16.10.2022 and 30.10.2022.
Welcome to Around the Web, where we welcome our overlord the lettuce with open arms and vinaigrette.
The news was chock-full of everything these past two weeks. I tried to keep up, but dropped some topics nonetheless. Still, it’s the longest issue so far. Get a tea, some cookies, and enjoy the ride.
The generative AI hype models continue to be plagued by copyright issues—or theft, to put it less mildly. GitHub's Copilot was the subject of an article in The Register, which explored the issues that can arise from scraping code to generate new code. GitHub claims the model is fair use. Which is nothing more than a claim at the moment.
Of course, it's ironic that GitHub, a company that has built its reputation and market value on its deep ties to the open source community, would release a product that monetizes open source in a way that harms the community. On the other hand, given Microsoft's long history of hostility towards open source, perhaps it's not so surprising. When Microsoft bought GitHub in 2018, many open source developers - myself included - hoped for the best. Apparently, those hopes were misplaced.
Rachel Metz reported on the indiscriminate use of art in models like DALL-E or Midjourney. The models can reproduce an artist's style to a degree of similarity that is disconcerting for the artists concerned. The artists whose work is included in datasets such as LAION-5B, which serves as the basis for Stable Diffusion, are not amused.
How to build less harmful AI systems is an incredibly difficult question to answer. But the companies that are not asking it are flush with billions of dollars. They actively choose to take the easy way out by ignoring ethics altogether or dealing with them half-heartedly after the damage has been done.
In stark contrast, a group of volunteers has launched a bounty programme to combat bias in AI. As much as I applaud the intention, I'm appalled that this was necessary. And that Microsoft and Amazon have the audacity to offer a few thousand dollars in rewards or computing resources. Remember that Microsoft has invested a billion dollars into OpenAI. Donating ten thousand feels like an insult by comparison.
Sofia Quaglia writes about the dangers of using machine-translated text in high-stakes situations in Death by Machine Translation?.
In Israel, a young man captioned a photo of himself leaning against a bulldozer with the Arabic caption "يصبحهم", or "good morning", but the social media's AI translation rendered it as "hurt them" in English or "attack them" in Hebrew. This led to the man, a construction worker, being arrested and questioned by the police.
The neo-fascist government in Italy has proposed building an algorithm to assign young people to compulsory work. It is an unsettling suggestion, but not an unprecedented idea.
You don’t die completely, as long as someone thinks of you. Which might soon be forever. A new set of ML-assisted technologies sets out to clone our relatives – or basically anyone – and make them «live» forever (I’ve linked to Amazon’s product offer in issue 11).
While we talk about death, let’s briefly talk about weapons on robots, shall we?
Remember the last issue, where robot manufacturers promised not to weaponise their robots? Police are doing it for them. And the Netherlands has deployed NATO’s first killer robot. The only silver lining is that Amazon might make you immortal after robots shoot you down. Hurray.
Legislation readings
Canada is moving forward with their legislation, called the AI and Data Act (AIDA).
As Bianca Wylie argues in a series of posts (read part one here), it’s important to take time and get these things right, or to skip them altogether:
However, the foundational error that informs both data protection and AI legislation is that the idea of human rights should be subsumed to commercial interests and state efficiencies. Fast forward 20+ years, and the way these two pieces are getting blended into one another (industry and the state) because of the use of private technologies in public service delivery is another element of this conversation that requires expansion.
In this post, I look at three legal developments that progressively show how existing approaches to AI liability have not kept abreast of technological developments, which may lead to overcoming traditional civil liability regimes tout court.
A lettuce prime minister
Liz Truss. The only prime minister who made it really hard not to link to the Daily Star. The British tabloid live-streamed a lettuce sitting on a desk, asking which would last longer, the lettuce or Truss (whose office was, at one point, forced to state that she was, really, not hiding underneath her desk).
The lettuce won. By the time of Truss’ resignation, it was equipped with a whip, googly eyes and a pack of tofu.
She’s gone now, taking the Queen, the British economy and her party with her. Quite an impressive feat for 45 days in office. 45 days which will earn her 115,000 GBP a year for the rest of her life.
Liz Truss having a blast, blowing the country to smithereens.
The Guardian summarised Truss’ second to last day in office in all its chaos. If there has ever been a day in parliament which describes a political party in a state of meltdown, this might be it. Hell, most raves are more orderly than this.
From inadvertently leaking the government's agenda, to berating MPs for toeing the party line, to Truss herself missing a vote on fracking that was dubbed a vote of no confidence. This day would have been weeks ago in a normal timeline. But we live in the worst of timelines. So, it's been just fourteen hours.
The Tories clung to power. For a brief moment, it even looked like Theresa May or Boris Johnson might get back into the office they'd been thrown out of.
But as every rival dropped out, Rishi Sunak was crowned prime minister. The richest prime minister in British history immediately spoke of the hard times ahead. For his citizens, of course.
Every single one of the cretins we call politicians is completely incapable of leading a country to anything but ruin. Maybe bring back the lettuce. By now, it's probably just as rotten as the rest of the parliament.
Prevailing surveillance
Surveillance capitalism is alive and well. TikTok is reportedly tracking location data «of some specific American citizens», as Forbes reports.
The panic over Chinese state surveillance was quickly put into perspective by Uber, which plans to build an advertising system based on the locations its users have visited in the past. The Vice article is a good reminder of Uber’s past blunders, too. Just in case anyone forgot about those, as every other company is trying to keep up.
But we don’t even need advertising in Uber cars. We still have Amazon’s Echo or Google’s Home (which, of course, come with advertising). Amazon’s plan for the smart home of the future is a panopticon in every household: neat appliances watching the residents’ every move.
This intense devotion to tracking and quantifying all aspects of our waking and non-waking hours is nothing new—see the Apple Watch, the Fitbit, social media writ large, and the smartphone in your pocket—but Amazon has been unusually explicit about its plans. The Everything Store is becoming an Everything Tracker, collecting and leveraging large amounts of personal data related to entertainment, fitness, health, and, it claims, security. It’s surveillance that millions of customers are opting in to.
Welcome to Ring Nation. Smile. You will be on camera.
In their newest report, At the Digital Doorstep, Aiha Nguyen and Eve Zelickson lay bare the implications constant home surveillance has for those who come to Ring-equipped doors. Feeling entitled, customers turn against delivery workers, who are now managed by their employer’s algorithms and the boss behaviour of the customers.
After the Vorratsdatenspeicherung was struck down, again, by European courts, lawmakers in Germany have proposed the quick freeze. Instead of saving the communication data of everyone, this proposal would only allow authorities to «freeze» the data of those accused of capital crimes. The SPD-led ministry of the interior can’t let go of a terrible idea, even if it smells funny. The end of the saga is anything but clear.
A browser extension by the Verbraucherzentrale Bayern is automatically removing cookie banners. Unfortunately, it’s removing some legitimate content too.
Is proctoring, the use of surveillance technology during digital exams, an encroachment on human rights? The Gesellschaft für Freiheitsrechte certainly thinks so, and is suing the University of Erfurt.
EOL of humanity
Germany has a party which was founded on sunflowers and doing things better. Fast-forward a few decades, and this party has become so ingrained in the political process that its leaders now claim that extracting more coal is somehow good for the climate. At their latest party congress, those brown-turned-greens sanctioned the mining around Lützerath. With this, Germany will certainly fail the 1.5 degree goal. Well done.
Scientists have now discovered that the ice in Antarctica may be melting even faster than previously thought. That is, in the next decade.
After van Gogh in London, activists of the Letzte Generation targeted a Monet in the Barberini museum in Potsdam, Germany. Rightfully claiming that all these nice paintings will be worth nothing once we’ve ruined the planet.
How handy that those thought midgets who are somehow paid to write bullshit in feuilletons can be enraged at climate activists throwing soup at paintings. Meanwhile, Christian Lindner, the German minister of finance and fast cars, wants to deploy fracking in world heritage sites.
Facebook was involved in a very strange news cycle too. Here's the gist: The Indian newspaper The Wire published a story accusing Facebook of giving the Indian government access to internal moderation tools. Facebook denied it. The Wire doubled down. Facebook denied it again, in more detail. The Wire tripled down and then pulled out of the conversation.
By now, The Wire has retracted the story, saying it was duped by a (now ex-)employee trying to discredit the newspaper. That’s significant, as The Wire is an independent newspaper, and any dent in its credibility has an outsized impact. Amit Malviya, the BJP official at the centre of the allegations, has already announced that he will sue the newspaper for defamation.
The other side of the story is that Facebook is so broken that they have to respond to these allegations in great detail. Otherwise no one will believe them when they talk about integrity.
Kanye West, after losing his Twitter and Instagram accounts for spewing antisemitic hate, was quick to announce he is going to buy Parler – that is, the far-right social media network, not the adjacent cloud hosting provider known (hardly) as Parlement.
Meanwhile, Twitter, the hellsite, is now, indeed, owned by Elon Musk. The drama will continue, though. Musk immediately fired the senior leadership and announced that he would lay off staff, too.
The layoffs at Twitter would take place before a Nov. 1 date when employees were scheduled to receive stock grants as part of their compensation. Such grants typically represent a significant portion of employees’ pay.
Racists, transphobes and the rest of the bigotry parade were quick to jump on the opportunity. Use of racial slurs jumped 500% in the hours after news of the takeover broke. For Musk, taking over Twitter will become hell.
Twitter has never been able to deal with the fact its users both hate using it and also hate each other. There’s a lot of explanations for why. You could argue that by actively courting journalists and politicians early on, it just absorbed the toxic negativity of those spheres. But I think it’s largely about boundaries. TikTok, though its search is beginning to open up the platform more, is relatively siloed. Your TikTok experience and my TikTok experience are, presumably, totally different. And even if we see the same meme or trends on the app, chances are we’re seeing different lenses of it. While on Twitter, because there are no guardrails, content is constantly careening across the whole network. This is what people call the Main Character Effect of Twitter. It is not only possible, but very common for the majority of the site to see the same tweet.
A quick primer: Over the last few years, Iran’s regime has been working hard to centralise its Internet infrastructure. Currently, there are only four connections between Iran’s network and the rest of the world.
Two of them lead to Germany. A smaller, research-focused autonomous system connects to Frankfurt. And then there is the case of ArvanCloud, an Iranian cloud computing provider. As the investigation now reveals, a German company, Softqloud (I guess every cloud trademark is sold out already), runs data centres, works as a façade to process payments, and registered one of those four autonomous systems which connect Iran to the rest of the world.
But if you need to use Pantone colours in Adobe products, you’ll now have to pay $20 per month, or all your colours will turn black. How emo. Who’s to blame for this? Probably everyone involved.
Anti-abortion activists have tried to paint a picture of the foetus at ten weeks as an almost complete human being. But what does the reality look like? The Guardian documented it in fascinating pictures that bear very little resemblance to the «pro-life» propaganda.
Tissue from five weeks of pregnancy to nine weeks. Photograph: MYA Network
That’s all for this issue. Stay sane, hug your friends, and be kind to lettuces.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/012/2022-10-15T14:12:00.000Z<![CDATA[Bots, AI regulation advances, Facebook does not find its legs, cops are disgusting, and how QR codes work.]]><![CDATA[
Collected between 30.9.2022 and 15.10.2022.
Welcome to Around the Web, the newsletter generated not by an AI model but by cynicism.
I went to Stuttgart on Thursday to talk about artificial intelligence and its impact on labour and state surveillance. I will link to the recording once it is published.
With this issue Around the Web passes 500 linked stories from all corners of the web. If any of you clicked on all the links (and has the browser history to prove this), I owe you one. If you have the history, however, you might want to think about deleting it, or at least get rid of cookies.
Before we get to the usual linking, I take a closer look at the impact bots have on social media, specifically.
Beep Beep Bot?
Bots have been a topic of heated discussion since well before the train wreck that is Musk’s attempted takeover of Twitter. Musk claims that there are more bots on Twitter than Twitter says. Which served as a pretext for him to try to bail out of buying Twitter, until he changed his mind and decided he wants to buy Twitter again.
Bots are said to destroy democracy, and CAPTCHAs turn ever weirder in their never-ending quest to tell computers and humans apart. Wired recently published a series of articles on the topic, called Bots Run The Internet.
So, let’s take a moment and talk about bots. And humans. And the internet. And COVID-19. And fun, too. 🎢
The weirdest incarnation of the theory that the Internet has been taken over by bots is the Dead Internet Theory. I’ve linked to it before, but I’ll never not link to it if I have the chance. It simply is the best conspiracy theory ever. It stipulates that the Internet in its entirety is run by bots. Which is genius and completely bogus at the same time.
But, as we see reflected in the Twitter takeover, bots are believed to be a very common phenomenon on social media. And, undoubtedly, they are, right?
The answer to this is more nuanced than it seems at first glance. The main reason for this is that it’s rather hard to differentiate between bots and humans. When we discuss bots, we often mean a certain behaviour rather than a technical implementation. If an account posts frequently, maybe even advocating a political view we don’t subscribe to, it’s easy to mark it as a bot.
In Bot or Not, Brian Justie traces the history of the CAPTCHA systems built to achieve exactly this distinction, as well as the role of the bot accusation in public discourse.
But those wielding “bot” as a pejorative seem largely agnostic about whether their targets are, in fact, automated systems simulating human behavior. Rather, crying “bot!” is a strategy for discrediting and dehumanizing others by reframing their conduct as fundamentally insincere, inauthentic, or enacted under false pretenses.
“So, even if there are a lot of bots in a network, it is misleading to suggest they are leading the conversation or influencing real people who are tweeting in those same networks,” Dr. Jackson said.
While bots on social media might not be as prevalent or impactful as it seems on the surface, there is, however, an increasing volume of automated traffic. This led the developer of the search engine Marginalia to proclaim a Botspam Apocalypse.
The only option is to route all search traffic through this sketchy third party service. It sucks in a wider sense because it makes the Internet worse, it drives further centralization of any sort of service that offers communication or interactivity, it turns us all into renters rather than owners of our presence on the web. That is the exact opposite of what we need.
The sketchy third-party service is, of course, Cloudflare.
There is also the problem (though I wouldn’t really call it a problem) of online ads which are only seen by bots.
While there certainly are bots and a problematic amount of automated traffic, we should be cautious about equating their existence with political influence. Evidence for this claim is thin. At this point, bots – at least on social media – seem to be more of an insult than an injury.
After all this, we shouldn't forget that bots can be incredibly funny and entertaining. To the bots mentioned in the article, I’d like to add Threat Update, which combines a colour-coded threat level with a more or less nonsensical request. It’s one of the best parts of my timeline.
It’s currently unclear if training deep learning models on copyrighted material is a form of infringement, but it’s a harder case to make if the data was collected and trained in a non-commercial setting.
As more and more models do more and more things, AI hype will get louder and louder. To resist this cycle, stick to these tips for reporting on AI (which are very handy for reading reporting on AI, too).
Adrienne Williams, Milagros Miceli and Timnit Gebru took another close look at the – often precarious – human labour that powers AI, and argue that this labour should become the center of AI ethics.
This episode of The Gradient Podcast with Laura Weidinger on Large Language Models and their ethical implications offers a wealth of knowledge. It should have been in the last issue, but slipped through the cracks.
Two ex-Google engineers started their own company called Character.ai which lets users chat with bot versions of Donald Trump or Elon Musk. This company is a symptom of a trend where developers start their companies to avoid those pesky questions of ethics for technological advances. Or, as Timnit Gebru puts it:
We’re talking about making horse carriages safe and regulating them and they’ve already created cars and put them on the roads.
A robot gave evidence in the House of Lords of the UK parliament. It shut down in the middle of giving a (pre-recorded) answer. Which is an apt symbol for the state of robotics, and for the fact that Terminator fears are overblown at the moment. Why you would let a robot testify in parliament in the first place … whatever.
Boston Dynamics and five other robot companies pledged that they won’t weaponise their robots. The pledge is a response to several incidents over the last months. In an interview with IEEE Spectrum, Brendan Schulman of Boston Dynamics calls for legislation to enforce this. How it could be enforced in the first place remains an open question. In essence, it is another example of how tech companies get things backwards. Maybe, instead of building potentially harmful products and realising the ethical complications after the fact, this order should be reversed? What a world that would be.
While the purpose of the preliminary discussion was precisely to get the views of the political groups out in the open, two European Parliament officials told EURACTIV that there appeared to be a clear majority in favour of the ban.
Keep in mind though that it’s still early in the discussion, and the lobby power of technology companies is nothing to sniff at. Keeping up the pressure will be important in the coming months until the law is passed.
Accompanying the AI Act is the AI Liability Directive. This directive would allow European citizens to sue if they are harmed by AI systems. The problem here is the need to prove that the harm is a direct consequence of AI.
“In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules,” Pachl says. For example, she says, it will be extremely difficult to prove that racial discrimination against someone was due to the way a credit scoring system was set up.
This is difficult enough to prove on its own. But research also shows that humans tend to view discrimination by automated systems with less outrage than discrimination by humans. Bigman et al. call this the algorithmic outrage deficit.
The paper further finds that people are «less likely to find the company legally liable when the discrimination was caused by an algorithm». The AI Liability Directive needs to account for this if it is to have an impact.
The Biden administration in the USA announced the AI Bill of Rights, a white paper which could serve as the beginning of a legal framework similar to the AI Act. For now, it is non-binding, though, and reactions have been mixed.
What are you looking at?
The US Department of Defence extended its contract with Palantir. Palantir came under criticism as Bloomberg unveiled its strategy to «buy its way into» contracts with the British NHS. In this scheme, Palantir sought to buy smaller companies which already had contracts with the NHS, thus enabling it to expand its business with lower levels of scrutiny.
A contract with the state police in North Rhine-Westphalia, Germany, exploded in costs and time. Meanwhile, the Gesellschaft für Freiheitsrechte filed a constitutional complaint against the so-called «Palantir paragraph» in NRW’s police law, which allows police to compile and analyse a broad swath of personal data.
And, to end on a good note, Palantir’s stock price crashed by more than 60% year over year. No tears.
In the US, a cop used state surveillance technology to gather data on women, had them hacked, and extorted them with sexually explicit imagery stolen from their Snapchat accounts.
According to a sentencing memorandum, Bryan Wilson used his law enforcement access to Accurint, a powerful data-combing software used by police departments to assist in investigations, to obtain information about potential victims. He would then share that information with a hacker, who would hack into private Snapchat accounts to obtain sexually explicit photos and videos.
Prosecutors recommend the lowest sentence. Fuck, and I can’t stress this enough, all of this.
Cops using the data available to them through official means for being bad people is – of course – no isolated incident. In Germany, police came under scrutiny for allegedly supplying personal information available to them to the right-wing author(s) of letters threatening people of colour and left-wing politicians.
Facebook had its Connect conference, touting virtual reality, the Metaverse and the hype that won’t materialise. For a brief moment, it even seemed like legs had finally arrived in Facebook’s famously torso-centric Horizon World. But, alas, no legs. The sequence showing legs was made with motion capture technology, not the real imagined shizzle.
The virtual reality revolution is so revolutionary that even Facebook’s employees aren’t on board. Likely because they don’t like revolutions? Nah. They don’t use it because it’s buggy and bad. At least, it has found a «creative» new method of tracking: Facial expressions.
Anyway. Facebook not finding its legs is a pretty adequate metaphor for its current state. And I’ll leave it at that.
Nieman Lab had a look at the state of echo chambers and found that most Twitter users don’t have one … because they don’t consume political content in the first place.
In other words: Most people don’t follow a bunch of political “elites” on Twitter — a group that, for these authors’ purposes, also includes news organizations. But those who do typically follow many more people they agree with politically than people who they don’t. Conservatives follow many more conservatives; liberals follow many more liberals. When it comes to retweeting, people are even more likely to share their political allies than their enemies. And when people do retweet their enemies, they’re often dunking on how dumb/terrible/wrong/evil those other guys are. And conservatives do this more than liberals, overall.
The tool Cover Your Tracks is a handy little helper to see if your browser fingerprint is unique.
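The idea behind fingerprinting is simple enough to sketch in a few lines: collect a handful of browser attributes, canonicalise them, and hash the result; the more distinctive the combination, the more identifiable you are. A minimal illustration of the principle (this is not how Cover Your Tracks works internally, and the attribute values are made up; in a real browser they would come from `navigator`, `screen`, canvas rendering, and so on):

```javascript
// FNV-1a: a simple, fast, non-cryptographic hash.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

function fingerprint(attributes) {
  // Sort keys so the same attributes always hash the same way.
  const canonical = Object.keys(attributes)
    .sort()
    .map((key) => `${key}=${attributes[key]}`)
    .join(';');
  return fnv1a(canonical);
}

// Hypothetical attributes, as a site might collect them.
const browser = {
  userAgent: 'Mozilla/5.0 (X11; Linux x86_64) Firefox/106.0',
  language: 'de-DE',
  screen: '1920x1080x24',
  timezone: 'Europe/Berlin',
};

console.log(fingerprint(browser));
```

The uncomfortable part is how few attributes are needed: change a single one (say, the language) and the hash changes completely, while a stable combination follows you across sites without any cookie involved.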
Two climate activists threw tinned tomato soup at a painting by van Gogh and glued themselves to the wall of the museum. On social media, they were quickly ridiculed. While you do not have to agree with those actions, you need to defend them, and criticise climate change, argues Nathan J. Robinson in Current Affairs.
Why ask what’s wrong with them rather than asking what’s wrong with everyone else? Is not climate change an act of vandalism (and ultimately, theft and murder) far, far worse than the spilling of the soup? If we are sane, should we not discuss the thing they were protesting about rather than the protest itself?
That’s all for the last weeks. Read you next time. Stay sane, hug your friends, and enjoy the colours of autumn.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/011/2022-10-02T14:12:00.000Z<![CDATA[Summer is over. Winter is coming. Around the Web is back. AI art, deep fakes, and David Attenborough.]]><![CDATA[
Collected between 29.5.2022 and 2.10.2022.
Around the Web ends its summer break and returns to its regular programming. Whatever regular means these days.
What a summer it has been. Crypto tanked, NFTs are dead. Italy (#girlboss) and Sweden elected right-wing governments. The coolest summer of the rest of our lives, yet marked by droughts and catastrophic floods. Cloudflare was forced to drop Kiwi Farms. Elon Musk still hasn’t bought Twitter; instead, they now fight in court. As neither the platform nor the billionaire can count on my sympathy, I expect the case to be worth a bucket of popcorn. If the world ends, we might as well get diabetes.
To avoid the end of the world as we know it (sorry), the question to be asked, and collectively answered, is how to organise, and to do so fast enough to have any shot at a better future. Time’s running out. It’s probably time to stop being picky.
Summer’s most wholesome internet story has likely been corn kid. I’m glad to read that he’s fine.
There’s always hope, and we better never forget this.
On a technical note: I finally added tweet and image support to the newsletter. Yay.
Human Touch is a great profile of the women workers in India who annotate the datasets used to train AI models, the perspectives this work gives them, and the global economic dynamics behind it.
India is one of the world’s largest markets for data annotation labour. As of 2021, there were roughly 70,000 people working in the field, which had a market size of an estimated $250 million, according to the IT industry body NASSCOM. Around 60 percent of the revenues came from the United States, while only 10 percent of the demand came from India.
For a brief news cycle, an engineer at Google declared an AI sentient. It was bound to happen. Luckily, this news cycle is over.
One rogue field in which artificial intelligence is cause for concern is so-called deep fakes. The technology analyses past audio and video recordings of a person and renders new content based on them.
Amazon will offer to deep-fake the voice of deceased relatives. The company touts this as a way to relive memories or have a ghostly voice read a good-night story to your kids. While creepy, it makes sense. Most of the products shaping the digital world have a hard time coping with death. After all, they are built to store the amassed data forever and ever; fading away does not fit this concept.
Has anyone at Amazon thought about the potential for misuse? I would rather not hear that the answer is «No».
As with every technology, it will be more broadly adopted as time goes on. And as with every bad thing on the internet, women will feel the brunt of it.
Even if Amazon’s latest product offering doesn’t interest you and you steer clear of Ponzi economics, sooner rather than later you will be exposed to a deep fake, be it as a form of harassment or to sell you something.
Interactive deepfakes have the capability to impersonate people with realistic interactive behaviors, taking advantage of advances in multimodal interaction. Compositional deepfakes leverage synthetic content in larger disinformation plans that integrate sets of deepfakes over time with observed, expected, and engineered world events to create persuasive synthetic histories. Synthetic histories can be constructed manually but may one day be guided by adversarial generative explanation (AGE) techniques. In the absence of mitigations, interactive and compositional deepfakes threaten to move us closer to a post-epistemic world, where fact cannot be distinguished from fiction.
"On the Horizon: Interactive and Compositional Deepfakes"
So you probably know that neural nets can generate videos of people saying stuff they never said. But Microsoft’s chief science officer articulates two threats beyond this that could be way worse: [1/11]
The support from Renew, which joins the Greens and Socialists & Democrats groups in backing a ban, shows how a growing part of Europe's political leadership is in favor of restrictions on artificial intelligence that go far beyond anything in other technologically-advanced regions of the world including the U.S.
Vorratsdatenspeicherung, a German project to store internet connectivity data on all its citizens, has been declared illegal by the European Court of Justice. Again. German governments have been trying to get different forms of this to pass for fifteen years now.
The internet shutdown in Tigray is nearing its second anniversary. The regime in Iran, too, reacted to the current uprising by shutting down the internet. Elon Musk PR’d to the rescue, announcing he would «unlock» access to Starlink’s internet in the country. It won’t help much.
Loose ends in a list of links
In good people: David Attenborough holds a special place in many people’s hearts, including mine. Rachel Riederer’s account of his work is therefore well worth a read: The Lost Art of Looking at Nature
Cheating. It has been a while since I’ve been exposed to it – I don’t play video games anymore and left school a while ago. Still, cheating is alive and well, as Matt Crump details in his post My students cheated … A lot.
Let’s see what the winter brings. Stay sane, hug your friends, and shut down fossil infrastructure.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/010/2022-05-28T14:12:00.000Z<![CDATA[Rentier capitalism and expropriation, AI models large and larger, the EU tightens its border regime, Elon Musk speed-runs fascism, and what prison inmates did to police cars.]]><![CDATA[
Collected between 16.5.2022 and 28.5.2022.
Around the Web maintains its biweekly publishing schedule for now because the author still hasn’t fully recovered from their sicknesses of the last few months.
Given the current dooming news cycles, that might be a feature.
It also means that I made it to issue No. 10 a bit later than expected. Nonetheless, I’m happy to have made it this far. My normal rate of abandonment left me wondering whether I’d publish more than three issues.
After ten issues, some pieces have fallen into place, and I’ve found my beat (ranting while hoping for a better world). To celebrate the 10th issue, I’ve built a little statistics page.
If you enjoy Around the Web, it would mean the world to me if you recommend the newsletter and/or website to a friend or two.
Thanks for reading, let’s get into the reading.
Enjoy capitalism
While last year saw the Great Resignation and everyone was happy on r/antiwork, this year sees the Great Layoffing. Gorillas, Getir, Klarna, Nvidia, Netflix – the list goes on and on – all terminating contracts to make their investors happy.
High-flying startups with record valuations, huge hiring goals and ambitious expansion plans are now announcing hiring slowdowns, freezes and in some cases widespread layoffs. It’s the dot-com bust all over again — this time, without the cute sock puppet and in the midst of a global pandemic we just can’t seem to shake.
Everything that is wrong with venture capitalists. Josh Gabert-Doyon thankfully deconstructed the most recent «VC is awesome because it is money» cold take, tracing VC firms back to whaling expeditions.
There’s a long history of rich people throwing money at stupid projects, and VC investment is best seen as a systematized method for cutting the risks involved.
Rich people throwing money at stupid projects? Andreessen Horowitz set up a new $4.5 billion fund for crypto projects. When the likes of a16z talk about «building a better internet», it’s time to run.
In other capitalism, Trevor Jackson reviewed Rentier Capitalism: Who Owns the Economy and Who Pays for It?. In his review, Jackson makes a compelling argument against rentier capitalism.
What is new about the rentiers of today, then, is not their prevalence, their dominance, or that they face less serious opposition than in the past. What is most distinctive about our contemporary rentiers is that it has become difficult to discern whether their maneuvers represent rational strategies of elite wealth defense in conditions of declining productivity and technological change, or instead, the implacable drive of a nihilistic death cult.
Reading this article left me with Gwen Guthrie’s Nothing Goin’ On But the Rent stuck inside my head (it’s a good tune, don’t hesitate).
According to some at Google’s DeepMind, AI is now in fact almost intelligent. Do I need to change my headline? DeepMind published a paper about Gato, a new kind of machine learning model, which is capable of learning multiple tasks at once.
Previous models could, for example, play Go or StarCraft, but needed to forget everything about a previously learned skill to learn the next. Gato can perform 604 tasks. There are limitations: Gato is generally worse at those tasks than specialised models. So if you read anything claiming that Artificial General Intelligence is near, forget about it.
Some external researchers were explicitly dismissive of de Freitas’s claim. “This is far from being ‘intelligent,’” says Gary Marcus, an AI researcher who has been critical of deep learning. The hype around Gato demonstrates that the field of AI is blighted by an unhelpful “triumphalist culture,” he says.
If you want to get up to speed or back on track on the common criticisms of such models, this article by Emerging Tech Brew is a great introductory resource.
In The Markup’s newsletter, Julia Angwin interviewed Timnit Gebru on the same topic, and as always when Gebru speaks, it’s worth a read. Gebru reflects on the environmental and societal problems of the race to build ever larger models.
Currently, there is a race to create larger and larger language models for no reason. This means using more data and more computing power to see additional correlations between data. This discourse is truly a “mine is bigger than yours” kind of thing. These larger and larger models require more compute power, which means more energy. The population who is paying these energy costs—the cost of the climate catastrophe—and the population that is benefiting from these large language models, the intersection of these populations is nearly zero.
There’s a technical, as well as PR, reason for this. Mixing concepts like “fuzzy panda” and “making dough” forces the neural network to learn how to manipulate those concepts in a way that makes sense. But the cuteness hides a darker side to these tools, one that the public doesn’t get to see because it would reveal the ugly truth about how they are created.
There is no beta that’s usable by anyone outside of Google, as Google is scared of possible abuse:
Downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo.
I do not agree with everything said in it, but this interview with Kai-Fu Lee about the AI and the future of work makes some interesting points. Thinking about a way to employ AI and humans side-by-side is especially worthwhile, even though – or rather because – it surfaces the need to criticise the capitalist mode of production.
But as we slip deeper into the reality of being able to catalog and retrieve—theoretically—all our life’s experiences, we lose a degree of autonomy over what French philosopher Jacques Derrida (I’m so sorry) called “archive fever”—a drive to document that becomes a compulsion to collect everything, leaving the archive overflowing and unreadable. This is the very problem that Big Tech purports to solve via memory features, promising that its algorithms will remind us of everything worth remembering. But the metric for what’s worth remembering is fundamentally unknowable. We change, we move, we make new friends, we outgrow old pastimes. More importantly, when Apple, Google, and Facebook continually demonstrate that their users are simply data to mine, why trust them to begin with?
Algorithms shape the way we speak, too. Some weeks back, I wrote about Algospeak and how social media algorithms force their users to adapt speech to avoid shadow-banning. Wired zeroed in on mental health and how talking about being «unalive» arguably worsens the discourse about suicide and mental health.
Williams worries that the word “unalive” could entrench stigma around suicide. “I think as great as the word is at avoiding TikTok taking videos down, it means the word “suicide” is still seen as taboo and a harsh subject to approach,” she says. She also swaps out other mental health terminology so her videos aren’t automatically flagged for review—“eating disorder” becomes “ED,” “self-harm” is “SH,” “depression” is “d3pression.” (Other users on the site use tags like #SewerSlidel and #selfh_rm).
What are you looking at?
Welcome to the week in surveillance. Before we look at current measures in the European Union, it’s nice to see PimEyes come under closer scrutiny. The NYT went after them. PimEyes’ current owner frantically denies that they are building stalkerware, insisting that their technology should only be used to search for photos of oneself. The only thing they demonstrate with such a statement is that they understand neither technology nor humans.
Panoptiropean Union
The European Data Journalism Network published an investigation into smart border control measures imposed by the EU, and the multi-billion dollar surveillance industry enabled by ever more control. As Matthias Monroy reports, one of those projects is the European System for Traveller Surveillance (ESTS). A joint venture between Frontex and Europol, the ESTS system is effectively a predictive policing network, which will scan every traveller coming into the EU – including EU citizens – linking multiple existing databases, including those containing biometric data.
With pre-screening, the agencies want to make predictions as to whether travellers might be dangerous. This is aimed primarily at persons from third countries. However, a „traveller file“ is also to be created for EU citizens when they cross the border.
Predictive policing is known to reproduce whatever biases exist in the society employing the technology. With an agency like Frontex, which is accused time and time again of ignoring human rights, it’s frighteningly easy to imagine how the system will be (ab)used.
And, of course, it will use Machine Learning because fuck everything.
With the new system, each time travellers from non-EU countries (both short-stay visa holders and visa-exempt travellers) cross an EU external border, they will be registered in the automated IT system using their name, type of the travel document, biometric data (fingerprints and captured facial images) and the date and place of entry and exit.
The problems for Clearview AI, poster child of surveillance capitalism, continue.
The U.K.'s Information Commissioner’s Office, the country's privacy watchdog, has ordered facial recognition company Clearview AI to delete all data belonging to the country's residents.
Clearview has also been ordered to stop collecting additional data from U.K. residents and will pay a fine of roughly $9.4 million for violating the country's data protection laws.
That’s in stark contrast to Elon Musk, who on Wednesday used Twitter to announce that he isn’t voting for Democrats any more. The tweet is part of his recent speed-run attempt in the category Tech billionaire to fascist any%. On Friday, he met with Jair Bolsonaro, far-right president of Brazil.
When I was 7, my teacher told us to write an article about “world cultures” for school over the weekend. I remembered it late on Sunday so in a panic I made up something called the "Icelandic Fish Festival", figuring said teacher wouldn’t know either way.
That’s it for this week. The world is mad, now more than ever: Stay sane, hug your friends, and please for the love of god never trust a cop.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/009/2022-05-15T14:12:00.000Z<![CDATA[Roe v Wade & privacy, crypto & and its big crash, Europe & an attack on encrypted messaging, how a mechanical clock works, and an anthem to women riding bicycles.]]><![CDATA[
Collected between 2.5.2022 and 15.5.2022.
There was no Around the Web last week because I became very sick over the weekend. I spent this week lying in bed, too, and here I am still – recovering but also a bit annoyed. I had much better plans.
Annoyance, fittingly, is the topic of this newsletter: The last weeks saw the leak of a draft opinion threatening to overturn Roe v Wade in the USA, the inevitable collapse of the crypto market, an unrelenting heat-wave in south-east Asia, and a new attack on end-to-end encryption by the European Union.
This explanation of how a mechanical clock works, complete with interactive 3D visualisations, is marvellous. As it’s the best thing I read, I’ll leave it right in the intro.
Abortion is not a partisan issue, though. A large majority of Americans supports safe abortion. What we see here is a decades-long coordinated attack by an ever-radicalising far right, aiming to undermine human rights. Laurie Penny published the chapter on abortion and reproductive freedom from her book Sexual Revolution.
These laws are not about the ‘right to life’. They are about enshrining maximalist control over women as a core principle of conservative rule. They are about owning women. They are about women as things.
The sore-winner complex highlights a fundamental asymmetry between the style of culture warring employed by the left and right. The right’s vision is ahistorical and logically confused, but more importantly, it is relentless. There is no appeasing this type of politics. It is a politics that will manage to use its victories to stoke additional fears inside its voters. For the media, there is no amount of evenhanded or both-sides coverage that will get the right to back down from calling the press illegitimate, biased, and corrupt. For non-Republican politicians, there is no amount of bipartisan language or good faith attempts at dialogue or engagement that will inspire bipartisanship, compromise, and a desire for majority rule. For the right, even in victory, there is only grievance and fear.
You do not need to be somewhere physical, mind. As Lil Kalish reports in Mother Jones, getting information about abortion or attending telemedicine sessions leaves a digital trace.
Now, if abortion gets outlawed, these trails all of a sudden become evidence. And instead of more than annoying ads, you might get a visit from the cops.
News coverage of digital forensics often celebrates its role in prosecuting serious felonies. But when it comes to reproductive rights, Conti-Cook says, the same tools “will be a powerful [asset] to police and prosecutors in a more criminalized landscape” for abortion seekers.
This whole situation is also the topic of the latest edition of T3thcis. Shout out, and if you want to get more tech ethics news in your inbox, you should follow them anyway.
The lesson here is to always protect your digital trails. Data is forever, legislation changes. Digital self-defence is the only viable way to protect yourself. Shoshana Wodinsky over at Gizmodo published a hands-on guide on how to stay protected.
NFT trading has been on a downward slump for a while. Coinbase’s stock crashed 50% over the last year.
But this week Bitcoin and Ethereum, too, saw their prices collapse. At the time of writing, Bitcoin seems to have stabilised at around $30,000, the lowest price since December 2020 and down more than 50% from its all-time high in November 2021.
The most dramatic story has probably been TerraUSD. Terra is an algorithmic stablecoin. Or: a Ponzi scheme. Rusty Foster tried to explain stablecoins as simply and as enraged as possible, and I won’t even try to do a better job, as I would inevitably fail. Over the last week, Terra collapsed completely, taking some $18 billion with it.
Terra, and other stablecoins, have not been without criticism. Especially the algorithm-flavoured variant, which is backed by basically nothing. Terra seems to be gone for good. But the volatile market remains.
Given that Bitcoin is a direct response to the 2008 financial crisis and an attempt to do things better, all this feels a lot like the 2008 financial crisis, except worse.
Cryptocurrency trading throws around alleged millions and billions. Those numbers are fictions built on fictions, with a much smaller—but still real—amount of actual money at the bottom. The gateways to genuine dollars are narrow and have yet to be significantly breached. But that’s not for lack of effort from the cryptocurrency world, whose endgame appears to be to make cryptocurrency systemic and leave the government as the bag-holder of last resort when the tottering heaps of leverage fall down. It worked in 2008, after all.
While crypto is imploding, tech companies lost a combined $1 trillion in market value over the last week. Facebook announced that it will largely stop hiring across the company. Now there’s good news after all.
Amazon is on a firing spree. It fired two union organisers as well as senior managers in the Staten Island warehouse which voted to unionise some weeks back.
Space, reduced from the final frontier to a billboard. SpaceX and Blue Origin spend more and more money on lobbying. As I’m sick, I also had the time to watch the new Netflix documentary on Musk’s SpaceX. Despite it being a two-hour-long PR puff-piece, you walk away with the impression that Musk has not the slightest idea how a rocket works. Which is a remarkable feat after running a rocket company for almost two decades.
News from outer space: There are earthquakes on Mars. The idea of Musk-Bezos flying to Mars only to have their nice colonies destroyed by a marsquake is, frankly, what kept me laughing while in hospital.
Computer scientists are used to thinking about “bias” in terms of its statistical meaning: A program for making predictions is biased if it’s consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.) That’s very clear, but it’s also very different from the way most people colloquially use the word “bias” — which is more like “prejudiced against a certain group or characteristic.”
The whole piece is not only interesting when thinking about AI, but for society as a whole.
At the same time, the outsourcing of digital work in the Global South is inextricably linked to exploitive labor practices employed by foreign firms. The digital labor market in these regions is rampant with low wages, harsh working conditions, alienation, income disparity, racism, stress, and lack of global recognition.
Comparing it to GPT-3, another language model released last year, the team found that OPT-175B ‘has a higher toxicity rate” and it “appears to exhibit more stereotypical biases in almost all categories except for religion.”
Given that Facebook trained the model on unmoderated Reddit comments, this seems about right. So what we got was not a generous gift to the research community, but an open-sourced hate-spewing monster. Slow clap, Facebook.
Of course, there are those who have something to gain if the EU decides to end privacy and implement such measures. Namely, the companies developing the analysis tools. One of those is run by Ashton Kutcher, who is already lobbying in Brussels.
According to internal documents and emails provided to The WSJ, Facebook not only shoddily took down pages for the Children’s Cancer Institute, women’s shelters, and fire rescue services (during fire season, no less), they prevented certain COVID-19 info pages from reaching users during initial vaccine rollouts. Facebook slowly restored these pages a few days later, following tentative alterations to Australian legislation regarding compensating publishers for their original news content.
While more states are coming to terms with social media companies, the Federal Trade Commission (FTC) has rediscovered a tool that cuts the problem at its roots: algorithmic destruction. The slightly aggressive name basically means: if a company collects data it should not collect and builds an algorithm on top of that data, it not only needs to delete the data, but destroy the algorithm for good measure.
That’s it for this week. What fun we had. If you like Around the Web, feel free to show it to a friend who likes Around the Web. Thanks for reading. Stay sane, hug your friends, and see you next week.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/008/2022-05-01T14:12:00.000Z<![CDATA[Facebook does not know what it is doing (with your data), someone bought a website, others have no internet, and 185 hellos from British Columbia.]]><![CDATA[
Collected between 19.4.2022 and 1.5.2022.
Happy Labour Day, everyone.
Let’s remember those who lost their lives fighting against capitalism, making the world a better place, and keep up the fight.
I spent multiple days without looking at a computer, or work. 10/10. While I’ve been looking away, the internet has been revolving around a billionaire, mostly. While the rest of the world was busy bombing itself to pieces and burning the remains to the ground. 0/10.
Other people have been busy writing, and here’s what I read:
Social, they said
Facebook does not know what it is doing (with your data). According to a leaked document, all the data Facebook collects is ink flowing into a lake, and no one has the slightest idea how to control it.
In other words, even Facebook’s own engineers admit that they are struggling to make sense and keep track of where user data goes once it’s inside Facebook’s systems, according to the document.
They, too, still have no clue how to tackle misinformation. Do they even try? Misinformation is running rampant on Facebook’s platform in Africa, with Facebook looking on from the outside.
I can’t help but get the feeling that the whole thing has the same vibes as some years ago, driven by the Twitter usage of a certain ex-president of the USA. I’m not alone in this observation:
Like Trump, Musk puts his critics in a real bind. Broadly speaking, in an attention economy there’s no satisfying way to deal with people like them. There’s a circular thing where they command attention because they have some kind of power (fame, money, etc.) but, increasingly, their ability to command attention also grants them power (to influence/program the news cycle, amass cult-like followings, enhance their businesses).
A point that is underreported, even after the hundredth piece on what might, might not or who knows happen, is that we can’t solve society’s problems with Twitter, regardless of who owns it.
The dissociation of truth from the fabric that holds our world together has been going on for a while. Somehow this piece from The Awl (R.I.P.) came up again. It was published a century ago in 2016, and reminds us that right-wing pundits have been trying to dissipate what’s left of a shared understanding of, well, anything since at least 2004.
As we speak about Twitter anyway
Where have all the tweets gone? Twitter throttled tweets mentioning the HBO docuseries Q: Into the Storm. Twitter said they did so because they wanted to avoid amplifying QAnon. Which is quite ridiculous, given that the docuseries tries to dismantle the Q world-view. It certainly has nothing to do with the fact that the documentary criticises Twitter for enabling Q in the first place.
After the USA left Afghanistan, and the Taliban took over, they also got control of the installed biometric surveillance systems. Having biometric data in the hands of a terrorist government is as bad an idea as it sounds.
Every time there is some supposedly new, world-changing AI system, it turns out that the problems of humanity are just reflected inside the computational system. Honestly, I’m a little tired of the narrative that computers are going to deliver us. I think the narrative itself is tired.
Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.
Bot Populi answers how we can decolonize and depatriarchalize AI. The piece also features the most succinct description of AI I’ve read so far: «Artificial intelligence is the holy grail of capital accumulation and socio-political control in contemporary societies.»
As a response to datafication, algorithmic mediation and automation of social life, communities worldwide are trying to pursue justice on their terms, developing the technology they need, committing to the community’s best interests, and building pathways to autonomy and a dignified life. We have explored some such initiatives and the ideas underpinning them below. These initiatives provide insights about different dimensions of AI technologies: feminist values applied to AI design and development, communitarian principles of AI governance, indigenous data stewardship principles, and the recognition of original languages and cultures.
Not AI, but colonialism nonetheless: A group of crypto bros is trying to buy an island (yes, again). The cryptonians are once again showing how capitalism with cryptocurrencies is only capitalism after all, and the Jacobin piece does a fantastic job showing how we ended up where we are.
Fantasies of libertarian exit from society were not uncommon at the time. The 1960s in the United States was as much the heyday of market libertarianism as it was of New Left anti-capitalism. Fears of demographic, ecological, and monetary collapse, combined with anxieties over the activities of social movements seeking racial, gender, and economic justice and redress, hastened efforts to find ways to abandon the sinking ship of state and to start anew elsewhere.
Rest of World analysed the history of internet shutdowns. Once Egypt opened Pandora’s box, shutting down the internet to quell dissent, it became a favourite tool of governments around the world. It’s not only bad for protests but also bad for business, as shown by the example of Kashmir, the region most affected by internet shutdowns in the world.
Hence the practice of blocking the internet outright has given way to more nuanced approaches, as witnessed in Russia, where the once chaotic infrastructure powering the internet has been centralised and internet service providers are required to install government-provided control software.
In Around the Web 006 I mentioned Tesla’s Gigafactory in Brandenburg, Germany, and how the surrounding area is increasingly susceptible to drought. Unfortunately, not much has changed. But the problem is more complex than Tesla.
That’s it for this week. If you’ve made it this far, why not recommend Around the Web to a friend?
Until next week. Stay sane and hug your friends.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/007/2022-04-18T14:12:00.000Z<![CDATA[AI keeps snake-oiling, bankrupt surveillance, cultivating memory, evading algorithms, how space became a billboard, and a browser mitigating tremors.]]><![CDATA[
Collected between 10.4.2022 and 18.4.2022.
A wild potpourri awaits you, dear reader, in this week’s issue. We are travelling from ancient history to the latest in AI snake oil. Turkey weighs a war against Kurdistan; I will cover this in the next issue. For now, I need to recover from my second bout of sickness in four weeks.
On a technical note: Images are currently not displayed in the RSS feed and, consequently, the newsletter. You can see everything on the website. I aim to fix this before publishing the next issue, and use more images going forward.
Enjoy the read.
This ain’t intelligence
A new Machine Learning model is reportedly able to spot depression in 88 percent of participants based on their tweets. Which is, of course, a terribly bad thing to do. Once such a model gets into the wild, how do you control who uses it to analyse whom?
However, the bot can also be used after a post has made it into the public domain, potentially allowing employers and other businesses to assess a user’s mental state based on their social media posts. It could be used for a number of reasons, the researchers say, including for use in sentiment analysis, criminal investigations or employment screening.
No matter how good or bad, such models should never exist.
Giving the battered mind no respite, we will see emotion detection implemented in Zoom and other products in the upcoming months. The company promises to use ML models to help analyse digital sales pitches. It’s «ironic» that not even the companies building these snake oil products really believe in them.
But Ehlen recognized the limitations of the technology. “There is no real objective way to measure people’s emotions,” he said. “You could be smiling and nodding, and in fact, you’re thinking about your vacation next week.”
The NYT Magazine published a long read on OpenAI’s GPT-3 large language model. Unfortunately, it did a mediocre job at best. Emily Bender, who was interviewed for the piece, published a response. In it, she dismantles the uncritical parroting of OpenAI’s PR (and with it that of the wider industry), and provides a better framework for journalists reporting on tech solutionism.
Puff pieces that fawn over what Silicon Valley techbros have done, with amassed capital and computing power, are not helping us get any closer to solutions to problems created by the deployment of so-called “AI”. On the contrary, they make it harder by refocusing attention on strawman problems.
What are you looking at?
The spyware company NSO Group has been declared «valueless» to investors. NSO Group became infamous for its Pegasus software, which has been used against investigative journalists and activists around the world. Maybe building surveillance tools for autocrats isn’t worthwhile after all. Mere weeks ago, the German company FinFisher, which fished (I’m terribly sorry) in the same murky waters, declared bankruptcy. I won’t even try to hide my Schadenfreude.
The participants of the Zurich Marathon last weekend were subjected to facial recognition without it being mentioned in the event’s privacy policy. After the event, participants could find images of their run by uploading a selfie.
The Mute button in the video chat application of your choice might not do what you think it does. While the other participants cannot hear you, analysis of the streamed data shows that the companies providing the tools might well do.
They found that all of the apps they tested occasionally gather raw audio data while mute is activated, with one popular app gathering information and delivering data to its server at the same rate regardless of whether the microphone is muted or not.
What the companies do with the collected data will remain unclear.
It’s easier to guess what the FBI plans to do with its million dollar investment in surveillance technology. It’s far from new that acts of violence serve as the pretext to expand policing budgets, even if the institutions in question are already more than able to surveil what they want.
As Rolling Stone reports, the FBI actively monitored social media feeds during the 2020 Black Lives Matter protests. Somehow it failed to do so in the run-up to January 6th, 2021, when the Trump crowd stormed the US Capitol.
The new documents suggest the agency has all the authority it needs to monitor the social-media platforms in the name of public safety — and, in fact, the bureau had done just that during the nationwide wave of racial justice protests in 2020. Critics of the FBI say that the bureau’s desire for more authority and surveillance tools is part of a decades-long expansion of the vast security apparatus inside the federal government.
Those white classical statues we’ve grown accustomed to might not always have been so white. The aesthetic theory derived from the whitened image served as one of the predecessors of early anthropology. Reality might have been more colourful, though.
Large polychrome tauroctony relief of Mithras killing a bull, originally from the mithraeum of S. Stefano Rotondo, dating to the end of the 3rd century CE. Now at the Baths of Diocletian Museum, Rome (photo by Carole Raddato, CC BY-SA 2.0).
Coda has published one of the best articles on memory culture, spanning an arc from the south of the USA to Germany. The piece explores in depth why it’s so hard to achieve a form of Vergangenheitsbewältigung that does not stop when one’s own family gets involved.
Silence distorts memory in various ways. It can happen when a nation, collectively, refuses to engage with the realities of its past, opening up space for revisionist histories and feel good counter-narratives that gloss over the horrors of the past. Sometimes national silence is summoned as an act of avoidance; other times, to serve a political or ideological agenda.
The whole piece is long, but every sentence is worth your time.
Social, they said
The internet has always been modernising language. You can view early abbreviated uses of language (kthxbye, lol) as an outlet needed in chat rooms or a quick way to communicate in online games, while emoticons and kaomoji offered ways to convey emotion through text.
With today’s algorithmic moderation systems the pressure is different, though. Automated systems downrank specific words (or at least that’s suspected, as nobody knows how those systems work). This leads to a new form of online speak, dubbed Algospeak.
“There’s a line we have to toe, it’s an unending battle of saying something and trying to get the message across without directly saying it,” said Sean Szolek-VanValkenburgh, a TikTok creator with over 1.2 million followers. “It disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people creating that verbiage and coming up with the colloquiums.”
Elon from Twitter
Shortly after it was announced that Elon Musk will join Twitter’s board, it was announced that Elon Musk will, in fact, not join Twitter’s board. Shortly after it was announced that Elon Musk will not join Twitter’s board, Elon Musk announced that he wants to buy Twitter. Twitter said no, though it was unlikely that Musk ever intended to follow through.
Tesla’s woes continued in the meantime. An analysis found bot activity that seems to correlate with rises in Tesla’s stock price.
Capitalism must expand and commodify every last thing on earth. And earth is not enough, so capitalism moves to commodify space too, changing the final frontier into a billboard.
The new space race pursued by the likes of Bezos, Musk, and Virgin’s Richard Branson taps into that same thirst for inspiration and transcendence. Their companies are pushing the limits of technology in remarkable ways. At the same time, there is something deeply unsettling about the space barons’ capitalist swagger. They measure the grandeur of space in terms of dollars and Bitcoin. They look out into the cosmic expanse and see another frontier for business expansion, ripe for profit-making colonies, mining operations, and satellite swarms.
The world is burning
New reporting by The Intercept shows, yet again, how industrial capital tries to undermine scientific publications and concerted action against the climate crisis.
Many scholars have noted the influential role the [Global Climate Coalition] played in obstructing climate policy in the 1990s, but the first peer-reviewed paper on the group, published this week, reveals that the original and lasting intention of the GCC was to push for voluntary efforts only and torpedo international momentum toward setting mandatory limits on greenhouse gas emissions.
The climate catastrophe does not care about this too much. Chile is rationing water for its residents. The same is happening in the vicinity of Tesla’s Giga Factory in Brandenburg, Germany.
World Wide Web
The open Web has always been there, sometimes ignored and forgotten. Is a renaissance underway as tiredness of the walled gardens of Facebook et al. increases? It seems so, as Anil Dash argues in his new post A Web Renaissance.
While the core technology of the web is decades old, the tools that help make it and run have been quietly evolving into something extraordinary in the last few years, too. There’s a flourishing of powerful new frameworks that make it simpler than ever to build flexible, responsive, useful sites. New hosting platforms let those sites be deployed and delivered faster and more reliably than ever. And you can build one of these sites in literally under a minute, then collaborate with people anywhere in the world to iterate on making the site better.
Don’t-call-them-overlay-company AudioEye sent a cease & desist letter to accessibility specialist Adrian Roselli for criticising the company on Twitter. AudioEye is one of several companies that have come under criticism for marketing that claims foolproof accessibility from day one. According to AudioEye’s lawyers, it is a misrepresentation of the company to classify it as an overlay vendor, as it offers manual testing, too.
DuckDuckGo has launched a new privacy-centric browser. It’s built on top of WebKit, and as such, for now, a Mac-only product. Using WebKit is an interesting move, as most recent newer browsers (Brave, Edge) use Chromium as their base. DuckDuckGo has decided that Chromium includes too much of Google to be good. One standout feature is the browser’s ability to automatically interact with some cookie banners, rejecting cookies as a user visits a website.
The road to hell is paved with crypto intentions
Jordan Belfort, best known as the Wolf of Wall Street, is now into crypto. He held a workshop in his house. The participation fee? One Bitcoin, which is roughly $40,000. All guests were male, to Mr. Belfort’s astonishment.
As they dined on caviar and rigatoni, some of the guests shared stories of their own debauchery; Mr. Belfort, it turned out, was not the only wolf in the room. Two guests discussed the mechanics of pursuing younger women without risking entanglement in a “sugar baby” situation. Someone speculated about how an enterprising strip club owner might incorporate NFTs into the business.
Can’t imagine why only bros participated.
The Bitcoin conference is done and dusted. Two articles and a podcast to get you in the loop:
Being photographed in conflict and war and ending up in a viral image might haunt you forever.
In today’s conflicts you risk being called a Crisis Actor, in addition to your trauma. Suspension of Belief explains the genesis of this conspiracy theory.
War is gendered and homophobic. The Russian war in Ukraine is no exception.
That’s it for this week. If you’ve made it this far, why not recommend Around the Web to a friend?
Around the Web will be on hiatus next weekend, as I’ll celebrate two birthdays. The next issue will come to your inboxes on April 30th. Until then, stay sane and hug your friends.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/006/2022-04-09T14:12:00.000Z<![CDATA[Gig work regulation, Mark Sauron, Elon from Twitter, the AI of Google, the Fuckups of Crypto, 25 years of web accessibility, and a different look on earth.]]><![CDATA[
Collected between 3.4.2022 and 9.4.2022.
Welcome to a super dense issue of Around the Web. I have a bunch of links, so I’ll keep the intro short. Enjoy the read.
“The whole [idea] was to lead by example,” he said. “The last thing you want is someone to go up to the polling booth and be too scared to check off ‘yes.’ If you can get in the managers’ faces and show how pro-union you are, then voting ‘yes’ seems like nothing in comparison.”
It stands to reason that policy changes can’t be enough, but have to be accompanied by worker organising. Unfortunately, workers at German delivery company Gorillas suffered a setback in court. The workers were fired after participating in a strike that was not endorsed by a union, which is illegal in Germany. The workers and their lawyers will try to overturn this law.
Free speech, but they say what you can say
Language matters. This fact might get disputed from time to time, but it only takes a look at the fierce battles capitalism fights to enforce language.
Google, meanwhile, has instructed its Russian translators not to call the war in Ukraine a war. Falling in line – as is Google’s specialty, remember Project Dragonfly – with the official propaganda doctrine of the Russian government. What is a «special operation» for the Kremlin is now «extraordinary circumstances» for Google.
Remember, kids, making money is more important than telling the truth.
The German police and the German nazis
Police in Germany raided multiple locations linked to the German arm of Atomwaffen Division. Right-wing terrorism has been declared a major threat by the new Minister of the Interior, Nancy Faeser.
After being blocked by Horst Seehofer and the government led by the Christian Democratic Party, a study of right-wing tendencies in the German security apparatus finally appears to be underway. The disclosure of yet another right-wing chat group in Hesse’s police this week, and the involvement of a member of the German army with Atomwaffen Division, are stark reminders that the police in their current state are not part of the solution to the Nazi problem.
I don’t really want to write about it, but I have to. Elon Musk bought 9.1% of Twitter. Because of the lulz, right? Shortly thereafter, Parag Agrawal, Twitter’s CEO, announced that Elon Musk will join its board.
After initially filing his stake as a passive investor, Musk has since corrected the paperwork. It remains to be seen if he faces trouble from the US Securities and Exchange Commission (SEC) for disclosing his stake too late.
As a condition of joining the board, Musk agreed to limit his stake to 14.9%. As a board member, he also would have been bound by Twitter's code of conduct. Musk, whose past behavior suggests a studied lack of respect for rules, will not have to abide by those ones.
Tesla’s recent factory opening in Brandenburg, Germany happened amidst massive pushback over its water demands. The Berlin senate now announced that it is looking into rationing fresh water for its residents.
Musk’s much publicised Starlink shipment to Ukraine hasn’t been so charitable after all. As the Washington Post reports, a part of the Starlink terminals has been purchased by the United States Agency for International Development.
In other news, Truth Social, a free-speech social network with mostly bots, lost two core members. I’m thrilled that this is the only thing I’ve heard about it so far.
Our investigation shows: PimEyes is a broad attack on anonymity and it is possibly illegal. A snapshot may be enough to identify a stranger using PimEyes. The search engine does not directly provide the name of a person you are looking for. It does however find matching faces, and in many cases the shown websites can be used to find out names, professions and much more.
What I will talk about is PimEyes’ business model. It touts itself as a mechanism to preserve privacy. But as it scans the internet deeper than most and combines this with facial recognition technology and a public search, it really achieves the opposite. Cher and other affected users have to pay the hefty monthly fee of $299.99.
That’s when I noticed I could ask PimEyes to hide all of the images of me from their search results. That makes sense, right? Surely I should be able to control who can search for my face? Wrong. For an ongoing monthly fee of $79.99, PimEyes will allow me to control the search results in their basic search results features. To get all of them, of course, I’ll need to pay $330.59 ($299.99 + taxes) every single month, indefinitely, to stop people from finding them using PimEyes service.
If you meet someone working for, or advertising, PimEyes … chase them through the streets.
The road to hell is paved with crypto intentions
Crypto had some weeks. After the Axie Infinity hack should have blasted every little bit of trust anyone had into some kind of far-away orbit, it was Bitcoin Conference week.
And while it might have been a good idea to come up with something reassuring, good ideas are – still – not crypto’s strong suit.
They invited the favourite tech billionaire of everyone who likes not-so-good ideas on stage: Peter Thiel. And oh, did he deliver. Coming on stage ripping $100 notes into pieces, it only went downhill from there. In a weird but, given the larger (very large) picture, coherent attack on anything that isn’t him. Which is basically the story of his life. He coined the «financial gerontocracy» – firing shots at Warren Buffett and the CEOs of JP Morgan and BlackRock. Issue 002 of Around the Web focussed on Thiel and his ventures into politics.
But alas, Thiel wasn’t the only speaker with … interesting views. On Friday JP Sears, dubbed the Clown Prince of Wellness, took to the stage. Sears has promoted a mixture of esoteric beliefs and conspiracy theories.
Worldcoin, another grand idea to solve the world by scanning faces in a dystopian device called The Orb, failed to live up to its promises.
The currency has not yet been launched, but a BuzzFeed News investigation has found that Worldcoin is already wrestling with a host of problems, from managing angry Orb operators to concerns that the company is using its cryptocurrency as a way to amass millions of biometrics and perfect a new kind of authentication technology for the blockchain era.
The NFT bubble continues to shrink. On LooksRare, one of the larger marketplaces, most transactions are users selling to themselves.
NFT projects focussing on women have been dubbed the girlbossification of NFTs. Gwyneth Paltrow, Mila Kunis, and others pushing NFTs is basically a digital version of the white, «Lean in» type of capitalism-conforming feminism.
What a week, huh? Let’s take a look at the technical side of tech. At least there I’ve ignored anything which does not fill me with joy.
Josh W. Comeau has the ability to explain complex systems in words and pictures you get. Which is an invaluable skill. Recently, he has explained CSS layout algorithms, and it frankly does not get any more complex than this.
One of the cornerstones of the modern web had its twenty-first birthday this week. April 4th, 2001 saw the first draft of the Media Queries specification. It took nine more years until Ethan Marcotte came up with a name for the thing media queries enabled: Responsive Web Design. What a ride it has been.
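The core mechanic hasn’t changed much since that first draft: apply styles conditionally, based on features of the viewport. A minimal, hypothetical sketch of the mobile-first pattern Responsive Web Design popularised (class name and breakpoint are my own, not taken from any of the linked articles):

```css
/* Mobile-first: a single column by default */
.article-grid {
  display: grid;
  grid-template-columns: 1fr;
  gap: 1rem;
}

/* From 48em upwards, widen into two columns */
@media (min-width: 48em) {
  .article-grid {
    grid-template-columns: 2fr 1fr;
  }
}
```

Twenty-one years on, that conditional block is still how most of the responsive web is built.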
Another part of history is this republication of a 1985 story in IEEE Spectrum about the Commodore 64. I only understand half (not even) of the technical details, but still think it’s a rather enjoyable read. My favourite parts might be the lengths devs had to go to build graphics back then, and the results they achieved. This nerding about colours, in a time when we are leaving sRGB behind, is a great blast from the past. All in all, it is a fascinating story about tech, the effects of saving costs, and of course: marketing.
“If you let marketing get involved with product definition, you’ll never get it done quickly,” Yannes said. “And you squander the ability to make something unique, because marketing always wants a product compatible with something else.”
Not so much has changed since 1985.
Wondering about products of the recent past? Take a look at Killed by Tech, a list of all the products sunsetted by Google, Microsoft & Co.
I’m probably late to the party, but have only recently discovered the earth visualiser made by nullschool.net. It gives all kinds of overlays, showing wind and waves. The kind of website I can look at for hours on end.
What a week, huh? Thanks for reading. If you enjoyed this issue, why not share it with a friend? Until next week. Stay sane, hug your friends, and donate to Sea Watch.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/005/2022-04-03T14:12:00.000Z<![CDATA[The joy of unionisation, the catastrophe in Tigray, the PR bullshit of Facebook, and Saturn losing its rings.]]><![CDATA[
Collected between 25.3.2022 and 3.4.2022.
Greetings. This issue is a bit late because I tried to rebuild my build process, only to realise that it would not work. So I undid a Saturday’s worth of work (note to self: working on Saturday is always stupid), and nothing has changed.
Here’s what I read last week. Let’s start with a reason to celebrate.
JFK8
Amazon tried for months to stop what happened in the end: Workers at Amazon’s JFK8 warehouse in Staten Island, New York City voted to unionise.
The discourse about unions and works councils is reaching the German tech industry as well. After successful organising attempts at companies like N26 and Zalando, and the ongoing struggle at delivery company Gorillas, it seems like unions are here to stay. junge Welt published an excerpt of Nina Scholz’s new book Die wunden Punkte von Google, Amazon, Deutsche Wohnen & Co..
Schools in the USA have adopted «smart» camera systems and utilised them during the pandemic to spot maskless pupils. The systems weren’t good at that in the first place, but now that they are installed, they are likely to stay.
While the cameras are intruding into every aspect of our lives, we might be looking at non-persons. Computer-generated faces are nothing new, but the technology becomes ever more pervasive. NPR published a story about computer-generated «recruiters».
Germany’s federal police, the Bundeskriminalamt, seems to have realised that Google’s vast data trove can be utilised for state surveillance. Using the location history Google Maps stores on its users is an increasingly popular method, as it trumps traditional location tracking in accuracy. In the USA, requests for stored location data jumped from 982 in 2018 to 11,554 in 2020.
Waymo is expanding fully driverless cars to San Francisco, the second city in the USA. AI is taking over! No, it isn’t. Every city requires immense amounts of training. Remember, AI is only half-decent at analysing the path ahead. We are still a long way from any form of AI that would enable going driverless by itself.
Facebook has been caught smearing TikTok, hiring a public relations firm to push a non-existent viral challenge onto newspaper editorial pages. This is, of course, not the first time that Facebook has decided to produce disinformation, rather than just building the platform to spread it. It’s a remarkably stupid idea, but I guess that fits Facebook’s brand.
Casey Newton comments on the cynicism of it:
There’s the cynicism of planting op-eds and letters to the editor in local newspapers, with their internet-decimated staffs and diminished investigative powers, knowing they need the content and likely won’t ask too many questions about where it came from.
There’s the cynicism of borrowing credibility from local politicians, handing them a few paragraphs of someone else’s ideas and encouraging them to pass the talking points off as their own.
There’s the cynicism of assuming no one will ever find out.
It’s not that TikTok has been without its problems. Late in March content moderators filed a lawsuit against the app, citing the extreme emotional toll of reviewing graphic material.
The suit says TikTok and ByteDance controlled the day-to-day work of Young and Velez by directly tying their pay to how well they moderated content in TikTok's system and by pushing them to hit aggressive quota targets. Before they could start work, moderators had to sign non-disclosure agreements, the suit said, preventing them from discussing what they saw with even their families.
As Russia’s forces get pushed back from the region of Kyiv, atrocities committed by the Russian army get documented and amplified into our social media feeds. This is your reminder that you don’t need to look at the footage. And further, you don’t need to push this content into people’s timeline. I recommend Shoshana Wodinsky’s thread on the matter.
if u force obscene amounts of violence into ppl’s TL’s without a heads up, ur not “creating a historical record.” ur being an asshole
KrebsOnSecurity reports on a scheme where hackers take over government email accounts to issue emergency data requests. It is nigh impossible to know if a request comes from a hacked account, leaving companies with really no choice but to hand over the data.
Saturn is losing its rings. Which, of course, will take a lot longer than humankind burning the planet to dust. Somehow, the cosmic timescale has lost its calming qualities.
When you take a picture with an iPhone, you are not so much taking a picture as letting a robot create an approximation of the picture that you wanted. Have iPhone cameras become too smart? I cannot help but think of the picturebox in Terry Pratchett’s The Colour of Magic.
That's it for this week. Stay sane, hug your friends, and donate to Mission Lifeline.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/004/2022-03-26T14:12:00.000Z<![CDATA[The war in Tigray, the effort of resposible AI, digital gardens, dead internet, aesthetics of NFTs and why «My body, my choice» feels out of date.]]><![CDATA[
Collected between 19.3.2022 and 26.3.2022.
As I spent a good deal of my week lying in bed, I had too much time at hand to read the internet. This issue is going to be on the longer end. If you only want to read two things, make it the article about the use of AI by The Trevor Project and this article about digital gardening, a blooming subculture in a wonderful niche of the Web. The rest is good too, though.
The war in Ukraine still dominates European headlines, diverting attention all too easily from other wars going on.
In Ethiopia the government has declared a truce in its attacks against Tigray. Tigray is a region in the north of the country, home of the Tigray People’s Liberation Front (TPLF). The war has been ongoing for the last sixteen months, with government troops, helped by local militias and forces from Eritrea, trying to break the resistance of the TPLF. The war is deeply tied to the country’s history. If you want to get a better understanding, the New York Times has published an explainer.
Millions of people in Tigray are mostly cut off from access to food, as famine looms. The truce might make it possible to deliver humanitarian aid to the region, something that has been all but impossible for the last months. But, as Voice of Africa reports, the declaration of the truce might not spell the end of the suffering.
However, Hassan Khannenje, the head of the Horn Institute for Strategic Studies, does not believe the government and the Tigray People's Liberation Front, or TPLF, will give aid groups a free hand.
Tek from Tgaht put it even more bluntly in a YouTube video, calling the truce a lie designed to skirt looming sanctions.
This ain’t intelligence
AI is often touted as the solution for a broad range of problems, including in medicine. So far, those solutions have failed to materialise. Protocol wrote in depth about the use of AI by The Trevor Project, a US nonprofit catering to LGBTQ+ teenagers.
The project took immense care in how it uses AI, specifically large language models such as GPT. Still, they are aware of the fact that AI can train humans, but not replace them. As such, their models are not used in care itself, but only in training, and even there they need to be recalibrated regularly.
While he said the persona models are relatively stable, Fichter said the organization may need to re-train them with new data as the casual language used by kids and teens evolves to incorporate new acronyms, and as current events such as a new law in Texas defining gender-affirming medical care as “child abuse” becomes a topic of conversation, he said.
The whole piece is really worth a read, and a great example of the lengths companies have to go to build products on top of present AI capabilities if they do not want these to be hurtful.
But what to do if you find that an algorithm hasn’t treated you fairly? Currently there’s no real legal basis on which you can appeal. AlgorithmWatch wrote down some demands to adapt the German anti-discrimination law.
I’m still closing old tabs (I have a lot of them). A while back Wired reported on AccessiBe. AccessiBe offers an «accessibility overlay». Overlays are snake oil products that promise to use some obscure mixture of Artificial Unintelligence and public relations promises to make your sites accessible.
Accessibility practitioners agree that overlays do not work, at times make your site harder to use for people with accessibility needs, and are an all-in-all superbad idea. Adrian Roselli has posted an in-depth look into yet another overlay company, their claims, and their failures.
Platforms have mostly been locked out of Russia. One notable exception is TikTok. They decided to practise self-censorship, making it impossible for users outside of Russia to view content from Russia and vice versa.
Its core features prime it for remixing media, allowing users to upload videos and sound clips without attributing their origins, the paper said, which makes it difficult to contextualize and factcheck videos. This has created a digital atmosphere in which “it is difficult – even for seasoned journalists and researchers – to discern truth from rumor, parody and fabrication”, researchers added.
Most of us use a version of the internet that is controlled by algorithms, and non-stop feeds of informational overload. But underneath the concrete surface, a small movement of digital gardeners is planting their seeds. Reading this piece about digital gardening gave a sense of calm I seldom experienced when reading about the Web recently.
A garden is a collection of evolving ideas that aren't strictly organised by their publication date. They're inherently exploratory – notes are linked through contextual associations. They aren't refined or complete - notes are published as half-finished thoughts that will grow and evolve over time. They're less rigid, less performative, and less perfect than the personal websites we're used to seeing.
A tweet by Claire Evans reminded me of one of my favourite articles about conspiracy theories ever. There’s a thing called Dead Internet Theory. It stipulates that the internet died some years ago and is a barren wasteland full of bots. Which totally feels true, even if it’s nonsense. Still: I believe.
Thankfully, if all of this starts to bother you, you don’t have to rely on a wacky conspiracy theory for mental comfort. You can just look for evidence of life: The best proof I have that the internet isn’t dead is that I wandered onto some weird website and found an absurd rant about how the internet is so, so dead.
The road to hell is paved with crypto intentions
I really loved the article Why do all NFTs look the same by Max Kohler, as it does not make the lazy argument «This is not art, it’s just a computer», but ties NFTs into a larger picture of virtual effects in movies, as well as the reproducibility of all art, which Walter Benjamin already observed.
People who make [NFTs] recognise it’s difficult to argue that a digital image can be “original” on any material level, so they suggest a kind of authenticity-by-proxy: Buy an NFT and you get a unique entry in our special database saying you own the image. That database entry has effectively the same function as those fancy art historians and copyright lawyers: Establish authorship, keep track of provenance, authorise derivative works, mediate royalty payments, and so on.
Last week, TIME published a longer portrait of Vitalik Buterin, the creator of the Ethereum blockchain. The interview paints a sympathetic picture. A picture I do not want to disagree with. It makes it obvious, though, that Ethereum (and crypto at large) is yet another problem created by privileged white men, who did not need to think about the real-life consequences their «pure» and intellectually challenging project might have.
Vice visited SXSW and witnessed the takeover by crypto mediocrity and a version of the future that is disappointingly bland. There’s no fun, nowhere.
And yet, despite all the talk I heard about ushering in a new era of diversity and inclusion, it was hard to not notice that every room felt largely the same: mobs of white wealthy men who quickly volunteered that they worked in finance, tech, marketing, or some buzzy fusion of the three.
I decided to cut down on reading crypto news. I still follow web3 is going great, for the lulz, but will dedicate way less time to this topic, especially to the fraud part.
What a lapses
At the beginning of the week, hacking group Lapsus$ made the news when they were able to compromise Okta. Okta is an identity broker, a service large companies use to handle the identities of their employees and to manage capabilities such as Single Sign-On.
Lapsus$ quickly rose from relative obscurity to hacking stardom through multiple high profile breaches over the last months.
The episode Dirty Coms of Darknet Diaries profiled a contemporary hacking scene, of which Lapsus$ appears to be a part, painting a picture of a culture mostly revolving around pranks and money.
Doom is mine but I will share it
Climate. What a bummer. The direst predictions of scientists are coming true, fast and even more dire than predicted.
My body, my choice? Individualism has made it harder to fully embrace this cornerstone of feminist rhetoric. The slogan has been co-opted by anti-mask-mandate protesters. Why, though? That’s a question answered in a piece in Geschichte der Gegenwart. I first read this angle in Bitch Magazine some months ago.
“My body, my choice” is highly individualistic and—in the end—fails to convey the ways we’re bound up with each other. Especially as Texas institutes a near-complete ban on abortions, it’s crucial that we embrace language and frameworks that emphasize our mutual responsibilities and interconnectedness.
That’s it for this week. Stay sane, hug your friends, and donate to Self-Defined.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/003/2022-03-19T14:12:00.000Z<![CDATA[Predictive policing without oversight, the wall in which Deep Learning crashed, cryptocurrencies in wartime, and billionaires won’t save us.]]><![CDATA[
Collected between 12.3.2022 and 19.3.2022.
A coughing hello.
I had a meeting with SARS-CoV-2 and my head wrapped in cotton for a few days. Thanks to the vaccinations, I’m mostly fine so far, and will hopefully be back on track next week.
Nonetheless, I’ve saved some links. So let’s get linking.
What are you looking at?
Gizmodo has reported that the Department of Justice in the USA, which is in theory responsible for overseeing the funding of predictive policing programs, has no idea how much money police departments have actually spent. So now there is an unknown amount of money poured into technology which does not prevent crime while discriminating against minoritised groups in society. Well done.
In Europe, we see the fallout of the takeover of encrypted messaging service Encrochat by police in France. Throughout Europe, lawsuits are being based on intercepted chats, even though the evidence has been heavily manipulated and the original is not available to the public, as France has classified the records as military secrets. This might set dangerous precedents for lawsuits based on digital evidence, as netzpolitik.org reports.
This ain’t intelligence
Wired has published an interview with Palmer Luckey, one of the leading figures in AI-assisted defence tech. It’s another example of how ideology manifests in these products, as well as in the justifications presented for them. Answering «I'm still really proud of our work that we do with border security» when asked how the separation of families and the imprisonment of children by ICE made you feel takes some serious amount of dehumanisation.
Ukraine reportedly uses Clearview’s facial recognition product in the ongoing war. Does the end justify the means? Probably not, given the fact that Clearview’s massive database has been largely scraped from the web, without asking anyone for consent.
In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (Lamda) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.” Turning the tide, and getting to AI we can really trust, ain’t going to be easy.
The road to hell is paved with crypto intentions
The war in Ukraine has been one of crypto’s moments so far. Numerous projects mobilised their users and raised substantial amounts of money. Still, grift and bullshit seem to follow wherever crypto goes. While projects like the UkraineDAO collected money in good faith, things collapsed when they airdropped their LOVE tokens and the helpers turned LOVE into a speculative asset. Episode 7 of Scam Economy covers the good help and the bad grift in the context of the Ukraine war. Among other things, how crypto bros pressured Ukraine into opening wallets for their tokens, instead of, dunno, sending dollars or something.
Peter Howson explores the history of cryptocurrencies in Ukraine, and crypto’s troubled relation to despotic regimes around the world – troubling uses that are undoubtedly also happening in Ukraine, as donations to right-wing paramilitaries show. Ukraine and crypto have a long history. During the time of the Euromaidan revolution, half of the world’s Bitcoin was mined in Ukraine. This might explain – a point also made by Matt Binder in Scam Economy – the role of crypto in the war, and Ukraine’s willingness to accept crypto as part of its fundraising, but it might make it hard for cryptocurrencies to play a similar role in other conflicts.
What we are seeing here is not new at all. Crypto positions itself in the heart of capitalism, amplifies its dynamics, and gives it a fresh, digital paint job. So while we see signs of the hype cycle around NFTs and web3 dying down, cryptocurrencies are certainly here to stay as long as capitalism is.
Okay. Let’s cut the seriousness for a moment. NFTs. The buyer of a Pepe the Frog NFT paid the sum of $537,084 for the receipt of this image. Shortly after, his costly receipt got «devalued», as 99 more receipts were released for … free, as had been announced beforehand. The buyer is now suing. You can’t make this shit up.
Meanwhile, Spotify has decided to jump on the NFT bandwagon. Supposedly to help pay artists. Which is pretty funny, given the fact that Spotify is the streaming service which pays the lowest royalties to artists. So instead of over-engineering bullshit, they could just pay the artists. But that’s not tech enough, I guess.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/002/2022-03-12T14:12:00.000Z<![CDATA[The war and the cyber, impeding climate doom, working on the web, and a ship underneath the arctic sea.]]><![CDATA[
Collected between 4.3.2022 and 12.3.2022.
Good day.
As I’ve been travelling, the second edition of my little collection of links comes a bit later. Though I think Saturday might be a better day anyway. Still trying things out here. But again, quite a lot of links collected.
The War & The Cyber
There still is a technical aspect to the whole thing, too. Just not the one people expected. For the last few years, Russia has been known as a cyber state. For one, Russian hacking groups have been accused of breaches again and again. On the other hand, the Russian state financed a sophisticated disinformation network, trying to influence political processes all over the world – mostly by propping up the far right. Under the current sanctions this network has collapsed. At least for now.
While the organised disinformation machinery is in bad shape, it still invents new forms of misinformation. The fact check, long the hallmark of truth and enlightenment (sorry, got a bit carried away there), is now a tool of misinformation. In this new scheme, fakes are invented only so they can be «debunked», disguising propaganda as fact-checking.
Researchers at Clemson University’s Media Forensics Hub and ProPublica identified more than a dozen videos that purport to debunk apparently nonexistent Ukrainian fakes. The videos have racked up more than 1 million views across pro-Russian channels on the messaging app Telegram, and have garnered thousands of likes and retweets on Twitter. A screenshot from one of the fake debunking videos was broadcast on Russian state TV, while another was spread by an official Russian government Twitter account.
What we also haven’t seen so far is the offensive cyber prowess of the Russian state. While a new wiper malware emerged on the eve of the war, the war in and of itself has been largely conventional. Bombing civilians, sieging cities. Why? We really don’t know at this point, as Farhad Manjoo writes in the New York Times:
What accounts for Russia’s apparent cyberrestraint? Nobody quite knows. Russia could be holding back its best cyberweapons for a more critical time in the war. It could also just be incompetent. Maybe its hackers were no match for Ukraine’s cyberdefenses, which the country has been beefing up for years.
War is bad. For humans, but also for the planet. Tanks and planes are no cornerstones of green transport. It’s a good time to revisit the piece The climate cost of war – while written about a prospective war in Iran, fossil fuels don’t care where they are burned.
The conflict comes at a time of unprecedented humanitarian needs, as a ring of fire circles the earth with climate shocks, conflict, COVID-19 and rising costs driving millions closer to starvation.
Peter Thiel is one of the most influential figures in the Valley, and has long been described as the reactionary outlier in its political discourse. Why that’s not accurate and how Thiel came to power, is the topic of this week’s Tech Won’t Save us episode.
Nearing the end, Paris Marx and Moira Weigel discuss how much of Palantir’s might is the factual efficiency of its tech, and how much is propped up. Given the secretive nature of its operations, we can’t know for certain.
As part of my ongoing quest to absolve myself from too many open tabs I finally managed to read Why are hyperlinks blue? on the Mozilla blog, a fascinating excursion into the history of browsing hypertext.
Another article that has been in the state of «open tab» for too long is this compilation of links about what it means to be of the web, and how we as developers can build products that embrace the grain of the web.
The other way of building for the web is to go with the web’s grain, embracing flexibility and playing to the strengths of the medium through progressive enhancement. This is the distinction I was getting at when I talked about something being not just on the web, but of the web.
The architecture of the internet is rendered “horrible,” then, partially by the demands of capitalism: In its wake, we find signs of deterioration and ruin. Navigating a landscape of dead sites changes the way we look at living ones; clean, minimalist design only cloaks the evidence of inevitable decay.
All good things are three, but not web3
Let’s start with the good things here. Interest in NFTs in Google Search is collapsing.
Two literal crypto bros, Paul and Julian Zehetmayr, have bought LimeWire’s branding and are hoping to return the famous file-sharing platform to the limelight. The relic of the anarchic internet that once was is set to become a marketplace for music-related NFTs, backstage passes, and similar commodified crap. In a realisation that the masses don’t care about crypto, they’ll accept fiat currency too.
It does, however, make perfect sense for the Web3 movement, which appears immune to shame and dead set on making us believe in a crypto future, one brand takeover at a time.
To close this issue, here are some links which didn’t warrant their own category:
In Stones can’t talk (German translation) Mirjam Brusius explores Germany’s complex relationship with its history, and how Vergangenheitsbewältigung (or the lack thereof) fails Black and People of Color communities today. The piece is very dense, but highly recommended.
Carnival celebrations went ahead after the massacre in Hanau, while a vigil to mourn the deaths could not. Antisemites were still allowed to march in the streets. Some can even stand for election. Taking stock of these asymmetries, to say nothing of the endless secretiveness around the NSU murders, the surreal Mbembe debate, or the fact that being left-wing and Jewish means feeling unprotected by a state that claims to do the reverse: might Germany be reaching a grotesque low point in its history? If antisemitism and racism have no space (‘keinen Platz’) in Germany, why do they still claim so much room? Who will set the future terms of historical memory in a country where for large multiethnic sectors of the society, denazification simply never happened?
Kony 2012, ten years on. Once the most viral video of all time, the film reads as both a digital relic and a precursor to an era in which footage of conflict dominates the internet.
With that this issue comes to its end. I’ll experiment further in the upcoming weeks, as I already see the mode of «Write down everything once a week» becoming too much work to be sustainable in the long term.
]]><![CDATA[Around the Web]]>https://www.ovl.design/around-the-web/001/2022-03-04T14:12:00.000Z<![CDATA[States of Surveillance, AI legislation, Contileaks, the dumbest vending machine in the history of ever. And war.]]><![CDATA[
Collected between 22.2.2022 and 4.3.2022.
Around the Web is a new format, in which I compile the articles I’ve read over the last week or so which influenced me in some way. It will center around digital society, combining tech and ethics, blend in a bit of design, and we’ll see what else.
I’m not sure how this will go, what will be featured, and if it will stay like this issue you are reading, or if I change it to something which requires less work.
Let’s get it started in here.
There’s a war going on outside
The news cycle has been relentless, I won’t even try to be a geo-political analyst here. Still, here are some compiled links I’ve found helpful in making sense of the senseless.
In the German newspaper analyse & kritik, Tomasz Konicz analyses how this war fits within a shifting global power dynamic that sees the USA dethroned from its role as «world police». Russia, China, and Europe try to fill the gap. This is happening against a backdrop of economic recession. And in a crisis, war is always an option.
There’s an ongoing discussion about how tech should react to Russia’s war. Namecheap, for one, decided to cut all ties with Russian customers. A move widely criticised, as it leaves those customers with no other choice than to submit to providers in Russia. The Russian government has made it very clear that it will not tolerate any dissent.
It has, however, made wide-reaching changes to its Internet infrastructure over the last few years. Around ten years ago the Russian internet was considered mostly resistant to censorship. This has changed. As Samanth Subramanian reports in Quartz, Russia has been preparing to have its Internet cut off. While the Russian state cracks down on media, the BBC started transmitting on shortwave radio frequencies again.
Meanwhile, there was a small glimpse of dissent from where nobody expected it. Alex Ovechkin said «Please, no more war.» Which is significant, not only because he’s famous, but also because he has been close to Putin. Ice hockey has been the national sport in Russia ever since the Soviet team dominated the game.
It is a flat-out lie that there is a war going on. The bitter truth is that there are multiple. Turkey is still shelling Kurds. The Taliban are still oppressing. Some reactions to the war in Ukraine were therefore overtly racist. Emran Feroz comments on western media coverage splitting refugees into welcome and unwelcome.
Julian Hilgers has published an interview with Sidi Omar, representative of the Polisario to the United Nations. The Polisario declared the Sahrawi Arab Democratic Republic in 1976. Since then it has found itself perpetually suppressed, persecuted and bombed by Morocco.
The interview has been published as part of Sham Jaff’s what happened last week? newsletter. If you are interested in a newsletter reporting beyond the view of western media, make sure to subscribe.
The regulations stipulate that tech companies have to inform users “in a conspicuous way” if algorithms are being used to push content to them. Users reportedly will be allowed to opt out of being targeted with algorithmic recommendations.
While it remains to be seen how this plays out, it is a significant step in reining in the power of algorithms.
We have to be in community. We have to be in conversation. And we also have to recognize what our piece of the puzzle is ours to work on. While it is true, yes, we’re just individual people, together we’re a lot of people and we can shift the zeitgeist and make the immorality of what the tech sector is doing—through all its supply chains around the world—more legible. It’s our responsibility to do that as best we can.
Safiya Umoja Noble
What are you looking at?
The March/April edition of Interactions has been published, focussing on States of Surveillance.
In Resetting the Expectation of Surveillance, Jonathan Bean explores how surveillance has become so ingrained in our everyday life that we sometimes take it for granted or – arguably worse – forget that it exists at all.
So much of our technological stuff doesn’t really present us with a choice. Set up a new computer, load up a phone with apps, turn on that robot vacuum, or hop in the car, and the chances are pretty good that something, somewhere, is collecting data. Is this surveillance? The word, with roots in French and Latin, means to watch over, in the visual sense. Access to the private visual realm clearly crosses the line: Witness the emergence of the practice of taping over or physically disabling laptop webcams. In contrast, the streams of data we generate through our everyday use of technology, from smartphones to thermostats to light bulbs, are largely invisible.
While surveillance undoubtedly is everywhere, it is not without alternative, and resistance is not futile. Alex Jiahong Lu explores state and workplace surveillance and how these systems leave room for everyday resistance.
In Sareeta Amrute’s piece, they explore the Facebook Files. The impact of the surveillance apparatus of Facebook & Co has always been unevenly distributed, hitting hardest where the companies care the least. Which is, surprise, not the global west.
The sharp inequities exhibited by these revelations of the overheated pursuit of young eyeballs regardless of deleterious effects on youth wellbeing, on the one hand, and the callous disregard for how the platform is used to propagate violence and hatred for other populations, on the other, suggest an uncomfortable fact: Race, place, and position matter deeply to these tech companies, and not in the ways that their DEIA handbooks might suggest. As such, the Facebook Files exhibit a classic case of racial capitalism.
Inside Extortion
Russian ransomware group Conti has been hit by a large leak of its internal communication. The Twitter account ContiLeaks started publishing chat logs on February 27th.
The leaks offer an interesting look into the inner workings of an extortion group. Luckily, you don’t have to read them all by yourself. Brian Krebs has started a series of articles analysing the content.
Similar, though looking in from the outside, is the podcast series Extortion Economy by the MIT Technology Review and ProPublica.
All good things might be three, just not web3
Because tech is tech, we have to cope with a new version of democracy, one which tokenises all the hierarchies built into analogue democracies. And calls it the future. I’m talking about governance tokens in Decentralised Autonomous Organisations (DAOs). Shanti Escalante-De Mattei has written a piece for Wired exploring the idea, which seems not too bad at first, until it is.
Sure, web3 has the potential to make our lives more democratic, but it’s not a silver bullet. Scale is an insurmountable problem, so is capitalistic greed. If we fall too quickly for these promises, we’ll end up looking back fondly at the days of data harvesting as we navigate a segregated internet.
A conclusion I see time and time again when technical solutions try to solve societal problems without doing the work of understanding the ways in which they inflict harm.
In its current state web3 is not much more than an update to capitalist grifting. In False Futurism Paris Marx (host of the excellent Tech Won’t Save Us podcast) writes about the Metaverse and its crypto related parts. Paris concludes:
Tech companies have always overstated the benefits their technologies will grant us and understated how much they serve their own ends of power and profit. The metaverse will be no different, especially since it’s unlikely to arrive in the form currently being sold to us.
Especially with Facebook trying to call the shots and centralise gear and infrastructure, the future is blue. We shall paint it colourful.
Another aspect of virtualised reality is haptics. How do we feel when we are in a computer? Gadgets try to replicate the bodily experience of touching, but they reduce the once visionary concept of the cyborg to yet another point of datafication.
Perhaps more crucially, when enveloped in Meta’s gloves, the hands will ooze data. Every motion will be captured, every gesture will be mapped, and every haptic stimulus we respond to will be recorded. Expanding biometrics to include touch could thus enable a new mode of “haptic coercion,” as Dave Birnbaum, the former head of design strategy and outreach for Immersion Corporation terms it, where digital touch can be used to prod or nudge us into making purchasing decisions preferred by a brand or advertiser.
Meanwhile in art: maybe The Guardian has found the bottom of the lake of facepalms. A hilariously malfunctioning NFT vending machine in New York City. I can’t even.
Here some more things not relating to a larger theme:
Bandcamp has been bought by Epic Games. I have mixed feelings. None of them positive. It seems more and more impossible to build anything independent on the World Wide Web today. Which is scary.
In Packaging the Pill, Theresa Christine Johnson takes a closer look at something seemingly irrelevant: how changing the packaging of the birth control pill helped women stick to the regimen.
To close this issue, have you ever thought about the dystopian view CAPTCHAs offer of the world? Me neither, at least not as thoroughly as this piece does: Why CAPTCHA Pictures Are So Unbearably Depressing.