Tags: human

Saturday, November 9th, 2024

Optimism

I think of myself as an optimist. It makes me insufferable sometimes.

When someone is having a moan about something in the news and they say something like “people are terrible”, I can’t resist weighing in with a “well, actually…” Then I’ll start channeling Rutger Bregman, Rebecca Solnit, and Hans Rosling, pointing to all the evidence that people are, by and large, decent. I should really just read the room and shut up.

I opened my talk Of Time And The Web with a whole spiel about how we seem to be hard-wired to pay more attention to bad news than good (perhaps for valid evolutionary reasons).

I like to think that my optimism is rational, backed up by data. But if I’m going to be rational, then I also can’t become too attached to any particular position (like, say, optimism). I should be willing to change my mind when I’m confronted with new evidence.

A truckload of new evidence got dumped on my psyche this week. The United States of America elected Donald Trump as president. Again.

Even here I found a small glimmer of a bright side: at least the result was clear cut. I was dreading weeks or even months of drawn-out ballot counting, lawsuits and uncertainty. At least the band-aid was decisively ripped away.

Back in 2016, I could tell myself all sorts of reasons why this might have happened. Why people might have been naïve or misled into voting a dangerous idiot into power. But the naïveté was all mine. The majority of America really is that sexist.

This feels very different to 2016. And hey, remember when we woke up to that election result and one of the first things we did was take out subscriptions to the New York Times and the Washington Post to “support real journalism”? Yeah, that worked out just great, didn’t it?

My faith in human nature is taking quite a hit. An electoral experiment has been run three times now—having this misogynistic, racist, narcissistic idiot run for the highest office in the land—and the same result came up twice.

I naïvely thought that the more people saw of his true nature, the less chance he would have. When he kept going off-script at his rallies, spouting the vilest of threats, I thought there was an upside. At least now people would see for themselves what he’s really like.

But in the end it didn’t matter one whit. Like I said in a different context:

To use an outdated movie reference, imagine a raving Charlton Heston shouting that “Soylent Green is people!”, only to be met with indifference. “Everyone knows Soylent Green is people. So what?”

I never liked talking about “faith” in human nature. To me, it wasn’t faith. It was just a rational assessment. Now I’m not so sure. Maybe I need some faith after all.

I wonder if my optimism will return. It probably will (see? I’m such an optimist). But if it does, perhaps it will have to be an optimism that exists despite the data, not because of it.

Monday, October 14th, 2024

The Value Of Science by Richard P. Feynman [PDF]

This short essay by Richard Feynman is quite a dose of perspective on a Monday morning.

Sunday, August 11th, 2024

Aboard Newsletter: Why So Bad, AI Ads?

The human desire to connect with others is very profound, and the desire of technology companies to interject themselves even more into that desire—either by communicating on behalf of humans, or by pretending to be human—works in the opposite direction. These technologies don’t seem to be encouraging connection as much as commoditizing it.

Thursday, June 27th, 2024

Filters

My phone rang today. I didn’t recognise the number, so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and the Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at an Airbnb listing recently and found herself examining photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.
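
For the curious, the mechanics are simple enough to sketch. Here’s a minimal example of that polling loop, assuming the Python feedparser library (the feed URLs and the polling interval are placeholders):

```python
# A minimal sketch of a feed reader: a list of feeds is really a
# list of bookmarks, each polled regularly for anything new.
import time

import feedparser  # third-party library: pip install feedparser

FEEDS = [
    "https://example.com/rss",       # placeholder URLs
    "https://example.net/feed.xml",
]

seen_links = set()

while True:
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            if entry.link not in seen_links:
                seen_links.add(entry.link)
                print(f"New post: {entry.title} ({entry.link})")
    time.sleep(15 * 60)  # check again in fifteen minutes
```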

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

Saturday, June 15th, 2024

On being human and “creative”

Now we have this collision of those who, with the specific intent of creative expression, make things that are wholly the product of their unique experience and skills and offer them in the marketplace. Then there are those who use machines to produce derivatives of others’ creative work to offer as products in the marketplace. Both are seeking an audience and financial benefit for their offering.

Those who wholly manufacture creative works are asking the same value be put on their imitation of creative expression as the value inherent with sentient creation. They are saying they deserve the same recognition—be that in respect, attention, acknowledgement or compensation—that works created by a person might receive. But they haven’t earned it.

Using generative AI is to ask What If but then hand off not only the responsibility and effort of answering the question but also accountability for the answer. When the machine creates something pleasing or marketable, it’s “look at what I did”. When the machine creates something terrible or wrong, it’s “not my fault, the machine did it”. The claim of ownership is conditional and only maintained if the output can generate value.

There’s so much to love here, like this:

My art is the story of how I have spent the time in my life.

And this:

The value of an idea comes from the execution of the idea.

Wednesday, May 29th, 2024

The Danger Of Superhuman AI Is Not What You Think - NOEMA

Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant.

By describing as superhuman a thing that is entirely insensible and unthinking, an object without desire or hope but relentlessly productive and adaptable to its assigned economically valuable tasks, we implicitly erase or devalue the concept of a “human” and all that a human can do and strive to become. Of course, attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.

Wednesday, May 15th, 2024

AI Safety for Fleshy Humans: a whirlwind tour

This is a terrifically entertaining, level-headed, in-depth explanation of AI safety. By the end of this year, all three parts will be published; right now the first part is ready for you to read and enjoy.

This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety — explained in a friendly, accessible, and slightly opinionated way!

(Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is no consensus on what these phrases do & don’t mean, so I’m just using “AI Safety” as a catch-all.)

Monday, May 13th, 2024

Manifesto for a Humane Web

I endorse this message.

This manifesto is intended as a personal response to the current state of the web. It is a statement of intent and a call to arms, inviting you, the reader, to go forth and build humane websites, and to resist the erosion of the web we know and love.

Saturday, May 4th, 2024

AI is not like you and me

AI is the most anthropomorphized technology in history, starting with the name—intelligence—and plenty of other words thrown around the field: learning, neural, vision, attention, bias, hallucination. These references only make sense to us because they are hallmarks of being human.

But ascribing human qualities to AI is not serving us well. Anthropomorphizing statistical models leads to confusion about what AI does well, what it does poorly, what form it should take, and our agency over all of the above.

There is something kind of pathological going on here. One of the most exciting advances in computer science ever achieved, with so many promising uses, and we can’t think beyond the most obvious, least useful application? What, because we want to see ourselves in this technology?

Meanwhile, we are under-investing in more precise, high-value applications of LLMs that treat generative A.I. models not as people but as tools.

Anthropomorphizing AI not only misleads, but suggests we are on equal footing with, even subservient to, this technology, and there’s nothing we can do about it.

Wednesday, April 17th, 2024

The Analog Web - The History of the Web

Owning your own piece of the Internet (to borrow a recent phrase from Anil Dash) is itself a radical act. Linking to others at will is subversive all on its own. Or as Jeremy Keith once put it, “it sounds positively disruptive to even suggest that you should have your own website.” The web still exists for everyone. And beneath this increasingly desiccated surface, there are plenty of creators still simply creating.

People create these sites simply so that they exist. They are not fed to an algorithm, or informed by any trends. It is quieter and slower, meant to tether us to a more mechanical framework of the web.

This is the analog web.

Saturday, February 17th, 2024

“‘AI’ is pretty much just shorthand for mediocre” — Piper Haywood

Continuous partial ick …or perhaps continuous partial cringe.

Anyways, maybe we’ll eventually get to the point where AI has that human “spark”, who knows. Maybe it’ll happen next month and I’ll eat my words. Until then, as most of the content we experience online becomes more grey and sludgy, the personal will become far more valuable.

Can Apple Win Back Music | Vulf Opinion | Brad Frost

There’s no AI substitute for a human-produced drawing of someone on the subway, even if a similar-or-even-better result could be produced in seconds by AI. The artifact is often less important than the process — the human process — that made it. That’s why I suspect videos of creative processes are so attractive; we are captivated by seeing humans doing human things.

Friday, January 5th, 2024

To Own the Future, Read Shakespeare | WIRED

I see what you nerds have done with AI image-creation software so far. Look at Midjourney’s “Best of” page. If you don’t know a lot about art but you know what you like, and what you like is large-breasted elf maidens, you are entering the best possible future.

Friday, November 3rd, 2023

Emily F. Gorcenski: How I Read 40 Books and Extinguished the World on Fire

I’ve found that there’s way more good people than bad. There’s way more people willing to help than willing to hurt. Some things are really scary but there’s way more people out there willing to guide us through the darkness than we think. The cynic in me wants to say that the “powers that be” want us to be endlessly doomscrolling and losing hope and snuffing out optimism. We shouldn’t give them what they want. There’s a lot of beauty in the world still within our grasp. We’re better when we’re poets, when we’re learners and listeners, when we’re builders and not breakers.

Tuesday, May 30th, 2023

“Artificial Intelligence & Humanity,” an article by Dan Mall

AI is great at anything quantity-related and bad at anything quality-related.

Sensible thinking from Dan here, that mirrors what we’re thinking at Clearleft.

In other words, it leans heavily on averages; the closer the training data matches an average, the higher degree of confidence that the result is more “correct,” or at least desirable.

The problem is that this is the polar opposite of what we consider creativity to be. Creativity isn’t about averages. It’s about the outliers, sometimes the one thing that’s different than all the rest.

Tuesday, April 18th, 2023

The Technium: Dreams are the Default for Intelligence

I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).

This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.

Tuesday, March 14th, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our head.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.
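
To make that flipped loop concrete (a toy illustration of my own, not Anil’s model), here’s a predict-then-correct estimator: it commits to a guess first, then lets each noisy sensation nudge the guess.

```python
# A toy predict-then-correct loop: guess first, then adjust the
# internal model by the prediction error. All numbers arbitrary.
import random

belief = 0.0         # the current best guess about the world
learning_rate = 0.3  # how strongly each sensation adjusts the guess
reality = 10.0       # the true state, never observed directly

for step in range(20):
    prediction = belief                       # first: predict
    sensation = reality + random.gauss(0, 1)  # then: noisy input arrives
    error = sensation - prediction            # compare guess to input
    belief += learning_rate * error           # adjust the model
    print(f"step {step:2d}: guess = {belief:.2f}")
```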

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?
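
For anyone curious, the mechanism behind that dial is small enough to show (a sketch of the general technique, not any particular model’s implementation): each raw score is divided by the temperature before being turned into a probability, so higher temperatures flatten the distribution and give unlikely words more of a chance.

```python
# Roughly how the "temperature" dial works: logits (raw scores)
# are divided by the temperature before the softmax step, which
# flattens (high temperature) or sharpens (low) the distribution.
import math

def to_probabilities(logits, temperature):
    scaled = [score / temperature for score in logits]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

words = {"the": 4.0, "a": 2.0, "octopus": 0.1}  # made-up next-word logits
for t in (0.5, 1.0, 2.0):
    probs = to_probabilities(list(words.values()), t)
    print(t, {w: round(p, 3) for w, p in zip(words, probs)})
```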

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
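
As a purely hypothetical sketch of that content-design idea, using those very phrases: map a confidence score to a hedge. The catch, of course, is that the model would need to supply a trustworthy confidence score in the first place, and that’s the hard part.

```python
# A hypothetical content-design approach: prefix each generated
# statement with a hedge reflecting the model's confidence.
# Assumes a calibrated confidence score exists, which it may not.
def hedge(statement, confidence):
    lowered = statement[0].lower() + statement[1:]
    if confidence > 0.9:
        return statement
    if confidence > 0.6:
        return f"Perhaps {lowered}"
    if confidence > 0.3:
        return f"Maybe {lowered}"
    return f"It's possible but unlikely that {lowered}"

print(hedge("Paris is the capital of France.", 0.95))
print(hedge("Octopuses dream in colour.", 0.4))
```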

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Thursday, July 14th, 2022

Food Timeline: food history research service

The history of humanity in food and recipes.

The timeline of this website is equally impressive—it’s been going since 1999!