New Blog Moderation Policy

There has been a lot of toxicity in the comments section of this blog. Recently, we’re having to delete more and more comments. Not just spam and off-topic comments, but also sniping and personal attacks. It’s gotten so bad that I need to do something.

My options are limited because I’m just one person, and this website is free, ad-free, and anonymous. I pay for a part-time moderator out of pocket; he isn’t able to constantly monitor comments. And I’m unwilling to require verified accounts.

So starting now, we will be pre-screening comments and letting through only those that 1) are on topic, 2) contribute to the discussion, and 3) don’t attack or insult anyone. The standard is not going to be “well, I guess this doesn’t technically quite break a rule,” but “is this actually contributing.”

Obviously, this is a subjective standard; sometimes good comments will accidentally get thrown out. And the delayed nature of the screening will result in less conversation and more disjointed comments. Those are costs, and they’re significant ones. But something has to be done, and I would like to try this before turning off all comments.

I am going to disable comments on the weekly squid posts. Topicality is too murky on an open thread, and these posts are especially hard to keep on top of.

Comments will be reviewed and published when possible, usually in the morning and evening. Sometimes it will take longer. Again, the moderator is part time, so please be patient.

I apologize to all those who have just kept commenting reasonably all along. But I’ve received three e-mails in the past couple of months about people who have given up on comments because of the toxicity.

So let’s see if this works. I’ve been able to maintain an anonymous comment section on this blog for almost twenty years. It’s kind of astounding that it’s worked as long as it has. Maybe its time is up.

Posted on June 19, 2024 at 4:26 PM • 98 Comments

Comments

Charles June 19, 2024 4:42 PM

Hi Bruce. Really keen to get your thoughts on this; it seems timely given the unfortunate nature of this post.

What do you think about requiring identity verification for all social media accounts, via third-party identity verification services? User anonymity can be maintained, as identity verifiers need only pass back to the website requesting verification an affirmative response and some sort of identity token. My assumption is that this process would ensure website operators are able to enforce bans on abusive users, and mitigate the effects of bots and trolls, while maintaining plausible anonymity.
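To make the flow concrete, here is a toy sketch of that handoff in Python. All of the names and calls are invented for illustration, not a real verification API, and the verifier itself still holds the mapping from token to identity, which is the limitation discussed further down the thread.

```python
import hmac, hashlib, secrets

class IdentityVerifier:
    """Toy third-party verifier: it alone sees the real identity and returns
    only an affirmative response plus an opaque, per-site-stable token."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)

    def verify(self, real_identity, site_id):
        # Pretend the document check passed; the token is a keyed hash, so
        # the site cannot reverse it, but the same person yields the same token.
        token = hmac.new(self._secret,
                         f"{site_id}:{real_identity}".encode(),
                         hashlib.sha256).hexdigest()
        return {"verified": True, "token": token}

class BlogSite:
    """The site never learns who the commenter is, but it can ban a token."""
    def __init__(self, site_id, verifier):
        self.site_id, self.verifier = site_id, verifier
        self.banned = set()

    def submit(self, real_identity, text):
        result = self.verifier.verify(real_identity, self.site_id)
        token = result["token"]
        if not result["verified"] or token in self.banned:
            return "rejected", token
        return "published", token

    def ban(self, token):
        # The moderator bans the pseudonymous token, not a real identity.
        self.banned.add(token)

verifier = IdentityVerifier()
blog = BlogSite("example-blog", verifier)
status, token = blog.submit("alice@example.org", "first comment")
print(status)                                                  # published
blog.ban(token)
print(blog.submit("alice@example.org", "second comment")[0])   # rejected
```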

Thanks for keeping things civilized.

willmore June 19, 2024 4:49 PM

Darn, I’m sorry to hear it’s come to this. I haven’t been reading the comments or contributing to them much recently, but I’ll try to make an effort in the future if I have anything meaningful to add. Maybe more signal will help with the bad signal-to-noise ratio? (Probably only less noise would really help.)

b walker June 19, 2024 5:21 PM

Good call. There are often excellent points raised in comments from many different perspectives. Glad you are trying to keep this going.

Nobody June 19, 2024 5:28 PM

Well, Sir, it was time someone handled the problem. Though I’m a mere lurker, I had quietly stopped passing by here because of the new normal in the comments. Let’s hope Mr Clive Robinson coming back is a good omen!

David Rudling June 19, 2024 6:01 PM

I am not sure if this will be judged to “contribute to the discussion” but, even if read only by your moderator, I have to comment that this will be a welcome change to many of us, sadly necessary.

Just a fan June 19, 2024 6:27 PM

Does the comments feature support a hybrid approach of 1) allowing registered users to comment without pre-screening; and 2) pre-screening those who are not logged in/unregistered? Maybe this would ease the burden on the moderator + allow established commenters to not have to be subjected to extra scrutiny.

TimH June 19, 2024 6:41 PM

A possibility is to whitelist known multi-post-positive contributors, such as Clive Robinson, by email address and perhaps IP, and pass ’em through. No need to publish the gating metric.

Gary Moore June 19, 2024 7:39 PM

Great idea. I am sorry that you have to cope with it. I do applaud your willingness to take a stand. Why people resort to such crappy behaviour remains a mystery. Keep up the good work.

Pseudonymous Jeffrey June 19, 2024 9:21 PM

I am going to disable comments on the weekly squid posts. Topicality is too murky on an open thread, and these posts are especially hard to keep on top of.

I can understand why things need to change. It seems that, sometimes, half the comments (on squid posts and elsewhere) consist of appeals to the moderator, vague accusations of being or impersonating someone, and so on. That said, I’ve seen lots of good comments on the squid posts, and I’ll be sad to see them go.

I hope that they’ll eventually come back in some form, perhaps once everyone gets used to the new moderation system. I wonder if enabling threading, perhaps just for those posts, could help with topicality; the moderator would then have some idea of the discussion that one’s purporting to continue—or, if top-level, whether it’s a topic worth bringing up. Or how about an anonymous way to suggest a story?

I don’t really feel like squid-post topicality was a huge problem, though, and I’m curious to read what others think. If a somewhat-off-topic conversation gets started in an open thread, but never devolves into personal attacks and sniping, is it a bother? I’m not talking about some of those topics that have been popping up recently, that seem to breed nothing but controversy (regular readers will be able to think of examples; let’s not “name names”). I mean stuff like, maybe someone posts an operating system vulnerability, and then we get onto a discussion of the incentive structures that affect vendors, system-design techniques to avoid such vulnerabilities, and so on. Those were, to me, some of the most interesting discussions.

On the topic of moderation delays, intentional slow-downs have been repeatedly proposed to solve various ills of society (for example, having a stock market with only one round of trades each day—or each year—to avoid some of “Mr. Market’s” less rational behavior). I feel like such ideas are under-explored. Maybe what appears to be a disadvantage actually won’t be.

Keith Rettig June 20, 2024 3:14 AM

How about sending the ‘close but not good enough’ comments back to the email address used to post? That way, the writer can reflect on what they wrote and re-submit a better version.
If the comment is offensive or clearly not useful, then delete away; no sympathy for those posters.

Bob in Vancouver June 20, 2024 3:15 AM

My hunch all along has been that it’s the same person behind the 3 or 4 or more frequent posters of off topic comments and that it’s his other personas that are replying or accusing or threatening or whatever.

And that I think he’s had us fooled for quite a while.

Occasionally I return to the Friday squid posts to see if it’s full of nonsense comments as always.

I hope you can find a solution.

All the best,
Bob

Robin June 20, 2024 3:20 AM

Three cheers, and thank you for testing new solutions before going full lockdown.

But like @Pseudonymous Jeffrey I will be sad to see the constructive Squid comments go, while acknowledging that they are getting harder to find amongst the rest and shutting down for now is probably a good tactic. I have found many a topic to explore further from the Squid snippets posted by a small handful of contributors. I hope that some method will emerge for re-instating them in the future.

Perhaps when the dust settles, we could do a collective brainstorm of ideas, from slower publication to a limited number of characters to other more sophisticated methods?

understanding down June 20, 2024 6:15 AM

Also, maybe this is an opportunity to use a comments form more secure than Google’s?
I apologize if my S:N ratio was too weird. Some LSB people elsewhere needed that. MIR

wiredog June 20, 2024 6:26 AM

Insert “Argument Clinic bit” here.

There are only two other places I go to that allow comments from people who aren’t logged in with an account: One is Reactormag (formerly tor.com, the SF publisher), and they hold them for moderators and also have a lot more resources than one guy running a bespoke blog. The other is Dave Barry’s website, and the comment section there is getting more toxic and politicized.

Dinah June 20, 2024 6:33 AM

I’ve received three e-mails in the past couple of months about people who have given up on comments because of the toxicity.

I largely gave up on them long ago. Partly for toxicity. Partly because I realized that I first started regularly following you in the RSS days because I value your expert opinion and the comment section is, well, to paraphrase a great man: ask amateurs to comment about security and you get amateur comments about security.

I am going to disable comments on the weekly squid posts.

Losing most comments will be no loss. However, my understanding is that you use the squid comments as an invite for missed news items. Assuming those comments provide value, do you have an alternative plan for collecting missed items?

iAPX June 20, 2024 6:59 AM

Aren’t rules created to be hacked? 😉

The more rules, the more loopholes, and the more problems enforcing them with consistency.
This seems a very simple set of rules, and a very reasonable one.

The ability to comment under a gentlemen’s agreement, without having to register beforehand, is refreshing. Thanks for this space and community!

Clive Robinson June 20, 2024 7:49 AM

@ Charles, ALL,

“What do you think about requiring identity verification for all social media accounts, via third-party identity verification services?”

I personally think it’s a very bad idea.

For quite a few reasons but primarily,

1, it’s a trust system that will fail and fail badly.
2, It’s an unneeded and unwarranted and effectively illegal surveillance system.

Consider the second point carefully.

As a consumer with cash you can go into a store or similar, make a purchase and leave. The transaction is effectively anonymous. But with the receipt you can take back any goods that are defective and get a refund (which should also be in cash).

As an online purchaser you too should be entitled to the same level of anonymity without question.

You should not have people profiting off of your details that have no need under law to know them.

Likewise in most streets you can walk down them without proof of identity etc.

In the US if you check the law you have various rights not to incriminate yourself and be secure in your person, papers and possessions from unwarranted interference.

Why should the “electronic world” be any different?

I could go on at length about the almost lowest form of scum around, like Palantir and similar, which are not just data aggregators but draw unwarranted inferences from the data, which they then pass on for profit and which others treat as factual, when there is no evidence to support the claims sold.

Being forced to “produce papers”, which is what it is, is what you expect from police states and similar despotic governments.

It takes very little thought to see how the at-best minimal advantages will be outweighed not just by the cost of running such a system but by the very great harms it will cause.

If you cannot see this then you really need to study quite a bit more history, about how such systems of serfdom became murderous police states.

Wannabe Techguy June 20, 2024 8:13 AM

Bruce, what about questions? As my name shows, I’m not an IT professional, and I do learn here from you and others (though of course, some of it goes over my head). That being said, some of my questions, like recently when I asked someone why they trust government (any government), were given smart-ass replies that didn’t answer my question. Sometimes, I’m just curious what people are thinking.

Andy June 20, 2024 9:31 AM

I maybe haven’t followed enough of the Friday posts, but is it possible to point to an example of a toxic comment? Maybe sanitized of toxic words? I fear people may be a little too thin-skinned these days. I remember a perfectly valid comment of mine being flagged to the moderator. I don’t remember the outcome.

dbCooper June 20, 2024 10:40 AM

I’ve been a follower since the Counterpane Newsletter days. The past months have certainly seen toxicity in the comments, seemingly from just several users and one very vocal user. It’s so unfortunate when an uncouth minority ruins what a majority enjoys and benefits from. Another example of the unfairness that is life.

Thank you Bruce for trying to salvage this valuable part of the blog. Sincerely hope it does not come to all comments being prohibited.

Jaqulyn June 20, 2024 12:11 PM

While I support this policy on the whole, I am going to miss the comments on the Squid posts. That was the main reason I read them. The vagueness of what is considered on topic allows for people to discuss and link to some really amazing stuff at times.

Clive Robinson June 20, 2024 1:51 PM

@ Bruce, ALL,

The problem with killing the squid page is that a very significant number of news stories that later became threads were fed into the blog.

This “pre-selection” gave everyone the chance to read the story in advance of it becoming a thread and could thus get their thoughts in order etc.

Now a story that is very old to me, and one I’ve been waving a red flag over, is the death of E2EE on single devices. It is now here for all commercial consumer OSs from Apple, Google, and Microsoft because of the AI hardware in the CPU “neural network support” systems, all of which will now sit there and read the User Interface for “Surveillance Purposes”.

The Bull-Scat put out by Apple and Microsoft is “Child Protection”; it’s nothing of the sort. It monitors everything you do and, in MS’s case, updates a series of databases every 5 seconds, or, if the database is online in the cloud, as often as it can.

The story the other day that MS were pulling back on it is “a steaming load”, as they say, as it’s a beta product in Win-11 that is still very definitely moving forward.

Anyway, although I’ve been warning about this killing of E2EE by end-running it on the device, and giving advice and mitigations, for over a decade… for many it’s snuck up on them apparently without warning, which is why videos like

https://m.youtube.com/watch?v=c52pKpYeZ74

are starting to appear.

Jon June 20, 2024 2:10 PM

I rarely post but I do frequently read comments. Perhaps a function could be introduced for viewers to tag toxic/unnecessary/off-topic comments, so readers can serve as a first pass of review. That way you and others don’t have to necessarily read all the comments – just those that are flagged.

A next step would be to ban repeat offenders. One of the comments above raised the thought that a handful of commenters were generating most of the toxic comments. Perhaps they can be at least partially managed this way.

Charles June 20, 2024 2:58 PM

@Clive Robinson

You have misunderstood my intent, along with the proposed implementation. Here’s a simple analogy that will help everyone understand. If you have ever been to a nightclub, you will know that many of them do identity and age verification at the door, and will give you a wristband or stamp to indicate that you have passed the verification. They don’t care who you are, retain your information, or care what you do once you are inside. Same goes for the bartenders inside: they don’t care who you are, and you remain effectively anonymous. There’s no way to trace your wristband back to your ID.

David Rudling June 20, 2024 5:48 PM

I understand some people’s concern about being unable to raise possibly interesting news topics without the Friday Squid thread.
However, if the moderator is reading every post, it should perhaps be possible to post on any current topic something like:-

@Moderator
OFF TOPIC NEWS ITEM
Lorem ipsum dolor …

giving the possibility for a message judged to be of sufficient value to be transferred by either the moderator or Bruce to a “New Item” thread instead of the one it was notionally submitted to, assuming this is feasible.
Of course it is our host’s blog and this may smack of allowing others to seek to hijack it.

Clive Robinson June 20, 2024 7:54 PM

@ Charles

“You have misunderstood my intent, along with the proposed implementation.”

Err, not really. I have “previous knowledge” of how such systems work in the real world rather than in theory, and the two are more than a country mile apart for “legal reasons”.

“If you have ever been to a nightclub, you will know that many of them do identity and age verification at the door, and will give you a wristband or stamp to indicate that you have passed the verification.”

By law in most places they are not allowed to record your ID, but such laws do not apply “online”, in fact the very opposite for a multitude of reasons, including US companies claiming they were “safe harbour compliant” when they were nothing of the sort. Hence the increasingly more stringent EU legislation.

But I have already described such a physical-world “receipt” system. And the reason the stamp is not traceable backwards is that it is a physical, real-world, tangible object that would be too complex to make traceable with “serial numbers” or similar (although time-stamp and CCTV systems are now being combined for “Patron Safety”).

Online systems however, supposedly to “prevent fraud”, are almost always a 100% traceable system with the equivalent of crypto protected serial numbers.

All this US company noise about anonymous tokens is nothing of the sort, because they keep a record of both sides of the token. As either a user or business you only ever get to see one side. The token arbitrator, however, gets to not just see but record both sides. This they can and do store away “to be compliant with US Legislation” and also “give for free” to the US Government for “legal immunity” protection.

Clive Robinson June 20, 2024 8:21 PM

@ David Rudling,

“… giving the possibility for a message judged to be of sufficient value to be transferred by either the moderator or Bruce to a “New Item” thread instead of the one it was notionally submitted to”

The problem is you still lose the user comments at the time.

For instance @vas pup very recently posted a news item about giving robot hands a sense of touch by pressure sensor.

It’s a subject I did original work in back in the late 1970’s / early 1980’s in a couple of UK education establishments that are now Universities.

What I’d found out back then was that building pressure sensors did not solve the “sufficient grip” problem, and in fact cannot with fragile items like foodstuffs. The solution most used back then was not to grip like a human hand but in effect to cage, by using curved claws that went around and under the object but did not apply pressure.

The “human touch” solution is to realise humans mostly do not use pressure as an indicator of sufficient grip, but feel for the object slipping. Thus you need to design a “slip sensor”, which is way more difficult than a pressure sensor.

For instance you can fairly easily create a “skin” with 12 or more effective pressure sensors per square millimetre. You do this with three layers. The outer two have very fine parallel circuit tracks, and the layer in the middle is “carbon loaded” or similar, so it changes impedance with pressure. If you have the wires at 90 degrees then you can scan the same way that you do with keyboards. Two or three decades after I designed such a skin, it started getting used as an inexpensive surface for “Graphics tablets” and later LCD touch screens for consumer products.
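For readers who have never scanned such a matrix, here is a minimal sketch of the row/column scan just described, in Python with the hardware simulated as a plain 2D array. The resistor values and helper names are mine, purely illustrative, not the original design.

```python
# Illustrative row/column scan of a resistive pressure-sensor matrix,
# in the same spirit as scanning a keyboard: energise one row line at a
# time and read every column. The "hardware" is simulated here; on a
# real device the two inner calls would be GPIO/ADC operations.

ROWS, COLS = 8, 8

# Simulated sensor sheet: lower resistance = more pressure.
sheet = [[1e6 for _ in range(COLS)] for _ in range(ROWS)]
sheet[3][5] = 2e3   # pretend a fingertip is pressing here

def drive_row(r):
    """Placeholder for energising row line r (GPIO high on real hardware)."""
    return r

def read_column(r, c, vdd=3.3, r_fixed=10e3):
    """Placeholder ADC read: voltage divider of a fixed resistor vs the cell."""
    r_cell = sheet[r][c]
    return vdd * r_fixed / (r_fixed + r_cell)

def scan():
    frame = []
    for r in range(ROWS):
        drive_row(r)
        frame.append([read_column(r, c) for c in range(COLS)])
    return frame

if __name__ == "__main__":
    frame = scan()
    # Cells with a voltage well above the baseline are "pressed".
    pressed = [(r, c) for r in range(ROWS) for c in range(COLS)
               if frame[r][c] > 1.0]
    print("pressure detected at:", pressed)
```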

But whilst you can to a limited extent detect angle of pressure it’s not very good at detecting “slippage”. There is a simple trick to make it work better at detecting slippage but it’s not inexpensive to manufacture at the sensitivity of human skin and it’s not as robust by a long way.

44 52 4D CO+2 June 20, 2024 10:09 PM

@Clive Robinson

Did you watch the full video you cited?

https://m.youtube.com/watch?v=c52pKpYeZ74

Towards the end, he proposes a solution of tacking on DRM to a data diode setup.

Smells like snake oil to me.

Better for people to operate under the assumption that “two can keep a secret, if…”

Blaziken June 21, 2024 1:44 AM

@Bruce

As a long time lurker and occasional poster, I’d like to apologise on behalf of the user community. You provide an excellent service, and it is shameful that we cannot be trusted to moderate our own behaviour.

Your position on allowing anonymous posts should be applauded.

I strongly relate to your analogy that this is like a gathering in your home. I can’t imagine the recent unpleasantness taking place at (say) a dinner party.

Please keep up the good work, and thank you for persisting in the presence of those of us who cannot manage even a small degree of self control.

loon June 21, 2024 2:56 AM

The squid entries are … interesting, I guess? But if I want to learn about squid I go to zoo sites. I come here to learn about security, and the Friday free-for-all was a nice way to get to know a sort of stream of consciousness for the crowd that gathers here. So very sad that you deem this necessary – how about you leave the Friday comments open and put a trigger warning up top? Or perhaps you’ll even find someone itching to do something fun like this: https://linus-neumann.de/2013/05/die-trolldrossel-erkenntnisse-der-empirischen-trollforschung/

Anyways, kudos for financing a moderator out of pocket – and a humble request: both the submission and the eventual decision will be logged, so could you please have a scripted note that tells prospective commenters what the current mean time to publication(-decision) is? Some comments are not worth it if they trundle in 3 days after the fact.

Z.Lozinski June 21, 2024 6:48 AM

I understand why, but it’s frustrating that one of the spaces on the internet where we can have reasoned discussions on security is attacked by griefers.

I for one really appreciate the effort Bruce puts into keeping this blog going, and the depth of comment from the regulars. I have always thought it is a nice touch that a group focused on security works off reputation without requiring log-ins, and it would be a shame to lose that.

An aside on identity. I shared a flight with Chris Holloway (who for a time was IBM’s chief cryptographer). We were discussing identity, and he observed that in practice most identity was attestations by various authorities (all the way from your mates down the pub and the company HR department to government-issued identity) that they had come across you before. He observed the only way to get definitive knowledge of a person’s identity was in the maternity ward, before the umbilical cord is cut, as after that point there is always a way to defeat whatever system is in place. Which then comes back to the idea of attestation by a group that knows and interacts with you.

Robin June 21, 2024 9:52 AM

A number of commenters have remarked on the fact that Squid posts often contain interesting and/or useful snippets, links or news items that we might not have seen ourselves.

Can I make a tentative suggestion: that a mechanism be found for people to flag up items of interest. A sort of mini-comment: a meaningful title, short (200 words?) description, a link to further information. No other discussion or commentary.

The idea no doubt needs refining; guidelines for what is “interesting”; ways to avoid hacking/contaminating/bots need to be thought through; acceptable ways to announce links found; perhaps some collective moderation with upticks (and downticks?).

Obviously a lot depends on how – or even if – this could be integrated into the existing blog. Is it worth thinking over?

@Moderator: if this is something Bruce would definitely rather not do, then please just send this comment straight to the waste-bin!

SomeFox June 22, 2024 4:51 AM

I have noticed a change in tone in the comments for a while now, and I feel minimal, non-invasive screening like this is most welcome.

Even editing should be an option, by which I mean cutting out needless parts and putting in […] placeholders.

The fastest way for me to completely ignore a comment is this: “@ALL”
By default any comment is for everyone to consume. This type of attention-grabbing behaviour needs to be curbed.

Maybe the editor will see this, but I won’t make the effort of writing an email for this suggestion. Thanks for the blog and thanks for leaving the comments open, for now.

Michael Elling June 22, 2024 7:19 AM

“well, I guess this doesn’t technically quite break a rule,” but “is this actually contributing.”

Good to see the notion of “incentives” brought into the picture. But this is a heavy-handed and non-scalable approach. What if incentives and disincentives were built into the code in a far more subtle, generative, sustainable and scalable way? Subtle, in that, as the author suggests, people reflect on how much skin they put in the game. Generative, such that it adds to the original content or commentary, provides an alternative perspective, or debunks the discussion or opinion with fact. Sustainable, in that things are not constantly repeated and therefore wasteful for everyone. Lastly scalable, in terms of time, cost, applicability, usability, etc…

Comments have been broken from day one because the internet itself lacks a global incentive and disincentive system. But we can start with comments to fix the problems of the mothership.

JG5 June 22, 2024 9:10 AM

Sorry to not have had much to say lately. Unpleasantly busy. I try to follow and was saddened by the degradation of discourse. I think of Bruce, Clive, and MarkH often, and hope that they are doing well. I remain interested in participating in a broad discussion of security topics, from computer security to all of the things that it touches. As long as we are up-voting topics of interest, it would be a nice touch to be able to down-vote the petty sniping. Unfortunately, any credible voting scheme requires something roughly equivalent to a login. Or someone will loose the kraken-bots.

Metalobster June 22, 2024 3:28 PM

Ouch, I’m going to miss the squid comment thread. I check the RSS occasionally, but on weekends I often enjoy skimming last week’s squid thread. Yeah, there are better sources for security news, but I enjoy the discourse captured by those threads and specifically Clive’s contributions. I proposed a “comment barber” browser extension a few years ago when we saw a significant uptick in political trolls (no one liked the idea but I found it useful for ignoring the trolls).

I agree it has gone off the rails lately, but surely there is some middle ground besides disabling comments on it. Would you accept financial donations for additional moderation?

Dancing on thin ice June 22, 2024 4:12 PM

@Clive Robinson

By law in most places they are not allowed to record your ID

The large nightclub I worked for used a VCR to record an ID along with the person presenting it as far back as the 1980s.
It proved useful in showing the club had been careful when it was raided: the kids had thrown out the fraudulent IDs they presented and showed officers their real identification.

A quick look at the United States brought up only one state that prohibits the practice, but several others that require it for purchasing liquor.
There are stipulations on who has access and how long to retain it, such as for police investigations. (Though we both know that may not always be true.)

44 52 4D CO+2 June 22, 2024 11:21 PM

@Escaped the Moderator

I’ve never had a reason to look at that site (//soylentnews.org/article.pl?sid=24/06/20/1558253)

100+ comments, most of them unrelated to the new policy here. There is an interesting question posed though – why do some want to destroy anonymous forums – I don’t think it should be that difficult to speculate about possible motivations.

Quickly5407 June 23, 2024 1:44 AM

If somebody does not deserve to have to deal with this, it is you, Bruce.

It is incredible how you are looking for other solutions than disabling comments or requiring verified accounts, unlike the… 95% of the Internet?

I want to thank you for this blog; you are always a trusted source I can rely on to understand complex things (such as the TPM dilemma).

I wonder if the old Internet was always as toxic as it is now? Do you think that there might be ways of correcting toxicity?

Anonymous June 23, 2024 4:55 AM

First off I express my condolences regarding Ross Anderson.

Second of all, I cannot say I am terribly surprised by this outcome. And with the fear of hypothetical LLM-powered ‘spam bots’, it calls into question the feasibility of anonymous commenting systems on the Internet altogether.

While LLMs seemingly have issues differentiating between factual and fictional information, I would be curious whether modern-day ‘sentiment analysis’ could keep some of the trolls and other time-wasters at bay. Just an idea.

JazzHandler June 23, 2024 10:06 PM

I’ve only commented here a couple times over the years, but I have learned SO MUCH from this blog. Much of it from the comments on the squid posts. True, a lot of it is knowledge that I never needed, but I still enjoy having it.

So I hope you find a way to keep the comments, but if not, I’m still quite grateful for this site and everything I’ve learned by reading it.

Thomas Stone June 24, 2024 11:13 AM

Bruce,

As far as I am concerned, your blog is required reading for everyone who is concerned about security. I strongly recommend it to all the other security developers that I work with. You constantly open our eyes to issues we were not aware of but need to be concerned about.

The above will not change even if comments are disabled. It will be sad if it comes to that but it is wholly understandable. Just don’t stop blogging. You are a cherished resource in the crowd I hang with.

Esker Riada June 24, 2024 11:39 AM

Readers run the gamut from teen gamers to academic luminaries.
The comments do seem to drift toward the lowest common denominator in this egalitarian model.
Charge money for premium access – that should sort it out!
As an aside, I would like to see commenters self-sanitize. We all have opinions on good guys, bad guys, state actors, etc., but conjecturing at every opportunity that Putin is behind all digital evil or that Biden is responsible for this, that and the other thing is puerile. Where possible the community should focus on technicalities without politicizing.

Who? June 25, 2024 11:14 AM

@ Charles

What do you think about requiring identity verification for all social media accounts, via third-party identity verification services? User anonymity can be maintained, as identity verifiers need only pass back to the website requesting verification an affirmative response and some sort of identity token. My assumption is that this process would ensure website operators are able to enforce bans on abusive users, and mitigate the effects of bots and trolls, while maintaining plausible anonymity.

I would suggest you read the book “Privacy is Hard and Seven Other Myths,” by Jaap-Henk Hoepman:

https://mitpress.mit.edu/9780262547208/privacy-is-hard-and-seven-other-myths/

I think you will find the chapter on third-party identification services illustrative of how leaky these services are right now (and how powerful, being a central point where surveillance against you is easy). Of course, there are ways to make these authentication services more privacy-friendly, but do you really think they want to be privacy-friendly?

In general I like this book, but Mr. Hoepman has too much confidence in laws as a means to protect citizens, at least in Europe. I think differently: our well-known regulations (e.g. GDPR) are here to protect the interests of corporations, not citizens. After all, governments are owned by high tech, not by citizens.

I think the goal of this forum —please, Bruce, correct me if I am wrong— is preserving the anonymity of readers as much as possible, because some matters we talk about here can be challenging if we lose anonymity.

Anonny Mouse June 25, 2024 6:46 PM

If the new policy does not work, there is one more that can be tried. Replace the comments with an invitation to send a “letter to the editor.” From time to time the best letters can then be published. More brutal, but it does not require an excessively timely response to received letters, while making the filter more or less aggressive as resources permit.

Noah June 25, 2024 8:30 PM

FWIW, I’d be interested in knowing the rate of received comments after a few months. Not because I think or want the number of useful comments to drop off, I’m more interested in whether the rate of crappy comments (rightfully) blocked by the mods goes down over time. Put another way, does the policy actually cause people to stop submitting crud, or does it continue regardless? Not that I’d be able to do much (if anything) with the info, but I sure am curious.

Herman June 26, 2024 10:47 AM

@Anonny Mouse

I believe this is the worst idea. You don’t need to select “the best” comments, you only need to filter the worst. Trolls usually don’t return if you take away their soap box and audience.

A sadist unable to get a reaction is never satisfied. Of course the moderator is the only soul exposed to possible unfiltered abuse. Which is why you should have several moderators to limit the abuse potential.

vas pup June 26, 2024 2:44 PM

@Bruce stated “1) are on topic, 2) contribute to the discussion, and 3) don’t attack or insult anyone.”

He agrees those are subjective in nature. When you have no objective criteria, there is always huge room for misuse.

For IT and security in general (even when the subject relates to humans as the weakest link), 2+2=4, not whatever is subjectively feasible.

Moreover, sanitizing posts by name only, not by subject, and without notice leads to a double standard. That may apply when a blog is in the liberal arts, not security.

I hope You and Moderator will read and keep this post.

44 52 4D CO+2 June 26, 2024 9:11 PM

@Noah

I’d bet the flood of crappy comments has already dropped precipitously. You’d now have to wait a long time before responding to your own posts without making it obvious that you are submitting crud. It’s like starving a fire of oxygen.

Winter June 27, 2024 7:56 AM

On the Recent Comments (Last 100 comments) page it still says:

Note: new comments may take a few minutes to appear on this page.

Maybe this might be updated? At least temporarily.

John Bryson June 27, 2024 11:15 AM

Thanks so much for doing this! Loved reading the incredibly well informed and useful comments. The rest won’t be missed.
So grateful that you will continue the squid posts!

JonKnowsNothing June 28, 2024 11:43 AM

Would it be possible to alter the “Hold for Moderation” tag to differentiate between posts blocked due to spelling issues versus posts held for content review?

In the past, some posts with benign words that included subsets of words on the naughty filter list got the moderation message. When this happened it was often possible to find the offending word, replace it with another acceptable word and the post would go through.

Currently, I have no way of determining why some posts go through and some do not. It might be a naughty word filter reject or it might be content rejection.

It would be helpful if clarification could be noted on the submit page.

Winter June 29, 2024 5:42 AM

@JonKnowsNothing

Currently, I have no way of determining why some posts go through and some do not. It might be a naughty word filter reject or it might be content rejection.

I would suspect that such information could be used to probe the system to be able to circumvent the restrictions. If a spammer can map out the list of naughty words, she can use that knowledge to more easily craft her spam to get it past the filter.

Bruce Schneier June 29, 2024 6:48 PM

@ JonKnowsNothing:

I’m not sure what you’re asking. I think you are thinking there is still a keyword filter that causes some comments to be treated differently. That’s no longer true. If a post doesn’t go through, it’s because the moderator did not let it through.

Meta lobster June 30, 2024 2:12 PM

Just want to say I’m missing the squid comment thread.

Would it be possible to set up patreon or some other crowd-funding system in order to fund additional moderation costs? Would anyone else be interested in contributing to this?

Winter July 1, 2024 4:25 AM

We all miss the free discussions on the Squid posts and the fast responses on the other threads. We are all relieved about the disappearance of the toxic comments and spam.

But we know human moderation does not scale. Meanwhile, there are fast developments in technologies of detecting hate speech and sentiment detection [1]. I have been looking forward to a (semi-)automatic spam/troll/hate filter since the start of online communities.

Would it be possible to experiment with such systems in, eg, the Squid posts?

Such an experiment could be done as part of a student’s project. I am sure many students would want a try at analysing real systems in real time. This experiment could be restricted to hide all old comments after a set time, say, days, or a week. Obviously, users would have to opt-in to the experiment.

[1]’https://doi.org/10.1016/j.neucom.2023.126232
‘https://doi.org/10.1016/j.aej.2023.08.038

Clive Robinson July 1, 2024 8:30 PM

@ Winter,

Re : Hate Speech and current AI will fail.

You say,

“But we know human moderation does not scale. Meanwhile, there are fast developments in technologies of detecting hate speech and sentiment detection [1].”

And you give links to two papers.

However what you appear not to realise is that “moderation does not scale” applies to the current AI systems you allude to as well…

But the first paper really is of little worth for a number of reasons. Firstly their understanding of LLM’s and ML systems is incorrect and is part of a much more general problem. Few appear to understand that LLM’s are just glorified filter systems using the same techniques as “Digital Signal Processing”(DSP). There is nothing “magic or intelligent” about them, they are simply based on statistically weighted spectrums that are multidimensional and applied filter masks.

But the authors appear to be ignoring a series of issues. Reading the first two sentences of section 8.2 will tell you why any of the current AI LLM or ML systems will fail.

That is the systems are at best long lag reactive, not pro-active. As society moves forward they can not on their own keep up. This makes “Supervised Learning” by near sweatshop labour essential. Hence you get the issue of “human moderation does not scale” applying to the current AI systems.

Also the data gathering process will of necessity need to be “unethical” as the authors allude to in section 8.3 where they say

“… the collection process of such dataset often involves using content posted by real users who do not necessarily want to be identified. At present, most researchers do not systematically obtain explicit consent from all users whose content is being analyzed and, instead, rely on the implicit consent that users are in a public or semi-public space.”

But also consider: as the data corpus will include user identification, and we know that LLMs care not a jot what they push out as they are in no way sentient, it can be seen that a crafted user input on a person’s name or their other identifiers will output all the “Hate Speech” linked to them. Yes, people talk of “guide rails” to stop this, but so far they’ve effectively been got around by those who are sentient…

The second paper you link to has a large number of issues, and it’s difficult to know not just where to start but what best order to put them in.

But both papers have a common very significant failing that will in effect kill the usage of current AI systems.

Consider a sentence of the form,

“AAA is a XXX YYY”

That would cover,

“Jolum is an Ebolium drunk.”

Is it “Hate Speech” or “Statement of fact”?

Without other information it could be either or both, you can not tell.

This relegates it to falling foul of the “observer problem”, which is problematic.

Those who “supervise the learning” who are effectively sweatshop labour are not going to be impartial by definition. They are going to make a decision on what best benefits them.

The problem is this is a “class” not “instance” issue. Whilst the statement being judged by the observer is a “single instance”, their choice will affect any of a very large number of other statements that fall in that class.

The result is each observer judgment taints the entire set of weights. It will not take too many such judgements to render the LLM or ML system well beyond “hallucination”.

But “hallucination” is very much the wrong word to use as an appropriate “term of art” in the domain.

Back in the first decade of this century a book was published and with it a more appropriate word as a “term of art” applies and is a better fit.

The problem is the word now used as a “term of art”, not just in its original knowledge domain but increasingly in others, has frequently been found on word “naughty lists” for profanity and the like…

You should read about it and why it’s highly relevant,

https://link.springer.com/article/10.1007/s10676-024-09775-5

Winter July 1, 2024 10:40 PM

@Clive

Firstly their understanding of LLM’s and ML systems is incorrect and is part of a much more general problem.

Both papers are systematic reviews [1]. They describe the existing literature and understanding. Whatever definitions they use are the general understanding of the field.

There is nothing “magic or intelligent” about them, they are simply based on statistically weighted spectrums that are multidimensional and applied filter masks.

I do understand your doubts, but this description of LLMs applies also to any network of organic neurons, including our brains. Biological neural networks are “based on statistically weighted spectrums that are multidimensional and applied filter masks”.

What LLMs, or Machine Learning, can do in sentiment analysis is classify texts by levels of negativity, hate, etc. Like human moderators or censors with a workload, they will be reactive, with a lag.

They will have false positives and false negatives in their classification. That is all nothing new. And no one sane is suggesting a one size fits all, fully automatic decision system.

I can still use email because there are imperfect but useful spam filters. Spam filters that are based on machine learning, e.g., naive Bayes filters. I hope to see something like that for arguments and comments.

Is this possible?
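To make the comparison concrete, here is a minimal naive Bayes classifier of the kind used for spam, sketched in Python over a tiny invented training set; a real moderation aid would need far more data and careful evaluation, so treat it as an illustration of the idea only.

```python
import math
from collections import Counter, defaultdict

# Tiny invented training set: (comment text, label).
TRAIN = [
    ("thank you for the thoughtful analysis", "ok"),
    ("interesting point about key management", "ok"),
    ("you are an idiot and so is everyone here", "toxic"),
    ("another worthless post from a clueless shill", "toxic"),
]

def tokenize(text):
    return text.lower().split()

# Count word frequencies per class and the class priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in TRAIN:
    class_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = {w for counts in word_counts.values() for w in counts}

def score(text, label):
    """log P(label) + sum of log P(word | label), with add-one smoothing."""
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    total = sum(word_counts[label].values())
    for w in tokenize(text):
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    return max(class_counts, key=lambda label: score(text, label))

if __name__ == "__main__":
    print(classify("thoughtful point about management"))  # likely "ok"
    print(classify("clueless idiot post"))                # likely "toxic"
```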

After decades of sentiment analysis I am more hopeful than ever before after seeing the latest developments. I think they might become useful in moderation. They might be imperfect, but still helpful. Just like spam filters are imperfect but keep email usable.

If this works, we might see more free form discussions again on the internet.[2]

[1] Also, these were chosen as examples of developments in the field, not as finished systems.

[2] I am convinced the infamous “algorithms” of social networks already perform this same function, as they gate connections based on content and sentiment so users see content that keeps them in and does not drive them out.

JonKnowsNothing July 2, 2024 3:39 AM

@Clive, @Winter, All

re: Hate Speech and current AI will fail

An important aspect of the failure in all moderation systems is the one that @Clive pointed out: Context Failures

  • “AAA is a XXX YYY”

Since AI has no intelligence of any sort, the best it can do is a Word Proximity or Word Order check to determine context, along with a dictionary lookup for naughty words.
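A crude filter of the sort being described might look like the following Python sketch, with an invented word list and window size; it also illustrates the context failure, since it passes or flags text on word proximity alone.

```python
import re

# Invented example word lists and proximity window, purely for illustration.
NAUGHTY = {"idiot", "moron", "shill"}
TARGETS = {"you", "your", "he", "she", "they"}
WINDOW = 3  # flag if an insult appears within 3 words of a personal pronoun

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def flag(text):
    words = tokenize(text)
    # Dictionary lookup: any naughty word at all?
    hits = [i for i, w in enumerate(words) if w in NAUGHTY]
    if not hits:
        return False
    # Word-proximity check: an insult close to a pronoun looks like an attack.
    for i in hits:
        lo, hi = max(0, i - WINDOW), i + WINDOW + 1
        if any(w in TARGETS for w in words[lo:hi]):
            return True
    return False

print(flag("you are an idiot"))                   # True
print(flag("the idiot lights on my dashboard"))   # False (no nearby pronoun)
print(flag("thanks for the detailed post"))       # False
```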

In many other online public forums where comments are permitted, struggles over context and content clash regularly. AI does not recognize “sarcasm” or “idioms” or “slang”. These move too quickly in the world’s vocabularies to be captured fast enough to determine if they are acceptable or not.

  • Length of post is not a marker of quality of context or content

A human moderator has a better chance of context identification in any language they are proficient in, but they may not understand the nuances of the post.

  • When in doubt: delete

This means that useful posts as well as unpleasant posts get deleted and, given the standard bell curve, 20% of all posts will be jettisoned: 10% bad and 10% good. If the deletion rate is greater than 20% there is a context problem or a technical loophole.

When reviewed posts are looked at in large scale systems (Social Media) the ratios remain about the same; however the quantity of blocked posts is a large number.

It’s a bit like spam phone calls.

  • I have a very persistent caller that spoofs their caller ID and calling number. They call multiple times a day. There’s little I can do about it other than Pull The Plug.

And that’s exactly what Moderation does: pulls the plug.

  • When the phone is unplugged neither good nor bad calls connect

JonKnowsNothing July 2, 2024 8:34 PM

@Clive

There are only a half dozen or so posters that I read. Maybe another handful I scan for content-context. Many of these posters have gone dormant or lurking.

If there is another venue where it is more convenient to have an exchange that might be the better solution rather than to stress our host with topics of no interest to him.

I’d be willing to shift locations and medium if that would work better.

Winter July 3, 2024 3:01 AM

@JonKnows

An important aspect of the failure in all moderation systems is the one that @Clive pointed out: Context Failures

Spam filters are a good example: They are never perfect, but they are good enough.

“AAA is a XXX YYY”

That is not how LLMs work. They do take ample context to evaluate any piece of text. GPT-4 has a context length of 8192 tokens (“words”). They are working to 32768 tokens. That is a lot of text.
‘https://openai.com/index/gpt-4-research/

LLMs are bad at jokes and satire, so are many people, moderators included.

Clive Robinson July 3, 2024 6:07 AM

@ Winter,

Re : Digital v. Biological

“Biological neural networks are “based on statistically weighted spectrums that are multidimensional and applied filter masks”.”

Actually we are very far from sure on that, even in the general sense.

We see some “structure” but with insufficient detail. Whilst “Digital Neural Networks” (DNNs) are argued to be based on “Biological Neural Networks” (BNNs), that is a little like arguing a cardboard box is modelled on a mountain cave.

The design of a DNN neuron is a linear “Multiply and ADD” (MAD) structure which, whilst linear for each multiply, has way too many inputs for the ADD to avoid various underlying issues (insufficient bit width being one). The output function of the DNN neuron is where the summation is “rectified”, which again has significant issues.
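For anyone who has not seen it spelled out, here is a toy version of that multiply-and-add neuron in Python, with the nonlinearity applied only at the output; a minimal illustration of the structure, not a claim about any particular framework.

```python
# A single DNN "neuron" as described: a linear multiply-and-add over its
# inputs, followed by a rectifying output function (here ReLU). All the
# nonlinearity sits at the output of the sum, unlike the biological case
# discussed below, where nonlinear behaviour appears at the inputs.

def dnn_neuron(inputs, weights, bias):
    # Multiply and ADD (MAD): one multiply per input, one running sum.
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w
    # Output rectification: negative sums are clipped to zero.
    return max(0.0, total)

# Example: three inputs with fixed, illustrative weights.
print(dnn_neuron([0.2, -1.0, 0.5], [0.7, 0.1, -0.3], bias=0.05))
```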

The way a BNN appears to work is by an interesting form of quite complex and nonlinear integration in the time domain. Importantly the nonlinearity is at the inputs prior to integration not the output as with the DNN.

So apart from the superficiality of multiple inputs and multiple outputs DNNs are not at all like BNNs as has been described in the literature.

It’s a case of “squinting at distant objects and hoping they are not mirages” currently.

Clive Robinson July 3, 2024 7:28 AM

@ JonKnowsNothing,

Re : Jack is not a master.

You note another important issue,

“A human moderator has a better chance of context identification in any language they are proficient in, but they may not understand the nuances of the post.”

Moderators are rarely “masters of the subject” thus lack “domain knowledge”.

One of the fun things that has come out about current LLM AI is the ability to appear lucid whilst having zero comprehension.

The classic is the oft mentioned “legal brief”.

Until pointed out, few realise the much deeper implication.

AI systems use rules and logic; humans use harms and reason.

Back in the 1980’s when I used to teach and write on computers I used to point out,

“Computers follow rules, humans are guided by procedures”

The former is “logic”, the latter is “reason”, and the old “chalk and cheese” reasoning gives away the fallacy of “the duck test”.

Whilst cheese can look and smell like chalk, and thus be superficially confused with it, those with more experience will see the difference immediately and be able to confirm it easily.

In the main moderators are not experienced thus act on superficiality.

People tend to forget why the “peer review process” exists: journal editors are not subject-domain experts but linguistic specialists. Therefore they “sub out” to those with domain expertise.

Something moderators do not do in part because finding domain experts when it comes to prejudice is not easy.

The other day for instance the use of “neckbeards”[1] as a group insult came to attention. It’s a newish one in the lexicon much like “gas lighting” was a few years back.

Where do you find a reliable domain expert?

It’s the same with slang: it’s coded language for an “in-group” of people. In this case not criminals or those facing persecution, but those who mistakenly believe they are some form of cognoscenti rather than wannabes in a sad clique. Hence it was once called “mother-in-law speak”, for the sort of nonsense mothers “of a certain type” reserve for insulting the intended of their progeny by insinuating they are of a lower order.

It’s something that even the ability to reason at very high levels will often miss because it requires a certain degree of base venality and embitteredness only certain types of people possess.

Thus expecting any of the current AI to spot it by just logic is not something you would expect.

When and if AI gets to be able to reliably spot,

“All Covert Channels”

Then…

But as it can be demonstrated that covert channels fall under the “equiprobable issue” which gives the “One Time Pad” its security, the “all” is not something it’s going to be able to do.

Yes if used sufficiently then correlation will show slang for what it actually means but that requires a sufficient body of usage.

But as with “cockney rhyming slang”, where the non-rhyming word in the couplet is used, it becomes difficult.

See “a richard”, from the couplet “Richard the Third”: what it means rhymes with the unsaid “third”, but the rhyming gives rise to deliberate ambiguity which is often only resolved by visual, not spoken, context. Which in turn may be by allusion (“bird” can also mean a young lady). So one male saying to another about a third “have you seen his richard” can make reasonable sense between the two, but to an eavesdropper…

[1] For those who are unaware of the way TV cartoons such as South Park and The Simpsons portrayed neckbeards insultingly with regard to “nerdy types”, have a read of

https://beardgains.com/blogs/learn/neckbeard

Clive Robinson July 3, 2024 10:28 AM

@ JonKnowsNothing, Winter, ALL

Re : Is AI going to scale to AGI

There is much debate about AI and AGI, and the latter is getting more hype than the Emperor’s new wardrobe.

The fact is AI scaling is already hitting one of several walls and my personal thoughts are,

“The original AGI definition is never going to happen the way we are currently going about things.”

Some would say that’s harsh, and I suspect some shills in the bubble-pumping game will scream aloud and wave their arms and have not just fits of conniption but make threats in various forms.

But the thing is, those pumping the AI bubble not just overstepped the mark, they leapt blindly into the dark void of daydreaming. You can see it from the fact they’ve been reeling things back in rather dramatically and quickly; saying they are “watering down the previous projections” does not do their back-pedalling justice.

I’m not the only person to notice this,

https://www.dwarkeshpatel.com/p/will-scaling-work

But the question few ask is about “compression ratios”.

That is, as you compress information, at some point you cross a threshold where the compression becomes lossy. The thing is, what do you lose, how, and how does it show up?

Information can be compressed into a number, or the difference between two numbers but not uniformly.

Take a standard scale of numbers: all the information in the universe could be represented by just a single number or a ratio between two numbers. All you need, as the universe is believed to be finite, is numbers of near-infinite resolution that “exist but don’t exist”; that is, they exist in theory but not in practice.

Now consider that the closer your compressed number is to any other number that is finitely expressible the more accurate it is. So you can develop an “error function”.

But what effect does that error have?

Also what scope does it have?

And so on.

The simple fact is we can make a very simple calculation, which is to count all the “significant” bits in the multiplier constants – the weights – in the DNN’s first layer, and that will give you the upper limit.

So the smaller the DNN the higher the error rate.

Thus how small can the DNN of any domain specific LLM be?..
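As a back-of-the-envelope illustration of that counting, here is a Python sketch over an invented first-layer weight matrix, assuming float32 storage; the weights and the exact notion of “significant bits” used here are my choices, purely to show the idea.

```python
import struct

# Invented "first layer" weights; a real model would have millions of these.
first_layer = [
    [0.125, -0.75, 0.5],
    [1.0,    0.3, -0.0625],
]

def significant_mantissa_bits(x):
    """Count mantissa bits actually used by a float32 value (0 for zero)."""
    if x == 0.0:
        return 0
    bits = struct.unpack(">I", struct.pack(">f", float(x)))[0]
    mantissa = bits & 0x7FFFFF          # low 23 bits of an IEEE-754 float32
    # Trailing zero mantissa bits carry no information; count the rest,
    # plus the implicit leading 1 of a normalised float.
    used = 23
    while used > 0 and not (mantissa & 1):
        mantissa >>= 1
        used -= 1
    return used + 1

total_bits = sum(significant_mantissa_bits(w) for row in first_layer for w in row)
print("upper bound on first-layer information:", total_bits, "bits")
```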

Winter July 3, 2024 1:52 PM

@Clive

Actually we are very far from sure on that, even in the general sense.

Actually, we can calculate it quite well, if not exactly, with the Hodgkin–Huxley model.

‘https://en.m.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model
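For anyone who wants to see what that model actually computes, here is a minimal Euler-integration sketch of the standard Hodgkin–Huxley equations in Python, using textbook parameter values; it is illustrative only and not tuned to any particular preparation.

```python
import math

# Standard Hodgkin-Huxley parameters (capacitance in uF/cm^2,
# conductances in mS/cm^2, reversal potentials in mV).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate functions for the gating variables m, h, n.
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * math.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * math.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Integrate the membrane equation with a constant injected current."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # typical resting-state values
    spikes, above, t = 0, False, 0.0
    while t < t_max:
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        V += dt * dV
        # Count upward threshold crossings as "spikes".
        if V > 0.0 and not above:
            spikes += 1
            above = True
        elif V < -20.0:
            above = False
        t += dt
    return spikes

print("spikes in 50 ms at 10 uA/cm^2:", simulate())
```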

When studying neural excitations and perceptual fields, individual neurons are indeed “based on statistically weighted spectrums that are multidimensional and applied filter masks”. Networks of such neurons are still “based on statistically weighted spectrums that are multidimensional and applied filter masks”.

It is just that such networks of filters are most likely Turing Complete computers.
‘https://arxiv.org/abs/1901.03429

JonKnowsNothing July 3, 2024 2:48 PM

@Winter, @Clive

re: @W: [AI] they are working to 32768 tokens. That is a lot of text.

You are conflating numeric volume with context.

There is zero comprehension by AI of context, knowledge, understanding, expertise, or any other word you want to assign to the meaning of “knowing”.

You can string all the words, no spaces, from Shakespeare into one ginormous token set (which is done) and AI will still never understand

  • Juliet on the balcony

Nor can AI ever understand, even though there is an AI powered dictionary for this:

  • Kiazi’s children, their faces wet

It might regurgitate the text tokens but it doesn’t have a clue as to what it means.

It also has just as much likelihood of producing

  • Juliet’s children on the balcony crying for more Kiazi

====

Search terms:

Darmok
Star Trek

Winter July 3, 2024 5:59 PM

@Clive
Re: Turing Completeness of Neural Nets, biological or artificial

My previous link to arguments for the computational power of Neural Nets was not optimal. It was to a preprint and for specific types of network architectures.

I think the one below is more general and has been reviewed and published.

Neuromorphic Computing is Turing-Complete
‘https://doi.org/10.1145/3546790.3546806

The paper does show that the fact that the individual components of Neural Nets are filters with weighted input does not prevent the network from being a Turing Complete computer.

Winter July 3, 2024 6:34 PM

@JonKnowsNothing

You are conflating numeric volume with context.

But this is the literal definition of context [1]:

the text or speech that comes immediately before and after a particular phrase or piece of text and helps to explain its meaning

LLMs extract the “meaning” of a phrase from all the contexts it has been found in and the current context it has to be interpreted in.

[1] The word “context” is derived from the Latin words con (meaning “together”) and texere (meaning “to weave”). The raw meaning of it is therefore “weaving together”.
‘https://arxiv.org/pdf/0912.1838

JonKnowsNothing July 4, 2024 7:39 PM

@Winter

re: LLMs extract the “meaning” of a phrase from all the contexts

On this we will disagree.

Context, in the form of sentence organization, can give Humans clues as to the “meaning” of the sentence, by “inferring meaning” from the position of the unknown word within standard sentence structure of that language.

The inference is only a “possible meaning”.

  • Many humans have had those odd situations where people are exchanging sentences yet are not talking about the same thing at all.

AI has no method to determine “meaning” as in the sense of “knowing”. You can take your “32768 tokens” and never hit anything intelligent or sensible at all. If AI did, it would never hallucinate – ever.

So on the topic of AI derived “meaning”, it remains with the Human to determine if there is any “meaning” at all.

Twas bryllyg, and þe slythy toves

Did gyre and gymble in þe wabe:

All mimsy were þe borogoves;

And þe mome raths outgrabe.

Clive Robinson July 4, 2024 10:22 PM

@ Winter

Re : Neurons, DNNs and no synapses.

As has been noted neurons have been studied in quite some detail and are highly complex in nature. Attempts have been made to reduce them down to mathematical models of which the work of Hodgkin and Huxley was probably the first.

In part it was only possible because they worked on the squid “giant axon” which was sufficiently physically large that electrical activity could be stimulated and measured.

However their model was in effect only a tiny part of the story. In electrical terms they characterised the “conductor propagation” properties and its ability to work as a controlled damped “relaxation oscillator” to signal.

In short it covered only one of the three functions:

1, Communications
2, Storage
3, Processing

That is, their model covers only the mechanisms that give rise to the generation of the “action potential” or “spike”, and the channel characteristics that form the primary high-speed “communication” of information. But not what goes on at the “synapse”, where not just slow communication but other aspects of “storage” and “processing” take place.

Whilst the biological details of the neuron are to put it politely “difficult to get your head around”, the mathematics abstracts much of this away.

The downside of this is much of the biological potential is removed and you get left with the multiple-input single-output model which is what the DSP MAD structure builds.

But the shrug of the shoulders and “so what” response happens because, as noted, most of the information handling in the brain is not in the neuron cell but in the chemical activities at the synapse junction. There the spike signal is converted into a chemical action as the output of one neuron, which is then chemically coupled into the input interface of the subsequent neuron, where it is converted into a new spike signal.

To say synapses are complex and the full chemical-biological detail of them is still unknown would be a bit of an understatement. Whilst some fraction of their characteristics has been superficially put into mathematical formulations, the abstractions used fail to capture the majority of the biological synapse function. And what they do model, they model with at best low degrees of accuracy.

As I’ve mentioned in the past there is in reality no such thing as “digital electronics” in the way most think of logic gates etc. The 1973 McMOS book from Motorola goes into quite some depth on how they are in reality “Analog amplifiers with open loop high gain” that get driven into the rails. The important thing where “most fear to tread” is the issue of “Metastability”, which I’ve also mentioned in the past.

Back in the 1980’s the VLSI guru Carver Mead gave a lot of thought to this. As a result he pioneered quite a number of developments in what was called “bio-inspired microelectronics”. One result was the 1989 publication of his book

“Analog VLSI and Neural Systems”
(Addison-Wesley, ISBN 978-0-201-05992-2)

https://archive.org/details/analogvlsineural00mead/mode/1up

It’s a book you might find of relevance as many consider the work in it to be the foundation stones of neuro-electronics we now call “Neuromorphic Systems”.

He is at pains in the book to separate out the neuron and the synapse and with good reasoning.

Since you brought it up, the basic DNN is actually not Turing Complete, but it is certainly a state machine. Most LLM DNN networks are not Turing complete as such either, nor are they run as such or even designed to be.

The reason is that, as configured, they are not capable of recursion, nor do they have the ability to “write to the tape” storage which forms the multiplier weights.

This falls as such to the “transformer” used during training and, where included, the “adaptive” mechanism that is all too frequently referred to in the press as ML.

Why you brought it up I’m not sure, because being Turing Complete does not as such bring anything particularly new to the table. Also it is something that emerges in most systems where,

1, There is updatable storage.
2, There is a feedback mechanism.
3, There is a storage update mechanism within the state machine actions.

As was shown some years ago in the paper

“mov Is Turing-Complete”

by Stephen Dolan from Ross Anderson’s Cambridge Computer labs,

https://harrisonwl.github.io/assets/courses/malware/spring2017/papers/mov-is-turing-complete.pdf

The Intel iAx86 processor has a “Ghost Turing Engine” due to the way the memory addressing system works …

Yes it could be used as a Turing Engine but it was about the same level of usability as

https://theoutline.com/post/825/brainfuck-coding-languages

And similar “esoteric minimalist languages”. In his paper Steve laid down a challenge to write a compiler that only output mov instructions. And yes somebody took up the challenge, Chris Domas wrote a couple of compilers, and things have since become interesting,

https://esoteric.codes/blog/movfuscator-and-reductio

As it has major security implications.

Because not all such languages are effectively “purposeless”…

As I’ve mentioned before, for my sins back “a long time ago in a place far, far away…” I designed a serial computer that was 1 bit wide, part of which was a highly optimised and fast serial adder. As such its instruction set was very small and it had a fairly specific purpose: to replace a “ladder logic” control system. So not unlike why the Intel 4004 was designed for a calculator,

https://en.m.wikipedia.org/wiki/Intel_4004

So yes, with the three requirements fulfilled even a database query can be made Turing Complete.
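As a toy illustration of how little it takes once those three requirements are met, here is a minimal interpreter for the esoteric “brainfuck” language linked above. The interpreter is my own sketch, not anything from the cited papers, and it skips input handling and error checks.

```python
# Toy interpreter showing the three ingredients: a writable tape (storage),
# bracket loops (feedback), and an update rule driven by the current state.
def run_bf(program: str, tape_len: int = 30000) -> str:
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    # pre-match brackets so '[' / ']' can jump (the feedback mechanism)
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">":   ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256   # writable storage
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

# prints "Hi": cell 0 reaches 72 ('H'), then one more increment gives 73 ('i')
print(run_bf("+" * 72 + "." + "+."))
```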

Winter July 5, 2024 3:58 PM

@Clive

Why you brought it up I’m not sure, because being Turing Complete does not as such bring anything particularly new to the table.

It was to show that telling us a LLM is just “based on statistically weighted spectrums that are multidimensional and applied filter masks” does little to help us understand what they can and cannot do. Just as saying a CPU is a collection of “Analog amplifiers with open loop high gain” does nothing to help understand what a CPU is capable of.

Since you brought it up the basic DNN is actually not Turing Complete but it is certainly a state machine.

Every real existing computer is finite and thus formally a finite state machine. But almost every real existing computer easily has more possible states than can be visited during the lifetime of the universe. So the “finite state machine” aspect is completely uninformative.

Max July 5, 2024 5:24 PM

I think this is important for a number of reasons.

  1. I think it is a waste of a world-class expert’s time to moderate snarky comments.
  2. The squid threads, while interesting, can get quite chippy, with some posters bullying others even when the OP has specific inside institutional knowledge on a topic and history ultimately proves that knowledge to be correct.
  3. There are a lot of very smart people who post here but where opinions differ it can easily tend to snark. This has kept me off the site for long stretches.
  4. I think what Bruce does here is unique and amazing. I personally trust that he will form a commenting process that is more orderly and imbues respect for all parties.

JonKnowsNothing July 6, 2024 1:49 PM

@Clive

re: Interview: AI extraction machine

An interesting interview with the co-authors of the book “Feeding the Machine: The Hidden Human Labour Powering AI”.

The interview is only a surface Q&A about the undisclosed or hidden costs of AI systems:

  • [AI is an] ‘extraction machine’, exposing the repetitive labour, often in terrible conditions, that big tech is using to create artificial intelligence
  • AI feeds off the work of human beings

In the common view, we see AI scavenge the public facing aspects of the internet: music, images, text, documentation, books, essays, forum exchanges and shared knowledge bases.

In this interview the focus is on the AI back end that remains hidden from the public-facing dialog. All that data needs to be annotated. The work is outsourced down several pipelines, ending up in the most impoverished areas of the globe. It travels along the “old east African railway”, now replaced by fiber optics.

This:

  • 1 hour of video data requires 800 human hours of data annotation ~ $1 per hour

The Amazon AI system, their supply chain organising technology, has automated away the thinking process, and what the humans are left to do in an Amazon warehouse is this brutal, repetitive high-strain labour process.

You end up with technology that is meant to automate menial work and create freedom and time, but in fact what you have is people being forced to do more routine, boring and less skilled work.

The authors acknowledge that the Tech Oligarchs don’t give a shyte about much, and that it is the consumers who learn of these excesses who can make a dent in this system by leveraging local, national, regional and global cooperative efforts.

Search terms

theguardian

Mark Graham

Callum Cant

James Muldoon

Winter July 6, 2024 1:50 PM

@JonKnowsNothing

AI has no method to determine “meaning” as in the sense of “knowing”.

I have no idea what definitions you use here for “meaning” and “knowing”. There is this thing about a dictionary or encyclopedia being words defined with words. And there is the problem of “grounding” some words in sensory experiences.

But for most concepts, it is words linking to words. And AI can currently also use images and videos to define words.

And context is still just text around a word or phrase, not sensory experience.

Winter July 6, 2024 3:55 PM

@Clive
Re: complexity of neurons and synapses.

Neurons and synapses, like all biology, are quite intricate systems. Most of the complexities are needed to keep the systems working. “Intelligence” is a function of the number of operations per unit time and information bandwidth. That is, the number of neurons, the number of connections and synapses, and the number of operations per second (= action potentials = metabolic power).

The details of the neurons and synapses are only important in as far as they affect the number of connections and operations per second. All things that can be emulated in Deep Neural Networks.

Clive Robinson July 6, 2024 10:01 PM

@ Winter, Jonknowsnothing, ALL,

Re : Spectrums of vectors, and Filters.

“LLMs extract the “meaning” of a phrase from all the contexts it has been found in and the current context it has to be interpreted in.”

Err no the LLM does not. At a 20,000ft level it’s a two step process,

1, Build the DNN
2, Query the DNN

The LLM that most people see is the stand alone DNN embedded in a user interface.

The first step of which can be viewed as a two step process,

1, Build the vectors
2, Form the DNN from the vectors

Of which the first step can be viewed as a several step process of,

1, Pull in subset of corpus
2, Verify input
3, Tokenise input,
4, Build vectors.

It’s this fourth stage where all the supposed “magic” happens. If we assume the token is a word or sub-word, then it has a definition that can be turned into a vector of scalars, where each scalar is a spectrum by which a necessary attribute of the token can be quantified, with irrelevant or unnecessary attribute scalars set to a null value (which speeds up evaluation).

The selection of the scalars, of which there might be a thousand or more in each token vector, is where a resulting DNN is made or broken, and it contributes quite a bit to the hallucination problem, or the “GO of GIGO”, because it underlies the GI issue.

There is a spectrum on which the method used to select the attribute scalars/spectrums falls. One end is “Human built” the other is “statistically built”. But it is very rare to find any selection process to be one or the other because they are both “deficient” in different ways.

You might want to read a “Morning Paper” from Adrian Colyer[1] about the leading edge of research papers on what was then called NLP from a decade ago,

https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/

It will explain a couple of things: firstly, why there are actual real-world physical limits on the making of the DNNs in LLMs, and secondly, why they are not magic, just Cosine/RMS systems over very many spectrums / dimensions. So just boring simple statistics, not intelligence.

But a simpler and more recent overview is possibly a better place to start,

https://saschametzger.com/blog/what-are-tokens-vectors-and-embeddings-how-do-you-create-them
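To make the “Cosine … over very many spectrums / dimensions” remark concrete, here is a toy sketch; the three words, the attribute dimensions and their values are invented for illustration, whereas real embeddings have hundreds or thousands of learned dimensions.

```python
# Toy word vectors compared with plain cosine similarity: "meaning" reduced to
# a handful of made-up attribute scalars, then nothing but arithmetic.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

embeddings = {                    # [royalty, person, gender, fruitiness] -- invented
    "king":  [0.9, 0.8,  0.3, 0.0],
    "queen": [0.9, 0.8, -0.3, 0.0],
    "apple": [0.0, 0.0,  0.0, 0.9],
}

print(cosine(embeddings["king"], embeddings["queen"]))   # ~0.88: shared attributes
print(cosine(embeddings["king"], embeddings["apple"]))   # 0.0: nothing shared
```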

[1] I’ve mentioned Adrian Colyer before on this blog when he was writing his blog “the morning paper”. He used to write daily about one or more new papers in Computer Science. Like me he was based in the UK and had a one-hour commute each way to fill, which he spent in effect reading and making notes. However, due to C19 “lockdown” and other issues he had to stop, which is a shame. His blog is still up and available as a resource and I would still recommend reading it, especially if you are starting out on post graduate work with the aim of doing research.

MarkH July 7, 2024 10:46 PM

@Bruce:

I’m personally grateful for your many contributions, not least the obviously burdensome moderation of comments here.

This is the first case I’ve seen personally of an online forum whose descent into the maelstrom has been arrested and reversed! It was really distressing to witness the decay.

Fewer-but-better is an excellent tradeoff.

Clive Robinson July 8, 2024 3:19 AM

@ JonKnowsNothing,

Re : Fluidic computing.

You might find this of interest,

https://royalsocietypublishing.org/doi/10.1098/rstb.2018.0372

It’s a history of calculating and logic devices based around the use of fluids or fluid paths and the gradients created, moving on to even more curious devices.

One or two I’ve seen in the London Science museum like the fluid “computer” used to model the economy, others I’ve only heard about.

I was aware of the use of biological components like “slime moulds” decades ago, having seen as a young lad how the mycelium of fungi spread along the underneath of rotting planks. As for the poor slime mould, it became a bit of a joke amongst a group of people I used to know. And the use of ants was something that got discussed over an outdoor lunch with one of them, a well known author who referenced it in one of their books.

But do you remember that Scientific American article last century about DNA being used to do massively parallel computing? That was used to solve an “optimization problem” by the cryptographer Leonard M. Adleman,

https://www.scientificamerican.com/article/computing-with-dna/

But if you look carefully at Fig 3c in the Royal Soc article you will see an excuse for adults to keep “working with” Lego “for research” 😉

But sadly as with other technology Section 10 points out there is a “dark side” use for the technology.

Winter July 8, 2024 4:10 AM

@Clive

Err no the LLM does not. At a 20,000ft level it’s a two step process,

Re: reductionism

When you reduce a system to its components, you are able to study and understand the components. But to understand the system, you have to reassemble the components into a system.

Nothing you write about the components of a LLM allows us to understand why a LLM can converse coherently and answer questions in a seemingly natural, and often even factually correct, way. The same goes for other complex ML capabilities like image and speech recognition. The often-cited statement about “stochastic parrots” also does absolutely nothing to help us understand how we can converse coherently with an LLM.

Moreover, the same reductionist approach works on the neural networks in human brains. There too it lets us understand how the individual neural assemblies work, but not how we can speak or write coherent comments to blog posts.

Clive Robinson July 9, 2024 8:34 AM

@ Winter, ALL,

Re : Magic thinking does not build functional systems.

You say,

“When you reduce a system to its components, you are able to study and understand the components. But to understand the system, you have to reassemble the components into a system.”

What you conveniently forget to mention is that the assembled system can not do more than the foundation components allow.

Which is a point you appear to be not cognizant of, or are trying to ignore.

The basic DNN in LLMs does not possess a system that allows recursion, as I’ve explained, so it is not “Turing Complete” nor can it be, which debunks your earlier comment to that effect.

That is why the basic DNN in some systems is augmented with a “transformer layer” that gives the feedback mechanism that can adjust the DNN weights just as a DSP “adaptive filter” does. It’s this that gives some semblance of ML capability. But it’s certainly not “intelligence” or the claims originally espoused in the AGI babblings.

Which is why this comment of yours is wrong,

“Nothing you write about the components of a LLM allows us to understand why a LLM can converse coherently and answer questions in a seemingly natural, and often even factual correct, way.”

Actually they can “appear to” depending on what the training data held, and the questions you ask.

But ask a question outside of the training corpus and that’s when you get “hallucinations”, as they are incorrectly called, or as more recently explained, “soft bullshit” (yes BS is now a “term of art” in a scientific “research domain”).

https://link.springer.com/article/10.1007/s10676-024-09775-5

There is that “adaptive” difference between LLM and ML which is why I and others treat them distinctly and separately.

I can explain why an ML system can as you put it “converse coherently…” But I’m not here to write massive monologues.

Further I can explain why you are being “bedazzled and beguiled” rather than just understanding what LLMs and ML systems do.

But first you need to understand and accept the fallacies behind the Turing Test, and the implication of Searle’s Chinese Room and what it demonstrates.

Because at the moment you are applying “magic thinking” to LLM and ML systems of the sort that in effect says,

“If we make it complex enough intelligence will happen as an emergent property.”

It won’t as indicated because the base foundation components have to allow it. And currently the DSP arrangements in DNN’s being used will not allow it.

Worse that kind of thinking is the same as,

“If we can not explain it then God must have created it.”

And we’ve seen what kind of nonsense that causes, which a study of history clearly indicates is a very bad way for mankind to go.

Will we get Artificial Intelligence, I would say,

“Yes, but not down the cul-de-sac we are currently walking into.”

Winter July 10, 2024 12:08 PM

@Clive

The basic DNN in LLM’s does not possess a system that allows recursion as I’ve explained, so it is not “Turing Complete” nor can it be, which debunks your earlier comment to that effect.

The proof of Turing completeness was for Neuromorphic Computing, which is more general than LLMs. See link above.

Also, LLM training is recursive, albeit somewhat indirectly via error back propagation. It is the inference part that is not recursive. But there are other architectures that are recursive.
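A stripped-down illustration of that split, assuming nothing beyond a single weight and toy data (my own sketch, not an LLM): training loops and rewrites the weight from the fed-back error, while inference is a single pass with the weight frozen.

```python
# Toy gradient descent on one weight: the training loop is the "recursive"
# part (error fed back, storage rewritten); inference is a single forward pass.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # learn y = 2x
w, lr = 0.0, 0.05

for _ in range(200):                 # training: iterative weight updates
    for x, y in samples:
        error = w * x - y            # forward pass, then the error...
        w -= lr * error * x          # ...is propagated back to rewrite the weight

print(w)                             # ~2.0 after training
print(w * 5.0)                       # inference: one pass, no weight change
```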

Clive Robinson July 11, 2024 5:17 AM

@ Winter,

Re : Incorrect thinking does not build functional systems.

“It is the inference part that is not recursive”

You might want to think that statement through again.

Because by definition

“An inference is an idea or conclusion that’s drawn from evidence and reasoning. An inference is an educated guess.”

https://www.vocabulary.com/dictionary/inference

An inference is implicitly a recursive process as it’s based on “reasoning”, which is a recursive process, and “evidence”, which is knowledge gained by tested evaluation, which is again a recursive process.

Likewise “educated” is a process of learning by “training” which is again a recursive process. As you yourself point out,

“Also, LLM training is recursive, albeit somewhat indirect by error back propagation.”

By the way, that “error back propagation” is what the “Transformer Layer” added to a DNN carries out, and also in a more general sense what those less-than-a-couple-of-dollars-an-hour humans do on detected errors.

Winter July 11, 2024 11:14 AM

@Clive

Re : Incorrect thinking does not build functional systems.

Maybe I can reformulate the principles more clearly.

Neural Nets, artificial or biological, of sufficient dimensions can approximate any computable function. This is called the universal approximation theorem. See:
‘https://en.wikipedia.org/wiki/Universal_approximation_theorem

This has been proven for many different ANN architectures.

A Universal Turing Machine is a computer that can emulate any computational function. The equivalence seems obvious.

This is obviously just an illustration, not a proof, but you can find the relevant proofs in my link above and the Wikipedia pages.
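As a small numerical illustration (again not a proof, and with arbitrary choices of width, range and target function): one hidden layer of fixed random ReLU features, with only the output layer fitted by least squares, already approximates a smooth target reasonably well, and the error shrinks as the layer widens.

```python
# Nod to the universal approximation theorem: random ReLU features plus a
# linear readout approximate sin(x) on [-3, 3]. All choices here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)                                   # target function to approximate

hidden = 50                                     # more units -> better approximation
W = rng.normal(size=(1, hidden))                # random, fixed first-layer weights
b = rng.uniform(-3, 3, size=hidden)             # random, fixed biases
H = np.maximum(0.0, x @ W + b)                  # hidden ReLU activations

coef, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit the output weights only
err = np.max(np.abs(H @ coef - y))
print(f"max |error| over the grid: {err:.3f}")  # shrinks as `hidden` grows
```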

JonKnowsNothing July 11, 2024 12:13 PM

@Clive, @Winter, All

re: AI building non functional systems

Part of this discussion is based on “what do we expect as output” from such systems.

In basic computer systems we expect “correct behavior” as defined by the PRD and System Designers; such that if the “correct behavior” expected is AnswerA that is the result we attempt to achieve.

In AI systems, correct answers are not required and are even penalized in some aspects due to how the systems determine what is a correct output; such that a query to return Fruit As Apple results in Red Apple, Green Apple and Pineapple.

In this latter aspect, the AI-correct output of a system is totally acceptable and produces Pineapple in the output results. It is not the answer humans accept, although they might find it hilarious when William Tell shoots a Pineapple off of the head of his son.

All (afaik) human languages have grammar and organization. These are not the same between languages and ethnicities; however, there are required word-order patterns. In English we have adverbs and adjectives, which are represented by the Token Set in AI for their “inference” substitutions. A data-token-set designated as an adverb will fit into any adverb position in an English sentence.

  • For those of you who survived Sentence Diagramming these are the words that are on the slanted part of the layout.

In AI it doesn’t matter what word it picks as long as it fits into the correct slant line. There is a statistical component that indicates the rarity of the word and a weighted average used to pick a more likely one (Sherlock Holmes The Adventure of the Dancing Men). Red is selected over Purple because the averages on Red+Apple are higher and there aren’t many use-cases for Purple+Apple whereas Purple+Cow is a common expression.

This is the limitation and eventually the failure of AI as currently designed. The difference between what “appears” to be OK and what is “nonsense” or worse “dangerous” outputs.
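A toy version of the weighted pick described above; the co-occurrence counts are invented purely for illustration, and real systems use learned weights over huge corpora rather than a lookup table.

```python
# Pick the modifier with the highest co-occurrence weight for a given noun.
# The counts below are made up; they only illustrate the "weighted average" idea.
cooccurrence = {
    ("red", "apple"): 950, ("green", "apple"): 700, ("purple", "apple"): 3,
    ("purple", "cow"): 400, ("brown", "cow"): 120,
}

def pick_modifier(noun: str) -> str:
    candidates = {adj: n for (adj, nn), n in cooccurrence.items() if nn == noun}
    return max(candidates, key=candidates.get)

print(pick_modifier("apple"))   # "red"    -- outranks the rare "purple apple"
print(pick_modifier("cow"))     # "purple" -- the common expression wins here
```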

RL Difference tl;dr

In most book marts there are lots of books on horseback riding. Some are picture books for children enchanted by the mere thought of horses and ponies and some are thick scholarly tomes on the topic which take decades at best to absorb their meaning.

The main reason books fail beyond the basic information is that horse riding is something that cannot be learned from a book. It is by direct experience that one learns to ride. Scholarly tomes are attempts at describing intricate movements and muscle tensions in word format but they cannot be realized except by enacting them on a horse.

When people asked me to teach them to ride and wanted to know how long it would take, I would tell them that in 2-3 days I can teach you to sit on a horse (walk trot canter); but it will take a “lifetime to learn how to ride”.

  • AI is at best a 3 day riding lesson; an overview from a picture book
    • AI cannot actually teach you anything you do not already know

===

Search Terms

William_Tell

Shooting an apple off one’s child’s head

Sentence diagram

Purple Cow

Lt. Col. D’Endrödy was a member of the Hungarian Olympic Three-Day Event Team and a member of the Hungarian International Show Jumping Team. As British national team coach at the 1956 Stockholm Equestrian Olympics, his team won a gold medal. In 1959, he wrote his book “Give Your Horse a Chance” in England, one of the most outstanding works in the international horse literature.

Winter July 12, 2024 10:49 PM

@JonKnowsNothing

Part of this discussion is based on “what do we expect as output” from such systems.

Indeed. Examples for DNNs are: the words that were said, the contents of an image, the identity of a speaker, a face, or a fingerprint. But it can also be the answer to a question, an image fitting a description, a summary of a text, or the sentiment of a comment.

In those cases there is no well defined “correct” answer, only a “likely” or “most likely” answer. That is, any answers have to be rated using some kind of (Bayesian) statistics.

such that the query to return Fruit As Apple results in Red Apple, Green Apple and Pineapple.

This depends entirely on how the system is trained and what input data is used. For instance, LLMs only look at the contexts tokens are used in. If the input data show pineapples appearing in different contexts (e.g., pizzas) than apples (e.g., pies), then they will not appear easily in the same answers.

If you want a system that takes note of the biological taxonomy of plants, then the training data and procedures should support that.

Red is selected over Purple because the averages on Red+Apple are higher and there aren’t many use-cases for Purple+Apple whereas Purple+Cow is a common expression.

You are now generalizing simple Markov chains to DNNs. That does not work. Markov chains are unable to capture the syntax or semantics of human language. DNNs can capture the syntax of human language. [1]

[1] Markov chains capture regular grammars; human languages have context-sensitive grammars. In computational terms, these grammars are recognized by, respectively, finite state automata and linear bounded automata.
‘https://www.google.com/amp/s/www.geeksforgeeks.org/chomsky-hierarchy-in-theory-of-computation/amp/
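A small illustration of the footnote’s point, under the simplifying assumption that counting nested dependencies (a^n b^n) stands in for richer syntax: a device with a fixed, finite amount of state (the Markov-chain / finite-automaton case) loses track once n exceeds its state budget, whereas a machine with unbounded working storage does not. (a^n b^n is context-free rather than context-sensitive, but it already lies outside the regular languages.)

```python
# Unbounded counter vs. a bounded one standing in for a finite-state device.
def accepts_anbn_with_counter(s: str) -> bool:
    """Unbounded counter: recognizes a^n b^n exactly."""
    count, i = 0, 0
    while i < len(s) and s[i] == "a":
        count += 1
        i += 1
    while i < len(s) and s[i] == "b":
        count -= 1
        i += 1
    return i == len(s) and count == 0 and len(s) > 0

def accepts_with_k_states(s: str, k: int = 8) -> bool:
    """A finite device can only distinguish up to k counts of 'a'."""
    count, i = 0, 0
    while i < len(s) and s[i] == "a":
        count = min(count + 1, k)       # counter saturates: information is lost
        i += 1
    while i < len(s) and s[i] == "b":
        count -= 1
        i += 1
    return i == len(s) and count == 0 and len(s) > 0

good, too_deep = "a" * 4 + "b" * 4, "a" * 12 + "b" * 12
print(accepts_anbn_with_counter(good), accepts_anbn_with_counter(too_deep))  # True True
print(accepts_with_k_states(good), accepts_with_k_states(too_deep))          # True False
```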

FR July 15, 2024 6:41 AM

Sad to see the comments on the weekly squid posts deactivated. I was always looking forward to combing through them on slow afternoons at work.

On the other hand, glad to see you taking action against toxicity and hateful speech!

Chris Drake July 16, 2024 1:09 AM

This sounds like a no-brainer (literally) for an A.I. to handle.

Head over to hugging-face (or find someone to help you) and grab the latest model, fire it up on an old gaming PC with a decent graphics card, and you should be able to quickly tell it how to score comments (is it on-topic? is it hate? is it spam? …) then leave it running with an agent to take action based on those scores.

Remember to practice threat-deception – do not let people who comment know that you erased their comment, or else they’ll just come back and keep trying until their garbage shows up.
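For what it’s worth, a rough sketch of that kind of scoring with the Hugging Face transformers pipeline; the model name below is just one publicly available toxicity classifier, the label names and the 0.5 threshold are assumptions to adjust for whatever model is actually chosen, and no automated score replaces the moderator’s judgement.

```python
# Sketch of comment pre-screening with a Hugging Face text classifier.
# Model choice, label names and threshold are assumptions, not recommendations.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def hold_for_moderation(comment: str,
                        flagged_labels=("toxic",),   # adjust to the model's labels
                        threshold: float = 0.5) -> bool:
    """Return True if the comment should be held rather than auto-published."""
    result = classifier(comment[:512])[0]            # crude character truncation
    return result["label"].lower() in flagged_labels and result["score"] >= threshold

print(hold_for_moderation("Thanks for the thoughtful post."))
print(hold_for_moderation("You are an idiot and your blog is garbage."))
```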

Moz July 20, 2024 12:03 PM

I don’t know what to say about this because I feel personally devastated. I came looking for the Squid comments after the recent outage and find nothing of value in the Blog right now. I guess let’s try:

I have learned very very much from the comments on the Squid section. So first:

  • thanks Clive for the comments over the years they have been great, have made me think and have made me learn
  • thanks Bruce for having run this. I wish you could have found a way to hand it on and not feel forced to kill it.
  • thanks everyone else for many valuable comments that were often interesting and much better than other places where there were comments and for the questions that brought more out of Clive

Secondly, I feel this is a terrible defeat. I have come to realize that there is a huge value in long running old communities of discussion. There is a tacit knowledge that builds up and a value in comments that come from people like Clive who have built a reputation over years. This value is not just in Clive alone, but also the knowledge that if there’s something missing it will be called out and Clive will give a clear answer.

The anonymity here has allowed interesting and important comments that would never have happened in a more controlled forum. That was also critical to the sum of the value of the comments. The toxicity was unfortunately inevitable.

There are real conscious reasons to destroy that community. Many people will benefit from that. Few of those people are good or have the interests of humanity in mind.

This is a security failure. We have been defeated by those that wish the internet and even humanity ill. I hope we all acknowledge that. I would propose that everyone considers whether there is another place that this community could agree to move to. Without that, the same community will not grow up again and a little pearl of wisdom in the world will be lost forever.

Clive Robinson July 25, 2024 7:24 AM

@ Moz, ALL,

Re : End of a freedom.

With regards,

“This is a security failure. We have been defeated by those that wish the internet and even humanity ill.”

I saw it coming “in a way” in what was happening years ago. It started with certain people making personal, not objective, criticism of an individual as a singular agenda. But never making anything approaching a societal contribution that was constructive, relevant, or original.

Their intent back then was to damage reputations in the eyes of casual readers[0]. This in turn encouraged others to “go scalping” for what I assume was petty bragging rights in small cliques of mostly irrelevant interest to the greater majority.

For a while such behaviour was popular and mostly based on fake comments that got drummed up by others “looking for topics” to write up or gain their 15 seconds of fame. It did a lot of harm for a while until others started to realise it was possible for those of ill intent to create smoke and noise without there actually being a fire. But in the process they did burn a lot of people.

The supposed right of “free speech” is a precious gift to be used wisely. Like all tools it is a double edged sword that can be used for good or bad. Misuse to polish a personal ego or gain fame or notoriety via a baseless agenda –which is what all too many have tried to use it for, often infamously so– damages free speech with each such use in the eyes of society.

Which is why over the years Judges have decided there are limits to “free speech” which is why even school kids used to get told “Shouting fire in a crowded auditorium” is not free speech. In part because of the considerable harm it does to innocent people.

The principle behind this cautionary behaviour is much as it is with the early professions’ “First do no harm”. Unfortunately the lack of dealing with such bad behaviours led up to the point where it “became a thing” on Social Media, one name of which was “Cancel Culture”[1].

Whilst originally providing an outlet for those with a genuine need to “make public”, which Free Speech allows, it has quickly become not just tainted, but weaponised for politics and personal agendas, as will no doubt become abundantly clear over the next few months.

Whilst “political idiots” / “talking heads” espouse the power of AI to “moderate”, they clearly do not understand that the current LLM and ML systems really cannot do such a job, nor are they ever likely to. Because the LLMs are not in any way “intelligent”, nor are they in any way “predictive”, just grossly laggingly “responsive” at inordinate cost/harm.

But still worse, the cost of keeping these “Deep” “Digital Neural Networks” (DNNs) even remotely up to date via ML is inordinately resource hungry, almost beyond measure. Hence the use of the less expensive “sweat shop labour” to add layer after layer of filtering, which actually makes the resource usage issues worse.

Whilst it was clear to me that the “agenda / fame” nonsense was going to rise, and I gave repeated warning, it turns out others were thinking in this area both more intently and academically. One such is a fellow at the Berkman Center, Dr. Aaron Shaw[3], who back over a decade ago in 2012 had a paper published,

“Centralized and Decentralized Gatekeeping in an Open Online Collective”

https://www.researchgate.net/publication/258174798_Centralized_and_Decentralized_Gatekeeping_in_an_Open_Online_Collective

Even though based on observations of behaviours back in 2008, it is still an interesting read on the non-technology, effectively social, aspects of the subject of moderation. From it, it can be seen why even forms of AI other than the current LLM and ML systems, or those likely to spawn from them, will not function as effective moderation either.

Which should not be at all surprising, as the past several millennia of human history show that trying to “social censor” against society, no matter how draconian the enforcement, in the end always fails to human ingenuity, be it by the likes of in-channel satire, innuendo, or more subtle means. Or to the simple fact that control of one channel is insufficient: like ink on a wet page, information bleeds across from line to line and even from page to page.

But all that aside, learning is a two way process, because multiple view points give rise to thoughts, considerations and questions that might not have otherwise arisen. Thus this blog has been very much a collective effort. And those that have read our host’s writings can see the influence this blog has had in turn on him, and on other well known academics. Some of whom, Ross J. Anderson and Nicholas Weaver, were regular contributors, with others like Moti Yung reading and occasionally commenting.

The thing is, as was once noted, the simple and innocent question of a child, if considered, can give rise to a lot of deep thinking. Such as “Why is the sky blue?” or “Why are clouds white?” or the all time favourite subject of many children, “What is a rainbow?”. If we truly know the answers then we can tell or show children the answers in ways that will encourage them to understand more themselves and so encourage others. Whilst the world appears to be magic, that is superficial; a deep understanding does not rob the world or the individual of charm, it enables them to go on and create magic for following generations.

Sadly the “new policy” of the likes of Google is not to make the world available to all any longer. They have decided that you will be led astray by their choices, unless you know enough to be able to push through their bias to what is out there.

Whatever their supposed “corporate reasoning”, we can make an assumption it actually has to do with that triad of “Money, Politics and Power” that acts as the tools of control. It brings forth all the human evils we associate with Kings and Bishops and their unsavoury efforts to keep people in oppression and subjugation, not for the good of society no matter how much they might claim, but for the evils that they so revel in.

This blog has acted as a place of knowledge where the triad of tools got ignored, and the control others sought not just questioned but challenged and railed against.

[0] And now of course, as this blog has been scraped into current AI LLM systems, that ill intent has apparently spread out in a way “search engines” could not or did not previously do.

[1] In her 2022 essay / e-Book “Cancel Culture: A Critical Analysis”,

https://link.springer.com/book/10.1007/978-3-030-97374-2

Academic author Eve Ng[2] pointed out in it, in her definitions of the terms “Cancelling” and “Cancel Culture”, that both are the practice of nullifying or cancelling someone or something by a process of “speaking out” or “shaming”. That is, in someone’s eyes they have done “bad” in some way, hence the speaking out about them. Be they an individual, a group of individuals, an organisation, a commercial or other brand, or as we are currently seeing even entire nations. By so called “Public Shaming” in what is often not a fair forum, and seeking to dictate the surrounding commentary about the alleged bad / wrong doing.

In short what first started out as a “social good” has quickly become a weapon of politics and in many places a “social bad”.

You can get a feel for the drivers behind the essay / e-Book with,

https://m.youtube.com/watch?v=95JuPhRjDn4

So to be honest I don’t recommend it as a general read because it becomes somewhat tainted and biased by what reads like cognitive bias or a cherry picking process.

[2] Associate Professor Dr. Eve Ng was, at the time of writing her e-Book / essay[1], based in the School of Media Arts and Studies at Ohio University, USA, as well as in the associated Women, Gender, and Sexuality Studies programs, with a focus on the internationalisation of minority groups in small nations via online means to find/make creative spaces.

She has the problem that there is a computer security suite “EVE-NG” that coincidentally shares her name, which makes searches via the “main engines” a bit difficult.

[3] Professor Aaron Shaw is based at the Department of Communication Studies at Northwestern University and is also a fellow at the Berkman Center for Internet and Society at Harvard University.

As part of his research into the dynamics of large online communities and the participation, mobilization, and organization of individuals and groups surrounding them, carried out back in 2008, he published in 2012 the paper,

“Centralized and Decentralized Gatekeeping in an Open Online Collective”

Which details some of the issues of what we would now call “Comment Moderation”.

Evan August 5, 2024 4:02 PM

I realize this is a very late response but the topic required a lot of contemplation. We all experience this on many platforms. I understand the desire for open anonymous comments. It does encourage free thinking; at least in theory. Then again, some free thinking is less than positive and beneficial.

I would love a social experiment. If you had the time or resources, allow two different comment feeds and let the market decide. One feed would be the moderated anonymous feed. The second would be registered non-anonymous. Anyone can see and comment in the anonymous feed. Only registered users could comment in the non-anonymous feed. Where would users go for thoughtful analysis?

There would still be a choice to allow everyone to read the non-anonymous feed or maybe part of the experiment would be to restrict the reading to registered users.

It would be interesting to see how the market (the public users) would self select. Would the non-anonymous feed be more relevant? Does an anonymous feed actually encourage more free thinking, or is that a myth? (Yes, this would be subjective, although incivility is fairly objective.)

Your change is sad. Maybe it doesn’t need to feel like a defeat. Your past 20 years could be considered an exceptional success. Let’s hope some of this uncivil current reality is a short blip.

maddaeachother September 6, 2024 5:57 PM

all this reminds me of karl popper’s paradox of tolerance. i don’t remember well enough but it was something like tolerating intolerance and reaching a threshold…
