Wikipedia:Bot requests/Archive 87


Bot Request to Add Vezina Trophy Winners Navbox to Relevant Player Pages

I would like to request a bot to automatically add the {{Vezina Trophy Winners}} template to all player pages that are currently listed in the Category:Vezina Trophy winners.


The template is already created and can be found at: Template:Vezina Trophy Winners.


Details:


1. The bot should check all pages within Category:Vezina Trophy winners.

2. For each page, if the {{Vezina Trophy Winners}} template is not already present, the bot should add it to the bottom of the page.

3. The template should be placed in the Navboxes section (before any categories or external links) on each player’s page.


Rationale:

This will ensure consistency and streamline the process of displaying the relevant information across all Vezina Trophy winners’ pages without having to manually add the template to each page. This will also make it easier to update the navbox in the future without needing to edit each individual page.
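For illustration, a minimal Pywikibot sketch of the task might look like this (an assumption-laden sketch, not the actual implementation: it naively appends the navbox at the end of the wikitext, whereas a real run would place it above the category links as requested in point 3):

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    cat = pywikibot.Category(site, 'Category:Vezina Trophy winners')

    for page in cat.articles():
        text = page.text
        if '{{Vezina Trophy Winners' in text:
            continue  # navbox already present, skip this page
        # Naive placement: append at the end of the wikitext. A production task
        # would insert the navbox above the category links instead.
        page.text = text.rstrip() + '\n\n{{Vezina Trophy Winners}}\n'
        page.save(summary='Adding {{Vezina Trophy Winners}} navbox', minor=False)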


Please let me know if more information or clarification is needed. Thank you! 108.51.96.36 (talk) 23:46, 18 October 2024 (UTC)

altering certain tags on protected pages?

I recently read a comment from someone regarding them seeing the {{expand}} tag in an article, and wanting to go ahead and work on the section, only to discover that it was protected and they could not do so. Their feeling, which is understandable, is that it was discouraging to reply to a request for help, only to find that their help was not welcome. There is of course usually a lock icon on the page but not everyone knows to look for that or what it means.

I note that User:MusikBot II removes templates from pages where protection has just expired, and is an adminbot and can therefore also edit protected pages. I'm curious if it seems feasible/wise to have it (or some other bot) make some sort of modification to the expand template, and perhaps other similar templates, to reflect the current protection level and suggest using the talk page to propose edits? And of course it would undo those edits upon the expiration of protection. (as always with the caveat that I know nothing about bot coding) Pinging @MusikAnimal: as bot maintainer, but any and all feedback is of course welcome. Just Step Sideways from this world ..... today 22:25, 6 October 2024 (UTC)

@Just Step Sideways: Another way would be to have the {{expand}} tag automatically detect the protection level (which I know is possible, but I wouldn't know how to implement it) and alter its message. Rusty 🐈 22:37, 6 October 2024 (UTC)
Yes, altering the template would be better; templates like {{rcat shell}} already have this functionality using magic words. I'll paste the relevant code below if someone wants to sandbox something. Obviously it will be different to make it inline but the general gist of using the #switch will be the same. For example, a #switch in {{Expand section}} could change the text from "You can help by adding to it" to "You can make an edit request to improve it". Primefac (talk) 10:30, 7 October 2024 (UTC)
{{if IP}}, {{if autoconfirmed}}, and {{if extended confirmed}} should also be useful here. jlwoodwa (talk) 03:36, 14 October 2024 (UTC)
PROTECTIONLEVEL code
{{#switch: {{PROTECTIONLEVEL:edit}}
   |sysop={{pp-protected|small=yes}}{{R fully-protected|embed=yes}}
   |templateeditor={{pp-protected|small=yes}}{{R template protected|embed=yes}}
   |extendedconfirmed={{pp-protected|small=yes}}{{R extended-protected|embed=yes}}
   |autoconfirmed={{pp-protected|small=yes}}{{R semi-protected|embed=yes}}
   | <!--Not protected, or only semi-move-protected-->
}}
I agree automating this without the bot is preferable. The bot can and does add protection templates to pages like this, but I believe it didn't here because this template was protected long before the bot was introduced. MusikAnimal talk 15:35, 7 October 2024 (UTC)

Replace merged WikiProject template with parent project + parameter

WikiProject Reference works has been merged as a task force of WikiProject books. The referencework= parameter has been added to the Template:WikiProject Books banner template to indicate if it applies to the task force, so now all usages of the former Template:WikiProject Reference works need to be replaced with the books template with the task force parameter. When something similar was done with a previous project, it was done by bot (though I forgot what bot), could this be done again? Or is there another efficient way to do this? Also, there will be some duplicates, since some are tagged with both. The books project doesn't use importance and many articles tagged with ref works don't have it so the importance parameter on the old banner should be discarded not transferred. Thanks! PARAKANYAA (talk) 16:58, 16 October 2024 (UTC)

I can help with this. – DreamRimmer (talk) 14:09, 17 October 2024 (UTC)
Same; my bot is set up to handle these, and I thought it was TheSandBot that actually had a specific task for this but maybe it was actually Kiranbot... Looks like ~1500 pages where it will need to be folded into the Books banner, after which it can just be converted into a wrapper and autosubst by AnomieBOT. Primefac (talk) 16:00, 17 October 2024 (UTC)
Thanks for the info. Please feel free to handle this. As there wasn't a response in a reasonable time, I thought I should step in to help. – DreamRimmer (talk) 16:29, 17 October 2024 (UTC)
@Primefac Your bot did the job perfectly, thank you! Sorry for the annoyance, but could you do the same with another merged-into-task-force project? WP:TERROR was mostly inactive, and the very few active editors reached a consensus to merge, see this discussion. {{WikiProject Terrorism}} should be folded into the {{WikiProject Crime and Criminal Biography}} banner (I added the task force and importance parameters to the crime banner). WP:TERROR has importance parameters, but the attention= and infobox= parameters were never maintained and most of the articles tagged with them have had their issues addressed, so those can probably be discarded as the crime banner doesn't have them. I promise this is the last one hahaha. PARAKANYAA (talk) 19:46, 19 October 2024 (UTC)
I'll put it on my list. Primefac (talk) 20:58, 19 October 2024 (UTC)
 Done. Primefac (talk) 10:04, 21 October 2024 (UTC)

Request for WP:SCRIPTREQ

I would like to obtain the source code behind the WP:SCRIPTREQ bot. StefanSurrealsSummon (talk) 18:27, 8 November 2024 (UTC)

Not related to a bot task request. You should reach out to the operator of the bot. MolecularPilot 🧪️✈️ 04:43, 1 January 2025 (UTC)

LLM summary for laypersons to talk pages of overly technical articles?

Today I was baffled by an article in Category:Wikipedia articles that are too technical which I was easily able to figure out after pasting the pertinent paragraphs into ChatGPT and asking it to explain it to me in layman's terms. So, that got me thinking, and looking through the category by popularity there are some pretty important articles getting a lot of views per day in there. So I thought, what about a bot which uses an LLM to create a layperson's summary of the article or tagged section, and posts it to the talk page for human editors to consider adding?

I think I can write it; I just want others' opinions, and to find out whether someone is trying, or has already tried, something like this. Mesopub (talk) 09:38, 10 November 2024 (UTC)

Considering past discussions of LLMs, and WP:LLMTALK, I doubt the community would go for this. If you really want to try, WP:Village pump (proposals) or WP:Village pump (idea lab) would be better places to seek consensus for the idea. Anomie 14:17, 10 November 2024 (UTC)
I don't think WP:LLMTALK necessarily applies here: that's about using chatbots to participate in discussions, which is utterly pointless and disruptive. The idea here seems to be using an LLM on a talkpage for a totally different purpose it's much more suited to. That said, I also doubt people will get on board with this.
Mesopub, having a quick look at your list, I think your target category Category:All articles that are too technical (3,322) is not a great choice: I see articles towards the top like Conor McGregor, Jackson 5, Malaysia, and Miami-Dade County, Florida. All of these are members of the target category due to transclusion of {{technical inline}}, which produces [jargon].
All of these would easily be fixed by a simple rewording or explanation of a single term: none of the examples would benefit from an LLM summary.
I don't necessarily think the basic idea is terrible, which I've bolded for emphasis. We do have a lot of articles that are written at a level most appropriate to grad students or professionals in a niche scientific field. Of course, any LLM summary of these articles would have to be sanity-checked by a human who actually understands the article, to ensure the LLM summarises it without introducing errors.
For that reason I think that if you're convinced of the utility of this process, you should start very slow, select a small number of articles in different fields, post the LLM summaries with proper attribution in your userspace, and notify appropriate WikiProjects to see if anyone is interested in double checking them, or working to incorporate more accessible wording into the summarised articles. If no one has any interest, there's no realistic future for this. Folly Mox (talk) 15:28, 10 November 2024 (UTC)
Isn't the current consensus that we cannot allow AI-written text because of questionable copyright status? Primefac (talk) 17:02, 10 November 2024 (UTC)
There is no ban AFAIK, just that editors need to be careful and check the LLM didn't spit out copyrighted text back at them (or closely paraphrased, etc.). I think this is less of a risk with the proposed use case, which is taking existing Wikipedia text and cutting it down.
I agree with Folly Mox mostly, if you think this is going to be useful, try it on a very small scale and see how it goes. Legoktm (talk) 19:00, 10 November 2024 (UTC)
I'm not sure what formal consensus looks like on the LLM copyright issue. Wikipedia:Large language models § Copyright violations is pretty scant, and of course it's not policy. m:Wikilegal/Copyright Analysis of ChatGPT concludes in part with all possibilities remain open, as key cases about AI and copyright remain unresolved. The heftiest discussion I was able to find lazily is Wikipedia talk:Large language models/Archive 1 § Copyrights (January 2023); there is also this essay. Folly Mox (talk) 20:54, 10 November 2024 (UTC)
I kinda wonder how reliable LLMs are at simplifying content without making it misleading/wrong in the process. Jo-Jo Eumerus (talk) 09:30, 11 November 2024 (UTC)
They aren't. Even setting aside the resources they waste and the exploitative labor on which they rely, they're just not suited for the purpose. Asking editors with subject-matter expertise to "sanity check" their output is just a further demand on the time and energy of volunteers who are already stretched too thin. XOR'easter (talk) 22:37, 11 November 2024 (UTC)
Some examples: User:JPxG/LLM_demonstration#Plot_summary_condensation_(The_Seminar) and Wikipedia:Using_neural_network_language_models_on_Wikipedia/Transcripts#New York City. Legoktm (talk) 17:50, 12 November 2024 (UTC)
I'm sorry, but if you don't know the subject material, then you're not in a position to judge whether ChatGPT did a good job or not. XOR'easter (talk) 22:46, 11 November 2024 (UTC)
Declined Not a good task for a bot. Would be a WP:CONTEXTBOT, and definitely subject to hallucinations. At the very least requires community consensus, this is not the correct place to get it, please see WP:VPT or WP:VPR. MolecularPilot 🧪️✈️ 03:38, 1 January 2025 (UTC)
Note: you may make a bot that posts summaries to its own userspace; this would be allowed per WP:EXEMPTBOT, if it would be helpful to have some demos for the proposal. MolecularPilot 🧪️✈️ 03:40, 1 January 2025 (UTC)

Redirects with curly apostrophes

For every article with an apostrophe in the title (e.g. Piglet's Big Game), it strikes me it would be useful to have a bot create a redirect with a curly apostrophe (e.g. Piglet’s Big Game).

This could also be done for curly quotes.

Once done, this could be repeated on a scheduled basis for new articles. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:27, 11 November 2024 (UTC)

The justification for creating redirects with ASCII hyphen-minus to pages titled with en-dashes is that en-dashes are hard for people to type since they aren't on most keyboards. The opposite would be the case with curly quotes: straight quotes and apostrophes are on most people's keyboards while curly versions are not. This seems like another one that would be better proposed at WP:Village pump (proposals) to see if people actually want this.
Further complicating this is that the bot would need a reliable algorithm for deciding when to use ‘ versus ’. The general algorithm may need to be part of the community approval. Anomie 12:50, 11 November 2024 (UTC)
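For illustration only, the most naive heuristic (treat a straight apostrophe that follows a word character as ’, and leave everything else untouched) is a one-liner in Python, though it is exactly the kind of context problem described above and would still mishandle titles such as 'Tis the Season:

    import re

    def curlify(title):
        # Assumption-laden sketch: apostrophes inside or at the end of words become
        # U+2019 (right single quotation mark); leading quotes are left alone.
        return re.sub(r"(?<=\w)'", "\u2019", title)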
This probably isn't a useful thing to be doing; as Anomie says, the straight apostrophe (') is what is almost always typed during regular editing, and curly apostrophes are often Office-related auto-changes. I wouldn't necessarily be opposed to a bot fixing in-text curly apostrophes, but we shouldn't be proactively creating redirects. Primefac (talk) 13:03, 11 November 2024 (UTC)
I think straight versus curly for single and double quotes is mostly an OS thing. When I tap on the redlink Piglet’s Big Game and close out the editor, I get {{Did you mean box}} linking the valid title at the top of the page; if I search for the title with &fulltext=0 I'm redirected to the bluelink. The curly apostrophe also resolves to the straight apostrophe if typed into the search box.
Really, the piece missing here – if any – is automated fixing of redlinks with curly punctuation in the target.
User:Citation bot replaces curly apostrophes and double quotes with ASCII versions within citation template parameters, even though Module:CS1 renders them identically. Some user scripts are capable of doing a gsub over an entire article, like User:Novem Linguae/Scripts/DraftCleaner.js. (I know this isn't directly related to the OP, but tangentially related to the suggestion just above.)
I suppose the genesis of this request was this Help desk request? Folly Mox (talk) 15:15, 11 November 2024 (UTC)
I will also note that a curly quote is on the title blacklist, so it should be the case that we shouldn't even be accidentally creating these in the first place. Primefac (talk) 16:52, 11 November 2024 (UTC)
I added that to the blacklist because I was tired of articles being created with curly quotes and having to move them to the correct title, when that's almost never correct. The intent wasn't to block redirects with curly quotes.
Nevertheless, I oppose this because we already have a {{did you mean box}} warning for the situation, and that's sufficient.
Finally, if this is done, it definitely needs some logic to auto-retarget and G7 any redirects that have diverged from their sources, as AnomieBOT already does for dashes. * Pppery * it has begun... 17:30, 11 November 2024 (UTC)
Not done: there is consensus against the bot from multiple experienced users raising genuine concerns (I also think that, due to ‘ vs ’, it is inherently a WP:CONTEXTBOT), and it is redundant as curly quotes are now on the title blacklist. MolecularPilot 🧪️✈️ 04:53, 1 January 2025 (UTC)

Bot for replacing/archiving 13,000 dead citations for New Zealand charts

Dead citations occur due to the website changing the URL format. For example https://nztop40.co.nz/chart/albums?chart=3467 is now https://aotearoamusiccharts.co.nz/archive/albums/1991-08-09.
Case 1: 9,025 pages using these URLs, found through search. Some may already be archived.
Case 2: 4,133 citations using {{cite certification|region=New Zealand}} and {{Certification Table Entry|region=New Zealand}}, categorized in Category:Cite certification used for New Zealand with missing archive (0).

An ideal transition seems difficult as it would require the following steps:

  1. Find an archived version through the wayback machine, e.g., https://web.archive.org/web/20240713231341/https://nztop40.co.nz/chart/albums?chart=3467 for the above. For case 2 this requires inferring the URL first (https://nztop40.co.nz/chart/{{#switch:{{{type|}}}|album={{#if:{{{domestic|}}}|nzalbums|albums}}|compilation=compilations|single={{#if:{{{domestic|}}}|nzsingles|singles}}}}?chart={{{id|}}}))
  2. Harvest the date 11 August 1991 either from the rendered archived page or from the archived page source, <p id="p_calendar_heading">11 August 1991</p>
  3. For case 1, translate the URL accordingly to https://aotearoamusiccharts.co.nz/archive/albums/1991-08-11.
  4. For case 2, add |source=newchart and replace |id=1991-08-11.

Note that for case 1, the word after "/archive/" changed according to the following incomplete table. For case 2 this is handled by the template so no need to worry about it.

Old text     New text
albums       albums
singles      singles
nzalbums     aotearoa-albums
nzsingles    aotearoa-singles

If someone is willing to go through the above, at least for simple cases, I think it is the ideal solution, especially for case 2. Failing that, a simpler archiving procedure can be taken.

  • For case 1: add |archive-url= and |archive-date= per usual archiving procedure. Add |url-status=deviated. If no archive exists (which should be a minority), add {{dead link}}
  • For case 2: add |archive-url= and |archive-date= per usual archiving procedure as they are supported by the templates. Add |source=oldchart (even if no archive is found)
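For the ideal case 1 transition above, a rough Python sketch of the lookup-and-translate step could look like this (a sketch under stated assumptions: the Wayback availability API returns a usable snapshot, and the date format and <p id="p_calendar_heading"> markup match the example in step 2):

    import re
    import requests
    from datetime import datetime

    SECTION_MAP = {  # old path word -> new path word, from the (incomplete) table above
        'albums': 'albums',
        'singles': 'singles',
        'nzalbums': 'aotearoa-albums',
        'nzsingles': 'aotearoa-singles',
    }

    def translate(old_url):
        """Translate a dead nztop40.co.nz chart URL to the new aotearoamusiccharts.co.nz form."""
        avail = requests.get('https://archive.org/wayback/available',
                             params={'url': old_url}, timeout=30).json()
        snapshot = avail.get('archived_snapshots', {}).get('closest')
        if not snapshot:
            return None  # no archive found: fall back to tagging {{dead link}}
        archived = requests.get(snapshot['url'], timeout=30).text
        # Harvest the chart date from the archived page source.
        m = re.search(r'<p id="p_calendar_heading">([^<]+)</p>', archived)
        if not m:
            return None
        date = datetime.strptime(m.group(1).strip(), '%d %B %Y')
        section = re.search(r'nztop40\.co\.nz/chart/(\w+)', old_url).group(1)
        return ('https://aotearoamusiccharts.co.nz/archive/'
                f'{SECTION_MAP.get(section, section)}/{date:%Y-%m-%d}')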

I will be happy to provide any technical assistance. Muhandes (talk) 15:08, 14 November 2024 (UTC)

Muhandes, I believe WP:URLREQ is the place for requests like these. — Qwerfjkltalk 16:56, 14 November 2024 (UTC)
I thought case 2 above would require a post here, but I'll repost there. Muhandes (talk) 22:49, 14 November 2024 (UTC)
Deferred to WP:URLREQ and successfully completed there. :) MolecularPilot 🧪️✈️ 05:09, 1 January 2025 (UTC)

Meanings of minor-planet names

Should we move the "Meanings of minor planet names" articles covering the number ranges 100001–101000 through 500001–501000 to the hyphenated form "minor-planet", per the move at Talk:Minor-planet designation#Requested move 21 September 2021? Absolutiva (talk) 16:20, 18 November 2024 (UTC)

 Working: the bot is now running that task. I will let you know when it is done. :) MolecularPilot 🧪️✈️ 03:34, 1 January 2025 (UTC)
Currently in the 300000s... will be done soon! :) MolecularPilot 🧪️✈️ 03:42, 1 January 2025 (UTC)
 Done, Absolutiva: it has moved all the titles that use a long dash (–) between the numbers to "minor-planet". The short dash (-) ones just redirect to the long-dash versions (which then redirect to the correct name). I had it programmed to fix the double redirects created, but there was no need because another bot already fixed these automatically while my bot was moving the titles! :) MolecularPilot 🧪️✈️ 04:00, 1 January 2025 (UTC)

Reference examination bot

I want a bot to help me. Can anyone please help me with this? Wiki king 100000 (talk) 07:46, 19 November 2024 (UTC)

Could you please elaborate further? – DreamRimmer (talk) 10:44, 19 November 2024 (UTC)
Guessing from the title, I think they want a bot to fact-check references, or something similar to that. —usernamekiran (talk) 13:02, 20 November 2024 (UTC)
Yes, the same. Actually, it's my English spelling problem. Wiki king 100000 (talk) 17:00, 25 November 2024 (UTC)
Declined Not a good task for a bot. Would be a WP:CONTEXTBOT. MolecularPilot 🧪️✈️ 03:35, 1 January 2025 (UTC)

VPNGate

Would any admin be interested in setting up a bot that automatically blocks vpngate.net IPs? VPNGate is frequently used by LTAs (notably MidAtlanticBaby) and is very hard to deal with because of the number of rotating IPs available. User:ST47ProxyBot used to do some of this but is no longer active. T354599 should also help but this would be an interim solution to prevent disruption.

I looked into this and it should be pretty simple. VPNGate apparently has a (hidden?) public API available at www.vpngate.net/api/iphone/ which lists all currently-active proxy addresses. You can use regex (e.g. \b(?:\d{1,3}\.){3}\d{1,3}\b) to find the listed IPs from the API endpoint, check if they're already blocked, and then block them for however long as an open proxy. Theoretically if this is run once or twice a day the vast majority of active VPNGate IPs would be blocked. I wrote a quick script and tested it on testwiki and it seems to work. C F A 19:10, 7 December 2024 (UTC)
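For what it's worth, the harvesting half of that is only a few lines (a sketch only, and as the rest of this thread shows, the listed addresses are not necessarily the IPs that actually edit, so this is just a starting point):

    import re
    import requests

    # Fetch the VPNGate server list and pull out anything that looks like an IPv4 address.
    resp = requests.get('https://www.vpngate.net/api/iphone/', timeout=60)
    candidate_ips = sorted(set(re.findall(r'\b(?:\d{1,3}\.){3}\d{1,3}\b', resp.text)))
    print(len(candidate_ips), 'candidate addresses')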

@CFA: Looking near the bottom of vpngate.net/en/ you see the following warning: Using the VPN Server List of VPN Gate Service as the IP Blocking List of your country's Censorship Firewall is prohibited by us. The VPN Server List sometimes contains wrong IP addresses. If you enter the IP address list into your Censorship Firewall, unexpected accidents will occur on the firewall. Therefore you must not use the VPN Server List for managing your Censorship Firewall's IP blocking list. It sounds like they are intentionally putting incorrect IP addresses in those lists to discourage people from using the list for unintended purposes. Polygnotus (talk) 03:15, 8 December 2024 (UTC)
I think that's an empty threat. It's possible, I suppose, but I haven't seen any evidence of it. C F A 03:26, 8 December 2024 (UTC)
Possibly, but it's wise to portscan rather than assume. Polygnotus (talk) 03:30, 8 December 2024 (UTC)
Yes, not a bad idea. C F A 03:31, 8 December 2024 (UTC)
CFA, I'll note that the "ingress" IPs listed on the website are not the IPs that will actually be editing Wikipedia if MAB uses them. For example, if I connect to VPNgate's "210.113.124.81" using OpenVPN, the IP address that requests are made to the outside internet with is 61.75.47.194.
To test this theory, I made a pywikibot that uses the list of IPs and OpenVPN config on the link CFA provided, connects to them and determines the actual "output" IP connecting to Wikipedia and then blocks with TPA revoked (I won't say why, but WP:ANI watchers will know). I tested it on a local MediaWiki installation and blocking the input IPs (i.e. the ones listed on the VPNgate website) had no effect but blocking the output ones (obtained by connecting to each input IP's VPN and making a request to [1]) effectively disabled VPNgate entirely.
Will this actually be useful and would you like me to submit a WP:BRFA? MolecularPilot 🧪️✈️ 02:38, 17 December 2024 (UTC)
To demonstrate: It's me, User:MolecularPilot, using the VPN listed as "210.113.124.81" (as that's the "input" IP) but actually editing with the output IP of 61.75.47.194. This output IP is not listed anywhere on the VPNgate website or CSV file. Can this IP also be WP:OPP blocked? 61.75.47.194 (talk) 02:48, 17 December 2024 (UTC)
Huh, well that's an interesting find. I suppose that's what they mean by The VPN Server List sometimes contains wrong IP addresses. It would certainly be useful, but the harder part is finding an admin willing to do this (non-admins can't operate adminbots). Maybe a crosspost to AN would help? C F A 02:49, 17 December 2024 (UTC)
CFA, thank you for your very fast reply! Actually, I just examined the config files (they are provided in Base64 format in the CSV you gave; that's what the script uses to connect), and the listed IPs they give are actually blatant lies. For example, the VPN listed as "210.113.124.81" actually has "74.197.133.217:955" set as the input IP that my computer makes requests to, and, as shown above, actually makes requests to the outside internet/edits with "61.75.47.194". In fact, the VPN server list always contains wrong IP addresses - they're not even the correct input IP. The people behind VPNgate are quite good at trickery/opsec, it seems. MolecularPilot 🧪️✈️ 02:58, 17 December 2024 (UTC)
74.197.133.217 is actually listed in a completely different section of the VPNgate list (and the associated config file for it of course does not actually use 74.197.133.217), so the provided "IPs" do not match the actual input IP in the config file but the IPs for a different config file. Regardless, these input IPs are useless and it's the output IPs (only findable by testing) that we are interested in. MolecularPilot 🧪️✈️ 03:01, 17 December 2024 (UTC)
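A rough sketch of that probing step, assuming OpenVPN is installed and using api.ipify.org as a stand-in for the IP-echo endpoint referenced above (the config handling and the fixed wait are deliberately crude):

    import base64
    import subprocess
    import tempfile
    import time
    import requests

    def egress_ip(ovpn_config_b64, wait=20):
        # Connect with a Base64-encoded OpenVPN config (as served in the VPNGate CSV)
        # and return the egress IP seen by the outside internet.
        with tempfile.NamedTemporaryFile('w', suffix='.ovpn', delete=False) as f:
            f.write(base64.b64decode(ovpn_config_b64).decode('utf-8', 'replace'))
            path = f.name
        proc = subprocess.Popen(['openvpn', '--config', path])  # needs root and a TUN device
        try:
            time.sleep(wait)  # crude: wait for the tunnel to come up
            return requests.get('https://api.ipify.org', timeout=30).text.strip()
        finally:
            proc.terminate()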
Wikipedia:Bot policy states In particular, bot operators should not use a bot account to respond to messages related to the bot. You mistakenly used bot account to respond below :) – DreamRimmer (talk) 13:40, 21 December 2024 (UTC)
Oh I'm so sorry, I meant to reply with my main account, I didn't realise I was still logged into my bot account (I needed to login to make a manual fix to the JSON file). Thank you for picking up on it and correcting the mistake. :) MolecularPilot 🧪️✈️ 01:39, 22 December 2024 (UTC)
 Done: the bot is live and collating data about VPNgate egress IPs at User:MolecularBot/IPData.json. A frontend to look up an IP address (showing the number of times it has been seen as a VPNgate egress node, as well as when it was last seen) is available [on Toolforge]. It can also generate statistics from the current list, currently 146 IPs (only a small drop in the bucket, generated during my testing and development, and over-representing the more obvious IPs): 42.47% are currently blocked on enwiki and 23.97% are globally blocked. Working on guidelines for an adminbot to block these IPs based on the number of sightings (block length ramping up as the IP accumulates sightings, so as not to overly punish short-term volunteers), and will then post at WP:AN looking for a botop once ready. Consensus developed for both this bot and a future adminbot at WP:VPT. 06:33, 21 December 2024 (UTC) — Preceding unsigned comment added by MolecularBot (talkcontribs) <diff>

Creation for nano bot

The 'Nano bot' (Natural Auditor and Native Organiser) would be useful for helping users create their user pages based on their recent actions and edits. Prime Siphon (talk) 20:28, 7 December 2024 (UTC)

Since userpages are an expression of individuality, and are not necessary to create an encyclopedia, having a bot create them would not work. Polygnotus (talk) 03:17, 8 December 2024 (UTC)
Declined Not a good task for a bot. Primefac (talk) 16:30, 9 December 2024 (UTC)

Logging AfC drafts resubmitted without progress

This is essentially a request for the implementation of Option 2 of the RfC here. "The bot should add ... submissions [that haven't changed since the last time they were submitted] to a list, similar to the list of possible copyvios." JJPMaster (she/they) 15:29, 29 December 2024 (UTC)

{{Working}}, I expect to be done in around 20 minutes. MolecularPilot 🧪️✈️ 05:34, 31 December 2024 (UTC)
Success: it will flag any drafts re-submitted without changes, logging them to User:MolecularBot/AfCResubmissions.json. Working on an accessible frontend on Toolforge so that it's easy to look up whether a draft has been re-submitted. MolecularPilot 🧪️✈️ 06:36, 31 December 2024 (UTC)
Frontend coding done. Deploying this task to run continuously and also host the frontend on Toolforge... MolecularPilot 🧪️✈️ 06:43, 31 December 2024 (UTC)
Frontend is now hosted at [2]. Waiting for a Toolforge task to complete in order to deploy the bot to run continuously. MolecularPilot 🧪️✈️ 07:09, 31 December 2024 (UTC)
Will complete Toolforge deployment tomorrow. MolecularPilot 🧪️✈️ 07:17, 31 December 2024 (UTC)
I think you should use GET method instead of POST so that it can be used in the Template:AfC submission/tools template. – DreamRimmer (talk) 09:23, 31 December 2024 (UTC)
A GET-based JSON API is now available at https://molecularbot2.toolforge.org/resubAPI.php?pageName=Dummy (replace Dummy with the actual name of the draft, excluding the Draft: prefix). Thanks for your feedback! :) However, I don't think it should be used in Template:AfC submission/tools because the RfC closed against commenting on or labelling the actual submission page. MolecularPilot 🧪️✈️ 01:48, 1 January 2025 (UTC)
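A minimal client call against that endpoint (assuming only what is stated above, i.e. that it returns JSON keyed on the draft name):

    import requests

    r = requests.get('https://molecularbot2.toolforge.org/resubAPI.php',
                     params={'pageName': 'Dummy'}, timeout=30)
    print(r.json())  # the exact response structure is not documented in this thread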
@DreamRimmer Is it possible for you to publish a wikicode frontend based on the JSON, something like Wikipedia:AfC sorting? ~/Bunnypranav:<ping> 10:12, 31 December 2024 (UTC)
I think MolecularPilot can help with this. – DreamRimmer (talk) 10:43, 31 December 2024 (UTC)
Oops, wrong ping thanks to the tiny buttons on mobile. Sorry DreamRimmer! @MolecularPilot: the real intended ping. ~/Bunnypranav:<ping> 11:26, 31 December 2024 (UTC)
Done here: Wikipedia:Declined AfC submissions resubmitted without any changes! It uses a template I made ({{AfCResubmissions}}) that uses a module I made (Module:AfCResubmissions) which fetches the data from the bot's user JSON file. :) Just need to finish Toolforge deployment! MolecularPilot 🧪️✈️ 02:40, 1 January 2025 (UTC)

List of schools in the UK

Hello. I'd like to request a list of pages in Category:Schools in England, Category:Schools in Northern Ireland, Category:Schools in Scotland and Category:Schools in Wales, along with whatever file is used in their infobox. The format would be: [ PAGE ], [ Link to file ]. If possible, skip pages that are using an SVG file. The lists will go into User:Minorax/Schools in England, User:Minorax/Schools in Scotland, etc. --Min☠︎rax«¦talk¦» 14:58, 30 December 2024 (UTC)

I assume you want to check the categories recursively, to some depth? — Qwerfjkltalk 15:05, 30 December 2024 (UTC)
Yeap, hopefully you can dig deep into the large category (or to whatever depth you find practical) whilst skipping pages where 1) an .svg file is used in the infobox, or 2) there is no infobox at all. --Min☠︎rax«¦talk¦» 16:48, 30 December 2024 (UTC)
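For reference, one way to pull the infobox image out of each page is with mwparserfromhell; a sketch, with the parameter names ('image', 'logo', 'picture') being guesses rather than a definitive list:

    import mwparserfromhell

    def infobox_image(wikitext):
        """Return the first non-SVG image filename found in an infobox, or None."""
        code = mwparserfromhell.parse(wikitext)
        for tpl in code.filter_templates():
            if not str(tpl.name).strip().lower().startswith('infobox'):
                continue
            for param in ('image', 'logo', 'picture'):  # assumed parameter names
                if tpl.has(param) and str(tpl.get(param).value).strip():
                    value = str(tpl.get(param).value).strip()
                    if not value.lower().endswith('.svg'):
                        return value
        return None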
Minorax  Done User:Minorax/Schools in England, User:Minorax/Schools in Northern Ireland, User:Minorax/Schools in Scotland and User:Minorax/Schools in Wales. – DreamRimmer (talk) 09:05, 31 December 2024 (UTC)
Many thanks! --Min☠︎rax«¦talk¦» 09:24, 31 December 2024 (UTC)

Category:University and college logos

Good day. I'd like to request that all .svg files in the above category be moved to Category:SVG logos of universities and colleges. --Min☠︎rax«¦talk¦» 02:01, 1 January 2025 (UTC)

{{Working}}, the bot is running and doing the task right now! MolecularPilot 🧪️✈️ 02:17, 1 January 2025 (UTC)
Update: still running, there's a lot of pages hahaha, but it's all working :) MolecularPilot 🧪️✈️ 02:23, 1 January 2025 (UTC)
Seems to be {{Done}} now, thank you for all your work on schools! If it missed some or didn't do something right, please don't hesitate to reach out to me! :) MolecularPilot 🧪️✈️ 02:31, 1 January 2025 (UTC)
Wait, so it turns out some pages don't have the category explicitly set (the bot did move all the ones that explicitly set it), but use {{Non-free school logo}} which sets the category using a parameter. I'm running the bot now to update the template usages to use the new category. MolecularPilot 🧪️✈️ 02:35, 1 January 2025 (UTC)
A LOT of pages have the category implicitly set this way so it is still running... but I can confirm it's definitely working and fixing these. MolecularPilot 🧪️✈️ 02:41, 1 January 2025 (UTC)
Actually  Done now hahaha! As before, if it didn't catch everything please don't hesitate to reach out to me, Minorax. Happy new year! :) MolecularPilot 🧪️✈️ 02:50, 1 January 2025 (UTC)
Thanks! Seems good. Happy new year to you too. --Min☠︎rax«¦talk¦» 04:18, 1 January 2025 (UTC)

"Was" in TV articles

Moved to WP:AWB/TASKS (diff)
Deferred to WP:AWBREQ as such a task would require human confirmation and is not appropriate for a bot to do. I have posted a copy of this discussion to that noticeboard now. MolecularPilot 🧪️✈️ 05:18, 1 January 2025 (UTC)

The domain www.uptheposh.com has been usurped, and all links (including sublinks like http://www.uptheposh.com/people/580/, http://www.uptheposh.com/seasons/115/transfers/) now redirect to a gambling site. I request InternetArchiveBot to replace all links containing www.uptheposh.com with their corresponding archived versions from the Wayback Machine.

This is my first time doing this - if I need to request somewhere else or anything is better done manually, please let me know! Nina Gulat (talk) 16:36, 4 January 2025 (UTC)

Nina Gulat, you want WP:URLREQ. Primefac (talk) 16:37, 4 January 2025 (UTC)
Thanks! Nina Gulat (talk) 16:41, 4 January 2025 (UTC)

Hello, I would like to kindly request that all articles, categories, files, etc. within the Category:Cinema of Belgium be tagged with the newly created Belgian cinema task force. This will help streamline efforts to improve the quality and coverage of Belgian cinema-related content on Wikipedia. Earthh (talk) 11:41, 23 November 2024 (UTC)

How far down? I went down 3 levels and found Crouching Tiger, Hidden Dragon, which does not seem likely. Primefac (talk) 13:06, 23 November 2024 (UTC)

Just a quick clarification: please tag all entries in the Category:Cinema of Belgium with the Belgian cinema task force, except for the entries of the following categories, as they may include films that are not necessarily Belgian:

Thanks for your help! --Earthh (talk) 13:39, 23 November 2024 (UTC)

I've cross-posted this to WP:AWB/TASKS as I think it's small enough for manual addition (though I may have miscounted, will check when I get home later). Primefac (talk) 14:35, 23 November 2024 (UTC)
Hey @Earthh, as Primefac suggested, this can be done at WP:AWB/TA. This query with depth of 2 shows 717, and 86 with depth 1. Could you clarify that? Also, what template changes are you suggesting? Like what should one tag it with? ~/Bunnypranav:<ping> 14:41, 23 November 2024 (UTC)
Thanks for your patience as I worked through the depth question. For this request:
The template to use is: {{WikiProject Film|Belgian-task-force=yes}}. Let me know if there are any issues or further clarifications needed.--Earthh (talk) 18:35, 23 November 2024 (UTC)
If it's over 700 it does push it a little into the bot territory, but if we can nail down a final number that would be best. Primefac (talk) 20:13, 23 November 2024 (UTC)
This one for Category:Belgian films and this for Category:Cinema of Belgium shows 717+956=1673 total. Is that right? @Earthh Also, all these pages are already tagged with film banner right, only the param is required? ~/Bunnypranav:<ping> 04:45, 24 November 2024 (UTC)
1682 for Category:Belgian films and 1189 for Category:Cinema of Belgium, including articles, files, templates, categories and portals. If these are already tagged with the film banner, |Belgian=yes or |Belgian-task-force=yes parameters will be enough. Earthh (talk) 15:48, 24 November 2024 (UTC)
Will file a BRFA tomorrow.
Not a big deal, but regarding "If these are already tagged": are they or not? ~/Bunnypranav:<ping> 15:57, 24 November 2024 (UTC)
Just as a note of caution, per WP:Film#Scope, {{WikiProject Film}} should not be added to biographical articles/categories/etc., which should use {{WikiProject Biography|filmbio-work-group=yes}}.   ~ Tom.Reding (talkdgaf)  16:12, 24 November 2024 (UTC)
You're absolutely right. For entries tagged with {{WikiProject Film}}, the parameter |Belgian=yes should be used. For entries tagged with {{WikiProject Biography}}, the parameter |cinema=yes should be added to {{WikiProject Belgium}}. However, it seems that this parameter is not yet supported. I've submitted an edit request on Template talk:WikiProject Belgium to address this. Earthh (talk) 22:35, 24 November 2024 (UTC)
Thanks Tom for the disclaimer.
@Earthh I shall do it like this: replace {{WikiProject Film with {{WikiProject Film|Belgian=yes, and similarly {{WikiProject Biography with {{WikiProject Biography|cinema=yes, for the pages in both of the above PetScan queries. Is that fine? ~/Bunnypranav:<ping> 10:24, 25 November 2024 (UTC)
Thanks for your availability.
For the entries in both Petscan queries, {{WikiProject Belgium}} should be replaced or added as {{WikiProject Belgium|cinema=yes}} if not already present. Earthh (talk) 18:32, 25 November 2024 (UTC)
BRFA filed. Now time to wait! ~/Bunnypranav:<ping> 12:59, 26 November 2024 (UTC)
@Earthh: All done from the lists I had made. Please tell me if I missed any! ~/Bunnypranav:<ping> 16:21, 18 December 2024 (UTC)
@Earthh: {{WikiProject Film|Belgian=yes}} should be used per Template:WikiProject Film#National and Regional task forces.   ~ Tom.Reding (talkdgaf)  12:38, 24 November 2024 (UTC)
They are both in use at the moment; the one you suggested is definitely shorter :) Earthh (talk) 15:49, 24 November 2024 (UTC)
@Bunnypranav: Thank you for all the work you've done! Many individuals are still missing because we excluded the addition of the {{WikiProject Belgium}} tag and opted to only modify the parameters where it was already present. Is there anything we can do about this? Earthh (talk) 15:52, 21 December 2024 (UTC)
@Earthh Unless we have a clear cut definite list, I'm afraid it can't be a bot run ~/Bunnypranav:<ping> 15:58, 21 December 2024 (UTC)
@Earthh Do you have anything more, or should I mark this as done and archive it? ~/Bunnypranav:<ping> 17:12, 4 January 2025 (UTC)
@Bunnypranav: everything is fine, it's all good to go. Thank you again for your help! Earthh (talk) 15:47, 6 January 2025 (UTC)

There are presumably hundreds of articles about calendar years (for example, 671) that contain the text "link will display the full calendar".

I believe this text violates the spirit of WP:CLICKHERE, specifically:

phrases like "click here" should be avoided [...] In determining what language is most suitable, it may be helpful to imagine writing the article for a print encyclopedia

The text "link will display the full calendar" would of course make no sense in a print encyclopedia, so I think it should be deleted. Given the number of articles that this text appears in, this deletion would best be done by a bot. Stephen Hui (talk) 07:00, 30 November 2024 (UTC)

A total of 1562 pages use this wording. – DreamRimmer (talk) 07:22, 30 November 2024 (UTC)
DreamRimmer, Stephen Hui, this has been discussed before at Wikipedia:Bot requests/Archive 84#(link will display the full calendar). — Qwerfjkltalk 11:08, 30 November 2024 (UTC)
It has also been raised a number of times at WT:YEARS including following the linked BOTREQ (1, 2, 3), all of which were asking to remove it (without reply). Suffice to say, I think per WP:SILENCE there is a general lack of concern about whether this text is removed. Unless there is any significant opposition raised here in the next few days, I would be okay putting in a BRFA to remove the offending text. Primefac (talk) 22:28, 1 December 2024 (UTC)
If it hasn't been done already, I'd also suggest doing a few dozen "by hand" first, to see if that provokes any objection. I don't expect it will. Dicklyon (talk) 04:16, 7 December 2024 (UTC)
Easy enough to mass-rollback, I just forgot I was planning on doing this. Primefac (talk) 16:31, 9 December 2024 (UTC)
 Done, by the way. Primefac (talk) 12:43, 14 January 2025 (UTC)

Bot to simplify "ref name" content

I have come across some very long ref names in ref tags, sometimes automatically generated by incorporating the title of the work that is being referenced, which is disfavored by WP:REFNAME, which asks that reference names be kept "short and simple" to avoid clutter. A ref name is nothing more than a piece of code by which to identify a reference, and can be as short as "A1" or the like. However, according to this search generously provided by Cryptic, insource:ref insource:/\< *ref *name *= *[^ <>][^<>]{76}/, there are over 1,600 Wikipedia articles containing ref names that are over 75 characters in length, which is ridiculous. I have started hand-fixing these, and that is arduous, and I suspect bot-fixable. I would therefore like a bot that checks each page for ref names over the length of some set number of characters, perhaps something is short as 30 or 40 characters, and where a ref name is excessively long, shorten it to a more reasonable length (that does not match any existing names on the page).

In the course of this search, I have also come across quite a few ref names that contain complete urls, [[bracketed]] terms as would appear in linked text, and various non-standard characters or text usually used for formatting. I would like any instance of brackets, use of "http://" or "https://" or characters outside of the English alphanumeric set of letters, numbers, and basic punctuation to be stripped out or replaced with characters in that set. BD2412 T 03:26, 3 January 2025 (UTC)
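For anyone attempting this semi-automatically, the core rename step is simple; a sketch that only handles double-quoted names and uses opaque placeholders purely to illustrate the mechanics (the discussion below prefers short semantic names, and the blind string replacement would need care on real pages):

    import re

    MAX_LEN = 40  # threshold suggested above; adjust as needed

    def shorten_ref_names(wikitext):
        """Map excessively long <ref name="..."> values to short unique placeholders."""
        names = set(re.findall(r'<ref\s+name\s*=\s*"([^"]+)"', wikitext))
        mapping, counter = {}, 1
        for name in sorted(names):
            if len(name) <= MAX_LEN:
                continue
            new = f'r{counter}'
            while new in names or new in mapping.values():
                counter += 1
                new = f'r{counter}'
            mapping[name] = new
            counter += 1
        for old, new in mapping.items():
            wikitext = wikitext.replace(f'"{old}"', f'"{new}"')
        return wikitext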

I'm not sure I'd support bot-shortening without seeing a demo of it. This seems to be more in the wheelhouse of semi-automated fixes. I'm open to being convinced, though. Headbomb {t · c · p · b} 03:46, 3 January 2025 (UTC)
A ref name is not article text. It is literally just a signal to allow multiple uses of a reference. We could replace every ref name in the encyclopedia with a random string of three or four letters/numbers, so long as each ref name was unique within its article, and it would not change their functionality at all. Granted, there are some projects (like those working on comic book movies) where they craft ref names for more informative purposes, but where this is done, the ref names are never excessively long or made with brackets, URLs, or exotic characters. The only thing an excessively long or convoluted ref name accomplishes is reduce usability and clutter up the Wikitext wherever it is used. BD2412 T 03:56, 3 January 2025 (UTC)
See also User:Nardog/RefRenamer-core.js which is an awesome tool for quickly renaming all refs in a given article. -- GreenC 04:03, 3 January 2025 (UTC)
@GreenC: Does it rename all of them? I am just concerned about the ridiculous ones (see, e.g., this fix). BD2412 T 04:11, 3 January 2025 (UTC)
(edit conflict) Kudos on the fixes you've done already. One point of WP:REFNAME is that reference names should have semantic value (not, for example "A1" or a random string of characters). It doesn't make the slightest difference to the reader, of course, but makes it easier for human editors to cite the correct source. Are you familiar with RefRenamer? I use it to replace reference names like ":0" and "auto1" with reasonably short but meaningful names, and it makes the replacement process a lot easier. It could simplify the task of replacing obscenely long reference names with much shorter ones, although it's still a by-hand tool, not a bot. It only changes the ones you tell it to. --Worldbruce (talk) 04:24, 3 January 2025 (UTC)
I'll check it out, thanks. However, with thousands of articles having excessively long ref names, there is still likely an advantage for having a bot do some of this lifting. BD2412 T 04:32, 3 January 2025 (UTC)
I might be able to do it manually with this tool, but it does not always actually shorten the name, sometimes it just formats it. This should work on the worst cases, though. BD2412 T 04:59, 3 January 2025 (UTC)
I think it should be possible to use the code from RefRenamer in combination with Bandersnatch to run it semi-automatically. — Qwerfjkltalk 14:13, 3 January 2025 (UTC)
Seems like a good example of WP:CONTEXTBOT to me. While "Author year" style, for example, will probably work well for types of sources where an author usually only publishes one per year, it won't work all that well for others. And personally I wish people would make more use of naming conventions that are more likely to be globally unique rather than less, often enough I've seen multiple articles on related topics that all cite different articles from a website and, since each only uses one article, the website's name gets used as the ref name, which has led to accidental bad refs if content is copy-pasted between the articles. Anomie 15:48, 3 January 2025 (UTC)
That's a good point about copying to other articles. "Authorlastname" is usually sufficient, if not then "Authorlastname YYYY-MM". Likewise the citation should have |last= as the first field so it's easy to visually find. I've found these two methods (with RefRenamer) are my current best practice. -- GreenC 16:50, 3 January 2025 (UTC)
Even so, I think we can all agree that it is bad to have ref names like:
  • "Legitimate politics : a source of codification between theory and practice: A fundamental study of the uniting unity between politics and jurisprudence"
  • "urlThe Alkaloids of Tabernanthe iboga. Part IV.1 The Structures of Ibogamine, Ibogaine, Tabernanthine and Voacangine - Journal of the American Chemical Society (ACS Publications)"
  • "Dr. Coburn Stands for Science:Opposes Congressional efforts to honor debunked author linked to failed global malaria control"
That said, I have played around with the ref renamer tool pointed out by GreenC and Worldbruce, and I do think that I can use it to handle this task without needing a bot. BD2412 T 18:38, 3 January 2025 (UTC)
As such,  Request withdrawn by requester. NotAG on AWB (talk) 14:49, 4 January 2025 (UTC) I have amended the template used in this comment because it was causing errors. The BOTREQ template does not have a "withdrawn" option. MolecularPilot 🧪️✈️ 08:12, 6 January 2025 (UTC)
I don't agree those are bad, at least not for the reason you think they are. The second should drop the odd "url" prefix and the third could use a space after the colon if they're going to be titled like that, but if editors of an article find it useful to use the full title of a work as its ref name then 🤷 why should we care? I've seen you insist on doing things that I personally think are stranger. Anomie 15:10, 4 January 2025 (UTC)
Such long names do not sit neatly in the article's wikitext, making it difficult to see where the reference even begins and ends. I literally just removed a "ref name=The Edinburgh Gazetteer: Or, Geographical Dictionary: Containing a Description of the Various Countries, Kingdoms, States, Cities, Towns, Mountains, &c. of the World; an Account of the Government, Customs, and Religion of the Inhabitants; the Boundaries and Natural Productions of Each Country, &c. &c. Forming a Complete Body of Geography, Physical, Political, Statistical, and Commercial with Addenda, Containing the Present State of the New Governments in South America..."; and a "ref name="[RECRUES 2020/2021] Arrivée officielle à Sapiac de @Dylan_Sage11 pour la saison prochaine! Médaille de bronze aux Jeux Olympiques de Rio 2016, 134 sélections pour 155 points, il arrive pour renforcer l'effectif de l'USM et faire vibrer la @rugbyprod2 #AllezSapiac". BD2412 T 04:27, 10 January 2025 (UTC)

Requesting bot help to nominate 156 navboxes for deletion (listed above). These navboxes are for teams that finished lower than third place in the Olympic basketball tournaments. Such templates are subject to WP:TCREEP and were previously deleted per May 31, 2021, April 22, 2020, June 7, 2019, and March 29, 2019 (first, second and third) discussions (to name a few). – sbaio 17:13, 20 November 2024 (UTC)

Maybe one day I'll complete User:Qwerfjkl/scripts/massXFD and things like this will be much easier.— Qwerfjkltalk 17:32, 20 November 2024 (UTC)
It is very easy to list these templates at TFD as a batch. Someone with AWB should be able to help you apply the correct TFD template to all of them. Notification of the templates' creators might also be not-too-hard with AWB, especially if you provide a list of those editors. – Jonesey95 (talk) 22:07, 20 November 2024 (UTC)
I'll tag them; if I could get the editors' usernames, I'll notify them as well.  Working with AWB. Geardona (talk to me?) 23:48, 20 November 2024 (UTC)
Geardona, have you completed this task? Thanks! (I'm trying to sort out all the discussions and either make the bot to do them, or close them). MolecularPilot 🧪️✈️ 04:45, 1 January 2025 (UTC)

Basketball biography infobox request

On the basketball biography infobox, all instances of |HOF_player= and |HOF_coach= should just be |HOF= as there is no actual difference between the two. The issue can be seen on Lenny Wilkens where both parameters are used and link to the same page. ~ Dissident93 (talk) 19:54, 17 November 2024 (UTC)

Bill Sharman appears to not hold to that trend, so it would appear that it is not true to say "all" instances must be changed. I will also note that there are only 79 instances of both parameters even appearing in the same article, so this is too small a task for a bot (try WP:AWBREQ or just tweak things manually). Primefac (talk) 20:13, 17 November 2024 (UTC)
I meant that even single instances of |HOF_player= and |HOF_coach= should be moved to |HOF=, allowing the removal of the first two parameters within the infobox itself. ~ Dissident93 (talk) 20:31, 17 November 2024 (UTC)
If the other two parameters are reasonable alternate parameters, it would make more sense to have them as alternate parameters rather than edit every page using any single parameter. Primefac (talk) 20:36, 17 November 2024 (UTC)
They are redundant as they both link to the same exact page as there is no official designation between being inducted as a player or coach into the Naismith Basketball Hall of Fame. ~ Dissident93 (talk) 20:43, 17 November 2024 (UTC)
And yet, Bill Sharman uses both parameters with different values in them. They might have the same base URL, but clearly the values passed to them can be different. Primefac (talk) 20:44, 17 November 2024 (UTC)
Actually, it does seem like there are separate pages to represent an inductee's playing and coaching accomplishments despite there being no official difference between the two, as per Bill Sharman. In that case, I suppose there must be more of a consensus to merge/remove the parameters before this can be implemented. ~ Dissident93 (talk) 21:04, 18 November 2024 (UTC)
Marking for the table above that this Needs wider discussion. NotAG on AWB (talk) 14:52, 4 January 2025 (UTC)

Lowercasing the word "romanized"

I've never done this before, but I've noticed a rather annoying issue that no human has the time to sort out manually. (Incidentally, my request is very similar to another active one.) There are thousands of stubs on Iranian locations that have the word "romanized" needlessly capitalised mid-sentence (apparently all mass-created by the same now-retired user). I'm not sure exactly how to track all of them down, but see categories like Sarbisheh County geography stubs or Qom province geography stubs. Anonymous 23:22, 11 January 2025 (UTC)

It looks like there may be around 820 pages, though there might be some valid uses in there (beginnings of sentences, Title Case, etc). Primefac (talk) 13:20, 13 January 2025 (UTC)
I don't think this is a comprehensive list (indeed, many of them use "Romanized" in a different sense and most seem correct in doing so). The stub articles on Iranian villages I'm referencing all follow the same basic layout and all appear to be miscapitalising the word in the exact same spot. There have to be tens of thousands — I seem to land on one at least every twenty presses of the "random article" button. Anonymous 18:10, 13 January 2025 (UTC)
That is a case-sensitive regex search; it will only return capital-R-Romanized. If they aren't on that list, then they don't exist. Next time you see examples please post them here. Primefac (talk) 19:50, 13 January 2025 (UTC)
I'm getting this message on my end: A warning has occurred while searching: The regex search timed out, so only partial results are available. Try simplifying your regular expression to get complete results. If there are indeed only 820 instances in all of Wikipedia, then that means that the chance of landing on any article with uppercase "Romanized" when hitting "random article" should be around 0.01%. It happens to me with fair consistency. Does it seem plausible that I'm getting these at several hundred times the normal rate? Anonymous 20:39, 13 January 2025 (UTC)
So you're clicking the random article button, and landing on dozens of Iranian villages (of which you still haven't given any examples)? That seems more unlikely than there being mysteriously thousands of articles not showing up on a search. Primefac (talk) 20:44, 13 January 2025 (UTC)
Are you suggesting that I'm lying about something this mundane and pointless? Here are six examples of articles I have randomly landed on and corrected: Lal-e Tazehabad, Golab-e Pain, Neyneh, Rudbarak, Gilan, and Dahich (check the edit history if you need proof). If your numbers are correct, these six articles represent around 0.7% of all articles that use(d) "Romanized" (for comparison, they're around 0.00008% of all English Wikipedia articles). Anonymous 21:43, 13 January 2025 (UTC)
No, I was not suggesting you were lying, I was trying to discover the mismatch between what you were saying and what I was seeing. I was searching for Romanized, while the text you are seeing is [[Romanize]]d, which are two very different things. That search gives ~41k pages. Primefac (talk) 09:29, 14 January 2025 (UTC)
Probably worth letting {{langx}} handle this like this. Gonnym (talk) 09:35, 14 January 2025 (UTC)
That would probably also reduce or remove a lot of the CONTEXTBOT issues I was envisioning. Primefac (talk) 09:38, 14 January 2025 (UTC)

Bot to block proxy servers and VPNs automatically

I think we should have a bot that blocks proxy servers and VPNs automatically. We have an LTA right now who is very disruptive and uses proxy servers and VPNs to spam over and over, to the point where most help pages (WP:AN, WP:ANI, WP:Help desk, WP:Teahouse, etc.) are protected. I was thinking that a bot could automatically block all VPNs and proxy servers to reduce this LTA's disruption, while also reducing the disruption that semi-protection causes for normal users. Isla🏳️‍⚧ 22:45, 19 January 2025 (UTC)

Doesn't User:ProcseeBot already do this? Nyttend (talk) 22:54, 19 January 2025 (UTC)
Has not worked since 2020 Isla🏳️‍⚧ 01:16, 20 January 2025 (UTC)
Same here, I thought User:ST47ProxyBot did this but it turned out to have been retired in 2024 Rusty 🐈 01:33, 20 January 2025 (UTC)
There is a Phabricator task for this: T380917. – DreamRimmer (talk) 01:42, 20 January 2025 (UTC)

Serial commas in page titles

Hello, I'm not sure that this request can be completed automatically; please accept my apology if it can't. I just want some lists, without edits to anything except the page where you put the lists, so it's not a CONTEXTBOT issue: just a "good use of time" issue. Could you compile some lists of pages in which serial commas are present or are omitted? I just discovered List of cities, towns and villages in Cyprus and created List of cities, towns, and villages in Cyprus as a redirect to support serial commas. Ideally, whenever a page could have a serial comma in the title, we'd have a redirect for the form not used by the current title, but I assume this isn't always the case.

First off, I'd like a list of all mainspace pages (whether articles, lists, disambiguation pages, anything else except redirects) that use a serial comma. I think the criteria might be:

  • [one or more words]
  • comma
  • [one or more words]
  • comma
  • ["and" or "or"]
  • [one or more words]

I'm unsure whether they're rigid enough, or whether they might return a lot of false positives.

Secondly, I'd like a list of all pages whose titles are identical to the first list, except lacking a serial comma. Redirects would be acceptable here, since if I'm creating serial-comma redirects, it helps to know if it already exists.

Thirdly, I'd like a list of all mainspace pages (whether articles, lists, disambiguation pages, anything else except redirects) that could use a serial comma but don't. I think the criteria would be:

  • [Page is not on first or second list]
  • [one or more words]
  • comma
  • [one or more words]
  • ["and" or "or", but no comma immediately beforehand]
  • [one or more words]

Once the list is complete, the bot checks each page with the following process: "if I inserted a comma immediately before 'and' or 'or', would it appear on the first list?" If the answer is "no", the bot removes it from the list.

Fourthly, I'd like a list of all pages whose titles are identical to the third list, except they have a serial comma. Again, redirects are acceptable.


Is this a reasonable request? Please let me know if it's not, so I don't waste your time. Nyttend (talk) 20:14, 19 January 2025 (UTC)

Nyttend, I guess intitle:/[A-Za-z ]+, [A-Za-z ]+, (and|or) [A-Za-z ]+/ would work for the first request and intitle:/[A-Za-z ]+, [A-Za-z ]+ (and|or) [A-Za-z ]+/ would work for the second.
The latter two lists are trickier. I think your best bet is probably WP:QUARRY. — Qwerfjkltalk 20:26, 19 January 2025 (UTC)
Is there a way to download a list of results from a particular search? As far as I know, the only way to get a list of results is to copy/paste the whole thing somewhere and delete everything that's not a page title. (With 11,544 results for the first search, this isn't something I want to do manually.) Also, the first search includes redirects, e.g. Orders, decorations, and medals of the United Nations is result #1. Nyttend (talk) 20:47, 19 January 2025 (UTC)
@Nyttend: If you have AWB installed, you can use its 'Make list' function to generate a list of pages. https://insource.toolforge.org is a tool that can generate a list of pages from search results. Currently, it only supports the insource: parameter, but I plan to add functionality for other parameters like intitle: and incategory: soon. If you are unable to generate the list yourself, feel free to message or ping me, and I will create it for you and add it to your userspace. – DreamRimmer (talk) 05:11, 20 January 2025 (UTC)
Thanks, DreamRimmer, but someone at WP:QUARRY provided me with a list. Now I just need to do the filtering work to create redirects. Nyttend (talk) 08:52, 20 January 2025 (UTC)
Note that search results are limited to the first 10,000 pages. — Qwerfjkltalk 16:10, 20 January 2025 (UTC)
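For anyone who would rather script this than use AWB or Quarry, a rough sketch of pulling titles from one of the intitle searches suggested above via the API. It is still subject to the 10,000-result cap mentioned here, filtering out redirects would need a separate pass, and the query string and output filename are examples only; a descriptive User-Agent header is also expected for real use.

Search-result download sketch (Python)
import requests

API = "https://en.wikipedia.org/w/api.php"
QUERY = r'intitle:/[A-Za-z ]+, [A-Za-z ]+, (and|or) [A-Za-z ]+/'

def search_titles(query: str) -> list:
    """Page through list=search results and collect the matching titles."""
    titles, params = [], {
        "action": "query", "list": "search", "srsearch": query,
        "srnamespace": 0, "srlimit": "max", "format": "json",
    }
    while True:
        data = requests.get(API, params=params).json()
        if "query" not in data:  # e.g. hitting the 10,000-result cap
            return titles
        titles += [hit["title"] for hit in data["query"]["search"]]
        if "continue" not in data:
            return titles
        params.update(data["continue"])

with open("serial_comma_titles.txt", "w") as f:
    f.write("\n".join(search_titles(QUERY)))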

Create redirects from human-curated list

Following User:Qwerfjkl's advice in #Serial commas in page titles, I filed a Quarry request and got the information I wanted. I'm about to begin checking them, and I know I'll have heaps of potential redirects to create. Are there any existing bots already approved to create redirects from human-curated lists? It would be convenient if I could just dump a few hundred pairs of links (the title to be created, and the title to be the target) and have a bot create them. Probably there will be many more than a few hundred, so I was envisioning providing lists here and there, without a schedule. Nyttend (talk) 05:34, 20 January 2025 (UTC)

There are not, and you would need a solid consensus to have a bot do so. Primefac (talk) 17:00, 21 January 2025 (UTC)
Doesn't DannyS712 bot III do something like this? JJPMaster (she/they) 17:03, 21 January 2025 (UTC)
No, that's for patrolling them. Primefac (talk) 17:14, 21 January 2025 (UTC)
Oh, sorry, I linked to the wrong one. I meant to link to this AnomieBOT BRFA. JJPMaster (she/they) 17:16, 21 January 2025 (UTC)
Yes, but that is to overcome a technical challenge (as it says in the edit summary, "endashes are hard") so there is precedent but for a different issue. Primefac (talk) 17:25, 21 January 2025 (UTC)
That task, too, was proposed to the community. Looking back, I'm a little surprised the community's response was mostly WP:SILENCE; I guess people worried about it less back in 2016. Personally, if you were to run this by Wikipedia:Village pump (proposals) and get WP:SILENCE too, I'd be satisfied. You'll also want to consider details like which Rcat templates should be applied. Anomie 12:38, 22 January 2025 (UTC)
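
No such bot exists yet (per the replies above), and it would need a BRFA and broader consensus first, but the core of the task is small. A hypothetical pywikibot sketch, assuming a human-curated pairs.txt of "New title -> Target" lines; {{R from modification}} is a placeholder, since the right Rcat (per Anomie's point) would still need deciding.

Redirect-creation sketch (Python / pywikibot)
import pywikibot

site = pywikibot.Site("en", "wikipedia")
REDIRECT_TEXT = "#REDIRECT [[{target}]]\n\n{{{{Rcat shell|\n{{{{R from modification}}}}\n}}}}"

with open("pairs.txt") as f:
    for line in f:
        if "->" not in line:
            continue
        new_title, target = (part.strip() for part in line.split("->", 1))
        page = pywikibot.Page(site, new_title)
        if page.exists():
            continue  # never overwrite an existing page
        page.text = REDIRECT_TEXT.format(target=target)
        page.save(summary=f"Creating serial-comma redirect to [[{target}]] from a human-curated list")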

If someone could write a bot that compares both lists and creates a report, that would help.


Province over-capitalization

See WP:AutoWikiBrowser/Tasks#50K articles with over-capitalized "Province" and WT:WikiProject Iran#Fixing widespread over-capitalization of "Province" (just started). There are over 50,000 articles with over-capitalized "Province" since the Jan 2022 multi-RM that moved all the Iranian province titles to lowercase. Assuming the project discussion doesn't turn up resistance to fixing, many of these would be easily amenable to bot fixing, like the task that User:BsoykaBot did for NFL Draft over-capitalization. Dicklyon (talk) 18:50, 6 December 2024 (UTC)
Actually, there's not much activity at WikiProject Iran, so if anyone wants to see this discussed more, point me at a better place to bring it up. Dicklyon (talk) 04:00, 7 December 2024 (UTC)
I did a bunch of these by hand on Dec. 6 (example). No reaction from anyone. Dicklyon (talk) 04:10, 7 December 2024 (UTC)
@Bsoyka and DreamRimmer: Thank you both for volunteering to help if/when we see clear consensus or a closed discussion on this. Does anyone have a good idea how to provoke more response? All I've got so far is silence. The big RM was similarly quiet, with no opposition and just 2 supports. Dicklyon (talk) 21:45, 11 December 2024 (UTC)
This is why bots go through trials; not only does it allow the bot operator to demonstrate that their bot operates as intended, it gives users the opportunity to give feedback on the task. If this is a potentially contentious task, we can have the bots not mark the edits as minor during the trial to raise more awareness of it prior to acceptance. Primefac (talk) 22:00, 11 December 2024 (UTC)
Yes, a trial not marked minor is a good idea, beyond the bunch I did by hand not marked minor. Are you prepared to approve such a trial? Bsoyka has a bot with demonstrated competence at doing such things while avoiding purely cosmetic edits. Dicklyon (talk) 02:32, 12 December 2024 (UTC)
I've got no response at the discussion I opened at the project. Is it OK to move forward with the bot approval process? Dicklyon (talk) 19:20, 21 December 2024 (UTC)
@Bsoyka and DreamRimmer: would either of you be willing to file a BRFA on this now, or should I try to provoke discussion elsewhere? Dicklyon (talk) 23:39, 21 December 2024 (UTC)
Dicklyon, you can file a WP:BRFA yourself if you are seeking approval to use AWB for this. I recommend the easy-brfa.js script linked on the page. :) MolecularPilot 🧪️✈️ 03:50, 1 January 2025 (UTC)
I am not seeking to use AWB for this myself, but for someone else to do so. Dicklyon (talk) 07:12, 1 January 2025 (UTC)
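
Not the actual AWB or BsoykaBot rules, just a sketch of the kind of substitution such a run would make once consensus is clear, assuming a (here abbreviated) list of the affected province names. Piped links, quoted titles, and legitimate proper-noun uses are exactly the CONTEXTBOT cases a real run would have to skip.

"Province" lowercasing sketch (Python)
import re

# Illustrative subset; the real list would cover all provinces moved in the 2022 RM.
PROVINCES = ["Tehran", "Fars", "Isfahan", "Khuzestan"]
PATTERN = re.compile(r"\b(" + "|".join(PROVINCES) + r") Province\b")

def lowercase_province(wikitext: str) -> str:
    """Lowercase 'Province' after a listed province name in running text."""
    return PATTERN.sub(lambda m: m.group(1) + " province", wikitext)

print(lowercase_province("He was born in Tehran Province, Iran."))
# -> He was born in Tehran province, Iran.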

Replacing FastilyBot

Now that Fastily has retired, FastilyBot is no longer running. Any chance of a replacement? On the bot's deleted userpage, one can see that it ran 17 tasks and updated 31 database reports. plicit 14:44, 19 November 2024 (UTC)
I have written code to update ten database reports. – DreamRimmer (talk) 14:48, 19 November 2024 (UTC)
I have restored the userpage to Special:Permalink/1258404221 for tracking purposes. If anyone needs any of the code from any of the subpages, please let me know. Primefac (talk) 14:53, 19 November 2024 (UTC)
I am taking over the following database reports: 4, 11, 12, 15, 18, 19, 20, 21, 22, 23, 24, 26, 28, 29, 30, and 31. – DreamRimmer (talk) 17:50, 19 November 2024 (UTC)
DreamRimmer, given the availability of the SQL queries at https://github.com/fastily/fastilybot-toolforge/tree/master/scripts, it might be better to convert the pages to use {{Database report}}. — Qwerfjkltalk 18:14, 19 November 2024 (UTC)
I second this suggestion for database reports. I converted Wikipedia:Database reports/Transclusions of non-existent templates (number 18 on the list) to use {{database report}}, and after fixing a couple of bonehead oversights on my part, it is working well and is more functional than the previous report. – Jonesey95 (talk) 22:42, 19 November 2024 (UTC)
I have spent time writing code to update these database reports. There are config files for certain reports, so I have included functionality to exclude files from the report that transclude any templates or belong to any category listed in the config file. {{Database report}} cannot do that, but I have no problem if you all want to use SDZeroBot's database reports. – DreamRimmer (talk) 03:25, 20 November 2024 (UTC)
DreamRimmer, I agree that if it's not possible or feasible to use {{Database report}}, it makes sense for you to handle it; but where we can, it's nice to have some kind of standardisation for the reports. — Qwerfjkltalk 17:34, 20 November 2024 (UTC)
I'll look into taking over the deletion discussion notifier. DatGuyTalkContribs 16:26, 19 November 2024 (UTC)
BRFA filed. DatGuyTalkContribs 23:18, 20 November 2024 (UTC)
Except for database reports, let me know if I can help with any other task. —usernamekiran (talk) 17:58, 19 November 2024 (UTC)
Since I'm heavily invested in the file namespace, tasks 1, 2, 4, 5, 7, 8, 9, 10, 11, 15, and 17 are relevant for my work. plicit 00:46, 20 November 2024 (UTC)
I would strongly second that tasks 1-2, 4-12, and 14-17 are more or less essential for keeping this area running smoothly. — Red-tailed hawk (nest) 03:03, 20 November 2024 (UTC)
Alright. So what tasks are remaining now? —usernamekiran (talk) 12:59, 20 November 2024 (UTC)
Tasks 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 15 and 17 are still pending. If my assistance is needed with any task, I would be happy to help with some of them. – DreamRimmer (talk) 13:05, 20 November 2024 (UTC)
@DreamRimmer: I think Explicit is going to take over tasks 1, 2, 4, 5, 7, 8, 9, 10, 11, 15, and 17. What I mean is, first we should make it clear who is going to tackle which task, before spending time working on it. —usernamekiran (talk) 14:03, 20 November 2024 (UTC)
Nope, I'm clueless in this area. Those are the tasks I would like to have kept going. plicit 14:17, 20 November 2024 (UTC)
Is the code for these tasks available somewhere? It might be fairly simple to get it up and running again. — Qwerfjkltalk 17:39, 20 November 2024 (UTC)
Looks like it's available at https://github.com/fastily/fastilybot/blob/main/fastilybot/bots.py. — Qwerfjkltalk 17:41, 20 November 2024 (UTC)
There are also some config pages on-wiki; I think I managed to undelete them all, but if one is missing let me know. Primefac (talk) 14:08, 23 November 2024 (UTC)
Since no one is taking these, I am going to take over two tasks, Task 4 and 10. I will code these from scratch and file a BRFA in a few hours. – DreamRimmer (talk) 07:30, 21 November 2024 (UTC)
BRFA filed and BRFA filed. – DreamRimmer (talk) 10:03, 21 November 2024 (UTC)
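
Not FastilyBot's or DreamRimmer's actual code, just a sketch of the exclusion idea described above: dropping report candidates that transclude a listed template or sit in a listed category, assuming a hypothetical on-wiki config page with one full title per line.

Report exclusion-filter sketch (Python / pywikibot)
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def load_config(config_title: str) -> set:
    """Hypothetical config page: one template or category title per line."""
    text = pywikibot.Page(site, config_title).text
    return {line.strip() for line in text.splitlines() if line.strip()}

def filter_candidates(candidates, excluded_templates, excluded_categories):
    """Yield only candidates that hit none of the configured exclusions."""
    for page in candidates:
        templates = {t.title() for t in page.templates()}
        categories = {c.title() for c in page.categories()}
        if templates & excluded_templates or categories & excluded_categories:
            continue
        yield page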

Current progress

Just creating a table below for a quick idea of which tasks (based on Special:Permalink/1258404221) are being handled. Please add ~~~~~ at the bottom if you update things. Primefac (talk) 12:42, 22 November 2024 (UTC)
Original task | Description | New task
1 | Replace {{Copy to Wikimedia Commons}}, for local files which are already on Commons, with {{Now Commons}}. | CanonNiBot 1
2 | Remove {{Copy to Wikimedia Commons}} from ineligible files. | CanonNiBot 1
3 | Report on malformed SPI pages. | MolecularBot 4
5 | Add {{Wrong-license}} to files with conflicting (free & non-free) licensing information. | KiranBOT 15
7 | Replace {{Now Commons}}, for local files which are nominated for deletion on Commons, with {{Nominated for deletion on Commons}}. | CanonNiBot 1
8 | Replace {{Nominated for deletion on Commons}}, for local files which have been deleted on Commons, with {{Deleted on Commons}}. | CanonNiBot 1
9 | Remove {{Nominated for deletion on Commons}} from files which are no longer nominated for deletion on Commons. | CanonNiBot 1
11 | Fill in missing date parameter for select usages of {{Now Commons}}. |
13 | Post various database reports to Wikipedia:Database reports. |
17 | Remove instances of {{FFDC}} which reference files that are no longer being discussed at FfD. | KiranBOT 14
15 | Remove {{Now Commons}} from file description pages which also transclude {{Keep local}}. | CanonNiBot 1
14 | Leave courtesy notifications for uploaders (who were not notified) when their files are proposed for deletion. | DatBot 12
4 | Remove {{Orphan image}} from free files which are not orphaned. | DreamRimmer bot 3
6 | Leave courtesy notifications for uploaders (who were not notified) when their files are nominated for dated deletion. | DatBot 12
10 | Add {{Orphan image}} to orphaned free files. | DreamRimmer bot 2
12 | Leave courtesy notifications for uploaders (who were not notified) when their files are nominated for discussion. | DatBot 12
16 | Leave courtesy notifications for article authors (who were not notified) when their contributions are proposed for deletion. | DatBot 12
12:42, 22 November 2024 (UTC)
Hi. Since nobody's taking them, I'm gonna try working on tasks 5 and 15. '''[[User:CanonNi]]''' (talkcontribs) 02:15, 17 December 2024 (UTC)
Thanks for volunteering :) – DreamRimmer (talk) 03:32, 17 December 2024 (UTC)
Updated task 17 with KiranBOT 14. —usernamekiran (talk) 23:37, 26 December 2024 (UTC)
I've claimed the malformed SPI task. My bot will be a WP:EXEMPTBOT for this task and will not require a BRFA, because it will just report malformed SPIs to User:MolecularBot/MalformedSPIs.json, which will then be displayed on the malformed SPIs page in projectspace through a template and Lua module, similar to my AfC bot (botreq by JJPMaster below). MolecularPilot 🧪️✈️ 06:16, 1 January 2025 (UTC)
Completed and running continuously (it watches RecentChanges) on Toolforge :) See Wikipedia:Malformed SPI Cases. MolecularPilot 🧪️✈️ 08:43, 1 January 2025 (UTC)
Working on FastilyBot 5, "add {{Wrong-license}} to files"; will file a BRFA when the code is ready. —usernamekiran (talk) 03:59, 3 January 2025 (UTC)

Tagging articles for the Israeli cinema task force

Hello, I would like to kindly request that all articles, categories, files, etc. within Category:Cinema of Israel be tagged for the newly created Israeli cinema task force. This will help streamline efforts to improve the quality and coverage of Israeli cinema-related content on Wikipedia. Please exclude the following subcategories, as they may include films that are not necessarily Israeli, and some contain biographies (which are not part of WP Film):

  • Category:American remakes of Israeli films
  • Category:Film censorship in Israel
  • Category:Israeli–Palestinian conflict films
  • Category:Films based on Israeli novels
  • Category:Films set in Israel
  • Category:Films shot in Israel
  • Category:Pornography in Israel
  • Category:Works by Israeli filmmakers
  • Category:Israeli film people

I saw the request above on Belgian films and used it as a template for my request. Thank you. LDW5432 (talk) 05:23, 2 January 2025 (UTC)
This query with a depth of 5 returns 1003 articles. I have checked a few random ones, and they seem fine. @LDW5432, can you please go through some random articles and let me know if there are any that should be removed from the list? – DreamRimmer (talk) 15:25, 3 January 2025 (UTC)
Hold on with the bot. I need to remove people pages from the category, as they aren't supposed to be in the film task force category. I will list a few that should not be included, or we can manually remove them later:
  • Tel Aviv University
  • ICon festival
  • Jinni (search engine)
  • Kibbutzim College
- LDW5432 (talk) 20:27, 4 January 2025 (UTC)
I updated the bot request to not include biographies. If you run it now with the new parameters and exclude the following four pages, then everything should be good to be tagged. @DreamRimmer
  • Tel Aviv University
  • ICon festival
  • Jinni (search engine)
  • Kibbutzim College
- LDW5432 (talk) 21:33, 4 January 2025 (UTC)
@LDW5432: This updated query shows 356 pages. If we remove these four, there will be 352 pages left to edit. Can you please confirm, so that I can tag them? – DreamRimmer (talk) 14:55, 5 January 2025 (UTC)
Yes, this is a good list. Please tag them. Thank you. LDW5432 (talk) 16:19, 5 January 2025 (UTC)
Hi, I am finding that the depth isn't high enough to tag many films. Can you run another query on Category:Israeli films, excluding Category:Israeli–Palestinian conflict films? - LDW5432 (talk) 18:26, 5 January 2025 (UTC)
Done. Tagged 334 pages. – DreamRimmer (talk) 13:43, 6 January 2025 (UTC)
Hi, thank you for tagging those 334 pages. In my previous comment, I mentioned that the initial query doesn't have a depth which reaches important articles. The query you ran missed important films like Lemon Popsicle and The House on Chelouche Street. Can you run the query again on Category:Israeli films, excluding Category:Israeli–Palestinian conflict films? Thank you. - LDW5432 (talk) 16:27, 6 January 2025 (UTC)
@LDW5432: Can you please tell me which depth is correct? – DreamRimmer (talk) 16:37, 6 January 2025 (UTC)
This query finds the missing films. LDW5432 (talk) 16:47, 6 January 2025 (UTC)
There are many articles, such as Hounds of War and Gwen Stacy (Spider-Verse), that do not belong to the Cinema of Israel. There are many such articles. If you can provide me with the correct query or a list, I can help. – DreamRimmer (talk) 16:56, 6 January 2025 (UTC)
Hounds of War is made by an Israeli director, so it should be included. Some films have Israeli producers. You are correct that the other pages shouldn't be included. I reduced the depth to "2" and it fixes it. Query. - LDW5432 (talk) 17:04, 6 January 2025 (UTC)
@LDW5432: This list contains some articles that do not belong to Israeli cinema, and some users have complained about it. The good thing is that I have only tagged a few from the second query. Please create a correct list; otherwise, I will not be able to assist with this. I am going to revert the last few changes. – DreamRimmer (talk) 10:27, 7 January 2025 (UTC)
Do you not want to include foreign films made by Israeli directors and producers? They don't need to be included. LDW5432 (talk) 20:00, 7 January 2025 (UTC)
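
Not DreamRimmer's actual tagging run, just a sketch of the tagging step, assuming the human-reviewed article list has already been saved to titles.txt (the PetScan-style depth and category-exclusion filtering is assumed to have happened when that list was generated) and that the task force is flagged by a hypothetical |Israeli-task-force=yes parameter on the film banner.

Task-force tagging sketch (Python / pywikibot)
import pywikibot

site = pywikibot.Site("en", "wikipedia")
BANNER = "{{WikiProject Film"
PARAM = "|Israeli-task-force=yes"  # hypothetical parameter name

with open("titles.txt") as f:
    for title in (line.strip() for line in f if line.strip()):
        talk = pywikibot.Page(site, title).toggleTalkPage()
        text = talk.text
        if BANNER in text and PARAM not in text:
            talk.text = text.replace(BANNER, BANNER + PARAM, 1)
            talk.save(summary="Tagging for the Israeli cinema task force (bot request)")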

Bot to track usage of AI images in articles

There are two similar tasks, both relying on categorisation data from Commons, which are currently only being done manually and occasionally:

  • Tracking when an image from a subcategory of commons:Category:Upscaling is being used in a Wikipedia article. (Such files almost always go against MOS:IMAGES#Editing images, and in some cases the upscaled version of an image will be restored repeatedly whenever a new editor is pleased to have found a "higher res" version on Commons.)
  • Tracking when a file from a subcategory of commons:Category:AI-generated media is being used in a Wikipedia article. (This is fine when illustrating an AI topic, but needs review in other contexts. Such usage is currently being recorded manually at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts.)

I don't know what output would be appropriate, whether it should be a hidden maintenance category or a list page somewhere. Would this be a good job for a bot? Belbury (talk) 10:51, 21 January 2025 (UTC)
@Belbury: Hello. Yes, this is doable. I mean, finding images from Commons categories being used in enwiki articles is doable. I think two plain lists (on separate pages) can be generated, one for upscaling and the other for AI-generated, with text something similar to "File:Arcturian.png in article Arcturians (New Age)". The page/output can be formatted in various ways, similar to Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts as well. I am not sure how to work with a hidden maintenance category. —usernamekiran (talk) 15:36, 22 January 2025 (UTC)
Two lists sound like they should be enough. I don't know if that even needs to be a bot, or if it could just be a script hosted off-site somewhere. I'll notify Wikipedia:WikiProject AI Cleanup of this discussion in case they have any input. Belbury (talk) 15:53, 22 January 2025 (UTC)
@Belbury: I have published both lists in my sandbox. Feel free to copy them from there. – DreamRimmer (talk) 03:38, 23 January 2025 (UTC)
Published the script at User:DreamRimmer/commonsfileusage.py. – DreamRimmer (talk) 05:10, 23 January 2025 (UTC)
Thank you! That looks really useful. So we'd need to find somebody else to host that script and run it on a regular basis? Belbury (talk) 11:07, 23 January 2025 (UTC)
You can run it on PAWS and then copy-paste the results onto the wiki. I don't think automating it is necessary since there won't be many pages each week. Just run it on PAWS as needed. – DreamRimmer (talk) 11:35, 23 January 2025 (UTC)
Automation would be good so that the list was always up to date and didn't rely on a person remembering to process it, but I'll see where it can be taken from there. Thanks again. Belbury (talk) 14:16, 26 January 2025 (UTC)
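
Not DreamRimmer's published script, just a sketch of the same idea: walk a Commons category tree and list the enwiki mainspace articles that use each file, in roughly the "File:X in article Y" format suggested above. The depth value and category name are examples only.

Commons-usage report sketch (Python / pywikibot)
import pywikibot
from pywikibot import pagegenerators

commons = pywikibot.Site("commons", "commons")
enwiki = pywikibot.Site("en", "wikipedia")

def usage_lines(category_name: str, depth: int = 2):
    """Yield one report line per (file, article) pair found on enwiki."""
    cat = pywikibot.Category(commons, category_name)
    for commons_file in pagegenerators.CategorizedPageGenerator(cat, recurse=depth, namespaces=6):
        local_file = pywikibot.FilePage(enwiki, commons_file.title())
        for article in local_file.using_pages():
            if article.namespace() == 0:  # mainspace only
                yield "* [[:" + commons_file.title() + "]] in [[" + article.title() + "]]"

for line in usage_lines("Category:AI-generated media"):
    print(line)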

Template:MLS

Per the RFD discussion here, the redirect above is to be retargeted, but there are still many cases where the redirect is being used instead of the actual template. A bot to replace the current transclusions of Template:MLS with Template:MLS player, allowing the former to be retargeted per the discussion, is requested. oknazevad (talk) 01:55, 1 February 2025 (UTC)
This looks fairly simple and straightforward, and may be better suited for WP:AWBREQ ~ Rusty meow ~ 02:01, 1 February 2025 (UTC)
@Rusty Cat https://linkcount.toolforge.org/?project=en.wikipedia.org&page=Template%3AMLS shows that there are 1500+ transclusions, so this is a job better suited for a bot. @Oknazevad: I am ready to do this task, but since this is basically a redirect-bypassing job, do you think more consensus is needed apart from the 3 people at the RfD, say at some wikiproject? ~/Bunnypranav:<ping> 07:21, 1 February 2025 (UTC)
If you think so, but since it would just be replacing the redirect with the actual template, I don't think there could be much objection. As I said at the RFD, it would allow the redirect to point to the navbox template, consistent with other similar template redirects. oknazevad (talk) 07:44, 1 February 2025 (UTC)
@Oknazevad Gotcha, BRFA filed ~/Bunnypranav:<ping> 07:55, 1 February 2025 (UTC)
For what it's worth, my bot is already approved for this sort of thing, but carry on. Primefac (talk) 07:59, 1 February 2025 (UTC)
If my task gets approved, can I also do such small runs without explicit approval? ~/Bunnypranav:<ping> 08:01, 1 February 2025 (UTC)
No, because your task is specific to this template. Primefac (talk) 08:05, 1 February 2025 (UTC)
@Oknazevad Job is Done; MLS transclusions show no mainspace usage, so I believe this is good to be retargeted. ~/Bunnypranav:<ping> 15:40, 3 February 2025 (UTC)
And retargeting Done. Thanks all! oknazevad (talk) 15:47, 3 February 2025 (UTC)
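
The BRFA mentioned above covered the actual run; what follows is only a minimal sketch of what such a redirect-bypass pass can look like in pywikibot, assuming a plain "{{MLS|...}}" to "{{MLS player|...}}" substitution is enough (lower-case "{{mls}}" uses and unusual whitespace would need extra handling).

Redirect-bypass sketch (Python / pywikibot)
import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")
redirect = pywikibot.Page(site, "Template:MLS")
# Match "{{MLS" only when it is the whole template name (next char is | or }).
PATTERN = re.compile(r"\{\{\s*MLS\s*([|}])")

for page in redirect.getReferences(only_template_inclusion=True, namespaces=0):
    new_text = PATTERN.sub(r"{{MLS player\1", page.text)
    if new_text != page.text:
        page.text = new_text
        page.save(summary="Bypassing [[Template:MLS]] redirect per RfD", minor=True)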

IUCN Status Bot

Anyone have a bot to update the conservation statuses of organisms? If not, I can help make it myself (granted, my knowledge is very limited, but I am willing to learn). AidenD (talk) 04:44, 21 January 2025 (UTC)
Hey @AidenD, I made a template to link TNC status from Wikidata to organisms. If you want to work on IUCN, I would be willing to work with you on doing something similar. Template:TNCStatus Dr vulpes (Talk) 23:43, 2 February 2025 (UTC)
Sure! Are you free to reach out on, say, Discord? AidenD (talk) 06:28, 3 February 2025 (UTC)
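
One possible starting point, rather than scraping the IUCN site, is reading the status from Wikidata. The sketch below assumes P141 is the IUCN conservation status property; writing the value back into {{Speciesbox}}/{{Taxobox}} parameters is the harder part and is not shown.

IUCN-status lookup sketch (Python / pywikibot)
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def iucn_status(article_title: str):
    """Return the English label of the article's P141 value, if any."""
    page = pywikibot.Page(site, article_title)
    item = pywikibot.ItemPage.fromPage(page)
    item.get()
    claims = item.claims.get("P141", [])
    if not claims:
        return None
    status = claims[0].getTarget()
    status.get()
    return status.labels.get("en")

print(iucn_status("Giant panda"))  # e.g. "vulnerable"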

Bot to go over pages at Category:Talk pages with comments before the first section

The Category:Talk pages with comments before the first section description says that these pages can cause display issues on mobile. WP:AWB, as part of its general fixes, adds an "Untitled" heading to these sections, which I think is a good enough fix and is much better than just leaving these as is. If someone can get a bot to do that, it would be the easiest solution. Gonnym (talk) 15:52, 1 February 2025 (UTC)
I think the bot should ignore /todo and /GA pages for the first pass, as those might need a different fix. Gonnym (talk) 16:14, 1 February 2025 (UTC)
Is that cat accurate? I randomly chose a page (Talk:Abersychan School) which does not fall into that category, and it's been unedited long enough that I don't see it as a cache issue. Primefac (talk) 17:14, 1 February 2025 (UTC)
That page has {{WikiProject Schools}}, which uses |info= text that the software sees as a comment (I'm not really sure how relevant that system of comments inside banners is in 2025; I doubt any comment from 2007 in a banner can be helpful). Gonnym (talk) 17:30, 1 February 2025 (UTC)
Yeah, I actually read the cat documentation (shocker!) after I posted; agree that's why. Curious how many other pages like that are technically fine but the system thinks they're messed up... Primefac (talk) 17:32, 1 February 2025 (UTC)
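
A sketch of the AWB-style fix described above: if a talk page has comment text before its first == heading ==, insert an "Untitled" heading above it. The crude template stripping here is exactly where the |info= banner-comment false positives discussed above would need better handling.

"Untitled" section sketch (Python)
import re

FIRST_HEADING = re.compile(r"^==[^=].*==\s*$", flags=re.MULTILINE)
# Leading run of top-level templates (banners), allowing one level of nesting.
LEADING_TEMPLATES = re.compile(r"\A(?:\s*\{\{(?:[^{}]|\{\{[^{}]*\}\})*\}\})*\s*")

def add_untitled_section(text: str) -> str:
    m = FIRST_HEADING.search(text)
    lead_end = m.start() if m else len(text)
    banners_end = min(LEADING_TEMPLATES.match(text).end(), lead_end)
    if not text[banners_end:lead_end].strip():
        return text  # nothing but banners before the first heading
    return text[:banners_end] + "\n== Untitled ==\n" + text[banners_end:]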

Bot to clean up spaces around non-breaking spaces

Hello, there is a need for a bot to remove leading and trailing spaces around the non-breaking space character (&nbsp;). If a space exists, you end up with two spaces in the rendered text, and it negates the purpose of having a non-breaking space, since a break can be made between the space and the non-breaking space. The bot should ignore cases where a non-breaking space is used as a template parameter or a cell entry in a table. Keith D (talk) 00:10, 5 February 2025 (UTC)
Keith D, sounds like WP:CONTEXTBOT - try WP:AWBREQ. — Qwerfjkltalk 16:51, 5 February 2025 (UTC)
@Keith D - Would this be a candidate for WP:AWB/Typos? GoingBatty (talk) 18:12, 8 February 2025 (UTC)
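
A sketch of the substitution itself, ignoring the exclusions Keith D mentions (template parameters, table cells) and the CONTEXTBOT concern in the replies; those exclusions are what would make a blind bot run unsafe.

&nbsp; spacing cleanup sketch (Python)
import re

NBSP_SPACING = re.compile(r" *(&nbsp;) *")

def tighten_nbsp(wikitext: str) -> str:
    """Drop ordinary spaces sitting directly before or after &nbsp;."""
    return NBSP_SPACING.sub(r"\1", wikitext)

print(tighten_nbsp("10 &nbsp; km"))  # -> "10&nbsp;km"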

Auto URL Access Level

Specific publications (generally newspapers) have global URL access requirements. I believe it would be useful for a bot to crawl for specific websites within citations and apply {{registration required}}, {{subscription required}} or {{limited access}} based on a list somewhere. For example: www.smh.com.au -> {{limited access}}; www.afr.com -> {{subscription required}}. Both of these publications in particular were heavily referenced prior to the subscription model being introduced. A fair chunk of other various rags would find their place in this list, too. :) Losbeth (talk) 15:11, 9 February 2025 (UTC)
@GreenC, does your bot do this? — Qwerfjkltalk 18:21, 9 February 2025 (UTC)
No, it doesn't. I think this sort of bot could be error prone. You have to assume nothing. If a website supposedly has a global policy, it almost surely is not a global policy; there will be exceptions. And that policy will change in the future. At best, maybe a bot that detects known page warnings, such as a subscription-required banner, checking each URL one by one; that is, adding |url-access= based on verification. Here is an afr.com page that is not subscription required. -- GreenC 22:26, 9 February 2025 (UTC)
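
A sketch of the list-driven idea as originally proposed, with a hypothetical two-entry domain map; GreenC's caveat above (verify each URL rather than assume a site-wide policy) is the reason this probably should not run unattended.

URL-access suggestion sketch (Python)
import re
from urllib.parse import urlparse

# Hypothetical mapping; a real list would live on a wiki page and be reviewed.
ACCESS_BY_DOMAIN = {
    "www.smh.com.au": "limited",
    "www.afr.com": "subscription",
}

CITE_URL = re.compile(r"\|\s*url\s*=\s*(?P<url>https?://[^\s|}]+)")

def suggest_url_access(citation_wikitext: str):
    """Return a candidate |url-access= value for a citation, or None."""
    m = CITE_URL.search(citation_wikitext)
    if not m:
        return None
    return ACCESS_BY_DOMAIN.get(urlparse(m.group("url")).netloc)

print(suggest_url_access("{{cite news |url=https://www.afr.com/story |title=Example}}"))
# -> "subscription"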