consider adopting a policy that disallows LLM/"AI" contributions #8615
My 2¢ as a (light) ImageMagick user: I am wholeheartedly in favour of a no-AI policy in the project. I don't care about features being rolled out faster; I use ImageMagick because it is the most reliable image manipulation tool out there. If release speed is prioritised by allowing lower-quality code into the project, that would defeat the point! As an open-source contributor and developer who learned programming through open source, I also fear that generative AI models, as they stand today, produce code that is harder for humans to read than hand-written code (or at least, that has been my experience reviewing AI-generated code so far). This bothers me a lot, as it risks killing off the way I learned (and still learn!) to write code, and would prevent future generations from learning the craft of programming that way. To me, that makes open source much less valuable, and much less interesting.
I will add that I don't expect this to be fixed any time soon. In translation, where I have fairly extensive experience, it is well known that machine translation software introduces subtle mistakes that no human would ever make, and that humans are particularly bad at spotting them unless specifically trained to do so. Since code generation tools are still very recent, I don't expect anyone in the software world to be fully trained to spot these mistakes, so my advice would be to hold off on accepting contributions that use them until there are reviewers specifically trained to handle these issues.
We maintain a neutral stance on the use of large language models (LLMs). The ImageMagick community has benefited from LLM-assisted contributions, but the project has also experienced setbacks when low-quality, AI-generated material created additional work for the development team. LLMs are now a permanent part of the software ecosystem, and it will only become more difficult to distinguish whether an issue report or patch originated from a human or an automated tool: GPT-4.5 was judged to be human 73% of the time in randomized, controlled Turing-test experiments, significantly above the 50% chance level, and that percentage will only grow. We will continue to keep humans in the loop. All contributions, whether written by a person, an LLM, or a mix of both, will be reviewed by a maintainer and tested before they are committed. What matters is the quality, clarity, and correctness of the contribution, not the tool used to produce it.
Is your feature request related to a problem? Please describe.
Commit bd4a469, which makes use of the Claude Opus LLM, was recently merged.
Accepting such models for open source contributions brings numerous technical, legal, social, and economic issues.
On the technical side:
LLMs often generate subtly incorrect code that is hard to spot, along with reviews and issue reports padded with unnecessary, verbose noise.
On the legal side:
It is still uncertain whether LLM-assisted code can hold a valid copyright; some jurisdictions hold that the generated output lacks sufficient human input and therefore falls into the public domain.
The provenance of the training data is also dubious, which often means the generated output is derived from code that would be incompatible with the ImageMagick license.
LLMs are also being used to dubiously wash away license requirements[1].
On the social side:
Validating the use of LLMs promotes acceptance of the abuse caused by the companies behind the models, companies more preoccupied with acquiring capital and power than with the real damage their products cause to people[2][3].
On the economic side:
These models are also used as a justification to lay off workers, by managers sold on the idea of LLMs as "cheaper replacements".
Not only is the assumption that LLMs can replace skilled workers wrong[4], as companies discover after such layoffs, it is also the cause of suffering for many people.
Legitimizing the use of LLMs in open source projects only strengthens the ability of those selling the service to push for the removal and de-skilling of workers, making those who remain dependent on their services.
1: chardet/chardet#327
2: https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking
3: https://apnews.com/article/google-gemini-ai-chatbot-gavalas-lawsuit-aba0587b782d4424aa780a8612f3fe30
4: https://www.cbsnews.com/news/ai-layoffs-2026-artificial-intelligence-amazon-pinterest/
Describe the solution you'd like
Multiple open source projects have already adopted a no-LLM policy[1].
I suggest the ImageMagick project adopt one as well, with wording similar to that of Gentoo[2] or Redox OS[3].
1: https://noai.starlightnet.work/list.html
2: https://wiki.gentoo.org/wiki/Project:Council/AI_policy
3: https://gitlab.redox-os.org/redox-os/redox/-/blob/master/CONTRIBUTING.md#ai-policy
Describe alternatives you've considered
No response
Additional context
No response