The NetBSD project seems to agree with me that code generated by “AI” tools like Copilot is tainted and cannot be used safely. The project has added a new guideline banning code generated by such tools from being committed to NetBSD unless explicitly permitted by “core”, which is, roughly, NetBSD’s equivalent of technical management.
Code generated by a large language model or similar technology, such as such as [sic] GitHub/Microsoft’s Copilot, OpenAI’s ChatGPT, or Facebook/Meta’s Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core.
↫ NetBSD Commit Guidelines
GitHub Copilot is copyright infringement and open source license violation at an industrial scale, and as I keep repeating: the fact that Microsoft is not training Copilot on its own closed-source code tells you all you need to know about what Microsoft thinks of Copilot’s legality.
Legalese will not stop Copilot’s adoption. Corporations look for ease of implementation and fewer humans to pay for development. That said, the justice system will side with Microsoft here; that’s why Microsoft wasted no brain cycles considering the law: it has the resources to fight and prevail against literally any open source entity. Still, I’m on the side of NetBSD in this.
And they wrote “such as such as” in the very clause banning the use of LLMs for being unreliable, unsafe, and so on.
I’m not very hopeful for the future.