1. 58
    1. 39

      If this is trained on copyleft code under CDDL, CC-BY-SA, GPL, etc., then presumably the code it outputs would be a derived work, as would the code you use it in?

      Most code is licensed to impose some restrictions on use & distribution, from retaining copyright messages to patent indemnification. I wonder how that was overcome here.

      Worst case it’s a really interesting machine learning copyright case study.

      1. 10

        Would it be reasonable to say that anybody who unknowingly writes code that is similar to copyleft code that they’ve at some point read is producing a derived work? I realize it’s not exactly the same scenario, but presuming the AI consumes and transforms the original work in some fashion and doesn’t just copy it, then it seems that it wouldn’t constitute a derived work by the same measure.

        1. 10

          The FSF told us that, when working on Octave, we should not read Matlab source code at all (much of which is available for inspection), nor use Matlab itself. In effect, they told us to do clean-room reverse engineering: someone else could run code in Matlab and tell us what it did, and we could then try to replicate that behavior in Octave, but it had to be a different person, and they could only describe what happened. Using Matlab documentation to implement Octave code was also considered safe.

          Yes, copyright cases have been lost where the derivative work was very similar and was produced with knowledge of the copyrighted work; I’m thinking of musical riffs in particular. Overtly stating that you’re reading copyrighted work to produce derivative work seems to put GitHub in murky legal waters, but I assume they have lots of lawyers who told them otherwise, so this is probably going to be okay, or they’re ready to fight off the small-time free software authors who try to assert their copyleft.

        2. 7

          IANAL, but there are definitely questions here. There’s a history of questions around clean-room design, when you need it, what it gets you, etc. https://en.wikipedia.org/wiki/Clean_room_design

          1. 5

            I was chatting with someone on Twitter about this. If clean-room design is “demonstrably uncontaminated by any knowledge of the proprietary techniques”, then Copilot being a black box seems not to fit that definition. What comes out of it has no clear provenance, since it could be synthetic or a straight copy from various sources. The Copilot FAQ on Protecting Originality already concedes that it occasionally (“0.1% of the time”) copies code directly from the copyrighted corpus.

        3. 6

          But in this case the author hasn’t written the code. It wasn’t a creative process that accidentally ended up looking like another work. They’ve taken a copy supplied by Copilot.

          Copyright laws are centered around creative work, and were written with humans in mind. I don’t think “AI” would count as a creative thinker for the purpose of the law. One could argue that existing AIs are just obfuscated databases with a fancy query language.

          1. 1

            The original comment was in regards to considering the code being output as a derived work. My comment wasn’t about the consumer of the AI output, it was about whether or not the AI output itself would constitute a derived work. I was making the comparison between the AI output and some unknowingly written copyleft similar work.

        4. 4

          I have been warned to keep GPL code off slides because employers have policies about this.

          1. 2

            Wouldn’t GPL code on slides mean that your slides are now subject to the GPL?

            Assuming GPLv2, how does clause 2c apply to “PowerPoint code”? PowerPoint reads “commands” (the space bar) interactively when you “run” (open) it. Are you required “to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License”?

            I hate to say it, but the employer’s policy, while paranoid, is probably sensible (especially if you work for them).

      2. 6

        On HN, the author of the announcement blogpost claims that “jurisprudence” (whatever that means) and common understanding in the ML community hold that “ML training is fair use”. I have my doubts about that, basically along the lines of what you wrote below about clean-room design/reverse engineering. But IANAL, and I suspect it will take some lawyering to sort this out one way or another. Personally, I also suspect that might be one of the reasons they kept it as a limited preview for now.

        1. 11

          Copying an idea from someone else: the litmus test is easy. Will MS train the model on their Win32 and Azure repos? If not, why not?

          1. 10

            Similarly, if someone else were to train a model on leaked MS code, how quickly would the cease and desists start arriving?

        2. 4

          Hmm, that sounds very… dubious to me.

          Like, what exactly is “machine learning training”? Is building an n-gram model fair use? What if I train my n-gram model on too little data so that it’ll recreate something which has a relatively high level of similarity to the copyrighted training set? Many statistical machine learning models can be viewed as a lossy encoding of the source text.

          It seems plausible to me that the “copilot” will auto-generate blocks of source code that are very similar to existing code. Proving that the AI “came up with it on its own” (whatever that even means when we’re talking about a bunch of matrices transforming input into output) and didn’t “copy it” from something copyrighted seems extremely legally difficult.

          If machine learning training is “fair use”, and the output generated by the model is owned by the maker of the model, then overfitted ML models become automatic copyright-stripping machines. I wouldn’t mind, but that sounds very weird.
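          The overfitting worry is easy to make concrete. A minimal sketch (hypothetical toy corpus and word-level trigrams, nothing like Copilot’s actual scale or architecture): with only one work in the training data, the “model” can do nothing but replay its source verbatim.

```python
from collections import defaultdict

def train_trigrams(text):
    # Map each pair of consecutive words to the words seen following them.
    words = text.split()
    model = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, a, b):
    out = [a, b]
    while (a, b) in model:
        # With a tiny corpus there is exactly one continuation: a verbatim copy.
        a, b = b, model[(a, b)][0]
        out.append(b)
    return " ".join(out)

# One short "copyrighted" work as the entire training set:
corpus = "so long and thanks for all the fish"
model = train_trigrams(corpus)
generate(model, "so", "long")  # reproduces the corpus word for word
```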

      3. 4

        Even if the code used for training wasn’t copyleft, the attribution/license-inclusion requirement still stands. And I am quite sure that GitHub doesn’t include all the licenses from all the code repositories they used to train the model (they say billions of lines, so that’s at least 10k distinct licenses; good luck with that, and with their mutual compatibility).

      4. 4

        CNPLv6 has a clause explicitly forbidding this.

        CNPLv6 - 1b “…In addition, where the Work is designed to output a neural network the output of the neural network will be considered an Adaptation for the purpose of this license.”

        Are there examples of other licenses, besides mine, which explicitly forbid laundering the work of the commons into something proprietary using machine learning?

    2. 31

      As pointed out on Twitter, the sentiment.ts example has an escaping bug in the text argument, and the parse_expenses.py example parses currency amounts using floats (and also assumes that there’s only ever exactly one space separating the fields, but maybe that’s justifiable).
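      On the float point: binary floats cannot represent most decimal cent amounts exactly, which is why money-handling code conventionally uses a decimal type. A minimal illustration (not the actual parse_expenses.py code):

```python
from decimal import Decimal

# Binary floats accumulate rounding error on decimal amounts:
float_sum = 0.10 + 0.20          # 0.30000000000000004, not 0.30
# Decimal arithmetic stays exact, which is what you want for money:
exact_sum = Decimal("0.10") + Decimal("0.20")
exact_sum == Decimal("0.30")     # True
```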

      The shipping addresses example is very US-centric (zip/state is not universal!).

      And these are the examples they choose to showcase it!

      Overall it’s pretty neat, I think, but I wouldn’t want to use it for anything serious.

      e: Also, their ‘strip suffix’ example is almost definitely not going to work well in the presence of dotfiles, .., and so on.
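      To make the suffix point concrete (this is a guess at the failure mode, not their exact code): a naive “drop everything after the last dot” mangles dotfiles and .., while os.path.splitext handles them.

```python
import os.path

def naive_strip(name):
    # Hypothetical naive version: drop everything after the last dot.
    return name.rsplit(".", 1)[0]

naive_strip("notes.txt")        # 'notes'    -- fine
naive_strip(".bashrc")          # ''         -- the whole filename vanishes
naive_strip("..")               # '.'
os.path.splitext(".bashrc")[0]  # '.bashrc'  -- dotfiles left intact
os.path.splitext("..")[0]       # '..'
```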

      1. 8

        I notice that all their code samples are missing escaping where appropriate.

        Maybe this is the future. But man, if it is, I’m not looking forward to a world where “programming” consists of asking the AI to generate code for you so that you can debug the AI’s code, and I’m not looking forward to all the security vulnerabilities which come from not catching the AI’s bugs.
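        As an illustration of the class of bug (not Copilot’s actual output): building a JSON payload by string interpolation works on the happy path and breaks the moment the input contains a quote, which is exactly the kind of thing proper escaping prevents.

```python
import json

text = 'He said "hi"'
bad = '{"query": "%s"}' % text      # naive interpolation, no escaping
good = json.dumps({"query": text})  # the library escapes the inner quotes

json.loads(good)["query"]           # round-trips the original text
# json.loads(bad) raises json.JSONDecodeError: the quotes inside `text`
# terminated the JSON string early.
```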

        1. 1

          I mean, on the other hand, what’s your day rate for debugging gnarly production bugs?

          Perhaps we should be encouraging this ;)

      2. 4

        The top example, write_sql.go, is missing a check of rows.Err(): https://golang.org/pkg/database/sql/#Rows

    3. 29

      Can’t wait to get n-gate’s take.

    4. 18

      Does anyone else see this as a sign that the languages we use are not expressive enough? The fact that you need an AI to help automate boilerplate points to a failure in the adoption of powerful enough macro systems to eliminate the boilerplate.

      1. 1

        Why should that system be based upon macros and not an AI?

        1. 13

          Because you want deterministic and predictable output. An AI is ever-evolving and therefore might give different outputs for a given input over time. Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

          1. 3

            Because you want deterministic and predictable output. An AI is ever-evolving and therefore might give different outputs for a given input over time.

            Deep learning models don’t change their weights if you don’t purposefully update them. I can foresee an implementation where weights are kept static or updated on a given cadence. That said, I understand that for a language macro system you would probably want something more explainable than a deep learning model.

            Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

            There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

            1. 2

              Deep learning models don’t change their weights if you don’t purposefully update them.

              If you’re sending data to their servers for Copilot to process (my impression is that you are, but I’m not in the alpha and haven’t seen anything concrete on it), then you have no control over whether the weights change.

            2. 2

              Deep learning models don’t change their weights if you don’t purposefully update them.

              Given the high rate of commits on GitHub across all repos, it’s likely that they’ll be updating the model a lot (probably at least once a day). Otherwise, all that new code isn’t going to be taken into account by copilot and it’s effectively operating on an old snapshot of GitHub.

              There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

              As far as I can tell, the majority of people (even tech people) are still using software that snoops on them. Just look at the popularity of, for example, VSCode, Apple and Google products.

        2. 2

          I wouldn’t have an issue with using a perfect boilerplate-generating AI (well, beyond the lack of brevity); I was more commenting on the fact that this had to be developed at all, and on how it reflects on the state of coding.

          1. 1

            Indeed it’s certainly good food for thought.

        3. 1

          Because programmers are still going to have to program, but instead of being able to deterministically produce the results they want, they’ll have to do some fuzzy NLP incantation to get what they want.

        4. [Comment removed by author]

      2. 1

        I don’t agree on the macro systems point, but I do see it the same. As a recent student of BQN, I don’t see any use for a tool like this in APL-like languages. What, and from what, would you generate, when every character carries significant meaning?

      3. 1

        I think it’s true. The whole point of programming is abstracting away as many details as you can, so that every word you write is meaningful. That would mean that it’s something that the compiler wouldn’t be able to guess on its own, without itself understanding the problem and domain you’re trying to solve.

        At the same time, I can’t deny that a large part of “programming” doesn’t work that way. Many frameworks require long repetitive boilerplate. Often types have to be specified again and again. Decorators are still considered a novel feature.

        It’s sad, but at least, I think it means good programmers will have job security for a long time.

        1. 1

          I firmly disagree. Programming, at least as evolved from computer science, is about describing what you want using primitive operations that the computer can execute. For as long as you’re writing from this directions, code generating tools will be useful.

          On the other hand, programming as evolved from mathematics and programming language theory fits much closer to your definition, defining what you want to do without stating how it should be done. It is the job of the compiler to generate the boilerplate after all.

          1. 1

            We both agree that we should use the computer to generate code. But I want that generation to be automatic, and never involve me (unless I’m the toolmaker), rather than something that I have to do by hand.

            I don’t think of it as “writing math”. We are writing in a language in order to communicate. We do the same thing when we speak English to each other. The difference is that it’s a very different sort of language, and unfortunately it’s much more primitive, by the nature of the cognition of the listener. But if we can improve its cognition to understand a richer language, it will do software nothing but good.

    5. 10

      This is like stacksort on steroids. I have to wonder where they got their English input though. Did they hire a bunch of people to look at some code and explain what it did?

      1. 1

        I see what you’re getting at here but I disagree.

        Using ML to learn about common coding patterns and help people be more productive by suggesting better/more efficient/more idiomatic alternatives feels like much more than a simple recursive grep to me :)

    6. 10

      Won’t stop anyone from taking your code and putting it on GitHub…

    7. 10

      You may be interested to read You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion, where the authors inserted files into the training data so that the model would suggest insecure code completions.

    8. 7

      The model got this right 43% of the time on the first try, and 57% of the time when allowed 10 attempts. And it’s getting smarter all the time.

      43% chance of getting it right and you have to try 10 times for it to just have over a 50% chance? What the hell
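      For scale, a back-of-the-envelope check: if the 10 attempts were independent draws at the 43% rate, overall success would be near-certain, so the reported 57% implies the failures are strongly correlated (some prompts it evidently just can’t do).

```python
p_single = 0.43
# Probability that at least one of 10 independent attempts succeeds:
p_ten_independent = 1 - (1 - p_single) ** 10
round(p_ten_independent, 3)  # 0.996, versus the reported 0.57
```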

      How will GitHub Copilot get better over time?

      GitHub Copilot doesn’t actually test the code it suggests, so the code may not even compile or run. GitHub Copilot can only hold a very limited context, so even single source files longer than a few hundred lines are clipped and only the immediately preceding context is used. And GitHub Copilot may suggest old or deprecated uses of libraries and languages. You can use the code anywhere, but you do so at your own risk.

      They didn’t even answer the question?

      OpenAI Codex was trained on publicly available source code and natural language

      I have zero trust that throwing a ton of code from randoms into a blender is going to give anything meaningful beyond a couple of “Ooooooh” demo examples but go off I guess.

      1. 9

        This strikes me as the same “it may not be perfect just yet, but trust us, it will be if you just give us enough time” of the self-driving car hype. Completely ignoring reality…

      2. 1

        They didn’t even answer the question?

        To be fair, it’s a question they asked themselves that they can’t really answer. They’re not psychic; I’m sure they hope it will get better.

        Not compiling/running actually suggests some obvious paths forward in the development: train it against generating samples that don’t compile, or stick a filter on the front end that drops suggestions that don’t compile. Maybe that’s why they started talking about it?

        1. 1

          Okay, but it literally reads like the answer to that FAQ question was intended for a completely different question. It’s entirely nonsensical in the context of the question. Even if they were going to answer the question tangentially, I’d expect them to bring the tangent back around at the end and make it somewhat related. It seems like they’re actually answering the question “is it safe to use this code?”

      3. 0

        I have zero trust that throwing a ton of code from randoms into a blender is going to give anything meaningful beyond a couple of “Ooooooh” demo examples but go off I guess.

        There’s more going on here than just tossing random code.

        1. 2

          Please elaborate.

    9. 6

      I wonder if this model could be turned on its head to score each region of code by its expected bugginess.

      “danger (or congrats): no one in the history of time has ever written anything like this before”
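      A crude, non-neural sketch of the idea (assuming nothing about Copilot’s internals): score each line by how rare its tokens are in a reference corpus, so code that “no one has ever written” lights up.

```python
from collections import Counter
import math

def rarity_scores(corpus_lines, target_lines):
    # Token frequencies from code we treat as "seen before".
    counts = Counter(tok for line in corpus_lines for tok in line.split())
    total = sum(counts.values())
    scores = []
    for line in target_lines:
        toks = line.split() or [""]
        # Add-one smoothing; higher score = rarer tokens = more "suspicious".
        score = sum(-math.log((counts[t] + 1) / (total + 1)) for t in toks) / len(toks)
        scores.append(score)
    return scores

corpus = ["for i in range(10):", "print(i)", "for x in items:"]
common, novel = rarity_scores(corpus, ["for i in range(10):", "frobnicate(quux, 7)"])
# novel > common: the never-before-seen line is flagged as more suspicious
```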

      1. 1

        Although, I suppose the output might be less than useful: “I have a vague feeling that this might be wrong but I can’t explain why”.

        1. 6

          That could be incredibly useful as a code review tool! Kind of gives you a heatmap of which spots to focus most attention on as a code reviewer. I want it yesterday.

          1. 1

            Hm; OTOH, if a bug is common enough to have a major presence in the input corpus, I can see how it could result in a false-positive “green” mark for a faulty fragment of code… super interesting questions, for sure :) Maybe it should only be used for “red” coloring, the rest being left as “unrated”.

    10. 5

      I just deleted my second-to-last github repo. My last one remains mostly because it’s actively used by a large number of developers but not actively developed enough to justify the work it would take to move it. But maybe if GitHub continues to make huge mistakes like this I’ll finally be motivated enough to pull the plug completely.

    11. 5

      This seems fun, and maybe a good tool for building proofs of concept. But I hardly see it being useful for large projects. Or have I become old and grumpy?

      1. 13

        As a stranger on the internet, I can be the one to tell you that you are old and grumpy.

        Ruby is definitely unusable without syntax highlighting… (sadists excepted). Java is definitely unusable without code completion… (sadists excepted). Whatever comes next will probably be unusable without this thing or something like it.

        1. 9

          I’m confused… Ruby has one of the best syntaxes to read without highlighting. Not as good as Forth, but definitely above average.

          1. 2

            I used to think this way. Then I learned Python and now I no longer do.

            When I learned Ruby I was coming from Perl, so the Perl syntactic sugar (Which the Ruby community now seems to be rightly fleeing from in abject terror) made the transition much easier for me.

            I guess this is my wind-baggy way of saying that relative programming language readability is a highly subjective thing, so I would caution anyone against making absolute statements on this topic.

            For instance, many programmers not used to the syntax find FORTH to be an unreadable morass of words and punctuation, whereas folks who love it inherently grok its stack based nature and find it eminently readable.

            1. 1

              Oh, sure, I wasn’t trying to make a statement about general readability, but about syntax highlighting.

              For example, forth is basically king of being the same with and without highlighting because it’s just a stream of words. What would you even highlight? That doesn’t mean the code is readable to you, only that adding colour does the least of any syntax possible, really.

              Ruby has sigils for everything important and very few commonly-used keywords, so it comes pretty close also here. Sure you can highlight the few words (class, def, do, end, if) that are in common use, you could highlight the kinds of vars but they already have sigils anyway. Everything else is a method call.

              Basically I’m saying that highlighting shines when there are a lot of different kinds of syntax, because it helps you visually tell them apart. A language with a lot of common keywords, or uncommon kinds of literal expressions, or many built-in operators (which are effectively keywords), that kind of thing.

              Which is not to say no one uses syntax highlighting in ruby of course, some people find that just highlighting comments and string literals makes highlighting worth it in any syntax family, I just felt it was a weird top example for “syntax highlighting helps here”.

              1. 3

                Thank you for the clarification; I understand more fully now.

                Unfortunately, while I can see where you’re coming from in the general case, I must respectfully disagree at least for myself. I’m partially blind, and syntax highlighting saves my bacon all the time no matter what programming language I’m using :)

                I do agree that Ruby perhaps has visual cues that other programming languages lack.

                1. 1

                  I’m partially blind, and syntax highlighting saves my bacon all the time no matter what programming language I’m using :)

                  If you don’t mind me asking - have you tried any Lisps, and if so, how was your experience with those? I’m curious as to whether the relative lack of syntax is an advantage or a disadvantage from an accessibility perspective.

                  1. 1

                    Don’t mind you asking at all.

                    So, first off I Am Not A LISP Hacker, so my response will be limited to the years I ran and hacked emacs (I was an inveterate elisp twiddler. I wasted WAY too much time on it which is why I migrated back to Vim and now Vim+VSCode :)

                    It was a disadvantage. Super smart parens matching helped, but having very clear visual disambiguation between blocks and other code flow altering constructs like loops and conditionals is incredibly helpful for me.

                    It’s also one of the reasons I favor Python versus any other language where braces denote blocks rather than indentation.

                    In Python, I can literally draw a vertical line down from the construct and discern the boundaries of the code it affects. That’s a huge win for me.

                    Note that this won’t keep me from eventually learning Scheme, which I’d love to do. I’m super impressed by the Racket community :)

              2. 1

                For example, forth is basically king of being the same with and without highlighting because it’s just a stream of words. What would you even highlight? That doesn’t mean the code is readable to you, only that adding colour does the least of any syntax possible, really.

                You could use stack effect comments to highlight the arguments to a word.

                : squared ( n -- n*n )
                    dup * ;
                3 squared .
                

                For example, if squared is selected then the 3 should be highlighted. There’s also Chuck Moore’s ColorForth, which uses color as part of the syntax.

          2. 3

            Well, this is the internet. Good luck trying to make sense of every take.

        2. 6

          Masochists (people that love pain on themselves), not sadists (people that love inflicting pain on others).

          1. 2

            Ah, thank you for the correction.

            I did once have a coworker who started programming Ruby in Hungarian notation so that they could code without any syntax highlighting. Does that work?

            1. 4

              That counts as both ;)

        3. 2

          Go to source is probably the only reason I use IDEs. Syntax highlighting does nothing for me. I could code entirely in monochrome and it wouldn’t affect the outcome in the slightest.

          On the other hand, you’re right. Tools create languages that depend on those tools. Intellij is infamous for that.

      2. 6

        You’re old and grumpy :) But seriously, the fact that it’s restricted to GitHub Codespaces right now limits its usefulness for a bunch of us.

        However, I think this kind of guided assistance is going to be huge as the rough edges are polished away.

        Will the grizzled veterans coding exclusively with M-x butterflies and flipping magnetic cores with their teeth benefit? Probably not, but they don’t represent the masses of people laboring in the code mines every day either :)

        1. 4

          I don’t do those things; I use languages with rich type information along with an IDE that basically writes the code for me already. I just don’t understand who would use these kinds of snippets regularly other than people building example apps or PoCs. The vast majority of code I write on a daily basis calls into internal APIs that are part of the product I work on; those won’t be in the snippet catalog this thing uses.

          1. 4

            I don’t doubt it but I would also posit that there are vast groups of people churning out Java/.Net/PHP/Python code every day who would benefit enormously from an AI saying:

            Hey, I see you have 5 nested for loops here. Why don’t we re-write this as a nested list comprehension. See? MUCH more readable now!

            1. 4

              The vast majority of code I write on a daily basis calls into internal APIs that are part of the product I work on, those won’t be in the snippet catalog this things uses.

              Well, not yet. Not until they come up with a way to ingest and train based on private, internal codebases. I can’t see any reason to think that won’t be coming.

            2. 2

              Oh sure, I agree that’s potentially (very) useful, even for me! I guess maybe the problem is that the examples I’ve seen (and admittedly I haven’t looked at it very hard) seem to be more like conventional “snippets”, whereas what you’re describing feels more like a AST-based lint that we have for certain languages and in certain IDEs already (though they could absolutely be smarter).

            3. 2

              Visual Studio (the full IDE) has something like this at the moment, and it’s honestly terrible. It always suggests inverting if statements in ways that break the logic. Another feature, which I haven’t taken the time to figure out how to disable, ‘highlights’ code with a little grey line at the side of the IDE (where breakpoints would be) and suggests changes such as condensing your try/catch blocks onto one line instead of keeping them nice and readable.

              It could be great in the future if it could get to what you suggested!

          2. 3

            Given that GH already has an enterprise offering, I can’t see a reason why they can’t enable the copilot feature and perform some transfer learning on a private codebase.

          3. 1

            Is your code in GitHub? All my employer’s code that I work on is in our GitHub org, some repos public, some private. That seems like the use case here. Yeah, if your code isn’t in GitHub, this GitHub tool is probably not for you.

            I’d love to see what this looks like trained on a GitHub-wide MIT licensed corpus, then a tiny per-org transfer learning layer on top, with just our code.

            1. 1

              Yeah, although, to me, the more interesting use-case is a CI tool that attempts to detect duplicate code / effort across the organization. Not sure how often I’d need / want it to write a bunch of boilerplate for me.

      3. 1

        It feels like a niftier autocomplete/IntelliSense, kind of like how Gmail provides suggestions for completing sentences. I don’t think it’s world-changing, but I can imagine it being useful when slogging through writing basic code structures. Of course, you could do the same thing with macros in your IDE, but this doesn’t require any configuration.

    12. 5

      Does anybody use tabnine? I get tired of it sometimes, and it suggests code that might not type check, but it can complete a lot of non-trivial Haskell code and it automates a lot of the boilerplate in a web project I maintain.

      1. 2

        I use tabnine regularly and I do find it useful. I think I prefer their approach of suggesting single lines instead of complete blocks (as appears to be the case with GitHub Copilot), although I would have to test it. It saves me from googling/remembering snippets of code for some mundane tasks.

        1. 1

          Try writing an evaluator for simple arithmetic expressions in Haskell or Elixir - tabnine gets too much right :)
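          For anyone wondering what that exercise looks like, here’s the shape of it in Python (the comment is about Haskell/Elixir; this is just an indicative sketch of the task tabnine keeps completing):

```python
import ast
import operator

# Supported binary operators for the toy evaluator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr):
    # Parse the expression, then reduce the tree bottom-up.
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval").body)

evaluate("2 + 3 * 4")  # 14
```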

    13. 4

      So, what programming languages are supported? I see a few examples in the demo tabs (Python, Ruby, TypeScript), but no definitive list.

      1. 5

        Python, JavaScript, TypeScript, Ruby and Go.

      2. 2

        GitHub employee here, but not working on Copilot. Copilot works on lots of languages. It has the best support for Python, JavaScript, TypeScript, Ruby, and Go but it should work well on any somewhat well-known language. For example, it works fairly well on Clojure. For a laugh I tried it on Q and it didn’t seem to really work, though perhaps I needed to work with a larger file.

    14. 4

      This looks pretty interesting and I’ll be curious to play with it.

      I can see this kind of thing turning out to be a huge productivity win that reduces toil on a lot of repetitive programming tasks. Programmers spend a significant amount of time on, “Sort of like this other code, but not quite similar enough to be able to reuse the existing implementation” kinds of work.

      I can also see it turning out to be “copy and paste from Stack Overflow without understanding what you’re pasting” on steroids, leading to maintenance nightmares down the road because no human has ever known why the code looks a certain way. How many people will bother to clean up generated code snippets that seem to work correctly?

      Not sure which outcome is more likely. Probably a mix of both.

      1. 3

        I’ve had a fellow senior software engineer paste a nontrivial answer from Stack Overflow verbatim into a codebase recently, not even writing a single unit test for it until asked to. It gave me an existential crisis that I still haven’t resolved and am not sure how to approach, especially as this seems to be deeply in line with their general philosophy of working and their attitude. And while, IMO, that is deeply un-senior and troubling, they are very senior in other areas, and I learn a lot from them. I tried to have a talk with them on related topics, but they didn’t seem open to change at the time.

    15. 4

      With developments such as GPT-f (found here on Lobsters) around the corner, I suspect AI will be a driving force in increasing automation of many quotidian programming tasks. It’ll be interesting to see how this interacts with the abnegation culture in programming.

      1. 2

        the abnegation culture in programming

        wdym

        1. 2

          Simply put, the folks that insist on not using newer technology for various reasons, a demographic that’s quite prominently featured on this site. There’s lots of reasoning that goes behind this abnegation from differing philosophies so it’s a complex thing to describe. Examples include folks that eschew syntax highlighting, IDEs, electron, newer devices, and more.

          1. 5

            Speaking as someone who’s had enough exposure to Electron to thoroughly dislike it, but who runs an IDE on new devices with syntax highlighting, I think you may be conflating a few unrelated threads.

            Why do you consider abnegation of new technology to be central enough to tie these disparate and non-overlapping groups together?

            1. 1

              I don’t think these groups are tied together, which is why I said “There’s lots of reasoning that goes behind this abnegation from differing philosophies so it’s a complex thing to describe”. I’m simply pointing out that there is a contingent of folks, often unrelated, who abnegate newer technology for various reasons. I’m interested in seeing the dynamics that emerge here instead of how one particular form of abnegation operates.

              1. 1

                Ah, fair enough - I’d read more into your use of the term demographic than you’d intended.

                For what it’s worth, I’m quite uncertain where I stand on this issue.

                I’m quite certain that many, perhaps most, authors of free software on GitHub did not intend for their work to be used to train a proprietary, commercial tool like this.

                On the other hand, one presumably wouldn’t object to someone wishing to better themselves reading that source … and then going and seeking employment writing proprietary software.

                Should we treat an ML system differently to a human intelligence in this case? If so, why? And where’s the boundary? Would anyone seriously object to a sentient AI doing the same thing?

                1. 1

                  For what it’s worth, I’m quite uncertain where I stand on this issue.

          This is natural, I think. The challenges being posed by AI are new and unique, and it’ll take us time to understand how to deal with them.

    16. 3

      This raises a lot of questions. How are they able to verify correctness with these snippets? Do people need to manually verify them? How do invalid/incorrect snippets get pruned? If a person utilizes a snippet are they more likely to create errors? (I.e., since they are using generated code they might not be familiar with the edge cases and be more likely to use it incorrectly.)

      Interesting concept nonetheless.

      1. 5

        From the FAQ at the bottom of the page, they make no guarantees around correctness. Like any other autocomplete, you should verify the code before moving on.
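
        To make the “verify before moving on” step concrete, here’s a minimal sketch (the `slugify` function is a hypothetical accepted suggestion, not anything from the announcement): even a couple of quick assertions catch the most obvious failure modes before a pasted snippet lands in a codebase.

        ```python
        # Hypothetical Copilot-suggested function, pasted in as-is
        def slugify(title: str) -> str:
            return "-".join(title.lower().split())

        # Quick sanity checks before trusting the suggestion
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Hello   World ") == "hello-world"
        assert slugify("One") == "one"
        ```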

        1. 1

          They make no guarantees, but surely they aren’t just throwing stuff out into the world, right?

      2. 2

        It’s hard to say until they talk more about it (and I’ll bet some sort of paper is forthcoming). I’ll bet they have some sort of AST parser (or perhaps they use the language runtime itself directly) offering a “syntactically valid” check, and they use that to guide training. A pure deep learning approach to enforcing a constraint like syntactic validity would be to create a Generative Adversarial Network and use an AST parser or runtime as an oracle. But there are multiple ways to skin the cat here, so it’s hard to say without more information about the methodology.
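
        For what it’s worth, the “AST parser as oracle” idea is trivial to sketch for Python using the stdlib `ast` module. This is a guess at the shape of such a filter, not how Copilot actually works:

        ```python
        import ast

        def is_valid_python(snippet: str) -> bool:
            """Return True if the snippet parses as Python source."""
            try:
                ast.parse(snippet)
                return True
            except SyntaxError:
                return False

        # Hypothetical raw model outputs, filtered down to parseable candidates
        candidates = [
            "def add(a, b):\n    return a + b",   # parses
            "def add(a, b) return a + b",          # missing colon, rejected
        ]
        valid = [c for c in candidates if is_valid_python(c)]
        ```

        In a GAN setup the same check would serve as (part of) the discriminator’s signal; as a post-hoc filter it just prunes unparseable generations.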

        1. 1

          I want to believe they have this, but I suspect they might have something closer to a filter where some text is generated and then shrunk into a code block. I would really like to know more details about how it works.

          1. 1

            I would have no doubts about them using a GAN with an oracle if it was a useful way to construct a model. GitHub partnered with OpenAI, and OpenAI did make GPT-3, so I don’t see why they wouldn’t reach for something as “basic” as a GAN. I mean, you can set up a basic GAN pretty simply in almost all of the common tensor libraries in Python or ML libraries in Julia.

    17. 2

      Here’s Copilot regurgitating the (GPL-licensed) fast inverse square root function, and then misattributing it by adding the wrong license header: https://twitter.com/mitsuhiko/status/1410886329924194309

      If this thing takes off, it’s gonna be a field day (more like a field decade) for IP lawyers.

    18. 1

      I think the discussion about the implications of this AI is super interesting, but let me tell you, fellow lobsters, this blew my mind. And I enjoyed getting my mind blown, and encourage you to do the same. Just reflect on this fascinating technology and quiet every other thought in your brain for a few minutes. Just wow.

    19. [Comment removed by author]