Bradley M. Kuhn's Blog

2023

October

  • 2023-10-11: Eben Moglen & SFLC — abusive employer & LGBTQIA+ unfriendly

    [ The below is a personal statement that I make on my own behalf. While my statement's release coincides with a release of an unrelated statement on similar topics made by my employer, Software Freedom Conservancy, and the Free Software Foundation Europe, please keep in mind that this statement is my own, personal opinion — written exclusively by me — and not necessarily the opinion of either of those organizations. I did not consult nor coordinate with either organization on this statement. ]

    With great trepidation, I have decided to make this public statement regarding the psychological abuse, including menacing, that I suffered, perpetrated by Eben Moglen, both while I was employed at his Software Freedom Law Center (SFLC) from 2005-2010, and in the years after he fired me. No one revels in having psychological injuries and mistreatment they've suffered paraded to the public. I'll be frank that if it were not for Moglen's use of the USA Trademark Trial and Appeal Board (TTAB) as a method to perpetrate further abusive behavior, I wouldn't have written this post. Furthermore, sadly, Moglen has threatened in recent TTAB filings to use the proceeding to release personal details about my life to the public (using the litigation itself as a lever). I have decided to preemptively make the facts herein public myself — so that I can at least control the timing and framing of the information.

    This post is long; the issues discussed in it are complicated, nuanced, and cannot be summed up easily. Nevertheless, I'm realistic that most people will stop reading soon, so I'll summarize now as best I can in a few sentences: I worked initially with, and then for, Eben Moglen for nearly a decade — during which time he was psychologically abusive and gaslighted me (under the guise of training and mentoring me). I thought for many years that he was one of my best friends (in retrospect, I believe that he tricked me into believing that he was). As such, I shared extremely personal details about myself with him — which he has used both contemporaneously and in the years since to attempt to discredit me with my colleagues and peers. Recently, Moglen declared his plans to use current TTAB proceedings to force me to answer questions about my mental health in deposition0. Because I disclosed key personal information to Moglen long ago, I have a pretty good idea of what his next move will be during that deposition questioning. Specifically, I believe Moglen was hoping to out me as omni/bisexual1 as part of my deposition in this proceeding. As such, I'm outing myself here first (primarily) to disarm his ability to use what he knows about my sexual orientation against me. Since that last sentence makes me already out, Moglen will be unable to use the biggest “secret” he “has on me” in his future psychological and legal attacks.

    I suspect some folks will stop reading here, but I really urge you to keep reading this post, and also to read the unrelated statement made by Conservancy and FSFE. The details are important and matter. I am admittedly embarrassed to talk publicly about how Moglen exacerbated, expanded, and caused new symptoms of my Post-Traumatic Stress Disorder (PTSD) — which I already suffered from when I met him. But, I feel it is important to talk about these issues publicly for many reasons — including that Moglen seeks to expose these personal facts about me as an attempt to stigmatize what is actually a positive thing: I seek ongoing treatment for my PTSD (which Moglen himself, in part, caused) and to simultaneously process and reduce my (painful and stubborn) internalized shame about my LGBTQIA+ status. (Like many proud LGBTQIA+ folks, I struggle with this because living in a society unfriendly to LGBTQIA+ folks can lead to difficult shame issues — this is a well-documented phenomenon that LGBTQIA+ folks like myself suffer from.)

    The primary recent catalyst for this situation is as follows: Moglen has insisted that, as part of the ongoing trademark cancellation petition that SFLC filed against my employer, Software Freedom Conservancy, in the TTAB, he personally be allowed both to be present at and to actually take the depositions3 of me and my colleague, Karen Sandler.

    This kind of behavior is typical of how abusers use litigation to perpetuate their abuse. The USA legal system is designed to give everyone “their day in Court”. Frankly, many of the rules established for Court proceedings did not contemplate that the process could be manipulated by abusers, and it remains an open problem how to repair the rules so that they preserve the egalitarian nature of our legal system while not making it easy for abusers to misuse those same rules. Depositions, in particular, are a key tool in abusers' arsenals. Depositions allow Plaintiffs (in the TTAB, BTW, the Plaintiff is called “the Petitioner”) to gather evidence. Generally speaking, most Courts have no good default rules to prevent abusers from using these depositions to get themselves in the room with their victims and harass those victims further with off-topic haranguing. The only method (which is quite clunky as a legal tool) to curtail the harassment somewhat is called a protective order. However, Moglen has been smart enough to use the very process of the protective order application to further perpetuate abusive behavior.

    To understand all this in context, I ask that you first read Conservancy's public response to the initial filing of the trademark cancellation proceeding (six years ago). In short, SFLC is seeking to “cancel” the trademark on the name “Software Freedom Conservancy”. Ostensibly, that's all this case is (or, rather, should be) about.

    The problem is that, upon reading the docket in detail, it's easily seen that at nearly every step, Moglen has attempted to use the proceeding as a method to harass and attack me and my colleague, Karen Sandler — regarding issues wholly unrelated to the trademarks. The recent arguments have been about our depositions4 — mine and Karen's2.

    After some complex legal back-and-forth, Judge Elgin ordered that I was legally required to sit for a deposition with and by Moglen. This is the point where a catch-22 began for me.

    • Option 0: Sit in a room for 8+ hours with a person who had spent years verbally abusing me and let him ask me any question he wants5 — under penalty of perjury and contempt of Court if I refuse.
    • Option 1: Give Conservancy's lawyers permission to talk openly, in public documents, about the details of the abuse I suffered from Moglen and the psychological harm that it caused me (which is the necessary backup document for a protective order motion).
    IOW, the only way to get a protective order that would prevent me from being legally required to suffer further psychological abuse from Moglen was to publicly talk about the past abuse 😩. I reluctantly chose Option 1. I encourage you to read in full my first sworn testimony on the issue. That document explains many examples of the psychological abuse I suffered from Moglen — both as an employee at SFLC and since.

    Fortunately, that aforementioned sworn testimony was sufficient to convince Judge Elgin to at least entertain reconsidering her decision that I have to sit8 for a deposition with Moglen. However, submitting the official motion then required that I give even more information about why the deposition with Moglen will be psychologically harmful. In particular, I had little choice but to add a letter from my (highly qualified) mental health provider speaking to the psychological dangers that I would face if deposed by Moglen personally and/or in his presence. I reluctantly asked my therapist to provide such a letter. It was really tough for me to publicly identify who my therapist is, but it was, again, my best option out of that catch-22. I admittedly didn't anticipate that Moglen might use this knowledge as a method to further his abuse against me publicly in his response filing.

    As can be seen in Moglen's response filing, Moglen directly attacks my therapist's credentials — claiming she is neither credible nor qualified. Moglen's argument is that because my therapist is a licensed, AASECT-certified sex therapist, she is not qualified to diagnose PTSD. Of course, Moglen's argument is without merit: my therapist's sex therapy credentials are in addition to her many other credentials and certifications — all of which are explained on her website, which Moglen admits in his filing he has reviewed.

    As I mentioned, at one time, I foolishly and erroneously considered Moglen a good friend. As such, I told Moglen a lot about my personal life, including that I was omni/bisexual, and that I was (at the time) closeted. So, Moglen already knows full well the reason that I would select a therapist who held among her credentials a certification to give therapy relating to sexuality. Moglen's filing is, in my view, a veiled threat to me that he's going to disclose publicly what he knows about my sexuality as part of this proceeding. So, I've decided — after much thought — that I should simply disarm him on this and say it first: I have identified as bisexual/omnisexual6 since 1993, but I have never been “out” in my professional community — until now. Moglen knows full well (because I told him on more than one occasion) that I struggled with whether or not to come out for decades. Thus, I chose a therapist who was both qualified to give treatment for PTSD as well as for sexual orientation challenges because I've lived much of my life with internalized shame about my sexual orientation. (I was (and still am, a bit) afraid that it would hurt my career opportunities in the FOSS community and technology generally if I came out; more on that below.) I was still working through these issues with my therapist when all these recent events occurred.

    Despite the serious psychological abuse I've suffered from Moglen, until this recent filing, I wouldn't have imagined that Moglen would attempt to use the secrecy about my LGBTQIA+ status as a way to further terrorize me. All I can think to say to Moglen in response is to quote what Joe Welch said to Senator Joe McCarthy on 1954-06-09: “Have you no sense of decency, sir — at long last? Have you left no sense of decency?”.

    It's hard to express coherently the difficult realization of the stark political reality of our world. There are people you might meet (and/or work for) who, if they have a policy disagreement8 with you later, will use every single fact about you to their advantage to prevail in that disagreement. There is truly no reason that Moglen needed to draw attention to the fact that I see a therapist who specializes (in part) in issues with sexuality. That he goes on to further claim that her holding such a certification makes her unqualified to treat my other mental health conditions — some of which Moglen himself (in part) personally caused — is unconscionable. I expect that even most of my worst political rivals who work for proprietary software companies and violate copyleft licenses on a daily basis would not stoop as low as Moglen has in this situation.

    At this point, I really have no choice but to come out as omnisexual7 — even though I wasn't really ready to do so. Moglen has insisted that, now that my therapy has been brought up in the proceeding, he has a legal right to force me to be evaluated by a therapist of his choosing (as if I were a criminal defendant). Moglen has also indicated that, during my deposition, he will interrogate me about my therapy and my reasons for choosing this particular therapist (see, for example, footnote 2 on page 11 (PDF-Page 27) of Moglen's declaration in support of the motion). Now, even if the judge grants Conservancy's motion to exclude Moglen from my deposition, Moglen will instruct his attorneys to ask me those questions about my therapy and my sexual orientation — with the obvious goal of seeking to embarrass me by forcing me to reveal such things publicly. Like those folks who sat before McCarthy in those hearings, I know that none of my secrets will survive Moglen's deposition. By outing myself here first, I am, at least, disarming Moglen from attempting to use my shame about my sexual orientation against me.

    Regarding LGBTQIA+ Acceptance and FOSS

    I would like to leave Moglen and his abusive behavior there, and spend the rest of this post talking about related issues of much greater importance. First, I want to explain why it was so difficult for me to come out in my professional community. Being somewhat older than most folks in FOSS today, I really need to paint the picture of the USA when my career in technology and FOSS got started. I was in my sophomore year of my Computer Science undergraduate program when Clinton implemented the Don't ask, Don't tell (DADT) policy for the USA military. Now, as a pacifist, I had no desire to join the military, but the DADT approach was widely accepted in all areas of life. The whole sarcastic “Not that there's anything wrong with that …” attitude (made famous contemporaneously to DADT on an episode of the TV show, Seinfeld) made it clear in the culture that the world, including those who ostensibly supported LGBTQIA+ rights, wanted queer folks to remain, at best, “quiet and proud”, not “loud and proud”. As a clincher, note that three years after DADT was put into effect, overwhelming bipartisan support came forward for the so-called “Defense of Marriage Act (DOMA)”. In 1996, an overwhelming majority of Congress — along with the President (regardless of party affiliation) — was anti-LGBTQIA+. Folks who supported and voted yes for DOMA included: Earl Blumenauer (still a member of Congress from my current state), Joe Biden (now POTUS (!)), Barbara Mikulski (a senator until 2017 from my home state), and Chuck Schumer (still Senate majority leader today). DADT didn't end until 2011, and while SCOTUS ruled parts of DOMA unconstitutional in 2013, Congress didn't actually repeal DOMA until last year! Hopefully, that gives a clear sense of what the climate for LGBTQIA+ folks was like in the 1990s, and why I was terrified to be outed — even as the 1990s became the 2000s.

    I also admit that my own shame about my sexual orientation grew as I got older and began my professional career. I “pass” as straight — particularly in our heteronormative culture that auto-casts everyone as cishet until proven otherwise. It was just easier to not bring it up. Why bother, I thought? It was off-topic (so I felt), and there were plenty of people around the tech world in the 1990s and early 2000s who were not particularly LGBTQIA+-friendly, or who feigned that they were but were still “weird” about it.

    I do think tech in general and FOSS in particular are much more LGBTQIA+-friendly than they once were. However, there has been a huge anti-LGBTQIA+ backlash in certain areas of the USA in recent years, so even as I became more comfortable with the idea of being “out”, I also felt (and do feel) that the world has recently gotten a lot more dangerous for LGBTQIA+ folks. Folks like Moglen who wage “total war” against their political opponents know this, and it is precisely why they try to cast phrases like bisexual, gay, queer, and “sex therapist” as salacious.

    Also, PTSD has this way of making you believe you're vulnerable in every situation. When you're suffering from the worst of PTSD's symptoms, you believe that you can never be safe anywhere — ever again. But, logically I know that I'm safe being a queer person (at least in the small FOSS world) — for two big reasons. First, the FOSS community of today is (in most cases) very welcoming to LGBTQIA+ folks and most of the cishet folks in FOSS identify as LGBTQIA+ allies. Second, I sheepishly admit that as I've reached my 0x32'nd year of life this year, I have a 20+ year credentialed career that has left me in a position of authority and privilege as a FOSS leader. I gain inherent safety from my position of power in the community to just be who I am.

    While this is absolutely not the manner and time in which I wanted to come out, I'll try to make some proverbial lemonade out of the lemons. Now that I am out as LGBTQIA+ and already a FOSS leader, I'd like to invite anyone who is new to FOSS and fearful or worried about LGBTQIA+ issues in FOSS to contact me if they think I can help. I can't promise to write back to everyone, but I will do my very best to try to either help or route you to someone else in FOSS who might be able to.

    Also, I want to state something in direct contrast to Moglen's claim that a therapist who is qualified to treat people with issues related to sexual orientation is ipso facto unqualified to treat any other mental health condition. I want to share publicly how valuable it has been for me to find a therapist who “gets it” with regard to living queer in the world while also suffering from other conditions (such as PTSD). So many LGBTQIA+ youth are bullied due to their orientation, and sustained bullying commonly causes PTSD. I think we should all be so lucky to have a mental health provider, as I do, who is extensively qualified to treat the whole person and not just a single condition or issue. We should stand against people like Moglen who, upon seeing that someone's therapist specializes in helping people with their sexual orientation, would use that fact as a way to shame both the individual and the therapist. Doing that is wrong, and people who do that are failing to create safe spaces for the LGBTQIA+ community.

    I am aghast that Moglen is trying to shame me for seeking help from a mental health provider who could help me overcome my internalized shame regarding my sexual orientation. I also want people to know that I did not feel safe as a queer person when I worked for Eben Moglen at SFLC. But I also know Moglen doesn't represent what our FOSS community and software freedom are about. I felt I needed to make this post not only to disarm the power Moglen held to “out me” before I was ready, but also to warn others that, in my opinion, the Software Freedom Law Center (SFLC), as an organization, is not a safe space for LGBTQIA+ folks. Finally, I do know that Moglen is also a tenured professor at Columbia Law School. I have so often worried about his students — who may, as I did, erroneously believe they can trust Moglen with private information as important as their LGBTQIA+ status. I simply felt I couldn't, in good conscience, stay silent about my experiences any longer.


    0, 4 A deposition is a form of testimony done during litigation before trial begins. Each party in a legal dispute can subpoena witnesses. Rules vary from venue to venue, but typically, a deposition is taken for eight hours, and opposing attorneys can ask as many questions as they want — including leading questions.

    5 In most depositions, there is a time limit, but the scope of what questions can be asked is not bounded. Somewhat strangely, one's own lawyer is not usually permitted to object on grounds of relevancy to the case, so the questions can be as off-topic as the opposing counsel wants.

    3, 8 The opposing attorney who asks the question is said to be “taking the deposition”. The witness is said to be “sitting for a deposition”. (IIUC, these are terms of art in litigation).

    1, 6, 7 From 1993-2018, I identified as “bisexual”. That term, unfortunately, is, in my opinion, not friendly to non-binary people, since the “bi” part (at least to me, I know others disagree) assumes binary gender. The more common term used today is “pansexual”, but, personally I prefer the term “omnisexual” to “pansexual” for reasons that are beyond the scope of this particular post. I am, however, not offended if you use any of the three terms to refer to my sexual orientation.

    2 Note, BTW: when you read the docket, Judge Elgin (about 75% of the time) calls Karen by the name “Ms. Bradley” (using my first name as if it were Karen's surname). It's a bit confusing, so watch for it while you're reading.

    8 Footnote added 2023-10-12, 19:00 US/Eastern: Since I posted this about 30 hours ago, I've gotten so many statements of support emailed to me that I can't possibly respond to them all, but I'll try. Meanwhile, a few people have hinted at and/or outright asked what policy disagreements Moglen actually has with me. I was reluctant to answer because the point I'm making in this post is that even if Moglen thought every last thing I've ever done in my career was harmful policy-wise, it still would not justify these abusive behaviors. Nevertheless, I admit that if this post were made by someone else, I'd be curious about what the policy disagreements were, so I decided to answer the question. I think that my overarching policy disagreement with Eben Moglen is with regard to how and when to engage in enforcement of the GPL and other copyleft licenses through litigation. I think Moglen explains this policy disagreement best in his talk that the Linux Foundation contemporaneously promoted (and continues to regularly reference) entitled “Whither (Not Wither) Copyleft”. In this talk, Moglen states that I (among others) am “on a jihad for free software” (his words, direct quote) because we continued to pursue GPL enforcement through litigation. While I agree that litigation should still remain the last resort, I do think it often remains a necessary step. Moglen argues that even though litigation was needed in the past, it should never be used again for copyleft and GPL enforcement. As Moglen outlines in his talk, he supports the concept of “spontaneous compliance” — a system whereby there is no regulatory regime and firms simply choose to follow the rules of copyleft because it's so obviously in their own best interest. I've not seen this approach work in practice, which is why I think we must still sometimes file GPL (and LGPL) lawsuits — even today. Moglen and I have plenty of other smaller policy disagreements: from appropriate copyright assignment structures for FOSS, to finer points of how GPLv3 should have been drafted, to tactics and strategy with regard to copyleft advocacy, to how non-profits and charities should be structured for the betterment of FOSS. However, I suspect all these smaller policy disagreements stem from our fundamental policy disagreement about GPL enforcement. I conclude by (a) saying again that no policy disagreement with anyone justifies abusive behavior toward that person — not ever, and (b) asking you to note the irony that, in that 2016-11-02 speech, Moglen took the position that lawsuits should no longer be used to settle disputes in FOSS, and yet — less than 10 months later — Moglen sued Conservancy (his former client) in the TTAB.

    Posted on Wednesday 11 October 2023 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2022

March

  • 2022-03-30: An Erroneous Preliminary Injunction Granted in Neo4j v. PureThink

    [ A version of this article was also posted on Software Freedom Conservancy's blog. ]

    Bad Early Court Decision for AGPLv3 Has Not Yet Been Appealed

    We at Software Freedom Conservancy proudly and vigilantly watch out for your rights under copyleft licenses such as the Affero GPLv3. Toward this goal, we have studied the ongoing Neo4j, Inc. v. PureThink, LLC case in the Northern District of California, and the preliminary injunction appeal decision in the Ninth Circuit Court this month. The case is complicated, and we've seen much understandable confusion in the public discourse about the status of the case and the impact of the Ninth Circuit's decision to continue the trial court's preliminary injunction while the case continues. While it's true that part of the summary judgment decision in the lower court bodes badly for an important provision in AGPLv3§7¶4, the good news is that the case is not over, nor was the appeal (decided this month) even an actual appeal of the decision itself! This lawsuit is far from completion.

    A Brief Summary of the Case So Far

    The primary case in question is a dispute between Neo4j, a proprietary relicensing company, and a very small company called PureThink, run by an individual named John Mark Suhy. Studying the docket of the case, and a relevant related case, and other available public materials, we've come to understand some basic facts and events. To paraphrase LeVar Burton, we encourage all our readers to not take our word (or anyone else's) for it, but instead take the time to read the dockets and come to your own conclusions.

    After canceling their formal, contractual partnership with Suhy, Neo4j alleged multiple claims in court against Suhy and his companies. Most of these claims centered around trademark rights regarding “Neo4j” and related marks. However, the claims central to our concern relate to a dispute between Suhy and Neo4j regarding Suhy's clarification in downstream licensing of the Enterprise version that Neo4j distributed.

    Specifically, Neo4j attempted to license the codebase under something they (later, in their Court filings) dubbed the “Neo4j Sweden Software License” — which consists of a LICENSE.txt file containing the entire text of the Affero General Public License, version 3 (“AGPLv3”) (a license that I helped write), and the so-called “Commons Clause” — a toxic proprietary license. Neo4j admits that this license mash-up (if legitimate, which we at Software Freedom Conservancy and Suhy both dispute) is not an “open source license”.

    There are many complex issues of trademark and breach of other contracts in this case; we agree that there are lots of interesting issues there. However, we focus on the matter of most interest to us and many FOSS activists: Suhy's permission to remove the “Commons Clause”. Neo4j accuses Suhy of improperly removing the “Commons Clause” from the codebase (and subsequently redistributing the software under pure AGPLv3) in paragraph 77 of their third amended complaint. (Note that Suhy denied these allegations in court — asserting that his removal of the “Commons Clause” was legitimate and permitted.)

    Neo4j filed for summary judgment on all the issues, and throughout their summary judgment motion, Neo4j argued that the removal of the “Commons Clause” from the license information in the repository (and/or Suhy's suggestions to others that removal of the “Commons Clause” was legitimate) constituted behavior that the Court should enjoin or otherwise prohibit. The Court partially granted Neo4j's motion for summary judgment. Much of that ruling is not particularly related to FOSS licensing questions, but the section regarding licensing deeply concerns us. Specifically, to support the Court's order that temporarily prevents Suhy and others from saying that the Neo4j Enterprise edition that was released under the so-called “Neo4j Sweden Software License” is a “free and open source” version and/or alternative to proprietary-licensed Neo4j EE, the Court held that removal of the “Commons Clause” was not permitted. (BTW, the court confuses “commercial” and “proprietary” in that section — it seems they do not understand that FOSS can be commercial as well.)

    In this instance, we're not as concerned with the names used for the software as with the copyleft licensing question — because it's the software's license, not its name, that either assures or denies users the ability to exercise their fundamental software rights. Notwithstanding our disinterest in the naming issue, we'd all likely agree that — if “AGPLv3 WITH Commons-Clause” were a legitimate form of licensing — such a license is not FOSS. The primary issue, therefore, is not about whether or not this software is FOSS, but whether or not the “Commons Clause” can be legitimately removed by downstream licensees when presented with a license of “AGPLv3 WITH Commons-Clause”. We believe the Court held incorrectly by concluding that Suhy was not permitted to remove the “Commons Clause”. Their order that enjoins Suhy from calling the resulting code “FOSS” — even if it's a decision that bolsters a minor goal of some activists — is problematic because the underlying holding (if later upheld on appeal) could seriously harm FOSS and copyleft.

    The Confusion About the Appeal

    Because this was an incomplete summary judgment and the case is ongoing, the injunction against Suhy making such statements is a preliminary injunction, and cannot be made permanent until the case actually completes in the trial court. The decision by the Ninth Circuit appeals court regarding this preliminary injunction has been widely reported by others as an “appeal decision” on the issue of what can be called “open source”. However, this is not an appeal of the entire summary judgment decision, and certainly not an appeal of the entire case (which cannot even be appealed until the case completes). The Ninth Circuit decision merely affirms that Suhy remains under the preliminary injunction (which prohibits him and his companies from taking certain actions and saying certain things publicly) while the case continues. In fact, the standard that an appeals Court uses when considering an appeal of a preliminary injunction differs from the standard for ordinary appeals. Generally speaking, appeals Courts are highly deferential to trial courts regarding preliminary injunctions, and appeals of actual decisions have a much more stringent standard.

    The Affero GPL Right to Restriction Removal

    In their partial summary judgment ruling, the lower Court erred because they rejected an important and (in our opinion) correct counter-argument made by Suhy's attorneys. Specifically, Suhy's attorneys argued that Neo4j's license expressly permitted the removal of the “Commons Clause” from the license. AGPLv3 was, in fact, drafted to permit such removal in this precise fact pattern.

    Specifically, the AGPLv3 itself has the following provisions (found in AGPLv3§0 and AGPLv3§7¶4):

    • “This License” refers to version 3 of the GNU Affero General Public License.
    • “The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”.
    • If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term.

    That last term was added to address a real-world, known problem with GPLv2. Frequently throughout the time when GPLv2 was the current version, original copyright holders and/or licensors would attempt to license work under the GPL with additional restrictions. The problem was rampant and caused much confusion among licensees. As an attempted solution, the FSF (the publisher of the various GPL's) loosened its restrictions on reuse of the text of the GPL — in hopes that would provide a route for reuse of some GPL text, while also avoiding confusion for licensees. Sadly, many licensors continued to take the confusing route of using the entire text of a GPL license with an additional restriction — attached either before or after, or both. Their goals were obvious and nefarious: they wanted to confuse the public into “thinking” the software was under the GPL, but in fact restrict certain other activities (such as commercial redistribution). They combined this practice with proprietary relicensing (i.e., a sole licensor selling separate proprietary licenses while releasing a (seemingly FOSS) public version of the code as demoware for marketing). Their goal is to build on the popularity of the GPL, but in direct opposition to the GPL's policy goals; they manipulate the GPL to open-wash bad policies rather than give actual rights to users. This tactic even permitted bad actors to sell “gotcha” proprietary licenses to those who were legitimately confused. For example, a company would look for users who were operating commercially with the code in compliance with GPLv2 but hadn't noticed that the company's code carried the statement: “Licensed GPLv2, but not for commercial use”. The user had seen GPLv2, and knew from its brand reputation that it gave certain rights, but hadn't realized that the additional restriction outside of the GPLv2's text might actually be valid. The goal was to catch users in a sneaky trap.

    Neo4j tried to use the AGPLv3 to set one of those traps. Neo4j, despite the permission in the FSF's GPL FAQ to “use the GPL terms (possibly modified) in another license provided that you call your license by another name and do not include the GPL preamble”, left the entire AGPLv3 intact as the license of the software — adding only a note at the front and at the end. However, their users can escape the trap, because GPLv3 (and AGPLv3) added a clause (which doesn't exist in GPLv2) to defend users from this. Specifically, AGPLv3§7¶4 includes a key provision to help this situation.

    Specifically, the clause was designed to give more rights to downstream recipients when bad actors attempt this nasty trick. Indeed, I recall from my direct participation in the A/GPLv3 drafting that this provision was specifically designed for the situation where the original, sole copyright holder/licensor0 added additional restrictions. And, I'm not the only one who recalls this. Richard Fontana (now a lawyer at IBM's Red Hat, but previously legal counsel to the FSF during the GPLv3 process) wrote on a mailing list1 in response to the Neo4j preliminary injunction ruling:

    For those who care about anecdotal drafting history … the whole point of the section 7 clause (“If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term.”) was to address the well known problem of an original GPL licensor tacking on non-GPL, non-FOSS, GPL-norm-violating restrictions, precisely like the use of the Commons Clause with the GPL. Around the time that this clause was added to the GPLv3 draft, there had been some recent examples of this phenomenon that had been picked up in the tech press.

    Fontana also pointed us to the FSF's own words on the subject, written during their process of drafting this section of the license (emphasis ours):

    Unlike additional permissions, additional requirements that are allowed under subsection 7b may not be removed. The revised section 7 makes clear that this condition does not apply to any other additional requirements, however, which are removable just like additional permissions. Here we are particularly concerned about the practice of program authors who purport to license their works under the GPL with an additional requirement that contradicts the terms of the GPL, such as a prohibition on commercial use. Such terms can make the program non-free, and thus contradict the basic purpose of the GNU GPL; but even when the conditions are not fundamentally unethical, adding them in this way invariably makes the rights and obligations of licensees uncertain.

    While the intent of the original drafter of a license text is not dispositive over the text as it actually appears in the license, all this information was available to Neo4j as they drafted their license. Many voices in the community had told them that the provision in AGPLv3§7¶4 was added specifically to prevent what Neo4j was trying to do. The FSF, the copyright holder of the actual text of the AGPLv3, also publicly gave Neo4j permission to draft a new license, using any provisions they like from AGPLv3 and putting them together in a new way. But Neo4j made a conscious choice to not do that, and instead constructed their license in the exact manner that allowed Suhy's removal of the “Commons Clause”.

    In addition, that provision in AGPLv3§7¶4 has little meaning if it's not intended to bind the original licensor! Many other provisions (such as AGPLv3§10¶3) protect the users against further restrictions imposed later in the distribution chain of licensees. This clause was targeted from its inception against the exact, specific bad behavior that Neo4j engaged in here.

    We don't dispute that copyright and contract law give Neo4j authority to license their work under any terms they wish — including terms that we consider unethical or immoral. In fact, we already pointed out above that Neo4j had permission to pick and choose only some text from AGPLv3. As long as they didn't use the name “Affero”, “GNU” or “General Public” or include any of the Preamble text in the name/body of their license — we'd readily agree that Neo4j could have put together a bunch of provisions from the AGPLv3, and/or the “Commons Clause”, and/or any other license that suited their fancy. They could have made an entirely new license. Lawyers commonly do share text of licenses and contracts to jump-start writing new ones. That's a practice we generally support (since it's sharing a true commons of ideas freely — even if the resulting license might not be FOSS).

    But Neo4j consciously chose not to do that. Instead, they license their software “subject to the terms of the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, with the Commons Clause”. (The name “Neo4j Sweden Software License” only exists in the later Court papers, BTW, not with “The Program” in question.) Neo4j defines “This License” to mean “version 3 of the GNU Affero General Public License.”. Then, Neo4j tells all licensees that “If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term”. Yet, after all that, Neo4j had the audacity to claim to the Court that they didn't actually mean that last sentence, and the Court rubber-stamped that view.

    Simply put, the Court erred when it said: “Neither of the two provisions in the form AGPLv3 that Defendants point to give licensees the right to remove the information at issue.”. The Court then used that error as a basis for its ruling to temporarily enjoin Suhy from stating that software with “Commons Clause” removed by downstream is “free and open source”, or tell others that he disagrees with the Court's (temporary) conclusion about removing the “Commons Clause” in this situation.

    What Next?

    The case isn't over. The lower Court still has various issues to consider — including a DMCA claim regarding Suhy's removal of the “Commons Clause”. We suspect that's why the Court only made a preliminary injunction against Suhy's words, and did not issue an injunction against the actual removal of the clause! The issue as to whether the clause can be removed is still pending, and the current summary judgment decision doesn't address the DMCA claim from Neo4j's complaint.

    Sadly, the Court has temporarily enjoined Suhy from “representing that Neo4j Sweden AB’s addition of the Commons Clause to the license governing Neo4j Enterprise Edition violated the terms of AGPL or that removal of the Commons Clause is lawful, and similar statements”. But they haven't enjoined us, and our view on the matter is as follows:

    Clearly, Neo4j gave explicit permission, pursuant to the AGPLv3, for anyone who would like to do so to remove the “Commons Clause” from their LICENSE.txt file in version 3.4 and other versions of their Enterprise edition where it appears. We believe that you have full permission, pursuant to AGPLv3, to distribute that software under the terms of the AGPLv3 as written. In saying that, we also point out that we're not a law firm, our lawyers are not your lawyers, and this is not legal advice. However, after our decades of work in copyleft licensing, we know well the reason and motivations of this policy in the license (described above), and given the error by the Court, it's our civic duty to inform the public that the licensing conclusions (upon which they based their temporary injunction) are incorrect.

    Meanwhile, despite what you may have read last week, the key software licensing issues in this case have not been decided — even by the lower Court. For example, the DMCA issue is still before the trial court. Furthermore, if you do read the docket of this case, it will be obvious that neither party is perfect. We have not analyzed every action Suhy took, nor do we have any comment on any action by Suhy other than this: we believe that Suhy's removal of the “Commons Clause” was fully permitted by the terms of the AGPLv3, and that Neo4j gave him that permission in that license. Suhy also did a great service to the community by taking action that obviously risked litigation against him. Misappropriation and manipulation of the strongest and most freedom-protecting copyleft license ever written to bolster a proprietary relicensing business model is an affront to FOSS and its advancement. It's even worse when the Courts are on the side of the bad actor. Neo4j should not have done this.

    Finally, we note that the Court was rather narrow in what it said regarding the question of “What Is Open Source?”. The Court ruled that one individual and his companies — who, when presented with ambiguous licensing information in one part of a document, found that another part of the document grants permission to repair and clarify the licensing information, and did so — are temporarily forbidden from telling others that the resulting software is, in fact, FOSS after making such a change. The ruling does not set precedent, nor does it bind anyone other than the Defendants as to what they can or cannot say is FOSS. That is why we can say it is FOSS: the AGPLv3 is an OSI-approved license, and the AGPLv3 permits removal of the toxic “Commons Clause” in this situation.

    We will continue to follow this case and write further when new events occur.


    0 We were unable to find anywhere in the Court record that shows Neo4j used a Contributor Licensing Agreement (CLA) or Copyright Assignment Agreement (©AA) that sufficiently gave them exclusive rights as licensor of this software. We did however find evidence online that Neo4j accepted contributions from others. If Neo4j is, in fact, also a licensor of others' AGPLv3'd derivative works that have been incorporated into their upstream versions, then there are many other arguments (in addition to the one presented herein) that would permit removal of the “Commons Clause”. This issue remains an open question of fact in this case.

    1 Fontana made these statements on a mailing list governed by an odd confidentiality rule called CHR (which was originally designed for in-person meetings with a beginning and an end, not a mailing list). Nevertheless, Fontana explicitly waived CHR (in writing) to allow me to quote his words publicly.

    Posted on Wednesday 30 March 2022 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2020

July

  • 2020-07-09: Organizational Proliferation Is Not the Problem You Think It Is

    [ This blog post was cross-posted from the blog at Software Freedom Conservancy where I work. ]

    I've been concerned this week about the aggressive negative reaction (by some) to the formation of an additional organization to serve the Free and Open Source Software (FOSS) community. Thus it seems like a good moment to remind everyone why we all benefit when we welcome newcomer organizations in FOSS.

    I've been involved in helping found many different organizations — in roles as varied as co-founder, founding Board member, consultant, spin-off partner, and “just a friend giving advice”. Most of these organizations fill a variety of roles; they support, house, fiscally sponsor, or handle legal issues and/or trademark, copyright, or patent matters for FOSS projects. I and my colleagues at Conservancy speak regularly about why we believe a 501(c)(3) charitable structure in the USA has huge advantages, and you can find plenty of blog posts on our site about that. But you can also find us talking about how 501(c)(6) structures, and other structures outside the USA entirely, are often the right choices — depending on what a FOSS project seeks from its organization. Conservancy also makes our policies, agreements, and processes fully public so that organizations can reuse our work, and many have.

    Meanwhile, FOSS organizations must avoid the classic “not invented here” anti-pattern. Of course I believe that Conservancy has great ideas for how to help FOSS, and that our work — such as fiscal sponsorship, GPL enforcement work, and the Outreachy internship program — represents the highest priorities in FOSS. I also believe the projects we take under our auspices are the most important projects in FOSS today.

    But not everyone agrees with me, nor should they. Our Executive Director, Karen Sandler, loves the aphorism “let a thousand flowers bloom”. For example, when we learned of the launch of Open Collective, we at Conservancy were understandably concerned that, since they were primarily a 501(c)(6) and didn't follow the kinds of fiscal sponsorship models and rules that we preferred, they were somehow a “threat” to Conservancy. But that reaction is one of fear, selfishness, and insecurity. Once we analyzed what the Open Collective folks were up to, we realized that they were an excellent option for a lot of the projects that were simply not a good fit for Conservancy and our model. Conservancy is deeply steeped in a long-term focus on software freedom for the general public, and some projects — particularly those that are primarily in service to companies rather than individual users (or who don't want the oversight a charity requires) — just don't belong with us. We regularly refer projects to Open Collective.

    For many larger projects, Linux Foundation — as a 501(c)(6) controlled completely by large technology companies — is also a great option. We've often referred Conservancy applicants there, too. We do that even while we criticize Linux Foundation for choosing proprietary software for many tasks, including proprietary software they write from scratch for their outward-facing project services.

    Of course, I'm thinking about all this today because Conservancy has been asked what we think about the Open Usage Commons. The fact is they're just getting started, and neither the legal details of how they're handling trademarks nor their governance documents have been released yet. We should all give them an opportunity to slowly publish more and review it when it comes along. We should judge them fairly as an alternative for fulfilling FOSS project needs that no one else addresses (or, more commonly, that are being addressed very differently by existing organizations). I'm going to hypothesize that, like Linux Foundation, Open Usage Commons will primarily be of interest to more for-profit-company focused projects, but that's my own speculation; none of us know yet.

    No one is denying that Open Usage Commons is tied to Google as part of their founding — in the same way that the Linux Foundation (originally founded as the “Open Source Development Labs”) was closely tied to IBM at its founding. As near as I can tell, IBM's influence over Linux Foundation is these days no more than that of any of its other Platinum Members. It's not uncommon for a trade association to jumpstart with a key corporate member and eventually grow to be governed by a wider group of companies. But while appropriately run trade associations do balance the needs of all for-profit companies in their industry, they are decidedly not neutral; they are chartered to favor business needs over the needs of the general public. I encourage skepticism when you hear an organization claim “neutrality”. Since a trade association is narrowed to serving businesses, it can be neutral among the interests of businesses, but its mandate remains putting business needs above community needs. The ultimate proof of the neutrality pudding is in the eating. As with GPL'd projects that have many copyright holders, we can trust the equal rights of all in those projects — regardless of the corporate form of the contributors — because the document of legal rights makes it so. The same principle applies to any area of FOSS endeavor: examine the agreements and written rules for contributors and users to test neutrality.

    Finally, there are plenty of issues where software freedom activists should criticize Google. Just today, I was sent a Google Docs link for a non-FOSS volunteer thing I'm doing, and I groaned knowing that I'd have to install a bunch of proprietary Javascript just to be able to participate. Often, software freedom activists assume that bad actions by an entity mean that all of its actions are de facto problematic. But we must judge each policy move on its own merits to avoid pointless partisanship.

    Posted on Thursday 09 July 2020 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

January

  • 2020-01-06: Toward Copyleft Equality for All

    [ This blog post was also crossposted to my blog at Software Freedom Conservancy. I hope you will donate now before the challenge match period ends so that you can support work like this that I'm doing at my day job. ]

    I would not have imagined even two years ago that expansion of copyleft would become such an issue of interest in software freedom licensing. Historically, and for good reason, the addition of new forms of copyleft clauses has moved at a steady pace. The early 2000s brought network services clauses (such as that in the Affero GPL), which hinged primarily on requiring provision of source to network-remote users. The Affero GPL implemented this via copyright-controlled permission of modification. These licenses began as experiments, and were not approved by some license certification authorities until many years later.

    Even with the copyleft community's careful and considered growth, there have been surprising unintended consequences of copyleft licenses. The specific outcome of proprietary relicensing has spread widely and — for stronger copyleft licenses like Affero GPL — has become the more common usage of the license.

    As the popularity of Open Source has grown, companies have searched for methods to combine traditional proprietary licensing business models with FOSS offerings. Proprietary relicensing, originally pioneered by MySQL AB (now part of Oracle by way of Sun), uses software freedom licenses to compel purchase of proprietary licenses for the same codebase. Companies accomplish this by ensuring they collect all copyright control of a particular codebase, thus being its sole licensor, and offer the FOSS licenses as a loss-leader (often zero-cost) product. Non-commercial users generally are ignored, and commercial users often operate in fear of captious interpretations of the copyleft license. The remedy for their fear is a purchase of a separate proprietary license for the same codebase from the provider. Proprietary relicensing seems to have been the first mixed FOSS/proprietary business model in history.

    The toxicity of this business model has only become apparent in hindsight. Initially, companies engaging in this business model did so somewhat benignly — often offering proprietary licenses only to customers who sought to combine the product with other proprietary software, or as supplemental income along with other consulting businesses. This business model (for some codebases), however, became so lucrative that some companies eventually focused exclusively on it. As a result, aggressive copyleft license overreading and inappropriate, unprincipled enforcement typically came from such companies. For most, the business model likely reached its crescendo when MongoDB began using the Affero GPL for this purpose. I was personally told by large companies at the time (late 2000s into early 2010s) that they'd listed Affero GPL as “Never Allowed Here” specifically because of shake-downs from MongoDB.

    Copyleft itself is not a moral philosophy; rather, copyleft is a strategy that software freedom activists constructed to advance a particular set of policy goals. Specifically, software copyleft was designed to ensure that all users received complete, corresponding source for all binaries, and that any modifications or improvements made anywhere in the chain of custody of the software were available in source form to downstream users. As originally postulated, copyleft was a simple strategy to disarm proprietarization as an anti-software-freedom tactic.

    The Corruption of Copyleft

    Copyleft is a tool to achieve software freedom. Any tool can be fashioned into a weapon when wielded the wrong way. That's precisely what occurred with copyleft — and it happened early in copyleft's history, too. Before even the release of GPLv2, Aladdin Ghostscript used copyleft in a proprietary relicensing model (which is sometimes confusingly called the “dual licensing” model). This business model initially presented as benign to software freedom activists; leaders declared the business model “barely legitimate” when it rose to popularity through MySQL AB's (later Sun's, and later Oracle's) proprietary relicensing of the MySQL codebase.

    In theory, proprietary relicensors would only offer the proprietary license by popular demand to those who had some specific reason for wanting to proprietarize the codebase — a process that has been called “selling exceptions”. In practice, however, every company I'm aware of that sought to engage in “selling exceptions” eventually found a more aggressive and lucrative tack.

    This problem became clear to me in mid-2003 when MySQL AB attempted to hire me as a consultant. I was financially in need of supplementary income, so I seriously considered taking the work, but the initial conference call felt surreal and convinced me that MySQL AB was engaging in problematic behavior. Specifically, their goal was to develop scare tactics regarding the GPLv2. I never followed up, and I am glad I never made the error of accepting any job or consulting gig when companies (not just MySQL AB, but also Black Duck and others) attempted to recruit me to serve as part of their fear-tactics marketing departments.

    Most proprietary relicensing businesses work as follows: a single codebase is produced by a for-profit company, which retains 100% control over all copyright in the software (either via an ©AA or a CLA). That codebase is offered as a gratis product to the marketplace, and the company invests substantial resources in marketing the software to users looking for FOSS solutions. The marketing department then engages in captious and unprincipled copyleft enforcement actions in an effort to “convert” those FOSS users into paying customers for proprietary licensing for the same codebase. (Occasionally, the company also offers additional proprietary add-ons, improvements, or security updates that are not available under the FOSS license — when used this way, the model is often specifically called “Open Core”.)

    Why We Must End The Proprietary Relicensing Exploitation of Copyleft

    This business model has a toxic effect on copyleft at every level. Users don't enjoy their software freedom under an assurance that a large community of contributors and users have all been bound to each other under the same, strong, and freedom-ensuring license. Instead, they dread the vendor finding a minor copyleft violation and blowing it out of proportion. The vendor offers no remedy (such as repairing the violation and promising ongoing compliance) other than purchase of a proprietary license. Industry-wide, I have observed to my chagrin that the copyleft license that I helped create and once loved, the Affero GPL, was seen for a decade as inherently toxic because its most common use was by companies who engaged in these seedy practices. You've probably seen me and other software freedom activists speak out on this issue, in our ongoing efforts to clarify that the intent of the Affero GPL was not to create these sorts of corporate code silos that vendors constructed as copyleft-fueled traps for the unwary. Meanwhile, proprietary relicensing discourages contributions from a broad community, since any contributor must sign a CLA giving special powers to the vendor to continue the business model. Neither users nor co-developers benefit from copyleft protection.

    The Onslaught of Unreasonable Copyleft

    Meanwhile, and somewhat ironically, the success of Conservancy's and the FSF's efforts to counter this messaging about the Affero GPL has created an unintended consequence: efforts to draft even more restrictive software copyleft licenses that can more easily implement the proprietary relicensing business models. We have partially succeeded in convincing users that compliance with the Affero GPL is straightforward, and in the backchannels we've aided users who were under attack from proprietary relicensors like MongoDB. These vendors have responded with a forceful political blow: their own efforts to redefine the future of copyleft, under the guise of advancing software freedom. MongoDB even cast itself as a “victim” of Amazon, because Amazon decided to reimplement their codebase from scratch (as proprietary software!) rather than use the AGPL'd version of MongoDB.

    These efforts began in earnest late last year when (against the advice of the license steward) MongoDB forked the Affero GPL to create the SS Public License. I, with the support of Conservancy, rose in opposition to MongoDB's approach, pointing out that MongoDB would not itself agree to its own license (since MongoDB's CLA would free it from the SS Public License terms). If an entity does not gladly bind itself by its own copyleft license (for example, by accepting third-party contributions to its codebases under that license), we should not treat that entity as a legitimate license steward, nor treat that license as a legitimate FOSS license. We should not and cannot focus single-mindedly on interpretation of formalistic definitions when we recommend FOSS licensing policy. The message of “technically it's a FOSS license, but don't use it” is too complicated to be meaningful.

    A Copyleft Clause To Restore Equality

    My friend and colleague, Richard Fontana, and I are known for our very public and sometimes heated debates on all manner of software freedom policy. We don't always agree on key issues, but I greatly respect Fontana for his careful thought and his inventive solutions. Indeed, Fontana first distilled “inbound=outbound” into that simple phrasing to more easily explain how the lopsided rights and permissions exchanged through CLAs actually create bad FOSS policy like proprietary relicensing. In the copyleft-next project that Fontana began, he further proposed an innovative copyleft clause that could, when incorporated into a copyleft license, prevent proprietary relicensing before it even starts! The clause still needs work, but Fontana's basic idea is revolutionary for copyleft drafting. The essence, in non-legalese, is this: if you offer a license that isn't a copyleft license, the copyleft provisions collapse and the software becomes available to all under a non-copyleft, hyper-permissive FOSS license.

    This solution is ingenious in the way that copyleft itself was an ingenious way to use copyright to “reverse” the rights and ensure software freedom. This provision doesn't prohibit proprietary relicensing per se, but instead simply deflates the power of copyleft control when a copyright holder engages in proprietary relicensing activities.

    Given the near ubiquity of proprietary relicensing and the promulgation of stricter copylefts by companies who seek to engage (or help their clients engage) in such business models, I've come to a stark policy conclusion: the community should reject any new copyleft license without a clause that deflates the power of proprietary relicensing. Not only can we incorporate such a clause into new licenses (such as copyleft-next), but Conservancy's Executive Director, Karen Sandler, came up with a basic approach to incorporating similar copyleft equality clauses into written exceptions for existing copyleft licenses, such as the Affero GPL. I have received authorization to spend some of my Conservancy time and the time of our lawyers on this endeavor, and we hope to publish more about it in the coming months.

    We've finished the experiment. After thirty years of proprietary relicensing, beginning with Aladdin and culminating with MongoDB and their SS Public License, we now know that proprietary relicensing does not serve or extend software freedom, and in most cases has the opposite effect. We must now categorically reject it, and outright reject any new licenses that can be used for it.

    Posted on Monday 06 January 2020 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2019

December

  • 2019-12-31: Donate to Conservancy Before End of 2019!

    Yesterday, I sent out a version of this blog post to Conservancy's donors as a fundraising email. As most people reading this already know, I work (remotely from the west coast) for a 501(c)(3) charity based in NY called Software Freedom Conservancy, which is funded primarily by individuals like you who donate $120/year (or more :). My primary job and career since 1997 has been working for various charities, mostly related to the general cause of software freedom.

    More generally, I have dedicated myself since the late 1990s to software freedom activism. Looking back across these two decades, I believe our movement, focused on software users' rights, faces the most difficult challenges yet. In particular, I believe 2019 was the most challenging year in our community's history.

    Our movement had early success. Most of our primary software development tools remain (for the moment) mostly Free Software. Rarely do new developers face the kinds of challenges that proprietary software originally brought us. In the world today that seemingly embraces Open Source, the problems are more subtle and complex than they once were. Conservancy dedicates its work to addressing those enigmatic problems. That’s why I work here, why I’m glad to support the organization myself, and why I ask you to support it as well.

    Early success was easy for software freedom because the technology industry ignored us at first. Copyleft was initially a successful antidote to the very first Digital Restrictions Management (DRM) — separating the binaries from source code and using copyright restrictions to forbid sharing. When companies attacked software freedom and copyleft in the early 2000s, we were lucky that those attacks backfired. However, today, we must solve the enigma that the technology industry seems to embrace software freedom, but only to a point. Most for-profit companies today ask a key question constantly: “what Open Source technologies can we leverage while keeping an unfair proprietary edge?”. FOSS is accepted in the enterprise but only if it allows companies to proprietarize, particularly in areas that specifically threaten user privacy and autonomy.

    However, I and my colleagues at Conservancy are realists. We know that a charity like us won't ever have the resources to face well-funded companies on their own playing field, and we’d be fools to try. So, we do what Free Software has always done best: we pick work with the greatest potential to maximize software freedom for as many users as we can.

    At Conservancy's founding, Conservancy focused exclusively on providing a charitable home to FOSS projects, so they could focus on software freedom for their users. Through Conservancy, projects make software freedom the project’s top priority rather than an afterthought. In this new environment where (seemingly) every company and trade association has set up a system for organizational homes for projects, Conservancy focuses on projects that make a big impact for the software freedom of individual users.

    Today, Conservancy does much more beyond those basics. Given my early introduction to licensing, I learned early and often that copyleft — our community's primary tool and strategy to assure companies and individuals would always remain equals — was and would always be constantly under attack. I've thus been glad to help Conservancy publish and speak regularly about essential copyleft and FOSS policy. (And, I'm personally working right now on even more writing on the subject of copyleft policy.) I'm particularly proud of Conservancy's work with members of the Linux community to assure the software freedoms guaranteed by copyleft for Linux-based devices. It's a big task, and we’re the only organization with that mission. But, Conservancy is resilient, unrelenting, and dedicated to it.

    If someone had predicted 28 years ago (when I first installed Linux) that, by 2020, Linux would be the most popular operating system on the most popular small devices in the world, but that almost no one would have the basic freedoms assured by copyleft, the thought would have horrified me. Manufacturers have treated Linux device users like the proverbial frogs in slowly boiling water, so what was once a trickle is now an onslaught of non-upgradable, non-modifiable, Linux-based IoT and mobile devices as the norm; we’re even sometimes tricked into believing such infringing usage counts as success for software freedom. I'm glad to help Conservancy support and organize the primary group who continues to demand that the GPL matters and should be upheld for Linux. We shouldn't ignore users; their personal rights, privacy, and control of their own technology are at stake — and copyleft should assure their path to software freedom. That path is now deeply buried in complicated legal and political debris, but I believe that Conservancy will clear that path, and I and my colleagues at Conservancy have a plan for it.

    As we close out 2019, I must admit how tough this year has been for all of us with regard to leadership in the broader software freedom movement. I spent a large part of 2019 deeply involved with the political and social work of moving forward together in the face of the leadership crises and assuring the software freedom movement spans generations diversely. Having lived through this troubled year, I've come to a simple conclusion: we must be loyal to the principles of software freedom, not to individual people. We must build a welcoming community that is friendly to those who are different from us; those folks are most likely to bring us desperately needed new ideas and perspectives. I’m thus proud that Conservancy continues to host the Outreachy initiative, which is the premier internship program that seeks to bring those who have faced specific hardships related to diversity and inclusion into the wonders of FOSS development and leadership.

    We've all had a tough 2019 for many reasons, and I certainly believe it’s the most challenging year I've seen in my many years of software freedom activism. But, I don't shy away from a challenge: I am looking forward to helping Conservancy work tirelessly to lead the way out of difficulty, with new approaches.

    Obviously I'm going to help with my staff time at Conservancy, for which I am (obviously) paid a salary. (As I always joke, my salary has been a matter of public record since 2001; you just have to read the 501(c)(3) Form 990s of the organizations I've worked for.) I am very lucky that I was born into the middle class in a wealthy country. I believe it's important to acknowledge the privilege that comes with advantages we receive due to sheer luck. In recent years, I've focused on how I can use that privilege to help the social justice causes that I care about. In addition to devoting my career to a charity, I also think giving back financially to charity is important. Each year, I usually give my largest charitable donation to the charity where I work, Software Freedom Conservancy.

    It does feel strange to me to give money back to an organization that also pays me a salary. However, I do it because: (a) it's entirely voluntary (thus showing clearly that it isn't merely a run-of-the-mill pay cut :), (b) it helps Conservancy meet our annual match challenge, and (c) I spend some of my time each winter asking everyone I know to also voluntarily give. I hope you'll join me today in becoming (or renewing!) a Conservancy Supporter. I hope you'll set your Supporter contribution at a level higher than the minimum. Usually, computer geeks love to give amounts that are even powers of 2. This year, I suggested that was perhaps a bit hackneyed, so we set our donor challenge around prime numbers instead (the original match amount was $113,093). So, I planned a frugal year so that I could give $1,021 today to Conservancy. I generally planned all year to give “about a thousand” at year's end for the match, but I picked $1,021 specifically because it's the closest prime number to 2¹⁰ (1,024). I think it makes sense to give to charity about $60-100/month, as that's typically the amount that any middle-class person in a wealthy country can afford if they just cut out a few luxuries (e.g., DRM-laden streaming services, or eating at restaurants rather than cooking at home).
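
    (For the curious, here's a tiny, purely illustrative Python sketch of that arithmetic — it's not part of the fundraiser, just a quick check of which prime sits nearest to 2¹⁰:)

        # Illustrative only: find the prime closest to 2**10 = 1024.
        def is_prime(n):
            if n < 2:
                return False
            i = 2
            while i * i <= n:      # trial division is plenty for numbers this small
                if n % i == 0:
                    return False
                i += 1
            return True

        target = 2 ** 10                                                     # 1024
        below = next(n for n in range(target, 1, -1) if is_prime(n))         # 1021
        above = next(n for n in range(target, 2 * target) if is_prime(n))    # 1031
        print(below if target - below <= above - target else above)          # prints 1021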

    So, please join me today in contributing to Conservancy. Most importantly, perhaps: today is the last day to donate for a USA tax deduction in 2019! If you pay taxes in the USA, do take a look at the deduction, because I've found in my fiscal planning that it does make a budgeting difference and means I can give a bit more, knowing that I'll get some of it back from both the federal and state governments.

    Posted on Tuesday 31 December 2019 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

November

  • 2019-11-16: Last Chance to Submit for 2020 FOSS License Policy Events

    I ask that everyone give a thought to proposing a session at one (or both) of two great events on the Open Source and Free Software calendar: the FOSDEM Legal and Policy DevRoom and Copyleft Conf. Both CFPs close tomorrow!

    I've been co-organizing the Legal and Policy DevRoom, along with my colleagues Tom Marble, Richard Fontana, and Karen Sandler, for the last eight years. Copyleft Conf grew out of this event a few years ago because attendees were excited about another event in Brussels, after FOSDEM, with more specific content about copyleft policy and licensing.

    This year, the DevRoom is taking a new, experimental approach: we're looking for proposals for debates. Take a look at the CFP and see if you'd be willing to take a position (pro or con) on some important issue of debate in Free Software, and perhaps submit a proposal to join a debate team.

    Copyleft Conf will be a more traditional conference at an urgent time in copyleft history. This past year, there has been an increasing push by companies and VC-friendly lawyers to redefine the future of copyleft to serve the interests of powerful companies rather than individual users. I hope Copyleft Conf 2020 will be a premier venue to have community-oriented discussion about how copyleft can help users and developers gain more software freedom.

    Posted on Saturday 16 November 2019 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

October

  • 2019-10-15: On the Controversial Events Regarding the Free Software Foundation and Richard M. Stallman

    Update in 2024: You may have linked to this page because it's heavily quoted by a semi-anonymously published website called the “Stallman Report”. While the author of that document tried to contact me shortly before launching their site, I missed that email for a week or two. (I get a lot of email and as those of you who have emailed me know, my autoresponder indicates it's not the best way to contact me …). I got back to the author only after they'd published, asking for a phone or video call with them (in large part to discuss the inaccuracies in their “Stallman Report”). The author never followed up with me on that. While I would welcome rigorous journalistic coverage of these issues discussed herein, I sadly have not seen much of such, and the “Stallman Report” definitely is not written by a journalist nor is it researched with sufficient rigor, in my opinion. I don't think it is worth my time to point out the places where that document moves between fact and supposition. I urge you all to read my first hand account below in full, as I was present for the events in question. I renew my invitation from 2023 (below) to the members of the FSF Board to appear in a public event with me and discuss these events and concerns with the general public.

    Update in 2023: Careful readers will note that at the time I made this original post (which remains in full below), I did not disclose the precise circumstances of how I came to no longer be a Voting Member and an at-large Director of the Free Software Foundation (FSF) in October 2019. Because I was vague about the details, some pundits incorrectly reported that I resigned. I did not resign; instead, I was narrowly (by exactly one vote) voted out (of all my FSF roles) by FSF's Voting Members.

    I was voted out for various reasons. The most relevant reason was a fundamental disagreement about the criteria and requirements for RMS' return to the FSF Board of Directors. In particular, during September-October 2019, I was insisting that one qualification for reinstatement was a complete, unqualified apology for RMS' September 2019 statements that (a) “she [Virginia Giuffre] presented herself to him [Marvin Minsky] as entirely willing”, and (b) Giuffre (who was sex-trafficked by Jeffrey Epstein) committed “an injustice” by accusing Minsky of sexual assault in her deposition. To my knowledge, RMS has still not apologized for those statements, nor for his many similarly harmful statements about sexual assault. In fact, the press called RMS' April 2021 follow-up statement on these matters a non-apology apology. In that April 2021 statement, RMS actually repeats that any accusation of sexual assault against Minsky remains an “injustice”. (Minsky, BTW, had died of a cerebral hemorrhage at age 88 — which was four months before Giuffre made the accusation in her sealed deposition, and more than three years before that deposition was made public.)

    Furthermore, RMS' subsequent re-election to FSF's Board of Directors was already under discussion by the Voting Members in October 2019. That thin majority of the Voting Members knew that I would (and I do) find RMS' “non-apology apology” inadequate to resolve the situation sufficiently to yield my “yes” vote to reinstate RMS to FSF's Board of Directors. In short, I wanted more accountability and actions as a condition for RMS' return to FSF's Board of Directors than that thin majority of FSF's Voting Members knew they would ultimately require. So, they voted me out preemptively. As I said, there are other reasons, and plenty of political intrigue. Nevertheless, this summary is, IMO, accurate. (BTW, I'd also be glad to do a public, recorded Q&A with the FSF Voting Members at any time if they were willing — I do realize I'm telling just one side of a multi-sided story here. I would prefer improved transparency on these issues. In fact, another disagreement that I contemporaneously had in late 2019 with that same thin majority was that I was demanding better transparency regarding the FSF governance politics, and the Voting Members and Directors refused.)

    One additional thing that the press got wrong in covering this issue from September 2019 to April 2021 was that (to my knowledge) it was never reported that RMS never resigned as an FSF Voting Member. IOW, nearly everyone missed the fact that during the period (from September 2019 to March 2021) when RMS was not an FSF Director, RMS did remain an FSF Voting Member. And, since I'm sure folks will ask: yes, RMS' vote was indeed one of the votes in that thin majority that removed me from all my roles at the FSF in October 2019.

    Finally, I want to note that, over the years I've been trying to understand these events, new information that came to light later was very helpful. The Massachusetts Institute of Technology (MIT) report about MIT's long relationship with Jeffrey Epstein (published in 2020) explained a lot. Until reading that report, I had not realized that Epstein had an incredibly close relationship with the faculty of MIT's Computer Science and Artificial Intelligence Lab (CSAIL) and the Media Lab. For example, I personally was aghast to learn that (a) Marvin Minsky visited Epstein when Epstein was incarcerated in Florida for child prostitution in 2008, (b) Epstein was considered by many MIT faculty to be a “friend” (and Minsky specifically was considered Epstein's “closest friend”), and (c) Epstein's 2008 conviction seems to have been common knowledge at MIT — including among CSAIL and MIT Media Lab faculty and fundraisers.

    Indeed, looking at the dates in the MIT Epstein report, I realized that I was on the MIT campus for various FSF meetings contemporaneous with some of the events in that report. I'm disgusted just at the idea that from 2001-2019, I occasionally used MIT CSAIL facilities for my FSF volunteer and staff work while MIT was gladly accepting Epstein's money and CSAIL faculty were promoting and endorsing him.


    Original 2019-10-15 post follows:

    The last 33 days have been unprecedentedly difficult for the software freedom community and for me personally. Folks have been emailing, phoning, texting, tagging me on social media (the last of which has been funny, because all my social media accounts are placeholder accounts). But, just about everyone has urged me to comment on the serious issues that the software freedom community now faces. Until now, I have stayed silent regarding all these current topics: from Richard M. Stallman (RMS)'s public statements, to his resignation from the Free Software Foundation (FSF), to the Epstein scandal and its connection to MIT. I've also avoided generally commenting on software freedom organizational governance during this period. I did this for good reason, which is explained below. However, in this blog post, I now share my primary comments on the matters that currently seem to command the utmost attention of the Open Source and Free Software communities.

    I have been silent the last month because, until two days ago, I was an at-large member of FSF's Board of Directors, and a Voting Member of the FSF. As a member of FSF's two leadership bodies, I was abiding by a reasonable request from the FSF management and my duty to the organization. Specifically, the FSF asked that all communication during the crisis come directly from FSF officers and not from at-large directors and/or Voting Members. Furthermore, the FSF management asked all Directors and Voting Members to remain silent on this entire matter — even on issues only tangentially related to the current situation, and even when speaking in our own capacity (e.g., on our own blogs like this one). The FSF is an important organization, and I take any request from the FSF seriously — so I abided fully by their request — even though many of the other at-large Directors of the FSF did not.

    The situation was further complicated because folks at my employer, Software Freedom Conservancy (where I also serve on the Board of Directors) had strong opinions about this matter as well. Fortunately, the FSF and Conservancy both had already created clear protocols for what I should do if ever there was a disagreement or divergence of views between Conservancy and FSF. I therefore was recused fully from the planning, drafting, and timing of Conservancy's statement on this matter. I thank my colleagues at the Conservancy for working so carefully to keep me entirely outside the loop on their statement and to diligently assure that it was straight-forward for me to manage any potential organizational disagreements. I also thank those at the FSF who outlined clear protocols (ahead of time, back in March 2019) in case a situation like this ever came up. I also know my colleagues at Conservancy care deeply, as I do, about the health and welfare of the FSF and its mission of fighting for universal software freedom for all. None of us want, nor have, any substantive disagreement over software freedom issues.

    I take very seriously my duty to the various organizations where I have (or have had) affiliations. More generally, I champion non-profit organizational transparency. Unfortunately, the current crisis left me in a quandary between the overarching goal of community transparency and abiding by FSF management's directives. Now that I've left the FSF Board of Directors, FSF's Voting Membership, and all my FSF volunteer roles (which ends my 22-year uninterrupted affiliation with the FSF), I can now comment on the substantive issues that face not just the FSF, but the Free Software community as a whole, while continuing to adhere to my past duty of acting in FSF's best interest. In other words, my affiliation with the FSF has come to an end for many good and useful reasons. The end to this affiliation allows me to speak directly about the core issues at the heart of the community's current crisis.

    Firstly, all these events — from RMS' public comments on the MIT mailing list, to RMS' resignation from the FSF, to RMS' discussions about the next steps for the GNU project — seem to many to have happened ridiculously quickly. But it wasn't actually fast at all. In fact, these events were the culmination of issues that had been slowly growing in concern to many people, including me.

    For the last two years, I had been a loud internal voice in the FSF leadership regarding RMS' Free-Software-unrelated public statements; I felt strongly that it was in the best interest of the FSF to actively seek to limit such statements, and that it was my duty to FSF to speak out about this within the organization. Those who only learned of this story in the last month (understandably) believed Selam G.'s Medium post raised an entirely new issue. In fact, RMS' views and statements posted on stallman.org about sexual morality escalated for the worse over the last few years. When the escalation started, I still considered RMS both a friend and colleague, and I attempted to argue with him at length to convince him that some of his positions were harmful to sexual assault survivors and those who are sex-trafficked, and to the people who devote their lives in service to such individuals. More importantly to the FSF, I attempted to persuade RMS that launching a controversial campaign on sexual behavior and morality was counter to his and FSF's mission to advance software freedom, and told RMS that my duty as an FSF Director was to assure the best outcome for the FSF, which IMO didn't include having a leader who made such statements. Not only is human sexual behavior not a topic on which RMS has adequate academic expertise, but also his positions appear to ignore significant research and widely available information on the subject. Many of his comments, while occasionally politically intriguing, lack empathy for people who experienced trauma.

    IMO, this is not and has never been a Free Speech issue. I do believe freedom of speech links directly to software freedom: indeed, I see the freedom to publish software under Free licenses as almost a corollary to the freedom of speech. However, we do not need to follow leadership from those with whose views we fundamentally disagree. Moreover, organizations need not and should not elevate spokespeople and leaders who speak regularly on unrelated issues that organizations find do not advance their mission, and/or that alienate important constituents. I, like many other software freedom leaders, curtail my public comments on issues not related to FOSS. (Indeed, I would not even be commenting on this issue if it had not become a central issue of concern to the software freedom community.) Leaders have power, and they must exercise the power of their words with restraint, not with impunity.

    RMS has consistently argued that there was a campaign of “prudish intimidation” — seeking to keep him quiet about his views on sexuality. After years of conversing with RMS about how his non-software-freedom views were a distraction, an indulgence, and downright problematic, his general response was to make even more public comments of this nature. The issue is not about RMS' right to say what he believes, nor is it even about whether or not you agree or disagree with RMS' statements. The question is whether an organization should have a designated leader who is on a sustained, public campaign advocating about an unrelated issue that many consider controversial. It really doesn't matter what your view about the controversial issue is; a leader who refuses to stop talking loudly about unrelated issues eventually creates an untenable distraction from the radical activism you're actively trying to advance. The message of universal software freedom is a radical cause; it's basically impossible for one individual to effectively push forward two unrelated controversial agendas at once. In short, the radical message of software freedom became overshadowed by RMS' radical views about sexual morality.

    And here is where I say the thing that may infuriate many but it's what I believe: I think RMS took a useful step by resigning some of his leadership roles at the FSF. I thank RMS for taking that step, and I wish the FSF Directors well in their efforts to assure that the FSF becomes a welcoming organization to all who care about universal software freedom. The FSF's mission is essential to our technological future, and we should all support that mission. I care deeply about that mission myself and have worked and will continue to work in our community in the best interest of the mission.

    I'm admittedly struggling to find a way to work again with RMS, given his views on sexual morality and his behaviors stemming from those views. I explicitly do not agree with this “(re-)definition” of sexual assault. Furthermore, I believe uninformed statements about sexual assault are irresponsible and cause harm to victims. #MeToo is not a “frenzy”; it is a global movement by individuals who have been harmed seeking to hold both bad actors and society-at-large accountable for ignoring systemic wrongs. Nevertheless, I still am proud of the essay that I co-wrote with RMS and still find many of RMS' other essays compelling, important, and relevant.

    I want the FSF to succeed in its mission and enter a new era of accomplishments. I've spent the last 22 years, without a break, dedicating substantial time, effort, care and loyalty to the various FSF roles that I've had: including employee, volunteer, at-large Director, and Voting Member. Even though my duties to the FSF are done, and my relationship with the FSF is no longer formal, I still think the FSF is a valuable institution worth helping and saving, specifically because the FSF was founded for a mission that I deeply support. And we should also realize that RMS — a human being (who is flawed like the rest of us) — invented that mission.

    As culture change becomes more rapid, I hope we can find reasonable nuance and moderation on our complex analysis about people and their disparate views, while we also hold individuals fully accountable for their actions. That's the difficulty we face in the post-post-modern culture of the early twenty-first century. Most importantly, I believe we must find a way to stand firm for software freedom while also making a safe environment for victims of sexual assault, sexual abuse, gaslighting, and other deplorable actions.

    Posted on Tuesday 15 October 2019 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

May

  • 2019-05-23: Chasing Quick Fixes To Sustainability

    This post is co-authored with my colleague, Karen M. Sandler, and is crossposted from Software Freedom Conservancy's website.

    Various companies and trade associations have now launched their own variations on answers to the question of “FOSS sustainability”. We commented in March on Linux Foundation's Community Bridge, and Bradley's talk at SCALE 2019 focused on this issue (video). Assuring that developers are funded to continue to maintain and improve FOSS is the focus of many organizations in our community, including charities like ourselves, the Free Software Foundation, the GNOME Foundation, Software in the Public Interest, and others.

    Today, another for-profit company, GitHub, announced their sponsors program. We're glad that GitHub is taking seriously the issue of assuring that those doing the work in FOSS are financially supported. We hope that GitHub will ultimately facilitate charities as payees, so that Conservancy membership projects can benefit. We realize the program is in beta, but our overarching concern remains that the fundamental approach of this new program fails to address any of the major issues that have already been identified in FOSS sustainability.

    Conservancy has paid hundreds of thousands of dollars to fund FOSS developers over the course of our existence. We find that managing the community governance, carefully negotiating with communities about who will be paid, how paid workers interact with the unpaid volunteers, and otherwise managing and assuring that donor dollars are well spent to advance the project are the great challenges of FOSS sustainability. We realize that newcomers to this discussion (like GitHub and their parent company, Microsoft) may not be aware of these complex problems. We also have sympathy for their current approach: when Conservancy started, we too thought that merely putting up a donation button and routing payments was the primary and central activity to assure FOSS sustainability. We quickly discovered that those tasks are prerequisites, but alone are not sufficient to succeed.

    Just as important is how the infrastructure is implemented. GitHub is a proprietary software platform for FOSS development, and their sponsors program implements more proprietary software on top of that proprietary platform. FOSS developers should have FOSS that helps them fund their work. Choosing FOSS instead of proprietary software is not always easy initially. Conservancy promotes free-as-in-freedom solutions like our Houdini project and other initiatives throughout our community. We are somewhat alarmed at the advent of so many entrants into the FOSS sustainability space that offer proprietary software and/or proprietary network services as a proposed solution. We hope that GitHub and others who have entered this space recently will collaborate with the existing community of charities who are already working on this problem and remain in search of long-term sustainable, FOSS-friendly solutions.

    Note: This post was co-authored with Karen M. Sandler.

    Posted on Thursday 23 May 2019 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2019-05-10: Delta Airlines Crosses One Line Too Far in Union Busting

    We create, develop, document and collaborate as users of Free and Open Source Software (FOSS) from around the globe, usually by working remotely on the Internet. However, human beings have many millennia of evolution that make us predisposed to communicate most effectively via in-person interaction. We don't just rely on the content of communication, but on its manner of expression, the body language of the communicator, and thousands of different non-verbal cues and subtle communication mechanisms. In fact, I believe something that's quite radical for a software freedom activist to believe: meeting in person to discuss something is always better than some form of online communication. And this belief is why I attend so many FOSS events, and encourage (and work in my day job to support) programs and policies that financially assist others in FOSS to attend such events.

    When I travel, Delta Airlines often works out to be the best option for my travel: they have many international flights from my home airport (PDX), including a daily one to AMS in Europe — and since many FOSS events are in Europe, this has worked out well.

    Admittedly, most for-profit companies that I patronize regularly engage in some activity that I find abhorrent. One of the biggest challenges of modern middle-class life in an industrialized society is figuring out (absent becoming a Thoreau-inspired recluse) how to navigate one's comfort level with patronizing companies that engage in bad behaviors. We all have to pick our own boycotts and what vendors we're going to avoid.

    I realize that all the commercial airlines are some of the worst environmental polluters in the world. I realize that they all hire union-busting law firms to help them mistreat their workers. But, Delta Airlines' recent PR campaign to frighten their workers about unions was one dirty trick too far.

    I know unions can be inconvenient for organizational leadership; I actually have been a manager of a workforce that unionized while I was an executive. I personally negotiated that union contract with staff. The process is admittedly annoying and complicated. But I fundamentally believe it's deeply necessary, because workers' right to collectively organize and negotiate with their employers is a cornerstone of equality — not just in the USA but around the entire world.

    Furthermore, the Delta posters are particularly offensive because they reach into the basest problematic instinct in humans that often becomes our downfall: the belief that one's own short-term personal convenience and comfort should be valued higher than the long-term good of our larger community. It's that instinct that causes us to litter, or to shun public transit and favor driving a car and/or calling a ride service.

    We won't be perfect in our efforts to serve the greater good, and sometimes we're going to selfishly (say) buy a video game system with money that could go to a better cause. What's truly offensive, and downright nefarious here, is that Delta Airlines — surely in full knowledge of the worst parts of some human instincts — attempted to exploit that for their own profit and future ability to oppress their workforce.

    As a regular Delta customer (both personally, and through my employer when they reimburse my travel), I had to decide how to respond to this act that's beyond the pale. I've decided on the following steps:

    • I've written the following statement via Delta's complaint form:

      I am a Diamond Medallion (since 2016) on Delta, and I've flown more than 975,000 miles on Delta since 2000. I am also a (admittedly small) shareholder in Delta myself (via my retirement savings accounts).

      I realize that it is common practice for your company (and indeed likely every other airline) to negotiate hard with unions to get the best deal for your company and its shareholders. However, taking the step to launch what appears to be a well-funded and planned PR campaign to convince your workers to reject the union and instead spend union dues funds on frivolous purchases is a despicable, nefarious strategy. Your fiduciary duty to your shareholders does not mandate the use of unethical and immoral strategies with your unionizing labor force — only that you negotiate in good faith to get the best deal with them for the company.

      I demand that Delta issue a public apology for the posters. Ideally, such an apology would include a statement by Delta indicating that you believe your workers have the right to unionize, that workers should take seriously the counter-arguments put forward by the union in favor of union dues, and that each employee should decide for themselves what is right.

      I've already booked my primary travel through the rest of the year, so I cannot easily pivot away from Delta quickly. This gives you some time to do the right thing. If Delta does not apologize publicly for this incident by November 1st, 2019, I plan to begin avoiding Delta as a carrier and will seek a status match on another airline.

      I realize that this complaint email will likely primarily be read by labor, not by management. I thus also encourage you to do two things: (a) I hope you'll share this message, to the extent you are permitted under your employment agreement, with your coworkers. Know that there are Diamond Medallions out here in the Delta system who support your right to unionize. (b) I hope you escalate this matter up to management decision-makers so they know that regular customers are unhappy at their actions.

    • Given that I'm already booked on many non-refundable Delta flights in the coming months, I would like to make business-card-sized flyers that say something like: “I'm a Delta frequent flyer & I support a unionizing workforce.” and maybe, on the other side: “Delta should apologize for the posters.” It would be great if these had some good graphics or were otherwise eye-catching in some way. The idea would be to give them out to travelers and leave them in seat pockets on flights for others to find. If anyone is interested in this project and would like to help, email me — I have no graphic design skills and would appreciate help.
    • I'm encouraging everyone to visit Delta's complaint form and complain about this. If you've flown Delta before with a frequent flyer account, make sure you're logged into that account when you fill out the form — I know from experience their system prioritizes how seriously they take the complaint based on your past travel.
    • I plan to keep my DAL stock shares until the next annual meeting, and (schedule permitting) I plan to attend the annual meeting and attempt to speak about the issue (or at least give out the aforementioned business cards there). I'll also look into whether shareholders can attend earnings calls to ask questions, so maybe I can do something of this nature before the next annual meeting.

    Overall, there is one positive outcome of this for me personally: I am renewed in my appreciation for having spent most of my career working for charities. Charities in the software freedom community have our problems, but nearly everyone I've worked with at software freedom charities (including management) have always been staunchly pro-union. Workers have a right to negotiate on equal terms with their employers and be treated as equals to come to equitable arrangements about working conditions and workplace issues. Unions aren't perfect, but they are the only way to effectively do that when a workforce is larger than a few people.

    Posted on Friday 10 May 2019 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

March

  • 2019-03-13: Understanding LF's New “Community Bridge”

    [ This blog post was co-written by me and Karen M. Sandler, with input from Deb Nicholson, for our Conservancy blog, and that is its canonical location. I'm reposting it here just for the convenience of those who are subscribed to my RSS feed but do not get Conservancy's feed. ]

    Yesterday, the Linux Foundation (LF) launched a new service, called “Community Bridge” — an ambitious platform that promises a self-service system to handle finances, address security issues, manage CLAs and license compliance, and also bring mentorship to projects. These tasks are difficult work that typically requires human intervention, so we understand the allure of automating them; we and our peer organizations have long welcomed newcomers to this field and have together sought collaborative assistance for these issues. Indeed, Community Bridge's offerings bear some similarity to the work of organizations like the Apache Software Foundation, the Free Software Foundation (FSF), the GNOME Foundation (GF), the Open Source Initiative (OSI), Software in the Public Interest (SPI) and Conservancy. People have already begun to ask us to compare this initiative to our work and the work of our peer organizations. This blog post hopefully answers those questions, as well as similar questions we anticipate.

    The first huge difference (and the biggest disappointment for the entire FOSS community) is that LF's Community Bridge is a proprietary software system. §4.2 of their Platform Use Agreement requires those who sign up for this platform to agree to a proprietary software license, and LF has remained silent about the proprietary nature of the platform in its explanatory materials. The LF, as an organization dedicated to Open Source, should release the source for Community Bridge. At Conservancy, we've worked since 2012 on a Non-Profit Accounting Software system, including creating a tagging system for transparently documenting ledger transactions, and various support software around that. We and SPI both now use these methods daily. We also funded the creation of a system to manage mentorship programs, which we now use to run the Outreachy mentorship program. We believe fundamentally that the infrastructure we provide for FOSS fiscal sponsorship (including accounting, mentorship and license compliance) must itself be FOSS, and developed in public as a FOSS project. LF's own research already shows that transparency is impossible for systems that are not FOSS. More importantly, LF's new software could directly benefit so many organizations in our community, including not only Conservancy but also the many others (listed above) who do some form of fiscal sponsorship. LF shouldn't behave like a proprietary software company like Patreon or Kickstarter, but instead support FOSS development. Generally speaking, all Conservancy's peer organizations (listed above) have been fully dedicated to the idea that any infrastructure developed for fiscal sponsorship should itself be FOSS. LF has deviated here from this community norm by unnecessarily requiring FOSS developers to use proprietary software to receive these services, and also failing to collaborate over a FOSS codebase with the existing community of organizations. LF Executive Director Jim Zemlin has said that he “wants more participation in open source … to advance its sustainability and … wants organizations to share their code for the benefit of their fellow [hu]mankind”; we ask him to apply these principles to his own organization now.

    The second difference is that LF is not a charity, but a trade association — designed to serve the common business interest of its paid members, who control its Board of Directors. This means that donations made to projects through their system will not be tax-deductible in the USA, and that the money can be used in ways that do not necessarily benefit the public good. For some projects, this may well be an advantage: not all FOSS projects operate in the public good. We believe charitable commitment remains a huge benefit of joining a fiscal sponsor like Conservancy, FSF, GF, or SPI. While charitable affiliation means there are more constraints on how projects can spend their funds, as the projects must show that their spending serves the public benefit, we believe that such constraints are most valuable. Legal requirements that assure the behavior of the organization always benefits the general public are a good thing. However, some projects may indeed prefer to serve the common business interest of LF's member companies rather than the public good; projects should note that such benefit to the common business interest is mandatory on this platform — it's explicitly unauthorized to use LF's platform to engage in activities in conflict with LF’s trade association status. Furthermore, (per the FAQ) only one maintainer can administer a project's account, so the platform currently only supports the “BDFL” FOSS governance model, which has already been widely discredited. No governance check exists to ensure that the project's interests align with spending, or to verify that the maintainer acts with consent of a larger group to implement group decisions. Even worse, (per §2.3 of the Usage Agreement) terminating the relationship means ceasing use of the account; no provision allows transfer of the money somewhere else when projects' needs change.

    Finally, the LF offers services that are mainly orthogonal to, and/or a subset of, the services provided by a typical fiscal sponsor. Conservancy, for example, does work to negotiate contracts, assist in active fundraising, deal with legal and licensing issues, and various other hands-on work. LF's system is similar to Patreon and other platforms in that it is a hands-off system that takes a cut of the money and provides minimal financial services. Participants will still need to worry about forming their own organization if they want to sign contracts, have an entity that can engage with lawyers and receive legal advice for the project, work through governance issues, or do the many other things that projects often want from a fiscal sponsor.

    Historically, fiscal sponsors in FOSS have not treated each other as competitors. Conservancy collaborates often with SPI, FSF, and GF in particular. We refer applicant projects to other entities, including explaining to applicants that a trade association may be a better fit for their project. In some cases, we have even referred such trade-association-appropriate applicants to the LF itself, and the LF then helped them form their own sub-organizations and/or became LF Collaborative Projects. The launch of this platform, as proprietary software, without coordination with the rest of the FOSS organization community, is unnecessarily uncollaborative with our community and we therefore encourage some skepticism here. That said, this new LF system is probably just right for FOSS projects that (a) prefer to use single-point-of-failure, proprietary software rather than FOSS for their infrastructure, (b) do not want to operate in a way that is dedicated to the public good, and (c) have very minimal fiscal sponsorship needs, such as occasional reimbursements of project expenses.

    Posted on Wednesday 13 March 2019 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2018

December

  • 2018-12-15: What Debian Does For Me

    I woke up early this morning, and those of you who live above the 45th parallel north or so are used to the “I'm wide awake but it's still dark as night” feeling in the winter. I usually don't turn on the lights; I wander into my office and just bring my computer out of hibernation. That takes a bit, as my 100% Free-Software-only computer is old and slow, so I usually go make coffee while it happens.

    As I came back into my office this morning, I was a bit struck by both displays showing the huge Debian screen-lock image, and it got me thinking of how Debian has been my companion for so many years. I spoke about this at DebConf 15 a bit, and wrote about a similar concept years before. I realize that it's been almost nine years that I've been thinking rather deeply about my personal relationship with Debian and why it matters.

    This morning, I was inspired to post this because, echoing my thoughts from my DebConf 15 talk, I can't actually do the work I do without Debian. I thought this morning about a few simple things that Debian gets done for me that are essential:

    • Licensing assurance. I really can trust that Debian will not put something in main that fails to respect my software freedom. Given my lifelong work on Free Software licensing, yes, I can vet a codebase to search for hidden proprietary software among the Free, but it's so convenient to have another group of people gladly do that job for me and other users.
    • Curated and configured software, with a connection to the expert. Some days it seems none of the new generation of developers are fans of software packaging anymore. Anytime you want to run something new these days, someone is trying to convince you to download some docker image or something like that. It's not that I don't see the value in that, but what I usually want is the software I just read about installed on my machine as quickly as possible. Debian's repository is huge, and the setup of Debian as a project allows each package maintainer to work in relative independence to make the software of their interest run correctly as part of the whole. For the user, that means when I hear about some interesting software, Debian immediately connects me, via apt, with the individual expert who knows about that software and my operating system / distribution both. Apt, Debian's Bug Tracker, etc. are actually a rudimentary but very usable form of social networking that allows me to find the person who did the job to get this software actually working on my system. (A tiny sketch of that idea follows this list.) That's a professional community that's amazing.
    • Stability. It's rather amusing: all the Debian developers I know run testing on their laptops and stable only on their servers. I run stable on my laptop. I have a hectic schedule and always lots of work to do that, sadly, does not usually include “making my personal infrastructure setup do new things”. While I enjoy that sort of work, it's a rabbit hole that I rarely have the luxury to enter. Running Debian stable on my laptop means I am (almost) never surprised by any behavior of my equipment. In the last nine years, if my computer does something weird, it's basically always a hardware problem.
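
    Here's a rough, purely illustrative sketch of my own (not part of Debian's tooling) of that “apt as a social network” point: every package in the archive records its maintainer, so the responsible expert is one query away. It assumes a Debian(-ish) system where apt-cache is available, and uses the “hello” package only as an example.

        # Illustrative only: ask apt's cache who maintains a given package.
        import subprocess

        def maintainer_of(package):
            # `apt-cache show` prints the package's control fields; one of them
            # is the Maintainer: line naming the Debian packager for the package.
            out = subprocess.run(["apt-cache", "show", package],
                                 capture_output=True, text=True, check=True).stdout
            for line in out.splitlines():
                if line.startswith("Maintainer:"):
                    return line.split(":", 1)[1].strip()
            return None

        print(maintainer_of("hello"))   # e.g. the maintainer of GNU hello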

    Sure, maybe you can get the last two mostly with other distributions, but I don't think you can get the first one anywhere better. Anyway, I've gotta get to work for the day, but those of you out there that make Debian happen, perhaps you'll see a bit of a thank you from me today. While I've thanked you all before, I think that no one does it enough.

    Posted on Saturday 15 December 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

November

  • 2018-11-22: My Views on GNU Kind Communication Guidelines and Related Material

    I have until now avoided making a public statement about my views on the various interrelated issues regarding the GNU Kind Communication Guidelines that came up over the last month. However, given increasing interest in our community on these issues, and the repeated inquiries that I received privately from major contributors in our community, I now must state my views publicly. I don't have much desire to debate these topics in public, nor do I think such is particularly useful, but I've been asked frequently about these GNU policy statements. I feel, if for no other reason than efficiency, that I should share them in one place publicly for easy reference:

    • I think the GNU Kind Communication Guidelines, as a stand-alone document, are useful suggestions and helpful to the GNU project and would be helpful, if adopted, for any software freedom project.
    • However, I think that the GNU Kind Communication Guidelines standing alone are inadequate for a project of GNU's size and number of contributors to address the stated problems. Traditional Codes of Conduct, particularly those that offer mechanisms for complaint resolution when bad behavior occurs, are necessary in Free Software projects of GNU's size. Codes of Conduct are the best mechanism known today in our community to ensure welcoming environments for those who might be targeted by inappropriate and unprofessional behavior.
    • I therefore disagree with the meta-material stated in the announcement of these Communication Guidelines. First, I disagree with the decision to reject any Code of Conduct for the GNU project. Second, I believe that diversity is an important goal for advancing software freedom and human equality generally. I support all Outreachy's goals (including their political ones) and I work hard to help Outreachy succeed as part of my day job. I have publicly supported affirmative action since the early 1990s, and continue to support it. I agree with “making diversity a goal”; Richard Stallman (RMS), speaking on behalf of GNU, states that perse disagrees with “making diversity a goal”.
    • I also disagree with encouraging GNU project contributors to ignore the request of non-binary-gender individuals who ask for the pronouns they/them0, as stated in RMS' personal essay linked to from the GNU Kind Communication Guidelines. My position is that refusing to use the pronouns people ask for is the same unkindness as refusing to call transgender people by a name that is not their legal name when they request it. I don't think the grammatical argument that “pronouns are different from proper nouns” is compelling enough to warrant unwelcoming behavior toward these individuals. The words people use matter. RMS has insisted for years — for good reason — that people make a clear distinction between open source and free software. I believe that how we say things makes a political statement in itself.
    • Related to the last point, I am concerned with the conflating of GNU project views with RMS' personal views. RMS seems to have decided unilaterally that GNU would take a position that requests for use of they/them pronouns need not be honored. I think it is essential that RMS keep per personal views separate from official GNU policy; I have said so many times to the FSF Board of Directors in various contexts. It was a surprise to me that RMS' personal view on this issue was referenced as part of GNU project guidelines.
    • I think the GNU Kind Communication Guidelines should apply to all communication from the project, including the GNU manuals themselves, and I also believe the glibc abort() joke should be removed. I don't believe anyone's free speech is impacted if a Free Software project forbids certain types of off-topic communication in its official channels. Everyone can have their own website and blog to express their personal views; they don't need to do so through project channels.

    I have been encouraged many times this year by various prominent community members to resign from the FSF's Board of Directors (sometimes over these issues, and sometimes over other, similar issues). I have also received many private communications from other prominent community members (including some GNU contributors) expressing similar concerns to the above, but these individuals noted that they feel much better about the FSF and its shepherding of the GNU project because I'm on the FSF Board of Directors, even though I clearly pointed out to them that my views on these matters will not necessarily become GNU and/or FSF policy. The argument that many have made to me is that it's valuable to have dissenting opinions in the leadership on these issues, even if those dissenting opinions do not become FSF and/or GNU policy.

    I am swayed by the latter argument, and I have decided to continue as an FSF Director indefinitely (assuming the other Directors wish me to continue). However, these recent public positions are far enough out of alignment with my own views that I feel it necessary to exercise my own free speech rights here on my personal blog and state my disagreement with them. I will continue to urge the FSF and GNU to change and/or clarify these positions. (I also sent this blog post privately to the FSF Directors 8 days before I posted it, and had also discussed these concerns in detail with RMS for a month before posting this.)

    Governing well means working (and finding common ground) with those with whom you disagree. We oscillate a bit too much in software freedom communities: either we air every last disagreement no matter how minor, or (perhaps as an over-correction to the former) we seek to represent a seemingly perfect consensus even when one isn't present. I try to avoid both extremes; so this is the first time in my many years on the FSF Board of Directors that I've publicly disagreed with an FSF or GNU project policy. FSF and GNU primarily fight for one principle: equal software freedom for all users and developers. On other topics, there can easily exist disagreement, and working through those disagreements together, in my opinion, usually makes the community stronger.

    As always, this is my personal blog, and nothing here necessarily reflects the official views of any organization with which I am affiliated, including not only the Free Software Foundation and GNU, but also Software Freedom Conservancy.

    Change made on 2019-03-25: Above, the words “I am a supporter of Outreachy and work hard to help it succeed as part of my day job.” were changed to: “I support all Outreachy's goals (including their political ones)”.


    0 A review of various archive.org links shows that this particular text was surreptitiously changed in the weeks following my publication of this blog post. I was never contacted nor consulted to review either the original condemnation by the GNU project of they/them pronouns or the later improvements. This footnote was added in 2020, long after these incidents, as that's when I first became aware those changes had been made after the fact. I believe that the change, which evolved into something more reasonable after a few months of edits (but only after I posted this blog post), vindicates both my position that the GNU project should not have initially condemned the use of they/them pronouns for non-binary individuals, and my position that it would have been advisable for the GNU project to seek input from the FSF Board of Directors (which I was a member of at the time but am no longer) before setting such policies about diversity and inclusiveness.

    Posted on Thursday 22 November 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

October

  • 2018-10-16: Toward Community-Oriented, Public & Transparent Copyleft Policy Planning

    [ A similar version was crossposted on Conservancy's blog. ]

    More than 15 years ago, Free, Libre, and Open Source Software (FLOSS) community activists successfully argued that licensing proliferation was a serious threat to the viability of FLOSS. We convinced companies to end the era of “vanity” licenses. Different charities — from the Open Source Initiative (OSI) to the Free Software Foundation (FSF) to the Apache Software Foundation — all agreed we were better off with fewer FLOSS licenses. We de facto instituted what my colleague Richard Fontana once called the “Rule of Three” — assuring that any potential new FLOSS license would be met with suspicion unless (a) the OSI declares that it meets their Open Source Definition, (b) the FSF declares that it meets their Free Software Definition, and (c) the Debian Project declares that it meets their Debian Free Software Guidelines. The work of those organizations reduced license proliferation from a radioactive threat to safe background noise. Everyone thought the problem was solved. Pointless license drafting had become a rare practice, and updated versions of established licenses were handled with public engagement and close discussion with the OSI and other license evaluation experts.

    Sadly, the age of license proliferation has returned. It's harder to stop this time, because this isn't merely about corporate vanity licenses. Companies now have complex FLOSS policy agendas, and those agendas are not to guarantee software freedom for all. While it is annoying that our community must again confront an old threat, we are fortunate the problem is not hidden: companies proposing their own licenses are now straightforward about their new FLOSS licenses' purposes: to maximize profits.

    Open-in-name-only licenses are now common, but they seem like FLOSS licenses only to the most casual of readers. We've succeeded in convincing everyone to “check the OSI license list before you buy”. We can therefore easily dismiss licenses like the Commons Clause merely by stating that they are non-free/non-open-source and urging the community to avoid them. But the next stage of tactics has begun, and it is harder to combat. What happens when for-profit companies promulgate their own hyper-aggressive (quasi-)copyleft licenses that seek to pursue the key policy goal of “selling proprietary licenses” over “defending software freedom”? We're about to find out, because, yesterday, MongoDB declared themselves the arbiter of what “strong copyleft” means.

    Understanding MongoDB's Business Model

    To understand the policy threat inherent in MongoDB's so-called “Server Side Public License, Version 1”, one must first understand the fundamental business model of MongoDB and companies like it. These companies use copyleft for profit-making rather than freedom-protecting. First, they require full control (either via ©AA or CLA) of all copyrights in the work, and second, they offer two independent lines of licensing. Publicly, they provide the software under the strongest copyleft license available. Privately, the same (or secretly improved) versions of the software are available under fully proprietary terms. In theory, this could be merely selling exceptions: a benign manner of funding more Free Software code — giving the proprietary option only to those who request it. In practice — in all examples that have been even mildly successful (such as MongoDB and MySQL) — this mechanism serves as a warped proprietary licensing shake-down: “Gee, it looks like you're violating the copyleft license. That's a shame. I guess you just need to abandon the copyleft version and buy a proprietary license from us to get yourself out of this jam, since we don't plan to reinstate any lost rights and permissions under the copyleft license.” In other words, this structure grants exclusive and dictatorial power to a for-profit company as the arbiter of copyleft compliance. Indeed, we have never seen any of these companies follow or endorse the Principles of Community-Oriented GPL Enforcement. While it has made me unpopular with some, I make no apologies that I have consistently criticized this “proprietary relicensing” business model as “nefarious” since 2004, when I started hearing regular reports that MySQL AB (now Oracle) asserted GPL violations against compliant uses merely to scare users into becoming “customers”. Other companies, including MongoDB, have since emulated this activity.

    Why Seek Even Stronger Copyleft?

    The GNU Affero General Public License (AGPL) has done a wonderful job defending the software freedom of community-developed projects like Mastodon and Mediagoblin. So, we should answer with skepticism a solitary for-profit company coming forward to claim that “Affero GPL has not resulted in sufficient legal incentives for some of the largest users of infrastructure software … to participate in the community. Many open source developers are struggling with a similar reality”. If that last sentence were on Wikipedia, I'd edit it to add a Citation Needed tag, as I know of no multi-copyright-held or charity-based AGPL'd project that has “struggled with this reality”. In fact, it's only a “reality” for those that engage in proprietary relicensing. Eliot Horowitz, co-founder of MongoDB and promulgator of their new license, neglects to mention that.

    The most glaring problem with this license, which Horowitz admits in his OSI license-review list post, is that there was no community drafting process. Instead, a for-profit company, whose primary goal is to use copyleft as a weapon against the software-sharing community for the purpose of converting that “community” into paying customers, published this license as a fait accompli without prior public discussion of the license text.

    If this action were an isolated incident by one company, ignoring it would surely be the best response. Indeed, I urged everyone to simply ignore the Commons Clause. Now, we see a repackaging of the Commons Clause into a copyleft-like box (with reuse of Commons Clause's text such as “whose value derives, entirely or substantially, from the functionality of the Software”). Since both licenses were drafted in secret, we cannot know if the reuse of text was simply because the same lawyer was employed to write both, or if MongoDB has joined a broader and more significant industry-wide strategy to replace existing FLOSS licensing with alternatives that favor businesses over individuals.

    The Community Creation Process Matters

    Admittedly, the history of copyleft has been one of slowly evolving community-orientation. GPLv1 and GPLv2 were drafted in private, too, by Richard Stallman and FSF's (then) law firm lawyer, Jerry Cohen. However, from the start, the license steward was not Stallman himself, nor the law firm, but the FSF, a 501(c)(3) charity dedicated to serving the public good. As such, the FSF made substantial efforts in the GPLv3 process to reorient the drafting of copyleft licenses as a public policy and legislative process. Like all legislative processes, GPLv3 was not ideal — and I was even personally miffed to be relegated to the oft-ignored “GPLv3 Discussion Committee D” — but the GPLv3 process was undoubtedly a step forward in FLOSS community license drafting. Mozilla Corporation made efforts toward community collaboration in redrafting the MPL, and specifically included the OSI and the FSF (arbiters of the Open Source Definition and the Free Software Definition, respectively) in MPL's drafting deliberations. The modern acceptable standard is a leap rather than a step forward: a fully public, transparent drafting process with a fully public draft repository, as the copyleft-next project has done. I think we should now meet with utmost suspicion any license that does not use copyleft-next's approach of “running licensing drafting as a Free Software project”.

    I was admittedly skeptical of that approach at first. What I have seen in the six years since Richard Fontana started copyleft-next is that, simply put, the key people who are impacted most fundamentally by a software license are most likely to be aware of, and engage in, a drafting process if it is fully public, community-oriented, and uses community tools, like Git.

    Like legislation, the policies outlined in copyleft licenses impact the general public, so the general public should be welcomed into the drafting. At Conservancy, we don't draft our own licenses0, so our contracts with software developers and agreements with member projects state that the licenses be both “OSI-approved Open Source” and “FSF-approved GPL-compatible Free Software”. However, you can imagine that Conservancy has a serious vested interest in which licenses are ultimately approved by the OSI and the FSF. Indeed, with so much money flowing to software developers bound by those licenses, our very charitable mission could be at stake if the OSI and the FSF began approving proprietary licenses as Open, Free, and/or GPL-compatible. I therefore want to see license stewards work, as Mozilla did, to make the vetting process easier, not harder, for these organizations.

    A community drafting process allows everyone to vet the license text early and often, to investigate the community and industry impact of the license, and to probe the license drafter's intent through the acceptance and rejection of proposed modified text (ideally through a DVCS). With for-profit actors seeking to gain policy control of fundamental questions such as “what is strong copyleft?”, we must demand full drafting transparency and frank public discourse.

    The Challenge Licensing Arbiters Face

    OSI, FSF, and Debian have a huge challenge before them. Historically, the FSF was the only organization that sought to push the boundary of strong copyleft. (Full disclosure: I created the Affero clause while working for the FSF in 2002, inspired by Henry Poole's useful and timely demands for a true network services copyleft.) Yet, the Affero clause was itself controversial. Many complained that it changed the fundamental rules of copyleft. While “triggered only on distribution, not modification” was a fundamental rule of the regular GPL, we as a community — over time and after much public debate — decided the Affero clause is a legitimate copyleft, and AGPL was declared Open Source by the OSI and DFSG-free by Debian.

    That debate was obviously framed by the FSF. The FSF, due to public pressure, compromised by leaving the AGPL as an indefinite fork of the GPL (i.e., the FSF did not include the Affero clause in the plain GPL). While I personally lobbied (from GPLv3 Discussion Committee D and elsewhere) for the merger of AGPL and GPL during the GPLv3 drafting process, I respect the decision of the FSF, which was informed not by my one voice, but by the voices of the entire community.

    Furthermore, the FSF is a charity, chartered to serve the public good and the advancement of software freedom for users and developers. MongoDB is a for-profit company, chartered to serve the wallets of its owners. While MongoDB employees1 (like those of any other company) should be welcomed on equal footing with unaffiliated individuals, and with representatives of companies, charities, and trade associations, to the debate about the future of copyleft, we should not accept their active framing of that debate. Submitting this license to the OSI for approval without any public community discussion, and without any discussion whatsoever with the key charities in the community, is unacceptable. The OSI should now adopt a new requirement for license approval — namely, that licenses without a community-oriented drafting process should be rejected for the meta-reason of “non-transparent drafting”, regardless of their actual text. This will have the added benefit of forcing future license drafters to come to the OSI, on its public mailing lists, before the license is finalized. That will save the OSI the painstaking work of walking back bad license drafts, which has in recent years consumed much expert time from OSI's volunteers.

    Welcoming All To Public Discussion

    Earlier this year, Conservancy announced plans to host and organize the first annual CopyleftConf. Conservancy decided to do this because Conservancy seeks to create a truly neutral, open, friendly, and welcoming forum for discussion about the past and future of copyleft as a strategy for defending software freedom. We had no idea when Karen and I first mentioned the possibility of running CopyleftConf (during the Organizers' Panel at the end of the Legal and Policy DevRoom at FOSDEM 2018 in February) that multiple companies would come forward and seek to control the microphone on the future of copyleft. Now that MongoDB has done so, I'm very glad that the conference was already organized and on the calendar before they acted.

    Despite my criticisms of MongoDB, I welcome Eliot Horowitz, Heather Meeker (the law firm lawyer who drafted MongoDB's new license and the Commons Clause), or anyone else who was involved in the creation of MongoDB's new license to submit a talk. Conservancy will be announcing soon the independent group of copyleft experts (and critics!) who will make up the Program Committee and will independently evaluate the submissions. Even if a talk is rejected, I welcome rejected proposers to attend and speak about their views in the hallway track and the breakout sessions.

    One of the most important principles in copyleft policy that our community has learned is that commercial, non-commercial, and hobbyist activity3 should have equal footing with regard to rights assured by the copyleft licenses themselves. There is no debate about that; we all agree that copyleft codebases become meeting places for hobbyists, companies, charities, and trade associations to work together toward common goals and in harmony and software freedom. With this blog post, I call on everyone to continue on the long road to applying that same principle to the meta-level of how these licenses are drafted and how they are enforced. While we have done some work recently on the latter, not enough has been done on the former. MongoDB's actions today give us an opportunity to begin that work anew.


    0 While Conservancy does not draft any main FLOSS license texts, Conservancy does help with the drafting of additional permissions upon the request of our member projects. Note that additional permissions (sometimes called license exceptions) grant permission to engage in activities that the main license would otherwise prohibit. As such, by default, additional permissions can only make a copyleft license weaker, never stronger.

    1, 3 I originally had “individual actors” here instead of “hobbyist activity”, and I had additionally expressed poorly the idea of welcoming individuals representing all types of entities to the discussion. The miscommunication in my earlier text gave one person the wrong impression that I believe the rights of companies should be equal to the rights of individuals. I fundamentally believe that companies and organizations should not have rights of personhood, and I've updated the text in an effort to avoid such confusion.

    Posted on Tuesday 16 October 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2018-10-10: Thoughts on Microsoft Joining OIN's Patent Non-Aggression Pact

    [ A similar version was crossposted on Conservancy's blog. ]

    Folks lauded the news today that Microsoft has joined the Open Invention Network (OIN)'s limited patent non-aggression pact, suggesting that perhaps it will bring peace in our time regarding Microsoft's historical patent aggression. While today's announcement is a step forward, we call on Microsoft to make this just the beginning of their efforts to end their patent aggression against the software freedom community.

    The OIN patent non-aggression pact is governed by something called the Linux System Definition. This is the most important component of the OIN non-aggression pact, because it's often surprising what is not included in that Definition, especially when compared with Microsoft's patent aggression activities. Most importantly, the non-aggression pact only applies to the upstream versions of software, including Linux itself.

    We know that Microsoft has done patent troll shakedowns in the past on Linux products related to the exfat filesystem. While we at Conservancy were successful in getting the code that implements exfat for Linux released under GPL (by Samsung), that code has not been upstreamed into Linux. So, Microsoft has not included any patents it might hold on exfat in the patent non-aggression pact.

    We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to submit the exfat code itself upstream under GPLv2-or-later. This would provide two important protections to Linux users regarding exfat: (a) it would include any patents that read on exfat as part of OIN's non-aggression pact while Microsoft participates in OIN, and (b) it would provide the various benefits that GPLv2-or-later provides regarding patents, including an implied patent license and the protections provided by GPLv2§7 (and possibly other GPL protections and assurances as well).

    Posted on Wednesday 10 October 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

August

  • 2018-08-30: Challenges in Maintaining A Big Tent for Software Freedom

    [ A similar version of this blog post was cross-posted on Software Freedom Conservancy's blog. ]

    In recent weeks, I've been involved in a complex internal discussion within a major software freedom project about a desire to take a stance on social justice issues other than software freedom. In the discussion, many different people came forward with various issues that matter to them, including vegetarianism, diversity, and speech censorship, wondering how that software freedom project should handle other social justice causes that are not software freedom. This week, another (separate and fully unrelated) project, called Lerna, publicly had a similar debate. The issues involved are challenging, and they deserve careful consideration regardless of how they are raised.

    One of the first licensing discussions that I was ever involved in, back in the mid 1990s, was with a developer, a lifelong global peace activist, who objected to the GPL because it allowed the USA Department of Defense and the wider military industrial complex to incorporate software into their destructive killing machines. As a lifelong pacifist myself, I sympathized with his objection, and since then, I have regularly considered the question of “do those who perpetrate other social injustices deserve software freedom?”

    I ultimately drew much of my conclusion about this from activists for free speech, who have a longer history and have therefore had a longer time to consider the philosophical question. I remember in the late 1980s when I first learned of the ACLU, and hearing that they assisted the Ku Klux Klan in defending their right to march. I was flabbergasted; the Klan is historically well-documented as an organization that was party to horrific murder. Why would the ACLU defend their free speech rights? Recently, many people had a similar reaction when, in defense of the freedom of association and free speech of the National Rifle Association (NRA), the ACLU filed an amicus brief in a case involving the NRA, an organization that I and many others oppose politically. Again, we're left wondering: why should we act to defend the free speech and association rights of political causes we oppose — particularly for those like the NRA and big software companies who have adequate resources to defend themselves?

    A few weeks ago, I heard a good explanation of this in an interview with ACLU's Executive Director, whom I'll directly quote, as he stated succinctly the reason why ACLU has a long history of defending everyone's free speech and free association rights:

    [Our decision] to give legal representation to Nazis [was controversial].… It is not for the government's role to decide who gets a permit to march based on the content of their speech. We got lots of criticism, both internally and externally. … We believe these rights are for everyone, and we truly mean it — even for people we hate and whose ideology is loathsome, disgusting, and hurtful. [The ACLU can't be] just a liberal/left advocacy group; no liberal/left advocacy group would take on these kinds of cases. … It is important for us to forge a path that talks about this being about the rights of everyone.

    Ultimately, fighting for software freedom is a social justice cause similar to that of fighting for free speech and other causes that require equal rights for all. We will always find groups exploiting those freedoms for ill rather than good. We, as software freedom activists, will have to sometimes grit our teeth and defend the rights to modify and improve software for those we otherwise oppose. Indeed, they may even utilize that software for those objectionable activities. It's particularly annoying to do that for companies that otherwise produce proprietary software: after all, in another realm, they are actively working against our cause. Nevertheless, either we believe the Four Software Freedoms are universal, or we don't. If we do, even our active political opponents deserve them, too.

    I think we can take a good example from the ACLU on this matter. The ACLU, by standing firm on its core principles, now has, after two generations of work, developed the power to make impact on related causes. The ACLU is the primary organization defending immigrants who have been forcibly separated from their children by the USA government. I'd posit that only an organization with a long history of principled activity can have both the gravitas and adequate resources to take on that issue.

    Fortunately, software freedom is already successful enough that we can do at least a little bit of that now. For example, Conservancy (where I work) already took a public position, early, in opposition to Trump's immigration policy because of its negative impact on software freedom, whose advancement depends on the free movement of technologists around the world. Speaking out from the microphone built from our principled stand on software freedom, we can make an impact that denying software freedom to others never could. Specifically, rather than proprietarizing the licenses of projects to fight USA's Immigration and Customs Enforcement (ICE) and its software providers, I'd encourage us to figure out a specific FOSS package that we can prove is deployed for use at ICE, and use that fact as a rhetorical lever to criticize their bad behavior. For example, has anyone investigated whether ICE uses Linux-based servers to host their otherwise proprietary software systems? If so, the Linux community is already large and powerful enough that if a group of Linux contributors made a public statement in political opposition to the use of Linux in ICE's activities, it would get national news attention here in the USA. We could even ally with the ACLU to assure the message is heard. No license change is needed to do that, and it would surely be more effective.

    Again, this is how software freedom is so much like free speech. We give software freedom to all, which allows them to freely use and deploy the software for any purpose, just as hate groups can use the free speech microphone to share their ideas. However, like the ACLU, software freedom activists, who simultaneously defend all users' equal rights to copy, share, and modify the software, can use their platform — already standing on the moral high ground generated by that longtime, principled support of equal rights — to speak out against those who bring harm to society in other ways.

    Finally, note that the Four Software Freedoms obviously should never be the only laws and/or rules of conduct of our society. Just as you should be prevented from (proverbially) falsely yelling Fire! in a crowded movie theater, you still should be stopped when you deploy Free Software in a manner that violates some other law, or commits human rights violations. However, taking away software freedom from bad actors, while it seems like a panacea for other societal ills, will simply backfire. The simplicity and beauty of copyleft is that it takes away someone's software freedom only at the moment when they take away someone else's software freedom; copyleft ensures that is the only reason your software freedom should be lost. Simple tools work best when your social justice cause is an underdog, and we risk obscurity for our software if we seek to change the fundamentally simple design of copyleft licensing to include licensing penalties for other social justice grievances (even if we could agree on which other non-FOSS causes warrant “copyleft protection”). It means we have a big tent for software freedom, and we sometimes stand under it with people whose behavior we despise. The value we have is our ability to stand with them under the tent, and tell them: “while I respect your right to share and improve that software, I find the task you're doing with the software deplorable.” That's the message I deliver to any ICE agent who used Free Software while forcibly separating parents from their children.

    Posted on Thursday 30 August 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2018-08-22: Software Freedom Ensures the True Software Commons

    [ A similar version was crossposted on Conservancy's blog. ]

    Proprietary software has always been about a power relationship. Copyright and other legal systems give authors the power to decide what license to choose, and usually, they choose a license that favors themselves and takes rights and permissions away from others.

    The so-called “Commons Clause” purposely confuses and conflates many issues. The initiative is backed by FOSSA, a company that sells materiel in the proprietary compliance industrial complex. This clause recently made news again since other parties have now adopted this same license.

    This proprietary software license, which is not Open Source and does not respect the four freedoms of Free Software, seeks to hide a power imbalance, ironically, behind the guise of “Open Source sustainability”. Their argument, once you look past their assertion that the only way to save Open Source is to not do open source, is quite plain: if we can't make money as quickly and as easily as we'd like with this software, then we have to make sure no one else can either.

    These observations are not new. Software freedom advocates have always admitted that if your primary goal is to make money, proprietary software is a better option. It's not that you can't earn a living writing only Free Software; it's that proprietary software makes it easier because you have monopolistic power, granted to you by a legal system ill-equipped to deal with modern technology. In my view, it's a power which you don't deserve — that allows you to restrict others.

    Of course, we all want software freedom to exist and survive sustainably. But the environmental movement has already taught us that unbridled commerce and conspicuous consumption are not sustainable. Yet, companies still adopt strategies like the Commons Clause to chase the rapid growth and revenue that the proprietary software industry expects, claiming these strategies bolster the Commons (even if it is a “partial commons in name only”). The two goals are often just incompatible.

    At Software Freedom Conservancy (where I work), we ask our projects to be realistic about revenue. We don't typically see Conservancy projects grow at rapid rates. They grow at slow and steady rates, but they grow better, stronger, and more diverse because they take the time to invite everyone to get involved. The software takes longer to mature, but when it does, it's more robust and survives longer.

    I'll take a bet with anyone who'd like. Let's pick five projects under the Affero GPL and five projects under the Commons Clause, and then let's see which ones survive longer as vibrant communities with active codebases and diverse contributors.

    Finally, it's not surprising that the authors chose the name “Commons”. Sadly, “commons” has for many years been a compromised term, often used by those who want to promote licenses or organizational models that do not guarantee all four freedoms inherent in software freedom. Proprietary software is the ultimate tragedy of the software commons, and while it's clever rhetoric for our opposition to claim that they can make FLOSS sustainable by proprietarizing it, such an argument is also sophistry.

    Posted on Wednesday 22 August 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

July

  • 2018-07-29: In Memoriam: Gervase Markham

    Yesterday, we lost an important member of the FLOSS community. Gervase Markham finally succumbed after a long battle with cancer (specifically, metastatic adenoid cystic carcinoma).

    I met Gerv in the early 2000s, after he'd already been diagnosed. He was always very public about his illness, and he was frank with all who knew him that his life expectancy was sadly well below average due to it. So, this outcome isn't a surprise nor a shock, but it is nevertheless sad and unfortunate for all who knew him.

    I really liked Gerv. I found him insightful and thoughtful. His insatiable curiosity about my primary field — FLOSS licensing — was a source of enjoyment for me in our many conversations on the subject. Gerv was always Socratic in his approach: he asked questions rather than making statements, even when it was pretty obvious he had an answer of his own; he liked to spark debate and seek conversation. He thoughtfully considered the opinions of others, and I saw his positions change many times based on new information. I considered him open-minded and an important contributor to FLOSS licensing thought.

    I bring up Gerv's open-mindedness because I know that many people didn't find him so, but, frankly, I think those folks were mistaken. It is well documented publicly that Gerv held what most would consider particularly “conservative values”. And, I'll continue with more frankness: I found a few of Gerv's views offensive and morally wrong. But Gerv was also someone who could respectfully communicate his views. I never felt the need to avoid speaking with him or otherwise distance myself. Even if a particular position offended me, it was nevertheless clear to me that Gerv had come to his conclusions by starting from his (a priori) care and concern for all of humanity. Also, I could simply say to Gerv: I really disagree with that so much, and if it became clear our views were just too far apart to productively discuss the matter further, he'd happily and collaboratively find another subject for us to discuss. Gerv was a reasonable man. He could set aside fundamental disagreements and find common ground to talk with, collaborate with, and befriend those who disagreed with him. That level of kindness and openness is rarely seen in our current times.

    In fact, Gerv gave me a huge gift without even knowing it: he really helped me understand myself better. Specifically, I have for decades publicly stated my belief that the creation and promulgation of proprietary software is an immoral and harmful act. I am aware that many people (e.g., proprietary software developers) consider that view offensive. I learned much from Gerv about how to productively live in a world where the majority are offended by my deeply held, morally-founded and well-considered beliefs. Gerv taught me how to work positively, productively and in a friendly way alongside others who are offended by my most deeply-held convictions. While I mourn the loss of Gerv today, I am so glad that I had that opportunity to learn from him. I am grateful for the life he had and his work.

    Gerv's time with us was too short. In response, I suggest that we look at his life and work and learn from his example. Gerv set aside his illness for as long as possible to continue good work in FLOSS. If he can do that, we can all be inspired by him to set aside virtually any problem to work hard, together, for important outcomes that are bigger than us all.

    [ Finally, I should note that the text above was vetted and approved by Gerv, a few months ago, before his death. I am also very impressed that he planned so carefully for his own death that he contacted Conservancy to seek to assign his copyrights for safekeeping and took the time to review and comment on the text above. ]

    Posted on Sunday 29 July 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2018-07-23: When Meat Salespeople Call Vegans “fundamentalists”

    Someone linked me to this blog post by a boutique proprietary software company complaining about porting to GNU/Linux systems, in which David Power, co-founder of Hiri, says:

    Unfortunately, the fundamentalist FOSS mentality we encountered on Reddit is still alive and well. Some Linux blogs and Podcasts simply won’t give us the time of day.

    I just want to quickly share a few analogous quotes that show why that statement is an unwarranted and unfair characterization of people's reasonably held beliefs. First, imagine if Hiri were not a proprietary software company, but a butcher. Here's how the quote would sound:

    Unfortunately, the fundamentalist vegan mentality we encountered on Reddit is still alive and well. Some vegetarian blogs and Podcasts simply won’t give us the time of day.

    Should a butcher really expect vegetarian blogs and podcasts to talk about their great new cuts of meat available? Should a butcher be surprised that vegans disagree with them?

    How about if Hiri sold non-recycled card stock paper?

    Unfortunately, the fundamentalist recycling mentality we encountered on Reddit is still alive and well. Some environmentalist blogs and Podcasts simply won’t give us the time of day.

    If you make a product to which a large part of the potential customer population has a moral objection, you should expect that objection, and it's reasonable for that to happen. To admonish those people because they don't want to promote your product really is akin to a butcher annoyed that vegans won't promote their prime cuts of meat.

    Posted on Monday 23 July 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2018-07-12: On Avoiding Conflation of Political Speech and Hate Speech

    If you're one of the people in the software freedom community who is attending O'Reilly's Open Source Software Convention (OSCON) next week here in Portland, you may have seen debate about O'Reilly and Associates (ORA)'s surreptitious Code of Conduct change (and quick revocation thereof) to name “political affiliation” as a protected class. If you're going to OSCON or plan to go to an OSCON or ORA event in the future, I suggest that you familiarize yourself with this issue and the political and historical context in which the events of the last few days took place.

    First, OSCON has always been political: software freedom is inherently a political struggle for the rights of computer users, so any conference including that topic is necessarily political. Additionally, O'Reilly himself had stated his political positions many times at OSCON, so it's strange that, in his response this morning, O'Reilly admits that he and his staff tried to require via agreements that speakers … refrain from all political speech. OSCON can't possibly be a software freedom community event if ORA's intent … [is] to make sure that conferences put on for the exchange of technical information aren't politicized (as O'Reilly stated today). OTOH, I'm not surprised by this tack, because O'Reilly, in large part via OSCON, often pushes forward political views that O'Reilly likes, and marginalizes those he doesn't.

    Second, I must strongly disagree with ORA's new (as of this morning) position that Codes of Conduct should only include “protected classes” that the laws of a particular country currently recognize. Codes of Conduct exist in our community not only as a mechanism to assure the rights of protected classes, but also to assure that everyone feels safe and free of harassment and hate speech. In fact, most Codes of Conduct in our community have “including but not limited to” language alongside any list of protected classes, and IMO all of them should.

    More than that, ORA has missed a key opportunity to delineate hate speech from political speech in a manner that is sorely needed here in the USA and in the software freedom community. We live in a political climate where our Politician-in-Chief governs via Twitter and smoothly co-mingles political positioning with statements that would violate the Code of Conduct at most conferences. In other words, in a political climate where the party-ticket-headline candidate was exposed for celebrating his own sexually harassing behavior and got elected anyway, we are culturally going to have trouble nationwide distinguishing between political speech and hate speech. Furthermore, political manipulators now use that confusion to their own ends, and we must be ever-vigilant in efforts to assure that political speech is free, but that it is delineated from hate speech, and, most importantly, that our policy on the latter is zero-tolerance.

    In this climate, I'm disturbed to see that O'Reilly, who is certainly politically savvy enough to fully understand these delineations, is ignoring them completely. The rancor in our current politics — which is not just at the national level but has also trickled down into the software freedom community — is fueled by bad actors who will gladly conflate their own hate speech and political speech, and (in the irony that only post-fact politics can bring) those same people will also accuse the other side of hate speech, primarily by accusing them of intolerance of the original “political speech” (which was, of course, from the start, a mix of hate speech and political speech). (Examples of this abound, but one that comes to mind is Donald Trump's public back-and-forth with San Juan Mayor Carmen Yulín Cruz.) None of ORA's policy proposals, nor O'Reilly's public response, address this nuance. ORA's detractors are legitimately concerned, because blanketly adding “political affiliation” as a protected class, married with an outright ban on political speech, creates an environment where selective enforcement favors the powerful, and furthermore allows the Code of Conduct to more easily become a political weapon for those who engage in the conflation practice I described.

    However, it's no surprise that O'Reilly is taking this tack, either. OSCON (in particular) has a long history — on political issues of software freedom — of promoting (and even facilitating) certain political speech, even while squelching other political speech. Given that history (examples of which I include below), O'Reilly shouldn't be surprised that many in our community are legitimately skeptical about why ORA made these two changes without community discussion, only to quickly backpedal when exposed. I too am left wondering what political game O'Reilly is up to, since I recall well that Morozov documented O'Reilly's track record of political manipulation in his article, The Meme Hustler. I thus encourage everyone who attends ORA events to follow this political game with a careful eye and a good sense of OSCON history to figure out what's really going on. I've been watching for years, and OSCON is often a master class in achieving what Chomsky critically called “manufacturing consent” in politics.

    For example, back in 2001, when OSCON was already in its third year, Microsoft executives went on the political attack against copyleft (calling it un-American and a “cancer”). O'Reilly, long unfriendly to copyleft himself, personally invited Craig Mundie of Microsoft to have a “Great Debate” keynote at the next OSCON — where Mundie would “debate” with “Open Source leaders” about the value of Open Source. In reality, O'Reilly put lots of Open Source people on stage with Mundie, but among them was no one who supported the strategy of copyleft — the primary target of Microsoft's political attacks. The “debate” was artfully framed to have only one “logical” conclusion: “we all love Open Source — even Microsoft (!) — it's just copyleft that can be problematic and which we should avoid”. It was no debate at all; only carefully crafted messaging that left out much of the picture.

    That wasn't an isolated incident; both subtle and overt examples of crafted political messaging at OSCON became annual events after that. As another example, ten years later, O'Reilly did almost the same playbook again: he invited the GitHub CEO to give a very political and completely anti-copyleft keynote. After years of watching how O'Reilly carefully framed the political issue of copyleft at OSCON, I am definitely concerned about how other political issues might be framed.

    And, not all political issues are equal. I follow copyleft politics because it's been my day job for two decades. But, I admit there are even higher stakes with other political topics, and having watched how ORA has handled the politics of copyleft for decades, I'm fearful that ORA is (at best) ill-equipped to handle political issues that can cause real harm — such as the current political climate that permits hate speech, and even racist speech (think of Trump calling Elizabeth Warren “Pocahontas”), as standard political fare. The stakes of contemporary politics now leave people feeling unsafe. Since OSCON is a political event, ORA should face this directly rather than pretending OSCON is merely a series of technical lectures.

    The most insidious part of ORA's response to this issue is that, until the issue was called out, it seemed that all political speech (particularly that in opposition to the status quo) violated OSCON's policies by default. We've successfully gotten ORA to back down from that position, but not without a fight. My biggest concern is that ORA nearly ran OSCON this year with the problematic combination of banning political speech in the speaker agreement, while treating “political affiliation” as a protected class in the Code of Conduct. Regardless of intent, confusing and unclear rules like that are gamed primarily by bad actors, and O'Reilly knows that. Indeed, just days later, O'Reilly admits that both items were serious errors, yet still asks for voluntary compliance with the “spirit” of those confusing rules.

    How could it be that an organization that's been running the same event for two decades has only just begun to realize that these are complex issues? Paradoxically, I'm both baffled and not surprised that ORA has handled this issue so poorly. They still have no improved solution for the original problem that O'Reilly states they wanted to address (i.e., preventing hate speech). Meanwhile, they've cycled through a series of failed (and alarming) solutions without community input. Would it have really been that hard for them to publicly ask first: “We want to welcome all political views at OSCON, but we also detest hate speech that is sometimes joined with political speech. Does anyone want to join a committee to work on improvements to our policies to address this issue?” I think if they'd handled this issue in that (Open Source) way, the outcome would not have been the fiasco it's become.

    Posted on Thursday 12 July 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

June

  • 2018-06-21: The Everyday Sexism That I See In My Work

    My friend, colleague, and boss, Karen Sandler, yesterday tweeted about one of the unfortunately sexist incidents that she's faced in her life. This incident is a culmination of sexist incidents that Karen and I have seen since we started working together. I describe below how these events entice me to be complicit in sexist incidents, which I do my best to actively resist.

    Ultimately, this isn't about me, Karen, or about a single situation, but this is a great example of how sexist behaviors manipulate a situation and put successful women leaders in no-win situations. If you read this tweet (and additionally already knew about Software Freedom Conservancy where I work)…

    “#EveryDaySexism I'm Exec Director of a charity.  A senior tech exec is making his company's annual donation conditional on his speaking privately to a man who reports to me. I hope shining light on these situations erodes their power to build no-win situations for women leaders.” — Karen Sandler

    … you've already guessed that I'm the male employee that this executive meant. When I examine the situation, I can't think of a single reason this donor could want to speak to me that would not be more productive if he instead spoke with Karen. Yet, the executive, who was previously well briefed on the role changes at Conservancy, repeatedly insisted that the donation was gated on a conversation with me.

    Those who follow my and Karen's work know that I was Conservancy's first Executive Director. Now, I have a lower-ranking role since Karen came to Conservancy.

    Back in 2014, Karen and I collaboratively talked about what role would make sense for her and me — and we made a choice together. We briefly considered a co-Executive Director situation, but that arrangement has been tried elsewhere and is typically not successful in the long term. Karen is much better than me at the key jobs of a successful Executive Director. Karen and I agreed she was better for the job than me. We took it to Conservancy's Board of Directors, and they moved my leadership role at Conservancy to be honorary, and we named Karen the sole Executive Director. Yes, I'm still nebulously a leader in the Free Software community (which I'm of course glad about). But for Conservancy matters, and specifically donor relations and major decisions about the organization, Karen is in charge.

    Karen is an impressive leader and there is no one else that I'd want to follow in my software freedom activism work. She's the best Executive Director that Conservancy could possibly have — by far. Everyone in the community who works with us regularly knows this. Yet ever since Karen was named our Executive Director, she faces everyday sexist behavior, including people who seek to conscript me into participation in institutional sexism. As outlined above, I was initially Executive Director of Conservancy, and I was treated very differently than she is treated in similar situations, even though the organization has grown significantly under her leadership. More on that below, but first a few of the other everyday examples of sexism I've witnessed with Karen:

    Many times when we're at conferences together, men who meet us assume that Karen works for me until we explain our roles. This happens almost every time both Karen and I are at the same conference, which is at least a few times each year.

    Another time: a journalist wrote an article about some of “Bradley's work” at Conservancy. We pointed out to the journalist how strange it was that Karen was not mentioned in the article, and that it made it sound like I was the only person doing this work at our organization. He initially responded that because I was the “primary spokesperson”, it was natural to credit me and not her. In fact, Karen had more recently been giving multiple keynotes on the topic, and had more speaking engagements than I did that year. One of those keynotes was just weeks before the article, and it had been months since I'd given a talk or made any public statements. Fortunately, the journalist was willing to engage and discuss the importance of the issue (which was excellent), and the journalist even agreed it was a mistake, but nevertheless couldn't rewrite the article.

    Another time: we were leaked (reliable) information about a closed-door meeting where some industry leaders were discussing Conservancy and its work. The person who leaked us the information told us that multiple participants kept talking only about me, not Karen's work. When someone in the meeting said wait, isn't Karen Sandler the Executive Director?, our source (who was giving us a real-time report over IRC) reported that the (male) meeting coordinator literally said: Oh sure, Karen works there, but Bradley is their guiding light. Karen had been Executive Director for years at that point.

    I consistently say in talks, and in public conversations, that Karen is my boss. I literally use the word “boss”, so there is no confusion nor ambiguity. I did it this week at a talk. But instead of taking that as the fact that it is, many people make comments like well, Karen's not really your boss, right? that's just a thing you say. So, I'm saying unequivocally here (surely not for the last time): I report to Karen at Conservancy. She is in charge of Conservancy. She has the authority to fire me. (I hope she won't, of course :). She takes the views and opinions of our entire staff seriously, but she sets the agenda and makes the decisions about what work we do and how we do it. (It shows how bad sexism is in our culture that Karen and I often have to explain in intricate detail what it means for someone to be an Executive Director of an organization.)

    Interestingly but disturbingly, the actors here are not typically people who are actually sexist. They are rarely doing these things consciously. Rather, these incidents show how institutional sexism operates in practice. Every time I'm approached (which is often) with some subtle situation that makes Karen look like she's not really in charge, I'm given the opportunity to pump myself up, make myself look more important, and gain more credibility and power. It is clear to me that this comes at the expense of subtly denigrating Karen, and that the enticement is part of an institutionally sexist zero-sum game.

    These situations are no-win. I know that in the recent situation, the donation would be assured if I'd just agreed to a call right away without Karen's involvement. I didn't do it, because that approach would make me inherently complicit in institutional sexism. But, avoiding becoming “part of the problem” requires constant vigilance.

    These situations are sadly very common, particularly for women who are banging cracks into the glass ceiling. For my part, I'm glad to help where I can by telling my side of the story, because I think it's essential for men to assist and corroborate the fight against sexism in our industry without mansplaining or white-knighting. I hope other men in technology will join me and refuse to participate in or support behavior that seeks to erode women's well-earned power in our community. When you are told that a woman is in charge of a free software project, that a woman is the executive director of the organization, or that a woman is the chair of the board, take the fact at face value, treat that person as the one who is in charge of that endeavor, and don't (inadvertently or explicitly) undermine her authority.

    Posted on Thursday 21 June 2018 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2017

December

  • 2017-12-31: Supporting Conservancy Makes a Difference

    Earlier this year, in February, I wrote a blog post encouraging people to donate to where I work, Software Freedom Conservancy. I've not otherwise blogged too much this year. It's been a rough year for many reasons, and while I personally and Conservancy in general have accomplished some very important work this year, I'm reminded as always that more resources do make things easier.

    I understand the urge, given how bad the larger political crises have gotten, to want to give to charities other than those related to software freedom. There are important causes out there that have become more urgent this year. Here are three issues that have become shockingly more acute this year:

    • making sure the USA keeps its commitment to immigrants to allow them to make a new life here just like my own ancestors did,
    • assuring that the great national nature reserves are maintained and left pristine for generations to come,
    • assuring that we have zero tolerance for abusive behavior — particularly by those in power against people who come to them for help and job opportunities.
    These are just three of the many issues this year that I've seen get worse, not better. I am glad that I know and support people who work on these issues, and I urge everyone to work on these issues, too.

    Nevertheless, as I plan my primary donations this year, I'm again, as I always do, giving to the FSF and my own employer, Software Freedom Conservancy. The reason is simple: software freedom is still an essential cause and it is frankly one that most people don't understand (yet). I wrote almost two years ago about the phenomenon I dubbed Kuhn's Paradox. Simply put: it keeps getting more and more difficult to avoid proprietary software in a normal day's tasks, even while the number of lines of code licensed freely gets larger every day.

    As long as that paradox remains true, I see software freedom as urgent. I know that we're losing ground on so many other causes, too. But those of you who read my blog are some of the few people in the world that understand that software freedom is under threat and needs the urgent work that the very few software-freedom-related organizations, like the FSF and Software Freedom Conservancy, are doing. I hope you'll donate now to both of them. For my part, I gave $120 myself to FSF as part of the monthly Associate Membership program, and in a few minutes, I'm going to give $400 to Conservancy. I'll be frank: if you work in technology in an industrialized country, I'm quite sure you can afford that level of money, and I suspect those amounts are less than most of you spent on technology equipment and/or network connectivity charges this year. Make a difference for us and give to the cause of software freedom at least as much as you're giving to large technology companies.

    Finally, a good reason to give to smaller charities like FSF and Conservancy is that your donation makes a bigger difference. I do think bigger organizations, such as (to pick an example of an organization I used to give to) my local NPR station, do important work. However, I was listening this week to my local NPR station, and they said their goal for that day was to raise $50,000. For Conservancy, that's closer to the goal we have for an entire fundraising season, which for this year was $75,000. The thing is: NPR is an important part of USA society, but it's one that nearly everyone understands. So few people understand the threats looming from proprietary software, and they may not understand at all until it's too late — when all their devices are locked down, DRM is fully ubiquitous, and no one is allowed to tinker with the software on their devices and learn the wonderful art of computer programming. We are at real risk of reaching that dystopia before 90% of the world's population understands the threat!

    Thus, giving to organizations in the area of software freedom is just going to have a bigger and more immediate impact than more general causes that more easily connect with people. You're giving to prevent a future that not everyone understands yet, and making an impact on our work to help explain the dangers to the larger population.

    Posted on Sunday 31 December 2017 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

July

  • 2017-07-03: Goodbye To Bob Chassell

    It's fortunately more common now in Free Software communities to properly value contributions from non-developers. Historically, though, contributions from developers were often overvalued and contributions from others grossly undervalued. One person trailblazed as (likely) the earliest non-developer contributor to software freedom. His name was Robert J. Chassell — called Bob by his friends and colleagues. Over the weekend, our community lost Bob after a long battle with a degenerative illness.

    I am one of the few of my generation in the Free Software community who had the opportunity to know Bob. He was already semi-retired in the late 1990s when I first became involved with Free Software, but he enjoyed giving talks about Free Software and occasionally worked the FSF booths at events where I had begun to volunteer in 1997. He was the first person to offer mentorship to me as I began the long road of becoming a professional software freedom activist.

    I regularly credit Bob as the first Executive Director of the FSF. While he technically never held that title, he served as Treasurer for many years and was the de-facto non-technical manager at the FSF for its first decade of existence. One need only read the earliest issues of the GNU's Bulletin to see just a sampling of the plethora of contributions that Bob made to the FSF and Free Software generally.

    Bob's primary forte was as a writer and he came to Free Software as a technical writer. Having focused his career on documenting software and how it worked to help users make the most of it, software freedom — the right to improve and modify not only the software, but its documentation as well — was a moral belief that he held strongly. Bob was an early member of the privileged group that now encompasses most people in industrialized society: a non-developer who sees the value in computing and the improvement it can bring to life. However, Bob's realization that users like him (and not just developers) faced detrimental impact from proprietary software remains somewhat rare, even today. Thus, Bob died in a world where he was still unique among non-developers: fighting for software freedom as an essential right for all who use computers.

    Bob coined a phrase that I still love to this day. He said once that the job that we must do as activists was “preserve, protect and promote software freedom”. Only a skilled writer such as he could come up with such a perfectly concise alliteration that nevertheless rolls off the tongue without stuttering. Today, I pulled up an email I sent to Bob in November 2006 to tell him that (when Novell made their bizarre software-freedom-unfriendly patent deal with Microsoft) Novell had coopted his language in their FAQ on the matter. Bob wrote back: I am not surprised. You can bet everything [we've ever come up with] will be used against us. Bob's decade-old words seem prophetic when I look at the cooption we now face daily in Free Software. I acutely feel the loss of his insight and thoughtfulness.

    One of the saddest facts about Bob's illness, Progressive Supranuclear Palsy, is that his voice was quite literally lost many years before we lost him entirely. His illness made it nearly impossible for him to speak. In the late 1990s, I had the pleasure of regularly hearing Bob's voice, when I accompanied Bob to talks and speeches at various conferences. That included the wonderful highlight of his acceptance speech of GNU's 2001 achievement award from the USENIX Association. (I lament that no recordings of any of these talks seem to be available anywhere.) Throughout the early 2000s, I would speak to Bob on the telephone at least once a month; he would offer his sage advice and mentorship in those early years of my professional software freedom career. Losing his voice in our community has been a slow-moving tragedy as his illness has progressed. This weekend, that unique voice was lost to us forever.


    Bob, who was born in Bennington, VT on 22 August 1946, died in Great Barrington, MA on 30 June 2017. He is survived by his sister, Karen Ringwald, and several nieces and nephews and their families. A memorial service for Bob will take place at 11 am, July 26, 2017, at The First Congregational Church in Stockbridge, MA.

    In the meantime, the best I can suggest is that anyone who would like to posthumously get to know Bob please read (what I believe was) the favorite book that he wrote, An Introduction to Programming in Emacs Lisp. Bob was a huge advocate of non-developers learning “a little bit” of programming — just enough to make their lives easier when they used computers. He used GNU Emacs from its earliest versions and I recall he was absolutely giddy to discover new features, help document them, and teach them to new users. I hope those of you that both already love and use Emacs and those who don't will take a moment to read what Bob had to teach us about his favorite program.

    Posted on Monday 03 July 2017 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

April

  • 2017-04-25: Why GPL Compliance Education Materials Should Be Free as in Freedom

    [ This blog was crossposted on Software Freedom Conservancy's website. ]

    I am honored to be a co-author and editor-in-chief of the most comprehensive, detailed, and complete guide on matters related to compliance of copyleft software licenses such as the GPL. This book, Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide (which we often call the Copyleft Guide for short) is 155 pages filled with useful material to help everyone understand copyleft licenses for software, how they work, and how to comply with them properly. It is the only document to fully incorporate esoteric material such as the FSF's famous GPLv3 rationale documents directly alongside practical advice, such as the pristine example, which is the only freely published compliance analysis of a real product on the market. The document explains in great detail how that product manufacturer made good choices to comply with the GPL. The reader learns by both real-world example as well as abstract explanation.

    However, the most important fact about the Copyleft Guide is not its useful and engaging content. More importantly, the license of this book gives freedom to its readers in the same way the license of the copylefted software does. Specifically, we chose the Creative Commons Attribution Share-Alike 4.0 license (CC BY-SA) for this work. We believe that not just software, but any generally useful technical information that teaches people should be freely sharable and modifiable by the general public.

    The reasons these freedoms are necessary seem so obvious that I'm surprised I need to state them. Companies who want to build internal training courses on copyleft compliance for their employees need to modify the materials for that purpose. They then need to be able to freely distribute them to employees and contractors for maximum effect. Furthermore, like all documents and software alike, there are always “bugs”, which (in the case of written prose) usually means there are sections that fail to communicate to maximum effect. Those who find better ways to express the ideas need the ability to propose patches and write improvements. Perhaps most importantly, everyone who teaches should avoid NIH syndrome. Education and science work best when we borrow and share (with proper license-compliant attribution, of course!) the best material that others develop, and augment our works by incorporating them.

    These reasons are akin to those that led Richard M. Stallman to write his seminal essay, Why Software Should Be Free. Indeed, if you reread that essay now — as I just did — you'll see that much of the damage and many of the same problems to the advancement of software that RMS documents in that essay also occur in the world of tutorial documentation about FLOSS licensing. As too often happens in the Open Source community, though, folks seek ways to proprietarize, for profit, any copyrighted work that doesn't already have a copyleft license attached. In the field of copyleft compliance education, we see the same behavior: organizations who wish to control the dialogue and profit from selling compliance education seek to proprietarize the meta-material of compliance education, rather than sharing freely like the software itself. This yields an ironic exploitation, since the copyleft license documented therein exists as a strategy to assure the freedom to share knowledge. These educators tell their audiences with a straight face: Sure, the software is free as in freedom, but if you want to learn how its license works, you have to license our proprietary materials! This behavior uses legal controls to curtail the sharing of knowledge, limits the advancement and improvement of those tutorials, and emboldens silos of know-how that only wealthy corporations have the resources to access and afford. The educational dystopia that these organizations create is precisely what I sought to prevent by advocating for software freedom for so long.

    While Conservancy's primary job is providing non-profit infrastructure for Free Software projects, we also do a bit of license compliance work. But we practice what we preach: we release all the educational materials that we produce as part of the Copyleft Guide project under CC BY-SA. Other Open Source organizations are currently hypocrites on this point; they tout the values of openness and sharing of knowledge through software, but they take their tutorial materials and lock them up under proprietary licenses. I hereby publicly call on such organizations (including but not limited to the Linux Foundation) to license materials such as those under CC BY-SA.

    I did not make this public call for liberation of such materials without first trying friendly diplomacy. Conservancy has been in talks with individuals and staff who produce these materials for some time. We urged them to join the Free Software community and share their materials under free licenses. We even offered volunteer time to help them improve those materials if they would simply license them freely. After two years of that effort, it's now abundantly clear that public pressure is the only force that might work0. Ultimately, like all proprietary businesses, the training divisions of Linux Foundation and other entities in the compliance industrial complex (such as Black Duck) realize they can make much more revenue by making materials proprietary and choosing legal restrictions that forbid their students from sharing and improving the materials after they complete the course. While the reality of this impasse regarding freely licensing these materials is probably an obvious outcome, multiple sources inside these organizations have also confirmed for me that liberation of the materials for the good of the general public won't happen without a major paradigm shift — specifically because such educational freedom will reduce the revenue stream around those materials.

    Of course, I can attest first-hand that freely liberating tutorial materials curtails revenue. Karen Sandler and I have regularly taught courses on copyleft licensing based on the freely available materials for a few years — most recently in January 2017 at LinuxConf Australia and at OSCON in a few weeks. These conferences do kindly cover our travel expenses to attend and teach the tutorial, but compliance education is not a revenue stream for Conservancy. (By contrast, Linux Foundation generates US$3.8 million/year using proprietary training materials, per their 2015 Form 990, page 9, line 2c.) While, in an ideal world, we'd get revenue from education to fund our other important activities, we believe that there is value in doing this education as currently funded by our individual Supporters; these education efforts fit within our charitable mission to promote the public good. We furthermore don't believe that locking up the materials and refusing to share them with others fits a mission of software freedom, so we never considered such as a viable option. Finally, given the institutionally-backed FUD that we've continued to witness, we seek to draw specific attention to the fundamental difference in approach that Conservancy (as a charity) takes toward this compliance education work. (My recent talk on compliance covered on LWN includes some points on that matter, if you'd like further reading.)


    0One notable exception to these efforts was the success of my colleague Karen Sandler (and others) in convincing the OpenChain project to choose CC-0 licensing. However, OpenChain has released only 68 presentation slides and a 12-page specification, and some of the slides simply encourage people to go buy an LF proprietary training course!

    Posted on Tuesday 25 April 2017 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2017-02-13: The Dystopia of Minority Report Needs Proprietary Software

    I encourage all of you to either listen to or read the transcript of Terry Gross' Fresh Air interview with Joseph Turow about his discussion of his book “The Aisles Have Eyes: How Retailers Track Your Shopping, Strip Your Privacy, And Define Your Power”.

    Now, most of you who read my blog know the difference between proprietary and Free Software, and the difference between a network service and software that runs on your own device. I want all of you who have a good understanding of that to do a simple thought experiment:

    How many of the horrible things that Turow talks about can happen if there is no proprietary software on your IoT or mobile devices?

    AFAICT, other than the facial recognition in the store itself that he talked about in Russia, everything he talks about would be mitigated or eliminated completely as a threat if users could modify the software on their devices.

    Yes, universal software freedom will not solve all the world's problems. But it does solve a lot of them, at least with regard to the bad things the powerful want to do to us via technology.

    (BTW, the blog title is a reference to Philip K. Dick's Minority Report, which includes a scene about systems reading people's eyes to target-market to them. It's not the main theme of that particular book, though… Dick was always going off on tangents in his books.)

    Posted on Monday 13 February 2017 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2017-02-13: Supporting Conservancy Makes a Difference

    There are a lot of problems in our society, and particularly in the USA, right now, and plenty of charities who need our support. The reason I continue to focus my work on software freedom is simply because there are so few focused on the moral and ethical issues of computing. Open Source has reached its pinnacle as an industry fad, and with it, a watered-down message: “having some of the source code for some of your systems some of the time is so great, why would you need anything more?”. Universal software freedom is, however, further from reality than it was even a few years ago. At least a few of us, in my view, must focus on that cause.

    I did not post many blog posts about this in 2016. There was a reason for that — more than any other year, work demands at Conservancy have been constant and unrelenting. I enjoy my work, so I don't mind, but blogging becomes low priority when there is a constant backlog of urgent work to support Conservancy's mission and our member projects. It's not just Conservancy's mission, of course, it's my personal one as well.

    For our 2016 fundraiser, I wrote last year a blog post entitled “Do You Like What I Do For a Living?”. Last year, so many of you responded that it not only made it possible for me to continue that work for one more year, but we were able to add our colleague Brett Smith to our staff, which brought Conservancy to four full-time staff for the first time. We added a few member projects (and are moving that queue to add more in 2017), and sure enough — the new work plus the backlog of work waiting for another staffer filled Brett's queue just as mine, Karen's, and Tony's already were.

    The challenge now is sustaining this staffing level. Many of you came to our aid last year because we were on the brink of needing to reduce our efforts (and staffing) at Conservancy. Thanks to your overwhelming response, we not only endured, but we were able to add one additional person. As expected, though, needs of our projects increased throughout the year, and we again — all four of us full-time staff — must work to our limits to meet the needs of our projects.

    Charitable donations are a voluntary activity, and as such they have a special place in our society and culture. I've talked a lot about how Conservancy's Supporters give us a mandate to carry out our work. Those of you that chose to renew your Supporter donations or become new Supporters enable us to focus our full-time efforts on the work of Conservancy.

    On the signup and renewal page, you can read about some of our accomplishments in the last year (including my recent keynote at FOSDEM, an excerpt of which is included here). Our work does not follow fads, and it's not particularly glamorous, so only dedicated Supporters like you understand its value. We don't expect to get large grants to meet the unique needs of each of our member projects, and we certainly don't expect large companies to provide very much funding unless we cede control of the organization to their requests (as trade associations do). Even our most popular program, Outreachy, is attacked by a small group of people who don't want to see the status quo of privileged male domination of Open Source and Free Software disrupted.

    Supporter contributions are what make Conservancy possible. A year ago, you helped us build Conservancy as a donor-funded organization and stabilize our funding base. I now must ask that you make an annual commitment to renewal — either by renewing your contribution now or becoming a monthly supporter, or, if you're just learning about my work at Conservancy from this blog post, reading up on us and becoming a new Supporter.

    Years ago, when I was still only a part-time volunteer at Conservancy, someone who disliked our work told me that I had “invented a job of running Conservancy”. He meant it as an insult, but I take it as a compliment with pride. In fact, between me and my colleague (and our Executive Director) Karen Sandler, we've “invented” a total of four full-time jobs and one part-time one to advance software freedom. You helped us do that with your donations. If you donate again today, your donation will be matched to make the funds go further.

    Many have told me this year that they are driven to give to other excellent charities that fight racism, work for civil and immigration rights, and other causes that seem particularly urgent right now. As long as there is racism, sexism, murder, starvation, and governmental oppression in the world, I cannot argue that software freedom should be made a priority above all of those issues. However, even if everyone in our society focused on a single, solitary cause that we agreed was the top priority, it's unlikely we could make quicker progress. Meanwhile, if we all single-mindedly ignore less urgent issues, they will, in time, become so urgent they'll be insurmountable by the time we focus on them.

    Industrialized nations have moved almost fully to computer automation for nearly every daily task. If you question this fact, try to do your job for a day without using any software at all, or anyone using software on your behalf, and you'll probably find it impossible. Then, try to do your job using only Free Software for a day, and you'll find, as I have, that tasks that should take only a few minutes take hours when you avoid proprietary software, and some are just impossible. There are very few organizations that are considering the long-term implications of this slowly growing problem and making plans to build the foundations of a society that doesn't have that problem. Conservancy is one of those few, so I hope you'll realize the long-term value of our lifelong work to defend and expand software freedom and donate.

    Posted on Monday 13 February 2017 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2016

October

  • 2016-10-27: Conservancy's First GPL Enforcement Feedback Session

    [ This blog was crossposted on Software Freedom Conservancy's website. ]

    As I mentioned in an earlier blog post, I had the privilege of attending Embedded Linux Conference Europe (ELC EU) and the OpenWrt Summit in Berlin, Germany earlier this month. I gave a talk (for which the video is available below) at the OpenWrt Summit. I also had the opportunity to host the first of many conference sessions seeking feedback and input from the Linux developer community about Conservancy's GPL Compliance Project for Linux Developers.

    ELC EU has no “BoF Board” where you can post informal sessions. So, we scheduled the session by word of mouth over a lunch hour. We nevertheless got a good turnout (given that our session's main competition was eating food :) of about 15 people.

    Most notably and excitingly, Harald Welte, well-known Netfilter developer and leader of gpl-violations.org, was able to attend. Harald talked about his work with gpl-violations.org enforcing his own copyrights in Linux, and explained why this was important work for users of the violating devices. He also pointed out that some of the companies that were sued during his most active period of gpl-violations.org are now regular upstream contributors.

    Two people who work in the for-profit license compliance industry attended as well. Some of the discussion focused on the usual debates that charities involved in compliance commonly have with the for-profit compliance industry. Specifically, one of them asked how much compliance is enough, by percentage? I responded to his question on two axes. First, I addressed the axis of how many enforcement matters the GPL Compliance Program for Linux Developers handles, as a percentage of products violating the GPL. There are, at any given time, hundreds of documented GPL violating products, and our coalition works on only a tiny percentage of those per year. It's a sad fact that only that tiny percentage of the products that violate Linux are actually pursued to compliance.

    On the other axis, I discussed the percentage on a per-product basis. From that point of view, the question is really: Is there a ‘close enough to compliance’ that we can as a community accept and forget about the remainder? From my point of view, we frequently compromise anyway, since the GPL doesn't require someone to prepare code properly for upstream contribution. Thus, we all often accept compliance once someone completes the bare minimum of obligations literally written in the GPL, but gives us a source release that cannot easily be converted to an upstream contribution. So, from that point of view, we're often accepting a less-than-optimal outcome. The GPL by itself does not inspire upstreaming; the other collaboration techniques that are enabled in our community because of the GPL work to finish that job, and adherence to the Principles assures that process can work. Having many people who work with companies in different ways assures that as a larger community, we try all the different strategies to encourage participation, and inspire today's violators to become tomorrow's upstream contributors — as Harald mentioned has already often happened.

    That same axis does include one rare but important compliance problem: when a violator is particularly savvy, and refuses to release very specific parts of their Linux code (as VMware did), even though the license requires it. In those cases, we certainly cannot and should not accept anything less than required compliance — lest companies begin holding back all the most interesting parts of the code that the GPL requires them to produce. If that happened, the GPL would cease to function correctly for Linux.

    After that part of the discussion, we turned to considerations of corporate contributors, and how they responded to enforcement. Wolfram Sang, one of the developers in Conservancy's coalition, spoke up on this point. He expressed that the focus on for-profit company contributions, and the achievements of those companies, seemed unduly prioritized by some in the community. As an independent contractor and individual developer, Wolfram believes that contributions from people like him are essential to a diverse developer base, that their opinions should be taken into account, and their achievements respected.

    I found Wolfram's points particularly salient. My view is that Free Software development, including for Linux, succeeds because both powerful and wealthy entities and individuals contribute and collaborate together on equal footing. While companies have typically only enforced the GPL on their own copyrights for business reasons (e.g., there is at least one example of a major Linux-contributing company using GPL enforcement merely as a counter-punch in a patent lawsuit), individual developers who join Conservancy's coalition follow community principles and enforce to defend the rights of their users.

    At the end of the session, I asked two developers who hadn't spoken during the session, and who aren't members of Conservancy's coalition, their opinion on how enforcement was historically carried out by gpl-violations.org, and how it is currently carried out by Conservancy's GPL Compliance Program for Linux Developers. Both responded with a simple response (paraphrased): it seems like a good thing to do; keep doing it!

    I finished up the session by inviting everyone to join the principles-discuss list, where public discussion about GPL enforcement under the Principles has already begun. I also invited everyone to attend my talk, which took place an hour later at the OpenWrt Summit (co-located with ELC EU).

    In that talk, I spoke about a specific example of community success in GPL enforcement. As explained on the OpenWrt history page, OpenWrt was initially made possible thanks to GPL enforcement done by BusyBox and Linux contributors in a coalition together. (Those who want to hear more about the connection between GPL enforcement and OpenWrt can view my talk.)

    Since there weren't opportunities to promote impromptu sessions on-site, this event was a low-key (but still quite nice) start to Conservancy's planned year-long effort seeking feedback about GPL compliance and enforcement. Our next session is an official BoF session at Linux Plumbers Conference, scheduled for next Thursday 3 November at 18:00. It will be led by my colleagues Karen Sandler and Brett Smith.

    Posted on Thursday 27 October 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2016-09-21: Help Send Conservancy to Embedded Linux Conference Europe

    [ This blog was crossposted on Software Freedom Conservancy's website. ]

    Last month, Conservancy made a public commitment to attend Linux-related events to get feedback from developers about our work generally, and Conservancy's GPL Compliance Program for Linux Developers specifically. As always, even before that, we were regularly submitting talks to nearly any event with Linux in its name. As a small charity, we always request travel funding from the organizers, who are often quite gracious. As I mentioned in my blog posts about LCA 2016 and GUADEC 2016, the organizers covered my travel funding there, and recently Karen and I both received travel funding to speak at LCA 2017 and DebConf 2016, as well as many other events this year.

    Recently, I submitted talks for the CFPs of Linux Foundation's Embedded Linux Conference Europe (ELC EU) and the Prpl Foundation's OpenWRT Summit. The latter was accepted, and the folks at the Prpl Foundation graciously offered to fund my flight costs to speak at the OpenWRT Summit! I've never spoken at an OpenWRT event before and I'm looking forward to the opportunity of getting to know the OpenWRT and LEDE communities better by speaking at that event, and am excited to discuss Conservancy's work with them.

    OpenWRT Summit, while co-located, is a wholly separate event from LF's ELC EU. Unfortunately, I was not so lucky in my talk submissions there: my talk proposal has been waitlisted since July. I was hopeful after a talk cancellation in mid-August. (I know because the speaker who canceled suggested that I request his slot for my waitlisted talk.) Unfortunately, the LF staff informed me that they understandably filled his open slot with a sponsored session that came in.

    The good news is that my OpenWRT Summit flight is booked, and my friend (and Conservancy Board Member Emeritus) Loïc Dachary (who lives in Berlin) has agreed to let me crash with him for that week. So, I'll be in town for the entirety of ELC EU with almost no direct travel costs to Conservancy! The bad news is that it seems my ELC EU talk remains waitlisted. Therefore, I don't have a confirmed registration for the rest of ELC EU (beyond OpenWRT Summit).

    While it seems like a perfect and cost-effective opportunity to be able to attend both events, that seems harder than I thought! Once I confirmed my OpenWRT Summit travel arrangements, I asked for the hobbyist discount to register for ELC EU, but LF staff informed me yesterday that the hobbyist (as well as the other discounts) are sold out. The moral of the story is that logistics are just plain tough and time-consuming when you work for a charity with an extremely limited travel budget. ☻

    Yet, it seems a shame to waste the opportunity of being in town with so many Linux developers and not being able to see or talk to them, so Conservancy is asking for some help from you to fund the $680 of my registration costs for ELC EU. That's just about six new Conservancy supporter signups, so I hope we can get six new Supporters before Linux Foundation's ELC EU conference begins on October 10th. Either way, I look forward to seeing those developers who attend the co-located OpenWRT Summit! And, if the logistics work out — perhaps I'll see you at ELC EU as well!

    Posted on Wednesday 21 September 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2016-09-02: Two Blog Posts Disguised as Mailing List Posts

    There are plenty of mailing list threads to read, and I don't actually recommend the one that I'm talking about. I think it went on too long and was far too “ad hominem” rather than about real policy. Somewhere beneath the surface there was a policy discussion being shouted down; if you look closely, you can find it underneath.

    As he always does, Jon Corbet did an excellent job finding the real policy details in the “GPL defence” ksummit-discuss thread, and telling us all about it. I am very hard on tech journalism, but when it comes to reporting on Linux specifically, Jon and his colleagues at lwn.net have, for nearly two decades, consistently delivered real, detailed, and balanced (and not in the Fox News way) tech journalism.

    The main reason I made this blog post about it, though, is that I actually spent as much time on a few of my posts on the list as I would on any blog post, and I thought readers of my blog might want the content here. So I link to two posts in the thread that I encourage you to read. I also encourage you to read these two posts that my boss at my day job, Karen Sandler, made, which I think are very good as well.

    And, to quote the fictional Forrest Gump: That's all I have to say about that.

    Posted on Friday 02 September 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

August

  • 2016-08-16: My Keynote at GUADEC 2016

    Last Friday, I gave the first keynote at GUADEC 2016. I was delighted for the invitation from the GNOME Foundation to deliver this talk, which I entitled Confessions of a command line geek: why I don’t use GNOME but everyone else should.

    The Chaos Computer Club assisted the GUADEC organizers in recording the talks, so you can see a great recording of my talk here (and also, the slides). Whether the talk itself is great — that's for you to watch and judge, of course.

    The focus of this talk is why the GNOME desktop is such a central component for the future of software freedom. Too often, we assume that the advent of tablets and other mobile computing platforms means the laptop and desktop will disappear. And, maybe the desktop will disappear, but the laptop is going nowhere. And we need a good interface that gives software freedom to the people who use those laptops. GNOME is undoubtedly the best system we have for that task.

    There is competition. The competition is now, undeniably, Apple. Unlike Microsoft, who hitherto dominated desktops, Apple truly wants to make beautifully designed, and carefully crafted products that people will not just live with, but actually love. It's certainly possible to love something that harms you, and Apple is so carefully adept at creating products that not only refuse to give you software freedom, but go a step further: Apple regularly invents new ways to gain lock-down control and thwart modification by their customers.

    GUADEC 2016 trip sponsored by the GNOME Foundation!

    We have a great challenge before us, and my goal in the keynote was to express that the GNOME developers are best poised to fight that battle and that they should continue in earnest in their efforts, and to offer my help — in whatever way they need it — to make it happen. And, I offer this help even though I readily admit that I don't need GNOME for myself, but we as a community need it to advance software freedom.

    I hope you all enjoy the talk, and also check out Werner Koch's keynote, We want more centralization, do we?, which was also about a very important issue. (There was also an LWN article about Werner's keynote if you prefer reading to watching.) And, finally, I thank the GNOME Foundation for covering my travel expenses for this trip.

    Posted on Tuesday 16 August 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2016-08-13: Software Freedom Doesn't Kill People, Your Security Through Obscurity Kills People

    The time has come that I must speak out against the inappropriate rhetoric used by those who (ostensibly) advocate for FLOSS usage in automotive applications.

    There was a catalyst that convinced me to finally speak up. I heard a talk today from a company representative of a software supplier for the automotive industry. He said during his talk: putting GPLv3 software in cars will kill people and opening up the source code to cars will cause more harm than good. These statements are completely disingenuous. Most importantly, they ignore the fact that proprietary software in cars is at least equally, if not more, dangerous. At least one person has already been killed in a crash while using a proprietary software auto-control system. Volkswagen decided to take a different route; they decided to kill us all slowly (rather than quickly) by using proprietary software to lie about their emissions and illegally pollute our air.

    Meanwhile, there has been not a single example yet about use of GPLv3 software that has harmed anyone. If you have such an example, email it to me and I promise to add it right here to this blog post.

    So, to the auto industry folks and vendors who market to/for them: until you can prove that proprietary software assures safety in a way that FLOSS cannot, I will continue to tell you this: in the long and sad tradition of the Therac 25, your proprietary software has killed people, both quickly and slowly, and your attacks on GPLv3 and software freedom are not only unwarranted, they are clearly part of a political strategy to divert attention from your own industry's bad behavior and graft unfair blame onto FLOSS.

    As a side note, during the talk's Q&A session, I asked this company's representatives how they assure compliance with the GPLv2 — particularly their compliance with provision of scripts used to control compilation and installation of the executable, which are so often missing for many products, including vehicles. The official answer was: Oh, I don't know. Not only does this company publicly claim security through obscurity is a viable solution, and accuse copyleft advocates of endangering the public safety, they also seem to have not fully learned the lessons of making FLOSS license compliance a clear part of their workflow.

    This is, unfortunately, my general impression of the status of the automotive industry.

    Posted on Saturday 13 August 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2016-08-04: Why You Should Speak At & Attend LinuxConf Australia

    [ This blog was crossposted on Software Freedom Conservancy's website. ]

    Monday 1 February 2016 was the longest day of my life, but I don't mean that in the canonical, figurative, and usually negative sense of that phrase. I mean it literally and in a positive way. I woke up that morning in Amsterdam in the Netherlands — having the previous night taken an evening train from Brussels, Belgium with my friend and colleague Tom Marble. Tom and I had just spent the weekend at FOSDEM 2016, where he and I co-organize the Legal and Policy Issues DevRoom (with our mutual friends and colleagues, Richard Fontana and Karen M. Sandler).

    Tom and I headed over to AMS airport around 07:00 local time, found some breakfast and boarded our flights. Tom was homeward bound, but I was about to do the crazy thing that he'd done in the reverse a few years before: I was speaking at FOSDEM and LinuxConf Australia, back-to-back. In fact, because the airline fares were substantially cheaper this way, I didn't book a “round the world” flight, but instead two back-to-back round-trip tickets. I boarded the plane at AMS at 09:30 that morning (local time), and landed in my (new-ish) hometown of Portland, OR as afternoon there began. I went home, spent the afternoon with my wife, sister-in-law, and dogs, washed my laundry, and repacked my bag. My flight to LAX departed at 19:36 local time, a little after US/Pacific sunset.

    I crossed the Pacific ocean, the international dateline, left a day on deposit to pick up on the way back, and after 24 hours of almost literally chasing the sun, I arrived in Melbourne on the morning of Wednesday 3 February, rode a shuttle bus, dumped my bags at my room, and arrived just in time for the Wednesday afternoon tea break at LinuxConf Australia 2016 in Geelong.

    Nearly everyone who heard this story — or saw me while it was happening — asked me the same question: Why are you doing this?. The five to six people packed in with me in my coach section on the LAX→SYD leg are probably still asking this, because I had an allergic attack of some sort most of the flight and couldn't stop coughing, even with two full bags of Fisherman's Friends over those 15 hours.

    But, nevertheless, I gave a simple answer to everyone who questioned my crazy BRU→AMS→PDX→LAX→SYD→MEL itinerary: FOSDEM and LinuxConf AU are two of the most important events on the Free Software annual calendar. There's just no question. I'll write more about FOSDEM sometime soon, but the rest of this post, I'll dedicate to LinuxConf Australia (LCA).

    One of my biggest regrets in Free Software is that I was once — and you'll be surprised by this given my story above — a bit squeamish about the nearly 15 hour flight to get from the USA to Australia, and therefore I didn't attend LCA until 2015. LCA began way back in 1999. Keep in mind that, other than FOSDEM, no major, community-organized events have survived from that time. But LCA has the culture and mindset of the kinds of conferences that our community made in 1999.

    LCA is community organized and operated. Groups of volunteers each year plan the event. In the tradition of science fiction conventions and other hobbyist activities, groups bid for the conference and offer their time and effort to make the conference a success. They have an annual hand-off meeting to be sure the organization lessons are passed from one committee to the next, and some volunteers even repeat their involvement year after year. For organizational structure, they rely on a non-profit organization, Linux Australia, to assist with handling the funds and providing infrastructure (just like Conservancy does for our member projects and their conferences!).

    I believe fully that the success of software freedom and GNU/Linux in particular has not primarily come from companies that allow developers to spend some of their time coding on upstream. Sure, many Free Software projects couldn't survive without that component, but what really makes GNU/Linux, or any Free Software project, truly special is that there's a community of users and developers who use, improve, and learn about the software because it excites and interests them. LCA is one of the few events specifically designed to invite that sort of person to attend, and it has for almost an entire generation stood in stark contrast to the highly corporate, for-profit/trade-association events that slowly took over our community in the years that followed LCA's founding. (Remember all those years of LinuxWorld Expo? I wasn't even sad when IDG stopped running it!)

    Speaking particularly of earlier this year, LCA 2016 in Geelong, Australia was a particularly profound event for me. LCA is one of the few events that accepts my rather political talks about what's happening in Open Source and Free Software, so I gave a talk on Friday 5 February 2016 entitled Copyleft For the Next Decade: A Comprehensive Plan, which was recorded, so you can watch it, or read the LWN article about it. I do warn everyone that the jokes did not go over well (mine never do), so after I finished, I was feeling a bit down that I hadn't made the talk entertaining enough. But then, something amazing happened: people started walking up to me and telling me how important my message was. One individual even came up and told me that he was excited enough that he'd like to match any donation that Software Freedom Conservancy received during LCA 2016. Since it was the last day of the event, I quickly went to one of the organizers, Kathy Reid, and asked if they would announce this match during the closing ceremonies; she agreed. In a matter of just an hour or two, I'd gone from believing my talk had fallen flat to realizing that — regardless of whether I'd presented well — the concepts I discussed had connected with people.

    Then, I sat down in the closing session. I started to tear up slightly when the organizers announced the donation match. Within 90 seconds, though, that turned to full tears of joy when the incoming President of Linux Australia, Hugh Blemings, came on stage and said:

    [I'll start with] a Software Freedom Conservancy thing, as it turns out. … I can tell that most of you weren't at Bradley's talk earlier on today, but if there is one talk I'd encourage you to watch on the playback later it would be that one. There's a very very important message in there and something to take away for all of us. On behalf of the Council I'd like to announce … that we're actually in the process of making a significant donation from Linux Australia to Software Freedom Conservancy as well. I urge all of you to consider contributing individual as well, and there is much left for us to be done as a community on that front.

    I hope that this post helps organizers of events like LCA fully understand how much something like this means to those of us who run small charities — and not just with regard to the financial contributions. Knowing that the organizers of community events feel so strongly positive about our work really keeps us going. We work hard and spend much time at Conservancy to serve the Open Source and Free Software community, and knowing the work is appreciated inspires us to keep working. Furthermore, we know that without these events, it's much tougher for us to reach others with our message of software freedom. So, for us, the feeling is mutual: I'm delighted that the Linux Australia and LCA folks feel so positively about Conservancy, and I now look forward to another 15 hour flight for the next LCA.

    And, on that note, I chose a strategic time to post this story. On Friday 5 August 2016, the CFP for LCA 2017 closes. So, now is the time for all of you to submit a talk. If you regularly speak at Open Source and Free Software events, or have been considering it, this event really needs to be on your calendar. I look forward to seeing all of you in Hobart this January.

    Posted on Thursday 04 August 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2016-05-13: That “My Ears are Burning” Thing Is Definitely Apocryphal

    I've posted in the past about the Oracle vs. Google case. I'm for the moment sticking to my habit of only commenting when there is a clear court decision. Having been through litigation as the 30(b)(6) witness for Conservancy, I'm used to court testimony and why it often doesn't really matter in the long run. So much gets said by both parties in a court case that it's somewhat pointless to begin analyzing each individual move, unless it's for entertainment purposes only. (It's certainly as entertaining as most TV dramas, really, but I hope folks who are watching step-by-step admit to themselves that they're just engaged in entertainment, not actual work. :)

    I saw a lot go by today with various people as witnesses in the case. About the only part that caught my attention was that Classpath was mentioned over and over again. But that's not for any real salient reason, only because I remember so distinctly, sitting in a little restaurant in New Orleans with RMS and Paul Fisher, talking about how we should name this yet-to-be-launched GNU project “$CLASSPATH”. My idea was that it was a shell variable that would expand to /usr/lib/java, so, in my estimation, it was a way to name the project “User Libraries for Java” without having to say the words. (For those of you that were still children in the 1990s, trademark aggression by Sun at the time on their word mark for “Java” was fierce; it was worse than the whole problem with the Unix trademark, which led in turn to the GNU name.)

    But today, as I saw people all over the Internet quoting judges, lawyers and witnesses saying the word “Classpath” over and over again, it felt a bit weird to think that, almost 20 years ago sitting in that restaurant, I could have said something other than Classpath and the key word in Court today might well have been whatever I'd said. Court cases are, as I said, dramatic, and as such, it felt a little like having my own name mentioned over and over again on the TV news or something. Indeed, I felt today like I had some really pointless, one-time-use superpower that I didn't know I had at the time. I now further have this feeling of: “darn, if I knew that was the one thing I did that would catch on this much, I'd have tried to do or say something more interesting”.

    Naming new things, particularly those that have to replace other things that are non-Free, is really difficult, and, at least speaking for myself, I definitely can't tell when I suggest a name whether it is any good or not. I actually named another project, years later, that could theoretically get mentioned in this case, Replicant. At that time, I thought Replicant was a much more creative name than Classpath. When I named Classpath, I felt it was a somewhat obvious corollary to the “GNU's Not Unix” line of thinking. I also recall distinctly that I really thought the name lost all its cleverness when the $ and the all-caps was dropped, but RMS and others insisted on that :).

    Anyway, my final message today is to the court transcribers. I know from chatting with the court transcribers during my depositions in Conservancy's GPL enforcement cases that technical terminology is really a pain. I hope that the term I coined that got bandied about so much in today's testimony was not annoying to you all. Really, no one thinks about the transcribers in all this. If we're going to have lawsuits about this stuff, we should name stuff with the forethought of making their lives easier when the litigation begins. :)

    Posted on Friday 13 May 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2016-03-13: MythWeb Confusing Error Message

    I'm finally configuring Kodi properly to watch over-the-air channels using this USB ATSC / DVB-T tuner card from Thinkpenguin. I hate taking time away, even on the weekends, from the urgent Conservancy matters, but I've been doing by-hand recordings using VLC for my wife when she's at work, and I just need to present a good solution at home to showcase software freedom here.

    So, I installed Debian testing to get a newer Kodi. I did discover this bug after it had already been closed, but had to pull util-linux out of unstable for the moment since it hadn't yet migrated to testing.
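    In case it helps anyone in a similar spot, here's a minimal sketch of pulling a single package from unstable on a testing system (this assumes you've already added an unstable entry to your APT sources; adjust for your own setup):

      # Hypothetical sketch: temporarily install util-linux from unstable on a
      # Debian testing machine.  Requires an unstable entry
      # (deb <your-mirror> unstable main) in /etc/apt/sources.list or sources.list.d/.
      apt-get update
      apt-get install -t unstable util-linux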

    Kodi works fine after installing it via apt, and since VDR is packaged for Debian, I tried getting VDR working instead of MythTV at first. I almost had it working but then I got this error:

    VNSI-Error: cxSocket::read: read() error at 0/4
    when trying to use kodi-pvr-vdr-vnsi (1.11.15-1) with vdr-plugin-vnsiserver (1:1.3.1) combined with vdr (2.2.0-5) and kodi (16.0+dfsg1-1). I tried briefly using the upstream plugins for both VDR and Kodi just to be sure I'd produce the same error, and I got the same one, so I started by reporting this on the Kodi VDR backend forum. If I don't get a response there in a few weeks, I'll file it as a bug against kodi-pvr-vdr-vnsi instead.

    For now, I gave up on VDR (which I rather liked — a very old-school, Unix-server sort of way to build a PVR) and tried MythTV instead, since it's also GPL'd. Since there weren't Debian packages, I followed this building-from-source tutorial on MythTV's website.

    I didn't think I'd actually need to install MythWeb at first, because I am using Kodi primarily and am only using the MythTV backend to handle the tuner card. It was pretty odd that you can only configure MythTV via a Qt program called mythtv-setup, but OK, I did that, and it was relatively straightforward. Once I did, playback was working reasonably well using Kodi's MythTV plugin. (BTW, if you end up doing this, it's fine to test Kodi on its own in a window with a desktop environment running, but I had playback speed issues in that usage; they went away fully when I switched to a simple .xinitrc that just called kodi-standalone.)
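    For reference, a minimal sketch of the sort of .xinitrc I mean (assuming the kodi-standalone wrapper shipped by the Debian kodi package is on your PATH):

      # ~/.xinitrc — start Kodi as the only X client, with no desktop
      # environment running; this is what made the playback speed issues go away for me.
      exec kodi-standalone

    Then just run startx from a console login.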

    The only problem left was that I noticed that I was not getting Event Information Table (EIT) data from the card to add to the Electronic Program Guide (EPG). Then I discovered that one must install MythWeb for the EIT data to make it through via the plugin for EPG in Kodi. Seems weird to me, but ok, I went to install MythWeb.

    Oddly, this is where I had the most trouble, constantly receiving this error message:

    PHP Fatal error: Call to a member function query_col() on null in /path/to/mythweb/modules/backend_log/init.php on line 15

    The top net.search hit is likely to be this bug ticket, which points out that this is a horrible form of an error message that tells you the equivalent of “something is strange about the database configuration, but I'm not sure what”.

    Indeed, I tried a litany of items which I found through lots of net.searching. Unfortunately I got a bit frantic, so I'm not sure which one solved my problem (I think it was actually, quite obviously, multiple ones :). I'm going to list them all here, in one place, so that future searchers for this problem will find all of them together:

    • Make sure the PHP load_path is coming through properly and includes the MythTV backend directory, ala:
      setenv include_path "/path/to/mythtv/share/mythtv/bindings/php/"
    • Make sure the mythtv user has a password set properly and is authorized in the database users table to have access from localhost, ::1, and 127.*, as it's sometimes unclear which way Apache might connect (see the sketch after this list).
    • In Debian testing, make sure PHP 7 is definitely not in use by MythWeb (I am guessing it is incompatible), and make sure the right PHP 5 MySQL modules are installed. The MythWeb installation instructions do say:
      apache2-mpm-prefork php5 php5-mysql libhttp-date-perl
      And at one point, I somehow had php5-mysql and libapache2-mod-php5 installed without having php5 itself installed, which I think may have caused a problem.
    • Also, read this thread from the MythTV mailing list, as it is the most comprehensive discussion of this error.
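
    Regarding the database grants mentioned above, here is a sketch of one way to set them up; it assumes MythTV's default database name (mythconverg) and uses a placeholder password, so adjust for your own setup (and note that newer MySQL/MariaDB versions may require a separate CREATE USER first):

      # run as the MySQL root user; 'changeme' is a placeholder password
      mysql -u root -p -e "
        GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'localhost' IDENTIFIED BY 'changeme';
        GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'127.0.0.1' IDENTIFIED BY 'changeme';
        GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'::1' IDENTIFIED BY 'changeme';
        FLUSH PRIVILEGES;"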

    I did have to update the channel lineup with mythfilldatabase --dd-grab-all.

    Posted on Sunday 13 March 2016 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

February

  • 2016-02-29: The VMware Hearing and the Long Road Ahead

    [ This blog was crossposted on Software Freedom Conservancy's website. ]

    Last Thursday, Christoph Hellwig and his legal counsel attended a hearing in Hellwig's VMware case that Conservancy currently funds. Harald Welte, world famous for his GPL enforcement work in the early 2000s, also attended as an observer and wrote an excellent summary. I'd like to highlight a few parts of his summary, in the context of Conservancy's past litigation experience regarding the GPL.

    First of all, in great contrast to the cases here in the USA, the Court fully acknowledged the level of public interest in and importance of the case. Judges who have presided over Conservancy's GPL enforcement cases in USA federal court take all matters before them quite seriously. However, in our hearings, the federal judges preferred to ignore entirely the public policy implications regarding copyleft; they focused only on the copyright infringement and the claims related to it. Usually, appeals courts in the USA are the first to broadly consider larger policy questions. There are definitely some advantages to the first Court in a case showing interest in the public policy concerns.

    However, beyond this initial point, I was struck that Harald's summary sounded so much like the many hearings I attended in the late 2000's and early 2010's regarding Conservancy's BusyBox cases. From his description, it sounds to me like judges around the world aren't all that different: they like to ask leading questions and speculate from the bench. It's their job to dig deep into an issue, separate away irrelevancies, and assure that the stark truth of the matter presents itself before the Court for consideration. In an adversarial process like this one, that means impartially asking both sides plenty of tough questions.

    That process can be a rollercoaster for anyone who feels, as we do, that the Court will rule on the specific legal issues around which we have built our community. We should of course not fear the hard questions of judges; it's their job to ask us the hard questions, and it's our job to answer them as best we can. So often, here in the USA, we've listened to Supreme Court arguments (for which the audio is released publicly), and every pundit has speculated incorrectly about how the justices would rule based on their questions. Sometimes, a judge asks a clarification question regarding a matter they already understand to support a specific opinion and help their colleagues on the bench see the same issue. Other times, a judge asks a question for the usual reason: because the judge is truly confused and unsure. Sometimes, particularly in our past BusyBox cases, I've seen the judge ask the opposing counsel a question to expose some bit of bluster that counsel sought to pass off as settled law. You never really know why a judge asked a specific question until you see the ruling. At this point in the VMware case, nothing has been decided; this is just the next step forward in a long process. We enforced here in the USA for almost five years, we've been in litigation in Germany for about one year, and the earliest the German case can possibly resolve is this May.

    Kierkegaard wrote that it is perfectly true, as the philosophers say, that life must be understood backwards. But they forget the other proposition, that it must be lived forwards. Court cases are a prime example of this phenomenon. We know it is gut-wrenching for our Supporters to watch every twist and turn in the case. It has taken so long for us to reach the point where the question of a combined work of software under the GPL is before a Court; now that it is, we all want this part to finish quickly. We remain very grateful to all our Supporters who stick with us, and to the new ones who will join today. That funding makes it possible for Conservancy to pursue this and other matters to ensure strong copyleft for our future, and to handle every other detail that our member projects need. The one certainty is that our best chance of success is working hard for plenty of hours, and we appreciate that all of you continue to donate so that the hard work can continue. We also thank the Linux developers in Germany, like Harald, who are supporting us locally and are able to attend in person and report back.

    Posted on Monday 29 February 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2016-02-19: Kuhn's Paradox

    I've been making the following social observation frequently in my talks and presentations for the last two years. While I suppose it's rather forward of me to do so, I've decided to name this principle:

    Kuhn's Paradox

    For some time now, this paradoxical principle appears to hold: each day, more lines of freely licensed code exist than ever before in human history; yet, it also becomes increasingly more difficult each day for users to successfully avoid proprietary software while completing their necessary work on a computer.

    Kuhn's View On Motivations & Causes of Kuhn's Paradox

    I believe this paradox is primarily driven by the cooption of software freedom by companies that ostensibly support Open Source, but have the (now extremely popular) open source almost everything philosophy.

    For certain areas of software endeavor, companies dedicate enormous resources toward the authorship of new Free Software for particular narrow tasks. Often, these core systems provide underpinnings and fuel the growth of proprietary systems built on top of them. An obvious example here is OpenStack: a fully Free Software platform, but most deployments of OpenStack add proprietary features not available from a pure upstream OpenStack installation.

    Meanwhile, in other areas, projects struggle for meager resources to compete with the largest proprietary behemoths. Large user-facing, server-based applications of the Service as a Software Substitute variety, along with massive social media sites like Twitter and Facebook that actively work against federated social network systems, are the two classes of most difficult culprits on this point. Even worse, most traditional web sites have now become a mix of mundane content (i.e., HTML) and proprietary Javascript programs, which are installed on-demand into the users' browser all day long, even while most of those servers run a primarily Free Software operating system.

    Finally, much (possibly a majority) of computer use in industrialized society is via hand-held mobile devices (usually inaccurately described as “mobile phones”). While some of these devices have Free Software operating systems (i.e., Android/Linux), nearly all the applications for all of these devices are proprietary software.

    The explosion of for-profit interest in “Open Source” over the last decade has led us to this paradoxical problem, which increases daily — because the gap between “software under a license that respects my rights to copy, share, and modify” and “software that's essential for my daily activities” grows linearly wider with each sunset.

    I propose herein no panacea; I wish I had one to offer. However, I believe the problem is exacerbated by our community's tendency to ignore this paradox, and its pace even accelerates due to many developers' belief that having a job writing any old Free Software replaces the need for volunteer labor to author more strategic code that advances software freedom.

    Linksvayer's View On Motivations & Causes of Kuhn's Paradox

    Linksvayer agrees the paradox is observable, but disagrees with me regarding the primary motivations and causes. Linksvayer claims the following are the primary motivations and causes of Kuhn's paradox:

    1. Software is becoming harder to avoid.
    2. Proprietary vendors outcompete relatively decentralized free software efforts to put software in hands of people.
    3. The latter may be increasing or decreasing. But even if the latter is decreasing, the former trumps it.

      Note the competition includes competition to control policy, particularly public policy. Unfortunately most Free Software activists appear to be focused on individual (thus dwarfish) heroism and insider politics rather than collective action.

    I rewrote Linksvayer's text slightly from a comment made to this blog post in order to include it in the main text, as I find his arguments regarding causes just as plausible as mine.

    As an apologia, in case Linksvayer means that I spend too much time on insider politics: I believe that the cooption I discussed above means that the seemingly broad base of support we could use for the collective action Linksvayer recommends is actually tiny. In other words, most people involved with Free Software development now are not Free Software activists. (Compare that to 20 years ago, when you rarely found a Free Software developer who wasn't also a Free Software activist.) Therefore, one central part of my insider-politics work is to recruit moderate Open Source enthusiasts to become radical Free Software activists.

    Posted on Friday 19 February 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2016-01-25: Key Charities That Advance Software Freedom Are Worthy of Your Urgent Support

    [ This blog was crossposted on Software Freedom Conservancy's website. ]

    I've had the pleasure and the privilege, for the last 20 years, to be either a volunteer or employee of the two most important organizations for the advance of software freedom and users' rights to copy, share, modify and redistribute software. In 1996, I began volunteering for the Free Software Foundation (FSF) and worked as its Executive Director from 2001–2005. I have continued as a volunteer for the FSF since then, and I now serve as a volunteer on FSF's Board of Directors. I was also one of the first volunteers for Software Freedom Conservancy when we founded it in 2006, and I was the primary person doing the work of the organization as a volunteer from 2006–2010. I've enjoyed having a day job as a Conservancy employee since 2011.

    These two organizations have been the center of my life's work. Between them, I typically spend 50–80 hours every single week doing a mix of paid and volunteer work. Both my hobby and my career are advancing software freedom.

    I choose to give my time and work to these organizations because they provide the infrastructure that makes my work possible. The Free Software community has shown that the work of many individuals, who care deeply about a cause but cooperate together toward a common goal, has an impact greater than any individual can ever have working separately. The same is often true for cooperating organizations: charities, like Conservancy and the FSF, that work together with each other amplify their impact beyond the expected.

    Both Conservancy and the FSF pursue specific and differing approaches and methods to the advancement of software freedom. The FSF is an advocacy organization that raises awareness about key issues that impact the future of users' freedoms and rights, and finds volunteers and pays staff to advocate about these issues. Conservancy is a fiscal sponsor, which means one of our key activities is operational work, meeting the logistical and organizational needs of volunteers so they can focus on the production of great Free Software and Free Documentation. Meanwhile, both Conservancy and the FSF dedicate themselves to sponsoring software projects: the FSF through the GNU project, and Conservancy through its member projects. And, most importantly, both charities stand up for the rights of users by enforcing and defending copyleft licenses such as the GNU GPL.

    Conservancy and the FSF show in concrete terms that two charities can work together to increase their impact. Last year, our organizations collaborated on many projects: we jointly responded to the proposed FCC rule changes for wireless devices, jointly handled a GPL enforcement action against Canonical, Ltd., published the principles of community-oriented GPL enforcement, and continued our collaboration on copyleft.org. We're already discussing lots of ways that the two organizations can work together in 2016!

    I'm proud to give so much of my time and energy to both these excellent organizations. But, I also give my money as well: I was the first person in history to become an Associate Member of the FSF (back in November 2002), and I have gladly paid my monthly dues since then. Today, I also signed up as an annual Supporter of Conservancy, because I want to ensure that Conservancy meets its current pledge match — the next 215 Supporters who sign up before January 31st will have their donations doubled via the match.

    For just US$20 each month, you can make sure the excellent work of both these organizations continues. This is quite a deal: if you are an employed, university-educated professional living in the industrialized world, US$20 is probably the same amount you'd easily spend on a meal at a restaurant or other luxuries. Isn't it an even better luxury to know that these two organizations can employ a year's worth of effort standing up for your software freedom in 2016? You can make a real difference by making your charitable contribution to these two organizations today:

    Please don't wait: both fundraising deadlines are just six days away!

    Posted on Monday 25 January 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2016-01-05: Sun, Oracle, Android, Google and JDK Copyleft FUD

    I have probably spent more time dealing with the implications and real-world scenarios of copyleft in the embedded device space than anyone. I'm one of a very few people charged with the task of enforcing the GPL for Linux, and it's been well-known for a decade that GPL violations on Linux occur most often in embedded devices such as mobile hand-held computers (aka “phones”) and other such devices.

    This experience has left me wondering if I should laugh or cry at the news coverage and pundit FUD that has quickly come forth from Google's decision to move from the Apache-licensed Java implementation to the JDK available from Oracle.

    As some smart commenters like Bob Lee have said, there is already at least one essential part of Android, namely Linux itself, licensed as pure GPL. I find it both amusing and maddening that respondents use widespread GPL violation by chip manufacturers as some sort of justification for why Linux is acceptable, but Oracle's JDK is not. Eventually, (slowly but surely) GPL enforcement will adjudicate the widespread problem of poor Linux license compliance — one way or the other. But, that issue is beside the point when we talk of the licenses of code running in userspace. The real issue with that is two-fold.

    First, if you think the ecosystem shall collapse because “pure GPL has moved up the Android stack”, and “it will soon virally infect everyone” with copyleft (as you anti-copyleft folks love to say), your fears are just unfounded. Those of us who worked in the early days of reimplementing Java in copyleft communities thought carefully about just this situation. At the time, remember, Sun's Java was completely proprietary, and our goal was to wean developers off Sun's implementation to use a Free Software one. We knew, just as the early GNU developers knew with libc, that a fully copylefted implementation would gain few adopters. So, the earliest copyleft versions of Java were under an extremely weak copyleft called the “GPL plus the Classpath exception”. Personally, I was involved as a volunteer in the early days of the Classpath community; I helped name the project and design the Classpath exception. (At the time, I proposed we call it the “Least GPL” since the Classpath exception carves so many holes in strong copyleft that it's less of a copyleft than even the Lesser GPL and probably the Mozilla Public License, too!)

    But, what does the Classpath exception from GNU's implementation have to do with Oracle's JDK? Well, Sun, before Oracle's acquisition, sought to collaborate with the Classpath community. Those of us who helped start Classpath were excited to see the original proprietary vendor seek to release their own formerly proprietary code and want to merge some of it with the community that had originally formed to replace their code with a liberated alternative.

    Sun thus released much of the JDK under “GPL with Classpath exception”. The reasons were clearly explained (URL linked is an archived version of what once appeared on Sun's website) on their collaboration website for all to see. You see the outcome of that in many files in the now-infamous commit from last week. I strongly suspect Google's lawyers vetted what was merged to make sure that the Android Java SDK fully gets the appropriate advantages of the Classpath exception.

    So, how is incorporating Oracle's GPL-plus-Classpath-exception'd JDK different from having an Apache-licensed Java userspace? It's not that much different! Android redistributors already have strong copyleft obligations in kernel space, and, remember that WebKit is LGPL'd, so there are already weak copyleft compliance obligations floating around Android, too. So, if a redistributor is already meeting those, it's not much more work to meet the even weaker requirements now added to the incorporated JDK code. I urge you to ask anyone who says that this change will have any serious impact on licensing obligations and analysis for Android redistributors to please prove their claim with an actual example of a piece of code added in that commit under pure GPL that will combine in some way with Android userspace applications. I admit I haven't dug through the commit to prove the negative, but I'd be surprised if some Google engineers didn't do that work before the commit happened.

    You may now ask yourself if there is anything of note here at all. There's certainly less here than most are saying about it. In fact, a Java industry analyst (with more than a decade of experience in the area) told me that he believed the decision was primarily technical. Authors of userspace applications on Android (apparently) seek a newer Java language implementation and given that there was a reasonably licensed Free Software one available, Google made a technical switch to the superior codebase, as it gives API users technically what they want while also reducing maintenance burden. This seems very reasonable. While it's less shocking than what the pundits say, technical reasons probably were the primary impetus.

    So, for Android redistributors, are there any actual licensing risks to this change? The answer there is undoubtedly yes, but the situation is quite nuanced, and again, the problem is not as bad as the anti-copyleft crowd says. The Classpath exception grants very wide permissions. Nevertheless, some basic copyleft obligations can remain, albeit in a very weak-copyleft manner. It is possible to violate that weak copyleft, particularly if you don't understand the licensing of all third-party materials combined with the JDK. Still, since you must comply with Linux's license to redistribute Android, complying with the Classpath exception'd stuff will require only a simple afterthought.

    Meanwhile, Sun's (now Oracle's) JDK, is likely nearly 100% copyright-held by Oracle. I've written before about the dangers of the consolidation of a copylefted codebase with a single for-profit, commercial entity. I've even pointed out that Oracle specifically is very dangerous in its methods of using copyleft as an aggression.

    Copyleft is a tool, not a moral principle. Tools can be used incorrectly with deleterious effect. As an analogy, I'm constantly bending paper clips to press those little buttons on electronic devices, and afterwards, the tool doesn't do what it's intended for (hold papers together); it's bent out of shape and only good for the new, dubious purpose, better served by a different tool. (But, the paper clip was already right there on my desk, you see…)

    Similarly, while organizations like Conservancy use copyleft in a principled way to fight for software freedom, others use it in a manipulative, drafter-unintended way to extract revenue with no intention of standing up for users' rights. We already know Oracle likes to use the GPL this way, and I really doubt that Oracle will sign a pledge to follow Conservancy's and FSF's principles of GPL enforcement. Thus, we should expect Oracle to aggressively enforce against downstream Android manufacturers who fail to comply with “GPL plus Classpath exception”. Of course, Conservancy's GPL Compliance Project for Linux developers may also enforce, if the violation extends to Linux as well. But, Conservancy will follow those principles and prioritize compliance and community goodwill. Oracle won't. But saying that this means Oracle has “its hooks” in Android makes no sense. They have as many hooks as any of the other thousands of copyright holders of copylefted material in Android. If anything, this is just another indication that we need more of those copyright holders to agree with the principles, and we should shun codebases where only one for-profit company holds copyright.

    Thus, my conclusion about this situation is quite different from that of the pundits and the link-bait news articles. I speculate that Google weighed a technical decision against its own copyleft compliance processes, determined that Google would succeed in its compliance efforts on Android and thus won't face compliance problems, and can therefore easily benefit technically from the better code. However, for those many downstream redistributors of Android who fail at license compliance already, the ironic outcome is that you may finally find out how friendly and reasonable Conservancy's Linux GPL enforcement truly is, once you compare it with GPL enforcement from a company like Oracle, who holds avarice, not software freedom, as its primary moral principle.

    Finally, the bigger problem in Android with respect to software freedom is that the GPL is widely violated on Linux in Android devices. If this change causes Android redistributors to reevaluate their willful ignorance of the GPL's requirements, then some good may come of it all, despite Oracle's expected nastiness.

    Update on 2016-01-06: I specifically didn't mention the lawsuit above because I don't actually think this whole situation has much to do with the lawsuit, but if folks do want to read my analysis of the Oracle v. Google lawsuit, these are my posts on it in reverse chronological order: [0], [1], [2], [3]. I figured I should add these links given that all the discussion on at least one forum discussing this blog post is about the lawsuit.

    Posted on Tuesday 05 January 2016 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2015

December

  • 2015-12-30: A Requiem for Ian Murdock

    [ This post was crossposted on Conservancy's website. ]

    I first met Ian Murdock gathered around a table at some bar, somewhere, after some conference in the late 1990s. Progeny Linux Systems' founding was soon to be announced, and Ian had invited a group from the Debian BoF along to hear about “something interesting”; the post-BoF meetup was actually a briefing on his plans for Progeny.

    Many of the details (such as which conference and where on the planet it was), I've forgotten, but I've never forgotten Ian gathering us around, bending my ear to hear in the loud bar, and getting one of my first insider scoops on something big that was about to happen in Free Software. Ian was truly famous in my world; I felt like I'd won the jackpot of meeting a rock star.

    More recently, I gave a keynote at DebConf this year and talked about how long I've used Debian and how much it has meant to me. I've since then talked with many people about how the Debian community is rapidly becoming a unicorn among Free Software projects — one of the last true community-driven, non-commercial projects.

    A culture like that needs a huge group to rise to fruition, and there are no specific actions that can ensure creation of a multi-generational project like Debian. But, there are lots of ways to make the wrong decisions early. As near as I can tell, Ian artfully avoided the project-ending mistakes; he made the early decisions right.

    Ian cared about Free Software and wanted to make something useful for the community. He teamed up (for a time in Debian's earliest history) with the FSF to help Debian in its non-profit connections and roots. And, when the time came, he did what all great leaders do: he stepped aside and let a democratic structure form. He paved the way for the creation of Debian's strong Constitutional and democratic governance. Debian has had many great leaders in its long history, but Ian was (effectively) the first DPL, and he chose not to be a BDFL.

    The Free Software community remains relatively young. Thus, the loss of our community members jars us in the manner that uniquely unsettles the young. In other words, anyone we lose now, as we've lost Ian this week, has died too young. It's a cliché to say, but I say anyway that we should remind ourselves to engage with those around us every day, and to welcome new people gladly. When Ian invited me around that table, I was truly nobody: he'd never met me before — indeed, no one in the Free Software community knew who I was then. Yet, the mere fact that I stayed late at a conference to attend the Debian BoF was enough for him — enough for him to even invite me to hear the secret plans of his new company. Ian's trust — his welcoming nature — remains for me unforgettable. I hope to watch that nature flourish in our community for the remainder of all our lives.

    Posted on Wednesday 30 December 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2015-12-18: Conservancy's Year In Review 2015

    If you've noticed that my blog has been a little silent the past few weeks, it's because I've been spending my blogging time in December writing posts on Conservancy's site for Conservancy's 2015: Year in Review series.

    So far, these are the ones that were posted:

    Generally speaking, if you want to keep up with my work, you probably should subscribe not only to my blog but also to Conservancy's. I tend to crosspost the more personal pieces, but if something is purely a Conservancy matter and doesn't relate to usual things I write about here, I don't crosspost.

    Posted on Friday 18 December 2015 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2015-12-02: Fighting For Social Justice Is a Major Contribution to Society

    I have something to say that I'm sure everyone is going to consider controversial. I've been meaning to say it for some time, and I realize that it's going to draw some annoyance from all sides of this debate. Conservancy may lose Supporters over this, even though this is my personal blog and my personal opinion, and views expressed here aren't necessarily Conservancy's views. I've actually been meaning to write this publicly for a year. I just have to say it now, because yet another event on this issue has caused yet another war of words in our community.

    If you follow the types of Free Software politics and issues that I do (which you probably do if you read my blog), you have heard the phrase — which has become globally common in general politics — “Social Justice Warrior”, often abbreviated SJW. As anyone who reads my blog probably already knows, SJW is used as a derogatory catch-all phrase referring to anyone who speaks up on any cause, but particularly on racial or gender inequality. While the derogatory part seems superficially to refer to tactics rather than strategic positions, nevertheless many critics who use the phrase conflate (either purposely or not) some specific, poorly-chosen tactic (perhaps from long ago) of the few with the strategic goals of an entire movement.

    Anyway, my argument in this post, which is why I expect it to annoy everyone equally, is not about some specific issue in any cause, but on a meta-issue. The meta-issue is the term “SJW” itself. The first time I heard the phrase (which, given my age, feels recent, even though it was probably four years ago), I actually thought it was something good; I first thought that SJW was a compliment. In fact, I've more-or-less spent my entire adult life wanting to be a social justice warrior, although I typically called it being a “social justice activist”.

    First of all, I believe deeply in social justice causes. I care about equality, fairness, and justice for everyone. I believe software freedom is a social justice cause, and I personally have proudly called software freedom a social justice cause for more than a decade.

    Second, I also believe in the zealous pursuit of causes that matter. I've believed fully and completely in non-violence since the mid-1980s, but I nevertheless believe there is a constant war of words in the politics surrounding any cause or issue, including software freedom. I am, therefore — for lack of a better word — a warrior, in those politics.

    So, when I look at the three words on their face: Social. Justice. Warrior. Well, denotively, it describes my lifelong work exactly.

    Connotatively, a warped and twisted manipulation of words has occurred. Those, who want to discredit the validity of various social justice causes, have bestowed a negative connotation on the phrase to create a social environment that makes anyone who wants to speak out about a cause automatically wrong and easily branded.

    I've suggested to various colleagues privately over the last two years that we should coopt the phrase back to mean something good. Most have said that's a waste of time and beside the point. I still wonder whether they're right.

    By communicating an idea that these social justice people are fighting against me and oppressing me, the messenger accusing a so-called SJW has a politically powerful, well-coopted message, carefully constructed for concision and confirmation bias. While I don't believe all that cooptive and manipulative power is wielded solely in the one three-word phrase, I do believe that the rhetorical trick that allows “SJW” to have a negative connotation is the same rhetorical power that has for centuries allowed the incumbent power structures to keep their control of those many social institutions that are governed chiefly by rhetoric.

    And this is precisely why I just had to finally post something about this. I won a cultural power jackpot merely by being born a middle-class Caucasian boy in the USA. Having faced some adversity in my life despite that luck, and then seeing how easy I had it compared to the adversity that others have faced, I become furious at how the existing power structures can brand people with — let's call it what it is — a sophisticated form of name-calling that coopts a phrase like “social justice”, which until that time had a history of describing some of the greatest, most selfless, and most important acts of human history.

    Yes, I know there are bigger issues at stake than just the words people use. But words matter. No matter how many people use the phrase negatively, I continue to strive to be a social justice warrior. I believe that's a good thing, in the tradition of all those who have fought for a cause they believed was right, even when it wasn't popular.

    Posted on Wednesday 02 December 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2015-11-26: Do You Like What I Do For a Living?

    [ A version of this blog post was crossposted on Conservancy's blog. ]

    I'm quite delighted with my career choice. As an undergraduate, and even in graduate school, I still expected my career to extend my earlier careers in the software industry: a mixture of software developer and sysadmin. I'd probably be a DevOps person now, had I stuck with that career path.

    Instead, I picked the charity route: which (not financially, but work-satisfaction-wise) is like winning a lottery. There are very few charities related to software freedom, and frankly, if (like me) you believe in universal software freedom and reject proprietary software entirely, there are two charities for you: the Free Software Foundation, where I used to work, and Software Freedom Conservancy, where I work now.

    But software freedom is not merely an ideology for me. I believe the ideology matters because I see the lives of developers and users are better when they have software freedom. I first got a taste of this IRL when I attended the earliest Perl conferences in the late 1990s. My friend James and I stayed in dive motels and even slept in a rental car one night to be able to attend. There was excitement in the Perl community (my first Free Software community). I was exhilarated to meet in person the people I'd seen only as god-like hackers posting on perl5-porters. James was so excited he asked me to take a picture of him jumping as high as he could with his fist in the air in front of the main conference banner. At the time, I complained; I was mortified and felt like a tourist taking that picture. But looking back, I remember that James and I felt that same excitement and we just expressed it differently.

    I channeled that thrill into finding a way that my day job would focus on software freedom. As an activist since my teenage years, I concentrated specifically on how I could preserve, protect and promote this valuable culture and ideology in a manner that would assure the rights of developers and users to improve and share the software they write and use.

    I've enjoyed the work; I attend more great conferences than I ever imagined I would, where now people occasionally walk up to me with the same kind of fanboy reverence that I reserved for Larry Wall, RMS and the heroes of my Free Software generation. I like my work. I've been careful, however, to avoid a sense of entitlement. Since I read it in 1991, I have never forgotten RMS' point in the GNU Manifesto: Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else., a point he continues in his regular speeches, by adding: I [could] just … give up those principles and start … writing proprietary software. I looked for another alternative, and there was an obvious one. I could leave the software field and do something else. Now I had no other special noteworthy skills, but I'm sure I could have become a waiter. Not at a fancy restaurant; they wouldn’t hire me; but I could be a waiter somewhere. And many programmers, they say to me, “the people who hire programmers demand [that I write proprietary software] and if I don’t do [it], I’ll starve”. It’s literally the word they use. Well, as a waiter, you’re not going to starve.

    RMS' point is not merely to expose the false dilemma inherent in: I have to program, even if my software is proprietary, because that's what companies pay me to do, but also to expose the sense of entitlement in assuming a fundamental right to do the work you want. This applies not just to software authorship (the work I originally trained for) but also to the political activism and non-profit organizational work that I do now.

    I've spent most of my career at charities because I believe deeply that I should take actions that advance the public good, and because I have a strategic vision for the best methods to advance software freedom. My strategic goals to advance software freedom include two basic tenets: (a) provide structure for Free Software projects in a charitable home (so that developers can focus on writing software, not administration, and so that the projects aren't unduly influenced by for-profit corporations) and (b) uphold and defend Free Software licensing, such as copyleft, to ensure software freedom.

    I don't, however, arrogantly believe that these two priorities are inherently right. Strategic plans work toward a larger goal, and pursuing success of a larger ideological mission requires open-mindedness regarding strategies. Nevertheless, any strategy, once decided, requires zealous pursuit. It's with this mindset that I teamed up with my colleague, Karen Sandler, to form Software Freedom Conservancy.

    Conservancy, like most tiny charities, survives on the determination of its small management staff. Karen Sandler, Conservancy's Executive Director, and I have a unique professional collaboration. She and I share a commitment to promoting and defending moral principles in the context of software freedom, along with an unrelenting work ethic to match. I believe fundamentally that she and I have the skills, ability, and commitment to meet these two key strategic goals for software freedom.

    Yet, I don't think we're entitled to do this work. And, herein there's another great feature of a charity. A charity not only serves the public good; the USA IRS also requires that a charity be funded primarily by donations from the public.

    I like this feature for various reasons. Particularly, in the context of the fundraiser that Conservancy announced this week, I think about it in terms of seeking a mandate from the public. As Conservancy prepares to begin its tenth year, Karen and I, as its leaders, stand at a crossroads. For financial reasons related to the organization's budget, we've been thrust to test this question: Does the public of Free Software users and developers actually want the work that we do?

    While I'm nervous that perhaps the answer is no, I'm nevertheless not afraid to ask the question. So, we've asked. We asked all of you to show us that you want our work to continue. We set two levels, matching the two strategic goals I mentioned. (The second is harder and more expensive to do than the first, so we've asked many more of you to support us if you want it.)

    It's become difficult in recent years to launch a non-profit fundraiser (a thing that has existed for generations) and not think of the relatively recent advent of gofundme, Kickstarter, and the like. These new systems provide a (sadly, usually proprietary software) platform for people to ask the public: Is my business idea and/or personal goal worth your money? While I'm dubious about those sites, I do believe in democracy enough to build my career on a structure that requires an election (of sorts). Karen and I don't need you to go to the polls and cast your ballot, but we do ask you to consider whether what we do for a living at Conservancy is worth US$10 per month to you. If it is, I hope you'll “cast a vote” for Conservancy and become a Conservancy supporter now.

    Posted on Thursday 26 November 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2015-09-28: How Would Software Freedom Have Helped With VW?

    [ A version of this blog post was crossposted on Conservancy's blog. ]

    Would software-related scandals, such as Volkswagen's use of proprietary software to lie to emissions inspectors, cease if software freedom were universal? Likely so, as I wrote last week. In a world where regulations mandate distribution of source code for all the software in all devices, and where no one ever cheats on that rule, VW would need means other than software to hide their treachery.

    Universal software freedom is my lifelong goal, but I realized years ago that I won't live to see it. I suspect that generations of software users will need to repeatedly rediscover and face the harms of proprietary software before a groundswell of support demands universal software freedom. In the meantime, our community has invented semi-permanent strategies, such as copyleft, to maximize software freedom for users in our current mixed proprietary and Free Software world.

    In the world we live in today, software freedom can impact the VW situation only if a few complex conditions are met. Let's consider the necessary hypothetical series of events, in today's real world, that would have been necessary for Open Source and Free Software to have stopped VW immediately.

    First, VW would have created a combined or derivative work of software with a copylefted program. While many cars today contain Linux, which is copylefted, I am not aware of any cars that use Linux outside of the on-board entertainment and climate control systems. The VW software was not part of those systems, and VW engineers almost surely wrote the emissions testing mode code from scratch. Even if they included some non-copylefted Open Source or Free Software in it, those licenses don't require disclosure of any source code; VW's ability to conceal its bad actions with non-copylefted code is roughly identical to the situation of proprietary VW code before us. As a thought experiment, though, let's pretend, that VW based the nefarious code on Linux by writing a proprietary Linux module to trick the emissions testing systems.

    In that case, VW would have violated the GPL. But that alone is far from enough to ensure anyone would catch VW. Indeed, GPL violations remain very prevalent, and only one organization enforces the GPL for Linux (full disclosure: that's Software Freedom Conservancy, where I work). That organization has such limited enforcement resources (only three people on staff, and enforcement is one of many of our programs), I suspect that years would pass before Conservancy had the resources to pursue the violation; Conservancy currently has hundreds of Linux GPL violations queued for action. Even once opened, most GPL violations take years to resolve. As an example, we are currently enforcing the GPL against one auto manufacturer who has Linux in their car. We've already spent hundreds of hours and the company to date continues to fail in their GPL compliance efforts. Admittedly, it's highly unlikely that particular violator has a GPL-violating Linux module specifically designed to circumvent automotive regulations. However, after enforcing the GPL in that case for more than two years, I still don't have enough data about their use of Linux to even know which proprietary Linux modules are present — let alone whether those modules are nefarious in any way other than as violating Linux's license.

    Thus, in today's world, a “software freedom solution” to prevent the VW scandal must meet unbelievable preconditions: (a) VW would have to base all its software on copylefted Open Source and Free Software, and (b) an organization with a mission to enforce copyleft for the public good would require the resources to find the majority of GPL violators and ensure compliance in a timely fashion. This thought experiment quickly shows how much more work remains to advance and defend software freedom. While requirements of source code disclosure, such as those in copyleft licenses, are necessary to assure the benefits of software freedom, they cannot operate unless someone exercises the offers for source and looks at the details.

    We live in a world where most of the population accepts proprietary software as legitimate. Even major trade associations, such as the OpenStack Foundation and the Linux Foundation, in the Open Source community laud companies who make proprietary software, as long as they adopt and occasionally contribute to some Free Software too. Currently, it feels like software freedom is winning, because the overwhelming majority in the software industry believe Open Source and Free Software is useful and superior in some circumstances. Furthermore, while I appreciate the aspirational ideal of voluntary Open Source, I find in my work that so many companies, just as VW did, will cheat against important social good policies unless someone watches and regulates. Mere adoption of Open Source won't work alone; we only yield the valuable results of software freedom if software is copylefted and someone upholds that copyleft.

    Indeed, just as it has been since the 1980s, very few people believe that software freedom is of fundamental importance for all software users. Scandals, like VW's use of proprietary software to hide other bad acts, might slowly change opinions, but one scandal is rarely enough to permanently change public opinion. I therefore encourage those who support software freedom to take this incident as inspiration for a stronger stance, and to prepare yourselves for the long haul of software freedom advocacy.

    Posted on Monday 28 September 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2015-09-22: The EPA Deserves Software Freedom, Too

    The issue of software freedom is, not surprisingly, not mentioned in the mainstream coverage of Volkswagen's recent use of proprietary software to circumvent important regulations that exist for the public good. Given that Volkswagen is an upstream contributor to Linux, it's highly likely that Volkswagen vehicles have Linux in them.

    Thus, we have a wonderful example of how much we sacrifice at the altar of “Linux adoption”. While I'm glad for some Free Software to appear in products rather than none, I also believe that, too often, our community happily accepts the idea that we should gratefully laud any company that includes even a tiny bit of Free Software in their product, and gives a little code back, even if most of what they do is proprietary software.

    In this example, a company poisoned people and our environment with out-of-compliance greenhouse gas emissions, and hid their tracks behind proprietary software. IIUC, the EPA had to use an (almost literal) analog hole to catch these scoundrels.

    It's not that I'm going to argue that end users should modify the software that verifies emissions standards. But if end users could extract these binaries from the physical device, recompile the source, and verify the binaries match, someone would have discovered this problem immediately when the models drove off the lot.
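
    (To make that concrete: a purely illustrative sketch of what such verification could look like, assuming a reproducible build and a way to dump the installed image — every file and script name below is hypothetical:)

      # dump the binary actually installed on the device (method is device-specific)
      sha256sum firmware-dumped-from-ecu.bin
      # rebuild the same binary from the published source
      ./build-firmware.sh               # hypothetical reproducible-build script
      sha256sum firmware-built.bin      # the digests should match if nothing is hidden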

    So, why does no one demand this? To me, this feels like Diebold and voting machines all over again. So tell me, voters' rights advocates who claimed proprietary software was fine as long as you could get voter-verified paper records: how are we going to “paper verify” our emissions testing?

    Software freedom is the only solution to problems that proprietary software creates. Sadly, opposition to software freedom is so strong, nearly everyone will desperately try every other (failing) solution first.

    Posted on Tuesday 22 September 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2015-09-15: Exercising Software Freedom in the Global Email System

    [ This post was cross-posted on Conservancy's blog. ]

    In this post, I discuss one example of how a choice for software freedom can cause many strange problems that others will dismiss. My goal here is to explain in gory detail how proprietary software biases in the computing world continue to grow, notwithstanding Open Source ballyhoo.

    Two decades ago, nearly every company, organization, entity, and tech-minded individual ran their own email server. Generally speaking, even back then, nearly all the software for both MTAs and MUAs was Free Software0. MTAs are the mail transport agents — the complex software that moves email around from one Internet domain to another. MUAs are the mail user agents, sometimes called mail clients — the local programs with which users manipulate their own email.

    I've run my own MTA since around 1993: initially with sendmail, then with exim for a while, and with Postfix since 1999 or so. Also, everywhere I've worked throughout my entire career since 1995, I've either been in charge of — or been the manager of the person in charge of — the MTA installation for the organization where I worked. In all cases, that MTA has always been Free Software, of course.

    However, the world of email has changed drastically during that period. The most notable change in the email world is the influx of massive amounts of spam, which has been used as an excuse to implement another disturbing change. Slowly but surely, email service — both the MTA and the MUA — has been outsourced for most organizations. Specifically, either (a) organizations run proprietary software on their own computers to deal with email and/or (b) people pay a third party to run proprietary and/or trade-secret software on their behalf to handle the email services. Email, generally speaking, isn't handled by Free Software all that much anymore.

    This situation became acutely apparent to me earlier this month when Conservancy moved its email server. I had plenty of warning that the move was needed1, and I'd set up a test site on the new server. We sent and received some of our email for months (mostly mailing list traffic) using that server configured with a different domain (sf-conservancy.org). When the shut-off day came, I moved sfconservancy.org's email officially. All looked good: I had a current Debian, with a new version of Postfix and Dovecot on a speedier host, with better spam protection settings in Postfix, and with better spam filtering via a newer version of SpamAssassin. All was going great, thanks to all those great Free Software projects — until the proprietary software vendors threw a spanner in our works.
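
    (By “better spam protection settings”, I mean the usual sort of smtpd restrictions. The fragment below is an illustrative sketch of such a main.cf setting — not Conservancy's actual configuration — and the particular DNSBL named is only an example:)

      # reject relaying attempts and clients listed on a DNS blacklist,
      # while always permitting our own networks
      smtpd_recipient_restrictions =
          permit_mynetworks,
          reject_unauth_destination,
          reject_rbl_client zen.spamhaus.org,
          reject_unknown_reverse_client_hostname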

    For reasons that we'll never determine for sure2, the IPv4 number that our new hosting provider gave us was already listed on many spam blacklists. I won't debate the validity of various blacklists here, but the fact is, for nearly every public-facing, pure-blacklist-only service, delisting is straightforward, takes about 24 hours, and requires at most answering some basic questions about your domain name and answering a captcha-like challenge. These services, even though some are quite dubious, are not the center of my complaint.
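
    (Checking whether a given IPv4 number appears on one of these public blacklists is itself easy with standard Free Software tools; here's a sketch using the documentation address 192.0.2.4 and one example DNSBL:)

      # DNSBLs are queried by reversing the address's octets and appending the
      # list's zone; any answer at all means the address is listed
      host 4.2.0.192.zen.spamhaus.org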

    The real peril comes from third-party email hosting companies. These companies have arbitrary, non-public blacklisting rules. More importantly, they are not merely blacklist maintainers, they are MTA (and in some cases, even MUA) providers who sell their proprietary and/or trade-secret hosted solutions as a package to customers. Years ago, the idea of giving up that much control of what happens to your own email would be considered unbelievable. Today, it's commonplace.

    And herein lies the fact that is obvious to most software freedom advocates but indiscernible by most email users. As a Free Software user, with your own MTA on your own machine, your software only functions if everyone else respects your right to run that software yourself. Furthermore, if the people you want to email are fully removed from their hosting service, they won't realize nor understand that their hosting site might block your emails. These companies have their customers fully manipulated to oppose your software freedom. In other words, you can't appeal to those customers (the people you want to email), because you're likely the only person to ever raise this issue with them (i.e., unless they know you very well, they'll assume you're crazy). You're left begging to the provider, whom you have no business relationship with, to convince them that their customers want to hear from you. Your voice rings out indecipherable from the spammers who want the same permission to attack their customers.

    The upshot for Conservancy? For days, Microsoft told all its customers that Conservancy is a spammer; Microsoft did it so subtly that the customers wouldn't even believe it if we told them. Specifically, every time I or one of my Conservancy colleagues emailed organizations using Microsoft's “Exchange Online”, “Office 365” or similar products to host email for their domain4, we got the following response:

                    Sep  2 23:26:26 pine postfix/smtp[31888]: 27CD6E12B: to=, relay=example-org.mail.protection.outlook.com[207.46.163.215]:25, delay=5.6, delays=0.43/0/0.16/5, dsn=5.7.1, status=bounced (host example-org.mail.protection.outlook.com[207.46.163.215] said: 550 5.7.1 Service unavailable; Client host [162.242.171.33] blocked using FBLW15; To request removal from this list please forward this message to [email protected] (in reply to RCPT TO command))
                    

    Oh, you ask, did you forward your message to the specified address? Of course I did; right away! I got back an email that said:

    Hello ,

    Thank you for your delisting request SRXNUMBERSID. Your ticket was received on (Sep 01 2015 06:13 PM UTC) and will be responded to within 24 hours.

    Once we passed the 24 hour mark with no response, I started looking around for more information. I also saw a suggestion online that calling is the only way to escalate one of those tickets, so I phoned 800-865-9408 and gave V-2JECOD my ticket number, and she told me that I could only raise these issues with the “Mail Flow Team”. She put me on hold for them, and told me that I was number 2 in the queue for them, so it should be a few minutes. I waited on hold for just under six hours. I finally reached a helpful representative, who said the ticket was at the lowest level of escalation available (he hinted that it would take weeks to resolve at that level, which is consistent with other comments about this problem I've seen online). The fellow on the phone agreed to escalate it to the highest priority available, and said that within four hours, Conservancy should be delisted. Thus, ultimately, I did resolve these issues after about 72 hours. But, I'd spent about 15 hours all told researching various blacklists, email hosting companies, and their procedures3, and that was after I'd already carefully configured our MTA and DNS to be very RFC-compliant (which is complicated and confusing, but absolutely essential to stay off these blacklists once you're off).
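
    (Part of what that RFC-compliance work typically involves is making sure forward and reverse DNS agree, and that the name your MTA announces actually resolves to its sending address; a quick illustrative check, with placeholder names and the documentation address 192.0.2.4:)

      # forward and reverse DNS should agree with the name the MTA announces
      host mail.example.org      # should return the server's sending IP
      host 192.0.2.4             # should return mail.example.org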

    Admittedly, this sounds like a standard Kafkaesque experience with a large company that almost everyone in post-modern society has experienced. However, it's different in one key way: I had to convince Microsoft to allow me to communicate with their customers who are paying Microsoft for proprietary and/or trade-secret software and services, ostensibly to improve efficiency of their communications. Plus, since Microsoft, by the nature of their so-called spam blocking, doesn't inform their customers whom they've blocked, I and my colleagues would have just sounded crazy if we'd asked our contacts to call their provider instead. (I actually considered this, and realized that we might negatively impact relationships with professional contacts.)

    These problems do reduce email software freedom by network effects. Most people rely on third-party proprietary email software from Google, Microsoft, Barracuda, or others. Therefore, most people don't exercise any software freedom regarding email services. Since exercising software freedom for email slowly becomes rarer and rarer (rather than the norm it once was), society slowly but surely pegs those who do exercise software freedom as “random crazy people”.

    There are a few companies who are seeking to do email hosting in a way that respects your software freedom. The real test of such companies is whether someone technically minded can get the same software configured on their own systems and have it work the same way. Yet, in most cases, you go to one of these companies' Github pages and find a bunch of stuff pushed public, but limited information on how to configure it so that it functions the same way the hosted service does. RMS wrote years ago that Free Software cannot properly succeed without Free Documentation, and in many of these hosting cases, the hosting company is using fully upstreamed Free Software but has configured the software in a way that is difficult to stumble upon by oneself. (For that reason, I'm committing to writing up tutorials on how Conservancy configured our mail server, so at least I'll be part of the solution instead of part of the problem.)

    BTW, as I dealt with all this, I couldn't help but think of John Gilmore's activism efforts regarding open mail relays. While I don't agree with all of John's positions on this, his fundamental position is right: we must oppose companies who think they know better how we should configure our email servers (or on which IP numbers we should run those servers). I'd add a corollary that there's a serious threat to software freedom, at least with regard to email software, if we continue to allow such top-down control of the once beautifully decentralized email system.

    The future of software freedom depends on issues like this. Imagine someone who has just learned that they can run their own email server, or bought some Free Software-based plug computing system that purports to be a “home cloud” service with email. There's virtually no chance that such users would bother to figure all this out. They'd see their email blocked, declare the “home cloud” solution useless, and would just get a gmail.com, outlook.com, or some other third-party email account. Thus, I predict that the software freedom we once had, for our MTAs and MUAs, will eventually evaporate for everyone except those tiny few who invest the time to understand these complexities and fight the for-profit corporate power that curtails software freedom. Furthermore, that struggle becomes Sisyphean as our numbers dwindle.

    Email is the oldest software-centric communication system on the planet. The global email system serves as a canary in the coalmine regarding software freedom and network service freedom issues. Frighteningly, software now controls most of the global communications systems. How long will it be before mobile network providers refuse to terminate PSTN calls or SMS's sent from devices running modified Android firmwares like Replicant? Perhaps those providers, like large email providers, will argue that preventing robocalls (the telephone equivalent of SPAM) necessitates such blocking. Such network effects place so many dystopias on software freedom's horizon.

    I don't deny that every day, there is more Free Software in the world than has ever existed before — the P.T. Barnums of Open Source have that part right. The part they leave out is that, each day, their corporate backers make it a little more difficult to complete mundane tasks using only Free Software. Open Source wins the battle while software freedom loses the war.


    0Yes, I'm intimately aware that Elm's license was non-free, and that the software freedom of PINE's license was in question. That's slightly relevant here but mostly orthogonal to this point, because Free Software MUAs were still very common then, and there were (ultimately successful) projects to actively rewrite the ones whose software freedom was in question.

    1For the last five years, one of Conservancy's Directors Emeriti, Loïc Dachary, has donated an extensive amount of personal time and in-kind donations by providing a Cloud server for Conservancy to host its three key servers, including the email server. The burden of maintaining this for us became too time consuming (very reasonably), and Loïc asked us to find another provider. I want, BTW, to thank Loïc for his years of volunteer work maintaining infrastructure for us; he provided this service for much longer than we could have hoped! Loïc also gave us plenty of warning that we'd need to move. None of these problems are his fault in the least!

    2The obvious supposition is that, because IPv4 numbers are so scarce, this particular IP number was likely used previously by a spammer who was shut down.

    3I of course didn't count the time on phone hold, as I was able to do other work while waiting, but less efficiently because the hold music was very distracting.

    4If you want to see if someone's domain is a Microsoft customer, see if the MX record for their domain (say, example.org) points to example-org.mail.protection.outlook.com.
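
    For example, a quick way to check that from a shell (with example.org standing in for the domain in question; the priority number will of course vary) would be something like:

                    $ dig +short MX example.org
                    0 example-org.mail.protection.outlook.com.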

    Posted on Tuesday 15 September 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2015-07-15: Thoughts on Canonical, Ltd.'s Updated Ubuntu IP Policy

    Most of you by now have probably seen Conservancy's and FSF's statements regarding today's update to Canonical, Ltd.'s Ubuntu IP Policy. I have a few personal comments, speaking only for myself, that I want to add and that don't appear in either the FSF's or Conservancy's analysis. (I wrote nearly all of Conservancy's analysis and did some editing on FSF's analysis, but the statements I add here are my personal opinions and don't necessarily reflect the views of the FSF or Conservancy, notwithstanding that I have affiliations with both orgs.)

    First of all, I think it's important to note the timeline: it took two years of work by two charities to get this change done. The scary thing is that compared to their peers who have also violated the GPL, Canonical, Ltd. acted rather quickly. As Conservancy pointed out regarding the VMware lawsuit, it's not uncommon for these negotiations to take even four years before we all give up and have to file a lawsuit. So, Canonical, Ltd. resolved the matter at least twice as fast as VMware, and they deserve some credit for that — even if other GPL violators have set the bar quite low.

    Second, I have to express my sympathy for the positions on this matter taken by Matthew Garrett and Jonathan Riddell. Their positions show clearly that, while the GPL violation is now fully resolved, the community is very concerned about what happens regarding non-copylefted software in Ubuntu, and thus Ubuntu as a whole.

    Realize, though, that these trump clauses are widely used throughout the software industry. For example, electronics manufacturers who ship an Android/Linux system with a standard, disgustingly worded, forbid-everything EULA usually include a trump clause not unlike Ubuntu's. In such systems, usually, the only copylefted program is the kernel named Linux. The rest of the distribution includes tons of (now proprietarized) non-copylefted code from Android (as well as a bunch of born-proprietary applications too). The trump clause assures the software freedom rights for that one copylefted work present, but all the non-copylefted ones are subject to the strict EULA (which often includes “no reverse engineering” clauses, etc.). That means if the electronics company did change the Android Java code in some way, you can't even legally reverse engineer it — even though it was Apache-licensed by upstream.

    Trump clauses are thus less than ideal because they achieve compliance only by allowing a copyleft to prevail when the overarching license contradicts specific requirements, permissions, or rights under copyleft. That's acceptable because copyleft licenses have many important clauses that assure and uphold software freedom. By contrast, most non-copyleft licenses have very few requirements, and thus they lack adequate terms to triumph over any anti-software-freedom terms of the overarching license. For example, if I take a 100% ISC-licensed program and build a binary from it, nothing in the ISC license prohibits me from imposing this license on you: “you may not redistribute this binary commercially”. Thus, even if I also say to you: “but also, if the ISC license grants rights, my aforementioned license does not modify or reduce those rights”, nothing has changed for you. You still have a binary that you can't distribute commercially, and there was no text in the ISC license to force the trump clause to save you.

    Therefore, this whole situation is a simple and clear argument for why copyleft matters. Copyleft can and does (when someone like me actually enforces it) prevent such situations. But copyleft is not infinitely expansive. Nearly every full operating system distribution available includes an aggregated mix of copylefted, non-copyleft, and often fully-proprietary userspace applications. Nearly every company that distributes them wraps the whole thing with some agreement that restricts some rights that copyleft defends, and then adds a trump clause that gives an exception just for FLOSS license compliance. Sadly, I have yet to see a company trailblaze adoption of a “software freedom preservation” clause that guarantees copyleft-like compliance for non-copylefted programs and packages. Thus, the problem with Ubuntu is just a particularly bad example of what has become a standard industry practice by nearly every “open source” company.

    How badly these practices impact software freedom depends on the strictness and detailed terms of the overarching license (and not the contents of the trump clause itself; they are generally isomorphic0). The task of analyzing and rating “relative badness” of each overarching licensing document is monumental; there are probably thousands of different ones in use today. Matthew Garrett points out why Canonical, Ltd.'s is particularly bad, but that doesn't mean there aren't worse (and better) situations of a similar ilk. Perhaps our next best move is to use copyleft licenses more often, so that the trump clauses actually do more.

    In other words, as long as there is non-copylefted software aggregated in a given distribution of an otherwise Free Software system, companies will seek to put non-Free terms on top of the non-copylefted parts. To my knowledge, every distribution-shipping company (except for extremely rare, Free-Software-focused companies like ThinkPenguin) places some kind of restriction in their business terms for their enterprise distribution products. Everyone seems to be asking me today to build the “worst to almost-benign” ranking of these terms, but I've resisted the urge to try. I think the safe bet is to assume that if you're looking at one of these trump clauses, there is some sort of software-freedom-unfriendly restriction floating around in the broader agreement, and you should thus just avoid that product entirely. Or, if you really want to use it, fork it from source and relicense the non-copylefted stuff under copyleft licenses (which is permitted by nearly all non-copyleft licenses), to prevent future downstream actors from adding more restrictive terms. I'd even suggest this as a potential solution to the current Ubuntu problem (or, better yet, just go back upstream to Debian and do the same :).

    Finally, IMO the biggest problem with these “overarching licenses with a trump clause” is their use by companies who herald “open source” friendliness. I suspect the community ire comes from a sense of betrayal. Yet, I feel only my usual anger at proprietary software here; I don't feel betrayed. Rather, this is just another situation that proves that saying you are an “open source company” isn't enough; only the company's actions and “fine print” terms matter. Now that open source has really succeeded at coopting software freedom, enormous effort is required to ascertain whether any given company respects your software freedom. We must ignore the ballyhoo of “community managers” and look closely at the real story.


    0Despite Canonical, Ltd.'s use of a trump clause, I don't think these various trump clauses are canonically isomorphic. There is no natural mapping between these various trump clauses, but they all do have the same effect: they assure that when the overarching terms conflict with a FLOSS license, the FLOSS license triumphs over the overarching terms, no matter what they are. However, the potential relevance of the phrase “canonical isomorphism” here is yet another example of why it's confusing and insidious that Canonical, Ltd. insisted so strongly on using canonical in a non-canonical way.

    Posted on Wednesday 15 July 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2015-07-04: Did You Actually Read the Lower Court's Decision?

    I'm seeing plenty of people, including some non-profit organizations along with the usual punditocracy, opining on the USA Supreme Court's denial of a writ of certiorari in the Oracle v. Google copyright infringement case. And, it's not that I expect everyone in the world to read my blog, but I'm amazed that people who should know better haven't bothered to even read the lower Court's decision, which is de facto upheld now that the Supreme Court has declined to hear the appeal.

    I wrote at great length about why the decision isn't actually a decision about whether APIs are copyrightable, and that the decision actually gives us some good clarity with regard to the issue of combined work distribution (i.e., when you distribute your own works with the copyrighted material of others combined into a single program). The basic summary of the blog post I linked to above is simply: the lower Court seemed genuinely confused about whether Google copy-and-pasted code, as the original trial seems to have inappropriately conflated API reimplementation with code cut-and-paste.

    No one else has addressed this nuance of the lower Court's decision in the year since the decision came down, and I suspect that's because, in our TL;DR 24-hour news cycle, it's much easier for the pundits and organizations tangentially involved with this issue to get a bunch of press by giving out confusing information.

    So, I'm mainly making this blog post to encourage people to go back and read the decision and my blog post about it. I'd be delighted to debate people if they think I misread the decision, but I won't debate you unless you assure me you read the lower Court's decision in its entirety. I think that leaves virtually no one who will. :-/

    Posted on Saturday 04 July 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

  • 2015-06-26: John Oliver Falls For Software Patent Trade Association Messaging

    I've been otherwise impressed with John Oliver and his ability on Last Week Tonight to find key issues that don't get enough attention and give reasonably good information about them in an entertaining way — I even lauded Oliver's discussion of non-profit organizational corruption last year. I suppose that's why I'm particularly sad (as I caught up last weekend on an old episode) to find that John Oliver basically fell for the large patent holders' pro-software-patent rhetoric on so-called “software patents”.

    In short, Oliver mimics the trade association and for-profit software industry rhetoric of software patent reform rather than abolition — because trolls are the only problem. I hope the world's largest software patent holders send Oliver's writing staff a nice gift basket, as such might be the only thing that would signal to them that they fell into this PR trap. Although, it's admittedly slightly unfair to blame Oliver and his writers; the situation is subtle.

    Indeed, someone not particularly versed in the situation can easily fall for this manipulation. It's just so easy to criticize non-practicing entities. Plus, the idea that the sole inventor might get funded on Shark Tank has a certain appeal, and fits a USAmerican sensibility of personal capitalistic success. Thus, the first-order conclusion is often, as Oliver's piece concludes, maybe if we got rid of trolls, things wouldn't be so bad.

    And then there's also the focus on the patent quality issue; it's easy to convince the public that higher quality patents will make it ok to restrict software sharing and improvement with patents. It's great rhetoric for pro-patent entities to generate outrage among the technology-using public by pointing to, say, an example of a patent that reads on every Android application and telling a few jokes about patent quality. In fact, at nearly every FLOSS conference I've gone to in the last year, OIN has sponsored a speaker to talk about that very issue. The jokes at such talks aren't as good as John Oliver's, but they still get laughs and get technologists upset about patent quality and trolls — but, through careful cultural engineering, not about software patents themselves.

    In fact, I don't think I've seen a for-profit industry and its trade associations do so well at public outrage distraction since the “tort reform” battles of the 1980s and 1990s, which were produced in part by George H. W. Bush's beloved M.C. Rove himself. I really encourage those who want to understand how the anti-troll messaging manipulation works to study how and why the tort reform issue played out the way it did. (As I mentioned on the Free as in Freedom audcast, Episode 0x13, the documentary film Hot Coffee is a good resource for that.)

    I've literally been laughed at publicly by OIN representatives when I point out that IBM, Microsoft, and other practicing entities do software patent shake-downs, too — just like the trolls. They're part of a well-trained and well-funded (by trade associations and companies) PR machine out there in our community to convince us that trolls and so-called “poor patent quality” are the only problems. Yet, nary a year has gone by in my adult life where I don't see some incident where a so-called legitimate, non-obvious software patent causes serious trouble for a Free Software project. From RSA, to the codec patents, to Microsoft FAT patent shakedowns, to IBM's shakedown of the Hercules open source project, to exfat — and that's just a few choice examples from the public tip of the practicing entity shakedown iceberg. IMO, the practicing entities are just trolls with more expensive suits and proprietary software licenses for sale. We should politically oppose the companies and trade associations that bolster them — and call for an end to software patents.

    Posted on Friday 26 June 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2015-06-15: Why Greet Apple's Swift 2.0 With Open Arms?

    Apple announced last week that its Swift programming language — a currently fully proprietary software successor to Objective C — will probably be partially released under an OSI-approved license eventually. Apple explicitly stated though that such released software will not be copylefted. (Apple's pathological hatred of copyleft is reasonably well documented.) Apple's announcement remained completely silent on patents, and we should expect the chosen non-copyleft license will not contain a patent grant. (I've explained at great length in the past why software patents are a particularly dangerous threat to programming language infrastructure.)

    Apple's dogged pursuit of non-copyleft replacements for copylefted software is far from new. For example, Apple has worked to create replacements for Samba so they need not ship Samba in OSX. But, their anti-copyleft witch hunt goes back much further. It began when Richard Stallman himself famously led the world's first GPL enforcement effort against NeXT, and Objective-C was liberated. For a time, NeXT and Apple worked upstream with GCC to make Objective-C better for the community. But, that whole time, Apple was carefully plotting its escape from the copyleft world. Fortuitously, Apple eventually discovered a technically brilliant (but sadly non-copylefted) research programming language and compiler system called LLVM. Since then, Apple has sunk millions of dollars into making LLVM better. On the surface, that seems like a win for software freedom, until you look at the bigger picture: their goal is to end copyleft compilers. Their goal is to pick and choose when and how programming language software is liberated. Swift is not a shining example of Apple joining us in software freedom; rather, it's a recent example of Apple's long-term strategy to manipulate open source — giving our community occasional software freedom on Apple's own terms. Apple gives us no bread but says let them eat cake instead.

    Apple's got PR talent. They understand that merely announcing the possibility of liberating proprietary software gets press. They know that few people will follow through and determine how it went. Meanwhile, the standing story becomes: Wait, didn't Apple open source Swift anyway? Already, that false soundbite's grip strengthens, even though the answer remains a resounding No! However, I suspect that Apple will probably meet most of their public pledges. We'll likely see pieces of Swift 2.0 thrown over the wall. But the best stuff will be kept proprietary. That's already happening with LLVM, anyway; Apple already ships a no-source-available fork of LLVM.

    Thus, Apple's announcement didn't happen in a void. Apple didn't just discover open source after years of neutrality on the topic. Apple's move is calculated, which led various industry pundits like O'Grady and Weinberg to ask hard questions (some of which are similar to mine). Yet, Apple's hype is so good that it did convince one trade association leader.

    To me, Apple's not-yet-executed move to liberate some of the Swift 2.0 code seems a tactical stunt to win over developers who currently prefer the relatively more open nature of the Android/Linux platform. While nearly all the Android userspace applications are proprietary, and GPL violations on Android devices abound, at least the copyleft license of Linux itself provides the opportunity to keep the core operating system of Android liberated. No matter how much Swift code is released, such will never be true with Apple.

    I'm often pointing out in my recent talks how complex and treacherous the Open Source and Free Software political climate has become in the last decade. Here's a great example: Apple is a wily opponent, utilizing Open Source (the cooption of Free Software) to manipulate the press and hoodwink the would-be spokespeople for Linux into supporting them. Many of us software freedom advocates have predicted for years that Free-Software-unfriendly companies like Apple would liberate more and more code under non-copyleft licenses in an effort to create walled gardens of seeming software freedom. I don't revel in my past accuracy of such predictions; rather, I feel simply the hefty weight of Cassandra's curse.

    Posted on Monday 15 June 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2015-06-03: The Satirized Is the Satirist, or Who Bought the “Journalists”?

    I watched the most recent Silicon Valley episode last night. I laughed at some parts (not as much as a usual episode) and then there was a completely unbelievable tech-related plot twist — quite out of character for that show. I was surprised.

    When the credits played, my jaw dropped when I saw the episode's author was Dan Lyons. Lyons (whose work has been promoted by the Linux Foundation) once compared me to a communist and a member of organized crime (in Forbes, a prominent publication for the wealthy) because of my work enforcing the GPL.

    In the years since Lyons' first anti-software freedom article (yes, there were more), I've watched many who once helped me enforce the GPL change positions and oppose GPL enforcement (including allies who once received criticism alongside me). Many such allies went even further — publicly denouncing my work and regularly undermining GPL enforcement politically.

    Attacks by people like Dan Lyons — journalists well connected with industry trade associations and companies — are one reason so many people are too afraid to enforce the GPL. I've wondered for years why the technology press has such a pro-corporate agenda, but it eventually became obvious to me in early 2005 when listening to yet another David Pogue Apple product review: nearly the entire tech press is bought and paid for by the very companies on which they report! The cartoonish level of Orwellian fear across our industry of GPL enforcement is but one example of many for-profit corporate agendas that people like Lyons have helped promulgate through their pro-company reporting.

    Meanwhile, I had taken Silicon Valley (until this week) as pretty good satire on the pathetic state of the technology industry today. Perhaps Alec Berg and Mike Judge just liked Lyons' script — not even knowing that he is a small part of the problem they seek to criticize. Regardless of why his script was produced, the line between satirist and the satirized is clearly thinner than I imagined; it seems just as thin as the line between technology journalist and corporate PR employee.

    I still hope that Berg and Judge seek, just as Judge did in Office Space, to pierce the veil of for-profit corporate manipulation of employees and users alike. However, for me, the luster of their achievement fades when I realize at least some of their creative collaborators participate in the very problem they criticize.

    Shall we start a letter writing campaign to convince them to donate some of Silicon Valley's proceeds to Free Software charities? Or, at the very least, to convince Berg to write one of his usually excellent episodes about how the technology press is completely corrupted by the companies on which they report?

    Posted on Wednesday 03 June 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2015-02-26: Vote Karen Sandler for Red Hat's Women In Open Source Award

    I know this decision is tough, as all the candidates in the list deserve an award. However, I hope that you'll choose to vote for my friend and colleague, Karen Sandler, for the 2015 Red Hat Women in Open Source Community Award. Admittedly, most of Karen's work has been for software freedom, not Open Source (i.e., her work has been community and charity-oriented, not for-profit oriented). However, giving her an “Open Source” award is a great way to spread the message of software freedom to the for-profit corporate Open Source world.

    I realize that there are some amazingly good candidates, and I admit I'd be posting a blog post to endorse someone else (No, I won't say who :) if Karen wasn't on the ballot for the Community Award. So, I wouldn't say you backed the wrong candidate if you vote for someone else. And, I'm eminently biased, since Karen and I have worked together on Conservancy since its inception. But, if you can see your way through to it, I hope you'll give Karen your vote.

    (BTW, I'm not endorsing a candidate in the Academic Award race. I am just not familiar enough with the work of the candidates involved to make an endorsement. I even abstained from voting in that race myself because I didn't want to make an uninformed vote.)

    Posted on Thursday 26 February 2015 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2015-02-10: Trade Associations Are Never Neutral

    It's amazing what we let for-profit companies and their trade associations get away with. Today, Joyent announced the Node.js Foundation, in conjunction with various for-profit corporate partners and Linux Foundation (which is a 501(c)(6) trade association under the full control of for-profit companies).

    Joyent and their corporate partners claim that the Node.js Foundation will be neutral and provide open governance. Yet, they don't even say what corporate form the new organization will take, nor present its by-laws. There's no way that anyone can know if the organization will be neutral and provide open governance without at least that information.

    Meanwhile, I've spent years pointing out that what corporate form you choose matters. In the USA, if you pick a 501(c)(6) trade association (like Linux Foundation), the result is not a neutral non-profit home. Rather, a trade association simply promotes the interests of the for-profit businesses that control it. Such organizations don't have the community's interests at heart, but rather the interests of the for-profit corporate masters who control the Board of Directors. Sadly, most people tend to think that if you put the word “Foundation” in the name0, you magically get a neutral home and open governance.

    Fortunately for these trade associations, they hide behind the far-too-general term non-profit, and act as if all non-profits are equal. Why do trade association representatives and companies ignore the differences between charities and trade associations? Because they don't want you to know the real story.

    Ultimately, charities serve the public good. They can do nothing else, lest they run afoul of IRS rules. Trade associations serve the business interests of the companies that join them. They can do nothing else, lest they run afoul of IRS rules. I would certainly argue the Linux Foundation has done an excellent job serving the interests of the businesses that control it. They can be commended for meeting their mission, but that mission is not one to serve the individual users and developers of Linux and other Free Software. What will the mission of the Node.js Foundation be? We really don't know, but given who's starting it, I'm sure it will be to promote the businesses around Node.js, not its users and developers.


    0Richard Fontana recently pointed out to me that it is extremely rare for trade associations to call themselves foundations outside of the Open Source and Free Software community. He found very few examples of it in the wider world. He speculated that this may be an attempt to capitalize on the credibility of the Free Software Foundation, which is older than all other non-profits in this community by at least two decades. Of course, FSF is a 501(c)(3) charity, and since there is no IRS rule about calling a 501(c)(6) trade association by the name “Foundation”, this is a further opportunity to spread confusion about whom these organizations serve: business interests or the general public.

    Posted on Tuesday 10 February 2015 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

January

  • 2015-01-02: Weirdness with hplip package in Debian wheezy

    I suspect this information is of limited use because it's far too vague. At first, I didn't even file it as a Debian bug, because I don't think I have enough information here to report one properly. It's not dissimilar from the issues reported in Debian bug 663868, but the system in question doesn't have foo2zjs installed. So, I ultimately filed Debian Bug 774460.

    However, in searching around the Internet for the syslog messages below, I found very few results. So, in the interest of increasing the indexing on these error messages, I include the below:

                    Jan  2 18:29:04 puggington kernel: [ 2822.256130] usb 2-1: new high-speed USB device number 16 using ehci_hcd
                    Jan  2 18:29:04 puggington kernel: [ 2822.388961] usb 2-1: New USB device found, idVendor=03f0, idProduct=5417
                    Jan  2 18:29:04 puggington kernel: [ 2822.388970] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
                    Jan  2 18:29:04 puggington kernel: [ 2822.388977] usb 2-1: Product: HP Color LaserJet CP2025dn
                    Jan  2 18:29:04 puggington kernel: [ 2822.388983] usb 2-1: Manufacturer: Hewlett-Packard
                    Jan  2 18:29:04 puggington kernel: [ 2822.388988] usb 2-1: SerialNumber: 00CNGS705379
                    Jan  2 18:29:04 puggington kernel: [ 2822.390346] usblp0: USB Bidirectional printer dev 16 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
                    Jan  2 18:29:04 puggington udevd[25370]: missing file parameter for attr
                    Jan  2 18:29:04 puggington mtp-probe: checking bus 2, device 16: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1"
                    Jan  2 18:29:04 puggington mtp-probe: bus: 2, device: 16 was not an MTP device
                    Jan  2 18:29:04 puggington hp-mkuri: io/hpmud/model.c 625: unable to find [s{product}] support-type in /usr/share/hplip/data/models/models.dat
                    Jan  2 18:25:19 puggington kernel: [ 2596.528574] usblp0: removed
                    Jan  2 18:25:19 puggington kernel: [ 2596.535273] usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
                    Jan  2 18:25:24 puggington kernel: [ 2601.727506] usblp0: removed
                    Jan  2 18:25:24 puggington kernel: [ 2601.733244] usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
                    [last two repeat until unplugged]
                    

    I really think the problem relates specifically to hplip 3.12.6-3.1+deb7u1. As I said in the bug report, the following commands resolved the problem for me:

                    # dpkg --purge hplip
                    # dpkg --purge system-config-printer-udev
                    # aptitude install system-config-printer-udev
                    

    Posted on Friday 02 January 2015 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2014

December

  • 2014-12-23: Toward Civil Behavior

    I thought recently of a quote from a Season 1 episode of the Sopranos, A Hit is a Hit, wherein Tony Soprano's neighbor proclaims for laughs at a party, Sometimes I think the only thing separating American business from the Mob is [EXPLETIVE] whacking somebody.

    The line stuck with me in the decade and a half since I heard it. When I saw the episode in 1999, my career was basically just beginning, as I was just finishing graduate school and had just begun working for the FSF. I've often wondered over these years how close that quote — offered glibly to explore a complex literary theme — matches reality.

    Organized crime drama connects with audiences because such drama explores a primal human theme: given the human capacity for physical violence and notwithstanding the Enlightenment, how and why does physical violence find its way into otherwise civilized social systems? A year before my own birth, The Godfather explored the same theme famously with the line, It's not personal, Sonny. It's strictly business. I've actually heard a would-be community leader quote that line as a warped justification for his verbally abusive behavior.

    Before I explain further, I should state my belief that physical violence always crosses a line that's as wide as the Grand Canyon. Film depictions consider the question of whether the line is blurry, but it's certainly not. However, what intrigues me is how often “businesspeople” and celebrities will literally walk right up to the edge of that Grand Canyon, and pace back and forth there for days — and even years.

    In the politics of Free, Libre and Open Source Software (FLOSS), some people regularly engage in behavior right on that line: berating, verbal abuse, and intimidation. These behaviors are consistently tolerated, accepted, and sometimes lauded in FLOSS projects and organizations. I can report from direct experience: if you think what happens on public mailing lists is bad, what happens on the private phone calls and in-person meetings is even worse. The types of behavior that would-be leaders employ would surely shock you.

    I regularly ponder whether I have a duty to disclose how much worse the back-room behavior is compared to the already abysmal public actions. The main reason I don't (until a few decades from now in my memoirs — drafting is already underway ;) is that I suspect people won't believe me. The smart abusive people know how to avoid leaving a record of their most abusive behavior perpetrated against their colleagues. I know of at least one person who will refuse to have a discussion via email or IRC and insist on in-person or telephone meetings specifically because the person outright plans to act abusively and doesn't want a record.

    While it's certainly a relief that I cannot report a single incident of actual assault in the FLOSS community, I have seen behavior escalate from ill-advised and mean political strategies to downright menacing. For example, I often receive threats of public character assassination, and character assassination in the backchannel rumor mill remains ongoing. At a USENIX conference in the late 1990s, I saw Hans Reiser screaming and wagging his finger menacingly in the face of another Linux developer. During many FLOSS community scandals, women have received threats of physical violence. Nevertheless, many FLOSS “leaders” still consider psychological intimidation a completely reasonable course of action and employ it regularly.

    How long are we going to tolerate this, and should we simply tolerate it, merely because it doesn't cross that huge chasm (on the other side of which lies physical violence)? How close are we willing to get? Is it really true that any words are fair game, and nothing you can say is off-limits? (In my experience, verbally abusive people often use that claim as an iron-clad excuse.) But, if we don't start asking these questions regularly, our community culture will continue to deteriorate.

    I realize I'm just making a statement, and not proposing real action, which (I admit) is only marginally helpful. As Tor recently showed, though, making a statement is the first step. In other words, saying “No, this behavior is not acceptable” is undoubtedly the only way to begin. Our community has been way too slow in taking that one step, so we've now got a lot of catching up to do to get to the right place in a reasonable timeframe.

    Posted on Tuesday 23 December 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-12-03: Help Fund Open-Wash-Free Zones

    Recently, I was forwarded an email from an executive at a 501(c)(6) trade association. In answering a question about accepting small donations for an “Open Source” project through their organization, the Trade Association Executive responded Accepting [small] donations [from individuals] is possible, but [is] generally not a sustainable way to raise funds for a project based on our experience. It's extremely difficult … to raise any meaningful or reliable amounts.

    I was aghast, but not surprised. The current Zeitgeist of the broader Open Source and Free Software community incubated his disturbing mindset. Our community now suffers from regular and active cooption by for-profit interests. The Trade Association Executive's fundraising claim — which probably even holds true in their subset of the community — shows the primary mechanism of cooption: encourage funding only from a few, big sources so they can slowly but surely dictate project policy.

    Today, more revenue than ever goes to the development of code released under licenses that respect software freedom. That belabored sentence contains the key subtlety: most Free Software communities are not receiving more funding than before; in fact, they're probably receiving less. Instead, Open Source became a fad, and now it's “cool” for for-profit companies to release code, or channel funds through some trade associations to get the code they want written and released. This problem is actually much worse than traditional open-washing. I'd call this for-profit cooption its own subtle form of open-washing: picking a seemingly acceptable license for the software, but “engineering” the “community” as a proxy group controlled by for-profit interests.

    This cooption phenomenon leaves the community-oriented efforts of Free Software charities underfunded and (quite often) under attack. These same companies that fund plenty of Open Source development also often oppose copyleft. Meanwhile, the majority of Free Software projects that predate the “Open Source Boom” didn't rise to worldwide fame and discover a funding bonanza. Such less famous projects still struggle financially for the very basics. For example, I participate in email threads nearly every day with Conservancy member projects who are just trying to figure out how to fund sending developers to a conference to give a talk about their project.

    Thus, a sad kernel of truth hides in the Trade Association Executive's otherwise inaccurate statement: big corporate donations buy influence, and a few of our traditionally community-oriented Free Software projects have been “bought” in various ways with this influx of cash. The trade associations seek to facilitate more of this. Unless we change our behavior, the larger Open Source and Free Software community may soon look much like the political system in the USA: where a few lobbyist-like organizations control the key decision-making through funding. In such a structure, who will stand up for those developers who prefer copyleft? Who will make sure individual developers receive the organizational infrastructure they need? In short, who will put the needs of individual developers and users ahead of for-profit companies?

    Become a Conservancy Supporter!

    The answer is simple: non-profit 501(c)(3) charities in our community. These organizations are required by IRS regulation to pass a public support test, which means they must seek large portions of their revenue from individuals in the general public and not receive too much from any small group of sources. Our society charges these organizations with the difficult but attainable tasks of (a) answering to the general public, and never to for-profit corporate donors, and (b) funding the organization via mechanisms appropriate to that charge. The best part is that you, the individual, have the strongest say in reaching those goals.

    Those who favor for-profit corporate control of “Open Source” projects will always insist that Free Software initiatives and plans just cannot be funded effectively via small, individual donations. Please, for the sake of software freedom, help us prove them wrong. There's even an easy way that you can do that. For just $10 a month, you can join the Conservancy Supporter program. You can help Conservancy stand up for Free Software projects who seek to keep project control in the hands of developers and users.

    Of course, I realize you might not like my work at Conservancy. If you don't, then give to the FSF instead. If you don't like Conservancy nor the FSF, then give to the GNOME Foundation. Just pick the 501(c)(3) non-profit charity in the Free Software community that you like best and donate. The future of software freedom depends on it.

    Posted on Wednesday 03 December 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2014-11-11: Groupon Tried To Take GNOME's Name & Failed

    [ I'm writing this last update to this post, which I posted at 15:55 US/Eastern on 2014-11-11, above the original post (and its other update), since the first text below is the most important message about this situation. (Please note that I am merely a mundane GF member, and I don't speak for GF in any way.) ]

    There is a lesson to be learned here, now that Groupon has (only after public admonishing from the GNOME Foundation) decided to do what the GNOME Foundation asked of them from the start. Specifically, I'd like to point out how it's all too common for for-profit companies to treat non-profit charities quite badly, even when the non-profit charity is involved in an endeavor that the for-profit company nominally “supports”.

    The GNOME Foundation (GF) Board minutes are public; you can go and read them. If you do, you'll find that for many months, GF has been spending substantial time and resources to deal with this issue. They've begged Groupon to be reasonable, and Groupon refused. Then, GF (having at least a few politically savvy folks on their Board of Directors) decided they had to make the (correct) political next move and go public.

    As a professional “Free Software politician”, I can tell you from personal experience that going public with a private dispute is always a gamble. It can backfire, and thus is almost always a “last hope” before the only other option: litigation. But, Groupon's aggressive stance and deceitful behavior seems to have left GF with little choice; I'd have done the same in GF's situation. Fortunately, the gamble paid off, and Groupon caved when they realized that GF would win — both in the court of public opinion and in a real court later.

    However, this tells us something about the ethos of Groupon as a company: they are willing to waste the resources of a tiny non-profit charity (which is currently run exclusively by volunteers) simply because Groupon thought they could beat that charity down by outspending them. And, it's not as if it's a charity with a mission Groupon opposes — it's a charity operating in a space which Groupon claims to love.

    I suppose I'm reacting so strongly to this because this is exactly the kind of manipulative behavior I see every day from GPL violators. The situations are quite analogous: a non-profit charity, standing up for a legal right of a group of volunteer Free Software developers, is viewed by that company like a bug the company can squash with their shoe. The company only gives up when they realize the bug won't die, and they'll just have to give up this time and let the bug live.

    GF frankly and fortunately got off a little light. For my part, the companies (and their cronies) that oppose copyleft have called me a “copyright troll”, “guilty of criminal copyright abuse”, and also accused me of enforcing the GPL merely to “get rich” (even though my salary has been public since 1999 and is less than all of theirs). Based on my experience with GPL enforcement, I can assure you: Groupon had exactly two ways to go politically: either give up almost immediately once the dispute was public (which they did), or start attacking GF with dirty politics.

    Having personally often faced the aforementioned “next political step” by the for-profit company in similar situations, I'm thankful that GF dodged that, and we now know that Groupon is unlikely to make dirty political attacks against GF as their next move. However, please don't misread this situation: Groupon didn't “do something nice just because GF asked them to”, as the Groupon press people are no doubt at this moment feeding the tech press for tomorrow's news cycle. The real story is: “Groupon stonewalled, wasting limited resources of a small non-profit for months, and gave up only when the non-profit politically outflanked them”.


    My original post and update from earlier in the day on 2014-11-11 follows as they originally appeared:

    It's probably been at least a decade, possibly more, since I saw a proprietary software company attempt to take the name of an existing Free Software project. I'm very glad GNOME Foundation had the forethought to register their trademark, and I'm glad they're defending it.

    It's important to note that names are really different from copyrights. I've been a regular critic of the patent and copyright systems, particularly as applied to software. However, the trademark system, while it has some serious flaws, has at its root a useful principle: people looking for stuff they really want shouldn't be confused by what they find. (I remember as a kid the first time I got a knock-off toy and I was quite frustrated and upset for being duped.) Trademark law is designed primarily to prevent the public from being duped.

    Trademark is also designed to prevent a new actor in the marketplace from gaining advantage using the good name of an existing work. Of course, that's what Groupon is doing here, but Groupon's position seems to have come from the sleaziest of their attorneys and it's completely disingenuous: Oh, we never heard of GNOME and we didn't even search the trademark database before filing. Meanwhile, now that you've contacted us, we're going to file a bunch more trademarks with your name in them. BTW, the odds that they are lying about never searching the USPTO database for GNOME are close to 100%. I have been involved with registration of many a trademark for a Free Software project: the first thing you do is search the trademark database. The USPTO even provides a public search engine for it!

    Finally, GNOME's legal battle is not merely their own. Proprietary software companies always think they can bully Free Software projects. They figure Free Software just doesn't matter that much and doesn't have the resources to fight. Of course, one major flaw in the trademark system is that it is expensive (because of the substantial time investment needed by trademark experts) to fight an attack like this. Therefore, please donate to the GNOME Foundation to help them in this fight. This is part of a proxy war against all proprietary software companies that think they can walk all over a Free Software project. Thus, this issue relates to many others in our community. We have to show the wealthy companies that Free Software projects with limited resources are not pushovers, but non-profit charities like GNOME Foundation cannot do this without your help.

    Update on 2014-11-11 at 12:23 US/Eastern: Groupon responded to the GNOME Foundation publicly on their “engineering” site. I wrote the following comment on that page and posted it, but of course they refused to allow me to post a comment0, so I've posted my comment here:

    If you respected software freedom and the GNOME project, then you'd have already stopped trying to use their good name (which was trademarked before your company was even founded) to market proprietary software. You say you'd be glad to look for another name; I suspect that was GNOME Foundation's first request to you, wasn't it? Are you saying the GNOME Foundation has never asked you to change the name of the product you've been calling GNOME?

    Meanwhile, your comments about “open source” are suspect at best. Most technology companies these days have little choice but to interact in some ways with open source. I see of course, that Groupon has released a few tidbits of code, but your website is primarily proprietary software. (I notice, for example, a visit just to your welcome page at groupon.com attempts to install a huge amount of proprietary Javascript on my machine — lucky I use NoScript to reject it). Therefore, your argument that you “love open source” is quite dubious. Someone who loves open source doesn't just liberate a few tidbits of their code, they embrace it fully. To be accurate, you probably should have said: We like open source a little bit.

    Finally, your statement, which is certainly well-drafted Orwellian marketing-speak, doesn't actually answer any of the points the GNOME Foundation raised with you. According to the GNOME Foundation, you were certainly communicating, but in the meantime you were dubiously registering more infringing trademarks with the USPTO. The only reasonable conclusion is that you used the communication to buy time to stab GNOME Foundation in the back further. I do a lot of work defending copyleft communities against companies that try to exploit and mistreat those communities, and yours are the exact types of manipulative tactics I often see in those negotiations.


    0While it's of course standard procedure for websites to refuse comments, I find it additionally disingenuous when a website looks like it accepts comments, but then refuses some. Obviously, I don't think trolls should be given a free pass to submit comments, but I rather like the solution of simple, full disclosure: Groupon should disclose that they are screening some comments. This, BTW, is why I just use a third party application (pump.io) for my comments. Anyone can post. :)

    Posted on Tuesday 11 November 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-11-08: Branding GNU Mailman Headers & Footers

    As always, when something takes me a while to figure out, I try to post the generally useful technical information on my blog. For the new copyleft.org site, I've been trying to get all the pages branded properly with the header/footer. This was straightforward for ikiwiki (which hosts the main site), but I spent an hour searching around this morning for how to brand the GNU Mailman instance on lists.copyleft.org.

    Ultimately, here's what I had to do to get everything branded, and I'm still not completely sure I found every spot. It seems that if someone wanted to make a useful patch to GNU Mailman, they could offer up a change that unifies the HTML templating and branding. In the meantime, at least for GNU Mailman 2.1.15 as found in Debian 7 (wheezy), here's what you have to do:

    First, some of the branding details are handled in the Python code itself, so my first action was:

                        # cd /var/lib/mailman/Mailman
                        # cp -pa htmlformat.py /etc/mailman
                        # ln -sf /etc/mailman/htmlformat.py htmlformat.py
                      
    I did this because htmlformat.py is not a file that the Debian package for Mailman installs in /etc/mailman, and I wanted to keep track with etckeeper that I was modifying that file.

    The primary modifications that I made to that file were in the MailmanLogo() method, to which I added a custom footer, and in the Document.Format() method, to which I added a custom header (at least when not self.suppress_head). The suppress_head thing was a red flag that told me it was likely not enough merely to change these methods to get a custom header and footer on every page. I was right. Ultimately, I had to also change nearly all the HTML files in /etc/mailman/en/, each of which needed different changes depending on which file it was, and there was no clear guideline. I guess I could have added <MM-Mailman-Footer> to every file that had a </BODY> but didn't yet have that tag, to get my footer everywhere, but in the end, I custom-hacked the whole thing.
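
    If you do want to try that simpler route, here's a rough sketch (not what I actually ran; just one way to find the candidate files, assuming the Debian paths described above) for listing the templates that have a closing </BODY> tag but no <MM-Mailman-Footer> tag yet:

                        # Find templates in /etc/mailman/en that contain </BODY> (case-insensitively)
                        # but do not yet reference MM-Mailman-Footer:
                        $ cd /etc/mailman/en
                        $ grep -il '</BODY>' *.html | xargs grep -iL 'MM-Mailman-Footer'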

    The full patches that I applied to all the Mailman files are available on copyleft.org, in case you want to see how I did it.

    Posted on Saturday 08 November 2014 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

October

  • 2014-10-10: Always Follow the Money

    Selena Larson wrote an article describing the Male Allies Plenary Panel at the Anita Borg Institute's Grace Hopper Celebration on Wednesday night. There is a video of the panel available (that's the YouTube link; the links on the Anita Borg Institute's website don't work with Free Software).

    Selena's article pretty much covers it. The only point that I thought useful to add was that one can “follow the money” here. Interestingly enough, Facebook, Google, GoDaddy, and Intuit were all listed as top-tier sponsors of the event. I find it a strange correlation that not one man on this panel is from a company that didn't sponsor the event. Are there no male allies to the cause of women in tech worth hearing from who work for companies that, say, don't have enough money to sponsor the event? Perhaps that's true, but it's somewhat surprising.

    Honest US Congresspeople often say that the main problem with corruption of campaign funds is that those who donate simply have more access and time to make their case to the congressional representatives. They aren't buying votes; they're buying access for conversations. (This was covered well in This American Life, Episode 461).

    I often see a similar problem in the “Open Source” world. The loudest microphones can be bought by the highest bidder (in various ways), so we hear more from the wealthiest companies. The amazing thing about this story, frankly, is that buying the microphone didn't work this time. I'm very glad the audience refused to let it happen! I'd love to see a similar reaction at the corporate-controlled “Open Source and Linux” conferences!

    Update later in the day: The conference I'm commenting on above is the same conference where Satya Nadella, CEO of Microsoft, said that women shouldn't ask for raises, and Microsoft is also a top-tier sponsor of the conference. I'm left wondering if anyone who spoke at this conference didn't pay for the privilege of making these gaffes.

    Posted on Friday 10 October 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2014-09-26: IRS Tax-Exempt Status & FaiF 0x4E

    Historically, I used to write a blog post for each episode of the audcast, Free as in Freedom, that Karen Sandler and I released. However, since I currently do my work on FaiF exclusively as a volunteer, I often find it difficult to budget time for a blog post about each show.

    However, enough happened in between when Karen and I recorded FaiF 0x4E and when it was released earlier this week that I thought I'd comment on those events.

    First, with regard to the direct content of the show, I've added some detail in the 0x4E show notes about additional research I did about various other non-software-related non-profit organizations that I mention in the show.

    The primary thrust of Karen's and my discussion on the show, though, regarded how the IRS is (somewhat strangely) the regulatory body for various types of organizational statuses, and that our legislation lumps many disparate activities together under the term “non-profit organizations” in the USA. The types of these available, outlined in 26 USC§501(c), vary greatly in what they do, and in what the IRS intends for them to do.

    Interestingly, a few events occurred in mainstream popular culture since FaiF 0x4E's recording that relate to this subject. First, on John Oliver's Last Week Tonight Episode 18 on 2014-09-21 (skip to 08:30 in the video to see the part I'm commenting on), John actually pulled out a stack of interlocking Form 990s from various related non-profit organizations and walked through some details of misrepresentation to the public regarding the organization's grant-making activities. As an avid reader of Form 990s, I was absolutely elated to see a popular comic pundit actually assign his staff the task of reviewing Form 990s to follow the money. (Although I wish he hadn't wasted the paper to print them out merely to make a sight gag.)

    Meanwhile, the failure of just about everyone to engage in such research remains my constant frustration. I'm often amazed that people judge non-profit organizations merely based on a (Stephen-Colbert-style) gut reaction of truthiness rather than researching the budgetary actions of such organizations. Given that tendency, the mandatory IRS public disclosures for all these various non-profits end up almost completely hidden in plain sight.

    Granted, you sometimes have to make as many as three clicks, and type the name of the organization twice, on Foundation Center's Form 990 finder to find these documents. That's why I started to maintain the FLOSS Foundation gitorious repository of Form 990s of all the orgs related to Open Source and Free Software — hoping that a git-cloneable solution would be more appealing to geeks. Yet, it's rare that anyone besides those of us who maintain the repository reads these. The only notable exception is Brian Proffitt's interesting article back in March 2012, which made use of FLOSS Foundation Form 990 data. But, AFAIK, that's the only time the media has looked at any FLOSS Foundations' Form 990s.

    The final recent story related to non-profits was linked to by Conservancy Board of Directors member Mike Linksvayer on identi.ca. In the Slate article Mike references there, Jordan Weissmann points out that the NFL is a 501(c)(6). Weissmann further notes that permission for football to be classified under 501(c)(6) rules seems like pork-barrel politics in the first place.

    These disparate events — the Tea Party attacks against IRS 501(c)(4) denials, John Oliver's discussion of the Miss America Organization, Weissmann's specific angle in reporting the NFL scandals, and (more parochially) Yorba's 501(c)(3) and OpenStack Foundation's 501(c)(6) application denials — are brief moments of attention on non-profit structures in the USA. In such moments, we're invited to dig deeper and understand what is really going on, using public information that's readily accessible. So, why do so many people use truthiness rather than data to judge the performance and behavior of non-profit organizations? Why do so many funders, grant-makers, and donors admit to never even reading the Form 990s of the organizations they support and collaborate with? I ask, of course, rhetorically, but I'd be delighted if there were any answer beyond: “because they're lazy”.

    Posted on Friday 26 September 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-09-22: The LinkedIn Lawsuit Is a Step Forward But Doesn't Go Far Enough

    Years ago, I wrote a blog post about how I don't use Google Plus, Google Hangouts, Facebook, Twitter, Skype, LinkedIn or other proprietary network services. I talked in that post about how I'm under constant and immense social pressure to use these services. (It's often worse than the peer pressure one experiences as a teenager.)

    I discovered a few months ago, however, that one form of this peer pressure was actually a product of nefarious practices by one of the vendors — namely, LinkedIn. Today, I learned that a lawsuit is now proceeding against LinkedIn on behalf of the users whose contacts were spammed repeatedly by LinkedIn's clandestine use of people's address books.

    For my part, I suppose I should be glad that I'm “well connected”, but that means I get multiple emails from LinkedIn almost every single day, and indeed, as the article (linked to above) states, each person's spam arrives three times over a period of weeks. I was initially furious at people whom I'd met for selling my contact information to LinkedIn (which, of course, they did), but many of them told me that LinkedIn never informed them that such spam generation would occur once they'd completed the sale of all their contact data to LinkedIn.

    This is just yet another example of proprietary software companies mistreating users. If we had a truly federated LinkedIn-like service, we'd be able to configure our own settings in this regard. But, we don't have that. (I don't think anyone is even writing one.) This is precisely why it's important to boycott these proprietary solutions: so that, at the very least, we don't complacently forget that they're proprietary, or inadvertently mistreat our colleagues who don't use those services in the interim.

    Finally, the lawsuit seems to focus solely on the harm caused to LinkedIn users who were embarrassed professionally. (I can say that indeed I was pretty angry at many of my contacts for a while when I thought they were choosing to spam me three times each, so that harm is surely real.) But LinkedIn's violation of the CAN-SPAM Act should also not be ignored, and I hope someone will take action on that point, too.

    Posted on Monday 22 September 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-09-11: Understanding Conservancy Through the GSoC Lens

    [ A version of this post originally appeared on the Google Open Source Blog, and was cross-posted on Conservancy's blog. ]

    Software Freedom Conservancy, Inc. is a 501(c)(3) non-profit charity that serves as a home to Open Source and Free Software projects. Such is easily said, but in this post I'd like to discuss what that means in practice for an Open Source and Free Software project and why such projects need a non-profit home. In short, a non-profit home makes the lives of Free Software developers easier, because they have less work to do outside of their area of focus (i.e., software development and documentation).

    As the summer of 2014 ends, Google Summer of Code (GSoC) coordination work exemplifies the value a non-profit home brings to its Free Software projects. GSoC is likely the largest philanthropic program in the Open Source and Free Software community today. However, one of the most difficult things for organizations that seek to benefit from such programs is the administrative overhead necessary to take full advantage of them. Google invests heavily in making it easy for organizations to participate — such as by handling the details of stipend payments to students directly. However, to take full advantage of any philanthropic program, the benefiting organization has some work to do. For its member projects, Conservancy is the organization that gets that logistical work done.

    For example, Google kindly donates $500 to the mentoring organization for every student it mentors. However, these funds need to go “somewhere”. If the funds go to an individual, there are two inherent problems. First, that individual is responsible for taxes on that income. Second, funds that belong to the organization as a whole are now in the bank account of a single project leader. Conservancy solves both of those problems: because Conservancy is a tax-exempt charity, the mentor payments are available for organizational use under its tax exemption. Furthermore, Conservancy maintains earmarked funds for each of its projects. Thus, Conservancy keeps the mentor funds for the Free Software project, and the project leaders can later vote to make use of the funds in a manner that helps the project and Conservancy's charitable mission. Often, projects in Conservancy use their mentor funds to send developers to important conferences to speak about the project and recruit new developers and users.

    Meanwhile, Google also offers to pay travel expenses for two mentors from each mentoring organization to attend the annual GSoC Mentor Summit (and, this year, it's an even bigger Reunion conference!). Conservancy handles this work on behalf of its member projects in two ways. First, for developers who don't have a credit card or otherwise are unable to pay for their own flight and receive reimbursement later, Conservancy staff book the flights on Conservancy's credit card. For the other travelers, Conservancy handles the reimbursement details. On the back end of all of this, Conservancy handles all the overhead annoyances and issues in requesting the POs from Google, invoicing for the funds, and tracking to ensure payment is made. While the Google staff are incredibly responsive and helpful on these issues, the Googlers need someone on the project's side to take care of the details. That's what Conservancy does.

    GSoC coordination is just one of the many things that Conservancy does every day for its member projects. If there's anything other than software development and documentation that you can imagine a project needs, Conservancy does that job for its member projects. This includes not only mundane items such as travel coordination, but also issues as complex as trademark filings and defense, copyright licensing advice and enforcement, governance coordination and mentoring, and fundraising for the projects. Some of Conservancy's member projects have been so successful in Conservancy that they've been able to fund developer salaries — often part-time but occasionally full-time — for years on end to allow them to focus on improving the project's software for the public benefit.

    Finally, if your project seeks help with regard to handling its GSoC funds and travel, or anything else mentioned on Conservancy's list of services to member projects, Conservancy is welcoming new applications for membership. Your project could join Conservancy's more than thirty other member projects and receive these wonderful services to help your community grow and focus on its core mission of building software for the public good.

    Posted on Thursday 11 September 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2014-07-15: Why The Kallithea Project Exists

    [ This is a version of an essay that I originally published on Conservancy's blog ].

    Eleven days ago, Conservancy announced Kallithea. Kallithea is a GPLv3'd system for hosting and managing Mercurial and Git repositories on one's own servers. As Conservancy mentioned in its announcement, Kallithea is indeed based on code released under GPLv3 by RhodeCode GmbH. Below, I describe why I was willing to participate in helping Conservancy become a non-profit home to an obvious fork (as this is the first time Conservancy ever welcomed a fork as a member project).

    The primary impetus for Kallithea is that more recent versions of RhodeCode GmbH's codebase contain a very unorthodox and ambiguous license statement, which states:

    (1) The Python code and integrated HTML are licensed under the GPLv3 license as is RhodeCode itself.
    (2) All other parts of the RhodeCode including, but not limited to the CSS code, images, and design are licensed according to the license purchased.

    Simply put, this licensing scheme is either (a) a GPL violation, (b) an unclear license permission statement under the GPL that leaves redistributors unsure of their rights, or (c) both.

    When members of the Mercurial community first brought this license to my attention about ten months ago, my first focus was to form a formal opinion regarding (a). Of course, I did form such an opinion, and you can probably guess what that is. However, I realized a few weeks later that this analysis really didn't matter in this case; the situation called for a more innovative solution.

    Indeed, I recalled at that time the disputes between AT&T and the University of California at Berkeley over BSD. In that case, while nearly all of the BSD code was adjudicated as freely licensed, the dispute itself was painful for the BSD community. BSD's development slowed nearly to a standstill for years while the legal disagreement was resolved. Court action — even if you're in the right — isn't always the fastest or best way to push forward an important Free Software project.

    In the case of RhodeCode's releases, there was an obvious and more productive solution. Namely, the 1.7.2 release of RhodeCode's codebase, written primarily by Marcin Kuzminski, was fully released under GPLv3-only, and provided an excellent starting point for a GPLv3'd fork. Furthermore, some of the improved code in the 2.2.5 era of RhodeCode's codebase was explicitly licensed under GPLv3 by RhodeCode GmbH itself. Finally, many volunteers produced patches for all versions of RhodeCode's codebase and released those patches under GPLv3, too. Thus, there was already a burgeoning GPLv3-friendly community yearning to begin.

    My primary contribution, therefore, was to lead the process of vetting and verifying a completely indisputable GPLv3'd version of the codebase. This was extensive and time consuming work; I personally spent over 100 hours to reach this point, and I suspect many Kallithea volunteers have already spent that much and more. Ironically, the most complex part of the work so far was verifying and organizing the licensing situation regarding third-party Javascript (released under a myriad of various licenses). You can see the details of that work by reading the revision history of Kallithea (or, you can read an overview in Kallithea's LICENSE file).

    As with any Free Software codebase fork, acrimony and disagreement led to Kallithea's creation. However, as the person who made most of the early changesets for Kallithea, I want to thank RhodeCode GmbH for explicitly releasing some of their work under GPLv3. Even as I hereby reiterate publicly my previously private request that RhodeCode GmbH correct the parts of their licensing scheme that are (at best) problematic, and (at worst) GPL-violating, I also point out this simple fact to those who have been heavily criticizing and admonishing RhodeCode GmbH: the situation could be much worse! RhodeCode could have simply never released any of their code under the GPLv3 in the first place. After all, there are many well-known code hosting sites that refuse to release any of their code (or release only a pittance of small components). By contrast, the GPLv3'd RhodeCode software was nearly a working system that helped bootstrap the Kallithea community. I'm grateful for that, and I welcome RhodeCode developers to contribute to Kallithea under GPLv3. I note, of course, that RhodeCode developers sadly can't incorporate any of our improvements into their codebase, due to their problematic license. However, I extend again my offer (also made privately last year) to work with RhodeCode GmbH to correct its licensing problems.

    Posted on Tuesday 15 July 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

  • 2014-06-18: USPTO Affirms Copyleft-ish Hack on Trademark

    I don't often say good things about the USPTO, so I should take the opportunity: the trademark revocation hack to pressure a change of the name of the sports team called the Redskins was a legal hack of the same caliber as copyleft. Presumably Blackhorse deserves the credit for this hack, but the USPTO showed it was sound.

    Update, 2014-06-19 & 2014-06-20: A few have commented that this isn't a hack in the way copyleft is. They have not made an argument for this, only pointed out that the statute prohibits racially disparaging trademarks. I thought it would be obvious why I was calling this a copyleft-ish hack, but I guess I need to explain. Copyleft uses copyright law to pursue a social good unrelated to copyright at all: it uses copyright to promote a separate social aim — the freedom of software users. Similarly, I strongly suspect Blackhorse doesn't care one whit about trademarks and why they exist, or even that they exist. Blackhorse is using the trademark statute to put financial pressure on an institution that is doing social harm — specifically, by reversing the financial incentives of the institution bent on harm. This is analogous to the way copyleft manipulates the financial incentives of software development toward software freedom using the copyright statute. I explain more in this comment.

    Fontana's comments argue that the USPTO press release is designed to distance itself from the TTAB's decision. Fontana's point is accurate, but the TTAB is ultimately part of the USPTO. Even if some folks at the USPTO don't like the TTAB's ruling, the USPTO is actually arguing with itself, not a third party. Fontana further pointed out that the TTAB is an Article I tribunal, so there can be Executive Branch “judges” who have some level of independence. Thanks to Fontana for pointing to that research; my earlier version of this post was incorrect, and I've removed the incorrect text. (Pam Chestek, BTW, was the first to point this out, but Fontana linked to the documentation.)

    Posted on Wednesday 18 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-11: Node.js Removes Its CLA

    I've had my disagreements with Joyent's management of the Node.js project. In fact, I am generally auto-skeptical of any Open Source and/or Free Software project run by a for-profit company. However, I also like to give credit where credit is due.

    Specifically, I'd like to congratulate Joyent for making the right decision today to remove one of the major barriers to entry for contribution to the Node.js project: its CLA. In an announcement today (see the section labeled “Easier Contribution”), Joyent stated that it no longer requires contributors to sign the CLA and will (so it seems) accept contributions simply licensed under the permissive MIT license. In short, Node.js is, as of today, an inbound=outbound project.

    While I'd prefer that Joyent also switch the project to the Apache License 2.0 — or, even better, the Affero GPLv3 — I realize that neither of those things is likely to happen. :) Given that, dropping the CLA is the next best outcome possible, and I'm glad it has happened.


    For further reading on my positions against CLAs, please see these two older blog posts:

    Posted on Wednesday 11 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-09: Why Your Project Doesn't Need a Contributor Licensing Agreement

    [ This is a version of an essay that I originally published on Conservancy's blog ].

    For nearly a decade, a battle has raged between two distinct camps regarding something called Contributor Licensing Agreements (CLAs). I've previously written a long treatise on the issue. The article below is a summary of the basics of why CLAs aren't necessary.

    In the most general sense, a CLA is a formal legal contract between a contributor to a FLOSS project and the “project” itself0. Ostensibly, this agreement seeks to assure that the project, and/or its governing legal entity, has the appropriate permissions to incorporate contributed patches, changes, and/or improvements to the software and then distribute the resulting larger work.

    In practice, most CLAs in use today are deleterious overkill for that purpose. CLAs simply shift legal blame for any patent infringement, copyright infringement, or other bad acts from the project (or its legal entity) back onto its contributors. Meanwhile, since vetting every contribution for copyright and/or patent infringement is time-consuming and expensive, no existing organization actually does that work; it's infeasible to do so effectively. Thus, no one knows (in the general case) whether the contributors' assurances in the CLA are valid. Indeed, since it's so difficult to determine if a given work of software infringes a patent, it's highly likely that any contributor submitting a patent-infringing patch did so inadvertently and without any knowledge that the patent even existed — even regarding patents controlled by their own company1.

    The undeniable benefit to CLAs relates to contributions from for-profit companies who likely do hold patents that read on the software. It's useful to receive from such companies (whenever possible) a patent license for any patents exercised in making, using or selling the FLOSS containing that company's contributions. I agree that such an assurance is nice to have, and I might consider supporting CLAs if there was no other cost associated with using them. However, maintenance of CLA-assent records requires massive administrative overhead.

    More disastrously, CLAs require the first interaction between a FLOSS project and a new contributor to involve a complex legal negotiation and a formal legal agreement. CLAs twist the empowering, community-oriented, enjoyable experience of FLOSS contribution into an annoying exercise in pointless bureaucracy, which (if handled properly) requires a business-like, grating haggle between necessarily adverse parties. And, that's the best possible outcome. Admittedly, few contributors actually bother to negotiate about the CLA. CLAs frankly rely on our “Don't Read & Click ‘Agree’” culture — thereby tricking contributors into bearing legal risk. FLOSS project leaders shouldn't rely on “gotcha” fine print like car salespeople.

    Thus, I encourage those considering a CLA to look past the “nice assurances we'd like to have — all things being equal” and focus on “what legal assurances our FLOSS project actually needs in order to thrive”. I've spent years doing that analysis, and I've concluded quite simply: in this regard, all a project and its legal home actually need is a clear statement and/or assent from the contributor that they offer the contribution under the project's known FLOSS license. Long ago, the now-famous Open Source lawyer Richard Fontana dubbed this legal policy with the name “inbound=outbound”. It's a powerful concept that shows clearly the redundancy of CLAs.

    Most importantly, “inbound=outbound” makes a strong and correct statement about the FLOSS license the project chooses. FLOSS licenses must contain all the legal terms that are necessary for a project to thrive. If the project is unwilling to accept (inbound) contribution of code under the terms of the license it chose, that's a clear indication that the project's (outbound) license has serious deficiencies that require immediate remedy. This is precisely why I urge projects to select a copyleft license with a strong patent clause, such as the GPLv3. With a license like that, CLAs are unnecessary.

    Meanwhile, the issue of requesting the contributors' assent to the projects' license is orthogonal to the issue of CLAs. I do encourage use of clear systems (either formal or informal) for that purpose. One popular option is called the Developer Certificate of Origin (DCO). Originally designed for the Linux project and published by the OSDL under the CC-By-SA license, the DCO is a mechanism to assure contributors have confirmed their right to license their contribution under the project's license. Typically, developers indicate their agreement to the DCO with a specially-formed tag in their DVCS commit log. Conservancy's Evergreen, phpMyAdmin, and Samba projects all use modified versions of the DCO.
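
    For illustration (this is a toy example, not any particular project's actual hook), the conventional form of that tag is a “Signed-off-by:” trailer on the commit message, as popularized by Linux's DCO workflow; a tiny commit-msg hook along these lines could check for it:

                    #!/usr/bin/env python
                    # Toy sketch of a git commit-msg hook that rejects commits lacking a
                    # DCO-style "Signed-off-by:" trailer.  Illustrative only; real
                    # projects differ in how (and whether) they verify the trailer.
                    import re
                    import sys

                    def has_signoff(message):
                        # Matches lines like: Signed-off-by: Jane Hacker <[email protected]>
                        return re.search(r'^Signed-off-by: .+ <.+@.+>$', message,
                                         re.MULTILINE) is not None

                    if __name__ == '__main__':
                        # git invokes this hook with the path to the commit message file.
                        with open(sys.argv[1]) as f:
                            message = f.read()
                        if not has_signoff(message):
                            sys.stderr.write('Aborting: commit message lacks a '
                                             'Signed-off-by trailer.\n')
                            sys.exit(1)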

    Conservancy's Selenium project uses a license assent mechanism somewhat closer to a formal CLA. In this method, the contributors must complete a special online form wherein they formally assent to the license of the project. The project keeps careful records of all assents separately from the code repository itself. This mechanism is a bit heavy-weight, but it ultimately implements the same inbound=outbound concept, just more formally.

    However, most projects use the same time-honored and successful mechanism used throughout the 35 year history of the Free Software community. Simply, they publish clearly in their developer documentation and/or other key places (such as mailing list subscription notices) that submissions using the normal means to contribute to the project — such as patches to the mailing list or pull and merge requests — indicate the contributors' assent for inclusion of that software in the canonical version under the project's license.

    Ultimately, CLAs are much ado about nothing. Lawyers are trained to zealously represent their clients, and as such they often seek an outcome that maximizes the leverage of their clients' legal rights, while typically ignoring other important benefits that are outside of their profession. The most ardent supporters of CLAs have yet to experience first-hand the arduous daily work required to manage a queue of incoming FLOSS contributions. Those of us who have done the latter easily see that avoiding additional barriers to entry is paramount. While a beautifully crafted CLA — jam-packed with legalese that artfully shifts all the blame onto the contributors — may make some corporate attorneys smile, I've never seen one bring anything but a frown and a sigh from FLOSS developers.


    0Only rarely does an unincorporated, unaffiliated project request CLAs. Typically, CLAs name a corporate entity — a non-profit charity (like Conservancy), a trade association (like OpenStack Foundation), or a for-profit company, as its ultimate beneficiary. On rare occasions, the beneficiary of a CLA is a single individual developer.

    1I've yet to meet any FLOSS developer who has read their own employer's entire patent portfolio.

    Posted on Monday 09 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-08: Resolving Weirdness In Thinkpad T60 Hotkeys

    In keeping with my tendency to write a blog post about any technical issue that takes me more than five minutes of Internet searching to figure out, I include below the resolution to a problem that took me, embarrassingly, nearly two and a half hours across two different tries to figure out.

    The problem appeared when I took the hard drive running Debian 7 (wheezy) out of the Lenovo Thinkpad T61 I had been using, which had failed, and put it into a Lenovo Thinkpad T60. (I've been trying to switch fully to the T60 for everything because it is supported by Coreboot.)

    [Image: a Lenovo T60 Thinkpad keyboard with the volume buttons circled in purple.]

    When I switched, everything was working fine, except the volume buttons on the Thinkpad T60 (those three buttons in the top left-hand corner of the keyboard, shown circled in purple in the image above) no longer did what I expected. I expected they would ultimately control the PulseAudio volume, doing the equivalent of pactl set-sink-mute 0 0 and appropriate pactl set-sink-volume 0 commands for my sound card. I noticed this because, when PulseAudio is running and you type those commands on the command line, the volume behaves properly, and, when running under X, I see the popup windows from my desktop environment showing the volume changes. So, I knew nothing was wrong with the sound configuration when I switched the hard drive to a new machine, since the command-line tools worked and did the right things. Somehow, the buttons were no longer triggering those actions in whatever manner they previously had.

    I assumed at first that the buttons simply generated X events. (It turns out they can, but the story there is a bit more complex.) When I ran xev, I saw those buttons did not, in fact, generate any X events. So, that made it clear that nothing from X windows “up” (i.e., up to the desktop software) had anything to do with the situation.

    So, I first proceeded to research whether these volume keys were supposed to generate X events. I discovered that there were indeed XF86VolumeUp, XF86VolumeDown and XF86VolumeMute key events (I'd seen those before, in fact, while doing similar research years ago). However, the advice online conflicted about whether the best way to solve this was to have the keys generate X events. Most of the discussions I found assumed the keys were already generating X events and gave advice about how to bind those keys to scripts or to your desktop setup of choice0.

    I found various old documentation about the thinkpad_acpi daemon, but I quickly found it was out of date: that functionality had long ago been incorporated directly into Linux's ACPI support and no longer requires an additional daemon. This led me to just start poking around to learn how the ACPI subsystem handles such keys.

    I quickly found the xev equivalent for acpi: acpi_listen. This was the breakthrough I needed to solve this problem. I ran acpi_listen and discovered that while other Thinkpad key sequences, such as Fn-Home (to increase brightness), generated output like:

                    video/brightnessup BRTUP 00000086 00000000 K
                    video/brightnessup BRTUP 00000086 00000000
                    
    but the volume up, down, and mute keys generated no output. Therefore, it was pretty clear at this point that the problem related to the ACPI configuration in some way. I had a feeling this would be a hard one to find a solution for.

    That's when I started poking around in /proc, and found that /proc/acpi/ibm/volume was changing each time I hit one of these keys. So, Linux clearly was receiving notice that these keys were pressed. Why, then, wasn't the ACPI subsystem notifying anything else, including whatever interface acpi_listen talks to?

    Well, this was a hard one to find an answer to. I have to admit that I found the answer through pure serendipity: I had already loaded this old bug report from a GNU/Linux distribution waning in popularity, and found that someone resolved the ticket with the command:

                    cp /sys/devices/platform/thinkpad_acpi/hotkey_all_mask /sys/devices/platform/thinkpad_acpi/hotkey_mask
                    
    This command:
                    # cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask /sys/devices/platform/thinkpad_acpi/hotkey_mask 
                    0x00ffffff
                    0x008dffff
                    
    quickly showed that the masks didn't match. So I did:
                    # cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask > /sys/devices/platform/thinkpad_acpi/hotkey_mask 
                    
    and that single change caused the buttons to work again as expected, including causing the popup notifications of volume changes and the like.

    Additional searching showed this hotkey issue is documented in Linux, in its Thinkpad ACPI documentation, which states:

    The hot key bit mask allows some control over which hot keys generate events. If a key is "masked" (bit set to 0 in the mask), the firmware will handle it. If it is "unmasked", it signals the firmware that thinkpad-acpi would prefer to handle it, if the firmware would be so kind to allow it (and it often doesn't!).

    I note that on my system, running the command the documentation recommends for resetting to defaults puts me back in the wrong state:

                    # cat /proc/acpi/ibm/hotkey 
                    status:         enabled
                    mask:           0x00ffffff
                    commands:       enable, disable, reset, <mask>
                    # echo reset > /proc/acpi/ibm/hotkey 
                    # cat /proc/acpi/ibm/hotkey 
                    status:         enabled
                    mask:           0x008dffff
                    commands:       enable, disable, reset, <mask>
                    # echo 0xffffffff > /proc/acpi/ibm/hotkey
                    

    So, I added that last command above to restore Linux's control of all the ACPI hot keys, which I suspect is what I want. I'll update the post if doing that causes other problems that I hadn't seen before. I'll also update the post to note whether this setting persists across reboots, as I haven't rebooted the machine since I did this. :)
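
    If it turns out the mask does not persist, a trivial script along these lines (a sketch that assumes the same thinkpad_acpi sysfs files shown above and needs to run as root) could re-apply the full mask at boot:

                    #!/usr/bin/env python
                    # Sketch: re-apply the full ThinkPad hotkey mask at boot (run as root).
                    # Assumes the thinkpad_acpi sysfs files discussed above exist here.
                    BASE = '/sys/devices/platform/thinkpad_acpi/'

                    with open(BASE + 'hotkey_all_mask') as f:
                        all_mask = f.read().strip()

                    with open(BASE + 'hotkey_mask', 'w') as f:
                        f.write(all_mask)

                    print('hotkey_mask set to %s' % all_mask)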


    0Interestingly, as has happened to me often recently, much of the most useful information that I find about any complex topic regarding how things work in modern GNU/Linux distributions is found on the Arch or Crunchbang online fora and wikis. It's quite interesting to me that these two distributions appear to be the primary place where the types of information that every distribution once needed to provide are kept. Their wikis are becoming the canonical references of how a distribution is constructed, since much of the information found therein applies to all distributions, but distributions like Fedora and Debian attempt to make it less complex for the users to change the configuration.

    Posted on Sunday 08 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-04: Be Sure to Comment on FCC's NPRM 14-28

    I remind everyone today, particularly USA citizens, to be sure to comment on the FCC's Notice of Proposed Rulemaking (NPRM) 14-28. They even did a sane thing and provided an email address you can write to rather than using their poorly designed web forms, and PC Magazine published relatively complete instructions for other ways to comment. The deadline isn't for a while yet, but it's worth getting it done now so you don't forget. Below is my letter, in case anyone is interested.

    Dear FCC Commissioners,

    I am writing in response to NPRM 14-28 — your request for comments regarding the “Open Internet”.

    I am a trained computer scientist and I work in the technology industry. (I'm a software developer and software freedom activist.) I have subscribed to home network services since 1989, starting with the Prodigy service, and switching to Internet service in 1991. Initially, I used a PSTN single-pair modem and eventually upgraded to DSL in 1999. I still have a DSL line, but it's sadly not much faster than the one I had in 1999, and I explain below why.

    In fact, I've watched the situation get progressively worse, not better, since the Telecommunications Act of 1996. While my download speeds are a little bit faster than they were in the late 1990s, I now pay substantially more for only small increases in upload speeds, even in a major urban market. In short, it's become increasingly difficult to actually purchase true Internet connectivity service anywhere in the USA. But first, let me explain what I mean by “true Internet connectivity”.

    The Internet was created as a peer-to-peer medium where all nodes were equal. In the original design of the Internet, every device had its own IP address, and, if the user wanted, that device could be addressed directly and fully by any other device on the Internet. For its part, the network in between the two nodes was intended merely to move the packets between them as quickly as possible — treating all those packets the same way, and analyzing those packets only with publicly available algorithms that everyone agreed were correct and fair.

    Of course, the companies who typically appeal to (or even fight) the FCC want the true Internet to simply die. They seek to turn the promise of a truly peer-to-peer network of equality into a traditional broadcast medium that they control. They frankly want to manipulate the Internet into a mere television broadcast system (with the only improvement to that being “more stations”).

    Because of this, the following three features of the Internet — inherent in its design — are now extremely difficult for individual home users to purchase at reasonable cost from so-called “Internet providers” like Time Warner, Verizon, and Comcast:

    • A static IP address, which allows the user to be a true, equal node on the Internet. (And, related: IPv6 addresses, which could end the claim that static IP addresses are a precious resource.)
    • An unfiltered connection, that allows the user to run their own webserver, email server and the like. (Most of these companies block TCP ports 80 and 25 at the least, and usually many more ports, too).
    • Reasonable choices between the upload/download speed tradeoff.

    For example, in New York, I currently pay nearly $150/month to an independent ISP just to have a static, unfiltered IP address with 10 Mbps down and 2 Mbps up. I work from home, and 2 Mbps up is incredibly slow for modern usage. However, I still live in the Slowness, because upload speeds greater than that are prohibitively priced by every provider.

    In other words, these carriers have designed their networks to prioritize all downloading over all uploading, and to purposely place the user behind many levels of Network Address Translation and network filtering. In this environment, many Internet applications simply do not work (or require complex work-arounds that disable key features). As an example: true diversity in VoIP accessibility and service has almost entirely been superseded by proprietary single-company services (such as Skype) because SIP, designed by the IETF (in part) for VoIP applications, did not fully anticipate that nearly every user would be behind NAT and unable to use SIP without complex work-arounds.

    I believe this disastrous situation centers around problems with the Telecommunications Act of 1996. While the ILECs are theoretically required to license network infrastructure fairly at bulk rates to CLECs, I've frequently seen — both professionally and personally — wars waged against CLECs by ILECs. CLECs simply can't offer their own types of services that merely “use” the ILECs' connectivity. The technical restrictions placed by ILECs force CLECs to offer the same style of service the ILEC offers, and at a higher price (to cover their additional overhead in dealing with the ILECs)! It's no wonder there are hardly any CLECs left.

    Indeed, in my 25 year career as a technologist, I've seen many nasty tricks by Verizon here in NYC, such as purposeful work-slowdowns in resolution of outages and Verizon technicians outright lying to me and to CLEC technicians about the state of their network. For my part, I stick with one of the last independent ISPs in NYC, but I suspect they won't be able to keep their business going for long. Verizon either (a) buys up any CLEC that looks too powerful, or, (b) if Verizon can't buy them, Verizon slowly squeezes them out of business with dirty tricks.

    The end result is that we don't have real options for true Internet connectivity for home or on-site business use. I'm already priced out of getting a 10 Mbps upload with a static IP and all ports usable. I suspect that within 5 years, I'll be priced out of my current 2 Mbps upload with a static IP and all ports usable.

    I realize the problems that most users are concerned about on this issue relate to their ability to download bytes from third-party companies like Netflix. Therefore, it's all too easy for Verizon to play out this argument as if it's big companies vs. big companies.

    However, the real fallout from the current system is that the cost of personal Internet connectivity that allows individuals an equal existence on the network is so high that few bother. The consequence, thus, is that only those who are heavily involved in the technology industry even know what types of applications would be available if everyone had a static IP with all ports usable and equal upload and download speeds of 10 Mbps or higher.

    Yet, that's the exact promise of network connectivity that I was taught about as an undergraduate in Computer Science in the early 1990s. What I see today is the dystopian version of the promise. My generation of computer scientists have been forced to constrain their designs of Internet-enabled applications to fit a model that the network carriers dictate.

    I realize you can't possibly fix all these social ills in the network connectivity industry with one rule-making, but I hope my comments have perhaps given a slightly different perspective of what you'll hear from most of the other commenters on this issue. I thank you for reading my comments and would be delighted to talk further with any of your staff about these issues at your convenience.

    Sincerely,

    Bradley M. Kuhn,
    a citizen of the USA since birth, currently living in New York, NY.

    Posted on Wednesday 04 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2014-05-14: To Serve Users

    (Spoiler alert: spoilers regarding a 1950s science fiction short story that you may not have read appear in this blog post.)

    Mitchell Baker announced today that Mozilla Corporation (or maybe Mozilla Foundation? She doesn't really say…) will begin implementing proprietary software by default in Firefox at the behest of wealthy and powerful media companies. Baker argues this serves users: that Orwellian phrasing caught my attention most.

    [Image: from the Twilight Zone episode “To Serve Man”, showing the book with the alien title on the front and its translation.]

    In the old science fiction story, To Serve Man (which was later adapted for The Twilight Zone), aliens come to Earth and freely share various technological advances, and offer free visits to the alien world. Eventually, the narrator, who remains skeptical, begins translating one of their books. The title is innocuous, and even well-meaning: To Serve Man. Only too late does the narrator realize that the book isn't about service to mankind, but is rather — a cookbook.

    It's in the same spirit that Baker seeks to serve Firefox's users up on a platter to the MPAA, the RIAA, and like-minded wealthy for-profit corporations. Baker's only defense appears to be that other browser vendors have done the same, and cites specifically for-profit companies such as Apple, Google, and Microsoft.

    Theoretically speaking, though, the Mozilla Foundation is supposed to be a 501(c)(3) non-profit charity which told the IRS its charitable purpose was: to keep the Internet a universal platform that is accessible by anyone from anywhere, using any computer, and … develop open-source Internet applications. Baker fails to explain how switching Firefox to include proprietary software fits that mission. In fact, with a bit of revisionist history, she says that open source was merely an “approach” that Mozilla Foundation was using, not their mission.

    Of course, the Mozilla Foundation is actually a thin non-profit shell wrapped around a much larger entity called the Mozilla Corporation, which is a for-profit company. I have always been dubious about this structure, and actions like this one make it obvious that “Mozilla” is focused on being a for-profit company, competing with other for-profit companies, rather than a charity serving the public (at least, in the way that I mean “serving”).

    Meanwhile, I greatly appreciate that various Free Software communities maintain forks and/or alternative wrappers around many web browser technologies which, like Firefox, succumb easily to for-profit corporate control. These efforts (such as Debian's Iceweasel fork and GNOME's Epiphany interface to WebKit) provide a nice “canary in the coal mine” to confirm there is enough software-freedom-respecting code still released to make these browsers usable by those who care about software freedom and reject the digital restrictions management that Mozilla now embraces. OTOH, there is one item that Baker is right about: given that so few people oppose proprietary software, there soon may not be much of a web left for those of us who stand firmly for software freedom. Sadly, Mozilla announced today that it will no longer work to curtail that dystopia, and will instead help accelerate its onset.

    Related Links:

    Posted on Wednesday 14 May 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-05-10: Federal Appeals Court Decision in Oracle v. Google

    [ Update on 2014-05-13: If you're more of a listening rather than reading type, you might enjoy the Free as in Freedom oggcast that Karen Sandler and I recorded about this topic. ]

    I have a strange relationship with copyright law. Many copyright policies of various jurisdictions, the USA in particular, are draconian at best and downright vindictive at worst. For example, during the public comment period on ACTA, I commented that I think it's always wrong, as a policy matter, for copyright infringement to carry criminal penalties.

    That said, much of what I do in my work in the software freedom movement is enforcement of copyleft: assuring that the primary legal tool, which defends the freedom of the Free Software, functions properly, and actually works — in the real world — the way it should.

    As I've written about before at great length, copyleft functions primarily because it uses copyright law to stand up and defend the four freedoms. It's commonly called a hack on copyright: turning the copyright system which is canonically used to restrict users' rights, into a system of justice for the equality of users.

    However, it's this very activity that leaves me with a weird relationship with copyright. Copyleft uses the restrictive force of copyright in the other direction, which means that the greater the negative force, the more powerful the positive force. So, as I read the Federal Circuit Appeals Court's decision in Oracle v. Google yesterday, I had that strange feeling of simultaneous annoyance and contentment. In this blog post, I attempt to state why I am both glad for and annoyed with the decision.

    I stated clearly after Alsup's NDCA decision in this case that I never thought APIs were copyrightable, nor does any developer really think so in practice. But, when considering the appeal, note carefully that the court of appeals wasn't assigned the general job of considering whether APIs are copyrightable. Their job is to figure out whether the lower court made an error in judgment in this particular case, and to discern any issues that were missed previously. I think that's what the Federal Circuit Court attempted to do here, and while IMO they too erred regarding a factual issue, I don't think their decision is wholly useless nor categorically incorrect.

    Their decision is worth reading in full. I'd also urge anyone who wants to opine on this decision to actually read the whole thing (which so rarely happens in these situations). I bet most pundits out there opining already didn't read the whole thing. I read the decision as soon as it was announced, and I didn't get this post up until early Saturday morning because it took that long to read the opinion in detail, go back to other related texts to verify some details, and then write down my analysis. So, please, go ahead, read it now before reading this blog post further. My post will still be here when you get back. (And, BTW, don't fall for that self-aggrandizing ballyhoo some lawyers will feed you that only they can understand things like court decisions. In fact, I think programmers are going to have an easier time reading decisions about this topic than lawyers, as the technical facts are highly pertinent.)

    Ok, you've read the decision now? Good. Now, I'll tell you what I think in detail: (As always, my opinions on this are my own, IANAL and TINLA and these are my personal thoughts on the question.)

    The most interesting thing, IMO, about this decision is that the Court focused on a fact from trial that clearly has more nuance than they realize. Specifically, the Court claims many times in this decision that Google conceded that it copied the declaring code used in the 37 packages verbatim (pg 12 of the Appeals decision).

    I suspect the Court imagined the situation too simply: that there was a huge body of source code text, and that Google engineers sat there, simply cutting-and-pasting from Oracle's code right into their own code for each of the 7,000 lines or so of function declarations. However, I've chatted with some people (including Mark J. Wielaard) who are much more deeply embedded in the Free Software Java world than I am, and they pointed out it's highly unlikely anyone did a blatant cut-and-paste job to implement Java's core library API, for various reasons. I thus suspect that Google didn't do it that way either.

    So, how did the Appeals Court come to this erroneous conclusion? On page 27 of their decision, they write: Google conceded that it copied it verbatim. Indeed, the district court specifically instructed the jury that ‘Google agrees that it uses the same names and declarations’ in Android. Charge to the Jury at 10. So, I reread page 10 of the final charge to the jury. It actually says something much more verbose and nuanced. I've pasted together below all the parts where Alsup's jury charge mentions this issue (emphasis mine):

    Google denies infringing any such copyrighted material … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. … The copyrighted Java platform has more than 37 API packages and so does the accused Android platform. As for the 37 API packages that overlap, Google agrees that it uses the same names and declarations but contends that its line-by-line implementations are different … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. Google states, however, that the elements it has used are not infringing … With respect to the API documentation, Oracle contends Google copied the English-language comments in the registered copyrighted work and moved them over to the documentation for the 37 API packages in Android. Google agrees that there are similarities in the wording but, pointing to differences as well, denies that its documentation is a copy. Google further asserts that the similarities are largely the result of the fact that each API carries out the same functions in both systems.

    Thus, in the original trial, Google did not admit to copying any of Oracle's text, documentation, or code (other than the rangeCheck thing, which is moot on the API copyrightability issue). Rather, Google said two separate things: (a) it did not copy any material (other than rangeCheck), and (b) it admitted that the names and declarations are the same, not because Google copied those names and declarations from Oracle's own work, but because they perform the same functions. In other words, Google makes various arguments about why those names and declarations look the same, but for reasons other than “mundane cut-and-paste copying from Oracle's copyrighted works”.

    For us programmers, this is of course a distinction without any difference. Frankly, when we programmers look at this situation, we make many obvious logical leaps at once. Specifically, we all think APIs in the abstract can't possibly be copyrightable (since that's absurd), and we work backwards from there with some quick thinking that goes something like this: it doesn't make sense for APIs to be copyrightable because if you explain to me in enough detail what the API has to do, such that I have sufficient information to implement it, my declarations of the functions of that API are going to necessarily be quite similar to yours — so much so that they'll be nearly indistinguishable from what those function declarations might look like if I had cut-and-pasted them. So, the fact is, if we both sit down separately to implement the same API, well, then we're likely going to have two works that look similar. However, it doesn't mean I copied your work. And, besides, it makes no sense for APIs, as a general concept, to be copyrightable, so why are we discussing this again?0
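
    A toy illustration of that reasoning (in Python rather than Java, with function and parameter names I've made up for this example): if a published API specification says “provide a function named range_check taking arr, from_index, and to_index”, two programmers who never see each other's code still end up with identical declaration lines, because the specification leaves them no other choice:

                    # Toy illustration with made-up names (not the Java code at issue):
                    # two independent implementations of the same documented API.

                    # Programmer A's implementation:
                    def range_check(arr, from_index, to_index):
                        if from_index > to_index:
                            raise ValueError('from_index greater than to_index')
                        if from_index < 0 or to_index > len(arr):
                            raise IndexError('range outside the array')

                    # Programmer B's implementation, written separately (redefining the
                    # name here only so this file runs as a single script): the body
                    # differs, yet the "declaring" line is byte-for-byte identical to A's.
                    def range_check(arr, from_index, to_index):
                        if not 0 <= from_index <= to_index <= len(arr):
                            raise IndexError('bad range [%d, %d)' % (from_index, to_index))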

    But this is reasoning a programmer can love but the Courts hate. The Courts want to take a set of laws the legislature passed, some precedents that their system gave them, along with a specific set of facts, and then see what happens when the law is applied to those facts. Juries, in turn, have the job of finding which facts are accurate, which aren't, and then coming to a verdict, upon receiving instructions about the law from the Court.

    And that's right where the confusion began in this case, IMO. The original jury, to start with, likely had trouble distinguishing three distinct things: the general concept of an API, the specification of the API, and the implementation of an API. Plus, they were told by the judge to assume APIs were copyrightable anyway. Then, it got more confusing when they looked at two implementations of an API, parts of which looked similar for purely mundane technical reasons, and assumed (incorrectly) that textual copying from one file to another was the only way to get to that same result. Meanwhile, the jury was likely further confused that Google argued various affirmative defenses against copyright infringement in the alternative.

    So, what happens with the Appeals Court? The Appeals Court, of course, has no reason to believe the jury's finding of fact is wrong, and it's simply not the appeals court's job to replace the original jury, but to analyze the matters of law decided by the lower court. That's why I'm admittedly troubled and downright confused that the ruling from the Appeals Court seems to conflate the issue of literal copying of text with similarities in independently developed text. That is a factual issue in any given case, but that question of fact is the central nuance to API copyrightability, and it seems the Appeals Court glossed over it. The Appeals Court simply fails to distinguish between literal cut-and-paste copying from a given API's implementation and the serendipitous similarities that are likely to happen when two API implementations support the same API.

    But that error isn't the interesting part. Of course, this error is a fundamentally incorrect assumption by the Appeals Court, and as such the primary rulings are effectively conclusions based on a hypothetical fact pattern and not the actual fact pattern in this case. However, after poring over the decision for hours, it's the only error that I found in the appeals ruling. Thus, setting the fundamental error aside, their ruling has some good parts. For example, I'm rather impressed and swayed by their argument that the lower court misapplied the merger doctrine because it analyzed the situation based on the options Google had with regard to functionality, rather than the options available to Sun/Oracle. To quote:

    We further find that the district court erred in focusing its merger analysis on the options available to Google at the time of copying. It is well-established that copyrightability and the scope of protectable activity are to be evaluated at the time of creation, not at the time of infringement. … The focus is, therefore, on the options that were available to Sun/Oracle at the time it created the API packages.

    Of course, cropping up again in that analysis is that same darned confusion the Court had with regard to copying this declaration code. The ruling goes on to say: But, as the court acknowledged, nothing prevented Google from writing its own declaring code, along with its own implementing code, to achieve the same result.

    To go back to my earlier point, Google likely did write their own declaring code, and the code ended up looking the same as the other code, because there was no other way to implement the same API.

    In the end, Mark J. Wielaard put it best when he read the decision, pointing out to me that the Appeals Court seemed almost angry that the jury hung on the fair use question. It reads to me, too, like the Appeals Court is slyly saying: the right affirmative defense for Google here is fair use, and a new jury really needs to sit and look at it.

    My conclusion is that this just isn't a decision about the copyrightability of APIs in the general sense. The question the Court would need to consider to actually settle that question would be: “If we believe an API itself isn't copyrightable, but its implementation is, how do we figure out when copyright infringement has occurred when there are multiple implementations of the same API floating around, which of course have declarations that look similar?” But the Court did not consider that fundamental question, because the Court assumed (incorrectly) there was textual cut-and-paste copying. The decision here, in my view, is about a narrower, hypothetical question that the Court decided to ask itself instead: “If someone textually copies parts of your API implementation, are merger doctrine, scènes à faire, and de minimis affirmative defenses likely to succeed?” In this hypothetical scenario, the Appeals Court claims “such defenses rarely help you, but a fair use defense might”.

    However, on this point, in my copyleft-defender role, I don't mind this decision very much. The one thing this decision clearly seems to declare is: “if there is even a modicum of evidence that direct textual copying occurred, then the alleged infringer must pass an extremely high bar of affirmative defense to show infringement didn't occur”. In most GPL violation cases, the facts aren't nuanced: there is always clearly an intention to incorporate and distribute large textual parts of the GPL'd code (i.e., not just a few function declarations). As such, this decision is probably good for copyleft, since on its narrowest reading, this decision upholds the idea that if you go mixing in other copyrighted stuff, via copying and distribution, then it will be difficult to show no copyright infringement occurred.

    OTOH, I suspect that most pundits are going to look at this in an overly contrasted way: NDCA said APIs aren't copyrightable, and the Appeals Court said they are. That's not what happened here, and if you look at the situation that way, you're making the same kinds of oversimplifications that the Appeals Court seems to have erroneously made.

    The most positive outcome here is that a new jury can now narrowly consider the question of fair use as it relates to serendipitous similarity between the function declaration code of multiple implementations of the same API. I suspect a fresh jury focused on that narrow question will do a much better job. The previous jury had so many complex issues before it that I suspect those issues were easily conflated. (Recall that the previous jury considered patent questions as well.) I've found that people who haven't spent their lives training (as programmers and lawyers have) to delineate complex matters and separate truly unrelated issues do a poor job of it. Thus, I suspect the jury won't hang the second time if it's considering just the fair use question.

    Finally, with regard to this ruling, I suspect this won't become immediate, frequently cited precedent. The case is remanded, so a new jury will first sit down and consider the fair use question. If that jury finds fair use and thus no infringement, Oracle's next appeal will be quite weak, and the Appeals Court likely won't reexamine the question in any detail. In that outcome, very little has changed overall: we'll have certainty that APIs aren't copyrightable, as long as any textual copying that occurs during reimplementation is easily called fair use. By contrast, if the new jury rejects Google's fair use defense, I suspect Google will have to appeal all the way to SCOTUS. It's thus going to be at least two years before anything definitive is decided, and the big winners will be wealthy litigation attorneys — as usual.


    0This is of course true for any sufficiently simple programming task. I used to be a high-school computer science teacher. Frankly, while I was successful twice in detecting student plagiarism, it was pretty easy to get false positives sometimes. And certainly I had plenty of student programmers who wrote their function declarations the same for the same job! And no, those weren't the students who plagiarized.

    Posted on Saturday 10 May 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2014-04-03: Open Source as Last Resort

    “Open Source as Last Resort” appears to be popular this week. First, Canonical, Ltd. will finally liberate UbuntuOne server-side code, but only after abandoning it entirely. Second, Microsoft announced a plan to release its .NET compiler platform, Roslyn, under the Apache License, spinning it out into an (apparently, based on the description) 501(c)(6) organization called the Dot Net Foundation.

    This strategy is pretty bad for software freedom. It gives fodder to the idea that “open source doesn't work”, because these projects are likely to fail (or have already failed) when they're released. (I suspect, although I don't know of any studies on this, that) most software projects, like most start-up organizations, fail in the first five years. That's true whether they're proprietary software projects or not.

    But, using code liberation as a last-ditch attempt to gain interest in a failing codebase only gives a bad name to the licensing and community-oriented governance that creates software freedom. I therefore think we should not laud these sorts of releases, even though they liberate more code. We should call them what they are: too little, too late. (I said as much in the five-year-old bug ticket where community members have been complaining that UbuntuOne server-side is proprietary.)

    Finally, a note on using a foundation to attempt to bolster a project community in these cases:

    I must again point out that the type of organization matters greatly. Those who are interested in the liberated .NET codebase should be asking Microsoft if they're going to form a 501(c)(6) or a 501(c)(3) (and I suspect it's the former, which bodes badly).

    I know some in our community glibly dismiss this distinction as some esoteric IRS issue, but it really matters with regard to how the organization treats the community. 501(c)(6) organizations are trade associations that serve for-profit businesses. 501(c)(3)'s serve the public at large. There's a huge difference in their behavior and activities. While it's possible for a 501(c)(3) to fail to serve all of the public's interest, it's corruption when one so fails. When 501(c)(6)'s serve only their corporate members' interests, possibly to the detriment of the public, those 501(c)(6) organizations are just doing the job they are supposed to do — however distasteful it is.


    Note: I said “open source” on purpose in this post in various places. I'm specifically using that term because it's clear these companies' actions are not in the spirit of software freedom, nor even inspired therefrom, but are pure and simple strategy decisions.

    Posted on Thursday 03 April 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2014-03-31: The Change in My Role at Conservancy

    Today, Conservancy announced the addition of Karen Sandler to our management team. This addition to Conservancy's staff will greatly improve Conservancy's ability to help Conservancy's many member projects.

    This outcome is one I've been working towards for a long time. I've focused for at least a year on fundraising for Conservancy in hopes that we could hire a third full-time staffer. For the last few years, I've been doing basically two full-time jobs, since I've needed to give my personal attention to virtually everything Conservancy does. This obviously doesn't scale, so my focus has been on increasing capacity at Conservancy to serve more projects better.

    I (and the entire Board of Directors of Conservancy) have often worried that if I were to disappear, leave Conservancy, or otherwise just drop dead, Conservancy might not survive without me. Such heavy reliance on one person is a bug, not a feature, in an organization. That's why I worked so hard to recruit Karen Sandler as Conservancy's new Executive Director. Admittedly, she helped create Conservancy and has been involved since its inception. But, having her full-time on staff is a great step forward: there's no single point of failure anymore.

    It's somewhat difficult for me to relinquish some of my personal control over Conservancy. I have been mostly responsible for building Conservancy from a small unstaffed “thin” fiscal sponsor into a “full-service” fiscal sponsor that provides virtually any work that a Free Software project requests. Much of that has been thanks to my work, and it's tough to let someone else take that over.

    However, handing off the Executive Director position to Karen specifically made this transition easy. Put simply, I trust Karen, and I recruited her personally to take over (one of) my job(s). She really believes in software freedom in the way that I do, and she's taught me at least half the things I know about non-profit organizational management. We've collaborated on so many projects and have been friends and colleagues — through both rough and easy times — for nearly a decade. While I think I'm justified in saying I did a pretty good job as Conservancy's Executive Director, Karen will do an even better job than I did.

    I'm not stepping aside completely from Conservancy management, though. I'm continuing in the role of President and I remain on the Board of Directors. I'll be involved with all strategic decisions for the organization, and I'll be the primary manager for a few of Conservancy's program activities: including at least the non-profit accounting project and Conservancy's license enforcement activities. My primary staff role, however, will now be under the title “Distinguished Technologist” — a title we borrowed from HP. The basic idea behind this job at Conservancy is that my day-to-day work helps the organization understand the technology of Free Software and how it relates to Conservancy's work. As an initial matter, I suspect that my focus for the next few years is going to be the non-profit accounting project, since that's the most urgent place where Free Software is inadequately providing technological solutions for Conservancy's work. (Now, more than ever, I urge you to donate to that campaign, since it will become a major component of funding my day-to-day work. :)

    I'm somewhat surprised that, even in the six hours since this announcement, I've already received emails from Conservancy member project representatives worded as if they expect they won't hear from me anymore. While, indeed, I'll cease to be the front-line contact person for issues related to Conservancy's work, Conservancy and its operations will remain my focus. Karen and I plan a collaborative management style for the organization, so I suspect for many things, Karen will brief me about what's going on and will seek my input. That said, I'm looking forward to a time very soon when most Conservancy management decisions won't primarily be mine anymore. I'm grateful for Karen, as I know that the two of us running Conservancy together will make a great working environment for both of us, and I really believe that she and I as a management team are greater than the sum of our parts.


    Posted on Monday 31 March 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2014-01-26: GCC, LLVM, Copyleft, Companies, and Non-Profits

    [ Please keep in mind in reading this post that while both FSF and Conservancy are mentioned, and that I have leadership roles at both organizations, these opinions on ebb.org, as always, are my own and don't necessarily reflect the view of FSF and/or Conservancy. ]

    Most people know I'm a fan of RMS' writing about Free Software and I agree with most (but not all) of his beliefs about software freedom politics and strategy. I was delighted to read RMS' post about LLVM on the GCC mailing list on Friday. It's clear and concise, and, as usual, I agree with most (but not all) of it, and I encourage people to read it. Meanwhile, upon reading comments on LWN on this post, I felt the need to add a few points to the discussion.

    Firstly, I'm troubled to see so many developers, including GCC developers, conflating various social troubles in the GCC community with the choice of license. I think it's impossible to deny that culturally, the GCC community faces challenges, like any community that has lasted for so long. Indeed, there's a long political history of GCC that even predates my earliest involvement with the Free Software community (even though I'm now considered an old-timer in Free Software in part because I played a small role — as a young, inexperienced FSF volunteer — in helping negotiate the EGCS fork back into the GCC mainline).

    But none of these politics really relate to GCC's license. The copyleft was about ensuring that there were never proprietary improvements to the compiler, and AFAIK no GCC developers ever wanted that. In fact, GCC was ultimately the first major enforcement test of the GPL, and ironically that test sent us on the trajectory that led to the current situation.

    Specifically, as I've spoken about in my many talks on GPL compliance, the earliest publicly discussed major GPL violation was by NeXT computing when Steve Jobs attempted and failed (thanks to RMS' GPL enforcement work) to make the Objective C front-end to GCC proprietary. Everything for everyone involved would have gone quite differently if that enforcement effort had failed.

    As it stands, copyleft was upheld and worked. For years, until quite recently (in context of the history of computing, anyway), Apple itself used and relied on the Free Software GCC as its primary and preferred Objective C compiler, because of that enforcement against NeXT so long ago. But, that occurrence also likely solidified Jobs' irrational hatred of copyleft and software freedom, and Apple was on a mission to find an alternative compiler — but writing a compiler is difficult and takes time.

    Meanwhile, I should point out that copyleft advocates sometimes conflate issues in analyzing the situation with LLVM. I believe most LLVM developers when they say that they don't like proprietary software and that they want to encourage software freedom. I really think they do. And, for all of us, copyleft isn't a religion, or even a belief — it's a strategy to maximize software freedom, and no one (AFAICT) has said it's the only viable strategy to do that. It's quite possible that the LLVM developers' strategy of changing APIs quickly to thwart proprietarization might work. I really doubt it, though, and here's why:

    I'll concede that LLVM was started with the best of academic intentions to make better compiler technology and share it freely. (I've discussed this issue at some length with Chris Lattner directly, and I believe he actually is someone who wants more software freedom in the world, even if he disagrees with copyleft as a strategy.) IMO, though, the problem we face is exploitation by various anti-copyleft, software-freedom-unfriendly companies that seek to remove every copyleft component from any software stack. Their reasons for pursuing that goal may or may not be rational, but the collateral damage has already become clear: it's possible today to license proprietary improvements to LLVM that aren't released as Free Software. I predict this will become more common, notwithstanding any technical efforts of LLVM developers to thwart it. (Consider, by way of historical example, that proprietary combined works with the Apache web server continue to this very day, despite Apache developers' decades of “we'll break APIs, so don't keep your stuff proprietary” claims.)

    Copyleft is always a trade-off between software freedom and adoption. I don't admonish people for picking the adoption side over the software freedom side, but I do think as a community we should be honest with ourselves that copyleft remains the best strategy to prevent proprietary improvements and forks, and that no other strategy has been as successful in reaching that goal. And, those who don't pick copyleft have priorities other than software freedom ranked higher in their goals.

    As a penultimate point, I'll reiterate something that Joe Buck pointed out on the LWN thread: a lot of effort was put into creating a licensing solution that solved the copyleft concerns of GCC plugins. FSF's worry for more than a decade (reaching back into the late 1990s) was that a GCC plugin architecture would allow GCC's intermediate representation to be written to an output file, which would, in turn, allow a wholly separate program to optimize the software by reading and writing that file format, and thus circumvent the protections of copyleft. The GCC Runtime Library Exception (GCC RTL Exception) is (in my biased opinion) an innovative licensing solution that solves the problem — with the ironic outcome that you are only permitted to perform proprietary optimization with GCC on GPL'd software, but not on proprietary software.

    The problem was that the GCC RTL Exception came too late. While I led the GCC RTL Exception drafting process, I don't take the blame for delays. In fact, I fought for nearly a year to prioritize the work when FSF's outside law firm was focused on other priorities and ignored my calls for urgency. I finally convinced everyone, but the work got done far too late. (IMO, it should have been timed for release in parallel with GPLv3 in June 2007.)

    Finally, I want to reiterate that copyleft is a strategy, not a moral principle. I respect the LLVM developers' decision to use a different strategy for software freedom, even if it isn't my preferred strategy. Indeed, I respect it so much that I supported Conservancy's offer of membership to LLVM in Software Freedom Conservancy. I still hope the LLVM developers will take Conservancy up on this offer. I think that regardless of a project's preferred strategy for software freedom — copyleft or non-copyleft — it's important for the developers to have a not-for-profit charity as a gathering place for developers, separate from their for-profit employer affiliations.

    Undue for-profit corporate influence is the biggest problem that software freedom faces today. Indeed, I don't know a single developer in our community who likes to see their work proprietarized. Developers, generally speaking, want to share their code with other developers. It's lawyers and business people with dollar signs in their eyes who want to make proprietary software. Those people sometimes convince developers to make trade-offs (which I don't agree with myself) to work on proprietary software — usually in exchange for funding some of their work time on upstream Free Software. Meanwhile, those for-profit-corporate folks frequently spread lies and half-truths about the copyleft side of the community — in an effort to convince developers that their Free Software projects “won't survive” if those developers don't follow the exact plan The Company proposes. I've experienced these manipulations myself — for example, in April 2013, a prominent corporate lawyer with an interest in LLVM told me to my face that his company would continue spreading false rumors that I'd use LLVM's membership in Conservancy to push the LLVM developers toward copyleft, despite my public statements to the contrary. (Again, for the record, I have no such intention, and I'd be delighted to help LLVM be led in a non-profit home by its rightful developer leaders, whichever Open Source and Free Software license they choose.)

    In short, the biggest threat to the future of software has always been for-profit companies that wish to maximize profits by exploiting the code, developers, and users while limiting their software freedom. Such companies try every trick in pursuit of that goal. As such, I prefer copyleft as a strategy. However, I don't necessarily admonish those who pick a different strategy. The reason that I encourage membership of non-copylefted projects in Conservancy (and other 501(c)(3) charities) is to give those projects the benefits of a non-profit home that maximizes software freedom using the project's chosen strategy, whatever it may be.

    Posted on Sunday 26 January 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-01-24: Choosing Software Freedom Costs Money Sometimes

    Apparently, the company that makes my hand lotion brand uses coupons.com for its coupons. The only way to print a coupon is to use a proprietary software browser plugin called “couponprinter.exe” (which presumably implements some form of “coupon DRM”).

    So, as of today, I actually have a price, in dollars, that it cost me to avoid proprietary software. Standing up for software freedom cost me $1.50 today. :) I suppose there are some people who would argue in this situation that they have to use proprietary software, but of course I'm not one of them.

    The interesting thing is that this program has an OS X and a Windows version, but nothing for iOS and Android/Linux. Now, if they had the latter, it'd surely be proprietary software anyway.

    That said, coupons.com does have a “send a paper copy to a postal address” option, and I have ordered the coupon to be sent to me. But it expires 2014-03-31 and I'm out of hand lotion today; thus, whether or not I get to use the coupon before expiration is an open question.

    I'm curious to try to order as many copies as possible of this coupon just to see if they implement ARM properly.

    ARM is of course not a canonical acronym to mean what I mean here. I mean “Analog Restrictions Management”, as opposed to the DRM (“Digital Restrictions Management”) that I mentioned above. I doubt ARM will become a standard acronym for this, given that the ARM TLA is already quite overloaded.

    Posted on Friday 24 January 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2013

December

  • 2013-12-05: Considerations on a non-profit home for your project

    [ This post of mine is cross-posted from Conservancy's blog.]

    I came across this email thread this week, and it seems to me that Node.js is facing a standard decision that comes up in the life of most Open Source and Free Software projects. It inspired me to write some general advice to Open Source and Free Software projects who might be at a similar crossroads0. Specifically, at some point in the history of a project, the community is faced with the decision of whether the project should be housed at a specific for-profit company, or have a non-profit entity behind it instead. Further, project leaders must consider, if they pursue the latter, whether the community should form its own non-profit or affiliate with one that already exists.

    Choosing a governance structure is a tough and complex decision for a project — and there is always some status quo that (at least) seems easier. Thus, there will always be a certain amount of acrimony in this debate. I have my own biases on this, since I am the Executive Director of Conservancy, a non-profit home for Open Source and Free Software projects, and because I have studied the issue of non-profit governance for Open Source and Free Software for the last decade. I have a few comments based on that experience that might be helpful to projects who face this decision.

    The obvious benefit of a project housed in a for-profit company is that the company will usually have more resources to put toward the project — particularly if the project is of strategic importance to its business. The downside is that the company almost always controls the trademark, perhaps controls the copyright to some extent (e.g., by being the sole beneficiary of a very broad CLA or ©AA), and likely has a stronger say in the technical direction of the project. There will also always be “brand conflation” when something happens in the project (Did the project do it, or did the company?), and such conflation is easily observable in the many for-profit-controlled Open Source and Free Software projects.

    By contrast, while a for-profit entity only needs to consider the interests of its own shareholders, a non-profit entity is legally required to balance the needs of many contributors and users. Thus, non-profits are a neutral home for activities of the project, and a neutral place for the trademark to live, perhaps a neutral place to receive CLAs (if the community even wants a CLA, that is), and to do other activities for the project. (Conservancy, for its part, has a list of what services it provides.)

    There are also differences among the non-profit options. The primary two USA options for Open Source and Free Software are 501(c)(3)'s (public charities) and 501(c)(6)'s (trade associations). 501(c)(3) public charities must always act in the public good, while 501(c)(6) trade associations act in the interest of their paying for-profit members. I'm a fan of the 501(c)(3) style of non-profit, again, because I help run one. IMO, the choice between the two really depends on whether you want the project run and controlled by a consortium of for-profit businesses, or whether you want the project to operate as a public charity focused on advancing the public good by producing better Open Source and Free Software. BTW, the big benefit, IMO, of a 501(c)(3) is that the non-profit only represents the interests of the project with respect to the public good, so the IRS prohibits the charity from conflating its motives with any corporate interest (be it single or aggregate).

    If you decide you want a non-profit, there's then the decision of forming your own non-profit or affiliating with an existing non-profit. Folks who say it's easy to start a new non-profit are (mostly) correct; the challenge is in keeping it running. It's a tremendous amount of work and effort to handle the day-to-day requirements of non-profit management, which is why so many Open Source and Free Software projects choose to affiliate or join with an existing non-profit rather than form their own. I'd suggest strongly that any community look into joining an existing home, in part because many non-profit umbrellas permit the project to later “spin off” and form its own non-profit. Thus, joining an existing entity is not always a permanent decision.

    Anyway, as you've guessed, thinking about these questions is a part of what I do for a living. Thus, I'd love to talk (by email, phone or IRC) with anyone in any Open Source and Free Software community about joining Conservancy specifically, or even just to talk through all the non-profit options available. There are many options and existing non-profits, all with their own tweaks, so if a given community decides it'd like a non-profit home, there's lots to choose from and a lot to consider.

    I'd note finally that the different tweaks between non-profit options deserve careful attention. I often see people commenting that structures imposed by non-profits won't help with what they need. However, not all non-profits have the same types of structures, and they focus on different things. For example, Conservancy doesn't dictate anything regarding specific CLA rules, licensing, development models, and the like. Conservancy generally advises about all the known options, and helps the community come to the conclusions it wants and implement them well. The only place Conservancy has strict rules is with regard to the requirements and guidelines the IRS puts forward on 501(c)(3) status. Meanwhile, other non-profits do have strict rules for development models, or CLAs, and the like, which some projects prefer for various reasons.

    Update 2013-12-07: I posted a follow up on Node.js mailing list in the original discussion that inspired me to write the above.


    0BTW, I don't think how a community comes to that crossroads matters that much, actually. At some point in a project's history, this issue is raised, and, at that moment, a decision is before the project.

    Posted on Thursday 05 December 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2013-11-13: The Trade-offs of Unpaid Free Software Labor

    I read with interest Ashe Dryden's blog post entitled The Ethics of Unpaid Labor and the OSS Community0, and I agree with much of it. At least, I agree with Dryden much more than I agree with Hanson's blog post that inspired Dryden's, since Hanson's seems almost completely unaware of the distinctions between Free Software funding in non-profit and for-profit settings, and I think Dryden's criticism that Hanson's view is narrowed by “white-male in a wealthy country” privilege is quite accurate. I think Dryden does understand the distinctions of non-profit vs. for-profit Free Software development, and Dryden's post has an excellent discussion of how wealthy and powerful individuals by default have more leisure time to enter the (likely fictional) Free Software development meritocracy via pure volunteer efforts.

    However, I think two key points remain missing in the discussions so far on this topic. Specifically, (a) the issue of license design as it relates to non-monetary compensation of volunteer efforts and (b) developers' goals in using volunteer Free Software labor to bootstrap employment. The two issues don't interrelate that much, so I'll discuss them separately.

    Copyleft Requirements as “Compensation” For Volunteer Contribution

    I'm not surprised that this discussion about volunteer vs. paid labor is happening completely bereft of reference to the licenses of the software in question. With companies and even many individuals so rabidly anti-copyleft recently, I suspect that everyone in the discussion is assuming that the underlying license structure of these volunteer contributions is non-copyleft.

    Strong copyleft's design, however, deals specifically with the problems inherent in uncompensated volunteer labor. By avoiding the possibility of proprietary derivatives, copyleft ensures that volunteer contributions do have, for lack of a better term, some strings attached: the requirement that even big and powerful companies that use the code treat the lowly volunteer contributor as a true equal.

    Companies have resources that allow them to quickly capitalize on improvements to Free Software contributed by volunteers, and thus the volunteers are always at an economic disadvantage. Requiring that the companies share improvements with the community ensures that the volunteers' labor doesn't go entirely uncompensated: at the very least, the volunteer contributor has equal access to all improvements.

    This phenomenon is in my opinion an argument for why there is less risk and more opportunity for contributors to copylefted codebases. Copyleft allows for some level of opportunity to the volunteer contributor that doesn't necessarily exist with non-copylefted codebases (i.e., the contributor is assured equal access to later improvements), and certainly doesn't exist with proprietary software.

    Volunteer Contribution As Employment Terms-Setting

    An orthogonal issue is this trend that employers use Free Software contribution as a hiring criterion. I've frankly found this trend disturbing for a wholly different reason than those raised in the current discussion. Namely, most employers who hire based on past Free Software contribution don't employ these developers to work on Free Software!

    Free Software is, frankly, in a state of cooption. (Open Source itself, as a concept, is part of that cooption.) As another part of that cooption, teams of proprietary software (or non-released, secret software) developers use methodologies and workflows that were once unique to Free Software. Therefore, these employers want to know if job candidates know those workflows and methodologies so that the employer can pay the developer to stop using those techniques for the good of software freedom and instead use them for proprietary and/or secretive software development.

    When I was in graduate school, one of the reasons I keenly wanted to be a core contributor to Free Software was not to just get paid for any software development, but specifically to gain employment writing software that would be Free Software. In those days, you picked a codebase you liked because you wanted to be employed to work on that upstream codebase. In fact, becoming a core contributor for a widely used copylefted codebase was once commonly a way to ensure you'd have your pick of jobs being paid to work on that codebase.

    These days, most developers, even though they are required to use some Free Software as part of their jobs, usually are assigned work on some non-Free Software that interacts with that Free Software. Thus, the original meme, that began in the early 1990s, of volunteer for a Free Software codebase so you can later get paid to work on it, has recently morphed into volunteer to work on Free Software so you can get a job working on some proprietary software. That practice is a complete corruption and cooption of the Free Software culture.


    All that said, I do agree with Dryden that we should do more funding at the entry level of Free Software development, and internships in particular, such as those through the OPW, are, as Dryden writes, absolutely essential to solve the obvious problem of under-representation of those with limited leisure time for volunteer contribution. I think such funding is best when it's done in a non-profit rather than a for-profit setting, for reasons that would require yet another blog post to explain.


    0Please note that I haven't seen any of the comments on Dryden's blog post or many of the comments that spawned it, because as near as I can tell, I can't use Disqus without installing proprietary software on my computer, through its proprietary Javascript. If someone can tell me how to read Disqus discussions without proprietary Javascript, I'd appreciate it.

    Posted on Wednesday 13 November 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2013-11-08: Canonical, Ltd.'s Trademark Aggression

    I was disturbed to read that Canonical, Ltd.'s trademark aggression, which I've been vaguely aware of for some time, has reached a new height. And, I say this as someone who regularly encourages Free Software projects to register trademarks, and to occasionally do trademark enforcement and also to actively avoid project policies that might lead to naked licensing. Names matter, and Free Software projects should strive to strike a careful balance between assuring that names mean what they are supposed to mean, and also encourage software sharing and modification at the same time.

    However, Canonical, Ltd.'s behavior shows what happens when lawyers and corporate marketing run amok and fail to strike that necessary balance. Specifically, Canonical, Ltd. sent a standard cease and desist (C&D) letter to Micah F. Lee, for running fixubuntu.com, a site that clearly to any casual reader is not affiliated with Canonical, Ltd. or its Ubuntu® project. In fact, the site is specifically telling you how to undo some anti-privacy stuff that Canonical, Ltd. puts into its Ubuntu, so there is no trademark-governed threat to its Ubuntu branding. Lee fortunately got legal assistance from the EFF, who wrote a letter explaining why Canonical, Ltd. was completely wrong.

    Anyway, this sort of bad behavior is so commonplace by Canonical, Ltd. that I'd previously decided to stop talking about it, once it reached the crescendo of Mark Shuttleworth calling me a McCarthyist because of my Free Software beliefs and work. But, one comment on Micah's blog inspired me to comment here. Specifically, Jono Bacon, who leads Ubuntu's PR division under the dubious title of Community Manager, asks this insultingly naïve question as a comment on Micah's blog: Did you raise your concerns the team who sent the email?

    I am sure that Jono knows well what a C&D letter is and what one looks like. I also am sure that he knows that any lawyer would advise Micah not to engage with an adverse party on his own over an issue of trademark dispute without adequate legal counsel. Thus, for Jono to suggest that there is some Canonical, Ltd. “team” that Micah should be talking to not only pathetically conflates Free Software community operations with corporate legal aggression, but also seems like a Canonical, Ltd. employee subtly suggesting that those who receive C&D's from Canonical, Ltd.'s legal department should engage in discussion without seeking their own legal counsel.

    Free Software projects should get trademarks of their own. Indeed, I fully support that, and I encourage folks interested in this issue to listen to Pam Chestek's excellent talk on the topic at FOSDEM 2013 (which Karen Sandler and I broadcast on Free as in Freedom). However, true Free Software communities don't try to squelch Free Speech that criticizes their projects. It's deplorable that Canonical, Ltd. has an organized campaign between their lawyers and their public relations folks like Jono to (a) send aggressive C&D letters to Free Software enthusiasts who criticize Ubuntu and (b) follow up on those efforts by subtly shaming those who lawyer up upon receiving that C&D.

    I should finally note that Canonical, Ltd. has an inappropriate and Orwellian predilection for coopting words from our community (including the word “community” itself, BTW). Most people don't know that I myself registered the domain name canonical.org back on 1999-08-06 (when Shuttleworth was still running Thawte) for a group of friends who liked to use the word canonical in the canonical way, and still do so today. However, thanks to Shuttleworth, it's difficult to use canonical in the canonical way anymore in Free Software circles, because Shuttleworth coopted the term and brand-markets on top of it. Ubuntu, for its part, is a word meaning human kindness that Shuttleworth has also coopted for his often unkind activities.


    Update at 16:17 on 2013-11-08: Canonical, Ltd. has posted a response regarding their enforcement action, which claims that their trademark policy is unusually permissive. This is true if the universe is “all trademark policies in the world”, but it is false if the universe is “Open Source and Free Software trademark policies”. Of course, like any good spin doctors, Canonical, Ltd. doesn't actually say this explicitly.

    Similarly, Canonical, Ltd. restates the oft-oversimplified claim that, in trademark law, a mark owner is expected to protect the authenticity of a trademark, otherwise they risk losing the mark. What they don't tell you is why they believe failure to enforce in this specific instance against fixubuntu.com posed any specific risk. Why didn't they tell us that? Because it didn't. I suspect they could have simply asked for the disclaimer that Micah gave them willingly, and that would have addressed the aforementioned risk adequately.

    Posted on Friday 08 November 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

October

  • 2013-10-07: Using Perl PayPal API on Debian wheezy

    I recently upgraded to Debian wheezy. On Debian squeeze, I had no problem using the stock Perl module Business::PayPal::API to import PayPal transactions for Software Freedom Conservancy, via the Debian package libbusiness-paypal-api-perl.

    After the wheezy upgrade, something went wrong and it no longer works. I reviewed some similar complaints that seem to relate to this resolved bug, but I don't think that was my problem.

    I ran strace to dig around and see what was going on. The working squeeze install did this:

                    select(8, [3], [3], NULL, {0, 0})       = 1 (out [3], left {0, 0})
                    write(3, "SOMEDATA"..., 1365) = 1365
                    rt_sigprocmask(SIG_BLOCK, [ALRM], [], 8) = 0
                    rt_sigaction(SIGALRM, {SIG_DFL, [], 0}, {SIG_DFL, [], 0}, 8) = 0
                    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
                    rt_sigprocmask(SIG_BLOCK, [ALRM], [], 8) = 0
                    rt_sigaction(SIGALRM, {0xxxxxx, [], 0}, {SIG_DFL, [], 0}, 8) = 0
                    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
                    alarm(60)                               = 0
                    read(3, "SOMEDATA", 5)               = 5
                    

    But the same script on wheezy did this at the same point:

                    select(8, [3], [3], NULL, {0, 0})       = 1 (out [3], left {0, 0})
                    write(3, "SOMEDATA"..., 1373) = 1373
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
                    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
                    

    I was pretty confused, and basically I still am, but then I noticed this in the documentation for Business::PayPal::API, regarding SOAP::Lite:

    if you have already loaded Net::SSLeay (or IO::Socket::SSL), then Net::HTTPS will prefer to use IO::Socket::SSL. I don't know how to get SOAP::Lite to work with IO::Socket::SSL (e.g., Crypt::SSLeay uses HTTPS_* environment variables), so until then, you can use this hack: local $IO::Socket::SSL::VERSION = undef;

    That hack didn't work, but I did confirm via strace that on wheezy, IO::Socket::SSL was getting loaded instead of Net::SSL. So, I did this, which was a complete and much worse hack:

                    # Force Net::SSL to be loaded before anything pulls in IO::Socket::SSL:
                    use Net::SSL;
                    use Net::SSLeay;
                    # Ugly: tell LWP not to verify the SSL hostname (this weakens certificate checking):
                    $ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;
                    # Then, with the SSL stack already chosen, load the PayPal module:
                    use Business::PayPal::API qw(GetTransactionDetails TransactionSearch);
                    

    … And this incantation worked. This isn't the right fix, but I figured I should publish this, as this ate up three hours, and it's worth the 15 minutes to write this post, just in case someone else tries to use Business::PayPal::API on wheezy.

    I used to be a Perl expert once upon a time. This situation convinced me that I'm not. In the old days, I would've actually figured out what was wrong.

    Posted on Monday 07 October 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2013-09-23: The Dangers of VC-Backed “Open Source”

    I'm thankful to Christopher Allan Webber for pointing me at this interesting post from Guillaume Lesniak, the developer of Focal (a once fully GPL'd camera application for Android/Linux), which describes how he was (IMO) pressured to give a proprietary license to the new CyanogenMod, Inc.

    I mostly think Guillaume's post speaks for itself, and I encourage readers of my blog to read it as well. When I read it, I couldn't help thinking about how this is what Free Software often becomes in the world of “Open Source”. Specifically, VCs, and the companies they back, just absolutely love to say they're doing “Open Source”, but it just goes to show the clear difference between “doing Open Source” and giving users software freedom. These VC-backed companies don't really want to share freedoms with their users: they want to exploit Free Software licenses to market more proprietary software.

    Years ago, I helped get the Replicant project started. I haven't been an active contributor to the project, but I hope that folks can see this is an actual, community-oriented, volunteer-run Free Software alternative firmware based on Android/Linux. In my opinion, any project controlled primarily by one company will likely never be all those things. I urge Cyanogenmod users to switch to Replicant today!

    Posted on Monday 23 September 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

June

  • 2013-06-26: Congratulations to Harald Welte on Another One

    I'd like to congratulate Harald Welte on yet another great decision in the Berlin court, this time regarding a long-known GPL violator called Fantec. There are so many violations of this nature — most of them trivially easy to find — that it's often tough to pick which one to take action on. Harald has done a great job being selective in order to make good examples of violators.

    Just as a bit of history, I first documented and confirmed the Fantec violation in January 2009, based on this email sent to the BusyBox mailing list. I discovered that the product didn't seem to be regularly on sale in the USA, so it wasn't ultimately part of the lawsuit that Conservancy and Erik Andersen filed in late 2009.

    However, since Fantec products were on sale mostly in Germany, it was a great case for Harald to pursue. I'm not surprised in the least that even three years after I confirmed the violation, gpl-violations.org found Fantec still out of compliance and was able to take action at that point. It's not surprising either that it took an entire year thereafter to get it resolved. My reaction to that was actually: Darn, that Berlin Court acts fast compared to Courts in the USA. :)

    Posted on Wednesday 26 June 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2013-06-23: Matthew Garrett on Mir

    Matthew Garrett has a good blog post regarding Mir and Canonical, Ltd.'s CLA. I encourage folks to read it; I added a comment there.

    Posted on Sunday 23 June 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2013-04-06: The Punditocracy of Unelected Technocrats

    All this past week, people have been emailing and/or pinging me on IRC to tell me to read the article, The Meme Hustler by Evgeny Morozov. The article is quite long, and while my day-job duties left me TL;DR'ing it for most of the week, I've now read it, and I understand why everyone kept sending me the article. I encourage you not to TL;DR it any longer yourself.

    Morozov centers his criticisms on Tim O'Reilly, but that's not all the article is about. I spend my days walking the Free Software beat as a (self-admitted) unelected politician, and I've encountered many spin doctors, including O'Reilly — most of whom wear the trappings of advocates for software freedom. As Morozov points out, O'Reilly isn't the only one; he's just the best at it. Morozov's analysis of O'Reilly can help us understand these P.T. Barnums in our midst.

    In 2001, I co-wrote Freedom or Power? with RMS in response to O'Reilly's very Randian arguments (which Morozov discusses). I remember working on that essay for (literally) days with RMS, in-person at the FSF offices (and at his office at MIT), while he would (again, literally) dance around the room, deep in thought, and then run back to the screen where I was writing to suggest a new idea or phrase to add. We both found it was really difficult to craft the right rhetoric to refute O'Reilly's points. (BTW, most people don't know that there were two versions of my and RMS' essay; the original one was published as a direct response to O'Reilly on his own website. One of the reasons RMS and I redrafted it as a stand-alone piece was that we saw our original published response actually served to increase uptake of O'Reilly's position. We decided the issue was important enough that it needed a piece that would stand on its own indefinitely to defend that key position.)

    Meanwhile, I find it difficult to express, more than a decade later, how turbulent that time was for hard-core Free Software advocates, and how concerted the marketing campaign against us was. While we were in the middle of Microsoft's attacks claiming the GPL was an unAmerican cancer, we also had O'Reilly's “the freedom that matters is the freedom to pick one's own license” meme propagating fast. There were dirty politics afoot at the time, too: this all occurred during the same three-month period when Eric Raymond called me an inmate taking over the asylum. In other words, the spin doctors were attacking software freedom advocates from every side! Morozov's article captures a bit of what it feels like to be on the wrong side of a concerted, organized PR campaign to manipulate public opinion.

    However, I suppose what I like most about Morozov's article is that it's the first time I've seen discussed publicly and coherently a rhetorical trick that spin doctors use. Notice, when you listen to a pundit, their undue sense of urgency; they invariably act as if what's happening now is somehow (to use a phrase the pundits love) “game changing”. What I typically see is such folks using urgency as a reason to make compromises quickly. Of course, the real goal is a get-rich-(or-famous)-quick scheme for themselves — not a greater cause. The sense of urgency leaves many people feeling that if they don't follow the meme, they'll be left in the dust. A colleague of mine once described this entrancing effect as dream-like, and that desire to stay asleep and keep dreaming is what lets the hustlers keep us under their spell.

    I've admittedly spent more time than I'd like refuting these spin doctors (or, as Morozov also calls them, meme hustlers). Such work seems unfortunately necessary because Free Software is in an important, multi-decade (but admittedly not urgent :) battle of cooption (which, BTW, every social justice movement throughout history has faced). The tide of cooption by spin doctors can be stemmed only with constant vigilance, so I practice it.

    Still, this all seems a cold, academic way to talk about the phenomenon. For these calculating Frank Luntz types, winning is enough; rhetoric, to them, is almost an end in itself (which I guess one might dub “Cicero 2.0”). For those of us who believe in the cause, the “game for the game's sake” remains distasteful because there are real principles at stake for us. Meanwhile, the most talented of these meme hustlers know well that what's a game to them matters emotionally to us, so they use our genuine concern against us at every turn. And, to make it worse, there's more of them out there than most people realize — usually carefully donning the trappings of allies. Kudos to Morozov for reminding us how many of these emperors have no clothes.

    Posted on Saturday 06 April 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2012

December

  • 2012-12-18: Perl is Free Software's COBOL, and That's Ok!

    In 1991, I'd just gotten my first real programming job for two reasons: nepotism, and a willingness to write code for $12/hour. I was working as a contractor to a blood testing laboratory, where the main development job was writing custom software to handle, process, and do statistical calculations on blood testing results, primarily for paternity testing.

    My father had been a software developer since the early 1970s, and worked as a contractor at this blood lab since the late 1970s. As the calendar had marched toward the early 1990s, technology cruft had collected. The old TI mainframe, once the primary computer, now only had one job left: statistical calculation for paternity testing, written in TI's Pascal. Slowly but surely, the other software had been rewritten and moved to an AT&T 3B2/600 running Unix System VR3.2.3. That latter machine was the first access I had to a real computer, and certainly the first time I had access to Usenet. This changed my life.

    Ironically, even on that 3B2, the accounting system software was written in COBOL. This seemed like “more cruft” to me, but fortunately there was a third-party vendor who handled that software, so I didn't have to program in COBOL.

    I had the good fortune, actually, to help with the interesting problems, which included grokking data from a blood testing machine that dumped a bunch of data in some weird reporting format onto its RS-232 port at the end of every testing cycle. We had to pull the data off that RS-232 interface and load it into the database. Perl, since it treated regular expressions as first-class citizens and had all the Unix device I/O fundamentals baked in as native (for the RS-232 I/O), was the obvious choice.

    After that project, I was intrigued by this programming language that had made the job so easy. My father gave me a copy of the Camel book — which was, at that point, almost hot off the presses. I read it over a weekend and decided that I didn't really want to program in any other language again. Perl was just 4 years old then; it was a young language — Perl 4 had just been released. I started trying to embed Perl into our database system, but it wasn't designed for embedding into other systems as a scripting language. So, I ended up using Tcl instead for the big project of rewriting the statistical calculation software to replace the TI mainframe. After a year or two writing tens of thousands of lines of Tcl, I was even more convinced that I'd rather be writing in Perl. When Perl 5 was released, I switched back to Perl and never really looked back.

    Perl ultimately became my first Free Software community. I lurked on perl5-porters for years, almost always a bit too timid to post, or ever send in a patch. But, as I finished my college degree and went to graduate school, I focused my thesis work on Perl and virtual machines. I went to the Perl conference every year. I was even in the room for the perl5-porters meeting the day after Jon Orwant's staged tantrum, which was the catalyst for the Perl 6 effort. I wrote more than a few RFC's during the Perl 6 specification process. And, to this day, even though I've since done plenty of Python development, too, when I need to program to do something, I open an Emacs buffer and start typing #!/usr/bin/perl.

    Meanwhile, I never did learn COBOL. But, I was amazed to hear that multiple folks who graduated with me eventually got jobs at a health insurance company. The company trained them in COBOL, so that they could maintain COBOL systems all day. Every once in a while, I idly search a job site for COBOL. Today, that search returns 2,338 open jobs. Most developers never hear about these jobs, of course. It's far from the exciting new technology, but it's there, it's needed, and it's obviously useful to someone. Indeed, the COBOL standard was just updated 10 years ago, in 2002!

    I notice these days, though, that when I mention having done a lot of Perl development in my life, the average Javascript, Python, or Haskell developer looks at me like I looked at my dad when he told me that the accounting system was written in COBOL. I'd bet they'd have my same sigh of relief when told that “someone else” maintains that code and they won't have to bother with it.

    Yet, I still know people heavily immersed in the Perl community. Indeed, there is a very active Perl community out there, just like there's an active COBOL community. I'm not active in Perl like I once was, but it's a community of people, who write new code and maintain old code in Perl, and that has value. More importantly, though, (and unlike COBOL), Perl was born on Usenet, and was released as Free Software from the day of its first release, twenty-five years ago today. Perl was born as part of Free Software culture, and it lives on.

    So, I get it now. I once scoffed at the idea that anyone would write in COBOL anymore, as if the average COBOL programmer was some sort of second-class technology citizen. COBOL programmers in 1991, and even today, are surely good programmers — doing useful things for their jobs. The same is true of Perl these days: maybe Perl is finally getting a bit old fashioned — but there are good developers, still doing useful things with Perl. Perl is becoming Free Software's COBOL: an aging language that still has value.

    Perl turns 25 years old today. COBOL was 25 years old in 1984, right at the time when I first started programming. To those young people who start programming today: I hope you'll learn from my mistake. Don't scoff at the Perl programmers. 25 years from now, you may regret scoffing at them as much as I regret scoffing at the COBOL developers. Programmers are programmers; don't judge them because you don't like their favorite language.

    Update (2013-04-12): I posted a comment on Allison Randal's blog about similar issues of Perl's popularity.

    Posted on Tuesday 18 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-12-14: The Symmetry of My UnAmerican McCarthyist Cancer

    In mid-2001, after working for FSF part-time for the prior year and a half, I'd actually just started working at FSF full-time. I'd recently relocated to Cambridge, MA to work on-site at the FSF offices. The phone started ringing. The aggressive Microsoft attacks had started; the press wanted to know FSF's response. First, Ballmer had said the GPL was a cancer. Then, Allchin said it was unAmerican1. Then, Bill Gates added (rather pointlessly and oddly) that it was a Pac-Man that eats up your business. Microsoft even shopped weird talking-points to the press as part of their botched political axe-job on FSF.

    FSF staffing levels have always been small, but FSF was even smaller then. I led a staff of four to respond to the near constant press inquiries for the entire summer. We coordinated speaking engagements for RMS related to the attacks, and got transcripts published. We did all the stuff that you do when the wealthiest corporation in the world decides it wants to destroy a small 501(c)(3) charity that publishes a license that fosters software sharing. From my point of view, I'll admit now that I was, back then, in slightly over my head: this was my first-ever non-software-development job. I was new to politics, new to management, new to just about everything that I needed to do to lead the response to something like that. I learned fast; hopefully it was fast enough.

    The experience made a huge impression on me. I quickly got comfortable with the idea that, if you work for a radical social justice cause, there's always someone powerful attacking your political positions, but if you believe your cause is just and what you're doing is right, you'll survive. I found that good non-profit work is indeed something that just one of us can do against all that money and power trying to crush us into roaches0. Non-profit work really was the dream career I'd always wanted.

    Still, the experience left me permanently distrustful of Microsoft. I've tried to keep an open mind, and to watch for potential change in behavior. I admittedly don't think Microsoft became a friend to Free Software in the 11 years since they put me through the wringer during what was almost literally my first day on the job as FSF's Executive Director (a position I ultimately held until 2005). But, I am now somewhat sure Microsoft's executives aren't hatching new plans to kill copyleft every morning anymore. Indeed, I was excited this week to see that my colleagues at the Samba Project acknowledged Microsoft's help in creating documentation that allowed Samba to implement compatibility with Active Directory. Even I have to admit that companies do change, and sometimes a little bit for the better.

    But, companies don't always change for the better. Over an even shorter period, I've watched another company get worse at almost the same rate that Microsoft has been improving.

    Specifically, this week, Mark Shuttleworth of Canonical, Ltd. said that those of us who stand strongly against proprietary software device drivers are insecure McCarthyists. I wonder if Mark realized the irony of using the term McCarthyism to refer to the same people who Microsoft called unAmerican just a decade ago.

    I marvel at these shifting winds of politics. These days, the guy out there slurring against copyleft advocates claims to be the biggest promoter of Free Software himself, and in fact built most of his product on the Free Software that is often defended by the people he claims are on a witch-hunt.

    I wrote many blog posts in 2010 critical of Canonical, Ltd. and its policies. Someone asked me in October if I'd stopped because Canonical, Ltd. got better, or if they'd just bought me off. I answered simply, saying, First of all, Mark hasn't shared any of his unfathomable financial wealth with me. But, more importantly, Mark is making enough bad decisions that Canonical, Ltd.'s behavior is now widely criticized, even by the tech press. Others are doing a good enough job pointing out the problems now; I don't have to. Indeed, I'm supportive of RMS' recent comments about Canonical, Ltd. and its Ubuntu project (and RMS surely has a larger microphone than I do, since he's famous). I've also got nothing to add to his well-argued points, so I simply endorse them.

    Nevertheless, I just couldn't let the situation go without commenting. This week, I watched Microsoft (who once ran a campaign to kill FSF's flagship license) do something helpful to Free Software, while also watching Canonical, Ltd. (who has helped write a lot of GPL'd software) pull a page from Microsoft's old playbook to attack GPL advocates. That's got an intriguing symmetry to it. It's not “history repeating itself”, because all the details are different. But, one fact is still exactly the same: The Wealthy sure do like to call us names when it suits them.

    Update 2012-12-15: In addition to my usual identi.ca comment thread (which has been quite active on this post), there's also a comment thread on Hacker News and also one on reddit about this blog post.

    Update 2012-12-18: Karen Sandler and I discuss some of the issues related to Shuttleworth's comments on Free as in Freedom, Episode 0x36.


    0 Strangely, my head (somewhat-uselessly) still contains now, as it did then, verbatim copies of Dead Kennedys' lyric sheets, so I quoted that easily from memory. Fortunately, I am pretty sure verbatim copying something into your own brain isn't copyright infringement (yet).

    1 I realized after reading some of the reddit comments that it might be useful to link here to the essay I wrote at the time of Allchin's comments, called The GNU GPL and the American Dream.

    Posted on Friday 14 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-12-09: Who Ever Thought APIs Were Copyrightable, Anyway?

    Back in the summer, there was a widely covered story about Judge Alsup's decision regarding copyrightability in the Oracle v. Google case. Oracle has appealed the verdict, so presumably this will enter the news again at some point. I'd been meaning to write a blog post about it since it happened, and Karen Sandler and I had also been planning an audcast to talk about it.

    Karen and I finally released our audcast on it last week: episode 0x35 of FaiF. Fact of the matter is, as Karen has been pointing out, there actually isn't much to say.

    Meanwhile, the upside of the delay in commenting is that I can respond to some of the comments that I've seen in the wake of the decision's publication. The most common confusion about Alsup's decision, in my view, comes from the imprecision of programmers' use of the term “API”. The API and the implementation of that API are different things. Frankly, in the Free Software community, everyone always assumed APIs themselves weren't copyrightable. The whole idea of a clean-room implementation of something centers on the idea that the APIs aren't copyrighted. GNU itself depends on the fact that Unix's APIs weren't copyrighted; just the code that AT&T wrote to implement Unix was.

    Those who oppose copyleft keep saying this decision eviscerates copyleft. I don't really see how it does. For all this time, Free Software advocates have always reimplemented proprietary APIs from scratch. Even copylefted projects like Wine depend on this, after all.

    But, be careful here. Many developers use the phrase API to mean different things. Implementations of an API are still copyrightable, just like they always have been. Distribution of other people's code that implements APIs still requires their permission. What isn't copyrightable are general concepts like: “to make things work, you need a function that returns an int and takes a string as an argument, and that function must be called Foo”.
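
    To make the distinction concrete, here's a minimal sketch in C (the name Foo and its behavior are purely hypothetical examples of mine, not anything taken from the case): the bare declaration states the kind of interface fact that isn't copyrightable, while the function body beneath it is an implementation, which remains copyrightable just like any other code.

        #include <string.h>

        /* The "API": a declaration saying only that a function named Foo
           takes a string and returns an int.  That interface fact is the
           sort of thing that isn't copyrightable. */
        int Foo(const char *input);

        /* One *implementation* of that API: this body is copyrightable,
           as code always has been.  A clean-room project would write its
           own, different body satisfying the same declaration. */
        int Foo(const char *input)
        {
            return (int) strlen(input);  /* this version returns the string's length */
        }

    A clean-room reimplementer can read the declaration, write a body from scratch, and owe nothing to whoever wrote the body above.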

    Note: This post has been about the copyright issues in the case. I previously wrote a blog post when Oracle v. Google started, which was mostly about the software patent issues. I think the advice in there for Free Software developers is still pretty useful.

    Posted on Sunday 09 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-12-03: FOSDEM Legal & Policy Issues DevRoom

    Richard Fontana, Tom Marble, Karen Sandler, and I will reprise our roles as co-coordinators of the Legal and Policy Issues DevRoom for FOSDEM 2013. The CFP for the FOSDEM 2013 Legal & Policy Issues DevRoom is now available, and the deadline for submission is 21 December 2012, about 18 days from now.

    I want to put a very specific call out to a group of people who may not have considered submitting a talk to a track like this before. In particular, if you are a Free Software developer who has ideas about the policy/licensing decisions for your project, then you should consider submitting a proposal.

    The problem we have is that we often hear from lawyers, or from licensing pundits like me, on these types of tracks. We all have a lot to say about issues of policy or licensing. But, it's the developers who lead these projects who know best what policy issues those projects face, and what is needed to address them.

    I also want to add something my graduate adviser once said to me: at the Master's level, it's sufficient for your thesis just to ask an important and complex question well; only a PhD-level thesis has to propose answers to such questions. In my view, our track is at the Master's level: talks that ask complex licensing policy questions well, but don't necessarily have all the answers, are just the kind of proposals we're seeking.

    Please share this CFP widely. We've got a two-day DevRoom, so there are plenty of slots. While we can't guarantee acceptance of any specific talk, your job as submitters is to make the co-chairs' job difficult by forcing us to choose among many excellent talks. We look forward to your submissions!

    Posted on Monday 03 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2012-11-29: If You've Got a Problem With Me, Please Contact Me!

    [ I usually write blog posts about high-minded software freedom concepts. This post isn't one of those; it's much more typical personal blog-fare, so please stop reading here if you're looking for a good software freedom essay; just move on to another one of my blog posts if that's what you want. ]

    I heard something really odd today. I was told that a relatively large group of people find me untrustworthy and refuse to work or collaborate with me because of it. I heard this second-hand, and I asked for more details, and the person who told me really doesn't want to be involved any further (and I don't blame that person, because the whole thing is admittedly rather silly, and I'd walk away too if it wasn't personally about me).

    There are people in the world I don't trust too, of course. I always tell them so to their face. I just operate my life in a really transparent way, so if I believe someone is my political opponent, I tell them so. I've written emails to people that say things like: Now that you work for Company Blah, I have to assume you're working against Free Software, because Company Blah has a history of doing so. If someone says something offensive to me, I tell them they've offended me. Sometimes, I clearly say that I am explicitly not forgiving the person, which thus makes it clear that there is a standing issue between us indefinitely. I do occasionally hold a grudge. (Frankly, I doubt people who claim they never hold a grudge, because everyone I've ever met seems to have a grudge against somebody for something.)

    I've been told that I'm not tactful. I always respond with: Of course, I'm not a tactful person. I've made a conscious choice not to change that behavior because, IMO, the other option is to leave people guessing about how you feel about their actions. If I think someone's action is wrong, I tell them I think it's wrong and why. If I think someone's action is good, I thank them for it and ask if I can help in the future. That's not a tactful way to live, I admit, but I believe it's nevertheless an honorable way to live. I'm grateful for the tactful people I know, because I realize they can accomplish things that I can't, but I also point out that there are things that the untactful can accomplish that the tactful can't. For example, only the tactless can point out emperors who wear no clothes.

    Meanwhile, the kinds of backroom (and seemingly tactful) politics that we sometimes see in Free Software have a way of descending into high school drama. I heard from Foo who heard from Bar that you won't be elected class president because nobody likes you. No, I can't say who Bar heard it from. No, I can't tell you exactly why. This immature behavior is, IMO, much worse than being tactless.

    I frankly think those who operate this way should be ashamed of themselves. I'm therefore putting out a public call (which is just a repeat of what I've said privately to people for years): if you have some problem with something I've done, or find my actions at any time untrustworthy, or wrong, or anything else negative, you're welcome to contact me. I get emails almost weekly anyway of people who have issues with something I've said on the Free as in Freedom audcast or somewhere else. I take the time to answer almost everyone who writes to me. I also always tell people that you can keep pinging me until I answer and I won't be offended if you do. Sometimes, I might just write back with the reasons why I decided not to answer you. But, I'll always at least tell you my opinions on what you've said, even if it's just a tactless: I don't think what you're writing about is a major priority and I can't schedule the time to think about it further right now. I challenge others in the Free Software community to also rise up to more transparency in their actions and statements.

    I want to be clear, BTW, there's a difference between being tactless and mean. I work really hard not to be mean; I sometimes fail, and I also work very hard to examine my actions to see if I've crossed the line. I send apologies to people when it becomes apparent that I've been not just tactless but also mean. I have to admit, though, there are plenty of mean people kicking around the Free Software world who owe a bunch of apologies (including some to me), but if you think I owe you an apology, I encourage you to write to me and ask for one. In my tactless style, I'll either give you an apology or tell you why I disagree about why you deserve one. :)

    Finally, I thought hard about whether to “name names” herein. It's surely obvious that a specific situation has inspired my words above, and those who know what this situation is will realize immediately; those that don't will sadly be left wondering what the hell is going on. Still, as disgusted as I am about the backroom politics I'm dealing with at the moment, I think public admonishment of the perpetrators here would cross the line from tactless to mean, so I decided not to cross the line.

    Posted on Thursday 29 November 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-11-22: Left Wondering Why VideoLan Relicensed Some Code to LGPL

    I first met the original group of VLC developers at the Solutions GNU/Linux conference in 2001. I had been an employee of FSF for about a year at the time, and I recall they were excited to tell the FSF about the project, and very proud that they'd used FSF's premier and preferred license (at the time): GPLv2-or-later.

    What a difference a decade makes. I'm admittedly sad that VLC has (mostly) finished its process of relicensing some of its code under LGPLv2.1-or-later. While I have occasionally supported relicensing from GPL to LGPL, every situation is different and I think it should be analyzed carefully. In this case, I don't support VideoLan's decision to relicense the libVLC code.

    The main reason to use the LGPL, as RMS put eloquently long ago, is for situations where there are many competitors and developers would face serious difficulty gaining adoption of a strong-copylefted solution. Another more recent reason that I've discovered to move to weaker licenses (and this was the case with Qt) is to normalize away some of the problems of proprietary relicensing. However, neither reason applies to libVLC.

    VLC is the most popular media player for desktop computers. I know many proprietary operating system users who love VLC; it's the first application they download onto a new computer. It is the standard for desktop video viewing, and it does a wonderful job advocating the value of software freedom to people who live in a primarily proprietary software world.

    Meanwhile, the VideoLan Organization's press statements have been quite vague on their reasons for changing, saying only that this change was motivated to match the evolution of the video industry and to spread the VLC engine as a multi-platform open-source multimedia engine and library. The only argument that I've seen discussed heavily in public for relicensing is ostensibly to address the widely publicized incompatibility of copyleft licensing with various App Store agreements. Yet, those incompatibilities still exist with the LGPL or, indeed, any true copyleft license. The incompatibilities of Apple's terms are so strict that they make it absolutely impossible to comply simultaneously with any copyleft and Apple's terms at the same time. Other similar terms aren't much better, even with Google's Play Store (— its terms are incompatible with any copyleft license if the project has many copyright holders)0.

    So, I'm left baffled: does the VLC community actually believe the LGPL would solve that problem? (To be clear, I haven't seen any official statement where the VideoLAN Organization claims that relicensing will solve that issue, but others speculate that it's the reason.) Regardless, I don't think it's a problem worth solving. The specter of “Application Store” terms and conditions is something to fight against wholly and uncompromisingly. The copyleft licensing incompatibilities with such terms are actually a signaling mechanism that shows us these stores are actively working against software freedom. I hope developers will reject deployment to these application stores entirely.

    Therefore, I'm left wondering what VLC seeks to do here. Do they want proprietary application interfaces that use their core libraries? If so, I'm left wondering why: VLC is already so popular that they could pull adopters toward software freedom by using the strong copyleft of GPL on libVLC. It seems to me they're making a bad trade-off to get only marginally more popular by allowing some proprietary derivatives. OTOH, I guess I should cut my losses on this point and be glad they stuck with any copyleft at all and didn't go all the way to a permissive license.

    Finally, I do think there's one valuable outcome shown by this relicensing effort (which Gerv pointed out first): it is possible to relicense a code base with many copyright holders. It's a lot of work, but it can be done. It appears to me that VLC did a responsible and reasonable job on that part, even if I disagree strongly with the need for such a job here in the first place.

    Update (2012-11-30): It's been pointed out to me that VLC has moved certain code from VLC into a library called libVLC, and that's the code that's been relicensed. I've made changes today to the post above to clarify that issue.


    0 If you want to hear more about my views and analysis of application store terms and conditions, please listen to the Application Stores Panel that I was on at FOSDEM 2012, which was broadcast on the audcast, Free as in Freedom.

    Posted on Thursday 22 November 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2012-09-17: GPL Violations Are Still Pretty Common, You Know?

    As I've written about before, I am always amazed when suddenly there is widespread interest in, excitement over, and focus on some particular GPL violation. I've spent most of my adult life working on copyleft compliance issues, so perhaps I've got an unusual perspective. It's just that I've seen lots of GPL violations every single day since the late 1990s. Even now, copyleft compliance remains a regular part of my monthly work. Even though it's now only one task among many that I work on every day, I'm still never surprised nor shocked by any particular violation.

    When some GPL violation suddenly becomes a “big story”, it reminds me of celebrity divorces. There are, of course, every single day, hundreds (maybe even thousands) of couples facing the conclusion that their marriage has ended. It's a tragedy for their families, and they'll spend years recovering. The divorce impacts everyone they know: both their families, and all their friends, too. Everyone's life who touches the couple is impacted in some way or other.

    Of course, the same is true personally for celebrities when they divorce. The weird thing is, though, that people who don't even know these celebrities want to read about the divorce and know the details. It's exciting because the media tells us that we really want to know all the details and follow the drama every step of the way. It's disturbing that our culture sympathizes more with the pain of the rich and famous than the pain of our everyday neighbors.

    Like divorce, copyleft violations are very damaging, but failure to comply with copyleft licenses impacts three specific sets of people who directly touch the issue: the people whose copyrights are infringed, the people who infringed those copyrights, and the people who received infringing articles. Everyone else is just a spectator0.

    That said, my heart goes out to every user who is sold software that they can't study, improve, and share. I'm doubly concerned when those people were legally entitled to those rights, and an infringer snatched them away by failing to comply with copyleft licenses. I also have great sympathy for the individual copyright holders who licensed their works under GPL, yet find many infringers ignoring the rather simple and reasonable requirements of GPL.

    But, I don't think gawking has any value. My biggest post-mortem complaint about SCO was not the FUD: that was obviously wrong, and we knew the community would prevail. Rather, it was the constant gawking, which took away time that we could have spent writing more Free Software and doing good work in the software freedom community. So, from time to time, I like to encourage everyone to avoid gawking. (Unless, of course, you're doing it with the GNU implementation of AWK. :)

    So, when you read GPL violation stories, even when they seem novel, remember that they're mundane tragedies. It's good someone's working on it, but they don't necessarily deserve the inordinate attention that they sometimes get.

    Update, morning of 2012-09-18: Someone asked me to state more clearly how I felt about Red Hat's GPL enforcement action against TwinPeaks1. I carefully avoided saying that above last night, but I suppose I'm going to get asked so often that I might as well say. Plus, the answer is actually quite simple: I simply don't know until the action completes. I only believe that GPL enforcement is morally legitimate if compliance with the GPL is paramount above all other goals. I have never seen Red Hat enforce the GPL before, so I don't know the pecking order of their goals. The proof of the pudding is in the eating, and the proof of the enforcement is whether compliance is obtained. In short, if I were the Magic 8-Ball of GPL compliance, I'd say “Reply hazy, ask again later”2.


    0 Obviously, there's a large negative impact that many seemingly “small” GPL violations, in aggregate, will together have on the entire software freedom community. But, I'm examining the point narrowly in the main text above. For example, imagine if the only GPL violation in the history of the world were done by one company, on one individual's copyrights, and only one customer ever purchased the infringing product. While I'd still value pursuit of that violation (and I would even help such a copyright holder pursue the matter), even I'd have to readily admit that the impact on the software freedom community of that one violation is rather limited.

    Indeed, the larger policy impact of violations comes from the aggregate effect. That's why I've long argued that it's important to deal with the giant volume of GPL violations rather than focus on any one specific matter, even if that matter looks like a “big one”. It's just too easy sometimes to think one particular copyright holder, or one particular program, or one particular product deserves an inordinate amount of attention, but such undue focus is likely an outgrowth of familiarity breeding a bit too much contempt. I occasionally temporarily fall into that trap, so it makes me sad when others do as well.


    1 What bugs me most is that I have yet to see a good Twin Peaks parody (à la Twin Beaks) of this whole court case. I suppose I'm just too old; I was in high school when the entire nation was obsessed with David Lynch's one-hit TV series.

    2 cf15290cc2481dbeacef75a3b8a87014e056c256a1aa485e8684c8c5f4f77660

    Posted on Monday 17 September 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2012-07-23: I Received a 2012 O'Reilly Open Source Award

    Last Friday, 20 July 2012, I received an O'Reilly Open Source Award in appreciation for my decade of work in Free Software non-profit organizations, including my current daily work at the Software Freedom Conservancy, my work at the FSF (including starting FSF's associate membership program), and my work creating and defending copyleft licensing, including such things as inventing the idea behind the Affero clause, helping draft AGPLv3, and, more generally, enforcing copyleft.

    I'm very proud of all this work. My obsession with software freedom goes back far into my past, to when I downloaded my first copy of GNU Emacs from Usenet in 1991 and my first GNU/Linux distribution, SLS, in 1992 — booting a copy of Linux 0.99pl12 for the first time on the first computer I ever owned.

    I honestly have written a lot less Free Software than I wanted to. I've made a patch here and there over the years to dozens of projects. I was a co-maintainer of the AGPL'd PokerSource system for a while, and I made various (mostly mixed-success) attempts to build a better virtual machine for Perl — a job the Parrot project now does much better than I ever did.

    Despite the fact that making better software was what enthralled me most, feeling the helplessness of supporting, using and writing proprietary software in my brief for-profit career convinced me that lack of adequate software freedom was the most dangerous social justice problem in the computing community. I furthermore realized that lots of people were ready and willing to write great Free Software, but that few wanted to do the (frankly more boring) work of running non-profit organizations to defend and advance software freedom. Thus, I devoted myself to helping FSF and Conservancy to be successful organizations that could assist in that regard. I'm privileged and proud to continue my service to both of these organizations.

    Being recognized for this work means a great deal to me. Awards have a special meaning for me, because financial success never really mattered much to me, but knowing that I've made a contribution to something greater than myself matters greatly. Receiving an award that indicates that I've succeeded in that regard invigorates me to do even more. So, at this moment of receiving this award, I'd like to thank all of you in the software freedom community who appreciate and support my work. It means a great deal to me that my work has made a positive impact.

    Posted on Monday 23 July 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

May

  • 2012-05-29: Conservancy's Coordinated Compliance Efforts

    As most readers might have guessed, my work at Software Freedom Conservancy has been so demanding in the last few months that I've been unable to blog, although I have kept up (along with my co-host Karen Sandler) releasing new episodes of the Free as in Freedom oggcast.

    Today, Karen and I released a special episode of FaiF (which is merely special because it was released during a week that we don't normally release a show). In it, Karen and I discuss in detail Conservancy's announcement today of its new coordinated compliance program that includes many copyright holders and projects.

    This new program is an outgrowth of the debate that happened over the last few months regarding Conservancy's GPL compliance efforts. Specifically, I noticed that, buried in the FUD over the last four months regarding GPL compliance, there was one key criticism that was valid and couldn't be ignored: Linux copyright holders should be involved in compliance actions on embedded systems. Linux is a central component of such work, and the BusyBox developers agreed wholeheartedly that having some Linux developers involved with compliance would be very helpful. Conservancy has addressed this issue by building a broad coalition of copyright holders in many different projects who seek to work on compliance with Conservancy, including not just Linux and BusyBox, but other projects as well.

    I'm looking forward in my day job to working collaboratively with copyright holders of many different projects to uphold the rights guaranteed by GPL. I'm also elated at the broad showing of support by other Conservancy projects. In addition to the primary group in the announcement (i.e., copyright holders in BusyBox, Samba and Linux), a total of seven other GPL'd and/or LGPL'd projects have chosen Conservancy to handle compliance efforts. It's clear that Conservancy's compliance efforts are widely supported by many projects.

    The funniest part about all this, though, is that while there has been no end of discussion of Conservancy's and other's compliance efforts this year, most Free Software users never actually have to deal with the details of compliance. Requirements of most copyleft licenses like GPL generally trigger on distribution of the software — particularly distribution of binaries. Since most users simply receive distribution of binaries, and run them locally on their own computer, rarely do they face complex issues of compliance. As the GPLv2 says, The act of running the Program is not restricted.

    Posted on Tuesday 29 May 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2012-02-11: Cutting Through The Anti-Copyleft Political Ruse

    I'd like to thank Harald Welte for his reasoned and clear blog post about GPL enforcement which I hope helps to clear up some of the confusions that I also wrote about recently.

    Harald and I appear to agree that all enforcement actions should request, encourage, and pressure companies toward full FLOSS compliance. Our only disagreement, therefore, is on a minor strategy point. Specifically, Harald believes that the “reinstatement of rights lever” shouldn't be used to require compliance on all FLOSS licenses when resolving a violation matter, and I believe such use of that lever is acceptable in some cases. In other words, Harald and I have only a minor disagreement on how aggressively a specific legal tool should be used. (I'd also note that, given Harald's interpretation of German law, he never had the opportunity to even consider using that tool, whereas it's always been a default tool in the USA.) Anyway, other than this minor side point, Harald and I appear to be otherwise in full agreement on everything regarding GPL enforcement.

    Specifically, one key place where Harald and I are in total agreement is: copyright holders who enforce should approve all enforcement strategies. In every GPL enforcement action that I've done in my life, I've always made sure of that. Indeed, even while I'm a very minor copyright holder in BusyBox (just a few patches), I still nevertheless defer to Erik Andersen (who holds a plurality of the BusyBox copyrights) and Denys Vlasenko (who is the current BusyBox maintainer) about enforcement strategy for BusyBox.

    I hope that Harald's post helps to end this silly recent debate about GPL enforcement. I think the overflowing comment pages can be summarized quite succinctly: some people don't like copyleft and don't want it enforced. Others disagree, and want to enforce. I've written before that if you support copyleft, the only logically consistent position is to also support enforcement. The real disagreement here, thus, is one about whether or not people like copyleft: that's an age-old debate that we just had again.

    However, the anti-copyleft side used a more sophisticated political strategy this time. Specifically, copyleft opponents are attempting to scapegoat minor strategy disagreements among those who do GPL enforcement. I'm grateful to Harald for cutting through that ruse. Those of us that support copyleft may have minor disagreements about enforcement strategy, but we all support GPL enforcement and want to see it continue. Copyleft opponents will of course use political maneuvering to portray such minor disagreements as serious policy questions. Copyleft opponents just want to distract the debate away from the only policy question that matters: Is copyleft a good force in the world for software freedom? I say yes, and thus I'm going to keep enforcing it, until there are no developers left who want to enforce it.

    Posted on Saturday 11 February 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-02-01: Some Basic Thoughts on GPL Enforcement

    I've had the interesting pleasure, over the last 36 hours, of watching people debate something that's been a major part of my life's work for the last thirteen years. I'm admittedly proud of myself for entirely resisting the urge to dive into the comment threads, and I don't think it would be all that useful to do so. Mostly, I believe my work stands on its own, and people can make their judgments and disagree if they like (as a few have) or speak out about how they support it (as even more did — at least by my confirmation-biased count, anyway :).

    I was concerned, however, that some of the classic misconceptions about GPL enforcement were coming up yet again. I generally feel that I give so many talks (including releasing one as an oggcast) that everyone must by now know the detailed reasons why GPL enforcement is done the way it is, and how a plan for non-profit GPL enforcement is executed.

    But, the recent discussion threads show otherwise. So, over on Conservancy's blog, I've written a basic, first-principles summary of my GPL enforcement philosophy and I've also posted a few comments on the BusyBox mailing list thread, too.

    I may have more to say about this later, but that's it for now, I think.

    Posted on Wednesday 01 February 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

2011

December

  • 2011-12-16: FaiFCast Release, and Submit to FOSDEM Legal & Policy Issues DevRoom

    Today Karen Sandler and I released Episode 0x1E of the Free as in Freedom oggcast (available in ogg and mp3 formats). There are two important things discussed on that oggcast that I want to draw your attention to:

    Submit a proposal for the Legal & Policy Issues DevRoom CFP

    Tom Marble, Richard Fontana, Karen Sandler, and I are coordinating the Legal and Policy Issues DevRoom at FOSDEM 2012. The Call for Participation for the DevRoom is now available. I'd like to ask anyone reading this blog post who has an interest in policy and/or legal issues related to software freedom to submit a talk by Friday 30 December 2011, by emailing <[email protected]>.

    We only have about six slots for speakers (it's a one-day DevRoom), so we won't be able to accept all proposals. I just wanted to let everyone know that so you don't flame me if you submit and get rejected. Meanwhile, note that our goal is to avoid the “this is what copyrights, trademarks and patents are” introductory talks. Our focus is on complex issues for those already informed about the basics. We really felt that the level of discourse about legal and policy issues at software freedom conferences needs to rise.

    There are, of course, plenty of secret membership clubs0, even some with their own private conferences, where these sorts of important issues are discussed. I personally seek to move high-level policy discussion and debate out of the secret “old-boys” club backrooms and into a public space where the entire software freedom community can openly discuss important legal and policy questions. I hope this DevRoom is a first step in that direction!

    Issues & Questions List for the Software Freedom Non-Profits Debate

    I've made reference recently to debates about the value of non-profit organizations for software freedom projects. In FaiFCast 0x1E, Karen and I discuss the debate in depth. As part of that, as you'll see in the show notes, I've made a list of issues that I think were fully conflated during the recent debates. I can't spare the time to opine in detail on them right now (although Karen and I do a bit of that in the oggcast itself), but I did want to copy the list over here in my blog, mainly to list them out as issues worth thinking about in a software freedom non-profit:

    • Should a non-profit home decide what technical infrastructure is used for a software freedom project? And if so, what should it be?
    • If the non-profit doesn't provide technological services, should non-profits allow their projects to rely on for-profits for technological or other services?
    • Should a non-profit home set political and social positions that must be followed by the projects? If so, how strictly should they be enforced?
    • Should copyrights be held by the non-profit home of the project, or with the developers, or a mix of the two?
    • Should the non-profit dictate licensing requirements on the project? If so, how many licenses and which licenses are acceptable?
    • Should a non-profit dictate strict copyright provenance requirements on their projects? If not, should the non-profit at least provide guidelines and recommendations?

    This list of questions is far from exhaustive, but I think it's a pretty good start.


    0 Admittedly, I've got a proverbial axe to grind about these secretive membership-only groups, since, for nearly all of them, I'm persona non grata. My frustration level in this reached a crescendo when, during a session at LinuxCon Europe recently, I asked for the criteria to join one such private legal issues discussions group, and I was told the criteria themselves were secret. I pointed out to the coordinators of the forum that this wasn't a particularly Free Software friendly way to run a discussion group, and they simply changed the subject. My hope is that this FOSDEM DevRoom can be a catalyst to start a new discussion forum for legal and policy issues related to software freedom that doesn't have this problem.

    BTW, just to clarify: I'm not talking about FLOSS Foundations as one of these secretive, members-only clubs. While the FLOSS Foundations main mailing list is indeed invite-only, it's very easy to join, and the only requirement is: “if you repost emails from this list publicly, you'll probably be taken off the mailing list”. There is no “Chatham House Rule” or other silly, unenforceable, spend-an-inordinate-amount-of-time-remembering-how-to-follow ruleset in place for FLOSS Foundations, but such silly rulesets are now common among these other secretive legal issues meeting groups.

    Finally, I know I haven't named publicly the members-only clubs I'm talking about here, and that's by design. This is the first time I've mentioned them at all in my blog, and my hope is that they'll change their behaviors soon. I don't want to publicly shame them by name until I give them a bit more time to change their behaviors. Also, I don't want to inadvertently promote these fora either, since IMO their very structure is flawed and community-unfriendly.

    Update: Some have claimed incorrectly that the text in the footnote above somehow indicates my unwillingness to follow the Chatham House Rule (CHR). I refuted that on identi.ca, noting that the text above doesn't say that, and those who think it does have simply misunderstood. My primary point (which I'll now state even more explicitly) is that CHR is difficult to follow, particularly when it is mis-applied to a mailing list. CHR is designed for meetings, which have a clear start time and a finish time. Mailing lists aren't meetings, so the behavior of CHR when applied to a mailing list is often undefined.

    I should furthermore note that people who have lived under CHR for a series of meetings have concerns similar to mine. For example, Allison Randal, who worked under CHR on Project Harmony, noted:

    The group decided to adopt Chatham House Rule for our discussions. … At first glance it seems quite sensible: encourage open participation by being careful about what you share publicly. But, after almost a year of working under it, I have to say I’m not a big fan. It’s really quite awkward sometimes figuring out what you can and can’t say publicly. I’m trying to follow it in this post, but I’ve probably missed in spots. The simple rule is tricky to apply.

    I agree with Allison.

    Posted on Friday 16 December 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2011-11-28: What's a Free Software Non-Profit For?

    Over on Conservancy's blog, I just published a blog post entitled What's a Free Software Non-Profit For?. It responds in part to what was written last week about non-profit homes for Free Software projects.

    Posted on Monday 28 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-11-24: No, You Won't See Me on Twitter, Facebook, Linkedin, Google Plus, Google Hangouts, nor Skype

    Most folks outside of technology fields and the software freedom movement can't grok why I'm not on Facebook. Facebook's marketing has reached most of the USA's non-technical Internet users. On the upside, Facebook gave the masses access to something akin to blogging. But, as with most technology controlled by for-profit companies, Facebook is proprietary software. Facebook, as a software application, is written in a mix of server-side software that no one besides Facebook employees can study, modify and share. On the client-side, Facebook is an obfuscated, proprietary software Javascript application, which is distributed to the user's browser when they access facebook.com. Thus, in my view, using Facebook is no different than installing a proprietary binary program on my GNU/Linux desktop.

    Most of the press critical of Facebook has focused on privacy, data mining of users' data on behalf of advertisers, and other types of data autonomy concerns. Such concerns remain incredibly important too. Nevertheless, since the advent of the software freedom community's concerns about network services a few years ago, I've maintained this simple principle, which I still find correct: while merely liberating all software for an online application is not a sufficient condition to treat the online users well, the liberation of the software is certainly a necessary condition for the freedom of the users. Releasing all code for the online application freely is the first step toward the freedom, autonomy, and privacy of the users. Therefore, I certainly won't give in and run proprietary software on my FaiF desktops. I simply refuse to use Facebook.

    Meanwhile, when Google Plus was announced, I didn't see any fundamental difference from Facebook. Of course, there are differences on the subtle edges: for example, I do expect that Google will respect data portability more than Facebook. However, I expect data mining for advertisers' behalf will be roughly the same, although Google will likely be more subtle with advertising tie-in than Facebook, and thus users will not notice it as much.

    But, since I'm firstly a software freedom activist, on the primary issue of my concern, there is absolutely no difference between Facebook and Google Plus. Google Plus' software is a mix of server-side trade-secret software that only Google employees can study, share, and modify, and a client-side proprietary Javascript application downloaded into the users' browsers when they access the website.

    Yet, in a matter of just a few months, much of the online conversation in the software freedom community has moved to Google Plus, and I've heard very few people lament this situation. It's not that I believe we'll succeed against proprietary software tomorrow, and I understand fully that (unlike me) most people in the software freedom community have important reasons to interact regularly with those outside of our community. It's not that I chastise software freedom developers and activists for maintaining a minimal presence on these services to interact with those who aren't committed to our cause.

    My actual complaint here is that Google Plus is becoming the default location for discussion of software freedom issues. I've noticed because I've recently discovered that I've missed a lot of community conversations that are only occurring on Google Plus. (I've similarly noticed that many of my Free Software contacts spam me to join Linkedin, so I assume something similar is occurring there as well.)

    What's more, I've received more pressure than ever before to sign up for not only Google Plus, but for Twitter, Linkedin, Google Hangout, Skype and other socially-oriented online communication services. Indeed, just in the last ten days, I've had three different software freedom development projects and/or organizations request that I sign up for a proprietary online communication service merely to attend a meeting or conference call. (Update on 2013-02-16: I still get such requests on a monthly basis.) Of course, I refused, but I've not felt peer pressure this strong since I was a teenager.

    Indeed, the advent of proprietary social networking software adds a new challenge to those of us who want to stand firm and resist proprietary software. As adoption of services like Facebook, Twitter, Google Plus, Skype, Linkedin and Google Hangouts increases, those of us who resist using proprietary software will come under ever-increasing peer pressure. Disturbingly, I've found that peer pressure comes not only from folks outside our community, but also from those who have, for years, otherwise been supporters of the software freedom movement.

    When I point out that I use only Free Software, some respond that Skype, Facebook, and Google Plus are convenient and do things that can't be done easily with Free Software currently. I don't argue that point. It's easy to resist Microsoft Windows, or Internet Explorer, or any other proprietary software that is substandard and works poorly. But proprietary software developers aren't necessarily stupid, nor untalented. In fact, proprietary software developers are highly paid to write easy-to-use, beautiful and enticing software (cross-reference Apple, BTW). The challenge the software freedom community faces is not merely to provide alternatives to the worst proprietary software, but to also replace the most enticing proprietary software available. Yet, if FaiF Software developers settle into being users of that enticing proprietary software, the key inspiration for development disappears.

    The best motivator to write great new software is to solve a problem that's not yet solved. To inspire ourselves as FaiF Software developers, we can't complacently settle into use of proprietary software applications as part of our daily workflow. That's why you won't find me on Google Plus, Google Hangout, Facebook, Skype, Linkedin, Twitter or any other proprietary software network service. You can phone with me with SIP, you can read my blog and identi.ca feed, and chat with me on IRC and XMPP, and those are the only places that I'll be until there's Free Software replacements for those other services. I sometimes kid myself into believing that I'm leading by example, but sadly few in the software freedom community seem to be following.

    Posted on Thursday 24 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-11-13: Just Ignore Him; He'll Go Away Eventually.

    One of my favorite verbal exchanges in an episode of The West Wing occurs in S03E08, The Women of Qumar. In the story, after President Bartlet said at a fundraiser: Everything has risks. Your car can drive into a lake and your seatbelt jams, but no one's saying don't wear your seat belt, someone had a car accident while not wearing a seatbelt and filed a lawsuit naming the President as a defendant. Sam, the Deputy Communications Director, thinks the White House should respond preemptively before the story. Toby, the Communication Director, instead ignores Sam and then has this wonderfully deadpan exchange with the President:

    BARTLET
    [Toby,] Come with me for a second, would you?
    TOBY
    Sir, it's possible you're going to hear some stuff about seatbelts today. I urge you to ignore it.
    BARTLET
    No problem. [changes topic] Are you straightening things out with the Smithsonian?

    I remember when I first watched this episode in late 2001. It expressed to me a cogent and concise fact of press relations: someone may be out there trying to get attention for themselves on a topic related to you with some sophistic argument, but you should sometimes just ignore it.

    With that, I say: Dear readers of my blog, you may have heard some stuff about Edward Naughton again this week. I urge you to ignore it.

    I hope you'll all walk in the shoes of President Bartlet and respond with a “No problem” and change the topic. If you really want to follow this story, just read what I've said before on it; nothing has changed.

    Meanwhile, while Naughton seems to be happy to selectively quote me to support his sophistry, he still hasn't gotten in touch with me to help actually enforce the GPL. It's obvious he doesn't care in the least about the GPL; he just wants to use it inappropriately to attack Android/Linux and Google. There are criticisms that Google and Android/Linux deserve, but none of them relate to the topic of GPL violations.

    Posted on Sunday 13 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-11-11: Last Four FaiF Episodes

    Those of you who follow my blog have probably wondered where I've been. Quite frankly, there is just so much work going on at Conservancy that I have had almost no time to do anything but Conservancy work, eat, and sleep. My output on this blog and on identi.ca surely shows that.

    The one thing that I've kept up with is the oggcast, Free as in Freedom that I co-host with Karen Sandler, and which is produced by Dan Lynch.

    Since I last made a blog post here, Karen, Dan and I released four oggcasts. I'll discuss them here in reverse chronological order:

    In Episode 0x1C, which was released today, we published Karen's interview with Adam Dingle of Yorba. IMO (which is undoubtedly biased), this episode is an important one, since it relates to the issues of non-profit organizations in our community who are waiting in the 501(c)(3) application queue. This is a detailed and specific follow-up to the issues that Karen and I discussed on FaiF's Episode 0x13.

    In Episode 0x1B, Karen and I discuss in some detail about the work that we've been up to. Both Karen and I are full-time Executive Directors, and the amount of work that job takes always seems insurmountable. Although, after we recorded the episode, I somewhat embarrassingly remembered the Bush/Kerry debate where George W. Bush kept saying his job as president is hard work. It's certainly annoying when a chief executive goes on and on about how hard his job is, so I apologize if I did a little too much of that in Episode 0x1B.

    In Episode 0x1A, Karen and I discussed in detail Steve Jobs' death and the various news coverage about it. The subject is a bit old news now that I write this, but I'm glad we did that episode, since it gave me an opportunity to say everything I wanted to say about Steve Jobs' life and death.

    In Episode 0x19, we played Karen's interview with Jos Poortvliet, discussed the identi.ca upgrade, and Karen discussed GNOME 3.2.

    My plan is to at least keep the FaiF oggcast going, and I'm even bugging Fontana that he and I should start an oggcast too. Beyond that, I can't necessarily commit to any other activities outside of that (and my job at Conservancy and volunteer duties at FSF). BTW, I recently attended a few conferences (both LinuxCon Europe and the Summer of Code Mentor Summit). At both of them, multiple folks asked me why I haven't been blogging more. I appreciate people's interest in what I'm writing, but at the moment, my day job at Conservancy and volunteer work at FSF have had to take absolute priority.

    Based on the ebb and flow (yes, that's the first time I've actually used that phrase on my ebb.org blog :) of the Free Software community that I've gotten used to over the last decade and a half, I usually find that things slow down from mid-December until mid-January. Since Conservancy's work is based on the needs of its Free Software projects, I'll likely be able to return to a “normal” 50-hour work week (instead of the 60-70 I've been doing lately) in December. Thus, I'll probably try to write some queued blog posts then to slowly push out over the few months that follow.

    Finally, I want to mention that Conservancy has a donation appeal up on its website. I hope you'll give generously to support Conservancy's work. On that, I'll just briefly mention my “hard work” again, to assure you that donors to Conservancy definitely get their money's worth when I'm on the job. Since I'm on the topic, I also thank everyone who has donated to FSF and Conservancy over the years. I've been fortunate to have worked full-time at both organizations, and I appreciate the community that has supported all that work over the years.

    Posted on Friday 11 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

August

  • 2011-08-21: Desktop Summit 2011

    I realize nearly ten days after the end of a conference is a bit late to blog about it. However, I needed some time to recover my usual workflow, having attended two conferences almost back-to-back, OSCON 2011 and Desktop Summit. (The strain of the back-to-back conferences, BTW, made it impossible for me to attend LinuxCon North America 2011, although I'll be at LinuxCon Europe. I hope next year's summer conference schedule is not so tight.)

    This was my first Desktop Summit, as I was unable to attend the first one in Gran Canaria two years ago. I must admit, while it might be a bit controversial to say so, that I felt the conference was still like two co-located conferences rather than one. I got a chance to speak to my KDE colleagues about various things, but I ended up mostly attending GNOME talks, and therefore felt more like I was at GUADEC than at a Desktop Summit for most of the time.

    The big exception to that, however, was in fact the primary reason I was at Desktop Summit this year: to participate in a panel discussion with Mark Shuttleworth and Michael Meeks (who gave the panel a quick one-sentence summary on his blog). That was a plenary session, and the room was filled with KDE and GNOME developers alike, all of whom seemed very interested in the issue.

    [ Photo: the CAA/CLA panel discussion at Desktop Summit 2011. ]

    The panel format was slightly frustrating — primarily due to Mark's insistence that we all make very long opening statements — although Karen Sandler nevertheless did a good job moderating it and framing the discussion.

    I get the impression most of the audience was already pretty well informed about all of our positions, although I think I shocked some by finally saying clearly in a public forum (other than identi.ca) that I have been lobbying FSF to make copyright assignment for FSF-assigned projects optional rather than mandatory. Nevertheless, we were cast well into our three roles: Mark, who wants broad licensing control over projects his company sponsors so he can control the assets (and possibly sell them); Michael, who has faced so many troubles in the OpenOffice.org/LibreOffice debacle that he believes inbound=outbound can be The Only Way; and me, who believes that copyright assignment is useful when given to non-profits that promise to enforce the GPL for the public good, but otherwise is a Bad Thing.

    Lydia tells me that the videos will be available eventually from Desktop Summit, and I'll update this blog post when they are so folks can watch the panel. I encourage everyone concerned about the issue of rights transfers from individual developers to entities (be they via copyright assignment or other broad CLA means) to watch the video once it's available. For the moment, Jake Edge's LWN article about the panel is a pretty good summary.

    My favorite moment of the panel, though, was when Shuttleworth claimed he was but a distant observer of Project Harmony. Karen, as moderator, quickly pointed out that he was billed as Project Harmony's originator in the panel materials. It's disturbing that Shuttleworth thinks he can get away with such a claim: it's a matter of public record that Amanda Brock (Canonical, Ltd.'s General Counsel) initiated Project Harmony, led it for most of its early drafts, and then Canonical Ltd. paid Mark Radcliffe (a lawyer who represents companies that violate the GPL) to finish the drafting. I suppose Shuttleworth's claim is narrowly true (if misleading) since his personal involvement as an individual was only tangential, but his money and his staff were clearly central: even now, it's led by his employee, Allison Randal. If you run the company that runs a project, it's your project: after all, doesn't that fit clearly with Shuttleworth's suppositions about why he should be entitled to be the recipient of copyright assignments and broad CLAs in the first place?

    The rest of my time at Desktop Summit was more as an attendee than a speaker. Since I'm not a desktop or GUI developer by any means, I mostly went to talks and learned what others had to teach. I was delighted, however, that no fewer than six people came up to me and said they really liked this blog. It's always good to be told that something you put a lot of volunteer work into is valuable to at least a few people, and fortunately everyone on the Internet is famous to at least six people. :)

    Sponsored by the GNOME Foundation!

    Meanwhile, I want to thank the GNOME Foundation for sponsoring my trip to Desktop Summit 2011, as they did last year for GUADEC 2010. Given my own work and background, I'm very appreciative of a non-profit with limited resources providing travel funding for conferences. It's a big expense, and I'm thankful that the GNOME Foundation has funded my trips to their annual conference.

    BTW, while we await the videos from Desktop Summit, there's some “proof” you can see that I attended Desktop Summit, as I appear in the group photo, although you'll need to view the hi-res version, scroll to the lower right of the image, and find me. I'm in the second/third (depending on how you count) row back, 2-3 from the right, and two to the left of Lydia Pintscher.

    Finally, I did my best to live-dent from Desktop Summit 2011. That might be of interest to some as well, for example, if you want to dig back and see what folks said in some of the talks I attended. There were also two threads after the panel that may be of interest.

    Posted on Sunday 21 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-08-18: Will Nokia Ever Realize Open Source Is Not a Panacea?

    I was pretty sure there was something wrong with the whole thing in fall of 2009, when they first asked me. A Nokia employee contacted me to ask if I'd be willing to be a director of the Symbian Foundation (or so I thought — read on). I wrote them a thoughtful response explaining my then-current concerns about Symbian:

    • the poor choice of the Eclipse Public License for the eventual code,
    • the fact that Symbian couldn't be built in a purely Free Software environment, and
    • that the Symbian source code that had been released thus far didn't actually run on any existing phones.

    I nevertheless offered to serve as a director for one year, and I would resign at that point if the problems that I'd listed weren't resolved.

    I figured that was quite a laundry list. I also figured that they probably wouldn't be interested anyway once they saw my list. Amusingly, they still were. But then, I realized what was really going on.

    In response to my laundry list, I got back a rather disturbing response that revealed a confusion in my understanding. I wasn't being invited to join the board of the Symbian Foundation. Instead, they had asked me to serve as a Director of a small USA entity (which they heralded as Symbian DevCo) that would then be permitted one representative in the Symbian Foundation itself, which was, in turn, a trade association controlled by dozens of proprietary software companies.

    In fact, this Nokia employee said that they planned to channel all individual developers toward this Symbian DevCo in the USA, and that would be the only voice these developers would have in the direction of Symbian. It would be one tiny voice against the dozens of proprietary software companies that controlled the real Symbian Foundation, a trade association.

    Anyone who has worked in the non-profit sector, or even contributed to any real software freedom project, can see what's deeply wrong there. However, my response wasn't to refuse. I wrote back and said clearly why this structure was failing completely to create a software freedom community that could survive vibrantly. I pointed out the way the Linux community was structured: the Linux Foundation is a trade association for companies — and, while it does fund Linus' salary, it doesn't control his work, nor the activities of any other developer. Meanwhile, the individual Linux developers have all the real authority: from community structure, to licensing, to holding copyrights, to technical decision-making. I pointed out that if they wanted Symbian to succeed, they should emulate Linux as much as they could. I suggested Nokia immediately change the whole structure to have developers in charge of the project, and have a path for Symbian DevCo to ultimately be the primary organization in charge of the codebase, while Symbian Foundation could remain the trade association, roughly akin to the Linux Foundation. I offered to help them do that.

    You might guess that I never got a reply to that email. It was thus no surprise to me in the least what happened to Symbian after that.

    So, within 17 months of Symbian Foundation's inquiry to ask me to help run Symbian DevCo, the (Open Source) Symbian project was canceled entirely, the codebase was now again proprietary (with a few of the old codedumps floating around on other sites), and the Symbian Foundation consists only of a single webpage filled with double-speak.

    Of course, even if Nokia had tried its hardest to build an actual software freedom community, Symbian still had a good chance of failing, as I pointed out in March 2010. But, if Nokia had actually tried to release control and let developers have some authority, Symbian might have had a fighting chance as Free Software. As it turned out, Nokia threw some code over the wall, gave all the power to decide what happens to a bunch of proprietary software companies, and then hung it all out to dry. It's a shining example of how to liberate software in a way that will guarantee its deprecation in short order.

    Of course, we now know that during all this time, Nokia was busy preparing a backroom deal that would end its always-burgeoning-but-never-complete affiliation with software freedom by making a deal with Microsoft to control the future of Nokia. It's a foolish decision for software freedom; whether it's a good business decision surely isn't for me to judge. (After all, I haven't worked in the for-profit sector for fifteen years for a reason.)

    It's true that I've always given a hard time to Maemo (and to MeeGo as well). Those involved from inside Nokia spent the last six months telling me that MeeGo is run by completely different people at Nokia, and Nokia did recently launch yet another MeeGo-based product. I've meanwhile gotten the impression that Nokia is one of those companies whose executives are more like wealthy Romans who like to pit their champions against each other in the arena to see who wins; Nokia's various divisions appear to be in constant competition with each other. I imagine someone running the place has read too much Ayn Rand.

    Of course, it now seems that MeeGo hasn't, in Nokia's view, “survived as the fittest”. I learned today (thanks to jwildeboer) that, in Elop's words, there is no returning to MeeGo, even if the N9 turns out to be a hit. Nokia's commitment to Maemo/MeeGo, while it did last at least four years or so, is now gone too, as they begin their march to Microsoft's funeral dirge. Yet another FLOSS project Nokia got serious about, coordinated poorly, and ultimately gave up on.

    Considering Nokia's bad trajectory led me to think about how Open Source companies tend to succeed. I've noticed something interesting, which I've confirmed by talking to a lot of employees of successful Open Source companies. The successful ones — those that get something useful done for software freedom while also making some cash (i.e., the true promise of Open Source) — let the developers run the software projects themselves. Such companies don't relegate the developers into a small non-profit that has to lobby dozens of proprietary software companies to actually make an impact. They don't throw code over the wall — rather, they fund developers who make their own decisions about what to do in the software. Ultimately, smart Open Source companies treat software freedom development like R&D should be treated: fund it and see what comes out, and try to build a business model after something's already working. Companies like Nokia, by contrast, constantly put their carts in front of all the horses and wonder why those horses whinny loudly at them but don't write any code.

    Open Source slowly became a fad during the DotCom era, and it strangely remains such. A lot of companies follow fads, particularly when they can't figure out what else to do. The fad becomes a quick-fix solution. Of course, for those of us who started as volunteers and enthusiasts in 1991 or earlier, software freedom isn't some new attraction at P. T. Barnum's circus. It's a community where we belong and collaborate to improve society. Companies are welcome to join us for the ride, but only if they put developers and users in charge.

    Meanwhile, my personal postscript to my old conversation with Nokia arrived in my inbox late in May 2011. I received an extremely vague email from a lawyer at Nokia. She wanted really badly to figure out how to quickly dump some software project — and she wouldn't tell me what it was — into the Software Freedom Conservancy. Of course, I'm sure this lawyer knew nothing about the history of the Symbian project wooing me for directorship of Symbian DevCo, nor all the other history of why “throwing code over the wall” into a non-profit is rarely known to work, particularly for Nokia. I sent her a response explaining all the problems with her request, and, true to Nokia's style, she didn't even bother to respond to thank me for my time.

    I can't wait to see what project Nokia dumps over the wall next, and then, in another 17 months (or if they really want to lead us on, four years), decides to proprietarize or abandon it because, they'll say, this open-sourcing thing just doesn't work. Yet, so many companies make money with it. The short answer is: Nokia, you keep doing it wrong!

    Update (2011-08-24): Boudewijn Rempt argued another side of this question. He says the Calligra suite is a counterexample of Nokia getting a FLOSS project right. I don't know enough about Calligra to agree or disagree.

    Posted on Thursday 18 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-08-15: If Only They'd Actually Help Enforce GPL

    Unfortunately, Edward Naughton is at it again, and everyone keeps emailing me about it, including Brian Proffitt, who quoted my email response to him this morning in his article.

    As I said in my response to Brian, I've written before on this issue and I have nothing much more to add. Naughton has not identified a GPL violation that actually occurred, at least with respect to Google's own distribution of Android, and he has completely ignored my public call for him to make such a formal report to the copyright holders of GPL violations for which he has evidence (if any).

    Jon Corbet of LWN has also picked up the story, mostly pontificating on what it would mean if the loss of distribution rights under GPLv2 § 4 were used nefariously instead of in the honorable way it has hitherto been used to defend software freedom. I commented on the LWN post.

    I think Jon's right to raise that specific concern, and that's a good reason for projects to upgrade to GPLv3. But, nevertheless, this whole thing is not even relevant until someone actually documents a real GPL violation that has occurred. As I previously mentioned, I'm aware of plenty of documented violations (thanks to Matthew Garrett), and I'd love it if more people picked up and acted on these violations to enforce the GPL. I again tell Naughton: if you are seriously concerned about enforcing GPL, then volunteer your time as a lawyer to help. But we all know that's not really what interests you: rather, your job is to spread FUD.

    Posted on Monday 15 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-08-05: You're Living in the Past, Dude!

    At the 2000 Usenix Technical Conference (which was the primary “generalist” conference for Free Software developers in those days), I met Miguel de Icaza for the third time in my life. In those days, he'd just started Helix Code (anyone else remember what Ximian used to be called?) and was still president of the GNOME Foundation. To give you some context: Bonobo was a centerpiece of new and active GNOME development then.

    Out of curiosity and a little excitement about GNOME, I asked Miguel if he could show me how to get GNOME 1.2 running on my laptop. Miguel agreed to help, quickly taking control of the keyboard and frantically typing and editing my sources.list.

    Debian potato was the just-becoming-stable release in those days, and of course, I was still running potato (this was before my experiment with running things from testing began).

    After a few minutes hacking on my keyboard, Miguel realized that I wasn't running woody, Debian's development release. Miguel looked at me, and said: You aren't running woody; I can't make GNOME run on this thing. There's nothing I can do for you. You're living in the past, dude! (Those who know Miguel IRL can easily imagine how he'd sound saying this.)

    So, I've told that story many times for the last eleven years. I usually tell it for laughs, as it seems an equal-opportunity humorous anecdote. It pokes some fun at Miguel, at me, at Debian for its release cycle, and also at GNOME (which has, since its inception, tried to never live in the past, dude).

    Fact is, though, I rather like living in the past, at least with regard to my computer setup. By way of desktop GUIs, I used twm well into the late 1990s, and used fvwm well into the early 2000s. I switched to sawfish (then sawmill) during the relatively brief period when GNOME used it as its default window manager. When Metacity became the default, I never switched because I'd configured sawfish so heavily.

    In fact, the only actual parts of GNOME 2 that I ever used on a daily basis have been (a) a small unobtrusive panel, (b) dbus (and its related services), and (c) the Network Manager applet. When GNOME 3 was released, I had no plans to switch to it, and frankly I still don't.

    I'm not embarrassed that I consistently live in the past; it's sort of the point. GNOME 3 isn't for me; it's for people who want their desktop to operate in new and interesting ways. Indeed, it's (in many ways) for the people who are tempted to run OSX because its desktop is different than the usual, traditional, “desktop metaphor” experience that had been standard since the mid-1990s.

    GNOME 3 just wasn't designed with old-school Unix hackers in mind. Those of us who don't believe a computer is any good until we see a command line aren't going to be the early adopters who embrace GNOME 3. For my part, I'll actually try to avoid it as long as possible, continuing to run my little GNOME 2 panel and sawfish, until, slowly, GNOME 3 seeps into my workflow the way the GNOME 2 panel and sawfish did when they were current, state-of-the-art GNOME technologies.

    I hope that other old-school geeks will see this distinction: we're past the era when every Free Software project is targeted at us hackers specifically. Failing to notice this will cause us to ignore the deeper problem software freedom faces. GNOME Foundation's Executive Director (and my good friend), Karen Sandler, pointed out in her OSCON keynote something that's bothered her and me for years: the majority of computers at OSCON are Apple hardware running OSX. (In fact, I even noticed Simon Phipps has one now!) That's the world we're living in now. Users who actually know about “Open Source” are now regularly enticed to give up software freedom for shiny things.

    Yes, as you just read, I can snicker as quickly as any old-school command-line geek (just as Linus Torvalds did earlier this week) at the pointlessness of wobbly windows, desktop cubes, and zoom effects. I could also easily give a treatise on how I can get work done faster, better, and smarter because I have the technology of years ago that makes every keystroke matter.

    Notwithstanding that, I'd even love to have the same versatility with GNOME 3 that I have with sawfish. And, if it turns out GNOME 3's embedded Javascript engine will give me the same hackability I prefer with sawfish, I'll adopt GNOME 3 happily. But, no matter what, I'll always be living in the past, because like every other human, I hate changing anything, unless it's strictly necessary or it's my own creation and derivation. Humans are like that: no matter who you are, if it wasn't your idea, you're always slow to adopt something new and change old habits.

    Nevertheless, there's actually nothing wrong with living in the past — I quite like it myself. However, I'd suggest that care be taken to not admonish those who make a go at creating the future. (At the risk of making a conclusion that sounds like a time travel joke,) don't forget that their future will eventually become that very past where I and others would prefer to live.

    Posted on Friday 05 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2011-07-29: GNU Emacs Developers Will Fix It; Please Calm Down

    fabsh was the first to point me at a slashdot story that is (like most slashdot stories) sensationalized.

    The story, IMO, makes the usual mistake of considering a GPL violation as an earth-shattering disaster that imperils the future of software freedom. GPL violations vary in the degree of the problems they create; most aren't earth-shattering.

    Specifically, the slashdot story points to a thread on the emacs-devel mailing list about a failure to include some needed bison grammar in the complete and corresponding sources for Emacs in a few Emacs releases in the last year or two. As you can see there, RMS quickly responded to call it a grave problem … [both] legally and ethically, and he's asked the Emacs developers to help clear up the problem quickly.

    I wrote nearly two years ago that one shouldn't jump to conclusions and start condemning those who violate the GPL without investigating further first. Most GPL violations are mistakes, as this situation clearly was, and I suspect it will be resolved within a few news cycles of this blog post.

    And please, while we all see the snickering-inducing irony of FSF and its GNU project violating the GPL, keep in mind that this is what I've typically called a “community violation”. It's a non-profit volunteer project that made an honest mistake and is resolving it quickly. Meanwhile, I've a list of hundreds of companies who are actively violating the GPL, ignoring users who requested source, and have apparently no interest in doing the right thing until I open an enforcement action against them. So, please keep perspective about how bad any given violation is. Not all GPL violations are of equal gravity, but all should be resolved, of course. The Emacs developers are on it.

    Posted on Friday 29 July 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-07-07: Project Harmony (and “Next Generation Contributor Agreements”) Considered Harmful

    Update on 2014-06-10: While this article is about a specific series of attempts to “unify” CLAs and ©AAs into a single set of documents, the issues raised below cover the gamut of problems that are encountered in many CLAs and ©AAs in common use today in FLOSS projects. Even though it appears that both Project Harmony and its reincarnation, the Next Generation Contributor Agreements, have failed, CLAs and ©AAs are increasing in popularity among FLOSS projects, and developers should take action to oppose these agreements for their projects.

    Update on 2013-09-05: Project Harmony was recently relaunched under the name the Next Generation of Contributor Agreements. AFAICT, it's been publicly identified as the same initiative, and its funding comes from the same person. I've verified that everything I say below still applies to their current drafts available from the Contributor Agreements project. I also emailed these comments to the leaders of that project before it started, but they wouldn't respond to my policy questions.


    Much advertising is designed to convince us to buy or use something that we don't need. When I hear someone droning on about some new, wonderful thing, I have to worry that these folks are actually trying to market something to me.

    Very soon, you're likely to see a marketing blitz for this thing called Project Harmony (which just released its 1.0 version of document templates). Even the name itself is marketing: it's not actually descriptive, but is so named to market a “good feeling” about the project before even knowing what it is. (It's also got serious namespace collision, including with a project already in the software freedom community.)

    Project Harmony markets itself as fixing something that our community doesn't really consider broken. Project Harmony is a set of document templates, primarily promulgated and mostly drafted by corporate lawyers, that entice developers to give control of their software work over to companies.

    My analysis below is primarily about how these agreements are problematic for individual developers. An analysis of the agreements in light of companies or organizations using them between each other may have the same or different conclusions; I just haven't done that analysis in detail so I don't know what the outcome is.

    [ BTW, I'm aware that I've failed to provide a TL;DR version of this article. I tried twice to write one and ultimately decided that I can't. Simply put, these issues are complex, and I had to draw on a decade of software freedom licensing, policy, and organizational knowledge to fully articulate what's wrong with the Project Harmony agreements. I realize that sounds like a It was hard to write — it should be hard to read justification, but I just don't know how to summarize these Gordian problems in a pithy way. I nevertheless hope developers will take the time to read this before they sign a Project Harmony agreement, or — indeed — any CLA or ©AA. ]

    Copyright Assignment That Lacks Real Assurances

    First of all, about half of Project Harmony is copyright assignment agreements (©AAs). Assigning copyright completely gives the work over to someone else. Once the ©AA is signed, the work ceases to belong to the assignor. It's as if that work was done by the assignee. There is admittedly some value to copyright assignment, particularly if developers want to ensure that the GPL or other copyleft is enforced on their work and they don't have time to do it themselves. (Although developers can also designate an enforcement agent to do that on their behalf even if they don't assign copyright, so even that necessity is limited.)

    One must immensely trust an assignee organization. Personally, I've only ever assigned some of my copyrights to one organization in my life: the Free Software Foundation, because FSF is the only organization I ever encountered that is institutionally committed to DTRT'ing with copyrights in a manner similar to my personal moral beliefs.

    First of all, as I've written about before, FSF's ©AA makes all sorts of promises back to the assignor. Second, FSF is institutionally committed to the GPL and enforcing GPL in a way that advances FSF's non-profit advocacy mission for software freedom. All of this activity fits my moral principles, so I've been willing to sign FSF's ©AAs.

    Yet, I've nevertheless met many developers who refuse to sign FSF's ©AAs. While many of such developers like the GPL, they don't necessarily agree with the FSF's moral positions. Indeed, in many cases, developers are completely opposed to assigning copyright to anyone, FSF or otherwise. For example, Linus Torvalds, founder of Linux, has often stated on record that he never wanted to do copyright assignments, for several reasons: [he] think[s] they are nasty and wrong personally, and [he]'d hate all the paperwork, and [he] thinks it would actually detract from the development model.

    Obviously, my position is not as radical as Linus'; I do think ©AAs can sometimes be appropriate. But, I also believe that developers should never assign copyright to a company or to an organization whose moral philosophy doesn't fit well with their own.

    FSF, for its part, spells out its moral position in its ©AA itself. As I've mentioned elsewhere, and as Groklaw recently covered in detail, FSF's ©AA makes various legally binding promises to developers who sign it. Meanwhile, Project Harmony's ©AAs, while they put forward a few options that look vaguely acceptable (although they have problems of their own, discussed below), make no such promises mandatory. I have often pointed Harmony's drafters to the terms that FSF has proposed should be mandatory in any for-profit company's ©AA, but Harmony's drafters have refused to incorporate these assurances as a required part of Harmony's agreements. (Note that such assurances would still be required for the CLA options as well; see below for details why.)

    Regarding ©AAs, I'd like to note finally that FSF does not require ©AAs for all GNU packages. This confusion is so common that I'd like to draw attention to it, even though it's only a tangential point in this context. FSF's ©AA is only mandatory, to my knowledge, on those GNU packages where either (a) FSF employees developed the first versions or (b) the original developers themselves asked to assign copyright to FSF, upon their project joining GNU. In all other cases, FSF assignment is optional. Some GNU projects, such as GNOME, have their own positions regarding ©AAs that differ radically from FSF's. I seriously doubt that companies who adopt Project Harmony's agreement will ever be as flexible on copyright assignment as FSF, nor will any of the possible Project Harmony options be acceptable to GNOME's existing policy.

    Giving Away Rights to Give Companies Warm Fuzzies?

    Project Harmony, however, claims that the important part isn't its ©AA, but its Contributor License Agreement (CLA). To briefly consider the history of Free Software CLAs, note that the Apache CLA was likely the first CLA used in the Free Software community. Apache Software Foundation has always been heavily influenced by IBM and other companies, and such companies have generally sought the “warm fuzzies” of getting every contributor to formally assent to a complex legal document that asserts various assurances about the code and gives certain powers to the company.

    The main point of a CLA (and a somewhat valid one) is to ensure that the developers have verified their right to contribute the code under the specified copyright license. Both the Apache CLA and Project Harmony's CLA go to great (and verbose) lengths to require developers to agree that they know the contribution is theirs. In fact, if a developer signs one of these CLAs, the developer makes a formal contract with the entity (usually a for-profit company) that the developer knows for sure that the contribution is licensed under the specified license. The developer then takes on all liability if that fact is in any way incorrect or in dispute!

    Of course, shifting away all liability about the origins of the code is a great big “warm fuzzy” for the company's lawyers. Those lawyers know that they can now easily sue an individual developer for breach of contract if the developer was wrong about the code. If the company redistributes some developer's code and ends up in an infringement suit where the company has to pay millions of dollars, they can easily come back and sue the developer0. The company would argue in court that the developer breached the CLA. If this possible outcome doesn't immediately worry you as an individual developer signing a Project Harmony CLA for your FLOSS contribution, it should.

    “Choice of Law” & Contractual Arrangement Muddies Copyright Claims

    Apache's CLA doesn't have a choice of law clause, which is preferable in my opinion. Most lawyers just love a “choice of law” clause for various reasons. The biggest reason is that it means the rules that apply to the agreement are the ones with which the lawyers are most familiar, and the jurisdiction for disputes will be the local jurisdiction of the company, not of the developer. In addition, lawyers often pick particular jurisdictions that are very favorable to their client and not as favorable to the other signers.

    Unfortunately, all of Project Harmony's drafts include a “choice of law” clause1. I expect that the drafters will argue in response that the jurisdiction is a configuration variable. However, the problem is that the company decides the binding of that variable, which almost always won't be the binding that an individual developer prefers. The term will likely be non-negotiable at that point, even though it was configurable in the template.

    Not only that, but imagine a much more likely scenario about the CLA: the company fails to use the outbound license they promised. For example, suppose they promised the developers it'd be AGPL'd forever (although, no such option actually exists in Project Harmony, as described below!), but then the company releases proprietarized versions. The developers who signed the CLA are still copyright holders, so they can enforce under copyright law, which, by itself, would allow the developers to enforce under the laws in whatever jurisdiction suits them (assuming the infringement is happening in that jurisdiction, of course).

    However, by signing a CLA with a “choice of law” clause, the developers agreed to whatever jurisdiction is stated in that CLA. The CLA has now turned what would otherwise be a mundane copyright enforcement action operating purely under the developer's local copyright law into a contract dispute between the developers and the company under the chosen jurisdiction's laws. Obviously that agreement might include AGPL and/or GPL by reference, but the claim of copyright infringement due to violation of GPL is now muddied by the CLA contract that the developers signed, wherein the developers granted some rights and permission beyond GPL to the company.

    Even worse, if the developer does bring action in their own jurisdiction, their local court is forced to interpret the laws of another place. This leads to highly variable and confusing results.

    Problems for Individual Copyright Enforcement Against Third-Parties

    Furthermore, even though individual developers still hold the copyrights, the Project Harmony CLAs grant many transferable rights and permissions to the CLA recipient (again, usually a company). Even if the reasons for requiring that were noble, it introduces a bundle of extra permissions that can be passed along to other entities.

    Suddenly, what was once a simple copyright enforcement action for a developer discovering a copyleft violation becomes a question: Did this violating entity somehow receive special permissions from the CLA-collecting entity? Violators will quickly become aware of this defense. While the defense may not have merit (i.e., the CLA recipient may not even know the violator), it introduces confusion. Most legal proceedings involving software are already confusing enough for courts due to the complex technology involved. Adding something like this will just cause trouble and delays, further taxing our already minimally funded community copyleft enforcement efforts.

    Inbound=Outbound Is All You Need

    Meanwhile, the whole CLA question actually is but one fundamental consideration: Do we need this? Project Harmony's answer is clear: its proponents claim that there is mass confusion about CLAs and no standardization, and therefore Project Harmony must give a standard set of agreements that embody all the options that are typically used.

    Yet, Project Harmony has purposely refused to offer the simplest and most popular option of all, which my colleague Richard Fontana (a lawyer at Red Hat who also opposes Project Harmony) last year dubbed inbound=outbound. Specifically, the default agreement in the overwhelming majority of FLOSS projects is simply this: each contributor agrees to license each contribution using the project's specified copyright license (or a license compatible with the project's license).

    No matter what way you dice Project Harmony, the other contractual problems described above make true inbound=outbound impossible because the CLA recipient is never actually bound formally by the project's license itself. Meanwhile, even under its best configuration, Project Harmony can't adequately approximate inbound=outbound. Specifically, Project Harmony attempts to limit outbound licensing with its § 2.3 (called Outbound License). However, all the copyleft versions of this template include a clause that says: We [the CLA recipient] agree to license the Contribution … under terms of the … licenses which We are using on the Submission Date for the Material. Yet, there is no way for the contributor to reliably verify what licenses are in use privately by the entity receiving the CLA. If the entity is already engaged in, for example, a proprietary relicensing business model at the Submission Date, then the contributor grants permission for such relicensing on the new contribution, even if the rest of § 2.3 promises copyleft. This is not a hypothetical: there have been many cases where it was unclear whether or not a company was engaged in proprietary relicensing, and then later it was discovered that they had been privately doing so for years. As written, therefore, every configuration of Project Harmony's § 2.3 is useless to prevent proprietarization.

    Even if that bug were fixed, the closest Project Harmony gets to inbound=outbound is restricting the CLA version to “FSF's list of ‘recommended copyleft licenses’”. However, this category makes no distinction between the AGPL and GPL, and furthermore ultimately grants FSF power over relicensing (as FSF can change its list of recommended copylefts at will). If the contributors are serious about the AGPL, then Project Harmony cannot assure their changes stay AGPL'd. Furthermore, contributors must trust the FSF for perpetuity, even more than already needed in the -or-later options in the existing FSF-authored licenses. I'm all for trusting the FSF myself in most cases. However, because I prefer plain AGPLv3-or-later for my code, Project Harmony is completely unable to accommodate my licensing preferences to even approximate an AGPL version of inbound=outbound (even if I ignored the numerous problems already discussed).

    Meanwhile, the normal, mundane, and already widely used inbound=outbound practice is simple, effective, and doesn't mix in complicated contract disputes and control structures with the project's governance. In essence, for most FLOSS projects, the copyright license of the project serves as the Constitution of the project, and doesn't mix in any other complications. Project Harmony seeks to give warm fuzzies to lawyers at the expense of offloading liability, annoyance, and extra hoop-jumping onto developers.

    Linux Hackers Ingeniously Trailblazed inbound=outbound

    Almost exactly 10 years ago today, I recall distinctly attending the USENIX 2001 Linux BoF session. At that session, Ted Ts'o and I had a rather lively debate; I claimed that FSF's ©AA assured legal certainty of the GNU codebase, but that Linux had no such assurance. (BTW, even I was confused in those days and thought all GNU packages required FSF's ©AA.) Ted explained, in his usual clear and bright manner, that such heavy-handed methods shouldn't be needed to give legal certainty to the GPL and that the Linux community wanted to find an alternative.

    I walked away skeptically shaking my head. I remember thinking: Ted just doesn't get it. But I was wrong; he did get it. In fact, many of the core Linux developers did. Three years to the month after that public conversation with Ted, the Developer's Certificate of Origin (DCO) became the official required way to handle the “CLA issue” for Linux and it remains the policy of Linux today. (See item 12 in Linux's Documentation/SubmittingPatches file.)

    The DCO, in fact, is the only CLA any FLOSS project ever needs! It implements inbound=outbound in a simple and straightforward way, without giving special powers over to any particular company or entity. Developers keep their own copyright and they unilaterally attest to their right to contribute and the license of the contribution. (Developers can even sign a ©AA with some other entity, such as the FSF, if they wish.) The DCO also gives a simple methodology (i.e., the Signed-off-by: tag) for developers to so attest.
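
    For the curious, here's a rough sketch of what that looks like in practice (the commit message, name, and email below are made up; git's -s option adds the attestation line from your configured identity):

                    # Committing with git's -s (--signoff) option appends the DCO
                    # attestation to the end of the commit message.
                    git commit -s -m "Fix NULL pointer dereference in foo driver"

                    # The resulting commit message then ends with a line such as:
                    #
                    #   Signed-off-by: Jane Hacker <jane@example.org>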

    I admit that I once scoffed at the (what I then considered naïve) simplicity of the DCO when compared to FSF's ©AA. Yet, I've been since convinced that the Linux DCO clearly accomplishes the primary job and simultaneously fits how most developers like to work. ©AA's have their place, particularly when the developers find a trusted organization that aligns with their personal moral code and will enforce copyleft for them. However, for CLAs, the Linux DCO gets the important job done and tosses aside the pointless and pro-corporate stuff.

    Frankly, if I have to choose between making things easy for developers and making them easy for corporate lawyers, I'm going to choose the former every time: developers actually write the code; while, most of the time, companies' legal departments just get in our way. The FLOSS community needs just enough CYA stuff to get by; the DCO shows what's actually necessary, as opposed to what corporate attorneys wish they could get developers to do.

    What about Relicensing?

    Admittedly, Linux's DCO does not allow for wholesale relicensing of the code by some single entity; it's indeed the reason a Linux switch to GPLv3 would be an arduous task, requiring public processes to ensure permission to make the change. However, it's important to note that the Linux culture believes in GPLv2-only as a moral foundation and principle of their community. It's not a principle I espouse; most of my readers know that my preferred software license is AGPLv3-or-later. However, that's the point here: inbound=outbound is the way a FLOSS community implements their morality; Project Harmony seeks to remove community license decision-making from most projects.

    Meanwhile, I'm all for the “-or-later” brand of relicensing permission; GPL, LGPL and AGPL have left this as an option for community choice since GPLv1 was published in the late 1980s. Projects declare themselves GPLv2-or-later or LGPLv3-or-later, or even (GPLv1-or-later|Artistic) (ala Perl 5) to identify their culture and relicensing permissions. While it would sometimes be nice to have a broad post-hoc relicensing authority, the price for that is steep: abandonment of community clarity regarding what terms define their software development culture.
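
    As a concrete illustration (this is simply the standard GNU permission notice, shown here as shell-style comments in a hypothetical source file), the -or-later choice is expressed directly in each file's license header:

                    # This program is free software; you can redistribute it and/or modify
                    # it under the terms of the GNU General Public License as published by
                    # the Free Software Foundation; either version 2 of the License, or
                    # (at your option) any later version.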

    An Anti-Strong-Copyleft Bias?

    Even worse, Project Harmony remains biased against some of the more fine-grained versions of copyleft culture. For example, Allison Randal, who is heavily involved with Project Harmony, argued on Linux Outlaws Episode 204 that Most developers who contribute under a copyleft license — they'd be happy with any copyleft license — AGPL, GPL, LGPL. Yet there are well-stated reasons why developers might pick GPL rather than LGPL. Thus, giving a for-profit company (or non-profit that doesn't necessarily share the developers' values) unilateral decision-making power to relicense GPL'd works under LGPL or other weak copyleft licenses is ludicrous.

    In its 1.0 release, Project Harmony attempted to add a “strong copyleft only” option. It doesn't actually work, of course, for the various reasons discussed in detail above. But even so, this solution is just one option among many, and is not required as a default when a project is otherwise copylefted.

    Finally, it's important to realize that the GPLv3, AGPLv3, and LGPLv3 already offer a “proxy option”; projects can name someone to decide the -or-later question at a later time. So, for those projects that use any of the set { LGPLv3-only, AGPLv3-only, GPLv3-only, GPLv2-or-later, GPLv1-or-later, or LGPLv2.1-or-later }, the developers already have mechanisms to move to later versions of the license with ease — by specifying a proxy. There is no need for a CLA to accomplish that task in the GPL family of licenses, unless the goal is to erode stronger copylefts into weaker copylefts.

    This is No Creative Commons, But Even If It Were, Is It Worth Emulation?

    Project Harmony's proponents love to compare the project to Creative Commons, but the comparison isn't particularly apt. Furthermore, I'm not convinced the FLOSS community should emulate the CC license suite wholesale, as some of the aspects of the CC structure are problematic when imported back into FLOSS licensing.

    First of all, Larry Lessig (who is widely considered a visionary) started the CC licensing suite to bootstrap a Free Culture movement that modeled on the software freedom movement (which he spent a decade studying). However, Lessig made some moral compromises in an attempt to build a bridge to the “some rights reserved” mentality. As such, many of the CC licenses — notably those that include the non-commercial (NC) or no-derivatives (ND) terms — are considered overly restrictive of freedom and are therefore shunned by Free Culture activists and software freedom advocates alike.

    Over nearly a decade, such advocates have slowly begun to convince copyright holders to avoid CC's NC and ND options, but CC's own continued promulgation of those options lends them undue legitimacy. Thus, CC and Project Harmony make the same mistake: they act amorally in an attempt to build a structure of licenses/agreements that tries to bridge a gulf in understanding between a FaiF community and those only barely dipping their toe in that community. I chose the word amoral, as I often do, to note a situation where important moral principles exist, but the primary actors involved seek to remove morality from the considerations under the guise of leaving decision-making to the “magic of the marketplace”. Project Harmony is repeating the mistake of the CC license suite that the Free Culture community has spent a decade (and counting) cleaning up.

    Conclusions

    Please note that IANAL and TINLA. I'm just a community- and individual-developer-focused software freedom policy wonk who has some grave concerns about how these Project Harmony Agreements operate. I can't give you a fine-grained legal analysis, because I'm frankly only an amateur when it comes to the law, but I am an expert in software freedom project policy. In that vein — corporate attorney endorsements notwithstanding — my opinion is that Project Harmony should be abandoned entirely.

    In fact, the distinction between policy and legal expertise actually shows the root of the problem with Project Harmony. It's a system of documents designed by a committee primarily comprised of corporate attorneys, yet it's offered up as if it's a FLOSS developer consensus. Indeed, Project Harmony itself was initiated by Amanda Brock, a for-profit corporate attorney for Canonical, Ltd., who remains involved in its drafting. Canonical, Ltd. later hired Mark Radcliffe (a big law firm attorney, who has defended GPL violators) to draft the alpha revisions of the document, and Radcliffe remains involved in the process. Furthermore, the primary drafting process was done secretly in closed meetings dominated by corporate attorneys until the documents were almost complete; the process was not made publicly open to the FLOSS community until April 2011. The 1.0 documents differ little from the drafts that were released in April 2011, and thus remain to this day primarily documents drafted in secrecy by corporate attorneys who have only a passing familiarity with software freedom culture.

    Meanwhile, I've asked Project Harmony's advocates many times who is in charge of Project Harmony now, and no one can give me a straight answer. One is left to wonder who decides on final draft approval and what process exists to accept or reject text for the drafts. The process, which was once conducted in secrecy, now appears to be in chaos, because it was opened up too late for fundamental problems to be resolved.

    A few developers are indeed actively involved in Project Harmony. But Project Harmony is not something that most developers requested; it was initiated by companies who would like to convince developers to passively adopt overreaching CLAs and ©AAs. To me, the whole Project Harmony process feels like a war of attrition to convince developers to accept something that they don't necessarily want with minimal dissent. In short, the need for Project Harmony has not been fully articulated to developers.

    Finally, I ask, what's really broken here? The industry has been steadily and widely adopting GNU and Linux for years. GNU, for its part, has FSF assignments in place for many of its earlier projects, but the later projects (GNOME, in particular) have either been against both ©AA's and CLA's entirely, or are mostly indifferent to them and use inbound=outbound. Linux, for its part, uses the DCO, which does the job of handling the urgent and important parts of a CLA without getting in developers' way and without otherwise forcing extra liabilities onto the developers and handing over important licensing decisions (including copyleft-weakening ones) to a single (usually for-profit) entity.

    In short, Project Harmony is a design-flawed solution looking for a problem.

    Further Reading


    0Project Harmony advocates will likely claim that their § 5, “Consequential Damage Waiver”, protects developers adequately. I note that it explicitly leaves out, for example, statutory damages for copyright infringement. Also, some types of damages cannot be waived (which is why that section shouts at the reader TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW). Note my discussion of jurisdictions in the main text of this article, and consider the fact that the CLA recipient will obviously select a jurisdiction where the fewest possible damages can be waived. Finally, note that the OR US part of that § 5 is optionally available, and surely corporate attorneys will use it, which means that if they violate the agreement, there's basically no way for you to get any damages from them, even if they promise to keep the code copylefted and then fail to do so.

    1Note: Earlier versions of this blog post conflated slightly “choice of venue” with “choice of law”. The wording has been cleared up to address this problem. Please comment or email me if you believe it's not adequately corrected.

    Posted on Thursday 07 July 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-07-04: Identi.ca Weekly Summary

    Identi.ca Summary, 2011-06-26 through 2011-07-04

    Posted on Monday 04 July 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

May

  • 2011-05-31: Should a Power-User Key Mapping Change Be This Difficult?

    It's been some time since X made me hate computing, but it happened again today (well, yesterday into the early hours of today, actually).

    I got the stupid idea to upgrade to squeeze from lenny yesterday. I was at work, but it was actually a holiday in the USA, and I figured it would be a good time to do some sysadmin work instead of my usual work.

    I admittedly had some things to fix that were my fault: I had backports and other mess installed, but upon removing those, the upgrade itself was more-or-less smooth. I faced only a minor problem with my MD device for /boot not starting properly, but the upgrade warned me that I needed to switch to properly using the UUIDs for my RAID arrays, and once I corrected that, all booted fine, even with GRUB2 on my old hardware.
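
    (For anyone who hits the same warning, the fix boils down to referring to the arrays and filesystems by UUID rather than by device name, roughly as sketched below; the identifiers are made up, and the real ones come from mdadm --detail and blkid.)

                    # /etc/mdadm/mdadm.conf: identify the array by its UUID rather
                    # than by a /dev/mdN name, which may change across boots.
                    ARRAY /dev/md0 UUID=0c8f2a61:9e4d7b02:3a51c6de:7f19b804

                    # /etc/fstab: likewise, mount /boot by filesystem UUID (the
                    # filesystem type and options here are just placeholders).
                    UUID=5e1d4c2a-9b7f-4f3e-8a21-6c0d9e3b7f10  /boot  ext3  defaults  0  2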

    Once I was in X, things got weird, keyboard-wise. My meta and alt keys weren't working. BTW, I separate Alt from Meta, making my actual Alt key into a meta key, while my lower control is set to an Alt (ala Mod2), since I throw away caps lock and make it a control. (This is for when I'm on the laptop keyboard rather than the HHKB.)

    I've used the same xmodmap for two decades to get this done:

                    keycode 22 = BackSpace
                    
                    clear Mod1
                    clear Mod2
                    clear Lock
                    clear Control
                    
                    keycode 66  = Control_L
                    
                    keycode 64 = Meta_L
                    keycode 113 = Meta_R
                    keycode 37 = Alt_L
                    keycode 109 = Alt_R
                    
                    add Control = Control_L
                    
                    add Mod1 = Meta_L
                    add Mod1 = Meta_R
                    
                    add Mod2 = Alt_L
                    add Mod2 = Alt_R
                    

    This just “doesn't work” in squeeze (or presumably any Xorg 7.5 system). Instead, it just gives this error message:

                    X Error of failed request:  BadValue (integer parameter out of range for operation)
                      Major opcode of failed request:  118 (X_SetModifierMapping)
                      Value in failed request:  0x17
                      Serial number of failed request:  21
                      Current serial number in output stream:  21
                    
    … and while my Control key ends up fine, it leaves me with no Mod1 or Mod2 key.

    There appear to be at least two Debian bugs (564327 and 432011), which were filed against squeeze before it was released. In retrospect, I sure wish they'd have been release-critical! (There's also an Ubuntu bug, which of course just punts to the upstream Debian bug.) There are also two further upstream bugs at freedesktop.org (20145 and 11822), although Daniel Stone thinks the main problem might be fixed upstream.

    I gather that many people “in the know” believe xmodmap to be deprecated, and that we all should have switched to xkb years ago. I even got snarky comments to that effect. (Update:) However, after I made this first post, quite angry after 8 hours of just trying to make my Alt key DTRT, I was elated to see Daniel Stone indicate that xmodmap should be backwards compatible. Almost every time I get pissed off about some Free Software not working, a developer shows up and tells me they want to fix it. This is in some ways just as valuable as the thing being fixed: knowing that the developer doesn't want the bug to be there — it means it'll be fixed eventually and only patience is required.

    However, the bigger problem really is that xkb appears to lack good documentation. If any exists, I can't find it. madduck did this useful blog post (and, later, vinc17 showed me some docs he was working on too). These are basically the only things I could find that were real help on the issue, and they were sparse. I was able to learn, after hours, that this should be the rough equivalent of my old xmodmap:

                    partial modifier_keys
                    xkb_symbols "thinkpad" {
                        replace key <CAPS>  {  [ Control_L, Control_L ] };
                        modifier_map  Control { <CAPS> };
                        replace key <LALT>  {  [ Meta_L ] };
                        modifier_map Mod1   { Meta_L, Meta_R };
                        key <LCTL> { [ Alt_L ] };
                        modifier_map Mod2 { Alt_L };
                    };
                    

    But, you can't just load that with a program! No, it must be placed in a file called /path/symbols/bkuhn, which is then loaded with an incantation like this:

                    xkb_keymap {
                            xkb_keycodes  { include "evdev+aliases(qwerty)" };
                            xkb_types     { include "complete"      };
                            xkb_compat    { include "complete"      };
                            xkb_symbols   { include "pc+us+inet(evdev)+bkuhn(thinkpad)"     };
                            xkb_geometry  { include "pc(pc105)"     };
                    };
                    

    …which, in turn, must be fed as stdin into: xkbcomp -I/path - $DISPLAY. Oh, did I mention you have to get the majority of that stuff above by running setxkbmap -print, then modify it to add the bkuhn(thinkpad) part? I'm impressed that madduck figured this all out. I mean, I know xmodmap was arcane incantations and all, but this is supposed to be clearer and better for users wanting to change key mappings? WTF!?!
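
    In other words, the whole dance looks roughly like this (the /path location and the bkuhn/thinkpad names are just the examples from above):

                    # Dump the server's current keymap configuration as a starting point.
                    setxkbmap -print > /tmp/mykeymap.xkb

                    # Edit /tmp/mykeymap.xkb so that its xkb_symbols line includes the
                    # custom section, i.e. "pc+us+inet(evdev)+bkuhn(thinkpad)", with the
                    # custom symbols themselves living in /path/symbols/bkuhn.

                    # Compile the result and load it into the running X server.
                    xkbcomp -I/path - $DISPLAY < /tmp/mykeymap.xkb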

    Oh, so, BTW, my code in /path/symbols/bkuhn didn't work. I tried every incantation I could think of, but I couldn't get it to think about Alt and Meta as separate Mod2 and Mod1 keys. I think it's actually a bug, because weird things happened when I added lines like:

                        modifier_map Mod5 { <META> };
                    
    Namely, when I added the above line to my /path/symbols/bkuhn, the Mod2 was then picked up correctly (magically!), but then both LCTL and LALT acted like a Mod2, and I still had no Mod1! Frankly, I was too desperate to get back to my 20 years of keystroke memory to try to document what was going on well enough for a coherent bug report. (Remember, I was doing all this on a laptop where my control key kept MAKING ME SHOUT INSTEAD OF DOING ITS JOB.)

    I finally got the idea to give up entirely on Mod2 and see if I could force the literal LCTL key to be a Mod3, hopefully allowing Emacs to again see my usual Mod1 Meta expectations for LALT. So, I saw what some of the code in /usr/share/X11/xkb/symbols/altwin did to handle Mod3, and I got this working (although it required a sawfish change to expect Mod3 instead of Mod2, of course, but that part was 5 seconds of search and replace). Here's what finally worked as the contents of /path/symbols/bkuhn:

                    partial modifier_keys
                    xkb_symbols "thinkpad" {
                        modifier_map  Control { <CAPS> };
                        replace key <LALT>  {  [ Meta_L ] };
                        modifier_map Mod1   { Meta_L };
                        key <LCTL> { type[Group1] = "ONE_LEVEL",
                                     symbols[Group1] = [ Super_L ] };
                        modifier_map Mod3 { Super_L };
                    };
                    

    So, is all this really less arcane than xmodmap? Were the eight hours of my life spent learning xkb somehow worth it, because now I know a better tool than xmodmap? I realize I'm a power user, but I'm not convinced that it should be this hard even for power users. It was reminiscent of the days when I had to use Eric Raymond's mode timings howto to get X working. That was actually easier than this!
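
    (Just to illustrate the contrast, here's a purely hypothetical sketch of the CapsLock-to-Control piece in xmodmap terms; this is not my actual old modmap, and the Alt/Meta and LCTL swaps would each need keycode-specific lines of their own, but it gives a flavor of how short the old incantations were:)

                    # hypothetical sketch, not my actual modmap: make CapsLock act as Control
                    xmodmap -e 'remove lock = Caps_Lock' \
                            -e 'keysym Caps_Lock = Control_L' \
                            -e 'add control = Control_L'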

    Even though spot claimed this is somehow Debian's fault, I don't believe him. I bet I would run into the same problem on any system using Xorg 7.5. There are clearly known bugs in xmodmap, and I think there is probably a subtle bug I uncovered that exists in xkb, but I am not sure I can coherently report it without revisiting this horrible computing evening again. Clearly, that first thing I tried should not have made two keys act as Mod2, and only when I moved META into Mod5, right?

    BTW, if you're looking for me online tomorrow early, you hopefully know where I am. I'm going to bed two hours before my usual waketime. Ugh. (Update: tekk later typo'ed xmodmap as ‘xmodnap’ on identi.ca. Quite fitting; after working on that all night, I surely needed an xmodnap!)

    Update on 2013-04-03: I want to note that the X11 and now Wayland developer named Daniel Stone took an interest in this bug and actually followed up with me two years later to give me a report. It is apparently really hard to fix without a lot of effort, and I've switched to xkb (which I think is even more arcane), but it mostly works, except when I'm in Xnest. But my main point is that Daniel stuck with the problem and, while he didn't get a resolution, he kept me posted. That's a dedicated Free Software developer; I'm just a random user, after all!

    Posted on Tuesday 31 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-26: Choosing A License

    Brett Smith of the FSF has announced a new tutorial available on the GNU website that gives advice about picking a license for your project.

    I'm glad that Brett wrote this tutorial. My typical answer when someone asks me which license to choose is to say: Use AGPLv3-or-later unless you can think of a good reason not to. That's a glib answer that is rarely helpful to the questioner. Brett's article is much better and more useful.

    For me, the particularly interesting outcome of the tutorial is how it finishes the turbulent trajectory of the FSF's relationship with Apache's license. Initially, there was substantial acrimony between the Apache Software Foundation and the FSF because version 2.0 of the Apache License is incompatible with the GPLv2, a point on which the Apache Software Foundation has long disagreed with the FSF. You can even find cases where I was opining in the press about this back when I was Executive Director of the FSF.

    An important component of GPLv3 drafting was to reach out and mend relationships with other useful software freedom licenses that had been drafted in the time since GPLv2 was released. Brett's article published yesterday shows the culmination of that fence-mending: Apache-2.0 is now not only compatible with the GPLv3 and AGPLv3, but also the FSF's recommended permissive license!

    Posted on Thursday 26 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-19: Clarification on Android, its (Lack of) Copyleft-ness, and GPL Enforcement

    I'm grateful to Brian Proffitt for clarifying some of these confusions about Android licensing. In particular, I'm glad I'm not the only one who has cleared up the confusions that Edward J. Naughton keeps spreading regarding the GPL.

    I noted that Naughton even commented on Proffitt's article; the comment spreads even more confusion about the GPL. In particular, Naughton claims that most BusyBox GPL violations are on unmodified versions of BusyBox. That's just absolutely false, if for no other reason than that a binary is a modified version of the source code in the first place, and nearly all BusyBox GPL violations involve a binary-only version distributed without any source (nor an offer therefor).

    Mixed in with Naughton's constant confusions about what the GPL and LGPL actually require, he does have a possibly valid point lurking: there are a few components in Android/Linux that are under copyleft licenses, namely Linux (GPL) and Webkit (LGPL). Yet, in all of Naughton's screeching about this issue, I haven't seen any clear GPL or LGPL violation reports — all I see is speculation about what may or may not be a violation without any actual facts presented.

    I'm pretty sure that I've spent more time reading and assessing the veracity of GPL violation reports than anyone on the planet. I don't talk about this part of it much: but there are, in fact, a lot of false alarms. I get emails every week from users who are confused about what the GPL and LGPL actually require, and I typically must send them back to collect more details before I can say with any certainty a GPL or LGPL violation has occurred.

    Of course, as a software freedom advocate, I'm deeply dismayed that Google, Motorola and others haven't seen fit to share a lot of the Android code in a meaningful way with the community; failure to share software is an affront to what the software freedom movement seeks to accomplish. However, every reliable report that I've seen indicates that there are no GPL nor LGPL violations present. Of course, if someone has evidence to the contrary, they should send it to those of us who do GPL enforcement. Meanwhile, despite Naughton's public claims that there are GPL and LGPL violations occurring, I've received no contact from him. Don't you think that if he were really worried about getting a GPL or LGPL violation resolved, he'd contact the guy in the world most known for doing GPL enforcement (i.e., me) and see if I could help?

    Of course, Naughton hasn't contacted me because he isn't really interested in software freedom. He's interested in getting press for himself, and writing vague reports about Android copyrights and licensing is a way to get lots of press. I put out now a public call to anyone who believes they haven't received source code that they were required to get under GPL or LGPL to get in touch with me and I'll try to help, or at the very least put you in touch with a copyright holder who can help do some enforcement with you. I don't, however, expect to see a message in my inbox from Naughton any time soon, nor do I expect him to actually write about the wide-spread GPL violations related to Android/Linux that Matthew Garrett has been finding. Garrett's findings are the real story about Android/Linux compliance, but it's presumably not headline-getting enough for Naughton to even care.

    Finally, Naughton is a lawyer. He has the skills at hand to actually help resolve GPL violations. If he really cared about GPL violations, he'd offer his pro bono help to copyright holders to assist in the overwhelming onslaught of GPL violations. I've written and spoken frequently about how I and others who enforce the GPL are really lacking in talented person-power to do more enforcement. Yet, again, I haven't received an offer from Naughton or these other lawyers who are opining about GPL non-compliance to help me get some actual GPL compliance done. I await their offers, but I'm certainly not expecting they'll be forthcoming.

    (BTW, you'll notice that I don't link to Naughton's actual article myself; I don't want to give him any more linkage than he's already gotten. I'm pretty aghast at the Huffington Post for giving a far-reaching soapbox to such shoddy commentary, but I suppose that I shouldn't expect better from a company owned by AOL.)

    Posted on Thursday 19 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-18: Germany Trip: Samba XP Keynote and LinuxTag Keynote

    I just returned a few days ago to the USA after one week in Germany. I visited Göttingen for my keynote at Samba XP (which I already blogged about). Attending Samba XP was an excellent experience, and I thank SerNet for sponsoring my trip there. Since going full-time at Conservancy last year, I have been trying to visit the conferences of each of Conservancy's member projects. It will probably take me years to do this, but given that Samba is one of Conservancy's charter members, it's good that I have finally visited Samba's annual conference. It was even better that they asked me to give a keynote talk at Samba XP.

    I must admit that I didn't follow the details of many of the talks other than Tridge's Samba 4 Status Report talk and Jeremy's The Death of File Protocols. This time I really mean it! talk. The rest, unsurprisingly, were highly specific and detailed about Samba, and since I haven't been a regular Samba user myself since 1996, I didn't have the background information required to grok the talks fully. But I did see a lot of excited developers, and it was absolutely wonderful to meet the entire Samba Team for the first time after exchanging email with them for so many years.

    It's funny to see how different communities tend to standardize around the same kinds of practices with minor tweaks. Having visited a lot of project-specific conferences for Conservancy's members, I'm seeing how each community does their conference, and one key thing all projects have in common is the same final conference session: a panel discussion with all the core developers.

    The Samba Team has their own little tweak on this. First, John Terpstra asks all speakers at the conference (which included me this year) to join the Samba Team and stand up in front of the audience. Then, the audience can ask any final questions of all speakers (this year, the attendees had none). Then, the Samba Team stands up in front of the crowd and takes questions.

    The Samba tweak on this model is that the Samba Team is not permitted to sit down during the Q&A. This year, it didn't last that long, but it was still rather amusing. I've never seen a developers' panel before where the developers couldn't sit down!

    After Samba XP, I headed “back” to Berlin (my flight had landed there on Saturday and I'd taken the Deutsche Bahn ICE train to Göttingen for Samba XP), and arrived just in time to attend LinuxNacht, the LinuxTag annual party. (WARNING: name dropping follows!) It was excellent to see Vincent Untz, Lennart Poettering, Michael Meeks and Stefano Zacchiroli at the party (listed in order I saw them at the party).

    The next day I attended Vincent's talk, which was about cross-distribution collaboration. It was a good talk, although, I think Vincent glossed over too much the fact that many distributions (Fedora, Ubuntu, and OpenSUSE, specifically) are controlled by companies and that cross-distribution collaboration has certain complications because of this corporate influence. I talked with Vincent in more detail about this later, and he argued that the developers at the companies in question have a lot of freedom to operate, but I maintain there are subtle (and sometimes, not so subtle) influences that cause problems for cross-distribution collaboration. I also encouraged Vincent to listen to Richard Fontana's talk, Open Source Projects and Corporate Entanglement, that Karen and I released as an episode of the FaiF oggcast.

    I also attended Martin Michlmayr's talk on SPDX. I kibitzed more than I should have from the audience, pointing out that while SPDX is a good “first start”, it's a bit of a “too little, too late” attempt to address and prevent the flood of GPL violations that are now all too common. I believe SPDX is a great tool for those who already are generally in compliance, but it isn't very likely to impact the more common violations, wherein the companies just ignore their GPL obligations. A lively debate ensued on this topic. I frankly hope to be proved wrong on this; if SPDX actually ends or reduces GPL violations, I'll be happy to work on something else instead.

    On Friday afternoon, I gave my second keynote of the week, which was an updated version of my talk, 12 Years of GPL Compliance: A Historical Perspective. It went well, although I misunderstood and thought I had a full hour slot, but only actually had a 50 minute slot, so I had to rush a bit at the end. I really do hate rushing at the end when speaking primarily to a non-native-English-speaking audience, as I know I'm capable of speaking English way too fast (a problem that I am constantly vigilant about under normal public speaking circumstances).

    The talk was nevertheless pretty well received, and afterward, I was surrounded by a gaggle of interested copyleft enthusiasts, who, as always, were asking what more can be done to enforce the GPL. My talks on enforcement always tend to elicit this reaction, since my final slides are a bit depressing with regard to the volume of GPL enforcement that's currently occurring.

    Meanwhile, I also decided I should also start putting up my slides from talks in a more accessible fashion. Since I use S5 (although I hope to switch to jQuery S5 RSN), my slides are trivially web-publishable anyway. While I've generally published the source code to my slides, it makes sense to also make compiled, quickly viewable versions of my slides on my website too. Finally, I realized I should also put my upcoming public speaking events on my frontpage and have done so.

    After a late lunch on Friday, I saw only the very end of Lennart's talk on systemd, and then I visited for a while with Claudia Rauch, Business Manager of KDE, e.V. in the KDE booth. Claudia kindly helped me practice my German a bit by speaking slowly enough that I could actually parse the words.

    I must admit I was pretty frustrated all week that my German is now so poor. I studied German for two years in High School and one semester in college. I even participated in a three-week student exchange trip to a Gymnasium (the German term for college-prep high school) in Munich in 1990. Yet, my German speaking skills are just a degraded version of what they once were.

    Meanwhile, I did rather like Berlin's Tegel airport (TXL). It's a pretty small airport, but I really like its layout. Because of its small size, each check-in area is attached to a security checkpoint, which is then directly connected to the gate. While this might seem a bit tight, it makes it very easy to check-in, go through security, and then be right at your gate. I can understand why an airport this small would have to be closed (it's slated for closure in 2012), but I am glad that I got a chance to travel to it (and probably again, for the Desktop Summit) before it closes.

    Posted on Wednesday 18 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-10: Samba XP Keynote, Jeremy's GPLv3 talk, & GPLv2/LGPLv3

    This morning, I gave the keynote talk at Samba XP. I was really honored to be invited to speak to Samba XP (the Samba Developers and Users Conference).

    My talk, entitled Samba, GPL Enforcement, and the GPLv3 was about GPL enforcement, and how it relates to the Samba project and embedded devices. I've pushed my slides to my gitorious “talks” project. That's of course just the source code of the slides. Previously, some folks have complained that they have trouble building the slides because they don't have pandoc or other such dependencies installed. (I do, BTW, believe that my Installation Information is adequate, even though the talk isn't GPLv3'd, but it does have some dependencies :). Anyway, I've put up an installed version of my Samba XP slides as well.

    Some have asked if there's a recording of the talk. I see video cameras and the like here at Samba XP, and I will try to get the audio for a future FaiF Cast.

    Speaking of FaiFCast, Karen and I timed it (mostly by luck) so that, while I'm at Samba XP, we'd release FaiF 0x0F, which includes audio from Jeremy's Linux Collaboration Summit talk about why Samba chose to switch to GPLv3. BTW, I'm sorry I didn't do show notes this week, but because of being at Samba XP the last few days, I wasn't able to write detailed show notes. However, the main thing you need is Jeremy's slides, which are linked to from the show notes section.

    Later this week, I'm giving the Friday keynote at Linux Tag, also on GPL enforcement (It's at 13:00 on Friday 2011-05-13). I hope those of you who can come to Berlin will come see my talk!

    Finally, Ivo de Decker in the audience at Samba XP asked about LGPLv3/GPLv2 incompatibility. In my answer to the question, I noted the GPL Compatibility Matrix on the GNU site. Also, regarding the specific LGPLv3 compatibility issue, I mentioned a post I made last year on the GNOME desktop-devel-list about the LGPLv3/GPLv2 issue. I promised that I'd also quote that post here in my blog, so that there was a stable URL that discussed the issue. I therefore quote the relevant parts of that email here:

    The most important point [about GPLv2-only/LGPLv3-or-later incompatibility], I'd like to make is to suggest a possible compromise. Specifically, I suggest disjunctive licensing, (GPLv2|LGPLv3-or-later), which could be implemented like this:

    This program's license gives you software freedom; you can copy, modify, convey, propagate, and/or redistribute this software under the terms of either:

    • the GNU Lesser General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
    • OR
    • the GNU General Public License, version 2 only, as published by the Free Software Foundation.

    In addition, when you convey, distribute, and/or propagate this software and/or modified versions thereof, you may also preserve this notice so that recipients of such distributions will also have both licensing options described above.

    A good moniker for this license is (GPLv2|LGPLv3-or-later). It actually gives 3+ licensing options to downstream: they can continue under the full (GPLv2|LGPLv3-or-later), or they can use GPLv2-only, or they can use LGPLv3 (or any later version of the LGPL).

    Some folks will probably note this isn't that different from LGPLv2.1-or-later. The key difference, though, is that it removes LGPLv2.1 from the mix. If you've read the LGPLv2.1 lately, you've seen that it really shows its age. LGPLv3 is a much better implementation of the weak copyleft idea. If any license needs deprecation, it's LGPLv2.1. I thus personally believe an upgrade to (GPLv2|LGPLv3-or-later) is something worth doing right away.

    I note, BTW, that existing code licensed LGPLv2.1-or-later has also already given permission to migrate to the license (GPLv2|LGPLv3-or-later). Specifically, it's permitted by LGPLv2.1 to license the work under GPLv2 if you want to. Furthermore, LGPLv2.1-or-later permits you to license LGPLv3-or-later. Therefore, LGPLv2.1-or-later can, at anyone's option, be upgraded to (GPLv2|LGPLv3-or-later).

    Note the incompatibility exists on both [GPLv2-only and LGPLv3] sides (it proverbially takes two to tango), but the incompatibility centers primarily around the strong copyleft on the GPLv2 side, not the weak copyleft on the LGPLv3 side. Specifically, GPLv2 requires that:

    You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License.
    and
    You may not impose any further restrictions on the recipients' exercise of the rights granted herein.

    This is part of the text that creates copyleft: making sure that other terms can't be imposed.

    The problem occurs in interaction with another copyleft license (even a weak one). Usually, no two copyleft implementations are isomorphic and therefore there are different requirements in the details. LGPLv3, for its part, doesn't care much about additional restrictions imposed by another license (hence its weak copyleft nature). However, from the point of view of the GPLv2-side observer, any additional requirements, even minor ones imposed by LGPLv3, are merely “further restrictions”.

    This is why copyleft licenses, when they want compatibility, have to explicitly permit relicensing (as LGPLv2 does for GPLv2/GPLv3 and as LGPLv3 does for GPLv3), by allowing you to “upgrade” to another copyleft from the current copyleft. To be clear, from the point of view of the LGPLv3 observer, it has no qualms about “upgrading” from LGPLv3 to GPLv2. The problem occurs from the GPLv2 side, specifically because the (relatively) minor things that LGPLv3 requires are written differently from the similar things asked for in GPLv2.

    It's a common misconception that LGPL has no licensing requirements whatsoever on “works that use the library” (LGPLv2) or the “Application” (LGPLv3). That's not completely true; for example, in LGPLv3 § 4+5 (and LGPLv2.1 § 6+7), you find various requirements regarding licensing of such works. Those requirements aren't strict and are actually very easy to comply with. However, from GPLv2's point of view, they are “further restrictions” since they are not written exactly in the same fashion in GPLv2.

    (BTW, note that LGPLv2.1's compatibility with GPLv2 and/or GPLv3 comes explicitly from LGPLv2.1's Section 3, which allows direct upgrade to GPLv2 or GPLv3, or to any later version published by FSF).

    I hope the above helps some to clarify the GPLv2/LGPLv3 incompatibility.

    Posted on Tuesday 10 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-03: Mono Developers Losing Jobs Isn't Good

    Both RMS and I have been critical of Mono, which is an implementation of Microsoft's C# language infrastructure for GNU/Linux systems. (Until recently, at Novell, Miguel De Icaza has led a team of developers working on Mono.)

    Most have probably heard that the Attachmate acquisition of Novell completed last week, and that reports of who will be fired because of the acquisition have begun to trickle out. This evening, it's been reported that the developers working on Mono will be among those losing their jobs.

    In the last few hours, I've seen some folks indicating that this is a good outcome. I worry that this sort of response is somehow inspired by the criticisms and concerns about Mono that software freedom advocates like myself raised. I thus seek to clarify the concerns regarding Mono, and point out why it's unfortunate that these developers won't work on Mono anymore.

    First of all, note that the concerns about Mono are that many Microsoft software patents likely read on any C# implementation, and Microsoft's so-called “patent promise” is not adequate to defend the software freedom community. Anyone who uses Mono faces software patent danger from Microsoft. This is precisely why using Mono to write new applications, targeted for GNU/Linux and other software freedom systems, should be avoided.

    Nevertheless, Mono should exist, for at least one important reason: some developers write lots and lots of new code on Microsoft systems in C#. If those developers decide they want to abandon Microsoft platforms tomorrow and switch to GNU/Linux, we don't want them to change their minds and decide to stay with Microsoft merely because GNU/Linux lacks a C# implementation. Obviously, I'd support convincing those developers to learn another language system so they won't write more code in C#, but initially, the lack of a Free Software C# implementation might impede their switch to Free Software.

    This is a really subtle point that has been lost in the anti-Mono rhetoric. I am not aware of any software freedom advocate who wants Mono to cease to exist. The problem that I and others point out is this: it's dangerous to write new code that relies on technology that's likely patented by Microsoft — a company that's known to shake down or even sue Free-Software-using companies over patents. But the value of Mono (while much more limited than its strongest proponents claim) is still apparent and real: it has a good chance to entice developers living in a purely Microsoft environment to switch to a software freedom environment. It was therefore valuable that Novell was funding developers to work on Mono; it's a bad outcome for software freedom that those developers will lose their jobs. Finally, while perhaps some of those developers might get jobs working on more urgent Free Software tasks, many will likely end up in jobs doing proprietary software development. And developers switching from Free Software work to proprietary software work is surely always a loss for software freedom.

    Update (2011-05-04): ciarang pointed out to me that Mono for Android is proprietary software. As such, it's certainly better if no one is working on that proprietary project anymore. However, I would make an educated guess that most of the employed Mono developers at Novell were working on the Free Software components, so the above analysis in the main blog post still likely applies in most cases.

    Posted on Tuesday 03 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2011-04-29: Hopefully My Voice Will Hold Out

    Those of you that follow me on identi.ca already know that I caught a rhinovirus, and was very sick while at the 2011 Linux Collaboration Summit (LCS). Unfortunately, the illness got worse since I “worked through” it while at LCS, and I was too sick to work the entire week afterward (the week of 2011-04-11).

    I realized thereafter that, before the conference, I forgot to even mention online that I was speaking and chairing the legal track at LCS. I can't blame that on the illness, since I should have noted it on my blog the week before.

    So, just barely, I'm posting ahead of time about my appearances this weekend at LinuxFest Northwest (LFNW). I have been asked to give four (!) talks in two days; and unfortunately three are scheduled almost right in a row in one day (I begged the organizers to fix it so I was giving two each day, but they'd already locked in the schedule, and even though I told them within hours of the schedule going up, they weren't able to change it.)

    It's a rather amusing story how I ended up giving four talks. Most of you that go to many conferences (and particularly those that speak at them) know that the hardest part of speaking is preparing a new talk. I learned in graduate school that you must practice talks to keep the quality high, and if a talk is new, I usually try to practice twice. That's a pretty large time investment, not to mention the research that has to go into a talk.

    So, what I typically do is have between three and five talks that are “active” on my playlist. I'll keep a talk in rotation for about ten to eighteen months and then discontinue it (unless there's at least 40% new material that I can cycle in, which I sort of consider more-or-less a new talk).

    Often, I'll submit up to four active talks to a given conference. I do this for a couple of reasons. The first and foremost reason is to give choice to the program chairs. If I'm prepared to speak on an array of topics, I'd rather offer up what I can to the chairs so that they can pick the best fit for the track they wish to construct. The second reason, quite frankly, is for when I really want to go to a conference. My employer only funds my travel if I am speaking at a conference, so sometimes, if I really want to go, I have to increase my odds as much as possible that a talk will be accepted. Multiple submissions usually help in this regard (although I can imagine it may hurt one's chances in some rare cases).

    Now, something happened with LFNW that's never happened to me before: the organizers accepted three of my four talk submissions, and wait-listed one of them! I wrote to them immediately telling them I was honored they wanted so many of my talks, and that I was of course happy to give all of them if they really wanted me to. Then, I happened to be working on my talks last weekend when the LFNW organizers were updating the schedule, and suddenly, I reloaded the page and saw they'd added the fourth talk as well!

    So, in the next two days, I'm giving four talks at LFNW! Most of them are talks I've given before (or at least, given substantially similar talks), so I am not worried about preparation (although I may have to skip any social events on Saturday night to practice the three-in-a-row for Sunday). What I'm worried about is that my voice has just recovered in the last few days from that long-lasting illness, and I am a bit afraid it won't hold out through all four. So, if you're at LFNW and notice I'm more quiet than usual in the hallway conversations (I'm not known for my silence, after all ;), it's because I'm saving my voice for my talks!

    Anyway, here's the rundown of my LFNW talks:

    If you're not able to attend LFNW, I'll try to live-dent as much as I can (when I'm not speaking, which will actually be almost half the conference ;). Watch my identi.ca stream for the #lfnw tag. In particular, I'm really looking forward to Tom “spot” Callaway's talk. I really want to understand his reasoning for not signing the Chromium CLA, since, as Fontana suggests, it might illuminate the reasoning why developers might oppose CLAs for permissively licensed projects.

    By way of previews of what conferences I'll be at soon (I'll try to blog more fully about them a week before they start), I'll be giving keynotes at both Samba XP and LinuxTag in a few weeks (both about GPL compliance). I'll also be speaking about GPL compliance at OSCON in late July, and I might be on a panel at the Desktop Summit. I hope to see many of you at one of these events.

    I should also apologize to the excellent folks who run RMLL (aka the Libre Software Meeting) in France each year. When I came back so ill from LCS and lost that whole week of work because of it, I took a hard look at my 2011 travel schedule and I just had to cut something. I'm sorry it had to be RMLL, but I hope to make it up to them in a future year. (I actually had to do something similar to the LFNW guys in 2010, which I'm about to make up for this weekend!)

    Posted on Friday 29 April 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2011-03-18: Questioning The Original Analysis On The Bionic Debate

    I was hoping to avoid having to comment further on this problematic story. I figured a comment as a brief identi.ca statement was enough when it was just a story on the Register. But, it's now hit a major tech news outlet, and I feel that, given that I'm typically the first person everyone in the Free Software world comes to ask if something is a GPL violation, I'm going to get asked about this soon, so I might as well preempt the questions with a blog post, so I can answer any questions about it with this URL.

    In short, the question is: Does Bionic (the Android/Linux default C library developed by Google) violate the GPL by importing “scrubbed” headers from Linux? For those of you seeking the TL;DR version: You can stop now if you expect me to answer this question; I'm not going to. I'm just going to show that the apparent original analysis material that started this brouhaha is a speculative hypothesis which would require much more research to amount to anything of note.

    Indeed, the kind of work needed to answer these questions typically requires the painstaking work of a talented developer working very closely with legal counsel. I've done analysis like this before for other projects. The only one I can easily talk about publicly is the ath5k situation. (If you want to hear more on that, you can listen to an old oggcast where I discussed this with Karen Sandler or read papers that were written on the subject back where I used to work.)

    Anyway, most of what's been written about this subject of the Linux headers in Bionic has been poorly drafted speculation. I suppose some will say this blog post is no better, since I am not answering any questions, but my primary goal here is to draw attention that absolutely no one, as near as I can tell, has done the incredibly time consuming work to figure out anything approaching a definitive answer! Furthermore, the original article that launched this debate (Naughton's paper, The Bionic Library: Did Google Work Around the GPL?) is merely a position paper for a research project yet to be done.

    Naughton's full paper gives some examples that would make a good starting point for a complete analysis. It's disturbing, however, that his paper is presented as if it's a complete analysis. At best, his paper is a position statement of a hypothesis that then needs the actual experiment to figure things out. That rigorous research (as I keep reiterating) is still undone.

    To his credit, Naughton does admit that only the kind of analysis I'm talking about would yield a definitive answer. You have to get almost all the way through his paper to get to:

    Determining copyrightability is thus a fact-specific, case-by-case exercise. … Certainly, sorting out what is and isn’t subject to GPLv2 in Bionic would require at least a file-by-file, and most likely line-by-line, analysis of Bionic — a daunting task[.]
    Of course, in that statement, Naughton makes the mistake of subtly including an assumption in the hypothesis: he fails to acknowledge clearly that it's entirely possible the set of GPLv2-covered work found in Bionic could be the empty set; he hasn't shown it's not the empty set (even notwithstanding his very cursory analysis of a few files).

    Yet, even though Naughton admits full analysis (that he hasn't done) is necessary, he nevertheless later makes sweeping conclusions:

    The 750 Linux kernel header files … define a complex overarching structure, an application programming interface, that is thoughtfully and cleverly designed, and almost assuredly protected by copyright.
    Again, this is a hypothesis that would have to be tested and proved with evidence generated by the careful line-by-line analysis Naughton himself admits is necessary. Yet, he doesn't acknowledge that fact in his conclusions, leaving his readers (and IMO he's expecting to dupe lots of readers unsophisticated on these issues) with the impression he's shown something he hasn't. For example, one of my first questions would be whether or not Bionic uses only parts of Linux headers that are required by specification to write POSIX programs, a question that Naughton doesn't even consider.

    Finally, Naughton moves from the merely shoddy analysis to completely alarmist speculation with:

    But if Google is right, if it has succeeded in removing all copyrightable material from the Linux kernel headers, then it has unlocked the Linux kernel from the restrictions of GPLv2. Google can now use the “clean” Bionic headers to create a non-GPL’d fork of the Linux kernel, one that can be extended under proprietary license terms. Even if Google does not do this itself, it has enabled others to do so. It also has provided a useful roadmap for those who might want to do the same thing with other GPLv2-licensed [sic] programs, such as databases.

    If it turns out that Google has succeeded in making sure that the GPLv2 does not apply to Bionic, then Google's success is substantially more narrow. The success would be merely the extraction of the non-copyrightable facts that any C library needs to know about Linux to make a binary run when Linux happens to be the kernel underneath. Now, it should be duly noted that there already exist two libraries under the LGPL that have implemented that (namely, glibc and uClibc — the latter of which Naughton's cursory research apparently didn't even turn up). As it stands, anyone who wants to write user-space applications on a Linux-based system already can; there are multiple C library choices available under the weak copyleft license, LGPL. What Google, for its part, believes it has succeeded at is making a permissively licensed third alternative, which is an outcome that would be no surprise to those of us who have seen something like it done twice before.

    In short, everyone opining here seems to be conflating a lot of issues. There are many ways to interface with Linux. Many people, including me, believe quite strongly that there is no way to make a subprogram in kernel space (such as a device driver) without the terms of the GPLv2 applying to it. But writing a device driver is a specialized task that's very different from what most Linux users do. Most developers who “use Linux” — by which they typically mean write a user space program that runs on a GNU/Linux operating system — have (at most) weak copyleft (LGPL) terms to follow due to glibc or uClibc. I admit that I sometimes feel chagrin that proprietary applications can be written for GNU/Linux (and other Linux-based) systems, but that was a strategic decision that RMS made (correctly) at the start of the GNU project, and one that the Linux project, for its part, has also always sought.

    I'm quite sure no one — including hard-core copyleft advocates like me — expects or seeks the GPLv2 terms to apply to programs that interface with Linux solely as user-space programs running on an operating system that uses Linux as its kernel. Thus, I'd guess that even if it turned out that Google made some mistakes in this regard for Bionic, we'd all work together to rectify those mistakes so that the outcome everyone intended could occur.

    Moreover, to compare the specifics of this situation to other types of so-called “copyleft circumvention techniques” is just link-baiting that borders on trolling. Google wasn't seeking to circumvent the GPL at all; they were seeking to write and/or adapt a permissively licensed library that replaced an LGPL'd one. I'm of course against that task on principle (I think Google should have just used glibc and/or uClibc and required LGPL-compliance by applications). But, to deny that it's possible to rewrite a C library for Linux under a license that isn't GPLv2 would also imply immediately the (incorrect) conclusion that uClibc and glibc are covered by the GPLv2, and we are all quite sure they aren't; even Naughton himself admits that (regarding glibc).

    Google may have erred; no one actually knows for sure at this time. But the task they sought to do has been done before and everyone intended it to be permitted. The worst mistake of which we might ultimately accuse Google is inadvertently taking a copyright-infringing short-cut. If someone actually does all the research to prove that Google did so, I'd easily offer a 1,000-to-1 bet to anyone that such a copyright infringement could be cleared up easily, that Bionic would still work as a permissively licensed C library for Linux, and the implications of the whole thing wouldn't go beyond: “It's possible to write your own C library for Linux that isn't covered by the GPLv2” — a fact which we've all known for a decade and a half anyway.

    Update (2011-03-20): Many people, including slashdot, have been linking to this comment by RMS on LKML about .h files. It's important to look carefully at what RMS is saying. Specifically, RMS says that sometimes #include'ing a .h file creates a copyright derivative work, and sometimes it doesn't; it depends on the details. Then, RMS goes on to talk about some rules of thumb that can help determine the outcome of the question. The details are what matters; and those are, as I explain in the main post above, what requires careful analysis done jointly and in close collaboration between a developer and a lawyer. There is no general rule of thumb that always immediately leads one to the right answer on this question.

    Posted on Friday 18 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-03-11: Thoughts On GPL Compliance of Red Hat's Linux Distribution

    Today, I was interviewed by Sam Varghese about whether Red Hat's current distribution policies for the kernel named Linux are GPL-compliant. You can read there that, AFAICT, they are; I have been presented with no evidence to the contrary.

    Last week, when the original story broke, I happened to be at the Linux Foundation's End User Summit, and I had a rather extensive discussion with attendees there about this issue, including Jon Corbet, who wrote an article about it. In my mind, the issue was settled after that discussion, and I had actually put it out of my mind, until I realized (when Varghese contacted me for an interview) that people had conflated my previous blog post from last weekend as being a comment specifically on the kernel distribution issue. (I'd been otherwise busy this week, and thus hadn't yet seen Jake Edge's follow-up article on LWN (to which I respond in detail below).)

    (BTW, on this issue please note that my analysis below is purely a GPLv2 analysis. GPLv3 analysis may be slightly different here, but since, for the moment, the issue relates to the kernel named Linux which is currently licensed GPLv2-only, discussing GPLv3 in this context is a bit off-topic.)

    Preferred Form For Modification

    I have been a bit amazed to watch that so much debate on this has happened around the words of preferred form of the work for making modifications to it from GPLv2§3. In particular, I can't help chuckling at the esoteric level to which many people believe they can read these words. I laugh to myself and think: not a one of these people commenting on this has ever tried in their life to actually enforce the GPL.

    To be a bit less sardonic, I agree with those who are saying that the preferred form of modification should be the exact organization of the bytes as we would all like to have them to make our further work on the software as easy as possible. But I always look at the GPL with an enforcer's eye, and have to say this wish is one that won't be fulfilled all the time.

    The way preferred form for modification ends up working out in GPLv2 enforcement is something more like: you must provide complete sources that a sufficiently skilled software developer can actually make use of without any reverse engineering. Thus, it does clearly prohibit things like the source on a cuneiform tablet that Branden mentions. (BTW, I wonder if Branden knows we GPL geeks started using that as an example circa 2001.) GPLv2 also certainly prohibits source obfuscation tools that Jake Edge mentions. But, suppose you give me a nice .tar.bz2 file with all the sources organized neatly in mundane ASCII files, which I can open up with tar xvf, cd in, type make and get a binary out of those sources that's functional and feature-equivalent to your binaries, and then I can type make install and that binary is put into the right place on the device where your binary runs. I reboot the device, and I'm up and running with my newly compiled version rather than the binary you gave me. I'd call that scenario easily GPLv2 compliant.
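
    (Concretely, the sort of smoke test I have in mind looks something like the following; the file and directory names are of course hypothetical:)

                    # hypothetical smoke test for "complete sources": unpack, build, install,
                    # and boot the result, with no reverse engineering required along the way
                    tar xvf corresponding-source.tar.bz2
                    cd corresponding-source
                    make
                    make install
                    reboot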

    Specifically, ease of upstream contribution has almost nothing to do with GPL compliance. Whether you get some software in a form the upstream likes (or can easily use) is more or less irrelevant to the letter of the license. The compliance question always is: did their distribution meet the terms required by the GPL?

    Now, I'm talking above about the letter of the license. The spirit of the license is something different. GPL exists (in part) to promote collaboration, and if you make it difficult for those receiving your distributions to easily share and improve the work with a larger community, it's still a fail (in a moral sense), but not a failure to comply with the GPL. It's a failure to treat the community well. Frankly, no software license can effectively prevent annoying and uncooperative behavior from those who seek to only follow the exact letter of the rules.

    Prominent Notices of Changes

    Meanwhile, what people are actually complaining about is that Red Hat's RHEL customers have access to better meta-information about why various patches were applied. Some have argued (quite reasonably) that this information is required under GPLv2§2(a), but usually that section has been interpreted to allow a very terse changelog. Corbet's original article mentioned that the Red Hat distribution of the kernel named Linux contains no changelog. I see why he said that, because it took me some time to find it myself (and an earlier version of this very blog post was therefore incorrect on that point), but the src.rpm file does have what appears to be a changelog embedded in the kernel.spec file. There's also a simple summary in the release notes found in a separate src.rpm (in the file called kernel.xml). This material seems sufficient to me to meet letter-of-the-license compliance with the GPLv2§2(a) requirements. I, too, wish the log were a bit more readable and organized, but, again, the debate isn't about whether there's optimal community cooperation going on, but rather whether this distribution complies with the GPL.

    Relating This to the RHEL Model

    My previous blog post, while it was focused on answering the question of whether or not Fedora is somehow inappropriately exploited (via, say, proprietary relicensing) to build the RHEL business model, also addressed the issue of whether RHEL's business model is GPL-compliant. I didn't think about that blog post in connection with the distribution of the kernel named Linux issue, but even considering that now, I still have no reason to believe RHEL's business model is non-compliant. (I continue to believe it's unfriendly, of course.)

    Varghese directly asked me if I felt the if you exercise GPL rights, then your money's no good here business model is an additional restriction under GPLv2. I don't think it is, and said so. Meanwhile, I was a bit troubled by the conclusions Jake Edge came to regarding this. First of all, I haven't forgotten about Sveasoft (geez, who could?), but that situation came up years after the RHEL business model started, so Jake's implication that Sveasoft “tried this model first” would be wrong even if Sveasoft had an identical business model.

    However, the bigger difficulty in trying to use the Sveasoft scenario as precedent (as Jake hints we should) is not only because of the “link rot” Jake referenced, but also because Sveasoft frequently modified their business model over a period of years. There's no way to coherently use them as an example for anything but erratic behavior.

    The RHEL model, by contrast, AFAICT, has been consistent for nearly a decade. (It was once called the “Red Hat Advanced Server”, but the business model seems to be the same). Notwithstanding Red Hat employees themselves, I've never talked to anyone who particularly likes the RHEL business model or thinks it is community-friendly, but I've also never received a report from someone that showed a GPL violation there. Even the “report” that first made me aware of the RHEL model, wherein someone told me: I hired a guy to call Red Hat for service all day every day for eight hours a day and those jerks at Red Hat said they were going to cancel my contract didn't sound like a GPL violation to me. I'd cancel the guy's contract, too, if his employee was calling me for eight hours a day straight!

    More importantly, though, I'm troubled that Jake indicates the RHEL model requires people to trade their GPL rights for service, because I don't think that's accurate. He goes further to say that terminat[ing] … support contract for users that run their own kernel … is another restriction on exercising GPL rights; that's very inaccurate. Refusing to support software that users have modified is completely different from restricting their right to modify. Given that the GPL was designed by a software developer (RMS), I find it particularly unlikely that he would have intended the GPL to require distributors to provide support for any conceivable modification. What software developer wants a license that puts that obligation hanging over their head?

    The likely confusion here is using the word “restriction” instead of “consequence”. It's undeniable that your support contractors may throw up their hands in disgust and quit if you modify the software in some strange way and still expect support. It might even be legitimately called a consequence of choosing to modify your software. But, you weren't restricted from making those modifications — far from it.

    As I've written about before, I think most work should always be paid by the hour anyway, which is for me somewhat a matter of personal principle. I therefore always remain skeptical of any software business model that isn't structured around the idea of a group of people getting paid for the hours that they actually worked. But, it's also clear to me that the GPL doesn't mandate that “hourly work contracts” are the only possible compliant business model; there are clearly others that are GPL compliant, too. Meanwhile, it's also trivial to invent a business model that isn't GPL compliant — I see such every day, on my ever-growing list of GPL violating companies who sell binary software with no source (nor offer therefor) included. I do find myself wishing that the people debating whether the exact right number of angels are dancing on the head of this particular GPL pin would instead spend some time helping to end the flagrant, constant, and obvious GPL violations with which I spend much time dealing each week.

    On that note, if you ever think that someone is violating the GPL, (either for an esoteric reason or a mundane one), I hope that you will attempt to get it resolved, and report the violation to a copyright holder or enforcement agent if you can't. The part of this debate I find particularly useful here is that people are considering carefully whether or not various activities are GPL compliant. To quote the signs all over New York City subways, If you see something, say something. Always report suspicious activity around GPL software so we find out together as a community if there's really a GPL violation going on, and correct it if there is.

    Posted on Friday 11 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-03-05: The Slur “Open Core”: Toward More Diligent Analysis

    I certainly deserve some of the blame, and for that I certainly apologize: the phrase “Open Core” has apparently become a slur word, used by those who wish to discredit the position of someone else without presenting facts. I've done my best when using the term to also give facts that backed up the claim, but even so, I finally abandoned the term back in November 2010, and I hope you will too.

    The story, from my point of view, began seventeen months ago, when I felt that “Open Core” was a definable term and that behavior was a dangerous practice. I gave it the clear definition that I felt reflected problematic behavior, as I wrote at the time:

    Like most buzzwords, Open Core has no real agreed-upon meaning. I'm using it to describe a business model whereby some middleware-ish system is released by a single, for-profit entity copyright holder, who requires copyright-assigned changes back to the company, and that company sells proprietary add-ons and applications that use the framework.

    Later — shortly after I pointed out Mark Shuttleworth's fascination with and leanings towards this practice — I realized that it was better to use the preexisting, tried-and-true term for the practice: “proprietary relicensing”. I've been pretty consistent in avoiding the term “Open Core” since then. I called on Shuttleworth to adopt the FSF's recommendations to show Canonical, Ltd. isn't seeking proprietary relicensing and left the whole thing at that. (Shuttleworth, of course, has refused to even respond, BTW.)

    Sadly, it was too late: I'd helped create a monster. A few weeks later, Alexandre Oliva (whose positions on the issue of proprietary software inside the kernel named Linux I definitely agree with) took it a step too far and called the kernel named Linux an “Open Core” project. Obviously, Linux developers don't and can't engage in proprietary relicensing; some just engage in a “look the other way” mentality with regard to proprietary components inside Linux. At the time, I said that the term “Open Core” was clearly just too confusing to analyze a real-world licensing situation.

    So, I just stopped calling things “Open Core”. My concerns currently are regarding the practice of collecting copyright assignments to copyleft software and engaging in proprietary relicensing activity, and I've focused on advocating against that specific practice. That's what I've criticized Canonical, Ltd. for doing — both with their existing copyright assignment policies and with their effort to extend those policies community-wide with the manipulatively named “Project Harmony”.

    Shuttleworth, for his part, is now making use of the slur phrase I'd inadvertently helped create. Specifically, a few days ago, Shuttleworth accused Fedora of being an “Open Core” product.

    I've often said that Fedora is primarily a Red Hat corporate project (and it's among the reasons that I run Debian rather than Fedora). However, since “Open Core” clearly still has no agreed-upon meaning, when I read what Shuttleworth said, I considered the question of whether his claim had any merit (using the “Open Core” definition I used myself before I abandoned the term). Put simply, I asked myself the question: Does Red Hat engage in “proprietary relicensing of copyleft software with mandatory copyright assignment or a non-copyleft CLA” with Fedora?

    Fact is, despite having serious reservations about how the RHEL business model works, I have no evidence to show that Red Hat requires copyright assignment or a mandatory non-copyleft CLA on copyleft projects for any products other than Cygwin. So, if Shuttleworth had said: Cygwin is Red Hat's Open Core product, I would still encourage him that we should all now drop the term “Open Core”, but I would also agree with him that Cygwin is a proprietary-relicensed product and that we should urge Red Hat to abandon that practice. (Update: It's also been noted by Fontana on identi.ca (although the statement was subsequently deleted by the user) that some JBoss projects require permissive CLAs but license the code back out under the LGPL, so that would be another example.)

    But does Fedora require contributors to assign copyright or do non-copyleft licensing? I can't find the evidence, but there are some confusing facts. Fedora has a Contributor Licensing Agreement (CLA), which, in §1(D), clearly allows contributors to choose their own license. If the contributor accepts all the defaults on the existing Fedora CLA, the contributor gives a permissive license to the contribution (even for copyleft projects). Fortunately, though, the author can easily copyleft a work under the agreement, and it is still accepted by Fedora. (Contrast this with Canonical, Ltd.'s mandatory copyright assignment form, which explicitly demands Canonical, Ltd.'s power for proprietary relicensing.)

    While Fedora's current CLA does push people toward permissive licensing of copylefted works, the new draft of the Fedora CLA is much clearer on this point (in §2). In other words, the proposed replacement closes this bug. It thus seems to me Red Hat is looking to make things better, while Canonical, Ltd. hoodwinks us and is manufacturing consent in Project “Harmony” around a proprietary copyright-grab by for-profit corporations. When I line up the two trajectories, Red Hat's slowly getting better, and Canonical, Ltd. is quickly getting worse. Thus, Shuttleworth, sitting in his black pot, clearly has no right to say that the slightly brown kettle sitting next to him is black, too.

    It could be that Shuttleworth is actually thinking of the RHEL business model itself, which is actually quite different from proprietary relicensing. I do have strong, negative opinions about the RHEL business model; I have long called it the if you like copyleft, your money is no good here business model. It's a GPL-compliant business model merely because the GPL is silent on whether or not you must keep someone as your customer. Red Hat tells RHEL customers that if they choose to exercise their rights under the GPL, then their support contract will be canceled. I've often pointed out (although this may be the first time publicly on the Internet) that Red Hat found a bright line of GPL compliance, walked right up to it, and was the first to stake out a business model right on the line. (I've been told, though, that Cygnus experimented with this business model before being acquired by Red Hat.) This practice is, frankly, barely legitimate.

    Ironically, RMS and I used to say that Canonical, Ltd.'s new business model of interest — proprietary relicensing (once trailblazed by MySQL AB) — was also barely legitimate. In one literal sense, that's still true: it's legitimate in the sense that it doesn't violate GPL. In the sense of software freedom morality, I think proprietary relicensing harms the Free Software community too much, and that it was therefore a mistake to ever tolerate it.

    As for RHEL's business model, I've never liked it, but I'm still unsure (even ten years after its inception) about its software freedom morality. It doesn't seem as harmful as proprietary relicensing. In proprietary relicensing, those mistreated under the model are the small business and individual developers who are pressured to give up their copyleft rights lest their patches be rejected or rewritten. The small entities are left to choose between maintaining a fork or giving over proprietary corporate control of the codebase. In RHEL's business model, by contrast, the mistreated entities are large corporations that are forced to choose between exercising their GPL rights and losing access to the expensive RHEL support. It seems to me that the RHEL model is not immoral, but I definitely find it unfriendly and inappropriate, since it says: if you exercise software freedom, you can't be our customer.

    However, when we analyze these models that occupy the zone between license legitimacy and software freedom morality, I think I've learned from the mistake of using slur phrases like “Open Core”. From my point of view, most of these “edge” business models have ill effects on software freedom and community building, and we have to examine their nuances mindfully and gauge carefully the level of harm caused. Sometimes, over time, that harm shows itself to be unbearable (as with proprietary relicensing). We must stand against such models and meanwhile continue to question the rest with precise analysis.

    Posted on Saturday 05 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-03-01: Software Freedom Is Elementary, My Dear Watson.

    I've watched the game show, Jeopardy!, regularly since its Trebek-hosted relaunch on 1984-09-10. I even remember distinctly the Final Jeopardy question that night as This date is the first day of the new millennium. At the age of 11, I got the answer wrong, falling for the incorrect What is 2000-01-01?, but I recalled this memory eleven years ago during the debates regarding when the millennium turnover happened.

    I had periods of life where I watched Jeopardy! only rarely, but in recent years (as I've become more of a student of games (in part, because of poker)), I've watched Jeopardy! almost nightly over dinner with my wife. I've learned that I'm unlikely to excel as a Jeopardy! player myself because (a) I read slowly and (b) my recall of facts, while reasonably strong, is not instantaneous. I thus haven't tried out for the show, but I'm nevertheless a fan of strong players.

    Jeopardy! isn't my only spectator game. Right after college, even though I'm a worse-than-mediocre chess player, I watched with excitement as Deep Blue played and defeated Kasparov. Kasparov has disputed the results and how much humans were actually involved, but even so, such interference was minimal (between matches) and the demonstration still showed computer algorithmic mastery of chess.

    Of course, the core algorithms that Deep Blue used were well known and often implemented. I learned α-β pruning in my undergraduate AI course, and it was clear that a sufficiently fast computer, given a few strong heuristics, could win at most any full-information game with a reasonable branching factor. And, computers typically do these days.

    I suppose I never really thought about the issues of Deep Blue being released as Free Software. First, because I was not as involved with Free Software then as I am now, and also, as near as anyone could tell, Deep Blue's software was probably not useful for anything other than playing chess, and its primary power was in its ability to go very deep (hence the name, I guess) in the search tree. In short, Deep Blue was primarily a hardware, not a software, success story.

    It was, nevertheless, impressive, and last month, I saw the next installment in this IBM story. I watched with interest as IBM's Watson defeated two champion Jeopardy! players. Ken Jennings, for one, even welcomed our new computer overlords.

    Watson beating Jeopardy! is, frankly, a lot more innovative than Deep Blue beating chess. Most don't know this about me, but I came very close to focusing my career on PhD work in Natural Language Processing; I believe fundamentally it's the area of AI most in need of attention and research. Watson is a shining example of success in modern NLP, and I actually believe some of the IBM hype about how Watson's technology can be applied elsewhere, such as medical information systems. Indeed, IBM has announced a deal with Columbia University Medical Center to adapt the system for medical diagnostics. (Perhaps Watson's next TV appearance will be on House.)

    This all sounds great to most people, but my real concern is the freedom of the software. We've shown in the software freedom community that to advance software and improve it, sharing the software is essential. Technology locked up in a vaulted cave doesn't allow all the great minds to collaborate. Just as we don't lock up libraries so that only the guilded overlords have access, neither should the best software technology be locked away as proprietary.

    Indeed, Eric Brown, at his Linux Foundation End User Linux Summit talk, told us that Watson relied heavily on publicly available software freedom codebases, such as GNU/Linux, Hadoop, and other FLOSS components. They clearly couldn't do their work without building upon the work we shared with IBM, yet IBM apparently ignores its moral obligation to reciprocate.

    So, I just point-blank asked Brown why Watson is proprietary. Of course, I long ago learned never to ask a confrontational question from the crowd at a technical talk without knowing what the answer is likely to be. Brown answered in the way I expected: We're working with Universities to provide a framework for their research. I followed up, asking when he would actually release the sources and what the license would be. He dodged the question, and instead speculated about what licenses IBM sometimes likes to use when it does choose to release code; he did not indicate whether Watson's sources will ever be released. In short, the answer from IBM is clear: Watson's general ideas will be shared with academics, but the source code won't be.

    This point is precisely one of the reasons I didn't pursue a career in academic Computer Science. Since most jobs — including professorships at Universities — for PhDs in Computer Science require that any code written be kept proprietary, most Computer Science researchers have convinced themselves that code doesn't matter; only publishing ideas does. This belief is so pervasive that I knew something like this would be Brown's response to my query. (I was even so sure, I wrote almost this entire blog post before I asked the question.)

    I'd easily agree that publishing papers is better than the technology being only a trade secret. At least we can learn a little bit about the work. But in all but the pure theoretical areas of Computer Science, code is written to exemplify, test, and exercise the ideas. Merely publishing papers and not the code is akin to a chemist publishing final results but nothing about the methodologies or raw data. Science, in such cases, is unverifiable and unreproducible. If we accepted such in fields other than CS, we'd have accepted the idea that cold fusion was discovered in 1989.

    I don't think I'm going to convince IBM to release Watson's sources as Free Software. What I do hope is that perhaps this blog post convinces a few more people that we just shouldn't accept that Computer Science is advanced by researchers who give us flashy demos and code-less research papers. I, for one, welcome our computer overlords…but only if I can study and modify their source code.

    Posted on Tuesday 01 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2011-02-15: Everyone in USA: Comment against ACTA today!

    In the USA, the deadline for comments on ACTA is today (Tuesday 15 February 2011) at 17:00 US/Eastern. It's absolutely imperative that every USA citizen submit a comment on this. The Free Software Foundation has details on how to do so.

    ACTA is a dangerous international agreement that would establish additional criminal penalties, promulgate DMCA/EUCD-like legislation around the world, and otherwise extend copyright law into places it should not go. Copyright law is already much stronger than anyone needs.

    On a meta-point, it's extremely important that USA citizens participate in comment processes like this. The reason that things like ACTA can happen in the USA is because most of the citizens don't pay attention. By way of hyperbolic fantasy, imagine if every citizen of the USA wrote a letter today to Mr. McCoy about ACTA. It'd be a news story on all the major news networks tonight, and would probably be in the headlines in print/online news stories tomorrow. Our whole country would suddenly be debating whether or not we should have criminal penalties for copying TV shows, and whether breaking a DVD's DRM should be illegal.

    Obviously, that fantasy won't happen, but getting from where we are to that wonderful fantasy is actually linear; each person who writes to Mr. McCoy today makes a difference! Please take 15 minutes out of your day today and do so. It's the least you can do on this issue.

    The Free Software Foundation has a sample letter you can use if you don't have time to write your own. I wrote my own, giving some of my unique perspective, which I include below.

    The automated system on regulations.gov assigned the comment below the tracking number 80bef9a1 (cool, it's in hex! :)

    Stanford K. McCoy
    Assistant U.S. Trade Representative for Intellectual Property and Innovation
    Office of the United States Trade Representative
    600 17th St NW
    Washington, DC 20006

    Re: ACTA Public Comments (Docket no. USTR-2010-0014)

    Dear Mr. McCoy:

    I am a USA citizen writing to urge that the USA not sign ACTA. Copyright law already reaches too far. ACTA would extend problematic, overly-broad copyright rules around the world and would increase the already inappropriate criminal penalties for copyright infringement here in the USA.

    Both individually and as an agent of my employer, I am regularly involved in copyright enforcement efforts to defend the Free Software license called the GNU General Public License (GPL). I therefore think my perspective can be uniquely contrasted with other copyright holders who support ACTA.

    Specifically, when engaging in copyright enforcement for the GPL, we treat it as purely a civil issue, not a criminal one. We have been successful in defending the rights of software authors in this regard without the need for criminal penalties for the rampant copyright infringement that we often encounter.

    I realize that many powerful corporate copyright holders wish to see criminal penalties for copyright infringement expanded. As someone who has worked in the area of copyright enforcement regularly for 12 years, I see absolutely no reason that any copyright infringement of any kind ever should be considered a criminal matter. Copyright holders who believe their rights have been infringed have the full power of civil law to defend their rights. Using the power of government to impose criminal penalties for copyright infringement is an inappropriate use of government to interfere in civil disputes between its citizens.

    Finally, ACTA would introduce new barriers for those of us trying to change our copyright law here in the USA. The USA should neither impose its desired copyright regime on other countries, nor should the USA bind itself in international agreements on an issue where its citizens are in great disagreement about correct policy.

    Thank you for considering my opinion, and please do not allow the USA to sign ACTA.

    Sincerely,
    Bradley M. Kuhn

    Posted on Tuesday 15 February 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2011-01-23: A Brief Tutorial on a Shared Git Repository

    A while ago, I set up Git for a group privately sharing the same central repository. Specifically, this is a tutorial for those who want a Git setup that is a little bit like an SVN repository: a central repository where all the branches that matter are published in one place. I found this file today floating in a directory of “things I should publish at some point”, so I decided just to put it up; every time I came across the file, it reminded me that it's really morally wrong (IMO) to keep generally useful technical information private, even when it's only laziness that's causing it.

    Before you read this, note that most developers don't use Git this way, particularly with the advent of shared hosting facilities like Gitorious, since systems like Gitorious solve the sorts of problems that this tutorial addresses. When I originally wrote this (more than a year ago), the only well-known project that I found using a system like this was Samba; I haven't seen a lot of other projects that do this. Indeed, this process is not really what Git is designed to do, but sometimes groups that are used to SVN expect there to be a “canonical repository” that has all the contents of the shared work under one proverbial roof, and set up a “one true Git repository” for the project from which everyone clones.

    Thus, this tutorial is primarily targeted at a user mostly familiar with an SVN workflow who has ssh access to host.example.org, where a writable (usually by multiple people) Git repository lives in the directory /git/REPOSITORY.git/.

    Ultimately, the stuff that I've documented herein is basically to fill in the gaps that I found when reading the following tutorials:

    So, here's my tutorial, FWIW. (I apologize that I make the mortal sin of tutorial writing: I drift wildly between second-person-singular, first-person-plural, and passive-voice third-person. If someone sends me a patch to the HTML file that fixes this, I'll fix it. :)

    Initial Setup

    Before you start using git, you should run these commands to let it know who you are so your info appears correctly in commit logs:

                     $ git config --global user.email [email protected]
                     $ git config --global user.name "Your Real Name"
                    

    Examining Your First Clone

    To get started, first we clone the repository:

                      $ git clone ssh://host.example.org/git/REPOSITORY.git/
                    

    Now, note that Git almost always operates in terms of branches. Unlike Subversion, Git's branches are first-class citizens and most operations in Git operate around a branch. The default branch is often called “master”, although I tend to avoid using the master branch for much, mainly because everyone who uses git has a different perception of what the master branch should embody. Therefore, giving all your branches more descriptive names is helpful. But, when you first import something into git (for example, from existing Subversion trees), everything from Subversion's trunk is thrown on the master branch.

    So, we take a look at the result of that clone command. We have a new directory, called REPOSITORY, that contains a “working checkout” of the repository, and under that there is one special directory, REPOSITORY/.git/, which is a full copy of the repository. Note that this is not like Subversion, where what you have on your local machine is merely one view of the repository. With Git, you have a full copy of everything. However, an interesting thing has been done on your copy with the branches. You can take a look with these commands:

                      $ git branch
                      * master
                      $ git branch -r
                      origin/HEAD
                      origin/master
                    

    The first list shows the branches that are personal and local to you. (By default, git branch uses the -l option, which shows you only “local” branches; -r means “remote” branches. You can also use -a to see all of them.) Unless you take action to publish your local branches in some way, they will be your private area to work in and live only on your computer. (And be aware: they are not backed up unless you back them up!) The remote ones, which all start with “origin/”, track the progress on the shared repository.

    (Note the term “origin” is a standard way of referring to “the repository from whence you cloned”, and origin/BRANCH refers to “BRANCH as it looks in the repository from whence you cloned”. However, there is nothing magical about the name “origin”. It's set up to DTRT in your WORKING-DIRECTORY/.git/config file, and the clone command set it all up for you, which is why you have them now.)
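
    (If you're curious what that setup looks like, the stanza that clone writes into WORKING-DIRECTORY/.git/config is typically something along these lines, using, of course, the placeholder URL from this tutorial:)

                      [remote "origin"]
                              url = ssh://host.example.org/git/REPOSITORY.git/
                              fetch = +refs/heads/*:refs/remotes/origin/*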

    Get to Work

    The canonical way to “get moving” with a new task in Git is to somehow create a branch for it. Branches are designed to be cheap and quick to create so that users will not be shy about creating a new one. Naming conventions are your own, but generally I like to call a branch USERNAME/TASK when I'm still not sure exactly what I'll be doing with it (i.e., who I will publish it to, etc.) You can always merge it back into another branch, or copy it to another branch (perhaps using a more formal name) later.

    Where do you Start Your Branch From?

    Once a repository exists, each branch in the repository comes from somewhere — it has a parent. These relationships help Git know how to easily merge branches together. So, the most typical procedure of starting a new branch of your own is to begin with an existing branch. The git checkout command is the easiest to use to start this:

                       $ git checkout -b USERNAME/feature origin/master
                    

    In this example, we've created our own local branch, called USERNAME/feature, and it's started from the current state of origin/master. When you are getting started, you will usually want to base your new branches off of ones that exist on the origin. This isn't a rule; it's just less confusing for a newbie if all your branches have a parent revision that lives on the server.

    Now, it's important to note here that no branch stands still. It's best to think about a branch as a “moving pointer” to a linked list of some set of revisions in the repository.

    Every revision stored in git, local or remote, has a SHA1 identifier, which is computed based on the revisions before it plus the new patch that the revision applies.

    Meanwhile, the only two substantive differences between one of these SHA1 identifiers and an actual branch are that (a) Git keeps changing which identifier the branch refers to as new commits come in (aka it moves the branch's HEAD), and (b) Git keeps track of the history of identifiers the branch previously referred to.
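
    (You can watch that pointer move yourself. Here's a quick illustration, where the SHA1 values shown are just placeholders for whatever your repository actually reports, and which assumes you've edited some tracked file before the commit:)

                       $ git rev-parse USERNAME/feature
                       OLD_HEAD_SHA1_SUM
                       $ git commit -a -m "some change"
                       $ git rev-parse USERNAME/feature
                       NEW_HEAD_SHA1_SUM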

    So, above, when we asked git checkout to create a new branch called USERNAME/feature based on origin/master, the two important things to realize are that (a) your new branch has its HEAD pointing at the same head that is currently the HEAD of origin/master, and (b) you got a new list onto which to start adding revisions in the new branch.

    We didn't have to use a branch for that. We could have simply started our branch from any old SHA1 of any revision. We happened to want to declare a relationship with the master branch on the server in this case, but we could have easily picked any SHA1 from our git log and used that one.
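
    (For example, with a made-up abbreviated SHA1 and commit message standing in for whatever git log actually shows you, something like this would have worked just as well:)

                       $ git log --oneline
                       1a2b3c4 Some earlier commit message
                       ...
                       $ git checkout -b USERNAME/experiment 1a2b3c4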

    Do Not Fear the checkout

    Every time you run a git checkout SOMETHING command, your entire working directory changes. This normally scares Subversion users; it certainly scared me the first time I used git checkout SOMETHING. But, the only reason it is scary is because svn switch, which is the roughly analogous command in the Subversion world, so often doesn't do something sane with your working copy. By contrast, switching branches and changing your whole working directory is a common occurrence with git.

    Note, however, that you cannot do git checkout with uncommitted changes in your directory (which, BTW, also makes it safer than svn switch). However, don't be too Subversion-user-like and therefore afraid to commit things. Remember, with Git (and unlike with Subversion), committing and publishing are two different operations. You can commit to your heart's content on local branches and merge or push into public branches later. (There are even commands to squash many commits into one before putting it on a public branch, in case you don't want people to see all the intermediate goofiness you might have done. This is why, BTW, many Git users commit as often as an SVN user would save in their editors.)
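
    (As one concrete illustration of that squashing idea, though certainly not the only way to do it, git merge --squash collapses everything from a branch into a single staged change on your current branch, which you then commit with whatever message you like:)

                       $ git checkout master
                       $ git merge --squash USERNAME/feature
                       $ git commit -m "Add the new feature (squashed from USERNAME/feature)"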

    However, if you must switch checkouts but really do fear making commits, there is a tool for you: look into git stash.
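
    (A minimal sketch of that stash workflow: shelve the half-done work, switch away, and then pick the work back up when you return:)

                       $ git stash
                       $ git checkout master
                       # ... do whatever you needed to do on master ...
                       $ git checkout USERNAME/feature
                       $ git stash pop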

    Share with the Group

    Once you've been doing some work, you'll end up with some useful work finished on a USERNAME/feature branch. As noted before, this is your own private branch. You probably want to use the shared repository to make your work available to others.

    When using a shared Git repository, there are two ways to share your branches with your colleagues. The first procedure is when you simply want to publish directly on an existing branch. The second is when you wish to create your own branch.

    Publishing to Existing Branch

    You may choose to merge your work directly into a known branch on the remote repository. That's a viable option, certainly, but often you want to make it available on a separate branch for others to examine, even before you merge it into something like the master branch. We discuss the slightly more complicated new branch publication next, but for the moment, we can consider the quicker process of publishing to an existing branch.

    Let's consider when we have work on USERNAME/feature and we would like to make it available on the master branch. Make sure your USERNAME/feature branch is clean (i.e., all your changes are committed).

    The first thing you should verify is that you have what I call a “local tracking branch” (this is my own term that I made up, I think; you won't likely see it in other documentation) that is tied directly, with the same name, to the origin. This is not completely necessary, but it is much more convenient for keeping track of what you are doing. To check, do a:

                       $ git branch -a
                       * USERNAME/feature
                         master
                         origin/master
                    

    In the list, you should see both master and origin/master. If you don't have that, you should create it with:

                       $ git checkout -b master origin/master
                    

    So, either way, you want to be on the master branch. To get there if it already existed, you can run:

                       $ git checkout master
                    

    And you should be able to verify that you are now on master with:

                       $ git branch
                       * master
                       ...
                    

    Now, we're ready to merge in our changes:

                       $ git merge USERNAME/feature
                       Updating ded2fb3..9b1c0c9
                       Fast forward
                       FILE ...
                       N files changed, X insertions(+), Y deletions(-)
                    

    If you don't get any message about conflicts, everything is fine. Your changes from USERNAME/feature are now on master. Next, we publish it to the shared repository:

                      $ git push
                      Counting objects: N, done.
                      Compressing objects: 100% (A/A), done.
                      Writing objects: 100% (A/A), XXX bytes, done.
                      Total G (delta T), reused 0 (delta 0)
                      refs/heads/master: IDENTIFIER_X -> IDENTIFIER_Y
                      To ssh://host.example.org/git/REPOSITORY.git
                       X..Y  master -> master
                    

    Your changes can now be seen by others when they git pull (See below for details).

    Publishing to a New Branch

    Suppose that, instead of immediately putting the feature on the master branch, you wanted simply to mirror your personal feature branch to the rest of your colleagues so they can try it out before it officially becomes part of master. To do that, you first need to tell Git that you want to make a new branch on the shared repository. In this case, you do have to use the git push command as well. (It is a catch-all command for any operations you want to do to the remote repository without actually logging into the server where the shared Git repository is hosted. Thus, not surprisingly, nearly any git push command you can think of will require you to be net.connected.)

    So, first let's create a local branch that has the actual name we want to use publicly. To do this, we'll just use the checkout command, because it's the most convenient and quick way to create a local branch from an already existing local branch:

                      $ git branch -l
                      * USERNAME/feature
                        master
                        ...
                      $ git checkout -b proposed-feature USERNAME/feature
                      Switched to a new branch “proposed-feature”
                      $ git branch -l
                      * proposed-feature
                        USERNAME/feature
                        master
                        ...
                    

    Now, again, we've only created this branch locally. We need an equivalent branch on the server, too. This is where git push comes in:

                      $ git push origin proposed-feature:refs/heads/proposed-feature
                    

    Let's break that command down. The first argument for push is always “the place you are pushing to”. That can be any sort of git URL, including ssh://, http://, or git://. However, remember that the original clone operation set up this shorthand “origin” to refer to the place from whence we cloned. We'll use that shorthand here so we don't have to type out that big long URL.

    The second argument is a colon-separated item. The left hand side is the local branch we're pushing from on our local repository, and the right hand side is the branch we are pushing to on the remote repository.

    (BTW, I have no idea why refs/heads/ is necessary. It seems you should be able to say proposed-feature:proposed-feature and git would figure out what you mean. But, in the setups I've worked with, it doesn't usually work if you don't put in refs/heads/.)

    That operation will take a bit to run, but when it is done we see something like:

                      Counting objects: 35, done.
                      Compressing objects: 100% (31/31), done.
                      Writing objects: 100% (33/33), 9.44 MiB | 262 KiB/s, done.
                      Total 33 (delta 1), reused 27 (delta 0)
                      refs/heads/proposed-feature: 0000000000000000000000000000000000000000
                                                     -> CURRENT_HEAD_SHA1_SUM
                      To ssh://host.example.org/git/REPOSITORY.git/
                       * [new branch]      proposed-feature -> proposed-feature
                    

    In older Git clients, you may not see that last line, and you won't get the origin/proposed-feature branch until you do a subsequent pull. I believe newer git clients do the pull automatically for you.

    Reconfiguring Your Client to see the New Remote Branch

    Annoyingly, as the creator of the branch, we have some extra config work to do to officially tell our repository copy that these two branches should be linked. Git didn't know from our single git push command that our repository's relationship with that remote branch was going to be a long-term thing. To marry our local proposed-feature branch to origin/proposed-feature, we must use the commands:

                      $ git config branch.proposed-feature.remote origin
                      $ git config branch.proposed-feature.merge refs/heads/proposed-feature
                    

    We can see that this branch now exists because we find:

                      $ git branch -a
                      * proposed-feature
                        USERNAME/feature
                        master
                        origin/HEAD
                        origin/proposed-feature
                        origin/master
                     

    After this is done, the remote repository has a proposed-feature branch and, locally, we have a proposed-feature branch that is a “local tracking branch” of origin/proposed-feature. Note that our USERNAME/feature, where all this stuff started from, is still around too, but can be deleted with:

                    $ git branch -d USERNAME/feature
                    

    Finding It Elsewhere

    Meanwhile, someone else who has separately cloned the repository before we did this won't see these changes automatically, but a simple git pull command can get it:

                      $ git pull
                      remote: Generating pack...
                      remote: Done counting 35 objects.
                      remote: Result has 33 objects.
                      remote: Deltifying 33 objects...
                      remote:  100% (33/33) done
                      remote: Total 33 (delta 1), reused 27 (delta 0)
                      Unpacking objects: 100% (33/33), done.
                      From ssh://host.example.org/git/REPOSITORY.git
                       * [new branch]      proposed-feature -> origin/proposed-feature
                      Already up-to-date.
                      $ git branch -a
                      * master
                        origin/HEAD
                        origin/proposed-feature
                        origin/master
                    

    However, their checkout directory won't reflect the changes until they make a local “mirror” branch. Usually, this would be done with:

                      $ git checkout -b proposed-feature origin/proposed-feature
                    

    Then they'll have a working copy with all the data and a local branch to work on.

    BTW, if you want to try this yourself just to see how it works, you can always make another clone in some other directory just to play with, by doing something like:

                      $ git clone ssh://host.example.org/git/SOME-REPOSITORY.git/ \
                        extra-clone-for-git-didactic-purposes
                    

    Now on this secondary checkout (which makes you just like the user who is not the creator of the new branch), work can be pushed and pulled on that branch easily. Namely, anything you merge into or commit on your local proposed-feature branch will automatically be pushed to origin/proposed-feature on the server when you git push. And, anything that shows up from other users on the origin/proposed-feature branch will show up when you do a git pull. These two branches were paired together from the start.
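
    (For concreteness, a typical day-to-day loop on such a paired branch might look roughly like this; the commit message is, of course, just an example:)

                      $ git checkout proposed-feature
                      $ git pull
                      $ git commit -a -m "tweak the feature"
                      $ git push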

    Irrational Rebased Fears

    When using a shared repository like this, it's generally the case that git rebase will screw something up. When Git is used in the “normal way”, rebase is one of the amazing things about Git. The rebase idea is: you unwind the entire work you've done on one of your local branches, bring in changes that other people have made in the meantime, and then reapply your changes on top of them.
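
    (In command form, that idea is roughly the following; I show it here purely to illustrate what rebase does, not as a recommendation for the shared-repository setup this tutorial describes:)

                      $ git checkout USERNAME/feature
                      $ git fetch origin
                      $ git rebase origin/master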

    It works out great when you use Git the way the Linux Project does. However, if you use a single, shared repository in a work group, rebase can be dangerous.

    Generally speaking, though, with a shared repository, you can use git merge and won't need rebasing. My usual work flow is that I get started on a feature with:

                      $ git checkout -b bkuhn/new-feature starting-branch
                    

    I work work work away on it. Then, when it's ready, I send a patch around to a mailing list that I generate with:

                      $ git diff $(git merge-base starting-branch bkuhn/new-feature) bkuhn/new-feature
                    

    Note that the command in the $() returns a single identifier for a revision: namely, the fork point between starting-branch and bkuhn/new-feature. Therefore, the diff output is just the stuff I've actually changed: all the differences between the place where I forked and my current work.

    Once I have discussed and decided with my co-developers that we like what I've done, I do this:

                      $ git checkout starting-branch
                      $ git merge bkuhn/new-feature
                    

    If all went well, this should automatically commit my feature into starting-branch. Usually, there is also an origin/starting-branch, which I've probably set up for automatic push/pull with my local starting-branch, so I then can make the change officially by running:

                      $ git push
                    

    My avoidance of rebase is probably based merely on FUD, and if I learned more, I could use it safely in cases with a shared repository. But I have no advice on how to make it work. In particular, this Git FAQ entry shows quite clearly that my work sequence ceases to work all that well when you do a rebase — namely, doing a git push becomes more complicated.

    I am sure a rebase would easily become very necessary if I lived on bkuhn/new-feature for a long time and there had been tons of changes underneath me, but I generally try not to dive too deep into a fork, although many people love DVCS because they can do just that. YMMV, etc.

    Posted on Sunday 23 January 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-01-18: Free as in Freedom, Episode 0x07

    I realized that I should start regularly noting here on my blog when the oggcast that I co-host with Karen Sandler is released. There are perhaps folks who want content from my blog but haven't subscribed to the RSS feed of the show, and thus might want to know when new episodes come out. If this annoys people reading this blog, please let me know via email or identica.

    In particular, perhaps readers won't like that, in these posts (which are going to be written after the show), I'm likely to drift off into topics beyond what was talked about on the show, and there may be “spoilers” for the oggcast in them. Again, if this annoys you (or if you like it) please let me know.

    Today's FaiF episode is entitled Revoked?. The main issue of discussion is some recent confusions about the GPLv2 release of WinMTR. I was quoted in an article about the topic as well, and in the oggcast we discuss this issue at length.

    To summarize my primary point in the oggcast: I'm often troubled when these issues come up, because I've seen these types of confusions so many times before in the last decade. (I've seen this particular one, almost exactly like this, at least five times.) I believe that those of us who focus on policy issues in software freedom need to do a better job documenting these sorts of issues.

    Meanwhile, after we recorded the show I was thinking again about how Karen points out in the oggcast that the primary issues are legal ones. I don't really agree with that. These are policy questions, that are perhaps informed by legal analysis, and it's policy folks (and, specifically, Free Software project leaders) that should be guiding the discussion, not necessarily lawyers.

    That's not to say that lawyers can't be policy folks as well; I actually think Karen and a few other lawyers I know are both. The problem is that if we simply take things like GPL on their face — as if they are unchanging laws of nature that simply need to be interpreted — we miss out on the fact that licenses, too, can have bugs and can fail to work the way that they should. A lawyer's job is typically to look at a license, or a law, or something more or less fixed in its existence and explain how it works, and perhaps argue for a particular position of how it should be understood.

    In our community, activists and project leaders who set (or influence) policy should take such interpretations as input, and output plans to either change the licenses and interpretation to make sure they properly match the goals of software freedom, or to build up standards and practices that work within the existing licensing and legal structure to advance the goal of building a world where all published software is Free Software.

    So, those are a few thoughts I had after recording; be sure to listen to FaiF 0x07 available in ogg and mp3 formats.

    Posted on Tuesday 18 January 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-01-02: Conservancy Activity Summary, 2010-10-01 to 2010-12-31

    [ Crossposted from Conservancy's blog. ]

    I had hoped to blog more regularly about my work at Conservancy, and hopefully I'll do better in the coming year. But now seems a good time to summarize what has happened with Conservancy since I started my full-time volunteer stint as Executive Director from 2010-10-01 until 2010-12-31.

    New Members

    We excitedly announced in the last few months two new Conservancy member projects, PyPy and Git. Thinking of PyPy connects me back to my roots in Computer Science: in graduate school, I focused on research about programming language infrastructure and, in particular, virtual machines and language runtimes. PyPy is a project that connects Conservancy to lots of exciting programming language research work of that nature, and I'm glad they've joined.

    For its part, Git rounds out a group of three DVCS projects that are now Conservancy members; Conservancy is now the home of Darcs, Git, and Mercurial. Amusingly, when the Git developers applied, I reminded them that their “competition” were already members; they told me that they were inspired to apply precisely because those other DVCS projects had been happy in Conservancy. That's a reminder that the software freedom community remains a place where projects — even ones that might seem on the surface to be competitors — seek to get along and work together whenever possible. I'm glad Conservancy now hosts all these projects together.

    Meanwhile, I remain in active discussions with five projects that have been offered membership in Conservancy. As I always tell new projects, joining Conservancy is a big step for a project, so it often takes time for communities to discuss the details of Conservancy's Fiscal Sponsorship Agreement. It may be some time before these five projects join, and perhaps they'll ultimately decide not to join. However, I'll continue to help them make the right decision for their project, even if joining a different fiscal sponsor (or not joining one at all) is ultimately the right choice.

    Also, about once every two weeks, another inquiry about joining Conservancy comes in. We won't be able to accept all the projects that are interested, but hopefully many can become members of Conservancy.

    Annual Filings

    In the late fall, I finished up Conservancy's 2010 filings. Annual filings for a non-profit can be an administrative rat-hole at times, but the level of transparency they create for an organization makes them worth it. Conservancy's FY 2009 Federal Form 990 and FY 2009 New York CHAR-500 are up on Conservancy's filing page. I always make the filings available on our own website; I wish other non-profits would do this. It's so annoying to have to go to a third-party source to grab these documents. (Although New York State, to its credit, makes all the NY NPO filings available on its website.)

    Conservancy filed a Form 990-EZ in FY 2009. If you take a look, I'd encourage you to direct the most attention to Part III (which is on the top of page 2) to see most of Conservancy's program activities between 2008-03-01 to 2009-02-28.

    In FY 2010, Conservancy will move from the New York State requirement of “limited financial review” to “full audit” (see page 4 of the CHAR-500 for the level requirements). Conservancy had so little funding in FY 2007 that it wasn't required to file a Form 990 at all. Now, just three years later, there is enough revenue to warrant a full audit. So, I've already begun preparing myself for all the administrative work that will entail.

    Project Growth and Funding

    Those increases in revenue are related to growth in many of Conservancy's projects. 2010 marked the beginning of the first full-time funding of a developer by Conservancy. Specifically, since June, Matt Mackall has been funded through directed donations to Conservancy to work full-time on Mercurial. Matt blogs once a month (under topic of Mercurial Fellowship Update) about his work, but, more directly, the hundreds of changesets that Matt's committed really show the advantages of funding projects through Conservancy.

    Conservancy is also collecting donations and managing funding for various part-time development initiatives by many developers. Developers of jQuery, Sugar Labs, and Twisted have all recently received regular development funding through Conservancy. An important part of my job is making sure these developers receive funding and report the work clearly and fully to the community of donors (and the general public) that fund this work.

    But, as usual with Conservancy, it's the handling of the “many little things” for projects that makes a big difference and sometimes takes the most time. In late 2010, Conservancy handled funding for Code Sprints and conferences for the Mercurial, Darcs, and jQuery projects. In addition, jQuery held a conference in Boston in October, for which Conservancy handled all the financial details. I was fortunate to be able to attend the conference and meet many of the jQuery developers in person for the first time. Wine also held their annual conference in November 2010, and Conservancy handled the venue details and reimbursements to many of the travelers to the conference.

    Also, as always, Conservancy project contributors regularly attend other conferences related to their projects. At least a few times a month, Conservancy reimburses developers for travel to speak and attend important conferences related to their projects.

    Google Summer of Code

    Since its inception, Google's Summer of Code (SoC) program has been one of the most important philanthropy programs for Open Source and Free Software projects. In 2010, eight Conservancy projects (5% of the entire SoC program) participated in SoC. The SoC program funds college students for the summer to contribute to the projects, and an experienced contributor to the project mentors each student. A $500 stipend is paid to the non-profit organization of the project for each project contributor who mentors a student.

    Furthermore, there's an annual conference, in October, of all the mentors, with travel funded by Google. This is a really valuable conference, since it's one of the few places where very disparate Free Software projects that usually wouldn't interact can meet up in one place. I attended this year's SoC Mentor Summit and hope to attend again next year.

    I'm really going to be urging all Conservancy's projects to take advantage of the SoC program in 2011. The level of funding given out by Google for this program is higher than any other open-application funding program for FLOSS. While Google's selfish motives are clear (the program presumably helps them recruit young programmers to hire), the benefit of the program to the Free Software community nevertheless cannot be ignored.

    GPL Enforcement

    GPL Enforcement, primarily for our BusyBox member project, remains an active focus of Conservancy. Work regarding the lawsuit continues. It's been more than a year since Conservancy filed a lawsuit against fourteen defendants who manufacture embedded devices that included BusyBox without source or an offer for source. Some of those have come into compliance with the GPL and settled, but a number remain out of compliance, and our litigation efforts continue. Usually, our lawyers encourage us not to comment on ongoing litigation, but we did put up a news item in August when the Court granted Conservancy a default judgment against one of the defendants, Westinghouse.

    Meanwhile, in the coming year, Conservancy hopes to expand efforts to enforce the GPL. New violation reports on BusyBox arrive almost daily that need attention.

    More Frequent Blogging

    As noted at the start of this post, my hope is to update Conservancy's blog more regularly with information about our activities.

    This blog post was covered on LWN and on lxnews.org.

    Posted on Sunday 02 January 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2010

November

  • 2010-11-16: In Defense of Bacon

    Jono Bacon is currently being criticized for the manner in which he launched an initiative called OpenRespect.Org. Much of this criticism is unfair, and I decided to write briefly here in support of Jono, because he's a victim of a type of mistreatment that I've experienced myself, so I have particularly strong empathy for his situation.

    To be clear, I'm not even a supporter of Jono's OpenRespect.Org initiative myself. I think there are others who are doing good work in this area already (for example, various efforts around getting women involved in Free Software have long recognized and worked on the issue, since mutual respect is an essential part of having a more diverse community). Also, I felt that Jono's initiative was slanted toward encouraging people to respect all actions by companies, some of which don't advance Free Software. I commented on Jono's blog to share my criticisms of the initiative when he was still formulating it. In short, I think the wording of the current statement on OpenRespect.org seems to indicate that people should accept anyone else's choice as equally moral. As someone who believes software freedom is a moral issue, and thus views the development and distribution of proprietary software as an immoral act, I have a problem with such a mandate, although I nevertheless strive to be respectful in pursuit of that view. I would hate to be declared disrespectful merely because I believe in the morality of software freedom.

    Yet, despite the fact that I disagree with some of the details of Jono's initiative, I believe most of the criticisms have been unfair. First and foremost, we should take Jono at his word that this initiative is his own and not one undertaken on behalf of Canonical, Ltd. I doubt Jono would dispute that his work at Canonical, Ltd. inspired him to think about these issues, but that doesn't mean that everything he does on his own time on his own website is a Canonical, Ltd. activity.

    Indeed, I've personally been similarly attacked for things I've said on this blog of my own, which of course do not represent the views of any of my employers (past or present) nor any organizations with which I have volunteer affiliations. When I have things to say on those topics, I have other fora to post officially, as does Jono.

    So, I've experienced first-hand what Jono is currently experiencing: namely, that people ignore disclaimers precisely to attack someone who has an opinion that they don't like. By conflating your personal opinions with those of your employer, people subtly discredit you — for example, by using your employment relationship to put inappropriate pressure on you to change your positions. I'm very sad to see that this same thing I've been a victim of is now happening to Jono, too. I couldn't just watch it happen without making a statement of solidarity and pointing out that such treatment is unfair.

    Even if we don't agree with the OpenRespect.org initiative (and I don't, for reasons stated above), there is no one to blame but Jono himself, as he's told us clearly this isn't a Canonical initiative, and I've seen no evidence that shows the situation is otherwise.

    I do note that there are other criticisms raised, such as whether or not Jono reached out in the best possible way to others during the launch, or whether others thought they'd be involved when it turned out to be a unilateral initiative. All of that, of course, is something that's reparable (as is my primary complaint above, too), so on those fronts, we should just give our criticism and ask Jono to change it. That's what I did on my issue. He chose not to take my advice, which is his prerogative. My response thereafter was simply to not support the initiative.

    To the extent we don't have enough respect in the FLOSS community, here's an easy place to improve: we should take people at their word until we have evidence to believe otherwise. Jono says OpenRespect.org is his own thing; we should believe him. We shouldn't insist that everything someone says is on behalf of their employer, even if they have a spokesperson role. People have a right to be something more than automatons for their bosses.

    Disclosure: I did not tell Jono I was going to write this post, but after it was completely written, I gave him the chance to make a binary decision about whether I posted it publicly or not. Since you're reading this, he obviously answered 1.

    Posted on Tuesday 16 November 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-11-15: Comments on Perens' Comments on Software Patents

    Bruce Perens and I often disagree about lots of things. However, I urge everyone to read what Bruce wrote this weekend about software patents. I'm very glad he's looking deep into recent events surrounding this issue; I haven't had the time to do so myself because I've been so busy with the launch of my full-time work at Conservancy this fall.

    Despite my current focus on getting Conservancy ramped up with staff, so it can do more of its work, I nevertheless still remain frightfully concerned about the impact of software patents on the future of software freedom, and I support any activities that seek to make sure that software patent threats do not stand in the way of software freedom. Bruce and I have always agreed about this issue: software patents should end, and while individuals with limited means can't easily make that happen themselves, we must all work to raise awareness and public opinion against all patenting of software.

    Specifically, I'm really glad that Bruce has mentioned the issue of lobbying against software patents. Post-Bilski, it's become obvious that software patents can only be ended with legislative change. In the USA, sadly, the only way to do this effectively is through lobbying. Therefore, I've called on businesses (such as Google and Red Hat), that have been targets of software patent litigation, to fund lobbying efforts to end software patents; such funding would simultaneously help themselves as well as software freedom. Unfortunately, as far as I'm aware, no companies have stepped forward to fund such an effort, and they instead seem to spend their patent-related resources on getting more software patents of their own. Meanwhile, individual, not-for-profit Free Software developers simply don't have the resources to do this lobbying work ourselves.

    Nevertheless, there are still a few things individual developers can do in the meantime against software patents. I wrote a complete list of suggestions after Bilski; I just reread it and confirmed all of the suggestions listed there are still useful.

    Posted on Monday 15 November 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

October

  • 2010-10-20: Open Letter: Adopt RMS' CAA/CLA Suggested Texts

    I was glad to read today that Sam Varghese is reporting that Mark Shuttleworth doesn't want Canonical, Ltd. to engage in business models that abuse proprietary relicensing powers in a negative way. I wrote below a brief open letter to Mark for him to read when he returns from UDS (since the article said he would handle this in detail upon his return from there). It's fortunate that there is a simple test to see whether Mark's words are a genuine commitment to change by Canonical, Ltd. There's a simple action he can take to show that he means to follow through on his statement:

    Dear Mark,

    I was glad to read today that you have no plans to abuse the powers of proprietary relicensing that Canonical, Ltd.'s CAAs/CLAs give you. As you are hopefully already aware, Richard Stallman published a few suggested texts to use if you are attempting to only consider benign business models as part of your CAA/CLA process. Since you've committed to that, I would expect you'd be ready, willing and able to adopt those immediately for Canonical, Ltd.'s CLAs and CAAs. When will you do so?

    Thanks very much for taking my criticisms seriously and I look forward to seeing this change soon in Canonical, Ltd.'s CAAs and/or CLAs.

    Posted on Wednesday 20 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-10-19: Does “Open Core” Actually Differ from Proprietary Relicensing?

    I've been criticized — quite a bit this week, but before that too — for using the term “Open Core” as a shortcut for the phrase “proprietary relicensing0 that harms software freedom”. Meanwhile, Matt Aslett points to Andrew Lampitt's “Open Core” definition as canonical. I admit I wasn't aware of Lampitt's definition before, but I dutifully read it when Aslett linked to it, and I quote it here:

    [Lampitt] propose[s] the following for the Open Core Licensing business model:
    • core is GPL: if you embed the GPL in closed source, you pay a fee
    • technical support of GPL product may be offered for a fee (up for debate as to whether it must be offered)
    • annual commercial subscription includes: indemnity, technical support, and additional features and/or platform support. (Additional commercial features having viewable or closed source, becoming GPL after timebomb period are both up for debate).
    • professional services and training are for a fee.

    The amusing fact about this definition is that half the things on it (i.e., technical support, services/training, indemnity, tech support) can be part of any FLOSS business model and do not require the offering company to hold the exclusive right of proprietary relicensing. Meanwhile, the rest of the items on the list are definitely part of what was traditionally called the “proprietary relicensing business“ dating back to the late 1990s: namely, customers can buy their way out of GPL obligations, and a single company can exclusively offer proprietary add-ons. For example, this is precisely what Ximian did with their Microsoft Exchange Connector for Evolution, which predated the first use of the term “Open Core” by nearly a decade. Cygnus also used this model for Cygwin, which has unfortunately continued at Red Hat (although Richard Fontana of Red Hat wants to end the copyright assignment of Cygwin).

    In my opinion, mass terminology confusion exists on this point simply because there is a spectrum1 of behaviors that are all under the banner of “proprietary relicensing”. Moreover, these behaviors get progressively worse for software freedom as you continue down the spectrum. Nearly the entire spectrum consists of activities that are harmful to software freedom (to varying degrees), but the spectrum does begin with a practice that is barely legitimate.

    That practice is one that RMS himself began calling barely legitimate in the early 2000s. RMS specifically and carefully coined his own term for it: selling exceptions to the GPL. This practice is a form of proprietary relicensing that never permits the seller to create their own proprietary fork of the code and always releases all improvements done by the sole proprietary licensee itself to the general public. If this practice is barely legitimate, it stands to reason that anything that goes even just a little bit further crosses the line into illegitimacy.

    From that perspective, I view this spectrum of proprietary relicensing thusly: on the narrow benign end of the spectrum we find what RMS calls “exception selling” and on the other end, we find GPL'd demoware that is merely functional enough to convince customers to call up the company to ask to buy more. Everything beyond “selling exceptions” is harmful to software freedom, getting progressively more harmful as you move further down the spectrum. Also, notwithstanding Lampitt's purportedly canonical definition, “Open Core” doesn't really have a well-defined meaning. The best we can say is that “Open Core” must be something beyond “selling exceptions” and therefore lives somewhere outside of the benign areas of “proprietary relicensing”. So, from my point of view, it's not a question of whether or not “Open Core” is a benign use of GPL: it clearly isn't. The only question to be asked is: how bad is it for software freedom, a little or a lot? Furthermore, I don't really care that much how far a company gets into “proprietary relicensing”, because I believe it's already likely to be harmful to software freedom. Thus, focusing debate only on how bad is it? seems to be missing the primary point: we should shun nearly all proprietary relicensing models entirely.

    Furthermore, I believe that once a company starts down the path of this proprietary relicensing spectrum, it becomes a slippery slope. I have never seen the benign “exception selling” last for very long in practice. Perhaps a truly ethical company might stick to the principle, and would thus use an additional promise-back as RMS suggests to prove to the community they will never veer from it. RMS' suggested texts have only been available for less than a month, so more time is needed to see if they are actually adopted. Of course, I call on any company asking for a CLA and/or CAA to adopt RMS' texts, and I will laud any company that does.

    But, pragmatically, I admit I'll be (pleasantly) surprised if most CAA/CLA-requesting companies come forward to adopt RMS' suggested texts. We have a long historical list of examples of for-profit corporate CAAs and CLAs being used for more nefarious purposes than selling exceptions, even when that wasn't the original intent. For example2, when MySQL AB switched to GPL, they started benignly selling exceptions, but, by the end of their reign, part of their marketing was telling potential “customers” that they'd violated the GPL even when they hadn't — merely to manipulate the customer into buying a proprietary license. Ximian initially had no plans to make proprietary add-ons to Evolution, but nevertheless made use of their copyright assignment to make the Microsoft Exchange Connector. Sourceforge, Inc. (named VA Linux at the time) even went so far as to demand copyright assignments on the Sourceforge code after the fact (writing out changes by developers who refused) so they could move to an “Open Core”-style business model. (Ultimately, Sourceforge.net became merely demoware for a proprietary product.)

    In short, handing over copyright assignment to a company gives that company a lot of power, and it's naïve to believe a for-profit company won't use every ounce of that power to make a buck when it's not turning a profit otherwise. Non-profit assignees, for their part, mitigate the situation by making firm promises back regarding what will and won't be done with the code, and also (usually) have well-defined non-profit missions that prevent them from moving in troubling directions. For-profit companies usually have neither.

    Without strong assurances in the agreement, like the ones RMS suggests, individual developers simply must assume the worst when assigning copyright and/or giving a broad CLA to a for-profit company. Whether or not we can ever determine what is or is not “Open Core”, history shows us that for-profit companies with exclusive proprietary relicensing power eventually move away from the (extremely narrow) benign end of the proprietary relicensing spectrum.


    0Most pundits will prefer the term “dual licensing” for what I call “proprietary relicensing”. I urge avoidance of the term “dual licensing”. “Dual licensing” also has a completely orthogonal denotative usage: a Free Software license that has two branches, like jQuery's license of (GPLv2-or-later|MIT). That terminology usage was quite common before even the first “proprietary relicensing” business model was dreamed of, and therefore it only creates confusion to overload that term further.

    1BTW, Lampitt does deserve some credit here. His August 2008 post hints at this spectrum idea of proprietary relicensing models. His post doesn't consider the software-freedom implications of the various types, but it seems to me the post was ahead of its time two years ago, and I wish I'd seen it sooner.

    2I give here just a few of the many examples, ones which actually name names. Although he doesn't name names, Michael Meeks, in his Some Thoughts on Copyright Assignment, gives quite a good laundry list of all the software-freedom-unfriendly things that have historically happened in situations where CAA/CLAs without adequate promises back were used.

    Posted on Tuesday 19 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-10-17: Canonical, Ltd. Finally On Record: Seeking Open Core

    I've written before about my deep skepticism regarding the true motives of Canonical, Ltd.'s advocacy of, and demand for, for-profit corporate copyright assignment without promises to adhere to copyleft. I've often asked Canonical employees, including Jono Bacon, Amanda Brock, Jane Silber, Mark Shuttleworth himself, and (in the comments of this very blog post) Matt Asay, to explain (a) why exactly they demand copyright assignment on their projects, rather than merely having contributors agree to the GNU GPL formally (like projects such as Linux do), and (b) why, having received a contributor's copyright assignment, Canonical, Ltd. refuses to promise to keep the software copylefted and never proprietarize it (FSF, for example, has always made that promise in its assignments). When I ask these questions of Canonical, Ltd. employees, they invariably artfully change the subject.

    I've actually been asking these questions for at least a year and a half, but I really began to get worried earlier this year when Mark Shuttleworth falsely claimed that Canonical, Ltd.'s copyright assignment was no different than the FSF's copyright assignment. That event made it clear to me that there was a job of salesmanship going on: Canonical, Ltd. was trying to sell something to the community that the community doesn't want or need, and trying to reuse the good name of other people and organizations to do it.

    Since that interview in February, Canonical, Ltd. has launched a manipulatively named initiative called “Project Harmony”. They market it as a “summit” of sorts, purported to have no predetermined agenda other than to discuss the issue of contributor agreements and copyright assignment and to come to a community consensus on it. Their goal, however, was merely to get community members to lend their good names to the process. Indeed, Canonical, Ltd. has often attempted to use the involvement of good people to make it seem as if Canonical, Ltd.'s agenda is endorsed by many. In fact, FSF recently distanced itself from the process because of Canonical, Ltd.'s actions in this regard. Simon Phipps had similarly distanced himself before that.

    Nevertheless, it seems Canonical, Ltd. now believes that they've succeeded in their sales job, because they've now confessed their true motive. In an IRC Q&A session last Thursday0, Shuttleworth finally admits that his goal is to increase the amount of “Open Core” activity. Specifically, Shuttleworth says at 15:21 (and following):

    [C]ompare Qt and Gtk, Qt has a contribution agreement, Gtk doesn't, for a while, back in the bubble, Sun, Red Hat, Ximian and many other companies threw money at Gtk and it grew and improved very quickly but, then they lost interest, and it has stagnated. Qt was owned by Trolltech it was open source (GPL) but because of the contribution agreement they had many options including proprietary licensing, which is just fine with me alongside the GPL and later, because they owned Qt completely, they were an attractive acquisition for Nokia, All in all, the Qt ecosystem has benefitted and the Gtk ecosystem hasn't.

    It takes some careful analysis to parse what's going on here. First of all, Shuttleworth is glossing over a lot of complicated Qt history. Qt started with a non-FaiF license; it later moved to the QPL, a GPL-incompatible Free Software license. After a few years of this oddball, license-proliferation-style software freedom license, Trolltech stumbled upon the “Open Core” model (likely inspired by MySQL AB), and switched to GPL. When Nokia bought Trolltech, Nokia itself discovered that full-on “Open Core” was bad for the codebase, and (as I heralded at the time) relicensed the codebase to LGPL (the same license used by Gtk). A few months after that, Nokia abandoned copyright assignment completely for Qt as well! (I.e., Shuttleworth is just wrong on this point entirely.) In fact, Shuttleworth, rather than supporting his pro-Open-Core argument, actually gave the prime example of Nokia/Trolltech's lesson learned: “don't do an Open-Core-style contributor agreement, you'll regret it”. (RMS also recently published a good essay on this subject.)

    Furthermore, Shuttleworth completely ignores the considerable historical angst in communities that rely on Qt, which often had difficulty getting bugfixes upstream and faced other such challenges when dealing with a for-profit-controlled “Open Core” library. (These were, in fact, among the reasons Nokia gave in May 2009 for the change in policy.) Indeed, if the proprietary relicensing business is what made Trolltech such a lucrative acquisition for Nokia, why did Nokia abandon the business model entirely within four months of the acquisition?

    Admittedly, Shuttleworth's “lucrative acquisition” point has some validity. Namely, “Open Core” makes wealthy, profit-driven types (e.g., VCs) drool. Meanwhile, people like me, Simon Phipps, NASA's Chris Kemp, John Mark Walker, Tarus Balog and many others are either very skeptical about “Open Core”, or dead-set against it. The reason it's meeting with so much opposition is that “Open Core” is a VC-friendly way to control all the copyright “assets” while pretending to actually have the goal of building an Open Source community. The real goal of “Open Core”, of course, is a bait-and-switch move. (Details on that are beyond the scope of this post and well covered in the links I've given.)

    As to Shuttleworth's argument of Gtk stagnation, after my trip this past summer to GUADEC, I'm quite convinced that the GNOME community is extremely healthy. Indeed, as Dave Neary's GNOME Census shows, the GNOME codebases are well-contributed to by various corporate entities and (more importantly) volunteers. For-profit corporate folks like Shuttleworth and his executives tend not to like communities where a non-profit (in this case, the GNOME Foundation) shepherds a project and keeps the multiple for-profit interests at bay. In fact, he dislikes this so much that when GNOME was recently documenting its long-standing copyright policies, he sent Silber to the GNOME Advisory Board (the first and only time Canonical, Ltd. sent such a high-profile person to the Advisory Board) to argue against the long-standing GNOME community preference for no copyright assignment on its projects1. Silber's primary argument was that it was unreasonable for individual contributors to even ask to keep their own copyrights, since Canonical, Ltd. puts in the bulk of the work on their projects that require copyright assignment. Her argument was, in other words, an anti-software-freedom equality argument: a for-profit company is more valuable to the community than the individual contributor. Fortunately, the GNOME Foundation didn't fall for this, and it continued its work with Intel to get the Clutter codebase free of copyright assignment (work that has since succeeded). It's also particularly ironic that, a few months later, Neary showed that the very company making that argument contributes 22 percentage points less to the GNOME codebase than the volunteers Silber once argued don't contribute enough to warrant keeping their copyrights.

    So, why have Shuttleworth and his staff been on a year-long campaign to convince everyone to embrace “Open Core” and give up all their rights that copyleft provides? Well, in the same IRC log (at 15:15) I quoted above, Shuttleworth admits that he has some work left to do to make Canonical, Ltd. profitable. And therein lies the connection: Shuttleworth admits Canonical, Ltd.'s profitability is a major goal (which is probably obvious). Then, in his next answer, he explains at great length how lucrative and important “Open Core” is. We should accept “Open Core”, Shuttleworth argues, merely because it's so important that Canonical, Ltd. be profitable.

    Shuttleworth's argument reminds me of a story that Michael Moore (who famously made the documentary Roger and Me, and has since made other documentaries) told at a book-signing in the mid-1990s. Moore said (I'm paraphrasing from memory here, BTW):

    Inevitably, I end up on planes next to some corporate executive. They look at me a few times, and then say: Hey, I know you, you're Roger Moore [audience laughs]. What I want to know, is what the hell have you got against profit? What's wrong with profit, anyway? The answer I give is simple: There's nothing wrong with profit at all. The question I'm raising is: What lengths are acceptable to achieve profit? We all agree that we can't exploit child labor and other such things, even if that helps profitability. Yet, once upon a time, these sorts of horrible policies were acceptable for corporations. So, my point is that we still need more changes to balance the push for profit with what's right for workers.

    I quote this at length to make it abundantly clear: I'm not opposed to Canonical, Ltd. making a profit by supporting software freedom. I'm glad that Shuttleworth has contributed a non-trivial part of his personal wealth to start a company that employs many excellent FLOSS developers (and even sometimes lets those developers work on upstream projects). But the question really is: Are the values of software freedom worth giving up merely to make Canonical, Ltd. profitable? Should we just accept proprietary network services like UbuntuOne, integrated into nearly every menu of the desktop, as reasonable merely because they might help Canonical, Ltd. make a few bucks? Do we think we should abandon copyleft's assurances of fair treatment to all, and hand over full proprietarization powers on GPL'd software to for-profit companies, merely so they can employ a few FLOSS developers to work primarily on non-upstream projects?

    I don't think so. I'm often critical of Red Hat, but one thing they do get right in this regard is a healthy encouragement of their developers to start, contribute to, and maintain upstream projects that live in the community rather than inside Red Hat. Red Hat currently allows its engineers to keep their own copyrights and license them under whatever license the upstream project uses, binding them to the terms of the copyleft licenses (when the upstream project is copylefted). For projects generated inside Red Hat, after experimenting with the sorts of CLAs that I'm complaining about, Red Hat learned from the mistake and corrected it (although, unfortunately, it hasn't universally corrected the problem). For the most part, Red Hat encourages outside contributors to contribute under their own copyright, under the outbound license Red Hat chose for its projects (some of which are also copylefted). Red Hat's newer policies have some flaws (details of which are beyond the scope of this post), but they're orders of magnitude better than the copyright assignment intimidation tactics that other companies, like Canonical, Ltd., now employ.

    So, don't let a friendly name like “Harmony” fool you. Our community has some key infrastructure, such as the copyleft itself, that actually keeps us harmonious. Contributor agreements aren't created equal, and therefore we should oppose the idea that contributor and assignment agreements should be set to the lowest common denominator to enable a for-profit corporate land-grab that Shuttleworth and other “Open Core” proponents seek. I also strongly advise the organizations and individuals who are assisting Canonical, Ltd. in this goal to stop immediately, particularly now that Shuttleworth has announced his “Open Core” plans.


    Update (2010-10-18): In comments, many people have, quite correctly, argued that I have not proved that Canonical, Ltd. has plans to go “Open Core” with their copyright-assigned copyleft products. Such comments are correct; I intended this article to be an opinion piece, not a logical proof. I further agree that without absolute proof, the title of this blog post is an exaggeration. (I didn't change it, as that seemed disingenuous after the fact).

    Anyway, to be clear, the only thing the chain of events described above prove is that Canonical, Ltd. wants “Open Core” as a possibility for the future. That part is trivially true: if they didn't want to reserve the possibility, they'd simply make a promise-back to keep the software as Free Software in their assignment. The only reason not to make an FSF-style promise-back is that you want to reserve the possibility of proprietary relicensing.

    Meanwhile, even though I cannot construct a logical proof of it, I still believe the only possible explanation for this 1+ year marketing campaign described above is that Canonical, Ltd. is moving toward “Open Core” for those projects on which they are the sole copyright holder. I have asked others to offer alternative explanations of why Canonical, Ltd. is carrying out this campaign: I agree that there could exist another logical explanation other than the one I've presented. If someone can come up with one, then I would be happy to link to it here.

    Finally, if Canonical, Ltd. comes out with a statement that they'll switch to using FSF's promise-back in their assignments, I will be very happy to admit I was wrong. The outcome I want is for individual developers to be treated right by corporations in control of particular codebases; I would much rather that happen than be correct in my opinions.


    0I originally credited OMG Ubuntu as publishing Shuttleworth's comments as an interview. Their reformatting of his comments temporarily confused me, and I thought they'd done an interview. Thanks to @gotunandan who pointed this out.

    1Ironically, the debate had nothing to do with a Canonical, Ltd. codebase, since their contributions amount to so little (1%) of the GNOME codebase anyway. The debate was about the Clutter/Intel situation, which has since been resolved.


    Responses Not In the Identica Thread:

    • Alex Hudson's blog post
    • Discussion on Hacker News
    • LWN comments
    • Matt Aslett's response and my response to him
    • Ingolf Schaefer's blog post, which only allows comments with a Google Account, so I comment below instead (to be clear, I'm not criticizing Ingolf's choice of Google-account-to-comment, especially since I make everyone who wants to comment here sign up for identi.ca ;):

      Ingolf, you noted that you'd rather I not try to read between the lines to deduce that proprietary relicensing and/or “Open Core” is where Canonical, Ltd.'s marketing is leading. I disagree; I think it's useful to consider what seems a likely end-outcome here. My primary goal is to draw attention to it now in hopes of preventing it from happening. My best possible outcome is that I get proved wrong, and Canonical makes a promise-back in their assignment and/or CLA.

      Meanwhile, I don't think they can go “Open Core” and/or pursue proprietary relicensing for all of Ubuntu, as you are saying. They aren't the sole copyright holder of most of Ubuntu. The places where they can pursue these options are Launchpad, pbuilder, upstart, and the other projects that require a CLA and/or assignment.

      I don't know for sure that they'll do this, as I say above, but I can deduce no other explanation. As I keep saying, if someone else has another possible explanation for the Canonical, Ltd. behavior that I list above, I'm happy to link to it here. I can't see any other reason; they'd surely have made an FSF-style promise-back in their CLA by now if they didn't want to hold proprietarization open as a possibility.

    Posted on Sunday 17 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-10-04: Conservancy's First Blog Post

    [ Crossposted from Conservancy's blog. ]

    As can be seen in today's announcement, today is my first day as full-time Executive Director at the Software Freedom Conservancy. For four years, I have worked part-time on nights, weekends, and lunch breaks to keep Conservancy running and to implement and administer the services that Conservancy provides to its member projects. It's actually quite a relief to now have full-time attention available to carry out this important work.

    From the start, one of my goals with Conservancy has been to run the non-profit organization as transparently as possible. At times, I've found that when time is limited, keeping the public informed about all your work is often the first item to fall too far down on the action item list. Now that Conservancy is my primary, daily focus, I hope to increase its transparency as much as possible.

    Specifically, I plan to keep a regular blog about activities of the Conservancy. I've found that a public blog is a particularly convenient way to report to the public in a non-onerous way about the activities of an organization. Indeed, we usually ask those developers whose work is funded through Conservancy to keep a blog about their activities, so that the project's community and the public at large can get regular updates about the work. I should hold myself to no less a standard!

    I encourage everyone to subscribe to the full Conservancy site RSS feed, where you'll receive both news items and blog posts from the Conservancy. There are also separate feeds available for just news and just blog posts. Also, if you're a subscriber to my personal blog, I will cross-post these blog posts there, although my posts on Conservancy's blog will certainly be a proper subset of my entire personal blog.

    Posted on Monday 04 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2010-09-11: Two Thank-Yous

    I'm well known for being critical when necessary about what happens in the software freedom community, but occasionally, there's nothing to do but thank someone, particularly when they've done something I asked for. :)

    First, I'd like to thank Matthew Garrett for engaging in some GPL enforcement (as covered on lwn.net). He's taking an interesting tack of filing a complaint with US Customs. I've thought about this method in the past, but never really felt I wanted to go that route (mainly because I'm more familiar with the traditional GPL enforcement processes). However, it's really important that we try lots of different strategies for GPL enforcement; the path to success often involves many methods in parallel. It looks like Matthew already got the attention of the violator. In the end, every GPL enforcement strategy is primarily about getting the violator's attention so that they take the issue seriously and come into compliance with the license.

    I've written before about how GPL enforcement can be a lonely place, and when I see someone get serious about doing some — as Matthew has in the last year or so — it makes GPL enforcement a lot less lonely. I still think I can count on my hands all the people active regularly in GPL enforcement efforts, but I am glad to see that's changing. The license stands for a principle, and we should defend it, despite the great lengths that corporate powers in the software freedom world go to in trying to stop GPL enforcement.

    Secondly, I need to thank my colleague Chris DiBona. Two years ago, I gave him quite a hard time about the fact that Google prohibited hosting of AGPLv3'd projects on its FLOSS Project Hosting site. The interesting part of our debate was that Chris argued that license proliferation was the reason to prohibit AGPLv3. I argued at the time that Google simply opposed AGPLv3 because many parts of Google's business model rely on the fact that the GPL behaves in practice somewhat like permissive licenses when deployed in a web services environment.

    Honestly, I never had definitive proof of Google's “real reasons” for holding the policy it did for two years, but it doesn't matter now, because yesterday Chris announced that Google Code Hosting now accepts AGPLv3'd projects0. I really appreciate Chris' friendly words on AGPLv3; he noted that he didn't like turning away projects under licenses that serve a truly new function, like the AGPL.

    Google will now accept projects under any license that is on OSI's approved list. I think this is a reasonable outcome. I firmly believe that acceptable license lists must be the purview of not-for-profit organizations, not for-profit ones. Personally, I tend to avoid and distrust any license that fails to appear on both OSI's list and the FSF Free Software License List. While I obviously favor the FSF list myself (having helped originate it), I generally want to see a license on both lists before I'm ready to say for sure there are no worries about it.

    There are two other entities that maintain license lists, namely the Debian Project and Red Hat's Fedora Project. I wouldn't say that I find Debian's list definitive, mainly because, despite Debian's generally democratic slant, the ftp-masters hold a bit too much power in interpreting the DFSG.

    As for Fedora, that's ultimately a project controlled by a for-profit corporation (Red Hat), and therefore I have some trepidation about trusting their list, just as I had concerns that Google attempted to set licensing policy by defining an acceptable license list. As it stands at the moment, I trust Fedora's list because I know that Spot and Fontana currently have the ultimate say on what does or does not go onto Fedora's list. Nevertheless, Red Hat is ultimately in control of Fedora, so I think its license list can't be relied on indefinitely (e.g., in case Spot and/or Fontana ever leave Red Hat at some point).

    Anyway, I think the best outcome for the community is for the intersection of the OSI's list and the FSF's list (i.e., only licenses that appear on both) to be considered the accepted list of licenses. While I often disagree with the OSI, I think it's in the best interest of the community to require that two distinct non-profits with different missions both approve a license before it's considered acceptable. (I suppose I'd have a different view if OSI had not accepted the AGPLv3, though. ;)
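
    To make that “both lists” idea concrete, here's a minimal sketch in Python (with purely hypothetical, abbreviated list contents; the real OSI and FSF lists are much longer) of the check I have in mind: a license counts as acceptable only if it appears on both lists.

        # Minimal sketch: a license is acceptable only if BOTH the OSI and the
        # FSF have approved it. The list contents below are illustrative
        # placeholders, not the actual lists maintained by either organization.
        OSI_APPROVED = {"GPL-3.0-or-later", "AGPL-3.0-or-later", "Apache-2.0", "MIT"}
        FSF_FREE = {"GPL-3.0-or-later", "AGPL-3.0-or-later", "Apache-2.0", "MIT"}

        def acceptable_licenses(osi, fsf):
            """Return only the licenses that appear on both lists."""
            return osi & fsf  # set intersection

        if __name__ == "__main__":
            for name in sorted(acceptable_licenses(OSI_APPROVED, FSF_FREE)):
                print(name)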


    0I must point out that Chris has an error in his blog post: namely, FSF's code hosting site, Savannah, accepts not just GPL'd projects, but any project whose license is listed as “GPL-Compatible” on FSF's Free Software License List.

    Posted on Saturday 11 September 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

August

  • 2010-08-27: The Saga of Sun RPC

    I first became aware of the Sun RPC license in mid-2001, but my email archives from the time indicate the issue predated my involvement with it; it'd been an issue of consideration since 1994. I later had my first large email thread “free-for-all” on the issue in April 2002, which was the first of too many that I'd have before it was all done. In December 2002, the Debian bug was filed, and then it became a very public debate. Late last week, it was finally resolved. It now ranks as the longest standing Free Software licensing problem of my career. A cast of dozens deserve credit for getting it resolved.

    Tom “spot” Callaway does a good job summarizing the recent occurrences on this issue (and by recent, I mean since 2005 — it's been going long enough that five years ago is “recent”), and its final resolution. So, I won't cover that recent history, but I encourage people to read Spot's summary. Simon Phipps, who worked on this issue during his time as the Chief Open Source Officer of Sun, also wrote about his work on the issue. For my part, I'll try to cover the “middle” part of the story from 2001-2005.

    So, the funny thing about this license is everyone knew it was Sun's intention to make it Free Software. The code is so old, it dates back to a time when the drafting of Free Software licenses wasn't well understood (old-schoolers will, for example, remember the annoying advertising clause in early BSD licenses). Thus, by our modern standards, the Sun RPC license does appear on its face as trivially non-Free, but in its historical context, the intent was actually clear, in my opinion.

    Nevertheless, by 2002, we knew how to look at licenses objectively and critically, and it was clear to many people that the license had problems. Competing legal theories existed, but the concerns of Debian were enough to get everyone moving toward a solution.

    For my part, I checked in regularly during 2002-2004 with Danese Cooper (who was, effectively, Simon Phipps' predecessor at Sun), until I was practically begging her to pay attention to the issue. While I could frequently get verbal assurances from Danese and other Sun officials that it was their clear intention that glibc be permitted to include the code under the LGPL, I could never get something in writing. I had a hundred other things to worry about, and eventually, I stopped worrying about it. I remember thinking at the time: well, I have notes on all these calls and discussions I've had with Sun people about the license. Worst case scenario: I'll have to testify to this when Sun sues some Free Software project, and there will be a good estoppel defense.

    Meanwhile, around early 2004, my friend and colleague at FSF, David “Novalis” Turner took up the cause in earnest. I think he spent a year or two as I did: desperately trying to get others to pay attention and solve the problem. Eventually, he left FSF for other work, and others took up the cause, including Brett Smith (who took over Novalis' FSF job), and, by that time, Spot was also paying attention to this. Both Brett and Spot worked hard to get Simon Phipps' attention on it, which finally happened. But around then began that long waiting period while Oracle was preparing to buy Sun. It stopped almost anything anyone wanted to get done with Sun, so everyone just waited (again). It was around that time that I decided I was pretty sure I never wanted to hear the phrase “Sun RPC license” again in my life.

    Meanwhile, Richard Fontana had gone to work for Red Hat, and his self-proclaimed pathological obsession with Free Software (which can only be rivaled by my own) led him to begin discussing the Sun RPC issue again. He and Spot were also doing their best negotiating with Oracle to get it fixed. They took us the last miles of this marathon, and now the job is done.

    I admit that I feel some shame that, in recent years, I've had such fatigue about this issue — a simple one that should've been solved a decade and a half ago — that, since 2008, I've done nothing but kibitz about the issue when people complained. I also didn't believe that a company as disturbing and anti-Free-Software as Oracle could ever be convinced to change a license to be more FaiF. Spot and Fontana proved me wrong, and I'm glad.

    Thanks to everyone in this great cast of characters that made this ultimately beneficial production of licensing theater possible. I've been honored that I shared the stage in the first few acts, and sorry that I hid backstage for the last few. It was right to keep working on it until the job was done. As Fontana said: Estoppel may be relevant but never enough; software freedom principle[s] should matter as much as legal risk. … [the] standard for FaiF can't simply be ‘good defense to copyright infringement likely’. Thanks to everyone; I'm so glad I no longer have to wait in fear of a subpoena from Oracle in a lawsuit claiming infringement of their Sun RPC copyrights.

    Posted on Friday 27 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-16: Considerations For FLOSS Hackers About Oracle vs. Google

    Many have already opined about the Oracle v. Google lawsuit filed last week. As you might expect, I'm not that worried about what company sues what company for some heap of cash; those sort of for-profit wranglings just aren't what concerns me. Rather, I'm focused on what this event means for the future of software freedom. And, I think even at this early stage of the lawsuit, there are already a few lessons for the Free Software community to learn.

    Avoid Single-Company-Controlled Language Infrastructure

    Fourteen months ago, before the Oracle purchase of Sun, I wrote about the specific danger of language infrastructure developed by a single for-profit patent-holding entity (when such infrastructure is less than 20 years old). In that blog post, I wrote:

    [Some] might argue that with all those patents consolidated [in a single company], patent trolls will have a tough time acquiring patents and attacking FaiF implementations. However, while this can sometimes be temporarily true, one cannot rely on this safety. Java, for example, is in a precarious situation now. Oracle is not a friend to Free Software, and soon will hold all Sun's Java patents — a looming threat to FaiF Java implementations … [A]n Oracle attack on FaiF Java is a possibility.

    I'm sorry that I was right about this, but we should now finally learn the lesson: languages like Java and C# are dangerous. Single companies developed them, and there are live, unexpired patents that can easily be used in a group to attack FaiF implementations. Of course, that doesn't mean other language infrastructures are completely safe from patents, but I believe there is greater relative risk of a system with patent consolidation at a single company.

    It also bears repeating the point I made on Linux Outlaws last July: this doesn't mean the Free Software community shouldn't have FaiF implementations of all languages. In fact, we absolutely should, because we do want developers who are familiar with those languages to bring their software over to GNU/Linux and other Free Software systems.

    However, this lawsuit proves that choosing some languages for newly written Free Software is dangerous and should be avoided, especially when there are safer choices like C, C++, Python, and Perl0. (See my blog post from last year for more on this subject.)

    Never Let Your Company File for Patents on Your Work

    James Gosling is usually pretty cryptic in his non-technical writing, but if you read carefully, it seems to me that Gosling regrets that Oracle now holds his patents on Java. I know developers get nice bonuses if they let their company apply for patents on their work. I also know there's pressure in most large companies to get more patents. We, as developers, must simply refuse this. We invent this stuff, not the suits and the lawyers who want to exploit our work for larger and larger profits. As a community of developers and computer scientists, we must simply refuse to ever let someone patent our work. In a phrase: just say no.

    Even if you like your company today, you never know who will own those software patents later. I'm sure James Gosling originally never considered the idea that a company as revolting as Oracle would have control of everything he's invented for the last two decades. But they do, and there's nothing Gosling can do about what's done with his work and “inventions”. Learn from this example; don't let your company patent your work. Instead, publish online to establish prior art as quickly as possible.

    Google Is Not Merely a Pure Free Software Distributor

    Google has worked hard to cast themselves as innocent, Free-Software-producing victims. That's good PR because it's true, but it's also not telling the whole truth. Google worked hard to make sure Android was completely Apache-2.0 (or even more permissively) licensed (except for Linux, of course). There was already plenty of Java stuff available under the GPL that Google could have used. Sadly, Google was so allergic to GPL for Android/Linux that they even avoided LGPL'd components like uClibc and glibc (in favor of their own permissively-licensed C library based on a BSD version).

    Google's reason for permissive-only licensing for “everything but the kernel” was likely a classic “adoption is more important than software freedom” scenario. Google wants Android/Linux in as many phones as possible, and wants to eliminate any “barrier” to such adoption, even if such a “barrier” would defend software freedom.

    This new lawsuit would be much more interesting if Google had chosen GPL and/or LGPL for Android. In fact, if I fantasize about being empowered to design a binding, non-financial settlement to the lawsuit, the first item on my list would be a relicense of all future Android/Linux systems under GPL and/or LGPL. (Basically, Google would license only enough under LGPL to allow proprietary applications, and license all the rest as GPL, thus yielding the same licensing consequences as GNU/Linux and GNOME). Then, I'd have Oracle explicitly license all its patents under GPL and/or LGPL compatible licenses that would permit Android/Linux to continue unencumbered, but under copyleft. (BTW, Mark Wielaard has a blog post that discusses in more detail the issue of GPL'd/LGPL'd Java implementations and how they relate to this lawsuit.)

    I realize that's never going to happen, but it's an interesting thought experiment. I am of course opposed to software patents, and I certainly oppose companies like Oracle that produce almost all proprietary software. However, I can at least understand the logic of Oracle not wanting its software patents exercised in proprietary software. I think a trade-off whereby all software patents are licensed freely and royalty-free, but only for use in copylefted software, is a reasonable compromise. OTOH, knowing Oracle, they could easily have plans to attack copyleft implementations too. Thus, we must assume they won't accept this reasonable compromise of “royalty-free licensing for copyleft only”. That brings me to my next point of FaiF hackers' concern about this lawsuit.

    Never Trust a Mere Patent Promise; Demand Real Patent Licenses

    I wrote after Bilski that patent promises just aren't enough, and this lawsuit is an example of why. I presume that Oracle's lawyers have looked carefully at the various promises and assurances that Sun made about its Java patents and have concluded Oracle has good arguments for why those promises don't apply to Android. I have no idea what those arguments are, but rarely do lawyers file a lawsuit without very good arguments already prepared. I hope Oracle's lawyers' arguments are wrong and they lose. But, the fact that Oracle even has a credible argument that Android/Linux doesn't already have a patent license shows again that patent promises are just not enough.

    Miguel de Icaza used this opportunity to point out how the Microsoft C# promises are “better” by comparison, in his opinion. But, Brett Smith at FSF already found huge holes in those Microsoft promises that haven't been fixed. In fact, any company making these promises always tries to hide as much nasty stuff as it can, to convince the users that they are safe from patent aggression when they really aren't. That's why the Free Software community must demand simple, clear, and permanent royalty-free patent licenses for all patents any company might hold. We should accept nothing less. As mentioned above, those licenses could perhaps require that a certain Free Software copyright license, such as GPLv3-or-later, be used for any software that gets the advantage of the license. (i.e., I can certainly understand if companies don't want to accidentally grant such patent licenses to their proprietary software competitors).

    Indeed, it's particularly important that the licenses cover all patents and those possibly exercised in future improvements in the software. This lawsuit has clearly shown that even if patent pools exist for some subsets of patents for some subsets of Free Software, patent holders will either use other patents for aggression, or they'll assert patents in the patent pools against Free Software that's not part of the pool. In essence, we must assume that any for-profit company will become a patent troll eventually (they always do), and therefore any cross-licensing pools that don't include every patent possible for any possible Free Software will always be inadequate. So, the answer is simple: trust no software-patent-holding company unless they give an explicit GPLv3-compatible license for all their patents.

    We Must End Software Patents

    The failure of the Bilski case to end software patents in the USA means much work lies ahead to end software patents. The End Software Patents Wiki has some good stuff about this case as well as lots of other information related to software patents. There are now heavily funded for-profit corporate efforts that seek to convince the Free Software community that patent reform is enough. But, it's not! For example, if you see presenters at FLOSS conferences claiming to have solutions to patent problems, ask them if their organization opposes all software patents, and ask them if their funders license all their patents freely for GPLv3-or-later software implementations. If you hear the wrong answers, then their motives and mission are suspect.

    Finally, I'd like to note that, in some sense, these patent battles help Free Software, because it may actually teach companies that the expense of having software patents is not worth the risk of patent lawsuits. It's possible we've reached a moment in history where it'd be better if the Software Patent Cold War becomes a full Software Patent Nuclear War. Software freedom can survive that “nuclear winter”. I sometimes think that in the Free Software community, we may find ourselves left with just two choices: fifty more years of Patent Cold War (with lots of skirmishes like this one), or ten years of full-on patent war (after which companies would beg Congress to end software patents). Both outcomes are horrible until they're resolved, but the latter would reach resolution quicker. I often wonder which one is the better long term for software freedom.

    But, no matter what happens next, the necessary position is: all software patents are bad for software freedom. Any entity that supports anything short of full abolition of software patents is working against software freedom.


    0I originally had PHP listed here, but jwildeboer argued that Zend Technologies, Ltd. might be a problem for PHP in the same way Oracle is for Java and Microsoft for C#. It's true that Zend is a software patent holder and was involved in the development of later PHP versions. I don't think the single-company-controlled software patent risks with PHP are akin to those of Java and C#, since Zend Technologies isn't the only entity involved in PHP's development, but certainly the other languages listed are likely preferable to PHP.

    Posted on Monday 16 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-13: GNOME Copyright Assignment Policy

    Vincent Untz announced and blogged today about the GNOME Copyright Assignment Policy and a longer guidelines document about the GNOME policy. I want to thank both Vincent and Michael Meeks for their work with me on this policy.

    As I noted in my blog last week, GUADEC really reminded me how great the GNOME community is. Therefore, it's with great pride that I was able to assist on this important piece of policy for the GNOME community.

    There are a lot of forces in the corporate side of Free Software right now that are aggressively trying to convince copylefted projects to begin assigning copyright of their code (or otherwise agree to CLAs) to corporations without any promises that the code will remain Free Software. We must resist this pressure: copyleft, when used correctly, is the force that keeps equality in the community, as I've written about before.

    I thank the GNOME Board of Directors for entrusting us to write the policy, and am glad they have adopted it.

    Posted on Friday 13 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-10: May They Make Me Superfluous

    The Linux Foundation announced today their own FLOSS license compliance program, which included the launch of a few software tools under a modified BSD license. They also have offered some training courses for those that want to learn how to comply.

    If this Linux Foundation (LF) program is successful, I may get something I've wished for since the first enforcement I ever worked on back in late 1998: I'd like to never do GPL enforcement again. I admit I talk a lot about GPL enforcement. It's indeed been a major center of my work for twelve years, but I can't say I've ever really liked doing it.

    By contrast, I have been hoping for years that someone would eventually come along and “put me out of the enforcement business”. Someday, I dream of opening up the <[email protected]> folder and having no new violation reports (BTW, those dreams usually become real-life nightmares, as I typically get two new violation reports each week). I also wish for the day that I don't have a backlogged queue of 200 or more GPL violations where no source nor offer for source has been provided. I hate that it takes so much time to resolve violations because of the sheer number that exist.

    I got into GPL enforcement so heavily, frankly, because so few others were doing it. To this day, there are basically three groups even bothering to enforce GPL on behalf of the community: Conservancy (with enforcement efforts led by me), FSF (with enforcement efforts led by Brett Smith), and gpl-violations.org (with enforcement efforts led by Harald Welte). Generally, GPL enforcement has been a relatively lonely world for a long time, mainly because it's boring, tedious and patience-trying work that only the most dedicated (masochistic?) want to spend their time doing.

    There are a dozen very important software-freedom-advancing activities that I'd rather spend my time doing. But as long as people don't respect the freedom of software users and ignore the important protections of copyleft, I have to continue doing GPL enforcement. Any effort like LF's is very welcome, provided that it reduces the number of violations.

    Of course, LF (as GPL educators) and Brett, Harald, and I (as GPL enforcers) will share the biggest obstacle: getting communication going with the actual violators. Fact is, people who know the LF exists or have heard of the GPL are likely to already be in compliance. When I find a new violation, it's nearly always someone who doesn't even know what's going on, and often doesn't even realize what their engineering team put into their firmware. If LF can reach these companies before they end up as a violation report emailed to me, I'll be as glad as can be. But it's a tall order.
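
    To illustrate just how accidental these situations usually are (this is only a sketch under my own assumptions, not a description of anyone's actual process), the first hint that a product contains BusyBox/Linux often comes from nothing more sophisticated than scanning the firmware image for tell-tale strings:

        # Illustrative sketch only: look for printable strings in a firmware
        # image that suggest well-known GPL'd components are embedded in it.
        # Real compliance review requires far more than this, but a violation
        # report frequently starts with a crude scan along these lines.
        import re
        import sys

        MARKERS = (b"BusyBox", b"GNU General Public License", b"Linux version")

        def likely_gpl_strings(path):
            data = open(path, "rb").read()
            hits = []
            for marker in MARKERS:
                # Grab the marker plus up to 60 printable characters of context.
                for match in re.finditer(re.escape(marker) + rb"[ -~]{0,60}", data):
                    hits.append(match.group().decode("ascii", "replace"))
            return hits

        if __name__ == "__main__":
            for hit in likely_gpl_strings(sys.argv[1]):
                print(hit)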

    I do have a few minor criticisms of LF's program. First, I believe the directory of FLOSS Compliance Officers should be made publicly available. I think FLOSS Compliance Officers at companies should make themselves publicly known in the software freedom community so they can be contacted directly. As LF currently has it set up, you have to make a request of the LF to put you in touch with a company's compliance officer.

    Second, I admit I'd have liked to have been actively engaged in LF's process of forming this program. But, I presume that they wanted as much distance as possible from the world's most prolific GPL enforcer, and I can understand that. (I suppose there's a good cop/bad cop metaphor you could make here, but I don't like to think of myself as the GPL police.) I did offer to help LF on this back in April when they announced it at the Linux Collaboration Summit, but they haven't been in touch. Nevertheless, I'll hopefully meet with LF folks on Thursday at LinuxCon about their program. Also, I was invited a few months ago by Martin Michlmayr to join one subset of the project, the SPDX working group, and I've been giving it time whenever I can.

    But, as I said, those are only minor complaints. The program as a whole looks like it might do some good. I hope companies take advantage of it, and more importantly, I hope LF can reach out to the companies who don't know their name yet but have BusyBox/Linux embedded in their products.

    Please, LF, help free me from the grind of GPL enforcement work. I remain committed to enforcing GPL until there are no violations left, but if LF can actually bring about an end to GPL violations sooner rather than later, I'll be much obliged. In a year, if I have an empty queue of GPL violations, I'll call LF's program an unmitigated success and gladly move on to other urgent work to advance software freedom.

    Posted on Tuesday 10 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-09: “Have To” Is a Relative Phrase

    I often hear it. I have to use proprietary software, people say. But usually, that's a justification and an excuse. Saying have to implies that they've been compelled by some external force to do it.

    It raises the question: who's doing the forcing? I don't deny there might be occasions with a certain amount of force. Imagine if you're unemployed, and you've spent months looking for a job. You finally get one, but it generally doesn't have anything to do with software. After working a few weeks, your boss says you have to use a Microsoft Windows computer. Your choices are: use the software or be fired and spend months again looking for a job. In that case, if you told me you have to use proprietary software, I'd easily agree.

    But, imagine people who just have something they want to do, completely unrelated to their job, that is made convenient with proprietary software. In that case, there is no have to. One doesn't have to do a side project. So, it's a choice. The right phrase is wanted to, not have to.

    Saying that you're forced to do something when you really aren't is a failure to take responsibility for your actions. I generally don't think users of proprietary software are primarily to blame for the challenges of software freedom — nearly all the blame lies with those who write, market, and distribute proprietary software. However, I think that software users should be clear about why they are using the software. It's quite rare for someone to be compelled under threat of economic (or other) harm to use proprietary software. Therefore, only rarely is it justifiable to say you have to use proprietary software. In most cases, saying so is just making an excuse.

    As for being forced to develop proprietary software, I think it's even rarer yet. Back in 1991 when I first read the GNU Manifesto, I was moved by RMS' words about the issue:

    “Won't programmers starve?”

    I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.

    But that is the wrong answer because it accepts the questioner's implicit assumption: that without ownership of software, programmers cannot possibly be paid a cent. Supposedly it is all or nothing.

    Well, even if it is all or nothing, RMS was actually right about this: we can do something else. By the mid 1990s, these words had inspired me to make a lifelong plan to make sure I'd never have to write or support proprietary software again. Despite being trained primarily as a computer scientist, I've spent much time building contingency plans to make sure I wouldn't be left with proprietary software support or development as my only marketable skill.

    During the 1990s, it wasn't clear that software freedom would have any success at all. It was a fringe activity; Cygnus was roughly the only for-profit company able to employ people to write Free Software. As such, I of course started learning the GCC codebase, figuring that I'd maybe someday get a job at Cygnus. I also started training as an American Sign Language translator, so I'd have a fallback career if I didn't get a job at Cygnus. Later, I learned how to play poker really well, figuring that in a worst case, I could end up as a professional poker player permanently.

    As it turned out, I've never had to rely fully on these fallback plans, primarily because I was hired by the FSF in 1999. For the last eleven years, I have been able to ensure that I've never had a job that required that I use, support, or write proprietary software and I've worked only on activities that directly advanced software freedom. I admit I was often afraid that someday I might be unable to find a job, and I'd have to support, use or write proprietary software again. Yet, despite that fear, since 1997, I've never even been close to that.

    So, honestly, I just don't believe those who say they have to use proprietary software. Almost always, they chose to use it, because it's more convenient than the other things they'd have to do to avoid it. Or, perhaps, they'd rather write or use proprietary software than write or use no software at all, even when avoiding software entirely was a viable option.

    In summary, I want to be clear that I don't judge people who use proprietary software. I realize not everyone wants to live their life as I do — with cascading fallback plans to avoid using, writing or supporting proprietary software. I nevertheless think it's disingenuous to say you have to use, support or develop proprietary software. It's a choice, and every year that goes by, the choice gets easier, so the statement sounds more like an excuse all the time.

    Posted on Monday 09 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-05: GUADEC 2010: Rate Conferences by Inspiration Value

    Conferences are often ephemeral. I've been going to FLOSS conferences since before there were conferences specifically for the topic. In the 1990s, I'd started attending various USENIX conferences. Many of my career successes can be traced back to attending those conferences and meeting key leaders in the FLOSS world. While I know this is true generally, I can't really recall, without reviewing notes from specific conferences, what happened at them, and how specifically it helped me personally or FLOSS in general. I know they're important to me and to software freedom, but it's tough to connect the dots perfectly without looking in detail at what happened when.

    Indeed, for most of us, after decades, conferences start to run together. At GUADEC this year, I had at least two conversations of the nature: What city was that? What conference was that? Wait, what year was that? And those were just discussions about past GUADECs specifically, let alone other events!

    For my part, after checking my records, I discovered that I hadn't been to a GUADEC since 2003. I've served as FSF's representative on the GNOME Advisory Board straight through from 2001 until today, but nevertheless I hadn't been able to attend GUADECs from 2004-2009. Thus, the 2010 GUADEC was somewhat of a reintroduction for me to the in-person GNOME community.

    With fresh eyes, what I saw had a great impact on me. GNOME seems to be a vibrant, healthy community, with many contributors and incredible diversity in both for-profit and volunteer contributions. GNOME's growth and project diversity have greatly exceeded what I would have expected to see between 2004 and 2010.

    It's not often I go to a conference and am jealous that I can't be more engaged as a developer. I readily admit that I haven't coded regularly in more than a decade (and I often long to do it again). But, I usually talk myself out of it when I remember the difficulty of getting involved and of shepherding work upstream. It's a non-trivial job, and some don't even bother. The challenges are usually enough to keep the enticement at bay.

    Yet, I left GUADEC 2010 and couldn't see a downside in getting involved. I found myself on the flight back wishing I could do more, thinking through the projects I saw and wondering how I might be a coder again. There must be some time on the weekends somewhere, I thought, and while I'm not a GUI programmer, there's plenty of system stuff in GNOME like dbus and systemd; surely I can contribute there.

    Fact is, I've got too many other FLOSS-world responsibilities and I must admit I probably won't contribute code, despite wanting to. What's amazing, though, is that everything about GUADEC made me want to get more involved, and there appeared to be no downside to doing so. There's something special about a conference (and a community) that can inspire that feeling in a hardened, decade-long conference attendee. I interact with a lot of FLOSS communities, and GNOME is probably the most welcoming of all.

    The rest of this post is a random bullet list of cool things that happened at GUADEC that I witnessed/heard/thought about:

    • There was a lot of debate and concern about the change in the GNOME 3 release schedule. I was impressed at the community unity on this topic when I heard a developer say in the hall: The change in GNOME 3 schedule is bad for me, but it's clearly the right thing for GNOME, so I support it. That's representative of the “all for one” and selfless attitude you'll find in the GNOME community.
    • Dave Neary presented a very interesting study on GNOME code contributions, which he was convinced to release under CC-By-SA. The study has caused some rancor in the community about who does or does not contribute to GNOME upstream, but generally speaking, I'm glad the data is out there, and I'm glad Dave's released it under a license that allows people to build on the work and reproduce and/or verify the results. (Dave's also assured me he'll release the tools and config files and all other materials under FaiF licenses as well; I'll put a link here when he has one.) Thing is, the most important and wonderful datum from Dave's study is that a plurality of GNOME contribution comes from volunteers: a full 23%! I think every FLOSS project needs a plurality of volunteer contribution to truly be healthy, and it seems GNOME has it.
    • My talk on GPLv3 was reasonably well received, notwithstanding some friendly kibitzing from Michael Meeks. There had been push back in previous discussions in the GNOME community about GPLv3. It seems now, however, that developers are interested in the license. It's not my goal to force anyone to switch, but I hope that my talk and my participation in this recent LGPLv3 thread on desktop-list might help to encourage a slow-but-sure migration to GPLv3-or-later (for applications) and (GPLv2|LGPLv3-or-later) (for platform libraries) in GNOME. If folks have questions about the idea, I'm always happy to discuss them.
    • I enjoyed rooming with Brad Taylor. We did wonder, though, if the GNOME Travel Committee assigned us rooms by similar first names. (In fact, I was so focused on the fact that we shared the same first name that I had previously typed Brad's last name wrong here!) I liked hearing about his TomBoy online project, Snowy. I'm obviously delighted to see adoption of AGPLv3, the license I helped create. I've promised Brad that I'll try to see if I can convince the org-mode community to use Snowy for its online storage as well.
    • Owen Taylor demoed and spoke about GNOME Shell 3.0. I don't use GUIs much myself, but I can see how GUI-loving users will really enjoy this excellent work.
    • I met Lennart Poettering and discussed with him in detail the systemd project. While I can see how this could be construed as a Canonical/Red Hat fight over the future of what's used for system startup, I still was impressed with Lennart's approach technically, and find it much healthier that his community isn't requiring copyright assignment.
    • Emmanuele Bassi's talk on Clutter was inspiring, as he delivered a heartfelt slide indicating that he'd overcome the copyright assignment requirements and that assignment is no longer required by Intel for Clutter upstream contributions. I like to believe that Vincent Untz's, Michael Meeks' and my work on the (yet to be ratified) GNOME Copyright Assignment Policy was a help to Emmanuele's efforts in this regard. However, it sounds to me like the outcome was primarily due to a lot of personal effort on Emmanuele's part internally to get Intel to DTRT. I thank him for this effort and congratulate him on that success.
    • It was great to finally meet Fabian Scherschel in person. He kindly brought me some gifts from Germany and I brought him some gifts from the USA (we prearranged it; I guess that's the “outlaw” version of gifts). Fab also got some good interviews for the Linux Outlaws podcast that he does with Dan Lynch. It seems that podcast has been heavily linked to in the GNOME community, which is really good for Dan and Fab and for GNOME, I think.
    Sponsored by the GNOME Foundation!

    That's about all the random thoughts and observations I have from GUADEC. The conference was excellent, and I think I simply must re-add it to my “must attend each year” list.

    Finally, I want to thank the GNOME Foundation for sponsoring my travel costs. It allowed me to take some vacation time from my day job to attend and participate in GUADEC.

    Posted on Thursday 05 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-03: More GPL Enforcement Progress

    LWN is reporting a GPL enforcement story that I learned about last week while at GUADEC (excellent conference, BTW, blog post on that later this week). I wasn't sure if it was really of interest to everyone, but since it's hit the press, I figured I'd write a brief post to mention it.

    As many probably know, I'm president of the Software Freedom Conservancy, which is the non-profit organizational home of the BusyBox project. As part of my role at Conservancy, I help BusyBox in its GPL enforcement efforts. Specifically and currently, Conservancy is in litigation against a number of defendants who have violated the GPL and were initially unresponsive to Conservancy's attempts to bring them into compliance with the terms of the license.

    A few months ago, one of those defendants, Westinghouse Digital Electronics, LLC, stopped responding to issues regarding the lawsuit. On Conservancy's behalf, SFLC asked the judge to issue a default judgment against them. A “default” means what it looks like: Conservancy asked to “win by default” since Westinghouse stopped showing up. And, last week, Conservancy was granted a default judgment against Westinghouse, which included an injunction to stop their GPL-non-compliant distributions of BusyBox.

    “Injunctive Relief”, as the lawyers call it, is a really important thing for GPL enforcement. Obviously our primary goal is full compliance with the GPL, which means giving the complete and corresponding source code (C&CS, as I tend to abbreviate it) to all those who received binary distributions of the software. Unfortunately, in some cases (for example, when a company simply won't cooperate in the process despite many efforts to convince them to do so), the only option is to stop further distribution of the violating software. As many parts of the GPL itself point out, it's better to not have software distributed at all, if it's only being distributed as (de facto) proprietary software.

    I'm really glad that a judge has agreed that the GPL is important enough a license to warrant an injunction on out-of-compliance distribution. This is a major step forward in GPL enforcement in the USA. (Please note that Harald Welte had similar successes in Germany in the past, and deserves credit and kudos for being the first in the world to get this done. This success follows in his footsteps.)

    Posted on Tuesday 03 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2010-07-15: At Least Motorola Admits It

    I've written before about the software freedom issues inherent with Android/Linux. Summarized shortly: the software freedom community is fortunate that Google released so much code under Free Software licenses, but since most of the code in the system is Apache-2.0 licensed, we're going to see a lot of proprietarized, non-user-upgradable versions. In fact, there's no Android/Linux system that's fully Free Software yet. (That's why Aaron Williamson and I try to keep the Replicant project going. We've focused on the HTC Dream and the NexusOne, since they are the mobile devices closest to working with only Free Software installed, and because they allow the users to put their own firmware on the device.)

    I was therefore intrigued to discover last night (via mtrausch) a February blog post by Lori Fraleigh of Motorola, wherein Fraleigh clarifies Motorola's opposition to software freedom for its Android/Linux users:

    We [Motorola] understand there is a community of developers interested in … Android system development … For these developers, we highly recommend obtaining either a Google ADP1 developer phone or a Nexus One … At this time, Motorola Android-based handsets are intended for use by consumers.

    I appreciate the fact that Fraleigh and Motorola are honest in their disdain for software developers. Unlike Apple — who tries to hide how developer-unfriendly its mobile platform is — Motorola readily admits that they seek to leave developers as helpless as possible, refusing to share the necessary tools that developers need to upgrade devices and to improve themselves, their community, and their software. Companies like Motorola and Apple both seek to squelch the healthy hacker tendency to make technology better for everyone. Now that I've seen Fraleigh's old blog post, I can at least give Motorola credit for full honesty about these motives.

    I do, however, find the implication of Fraleigh's words revolting. People who buy the devices, in Motorola's view, don't deserve the right to improve their technology. By contrast, I believe that software freedom should be universal and that no one need be a “mere consumer” of technology. I believe that every technology user is a potential developer who might have something to contribute but obviously cannot if that user isn't given the tools to do so. Sadly, it seems, Motorola believes the general public has nothing useful to contribute, so the public shouldn't even be given the chance.

    But, this attitude is always true for proprietary software companies, so there are actually no revelations on that point. Of more interest is how Motorola was able to do this, given that Android/Linux (at least most of it) is Free Software.

    Motorola's ability to take these actions is a consequence of a few licensing issues. First, most of the Android system is under the Apache-2.0 license (or, in some cases, an even more permissive license). These licenses allow Motorola to make proprietary versions of what Google released and sell them without source code or the ability for users to install modified versions. That license decision is lamentable (but expected, given Google's goals for Android).

    The even more lamentable licensing issue here is regarding Linux's license, the GPLv2. Specifically, Fraleigh's post claims:

    The use of open source software, such as the Linux kernel … in a consumer device does not require the handset running such software to be open for re-flashing. We comply with the licenses, including GPLv2.

    I should note that, other than Fraleigh's assertion quoted above, I have no knowledge one way or another if Motorola is compliant with GPLv2 on its Android/Linux phones. I don't own one, have no plans to buy one, and therefore I'm not in receipt of an offer for source regarding the devices. I've also received no reports from anyone regarding possible non-compliance. In fact, I'd love to confirm their compliance: please get in touch if you have a Motorola Android/Linux phone and attempted to install a newly compiled executable of Linux onto your phone.

    I'm specifically interested in the installation issue because GPLv2 requires that any binary distribution of Linux (such as one on telephone hardware) include both the source code itself and the scripts to control compilation and installation of the executable. So, if Motorola wrote any helper programs or other software that installs Linux onto the phones, then such software, under GPLv2, is a required part of the complete and corresponding source code of Linux and must be distributed to each buyer of a Motorola Android/Linux phone.
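
    To make that point concrete, here is a minimal, purely hypothetical sketch (written in Python) of the kind of helper program I have in mind. The device path, file names, and the helper itself are invented for illustration; no particular vendor's tool looks exactly like this. The point is simply that, if a program of this sort is what actually puts Linux onto the phone, then under GPLv2 its source belongs in the C&CS that ships to every buyer.

        #!/usr/bin/env python
        # Hypothetical vendor-style installation helper (illustration only).
        # It writes a newly compiled kernel image onto a phone's boot
        # partition.  If a tool like this is what installs Linux on the
        # device, GPLv2's "scripts used to control compilation and
        # installation of the executable" arguably covers it.
        import shutil
        import subprocess
        import sys

        BOOT_PARTITION = "/dev/block/mmcblk0p8"  # invented device path

        def flash_kernel(image_path):
            """Copy the kernel image onto the boot partition, then flush buffers."""
            with open(image_path, "rb") as src, open(BOOT_PARTITION, "wb") as dst:
                shutil.copyfileobj(src, dst)
            subprocess.check_call(["sync"])

        if __name__ == "__main__":
            if len(sys.argv) != 2:
                sys.exit("usage: %s <kernel-image>" % sys.argv[0])
            flash_kernel(sys.argv[1])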

    If you're surprised by that last paragraph, you're probably not alone. I find that many are confused regarding this GPLv2 nuance. I believe the confusion stems from discussions during the GPLv3 process about this specific requirement. GPLv3 does indeed expand the requirement for the scripts to control compilation and installation of the executable into the concept of Installation Information. Furthermore, GPLv3's Installation Information is much more expansive than merely requiring helper software programs and the like. GPLv3's Installation Information includes any material, such as an authorization key, that is necessary for installation of a modified version onto the device.

    However, the fact that GPLv3 expanded the installation-information requirements does not lessen GPLv2's existing requirement. In fact, in my reading of GPLv2 in comparison to GPLv3, the only effective difference between the two on this point relates to cryptographic device lock-down0. I do admit that under GPLv2, if you give all the required installation scripts, you could still use cryptography to prevent those scripts from functioning without an authorization key. Some vendors do this, and that's precisely why GPLv3 is written the way that it is: we'd observed such lock-down occurring in the field, and identified that behavior as a bug in GPLv2 that is now closed with GPLv3. (Please see the footnote as to why I think I previously erred in that deleted interpretation.)

    However, because of all that hype about GPLv3's new Installation Information definition, many simply forgot that the GPLv2 isn't silent on the issue. In other words, GPLv3's verbosity on the subject led people to minimize the important existing requirements of GPLv2 regarding installation information.

    As regular readers of this blog know, I've spent much of my time for the last 12 years doing GPL enforcement. Quite often, I must remind violators that GPLv2 does indeed require the scripts to control compilation and installation of the executable, and that candidate source code releases missing the scripts remain in violation of GPLv2. I sincerely hope that Android/Linux redistributors haven't forgotten this.

    I have one final and important point to make regarding Motorola's February statement: I've often mentioned that the mobile industry's opposition to GPLv3 and to user-upgradable devices is for their own reasons, and has nothing to do with regulators or other outside entities preventing them from releasing such software. In their blog post, Motorola tells us quite clearly that the community of developers interested in … experimenting with Android system development and re-flashing phones … [should obtain] either a Google ADP1 developer phone or a Nexus One, both of which are intended for these purposes. In other words, Motorola tacitly admits that it's completely legal and reasonable for the community to obtain such telephones, and that, in fact, Google sells such devices. Motorola was not required to put lock-down restrictions in place; rather, they made a choice to prohibit users in this way. On this point, Google chose to treat its users with respect, allowing them to install modified versions. Motorola, by contrast, chose to make Android/Linux as close to Apple's iPhone as they could get away with legally.

    So, the next time a mobile company tries to tell you that they just can't abide by GPLv3 because some third party (the FCC is their frequent scapegoat) prohibits them, you should call them on their FUD. Point out that Google sells phones on the open market that provide all Installation Information that GPLv3 might require. (In other words, even if Linux were GPLv3'd, Android/Linux on the NexusOne and HTC Dream would be a GPLv3-compliant distribution.) Meanwhile, at least one such company, Motorola, has admitted their solitary reason for avoiding GPLv3: the company just doesn't believe users deserve the right to install improved versions of their software. At least they admit their contempt for their customers.

    Update (same day): jwildeboer pointed me to a few posts in the custom ROM and jailbreaking communities about their concerns about Motorola's new offering, the Droid-X. Some commenters there point out that eventually, most phones get jailbroken or otherwise allow user control. However, the key point of the CrunchGear User Manifesto is a clear and good one: no company or person has the right to tell you that you may not do what you like with your own property. This point is akin to, and perhaps essential to, software freedom. It doesn't really matter if you can figure out how to hack a device; what's important is that you not give your money to the company that prohibits such hacking. For goodness' sake, people, why don't we all use ADP1's and NexusOne's and be done with this?

    Updated (2010-07-17): It appears that cryptographic lock down on the Droid-X is confirmed (thanks to rao for the link). I hope everyone will boycott all Motorola devices because of this, especially given that there are Android/Linux devices on the market that aren't locked down in this way.

    BTW, in Motorola's answer to Engadget on this, we see they are again subtly sending FUD that the lock-down is somehow legally required:

    Motorola's primary focus is the security of our end users and protection of their data, while also meeting carrier, partner and legal requirements.
    I agree the carriers and partners probably want such lock down, but I'd like to see their evidence that there is a legal restriction that requires that. They present none.

    Meanwhile, they also state that such cryptographic lock-down is the only way they know how to secure their devices:

    Checking for a valid software configuration is a common practice within the industry to protect the user against potential malicious software threats.
    Pity that Motorola engineers aren't as clueful as the Google and HTC engineers who designed the ADP1 and Nexus One.

    0 Update on 2020-04-09: At the time I wrote the text above, I was writing for a specific organization where I worked at the time, which held this position, and I'd cross-posted the blog post here. I trusted lawyers I spoke to at the time, who insisted that GPLv2's failure to mention cryptography meant that “scripts used to control compilation and installation of the executable” necessarily did not include items mentioned explicitly in GPLv3's Installation Information definition. I believed these lawyers, and shouldn't have. Lawyers I've talked to since making this post have taught me that the view stated above lacks nuance. The issue of cryptographic lock-down in GPLv2, and how to interpret “scripts used to control … installation” in an age of cryptographic lock-down, remains an open question of GPL interpretation.

    Posted on Thursday 15 July 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-07-07: Proprietary Software Licensing Produces No New Value In Society

    I sought out the quote below when Chris Dodd paraphrased it on Meet The Press on 25 April 2010. (I've been, BTW, slowly but surely working on this blog post since that date.) Dodd was quoting Frank Rich, who wrote the following, referring to the USA economic system (and its recent collapse):

    As many have said — though not many politicians in either party — something is fundamentally amiss in a financial culture that thrives on “products” that create nothing and produce nothing except new ways to make bigger bets and stack the deck in favor of the house. “At least in an actual casino, the damage is contained to gamblers,” wrote the financial journalist Roger Lowenstein in The Times Magazine last month. This catastrophe cost the economy eight million jobs.

    I was drawn to this quote for a few reasons. First, as a poker player, I've spent some time thinking about how “empty” the gambling industry is. Nothing is produced; no value for humans is created; it's just an exchange of money for things that don't actually exist. I've been considering that issue regularly since around 2001 (when I started playing poker seriously). I ultimately came to a conclusion not too different from Frank Rich's point: since there is a certain “entertainment value”, and since the damage is contained to those who choose to enter the casino, I'm not categorically against poker or gambling in general, nor do I think they are immoral. However, I also don't believe gambling has any particular important value in society, either. In other words, I don't think people have an inalienable right to gamble, but I also don't think there is any moral reason to prohibit casinos.

    Meanwhile, I've also spent some time applying this idea of creating nothing and producing nothing to the proprietary software industry. Proprietary licenses, in many ways, are actually not all that different from these valueless financial transactions. Initially, there's no problem: someone writes software and is paid for it; that's the way it should be. Creation of new software is an activity that should absolutely be funded: it creates something new and valuable for others. However, proprietary licenses are designed specifically to allow a single act of programming to generate new revenue over and over again. In this aspect, proprietary licensing is akin to selling financial derivatives: the actual valuable transaction is buried well below the non-existent financial construction above it.

    I admit that I'm not a student of economics. In fact, I rarely think of software in terms of economics, because, generally, I don't want economic decisions to drive my morality nor that of our society at large. As such, I don't approach this question with an academic economic slant, but rather, from personal economic experience. Specifically, I learned a simple concept about work when I was young: workers in our society get paid only for the hours that they work. To get paid, you have to do something new. You just can't sit around and have money magically appear in your bank account for hours you didn't work.

    I always approached software with this philosophy. I've often been paid for programming, but I've been paid directly for the hours I spent programming. I never even considered it reasonable to be paid again for programming I did in the past. How is that fair, just, or quite frankly, even necessary? If I get a job building a house, I can't get paid every day someone uses that house. Indeed, even if I built the house, I shouldn't get a royalty paid every time the house is resold to a new owner0. Why should software work any differently? Indeed, there's even an argument that software, since it's so much more trivial to copy than a house, should be available gratis to everyone once it's written the first time.

    I recently heard (for the first time) an old story about a well-known Open Source company (which no longer exists, in case you're wondering). As the company grew larger, the company's owners were annoyed that the company could only bill the clients for the hours they worked. The business was going well, and they even had more work than they could handle because of the unique expertise of their developers. The billable rates covered the cost of the developers' salaries plus a reasonable profit margin. Yet, the company executives wanted more; they wanted to make new money even when everyone was on vacation. In essence, having all the new, well-paid programming work in the world wasn't enough; they wanted the kinds of obscene profits that can only be made from proprietary licensing. Having learned this story, I'm pretty glad the company ceased to exist before they could implement their make money while everyone's on the beach plan. Indeed, the first order of business in implementing the company's new plan was, not surprisingly, developing some new from-scratch code not covered by GPL that could be proprietarized. I'm glad they never had time to execute on that plan.

    I'll just never be fully comfortable with the idea that workers should keep getting paid, again and again, for work they already did once. Work is only valuable if it produces something new that didn't exist in the world before the work started, or solves a problem that had yet to be solved. Proprietary licensing and financial bets on market derivatives have something troubling in common: they can make a profit for someone without requiring that someone do any new work. Any time a business moves away from actually producing something new of value for a real human being, I'll always question whether the business remains legitimate.

    I've thus far ignored one key point in the quote that began this post: “At least in an actual casino, the damage is contained to gamblers”. Thus, for this “valueless work” idea to apply to proprietary licensing, I had to consider (a) whether or not the problem is sufficiently contained, and (b) whether or not software is akin to a mere entertainment activity, as gambling is.

    I've pointed out that I'm not opposed to the gambling industry, because the entertainment value exists and the damage is contained to people who want that particular entertainment. To avoid the stigma associated with gambling, I can also use a less politically charged example, such as the local Chuck E. Cheese, a place I quite enjoyed as a child. One's parent or guardian goes to Chuck E. Cheese to pay for a child's entertainment, and there is some value in that. If you had an issue with Chuck E. Cheese's operation, it'd be easy to just ignore it and not take your children there, finding some other entertainment. So, the question is, does proprietary software work the same way, and is it therefore not too damaging?

    I think the excuse doesn't apply to proprietary software for two reasons. First, the damage is not sufficiently contained, particularly for widely used software. It is, for example, roughly impossible to get a job that doesn't require the employee to use some proprietary software. Imagine if we lived in a society where you weren't allowed to work for a living if you didn't agree to play Blackjack with a certain part of your weekly salary? Of course, this situation is not fully analogous, but the fundamental principle applies: software is ubiquitous enough in industrialized society that it's roughly impossible to avoid encountering it in daily life. Therefore, the proprietary software situation is not adequately contained, and is difficult for individuals to avoid.

    Second, software is not merely a diversion. Our society has changed enough that people cannot work effectively in the society without at least sometimes using software. Therefore, the “entertainment” part of the containment theory does not properly apply1, either. If citizens are de-facto required to use something to live productively, it must have different rules and control structures around it than wholly optional diversions.

    Thus, this line of reasoning gives me yet another reason to oppose proprietary software: proprietary licensing is simply a valueless transaction. It creates a burden on society and gives no benefit, other than a financial one to those granted the monopoly over that particular software program. Unfortunately, there nevertheless remain many who want that level of control, because one fact cannot be denied: the profits are larger.

    For example, Mårten Mickos recently argued in favor of these sorts of large profits. He claims that to benefit massively from Open Source (i.e., to get really rich), business models like “Open Core” are necessary. Mårten's argument, and indeed most pro-Open-Core arguments, rely on the following fundamental assumption: for FLOSS to be legitimate, it must allow for the same level of profits as proprietary software. This assumption, in my view, is faulty. It's always true that you can make bigger profits by ignoring morality. Factories can easily make more money by completely ignoring environmental issues; strip mining is always very profitable, after all. However, as a society, we've decided that the environment is worth protecting, so we have rules that do limit profit maximization because a more important goal is served.

    Software freedom is another principle of this type. While you can make a profit with community-respecting FLOSS business models (such as service, support and freely licensed custom modifications on contract), it's admittedly a smaller profit than can be made with Open Core and proprietary licensing. But that greater profit potential doesn't legitimize such business models, just as it doesn't legitimize strip mining or gambling on financial derivatives.

    Update: Based on some feedback that I got, I felt it was important to make clear that I don't believe this argument alone can create a unified theory that shows why software freedom should be an inalienable right for all software users. This factor of lack of value that proprietary licensing brings to society is just another to consider in a more complete discussion about software freedom.

    Update: Glynn Moody wrote a blog post that quoted from this post extensively and made some interesting comments on it. There's some interesting discussion in the blog comments on his site, perhaps because so many people hate that I only do blog comments on identi.ca (which I do, BTW, because it's the only online forum I'm assured that I'll actually read and respond to).


    0I realize that some argue that you can buy a house, then rent it to others, and evict them if they fail to pay. Some might argue further that owners of software should get this same rental power. The key difference, though, is that the house owner can't really make full use of the house when it's being rented. The owner's right to rent it to others, therefore, is centered around the idea that the owner loses some of their personal ability to use the house while the renters are present. This loss of use never happens with software.

    1You might be wondering, Ok, so if it's pure entertainment software, is it acceptable for it to be proprietary?. I have often said: if all published and deployed software in the world were guaranteed Free Software except for video games, I wouldn't work on the cause of software freedom anymore. Ultimately, I am not particularly concerned about the control structures in our culture that exist for pure entertainment. I suppose there's some line to be drawn between art/culture and pure entertainment/diversion, but considerations on differentiating control structures on that issue are beyond the scope of this blog post.

    Posted on Wednesday 07 July 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

  • 2010-06-30: Post-Bilski Steps for Anti-Software-Patent Advocates

    Lots of people are opining about the USA Supreme Court's ruling in the Bilski case. Yesterday, I participated in an oggcast with the folks at SFLC. In that oggcast, Dan Ravicher explained most of the legal details of Bilski; I could never cover them as well as he did, and I wouldn't even try.

    Anyway, as a non-lawyer, I'm pretty much only concerned with the forward-looking policy questions. However, to briefly look back at how our community responded to this Bilski situation over the last 18 months: it seems similar to what happened while the Eldred case was working its way to the Supreme Court. In the months preceding both Eldred and Bilski, there seemed to be a mass hypnosis that the Supreme Court would actually change copyright law (Eldred) or patent law (Bilski) to make it better for freedom of computer users.

    In both cases, that didn't happen. There was admittedly less of that giddy optimism before Bilski than there was before Eldred, but the ultimate outcome for computer users is roughly the same in both cases: as we were with Eldred, we're left back with the same policy situation we had before Bilski ever started making its way through the various courts. As near as I can tell from what I've learned, the entire “Bilski thing” appears to be a no-op. In short, as before, the Patent Office sometimes can and will deny applications that it determines are only abstract ideas, and the Supreme Court has now confirmed that the Patent Office can reject such an application if the Patent Office knows an abstract idea when it sees it. Nothing has changed regarding most patents that are granted every day, including those that read on software. Those of us that oppose software patents continue to believe that software algorithms are indeed merely abstract ideas and pure mathematics and shouldn't be patentable subject matter. The governmental powers still seem to disagree with us, or, at least, just won't comment on that question.

    Looking forward, my largest concern, from a policy perspective, is that the “patent reform” crowd, who claim to be the allies of the anti-software-patent folks, will use this decision to declare that the system works. Bilski's patent was ultimately denied, but on grounds that leave us no closer to abolishing software patents. Patent reformists will say: Well, invalid patents get denied, leaving space for the valid ones. Those valid ones, they will say, do and should include lots of patents that read on software. But only the really good ideas should be patented, they will insist.

    We must not yield to the patent reformists, particularly at a time like this. (BTW, be sure to read RMS' classic and still relevant essay, Patent Reform Is Not Enough, if you haven't already.)

    Since Bilski has given us no new tools for abolishing software patents, we must redouble efforts with tools we already have to mitigate the threat patents pose to software freedom. Here are a few suggestions, which I think are actually all implementable by the average developer, that will keep up the fight against software patents, or at least mitigate their impact:

    • License your software using the AGPLv3, GPLv3, LGPLv3, or Apache-2.0. Among the copyleft licenses, AGPLv3 and GPLv3 offer the best patent protections; LGPLv3 offers the best among the weak copyleft licenses; Apache License 2.0 offers the best patent protections among the permissive licenses. These are the licenses we should gravitate toward, particularly since multiple companies with software patents are regularly attacking Free Software. At least when such companies contribute code to projects under these licenses, we know those particular codebases will be safe from that particular company's patents. (For a sketch of what adopting one of these licenses looks like in a source file, see the example notice just after this list.)
    • Demand real patent licenses from companies, not mere promises. Patent promises are not enough0. The Free Software community deserves to know it has real patent licenses from companies that hold patents. At the very least, we should demand unilateral patent licenses for all their patents perpetually for all possible copylefted code (i.e., companies should grant, ahead of time, the exact same license that the community would get if the company had contributed to a yet-to-exist GPLv3'd codebase)1. Note further that some companies that claim to be part of the FLOSS community haven't even given the (inadequate-but-better-than-nothing) patent promises. For example, BlackDuck holds a patent related to FLOSS, but, despite saying they would consider at least a patent promise, they have failed to make even that minimal effort.
    • Support organizations/efforts that work to oppose and end software patents. In particular, be sure that the efforts you support are not merely “patent reform” efforts hidden behind anti-software patent rhetoric. Here are a few initiatives that I've recently seen doing work regarding complete abolition of software patents. I suggest you support them (with your time or dollars):
    • Write your legislators. This never hurts. In the USA, it's unlikely we can convince Congress to change patent law, because there are just too many lobbying dollars from those big patent-holding companies (e.g., the same ones that wrote those nasty amicus briefs in Bilski). But, writing your Senators and Congresspeople once a year to remind them of your opposition to patents that read on software simply can't hurt, and may theoretically help a tiny bit. Now would be a good time to do it, since you can mention how the Bilski decision convinced you there's a need for legislative abolition of software patents. Meanwhile, remember, it's even better if you show up at political debates during election season and ask these candidates to oppose software patents!
    • Explain to your colleagues why software patents should be abolished, particularly if you work in computing. Software patent abolition is actually a broad spectrum issue across the computing industry. Only big and powerful companies benefit from software patents. The little guy — even the little guy proprietary developer — is hurt by software patents. Even if you can't convince your colleagues who write proprietary software that they should switch to writing Free Software, you can instead convince them that software patents are bad for them personally and for their chances to succeed in software. Share the film, Patent Absurdity, with them and then discuss the issue with them after they've viewed it. Blog, tweet, dent, and the like about the issue regularly.
    • (added 2010-07-01 on tmarble's suggestion) Avoid products from pro-software-patent companies. This is tough to do, and it's why I didn't call for an all-out boycott. Most companies that make computers are pro-software-patent, so it's actually tough to buy a computer (or even components for one) without buying from a pro-software-patent company. However, avoiding the companies that are most aggressive with patents is easy: starting with avoiding Apple products is a good first step (there are plenty of other reasons to avoid Apple anyway). Microsoft would be next on the list, since they specifically use software patents to attack FLOSS projects. Those are likely the big two to avoid, but always remember that all large companies with proprietary software products actively enforce patents, even if they don't file lawsuits. In other words, go with the little guy if you can; it's more likely to be a patent-free zone.
    • If you have a good idea, publish it and make sure the great idea is well described in code comments and documentation, and that everything is well archived by date. I put this one last on my list, because it's more of a help for the software patent reformists than it is for the software patent abolitionists. Nevertheless, sometimes, patents will get in the way of Free Software, and it will be good if there is strong prior art showing that the idea was already thought of, implemented, and put out into the world before the patent was filed. But, fact is, the “valid” software patents with no prior art are a bigger threat to software freedom. The stronger the patent, the worse the threat, because it's more likely to be innovative, new technology that we want to implement in Free Software.
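
    As promised in the first item of the list above, here is what adopting one of those licenses typically looks like inside a source file. This is only a sketch: the file name and copyright holder are placeholders, and the notice text is the one GPLv3's own “How to Apply These Terms to Your New Programs” section recommends, written here as Python comments. For AGPLv3 the recommended notice is essentially the same with “Affero” added to the license name, and Apache-2.0 ships its own boilerplate in its appendix.

        # frobnicate.py - placeholder file name for this example
        # Copyright (C) 2010  J. Random Hacker (placeholder copyright holder)
        #
        # This program is free software: you can redistribute it and/or modify
        # it under the terms of the GNU General Public License as published by
        # the Free Software Foundation, either version 3 of the License, or
        # (at your option) any later version.
        #
        # This program is distributed in the hope that it will be useful,
        # but WITHOUT ANY WARRANTY; without even the implied warranty of
        # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
        # GNU General Public License for more details.
        #
        # You should have received a copy of the GNU General Public License
        # along with this program.  If not, see <http://www.gnu.org/licenses/>.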

    I sat and thought of what else I could add to this list that individuals can do to help abolish software patents. I was sad that these were the only six things that I could collect, but that's all the more reason to do these six things in earnest. The battle for software freedom for all users is not one we'll win in our lifetimes. It's also possible abolition of software patents will take a generation as well. Those of us that seek this outcome must be prepared for patience and lifelong, diligent work so that the right outcome happens, eventually.


    0 Update: I was asked for a longer write up on software patent licenses as compared to mere “promises”. Unfortunately, I don't have one, so the best I was able to offer was the interview I did on Linux Outlaws, Episode 102, about Microsoft's patent promise. I've also added a TODO to write something up more completely on this particular issue.

    1 I am not leaving my permissively-license-preferring friends out of this issue without careful consideration. Specifically, I just don't think it's practical or even fair to ask companies to license their patents for all permissively-licensed code, since that would be the same as licensing to everyone, including their proprietary software competitors. An ahead-of-time perpetual license to practice the teachings of all the company's patents under AGPLv3 basically makes sure that code that's eternally Free Software will also eternally be patent-licensed from that company, even if the company never contributes to the AGPLv3'd codebase. Anyone trying to make proprietary code that infringed the patent wouldn't have benefit of the license; only Free Software users, distributors and modifiers would have the benefit. If a company supports copyleft generally, then there is no legitimate reason for the company to refuse such a broad license for copyleft distributions and deployments.

    Posted on Wednesday 30 June 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-06-23: New Ground on Terminology Debate?

    (These days,) I generally try to avoid the well-known terminology debates in our community. But, if you hang around this FLOSS world of ours long enough, you just can't avoid occasionally getting into them. I found myself in one this afternoon that spanned three identi.ca threads. I had some new thoughts that I've shared today (and even previously) on my identi.ca microblog. I thought it might be useful to write them up in one place rather than scattered across a series of microblog statements.

    I gained my first new insight into the terminology issues when I had dinner with Larry Wall in early 2001 after my Master's thesis defense. It was the first time I talked with him about these issues of terminology, and he said that it sounded like a good place to apply what he called the “golden rule of network protocols”: Always be conservative in what you emit and liberal in what you accept. I've recently noted again that's a good rule to follow regarding terminology.

    More recently, I've realized that the FLOSS community suffers here, likely due to our high concentration of software developers and engineers. Precision in communication is a necessary component of the lives of developers, engineers, computer scientists, or anyone in a highly technical field. In our originating fields, lack of precise and well-understood terminology can cause bridges to collapse or the wrong software to get installed and crash mission-critical systems. Calling x by the name y sometimes causes mass confusion and failure. Indeed, earlier this week, I watched a PBS special, The Pluto Files, where Neil deGrasse Tyson discussed the intense debate about the planetary status of Pluto. I was actually somewhat relieved that a subtle point regarding categorical naming is just as contentious in another area outside my chosen field. Watching the “what constitutes a planet” debate showed me that FLOSS hackers are no different than most other scientists in this regard. We all take quite a bit of pride in our careful (sometimes pedantic) attention to terminology and word choice; I know I do, anyway.

    However, on the advocacy side of software freedom (the part that isn't technical), our biggest confusion sometimes stems from an assumption that other people's word choice is necessarily as precise as ours. Consider the phrase “open source”, for example. When I say “open source”, I am referring quite exactly to a business-focused, apolitical and (frankly) amoral0 interest in, adoption of, and contribution to FLOSS. Those who coined the term “open source” were right about at least one thing: it's a term that fits well with for-profit interests who might otherwise see software freedom as too political.

    However, many non-business users and developers that I talk to quite clearly express that they are into this stuff precisely because there are principles behind it: namely, that FLOSS seeks to make a better world by giving important rights to users and programmers. Often, they are using the phrase “open source” as they express this. I of course take the opportunity to say: it's because those principles are so important that I talk about software freedom. Yet, it's clear they already meant software freedom as a concept, and just had some sloppy word choice.

    Fact is, most of us are just plain sloppy with language. Precision isn't everyone's forte, and as a software freedom advocate (not a language usage advocate), I see my job as making sure people have the concepts right even if they use words that don't make much sense. There are times when the word choices really do confuse the concepts, and there are other times when they don't. Sometimes, it's tough to identify which of the two is occurring. I try to figure it out in each given situation, and if I'm in doubt, I just simplify to the golden rule of network protocols.

    Furthermore, I try to have faith in our community's intelligence. Regardless of how people get drawn into FLOSS, be it by the moral software freedom arguments or the technical-advantage-only open source ones, I don't think people stop listening immediately upon their arrival in our community. I know this even from my own adoption of software freedom: I came for the Free as in Price, but I stayed for the Free as in Freedom. It's only because I couldn't afford a SCO Unix license in 1992 that I installed GNU/Linux. But, I learned within just a year why software freedom was what mattered most.

    Surely, others have a similar introduction to the community: either drawn in by zero-cost availability or the technical benefits first, but still very interested to learn about software freedom. My goal is to reach those who have arrived in the community. I therefore try to speak almost constantly about software freedom, why it's a moral issue, and why I work every day to help either reduce the amount of proprietary software, or increase the amount of Free Software in the world. My hope is that newer community members will hear my arguments, see my actions, and be convinced that a moral and ethical commitment to software freedom is the long lasting principle worth undertaking. In essence, I seek to lead by example as much as possible.

    Old arguments are a bit too comfortable. We already know how to have them on autopilot. I admit myself that I enjoy having an old argument with a new person: my extensive practice often yields an oratorical advantage. But, that crude drive is too much about winning the argument and not enough about delivering the message of software freedom. Occasionally, a terminology discussion is part of delivering that message, but my terminology-debate toolbox has “use with care” written on it.


    0 Note that here, too, I took extreme care with my word choice. I mean specifically amorality — merely an absence of any moral code in particular. I do not, by any stretch, mean immoral.

    Posted on Wednesday 23 June 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-06-11: Where Are The Bytes?

    A few years ago, I was considering starting a Free Software project. I never did start that one, but I learned something valuable in the process. When I thought about starting this project, I did what I usually do: ask someone who knows more about the topic than I do. So I phoned my friend Loïc Dachary, who has started many Free Software projects, and asked him for advice.

    Before I could even describe the idea, Loïc said: you don't have a URL? I was taken aback; I said: but I haven't started yet. He said: of course you have, you're talking to me about it, so you've started already. The most important thing you can tell me, he said, is Where are the bytes?

    Loïc explained further: Most projects don't succeed. The hardest part about a software freedom project is carrying it far enough so it can survive even if its founders quit. Therefore, under Loïc's theory, the most important task at the project's start is to generate those bytes, in hopes those bytes find their way to a group of developers who will help keep the project alive.

    But, what does he mean by “bytes”? He means, quite simply, that you have to core dump your thinking, your code, your plans, your ideas, just about everything on a public URL that everyone can take a look at. Push bytes. Push them out every time you generate a few. It's the only chance your software freedom project has.

    The first goal of a software freedom project is to gain developers. No project can have long-term success without a diverse developer base. The problem is, the initial development work and project planning too often end up trapped in the heads of a few developers. It's human nature: How can I spend my time telling everyone about what I'm doing? If I do that, when will I actually do anything? Successful software freedom project leaders resist this human urge and do the seemingly counterintuitive thing: they dump their bytes on the public, even if it slows them down a bit.

    This process is even more essential in the network age. If someone wants to find a program that does a job, the first tool is a search engine: to find out if someone else has done it yet. Your project's future depends completely on every such search helping developers find your bytes.

    In early 2001, I asked Larry Wall which, of all the projects he'd worked on, was the hardest. His answer was quick: when I was developing the first version of perl5, Larry said, I felt like I had to code completely alone and just make it work by myself. Of course, Larry's a very talented guy who can make that happen: generate something by himself that everyone wanted to use. While I haven't asked him what he'd do in today's world if he was charged with a similar task, I can guess — especially given how public the Perl6 process has been — that he'd instead use the new network tools, such as DVCS, to push his bytes early and often and seek to get more developers involved early.0

    Admittedly, most developers' first urge is to hide everything. We'll release it when it's ready, is often heard, or — even worse — Our core team works so well together; it'll just slow us down to make things public now. Truth is, this is a dangerous mixture of fear and narcissism — the very same drives that lead proprietary software developers to keep things proprietary.

    Software freedom developers have the opportunity to actually get past the simple reality of software development: all code sucks, and usually isn't complete. Yet, it's still essential that the community see what's going on at every step, from the empty codebase and beyond. When a project is seen as active, that draws in developers and gives the project hope of success.

    When I was in college, one of the teams in a software engineering class crashed and burned; their project failed hopelessly. This happened despite one of the team members spending about half the semester up long nights, coding by himself, ignoring the other team members. In their final evaluation, the professor pointed out: Being a software developer isn't like being a fighter pilot. The student, missing the point, quipped: Yeah, I know, at least a fighter pilot has a wingman. Truth is, one person, or two people, or even a small team, aren't going to make a software freedom project succeed. It's only going to succeed when a large community bolsters it and prevents any single point of failure.

    Nevertheless, most software freedom projects are going to fail. But, there is no shame in pushing out a bunch of bytes, encouraging people to take a look, and giving up later if it just doesn't make it. All of science works this way, and there's no reason computer science should be any different. Keeping your project private assures its failure; the only benefit is that you can hide that you even tried. As my graduate advisor told me when I was worried my thesis wasn't a success: a negative result can be just as compelling as a positive one. What's important is to make sure all results are published and available for public scrutiny.


    When I started discussing this idea a few weeks ago, some argued that early GNU programs — the founding software of our community — were developed in private initially. This much is true, but just because GNU developers once operated that way doesn't mean it was the right way. We have the tools now to easily do development in public, so we should. In my view, today, it's not really in the spirit of software freedom until the project, including its design discussions, plans, and prototypes, is developed entirely in public. Code (regardless of its license) merely dumped over the wall at intervals deserves to be forked by a community committed to public development.


    Update (2010-06-12): I completely forgot to mention The Risks of Distributed Version Control by Ben Collins-Sussman, which is five years old now but still useful. Ben is making a similar point to mine, and pointing out how some uses of DVCS can cause the effects that I'm encouraging developers to avoid. I think DVCS is like any tool: it can be used wrongly. The usage Ben warns about should be avoided, and DVCS, when used correctly, assists in the public software development process.


    0Note that pushing code out to the public in the mid-1990s was substantially more arduous (from a technological perspective) than it is today. Those of you who don't remember shar archives may not realize that. :)

    Posted on Friday 11 June 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2010-05-08: Beware of Proprietary Drift

    The Free Software Foundation (FSF) announced yesterday a campaign to collect a clear list of OpenOffice.Org extensions that are FaiF, to convince the OO.o Community Council to list only FaiF extensions, and to find those extensions that are proprietary software, so that OO.o extension developers can focus their efforts on writing replacements under a software-freedom-respecting license.

    I use OpenOffice.Org (OO.o) myself only when someone else sends me a document in that format; I'm a LaTeX, DocBook, MarkDown, or HTML user for documents I originate. Nevertheless, I'm obviously a rare sort of software user, and I understand that OO.o is a program many people use. Plus, a program like OO.o is extremely large, with a diverse user base, so extension-style improvement, from a technological perspective, makes sense to meet all the users' requirements.

    Unfortunately, the social impact of a program designed this way causes danger for software freedom. It sometimes causes a chain of events that I call “proprietary drift” — a social phenomenon that leads otherwise FaiF codebases to slowly become, in their default use, mostly proprietary packages, at least with regard to the features users find most important and necessary.

    Copyleft itself was originally designed to address this problem: to make sure that improved versions of packages were available with as much software freedom as the original. Copyleft isn't a perfect solution to reach this goal, and furthermore many essential software freedom codebases are under weak copyleft and/or permissive licenses. Such is the case with OO.o, and the proprietary drift of the codebase is thus of great concern here.

    For those of us that have the goal of building a world where software freedom is given for all published and deployed software, this problem of proprietary drift is a terrible threat. In many ways, it's even a worse threat than the marketing and production of fully proprietary software. This may seem a bit counter-intuitive on its surface; logic would seem to dictate that some software freedom is better than none, and therefore an OO.o user with a few proprietary extensions installed is better off than a Microsoft Word user. And, in fact, none of that is false.

    However, the situation introduces a complexity. In short, it can inspire a “good enough” reaction among users. Particularly for users who have generally used only proprietary software, the experience of using a package that mostly respects software freedom can be incredibly liberating. When 98% of your software is FaiF-licensed, you sometimes don't notice the 2% that isn't. Over time, the 2% goes up to 3%, then 4%. This proprietary drift will often lead back to a system not that much different from (for example) Apple's operating system, which has a permissively licensed software freedom core while most of the system is very much proprietary. In other words, in the long term, proprietary drift leads to mostly proprietary systems.

    Sometimes, I and other software freedom advocates are criticized for giving such a hard time to those who are seemingly closest to our positions. Often, this is because the threat of proprietary drift is so great. Concern about proprietary drift is, at least in large part, the inspiration for positions opposing UbuntuOne, for the Linux Libre project, and for this new initiative to catalog the FaiF OO.o extensions and rewrite the proprietary ones. We all agree that purely proprietary software programs like those from Apple, Microsoft, and Oracle are the greatest threat to software freedom in the short term. But, in the long term, proprietary drift has the potential to creep up on users who prefer software freedom. You may never see it coming if you aren't constantly vigilant.

    [There's a derivative version of this article available in Arabic. I can't personally attest to the accuracy of the translation, as I can't read Arabic, but osamak, the translator, is a good guy.]


    Disclaimer: While I am a member of FSF's Board of Directors, and I believe the positions stated above are consistent with FSF's positions, the opinions are not necessarily those of the FSF even though I refer to various FSF-sponsored initiatives. Furthermore, this remains my personal blog and the opinions certainly do not express those of my employer nor those of any other organization or project for which I volunteer.

    Posted on Saturday 08 May 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2010-04-21: Launchpad Single Sign On Released

    I wrote 15 months ago thanking Canonical for their release of Launchpad. However, in the interim, a part of the necessary codebase was made proprietary, namely the authentication system used in the canonical instance of Launchpad hosted by Canonical. (Yes, I still insist on using canonical in the canonical way despite the company name making it confusing. :). I added this fact to my list of reasons for abandoning Ubuntu and other Canonical products.

    Fortunately, I've now removed this reason from the list of reasons I switched back to Debian from Ubuntu, since Jono Bacon announced the release of this code today. According to Jono, this release means that Launchpad and its dependencies are again fully Free Software. This is a step forward. And, I did promise many people at Canonical that I'd make a point of thanking them for doing Free Software releases when they do them, since I do make a point of calling them out about negative things they do.

    Like any mixed proprietary/Free Software company, Canonical still has tons more to release. I remain most concerned about UbuntuOne's server-side code, but I very much hope today's release marks a bounce-back for Canonical to its roots in the 100% Free Software world.

    Posted on Wednesday 21 April 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-04-07: Proprietary Licenses Are Even Worse Than They Look

    There are lots of evil things that proprietary software companies might do. Companies put their own profit above the rights and freedoms of their users, and to that end, much can be done that subjugates users. Even as someone who avoids proprietary software, I still read many proprietary license agreements (mainly to see how bad they are). I've certainly become numb to the constant barrage of horrible restrictions they place on users. But, sometimes, proprietary licenses go so far that I'm taken aback by their gratuitous cruelty.

    Apple's licenses are probably the clearest example of proprietary licensing terms that go well beyond reasonableness. Of course, Apple's licenses do the usual things like forbidding users from copying, modifying, sharing, and reverse engineering the software. But even worse, Apple also forbids users from running Apple software on any hardware not produced by Apple.

    The decoupling of one's hardware vendor from one's software vendor was a great innovation brought about by the PC revolution, in which, ironically, Apple played a role. Computing history has shown us that when your software vendor also controls your hardware, you can easily be “locked in” in ways that make mundane proprietary software licenses seem almost nonthreatening.

    Film image from Tron of the Master Control Program (MCP)

    Indeed, Apple has such a good hype machine that they have even convinced some users that this restrictive policy makes computing better. In this worldview, the paternalistic vendor uses its proprietary controls over as many pieces of the technology as possible to keep the infantile users from doing something that's “just bad for them”. The tyrannical MCP of Tron comes quickly to my mind.

    I'm amazed that so many otherwise Free Software supporters are quite happy using OSX and buying Apple products, given these kinds of utterly unacceptable policies. The scariest part, though, is that this practice isn't confined to Apple. I've been recently reminded that other companies, such as IBM, do exactly the same thing. As a Free Software advocate, I'm critical of any company that uses their control of a proprietary software license to demand that users run that software only on the original company's hardware as well. The production and distribution of mundane proprietary software is bad enough. It's unfortunate that companies like Apple and IBM are going the extra mile to treat users even worse.

    Posted on Wednesday 07 April 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2010-03-26: LibrePlanet 2010 Completes Its Orbit

    Seven and a half years ago, I got this idea: the membership of the Free Software Foundation should have a chance to get together every year and learn about what the FSF has been doing for the last year. I was so nervous at the first one, on Saturday 15 March 2003, that I even wore a suit, which I rarely do.

    The basic idea was simple: the FSF Board of Directors came into town anyway each March for the annual board meeting. Why not give a chance for FSF associate members to meet the leadership and staff of FSF and ask hard questions to their hearts' content? I'm all about transparency, as you know. :)

    Since leaving the position of Executive Director a few months before the 2005 meeting, I've attended every annual meeting, just as an ordinary Associate Member and FSF volunteer. It's always enjoyable to attend a conference organized by someone else that you used to help organize; it's like having someone else keep a machine running and up to date just for you, after years of doing sysadmin work for other people. It's been wonderful to watch the FSF AM meeting grow into a full-fledged conference for discussion and collaboration among folks from all over the Free Software world. “One room, one track, one day” has become “five rooms, three tracks, and three days”, with the proverbial complaint throughout: But, why do I have to miss this great session so that I can go to some other great session!?!

    Some highlights for me this year were:

    • I saw John Gilmore win a well-deserved FSF Award for the Advancement of Free Software.
    • I got to spend time with the intrepid gnash developer Rob Savoye again, whom I'd known of for years (his legend precedes him) but had rarely had a chance to see in person until lately.
    • I met so many young people excited about software freedom. I can only imagine being just 19 or 20 years old and having the opportunity to meet other Free Software developers in person. At that age, I considered myself lucky simply to have Usenet access so that I could follow and participate in online discussions about Free Software (good ol' gnu.misc.discuss ;). I am so glad that young folks, some from as far away as Brazil, had the opportunity to visit and speak about their work.
    • During the informal Friday sessions, I was a bit amazed that I pulled off a marathon six-hour session of mostly well-received talks/discussions (for which I readily admit I had not prepped well). The first three hours were about the challenges of software freedom on mobile devices, and the second three were about the nitty-gritty details of the hardest and most technical GPL enforcement task: the C&CS check. People seemed to actually enjoy watching me break half my Fedora chroots trying to build some source code for a plasma television. Someone even told me later: it was more fun because we got to see you make all the mistakes.
    • Finally (and I realize I've probably buried the lede here, but I've kept the list chronological, since I wrote most of it before I found out this last thing), after the FSF Board meeting, which followed LibrePlanet, I was informed by a phone call from my good friend Henry Poole that I'd been elected to FSF's Board of Directors, which has now been announced by FSF on Peter Brown's blog. I've often told the story that when I first learned about the FSF as a young programmer and sysadmin, I thought that someday, maybe I could be good enough to get a job as a sysadmin for the FSF. I did indeed volunteer as a sysadmin for the FSF starting around 1996, but I truly felt I'd exceeded any possible dream when I was later named FSF's Executive Director, and was able to serve in that post for so many years. Now, being part of the Board of Directors is an even greater opportunity for involvement in the organization that I've loved and respected for so long.

    FSF is an organization based around a very simple, principled idea: that users and programmers alike deserve inalienable rights to copy, share, modify, and redistribute all the software that they use. This issue isn't merely about making better software (although Free Software developers usually do, anyway); it's about a principle of morality: everyone using computers should be treated well and be given the maximal opportunity to treat their neighbors well, too. Helping make this simple idea into reality is the center of all the work I've done for the last 12 years of my life, and I expect it will be the focus of my (hopefully many) remaining years. I am thankful that the Voting Members of FSF have given me this additional opportunity to help our shared cause. I plan to work hard in this and all the other responsibilities that I already have to our Free Software community. Like everyone on FSF's Board of Directors, I serve in that role completely as a volunteer, so in some ways I feel this is just a natural extension of the volunteer work I've continued to do for the FSF regularly since I left its employment in 2005.

    Finally, I was glad to meet (or meet again) so many FSF supporters at LibrePlanet, and I deeply hope that I can serve our shared goal well in this additional role.

    Posted on Friday 26 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-15: Is Your Support of Copyleft Logically Consistent?

    Most of you are aware from one of my previous posts that It's a Wonderful Life! is my favorite film. Recently, I encountered something in the software freedom community that reminded me of yet another quote from the film:

    Picture of George Bailey whispering to Clarence at the bar

    GEORGE:
    Look, uh … I think maybe you better not mention getting your wings around here.
    CLARENCE:
    Why? Don't they believe in angels?
    GEORGE:
    I… yeah, they believe in them…
    CLARENCE:
    Ohhh … Why should they be surprised when they see one?

    Obviously, I don't believe in angels myself. But, Clarence's (admittedly naïve) logic is actually impeccable: Either you believe in angels or you don't. If you believe in angels, then you shouldn't be surprised to (at least occasionally) see one.

    This film quote came to my mind in reference to a concept in GPL enforcement. Many people give lip service to the idea that the GPL, and copyleft generally, is a unique force that democratizes software and ensures that FLOSS cannot be exploited by proprietary software interests. Many of these same people, though, oppose GPL enforcement even when companies exploit GPL'd code, withhold the source code, and take away users' rights to modify and share that software.

    I've admitted that the copyleft is merely a strategy to achieve maximal software freedom. There are other strategies too, such as the Apache community process. The Apache Software Foundation releases software under a permissive non-copyleft license, but then negotiates with companies to convince them to contribute to the code base publicly. For some projects, that strategy has worked well, and I respect it greatly.

    Some (although not all) people in non-copyleft FLOSS communities (like the Apache community) are against GPL enforcement. I disagree with them, but their position is logically consistent. Such folks don't agree with us (copyleft-supporting folks) that a license should be used as a mechanism to guarantee that all published and deployed improved versions of the software are released in software freedom. It's not that those other folks don't prefer FLOSS; they simply prefer a non-legally binding social pressure to encourage software sharing rather than a strategy with legal backup. I prefer a strategy with legal strength, but I still respect non-copyleft folks who don't support that. They take a logically consistent and reasonable approach.

    However, it's ultimately hypocritical to claim support for a copyleft structure but oppose GPL enforcement. If you believe the license should have a legal requirement that ensures software is always distributed in software freedom, then why would you be surprised — or, even worse, angry — that a copyright holder would seek to uphold users' rights when that license is violated?

    There is great value in having multiple simultaneous strategies ongoing to achieve important goals. Universal software freedom is my most important goal, and I expect to spend nearly all of my life focused on achieving it for all published and deployed software in the world. However, I don't expect nor even want everyone else to single-mindedly support my exact same strategies in all cases. The diversity of the software freedom community makes it more likely that we'll succeed if we avoid a single point of failure in any particular plan, and I support that diversity.

    However, I also think it's reasonable to expect logically consistent positions. A copyleft license is effectively indistinguishable from the Apache license if copyleft is never enforced when violations occur. Condemning community-oriented0 GPL enforcement (that seeks primarily to get the code released) while also claiming to support the idea of copyleft is a logically inconsistent and self-contradictory position. It's unfortunate that so many people hold this contradictory position.


    0There are certain types of GPL enforcement that are not consistent with the goal of universal software freedom. For example, some so-called “Open Core” companies are well known for releasing their (solely) copyrighted code under GPL, and then using GPL enforcement as a mechanism to pressure users to take a proprietary license. GPL enforcement is only acceptable in my view if its primary goal is to have all code released under GPL. Such enforcement must never compromise on one point: that compliance with the GPL is a non-negotiable term of settling the enforcement action. If the enforcer is willing to sell out the rights that users have to the source code, then even I would condemn, as I have previously, such GPL enforcement as bad for the software freedom community. For this reason, in all GPL enforcement that I engage in, I make it a term of my participation that compliance with the terms of the GPL for the code in question be a non-negotiable requirement.

    Posted on Monday 15 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-05: Ok, Be Afraid if Someone's Got a Voltmeter Hooked to Your CPU

    Boy, do I hate it when a FLOSS project is given a hard time unfairly. I was this morning greeted with news from many places that OpenSSL, one of the most common FLOSS software libraries used for cryptography, was somehow severely vulnerable.

    I had a hunch what was going on. I quickly downloaded a copy of the academic paper that was cited as the sole source for the story and read it. As I feared, OpenSSL was getting some bad press unfairly. One must really read this academic computer science article in the context in which it was written; most of those commenting on this paper probably did not.

    First of all, I don't claim to be an expert on cryptography, and I think my level of knowledge qualifies me to opine on this subject in a little blog post like this and nothing more. Between college and graduate school, I worked as a system administrator focusing on network security. While a computer science graduate student, I did take two cryptography courses, two theory of computation courses, and one class on complexity theory0. So, compared to the general population I probably am an expert, but compared to people who actually work in cryptography regularly, I'm clearly a novice. However, I suspect that many who have hitherto opined about this academic article, declaring it a severe vulnerability, have even less knowledge than I do on the subject.

    This article, of course, wasn't written for novices like me, and certainly not for the general public nor the technology press. It was written by and for professional researchers who spend much time each week reading dozens of these academic papers, a task I haven't done since graduate school. Indeed, the paper is written in a style I know well; my “welcome to CS graduate school” seminar in 1997 covered the format well.

    The first thing you have to note about such papers is that informed readers generally ignore the parts that a newbie is most likely to focus on: the Abstract, Introduction and Conclusion sections. These sections are promotional materials; they are equivalent to a sales brochure selling you on how important and groundbreaking the research is. Some research is groundbreaking, of course, but most is an incremental step forward toward understanding some theoretical concept, or some report about an isolated but interesting experimental finding.

    Unfortunately, these promotional parts of the paper are the sections that focus on the negative implications for OpenSSL. In the rest of the paper, OpenSSL is merely the software component of the experiment equipment. They likely could have used GNU TLS or any other implementation of RSA taken from a book on cryptography1. But this fact is not even the primary reason that this article isn't really that big of a deal for daily use of cryptography.

    The experiment described in the paper is very difficult to reproduce. You have to cause very subtle faults in computation at specific times. As I understand it, they had to assemble a specialized hardware copy of a SPARC-based GNU/Linux environment to accomplish the experiment.

    Next, the data generated during the run of the software on the specially-constructed faulty hardware must be collected and operated upon by a parallel processing computing environment over the course of many hours. If it turns out all the needed data was gathered, the output of this whole process is the private RSA key.

    The details of the fault generation process deserve special mention. Very specific faults have to occur, and they can't occur such that any other parts of the computation (such as, say, the normal running of the operating system) are interrupted or corrupted. This is somewhat straightforward to get done in a lab environment, but accomplishing it in a production situation would be impractical and improbable. It would also usually require physical access to the hardware holding the private key. Such physical access would, of course, probably give you the private key anyway by simply copying it off the hard drive or out of RAM!

    This is interesting research, and it does suggest some changes that might be useful. For example, if it doesn't slow a system down too much, the integrity of RSA signatures should be verified, on a closely controlled proxy unit with a separate CPU, before they are sent out to a wider audience. But even that would be a process only for the most paranoid. If faults are occurring on production hardware often enough to generate the bad computations this cracking process relies on, likely something else will go wrong on the hardware too, and it will be declared generally unusable for production before an interloper could gather enough data to crack the key. Thus, another useful change to make based on this finding is to disable and discard RSA keys that were in use on production hardware that went faulty.
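
    To make that suggested mitigation concrete, here is a minimal sketch of the “check the signature before you ship it” idea. This is purely my own illustration, not anything from the paper and not OpenSSL's own API; it uses the third-party Python cryptography library, and the key size, padding choice, and variable names are assumptions for the example only:

        # Sketch only: sign, then re-verify with the public key before release.
        # A signature corrupted by a hardware fault during signing fails this
        # self-check instead of being published (published faulty signatures are
        # exactly the data this kind of attack collects).
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        message = b"artifact to be signed and published"
        signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

        try:
            public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            raise SystemExit("signature failed self-check; discard it and re-sign")

    (Ideally, per the paragraph above, that verification step would run on a separate, closely controlled unit rather than on the possibly faulty signing host.)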

    Finally, I think this article does completely convince me that I would never want to run any RSA computations on a system where the CPU was emulated. Causing faults in an emulated CPU would only require changes to the emulation software, and could be done with careful precision to detect when an RSA-related computation was happening, and only give the faulty result on those occasions. I've never heard of anyone running production cryptography on an emulated CPU, since it would be too slow, and virtualization technologies like Xen, KVM, and QEMU all pass-through CPU instructions directly to hardware (for speed reasons) when the virtualized guest matches the hardware architecture of the host.

    The point, however, is that proper description of the dangers of a “security vulnerability” requires more than a single bit field. Some security vulnerabilities are much worse than others. This one is substantially closer to the “oh, that's cute” end of the spectrum, not the “ZOMG, everyone's going to experience identity theft tomorrow” side.


    0Many casual users don't realize that cryptography — the stuff that secures your networked data from unwanted viewers — isn't about math problems that are unsolvable. In fact, it's often based on math problems that are trivially solvable, but take a very long time to solve. This is why algorithmic complexity questions are central to the question of cryptographic security.
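
    A toy illustration of this point (my own, not from the paper): brute-force factoring, the kind of problem RSA's security rests on, is trivial to write down and instant for tiny numbers, but the very same loop run against a real 2048-bit modulus would not finish in any useful amount of time.

        # Sketch only: trial division factors a toy "RSA modulus" immediately, but
        # its running time grows with the square root of the modulus, so keys of
        # realistic size are far out of reach for this approach.
        def trial_factor(n):
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d, n // d
                d += 1
            return n, 1  # n is prime

        toy_modulus = 2017 * 2027          # two small primes, a miniature RSA key
        print(trial_factor(toy_modulus))   # -> (2017, 2027), found almost instantly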

    1 I'm oversimplifying a bit here. A key factor in the paper appears to be the linear time algorithm used to compute cryptographic digital signatures, and the fact that the signatures aren't verified for integrity before being deployed. I suspect, though, that just about any RSA system is going to do this. (Although I do usually test the integrity of my GnuPG signatures before sending them out, I do this as a user by hand).

    Posted on Friday 05 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-04: Musings on Software Freedom for Mobile Devices

    I started using GNU/Linux and Free Software in 1992. In those days, while everything I needed for a working computer was generally available in software freedom, there were many components and applications that simply did not exist. For highly technical users who did not need many peripherals, the Free Software community had reached a state of complete software freedom. Yet, in 1992, everyone agreed there was still much work to be done. Even today, we still strive for a desktop and server operating system, with all relevant applications, that grants complete software freedom.

    Looked at broadly, mobile telephone systems are not all that different from 1992-era GNU/Linux systems. The basics are currently available as Free, Libre, and Open Source Software (FLOSS). If you need only the bare minimum of functionality, you can, by picking the right phone hardware, run an almost completely FLOSS operating system and application set. Yet, we have so far to go. This post discusses the current penetration of FLOSS in mobile devices and offers a path forward for free software advocates.

    A Brief History

    The mobile telephone market has never functioned like the traditional computer market. Historically, the mobile user made arrangements with some network carrier through a long-term contract. That carrier “gave” the user a phone or discounted it as a loss-leader. Under that system, few people took their phone hardware choice all that seriously. Perhaps users paid a bit more for a slightly better phone, but they nearly always picked among the limited choices provided by the given carrier.

    Meanwhile, Research in Motion was the first to provide corporate-slave-oriented email-enabled devices. Indeed, with the very recent focus on consumer-oriented devices like the iPhone, most users forget that Apple is by far not the preferred fruit for the smart phone user. Today, most people using a “smart phone” are using one given to them by their employer to chain them to their office email 24/7.

    Apple, excellent at manipulating users into paying more for a product merely because it is shiny, also convinced everyone that now a phone should be paid for separately, and contracts should go even longer. The “race to mediocrity” of the phone market has ended. Phones need real features to stand out. Phones, in fact, aren't phones anymore. They are small mobile computers that can also make phone calls.

    If these small computers had been introduced in 1992, I suppose I'd be left writing the Mobile GNU Manifesto, calling for developers to start from scratch writing operating systems for these new computers, so that all users could have software freedom. Fortunately, we have instead been given a head start. Unlike in 1992, not every company in the market today is completely against releasing Free Software. Specifically, two companies have seen some value in releasing (some parts of) phone operating systems as Free Software: Nokia and Google. However, the two companies have done this for radically different reasons.

    The Current State of Mobile Software Freedom

    For its part, Nokia likely benefited greatly from the traditional carrier system. Most of their phones were provided relatively cheaply with contracts. Their interest in software freedom was limited and perhaps even non-existent. Nokia sold new hardware every time a phone contract was renewed, and the carrier paid the difference between the loss-leader price and Nokia's wholesale cost. The software on the devices was simple and mostly internally developed. What incentive did Nokia have to release software in software freedom? (Nokia realized too late this was the wrong position, but more on that later.)

    In parallel, Nokia had chased another market that I've never fully understood: the tablet PC. Not big enough to be a real computer, but too large to be a phone, these devices have been an idea looking for a user base. Regardless of my personal views on these systems, though, GNU/Linux remains the ideal system for these devices, and Nokia saw that. Nokia built the Debian-ish Maemo system as a tablet system, with no phone. However, I can count on one hand all the people I've met who bothered with these devices; I just don't think a phone-less small computer is going to ever become the rage, even if Apple dumps billions into marketing the iPad. (Anyone remember the Newton?)

    I cannot explain, nor do I even understand, why Nokia took so long to use Maemo as a platform for a tablet-like telephone. But, a few months ago, they finally released one. This N900 is among only a few available phones that make any strides toward a fully free software phone platform. Yet, the list of proprietary components required for operation remains quite long. The common joke is that you can't even charge the battery on your N900 without proprietary software.

    While there are surely people inside Nokia who want more software freedom on their devices, Nokia is fundamentally a hardware company experimenting with software freedom in hopes that it will bolster hardware sales. Convincing Nokia to shorten that proprietary list will prove difficult, and the community-based effort to replace that long list with FLOSS (called Mer) faces many challenges. (These challenges will likely increase with the recent Maemo merger with Moblin to form MeeGo).

    Fortunately, hardware companies are not the only entity interested in phone operating systems. Google, ever-focused on routing human eyes to its controlled advertising, realizes that even more eyes will be on mobile computing platforms in the future. With this goal in mind, Google released the Android/Linux system, now available on a variety of phones in varying degrees of software freedom.

    Google's motives are completely different than Nokia's. Technically, Google has no hardware to sell. They do have a set of proprietary applications that yield the “Google online experience” to deliver Google's advertising. From Google's point of view, an easy-to-adopt, licensing-unencumbered platform will broaden their advertising market.

    Thus, Android/Linux is a nearly fully non-copylefted phone operating system platform; Linux is the only GPL-licensed component essential to Android's operation. Ideally, Google wants to see Android adopted broadly in both Free Software and mixed Free/proprietary deployments. Google's goals do not match those of the software freedom community, so in some cases a given Android/Linux device will give the user more software freedom than the N900, but in many cases it will give much less.

    The HTC Dream is the only Android/Linux device I know of where the necessary proprietary components have been carefully examined. Obviously, the “Google experience” applications are proprietary. There are also about 20 hardware interface libraries that do not have source code available in a public repository. However, when lined up against the N900 with Maemo, Android on the HTC Dream can be used as an operational mobile telephone and 3G Internet device using only a few proprietary components: a proprietary GSM firmware, proprietary Wifi firmware, and two audio interface libraries. Further proprietary components are needed if you want a working accelerometer, camera, and video codecs, as their hardware interface libraries are all proprietary.

    Based on this analysis, it appears that the HTC Dream currently gives the most software freedom among Android/Linux deployments. It is unlikely that Google wants anything besides their own applications to be proprietary. While Google has been unresponsive when asked why these hardware interface libraries are proprietary, it is likely that HTC, the hardware maker with whom Google contracted, insisted that these components remain proprietary, and perhaps fear of patent suits like the one filed this week is to blame here. Meanwhile, while no detailed analysis of the Nexus One is yet available, it's likely similar to the HTC Dream.

    Other Android/Linux devices are now available, such as those from Motorola and Samsung. There appears to have been no detailed analysis done yet on the relative proprietary/freeness ratio of these Android deployments. One can surmise that since these devices are from traditionally proprietary hardware makers, it is unlikely that these platforms are freer than those available from Google, whose maximal interest in a freely available operating system is clear and in contrast to the traditional desires of hardware makers.

    Whether the software is from a hardware maker desperately trying a new hardware sales strategy, or an advertising salesman who wants some influence over an operating system choice to improve ad delivery, the software freedom community cannot assume that the stewards of these codebases have the interests of the user community at heart. Indeed, the interests of these disparate groups will only occasionally be aligned. Community-oriented forks, like the one begun in the Maemo community with Mer, must begin in the Android/Linux space too. We are slowly trying with the Replicant project, founded by myself and my colleague Aaron Williamson.

    A healthy community-oriented phone operating system project will ultimately be an essential component of software freedom on these devices. For example, consider the fate of the Mer project now that Nokia has announced the merger of Maemo with Moblin. Mer does seek to cherry-pick from various small device systems, but its focus was to create a freer Maemo that worked on more devices. Mer must now choose between following Maemo into the merge with Moblin, or becoming a true fork. Ideally, the right outcome for software freedom is a community-led effort, but there may not be enough community interest, time, and commitment to shepherd a fork while Intel and Nokia push forward on a corporate-controlled codebase. Further, Moblin will likely push the MeeGo project toward more of a tablet-PC operating system than a smart phone one.

    A community-oriented Android/Linux fork has more hope. Google has little to lose by encouraging and even assisting with such forks; such effort would actually be wholly consistent with Google's goals for wider adoption of platforms that allow deployment of Google's proprietary applications. I expect that operating system software-freedom-motivated efforts will be met with more support from Google than from Nokia and/or Intel.

    However, any operating system, even a mobile device one, needs many applications to be useful. Google experience applications for Android/Linux are merely the beginning of the plethora of proprietary applications that will ultimately be available for MeeGo and Android/Linux platforms. For FLOSS developers who don't have a talent for low-level device libraries and operating system software, these applications represent a straightforward contribution towards mobile software freedom. (Obviously, though, if one does have talent for low-level programming, replacing the proprietary .so's on Android/Linux would be the optimal contribution.)

    Indeed, on this point, we can take a page from Free Software history. From the early 1990s onward, fully free GNU/Linux systems succeeded as viable desktop and server systems because disparate groups of developers focused simultaneously on both operating systems and application software. We need that simultaneous diversity of improvement to actually compete with the fully proprietary alternatives, and to ensure that the “mostly FLOSS” systems of today are not the “barely FLOSS” systems of tomorrow.

    Careful readers have likely noticed that I have ignored Nokia's other release, the Symbian codebase. Every time I write or speak about the issues of software freedom in mobile devices, I'm chastised for leaving it out of the story. My answer is always simple: when a FLOSS version of Symbian can be compiled from source code, using a FLOSS compiler or SDK, and that binary can be installed onto an actual working mobile phone device, then (and only then) will I believe that the Symbian source release has value beyond historical interest. We have to get honest as a community about the future of Symbian: it's a ten-year-old proprietary codebase designed for devices of that era that doesn't bootstrap with any compilers our community uses regularly. Unless there's a radical change to these facts, the code belongs in a museum, not running on a phone.

    Also, lest my own community of hard-core FLOSS advocates flame me, I must also mention the Neo FreeRunner device and the OpenMoko project. This was a noble experiment: a freely specified hardware platform running 100% FLOSS. I used an OpenMoko FreeRunner myself, hoping that it would be the mobile phone our community could rally around. I do think the device and its (various) software stack(s) have a future as an experimental, hobbyist device. But, just as GNU/Linux needed to focus on x86 hardware to succeed, so must software freedom efforts in mobile systems focus on mass-market, widely used, and widely available hardware.

    Jailbreaking and the Self-Installed System

    When some of us at my day-job office decided to move as close to a software freedom phone platform as we could, we picked Android/Linux and the HTC Dream. However, we carefully considered the question of whether we would have permission to run our own software on the device. In the desktop and server system market, this is not a concern, but on mobile systems, it is a central question.

    The holdover of those carrier-controlled agreements for phone acquisition is the demand that devices be locked down. Devices are locked down first to a single carrier's network, so that devices cannot (legally) be resold as phones ready for any network. Second, carriers believe that they must fear the FCC if device operating systems can be reinstalled.

    On the first point, Google is our best ally. The HTC Dream developer models, called the Android Dev Phone 1 (aka ADP1), while somewhat more expensive than T-Mobile branded G1s, permit the user to install any operating system on the phone, and the purchase agreement extracts no promises from the purchaser regarding what software runs on the device. Google has no interest in locking you to a single carrier, only to a single Google experience application vendor. Offering users “carrier freedom of choice”, while tying them more tightly to Google applications, is probably a central part of their marketing plans.

    The second point — fear of an FCC crackdown when mobile users have software freedom — is beyond the scope of this article. However, what Atheros has done with their Wifi devices shows that software freedom and FCC compliance can co-exist. Furthermore, the central piece of the FCC's concern — the GSM chipset and firmware — runs on a separate processor in modern mobile devices. This is a software freedom battle for another day, but it shows that the FCC can be pacified in the meantime by keeping the GSM device a black box to the Free Software running on the primary processor of the device.

    Conclusion

    Seeking software freedom on mobile devices will remain a complicated endeavor for some time. Our community should utilize the FLOSS releases from companies, but should not forget that, until viable community forks exist, software freedom on these devices exists at the whim of these companies. A traditional “get some volunteers together and write some code” approach can achieve great advancement toward community-oriented FLOSS systems on mobile devices. Developers could initially focus on applications for the existing “mostly FLOSS” platforms of MeeGo and Android/Linux. The more challenging and more urgent work is to replace the lower-level proprietary components on these systems with FLOSS alternatives, though that work admittedly requires special programming skills that aren't easy to find.

    (This blog post first appeared as an article in the March 2010 issue of the Canadian online journal, The Open Source Business Resource.)

    Posted on Thursday 04 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-03: Thoughts on Jeremy's Sun/Oracle Analysis

    Leslie Hawthorn referred me to an excellent article by Jeremy Allison about Sun merging with Oracle. It was a particularly interesting read for me since, while I knew that Jeremy worked for Sun early in his career, I didn't realize that he started in engineering tech support.

    The most amusing part to me is that it's quite possible Jeremy was on the UK tech support hotline during the same time frame when I was calling USA Sun tech support while working for Westinghouse. I probably would have had a different view of proprietary software if Jeremy had answered the USA phone calls. One of the major life experiences that led me down the path of hard-core software freedom beliefs was my many calls to Sun tech support, who would usually tell me they just weren't going to fix the bugs I was reporting because Westinghouse just wasn't “big enough” (it was ironically one of the largest employers in Maryland in the 1980s and early 1990s) to demand that Sun fix such bugs (notwithstanding our monthly Sun maintenance fees).

    But, more fascinating still is Jeremy's analysis of why Sun failed as a FLOSS company. In particular, Jeremy points out that the need for corporate control over all software technologies that Sun released, specifically the demand for the exclusive right to proprietarize non-Sun contributions, was a primary reason that Sun just never succeeded as a FLOSS company.

    Meanwhile, I'm less optimistic than Jeremy about the future of Oracle. I have paid attention to Oracle's contributions to btrfs in light of recent events. Amusingly, btrfs exists in no small part because ZFS was never licensed correctly and never turned into a truly community-oriented project. While the two projects don't have identical goals, they are similar enough that it seems unlikely btrfs would exist if Sun had endeavored to become a real FLOSS contributor and shepherd ZFS into Linux upstream using normal Linux community processes. It's thus strange to think that Oracle controls ZFS, even while it continues to contribute to btrfs, in a normal, upstream way (i.e., collaborating under the terms of GPLv2 with community developers and employees of other companies such as Red Hat, HP, Intel, Novell, and Fujitsu).

    I have mostly considered Oracle's contributions to btrfs (and to Xen, to which they contribute in much the same way) as a complete fluke. Oracle is third only to Apple and Microsoft in its predatory, proprietary software marketing practices and mistreatment of users. Other than these notable exceptions, Oracle's attitude generally matches Sun's long-ago roots (and Apple's current attitude) in this regard: non-copyleft FLOSS without giving contributions back is the best “Open Source” plan.

    Software corporations usually oscillate between treating users and developers well and treating them poorly. Larger companies are often completely self-contradictory on this issue across multiple divisions. Microsoft and Apple are actually unique in their consistency of anti-software-freedom attitudes; I've typically assessed Oracle as roughly equivalent to the two of them0. I don't really see Oracle's predatory proprietary licensing models changing, and I expect them to try to manipulate FLOSS to bolster their proprietary licensing. Oracle was never an operating system company before the Sun acquisition, and therefore contributing to operating system components like btrfs and Xen were historically a non-issue. My pessimistic view is that Oracle's FLOSS involvement won't go beyond what currently exists (and I even find myself worrying if others can pick up the slack on btrfs if (when?) Oracle starts marketing a proprietarized ZFS-based solution instead). In short, I expect Oracle's primary business will still be anti-FLOSS. Nevertheless, I'll try to quickly acknowledge it if it turns out I'm wrong.


    0 Contrary to the popular reception at the time, I was actually quite depressed both when, in 1999, Oracle first announced that they'd have a certified version of Oracle's database available for Red Hat Linux and when, in 2002, Oracle announced so-called “Unbreakable” Linux. These moves were not toward more software freedom, but rather to leverage the availability of a software freedom operating system, GNU/Linux, to sell proprietary licenses for Oracle databases. Neither event should have been heralded as anything but negative for software freedom.

    Posted on Wednesday 03 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2010-02-22: SCALE 8x Highlights

    I just returned today (unfortunately on an overnight flight, which always causes me to mostly lose the next day to sleep problems) from SCALE 8x. I spoke about GPL enforcement efforts, and also was glad to spend all day Saturday and Sunday at the event.

    These are my highlights of SCALE 8x:

    • Karsten Wade's keynote was particularly good. It's true that some of his talk was the typical messaging we hear from Corporate Open Source PR people (who are usually called “Community Managers”, although Karsten calls himself a “Senior Community Gardener” instead). Nevertheless, I was persuaded that Karsten does seek to educate Red Hat internally to have the right attitude about FLOSS contribution. In particular, he opened with an illuminating literary analogy (from Chris Grams) about Tom Sawyer manipulating his acquaintances into paying him to do his work. I hadn't seen Chris' article when it was published back in September, and found this (“new to me”) analogy quite compelling. This is precisely the kind of activity that I see happening with problematic copyright assignments. I think the Tom Sawyer analogy fits that situation aptly, because a contributor first does some work without compensation (the original patch), and then is manipulated even further into giving up something of value (signing away copyrights for nothing in return) for the mere honor of being able to do someone else's work. It was no surprise that, after Karsten's keynote, jokes abounded in the SCALE 8x hallways all weekend that we should nickname Canonical's new COO, Matt Asay, the “Tom Sawyer of Open Source”. I am sure Red Hat will be happy that their keynote inspired some anti-Canonical jokes.
    • Another Red Hat employee (who is also my good friend and former cow-orker), Richard Fontana, also gave an excellent talk that many missed, as it was scheduled in the very final session slot. Fontana put forward more details about his theory of the “Lex Mercatoria” of FLOSS and how it works in resolving licensing conflicts and incompatibility inside the community. He contrasted it specifically against the kinds of disputes that happen in normal GPL violations, which are primarily perpetrated by those outside the FLOSS world. I agreed with Fontana's conclusions, but his argument seemed to assume that these in-community licensing issues were destabilizing. I asked him about this, pointing out that the community is really good at solving these issues before they destabilize anything. Fontana agreed that they do get easily resolved, and revised his point to say that the main problem is that distribution projects (like Debian and Fedora) hold the majority of responsibility for resolving these issues, and that upstreams need to take more responsibility on this. (BTW, Karsten was also in the audience for Fontana's talk, and has written a more detailed blog post about it.) Fontana noted to me after his talk that he thought I wasn't paying attention, as I was using my Android phone a lot during the talk. I was actually dent'ing various points from his talk. I realized when Fontana expressed this concern that perhaps we as speakers have to change our views about what it means when people seem focused on computing devices during a talk. (I probably would have thought the same as Fontana in that situation.) The online conversation during a talk is a useful part of the interaction. Stormy Peters even once suggested before a talk at Linux World that we should have a way to put dents up on the screen as people comment during a talk. I may actually try to find a way to do this next time I give a talk.
    • I also saw Brian Aker's presentation about Drizzle, which is a fork of the MySQL codebase that he began inside Sun and now maintains further (having left Sun before the Oracle merger completed). I was impressed to see how much Drizzle has grown in just a few years, and how big its user base is. (Being a database developer, Brian thinks user numbers in the tens of thousands are just a start, but there are many FLOSS projects that would be elated even to max out at tens of thousands of users. While I admire his goals of larger user bases, I think they've already accomplished a lot.) I talked with Brian for an hour after his talk all about the GPL and the danger of single-copyright-held business models. He's avoided this for Drizzle, and it sounds like none of the consulting companies sprouting up around the user community has too much power over the project. (Brian also blogged a summary of some of the points in the discussion we had.)
    • Because it directly conflicted with Brian's talk, I missed my friend and colleague Karen Sandler's talk about trademarks, but I hear it went well. Karen told me not to attend anyway, since she said I already knew everything it contained, and that she would have gone to Brian's talk too if my talk had been scheduled against it. She did, however, make a brief appearance at my talk, so I feel bad that my post-talk chat with Brian made it impossible for me to do the same for her talk.
    • I spoke extensively with Matt Kraai in the Debian booth. It was great to meet Matt for the first time, as he had previously volunteered on the Free Software Directory project when I was at FSF, and he's also contributed a lot of development effort to BusyBox. It's always strange but great to finally meet someone in person you've occasionally been in touch with for nearly a decade online.
    • Don Armstrong was also in the Debian booth. I got to know Don when we served on one of the GPLv3 discussion committees together, and I hadn't been in touch with him regularly since the GPLv3 process ended. He's continuing to do massive amounts of volunteer work for Debian, including being in charge of the bug tracking system! I asked him for some ideas in how to help Debian more, and he immediately mentioned the Debian/GNOME Bug Weekend coming up this weekend. I'm planning to get involved this weekend, and I hope others will too.
    • Finally, I had a number of important meetings with lots of people in the FLOSS world, such as Tarus Balog, Michael Dexter, Bob Gobeille, Deb Nicholson, Rob Savoye and Randal Schwartz. Ok, enough name-dropping. (BTW, Tarus has written about his trip as well, and mentioned our ongoing copyright assignment debate. Tarus argues that he can do non-promise copyright assignment in OpenNMS and still avoid the normal Open Core shareware-like outcomes, which he dubs “fauxpen source” for “fake open source”. Time will tell.)

    SCALE is really the gold standard of community-run, local FLOSS conferences. It is the inspiration for many of the other regional events such as OLF, SELF, and the like. A major benefit of these regional events is that while they draw speakers from all over the country, the average attendee is a local who usually cannot travel to the better-known events like OSCON.

    Posted on Monday 22 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-17: Computer Science Education Benefits from FLOSS

    I read with interest today when Linux Weekly News linked to Greg DeKoenigsberg's response to Mark Guzdial's ACM Blog post, The Impact of Open Source on Computing Education (which is mostly a summary of his primary argument on his personal blog). I must sadly admit that I was not terribly surprised to read such a post from an ACM-affiliated academic that speaks so negatively of FLOSS's contribution to Computer Science education.

    I mostly agree with (and won't repeat) DeKoenigsberg's arguments, but I do have some additional points and anecdotal examples that may add usefully to the debate. I have been both a student (high school, graduate and undergraduate) and teacher (high school and TA) of Computer Science. In both cases, software freedom was fundamental and frankly downright essential to my education and to that of my students.

    Before I outline my copious disagreements, though, I want to make abundantly clear that I agree with one of Guzdial's primary three points: there is too much unfriendly and outright sexist (although Guzdial does not use that word directly) behavior in the FLOSS community. This should not be ignored, and needs active attention. Guzdial, however, is clearly underinformed about the extensive work that many of us are doing to raise awareness and address that issue. In software development terms: it's a known bug, it's been triaged, and development on a fix is in progress. And, in true FLOSS fashion, patches are welcome, too (i.e., get involved in a FLOSS community and help address the problem).

    However, my disagreement with Guzdial begins with his implication that this sexism problem is unique to FLOSS. As an undergraduate Computer Science major, it was quite clear to me that a sexist culture was prevalent in my Computer Science department and in CS in general. This had nothing to do with FLOSS culture, since there was no FLOSS in my undergraduate department until I installed a few GNU/Linux machines. (See below for details.)

    Computer Science as a whole unfortunately remains heavily male-dominated with problematic sexist overtones. It was common when I was an undergraduate (in the early 1990s) that some of my fellow male students would display pornography on the workstation screens without a care about who felt unwelcome because of it. Many women complained that they didn't feel comfortable in the computer lab, and the issue became a complicated and ongoing debate in our department. (We all frankly could have used remedial sensitivity training!) In graduate school, a CS professor said to me (completely straight-faced) that women didn't major in Computer Science because most women's long term goals are to have babies and keep house. Thus, I simply reject the notion that this sexism and lack of acceptance of diversity is a problem unique to FLOSS culture: it's a CS-wide problem, AFAICT. Indeed, the CRA's Taulbee Survey shows (see PDF page 10) that only 22% of the tenure track CS faculty in the USA and Canada are women, and only 12% of the full professors are. In short, Guzdial's corner of the computing world shares this problem with mine.

    Guzdial's second point is the most offensive to the FLOSS community. He argues that volunteerism in FLOSS sends a message that no good jobs are available in computing. I admit that I have only anecdotal evidence to go on (of course, Guzdial quotes no statistical data, either), but in my experience, I know that I and many others in FLOSS have been successfully and gainfully employed precisely because of past volunteer work we've done. Ted Ts'o is fond of saying: Thanks to Linux, my hobby became my job and my job became my hobby. My experience, while neither as profound nor as important as Ted's, is somewhat similar.

    I downloaded a copy of GNU/Linux for the first time in 1992. I showed it to my undergraduate faculty, and they were impressed that I had a Unix-like system running on PC hardware, and they encouraged me to build a computer lab with old PC's. I spent the next three and a half years as the department's volunteer0 sysadmin and occasional developer, gaining essential skills that later led me to a lucrative career as a professional sysadmin and software developer. If the lure of software freedom advocacy's relative poverty hadn't sidetracked me, I'd surely still be on that same career path.

    But that wasn't even the first time I developed software and got computers working as a volunteer. Indeed, every computer geek I know was compelled to write code and do interesting things with computers from the earliest of ages. We didn't enter Computer Science because we wanted to make money from it; we make a living in computing because we love it and are driven to do it, regardless of how much we get paid for it. I've observed that dedicated, smart people who are really serious about something end up making a full-time living at that something, one way or the other.

    Frankly, there's an undertone in Guzdial's comments on this point that I find disturbing. The idea of luring people to Computer Science through job availability is insidious. I was an undergraduate student right before the upward curve in CS majors, and a graduate student during the plateau (See PDF page 4 of the Taulbee Survey for graphs). As an undergraduate, I saw the very beginnings of people majoring in Computer Science “for the money”, and as a graduate student, I was surrounded by these sorts of undergraduates. Ultimately, I don't think our field is better off for having such people in it. Software is best when it's designed and written by people who live to make it better — people who really hate to go to bed with a bug still open. I must constantly resist the urge to fix any given broken piece of software in front of me lest I lose focus on my primary task of the moment. Every good developer I've met has the same urge. In my experience, when you see software developed by someone who doesn't have this drive, you see clearly that it's (at best) substandard, and (usually) pure junk. That's what we're headed for if we encourage students to major in Computer Science “for the money”. If students' passion is making money for its own sake, we should encourage them to be investment bankers, not software developers, sysadmins, and Computer Scientists.

    Guzdial's final point is that our community is telling newcomers that programming is all that matters. The only evidence Guzdial gives for this assertion is a pithy quote from Linus Torvalds. If Guzdial actually listened to interviews that Torvalds has given, Guzdial would hear that Torvalds cares about a lot more than just code, and spends most of his time in natural language discussions with developers. The Linux community doesn't just require code; it requires code plus a well-argued position of why the code is right for the users.

    Guzdial's primary point here, though, is that FLOSS ignores usability. Using Torvalds and the Linux community as the example here makes little sense, since “usability” of a kernel is about APIs for fellow programmers. Linus' kernel is the pinnacle of usability measured against the userbase who interacts with it directly. If a kernel is something non-technical users are aware of “using”, then it's probably not a very usable kernel.

    But Guzdial's comment isn't really about the kernel; instead, he subtly insults the GNOME community (and other GUI-oriented FLOSS projects). Usability work is quite expensive, but nevertheless the GNOME community (and others) desperately want it done and try constantly to fund it. In fact, very recently, there has been great worry in the GNOME community that Oracle's purchase of Sun means that various usability-related projects are losing funding. I encourage Guzdial to get in touch with projects like the GNOME accessibility and usability projects before he assumes that one offhand quote from Linus defines the entire FLOSS community's position on end-user usability.

    As a final anecdote, I will briefly tell the story of my year teaching high school. I was actively recruited (again, yet another job I got because of my involvement in FLOSS!) to teach a high school AP Computer Science class while I was still in graduate school in Cincinnati. The students built the computer lab themselves from scratch, which one student still claims is one of his proudest accomplishments. I had planned to teach only ‘A’ topics, but the students were so excited to learn, we ended up doing the whole ‘AB’ course. All but two of the approximately twenty students took the AP exam. All who took it at least passed, while most excelled. Many of them now have fruitful careers in computing and other sciences.

    I realize this is one class of students in one high school. But that's somewhat the point here. The excitement and the “do it yourself” inspiration of the FLOSS world pushed a random group of high school students into action to build their own lab and get the administration to recruit a teacher for them. I got the job as their teacher precisely because of my involvement in FLOSS. There is no reason to believe this success story of FLOSS in education is an aberration. More likely, Guzdial is making oversimplifications about something he hasn't bothered to examine fully.

    Finally, I should note that Guzdial used Michael Terry's work as a jumping off point for his comments. I've met, seen talks by, and exchanged email with Terry and his graduate students. I admit that I haven't read Terry's most recent papers, but I have read some of the older ones and am familiar generally with his work. I was thus not surprised to find that Terry clarified that his position differs from Guzdial's, in particular noting that we found that open source developers most certainly do care about the usability of their software, but that those developers make an error by focusing too much on a small subset of their userbase (i.e., the loudest). I can certainly verify that fact from the anecdotal side. Generally speaking, I know that Terry is very concerned about FLOSS usability, and I think that our community should work with him to see what we can learn from his research. I have never known Terry to be dismissive of the incredible value of FLOSS and its potential for improvement, particularly in the area of usability. Terry's goal, it seems to me, is to convince and assist FLOSS developers to improve the usability of our software, and that's certainly a constructive goal I do support.

    (BTW, I mostly used last names throughout this post because Mark, Michael, and Greg are relatively common names and I can think of a dozen FLOSS celebrities who have one of those first names. :)


    0Technically, I was “paid” in that I was given my own office in the department because I was willing to do the sysadmin duties. It was nice to be the only undergraduate on campus (outside of student government) with my own office.

    Posted on Wednesday 17 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-08: The New Era of Big Company Forks

    I was intrigued to read Greg Kroah-Hartman's analysis of what's gone wrong with the Android fork of Linux, and the discussion that followed on lwn.net. Like Greg, I am hopeful that the Android platform has a future that will work closely with upstream developers. I also have my own agenda: I believe Android/Linux is the closest thing we have to a viable fully FaiF phone operating system platform to take on the proprietary alternatives like the BlackBerry and the iPhone.

    I believe Greg's comments hint at a “new era” problem that the FLOSS community hasn't yet learned to solve. In the “old days”, we had only big proprietary companies like Apple and Microsoft that had little interest in ever touching copylefted software. They didn't want to make improvements and share them. Back then, as today, they preferred to consume all the permissively licensed Free Software they could, and to release and maintain proprietary forks for years.

    I'm often critical of Google, but I must admit Google is (at least sometimes) not afraid of dumping code on a regular basis to the public, at least when it behooves them to do it0. A source-available Android/Linux helps Google, because Google executives know the profit can be found in pushing proprietary user-space Android application programs that link to Google's advertising. They don't want to fight with Apple or Research in Motion to get their ads onto those platforms; they'll instead use Free Software to shift the underlying platform.

    So, in this case, the interests of software freedom align a bit with Google's for-profit motive. We want a fully FaiF phone operating system, one that also has a vibrant group of Free Software applications. While Google doesn't care a bit about Free Software applications on the phone, they need a readily available phone operating system so that many hardware phone manufacturers will adopt it. The FLOSS community and Google thus can work together here, in much the same way various companies have always helped improve GNU/Linux on the desktop because they thought it would foil their competitors (i.e., Microsoft and Apple).

    Yet, the problematic spot for FLOSS developers is Google doesn't actually need our development help. Sure, Google needs the FLOSS licenses we developed, and they need to get access to the upstream. But they have that by default; all that knowledge and code is public. Meanwhile, they can easily afford to have their engineers maintain Android's Linux fork indefinitely, and can more or less ignore Greg's suggestions for shepherding the code upstream. A small company with limited resources would have to listen to Greg, lest the endeavor run out of steam. But Google has plenty of steam.

    We're thus left appealing to Google's sense of decency, goodwill, collaboration and other software freedom principles that don't necessarily make an impact on their business. This can be a losing battle when communicating with a for-profit company (particularly a publicly traded one). They have neither a self-interested nor a for-profit reason to work with upstream; they can hire as many good Linux hackers as they need to keep their fork going.

    This new era problem is actually harder than the old problem. In other words, I can't simply write an anti-Google blog post here like I'd write an anti-Apple one. Google is releasing their changes, making them available. They even have a public git repository for (at least) the HTC Dream platform. True, I can and do criticize both Google and HTC for making some hardware interface libraries1 proprietary, but that makes them akin to NVidia, not Microsoft and Apple.

    I don't have an answer for this problem; I suggest only that our community get serious about volunteer development and improvement of Android/Linux. When Free Software started, we needed people to spend their nights and weekends writing Free Software because there weren't any companies and for-profit business models to pay them yet. The community even donated to Free Software charitable non-profits to sponsor development that served the public. The need for that hasn't diminished; it's actually increased. Now, there is more code than ever available under FaiF licenses, but the not-for-profit community resources to shepherd that code in a community-oriented direction are more limited than ever. For-profit employers are beginning to control the destiny of more community developers, and this will lead to more scenarios like the one Greg describes. We need people to step forward and say: I want to do what's right with this code for this particular userbase, not what's right for one company. I hope someone will see the value in this community-directed type of development and fund it, but in the meantime, it has my nights and weekends. Just about every famous FLOSS hacker today started with that attitude. We need a bit more of that to go around.

    (I don't think I can end a blog post on this topic without giving a little bit of kudos to a company I rarely agree with: Novell. As near as I can tell, despite the many negative things Novell does, they have created a position for Greg that allows him to do what's right for Linux with what (appears to be) minimal interference. They deserve credit for this, and I think more companies that benefit from FLOSS should create more positions like this. Or, even better, create such positions through non-profit intermediaries, as the companies that fund the Linux Foundation do for Linus Torvalds.)


    0Compare this to Apple, which is so allergic to copyleft licenses that they will do bizarre things that are clearly against their own interest and more or less a waste of time merely to avoid GPL'd codebases.

    1Updated: I originally wrote drivers here, but Greg pointed out that there aren't actually Linux drivers that are proprietary. I am not sure what to call these various .so files which are clearly designed to interface with the HTC hardware in some way, so I just called them hardware interface libraries.

    Posted on Monday 08 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-02: I Think I Just Got Patented.

    I could not think of anything but the South Park quote, They took our jobs! when I read today Black Duck's announcement of their patent, Resolving License Dependencies For Aggregations of Legally-Protectable Content.

    I've read through the patent, from the point of view of someone skilled in this particular art. In fact, I'm specifically skilled in two distinct arts related to this patent: computer programming and Free Software license compatibility analysis. It's from that perspective that I took a look at this patent.

    (BTW, the thing to always remember about reading patents is that the really significant part isn't the abstract, which often contains pie-in-the-sky prose about what the patent covers. The claims are the real details of the so-called “invention”.)

    So, when I look closely at these claims, I am appalled to discover this patent claims, as a novel invention, things that I've done regularly, with a mix of my brain and a computer, since at least 1999. I quickly came to the conclusion that this is yet another stupid patent granted by the USPTO that it would be better to just ignore.

    Indeed, ever since Amazon's one-click patent, I've hated the inundation of “look what stupid patent was granted today” slashdot items. I think it's a waste of time, generally speaking, since the USPTO is granting many stupid software patents every single day. If we spend our time gawking and saying how stupid they are, we don't get any real work done.

    But, the (likely obvious) reason this caught my attention is that the patent covers activities I've done regularly for so long. It gives me this sick feeling in my stomach to read someone else claiming as an invention something I've done and considered quite obvious for more than a decade.

    I'm not a patent agent (nor do I want to be — spending a week of my life studying for a silly exam to get some credential hasn't been attractive to me since I got my Master's degree), but honestly, I can't see how this patented process isn't obvious to everyone skilled in the arts of FLOSS license evaluation and computer programming. Indeed, the process described is so simple-minded, that it's a waste of time in my view to spend time writing a software system to do it. With a few one-off 10-line Perl programs and a few greps, I've had a computer assist me with processes like this one many times since the late 1990s.
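
    Just to illustrate the kind of one-off scan I mean (and emphatically not Black Duck's method, nor what the patent claims), here is a minimal sketch in Python rather than Perl; the notice patterns and the single compatibility warning are simplified assumptions for illustration only:

        #!/usr/bin/env python3
        # Minimal sketch: walk a source tree, grep for a few common license
        # notices, and flag one well-known combination that deserves a closer
        # human look.  The patterns and the single rule below are simplified
        # assumptions, not a real license-compatibility analysis.
        import os, re, sys

        LICENSE_PATTERNS = {
            'GPL-2.0-only': re.compile(r'version 2 of the GNU General Public License'),
            'GPL-3.0':      re.compile(r'version 3 of the GNU General Public License'),
            'Apache-2.0':   re.compile(r'Apache License,? Version 2\.0'),
        }

        def licenses_in_tree(top):
            found = set()
            for dirpath, _, filenames in os.walk(top):
                for name in filenames:
                    try:
                        with open(os.path.join(dirpath, name), errors='ignore') as f:
                            head = f.read(4096)   # notices usually sit near the top
                    except OSError:
                        continue
                    for tag, pattern in LICENSE_PATTERNS.items():
                        if pattern.search(head):
                            found.add(tag)
            return found

        if __name__ == '__main__':
            found = licenses_in_tree(sys.argv[1] if len(sys.argv) > 1 else '.')
            print('licenses noticed:', ', '.join(sorted(found)) or 'none')
            if {'GPL-2.0-only', 'Apache-2.0'} <= found:
                print('warning: GPL-2.0-only plus Apache-2.0 in one program needs a human look.')

    That really is the whole trick: find the notices, then apply judgment about how they combine, and the judgment is the part no script (patented or otherwise) does for you.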

    I do feel some shame that I've now contributed to the “hey, everyone, let's gawk at this silly pointless surely-invalid patent” rant. I guess that I have new sympathy for website designers who were so personally offended by the Amazon one-click patent. I can now confirm first-hand: it does really feel different when the patent claims seem close to an activity you've engaged in yourself for many years prior to the patent application. That's when the horribleness of the software patent system really starts to hit home.

    The saddest part, though, is that Black Duck again shows itself as a company whose primary goal is to prey on people's fear of software freedom. They make proprietary software and acquire software patents with the primary goal of scaring people into buying stuff they probably don't need. I've spent a lot more time working regularly on FLOSS license compliance than anyone who has ever worked at Black Duck. Simply put, coming into (and staying in) compliance is a much simpler process than they say, and can be done easily without the use of overpriced proprietary analysis of codebases.

    Posted on Tuesday 02 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-01: Not All Copyright Assignment is Created Equal

    In an interview with IT Wire, Mark Shuttleworth argues that all copyright assignment systems are equal, saying further that what Intel, Canonical and other for-profit companies ask for in the process is the same as what Free Software non-profit organizations like the Free Software Foundation ask for.

    I've written about this before, and recently quit using Ubuntu in part because of Canonical's assignment policies (which are, as Mark correctly points out, not that different from other for-profit companies' assignment forms).

    However, it's quite disingenuous for companies to point to the long standing tradition of copyright assignment to the FSF as a justification for their own practices. There are two key differences that people like Shuttleworth constantly gloss over or outright ignore:

    • FSF promises to never make their software proprietary. Shuttleworth claims that All copyright assignment agreements empower dual licensing, and relicensing, but that is simply a false statement if you include FSF in the “All”. FSF promises to never proprietarize its versions of the software assigned to it and always release its versions of the software under Free Software licenses.
    • Non-profits have a different duty to the public. For-profit companies have one duty: to make money for their owners and/or shareholders. Non-profit organizations, by contrast, are chartered to carry out the public good. Therefore, they cannot liberally ignore what's in the public good just because it makes some money. An organization like FSF, which has a public charter that explicitly says that it seeks to advance software freedom, would fail to carry out its public mission if it engaged in proprietary relicensing.

    It seems that Mark Shuttleworth wants to confuse us about copyright assignment so we just start signing away our software. In essence, companies try to bank on the goodwill created by the FSF copyright assignment process over the years to convince developers to give up their rights under GPL and hand over their hard work for virtually nothing in return. We shouldn't give in.

    I am not opposed to copyright assignment in the least; in fact, I support it in many cases. However, without assurances that otherwise copylefted software won't be relicensed as proprietary software, developers should treat a copyright assignment process with maximum skepticism. Furthermore, we should simply not tolerate attempts by for-profit companies to confuse the developer community by comparing as equals copyright assignment systems that are radically different in their intent, execution, and consequences.

    (Some useful additional reading: my “Open Core” Is the New Shareware, Michael Meeks' Thoughts on Copyright Assignment, Dave Neary's Copyright assignment and other barriers to entry, and this LWN article.)

    Posted on Monday 01 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2010-01-26: Proud to Be a Member of GNOME Foundation

    I suppose that I should have applied years ago to be a member of the GNOME Foundation. I have served since 2001 as the Free Software Foundation's representative on the GNOME Advisory Board, and have worked hard the last nine years to maintain a good relationship between the FSF and the GNOME Foundation. Indeed, I was very glad and willing when FSF asked me to continue to serve in this role as a volunteer after I left employment of the FSF in 2005.

    Regarding actual GNOME Foundation membership, though, I suppose that I previously felt under-qualified to apply since (a) my personal avoidance of all things GUI is widely known, and (b) obviously I haven't contributed any code or even documentation to GNOME. The most I've done on the development side is the occasional bug report over the years. Yet, ever since I was finally able to switch the non-technical users in my life over to GNU/Linux, I've been very grateful for and supportive of GNOME and its mission to create a Free Software desktop that everyone — not just computer geeks — can use effectively.

    Meanwhile, Leslie Hawthorn reminded me recently to stop perpetuating the false belief that the only useful FLOSS contribution is code and documentation. I think that it was her point that encouraged me to apply for GNOME Foundation membership. I was excited to receive my acceptance this morning.

    Many people in the GNOME community already know that I'm a good contact person if you have any issues that relate to the relationship between GNOME and GNU or between FSF and GNOME Foundation (these are, BTW, two clear and distinct sets of relationships). I'll take this opportunity to remind everyone that if you ever have a concern related to these relationships, I am always glad to assist in my diplomatic role between the two organizations (and projects).

    And, of course, as I have for years, I remain available to the GNOME community for the occasional licensing policy questions and/or GPL enforcement assistance.

    I very much hope to go to GUADEC this year, as I have not been in six years! However, I'm a bit worried about the tight scheduling between it and OSCON (which would mean at least two and a half weeks away in a row!), but I'll strive to be there.

    Posted on Tuesday 26 January 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-01-14: Back Home, with Debian!

    By the end of 2004, I'd been running Debian ‘testing’ on my laptop since around early 2003. For almost two years, I'd lived with periodic instability — including a week in the spring of 2003 when I couldn't even get X11 started — for the sake of using a distribution that maximally respected software freedom.

    I'd had no trouble with ‘potato’ for its two-year lifespan, but after 6-8 months of ‘woody’, I was backporting far too much and couldn't spare the time for the upkeep. Running ‘testing’ was the next best option, as I could pin myself for 3-6 months at a time on a particularly stable day and have a de-facto “release”. But I slowly became unable to spare the time for even that work, and I was ready to throw up my hands in surrender.

    At just about that time, a thing called ‘warty’ was released. I'd already heard about this company, Canonical, as they'd tried earlier that year to buy a domain name I technically own (canonical.org), but had long since given over to a group of old friends. (They of course had no interest in selling such a “hot property”). This new distribution, Ubuntu, was Debian-based, and when installed, it “felt” like Debian. Canonical was committed to a six-month release schedule, so I said to myself: well, if I have to ‘go corporate’ again, I might as well go to something that works like the distribution I prefer. And so, my five year stint as an Ubuntu user began.

    Of course, I hadn't always been a Debian user. I started in 1992 with SLS and quickly moved to Slackware. When the pain of that got too great, I went “corporate” for a while back then, too. I used Red Hat Linux from early 1996 until 1998. I ultimately gave up Red Hat because the distribution eventually became focused around the advancement of the company. They were happy to include lots of proprietary software — indeed, in the late 1990s, Red Hat CDs typically came with as many as two extra CDs filled with proprietary software. Red Hat (the company) had earlier made some efforts to appease us harder-core software-freedom folks. But, by the late 1990s, their briefly-lived RMS (aka Red Hat Means Source) distribution had withered completely. By then, I truly regretted my 1996 decision to go corporate, and fell in love quickly with Debian and its community-led, software-freedom-driven project. I remained a Debian user from 1998 until 2004.

    But, by the end of 2004, the pain of waiting for ‘sarge’ was great. So, for technical reasons only, “going corporate” again seemed like a reasonable trade-off. Ubuntu initially looked basically like Debian: ‘main’ and ‘universe’ were FaiF, ‘restricted’ was like ‘non-free’.

    Sadly, though, a for-profit, corporate-controlled distribution can never remain community-oriented. A for-profit company is eventually always going to put the acquisition of wealth above any community principle. So it has become with Ubuntu, in my view. The time has come (for me, at least) to go back to a truly community-oriented, software-freedom-respecting distribution. (Hopefully, I'll also never be tempted to leave again.)

    I didn't take this decision lightly, and didn't take it for only one reason. I've gone back to Debian for three (now seven) specific reasons:

    • UbuntuOne's server side system is proprietary software with no prospects of liberation. This has been exacerbated since Canonical now heavily focuses on strong integration of UbuntuOne into the desktop for the Lucid release. It seems clear that one of Canonical's top goals is to convince every Ubuntu user to rely regularly on new proprietary software and services.0
    • Canonical has become too aggressive with community-unfriendly copyright assignment policies. Copyright assignment on Free Software can be put to good uses. However, most for-profit corporations design their copyright assignment process primarily to circumvent the company's potential copyleft obligations; Canonical's copyright assignment is sadly typical in that regard. Even worse, Canonical's management has become increasingly more aggressive in pressuring the community into accepting such copyright assignment policies as a fait accompli. (I'll likely write more on this issue this year, but in the meantime, my “Open Core” Is the New Shareware, Michael Meeks' Thoughts on Copyright Assignment, Dave Neary's Copyright assignment and other barriers to entry, and this LWN article are all good “further reading” resources.)
    • The line between ‘restricted’ and ‘main’ has become far too blurry. I was very glad when I first saw Ubuntu's “you're about to install restricted drivers” warning window, and I find that a good way to deal with the issue. However, there are many times (particularly during initial install) when Ubuntu doesn't even inform the user that proprietary software has been installed. I realize that there's a reasonable trade-off between (a) making someone's hardware work (so they don't think Microsoft is better merely because “it works”) and (b) having a fully FaiF system. However, this trade-off is only reasonable when the users are told clearly that they own hardware made by vendors opposed to software freedom. If the users never know, how will they know what hardware to avoid in the future?
    • Updated on 2010-01-19: This one is less of an issue to me than the others, but it shows the same pattern of Let's do more proprietary software on our platform that Red Hat went through in the 1990s. Namely, Canonical is now directly encouraging customers to run proprietary software on Ubuntu. (Updated on 2010-02-03: it turns out Canonical was already doing this a long time ago but I didn't know about it until 2010-01-19. (Thanks to J.B. Nicholson-Owens for the info on this.))
    • Updated on 2010-01-25: osamak kindly pointed out that Canonical also has plans to offer a facility for installing third-party proprietary software, called the “Software Center”. This appears to be similar to services that help install proprietary software on GNU/Linux systems, such as Linspire's Click-and-Run system.
    • Updated on 2010-02-06: Canonical has named Matt Asay its COO. Matt has often stated that sometimes proprietary software is a better option for customers and believes that software freedom, as a political and moral cause, should be given up, in favor of pragmatically providing proprietary solutions whenever it is convenient. Specifically, in Matt's own words:
    Sometimes, after all, an open-source project is absolutely the wrong choice for a customer … The path forward is open source, not free software. Sometimes that openness will mean embracing Microsoft in order to meet a customer's needs.
    I would not want to run a distribution led by someone who believes proprietary software and FLOSS are equally legitimate. As a side note, I also find it quite bizarre that Canonical would hire someone to run its operations whose past statements clearly disagree with closing Ubuntu Bug 1. (Also, Matt Asay said in an interview that Canonical has a goal of deploying more proprietary application software.)
    • Updated on 2010-02-17, 2010-04-21: After the (very good) news 13 months ago that Canonical would release LaunchPad under AGPLv3, Canonical abandoned the authentication and login system for LaunchPad (and many other Ubuntu/Canonical online systems), and replaced it with proprietary software, but then released it in April 2010.

    (Updated on 2010-02-17: As can be seen above, my mere list of three reasons posted just one month ago has now more than doubled! It's as if Canonical made a 2010 plan to “do less software freedom”, and is executing it with amazing corporate efficiency. As Queen Gertrude says in Hamlet, One woe doth tread upon another's heel, so fast they follow.)

    When I consider all this and take a step back to look at the status of the major distributions, my honest assessment is this: among the two primary corporate-controlled-but-dabbling-in-community-orientation distributions (aka Fedora and Ubuntu), Fedora is clearly much more software-freedom-friendly. Nevertheless, since I've twice gone corporate and ultimately regretted it, I decided it was time to go back home — back to Debian.

    So, during the last week of 2009, I took nearly two full days off to reinstall and configure my laptop from scratch with lenny. I've thus been back on Debian since 2010-01-01. Twelve days in, I am very impressed. Really, all the things I liked about Ubuntu are now available upstream as well. This isn't the distribution I left in 2004; it's much better, all while being truly community-oriented and software-freedom-respecting. It's good to be home. Thank you, Debian developers.


    0For more information on the danger that proprietary network services pose to software freedom, please see the Franklin Street Statement.

    Posted on Thursday 14 January 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2009

December

  • 2009-12-14: Litigation filed against Various GPL Violators

    I probably won't comment too much on the specifics at this point, but I wanted to make sure everyone saw that Software Freedom Conservancy (together with Erik Andersen) filed a lawsuit against fourteen GPL violators today. A PDF copy of the complaint is available.

    Posted on Monday 14 December 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-12-10: Thanks to Rafael Rivera, an Excellent GPL Compliance Engineer

    I'd like to congratulate Rafael Rivera on his successful GPL compliance work regarding the Microsoft WUDT software, which is apparently used to make ISOs from stuff you download from Microsoft.

    I'm of course against the idea of using Microsoft Windows, and why you'd ever want to make an ISO out of some Microsoft Windows stuff is beyond my comprehension. However, Rafael identified that the WUDT was based on some GPL'd software, and as such he was quite correct in demanding that Microsoft comply with the terms of the GPL (as it has done before, for example, with its Windows Services for Unix). Rafael was the first to discover and point out this violation. More importantly, he also did what we in the GPL enforcement world call the “compliance engineering work”, which includes confirming the violation exists by technical measures, and checking that the complete and corresponding source code actually builds and installs the binary as expected.

    The importance of that latter part of the work is unfortunately not often recognized. GPL is designed to hook up the legal requirements of a copyright license with certain technical requirements needed to allow downstream users to modify and improve the software. This is the true innovation of the GPL: to make copyright law into a tool that gives users the actual means to improve and redistribute modified versions of software.

    When we check to see if someone is in compliance, it's not merely about seeing if they dumped a big pile of source onto the world. We also have to check carefully that the source builds and that the process produces a working binary that can be installed by the user. That's why GPLv2 requires scripts to control compilation and installation of the executable, and why GPLv3 clarifies that requirement even further into the formally defined Installation Information.
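
    To make the mechanical half of that check concrete, here is a toy sketch of the sort of thing a compliance engineer automates; it assumes the source release ships a top-level build script and that we know which binary it is supposed to produce (the names build.sh and busybox below are purely illustrative assumptions). Whether the rebuilt binary actually matches and installs on the shipping device is the part that still needs a human:

        #!/usr/bin/env python3
        # Toy sketch of the mechanical part of a C&CS check: run the build
        # script the source release is supposed to contain, then confirm the
        # binary it claims to produce actually appears.  File names here are
        # illustrative assumptions; real checks go much further than this.
        import os, subprocess, sys

        def check_ccs(source_dir, build_script='build.sh', expected_binary='busybox'):
            script = os.path.join(source_dir, build_script)
            if not os.path.exists(script):
                return 'FAIL: no build script shipped with the source release'
            result = subprocess.run(['sh', script], cwd=source_dir,
                                    capture_output=True, text=True)
            if result.returncode != 0:
                return 'FAIL: build script exited with status %d' % result.returncode
            if not os.path.isfile(os.path.join(source_dir, expected_binary)):
                return 'FAIL: build succeeded but %s was not produced' % expected_binary
            return 'OK: source builds and produces %s' % expected_binary

        if __name__ == '__main__':
            print(check_ccs(sys.argv[1]))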

    Thanks again to Rafael for doing this work. While everyone knows how often I fault Microsoft, I have to say they did a timely job in this particular case. A little under a month, from first reporting the problem to the violator to having complete and corresponding source code (or “C&CS”, as we GPL enforcement geeks call it) in our hands, is actually the best one can hope for. Microsoft should have known better than to screw this up after years of working with the GPL, but everyone makes mistakes, and the real measure of a company is how quickly they redress a mistake.

    Now if we could just get Microsoft to stop the more harmful mistake of attacking FLOSS with patents, but that's a tougher problem to solve…

    Posted on Thursday 10 December 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-12-06: The Anatomy of a Modern GPL Violation

    I've been thinking the last few weeks about the evolution of the GPL violation. After ten years of being involved with GPL enforcement, it seems like a good time to think about how things have changed.

    Roughly, the typical GPL violation tracks almost directly the adoption and spread of Free Software. When I started finding GPL violations, it was in a day when Big Iron Unix was still king (although it was only a few years away from collapse), and the GNU tools were just becoming state of the art. Indeed, as a sysadmin, I typically took a proprietary Unix system, and built a /usr/local/ filled with the GNU tools, because I hated POSIX tools that didn't have all the GNU extensions.

    At the time, many vendors were discovering the same frustrations I was as a sysadmin. Thus, the typical violation in those days was a third-party vendor incorporating some GNU tools into their products, for use on some Big Iron Unix. This was the age of the violating backup product; we frequently saw backup products that violated the GPL on GNU tar in those days.

    As times changed, and computers got truly smaller, the embedded Unix-like system was born. GNU/Linux and (more commonly) BusyBox/Linux were the perfect solutions for this space. What was once a joke on comp.os.linux.advocacy in the 1990s began to turn into a reality: it was actually nearly possible for Linux to run on your toaster.

    The first class of embedded devices that were BusyBox/Linux-based were the wireless routers. Throughout the 2000s, the typical violation was always some wireless router. I still occasionally see those types of products violating the GPL, but I think the near-constant enforcement done by Erik Andersen, FSF, and Harald Welte throughout the 2000s has led the wireless router violation to become the exception rather than the rule. That enforcement also led to the birth of the community-focused development of OpenWRT and DD-WRT, which all started from that first enforcement action that we (Erik, Harald, and FSF, where I was at the time) did together in 2002 to ensure the WRT54G source release.

    In 2009, there's a general purpose computer in almost every electronics product. Putting a computer with 8MB RAM and a reasonable processor in a device is now a common default. Well, BusyBox/Linux was always the perfect operating system for that type of computer! So, when you walk through the aisles of the big electronics vendors today, it's pretty likely that many of the devices you see are BusyBox/Linux ones.

    Some people think that a company can just get away with ignoring the GPL and the requirements of copyleft. Perhaps if your company has only five customers total, and none of them ask for source, your violation may never be discovered. But, if you produce a mass market product based on BusyBox/Linux, some smart software developer is going to eventually buy one. They are going to get curious, and when they poke, they'll see what you put in there. And, that developer's next email is going to be to me to tell me all about that device. In my ten years of enforcement experience, I find that a company's odds of “getting away” with a GPL violation are incredibly low. The user community eventually notices and either publicly shames the company (not my preferred enforcement method), or they contact someone like me to pursue enforcement privately and encourage the company in a friendly way to join the FLOSS community rather than work against it.
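
    For the curious, that first poke is usually nothing fancier than running strings over the firmware. A minimal sketch of the idea in Python follows; the signature list is a handful of illustrative examples, and a hit only means “look closer”, never “this is a violation”:

        #!/usr/bin/env python3
        # Minimal sketch of the curious developer's first poke: scan a firmware
        # image for tell-tale strings of well-known copylefted components.
        # The signatures are illustrative examples, not an exhaustive list.
        import re, sys

        SIGNATURES = {
            'Linux kernel': rb'Linux version \d+\.\d+',
            'BusyBox':      rb'BusyBox v\d+\.\d+',
            'GNU tar':      rb'GNU tar',
        }

        def scan(path):
            with open(path, 'rb') as f:
                image = f.read()
            return [name for name, pattern in SIGNATURES.items()
                    if re.search(pattern, image)]

        if __name__ == '__main__':
            hits = scan(sys.argv[1])
            if hits:
                print('likely copylefted components found:', ', '.join(hits))
                print('next step: politely ask the vendor for the corresponding source.')
            else:
                print('no obvious signatures found (which proves nothing).')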

    I absolutely love that so many companies have adopted BusyBox/Linux as their default platform for many new products. Since circa 1994, when I first saw the “can my toaster run Linux?” joke, I've dreamed of a time when it would be impossible to buy a mass-market electronics product without finding FLOSS inside. I'm delighted we've nearly reached that era during my lifetime.

    However, such innovation is made possible by the commons created by the GPL. I have dedicated a large portion of my adult life to GPL enforcement precisely because I believe deeply in the value of that commons. As I find violator after violator, I look forward to welcoming them to our community in a friendly way, and ask them to respect the commons that gave them so much, and give their code back to the community that got them started.

    Posted on Sunday 06 December 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2009-11-08: GPL Enforcement: Don't Jump to Conclusions, But Do Report Violations

    In one of my favorite movies, Office Space, Tom Smykowski (one of the fired employees) has a magic-eight-ball-style novelty product idea: a “Jump to Conclusions” mat. Sometimes, I watch discussions in the software freedom community and think that, as a community, we're all jumping around on one of these mats.

    I find that people are most likely to do this when something seems novel and exciting. I don't really blame anyone for doing it; I do it myself when I have discovered an exciting thing that's new to me, even if it's well known by others. But, often, this new thing is actually rather mundane, and it's better to check in with the existing knowledge about the idea before “jumping” to any conclusions. In other words, the best square on the mat for us to land on is the one that reads: Think again!

    Meanwhile, as some who follow my microblog know, I've been on a mission in recent months to establish just how common and mundane GPL violations are. Since 21 August 2009, I've been finding one new GPL violating company per day (on average) and I am still on target to find one per day for 365 days straight. When I tell this to people who are new to GPL enforcement, they are surprised and impressed. However, when I tell people who have done GPL enforcement themselves, they usually say some version of: Am I supposed to be impressed by that? Couldn't a monkey do that? Fact is, the latter are a little bit right: there are so many GPL violations that I might easily be able to go on finding one per day for two years straight.

    In short, GPL violations are common and everyday occurrences. I believe firmly they should be addressed, and I continue to dedicate much of my life to resolve them. However, finding yet another GPL violation isn't a huge and earth-shaking discovery. Indeed, it's what I was doing today to kill time while drinking my Sunday morning coffee.

    I don't mean to imply that I don't appreciate greatly when folks find new GPL violations. I think finding and reporting GPL violations is a very valuable service, and I wouldn't spend so much time finding them myself if I didn't value the work highly. But, the work is more akin to closing annoying bugs than it is to launching a paradigm-shifting FLOSS project. Closing bugs is an essential part of FLOSS development, but no one blogs about every single bug they close (although maybe we do microblog them ;).

    Having this weekend witnessed another community tempest about a potential GPL violation, I decided to share a few guidelines that I encourage everyone to follow when finding a GPL violation. (In other words, what follows are some basic guidelines for reporting violations; other such guides are also available at the FSF's site and at the now-defunct gpl-violations.org site.)

    • Assume the violation is an oversight or an accident by the violator until you have clear evidence that tells you differently. I'd say that 98% of the violations I've ever worked on since 1998 have been unintentional and due primarily to negligence, not malice.

    • Don't go public first. Back around late 1999, when I found my first GPL violation from scratch, I wanted to post it to every mailing list I could find and shame that company that failed to respect and cooperate with the software freedom community. I'm glad that I didn't do that, because I've since seen similar actions destroy the lines of communication with violators, and make resolution tougher. Indeed, I believe that if the Cisco/Linksys violations had not been a center of public ridicule in 2003 when I (then at the FSF) was in the midst of negotiating with them for compliance, we would not have ended up with such a long saga to resolution.

    • Do contact the copyright holders, or their designated enforcement agents. Since the GPL is a copyright license, if the violator fails to comply on their own, only the copyright holder (typically) has the power to enforce the license0. Here's a list of contact addresses that I know for reporting various violations (if you know more such addresses, please let me know and I'll add them here):

      If the GPL'd project you've found a violation on isn't on the list above, just find email addresses of people with commit access to the repository for the project or with email addresses in the MAINTAINERS or CONTRIBUTORS files. It's better not to post the violation to a public discussion list for the project, as that's just “going public”.

    • Never treat a “community violator” the same way as a for-profit violator. I believe there is a fundamental difference between someone who makes a profit during the act of infringement and someone who merely seeks to contribute as a volunteer and screws something up. There isn't a perfect line between the two — it's a spectrum. However, those who don't make any money from their infringement are probably just confused community members who misunderstood the GPL and deserve pure education and non-aggressive enforcement. Those who make money from the infringement deserve some friendly education too, of course, but ultimately they are making a profit by ignoring the rights of their users. I think these situations are fundamentally different, and deserve different tactics.

    • Once you've reported a violation, please be patient with those of us doing enforcement. There are always hundreds of GPL violations that need action, and there are very few of us engaged in regular and active enforcement. Also, most of us try to get compliance not just on the copyrights we represent, but all GPL'd software. (This behooves both the software freedom community and the violator, as the former wants to see broad compliance, and the latter doesn't want to deal with each copyright holder individually). Thus, it takes much time and effort to do each enforcement action. So, when you report a new violation, it might take some time for the situation to resolve.

    • Do try your best to request source from the violator on your own. While making the violation public doesn't help, inquiring privately does often help. If you have received distribution of a binary that you think is GPL'd or LGPL'd (or used a network service that you think is AGPL'd), do write to the violator (typically best to use the technical support channels) and ask for the complete and corresponding source code. Be as polite and friendly as possible, and always assume it is their intention to comply until you have specific evidence that they don't intend to do so.

    • Share as much good information with the violator as you can to encourage their compliance. My colleagues and I wrote A Practical Guide to GPL Compliance for just this purpose.

    We need a careful balance regarding GPL enforcement. Remember that the primary goal of the GPL is to encourage more software freedom in the world. For many violators, their first experience with FLOSS is an enforcement action. We therefore must ensure that enforcement action is reasonable and friendly. I view every GPL violator as a potential FLOSS contributor, and try my best to open every enforcement action with that attitude. I am human and thus sometimes become more frustrated with uncooperative violators than I should be. However, striving for kindness with violators only helps the image of the software freedom community.


    0In some situations, there are a few possibilities for users that exist if the copyright holder is unable or unwilling to enforce the GPL. We've actually recently seen an interesting successful enforcement by a user. I plan to blog in detail about this soon.

    Posted on Sunday 08 November 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-11-04: Android/Linux's Future and Advancement of Mobile Software Freedom

    Harald Welte knows more about development of embedded systems than I ever will. So, I generally defer to his views about software freedom development for embedded systems. However, as you can tell by that opening, I am setting myself up to disagree a little bit with him just this once on the topic. :)

    But first, let me point out where we agree: I think his recent blog post about what Android/Linux is not should be read by everyone interested in software freedom for mobile devices. (Harald's post also refers to a presentation by Matt Porter. I agree with Harald that the talk is worth looking at closely.) The primary point Matt and Harald both make is one that Stallman has actually made for years: Linux is an operating system kernel, not a whole system for a user. That's why I started saying Android/Linux to refer to this new phone platform. It's just the kernel, Linux, with a bunch of Java stuff on top. As Matt points out, it doesn't even use a common Linux-oriented C Library, such as uClibc or the GNU C Library; it uses a BSD-derived libc called Bionic.

    Indeed, my colleague Aaron Williamson discovered this fact quickly five months ago when he started trying to make a fully FaiF Android/Linux platform on the HTC Dream. I was amazed and aghast when he told me about adb and how there is no real shell on the device by default. It's not a GNU/Linux system, and that becomes quickly and painfully obvious to anyone who looks at developing for the platform. On this much, I agree with Harald entirely: this is a foreign system that will be very strange to most GNU/Linux hackers.

    Once I learned this fact, I immediately pondered: Why did Google build Android in this way? Why not make it GNU/Linux like the OpenMoko? I concluded that there are probably a few reasons:

    • First, while Linux is easy to cram into a small space, particularly with BusyBox and uClibc, if you want things both really small and have a nice GUI API, it's a bit tougher to get right. There is a reason the OpenMoko software stack was tough to get right and still has issues. Maemo, too, has had great struggles in its history that may not be fully overcome.
    • Second, Google probably badly wanted Java as the native application language, due to its ubiquity. I dislike Java more than the average person does, but there's no denying that nearly all undergraduate Computer Science students of the last ten years did most of their work in Java. Java is more foreign to most GNU/Linux developers than Python, Perl, Ruby and the like, but to the average programmer in the world, Java is the lingua franca.
    • Third, and probably most troubling, Google wanted to have as little GPL'd and LGPL'd stuff in the stack as possible. Their goal isn't software freedom; it is to convince phone carriers and manufacturers to make Google's proprietary applications the default mobile application set. The operating system is pure commodity to sell the proprietary applications. So, from Google's perspective, the more permissively licensed stuff in the Android/Linux base system, the better.

    Once you ponder all this, the obvious next question is: Should we bother with this platform, or focus on GNU/Linux instead? In fact, this very question comes up almost weekly over on the Replicant project's IRC channel (#replicant on freenode). Harald's arguments for GNU/Linux are good ones, and as I tell my fellow Replicant hackers, I don't begrudge anyone who wants to focus on that line of development. However, I think this is the place where I disagree with Harald: I think the freed Android code does have an important future in the advancement of software freedom.

    We have to consider carefully here, as Android/Linux puts us in a place software freedom developers have never been in before. Namely, we have an operating system whose primary deployments are proprietary, but the code is mostly available to us as Free Software, too. Furthermore, this operating system runs on platforms for which we don't yet have a fully working port of GNU/Linux. I think these factors make the decision to port GNU/Linux or fork the mostly FaiF release into nearly a coin-flip decision.

    However, when deciding where to focus development effort, I think the slight edge goes to Android/Linux. It's not a huge favorite — maybe 54% (i.e., for my fellow poker players, all-in preflop in HE, Android would be the pair, not the unsuited overcards :). Android/Linux deserves the edge primarily because Google and their redistributors (carriers and phone makers) will put a lot of marketing and work into gaining public acceptance of “Android” as an iPhone replacement. We can take advantage of this, and say: What we have is Android too, but you can modify and improve it and run more applications not available in the Android Market! Oh, and if you really really do want that proprietary application from the Market, those will run on our system, too (but we urge you not to use proprietary software). It's simply going to be easier to get people to jailbreak their phones and install a FaiF firmware if it looks almost identical to the one they have, but with a few more features they don't have already.

    So, if porting GNU/Linux and/or BusyBox/Linux to strange new worlds is your hobby, then by all means make it run on the HTC Dream too. In fact, as a pure user I'll probably prefer it once it's ready for prime time. However, I think the strategic move to get more software freedom in the world is to invest development effort into a completely freedom-respecting fork of Android/Linux. (And, yet another shameless plug, we need driver hacker help on Replicant! :).

    Posted on Wednesday 04 November 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

October

  • 2009-10-26: Software Freedom on Mobile Devices

    I agree pretty completely with Harald Welte's comments regarding Symbian. I encourage everyone to take a look at his comments.

    We are in a very precarious time with regard to the freedom of mobile devices. We currently have no truly Free Software operating system that does the job, and there are multiple companies trying to get our attention with code releases that have some Free Software in them. None of these companies have pro-software-freedom motives about these issues (obviously, they are for-profit companies, who focus solely on their own profits). So, we have to carefully analyze what these proprietary software companies are up to, why they are releasing some code, and determine whether we'll be successful forking these platforms to build a phone platform that fully respects software freedom.

    We thus must take care not to burn our developer time on likely hopeless codebases. Harald's analysis convinces me that Symbian is such a hopeless codebase. They haven't released software we can build for any known phone for sale, and we don't have a compiler that can build the stuff. It's also under a license that isn't a bad one by any means, but it is not a widely used license for operating system software. Symbian's release, thus, is purely of academic interest to historians who might want to study what phone software looked like at the turn of the millennium before the advent of Linux-based phones.

    Currently, given the demise of mass-market OpenMoko production, our best hope, in my opinion, is the HTC Dream running a modified version of Android/Linux. We don't have 100% Free Software even for that yet, but we are actively working on it, and the list of necessary-to-work proprietary components is down to two libraries. Plus, the Maemo software (and the new device it runs on, not even released yet) is the only other option, and it has quite an extensive list of proprietary components. As far as we can tell currently, the device may even be unusable without a large amount of proprietary software.

    Even so, Android/Linux isn't a Dream (notwithstanding the name of the most widely used hardware platform). It's developed generally by a closed community, who throw software over the wall when they see fit, and we'll have to maintain forks to really make a fully Free Software version. But this is probably going to be true of any Free Software phone platform that a company releases anyway.

    I'll keep watching and expect my assessment will change if facts change. However, unless I see that giant laundry list of proprietary components in Maemo decreasing quickly, I think I'll stick with the least of all these evils, Android/Linux on the HTC Dream. It's by far the closest to having a fully free software platform. Since the only way to get us to freedom is to replace proprietary components one-by-one, picking the closest is just the best path to freedom. At the very least, we should eliminate platforms for which the code can't even be compiled!

    [ PC was kind enough to make a Belorussian translation of this blog post. I can't speak to its accuracy, of course, since I don't know the language. :) ]

    Posted on Monday 26 October 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-10-16: “Open Core” Is the New Shareware

    [ I originally wrote the essay below centered around the term “Open Core”. Even though I say below that the term is somewhat meaningless, I later realized the term was so problematic that it should be abandoned entirely, in favor of the clearer term “proprietary relicensing”. However, since this blog post was widely linked to, I've nevertheless left the text as it originally was in October 2009. ]

    There has been some debate recently about so-called “Open Core” business models. Throughout the history of Free Software, companies have loved to come up with “innovative” proprietary-like ways to use the FLOSS licensing structures. Proprietary relicensing, a practice that I believe has proved itself to have serious drawbacks, was probably the first of these, and now Open Core is the next step in this direction. I believe the users embracing these codebases may be ignoring a past they're condemned to repeat.

    Like most buzzwords, Open Core has no real agreed-upon meaning. I'm using it to describe a business model whereby some middleware-ish system is released by a single for-profit corporate copyright holder, who requires copyright-assigned changes back to the company, and that company sells proprietary add-ons and applications that use the framework. Often, the model further uses the GPL to forbid anyone but the copyright-holding company to make such proprietary add-on applications (i.e., everyone else would have to GPL their applications). In the current debate, some have proposed that a permissive license structure can be used for the core instead.

    Ultimately, “Open Core” is a glorified shareware situation. As a user, you get some subset of functionality, and may even get the four freedoms with regard to that subset. But, when you want the “good stuff”, you've got to take a proprietary license. And, this is true whether the Core is GPL'd or permissively licensed. In both cases, the final story is the same: take a proprietary license or be stuck with cripple-ware.

    This fact remains true whether the Open Core is under a copyleft license or a permissive one. However, I must admit that a permissive license is more intellectually honest to the users. When users encounter a permissive license, they know what they are in for: they may indeed encounter proprietary add-ons and improvements, either from the original distributor or a third party. For example, Apple users sadly know this all too well; Apple loves to build on a permissively licensed core and proprietarize away. Yet, everyone knows what they're getting when they buy Apple's locked down, unmodifiable, and programmer-unfriendly products.

    Meanwhile, in more typical “Open Core” scenarios, the use of the GPL is actually somewhat insidious. I've written before about how the copyleft is a tool, not an end in itself. Like any tool, it can be misused or abused. I think using the GPL as a tool for corporate control over users, while legally permissible, is ignoring the spirit of the license. It creates two classes of users: those precious few that can proprietarize and subjugate others, and those that can't.1

    This (ab)use of GPL has led folks like Matt Aslett to suggest that the permissive licensing solution would serve this model better. While I've admitted such a change would bring some increased intellectual honesty, I don't think it's the solution we should strive for to solve the problem. I think Aslett's completely right when he argues that GPL'd “Open Core” became popular because it's Venture Capitalists' way of making peace with freely licensed copyrights. However, heading to an Apple-like permissive-only structure only serves to make more Apple-like companies, and that's surely not good for software freedom either. In fact, the problem is mostly orthogonal to licensing. It's a community building problem.

    The first move we have to make is simply to give up the idea that the best technology companies are created by VC money. This may be true if your goal is to create proprietary companies, but the best Free Software companies are the small ones, 5-10 employees, that do consulting work and license all their improvements back to a shared codebase. Projects from low-level technology like Linux and GCC to higher-level technology like Joomla all show that this structure yields popular and vibrant codebases. The GPL was created to inspire business and community models like these examples. The VC-controlled proprietary relicensing and “Open Core” models are manipulations of the licensing system. (For more on this part of my argument, I suggest my discussion on Episode 0x14 of the (defunct) Software Freedom Law Show.)

    I realize that it's challenging for a community to create these sorts of codebases. The best way to start, if you're a small business, is to find a codebase that gets you 40% or so toward your goal and start contributing to it with your own copyrights, licensed under GPL. Having something that gets you part of the way will make it easier to start your business on a consulting basis without VC, and allow you to be part of one of these communities instead of trying to create an “Open Core” community you can exploit with proprietary licensing. Furthermore, the fact that you hold copyright alongside others will give you a voice that must be heard in decision-making processes.

    Finally, if you find an otherwise useful single-corporate-copyright-controlled GPL'd codebase from one of these “Open Core” companies, there is something simple you can do:

    Fork! In essence, don't give in to pressure by these companies to assign copyright to them. Get a group of community developers together and maintain a fork of the codebase. Don't be mean about it, and use git or another DVCS to keep tracking branches of the company's releases. If enough key users do this and refuse to assign copyright, the good version will eventually become the community one rather than the company-controlled one.
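
    To make the mechanics concrete, here is a minimal sketch of such a tracking setup, written as a small Python script that drives git purely for illustration. The remote name, repository URL, and branch names are hypothetical placeholders, not any particular project's layout:

        #!/usr/bin/env python3
        """Minimal sketch of maintaining a community fork that tracks a vendor's
        public releases.  Run inside an existing clone of the codebase.  The
        remote name, URL, and branch names are hypothetical examples."""
        import subprocess

        def git(*args):
            # Run a git command, stopping at the first failure.
            subprocess.run(["git", *args], check=True)

        # One-time setup: point a "vendor" remote at the company's canonical repository.
        git("remote", "add", "vendor", "https://example.com/company/project.git")
        git("fetch", "vendor")

        # The community does its own work on its own branch, started from the vendor's code.
        git("checkout", "-b", "community-main", "vendor/master")

        # Whenever the company publishes a new release, fold it into the community branch.
        git("fetch", "vendor")
        git("merge", "vendor/master")

    The design point is simply that the company's releases keep flowing into the community branch through ordinary merges, so nothing the company publishes is lost, while the community's own history (and its contributors' copyrights) stays under the community's control.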

    My colleague Carlo Piana points out a flaw in this plan, saying the ant cannot drive the elephant. While I agree with Carlo generally, I also think that software freedom has historically been a little bit about ants driving elephants. These semi-proprietary business models are thriving on the fundamental principle of a proprietary model: keep users from cooperating to improve the code on which they all depend. It's a prisoner's dilemma that makes each customer afraid to cooperate with the other for fear that the other will yield to pressure not to cooperate. As the fictional computer Joshua points out, this is a strange game. The only winning move is not to play.

    The software freedom world is more complex than it once was. Ten years ago, we advocates could tell people to look for the GPL label and know that the software would automatically be part of a freedom-friendly, software sharing community. Not all GPL'd software is created equal anymore, and while the right to fork remains firmly intact, whether such forks will survive, and whether the entity controlling the canonical version can be trusted, are other questions entirely. The new advice is: judge the freedom of your codebase not only on its license, but also on the diversity of the community that contributes to it.


    1I must put a fine point here that the only way companies can manipulate the GPL in this example is by demanding full copyright assignment back to the corporate entity. The GPL itself protects each individual contributor from such treatment by other contributors, but when there is only one contributor, those protections evaporate. I must further note that for-profit corporate assignment differs greatly from assignment to a non-profit, as non-profit copyright assignment paperwork typically includes broad legal assurances that the software will never be proprietarized, and furthermore, the non-profit's very corporate existence hinges on engaging only in activity that promotes the public good.

    Posted on Friday 16 October 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-10-11: Denouncing vs. Advocating: In Defense of the Occasional Denouncement

    For the last decade, I've regularly seen complaints when we harder-core software freedom advocates spend some time criticizing proprietary software in addition to our normal work preserving, protecting and promoting software freedom. While I think entire campaigns focused on criticism are warranted in only extreme cases, I do believe that denouncement of certain threatening proprietary technologies is a necessary part of the software freedom movement, when done sparingly.

    Denouncements are, of course, negative, and in general, negative tactics are never as valuable as positive ones. Negative campaigns alienate some people, and it's always better to talk about the advantages of software freedom than focus on the negative of proprietary software.

    The place where negative campaigns that denounce are simply necessary, in my view, is when the practice either (a) will somehow completely impede the creation of FLOSS or (b) has become, or is becoming, widespread among people who are otherwise supportive of software freedom.

    I can think quickly of two historical examples of the first type: UCITA and DRM. UCITA was a State/Commonwealth-level law in the USA that was proposed to make local laws more consistent regarding software distribution. Because the implications were so bad for software freedom (details of which are beyond the scope of this post but can be learned at the link), and because it was so unlikely that we could get the UCITA drafts changed, it was necessary to publicly denounce the law and hope that it didn't pass. (Fortunately, it only ever passed in my home state of Maryland and in Virginia. I am still, probably pointlessly, careful never to distribute software when I visit my hometown. :)

    DRM, for its part, posed an even greater threat to software freedom because its widespread adoption would require proprietarization of all software that touched any television, movie, music, or book media. There was also a concerted, widespread pro-DRM campaign from USA corporations. Therefore, grassroots campaigns denouncing DRM are extremely necessary even though they are primarily negative in operation.

    The second common need for denouncement arises when use of a proprietary software package has become acceptable in the software freedom community. The most common examples are usually specific proprietary software programs that have become (or seem about to become) an “all but standard” part of the toolset for Free Software developers and advocates.

    Historically, this category included Java, and that's why there were anti-Java campaigns in the Free Software community that ran concurrently with Free Software Java development efforts. The need for the former is now gone, of course, because the latter efforts were so successful and we have a fully FaiF Java system. Similarly, denouncement of Bitkeeper was historically necessary, but is also now moot because of the advent and widespread popularity of Mercurial, Git, and Bazaar.

    Today, there are still a few proprietary programs that quickly rose to the ranks of “must install on my GNU/Linux system” for all but the hardest-core Free Software advocates. The key examples are Adobe Flash and Skype. Indeed, much to my chagrin, nearly all of my co-workers at SFLC insist on using Adobe Flash, and nearly every Free Software developer I meet at conferences uses it too. And, despite excellent VoIP technology available as Free Software, Skype has sadly become widely used in our community as well.

    When a proprietary system becomes as pervasive in our community as these have (or looks like it might), it's absolutely time for denouncement. It's often very easy to forget that we're relying more and more heavily on proprietary software. When a proprietary system effectively becomes the “default” for use on software freedom systems, it means fewer people will be inspired to write a replacement. (BTW, contribute to Gnash!) It means that Free Software advocates will, in direct contradiction of their primary mission, start to advocate that users install that proprietary software, because it seems to make the FaiF platform “more useful”.

    Hopefully, by now, most of us in the software freedom community agree that proprietary software is a long term trap that we want to avoid. However, in the short term, there is always some new shiny thing. Something that appeals to our prurient desire for software that “does something cool”. Something that just seems so convenient that we convince ourselves we cannot live without it, so we install it. Over time, the short term becomes the long term, and suddenly we have gaping holes in the Free Software infrastructure that only the very few notice because the rest just install the proprietary thing. For example, how many of us bother to install Linux Libre, even long enough to at least know which of our hardware components need proprietary software? Even I have to admit I don't do this, and probably should.

    An old adage of software development is that software is always better if the developers of it actually have to use the thing from day to day. If we agree that our goal is ultimately convincing everyone to run only Free Software (and for that Free Software to fit their needs), then we have to trailblaze by avoiding running proprietary software ourselves. If you do run proprietary software, I hope you won't celebrate the fact or encourage others to do so. Skype is particularly insidious here, because it's a community application. Encouraging people to call you on Skype is the same as emailing someone a Microsoft Word document: it's encouraging someone to install a proprietary application just to work with you.

    Finally, I think the only answer to the FLOSS community celebrating the arrival of some new proprietary program for GNU/Linux is to denounce it, as a counterbalance to the fervor that such an announcement causes. My podcast co-host Karen often calls me the canary in the software coalmine because I am usually the first to notice something that is bad for the advancement of software freedom before anyone else does. In playing this role, I often end up denouncing a few things here and there, although I can still count on my two hands the times I've done so. I agree that advocacy should be the norm, but the occasional denouncement is also a necessary part of the picture.

    (Note: this blog post is part of an ongoing public discussion of a software program that is not too popular yet, but was heralded widely as a win for Free Software in the USA. I didn't mention it by name mainly because I don't want to give it more press than it's already gotten, as it is one of those programs that is becoming a standard GNU/Linux user application (at least in the USA), but hasn't yet risen to the level of ubiquity of the other examples I give above. Here's to hoping that it doesn't.)

    Posted on Sunday 11 October 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2009-07-29: Microsoft Releases GPL'd Software (Again): Does This Change Anything?

    Microsoft has received much undeserved press about their recent release of Linux drivers for their virtualization technology under GPLv2. I say “undeserved” because I don't particularly see why Microsoft should be lauded merely for doing something that is in their own interest that they've done before.

    Most people have forgotten that Microsoft once had a GPL-based product available for Windows NT. It was called Windows Services for UNIX, and AFAICT, remains available today (although perhaps they've transitioned in recent years to no longer include GPL'd software).

    This product was acquired by Microsoft when they purchased Softway Systems. The product was based on GCC, and included a variety of GNU system utilities ported to Windows. Microsoft was a compliant distributor of this software for years, right during the time when they were calling the GPL an unAmerican cancerous virus that eats up software like PacMan. The GPL is not a new license to Microsoft; they only pretend that it is to give bad press to the GPL or to give good press to themselves.

    Another thing that's not new to Microsoft is that they have no interest in contributing to Free Software unless it makes their proprietary software more desirable. In my old example above, they hoped to entice developers who preferred a Unix development environment to switch to Windows NT. In the recent Linux driver release, they seek to convince developers to switch from Xen and KVM to their proprietary virtualization technology.

    In fact, the only difference in this particular release is that, unlike in the case of Softway's software, Microsoft was apparently (according to Steve Hemminger) out of compliance briefly. According to Steve, Microsoft distributed binaries linked to various GPL parts.

    Meanwhile, Sam Ramji claimed that Microsoft was already planning to release the software before Hemminger and Greg K-H contacted them. I do believe Sam when he says that there was already talk underway inside Microsoft about releasing the source before the Linux developers began their enforcement effort. However, that internal Microsoft talk doesn't mean that there wasn't a problem. As soon as one distributes the binaries of a GPL'd work, one must provide the source (or an offer therefor) alongside those binaries. Thus, if Microsoft released binaries and delayed in releasing source, there was a GPL violation.

    Like all GPL violations (and potential GPL violations), it's left to the copyright holders of the software to engage in enforcement. I think it's great that, according to Steve and related press coverage, the Linux developers used the most common enforcement strategy in the GPL community — quietly contact the company, inform them of their obligations, and help them in a friendly way into compliance. That process almost always works, and the fact that Microsoft came into compliance shows the value of our community's standard enforcement practice.

    Still, there is a more important item of note from a perspective of software freedom. This Linux driver — whether it is released properly under the GPL or kept proprietary in violation of the GPL — is designed to convince users to give up Free virtualization platforms like Xen and KVM and use Microsoft's virtualization technology instead. From that perspective, it matters little that it was released as Free Software: people should avoid the software and use platforms for virtualization that respect their freedom.

    Someday, perhaps, Microsoft will take a proper place among other large companies that actually contribute code that improves the general infrastructure of Free Software. Many companies give generally useful improvements back to Linux, GCC, and various other parts of the GNU/Linux system. Microsoft has never done this: they only contribute code when it improves Free Software interoperability with their proprietary technology. The day that Microsoft actually changes its attitude toward Free Software did not occur last week. Microsoft's old strategy stays the same: try to kill Free Software with patents, and in the meantime, convince as many Free Software users as possible to begin relying on Microsoft proprietary technology.

    Posted on Wednesday 29 July 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-07-17: Microsoft Patent Aggression Continues Against Free Software

    I think this news item from yesterday mostly speaks for itself, but I could not let the incident go by without blogging briefly about it.

    There has been so much talk in the last two weeks that Microsoft has changed with regard to its patent policy toward Free Software. We fool ourselves if we trust any of the window-dressing that Microsoft has put forward to convince us that we can trust them in this regard. Indeed, I spoke extensively about this in my interview on the Linux Outlaws show this week.

    What we see in this agreement between the Melco Group and Microsoft is another little above-water piece of the same patent aggression iceberg that Microsoft has placed in our community's way. They continue to shake down companies that distribute GNU/Linux systems for patent royalties. As I've written about before, it's difficult to judge whether these deals are GPLv2-compliant, but they are almost certainly not GPLv3-compliant. If there were ever a moment for the community to scramble to GPLv3, this would be it, if for no other reason than to defend ourselves against the looming aggression.

    In the meantime, we'd be foolish to trust any sort of promises Microsoft might make about their patents. Would they really make a reliable promise that would prevent their ongoing campaign of patent aggression against Free Software?

    Update: In related news, I was also glad to read FSF's new statement on the issue, which includes some of the same comments I made on Linux Outlaws Episode 102.

    Posted on Friday 17 July 2009 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

June

  • 2009-06-29: Considerations on Patents that Read on Language Infrastructure

    In an essay last Friday entitled Why free software shouldn't depend on Mono or C#, RMS argued a key point that I agree with: the software freedom community should minimize its use of programming language infrastructure that comes primarily from anti-software-freedom companies, notwithstanding FaiF (Free as in Freedom) implementations. I've been thinking about an extension of that argument: that language infrastructure created in a community process is likely more resilient against attacks from proprietary software companies.

    Specifically, I am considering the risk that a patent attack will occur against the language or its canonical implementation. We know that the USPTO appears to have no bounds in constantly granting so-called “software patents”, most of which are invalid even within the USPTO's own system; the rest may be like the RSA patent, forcing our community to invent around them or (as we had to do with RSA) “wait them out”. I'd like to consider how these known facts apply to the implementation of language infrastructure in the Free Software world.

    Programming languages and their associated standard libraries and implementations evolve in three basic ways:

    • A Free Software community designs and implements the language in a grassroots fashion. Perl, PHP, and Python are a few examples.
    • A single corporate entity controls the language and its canonical implementation. They perhaps also convince some standards body to adopt it, but usually retain complete control. C# and Java are a few examples.
    • A single corporate entity controlled the language initially, but more than 20 years have passed and the language now has many proprietary and Free Software implementations. C and C++ are a few examples.

    The patent issues in each of these situations deserve different consideration, primarily related to the dispersion of patents that likely read on the given language implementation. We have to assume that the USPTO has granted many patents that read on any software a person can conceivably write. The question is always: of all the things you can write, which has the most risk of patent attack from the patent holders in question?

    In the case of the community-designed and Free-Software-implemented languages, the patent risk is likely spread across many companies, and mitigated by the fact that few have probably filed patent applications designed specifically to read on the language and its implementation. Since various individuals and companies contributed to the development and design, and because it was a process run by the community, it's unlikely there was a master plan by one entity to apply specifically for patents on the language. So, while there are likely many patents that read on the implementation, a single holder is unlikely to hold all the patents, and those patents were probably not crafted for the specific language. Only some of these many patent-holding entities will have a desire to attack Free Software. It is therefore less likely that a user of the language will be sued; a patent troll would have to do some work to acquire the relevant patent. If that unlikely event does occur anyway, the fact that the patent was not specifically designed to read on the language implementation may indeed help, either by easing the process of “inventing around” or by making it more difficult for the patent troll to show the patent reads on the language implementation. Finally, if the implementation is under a license like the GPL or the Apache License (or any license with a patent grant), those companies that did contribute to the language implementation may have granted a patent license already.

    Of course, these are all relative arguments against the alternative: a language designed by a single company. If a single corporate entity designed and implemented the language more recently than 20 years ago, that company likely filed many yet-unexpired patents throughout the process of designing and implementing the language and its infrastructure. When the Free Software community implements fresh versions of the language from scratch, it's very likely that it will generate software that reads on those patents. Thus, the community must live in constant and direct fear of that company. We must assume the patents exist, and we know who holds them, and we know they filed them with this very language in mind. It may be tough to invent around them and still keep the Free Software implementation compatible. This is why I and other Free Software advocates have insisted for years that all companies who claim to support software freedom should grant GPL-compatible patent licenses for all their patents. (I still await Sam Ramji's response on my call for Microsoft to do so.)

    Without that explicit patent license, we certainly should prefer the community-driven and Free-Software-developed languages over those developed by companies (like Microsoft) that have a history of anti-Free Software practices. Regarding companies with a more ambiguous history toward Free Software, some might argue that patents consolidated in a “friendly” company is safest of all alternatives. They might argue that with all those patents consolidated, patent trolls will have a tough time acquiring patents and attacking FaiF implementations. However, while this can sometimes be temporarily true, one cannot rely on this safety. Java, for example, is in a precarious situation now. Oracle is not a friend to Free Software, and soon will hold all Sun's Java patents — a looming threat to FaiF Java implementations. While I think it's more likely that Microsoft will attack FaiF C# implementations with its patents eventually, an Oracle attack on FaiF Java is a possibility. (We should also not forget that Sun in the late 1990s was very opposed to Free Software implementations of Java; the corporate winds always change and we should not throw ourselves to them.)

    The last case in my list deserves at least a brief mention. Languages like C (which was a purely AT&T endeavor initially) have reached the age that the early patents would have now expired, and such languages have slowly moved into community and standards-driven control. Thus, over long periods of time, history shows us that companies do loosen their iron grip of proprietary control of language implementations. However, during that first 20 year period, we should face them with great trepidation and stick with languages developed by the Free Software community itself.

    Finally, I close with important advice: don't be paralyzed with fear over software patents. There are likely some USA patents that read on any software you write. Make good choices (like avoiding C#, as RMS suggests, and favoring languages like Perl, Python, PHP and C), and get on with your work. If, as a non-profit Free Software developer, someone writes you a threatening letter about patents or sues you for patent infringement, of course seek help from an attorney.

    Update: While my analysis was focused on the patent issues around languages, I couldn't resist this orthogonal topic posted by David Siegel with some very helpful suggestions to developers who wish to limit the use of C#. FLOSS is about using good software development to help solve legal, social and technological impediments to freedom. David is right on course with his suggestions.

    Posted on Monday 29 June 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-06-01: Response to NTEN's Holly Ross' Anti-Software-Freedom Remarks

    [ This post was not actually placed here until 2011-11-16, but I've put it in proper sequence with when the bulk of it was written. (Some of you may find it new in your RSS feeds as of 2011-11-16, however.) I originally posted it as a comment on an NTEN Blog post. NTEN got really sneaky over the years after I posted this comment. First, somewhere in late 2011, they removed the comments from the blog post which originally appeared on their website. Then, in August 2015, after I found an archive.org link that showed the original article, they seem to have made sure the original content was removed from archive.org (which a website owner is technically allowed to do, although it's sneaky behavior).

    I don't have the full text of Holly Ross' blog post, and it appears impossible to find online — NTEN and Holly have done an excellent job of rewriting history and pretending that they didn't originally hold an anti-software-freedom position. I suspect, though, given their historically close ties to proprietary software companies, that NTEN remains unfriendly to software freedom, even if they eventually made the URL of Holly Ross' blog post redirect to a seemingly-pro-FOSS propaganda page. Holly Ross, who later was the Executive Director of the Drupal Association, has never, to my knowledge, apologized for her comments nor responded to mine.

    My original post from 2011-11-16 follows:

    In May 2009, Holly Ross, NTEN's Executive Director, attacked software freedom, arguing that:

    Open Source is Dead. … The code was free, but we paid tens of thousands of dollars to get our implementation up and running. … I try to use solutions that reflect our values as an organization, but at the end of the day, I just need it to work. Community support can be great, but you're no less beholden to the whims of the community for support and updates than you are to any paid vendor.…

    open source code isn't necessarily any better than proprietary code. The costs, in time and money, are just placed elsewhere. It's a difference in how we budget for software more than anything else. So, the old arguments for open source software adoption are dead to me.…

    [Open Source and Free Software] is great to have as options. I just don't accept the argument that we have to support them simply because the code is available to everybody.

    — Holly Ross, 2009-05-28

    First of all, Holly completely confuses free as in freedom with free as in price, even while she's attempting to indicate she understands that there are “values” involved. But more to the point, she shuns software freedom as a social justice cause. This led me to write the following response at the time, which NTEN ultimately deleted from their website:

    The software freedom movement started primarily as an effort for social justice for programmers and users. The goal is to avoid the helplessness and lock-in that proprietary software demands, and to treat users and developers equally in freedom.

    Perhaps there was a time (hopefully now long ago) when non-profits that focused on non-environmental issues would say things like "there's a place for non-recycled paper; it looks nicer and is cheaper". I doubt any non-profit would say that now to their colleagues in the environmental movement. Yet, it's common for non-profit leaders outside of the FLOSS world to say that the issue of software freedom is not relevant and that they need not consider the ethical and moral implications of software choices in the way that they do with their choices about what paper to buy.

    I'm curious, Holly, if you had said “recycled paper isn't necessarily better than virgin tree paper”, what reaction would you expect from the environmental non-profits? Indeed, would you think it's appropriate for a non-profit to refuse to recycle because their geographical area charges more for it? I guess you wouldn't think that's appropriate, and I am left wondering why you feel that your colleagues in the software freedom movement simply don't deserve the same respect as those in the environmental movement.

    I have hoped for a long time that this attitude would change, and I will continue to hope. I am sad to see that it hasn't changed yet, at least at NTEN.

    — Bradley M. Kuhn, 2009-06-01

    Note that Holly never responded to me. I am again left wondering: if someone from a respected environmental movement organization had pointed out that one of her blog posts was anti-recycling, would she have bothered to respond?

    Posted on Monday 01 June 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2009-05-12: Support Your Friendly Neighborhood FLOSS Charities

    I don't think we talk enough in the FLOSS community about the importance of individual support for FLOSS-related charitable organizations. On a recent podcast episode, Karen and I discuss with Stormy Peters how important it is for geeks — who may well already give lots of code to many FLOSS projects — to also consider giving a little bit of financial support to FLOSS organizations as well.

    Of course, it's essential that people give their time to the charities and the causes that they care about. In the FLOSS world, we typically do that by giving code or documentation to our favorite FLOSS project. I think that's led us all into the classic “I gave at the office” feeling. Indeed, I know that I too have fallen into this rut at times myself.

    I suppose I could easily claim that, more than most people, I've given enough at the office. Working at various non-profit organizations since the 1990s, I've always made substantially less in salary than I would in the for-profit industry for similar work. I also have always volunteered my time in addition to my weekly work schedule. For example, I currently get paid for my 40 hour/week job at the SFLC, but I also donate about 20 hours of work for the Software Freedom Conservancy each week.

    Still, I don't believe that this is enough. There are many, many FLOSS non-profits that deserve support — more than I have time to give. Meanwhile, very small amounts of money, aggregated over many people giving, make a world of difference in a number of ways to these organizations.

    Non-profits that are funded by a broad base of supporters are much more stable and have greater longevity than non-profits that are funded primarily by corporate donations. This is because the disappearance of one donor, or even a few, is not a disaster. Also, through these donations, organizations build a constituency of supporters that truly represents the people that the non-profit seeks to serve.

    Traditionally (with a few notable exceptions), non-profits in the FLOSS world have relied primarily on corporate donations. I generally think this is not ideal for a community that wishes to be fully represented by the non-profits that embody the projects we care about. We want these projects to represent the interest of developers and users, not necessarily the for-profit corporate interests. Plus, we want the organizations to survive even when companies stop supporting FLOSS or just simply go out of business.

    If we all contribute, it doesn't take that much for each individual to be a part of making a real difference. I believe that if each person who has benefited seriously from FLOSS gave $200/year, we'd make a substantial change and a wonderful positive impact on the non-profit organizations that shepherd and keep these FLOSS projects alive. I'm not suggesting giving to any specific organization: just take $200/year and divide it in the way you think is best across 2-4 different FLOSS non-profits that sponsor projects you personally care about or benefit from.

    Think about it: $200/year breaks down to about $17/month. For me (and likely for most people in a major city), that's one fewer dinner at a restaurant each month. Can't we all eat at home one more time per month, and share that savings to help FLOSS non-profits?

    If you are looking for a list of non-profits that could use your support, the FLOSS Foundations Directory is a good place to start. FWIW, in addition to my volunteer work with Conservancy, here's the list of non-profits that I'm supporting with a total of $200 this year (in alphabetical order): The Free Software Foundation, GNOME Foundation, The Parrot Foundation, and The Twisted Project. Which ones will you give to this year?

    Posted on Tuesday 12 May 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2009-04-24: Fork Well: It Could Be The Last, Best Hope for Community

    I have faced with much trepidation the news of Oracle's looming purchase of Sun. Oracle has never shown any interest in community development, particularly in the database area. They are the largest proprietary database vendor on the planet, and they probably have very simple plans for MySQL: kill it.

    That's why I read with relief this post by Monty (co-founder of the MySQL project) this week, wherein Monty plans to put his full force (and encourages others to do the same) behind a MySQL “fork” that will be centered outside of Oracle.

    Monty is undoubtedly correct when he says I don't think that anyone can own an open source project; the projects are defined by the de-facto project leaders and the developers that are working on the project. and that [w]ith Oracle now owning MySQL, I think that the need for an independent true Open Source entity for MySQL is even bigger than ever before.

    I don't find the root of this problem in the fact that one company has sold itself to another, pursuant to the greater glory of the Ferengi Rules of Acquisition. Instead, I think the error is that projects inside Sun did not have a non-profit entity to shepherd them. When a single for-profit company is in control of a project's copyrights and its trademarks, and employs nearly all its core developers, there is a gross imbalance. The community around the project isn't healthy, and can easily be disrupted by the winds of corporate change, which blow in service of the only goal of for-profit existence: higher profits.

    I encourage Monty, as well as core developers of VirtualBox, OpenOffice, OpenSolaris, Sun's Java, and any other project that is currently under the full control of Sun (or indeed any other for-profit corporation) to think about this idea. Non-profits, particularly 501(c)(3)'s, are fundamentally different than for-profits. They exist to serve a community or a constituency and the public good, never profit. Therefore, the health of the codebase, the diversity of the developer and user community, and the advancement of software freedom can be the clear mission of a non-profit that houses a FLOSS project. A non-profit ensures that while corporate funding comes and goes, the mission of the project and its institutional embodiment stay stable. For example, just like shareholders have a duty to fire a CEO when he fails to make enough profit (i.e., the for-profit company is not reaching its maximal goal), boards of directors and/or memberships of non-profits must fire the President and/or Executive Director when they fail to serve the community well. Instead of the “profit motive”, 501(c)(3)'s have the “community motive”.

    Yet, the challenge of focusing on such goals remains difficult for projects that did not spawn from a community at the start. GNU and Linux were both started by individual developers who built strong communities before there was any for-profit corporate interest in the software. When a project starts inside a company with profit in mind, shoehorning community principles into it afterward can rarely succeed. I believe that a community must usually evolve from the ashes of some incident that wakes everyone up to realize the project will come to harm due to strict adherence to the profit motive.

    I should probably remind everyone that I'm not opposed to capitalism per se. Indeed, I've often fought on the other side of this equation when licenses (such as MySQL's own very early pre-GPL license) permit noncommercial use but prohibit commercial use. I believe that commercial and non-commercial activity with the code should be equally permitted in a non-discriminatory way. However, the center of gravity for developers, where the copyrights and trademarks live, and how core work on the codebase is funded are all orthogonal questions to the question of the software's license.

    My experience has anecdotally taught me that FLOSS communities function best when the following two things are true: (a) the codebase is held neutrally, either in the hands of the individual developers who wrote the code, or in a 501(c)(3) non-profit, and (b) not too many core developers share the same employer. I believe that reaching that state should be Job One of any for-profit seeking to build a FLOSS community. Sadly, this type of community health is often at direct odds with the traditional capitalist thinking of for-profit shareholders. I'm thus not surprised when FLOSS community managers in for-profit companies can only do so much. The rest is really up to the community of developers to fork and demand that a non-profit or other neutral and diverse developer-controlled management team exist. Attempts at this, sadly, fail much more often than they succeed.

    Monty's post likely had more hope in it than this one. Monty didn't jump to my conclusion that Oracle will kill MySQL; Monty considers it also possible that Oracle might sell MySQL or (and here's the possibility I really doubt) that Oracle will change into a community-driven FLOSS company. I love Monty's optimism in even considering this possible. I honestly hope my pragmatism about this is shown to be sheer pessimism. In the meantime, focusing on the MySQL forks and pressuring Oracle to engage the FLOSS community in a genuine way is the best strategy no matter what outcome you think is most likely.

    Update (on 17 May 2009): Monty announced an industry consortium that will seek to be a neutral space for MySQL development. I tend to prefer charitable non-profits to trade associations, but better the latter than hoping for Oracle to do the right thing.

    Posted on Friday 24 April 2009 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2009-04-16: TomTom/Microsoft: A Wake-Up Call for GPLv3 Migration

    There has been a lot of press coverage about the Microsoft/TomTom settlement. Unfortunately, so far, I have seen no one speak directly about the dangers that this deal could pose to software freedom, and what our community should consider in its wake. Karen and I discussed some of these details on our podcast, but I thought it would be useful to have a blog post about this issue as well.

    Most settlement agreements are sealed. This means that we won't ever actually know what TomTom agreed to and whether or not it violates GPLv2. The violation, if one exists, would likely be of GPLv2's § 7. The problem has always been that it's difficult to actually witness a v2§7 violation occurring (due in large part to the less than perfect wording of that section). To find a violation of v2§7, you have to discover that there were conditions imposed on [TomTom] ... that contradict the conditions of [GPLv2]. So, we won't actually know if this agreement violates GPLv2 unless we read the agreement itself, or we observe some behavior by Microsoft or TomTom that shows that the agreement must be in violation.

    To clarify the last statement, consider the hypothetical options. For TomTom to have agreed to something GPLv2-compliant with Microsoft, the agreement would have needed to either (a) not grant a patent license at all (perhaps, for example, Microsoft conceded in the sealed agreement that the patents aren't actually enforceable on the GPLv2'd components), or (b) give a patent license that was royalty-free and permitted all GPLv2-protected activities by all recipients of patent-practicing GPLv2'd code from TomTom, or downstream from TomTom.

    It's certainly possible Microsoft either capitulated regarding the unenforceability (or irrelevancy) of its patents on the GPLv2'd software in question, or granted some sort of license. We won't know directly without seeing the agreement, or by observing a later action by Microsoft. If, for example, Microsoft later is observed enforcing the FAT patent against a Linux distributor, one might successfully argue that the user must have the right to practice those Microsoft patents in the GPLv2 code, because otherwise, how was TomTom able to distribute under GPLv2? (Note, BTW, that any redistributor of Linux could make themselves downstream from TomTom, since TomTom distributes source on their website.) If no such permission existed, TomTom would then be caught in a violation — at least in my (perhaps minority) reading of GPLv2.0

    Many have argued that GPLv2 § 7 isn't worded well enough to verify this line of thinking. I and a few other key GPL thinkers disagree, mainly because this reading is clearly the intent of GPLv2 when you read the Preamble. But, there are multiple interpretations of GPLv2's wording on this issue, and, the wording was written before the drafters really knew exactly how patents would be used to hurt Free Software. We'll thus probably never really have complete certainty that such patent deals violate GPLv2.

    This TomTom/Microsoft deal (and indeed, probably dozens of others like it whose existence is not public, because lawsuits aren't involved) almost surely plays into this interpretation ambiguity. Microsoft likely convinced TomTom that the deal is GPLv2-compliant, and that's why there are so many statements in the press opining about its likely GPLv2 compliance. I, Jeremy Allison, and others might be in the minority in our belief in the strength of GPLv2 § 7, but no one can disagree with the intent of the section, as stated in the Preamble. Microsoft is manipulating the interpretation disagreements to convince smaller companies like Novell, TomTom, and probably others that these complicated patent licensing deals and/or covenants are GPLv2-compliant. Since most of them are about the kernel named Linux, and the Linux copyright holders are the only ones with power to enforce, Microsoft is winning on this front.

    Fortunately, the GPLv3 clarifies this issue, and improves the situation. Therefore, this is a great moment in our community to reflect on the importance of GPLv3 migration. The drafters of GPLv3, responding to the Microsoft/Novell deal, considered carefully how to address these sorts of agreements. Specifically, we have these two paragraphs in GPLv3:

    If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

    A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

    Were Linux under GPLv3 (but not GPLv2), these terms, particularly those in the second paragraph, would clearly and unequivocally prohibit TomTom from entering into any arrangement with Microsoft that doesn't grant a license to any Microsoft patent that reads on Linux. Indeed, even what has been publicly said about this agreement seems to indicate strongly that this deal would violate GPLv3. While the Novell/Microsoft deal was grandfathered in (via the date above), this new agreement is not. Yet, the most frustrating aspect of the press coverage of this deal is that few have taken the opportunity to advocate for GPLv3 adoption by more projects. I hope now that we're a few weeks out from the coverage, project leaders will begin again to consider adding this additional patent protection for their users and redistributors.

    Toward the goal of convincing GPLv2 users to switch to GPLv3, I should explain a bit why special patent licensing deals like this are bad for software freedom; it's not completely obvious. To do so, we can look specifically at what TomTom and Microsoft said in the press coverage of their deal: The agreement protects TomTom's customers under the patents …, the companies said (Microsoft, TomTom Settle Patent Dispute, Ina Fried).

    Thus, according to Microsoft and TomTom, the agreement gives some sort of “patent protection” to TomTom customers, and presumably no one else. This means that if someone buys a GNU/Linux-based TomTom product, they have greater protection from Microsoft's patents than if they don't. It creates two unequal classes of users: those who pay TomTom and those who don't. The ones who don't pay TomTom will have to worry if they will be the next ones sued or attacked in some other way by Microsoft over patent infringement.

    Creating haves and have-nots in the software licensing space is precisely what all versions of the GPL seek to prevent. This is why the Preamble of GPLv2 said: any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary.

    Further to this point, in the Rationale Document for the Third Discussion Draft of GPLv3, a similar argument is given in more detail:

    The basic harm that such an agreement can do is to make the free software subject to it effectively proprietary. This result occurs to the extent that users feel compelled, by the threat of the patent, to get their copies in this way. So far, the Microsoft/Novell deal does not seem to have had this result, or at least not very much: users do not seem to be choosing Novell for this reason. But we cannot take for granted that such threats will always fail to harm the community. We take the threat seriously, and we have decided to act to block such threats, and to reduce their potential to do harm. Such deals also offer patent holders a crack through which to split the community. Offering commercial users the chance to buy limited promises of patent safety in effect invites each of them to make a separate peace with patent aggressors, and abandon the rest of our community to its fate.

    It's true that one can blissfully use, redistribute, sell and modify some patent-covered software for years without ever facing a patent enforcement action. But, particularly in situations where known patents have been asserted, those without a patent license often live in fear of copying, modifying and sharing code that exercises the teachings of the patent. We saw this throughout the 1990s with RSA, and today most commonly with audio and video codecs. Microsoft and other anti-Free Software companies have enough patents to attack if we let them. The first steps in stopping this are to (a) adopt GPLv3, LGPLv3 and AGPLv3 with the improved patent provisions, (b) condemn GPLv2-only deals that solve a patent problem for some users but leave the rest out in the cold, and (c) point out that the purported certainty that such deals are GPLv2-compliant is definitely in question.

    Patents always remain a serious threat, and, while the protection under GPLv2 has probably been underestimated, we cannot overestimate the additional protection that GPLv3 gives us in this regard. Microsoft clearly knows that the GPLv3 terms will kill its patent aggression business model, and has therefore focused its attacks on GPLv2-licensed code. Shouldn't we start to flank them by making less GPLv2 code available for these sorts of deals?

    Finally, I would like to draw specific attention to the fact that TomTom, as a company, is not necessarily an ally of software freedom. They are like most for-profit companies; they use FLOSS when it is convenient for them, and give back when the licenses obligate them to do so, or when it behooves them in some way. As a for-profit company, they made this deal to please their shareholders, not the Free Software community. Admittedly, their use of FLOSS in their products was done legitimately (that is, once their GPLv2 non-compliance was corrected by Harald Welte in 2004). However, I do not think we should look upon TomTom as a particularly helpful member of the community. Indeed, most of the patents that Microsoft asserted against TomTom were on their proprietary components, not their FLOSS ones. Thus, most of this dispute was a proprietary software company arguing with another proprietary software company over patents that read on proprietary software. Our community should tell TomTom that if they want to join and support the FLOSS world, they should release their software under a FLOSS license — including software that the licenses don't obligate them to release. Wouldn't it be quite interesting if TomTom's mapping display software were available under, say, GPLv3?

    (Added later): Even if TomTom fails to release their mapping applications as Free Software, our minimal demand should be a license to their patents for use in Free Software. Recall that TomTom countersued Microsoft, also alleging infringement of TomTom's patents. TomTom has yet to offer a public license on those patents for use by the Free Software community. If they are actually not hostile to software freedom, wouldn't they allow us to at least practice the teachings of their patents in GPL'd software?


    0Update: Andrew Tridgell pointed out that my verb tenses in my hypothetical example made the text sound more broadly worded than I intended. I've thus corrected the text in the hypothetical example to be clearer. Thanks for the clarification, Tridge!

    Posted on Thursday 16 April 2009 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2009-04-08: Neary on Copyright Assignment: Some Thoughts

    Dave Neary found me during breakfast at the Linux Collaboration Summit this morning and mentioned that he was being flamed for a blog post he made, Copyright assignment and other barriers to entry. Or, as some might title it in a Computer Science academic tradition: Copyright Assignment Considered Harmful. I took a look at Dave's post, and I definitely think it's worth reading and considering, regardless of whether you agree with it or flame it. For my part, I think I agree with most of his points.

    One of the distinctions that Dave makes, which some might miss, is the difference between non-profit, community-controlled copyright assignees and for-profit copyright assignees. He quotes Luis Villa to make the point that companies, ultimately, aren't the best destinations as a final home for FLOSS copyrights. If copyright assignment is viewed only through the lens of a for-profit corporate entity — with only the duty to its shareholders to determine its future — then indeed it's a dangerous situation for many of the reasons that Dave raises.

    I believe strongly that assigning copyright to a for-profit corporate entity is usually problematic. As Dave points out, corporations aren't really proper members of a Free Software community; rather, their employees typically are. I have always felt that either copyrights should be assigned to a transparently-run non-profit 501(c)(3) entity, or they should be held by individual contributors. Indeed, the Samba project even has a policy of accepting absolutely no corporate copyrights in its codebase, and I would love to see more projects adopt that policy.

    I trust 501(c)(3) non-profits more than for-profits, and not only because I've spent most of my career in the former and have enjoyed that time more than my time at the latter. I trust non-profits more because their charters and founding documents require a duty to a public-benefiting mission and to a community. They are failing to act properly under their charters if they put the needs of a for-profit entity ahead of the needs of the community and the public. This is exactly the correct alignment of incentives for a consolidation of FLOSS copyrights.

    Some projects don't like centralized copyright for various reasons. While I do prefer it myself, I can understand this desire among individuals to each keep their stake of control in the project. Thus, I don't object to projects that want each individual contributor to have their own copyright. In this situation, the incentives are still properly aligned, because individuals who helped make the project happen have the legal control. While these individuals have no required commitment to the public good like a non-profit, they are members of a community and are much more likely to put the community needs above the profit motive that controls all for-profit entities.

    When Dave says copyright assignment might be harmful, he seems to be talking primarily about for-profit corporate assignment. I agree with him on that point. However, when he mentions that it's unnecessary, I don't completely agree, though he raises well the points that I would raise as to why it's important.

    However, in the middle of Dave's post is the bigger concern that deserves special mention. The important task is keeping a clear record of the copyright provenance about where the work came from, and who might have a copyright claim. Copyright assignment is a short-hand way to do this in an organized and clear fashion. It's a simple solution with some overhead, and sometimes projects over the years have been annoyed with (and even ridiculed) that overhead. However, the more complex solutions have overhead, too. If you don't do assignment, you must keep careful track of every contributor, what their employer agreements say, and whether they have the right to submit patches under their own copyrights to the project. Some projects do this better than others.

    Regardless, all of this is hard work. For years, I've seen it as a personal task of mine to help develop systems and recommendations that make either process (assignment or good copyright record-keeping) less burdensome. I haven't worked on this task as much as I should have, but I have not forgotten that it needs attention. I envision hooks and systems integrated with revision control that help with this. I think we eventually need something that makes it trivial for hackers to implement and easy to maintain. I understand that the last thing any Free Software hacker wants to do is sit and contemplate the legal implications of contributions they've received. As such, all of us who follow this issue hope to make it easier for projects to do the work. In the meantime, I think discussion about this is good, and I'm thankful to Dave for raising the issue again.
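
    To give a flavor of what such a hook could look like, here is a minimal sketch, assuming git and assuming that a DCO-style “Signed-off-by” trailer is the provenance record a project has chosen. The specifics are illustrative only, not a recommendation of any particular system:

        #!/usr/bin/env python3
        # Minimal sketch of a git commit-msg hook (installed as .git/hooks/commit-msg
        # and marked executable).  It rejects any commit whose message lacks a
        # DCO-style "Signed-off-by:" trailer, used here as a stand-in for whatever
        # provenance record a project settles on.
        import re
        import sys

        # git passes the path to the file holding the commit message as the first argument.
        with open(sys.argv[1], encoding="utf-8") as f:
            message = f.read()

        if not re.search(r"^Signed-off-by: .+ <.+@.+>$", message, re.MULTILINE):
            sys.stderr.write(
                "Commit rejected: add a 'Signed-off-by: Name <email>' line recording\n"
                "who holds (or is authorized to contribute) the copyright in this change.\n"
            )
            sys.exit(1)

    A real system would also need to track employer agreements and assignment paperwork, but even a check this small keeps the provenance question in front of contributors at the moment they commit.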

    Posted on Wednesday 08 April 2009 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

March

January

  • 2009-01-27: Welcome (Finally!) to the GCC Runtime Library Exception

    For the past sixteen months, I have participated in a bit of a “mini-GPLv3 process” among folks at the FSF, SFLC, the GNU Compiler Collection Steering Committee (GCC SC), and the GCC community at large. We've been drafting an important GPLv3 license exception (based on a concept that David Edelsohn and Eben Moglen invented even before the GPLv3 process itself started). Today, that GCC Runtime Library Exception for GPLv3 went into production.

    I keep incessant track of my hours spent on various projects, so I have hard numbers that show I personally spent 188 hours — a full month of 40-hour weeks — on this project. I'm sure my colleagues have spent similar amounts, too. I am proud of this time, and I think it was absolutely worthwhile. I hope the discussion gives you a flavor of why FLOSS license exception drafting is both incredibly important and difficult to get right without the greatest of care and attention to detail.

    Why GPL Exceptions Exist

    Before I jump into discussion of this GCC Runtime Library exception, some background is needed. Exceptions have been a mainstay of copyleft licensing since the inception of the GNU project, and once you've seen many examples over many years, they become a standard part of FLOSS licensing. However, for the casual FLOSS developer who doesn't wish to be a licensing wonk (down this path lies madness, my friends, run screaming with your head covered!), exceptions are a rare discovery in a random source file or two, and they do not command great attention. That reaction is understandable, but from a policy perspective, exceptions are an essential part of the copyleft system.

    From the earliest days of the copyleft, it was understood that copyleft was merely a strategy to reach the goal of software freedom. The GPL is a tool that implements this strategy, but like any tool, it doesn't fit every job.

    In some sense, the LGPL was the earliest and certainly the most widely known “GPL exception”. (Indeed, my friend Richard Fontana came up with the idea to literally make LGPL an exception to GPLv3, although in the v2 world, LGPLv2 was a fully separate license from GPLv2.) Discussions on why the LGPL exists are beyond the scope of this blog post (although I've written about them before). Generally speaking, though, LGPL is designed to be a tool when you don't want the full force of copyleft for all derivative works. Namely, you want to permit the creation of some proprietary (or partly proprietary) derivative works because allowing those derivations makes strategic sense in pursuing the goal of software freedom.

    Aside from the LGPL, the most common GPL exceptions are usually what we generally categorize as “linking exceptions”. They allow the modifier to take some GPL'd object code and combine it in some way with some proprietary code during the compilation process. The simplest of these exceptions is found when you, for example, write a GPL'd program in a language with only a proprietary implementation (e.g., VisualBasic), and you want to allow the code to combine with the VisualBasic runtime libraries. You use your exclusive right as copyright holder on the new program to grant downstream users, redistributors and modifiers the right to combine with those proprietary libraries without having those libraries subject to copyleft.

    In essence, copyleft exceptions are the scalpels of copyleft. They allow you to create very carefully constructed carve-outs of permission when pure copyleft is too blunt an instrument to advance the goal of software freedom. Many software freedom policy questions require this fine cutting work to reach the right outcome.

    The GCC Exception

    The GCC Exception (well, exceptions, really) has always been a particularly interesting and complex use of a copyleft exception. Initially, these exceptions were pragmatically needed to handle a technological reality about compilers that interacts in a strange way with copyright derivative works doctrine. Specifically, when you compile a program with gcc, parts of GCC itself, called the runtime library (and before that, crt0), are combined directly with your program in the output binary. The binary, therefore, is both a derivative work of your source code and a derivative work of the runtime library. If GCC were pure GPL, every binary compiled with GCC would need to be licensed under the terms of GPL.

    Of course, when RMS was writing the first GCC, he immediately realized this licensing implication and created an exception to avoid this. Versions of that exception have been around and improved since the late 1980s. The task that our team faced in late 2007 was to update that exception, both to adapt it to the excellent new GPLv3 exceptions infrastructure (as Fontana did for LGPLv3), and to handle a new policy question that has been kicking around the GCC world since 2002.

    The Plugin Concern

    For years, compiler experimentalists and researchers have been frustrated by GCC. It's very difficult to add a new optimization to GCC because you need quite a deep understanding of the codebase to implement one. Indeed I tried myself, as a graduate student in programming languages in the mid-1990s, to learn enough about GCC to do this, but gave up when a few days of study got me nowhere. Advancement of compiler technology can only happen when optimization experimentation can happen easily.

    To make it easy to try new optimizations out, GCC needs a plugin architecture. However, the GCC community has resisted this because of the software freedom implications of such an architecture: if plugins are easy to write, then it will be easy to write out to disk a version of GCC's internal program representation (sometimes called the intermediate representation, or IR). Then, proprietary programs could be used to analyze and optimize this IR, and a plugin could be used to read the file back into GCC.

    From a licensing perspective, such an optimizing proprietary program will usually not be a derivative work of GCC; it merely reads and writes some file format. It's analogous to OpenOffice reading and writing Microsoft Word files, which doesn't make it a derivative of Word by any means! The only parts that are covered by GPL are the actual plugins to GCC to read and write the format, just as OpenOffice's Word reader and writer are Free Software, but Microsoft Word is not.

    This licensing implication is a disaster for the GCC community. It would mean the advent of “compilation processes” that were “mixed”, FaiF and proprietary. The best, most difficult and most interesting parts of that compilation process — the optimizations — could be fully proprietary!

    This outcome is unacceptable from a software freedom policy perspective, but difficult to handle in licensing. Eben Moglen, David Edelsohn, and a few others, however, came up with an innovative idea: since all binaries are derivative of GCC anyway, set up the exception so that proprietary binary output from GCC is permitted only when the entire compilation process involves Free Software. In other words, you can do these proprietary optimization plugins all you want, but if you do, you'll not be able to compile anything but GPL'd software with them!

    The Drafting and the Outcome

    As every developer knows, the path from “innovative idea” to “working implementation” is a long road. It's just as true with licensing policy as it is with code. Those 188 hours that I've spent, along with even more hours spent by a cast of dozens, have been spent making a license exception that implements that idea accurately without messing up the GCC community or its licensing structure.

    With jubilation today, I link to the announcement from the FSF, the FAQ and Rationale for the exception and the final text of the exception itself. This sixteen-month long cooperation between the FSF, the SFLC, the GCC SC, and the GCC community has produced some fine licensing policy that will serve our community well for years to come. I am honored to have been a part of it, and a bit relieved that it is complete.

    Posted on Tuesday 27 January 2009 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2009-01-15: Launchpad's License Will Be AGPLv3

    Last week, I asked Karl Fogel, Canonical's newly hired Launchpad Ombudsman, if Launchpad will use the AGPLv3. His eyes said “yes” but his words were something like: Canonical hasn't announced the license choice yet. I was excited to learn this morning from him that Launchpad's license will be AGPLv3.

    This is exciting news. Launchpad is precisely the type of application that we designed the AGPLv3 for, and Launchpad is rapidly becoming a standard in the next generation of Free Software project hosting. Over the last year, I've felt much trepidation that Launchpad would be “another SourceForge”: that great irony of a proprietary platform becoming the canonical method for Free Software project hosting. It seems now the canonical and the Canonical method for hosting will be Launchpad, and it will respect the freedom of network users of the service.

    Given that they'd already announced plans to liberate Launchpad, it's not really surprising that Canonical has selected the AGPLv3. I would guess their primary worry about releasing the source was ensuring that competitors don't sprout up and fail to share their improvements back with the community of users. AGPLv3 is specifically designed for this situation.

    I'm glad we've made a license that is getting adoption by top-tier Free Software projects like this one. Critics keep saying that AGPLv3 is a marginal license of limited interest. I hope this license choice by Canonical will show them again that they continue to be mistaken.

    Thanks to Karl, Matthew Revell, Mark Shuttleworth himself, and all the others at Canonical who are helping make this happen.

    Posted on Thursday 15 January 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-01-14: LGPL'ing of Qt Will Encourage More Software Freedom

    The decision between the GPL or LGPL for a library is a complex one, particularly when that library solves a new problem or an old problem in a new way. Trolltech faced this decision for the Qt library, and Nokia (who acquired Trolltech last year) has now reconsidered the question and come to a different conclusion. Having followed this situation since even before Qt was GPL'd, I was glad that we successfully encouraged the reconsideration of this decision.

    Years ago, RMS wrote what many consider the definitive essay on this subject, entitled Why you shouldn't use the Lesser GPL for your next library. A few times a year, I find myself rereading that essay because I believe it puts forward some good points to think about when making this decision.

    Nevertheless, there is a strong case for the LGPL in many situations. Sometimes, pure copyleft negatively impacts the goal of maximal software freedom. The canonical example, of course, is the GNU C Library (which was probably the first program ever LGPL'd).

    Glibc was LGPL'd, in part, because it was unlikely at the time that anyone would adopt a fully FaiF (Free as in Freedom) operating system that didn't allow any proprietary applications. Almost every program on a Unix-like system combines with the C library, and if it were GPL'd, all applications would be covered by the GPL. Users of the system would have freedom, but encouraging the switch would be painful because they'd have to give up all proprietary software all at once.

    The GNU authors knew that there would be proprietary software for quite some time, as our community slowly replaced each application with freedom-respecting implementations. In the meantime, better that proprietary software users have a FaiF C library and a FaiF operating system to use (even with proprietary applications) while work continued.

    We now face a similar situation in the mobile device space. Most mobile devices used today are locked down, top to bottom. It makes sense to implement the approach we know works from our two decades of experience — liberate the operating system first and the applications will slowly follow.

    This argument informs the decision about Qt's licensing. Qt and its derivatives are widely used as graphics toolkits in mobile devices. Until now, Qt was licensed under GPL (and before that various semi-Free licenses). Not only did the GPL create a “best is the enemy of the good” situation, but those companies that rejected the GPL could simply license a proprietary copy from Trolltech, which further ghettoized the GPL'd versions. All that is now changing.

    Beyond encouraging FaiF mobile operating systems, this change to LGPL yields an important side benefit. While the proprietary relicensing business is a common and legitimate business model to fund further development, it also has some negative social side effects. The codebase often lives in a silo, discouraging contributions from those who don't receive funding from the company who controls the canonical upstream.

    A change to LGPL sends a loud and clear message — the proprietary relicensing business for Qt is over. Developers who have previously rejected Qt because it was not community-developed might want to reconsider that position in light of this news. We don't know yet how the new Qt community will be structured, but it's now clear that Nokia, Qt's new copyright holder, no longer has a vested interest in proprietary relicensing. The opportunity for a true software freedom community around Qt's code base has maximum potential at this moment. A GUI programmer I am not; but I hope those who are will take a look and see how to create the software freedom development community that Qt needs.

    Posted on Wednesday 14 January 2009 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2008

December

  • 2008-12-24: It's a Wonderful FLOSS!

    I suppose it's time for me to confess. For a regular humbug who was actually memory-leak-hunting libxml2 at the office until 21:30 on December 24th, I'm still quite a sucker for Frank Capra movies. Most people haven't seen any of them except It's a Wonderful Life. Like a lot of people, I see that film annually one way or the other, too.

    Fifteen years ago, I wrote a college paper on Capra's vision and worldview; it's not surprising someone who has devoted his life to Free Software might find resonance in it. Capra's core theme is simple (some even call it simplistic): An honest, hard-working idealist will always overcome if he never loses sight of community and simply refuses any temptation of corruption.

    I don't miss the opportunity to watch It's a Wonderful Life when it inevitably airs each year. (Meet John Doe sometimes can be found as well around this time of year — catch that one too if you can.) I usually perceive something new in each viewing.

    (There are It's a Wonderful Life spoilers below here; if you actually haven't seen it, stop here.)

    This year, what jumped out at me was the second of the three key speeches that George Bailey gives in the film. This occurs during the bank run, when Building and Loan investors are going to give up on the organization and sell their shares immediately at half their worth. I quote the speech in its entirety:

    You're thinking of this place all wrong. As if I had the money back in a safe. The money's not here. Your money's in Joe's house; that's right next to yours. And in the Kennedy house, and Mrs. Macklin's house, and a hundred others. Why, you're lending them the money to build, and then, they're going to pay it back to you as best they can. Now what are you going to do? Foreclose on them?

    [Shareholders decide to go to Potter and sell. Bailey stops the mob.]

    Now wait; now listen. Now listen to me. I beg of you not to do this thing. If Potter gets hold of this Building and Loan there'll never be another decent house built in this town. He's already got charge of the bank. He's got the bus line. He's got the department stores. And now he's after us. Why?

    Well, it's very simple. Because we're cutting in on his business, that's why, and because he wants to keep you living in his slums and paying the kind of rent he decides. Joe, you had one of those Potter houses, didn't you? Well, have you forgotten? Have you forgotten what he charged you for that broken-down shack?

    Ed, you know! You remember last year when things weren't going so well, and you couldn't make your payments? You didn't lose your house, did you? Do you think Potter would have let you keep it?

    Can't you understand what's happening here? Don't you see what's happening? Potter isn't selling. Potter's buying! And why? Because we're panicking and he's not. That's why. He's picking up some bargains. Now, we can get through this thing all right. We've got to stick together, though. We've got to have faith in each other.

    Perhaps this quote jumped out at me because of all the bank run jokes made this year. However, that wasn't the first thing that came to mind. Instead, I thought immediately of Microsoft's presence at OSCON this year and the launch of their campaign to pretend they haven't spent the last ten years trying to destroy all of Free Software and Open Source.

    In the film, Potter eventually convinces George to come by his office for a meeting, offers him some fine cigars, and tells him that George's ship has come in because Potter is ready to give him a high paying job. George worries that the Building and Loan will fail if he takes the job. Potter's (non)response is: Confounded, man, are you afraid of success!?

    It's going to get more tempting to make deals with Microsoft. We're going to feel like their sudden (seemingly) positive interest in us — like Potter's sudden interest in George — is something to make us proud. It is, actually, but not for the obvious reason. We're finally a viable threat to the future of proprietary software. They've reached the stage where they know they can't kill us. They are going to try to buy us, try to corrupt us, try to do anything they can to convince us to give up our principles just to make our software a little better or a little more successful. But we can do those things anyway, on our own, in the fullness of time.

    Never forget why they are making the offer. Microsoft is unique among proprietary software companies: they are the only ones who have actively tried to kill Open Source and Free Software. It's not often someone wants to be your friend after trying to kill you for ten years, but such change is cause for suspicion. George was smart enough to see this and storm out of Potter's office, saying: You sit around here and spin your little webs and think the whole world revolves around you and your money! Well, it doesn't, Mr. Potter! To Microsoft, I'd say: and that goes for you, too!

    Posted on Wednesday 24 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-12-09: One gpg --gen-key per Decade

    Today is an interesting anniversary (of sorts) for my cryptographic infrastructure. Nine years ago today, I generated the 1024-bit DSA key, DB41B387, that has been my GPG key every day since then. I remember distinctly that on the 350 MHz machine I used at the time, it took quite a while to generate, even though I made sure the entropy pool remained nice and full by pounding on the keyboard.

    The horribleness of the recent Debian vulnerability meant that I have spent much time this year pondering the pedigree of my personal cryptographic infrastructure. Of course, my key was far too old to have been generated on a Debian-based system that had that particular vulnerability. However, the issue that really troubled me this past summer was this:

    Some DSA keys may be compromised by only their use. A strong key (i.e., generated with a ‘good’ OpenSSL) but used locally on a machine with a ‘bad’ OpenSSL must be considered to be compromised. This is due to an ‘attack’ on DSA that allows the secret key to be found if the nonce used in the signature is reused or known.

    Not being particularly hard-core on cryptographic knowledge — most of my expertise comes from only one class I took 11 years ago on Encryption, Compression, and Secure Hashing in graduate school — I found this alarming and tried my best to do some ancillary reading. It seems that DSA keys, in many ways, are less than optimal; to my mostly uneducated eye, skimming academic papers suggests that DSA keys are tougher to deploy correctly and keep secure, which leads to these sorts of possible problems.

    I've resolved to switch entirely to RSA keys. The great thing about RSA is its simplicity and ease of understanding. I grok factoring and understand better the complexity situation of the factoring problem (this time, from the two graduate courses I took on Complexity Theory, so my comfort is more solid :). I also find it intriguing that a child can learn how to factor in grade school, yet we can't teach a computer to do it efficiently. (By contrast, I didn't learn the discrete logarithm problem until my Freshman year of college, and I still have to look up the details to remind myself.) So, the “simplicity brings clarity” idea hints that RSA is a better choice.
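    To make that “simplicity brings clarity” point concrete, here is a deliberately tiny, insecure toy sketch of the RSA arithmetic. The primes and message below are throwaway values of my own choosing; real keys use primes hundreds of digits long, and nobody should roll their own crypto for actual use.

        # Toy RSA with absurdly small numbers, purely to illustrate the math.
        # Never use anything like this for real cryptography.
        p, q = 61, 53               # two tiny primes; real keys use enormous ones
        n = p * q                   # 3233 -- the public modulus
        phi = (p - 1) * (q - 1)     # 3120 -- Euler's totient of n

        e = 17                      # public exponent, chosen coprime to phi
        d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

        message = 42
        ciphertext = pow(message, e, n)    # encrypt: m^e mod n
        recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
        assert recovered == message

    The whole scheme stands or falls on the fact that recovering p and q from n alone is, as far as anyone knows, computationally infeasible at real key sizes; that is exactly the factoring problem I find so much easier to hold in my head than discrete logarithms.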

    Fact is, there was only one reason why I revoked my ancient RSA keys and generated DSA ones in the 1990s. The RSA patent and the strict licensing of that patent by RSA Data Security, Inc. made it impossible to implement RSA in Free Software back then. So, when I switched from proprietary PGP to GPG, my keys wouldn't import. Indeed, that one RSA patent alone set back the entire area of Free Software cryptography at least ten years.

    So, when I decided this evening that I'd need to generate a new key and begin promulgating it at key-signing parties sometime before DB41B387 turns ten, I realized I actually have the freedom to choose my encryption algorithm now! Sadly, it took almost these entire nine years to get there. Our community not only had to wait out this unassailable patent (RSA is among the most novel and non-obvious ideas that most computer professionals will ever see in their lives); once the RSA patent finally expired0, we then had to slowly but surely implement and deploy RSA in cryptographic programs, from scratch.

    I'm still glad that we're free of the RSA patent, but I fear that, among the mountain of “software patents” granted each year, the “new RSA” — a perfectly valid, non-obvious and novel patent that reads on software and fits both the industry's and the patent examiners' definition of “high quality” — is waiting to be discovered and used as a weapon to halt Free Software again. When I finally type gpg --gen-key (now with --expert mode!) for the first time in nine years, I hope I'll only experience the gladness of being able to generate an RSA key, and succeed in ignoring the fact that RMS' old essay about this issue remains a cautionary tale to this very day. Software patents are a serious long-term threat and must be eradicated entirely for the sake of software freedom. The biggest threat among them will always be the “valid”, “high quality” software patents, not the invalid, poor quality ones.


    0 Technically speaking, the RSA patent didn't need to expire. In a seemingly bizarre move, RSA Data Security, Inc. granted a Free license to the patent a few weeks before the actual expiration date. To this day, I believe the same theory I espoused at the time: their primary goal in doing this was merely to ruin all the “RSA is Free” parties that had been planned.

    Posted on Tuesday 09 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-12-04: The FLOSS License Drafter's Responsibility to the Community

    I finally set aside some time to read my old boss' open letter responding to criticisms of the FDL process. I gladly read his discussion of the responsibilities of software freedom license stewardship.

    I've been involved with the drafting of a number of FLOSS licenses (and exceptions to existing licenses). For example, I helped RMS a little with the initial FDL 1.0 drafting (the license at issue here); I was a catalyst for the creation of Artistic 2.0 and advised that process; and, I was heavily involved with the creation of the AGPL, and somewhat with the GPLv3. From these experiences, I know that, just like when a core developer gets annoyed when kibitzed by a user who just downloaded the program and is missing something obvious, we license drafters are human and often have the “did this person even read all the stuff we've written on this issue?” knee-jerk response to criticism. However, we all try to put that aside, and be ready to respond and take seriously any reasonable criticism. I am glad that RMS has done so here. The entity that controls future versions of a license for which authors often use an “or later” term holds great power. As the clichéd Spiderman saying goes, with great power, comes great responsibility.

    The FSF as a whole, and RMS in particular, have always known this well and taken it very seriously. Indeed, years ago, when I was still at FSF, RMS and I wrote an essay together on a closely related issue. This recent response on FDL reiterates some of those points, but with a real-world example explaining the decision-making process regarding the reasonable exercise of that power to, in turn, grant rights and freedoms rather than take them away.

    The key quote from his letter that stands out to me is: our commitment is that our changes to a license will stick to the spirit of that license, and will uphold the purposes for which we wrote it. This point is fundamental. As FLOSS license drafters, we must always, as RMS says, abide by the highest ethical standards to uphold the spirit that spurred the creation of these licenses.

    Far from being annoyed, I'm grateful for those who assume the worst of intentions and demand that we justify ourselves. For my part, I try to answer every question I get at conferences and in email about licensing policy as best I can with this point in mind. We in the non-profit licensing sector of the FLOSS world have a duty to the community of FLOSS users and programmers to defend their software freedom. I try to make every decision, on licensing policy (or, indeed, any issue) with that goal in mind. I know that my colleagues at the FSF and at the many other not-for-profit organizations always do the same, too.

    Posted on Thursday 04 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-12-01: AGPL Declared DFSG-Free

    Crossposted with autonomo.us.

    Late last week, the FTP Masters of Debian — who, absent a vote of the Debian developers, make all licensing decisions — posted their ruling that AGPLv3 is DFSG-Free. I was glad to see this issue was finally resolved after months of confusion; the AGPLv3 is now approved by all known FLOSS licensing ruling bodies (FSF, OSI, and Debian).

    It was somewhat fitting that the AGPLv3 was approved by Debian within a week of the one year anniversary of AGPLv3's release. This year of AGPLv3 has shown very rapid adoption of the AGPL. Even conservative numbers show an adoption rate of 15 projects per month. I expect the numbers to continue a steady, linear climb as developers begin to realize that the AGPL is the “copyleft of the Cloud”.

    Posted on Monday 01 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

November

  • 2008-11-20: podjango: A Minimalist Django Application for Podcast Publishing

    I had yet to mention in my blog that I now co-host a podcast at SFLC. I found myself, as we launched the podcast last week, in the classic hacker situation of one project demanding that I write code for a tangentially related project.

    Specifically, we needed a way to easily publish show notes and otherwise make available the podcast on the website and in RSS feeds. Fortunately, we already had a few applications we'd written using Django. I looked briefly at django-podcast, but the interface was a bit complicated, and I didn't like its (over)use of templates to do most of the RSS feed generation.

    The small blogging application we'd hacked up for this blog was so close to what we needed, that I simply decided to fork it and make it into a small podcast publisher. It worked out well, and I've now launched a Free Software project called podjango under the AGPLv3.

    Most of the existing code will be quite obvious to any Django hacker. The only interesting thing to note is that I put some serious effort into the RSS feeds. First, I heavily fleshed out the minimal example for an iTunesFeed generator in the Django documentation. It's currently a bit specific to this podcast, but should be easily abstracted. I did a good amount of research on the needed fields for the iTunes RSS and Media RSS and what should be in them. (Those feedforall.com tutorials appear to be the best I could find on this.)

    Second, I did about six hours of work to build what I called SFLC's omnibus RSS feed. The most effort went into building an RSS feed that includes disparate Django application components, but this thread on query set manipulation from django-users referenced from Michael Angela's blog was very helpful. I was glad, actually, that the ultimate solution centered around complicated features of Python. Being an old-school Perl hacker, I love when the solution is obvious once you learn a feature of the language that you didn't know before. (Is that the definition of programming language snobbery? ;)
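    For the curious, here is a minimal sketch of the sort of combined feed I mean, using Django's syndication framework. The model names Cast and BlogEntry are hypothetical stand-ins rather than the actual classes in our code; the point is simply merging querysets from two separate applications into one chronologically sorted feed.

        # Hypothetical sketch of an "omnibus" feed combining two Django apps.
        from itertools import chain

        from django.contrib.syndication.views import Feed

        from podcast.models import Cast      # hypothetical podcast-episode model
        from blog.models import BlogEntry    # hypothetical blog-post model

        class OmnibusFeed(Feed):
            title = "Everything feed"
            link = "/feeds/omnibus/"
            description = "Podcast episodes and blog posts, merged into one feed."

            def items(self):
                # Two different models can't share one queryset, so pull recent
                # items from each and sort the combined list by publication date.
                casts = Cast.objects.order_by("-pub_date")[:10]
                posts = BlogEntry.objects.order_by("-pub_date")[:10]
                return sorted(chain(casts, posts),
                              key=lambda obj: obj.pub_date, reverse=True)[:10]

            def item_title(self, item):
                return item.title

            def item_description(self, item):
                return item.summary

            def item_pubdate(self, item):
                return item.pub_date

            # Django resolves each item's link via get_absolute_url() by default,
            # so both models are assumed to define it.

    (Import paths shown are the modern ones; older Django versions housed the Feed class elsewhere, but the shape of the solution is the same.)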

    It also turns out that Fabian Scherschel (aka fabsh) had started working on a Django podcast application too, and he's going to merge in his efforts with podjango. I preemptively apologize publicly, BTW, that I didn't reach out to the django-podcast guys before starting a new project. However, I'm sure fabsh and I both would be happy to cooperate with them if they want to try to merge the codebases (although I don't want to use a non-Free software platform like Google Code to host any project I work on ;). Anyway, I really think RSS feeds should be implemented using generators in Python code rather than in templates, and I think the user interface should be abstracted away from as many of the details of the DTD fields as possible. Thus, it may turn out that we and django-podcast have incompatible design goals.

    Anyway, I hope the code we've released is useful, and I'm glad that Fabian is taking over as project lead. I need to move on to other projects, and I hope that others will be interested in generalizing and improving the code under Fab's leadership. I'm happy to help it along.

    Posted on Thursday 20 November 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-11-13: GPLv3/AGPLv3 Adoption: If It Happened Too Fast, I'd Be Worried

    Since the release of GPLv3, technology pundits have been opining about how adoption is unlikely, usually citing Linux's still-GPLv2 status as (often their only) example. Even though I'm a pro-GPLv3 (and, specifically, pro-AGPLv3) advocate, I have never been troubled by slow adoption, as long as it remained on a linear upswing from release day onward (which it has).

    Expecting only linear growth is a simple proposition, really. Free, Libre and Open Source Software (FLOSS) projects do not always have the most perfectly organized of copyright inventories, nor is the licensing policy of the project the daily, primary focus of the developers. Indeed, most developers have traditionally seen a licensing decision as something you think about once and never revisit!

    In some cases, such as with many of the packages in FSF's GNU project, there is a single entity copyright holder with a policy agenda, and such organizations can (and did) immediately relicense large codebases under GPLv3. However, in most projects, individual contributors keep their own copyrights, and the relicensing takes time and discussion, which must compete with the daily work of making better code.

    Relicensing from GPLv2-or-later

    GPLv2-or-later packages can be relicensed to GPLv3-or-later, or GPLv3-only, basically instantaneously. However, wholesale relicensing by a project leader would be downright rude. We're a consensus-driven community, and any project leader worth her title would not unilaterally relicense without listening to the community. In fact, it's somewhat unlikely a project leader would relicense any existing GPLv2-or-later copyrights under GPLv3-only (or GPLv3-or-later, for that matter) without the consent of the contributor who holds those copyrights. Even though that consent isn't needed, getting it anyway is a nice, consensus-building thing to do.

    In fact, I think most projects prefer to slowly change the license in various subparts of the work, as those parts are changed and improved. That approach avoids a “bombing run” patch that changes all the notices across the project at once, and it also reflects reality a bit better0.

    Of course, once you change one copyrightable part of a larger work to GPLv3-or-later, the effective license of the whole work is GPLv3-or-later, even if some parts could be extracted and distributed under GPLv2-or-later. So, in essence, GPLv2-or-later projects that have started taking patches licensed under GPLv3-or-later have effectively migrated to GPLv31. This fact alone, BTW, is why I believe strongly that GPLv3 adoption statistics sites (like Palamida's) have counts that underestimate adoption. Such sites are almost surely undercounting this phenomenon. (It's interesting to note that even with such likely undercounting, Palamida's numbers show a sure and steady linear increase in GPLv3 and AGPLv3 adoption.)

    Relicensing from GPLv2-only

    Relicensing from GPLv2-only is a tougher case, and will take longer for a project that undertakes it. Such relicensing requires some hard work, as a project leader will have to account for the copyright inventory and ensure that she has permission to relicense. This job, while arduous, is not impossible (contrary to what many pundits have suggested).

    But even folks like Linus Torvalds himself are thinking about how to get this done. Recently, I began using git more regularly. I noticed that Linus designed git's license to leave open an easily implemented possibility for future GPLv3 licensing. Even the bastion of GPLv2-only-ville wants options for GPLv3-relicensing left open.

    Not Rushing Is a Good Thing

    Software freedom licenses define the rules for our community; they are, in essence, a form of legislation that each project constructs for itself. One “country” (i.e., the GNU project) has changed all its “laws” quickly because it's located at the epicenter of where those “laws” were drafted. Indeed, most of us who were deeply involved with the GPLv3 process were happy to change quickly, because we watched the license construction happen draft-by-draft, and we understood deeply the policy questions and how they were addressed.

    However, most FLOSS developers aren't FLOSS licensing wonks like I and my colleagues at the FSF are. So, we always understood that developers would need time to grok the new license, and that they would prefer to wait for its final release before they bothered. (Not everyone wants to “run the daily snapshot in production”, after all.) The developers should indeed take their time. As a copyleft advocate, I'd never want a project to pick new rules they aren't ready for, or set legal terms they don't fully understand yet.

    The adoption rate of GPLv3 and AGPLv3 seems to reflect this careful and reasoned approach. Pundits can keep saying that the new license has failed, but I'm not going to take those comments seriously until the pundits can prove that this linear growth — a product of each project weighing the options slowly and carefully to come to a decision and then starting the slow migration — has ended. For the moment, though, we seem right on course.


    0Merely replacing the existing GPLv2-or-later notice to read “GPLv3-or-later” (or GPLv3-only) has little effect. In our highly-archived Internet world, the code that was under GPLv2-or-later will always be available somewhere. Since GPLv2 is irrevocable, you can't take away someone's permanent right to copy, modify, distribute the work under GPLv2. So, until you actually change the code, the benefit of a relicense is virtually non-existent. Indeed, its only actual value is to remind your co-developers of the plan to license as GPLv3-or-later going forward, and make it easy for them to license their changes under GPLv3-or-later.

    1I also suspect that many projects that are doing this may not be clearly explaining the overall licensing of the project to their users. A side-project that I work on during the weekends called PokerSource is actually in the midst of slow migration from GPLv3-or-later to AGPLv3-or-later. I have carefully explained our license migration and its implications in the toplevel LICENSE file, and encourage other projects to follow that example.

    Posted on Thursday 13 November 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

September

  • 2008-09-20: A Day to Focus on Software Freedom and Reject Proprietary Software

    Today is International Software Freedom Day. I plan to spend the whole day writing as much Free Software as I can get done. I have read about lots of educational events teaching people how to use and install Free Software, and those sound great. I am glad to read stories about how well the day is being spent by many, and I can only hope to have contributed as much as people who spend the day, for example, teaching kids to use GNU/Linux.

    What troubles me, though, is that some events today are sponsored by companies that produce proprietary software. I notice that even the official Software Freedom Day site lists various proprietary (or semi-proprietary) software companies as sponsors. Indeed, I declined an invitation to an event sponsored and hosted by a proprietary software company.

    Today is about saying no to proprietary software, at least for one day. We live in the real world, of course, and some days we have to be willing to set our political beliefs aside to negotiate with proprietary software companies. But, on Software Freedom Day, I hope that our community will send a message to proprietary (or semi-proprietary) software companies that we reject user subjugation and favor software freedom instead.

    Posted on Saturday 20 September 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-09-04: GPL, The 2-clause BSD of Network Services

    Crossposted with autonomo.us.

    So often, a particular strategy becomes dogma. Copyleft licensing constantly allures us in this manner. Every long-term software freedom advocate I have ever known — myself included — has spent periods of time slipping on the comfortable shoes of belief that copyleft is the central catalyst for software freedom.

    Copyleft indeed remains a successful strategy in maximizing software freedom because it backs up a community consensus on software sharing with the protection of the law. However, most people do not comply with the GPL merely because they fear the consequences of copyright infringement. Rather, they comply for altruistic reasons: because it advances their own freedom and the freedom of the people around them.

    Indeed, it is so important to remember that many of the FLOSS programs we use every day are not copylefted, and do not actually have any long-term proprietary forks (for me, Subversion, Trac and Twisted come to mind quickly). Examples like this helped me once again to clear away some clouded thinking about copyleft as a central tenet.

    With this mindset fresh, Mike Linksvayer and I had an excellent discussion last month that solidified this connection to network services, and specifically, the licenses for network services software. Much GPL'd network service software gives no source to users, but that may have little to do with the authors' “failure to upgrade” to the AGPL. In other words, the non-source availability of network service applications that are otherwise licensed in freedom is probably unrelated to the lack of network-freedom provisions in the license.

    In fact, more likely, the network service world now mimics the early days of the BSD licenses. Deployers are “proprietarizing” by default merely because there is no social effect to encourage release of modified source. Often, they likely haven't considered the complex issues of network service freedom, and are following the common existing practices. The advent of the GPL did help encourage software sharing in the community, but the general change in social standards that accompanied the GPL probably had a more substantial impact.

    Therefore, improved social standards will help improve source sharing in network services. We need to encourage, and more importantly, make it easy for network service deployers to make source of network applications available, regardless of their particular FLOSS license. No existing non-AGPL FLOSS licenses prohibit making the source available to network users. Network providers can and should simply do it voluntarily out of respect for their users. Developers of network service software, even if they do not choose the AGPL, should make it easy for the deployers to give source to their users. I hope to assist in this regard more directly before the end of 2008.

    Posted on Thursday 04 September 2008 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2008-09-02: GNU's Birthday

    Twenty-five years ago this month, I had just gotten my first computer, a Commodore 64, and was learning the very basics (quite literally) of programming. Unfortunately for my education, it would be a full eight years before I'd be permitted to see any source code to a computer program that I didn't write myself. I often look back at those eight years and consider that my most formative years of programming learning were wasted, since I was not permitted to study the programs written by the greatest minds.

    Fortunately for all the young programmers to come after me, something else was happening in an office at an MIT building in September 1983 that would make sure everyone would have the freedom to study code, and the freedom to improve it and contribute to the global library of software development knowledge. Richard Stallman announced that he would start the GNU project, a complete operating system that would give all its users freedom.

    I got involved with Free Software in 1992. At the time, I was the one student in my university who had ever heard of GNU and the recently released kernel named Linux. My professors knew of “that Stallman guy” but were focused primarily on academic research. Fortunately for me, they nevertheless gave me free rein over the systems to turn them into what might have been, in late 1992, one of the first Computer Science labs running entirely Free Software.

    Much more has happened since even then. To commemorate all that has come since Stallman's announcement, my colleagues at the FSF, home of the GNU project, released a video for this historic 25th anniversary. It took twenty-five years, and a fight at the BBC over DRM, but now even a famous, accomplished actor like Stephen Fry is interested in the work that Stallman began way back in a year when Michael Jackson was a musical phenomenon and not merely the punchline of a joke.

    These days, I have almost weekly moments of surprise that people outside of the Software Freedom Movement have actually heard of what I do for a living. When Matt Lee (whom I got to know when he came up through the ranks as a new FSF volunteer in the 2000's, just as I did in the 1990's) told me a few months ago that Stephen Fry had enthusiastically and immediately agreed to make this video, it was yet another moment of surprise. We now live in a movement that impacts everyone in the industrialized world, because nearly everyone who has access to electricity also must use a computer to interact with daily life. So many people are now impacted by the same problems of proprietary software that Stallman saw, in 1983, affecting his small developer community. Thanks to the work of thousands, we now have the opportunity to welcome new groups into a computing world that can give them freedom. I'm happy that the friendly face of a talented and accomplished entertainer and world-class actor is here to welcome them.

    Posted on Tuesday 02 September 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

August

  • 2008-08-20: Compliance Advice Core-Dumped

    For ten years, I've been building up a bunch of standard advice on GPL compliance. Usually, I've found myself repeating this advice on the phone, again and again, to another new GPL violator who screwed it all up, just like the last one did. In the hopes that we will not have to keep giving this advice one-at-a-time to each violator, my colleagues and I have finally gotten an opportunity to write out in detail our best advice on the subject.

    Somewhere around 2004 or so, I thought that all of the GPL enforcement was going to get easier. After Peter Brown, Eben Moglen, David Turner and I had formalized FSF's GPL Compliance Lab, and Dan Ravicher and I had taught a few CLE classes to lawyers in the field, we believed that the world was getting a clue about GPL compliance. Many people did, of course, and we constantly welcome new groups of well-educated people in the commercial space who comply with the GPL correctly and who interact positively with our community.

    However, the interest in FLOSS keeps growing, rapidly. So, for every new citizen who does the research ahead of time and learns the rules, there are dozens who don't. The education effort is therefore forever ongoing because the newbies always seem to outnumber the old hands. It's our own copyleft version of Eternal September. The whole space is now big enough that one-by-one education in our traditional way can no longer scale.

    Hopefully, publishing some guidelines for GPL compliance will help the education effort scale. If you redistribute GPL'd software commercially in any way, or you are a lawyer who represents people that do, please spend the time to familiarize yourself with this information. If you have ideas on how we can expand this document, we would of course love to hear from you.

    Update (on 2008-08-26): Thanks for all the feedback we've gotten from the community. We've been glad to update the document to incorporate your suggestions.

    Posted on Wednesday 20 August 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-08-16: If The Worst of Us Wins, The Best of Us Surely Will

    There has been much chatter and coverage about last week's court decision related to the Artistic License. Having spent a decade worrying about the Artistic License, I was surprised and relieved to see this decision.

    One of the first tasks I undertook in the late 1990s in the world of Software Freedom licenses was addressing issues surrounding the Artistic License. My first Software Freedom community was the Perl one, but my second was the licensing wonks. Therefore, I walked the line for many years, as I considered the poor drafting of the Original Artistic License. When the Perl6 process started in 2000, I chaired the Licensing Committee, and wrote all of the licensing RFCs in the Perl6 process, including RFC 211, which collected all the historical arguments about bad drafting of the Artistic License and argued that we change the Artistic License.

    Last year, I was silent about the lower court decision, because I'd known for years that the Original Artistic License was a poorly drafted and confusing license. I frankly was not surprised that a court had considered it problematic. Of course, I was glad for the appeal, and that there was a widely supported amicus brief arguing that the Artistic License should be treated appropriately as a copyright license. However, I had already prepared myself to live with the fact that my greatest licensing fears had come true: the most poorly drafted FLOSS license had been the first for a USA court to consider, and that court had seen what we all saw — a license that was confusing and could not be upheld due to lack of clarity.

    I was overjoyed last week to see that the Federal Circuit ruled that even a poorly drafted copyright license like that must be taken seriously and that the copyright holder could seek remedies under copyright law. Now that I have seen this decision, I feel confident that the rest of our licenses will breeze through the courts, should the need arise. We've been arguing for a decade that the Artistic license is problematic, and even Larry Wall (its author) admitted that his intent wasn't necessarily to draft a good license but to inspire people to contact him for additional permissions outside the GPL. Nevertheless, he drafted a license that the USA courts clearly see as a valid copyright license. The bottom bar has been set, and since all our other licenses are much clearer, it will be smooth sailing from here on out.

    (Please note, if you are a fan of the Artistic License, the Artistic License 2.0 is a much better option and is recommended. Despite the decision, we should still cease using the Original Artistic License now that we have 2.0.)

    Posted on Saturday 16 August 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

July

  • 2008-07-23: When Will Hosting Sites Allow AGPLv3 Code?

    At the OSCON Google Open Source Update, Chris Dibona reiterated his requirement to see significant adoption before code.google.com will host AGPLv3 projects (his words). I asked him to tell us how tall we in the AGPLv3 community need to be to ride this ride, but unfortunately he reiterated only the bar of “significant adoption”. I therefore am redoubling my efforts to encourage projects to switch to the AGPLv3, and for our community to build a list of AGPLv3'd projects, so that we can convince them.

    Chris argues that including AGPLv3 would encourage license proliferation. On the surface, his arguments seem valid. I don't like license proliferation, either. Indeed, I have been a proponent of reducing license proliferation since around 2000 — long before it was fashionable, and when the OSI itself was the primary purveyor of license proliferation. I'm very glad that everyone has gotten on the same page about this, and would certainly not want to change my position now that we've reached consensus.

    However, AGPLv3 is not an example of license proliferation for three reasons. First, AGPLv3 is a license published by an organization (my old employers, the FSF) that has a 24-year history of publishing — indeed, inventing — the most popular and major licenses available in the FLOSS world. Comparing them (as some have) to Nokia, who merely published a vanity license with an OSI rubber stamp, is simply not valid.

    Second, the history of AGPL itself shows that proliferation is not at work here. AGPL was first drafted and published in early 2002, and has been in constant use since then. It filled a niche for users who were clamoring for a specific license to address a clear concern related to software freedom. I grant that the license is adopted by a small community, but GPL itself started with minimal interest (i.e., only in the GNU project). Also, licenses that are “GPL plus various special exceptions” that deal with tightly confined areas are, similar to AGPLv3, of interest to only small groups currently. There is no reason to reject a license that has a strong level of interest in a small community, particularly if it is — as GPL+exceptions and AGPLv3 are — compatible with existing licenses like GPLv3. In these cases, we should understand the reasons its user community picks it. In the AGPLv3 case, the license addresses important FLOSS principles under serious study by our community. Any license that is actually redundant couldn't pass this test; AGPLv3 can.

    Finally, the AGPLv3 is the outcome of a public process in which Google itself (as well as many others) participated. Indeed, it was the original intent of the GPLv3 drafters to include the Affero clause in the GPLv3 itself. The committees (on which Google served) convinced RMS and other drafters to not include the clause, and that is why it was put into a separate license. We must consider the fairness issue: some members of the community asked us to not include the Affero clause in GPLv3; others wanted it. The parts of the community who didn't want the clause should be accepting of the idea that another publicly-audited license to address this concern should be published for the slighted community.

    Therefore, in this post, I am asking for help: will someone maintain a website that specifically tracks AGPLv3 adoption (as opposed to other sites that try to track everything)? I was going to do it myself, but since I'm the author of the Affero clause and a primary advocate in AGPLv3 adoption, I think it would better if someone else did it. Please email me if you are interested in this volunteer task. I'll update this post once we have a team of folks willing to work on this.

    Posted on Wednesday 23 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-07-22: Welte Receives Open Source Award for GPL Enforcement

    About two hours ago, Harald Welte received the 2008 Open Source Award entitled the Defender of Rights. (Open Source awards are renamed for each individual who receives them.) This award comes on the heels of the FSF Award for the Advancement of Free Software in March. I am glad that GPL enforcement work is now receiving the recognition it deserves.

    When I started doing GPL enforcement work in 1999, and even when, two years later, it became a major center of my work (as it remains today), the violations space was a very lonely place to work. During that early period, I and my team at FSF were the only people actively enforcing the GPL on behalf of the Software Freedom Movement. When Harald started gpl-violations.org in 2004, it was a relief to finally see someone else taking GPL violations as seriously as I and my colleagues at the FSF had been for so many years.

    Of course, it was no surprise when Harald received the FSF award earlier this year. This Open Source Award now shows a broader recognition. In fact, I hope that this award is a harbinger to indicate that the larger FLOSS world has realized the tremendous value in consistent and serious GPL enforcement that some of us have done for so long. The copyleft is meaningless if it is not defended against those who ignore it, and I am glad that more of the FLOSS world has begun to see that.

    Posted on Tuesday 22 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-07-14: Autonomo.us Computing

    The Network Services committee that I alluded to recently in various interviews is now officially public and named: Autonomo.us. (Thanks to one of the committee members, Evan Prodromou, who donated the domain name.) Autonomo.us is officially endorsed by the FSF.

    I've written before about how discussions began at FSF in January 2002 to address the “ASP loophole of the GPL”. In the months that followed, when I came up with the idea for what would (later be named) the Affero clause, I naïvely thought that a license term for the software would “solve” the Software as a Service (SaaS) problem. Indeed, I considered the problem fully addressed upon publication of the original AGPL, and it was not until much later that I realized the problem was more complex.

    The AGPLv3 is only one (albeit essential) part of what must be a multi-pronged strategy to address the freedom implications and concerns of SaaS. At Autonomo.us, we have published The Franklin Street Statement on Freedom and Network Services (named for the place it was declared — the location of the post-Temple-Place FSF offices). The Statement is a manifesto (of sorts) outlining the concerns that must be addressed and the beginnings of some ideas for solutions. I hope you will read it and begin considering this issue if you haven't already, and that you will endorse the statement if you already understand the issue. We hope to be publishing more on that site as the year goes on!

    Posted on Monday 14 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-07-03: Like Twitter, but with Freedom Inside

    A company called Control Yourself, led by Evan Prodromou (who serves with me and many others on the FSF-endorsed Freedom for Network Services Committee), yesterday launched a site called identi.ca. It's a microblogging service similar to Twitter, but it is designed to respect the rights and freedoms of its users.

    I'm personally excited because the software for the system, Laconica, is under the license that I originally drafted back in 2002, the Affero GPL (which was updated as part of the GPLv3 process, and is now available as AGPLv3). This marks the first time I've seen a company release its product under a network service freedom-defending license from the start.

    The launch comes at an interesting time. Twitter has had no Jabber-based updates for more than a month, and Identica allows updates via Jabber. Thus, in a way, it's more fully featured than Twitter is right now!

    Posted on Thursday 03 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

June

  • 2008-06-28: Does This Mean We've “Made It” as a Social Cause?

    I got a phone call yesterday from someone involved with one of the many socially responsible investment houses. It appears that in some (thus far, small) corners of the socially responsible investment community, they've begun the nascent stages of adding “willingness to contribute to FLOSS” to the consideration map of social responsibility. This is an issue that has plagued me personally for many years, and I was excited to receive the call.

    When I graduated high school and read my first book on personal financial management, I learned how to invest for retirement in mutual funds. The book mentioned the (then) somewhat new practice of “socially responsible investing”, which immediately intrigued me. The author argued, however, that it was silly to make investment decisions based on personal beliefs. I immediately disagreed with that, but I discovered that his secondary point was actually accurate: beyond the Big Issues (weapons manufacturing, tobacco, etc.), it was tough to find a fund that actually shared your personal beliefs.

    Once I did some research, I discovered that it wasn't actually as bad as that, because there actually is a pretty good consensus on what is and is not socially responsible (or, at least, the general consensus in this regard seems to match my personal beliefs, anyway). However, I did discover a gaping hole in the socially responsible investment agenda. The biggest social issue in my personal life — the issue of software freedom — was never on others' radar screens as a “socially responsible issue”.

    For example, in 1996, when I had my first opportunity to roll a 401(k) into an investment of my own choosing, I discovered a troubling fact. In every single socially responsible fund, when I looked at the stocks held (sorted by percentage), Microsoft was always in the top ten, and Oracle in the top twenty. Indeed, on most socially responsible axes, Microsoft and Oracle look good: they treat their employees reasonably well, they don't generally build products that actively kill people (although many of us die inside a little bit every time we use proprietary software), and, heck, if they use more DRM, they can ship their software and documentation via the network and won't even ship as many CDs to fill up landfills. This kind of thinking about “socially responsible” ignores how the proprietariness of the company's technology negatively impacts people outside of the company. Nevertheless, for years, I've held my nose and put my retirement money in these funds, content with the compromise that at least I don't have my retirement savings in oil companies.

    I tell this backstory to communicate how glad I was to get the call from an employee of a socially responsible investment house. This fellow was actually investigating the FLOSS credentials of various companies and trying to bring it forward as a criterion when considering how socially responsible their practices are. He seemed genuinely interested in bringing this forward as part of a social agenda for his company. I told him: every great idea starts as a conversation between two people, and enthusiastically answered his queries.

    It was clear that FLOSS considerations are new and not widely adopted as a factor in the socially responsible investing world, but I am glad that at least someone in that world is thinking about these questions. Of course, I agree that in the grand scheme, FLOSS issues should not be ranked too highly — certainly issues of environmental sustainability and human rights have a higher and more immediate social impact0. However, given that Microsoft so often ends up in the top ten of “good socially responsible investments”, FLOSS issues are clearly ranked far too low in the calculation.

    Hopefully, this phone call I took yesterday shows we're entering an era where FLOSS issues are on the socially responsible criteria list for investors. I further hope this blog entry doesn't stop socially responsible investors and fund managers from contacting me in the future to get advice on how socially responsible various companies are. I debated whether to write about this call publicly, but ultimately went for it, since it's an issue I think deserves some net.attention. So many of us, FLOSS fans included, must now manage our own retirement accounts, since pension funds have generally given way to self-directed retirement savings options. If you have a fund with a socially responsible investment company, take this opportunity to give them a call or send them a letter to tell them you'd like to see FLOSS issues on the criteria list. If you don't yet invest with a socially responsible company, consider switching to one, as they clearly will be the first to add FLOSS-related criteria to their investing agenda.


    0I have never believed myself that FLOSS is the most important social justice issue in the grand scheme. I struggled for years with the question of whether to devote my career to a social cause that wasn't top priority; things like human rights and environmental sustainability certainly deserve more immediate attention. However, it turned out that my skills, knowledge, background and talent are clearly uniquely tuned to Computer Science in general and FLOSS in particular, and therefore I can have the greatest positive impact focusing on this rather than on would-be higher-priority causes. I only wish we could get people in these other movements to at least see that they are better off not using Microsoft for their own operations (in my experience, NGOs and NPOs are more likely to stick with proprietary software than for-profit companies are); but that's an agenda for another blog entry.

    Posted on Saturday 28 June 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-06-20: Stop Obsessing and Just Do It: VoIP Encryption Is Easier than You Think

    Ian Sullivan showed me an article that he read about eavesdropping on Internet telephony calls. I'm baffled by the obsession with this issue on two fronts. First, I am amazed that people want to hand their phone calls over to yet another proprietary vendor (namely, Skype) that uses unpublished, undocumented, non-standard protocols and that respects your privacy even less than the traditional PSTN vendors do. Second, I don't understand why cryptography experts believe we need to develop complicated new technology to solve this problem in the medium term.

    At SFLC, I set up the telephony system as VoIP with encryption on every possible leg. While SFLC sometimes uses Skype, I don't, of course, because it is (a) proprietary software, (b) based on an undocumented protocol, and (c) controlled by a company that has less respect for users' privacy than the PSTN companies themselves. Indeed, security was actually last on our list of reasons to reject Skype, because we already had a simple solution for encrypting our telephony traffic: all calls are made through a VPN.

    Specifically, at SFLC, I set up a system whereby all users have an OpenVPN connection back to the home office. From there, they can register a SIP client to an internal Asterisk server living inside the VPN network. Using that SIP phone, they can call any SFLC employee, fully encrypted. That call continues either on the internal secured network, or back out over the same VPN to the other SIP client. Users can also dial out from there to any PSTN DID.
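
    To give a concrete flavor of the laptop side of this, here is a minimal sketch of the kind of OpenVPN client configuration involved (the host and file names are hypothetical placeholders, not our actual configuration); once the tunnel is up, the SIP client simply registers to the Asterisk server's internal address across it:

                    # client.conf -- hypothetical sketch of a roaming laptop's VPN config
                    client
                    dev tun
                    proto udp
                    remote vpn.example.org 1194
                    ca ca.crt
                    cert laptop.crt
                    key laptop.key
                    persist-key
                    persist-tun
                    verb 3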

    Of course, when calling the PSTN, the encryption ends at SFLC's office, but that's the PSTN's fault, not ours. No technological solution — save using a modem to turn that traffic digital — can easily solve that. However, with minimal effort, and using existing encryption subsystems, we have end-to-end encryption for all employee-to-employee calls.

    And it could go even further with a day's worth of work! I have a pretty simple idea for offering an encrypted call to anyone who happens to have a SIP client and an OpenVPN client. My plan is to set up a public OpenVPN server that accepts connections from any host at all, which would then allow encrypted “phone the office” calls to any SFLC phone from any SIP client anywhere on the Internet. In this way, anyone wishing end-to-end phone encryption to the SFLC need only connect to that publicly accessible OpenVPN and dial our extensions with their SIP client over that line. This solution even has the added bonus that it avoids the common firewall- and NAT-related SIP problems, since all traffic gets tunneled through the OpenVPN: if OpenVPN (which is, unlike SIP, a single-port UDP/IP protocol) works, SIP automatically does!

    The main criticism of this technique regards the silliness of two employees at a conference in San Francisco bouncing all the way through our NYC offices just to make a call to each other. While the Bandwidth Wasting Police might show up at my door someday, I don't actually find this to be a serious problem. The last mile is always the problem in Internet telephony, so a call that goes mostly across a single set of last-mile infrastructure in a particular municipality is neither worse nor better than one that takes a long-haul round trip. Very occasionally, there is a half second of delay when you have a few VPN-based users on a conference call together, but that has a nice social side effect of stopping people from trying to interrupt each other.

    Finally, the article linked above talks about the issue of variable bit rate compression changing packet size such that even encrypted packets yield possible speech information, since some sounds need larger packets than others. This problem is solved simply for us with two systems: (a) we use µ-law, a very old, constant bit rate codec, and (b) a tiny bit of entropy is added to our packets by default, because the encryption is occurring for all traffic across the VPN connection, not just the phone call itself. Remember: all the traffic is going together across the one OpenVPN UDP port, so an eavesdropper would need to detangle the VoIP traffic from everything else. Indeed, I could easily make (b) even stronger by simply having the SIP client open another connection back to the asterisk host and exchange payloads generated from /dev/random back and forth while the phone call is going on.
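
    For those curious what (a) looks like in practice, here is a rough sketch of the sort of sip.conf peer entry that pins a phone to µ-law; the peer name, secret, and context are hypothetical:

                    ; sip.conf (sketch only; names are placeholders)
                    [general]
                    disallow=all
                    allow=ulaw        ; constant-bit-rate codec, as discussed above

                    [employee-phone]
                    type=friend
                    secret=CHANGEME
                    host=dynamic
                    context=internal
                    disallow=all
                    allow=ulaw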

    This is really one of those cases where the simpler the solution, the more secure it is. Trying to focus on “encryption of VoIP and VoIP only” is what leads us to the kinds of vulnerabilities described in that article. VoIP isn't like email, where you always need an encryption-unaware delivery mechanism between Alice and Bob. I believe I've described a simple mechanism that can allow anyone with an Asterisk box, an OpenVPN server, and an Internet connection to publish to the world easy instructions for phoning them securely with merely a SIP client plus an OpenVPN client. Why don't we just take the easy and more secure route and do our VoIP this way?

    Posted on Friday 20 June 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

April

  • 2008-04-10: The GPL is a Tool to Encourage Freedom, Not an End in Itself

    I was amazed to be involved in yet another discussion recently regarding the old debate about the scope of the GPL under copyright law. The debate itself isn't amazing — these debates have happened somewhere every six months, almost on cue, since around 1994 or so. What amazed me this time is that some people in the debate believed that the GPL proponents intend to sneakily pursue an increased scope for copyright law. Those who think that have completely misunderstood the fundamental idea behind the GPL.

    I'm disturbed by the notion that some believe the goal of the GPL is to expand copyrightability and the inclusiveness of derivative works. It seems that so many forget (or maybe they never even knew) that copyleft was invented to hack copyright — to turn its typical applications to software inside out. The state of affairs that software is controlled by draconian copyright rules is a lamentable reality; copyleft is merely a tool that diffuses the proprietary copyright weaponry.

    But, if it were possible to really consider reduction in copyright control over software, then I don't know of a single GPL proponent who wouldn't want to bilaterally reduce copyright's scope for software. For example, I've often proposed, since around 2001, that perhaps copyright for software should only last three years, non-renewable, and that it require all who wished to distribute non-public-domain software to register the source with the Copyright Office. At the end of the three years, the Copyright Office would automatically publish that now public-domain source to the world.

    If my hypothetical system were the actual (and only) legal regime for software, and were equally applied to all software — from the fully Free to the most proprietary — I'd have no sadness at all that opportunities for GPL enforcement ended after three years, and that all GPL'd software fell into the public domain on that tight schedule, because proprietary software and FLOSS would have the same treatment. Meanwhile, great benefit would be gained for the freedom of all software users. In short, GPL is not an end in itself, and I wouldn't want to ignore the actual goal — more freedom for software users — merely to strengthen one tool in that battle.

    In one of my favorite films, Kevin Smith's Dogma, Chris Rock's character, Rufus, argues that it's better to have ideas than beliefs, because ideas can change when the situation does, but beliefs become ingrained and are harder to shake. I'm not a belief-less person, but I certainly hold the GPL and the notion of copyleft firmly in the “idea” camp, not the “belief” one. It's unfortunate that the entrenched interests outside of software are (more or less) inadvertently strengthening software copyright, too. Thus, in the meantime, we must hold steadfast to the GPL going as far as is legally permitted under this ridiculously expansive copyright system we have. But, should a real policy dialogue open on reducing software copyright's scope, GPL proponents will be the first in line to encourage such bilateral reduction.

    Posted on Thursday 10 April 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

January

  • 2008-01-24: When your apt-mirror is always downloading

    When I started building our apt-mirror, I ran into a problem: the machine was being throttled by ubuntu.com's servers, even though I had already completed much of the download (which took weeks across multiple distributions). I really wanted to roll out the solution quickly, particularly because service from the remote servers was worse than ever due to the throttling that the mirroring created. But, with the mirror incomplete, I couldn't very well point users at half-populated repositories.

    The solution was simply to let Apache redirect users to the real servers whenever the mirror doesn't have the file. The first order of business for that is to rewrite and redirect URLs when files aren't found. This is a straightforward Apache configuration:

                       RewriteEngine on
                       RewriteLogLevel 0
                       RewriteCond %{REQUEST_FILENAME} !^/cgi/
                       RewriteCond /var/spool/apt-mirror/mirror/archive.ubuntu.com%{REQUEST_FILENAME} !-F
                       RewriteCond /var/spool/apt-mirror/mirror/archive.ubuntu.com%{REQUEST_FILENAME} !-d
                       RewriteCond %{REQUEST_URI} !(Packages|Sources)\.bz2$
                       RewriteCond %{REQUEST_URI} !/index\.[^/]*$ [NC]
                       RewriteRule ^(http://%{HTTP_HOST})?/(.*) http://91.189.88.45/$2 [P]
                     

    Note a few things there:

    • I have to hard-code an IP number, because as I mentioned in the last post on this subject, I've faked out DNS for archive.ubuntu.com and other sites I'm mirroring. (Note: this has the unfortunate side-effect that I can't easily take advantage of round-robin DNS on the other side.)

    • I avoid taking Packages.bz2 from the other site, because apt-mirror actually doesn't mirror the bz2 files (although I've submitted a patch to it so it will eventually).

    • I make sure that index files get built by my Apache and not redirected.

    • I am using Apache proxying, which gives me Yet Another type of cache temporarily while I'm still downloading the other packages. (I should actually work out a way to have these caches used by apt-mirror itself in case a user has already requested a new package while waiting for apt-mirror to get it.)

    Once I do a rewrite like this for each of the hosts I'm replacing with a mirror, I'm almost done. The problem is that if for any reason my site needs to give a 403 to the clients, I would actually like to double-check to be sure that the URL doesn't happen to work at the place I'm mirroring from.

    My hope was that I could write a RewriteRule based on what the HTTP return code would be when the request completed. This seemed really hard to do, and perhaps impossible. The quickest solution I found was to write a CGI script to do the redirect. So, in the Apache config I have:

                    ErrorDocument 403 /cgi/redirect-forbidden.cgi
                    

    And, the CGI script looks like this:

                    #!/usr/bin/perl
                    
                    use strict;
                    use CGI qw(:standard);
                    
                    # Apache sets REDIRECT_SCRIPT_URI to the URL that triggered the 403.
                    my $val = $ENV{REDIRECT_SCRIPT_URI};
                    
                    # Strip the local mirror hostname, keeping its leading label in $1
                    # and the requested path in $2.
                    $val =~ s%^http://(\S+)\.sflc\.info(/.*)$%$2%;
                    if ($1 eq "ubuntu-security") {
                       $val = "http://91.189.88.37$val";
                    } else {
                       $val = "http://91.189.88.45$val";
                    }
                    
                    print redirect($val);
                    

    With these changes, the user will be redirected to the original site when files aren't available on the mirror, and as the mirror becomes more complete, they'll get more and more files from the mirror.
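
    A quick sanity check, by the way, is to ask the mirror for something it certainly doesn't have yet and watch what comes back; something like this (the package path is made up) should return a 200 proxied from upstream (or, on a 403, a redirect to the real archive) rather than a plain 404:

                    # run from a client that resolves archive.ubuntu.com to the mirror
                    $ curl -sI http://archive.ubuntu.com/ubuntu/pool/main/h/hello/hello_2.2-2_i386.deb | head -n 3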

    I still have problems if for any reason the user gets a Packages or Sources file from the original site before the mirror is synchronized, but this rarely happens since apt-mirror is pretty careful. The only time it might happen is if the user did an apt-get update when not connected to our VPN and only a short time later did one while connected.

    Posted on Thursday 24 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-01-16: apt-mirror and Other Caching for Debian/Ubuntu Repositories

    At a small non-profit, everyone has to wear lots of hats, and one that I have to wear from time to time (since no one else here can) is “sysadmin”. One of the perennial rules of system administration is: you can never give users enough bandwidth. The problem is, they eventually learn how fast your connection to the outside is, and then complain any time a download doesn't run at that speed. Of course, if you have a T1 or better, it's usually the other side that's the problem. So, I look to use our extra bandwidth during off hours to cache large pools of data that are often downloaded. With an organization full of Ubuntu machines, the Ubuntu repositories are an important target for caching.

    apt-mirror is a program that mirrors large Debian-based repositories, including the Ubuntu ones. There are already tutorials available on how to set it up. What I'm writing about here is a way to “force” users to use that repository.
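
    For context, the apt-mirror configuration itself is just a short list of repositories to pull. A minimal /etc/apt/mirror.list sketch might look something like the following (the suites shown are examples, not necessarily the exact set I mirror):

                    # /etc/apt/mirror.list (sketch)
                    set base_path /var/spool/apt-mirror
                    set nthreads  20

                    deb http://archive.ubuntu.com/ubuntu gutsy main restricted universe multiverse
                    deb http://archive.ubuntu.com/ubuntu gutsy-updates main restricted universe multiverse
                    deb http://security.ubuntu.com/ubuntu gutsy-security main restricted universe multiverse

                    clean http://archive.ubuntu.com/ubuntu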

    The obvious way, of course, is to make everyone's /etc/apt/sources.list point at the mirrored repository. This often isn't a good option. Aside from the servers, the user base here is all laptops, which means they will often be on networks that are actually closer to another package repository, and I may want to avoid interfering with that. (Although, given that I can usually give almost any IP number in the world better than the 30 kb/sec to which ubuntu.com's servers seem to quickly throttle, that probably doesn't matter so much.)

    The bigger problem is that I don't want to be married to the idea that the apt-mirror is part of our essential 24/7 infrastructure. I don't want an angry late-night call from a user because they can't install a package, and I want the complete freedom to discontinue the server at any time, if I find it to be unreliable. I can't do this easily if sources.list files on traveling machines are hard-coded with the apt-mirror server's name or address, especially when I don't know when exactly they'll connect back to our VPN.

    The easier solution is to fake out the DNS lookups via the DNS server used by the VPN and the internal network. This way, users only get the mirror when they are connected to the VPN or in the office; otherwise, they get the normal Ubuntu servers. I had actually forgotten you could fake out DNS on a per-host basis, but asking my friend Paul reminded me quickly. In /etc/bind/named.conf.local (on Debian/Ubuntu), I just add:

                    zone "archive.ubuntu.com"      {
                            type master;
                            file "/etc/bind/db.archive.ubuntu-fake";
                    };
                    

    And in /etc/bind/db.archive.ubuntu-fake:

                    $TTL    604800
                    @ IN SOA archive.ubuntu.com.  root.vpn. (
                           2008011001  ; serial number                                              
                           10800 3600 604800 3600)
                         IN NS my-dns-server.vpn.
                    
                    ;                                                                               
                    ;  Begin name records                                                           
                    ;                                                                               
                    archive.ubuntu.com.  IN A            MY.EXTERNAL.FACING.IP
                    

    And there I have it; I just do one of those for each address I want to replace (e.g., security.ubuntu.com). Now, when client machines look up archive.ubuntu.com (et al.), they'll get MY.EXTERNAL.FACING.IP, but only when my-dns-server.vpn is first in their resolv.conf.
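
    It's easy to verify the override by querying the two resolvers directly; something along these lines (output trimmed, and the internal resolver's address is a placeholder) shows which answer a client will see:

                    # asking the VPN's DNS server (e.g. 192.168.1.1) returns the mirror:
                    $ host archive.ubuntu.com 192.168.1.1
                    archive.ubuntu.com has address MY.EXTERNAL.FACING.IP

                    # asking an outside resolver still returns the real archive:
                    $ host archive.ubuntu.com
                    archive.ubuntu.com has address 91.189.88.45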

    Next time, I'll talk about some other ideas on how I make the apt-mirror even better.

    Posted on Wednesday 16 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-01-09: Postfix Trick to Force Secondary MX to Deliver Locally

    Suppose you have a domain name, example.org, that has a primary MX host (mail.example.org) that does most of the delivery. However, one of the users, who works at example.com, actually gets delivery of <[email protected]> at work (from the primary MX for example.com, mail.example.com). Of course, a simple .forward or /etc/aliases entry would work, but this would pointlessly push email back and forth between the two mail servers — in some cases, up to three pointless passes before the final destination! That's particularly an issue in today's SPAM-laden world. Here's how to solve this waste of bandwidth using Postfix.

    This tutorial assumes you have some reasonable background knowledge of Postfix MTA administration. If you don't, it might go a bit fast for you.

    To begin, first note that this setup assumes that you have something like this with regard to your MX setup:

                    $ host -t mx example.org
                    example.org mail is handled by 10 mail.example.org.
                    example.org mail is handled by 20 mail.example.com.
                    $ host -t mx example.com
                    example.com mail is handled by 10 mail.example.com.
                    

    Our first task is to avoid example.org SPAM backscatter on mail.example.com. To do that, we make a file with all the valid accounts for example.org and put it in mail.example.com:/etc/postfix/relay_recipients. (For more information, read the Postfix docs or various tutorials about this.) After that, we have something like this in mail.example.com:/etc/postfix/main.cf:

                    relay_domains = example.org
                    relay_recipient_maps = hash:/etc/postfix/relay_recipients
                    
    And this in /etc/postfix/transport:
                    example.org     smtp:[mail.example.org]
                    

    This will give proper delivery for our friend <[email protected]> (assuming mail.example.org is forwarding that address properly to <[email protected]>), but mail will still be pushed back and forth unnecessarily when mail.example.com gets a message for <[email protected]>. What we actually want is to wise up mail.example.com so it “knows” that mail for <[email protected]> is ultimately going to be delivered locally on that server.

    To do this, we add <[email protected]> to the virtual_alias_maps, with an entry like:

                    [email protected]      user
                    
    so that the key [email protected] resolves to the local username user. Fortunately, Postfix is smart enough to look at the virtual table first before performing a relay.

    Now, what about an alias like <[email protected]> that actually forwards to <[email protected]>? That will have the same pointless forwarding from server to server unless we address it specifically. To do so, we use the transport file. Of course, we should already have that catch-all entry there to do the relaying:

                    example.org     smtp:[mail.example.org]
                    

    But, we can also add email-address-specific entries for certain addresses in the example.org domain. Fortunately, email address matches in the transport table take precedence over whole-domain match entries (see the transport man page for details). Therefore, we simply add entries to that transport file like this for each of user's aliases:

                    [email protected]    local:user
                    
    (Note: that assumes you have a delivery method in master.cf called local. Use whatever transport you typically use to force local delivery.)
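
    One last housekeeping detail, in case it trips anyone up: after editing these files, the hash maps must be rebuilt and Postfix reloaded. Roughly (assuming the conventional /etc/postfix file locations):

                    postmap /etc/postfix/relay_recipients
                    postmap /etc/postfix/transport
                    postmap /etc/postfix/virtual
                    postfix reload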

    And there you have it! If you have (those admittedly rare) friendly and appreciative users, they will thank you for the slightly quicker mail delivery, and you'll be glad that you aren't pointlessly shipping SPAM back and forth between MXes.

    Posted on Wednesday 09 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2008-01-01: Apache 2.0 -> 2.2 LDAP Changes on Ubuntu

    I thought the following might be of use to those of you who are still using Apache 2.0 with LDAP and wish to upgrade to 2.2. I found this basic information online, but I had to search pretty hard for it. Perhaps presenting it in a more straightforward way will help the next searcher find an answer more quickly. It's probably only of interest if you are using LDAP as your authentication system with an older Apache (e.g., 2.0) and have upgraded to 2.2 on an Ubuntu or Debian system (such as upgrading from dapper to gutsy).

    When running dapper on my intranet web server with Apache 2.0.55-4ubuntu2.2, I had something like this:

                         <Directory /var/www/intranet>
                               Order allow,deny
                               Allow from 192.168.1.0/24 
                    
                               Satisfy All
                               AuthLDAPEnabled on
                               AuthType Basic
                               AuthName "Example.Org Intranet"
                               AuthLDAPAuthoritative on
                               AuthLDAPBindDN uid=apache,ou=roles,dc=example,dc=org
                               AuthLDAPBindPassword APACHE_BIND_ACCT_PW
                               AuthLDAPURL ldap://127.0.0.1/ou=staff,ou=people,dc=example,dc=org?cn
                               AuthLDAPGroupAttributeIsDN off
                               AuthLDAPGroupAttribute memberUid
                    
                               require valid-user
                        </Directory>
                    

    I upgraded that server to gutsy (via dapper → edgy → feisty → gutsy in succession, just because it's safer), and it now has Apache 2.2.4-3build1. The method for doing LDAP authentication is a bit more straightforward now, but it does require this change:

                        <Directory /var/www/intranet>
                            Order allow,deny
                            Allow from 192.168.1.0/24 
                    
                            AuthType Basic
                            AuthName "Example.Org Intranet"
                            AuthBasicProvider ldap
                            AuthzLDAPAuthoritative on
                            AuthLDAPBindDN uid=apache,ou=roles,dc=example,dc=org
                            AuthLDAPBindPassword APACHE_BIND_ACCT_PW
                            AuthLDAPURL ldap://127.0.0.1/ou=staff,ou=people,dc=example,dc=org
                    
                            require valid-user
                            Satisfy all
                        </Directory>
                    

    However, this wasn't enough. When I set this up, I got rather strange error messages such as:

                    [error] [client MYIP] GROUP: USERNAME not in required group(s).
                    

    I found somewhere online (I've now lost the link!) that you couldn't have standard pam auth competing with the LDAP authentication. This seemed strange to me, since I've told it I want the authentication provided by LDAP, but anyway, doing the following on the system:

                    a2dismod auth_pam
                    a2dismod auth_sys_group
                    

    solved the problem. I decided to move on rather than dig deeper into the true reasons. Sometimes, administration life is actually better with a mystery about.

    Posted on Tuesday 01 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2007

November

  • 2007-11-21: stet and AGPLv3

    Many people don't realize that the GPLv3 process actually began long before the November 2005 announcement. For me and a few others, the GPLv3 process started much earlier. Also, in my view, it didn't actually end until this week, when the FSF released the AGPLv3. Today, I'm particularly proud that stet was the first software released under the terms of that license.

    The GPLv3 process focused on the idea of community, and a community is built from bringing together many individual experiences. I am grateful for all my personal experiences throughout this process. Indeed, I would guess that other GPL fans like myself remember, as I do, the first time they heard the phrase “GPLv3”. For me, it was a bit early — on Tuesday 8 January 2002 in a conference room at MIT. On that day, Richard Stallman, Eben Moglen and I sat down to have an all-day meeting that included discussions regarding updating the GPL. A key issue that we sought to address was (in those days) called the “Application Service Provider (ASP) problem” — now called “Software as a Service (SaaS)”.

    A few days later, on the telephone with Moglen2 one morning, as I stood in my kitchen making oatmeal, we discussed this problem. I pointed out the oft-forgotten section 2(c) of the GPL [version 2]. I argued that contrary to popular belief, it does have restrictions on some minor modifications. Namely, you have to maintain those print statements for copyright and warranty disclaimer information. It's reasonable, in other words, to restrict some minor modifications to defend freedom.

    We also talked about that old Computer Science problem of having a program print its own source code. I proposed that maybe we needed a section 2(d) that required that if a program prints its own source to the user, that you can't remove that feature, and that the feature must always print the complete and corresponding source.
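
    (For readers who haven't run into that exercise before: such a program is called a quine. A tiny Perl sketch, a file containing exactly the single line below and nothing else, shows the idea:)

                    $ cat quine.pl
                    $s='$s=%c%s%c;printf"$s%c",39,$s,39,10;';printf"$s%c",39,$s,39,10;
                    $ perl quine.pl
                    $s='$s=%c%s%c;printf"$s%c",39,$s,39,10;';printf"$s%c",39,$s,39,10;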

    Within two months, Affero GPLv1 was published — an authorized fork of the GPL to test the idea. From then until AGPLv3, that “Affero clause” has had many changes, iterations and improvements, and I'm grateful for all the excellent feedback, input and improvements that have gone into it. The result, the Affero GPLv3 (AGPLv3) released on Monday, is an excellent step forward for software freedom licensing. While the community process indicated that the preference was for the Affero clause to be part of a separate license, I'm nevertheless elated that the clause continues to live on and be part of the licensing infrastructure defending software freedom.

    Besides coining the Affero clause, my other notable personal contribution to the GPLv3 process was managing the software development project that created the online public commenting system. To do the programming, we contracted with Orion Montoya, who has extensive experience doing semantic markup of source texts from an academic perspective. Orion gave me my first introduction to the whole “Web 2.0” thing, and I was amazed how useful the result was; it helped the leaders of the process easily grok the public response. For example, the intensity highlighting — which shows the hot spots in the text that received the most comments — gives a very quick picture of the sections that were really of concern to the public. In reviewing the drafts today, I was reminded that the big red area in section 1 about “encryption and authorization codes” was substantially changed and less intensely highlighted by draft 4. That quick look gives a clear picture of how the community process operated to get a better license for everyone.

    Orion, a Classics scholar as an undergrad, named the software stet for its original Latin definition: “let it stand as it is”. It was his hope that stet (the software) would help along the GPLv3 process so that our whole community, after filing comments on each successive draft, could look at the final draft and simply say: Stet!

    Stet has a special place in software history, I believe, even if it's just a purely geeky one. It is the first software system in history to be meta-licensed. Namely, it was software whose output was its own license. It's with that exciting hacker concept that I put up today a Trac instance for stet, licensed under the terms of the AGPLv3 [ which is now on Gitorious ] 1.

    Stet is by no means ready for drop-in production. Like most software projects, we didn't estimate perfectly how much work would be needed. We got lazy about organization early on, which means it still requires a by-hand install, and new texts must be carefully marked up by hand. We've moved on to other projects, but hopefully SFLC will host the Trac instance indefinitely so that other developers can make it better. That's what copylefted FOSS is all about — even when it's SaaS.


    1Actually, it's under AGPLv3 plus an exception to allow for combining with the GPLv2-only Request Tracker, with which parts of stet combine.

    2Update 2016-01-06: After writing this blog post, I found evidence in my email archives from early 2002, wherein Henry Poole (who originally suggested the need for Affero GPL to FSF), began cc'ing me anew on an existing thread. In that thread, Poole quoted text from Moglen proposing the original AGPLv1 idea to Poole. Moglen's quoted text in Poole's email proposed the idea as if it were solely Moglen's own. Based on the timeline of the emails I have, Moglen seems to have written to Poole within 36-48 hours of my original formulation of the idea.

    While I do not accuse Moglen of plagiarism, I believe he does at least misremember my idea as his own, which is particularly surprising, as Moglen (at that time, in 2002) seemed unfamiliar with the Computer Science concept of a quine; I had to explain that concept as part of my presentation of my idea. Furthermore, Moglen and I discussed this matter in a personal conversation in 2007 (around the time I made this blog post originally) and Moglen said to me: “you certainly should take credit for the Affero GPL”. I thus thought the matter was fully settled back in 2007, and so Moglen's post-2007 claims of credit that write me out of Affero GPL's history are simply baffling. To clear up the confusion his ongoing claims create, I added this footnote to communicate unequivocally that my memory of that phone call is solid, because it was the first time I ever came up with a particularly interesting licensing idea, so the memory became extremely precious to me immediately. I am therefore completely sure I was the first to propose the original idea of mandating preservation of a quine-like feature in AGPLv1§2(d) (as a fork/expansion of GPLv2§2(c)) on the telephone to Moglen, as described above. Moglen has never produced evidence to dispute my recollection, and even agreed with the events as I told them back in 2007.

    Nevertheless, unlike Moglen, I do admit that creation of the final text of AGPLv1 was a collaborative process, which included contributions from Moglen, Poole, RMS, and a lawyer (whose name I don't recall) whom Poole hired. AGPLv3§13's drafting was similarly collaborative, and included input from Richard Fontana, David Turner, and Brett Smith, too.

    Finally, I note my surprise at this outcome. In my primary community — the Free Software community — people are generally extremely good at giving proper credit. Unlike the Free Software community, legal communities apparently are cutthroat on the credit issue, so I've learned.

    Posted on Wednesday 21 November 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

August

  • 2007-08-24: More Xen Tricks

    In my previous post about Xen, I talked about how easy Xen is to configure and set up, particularly on Ubuntu and Debian. I'm still grateful that Xen remains easy; however, I've lately had a few Xen-related challenges that needed attention. In particular, I've needed to create some surprisingly messy solutions when using vif-route to route multiple IP numbers on the same network through the dom0 to a domU.

    I tend to use vif-route rather than vif-bridge, as I like the control it gives me in the dom0. The dom0 becomes a very traditional packet-forwarding firewall that can decide whether or not to forward packets to each domU host. However, I recently found some deep weirdness in IP routing when I use this approach while needing multiple Ethernet interfaces on the domU. Here's an example:

    Multiple IP numbers for Apache

    Suppose the domU host, called webserv, hosts a number of websites, each with a different IP number, so that I have Apache doing something like1:

                    Listen 192.168.0.200:80
                    Listen 192.168.0.201:80
                    Listen 192.168.0.202:80
                    ...
                    NameVirtualHost 192.168.0.200:80
                    <VirtualHost 192.168.0.200:80>
                    ...
                    NameVirtualHost 192.168.0.201:80
                    <VirtualHost 192.168.0.201:80>
                    ...
                    NameVirtualHost 192.168.0.202:80
                    <VirtualHost 192.168.0.202:80>
                    ...
                    

    The Xen Configuration for the Interfaces

    Since I'm serving all three of those sites from webserv, I need all those IP numbers to be real, live IP numbers on the local machine as far as the webserv is concerned. So, in dom0:/etc/xen/webserv.cfg I list something like:

                    vif  = [ 'mac=de:ad:be:ef:00:00, ip=192.168.0.200',
                             'mac=de:ad:be:ef:00:01, ip=192.168.0.201',
                             'mac=de:ad:be:ef:00:02, ip=192.168.0.202' ]
                    

    … And then make webserv:/etc/iftab look like:

                    eth0 mac de:ad:be:ef:00:00 arp 1
                    eth1 mac de:ad:be:ef:00:01 arp 1
                    eth2 mac de:ad:be:ef:00:02 arp 1
                    

    … And make webserv:/etc/network/interfaces (this is probably Ubuntu/Debian-specific, BTW) look like:

                    auto lo
                    iface lo inet loopback
                    auto eth0
                    iface eth0 inet static
                     address 192.168.0.200
                     netmask 255.255.255.0
                    auto eth1
                    iface eth1 inet static
                     address 192.168.0.201
                     netmask 255.255.255.0
                    auto eth2
                    iface eth2 inet static
                     address 192.168.0.202
                     netmask 255.255.255.0
                    

    Packet Forwarding from the Dom0

    But, this doesn't get me the whole way there. My next step is to make sure that the dom0 is routing the packets properly to webserv. Since my dom0 is heavily locked down, all packets are dropped by default, so I have to let through explicitly anything I'd like webserv to be able to process. So, I add some code to my firewall script on the dom0 that looks like:2

                    webIpAddresses="192.168.0.200 192.168.0.201 192.168.0.202"
                    UNPRIVPORTS="1024:65535"
                    
                    for dport in 80 443;
                    do
                      for sport in $UNPRIVPORTS 80 443 8080;
                      do
                        for ip in $webIpAddresses;
                        do
                          /sbin/iptables -A FORWARD -i eth0 -p tcp -d $ip \
                            --syn -m state --state NEW \
                            --sport $sport --dport $dport -j ACCEPT
                    
                          /sbin/iptables -A FORWARD -i eth0 -p tcp -d $ip \
                            --sport $sport --dport $dport \
                            -m state --state ESTABLISHED,RELATED -j ACCEPT
                    
                          /sbin/iptables -A FORWARD -o eth0 -s $ip \
                            -p tcp --dport $sport --sport $dport \
                            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
                        done  
                      done
                    done
                    

    Phew! So at this point, I thought I was done. The packets should find their way forwarded through the dom0 to the Apache instance running on the domU, webserv. While that much was true, I then had the additional problem that packets got lost in a bit of a black hole on webserv. When I discovered the black hole, I quickly realized why. It was somewhat atypical, from webserv's point of view, to have three “real” and different Ethernet devices with three different IP numbers, all of which talk to the exact same network. More intelligent routing was needed.3

    Routing in the domU

    While most non-sysadmins still use the route command to set up local IP routes on a GNU/Linux host, iproute2 (available via the ip command) has been a standard part of GNU/Linux distributions and supported by Linux for nearly ten years. To properly support the situation of multiple (from webserv's point of view, at least) physical interfaces on the same network, some special iproute2 code is needed. Specifically, I set up separate route tables for each device. I first encoded their names in /etc/iproute2/rt_tables (the numbers 16-18 are arbitrary, BTW):

                    16      eth0-200
                    17      eth1-201
                    18      eth2-202
                    

    And here are the ip commands that I thought would work (but didn't, as you'll see next):

                    /sbin/ip route del default via 192.168.0.1
                    
                    for table in eth0-200 eth1-201 eth2-202;
                    do
                       iface=`echo $table | perl -pe 's/^(\S+)\-.*$/$1/;'`
                       ipEnding=`echo $table | perl -pe 's/^.*\-(\S+)$/$1/;'`
                       ip=192.168.0.$ipEnding
                       /sbin/ip route add 192.168.0.0/24 dev $iface table $table
                    
                       /sbin/ip route add default via 192.168.0.1 table $table
                       /sbin/ip rule add from $ip table $table
                       /sbin/ip rule add to 0.0.0.0 dev $iface table $table
                    done
                    
                    /sbin/ip route add default via 192.168.0.1 
                    

    The idea is that each table will use rules to force all traffic coming in on the given IP number and/or interface to always go back out on the same, and vice versa. The key is these two lines:

                       /sbin/ip rule add from $ip table $table
                       /sbin/ip rule add to 0.0.0.0 dev $iface table $table
                    

    The first rule says that when traffic is coming from the given IP number, $ip, the routing rules in table $table should be used. The second says that traffic to anywhere, when bound for interface $iface, should use table $table.

    The tables themselves are set up to always make sure the local network traffic goes through the proper associated interface, and that the network router (in this case, 192.168.0.1) is always used for foreign networks, but that it is reached via the correct interface.

    This is all well and good, but it doesn't work. Certain instructions fail with the message, RTNETLINK answers: Network is unreachable, because the 192.168.0.0 network cannot be found while the instructions are running. Perhaps there is an elegant solution; I couldn't find one. Instead, I temporarily set up “dummy” global routes in the main route table and deleted them once the table-specific ones were created. Here's the new bash script that does that (the added lines are the ip route commands that add, and later delete, the temporary src routes):

                    /sbin/ip route del default via 192.168.0.1
                    for table in eth0-200 eth1-201 eth2-202;
                    do
                       iface=`echo $table | perl -pe 's/^(\S+)\-.*$/$1/;'`
                       ipEnding=`echo $table | perl -pe 's/^.*\-(\S+)$/$1/;'`
                       ip=192.168.0.$ipEnding
                       /sbin/ip route add 192.168.0.0/24 dev $iface table $table
                    
                       /sbin/ip route add 192.168.0.0/24 dev $iface src $ip
                    
                       /sbin/ip route add default via 192.168.0.1 table $table
                       /sbin/ip rule add from $ip table $table
                    
                       /sbin/ip rule add to 0.0.0.0 dev $iface table $table
                    
                       /sbin/ip route del 192.168.0.0/24 dev $iface src $ip
                    done
                    /sbin/ip route add 192.168.0.0/24 dev eth0 src 192.168.0.200
                    /sbin/ip route add default via 192.168.0.1 
                    /sbin/ip route del 192.168.0.0/24 dev eth0 src 192.168.0.200
                    

    I am pretty sure I'm missing something here — there must be a better way to do this, but the above actually works, even if it's ugly.
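
    To convince myself it really took, the per-device tables and rules can be inspected directly; roughly (output trimmed and approximate), I'd expect to see something like:

                    $ /sbin/ip route show table eth1-201
                    192.168.0.0/24 dev eth1  scope link
                    default via 192.168.0.1 dev eth1
                    $ /sbin/ip rule show
                    0:      from all lookup local
                    ...
                    32762:  from 192.168.0.201 lookup eth1-201
                    ...
                    32766:  from all lookup main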

    Alas, Only Three

    There was one additional confusion I put myself through while implementing the solution. I was actually trying to route four separate IP addresses into webserv, but discovered this error message (found via dmesg on the domU): netfront can't alloc rx grant refs. A quick google around led me to the XenFaq, which says that Xen 3 cannot handle more than three network interfaces per domU. Seems strangely arbitrary to me; I'd love to hear why it cuts off at three. I can imagine limits at one and two, but it seems that once you can do three, n should be possible (perhaps still with a linear slowdown or some such). I'll have to ask the Xen developers (or UTSL) some day to find out what makes it possible to have three work but not four.


    1Yes, I know I could rely on client-provided Host: headers and do this with full name-based virtual hosting, but I don't like to do that for good reason (as outlined in the Apache docs).

    2Note that the above firewall code must run on dom0, which has one real Ethernet device (its eth0) that is connected properly to the wide 192.168.0.0/24 network, and should have some IP number of its own there — say 192.168.0.100. And, don't forget that dom0 is configured for vif-route, not vif-bridge. Finally, for brevity, I've left out some of the firewall code that FORWARDs through key stuff like DNS. If you are interested in it, email me or look it up in a firewall book.

    3I was actually a bit surprised at this, because I often have multiple IP numbers serviced from the same computer and physical Ethernet interface. However, in those cases, I use virtual interfaces (eth0:0, eth0:1, etc.). On a normal system, Linux does the work of properly routing the IP numbers when you attach multiple IP numbers virtually to the same physical interface. However, in Xen domUs, the physical interfaces are locked by Xen to only permit specific IP numbers to come through, and while you can set up all the virtual interfaces you want in the domU, it will only get packets destined for the IP numbers specified in the vif section of the configuration file. That's why I added my three different “actual” interfaces in the domU.

    Posted on Friday 24 August 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

June

  • 2007-06-12: Virtually Reluctant

    Way back when User Mode Linux (UML) was the “only way” the Free Software world did anything like virtualization, I was already skeptical. Those of us who lived through the coming of age of Internet security — with a remote root exploit for every day of the week — became obsessed with the chroot and its ultimate limitations. Each possible upgrade to a better, more robust virtual environment was met with suspicion on the security front. I joined the many who doubted that you could truly secure a machine that offered disjoint services provisioned on the same physical machine. I've recently revisited this position. I won't say that Xen has completely changed my mind, but I am open-minded enough again to experiment.

    For more than a decade, I have used chroots as a mechanism to segment a service that needed to run on a given box. In the old days of ancient BINDs and sendmails, this was often the best we could do when living with a program we didn't fully trust to be clean of remotely exploitable bugs.

    I suppose those days gave us all a rather strange sense of computer security. I constantly have the sense that two services running on the same box always endanger each other in some fundamental way. It therefore took me a while before I was comfortable with the resurgence of virtualization.

    However, what ultimately drew me in was the simple fact that modern hardware is just too darn fast. It's tough to get a machine these days that isn't ridiculously overpowered for most tasks you put in front of it. CPUs sit idle; RAM sits empty. We should make more efficient use of the hardware we have.

    Even with that reality, I might have given up if it wasn't so easy. I found a good link about Debian on Xen, a useful entry in the Xen Wiki, and some good network and LVM examples. I also quickly learned how to use RAID/LVM together for disk redundancy inside Xen instances. I even got bonded ethernet working with some help to add additional network redundancy.

    So, one Saturday morning, I headed into the office, and left that afternoon with two virtual servers running. It helped that Xen 3.0 is packaged properly for recent Ubuntu versions, and a few obvious apt-get installs get you what you need on edgy and feisty. In fact, I only struggled (and only just a bit) with the network, but quickly discovered two important facts:

    • VIF network routing in my opinion is a bit easier to configure and more stable than VIF bridging, even if routing is a bit slower.
    • sysctl -w net.ipv4.conf.DEVICE.proxy_arp=1 is needed to make the network routing down into the instances work properly.

    I'm not completely comfortable yet with the security of virtualization. Of course, locking down the Dom0 is absolutely essential, because there lies the keys to your virtual kingdom. I lock it down with iptables so that only SSH from a few trusted hosts comes in, and even services as fundamental as DNS can only be had from a few trusted places. But, I still find myself imagining ways people can bust through the instance kernels and find their way to the hypervisor.
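
    To give a flavor of what that lockdown means, the dom0 INPUT policy amounts to a default-deny with a few narrow exceptions; a heavily trimmed sketch (the trusted-host address is a placeholder) looks like:

                    # default-deny on the dom0 itself; allow loopback, established traffic,
                    # and SSH only from a trusted admin host
                    /sbin/iptables -P INPUT DROP
                    /sbin/iptables -A INPUT -i lo -j ACCEPT
                    /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
                    /sbin/iptables -A INPUT -p tcp -s 192.168.0.50 --dport 22 \
                        -m state --state NEW -j ACCEPT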

    I'd really love to see a strong line-by-line code audit of the hypervisor and related utilities to be sure we've got something we can trust. However, in the meantime, I certainly have been sold on the value of this approach, and am glad it's so easy to set up.

    Posted on Tuesday 12 June 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

May

  • 2007-05-08: Tools for Investigating Copyright Infringement

    Nearly all software developers know that software is covered by copyright. Many know that copyright covers the expression of an idea fixed in a medium (such as a series of bytes), and that the copyright rules govern the copying, modifying and distributing of the work. However, only a very few have considered the questions that arise when trying to determine if one work infringes the copyright of another.

    Indeed, in the world of software freedom, copyright is seen as a system we have little choice but to tolerate. Many Free Software developers dislike the copyright system we have, so it is little surprise that developers want to spend minimal time thinking about it. Nevertheless, the copyright system is the foremost legal framework that governs software1, and we have to live within it for the moment.

    My fellow developers have asked me for years what constitutes copyright infringement. In turn, for years, I have asked the lawyers I worked with to give me guidelines to pass on to the Free Software development community. I've discovered that it's difficult to adequately describe the nature of copyright infringement to software developers. While it is easy to give pathological examples of obvious infringement (such as taking someone's work, removing their copyright notices, and distributing it as your own), it quickly becomes difficult to give definitive answers, in many real-world examples, as to whether some particular activity constitutes infringement.

    In fact, in nearly every GPL enforcement case that I've worked on in my career, the fact that infringement had occurred was never in dispute. The typical GPL violator started with a work under GPL, made some modifications to a small portion of the codebase, and then distributed the whole work in binary form only. It is virtually impossible to act in that way and still not infringe the original copyright.

    Usually, the cases of “hazy” copyright infringement come up the other way around: when a Free Software program is accused of infringing the copyright of some proprietary work. The most famous accusation of this nature came from Darl McBride and his colleagues at SCO, who claimed that something called “Linux” infringed his company's rights. We now know that there was no copyright infringement (BTW, whether McBride meant to accuse the GNU/Linux operating system or the kernel named Linux, we'll never actually know). However, the SCO situation educated the Free Software community that we must strive to answer quickly and definitively when such accusations arise. The burden of proof is usually on the accuser, but being able to make a preemptive response to even the hint of an allegation is always advantageous when fighting FUD in the court of public opinion.

    Finally, issues of “would-be” infringement detection come up for companies during due diligence work. Ideally, there should be an easy way for companies to confirm which parts of their systems are derivatives of Free Software systems, which would make compliance with licenses easy. A few proprietary software companies provide this service; however, there should be readily available Free Software tools (just as there should be for all tasks one might want to perform with a computer).

    It is not so easy to create such tools. Copyright infringement is not trivially defined; in fact, most non-trivial situations require a significant amount of both technical and legal judgement. Software tools cannot make a legal conclusion regarding copyright infringement. Rather, successful tools will guide an expert's analysis of a situation. Such systems will immediately identify the rarely-found obvious indications of infringement, bring to the forefront facts that need an exercise of judgement, and leave everything else in the background.
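
    To make this concrete, here is a tiny hypothetical sketch — not any existing tool — of the “obvious” end of that spectrum: flagging files in two source trees whose whitespace-normalized contents are identical. Everything subtler than this is precisely where the expert's judgement has to take over:

    # hypothetical sketch: report files whose whitespace-normalized contents
    # appear in both source trees
    normalized_sums() {
        find "$1" -type f -name '*.c' | while read -r f; do
            echo "$(tr -s '[:space:]' ' ' < "$f" | sha1sum | cut -d' ' -f1) $f"
        done | sort
    }
    normalized_sums tree-a > /tmp/a.sums
    normalized_sums tree-b > /tmp/b.sums
    # join on the checksum field; matching lines are candidate copied files
    join /tmp/a.sums /tmp/b.sums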

    In this multi-part series of blog entries, I will discuss the state of the art in these Free Software systems for infringement analysis and what plans our community should make for the creation of Free systems that address this problem.


    1 Copyright is the legal system that non-lawyers usually identify most readily as governing software, but the patent system (unfortunately) also governs software in many countries, and many non-Free Software licenses (and a few of the stranger Free Software ones) also operate under contract law as well as copyright law. Trade secrets are often involved with software as well. Nevertheless, in the Software Freedom world, copyright is the legal system of primary attention on a daily basis.

    Posted on Tuesday 08 May 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2007-05-05: Walnut Hills, AP Computer Science, 1998-1999

    I taught AP Computer Science at Walnut Hills High School in Cincinnati, OH during the 1998-1999 school year.

    I taught this course because:

    • They were desperate for a teacher. The rather incompetent teacher who was scheduled to teach the course quit (actually, frighteningly enough, she got a higher-paying and higher-ranking job in a nearby school system) a few weeks before the school year was to start.
    • The environment was GNU/Linux using GCC's C++ compiler. I went to the job interview because a mother of someone in the class begged me to go, but I planned to walk out as soon as I saw I'd have to teach on Microsoft (which I assumed would be the case). My jaw literally dropped when I saw the GNU/Linux lab instead.
    • The students had built their own lab, which even got covered in the Cincinnati Post. I was quite amazed that some of the most brilliant high school students I've ever seen were assembled there in one classroom.

    It became quite clear to me that I owed it to these students to teach the course. They'd discovered Free Software before the boom, and built their own lab despite the designated CS teacher obviously knowing a hell of a lot less about the field than they did. There wasn't a person qualified and available, in my view, in all of Cincinnati to teach the class. High school teacher wages are traditionally pathetic. So, I joined the teachers' union and took the job.

    Doing this work delayed my thesis and graduation from the Master's program at the University of Cincinnati for yet another year, but it was worth doing. Even almost a decade later, it ranks in my mind on the top-ten list of great things I've done in my life, despite all the exciting Free Software work I've been involved with in my positions at the FSF and the Software Freedom Conservancy.

    I am exceedingly proud of what my students have accomplished. It's clear to me that somehow we assembled an incredibly special group of Computer Science students; many of them have gone on to make interesting contributions. I know they didn't always like that I brought my Free Software politics into the classroom, but I think we had a good year, and their excellent results on that AP exam showed it. Here are a few of my students from that year who have a public online life:

    If you were my student at Walnut Hills and would like a link here, let me know and I'll add one.

    Posted on Saturday 05 May 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

April

  • 2007-04-17: Remember the Verbosity (A Brief Note)

    I don't remember exactly when it happened, but sometime in the past four years, the Makefiles for the kernel named Linux changed. I do recall that, sometime “recently”, the kernel build output stopped looking like what I remember from 1991 and started looking like this:

    CC arch/i386/kernel/semaphore.o
    CC arch/i386/kernel/signal.o

    This is a heck of a lot easier to read, but there was something cool about having make display the whole gcc command lines, like this:

    gcc -m32 -Wp,-MD,arch/i386/kernel/.semaphore.o.d -nostdinc -isystem /usr/lib/gcc/i486-linux-gnu/4.0.3/include -D__KERNEL__ -Iinclude -include include/linux/autoconf.h -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -ffreestanding -Os -fomit-frame-pointer -pipe -msoft-float -mpreferred-stack-boundary=2 -march=i686 -mtune=pentium4 -Iinclude/asm-i386/mach-default -Wdeclaration-after-statement -Wno-pointer-sign -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(semaphore)" -D"KBUILD_MODNAME=KBUILD_STR(semaphore)" -c -o arch/i386/kernel/semaphore.o arch/i386/kernel/semaphore.c
    gcc -m32 -Wp,-MD,arch/i386/kernel/.signal.o.d -nostdinc -isystem /usr/lib/gcc/i486-linux-gnu/4.0.3/include -D__KERNEL__ -Iinclude -include include/linux/autoconf.h -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -ffreestanding -Os -fomit-frame-pointer -pipe -msoft-float -mpreferred-stack-boundary=2 -march=i686 -mtune=pentium4 -Iinclude/asm-i386/mach-default -Wdeclaration-after-statement -Wno-pointer-sign -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(signal)" -D"KBUILD_MODNAME=KBUILD_STR(signal)" -c -o arch/i386/kernel/signal.o arch/i386/kernel/signal.c

    I never gave it much thought, since the new form was easier to read. I figured that those folks who still eat kernel code for breakfast knew about this change well ahead of time. Of course, they were the only ones who needed to see the verbose output of the gcc command lines. I could live with seeing the simpler CC lines for my purposes, until today.

    I was compiling kernel code and for the first time since this change in the Makefiles, I was using a non-default gcc to build Linux. I wanted to double-check that I'd given the right options to make throughout the process. I therefore found myself looking for a way to see the full output again (and for the first time). It was easy enough to figure out: giving the variable setting V=1 to make gives you the verbose version. For you Debian folks like me, we're using make-kpkg, so the line we need looks like: MAKEFLAGS="V=1" make-kpkg kernel_image.
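
    So, for the record, the two incantations are simply:

    # from the top of the kernel source tree
    make V=1
    # or, for Debian folks building packages with make-kpkg
    MAKEFLAGS="V=1" make-kpkg kernel_image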

    It's nice sometimes to pretend I'm compiling 0.99pl12 again and not 2.6.20.7. :) No matter which options you give make, it is still a whole lot easier to bootstrap Linux these days.

    Posted on Tuesday 17 April 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2007-04-10: User-Empowered Security via encfs

    One of my biggest worries in using a laptop is that data can suddenly become available to anyone in the world if a laptop is lost or stolen. I was reminded of this during the mainstream media coverage1 of this issue last year.

    There's the old security-through-obscurity perception about running GNU/Linux systems. Proponents of this theory argue that most thieves (or impromptu thieves, who find a lost laptop but decide not to return it to its owner) aren't likely to know how to use a GNU/Linux system, and will probably wipe the drive before selling it or using it. However, with the popularity of Free Software rising, this old standby (which never should have been a standby anyway, of course) doesn't even give an illusion of security anymore.

    I have been known as a computer security paranoid in my time, and I keep a rather strict regimen of protocols for my own personal computer security. But, I don't like to inflict new, onerous security procedures on the otherwise unwilling. Generally, people will find ways around security procedures when they aren't fully convinced the procedures are necessary, and you're often left with a situation just as bad or worse than when you started implementing your new procedures.

    My solution for the lost/stolen laptop security problem was therefore two-fold: (a) education among the userbase about how common it is to have a laptop lost or stolen, and (b) providing a simple user-space mechanism for encrypting sensitive data on the laptop. Since (a) is somewhat obvious, I'll talk about (b) in detail.

    I was fortunate that, in parallel, my friend Paul and one of my coworkers discovered how easy it is to use encfs and told me about it. encfs uses the Filesystem in Userspace (FUSE) to store encrypted data right in a user's own home directory. And, it is trivially easy to set up! I used Paul's tutorial myself, but there are many published all over the Internet.
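
    The basic recipe (the directory names below are just examples) goes roughly like this:

    # the first run offers to create both directories and asks for a passphrase;
    # ~/.crypt-raw holds the ciphertext, ~/crypt is the cleartext view
    encfs ~/.crypt-raw ~/crypt
    # ... work with files under ~/crypt as usual ...
    # unmount the cleartext view when done
    fusermount -u ~/crypt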

    My favorite part of this solution is that rather than an onerous mandated procedure, encfs turns security into user empowerment. My colleague James wrote up a tutorial for our internal Wiki, and I've simply encouraged users to take a look and consider encrypting their confidential data. Even though not everyone has taken it up yet, many already have. When a new security measure requires a substantial change in user behavior, the measure works best when users are given an opportunity to adopt it at their own pace. FUSE deserves a lot of credit in this regard, since it lets users switch their filesystem to encryption in pieces (unlike other cryptographic filesystems that require some planning ahead). For my part, I've been slowly moving parts of my filesystem into an encrypted area as I gradually set aside old habits.

    I should note that this solution isn't completely without cost. First, there is no metadata encryption, but I am really not worried about interlopers finding out how big our nameless files and directories are and who created them (anyway, with an SVN checkout, the interesting metadata is in .svn, so it's encrypted in this case). Second, we've found that I/O-intensive file operations take approximately twice as long (under both ext3 and XFS) when using encfs. I haven't moved my email archives to my encrypted area yet because of the latter drawback. However, for all my other sensitive data (confidential text documents, IRC chat logs, financial records, ~/.mozilla, etc.), I don't really notice the slow-down on a 1.6 GHz CPU with ample free RAM. YMMV.


    1 BTW, I'm skeptical about the FBI's claim in that old Washington Post article which states “review of the equipment by computer forensic teams has determined that the data base remains intact and has not been accessed since it was stolen”. I am mostly clueless about computer forensics; however, barring any sort of physical seal on the laptop or hard drive casing, could a forensics expert tell if someone had pulled out the drive, put it in another computer, did a dd if=/dev/hdb of=/dev/hda, and then put it back as it was found?

    Posted on Tuesday 10 April 2007 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2005

May

  • 2005-05-10: CP Technologies CP-UH-135 USB 2.0 Hub

    I needed to pick a small, inexpensive, USB 2.0-compliant hub for myself, and one for any of the users at my job who asked for one. I found one, the “CP Technologies Hi-Speed USB 2.0 Hub”, which is part number CP-UH-135. This worked great with GNU/Linux without any trouble (using Linux 2.6.10 as distributed by Ubuntu), at least at first.

    [ Image: the CP-UH-135 USB hub, with the annoying LED coming right at you. ]

    I used this hub without too much trouble for a number of months. Then, one day, I plugged in a very standard PS/2-to-USB converter (a cable that takes a standard PS/2 mouse and PS/2 keyboard and makes them show up as USB devices). The hub began to heat up, and the smell of burning electronics came from it. After a few weeks, the hub began to generate serious USB errors from the kernel named Linux, and I finally gave up on it. I don't recommend this hub!

    Finally, it has one additional annoying drawback for me: the blue LED power light on the side of the thing is incredibly distracting. I put a small piece of black tape over it to block it, but it only helped a little. Such a bright power light on a small device like that is highly annoying. I know geeks are really into these sorts of crazy blue LEDs, but for my part, I always feel like I am about to be assimilated by a funky post-modern Borg.

    I am curious if there are any USB hubs out there that are more reliable and don't have annoying lights. I haven't used USB hubs in the past so I don't know if a power LED is common. If you find one, I'd encourage you to buy that one instead of this one. Almost anywhere you put the thing on a desk, the LED catches your eye.

    Posted on Tuesday 10 May 2005 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2005-05-04: IBM xSeries EZ Swap Hard Drive Trays

    A few days ago, I acquired a number of IBM xSeries servers — namely x206 and x226 systems — for my work at the Software Freedom Law Center. We bought bare metal, with just CPU and memory, with plans to install drives ourselves.

    I did that for a few reasons. First, serial ATA (S-ATA or SATA) support under Linux has just become ready for prime time, and despite being a SCSI die-hard for most of my life, I've conceded that ATA's price/performance ratio can't really be beat, especially if you don't need hot swap or hardware RAID.

    When I got the machines, which each came with one 80 GB S-ATA drive, I found them well constructed, including a very easy mounting system for hard drives. Drives have a blue plastic tray that looks like this (follow the image link for a higher-resolution shot).

    [ Image: the IBM xSeries EZ Swap tray. ]

    These so-called "EZ Swap" trays are not for hot-swap; the big IBM swap trays with the lever are for that. This is just to mount and unmount drives quickly. I was impressed, and was sad that, since IBM's goal is to resell you hard drives, they don't make it easy to buy these things outright. If you look on IBM's parts and upgrade site for the x206, you'll find that they offer to sell a 26K-7344, which is listed as a "SATA tray", and a 73P-8007, which is listed as a "Tray, SATA simple swap". However, there is no photo, and neither part number matches the part number on the item itself. On the machines I got, the tray is numbered 73P-9591 (or rather, P73P9591, but I think the "P" in the front is superfluous and stands for "Part").

    I spoke to IBM tech support (at +1-800-426-7378), who told me that the replacement part number for the tray I had was 73P-8007. Indeed, if you look at third-party sites, such as Spare Parts Warehouse, you find that number and a price of US$28 or so. Spare Parts Warehouse doesn't even sell the 26K-7344.

    It seemed strange to me that two things both described as a SATA tray could be that different. And the difference in price was substantial: it costs about US$28 for the 73P-8007 and around US$7 for the 26K-7344.

    So, I called IBM spare parts division at +1-800-388-7080, and ordered one of each. They arrived by DHL this morning. Lo and behold, they are the very same item. I cannot tell the difference between them upon close study. The only cosmetic difference is that they are labeled with different part numbers. The cheaper one is labeled 26K-7343 (one number less than what I ordered) and the other is labeled 73P-9591 (the same number that my original SATA drives came with).

    So, if you need an EZ Swap tray from IBM for the xSeries server, I suggest you order the 26K-7344. If you do so, and find any difference from the 73P-8007, please do let me know. Update: on 2005-06-22, a reader told me they now charge US$12 for the 26K-7344 tray. Further Update: The prices seem to keep rising! Another reader reported to me on 2005-08-08 that the 26K-7344 is now US$84 (!) and the 73P-8007 is now only US$15. So, it costs twice as much as it did a few months ago to get these units, and the cheaper unit appears to be the 73P-8007. It'll be fun to watch and see if the prices change dramatically again in the months to come.

    When you call IBM's spare parts division, they may give you some trouble about ordering the part. When you call +1-800-388-7080, they are expecting you to be an out-of-warranty customer, and make it difficult for you to order. It depends on who you get, but you can place an order with a credit card even without an "IBM Out-of-Warranty Customer Number". If you have a customer number you got with your original IBM equipment order, that's your warranty customer number and is in a different database than the one used by the IBM Spare Parts Division.

    You can just tell them that you want to make a new order with a credit card. After some trouble, they'll do that.

    Posted on Wednesday 04 May 2005 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

2001

February

  • 2001-02-21: The GNU GPL and the American Dream

    [ This essay was originally published on gnu.org. ]

    When I was in grade school, right here in the United States of America, I was taught that our country was the “land of opportunity”. My teachers told me that my country was special, because anyone with a good idea and a drive to do good work could make a living, and be successful too. They called it the “American Dream”.

    What was the cornerstone to the “American Dream”? It was equality — everyone had the same chance in our society to choose their own way. I could have any career I wanted, and if I worked hard, I would be successful.

    It turned out that I had some talent for working with computers — in particular, computer software. Indoctrinated with the “American Dream”, I learned as much as I could about computer software. I wanted my chance at success.

    I quickly discovered, though, that in many cases, not all the players in the field of computer software were equal. By the time I entered the field, large companies like Microsoft tended to control much of the technology. And, that technology was available to me under licensing agreements that forbade me to study and learn from it. I was completely prohibited from viewing the program source code of the software.

    I found out, too, that those with lots of money could negotiate different licenses. If they paid enough, they could get permission to study and learn from the source code. Typically, such licenses cost many thousands of dollars, and being young and relatively poor, I was out of luck.

    After spending my early years in the software business a bit downtrodden by my inability to learn more, I eventually discovered another body of software that did allow me to study and learn. This software was released under a license called the GNU General Public License (GNU GPL). Instead of restricting my freedom to study and learn from it, this license was specifically designed to allow me to learn. The license ensured that no matter what happened to the public versions of the software, I'd always be able to study its source code.

    I quickly built my career around this software. I got lots of work configuring, installing, administering, and teaching about that software. Thanks to the GNU GPL, I always knew that I could stay competitive in my business, because I would always be able to learn easily about new innovations as soon as they were made. This gave me a unique ability to innovate myself. I could innovate quickly, and impress my employers. I was even able to start my own consulting business. My own business! The pinnacle of the American Dream!

    Thus, I was quite surprised last week when Jim Allchin, a vice president at Microsoft, hinted that the GNU GPL contradicted the American Way.

    The GNU GPL is specifically designed to make sure that all technological innovators, programmers, and software users are given equal footing. Each high school student, independent contractor, small business, and large corporation is given an equal chance to innovate. We all start the race from the same point. Those people with deep understanding of the software and an ability to make it work well for others are most likely to succeed, and they do succeed.

    That is exactly what the American Way is about, at least the way I learned it in grade school. I hope that we won't let Microsoft and others change the definition.

    Posted on Wednesday 21 February 2001 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2001-01-22: Finished Thesis

    My thesis is nearly complete. I defend tomorrow, and, as usual, I let things run right up until the deadline. I just finished my slides for the defense, and practiced once. I have some time in the schedule tomorrow to practice at least once more, although I have to find an empty room up at the University to do it in.

    I'll be glad to be done. It's been annoying to spend three or four weeks here sitting around writing about perljvm, and not hacking on it. I have a Cosource deadline coming up this week, so now's as good a time as any to release the first version of the Kawa-based perljvm.

    I am really excited about how Kawa works, and how easy it is to massage perl's IR into Kawa's IR. I got more excited about it as I wrote my thesis defense talk. I really think great things can happen with Kawa in the future.

    Larry Wall is here, and we've had two dinners for the Cincinnati GNU/Linux Users' Group (who paid Larry's way to come here). I was there, and Larry was asking some hard-ish questions about Kawa. Not hard exactly, just things I didn't know. I began to realize how much I have focused on the Kawa API, and I haven't really been digging in the internals. I told him I'd try to have some answers about it for my defense, and I will likely reread Bothner's papers on the subject tomorrow to get familiar with how he deals with various issues.

    It's odd having Larry on my thesis committee. I otherwise wouldn't be nervous in the least, but I am quite worried with him on the committee.

    Anyway, so I defend tomorrow, then it's into perljvm hacking again right away on Tuesday to make the Cosource deadline, and then I have to finish preparing my Perl tutorial for LinuxExpo Paris.

    Posted on Monday 22 January 2001 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.

  • 2001-01-18: Finished Thesis Document

    Tonight, I finished the actual document of my Master's thesis. I had to vet it by reading it out loud, about three times. I have a really hard time finding subtle grammar errors. I believe that when I read, I parse them out in my head. Reading out loud usually helps, but it wasn't working so well this time. (The first draft had many errors, even though I read it out loud.)

    This time, I went through it twice, reading it out loud while bouncing the mouse along each word. This seemed to help a lot, as I was catching errors left and right. I hope I got them all.

    I sent the final document off to the committee. I haven't heard from Larry Wall, who's an external member of my committee, at all. I haven't heard from him since we set up the plane tickets months ago. I am sure he's insanely busy, and that's likely why. No big deal, I suppose; I am just overly nervous.

    I really need to get to the actual hacking on perljvm. I have lost three weeks working on the thesis document, which is really only describing things, not hacking. I'll be glad, I'm sure, to have the Master's thesis done, but perljvm needs some hacking done on it, especially considering that I have a Cosource deadline to meet soon.

    Posted on Thursday 18 January 2001 by Bradley M. Kuhn.

    Submit comments on this post to <[email protected]>.



This website and all documents on it are licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.


#include <std/disclaimer.h>
use Standard::Disclaimer;
from standard import disclaimer
SELECT full_text FROM standard WHERE type = 'disclaimer';

Both previously and presently, I have been employed by and/or done work for various organizations that also have views on Free, Libre, and Open Source Software. As should be blatantly obvious, this is my website, not theirs, so please do not assume views and opinions here belong to any such organization.

— bkuhn


ebb is a (currently) unregistered service mark of Bradley M. Kuhn.

Bradley M. Kuhn <[email protected]>