Chris Krycho - Tech · http://v4.chriskrycho.com/ · updated 2019-11-13T22:30:00-05:00

Sympolymathesy, or: v5.chriskrycho.com · Chris Krycho · 2019-11-18 · https://v4.chriskrycho.com/2019/sympolymathesy-or-v5chriskrychocom.html

<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> literally every single subscriber of this blog!</i></p>
<p>(I apologize if you’re seeing this in multiple feeds, if you are one of the people subscribed to more than one sub-feed on this site. I needed to make sure <em>all</em> my subscribers saw this.)</p>
<p>I’ve just officially launched v5.chriskrycho.com, “Sympolymathesy”. As such, this is the final post on this site! For all the details, check out <a href="https://v5.chriskrycho.com/journal/relaunch!/">the relaunch post</a>!</p>

Test the Interface · Chris Krycho · 2019-11-13 · tag:v4.chriskrycho.com,2019-11-13:/2019/test-the-interface.html
A fundamental principle of testing software is: test the interface. If you test at the right level, it makes refactoring easy. If you test at the wrong level, you’re not even really testing what you think you are.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> software developers interested in honing their craft—especially folks just trying to get a handle on good techniques for testing.</i></p>
<p>A fundamental principle of testing software is: <i>test the interface</i>. Failing to keep this principle in mind is at the root of the majority of the problems I see in automated tests (including quite a few of those I’ve written in the past!).</p>
<p>What do I mean by <i>test the interface</i>? I mean that when you are thinking about what kind of test to write, you can answer that question by thinking about how the piece of code will be <em>used</em>. That’s the interface: the place that piece of code interacts with the rest of the world around it. The interaction might be between two functions, or it might be feeding data from a web <abbr title="application programming interface">API</abbr> into your application to show users data, or any of a host of things in between. The point is: wherever that interaction is, <em>that’s</em> the interface, and <em>that’s</em> what you test.</p>
<p>To see what I mean, let’s define some different kinds of interfaces and how we might test them in the context of a JavaScript application. (I’m using this context because it’s the one I’m most familiar with these days—but the basic principles apply equally well in lots of other contexts.) When we’re writing our app, we have a bunch of different levels of abstraction we can deal with:</p>
<ul>
<li>the entire application as the user experiences it</li>
<li>individual user interface elements within the application—<abbr title="user interface">UI</abbr> components</li>
<li>functions and classes that manage the business logic of the application</li>
</ul>
<p>This is actually pretty much it, though each of those covers an enormous amount of ground. Notice too that each of these layers of abstraction (each interface) is composed of lower levels of abstraction (smaller interfaces). However, you still want to test each interface on its own terms.</p>
<p>When you are trying to test the entire application as the user experiences it, you should be doing “end-to-end” style testing, preferably with some kind of testing tool that generates the same kinds of input (from the app’s perspective) as a user would. In web apps, we often use tools like <a href="https://developers.google.com/web/tools/puppeteer">Puppeteer</a> or <a href="https://www.seleniumhq.org/projects/webdriver/">Webdriver</a> to simulate a user clicking through our <abbr>UI</abbr> and filling in forms and so on. This is the right level of testing: we interact with the whole app and its interface the same way a user does!</p>
<p>What we <em>shouldn’t</em> do at this level is use our knowledge of the framework our app is using to go in and replace function calls, or swap out <abbr>UI</abbr> components. As soon as we do that, our test stops actually telling us the truth about the interface it’s testing. A user can’t reach in and swap out a function at runtime. If <em>you</em> do that in your tests, then your test tells you something about a fake world you’ve constructed—not the world your user lives in! How do you <em>know</em> that’s the right level to test at? Because that’s the level at which your app interacts with the user: in terms of clicks and form-filling and those kinds of events. <em>Not</em> in terms of function calls!</p>
<p>What about <abbr>UI</abbr> components? The same basic principle holds here. The public interface of a component in any modern web framework is its template—whether that’s JSX, Glimmer templates, Vue templates, Angular templates, or something else. How do you know that? Because that’s the level at which the rest of your codebase will <em>use</em> the component. So what you should test is that template invocation. This is the level of a “rendering” test (as we call them in Ember).</p>
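To make the idea concrete without tying it to any one framework, here is a minimal sketch. The component and its arguments are entirely hypothetical; the point is that the test exercises only the component’s public invocation (its arguments and rendered output), never its internals:

```javascript
// A framework-free stand-in for a UI component: a render function whose
// "template invocation" is its arguments, and whose output is its markup.
// The name and arguments are hypothetical, for illustration only.
function greetingBanner({ name, isReturning }) {
  const welcome = isReturning ? 'Welcome back' : 'Hello';
  return `<h1>${welcome}, ${name}!</h1>`;
}

// A rendering-style test: invoke the component the way the rest of the
// codebase would, and assert on what it renders.
const html = greetingBanner({ name: 'Ursula', isReturning: true });
console.assert(
  html.includes('Welcome back, Ursula'),
  'renders the returning-user greeting'
);
```

In a real framework the invocation would be a template helper (like Ember’s `render` in a rendering test), but the shape of the test is the same: arguments in, rendered output checked.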
<p>The rest of your codebase doesn’t have the liberty (and in most cases doesn’t have the <em>ability</em>) to reach in and change the behavior of the class or function for your component at runtime. All it can do is call that component with its arguments, and work with anything the component hands back to it. If, during your tests, you violate that—say, by reaching in and calling internal methods on the class that backs a component, rather than via the event handlers you set up to trigger those methods—you are no longer testing what you think you are. Again: you’re in a world of your own construction, <em>not</em> the world the rest of your app code lives in. Your test only tells you what happens when you do something manually behind the scenes with the internals of your component… <em>not</em> what happens when interacting with the component the way other code will.</p>
<p>The same basic principle applies for other classes used in your codebase. This is the layer for “unit” tests. For functions, you just pass in the various arguments allowed and check that you’re getting the results you expect. For classes, you set them up using their public constructors and call only their public methods and set only their public fields. In languages like JavaScript, Python, Ruby, and others, you can often poke at and use methods and data on the class which are really meant to be private.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> That can be particularly tempting when you’re the author of the class: <em>you</em> know what these internal details are supposed to do, after all! It can seem faster and easier to just set up a class with some state ahead of time, or to swap out one of its methods for an easier one to test using monkey-patching or mocking.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> If you do this, however, instead of using the documented public <abbr>API</abbr>, you’re once again testing something other than what the rest of your app will be using… and this means that once again your tests don’t actually tell you whether the rest of the app can actually use it correctly!</p>
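A small sketch of the unit-test version of this principle. The class here is hypothetical; what matters is the contrast between exercising the public methods and reaching in to fiddle with internal state:

```javascript
// A hypothetical class with one private-by-convention internal field.
class RateLimiter {
  constructor(limit) {
    this.limit = limit;
    this._count = 0; // internal detail -- tests should never touch this
  }

  // The public interface: returns true while capacity remains.
  tryAcquire() {
    if (this._count >= this.limit) return false;
    this._count += 1;
    return true;
  }
}

// Good: drive the class only through its public constructor and methods.
const limiter = new RateLimiter(2);
console.assert(limiter.tryAcquire() === true);
console.assert(limiter.tryAcquire() === true);
console.assert(limiter.tryAcquire() === false, 'third acquire is rejected');

// Bad (don't do this): `limiter._count = 0;` -- that tests a world of
// your own construction, not the one the rest of the app lives in.
```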
<p>In each of these cases, we need to <i>test the interface</i>—the place where the rest of the world will interact with our code, <em>not</em> its internal mechanics.</p>
<p>This helps guarantee that what we are testing is what the rest of the world sees—whether the “world” in question is other functions or classes, or external <abbr>API</abbr>s, or actual users. It also helps us when refactoring, which is making changes to the internals of some piece of code <em>without changing its public interface</em>. If we test the interface, we can safely refactor internally and know two things: if our tests break, we got our refactoring wrong; and we don’t have to change our tests in the process of refactoring! If we test the internals instead of the interface, though, we’ll <em>often</em> have to make changes to our tests when we’re trying to refactor, because we’ll be changing those “behind the scenes” details.</p>
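To illustrate the refactoring payoff, here is a deliberately tiny example (the function itself is hypothetical): two implementations behind one interface, with a single interface-level test that validates either. Swap the internals and the test keeps telling the truth without being rewritten:

```javascript
// Implementation 1: an imperative loop.
function sumOfSquaresLoop(xs) {
  let total = 0;
  for (const x of xs) total += x * x;
  return total;
}

// Implementation 2: the same interface, refactored to reduce.
function sumOfSquaresReduce(xs) {
  return xs.reduce((total, x) => total + x * x, 0);
}

// One interface-level test covers both implementations, so refactoring
// from one to the other requires no test changes at all.
for (const sumOfSquares of [sumOfSquaresLoop, sumOfSquaresReduce]) {
  console.assert(sumOfSquares([1, 2, 3]) === 14);
  console.assert(sumOfSquares([]) === 0);
}
```

Had the test instead asserted “uses a loop variable named `total`”, the refactor to `reduce` would have broken it, even though no caller could tell the difference.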
<p>None of this is obvious or intuitive when you’re just starting out, but keeping the principle of <i>test the interface</i> in mind will help you pick the right kind of test: end-to-end, some kind of rendering/<abbr>UI</abbr> test for individual components, and unit tests for standalone “business logic” classes or functions. Hopefully this can help a few of you out there internalize this faster than I did!</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>JavaScript is getting private fields and methods soon, which will help a lot with this—but the basic principle here will remain important even then, because not everything that’s private in terms of <abbr>API</abbr> design can or should be private in terms of the implementation mechanics. This is a question I’d love to dig into… in a future post.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn2" role="doc-endnote"><p>A related tip—if you find yourself wishing that the implementation were easier to <em>test</em>, and needing to mock or stub parts of it to make it testable, that’s <em>often</em> a sign that your design needs some work!</p>
<p>Note that I didn’t spend much time on functions here because it’s much <em>harder</em> to get yourself into these messes with functions. In most languages, you don’t have any way to reach in and mess with their internals, so you’re safe from a lot of these issues. Inputs and outputs are all you have to work with. This is one of the great advantages to working with a functional style where you can. Use of closures for managing state complicates this story a bit, but even there: less so than with most of the other things discussed here!<a href="#fnref2" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
</ol>
</section>
A Note on Attention · Chris Krycho · 2019-11-04 · tag:v4.chriskrycho.com,2019-11-04:/2019/a-note-on-attention.html
Walking through the airport this morning, I found that my attention itself had caught my attention. Having a camera around my neck made a difference in how I experienced the world.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> written with an eye to folks willing to think a little bit harder about <em>attention</em>—something we <em>all</em> need to get better at.</i></p>
<hr />
<p>Walking through the airport this morning, I found that my attention itself had caught my attention. I have had <a href="https://thefrailestthing.com/2018/08/03/spectrum-of-attention/">this bit</a> from L. M. Sacasas bouncing around in my head for the last month or so:</p>
<blockquote>
<p>The ability to take photographs expands (and limits) the hiker’s perceptive repertoire—it creates new possibilities, the landscape now appears differently to our hiker. Smartphone in hand, the hiker might now perceive the world as field of possible images. This may, for example, direct attention up from the path toward the horizon, causing even our experienced hiker to stumble. We may be tempted to say, consequently, that the hiker is no longer paying attention, that the device has distracted him. But this is, at best, only partly true. The hiker is still paying attention. In fact, he may be paying very close, directed attention, hunting for that perfect image. But his attention is of a very different sort than the “in the flow” attention of a hiker on the move. They are now missing some aspects of the surroundings, but picking up on others. Without the smartphone in hand, the hiker may not stumble, but they may not notice a particularly striking vista either.</p>
</blockquote>
<p><a href="https://v4.chriskrycho.com/2019/photography-ing-again.html">Yesterday’s bit of history</a> might give you a clue as to how I ended up walking through Denver International Airport with my camera in hand today, taking in the architecture with a rather different eye than I usually do. My normal path through the airport is <em>merely</em> utilitarian: get to the gate, fill my water bottle, hit the bathroom, grab a seat and knock out some work while I wait to board. Today, though, I found myself considering the structures of the facility, its décor, the little touches architects and designers had put in place to (or at least, to attempt to) humanize what might otherwise be a terribly forbidding structure.</p>
<p>(Airports are among the chief edifices of our modern, technological regime. They are not merely travel hubs, but miniature shopping malls. They require not only the massive infrastructure that supports aircraft themselves, but also the perhaps even greater infrastructure to support the cars we park there, the workers who cycle through, the food to be eaten, and so on. They are, I think, under-appreciated as signifiers of our default approach of technological mastery over the world. More on this, perhaps, another day.)</p>
<p>The very act of carrying a camera with me changed my perceptual relationship to the airport. And why? Because the device around my neck prompted (that is: my choice to carry it with me prompted) a shift in my attention. I left Sacasas’ words above just as he wrote them, focused on the way a smartphone shifts the attention of his hypothetical hiker. The smartphone, by dint of its ubiquity and its versatility, does indeed (and <em>more especially</em> than other devices) shift our attention. Not least, I think, because for most of us, there is no conscious choice to bring it with us, as there was for me with my camera this morning.</p>
<p>The phone both draws our attention and diffuses it. For all that having the smartphone in my pocket may increase my attentiveness to the beautiful vista on a hike, it may also tempt me to disengage entirely from the hike: not only to consider the environment in a different light, but to send my mind across the world to other places, other activities entirely. The contrast with a dedicated device like the camera I brought with me is illuminating. Certainly a camera shifts my relationship to a hike, just as it does to a stroll through the airport. But it does so by way of <em>focusing</em> my attention, and drawing it into aspects of the hike that I might otherwise miss. That focusing is a tradeoff: as Sacasas notes, it means there are things I notice which I might not have had I not had a camera in hand; but also that there are aspects of a hike that I do not enjoy as I would had I no camera to hand—precisely because my attention is focused by the device. But the difference between the way a single-purpose technology affects our attention and the way a device with greater versatility affects our attention is rather stark.</p>
<p>I have an increasing degree of affection for, and interest in, devices dedicated to a single job. Some of this is simply that, as good as smartphones are at many tasks, a purpose-built piece can often serve better than a jack-of-all-trades production. As my friend <a href="https://www.stephencarradini.com">Stephen</a> has often lamented: the iPod was a better portable music player—in the sense of doing that one specific job—than any iPhone! I could say the same of my <a href="https://us.kobobooks.com/collections/ereaders/products/kobo-aura-one-limited-edition">Kobo</a>… and far more for a paper book! I find the experience of using those task-tailored technologies much more <em>enjoyable</em> than our ubiquitous and flexible—and therefore ultimately <em>distracting</em>—smartphones. They shape my attention very differently, and (I think) <em>better</em>.</p>
Apple, Your Developer Documentation is… Missing · Chris Krycho · published 2019-10-28T07:25:00-04:00, updated 2019-10-28T08:10:00-04:00 · tag:v4.chriskrycho.com,2019-10-28:/2019/apple-your-developer-documentation-is-garbage.html
The current state of Apple’s software documentation is the worst I’ve ever seen for any framework anywhere. Apple needs to fix this—now.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> practitioners or interested lookers-on for software development—and Apple itself.</i></p>
<p><i class=editorial><b>Edit:</b> some folks <a href="https://news.ycombinator.com/item?id=21377100">rightly pointed out</a> that my use of “garbage” suggests that the problem is the quality of the existing documentation; I’ve retitled the post to capture that the problem is the <em>massive absence</em> of documentation. You can see the original title by way of the slug.</i></p>
<p>Over the past few months, I have been trying to get up to speed on the Apple developer ecosystem, as part of working on my <a href="https://rewrite.software"><b><i>re</i>write</b></a> project. This means I have been <a href="https://v4.chriskrycho.com/2019/rewrite-dev-journal-how-progress-doesnt-feel.html">learning</a> Swift (again), SwiftUI, and (barely) the iOS and macOS <abbr title='application programming interface'>API</abbr>s.</p>
<p>It has been <em>terrible</em>. The number of parts of this ecosystem which are entirely undocumented is frankly shocking to me.</p>
<p>Some context: I have spent the last five years working very actively in the JavaScript front-end application development world, first in AngularJS and then in Ember.js. Ember’s docs once had a reputation of being pretty bad, but in the ~4 years I’ve been working with it, they’ve gone from decent to really good. On the other hand, when I was working in AngularJS 5 years ago, I often threw up my hands in quiet despair at the utter lack of explanation (or, occasionally, the <em>inane</em> explanations) of core concepts. I thought that would have to be the absolute worst a massive tech company (in that case, Google) providing public <abbr>API</abbr>s could possibly do.</p>
<p>I was wrong. The current state of Apple’s software documentation is the worst I’ve ever seen for any framework anywhere.</p>
<p>Swift itself is relatively well covered (courtesy of the well-written and well-maintained book). But that’s where the good news ends.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> Most of SwiftUI is entirely undocumented—not even a single line explanation of what a given type or modifier does. Swift Package Manager has <em>okay</em> docs, but finding out the limits of what it can or can’t do from the official docs is difficult to impossible; I got my ground truth from Stack Overflow questions. I’ve repeatedly been reduced to searching through <abbr title='World Wide Developer Conference'>WWDC</abbr> video transcripts to figure out where someone says something relevant to whatever I’m working on.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>This is, frankly, unacceptable. In the Ember ecosystem, we have a simple rule that code doesn’t get to ship unless it’s documented. The same goes in Rust (I should know: I <a href="https://github.com/rust-lang/rfcs/pull/1636">wrote</a> the <abbr title='request for comments'>RFC</abbr> <a href="https://rust-lang.github.io/rfcs/1636-document_all_features.html">that made that official policy</a>). Now, I understand that Apple’s <abbr>API</abbr> developers (often) work in a different context than these open source projects—especially in that they face crunches around releases which are tied to hardware products shipping.</p>
<p>But. At the end of the day, when you’re vending an <abbr>API</abbr>, it isn’t done until it’s documented. Full stop.</p>
<p>Given what I know of Apple’s approach to this, the problem is not individual engineers (who are not responsible for writing docs) or even the members of dedicated documentation teams (who <em>are</em> responsible for writing docs). But that does not make it any less a failure of Apple’s engineering <em>organization</em>. The job of an <abbr>API</abbr> engineering organization is to support those who will consume that <abbr>API</abbr>. I don’t doubt that many of Apple’s <abbr>API</abbr> engineers would <em>love</em> for all of these things to be documented. I likewise do not doubt that the documentation team is understaffed for the work they have to do. (If I’m wrong, if I <em>should</em> doubt that, because Apple’s engineering culture <em>doesn’t</em> value this, then that’s even worse an indictment of the engineering culture.) This kind of thing has to change at the level of the entire engineering organization.</p>
<p>Apple claims to be interested in building a platform that is accessible to everyone—from a brand new developer to the most experienced gray-haired folks who’ve been around since the NeXT days. Right now, they’re not even close. I have a decade of software development under my belt, a fair bit of it in languages with rich type systems and functional programming idioms, and some of this stuff is hard for <em>me</em> to figure out. I can’t imagine how mind-bogglingly terrible the experience would be for someone just starting in software, or coming over with experience only in languages like Ruby or Python or JavaScript. It would be completely impossible to learn.</p>
<p>Apple, if you want developers to love your platform—and you should, because good developers are your lifeblood—and if you don’t want them to flee for other platforms—and you should be worried about that, because the web is everywhere and Microsoft is coming for you—then you need to take this seriously. Adopt the mentality that has served other frameworks and languages so well: <strong><em>If it isn’t documented, it isn’t done.</em></strong></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I am not the only one who has noticed this. <a href="https://nooverviewavailable.com">No Overview Available</a> summarizes the extent of documentation in Apple’s <abbr>API</abbr>s and… it’s not a good look. Hat tip to <a href="https://lobste.rs/u/wink">Lobste.rs user wink</a> and my friend <a href="https://jeremywsherman.com">Jeremy Sherman</a>, who both noted this.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn2" role="doc-endnote"><p>Credit where credit is due: it is genuinely excellent from an accessibility and general usability standpoint that Apple has these transcripts. However, they’re not a substitute for <em>documentation</em>!<a href="#fnref2" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
</ol>
</section>
rewrite Dev Journal: How Progress Doesn’t Feel · Chris Krycho · 2019-10-26 · tag:v4.chriskrycho.com,2019-10-26:/2019/rewrite-dev-journal-how-progress-doesnt-feel.html
I spent this afternoon learning about Swift Package Manager. This kind of learning in a new ecosystem doesn’t always <em>feel</em> like progress—but it is.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> practitioners or interested lookers-on for software development—especially indies.</i></p>
<p>Today, I had about 3½ hours dedicated to working on <b><i>re</i>write</b> and I <em>did</em> make progress… but it sure didn’t feel like it.</p>
<p>For the past month or so, I’ve been sidelined from working on the project by way of getting <em>very</em> sick and then being swamped with travel for a conference followed by a friend’s wedding. I resolved, however, to get some things done today. I enjoyed that sense of momentum I had in those first couple weeks I was working, and I want it back.</p>
<p>Today’s work, however… <em>felt</em> exceedingly unproductive. It wasn’t. I was <em>learning</em>. Specifically, I was learning how Swift Packages and the <a href="https://github.com/apple/swift-package-manager">Swift Package Manager (<abbr>SPM</abbr>)</a> work, how they work with Xcode, and what they can and cannot do. In general, learning like this is necessary and valuable, and this specific knowledge domain is particularly necessary and valuable for me. For one thing, the details of how I’m building the app will make this very applicable very quickly. (More on that in a future update.) For another, because I work best when I have a good understanding of how the whole system works together.</p>
<p>The net of it, though, was that I came out with a good idea of how to use Swift Packages… and zero new lines of code written for the app itself. I may yet make a little progress this evening after my little girls are in bed, but the big takeaway for me today was the need to remind myself that <em>learning is a form of progress</em>. You might think that I, academically-minded and hyper-nerdy fellow that I am, would find that easy to remember. It turns out, though, that it <em>isn’t</em> easy to remember in the context of the desire to actually ship something!</p>
<p>I hope this dev journal entry serves as an encouragement to others doing development work. There’s value in these days. They’re not failures. They’re an integral part of the way you <em>get</em> to the day when you actually ship.</p>
<p>I know this because I’ve been here before. There was a time when I had no idea how I was going to ship that PHP app and I spent whole afternoons figuring out something about <a href="https://subversion.apache.org">SVN</a>—afternoons that did not feel like part of shipping, but which ultimately led me to recover from accidentally deleting my entire codebase. There was a time when I had no idea how a JavaScript <a href="https://en.wikipedia.org/wiki/Single-page_application">“single page application” (<abbr>SPA</abbr>)</a> could actually work, and spent <em>multiple</em> afternoons reading about <abbr title='application programming interface'>API</abbr>-driven applications—afternoons which did not feel productive, but ultimately led to the point where I’m in a technical leadership role on one of the biggest <abbr>SPA</abbr>s in the world. So I can say fairly confidently that, however unproductive it felt, this afternoon’s work loading up an accurate understanding of <abbr>SPM</abbr> <em>will</em> yield dividends. And sooner than it feels like today!</p>
User Interfaces are API Boundaries · Chris Krycho · 2019-09-12 · tag:v4.chriskrycho.com,2019-09-12:/2019/user-interfaces-are-api-boundaries.html
Yesterday, in the midst of a rollicking conversation about building forms in web apps, I realized: User interfaces are API boundaries, too!
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> software developers, especially those who work on user interfaces.</i></p>
<p><a href="https://fsharpforfunandprofit.com/ddd/">Domain-driven design</a> and its near neighbor, the <a href="https://web.archive.org/web/20060711221010/http://alistair.cockburn.us:80/index.php/Hexagonal_architecture">ports and adapters (hexagonal) architecture</a>, both emphasize the importance of distinguishing between your internal “business logic” and your interactions with the rest of the world. Much of the time, the “ports” that get discussed are <abbr title="application programming interface">API</abbr> calls (e.g. over <abbr title="hyper-text transfer protocol">HTTP</abbr>) or interacting with a database.</p>
<p>Yesterday, in the midst of a rollicking conversation about building forms in web apps,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I realized:</p>
<p><em>User interfaces are <abbr title="application programming interface">API</abbr> boundaries, too!</em></p>
<aside>
<p>I claim no novelty here; I’m sure that if I went through the literature on domain-driven design and the adjacent architectural ideas, I’d find this same point made by others. It’s just a fruitful<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> wording I have not heard before—a concise way of expressing an idea that has been rather floating around in my head in much vaguer terms for the last couple years.</p>
</aside>
<p>One of the key insights of <abbr title="domain-driven design">DDD</abbr> and the ports-and-adapters model is that every interaction with the world outside your program is a place of uncertainty. The <abbr title="application programming interface">API</abbr> might have changed and you might be getting back different responses than you expect. The database might have been corrupted. The network might be down—or worse, degraded so that you get <em>partial</em> messages through, and have to deal with incomplete or nonsensical data. Your software design has to account for this. If you isolate the complexity of dealing with that to well-defined, well-constrained boundaries for your application, everything in between can be <em>much</em> simpler.</p>
<p>And the most reliably unpredictable source of data we have for <em>any</em> application… is users! People are complicated and distracted, and our interfaces are always imperfect, often misleading or confusing in various ways (our best intentions notwithstanding). So we get “bad” data from our users. I scare-quote “bad” here because the data is not (necessarily) <em>morally</em> bad (though: see Twitter!) and it is (usually) a <em>mistake</em> rather than <em>malice</em> at root (though: see all sorts of hacking). But from the perspective of our app’s internals, the data has to be validated and transformed to our model of the world, just as data returned from an <abbr title="application programming interface">API</abbr> or a database does.</p>
<p>If you’re familiar with how these architectures suggest handling sources of data external to your program, the implication for user interaction is obvious: you need to treat it like you would an <abbr title="application programming interface">API</abbr>. You should have a clean separation between the data model of a form and the data model used within your application. Put in common <abbr title="object oriented">OO</abbr> parlance: your form model is a kind of <a href="https://martinfowler.com/eaaCatalog/dataTransferObject.html">data transfer object</a>.</p>
<p>Notice that this holds whether you’re using a traditional web form which submits a <code>POST</code> request via <abbr title="hyper-text transfer protocol">HTTP</abbr>, or building a rich <abbr title="single page application">SPA</abbr>-style JavaScript app which will use the form data without ever sending it anywhere. You have to first validate the data to make sure it is complete and correct—presumably with a mechanism for letting the user know if it isn’t. You also normally need to <em>transform</em> the basically flat data you get back from your form into a data structure which is appropriately rich for the domain you’re working with.</p>
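Here is a minimal sketch of that validate-and-transform step at the form boundary. All the field names and the domain model are hypothetical; the shape to notice is that the flat, all-strings form payload never crosses into the app untouched:

```javascript
// A hypothetical flat form payload, exactly as a form hands it to us:
// every value is a string, including numbers and checkbox state.
const formData = {
  name: '  Jane Doe ',
  birthYear: '1990',
  newsletter: 'on',
};

// The boundary: validate the "DTO" and transform it into a richer
// domain object, or report errors back to the user.
function toSubscriber(form) {
  const name = form.name.trim();
  const birthYear = Number(form.birthYear);

  const errors = [];
  if (name.length === 0) errors.push('name is required');
  if (!Number.isInteger(birthYear)) errors.push('birth year must be a number');
  if (errors.length > 0) return { ok: false, errors };

  return {
    ok: true,
    value: { name, birthYear, wantsNewsletter: form.newsletter === 'on' },
  };
}

const result = toSubscriber(formData);
console.assert(result.ok && result.value.name === 'Jane Doe');
```

Everything past `toSubscriber` gets to work with clean, correctly-typed domain data, which is exactly the simplification the ports-and-adapters model promises.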
<p>Again: all of this is bog standard for DDD and ports-and-adapters thinking. The point is that you should treat forms specifically and user interaction in general in much the same way as any other external data: because <em>user interfaces are <abbr title="application programming interface">API</abbr> boundaries</em>.</p>
<p>In a future post, hopefully some time in the next week or two, I’ll trace out one of the implications of this for how I think about building forms in much more concrete terms!</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>about which conversation more another day!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn2" role="doc-endnote"><p>I found it <i>fruitful</i> in two senses: it was generative for me as I reflected on it, and it produced some forward motion in a conversation about real-world software development.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
</ol>
</section>
rewrite Dev Journal: How I Started · Chris Krycho · 2019-09-04 · tag:v4.chriskrycho.com,2019-09-04:/2019/rewrite-dev-journal-how-i-started.html
This week, I started actually writing software for my rewrite project. This is the story of how I got past the daunting feeling of not knowing where to begin.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> practitioners or interested lookers-on for software development—especially indies.</i></p>
<p>This week, I started the <i>actually writing software</i> phase of working on <a href="https://rewrite.software"><b><i>re</i>write</b></a>. As of yesterday evening, I actually have a bit of code that, although useless in every way, and not especially attractive, does in fact display a single reference on an iPhone screen. I have an incredibly, impossibly long way to go.</p>
<p>It doesn’t matter: I <em>started</em>.</p>
<p>I had the day off on Monday courtesy of Labor Day, and I chose to spend the chunk of it that <em>wasn’t</em> focused on chores and housework and family time on actually getting this thing moving. I’ve been dreaming for over four years now. That time wasn’t wasted, by a long shot, but as I noted in <a href="https://v4.chriskrycho.com/2019/announcing-rewrite.html">my announcement post</a>, you can plan forever… or you can just start building at some point.</p>
<p>The problem I had—as I noted <a href="https://v4.chriskrycho.com/2019/starting.html">back in July</a>—is that I didn’t actually feel like I knew <em>how</em> to start. This is a mammoth project, for one thing. For another, doing this kind of indie work requires dramatically improving my design and product chops. And of course, developing for native apps means learning whole new technology stacks as well!</p>
<p>I’ve known all along that the only way to get anywhere was to break the problem down. You know: the same as always in software development. Find something small to start on. I had already identified which <em>part</em> of the app to dig into first: reference management. This is a place where I could have something that is useful to <em>me</em> in reasonably short order (I want to be able to update my own reference library on my iPad!). Even that was daunting, though—still far too large.</p>
<p>So, Monday, I asked: <i>What is the smallest possible reference-related functionality I could build?</i> The answer, I concluded, was: <i>just display an existing library of references.</i> Then I went further, because “display an existing library of references” is surprisingly complicated. It involves, at a minimum:</p>
<ul>
<li>defining how to display a reference
<ul>
<li>…and therefore also designing the data structures representing references</li>
</ul></li>
<li>actually building that display</li>
<li>defining how to display a list of references to the user—which things to include, which not to, etc.</li>
<li>actually building that list</li>
<li>making it so that tapping on an item in a list displays the detailed view defined previously</li>
<li>making it so you can get back <em>out</em> of the detail view to the list view again</li>
<li>loading a list of references from somewhere, e.g. a BibTeX file from iCloud Drive</li>
<li>parsing that list of references into the internal data structures designed to represent them</li>
</ul>
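<p>The first two bullets—a data structure for a reference and a rule for displaying one—are easy to picture concretely. Here is a hedged sketch in TypeScript purely for illustration (the actual app is being built in Swift; <code>Reference</code> and <code>citationLabel</code> are names I’m inventing here, not the real design):</p>

```typescript
// Illustrative only: the real data model lives in Swift/SwiftUI.
// A minimal shape for a bibliographic reference, roughly BibTeX-flavored.
type Reference = {
  citationKey: string;   // e.g. the BibTeX key
  authors: string[];
  title: string;
  year: number;
};

// One possible "how to display a reference" rule: Author (Year), Title.
function citationLabel(ref: Reference): string {
  const authors =
    ref.authors.length > 2
      ? `${ref.authors[0]} et al.`
      : ref.authors.join(" and ");
  return `${authors} (${ref.year}), ${ref.title}`;
}
```

<p>Even a toy version like this forces the useful early decisions: which fields are required, how multiple authors collapse, and what the list view actually needs to render.</p>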
<p>Now, for an experienced iOS or macOS developer, some of those things might feel trivial enough that they don’t even warrant their own bullet point. But I’m <em>not</em> an experienced iOS or macOS developer; I have no idea how to do any of this! What’s more, breaking it down that way means that I have small, discrete tasks that I can go after in small blocks of time. Remember: this is a side project, which I am fitting in around my day job, my church commitments, and time with friends and family. I have to be able to make progress in small blocks of time! I have to <em>feel</em> that progress, at least a little, so that I can keep going.</p>
<p>Every bullet point on the list I came up with represents a discrete item of work. It not only <em>may</em> but <em>must</em> be small, so that I can do it in an evening or three. It needs to be done in such a way that later steps can build on it—but it doesn’t actually need to be anything like what those later steps will transform it into. It just needs to be enough to keep momentum moving, and done in such a way that later changes are not too difficult to make.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>Monday evening, I took the items I had identified and turned them into GitHub Issues in a GitHub project, called <b>Display a BibTeX Library</b>. (I’ll talk about how I’m using GitHub projects a little at some point in the future; it’s a nice flow so far.) Every single one of those items, no matter how apparently small, got its own card. One of them, about Swift-Rust interop, got a note saying, roughly, “I’m pretty sure this will need to be its own whole sub-project.” And looking at that list… I felt a lot less daunted! Every one of the tasks involves something I don’t know how to do yet—but only <em>one</em> thing I don’t know how to do yet, not <em>many</em>.</p>
<p>Last night, I took the first item on the list: display a single reference item. Not <em>parsing</em> them, not even from a string hard-coded into the app, and certainly not loading a file or anything like that. Just hard-coding in a data structure and displaying it on the screen. And, over the course of a few hours, including <em>lots</em> of searching<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> for answers about specific SwiftUI error messages,<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> I managed to implement that one thing.</p>
<p>As I said above: it’s <em>useless</em> in the strict sense. It’s not especially pretty, either. But it’s progress. I started. And now I have that little sense of momentum, and I can keep going!</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If you think this sounds like the ideas current in the best parts of agile software development—working to make future changes easy; iterating rapidly; delivering the smallest possible chunks of value, and doing so continuously—you’re not wrong!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn2" role="doc-endnote"><p>“Searching,” not “Googling,” because I only very rarely Google anything. I search, with <a href="https://duckduckgo.com">Duck Duck Go</a> as my starting point and its <a href="https://duckduckgo.com/bang">! search commands</a> as power tools for searches in specific places.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn3" role="doc-endnote"><p><em>Wow</em> could SwiftUI’s compiler diagnostics use some improvement! I sincerely hope that the Swift team in general and the SwiftUI folks in particular take a close look at Rust and Elm for inspiration here.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
</ol>
</section>
<h1><a href="https://v4.chriskrycho.com/2019/ember-type-defs-livestreaming.html">Ember Type-Defs Livestreaming</a></h1>
<p><i>2019-07-20 · Chris Krycho</i></p>
<p><i>Over the course of the rest of this year, I’m going to be working regularly on expanding the set of available TypeScript type definitions in the Ember.js community—and live-streaming it!</i></p>
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> People interested in <a href="https://emberjs.com">Ember</a> and/or <a href="http://www.typescriptlang.org">TypeScript</a>.</i></p>
<p>Over the course of the rest of this year, I’m going to be working regularly on expanding the set of available <a href="http://www.typescriptlang.org">TypeScript</a> type definitions in the <a href="https://emberjs.com">Ember.js</a> community. A huge part of that effort is <em>enabling others to do the work</em>. It’s completely unfeasible for me to be the only person (or even the Typed Ember team the only people) doing this work. There’s simply too much to be done!</p>
<p>One part of my work, then, is figuring out how to level up the community’s ability to take on these tasks. I’m going to be doing this in a few ways: pairing with people who want to convert their addons to TypeScript or write type definitions for them, creating much deeper guides on <em>how</em> to convert a library to TypeScript or write effective types for a third-party library, and—more notably for <em>this</em> post—live-streaming, and sharing recordings of, as many of those efforts as I can.</p>
<p>I’m hoping that this will prove a boon not only to the Ember community, but also to the TypeScript community at large. Many people are comfortable working in these spaces… but especially as TypeScript’s popularity grows, having more of this kind of advanced material will hopefully be a boon to <em>lots</em> of teams in <em>lots</em> of frameworks and contexts.</p>
<p>You can check out the recording of the first live-stream (working on types for a new module layout in Ember Data 3.11) <a href="https://www.youtube.com/watch?v=eNLXi-s7-5o">here</a>, and I created a dedicated YouTube playlist for <em>all</em> of this content, as well: <a href="https://www.youtube.com/watch?v=eNLXi-s7-5o&list=PLelyiwKWHHApVB8gKKqZJw8Tv8qqCJGUv">Typing the Ember.js Ecosystem!</a></p>
<p>Here’s <a href="https://www.youtube.com/watch?v=eNLXi-s7-5o">the first episode</a>! (Note that I accidentally overlapped the tool I’m using to show keystrokes and the video—noob mistake for sure!)</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/eNLXi-s7-5o" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>
</iframe>
<p>I expect to be doing this on the following dates almost certainly:</p>
<ul>
<li>August 16</li>
<li>September 20</li>
<li>October 18</li>
<li>November 15</li>
<li>December 20</li>
</ul>
<p>I will also be doing it a number of <em>other</em> Fridays throughout the year—most likely beginning with August 9. To watch live, you can <a href="https://www.twitch.tv/chriskrycho">follow me on Twitch</a>. I’ll also always announce those plans in <a href="https://discordapp.com/invite/zT3asNS">the Ember Discord</a> at least one day ahead of time, and I’ll shortly be setting up an ongoing thread in <a href="https://discuss.emberjs.com">the Ember forums</a> for discussion and feedback and the like.</p>
<p>Here’s hoping both Ember and non-Ember users find the materials helpful in picking up TypeScript!</p>
<h1><a href="https://v4.chriskrycho.com/2019/appearance-corecursive-34.html">Appearance: Corecursive #34</a></h1>
<p><i>2019-07-15 · Chris Krycho</i></p>
<p><i>Chatting with Adam Gordon Bell on Corecursive—mostly TypeScript, but also a bit of Rust, type theory, and productivity!</i></p>
<p>I was delighted to spend a bit over an hour <a href="https://corecursive.com/034-chris-krycho-typescript/">chatting with Adam Gordon Bell on the Corecursive podcast</a>. I was there officially to talk about TypeScript, and I did a <em>lot</em> of that… but we also dug into Rust a bit, of course, as well as talking about my schedule and “productivity”.</p>
<p>I’ve been podcasting for a few years now, but this was only the second time I’ve ever been on someone <em>else’s</em> podcast—and it was a blast. Thanks so much to Adam for having me on!</p>
<h1><a href="https://v4.chriskrycho.com/2019/announcing-rewrite.html">Announcing rewrite</a></h1>
<p><i>2019-07-06 · Chris Krycho</i></p>
<p><i>For the past four years, I have been dreaming about an absurdly ambitious project: to build a genuinely great research writing app. Today, I’m “kicking off” the project publicly, at rewrite.software.</i></p>
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> people interested in note-taking, research, and long-form writing.</i></p>
<p>For the past four years, I have been dreaming of an absurdly ambitious application: a tool that can actually handle research writing <em>well</em>. Good research writing—whether it’s a college paper or a Ph.D. thesis, a journal publication or a magazine article, a scholarly blog or a big book—is a complex and challenging task. At a <em>minimum</em>, research writing includes:</p>
<ul>
<li>finding resources, tracking references to them, and citing them correctly</li>
<li>taking, and making good use of, notes—including quotes from those references</li>
<li>writing large, complex, highly structured documents</li>
<li>publishing in a <em>very</em> specific format for everything from typeface to page margins to citation styles</li>
</ul>
<p>There are tools out there which handle <em>some</em> of those pieces well—a few genuinely great notes apps in particular, and some solid contenders in the reference management space. They don’t play together well, though—whether it’s sync errors between apps, the disjointed experience of using tools which know almost nothing about each other, fighting with different journals’ styles, or the simple pain of trying to <em>write</em> in what is really a <i>desktop publishing tool</i> (looking at you, Microsoft Word).</p>
<p>And I would know: over the course of my M.Div., I wrote well over a hundred thousand words of papers; and never mind the many books’ worth of notes I took.</p>
<p>I eventually found a flow that worked for me during those years, pulling together <a href="https://daringfireball.net/projects/markdown">Markdown</a>, <a href="https://en.wikipedia.org/wiki/BibTeX">BibTeX</a>, <a href="https://pandoc.org">pandoc</a>, and <a href="https://citationstyles.org"><abbr title="citation style languages">CSL</abbr>s</a> to generate Word documents. (I wrote about that <a href="https://v4.chriskrycho.com/2015/academic-markdown-and-citations.html">here</a>, if you’re curious.) It did what I wanted! But it was arcane: even after years of working that way, I could never remember the various script invocations involved without looking them up. In short: there was something wonderful there… but the user experience was terrible.</p>
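<p>To give a flavor of the kind of incantation involved: the sketch below is illustrative, not my actual script—<code>paper.md</code>, <code>refs.bib</code>, and <code>chicago.csl</code> are placeholder file names, and <code>--filter pandoc-citeproc</code> was how citation processing worked with pandoc in that era (newer pandoc uses <code>--citeproc</code>). Wrapping it in a script is exactly the kind of thing you end up doing so you only have to remember it once.</p>

```shell
#!/usr/bin/env bash
# A sketch of a wrapper around the pandoc invocation for citation-aware
# Word output. File names are placeholders.
set -euo pipefail

build_cmd() {
  local src="$1"
  printf 'pandoc %s --filter pandoc-citeproc --bibliography refs.bib --csl chicago.csl -o %s\n' \
    "$src" "${src%.md}.docx"
}

# Print the command rather than running it, so the sketch works even
# without pandoc installed.
build_cmd paper.md
```

<p>Even so tamed, the experience makes the point: the pipeline works, but it is the opposite of a friendly user experience.</p>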
<p>Even by the time I wrote <a href="https://v4.chriskrycho.com/2015/academic-markdown-and-citations.html">that blog post</a>, I was thinking: <i>what would a genuinely great app in this space look like?</i> I had sketched out the basics in a notebook early that month—</p>
<figure>
<img src="https://f001.backblazeb2.com/file/rewrite/first-page.jpeg" title="a picture of the first page of my working notebook" alt="the first page of my working notebook" /><figcaption>the first page of my working notebook</figcaption>
</figure>
<p>—and I haven’t stopped thinking about it ever since. <i>How do you do reference management well? What makes for a good note-taking system? How do you author large, complex documents with easy reference to those notes? How do you pull all those all together for publishing? How do you make it a <em>great</em> app <em>and</em> make it cross-platform?</i></p>
<p>For the past four years, I have been doing two things in preparation for building <b><i>re</i>write</b> (the working title for this app):</p>
<ul>
<li>filling up that same notebook with design constraints, software architecture considerations, and business model ideas</li>
<li>developing the technical skills I’ll need to actually deliver on that dream</li>
</ul>
<p>Even with those years of planning, though, I’d be lying if I said I feel ready to take this on: this project is huge. It will take years to get to a minimum viable product. But you can plan forever and never ship… or you can just get moving, whether you feel ready or not. I quietly launched <a href="https://buttondown.email/rewrite">a newsletter</a> for the project back at the end of April, and over the past few weeks since getting back from vacation—and having at last <a href="https://v4.chriskrycho.com/2019/finishing-things-on-the-internet.html">wrapped up New Rustacean</a>—I have been working on it in earnest. I started digging into <a href="https://developer.apple.com/xcode/swiftui/">SwiftUI</a>, and have been sketching out more detailed <abbr title="user interface">UI</abbr> ideas, and even decided on what to build first.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>And so today, I’m publicly “launching” the project, by which I mean: giving it a little more fanfare than I have previously by blogging about it here <em>and</em> by giving it its own website: <a href="https://rewrite.software">rewrite.software</a>. For now, it’s just a simple landing page with a form to sign up for the mailing list, mostly there so I have a place to direct people to sign up for <a href="https://buttondown.email/rewrite">the project mailing list</a>. Anything more than that would be a distraction from what I really need to be doing now: learning and building!</p>
<p>I hope some of you will follow along as I take on this absurdly ambitious project, and I expect to be blogging about my adventures in Swift and SwiftUI, Rust, and more in the years to come!</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I’m going to try to ship a nice tool for managing references. It’ll be good to have the experience of actually <em>shipping</em> an app. More on that<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
<h1><a href="https://v4.chriskrycho.com/2019/my-final-round-of-url-rewrites-ever.html">My Final Round of URL Rewrites… Ever.</a></h1>
<p><i>2019-07-05 · Chris Krycho</i></p>
<p><i>This site now lives at v4.chriskrycho.com, and the previous versions of my public site are being migrated to v1, v2, and v3. And so I will never have to do a bunch of URL rewrites for new designs again. (Yes, this means I’m working on a redesign!)</i></p>
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> web development nerds like me.</i></p>
<p>Those of you subscribed to my <abbr>RSS</abbr> feed most likely saw a bunch of posts again earlier this week. That’s because the canonical <abbr>URL</abbr>s for the posts on my site changed: from <code>www.chriskrycho.com/&lt;year&gt;/&lt;title slug&gt;</code> to <code>v4.chriskrycho.com/&lt;year&gt;/&lt;title slug&gt;</code>. So, for example, <a href="https://v4.chriskrycho.com/2019/all-things-open-2019">my announcement</a> that I’m speaking at All Things Open 2019 moved from <code>www.chriskrycho.com/2019/all-things-open-2019.html</code> to <code>v4.chriskrycho.com/2019/all-things-open-2019.html</code>. I spent much of this past Wednesday working on getting this migration done, after spending a fair bit of time over the last week <em>planning</em> it. Over the course of the next few days, you’ll see <a href="https://v1.chriskrycho.com">v1</a> and <a href="https://v3.chriskrycho.com">v3</a> start working; <a href="https://v2.chriskrycho.com">v2</a> is already up as I write this.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>But <em>why</em>, you ask? Because I now have—at last!—a stable <abbr>URL</abbr> design for my website, which will <em>never have to change again</em>. (“At last” I say because I’ve been thinking about doing this since 2015. It feels <em>great</em> to finally have it done.) I care about stable <abbr>URL</abbr>s. I want a link to my content to work just exactly as well in 10 years as it does today. Don’t break the web! Don’t break all the documents that <em>aren’t</em> on the web but which point to places on the web! Historically, that has meant that <em>every</em> time I launch a new website design, I have to do a bunch of work to move the <em>previous</em> version of the site and create redirects for it.</p>
<p>No more! From this point forward, my content will always live at a <em>versioned</em> <abbr>URL</abbr>. This site is <code>v4.chriskrycho.com</code>. When I launch the redesign I’ve been working on (very soon!), it’ll be <code>v5.chriskrycho.com</code>.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> When I launch another redesign in 5 years, that’ll live at <code>v6.chriskrycho.com</code>—and so on. All I’ll have to do at that point is change where <code>www</code> and the root <code>feed.xml</code> redirect to, and everything else will just keep working.</p>
<p>The idea isn’t new to me—I got it originally from <em>someone</em> else; but I don’t remember who because it has been such a long time since I first saw the idea. I had done something <em>somewhat</em> similar when I launched the last version of my site, archiving the previous version at <code>2012-2013.chriskrycho.com</code>, but I failed to start the <em>new</em> version at a similarly specific location. What this means is that I had to take and redirect every piece of content that lived on what is now <code>v3.chriskrycho.com</code> from <code>www.chriskrycho.com</code> to its new home. Now, as I’m preparing to do the <code>v5</code> launch, I had to do the same <em>again</em>, but this time for what is now at <code>v4</code>!</p>
<p>I don’t want to do this again! Even with building <a href="https://github.com/chriskrycho/redirects">a small tool</a> to generate either file-based or Netlify redirect rules, getting it right is both time-consuming and error-prone, especially when <em>also</em> needing to do a <abbr title="domain name server">DNS</abbr> migration to <em>create</em> <code>v4.chriskrycho.com</code> and get myself off some old shared hosting and… it was a pain and a lot of manual work.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> The new approach means I will never have to do this again, and I cannot express just how happy that makes me.</p>
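<p>For the curious, the shape of the rules themselves is simple even though getting the full set right was not. A sketch in Netlify’s <code>_redirects</code> format—these particular paths are illustrative, not a copy of my real file:</p>

```
# Netlify _redirects format: source  destination  status
/2019/all-things-open-2019.html  https://v4.chriskrycho.com/2019/all-things-open-2019.html  301
/2015/*                          https://v4.chriskrycho.com/2015/:splat                     301
```

<p>The splat rule is what makes the bulk of the migration tractable: one line can forward a whole year’s worth of posts, with <code>:splat</code> standing in for whatever matched the <code>*</code>.</p>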
<p>So: <code>v4</code> it is for now, and <code>v5</code> coming soon. When that happens, you’ll see an announcement post in your feed, and then you’ll automatically be switched over to the new root feed on the <code>v5</code> site, without having to do anything at all. 🎉</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>For <em>very</em> long-time readers: I also used this as an opportunity to get my old <a href="https://52verses.chriskrycho.com">52 Verses</a> site off of Blogger’s infrastructure and into a purely-static-<abbr>HTML</abbr> setup as well. Happily, that one doesn’t involve any <abbr>URL</abbr> tweaking—just extracting the content from Blogger and pushing it to a static site host.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Feel free to watch that space as I iterate on it! It’s coming together nicely but still has a long way to go.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>The final redirects file is <a href="https://github.com/chriskrycho/www.chriskrycho.com/blob/d0b2584d94b55060d89c500bf0f146635e17d84f/public/_redirects">here</a>, if you’re curious.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
<h1><a href="https://v4.chriskrycho.com/2019/emberjs2019-part-2.html">#EmberJS2019, Part 2</a></h1>
<p><i>2019-06-17 · Chris Krycho</i></p>
<p><i>Let’s make TypeScript a first-class citizen of the Ember ecosystem. There’s a lot already done, but a lot left to do!</i></p>
<p>Over the last year, the Ember community has steadily delivered on the vision we all traced out in last year’s #EmberJS2018 and Roadmap <abbr title="Request for Comments">RFC</abbr> process, culminating with the shipping-very-soon-now <a href="https://emberjs.com/editions/octane/">Ember Octane Edition</a>. (All the pieces are pretty much done and are either on stable or will be shortly; we just need another <abbr title="long term support">LTS</abbr> release before we cut a full new edition!)</p>
<p>So… what should we tackle next? This year, I have only two parts, unlike <a href="https://v4.chriskrycho.com/emberjs2018">last year’s four</a> (and I’m sneaking them in just under the wire, as today is the deadline for entries!):</p>
<ul>
<li><b>Part 1:</b> <a href="https://v4.chriskrycho.com/2019/emberjs2019-part-1">Let’s finish modernizing the Ember programming model!</a></li>
<li><b>Part 2 (this post):</b> <a href="https://v4.chriskrycho.com/2019/emberjs2019-part-2">Let’s make TypeScript a first-class citizen of the Ember ecosystem.</a></li>
</ul>
<hr />
<p>For the last two and a half years, I have been working off and on towards making TypeScript viable in Ember apps and addons. I was delighted when others came along to help pick up the load, and we’ve <a href="https://v4.chriskrycho.com/2019/emberconf-2019-typed-ember-team-report.html">charted a course</a> for what we’d like to do over the year ahead. The major priorities we identified all point at what I’ve wanted since I started down this road back at the very end of 2016: for TypeScript to be a first-class citizen in the Ember ecosystem. Here’s my roadmap for how we get there. (Note that here I’m speaking only for myself—neither for the Typed Ember team nor for LinkedIn!)</p>
<section id="the-roadmap" class="level2">
<h2>The Roadmap</h2>
<section id="execute-on-our-priorities" class="level3">
<h3>1. Execute on Our Priorities</h3>
<p>All of us want this to happen. It’s not yet clear what all of our priorities will be in our jobs over the back half of 2019—but if we can, we’d like to see those efforts across the line.</p>
<p>The TypeScript team has eased one of our heavy burdens, by investing in performance monitoring infrastructure over the course of this year and paying close attention to how their changes affect us as well as other TypeScript consumers. We’re deeply appreciative! But there’s still a lot of work to be done that just needs time to actually do the work—reducing churn in type definitions, building type-checked templates, and improving our documentation.</p>
<p>None of those are insurmountable by any stretch. But they’d also be far likelier to happen if they were concretely identified as priorities for Ember as a whole, and we had commitment from the community to help!</p>
<p>To briefly summarize <a href="https://v4.chriskrycho.com/2019/emberconf-2019-typed-ember-team-report.html">those priorities</a> again:</p>
<ul>
<li><p>We need to make it so that consumers of our type definitions do not face breakage from updates to Ember’s types <em>or</em> TypeScript definitions. We already worked out a basic strategy to solve this problem, and I’ve done further exploration to validate that with key stakeholders of large apps (both TypeScript users and apps which <em>want</em> to use TypeScript) and core Ember contributors… but none of us have had time since EmberConf to write out the strategy as a Typed Ember <abbr title="Request for Comments">RFC</abbr>, much less to do the actual implementation work.</p></li>
<li><p>We need to make templates type-aware and type-safe. As fantastic as the experience of writing Glimmer components is—and I genuinely do love it!—it’ll be an order of magnitude better when you get autocomplete from your editor as soon as you type <code>&lt;SomeComponent @</code>… and see the names of the valid arguments to a <code>SomeComponent</code> and the types of things you can pass to them. Everyone who has used TSX in a well-typed React app knows just how good this can be. We can make the experience equally great in Ember, while maintaining the authoring and performance advantages of separate templates. Again: we know how to do this (and the TypeScript team is also working on things which may make it even better for us). We just need the time allocated to take the work from prototype to ready-for-real-world-use.</p></li>
<li><p>We need to dramatically expand the documentation we provide. Right now, ember-cli-typescript provides a minimal (and very useful!) set of docs. It’s enough to get you <em>started</em>, and if you’re already comfortable with TypeScript you’ll do all right. However, we’d love to provide helpful guides that show people not just the <em>mechanics</em> of using TypeScript with an Ember app, but best practices and techniques and the happy path. There’s a world of difference between being able to run <code>ember install ember-cli-typescript</code> successfully and being able to author an app or addon in TypeScript successfully, and we need to bridge that gap for successful ecosystem-wide adoption!</p></li>
</ul>
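<p>The template-typing priority above is easy to picture in plain TypeScript terms. The sketch below is hypothetical—<code>SomeComponentArgs</code> and <code>invoke</code> are invented for illustration, and the real design for type-checked templates may look nothing like this—but it shows the kind of information the tooling would have to surface: if the editor knew this interface, typing <code>&lt;SomeComponent @</code> could complete <code>@label</code> and <code>@onSelect</code> and check what you pass in.</p>

```typescript
// Hypothetical: the args a component declares, as a plain interface.
interface SomeComponentArgs {
  label: string;
  count?: number;                 // optional, so templates may omit it
  onSelect: (id: string) => void; // required callback
}

// A stand-in for invoking the component from a template: the type system
// rejects missing or mistyped arguments at this call boundary.
function invoke(args: SomeComponentArgs): string {
  return `${args.label} (${args.count ?? 0})`;
}
```

<p>That call boundary is exactly where TSX gets its ergonomics in React; the goal is the same guarantees without giving up separate templates.</p>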
</section>
<section id="type-the-ecosystem" class="level3">
<h3>2. Type the Ecosystem</h3>
<p>We also need to do the work—and this simply <em>cannot</em> be done solely by the handful of us that make up the core team—to get types in place for the whole of the Ember ecosystem. Two years ago, I started drafting a blog post outlining a quest to type the ecosystem. I rewrote a good chunk of it last year. I even began working on a draft of the quest in our repository in 2018! But we haven’t actually done it, and while a handful of important addons do now have types, (a) most still don’t, and (b) many of those which <em>do</em> have type definitions could use further iteration to tighten them up to make them more reliable or more useful.</p>
<p>I <em>hope</em> to actually open that quest sometime during the third quarter of this year. If things go as I hope, I will be doing some of that work myself, and I will be building documentation and training materials for others so <em>they</em> can see how to do it, and I will be available for code reviews on conversion efforts. I cannot guarantee that by any stretch—but it is my fervent hope, and there is very good reason to think it may actually come to pass!</p>
<p>In many ways, this also hinges on our ability to provide a good story for reducing churn in type definitions. Just as it’s important that we make Ember’s <em>own</em> types stable for consumers, it’s also important that we help addon developers provide the same kinds of guarantees for <em>their</em> consumers. The entire Ember community takes SemVer seriously, and that means the tools have to support that.</p>
</section>
<section id="define-embers-typescript-story" class="level3">
<h3>3. Define Ember’s TypeScript Story</h3>
<p>If we manage to execute on all the priorities outlined above, then there’s one last step for making TypeScript an official part of Ember’s story: an <abbr title="Request for Comments">RFC</abbr> defining <em>exactly</em> how Ember can officially support TypeScript—including two major commitments:</p>
<ul>
<li>shipping its own type definitions, with a well-considered approach to SemVer and TypeScript</li>
<li>considering TypeScript a <em>peer</em> to JavaScript in <abbr title="Application Programming Interface">API</abbr> design decisions</li>
</ul>
<p>I have a basic frame in mind for how we tackle both of those. The first is by far the more important of the two; the latter is already happening in an <i>ad hoc</i> way and merely requires the former before it can be formalized. But getting to the point where Ember can ship its own type definitions while upholding its semantic versioning commitments <em>and</em> taking advantage of the advances always happening with TypeScript itself is a large and non-trivial task.</p>
<p>It means a substantial amount of work within the Ember codebase. It means building new tooling for Ember’s core development process—so that the experience of working on Ember itself can remain ever-more productive even as we make sure that the types published for consumers are reliable, accurate, and stable. It means investing in both further education of the community and more static analysis tooling, to make sure that breaking <em>type</em>-level changes are not introduced accidentally.</p>
<p>These efforts are worth the investment they will require, but they <em>are</em> serious efforts.</p>
</section>
</section>
<section id="why-it-matters" class="level2">
<h2>Why It Matters</h2>
<p>This is not just me speaking as TypeScript fanboy and promoter. These things matter for TypeScript consumers of Ember—and they do, profoundly. But I would not suggest even that <abbr title="Request for Comments">RFC</abbr> for official support merely to that end. The TypeScript user community in Ember has done extremely well to date <em>without</em> that kind of official commitment, and could continue to do so (as do many other communities in the broader JavaScript ecosystem).</p>
<p>Why, then, do I suggest we make this not just a commitment for our little informal team but for the Ember community as a whole? Because while TypeScript users will benefit the <em>most</em> from these improvements, JavaScript developers will <em>also</em> benefit from them.</p>
<p>When both Ember core and every major addon in the Ember ecosystem have top-notch types available, <em>every</em> other app and addon author will be able to take advantage of those types, courtesy of the integration offered by TypeScript in every major editor. Whether they’re using Vim or Visual Studio, Ember developers will be able to get rich documentation, suggestions, and inline errors—even for their templates! This can be a massive win for developer productivity throughout the ecosystem. Investing to make TypeScript a first-class citizen of the Ember ecosystem will make the experience of authoring Ember apps and libraries better for <em>everyone</em>. So let’s do it!</p>
</section>
#EmberJS2019, Part 12019-06-17T20:25:00-04:002019-06-17T20:25:00-04:00Chris Krychotag:v4.chriskrycho.com,2019-06-17:/2019/emberjs2019-part-1.htmlLet’s finish modernizing the Ember programming model! That means everything from routes and controllers to the file system and build pipeline changes.
<p>Over the last year, the Ember community has steadily delivered on the vision we all traced out in last year’s #EmberJS2018 and Roadmap <abbr title="Request for Comments">RFC</abbr> process, culminating with the shipping-very-soon-now <a href="https://emberjs.com/editions/octane/">Ember Octane Edition</a>. (All the pieces are pretty much done and are either on stable or will be shortly; we just need another <abbr title="long term support">LTS</abbr> release before we cut a full new edition!)</p>
<p>So… what should we tackle next? This year, I have only two parts, unlike <a href="https://v4.chriskrycho.com/emberjs2018">last year’s four</a> (and I’m sneaking them in just under the wire, as today is the deadline for entries!):</p>
<ul>
<li><b>Part 1 (this post):</b> <a href="https://v4.chriskrycho.com/2019/emberjs2019-part-1">Let’s finish modernizing the Ember programming model!</a></li>
<li><b>Part 2:</b> <a href="https://v4.chriskrycho.com/2019/emberjs2019-part-2">Let’s make TypeScript a first-class citizen of the Ember ecosystem.</a></li>
</ul>
<hr />
<p>The Octane Edition represents the <em>delivery</em> of several years’ worth of work and experimentation. It represents a willingness to say “no” to many good efforts and ideas—a discipline Ember needs to continue succeeding. All of that is <em>very</em> much to the good! It’s precisely what I and many others called for last year.</p>
<p>This year, it’s time to deliver on a number of other long-standing goals of the Ember effort. That means:</p>
<ul>
<li>a modernized <em>build-system</em>, and with it the long-promised “svelte” builds, tree-shaking, and the ability to npm-install-your-way-to-Ember</li>
<li>a modernized <em>routing system</em>, leaving behind the final bits of cruft from the Ember 1.x era and fully-embracing the component-service architecture suggested last year</li>
</ul>
<section id="modernized-build-system" class="level2">
<h2>Modernized Build System</h2>
<p>Others have covered the build system in some detail, and I largely agree with their assessments. We <em>do</em> need to focus on landing that work and continuing to modernize our build pipeline, and the Embroider effort and everything it unlocks should absolutely be a core part of the roadmap. One of the biggest wins—for many potential adopters of Ember <em>and</em> for many existing Ember users who have large codebases they’d like to migrate <em>into</em> Ember—is that long-awaited npm-install-your-way-to-Ember story. Let’s make that happen! I’m confident that as long as we make that commitment, we’ll get it done.</p>
<p>Given that confidence, I’m going to focus for the rest of this post on the <em>directional</em> question with our routing system—both why we need a change and what I think the change should look like.</p>
</section>
<section id="modernized-routing-system" class="level2">
<h2>Modernized Routing System</h2>
<p>The Ember Router was years ahead of its time, and it remains very solid and reliable; it’s a workhorse. Unfortunately, the routing system <em>as a system</em> is showing its age, and has entered something of a period of instability of just the sort Editions are meant to address. Today, routing concerns are spread across four different parts of the application: the <i>route map</i>, <i>route classes</i>, <i>controller classes</i>, and <i>the router service</i>. Over the next year, we should iteratively design and implement our way toward a future without controllers… and possibly some other simplifications, if we can manage them. We can make working with Ember simultaneously <em>easier for newcomers</em> and <em>better for old hands</em>.</p>
<p>“Controllers are dead!” is one of the great bogeymen of Ember lore at this point; but I hope quite sincerely that a year from now it’s basically true. Controllers are the single part of Ember today that shows <em>very</em> clearly the application’s SproutCore roots. When I started learning AppKit and UIKit early this year, I was struck by all the things that sounded like Ember 1.x—or rather, vice versa! And just as Apple itself is now <a href="https://developer.apple.com/xcode/swiftui/">moving aggressively toward a programming model without controllers</a>, so should we!<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>It’s not that controllers are <em>bad</em>, exactly. It’s that they don’t <em>fit</em> with the rest of the framework at this point. They’re long-lived singletons (like services) but serve as backing classes for templates (like components). They are eagerly instantiated as soon as a <code>LinkTo</code> component that references them is instantiated, but not <em>set up</em> until the app transitions to the route associated with them. They’re required if you want to use query parameters, and query parameters don’t work with Ember’s “data down, actions up” paradigm… pretty much at all.</p>
<p>Controllers need to go, but they need a well-designed set of replacements—in particular, we need a good design for query parameter handling and for what template should be associated with a given route.</p>
<p>Query param handling is, I admit, mostly outside my wheelhouse. All of the situations where I’ve used it would be trivially solved by putting them on the router service, tracking changes on them with the <code>@tracked</code> decorator, and updating them with actions. However, I’m reliably informed that some of the more gnarly scenarios out there require a bit more than this, and I defer to the folks who know what they’re talking about there!</p>
<p>React and Vue solve the template problem by simply mounting <em>components</em> at given route locations. Ember should probably follow <em>roughly</em> the same path, while baking in good defaults along the way. Don’t call them “routable components,” though! It’s not just that the term carries too much baggage; it’s that a good design in this space should not require the components themselves to be anything special at all. Instead—whether it’s the route map or the route class that grows a small bit of new, purely declarative <abbr>API</abbr>, e.g. static class properties specifying the relevant components for the loading, resolved, and error states of the route’s model—the route itself should be able to specify exactly which component to render.</p>
<p>If we put in the work to get a design that satisfies all these constraints, we can come out with a <em>much</em> simpler routing system—and Ember’s entire programming model will be <em>much</em> more coherent as a result.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> We’ll simply have components, services, and routes—and routes will simply be a mapping from URL to a particular set of data and a corresponding component to render it into. That in turn will take us most of the rest of the way toward the programming model Chris Garrett proposed a year ago: <a href="https://medium.com/@pzuraq/emberjs-2018-ember-as-a-component-service-framework-2e49492734f1">Ember as a Component-Service Architecture</a>. This is the fitting conclusion to what we started in Octane: bringing the <em>whole</em> Ember programming model into coherence.</p>
</section>
<section id="bonus" class="level2">
<h2>Bonus</h2>
<p>I’d also like to strongly commend my friend Dustin Masters’ post, <a href="https://dev.to/dustinsoftware/the-case-for-embeddable-ember-4120">The Case for Embeddable Ember</a>. Call this a stretch goal: if we ship all the build pipeline elements represented above, the extra work required to get to that point is <em>relatively</em> small—and extremely valuable for many teams who want to replace legacy applications written wholly with Backbone, jQuery, etc., or who just want to see if Ember might have value to add without doing a full rewrite of their existing React/Vue/Angular/Aurelia apps.</p>
<p>Oh… and it turns out that the design constraints I suggested for a routing system that works well with components would lead fairly nicely and easily to Dustin’s proposal, and make for a straightforward path to fully adopt Ember and map its component tree to the router when you’re ready. Just saying.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>You can expect to hear a <em>lot</em> more from me about Swift <abbr title="user interface">UI</abbr> in this space, both in a general sense <em>and</em> as it relates to Ember. There are some fascinating points of contact between the two programming models!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>There may also be opportunities for further simplification past this, along with more substantial rethinks of our router as we have it. But those are not <em>necessary</em> for the next year, and making these changes will unlock further experimentation in that direction while making Ember more usable in the meantime.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
From My Sent Folder: On Mozilla and IRC2019-04-30T08:25:00-04:002019-04-30T08:25:00-04:00Chris Krychotag:v4.chriskrycho.com,2019-04-30:/2019/from-my-sent-folder-on-mozilla-and-irc.htmlA lesson free- and open-source software advocates still need to learn: that they have to prioritize what users actually value as well as the things they (we!) think users should value.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> anyone who cares about the success of free and open-source software.</i></p>
<p><i>A New Rustacean listener sent me a note lamenting <a href="https://blog.rust-lang.org/2019/04/26/Mozilla-IRC-Sunset-and-the-Rust-Channel.html">Mozilla’s transition from IRC to Discord</a>—i.e., from an open protocol to a proprietary service. For many advocates of free software, this is a deeply unsettling move. The kind listener who sent me an email pointed to <a href="https://matrix.org/blog/index">Matrix</a>, an open-protocol service, and noted: “perhaps the default web interface is slightly less slick, but there’s not much in it, and it’s open source.” I sympathize with them, but also had a few <em>other</em> thoughts. My reply is reproduced below.</i></p>
<hr />
<p>Yeah, it’s complicated for sure. The last time I looked, Matrix’s mobile clients were so bad as to be effectively unusable; I know that was one of the considerations for the Rust move to Discord.</p>
<p>It seems to me that a lot of folks committed to free and open-source have ended up wanting free-as-in-beer as well as free-as-in-speech, and the combo of that with the existing market dynamics means that good is rarely open/free-as-in-speech, and vice versa. It’s a Gordian knot I don’t see an easy way through.</p>
<p>The specific dynamic you highlight with docs is a prime example of the other cultural challenges I see, too: devs don’t like writing docs, they like writing code… but it turns out that docs and other “soft” things—including, yes, <abbr>UI</abbr> polish!—are actually as important for adoption as things like protocols, if not more so.</p>
<p>We’ll see this same cycle repeat until free and open source software advocates learn to prioritize the things users actually value as well as the things they (we!) believe they should value.</p>
EmberConf 2019 Typed Ember Team Report2019-03-26T15:30:00-04:002019-03-26T15:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2019-03-26:/2019/emberconf-2019-typed-ember-team-report.htmlAt EmberConf 2019, the Typed Ember team (Mike North, James Davis, Dan Freeman, and I) enjoyed dinner together and talked about the big problems on deck for TypeScript in Ember. Here’s what we covered!
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> People interested in <a href="https://emberjs.com">Ember</a> and <a href="http://www.typescriptlang.org">TypeScript</a>.</i></p>
<p>One of the real joys of working on <a href="https://ember-cli-typescript.com">ember-cli-typescript</a> over the last few years has been the <em>team</em> that has grown up around it. When I started, it was just me—trying desperately to just make things work at all and blogging about it as a way of documenting what I had learned and maybe, just <em>maybe</em>, drumming up interest along the way. No longer! Over the last two years, those efforts have grown into a group of us—Mike North, Dan Freeman, James Davis, and me—steadily pushing forward the state of the art for TypeScript in Ember.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> This week, the four of us were all at EmberConf 2019, and we ate dinner together the last evening of the conference—turning it into an impromptu discussion of the current state of affairs and what’s next on our plates.</p>
<section id="notes-on-our-meeting" class="level2">
<h2>Notes on Our Meeting</h2>
<p>The following is a summary (which all of us signed off on as accurate) of what ended up being a very wide-ranging conversation about the state of TypeScript in the Ember ecosystem.</p>
<section id="reducing-churn" class="level3">
<h3>Reducing Churn</h3>
<p>We have made very good progress getting the types to the generally very-good state they’re now in, including <em>mostly</em> insulating Ember users from the breakages in types that come up at times as TypeScript itself changes. However, the single most significant challenge for our efforts remains finding a way to change this story from <em>mostly insulating users</em> to <em>genuine reliability</em> of the sort Ember users are used to. An initial period of churn was to be expected given the amount of ground we had to cover. However, we are rapidly moving out of the early adopter phase for this tooling, and our responsibility to shield users from churn is increasing commensurate with that shift.</p>
<p>This is not a small task, because Ember and TypeScript have diametrically opposed views of the world when it comes to <a href="https://semver.org/spec/v2.0.0.html">Semantic Versioning</a> and tooling stability.</p>
<p>On the one hand, Ember is deeply committed to SemVer—more than nearly any other tool we’re aware of.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> As a result, the Ember community expects tooling and libraries to have similar commitments to backwards compatibility, stability, and clear deprecation warnings and migration paths when there are breaking changes.</p>
<p>On the other hand, the TypeScript team <a href="https://github.com/Microsoft/TypeScript/issues/14116">wholly disavows the validity of SemVer for a compiler in particular</a>: They take the view that <em>any</em> change to a compiler is effectively a breaking change, since even a bug fix will certainly cause <em>someone’s</em> code to stop compiling. (See especially these two comments: <a href="https://github.com/Microsoft/TypeScript/issues/14116#issuecomment-280592571">1</a>, <a href="https://github.com/Microsoft/TypeScript/issues/14116#issuecomment-292581018">2</a>.) As a team, we understand the TypeScript team’s position, but broadly land on Ember’s side in this discussion—no surprise there!<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>Although we need to clearly articulate for our users the reasons for churn when churn does exist, reconciling these two opposed paradigms is not merely a matter of documentation. We also need (more) tooling to bridge the gap between them. A few examples of the kinds of things we need to address:</p>
<ul>
<li><p>We need to be able to map type definitions to specific versions of Ember and TypeScript. For example, our types today do not have any way to distinguish between how computed properties resolve (and therefore can be accessed) in Ember <3.1 and Ember ≥3.1 for the traditional <code>EmberObject</code> model—that is, do you need to do <code>this.get('someProperty')</code> or can you simply do <code>this.someProperty</code>? We’d like to be able to offer that kind of granularity.</p></li>
<li><p>We need to guarantee that the Ember sub-packages (<code>@ember/object</code>, <code>@ember/array</code>, etc.) specify their own dependencies on each other correctly. In our current flow and structure with DefinitelyTyped, these occasionally get out of sync, with changes in the packages getting published in an order that makes it difficult for them to resolve each <em>other</em> correctly.</p></li>
<li><p>We need to help users avoid the problem of having multiple versions of the type definitions installed as a result of yarn’s and npm’s strategies for preventing unwanted transitive dependency changes—strategies which are basically correct for everything <em>except</em> type definitions, which require exactly the <em>opposite</em>.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> Current strategies (mostly involving Yarn’s <code>resolutions</code> field in <code>package.json</code>) <em>work</em> but are fragile, error-prone, and difficult for end users to automate.</p></li>
<li><p>Perhaps most importantly, we need to be able to insulate end users from changes in TypeScript that break the existing type definitions. Minimally, this probably means specifying the versions of TypeScript with which they’re compatible. TS already provides some tooling for this—but we have not yet started using that tooling, and we may need some layer on top of it. Our end goal will be to make it possible for users to reliably <em>know</em> when they can safely upgrade between TypeScript versions and what version of the type definitions they will need.</p></li>
</ul>
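<p>To illustrate the first bullet above, here is a plain-TypeScript stand-in (emphatically <em>not</em> the real Ember type definitions) showing the two access styles the types would need to distinguish between Ember versions:</p>

```typescript
// Simplified stand-in for EmberObject, not the real definitions.
// Ember < 3.1 requires this.get('someProperty'); Ember >= 3.1 also
// allows plain this.someProperty access for such properties.

class EmberObjectLike {
  someProperty = 'hello';

  // Typed get(): the return type is derived from the key passed in,
  // which is the kind of precision the real definitions aim for.
  get<K extends keyof this>(key: K): this[K] {
    return this[key];
  }
}

const obj = new EmberObjectLike();
console.log(obj.get('someProperty')); // 'hello' (pre-3.1 style)
console.log(obj.someProperty); // 'hello' (3.1+ style)
```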
<p>We batted around a number of ideas as a group for how to address those issues, and we <em>think</em> we have ideas for how to solve all of them. You can expect to see a Typed Ember Roadmap <abbr title='Request for Comments'>RFC</abbr> forthcoming in which we synthesize those thoughts into a coherent plan for solving these pain points, pending further discussion and an idea of when and how we can allocate our own time toward those efforts.</p>
</section>
<section id="typed-templates" class="level3">
<h3>Typed Templates</h3>
<p>The other major pain point for TypeScript adopters in Ember has been that no type-checking occurs in templates. This means that a substantial part of all apps and many addons is completely outside the TypeScript world and the benefits it brings—type checks around invocation, autocomplete, refactoring, etc. The same is true of everything except JSX/TSX, it turns out. People have written <em>workarounds</em> for Vue and Angular, but they are just that: workarounds, rather than first-class citizens of TypeScript the way TSX is.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> We know how to implement that kind of workaround ourselves—Dan implemented a basic prototype last year, in fact!—and we plan to work out the remaining details and do so… and more!</p>
<p>“Remaining details” is an important part of that, though.</p>
<ul>
<li>We want to make sure that we build that in such a way that it can be generalized to other kinds of files (e.g. <abbr title='cascading style sheets'>CSS</abbr> for use with <abbr title='cascading style sheets'>CSS</abbr> Modules).</li>
<li>We will need to work out what the story will be for integrating with editor tooling.</li>
<li>We need to make sure that any solution we implement will integrate nicely with the recently proposed <a href="https://github.com/emberjs/rfcs/pull/454">template imports and single-file components primitives</a>.</li>
<li>We need to identify what typed templates will and won’t be able to type-check effectively, and document those constraints.</li>
<li>Last but not least, we need to make sure the implementation will be relatively straightforward to rework if TypeScript eventually exposes a first-class hook for us to provide it this kind of information.</li>
</ul>
<p>These are obviously not trivial problems to solve. More details will be in the aforementioned roadmap <abbr title='Request for Comments'>RFC</abbr> when it appears!</p>
</section>
<section id="improving-documentation" class="level3">
<h3>Improving Documentation</h3>
<p>We also need to substantially expand our documentation in general. We made a decision not to ship ember-cli-typescript 2.0 without having a proper documentation site in place,<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a> and we finally got that foundation in place last week, with an <a href="https://ember-learn.github.io/ember-cli-addon-docs/">ember-cli-addon-docs</a> setup. We converted the existing (far-too-long) README, updated the pieces which were out of date, and added notes for upgrading from ember-cli-typescript 1.x. However, the documentation as it stands is a <em>starting</em> point. We need to expand it to cover (at least):</p>
<ul>
<li>successful strategies both for starting new projects and for migrating existing projects</li>
<li>working with types effectively</li>
<li>writing type definitions effectively</li>
<li>managing your public API as a library/addon author</li>
<li>standard troubleshooting and debugging techniques</li>
<li>explanations for the mechanics of some of the unusual workarounds we have implemented</li>
</ul>
<p>And no doubt that’s just the tip of the iceberg. If that sounds like a lot, that’s because it is! (If it sounds like a lot of that will be useful to the broader TypeScript ecosystem, that point isn’t lost on me, either.)</p>
</section>
</section>
<section id="other-concerns-on-our-radar" class="level2">
<h2>Other Concerns on our Radar</h2>
<p>There are a number of other concerns on our radar which we didn’t explicitly talk about at our impromptu meetup but which we have discussed at other times over the last few months:</p>
<ul>
<li><p><b>Performance:</b> we have occasionally seen <em>massive</em> regressions in the performance of TypeScript against our apps and types, and we’d like to prevent that happening in the future. At the moment, we don’t even have good benchmarks to point the TypeScript team to—we’ll need to work out a good strategy here.</p></li>
<li><p><b>Education:</b> as members of the Ember community get excited about adopting TypeScript and start adopting it, we want to help them be successful—avoiding common pitfalls, making libraries/addons easy for others to use safely, and so on. <em>In particular, if you are converting an addon to TypeScript or adding type definitions for it, <strong>please involve us</strong> so we can help you stay on the happy path and author your types in a way that is stable and usable for the rest of the community!</em></p></li>
<li><p><b>Providing a type-safe story for Ember Concurrency:</b> currently, the various workarounds available for using Ember Concurrency with TypeScript require you to throw away type safety in several places, because TypeScript does not understand the type-level transformation applied by decorators and cannot resolve the types correctly in the traditional <code>.extend</code> block. We’ve been working on a strategy to solve this with <a href="https://github.com/buschtoens">Jan Buschtöns (@buschtoens)</a>, involving a bit of a clever hack with a Babel transform that none of us love, but which <em>will</em> get the job done.</p></li>
<li><p><b>Supporting Ember core’s adoption of TypeScript:</b> much as with the community in general, we want to help make sure that as core Ember libraries adopt TypeScript, they’re able to do so successfully—both in terms of gaining utility to their own projects, and also so that someday we can see…</p></li>
<li><p><b>Ember shipping its own types:</b> we would love to get to a point where Ember itself ships its own types natively. However, we all see many of the other points in this document as effectively being prerequisites for that to happen successfully—especially the issues around stability and semantic versioning.</p></li>
</ul>
</section>
<section id="onward" class="level2">
<h2>Onward!</h2>
<p>In many ways, the ember-cli-typescript 1.x era was the <abbr title='minimum viable product'>MVP</abbr> era for Ember-with-TypeScript. We made it <em>possible</em> for teams to successfully adopt TypeScript in their Ember apps and addons, even in large codebases. Early adopters in this era have enjoyed many upsides—but have also had to eat higher costs as we tried to stabilize the foundational tooling and types for the ecosystem. In the 2.x era, we aim to eliminate that churn and make the experience best-in-class, in ways that benefit the rest of the Ember ecosystem whether or not users are adopting TypeScript.</p>
<p>Someday, we’d love for TypeScript support to be so good everywhere in Ember that our team is out of a job. There’s a long way to go to get there, though—we could use your help! Come find us <a href="https://github.com/typed-ember/ember-cli-typescript">on GitHub</a> and in <strong>#e-typescript</strong> on <a href="https://discordapp.com/invite/zT3asNS">the Ember Discord</a> and help us make this happen!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>We also want to mention—and thank!—Derek Wickern, who did a heroic amount of work to write types for Ember in 2017. Even though he is no longer working on Ember projects, he kindly still contributes, especially by answering questions in Discord now and again.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>The only comparable ecosystems I’m aware of are the Rust and Elm programming languages. Rust has an official <abbr title='Request for Comments'>RFC</abbr> defining its commitment to and definition of SemVer for the language and packages within the ecosystem. Elm has a similar commitment for packages—enforcing its definition with integration between the package manager and the compiler—while still being in a 0.x mode for the language.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>I may write more on this from my own perspective in the future; I leave that aside here as this post aims to accurately present the team’s conversation and perspective, not just mine!<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>I added <a href="https://typed-ember.github.io/ember-cli-typescript/docs/troubleshooting/conflicting-types">documentation</a> with details on the current state of affairs around this specific issue to the docs site!<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>The same is true for Vue and Angular; we <em>hope</em> that the things we do here will help those communities as well.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>I have long been of the mind that an open source project is simply not done until it has at least a reasonable baseline of documentation, and was delighted to learn that the rest of the team shares that mentality.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Announcing ember-cli-typescript v2.02019-03-13T16:00:00-04:002019-03-13T16:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2019-03-13:/2019/announcing-ember-cli-typescript-v20.htmlThe unofficial Ember TypeScript team just published v2 of ember-cli-typescript, with nicer build errors, a docs site, and much faster builds which play more nicely with the ecosystem by way of Babel 7.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> People interested in <a href="https://emberjs.com">Ember</a> and <a href="http://www.typescriptlang.org">TypeScript</a>.</i></p>
<p>I’m pleased to announce that the unofficial Ember TypeScript team (Dan Freeman, Derek Wickern, James Davis, Mike North, and I) have just published ember-cli-typescript v2.0.0!</p>
<p>Check out the <a href="https://typed-ember.github.io/ember-cli-typescript/versions/master/docs/upgrade-notes">upgrade instructions</a> to get started with the new version!</p>
<section id="whats-new" class="level2">
<h2>What’s new?</h2>
<p>There are just two changes, but they’re a big deal!</p>
<ol type="1">
<li><p><strong>The addon now uses Babel 7’s TypeScript support</strong> to actually build your TypeScript, while continuing to use TypeScript itself to type-check your app. We added some fancy new build errors to go with that, too! This means your builds will be <em>much faster</em> and that tools like <a href="https://github.com/ef4/ember-auto-import">ember-auto-import</a> will Just Work™ with TypeScript apps and addons now. (There are a few caveats that come with this, so <em>please</em> see <a href="https://github.com/typed-ember/ember-cli-typescript/releases/tag/v2.0.0">the release notes</a>!)</p></li>
<li><p><strong>We added a documentation site!</strong> You can check out the documentation at <a href="https://typed-ember.github.io/ember-cli-typescript/versions/master/">typed-ember.github.io/ember-cli-typescript</a>. Previously, the README was over 6,000 words long… and growing. Now, the README just has the basic stuff you need to get started, and documentation lives in… the docs! Thanks to the <a href="https://www.github.com/ember-learn/ember-cli-addon-docs">ember-cli-addon-docs</a> crew for making it so easy to build such a nice documentation site!</p></li>
</ol>
<p>We covered the Babel 7 changes in detail two earlier blog posts:</p>
<ul>
<li><a href="https://v4.chriskrycho.com/2018/ember-cli-typescript-v2-beta.html">ember-cli-typescript v2 beta</a> (me)</li>
<li><a href="https://medium.com/@mikenorth/ember-cli-typescript-v2-release-candidate-3d1f72876ea4">ember-cli-typescript v2 release candidate</a> (Mike North)</li>
</ul>
<p>Happily, nothing has changed in the implementation since the beta or <abbr>RC</abbr> releases except fixing some bugs!</p>
<p>For my part, thanks to Dan Freeman, who did the lion’s share of the work to get us working with Babel 7!</p>
<p>On behalf of the whole unofficial Ember TypeScript team, thanks to everyone who tested this before we released, identified gaps in the docs (not to mention catching errors and typos there!), and provided feedback along the way. There’s more good stuff coming in the months ahead!</p>
</section>
My Current Setup2019-01-19T09:00:00-05:002019-01-19T09:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2019-01-19:/2019/my-current-setup.htmlA quick overview of the apps I currently use, and a bit of commentary on why I use them.
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</b> people interested in productivity and work flow and app choice.</i></p>
<p>A friend was asking me the other day what my workflow looks like, and while I don’t generally spend a <em>lot</em> of time writing about my working setup, I figured I’d throw up a quick post representing my <em>current</em> list so that if people ask I can point them here.</p>
<p>Two important notes on this, though: First, this is <em>just what I use</em>. I make no particular claim that it’s <em>the best</em>. There are lots of things here that are very specific to me and the way I work, and even to my specific mental quirks. Second, it’s far more important to care about the work you do than about the tools you use to get it done. The tools matter: some people say they don’t, and I don’t think that’s right at all. But they don’t matter as much as they might <em>feel</em> like they do, and tool fetishism is real. I think the happy point is finding tools which are good enough and fit comfortably enough into your workflow that they don’t <em>distract</em> you, and then getting back to the work!</p>
<hr />
<section id="software-development" class="level2">
<h2>Software development</h2>
<p>For software development, I currently use:</p>
<ul>
<li><p><a href="https://code.visualstudio.com">VS Code</a> as my primary text editor; I also occasionally use Sublime Text or Vim for specific tasks where it makes sense. Code is incredibly fast, impressively low in memory usage given it’s an Electron app, and remarkably customizable. My only outstanding complaint is that there’s no way to actually make it look like a native macOS app. Happily, I <em>can</em> make it <em>behave</em> like a native macOS app in all the ways that matter to me. Its support for both TypeScript and Rust—the two languages I spend the most time with right now—is <em>great</em>. You’re welcome to see <a href="https://gist.github.com/chriskrycho/f39442dd78ad6d150bcaaadd9fedf9f4">my full configuration</a>; I keep it updated at that location via the <a href="https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync">Settings Sync</a> plugin.</p></li>
<li><p>macOS’ built-in Terminal app, just using its tabs for individual tasks. I have spent a lot of time with alternatives, including <a href="https://iterm2.com">iTerm 2</a> and <a href="https://sw.kovidgoyal.net/kitty/">kitty</a>, and I’m comfortable in <a href="https://github.com/tmux/tmux/wiki">tmux</a> – but at the end of the day I just like the way Terminal <em>feels</em> the best. It’s fast, light, and built-in macOS things all just work correctly out of the box.</p></li>
<li><p><a href="http://git-fork.com">Fork</a> for a <a href="https://git-scm.com">git</a> <abbr>GUI</abbr>. I’ve also used <a href="https://www.git-tower.com">Tower</a> in the past, but I’ve found Fork to be lighter, faster, and a better fit for the way I think and how I expect things to behave (e.g. for interactive rebasing). I do a ton of work in git on the command line as well.</p></li>
<li><p>A mix, varying day by day, of Safari Tech Preview, Firefox, and Chrome for my test browsers. I substantially prefer Safari in nearly every way, but Chrome’s dev tools remain best in class for most things—with the exception of Grid, where Firefox is still the undisputed champion. (When I can, I do most of my JavaScript/TypeScript debugging in VS Code, though: it’s a much better experience.)</p></li>
<li><p><a href="https://kapeli.com/dash">Dash</a> for offline documentation access. There are other options out there, including some which are free, but Dash remains the best in my experience, and if nothing else it’s deeply integrated into my workflow and muscle memory.</p></li>
</ul>
<p>That’s basically the whole list at the moment. I keep my software development workflow fairly light.</p>
</section>
<section id="research-and-writing" class="level2">
<h2>Research and writing</h2>
<p>For research and writing, I use:</p>
<ul>
<li><p><a href="https://ia.net/writer">iA Writer</a> for all my blogging and longer-form writing. This is a recent change; I had been a heavy user of <a href="http://www.ulysses.app">Ulysses</a> for the last few years. However, while Ulysses was the best option I had found for how I work, it’s never been quite <em>right</em> for me. For one, I’ve always found Ulysses’ “feel” to be a bit heavy: it’s <em>just</em> noticeably slower for me than most of the other writing apps, and I’m hypersensitive to input latency. I’ve also always disliked one of the things that makes many people love it: the way it abstracts Markdown into its “smart text objects.” iA Writer gives me the ability to manage a library in much the same way that Ulysses did, but in a way that feels more truly native on both macOS and iOS; it just uses normal Markdown (no fancy text objects); and it’s <em>fast</em>.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> That combo makes it a better fit for me.</p></li>
<li><p><a href="https://bear.app">Bear</a> for my note-taking. I’ve talked about this <a href="https://v4.chriskrycho.com/2018/starting-to-build-a-zettelkasten.html">a fair bit</a> here <a href="https://v4.chriskrycho.com/2018/zettelkasten-update-all-in-on-bear.html">before</a>, so I won’t belabor the details. It’s an excellent, lightweight note-taking app. The main way it falls down for me is that it does not really handle nested block content in Markdown documents (e.g. it won’t correctly display a block quote inside another block quote, or a code sample inside a list, etc.). I’d also love it if it stored its library in a way that made it easier for me to interact with from other apps, i.e. as plain text on the local drive. (You can export content from it easily enough, which is great, but it’s not quite as seamless as I’d like.) Those nits are just that, though: nits. I’m very happy with Bear for my note-taking at this point.</p></li>
<li><p><a href="https://www.goldenhillsoftware.com/unread/">Unread</a> on iOS for reading RSS, with <a href="https://feedbin.com">Feedbin</a> as my long-preferred RSS feed service backing it. (I’m a fan of paying for these kinds of services, so I’m happy to lay out the $5/month for Feedbin. Free alternatives exist, but I don’t love ad-driven models and avoid them where I can.)</p></li>
</ul>
<p>You’ll note that there are no apps for reading <em>longer</em> material on that list. I could mention Apple Books as the place I read most of my ebooks, but that’s more a function of the alternatives not being meaningfully <em>better</em> in the ways I care about.</p>
</section>
<section id="productivity" class="level2">
<h2>“Productivity”</h2>
<p>For “productivity” concerns, I use:</p>
<ul>
<li><p><a href="https://tadamapp.com">Tadam</a> for a Pomodoro timer—because it’s precisely as annoyingly obtrusive as I need it to be, which is <em>very</em>!—and <a href="https://bear.app">Bear</a> for <a href="https://v4.chriskrycho.com/2018/just-write-down-what-you-do.html">tracking</a> what I do each Pomodoro cycle, each day, each week, each month, and each year. That habit remains very helpful for me.</p></li>
<li><p><a href="https://culturedcode.com">Things</a> for a to-do app. Things hits the sweet spot for me in terms of its ability to manage everything from simple tasks and recurring to-do items around the house up to complex multi-month-long projects. I particularly like its distinction between when I want to be <em>reminded</em> about something and when that task is <em>due</em>. I’ve used <a href="https://www.omnigroup.com/omnifocus/">OmniFocus</a> in the past, but it never quite <em>fit</em> me; Things does. They’re very comparable in terms of features; it’s just that the way Things approaches those features works better for me.</p></li>
<li><p><a href="https://sparkmailapp.com">Spark</a> as my email client, mostly for its snooze feature, which I use when I know I need to see an email <em>as an email</em> sometime later, and its ability to integrate nicely with other apps. I have it connect to Things, and emails that require an <em>action</em> get put there instead of snoozed. The combination lets me keep my inbox at Zero by the end of every day. And its lovely “Inbox Zero” images are a really nice touch:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/inbox-zero-spark.png" alt="Inbox Zero in Spark" /><figcaption>Inbox Zero in Spark</figcaption>
</figure></li>
</ul>
</section>
<section id="podcasting" class="level2">
<h2>Podcasting</h2>
<p>For podcast production, I use:</p>
<ul>
<li><p><a href="https://www.izotope.com/en/products/repair-and-edit/rx.html">iZotope RX</a> (I’m using v6 Standard) for audio cleanup, as I <a href="https://v4.chriskrycho.com/2018/izotope-rx-is-amazing.html">recently wrote about</a>.</p></li>
<li><p><a href="https://www.apple.com/logic-pro/">Logic Pro X</a> for the actual editing work most of the time, occasionally using <a href="https://www.wooji-juice.com/products/ferrite/">Ferrite</a>. Logic is overkill for what I do, but I’m <em>fast</em> with it at this point, so I can’t see moving anytime soon, and there’s nothing else out there that I think is substantially <em>better</em> (though there are other apps that are comparably good).</p></li>
<li><p><a href="https://www.overcast.fm/forecast">Forecast</a> for encoding and including chapter breaks.</p></li>
<li><p><a href="https://reinventedsoftware.com/feeder/">Feeder</a> for generating the <abbr>RSS</abbr> feeds, since all my podcasts are currently built in ways that don’t support <abbr>RSS</abbr> feed generation: <a href="http://docs.getpelican.com/en/stable/">Pelican</a> for <a href="https://winningslowly.org">Winning Slowly</a> and <a href="https://massaffection.com">Mass Affection</a>,<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> and <a href="https://doc.rust-lang.org/rustdoc">rustdoc</a> for <a href="https://newrustacean.com">New Rustacean</a>. (If I ever manage to finish building my own site generator, it’ll have out-of-the-box support for custom RSS feed templates, so that I can have this stuff generated automatically for me!)</p></li>
<li><p><a href="https://www.netlify.com">Netlify</a> for serving the actual static site content (i.e. HTML, CSS, and JS), and <a href="https://www.backblaze.com/b2">Backblaze B2</a> for hosting the audio.</p></li>
<li><p><a href="https://panic.com/transmit/">Transmit</a> for actually uploading the audio files.</p></li>
</ul>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>iA Writer seems to get <em>randomly</em> slow at times for reasons I haven’t yet identified, but at least for now, I’m taking that tradeoff over Ulysses’ habit of being a bit slow <em>all the time</em>. As I’ve often noted before: <a href="https://v4.chriskrycho.com/2016/ulysses-byword-and-just-right.html" title="Ulysses, Byword, and “Just Right”">my ideal writing app doesn’t exist</a>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Mass Affection isn’t dead! I promise!<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
<hr />
<p><b>JavaScript is C</b> (2018-12-20, by Chris Krycho)</p>
<p><i>JavaScript and C both make you maintain your invariants the most painful way possible. Happily, there are better alternatives these days.</i></p>
<p><i><b><a href="https://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience:</a></b> software developers, especially those interested in modern, typed programming languages.</i></p>
<p>Earlier this week, I was working on a problem in the Ember app where I spend most of my day job, and realized: <i>JavaScript is the same as C.</i></p>
<p>That probably doesn’t make any sense, so let’s back up. The scenario I was dealing with was one where there was a bit of an invariant around a piece of data that I <em>had</em> to maintain for the application not to blow up in horrible ways, but had no good way to enforce with the language’s tools. <em>This</em> action on <em>that</em> piece of data was only valid if <em>this</em> condition held true… but even with the fully-type-checked TypeScript application we now have, the action (because of the entire application’s architecture and indeed the entire way that Ember apps are wired together!) could not be statically verified to be safe.</p>
<p>As I considered the best way to handle this—I ended up having the function involved in the action just throw an error if the invariant wasn’t properly maintained—I was reminded of the years I spent writing C. In C, it’s quite <em>possible</em> to write safe code around memory management. I managed it fine in the applications I worked on, by carefully documenting the invariants a given function required to be safe. <em>This</em> piece of data is allocated by <em>that</em> function and then released to <em>the caller</em> to manage. Even with every bit of static analysis I threw at those kinds of things, it was possible to get it wrong.</p>
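<p>A minimal sketch of what that kind of runtime invariant check (throwing when the condition fails) can look like in TypeScript. All the names here (<code>Order</code>, <code>addItem</code>) are hypothetical, for illustration only; the details of the actual app aren’t shown in this post.</p>

```typescript
// Hypothetical example: adding an item is only valid while an order is
// still a draft -- a rule this type signature alone cannot express.
interface Order {
  state: "draft" | "submitted";
  items: string[];
}

function addItem(order: Order, item: string): void {
  // The invariant cannot be statically verified here, so enforce it at
  // runtime by throwing if it is violated.
  if (order.state !== "draft") {
    throw new Error("invariant violated: items can only be added to a draft order");
  }
  order.items.push(item);
}

const order: Order = { state: "draft", items: [] };
addItem(order, "widget"); // fine: the invariant holds
order.state = "submitted";
// addItem(order, "gadget"); // would now throw at runtime, not compile time
```

<p>The check happens, but only when the code runs, and only because a human remembered to write it.</p>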
<p>The exact same kinds of problems I had in C, I have in JavaScript or even TypeScript today. Experientially, JavaScript<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> <em>is</em> C, as far as having to deal with these kinds of invariants goes.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>Enter <a href="https://www.rust-lang.org">Rust</a>: the kinds of management of memory that I was always having to keep track of in my head (or, better, with detailed documentation comments along the way—but with the same problem that it was easy to get wrong), I could now have statically guaranteed by a compiler. Given that I spent the first six years of my career managing and carefully tracking all of that by hand, it’s no wonder I <a href="https://newrustacean.com">fell in love</a> with Rust. I could have the <em>compiler</em> guarantee the invariants I needed around memory management.</p>
<p>And it turns out, this same dynamic exists in the world of front-end web development. People sometimes wonder why (and colleagues are often bemused that) I get so excited by <a href="https://elm-lang.org">Elm</a>. But the step from JavaScript (or even TypeScript) to Elm is just like the step from C to Rust. It’s a real and profound shift in what kinds of things you can <em>know for certain</em> about your program.</p>
<p>In a C application, try as hard as I may, at the end of the day I am always on my own, making sure the invariants I need for memory safety hold. In Rust, I can be 100% confident that I will not have memory-unsafe code. Not 98%-and-I’d-better-check-those-last-2%-really-closely. One hundred percent. That’s a game-changer.</p>
<p>In a JavaScript or TypeScript application, try as hard as I may, at the end of the day I am always on my own, making sure the invariants I need for state management hold. In Elm, I can be 100% confident that code which depends on a given invariant about a piece of state will not break the way it could in this TypeScript application, because I can’t even apply the relevant transformations if the invariant doesn’t hold! That’s a game-changer.</p>
<p>Neither of those is a guarantee I won’t have bugs. (A compiler that could guarantee that would have to be sentient and far smarter than any human!) Neither of them means I can’t intentionally do stupid things that violate invariants in ways that get the program into broken states from the user’s point of view. But both of them give me the tools and the confidence that I can absolutely guarantee that certain, very important kinds of invariants hold. We’re not looking for an absence of all bugs or a system which can prevent us from making any kind of mistake. We’re looking to be able to spend our time on the things that matter, <em>not</em> on minutiae the computer can check for us.</p>
<p>So: I’m not going back to C, and I’m ready to move past JavaScript and TypeScript.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>This goes for plenty of languages that aren’t JavaScript, too. It’s equally true of C<sup>♯</sup> or Python.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Obviously there are some kinds of things you don’t have to worry about in JS that you do in C: memory management, for one. The point is that the manual-verification-of-every-invariant-you-care-about is the same.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
<hr />
<p><b>Internal and External Parameter Names in JavaScript and TypeScript</b> (2018-11-26, by Chris Krycho)</p>
<p><i>You can use destructuring to make nice JavaScript/TypeScript APIs both for users and implementers of a given function: a handy design pattern stolen from Objective-C and Swift.</i></p>
<p>Earlier this month I was working on a fairly thorny problem for work—taking a total value and splitting it into numbers which summed up to it, possibly with a rule about what the split-up values had to be a multiple of. E.g. you want to order 50 Buffalo wings, and you have to choose the flavors for the wings in increments of 5.</p>
<p>I spent a lot of time thinking about the implementation of the algorithm for that, but I also spent a lot of time thinking about what its <abbr>API</abbr> should look like. Here, it’s the latter I want to dive into (the former is a little tricky but not all that interesting).</p>
<p>I started out with just simple parameters to the function:</p>
<pre class="ts"><code>function splitNicely(
total: number, components: number, factor?: number
): number[] {
// the implementation
}</code></pre>
<p>This is nice enough to use internally. But calling it is pretty confusing:</p>
<pre class="ts"><code>const result = splitNicely(50, 5, 2);</code></pre>
<p>Which number is what value here? Who knows!</p>
<p>So then I just exposed <em>all</em> of the items as an options hash:</p>
<pre class="ts"><code>interface SplitArgs {
total: number;
components: number;
factor?: number;
}
function splitNicely(
{ total, components, factor }: SplitArgs
): number[] {
// the implementation
}</code></pre>
<p>This was a lot nicer to call:</p>
<pre class="ts"><code>splitNicely({ total: 50, components: 5, factor: 2 });</code></pre>
<p>However, it was a bit verbose, and I realized that it’s fairly obvious that the first argument should be the value we’re splitting up, so I simplified a bit:</p>
<pre class="ts"><code>interface SplitArgs {
components: number;
factor?: number;
}
function splitNicely(
total: number,
{ components, factor }: SplitArgs
): number[] {
// the implementation
}
</code></pre>
<p>Now calling it read <em>relatively</em> well:</p>
<pre class="ts"><code>splitNicely(10, { components: 5, factor: 2 });</code></pre>
<p>However, the names were not my favorite for invoking the function. Really, what I wanted was for the function invocation to describe what I was doing, when reading it from the outside—while having these useful names for operating on the implementation internally.</p>
<p>At this point, I remembered two things:</p>
<ol type="1">
<li>Swift and Objective-C have the nice notion of internal and external parameter names.</li>
<li>JavaScript (and thus TypeScript) let you rename values in “destructuring assignment.”</li>
</ol>
<p>The second one lets us get the same basic effect in JavaScript or TypeScript as we get in Swift, if we’re using an options argument! Here’s how destructuring works in the function definition. Let’s see it first with just JavaScript. The object passed as a parameter has a key named <code>of</code>, which has a string value—but <code>of</code> is a bad name inside the function; there, we can just call it <code>str</code> and it’s perfectly clear.</p>
<pre class="js"><code>function length({ of: str }) {
return str.length;
}
console.log(length({ of: "waffles" })); // 7</code></pre>
<p>That’s the equivalent of a function that looks like this:</p>
<pre class="js"><code>function length({ of }) {
const str = of;
return str.length;
}</code></pre>
<p>Here’s the same code but in TypeScript:</p>
<pre class="ts"><code>function length({ of: str }: { of: string }): number {
return str.length;
}
console.log(length({ of: "waffles" })); // 7</code></pre>
<p>This is a bit more annoying to write out in TypeScript, because we need to supply the type of the whole object after the object we’ve destructured, but the effect is the same once we get past the declaration. It’s also pretty silly to do this kind of thing at all in this example—but it becomes much more useful in more complicated functions, like the one that motivated me to explore this in the first place.</p>
<p>Recall that I <em>liked</em> having <code>components</code> and <code>factor</code> as the internal names. They weren’t great for <em>calling</em> the function, though. After some consideration, I decided invoking the function should look like this:</p>
<pre class="ts"><code>splitNicely(10, { into: 5, byMultiplesOf: 2 });</code></pre>
<p>By using the destructuring technique, we can get exactly this, while keeping <code>components</code> and <code>factor</code> internally:</p>
<pre class="ts"><code>interface SplitArgs {
into: number;
byMultiplesOf?: number;
}
function splitNicely(
total: number,
{ into: components, byMultiplesOf: factor }: SplitArgs
): number[] {
// the implementation
}</code></pre>
<p>This is a great pattern to put in your toolbox. You can of course overdo it with this, as with any technique, but it’s a nice tool for these kinds of cases where you really want to make an expressive <abbr>API</abbr> for both callers and the internal implementation of a function.</p>
<hr />
<p><b>ember-cli-typescript v2 beta</b> (2018-11-19, by Chris Krycho)</p>
<p><i>We’ve released a beta of ember-cli-typescript v2, which will make your builds faster and more reliable, and which will give you better error output with type errors. Please come test it in your apps and addons!</i></p>
<p>A few weeks ago, the Typed Ember team published the first beta releases of <code>ember-cli-typescript</code> v2 (currently at beta 3). The new version brings much faster and more reliable builds, with nicer error reporting to boot.</p>
<p>In this post:</p>
<ul>
<li><a href="#testing-the-upgrade">Testing the Upgrade</a></li>
<li><a href="#under-the-hood">Under the Hood</a></li>
</ul>
<section id="testing-the-upgrade" class="level2">
<h2>Testing the Upgrade</h2>
<p>I emphasize: <strong><em>this is a beta release.</em></strong> We have tested it in two large apps (including the one I work on every day) and a number of addons, and it seems to work correctly, but that does <em>not</em> mean it is ready for production. We need your help to <em>get</em> it ready for production. Accordingly, please <em>do</em> test it in your apps, and please <em>do not</em> run it in production! (If you do, and something breaks, we will not feel bad for you! You have been warned!)</p>
<p>This upgrade <em>requires</em> that you be using Babel 7. As such, we suggest you start by doing that upgrade:</p>
<pre class="bash"><code>ember install ember-cli-babel@7</code></pre>
<p>There may be a couple of things that behave differently in your app under Babel 7, so resolve those first. Then, and only then, upgrade <code>ember-cli-typescript</code>.</p>
<p>To test the <code>ember-cli-typescript</code> v2 beta in an app:</p>
<pre class="bash"><code>ember install ember-cli-typescript@beta</code></pre>
<p>To test it in an addon:</p>
<pre class="bash"><code>ember install ember-cli-typescript@beta --save</code></pre>
<p>(This will add the addon to your addon’s runtime dependencies, not just its development-time dependencies, which is necessary because of the changes we made to how the build pipeline works. See <a href="#under-the-hood">below</a> for details.)</p>
<p>Then run your test suite and poke around! Note that on your first run, there are <em>almost certainly</em> going to be some things that break. In both of the large apps this has been tested in, there were a number of small changes we had to make to get everything working again. (We’re marking this as a breaking <a href="http://www.semver.org/spec/v2.0.0.html">semver</a> change for a reason!)</p>
<p>In general, most apps will only have a few places where they need to make changes, but because certain kinds of imports are among the things affected, you may see your test suite explode. Don’t panic! You will probably make a couple of changes and see everything go back to working again.</p>
<p>Here are the changes you need to know about:</p>
<ul>
<li><p>We now build the application using Babel 7’s TypeScript plugin. This has a few important limitations—some of them bugs (linked below); others are conscious decisions on the part of Babel. The changes:</p>
<ul>
<li><p><code>const enum</code> types are unsupported. You should switch to constants or regular <code>enum</code> types.</p></li>
<li><p>trailing commas after rest function parameters (<code>function foo(...bar: string[],) {}</code>) are disallowed by the ECMAScript spec, so Babel also disallows them.</p></li>
<li><p>re-exports of types have to be disambiguated to be <em>types</em>, rather than values. Neither of these will work:</p>
<pre class="ts"><code>export { FooType } from 'foo';
import { FooType } from 'foo';
export { FooType };</code></pre>
<p>In both cases, Babel attempts to emit a <em>value</em> export, not just a <em>type</em> export, and fails because there is no actual value to emit. You can do this instead as a workaround:</p>
<pre class="ts"><code>import * as Foo from 'foo';
export type FooType = Foo.FooType;</code></pre></li>
</ul>
<p>Other bugs you should be aware of:</p>
<ul>
<li><p>If an enum has a member with the same name as an imported type (<a href="https://github.com/babel/babel/issues/8881">babel/babel#8881</a>), it will fail to compile.</p></li>
<li><p><code>this</code> types in ES5 getters and setters do not work (<a href="https://github.com/babel/babel/issues/8069">babel/babel#8069</a>).</p></li>
<li><p>Destructuring of parameters in function signatures currently do not work (<a href="https://github.com/babel/babel/issues/8099">babel/babel#8099</a>).</p></li>
</ul></li>
<li><p><code>ember-cli-typescript</code> must be in <code>dependencies</code> instead of <code>devDependencies</code> for addons, since we now hook into the normal Broccoli + Babel build pipeline instead of doing an end-run around it.</p></li>
<li><p>Addons can no longer use <code>.ts</code> in app, because an addon’s <code>app</code> directory gets merged with and uses the <em>host’s</em> (i.e. the other addon or app’s) preprocessors, and we cannot guarantee the host has TS support. Note that everything will continue to work for in-repo addons because of the app build works with the host’s (i.e. the app’s, not the addon’s) preprocessors.</p></li>
<li><p>Apps need to use <code>.js</code> for addon overrides in the <code>app</code> directory, since a different file extension would mean apps no longer consistently “win” over addon versions (a limitation of how Babel and app-tree merging interact). Note that this won’t be a problem with Module Unification apps.</p></li>
</ul>
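<p>For the first caveat, migrating from a <code>const enum</code> to a regular <code>enum</code> is usually mechanical. Here’s what it might look like with a hypothetical <code>Direction</code> enum (the tradeoff is that a regular <code>enum</code> emits a real object at runtime instead of inlining its values):</p>

```typescript
// Before (not supported by Babel's TypeScript plugin):
// const enum Direction { Up, Down }

// After: a regular enum, which compiles fine under Babel.
enum Direction {
  Up,
  Down,
}

function describe(d: Direction): string {
  return d === Direction.Up ? "up" : "down";
}

describe(Direction.Up); // "up"
```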
<p>That’s it. Again, please test it out in your app and <a href="https://github.com/typed-ember/ember-cli-typescript/issues/new/choose">report any issues</a> you find!</p>
</section>
<section id="under-the-hood" class="level2">
<h2>Under the Hood</h2>
<p>In the 1.x series of releases, we used TypeScript’s own build tooling, including its <code>--watch</code> setting, and then fed the results of that into Ember’s build pipeline. We made this work, but it was pretty fragile. Worse, it was <em>slow</em>, because we had to compile your code twice: once with TypeScript, and once with Babel in the normal Ember <abbr>CLI</abbr> pipeline.</p>
<p>In v2, we’re <a href="https://blogs.msdn.microsoft.com/typescript/2018/08/27/typescript-and-babel-7/">now able</a> to use Babel to do the actual <em>build</em> of your TypeScript code, while using the TypeScript compiler in parallel to type-check it. This meant we were able to throw away most of that fragile custom code wiring TypeScript and Ember <abbr>CLI</abbr>’s file-watching and build pipelines together, so it’s much less fragile. And of course it’s much <em>faster</em>. What’s more, because Babel 7 is itself substantially faster than Babel 6 was, build times see an even <em>larger</em> speedup.</p>
<p>While we were at it, we added some extra nice error reporting for your type errors:</p>
<figure>
<img src="https://user-images.githubusercontent.com/108688/47465007-19687d80-d7b9-11e8-8541-395ad82ceb67.gif" title="build errors" alt="An example of TypeScript build errors in an Ember app" /><figcaption>An example of TypeScript build errors in an Ember app</figcaption>
</figure>
<p>These changes are why the addon is now back to being part of the runtime dependencies for addons. In the 1.x series, we did a complicated dance to make sure you could use TypeScript in an addon without burdening consumers with the full size of each distinct version of the TypeScript compiler in use by addons they consumed. (For more on that, see <a href="https://v4.chriskrycho.com/2018/announcing-ember-cli-typescript-110.html#addon-development">my blog post on the 1.1 release</a>.) We still want to keep that commitment, and we do: TypeScript is only a dev dependency. <code>ember-cli-typescript</code>, however, is a full dependency for addons, because we’re just part of the regular Babel/Ember <abbr>CLI</abbr> pipeline now! We play exactly the same role that other Babel transforms in the Ember ecosystem do, including e.g. the <a href="https://github.com/ember-cli/babel-plugin-ember-modules-api-polyfill">modules <abbr>API</abbr> polyfill</a>.</p>
<p>In short, this update simplifies our internals <em>and</em> makes your experience as a developer better. That’s about as good an outcome as we could hope for. We as a team have been thinking through this for quite some time—ever since we learned that Babel 7 was <a href="https://github.com/facebook/create-react-app/pull/4837">adding TypeScript support</a>—but the actual implementation was almost all <a href="https://twitter.com/__dfreeman">Dan Freeman</a>’s excellent work, so say thank you to him if you get a chance!</p>
<p>We’re eager to get it to a production-ready state, so please test it!</p>
</section>
<hr />
<p><b>The Apple Magic Keyboard</b> (2018-11-11, by Chris Krycho)</p>
<p><i>What Cherry Blues are to some mechanical keyboard lovers, the Apple Magic Keyboard is to me.</i></p>
<p><i>Assumed audience: nerds who care about keyboards, most of whom will probably think I’m crazy for my views in this post.</i></p>
<p>I was sitting at my kitchen counter this evening, doing a mishmash of things—from writing a little Rust, to sending some text messages to my family, to leaving a note on a programming forum. The whole time, I’ve just had this quiet sense of deep pleasure about the typing. I was slightly confused about why I was enjoying the feel of the <em>typing</em> specifically… until I realized what exactly I was typing on.</p>
<p>I’m sitting in front of a 2015 MacBook Pro… but I’m not typing on that keyboard. I’m typing on the <a href="https://www.apple.com/shop/product/MLA22LL/A/magic-keyboard-us-english">Apple Magic Keyboard</a> instead, which I brought upstairs from its normal perch on my desk and started using <em>everywhere</em>.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>This keyboard is <em>perfect</em> for me. It’s not what other people love, I know—people who prefer “mechanical keyboards” find it far too shallow. But there is something about the feeling of typing on this particular keyboard that I simply love. For me, it’s a perfect bit of industrial design, because it simultaneously <i>looks good</i> and <i>works exactly the way I want it to</i>, right down to the feeling of every keystroke.</p>
<p>Mechanical keyboard lovers: you can keep your Cherry Blues. I’ll just stockpile Apple Magic Keyboards and be happy.</p>
<hr />
<p>P.S. Apple should totally just figure out how to put <em>exactly</em> this keyboard in their next-generation MacBook Pros. They won’t. But they should.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>What prompted it? I ordered a stand for it for all the times I’m not at my desk with its 5k monitor. And then just decided it was worth doing this way <em>all</em> the time.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Review: The Rust Programming Language2018-11-07T07:00:00-05:002018-11-07T07:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2018-11-07:/2018/review-the-rust-programming-language.htmlNo Starch's The Rust Programming Language is a genuinely *great* introduction to Rust, and a genuinely great programming book.
<p><i class=editorial>I intended to publish this review months ago, but the single hardest month of my career punched me in the face repeatedly and I just found myself entirely unable to write for all of September and most of October. Here it is at last, with apologies to No Starch for the delay!</i></p>
<hr />
<p>No Starch Press kindly provided me with a review copy of <a href="https://nostarch.com/Rust"><cite>The Rust Programming Language</cite></a>, by Steve Klabnik and Carol Nichols, with contributions from the Rust community. A <em>real-world, physical</em> review copy. And it’s magnificent.</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/trpl.jpg" alt="The Rust Programming Language" /><figcaption><cite>The Rust Programming Language</cite></figcaption>
</figure>
<p>I’ve read the vast majority of this book (as well as the previous edition) online over the last couple years as Steve and Carol have worked on it; it has been an invaluable resource for many a <a href="https://newrustacean.com">New Rustacean</a> episode. (Bonus: you can hear me talk with Carol about working on <cite>The Rust Programming Language</cite> in <a href="https://newrustacean.com/show_notes/interview/_3/index.html">my interview with her back in 2016</a>! And yes: writing the book took that long.) This is in some sense <em>the</em> authoritative book on Rust.</p>
<p>You might wonder why you’d pick up a physical copy of this book given that it is available online for free. There are a few reasons that come to mind:</p>
<ol type="1">
<li><p>The quality of the online book is in many ways a direct result of No Starch Press’ deep investment in the text. They’re not making any money from the online copy. They <em>do</em> recoup some of their costs when we buy ebook or physical copies. So that’s one good reason: a way of saying “thank you!” and investing in the continued existence of No Starch and projects like this.</p></li>
<li><p>I used the word <em>magnificent</em> above to describe this, and I mean it. This printing is a fabulous example of really excellent book design. I take typography and presentation seriously, and not a whit less for programming books than for copies of <cite>The Lord of the Rings</cite>. Everything in this printing is top-notch. Little details like the way that code listings are displayed—right down to the way some text is faded away to emphasize what’s <em>new</em> in a listing—make this one of the most readable programming texts I’ve ever seen.</p></li>
<li><p>As delightful and powerful as hypertext is, the physicality of a book is equally delightful and powerful, just in different ways. There is nothing quite like the tactile experience of flipping through a book. Nothing in digital text lets you forge connections to learning the way that scribbling notes in a margin does. And the sheer physicality of a volume this large gives you mental hooks to hang what you’re learning on: you can remember that it felt like <em>this</em> to have the book open to where you learned it.</p></li>
</ol>
<p>(That last point is an argument in favor of printed books in general. You can expect me to come back to that theme time and again in this space, even while I continue to value digital spaces like this one for what <em>they</em> uniquely do.)</p>
<p>No Starch sent me this copy to review with no expectation of a positive review—but even if they’d been paying me, they couldn’t make me say that this book is <em>great</em>. But great it is. If you have any interest in <a href="https://www.rust-lang.org/en-US/">Rust</a>, you should grab a copy!</p>
True Myth 2.22018-10-27T17:00:00-04:002018-10-27T17:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-10-27:/2018/true-myth-22.htmlTrue Myth 2.2 adds two `Maybe` helpers for safe object lookup and two `Result` helpers for handling exception-throwing code.
<p>I just released v2.2<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> of True Myth, with two new pairs of helpers to deal with <a href="#safe-java-script-object-property-lookup">safe JavaScript object property lookup with <code>Maybe</code>s</a> and <a href="#handling-exception-throwing-functions">handling exception-throwing code with <code>Result</code>s</a>.</p>
<section id="safe-javascript-object-property-lookup" class="level2">
<h2>Safe JavaScript object property lookup</h2>
<p>We often deal with <em>optional properties</em> on JavaScript objects, and by default JavaScript just gives us <code>undefined</code> if a property doesn’t exist on an object and we look it up:</p>
<pre class="ts"><code>type Person = {
  name?: string;
};

let me: Person = { name: 'Chris' };
console.log(me.name); // Chris

let anonymous: Person = {};
console.log(anonymous.name); // undefined</code></pre>
<p>We can already work around that with <code>Maybe.of</code>, of course:</p>
<pre class="ts"><code>function printName(p: Person) {
  let name = Maybe.of(p.name);
  console.log(name.unwrapOr('<anonymous>'));
}</code></pre>
<p>But this is a <em>really</em> common pattern! <code>Maybe.property</code> is a convenience method for dealing with this:</p>
<pre class="ts"><code>function printName(p: Person) {
  let name = Maybe.property('name', p);
  console.log(name.unwrapOr('<anonymous>'));
}</code></pre>
<p>At first blush, this might be a head-scratcher: after all, it’s actually slightly <em>longer</em> than doing it with <code>Maybe.of</code>. However, it ends up showing its convenience when you’re using the curried form in a functional pipeline. For example, if we had a <em>list</em> of people, and wanted to get a list of just the people’s names (ignoring anonymous people), we might do this:</p>
<pre class="ts"><code>function justNames(people: Person[]): string[] {
  return people
    .map(Maybe.property('name'))
    .filter(Maybe.isJust)
    .map(Just.unwrap);
}</code></pre>
<p>Another common scenario is dealing with the same kind of lookup, but in the context of a <code>Maybe</code> of an object. Prior to 2.2.0, we could do this with a combination of <code>Maybe.of</code> and <code>andThen</code>:</p>
<pre class="ts"><code>function getName(maybePerson: Maybe<Person>): Maybe<string> {
  return maybePerson.andThen(p => Maybe.of(p.name));
}</code></pre>
<p>This is harder to compose than we might like, and we <em>can’t</em> really write it in a “point free” style, even if that’s more convenient. We also end up repeating the <code>andThen</code> invocation every time we go down a layer if we have a more deeply nested object than this. Accordingly, 2.2.0 also adds another convenience method for dealing with deeply nested lookups on objects in a type-safe way: <code>Maybe.get</code> (and the corresponding instance methods).</p>
<pre class="ts"><code>// Function version:
function getNameFn(maybePerson: Maybe<Person>): Maybe<string> {
  return Maybe.get('name', maybePerson);
}

// Method version
function getNameM(maybePerson: Maybe<Person>): Maybe<string> {
  return maybePerson.get('name');
}</code></pre>
<p>Again, since the function version is curried, we can use this to create other little helper functions along the way:</p>
<pre class="ts"><code>const getName = Maybe.get('name');

function getAllNames(people: Maybe<Person>[]): string[] {
  return people
    .map(getName)
    .filter(Maybe.isJust)
    .map(Just.unwrap);
}</code></pre>
<p>And if our object is a deeper type:</p>
<pre class="ts"><code>type ComplicatedPerson = {
  name?: {
    first?: string;
    last?: string;
  };
};

let none: Maybe<ComplicatedPerson> = Maybe.nothing();
console.log(none.get('name').toString());
// Nothing
console.log(none.get('name').get('first').toString());
// Nothing

let nameless: Maybe<ComplicatedPerson> = Maybe.just({});
console.log(nameless.get('name').toString());
// Nothing
console.log(nameless.get('name').get('first').toString());
// Nothing

let firstOnly: Maybe<ComplicatedPerson> = Maybe.just({
  name: {
    first: 'Chris',
  },
});
console.log(firstOnly.get('name').toString());
// Just([object Object])
console.log(firstOnly.get('name').get('first').toString());
// Just(Chris)</code></pre>
<p>Note that in these cases, since the type we’re dealing with is some kind of object with specific keys, if you try to pass in a key which doesn’t exist on the relevant object type, you’ll get a type error. (Or, if you’re using the curried version, if you try to pass an object which doesn’t have that key, you’ll get a type error.) However, we also often use JavaScript objects as <em>dictionaries</em>, mapping from a key to a value (most often, but not always, a <em>string</em> key to a specific value type). <code>Maybe.property</code> and <code>Maybe.get</code> both work with dictionary types as well.</p>
<pre class="ts"><code>type Dict<T> = { [key: string]: T };

let ages: Dict<number> = {
  'chris': 31,
};

console.log(Maybe.property('chris', ages)); // Just(31)
console.log(Maybe.property('joe', ages)); // Nothing

let maybeAges: Maybe<Dict<number>> = Maybe.of(ages);
console.log(maybeAges.get('chris')); // Just(31)
console.log(maybeAges.get('joe')); // Nothing</code></pre>
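<p>The compile-time key checking described above falls out of constraining the key parameter with <code>keyof</code>. Here’s a minimal sketch of that idea, using a hypothetical <code>propertyOf</code> helper and a simplified <code>Maybeish</code> stand-in type rather than True Myth’s actual implementation:</p>

```typescript
// Simplified stand-in for a Maybe type, for illustration only.
type Maybeish<T> = { kind: 'just'; value: T } | { kind: 'nothing' };

// `K extends keyof T` is the trick: passing a key that does not exist on the
// object type is a compile-time error, and the value type `T[K]` is inferred.
function propertyOf<T, K extends keyof T>(
  key: K,
  obj: T
): Maybeish<NonNullable<T[K]>> {
  const value = obj[key];
  return value == null
    ? { kind: 'nothing' }
    : { kind: 'just', value: value as NonNullable<T[K]> };
}

type Person = { name?: string };
const me: Person = { name: 'Chris' };
const anon: Person = {};

console.log(propertyOf('name', me));   // { kind: 'just', value: 'Chris' }
console.log(propertyOf('name', anon)); // { kind: 'nothing' }
// propertyOf('nmae', me); // would not compile: 'nmae' is not a key of Person
```

<p>The same signature also covers dictionary types, since every string is a valid key of <code>Dict&lt;T&gt;</code>.</p>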
<p>Hopefully you’ll find these helpful! I ran into the motivating concerns for them pretty regularly in the codebase I work with each day, so I’m looking forward to integrating them into that app!</p>
</section>
<section id="handling-exception-throwing-functions" class="level2">
<h2>Handling exception-throwing functions</h2>
<p>The other big additions are the <code>Result.tryOr</code> and <code>Result.tryOrElse</code> functions. Both of these help us deal with functions which throw exceptions. Since JavaScript doesn’t have any <em>native</em> construct like <code>Result</code>, idiomatic JavaScript <em>does</em> often throw exceptions. And that can be frustrating when you want a value type like a <code>Result</code> to deal with instead.</p>
<p>Sometimes, you don’t care <em>what</em> the exception was; you just want a default value (or a value constructed from the local state of your program, but either way just one value) you can use as the error to keep moving along through your program. In that case, you wrap a function which throws an error in <code>Result.tryOr</code>. Let’s assume we have a function that either returns a number or throws an error, which we’ll just call <code>badFunction</code> because the details here don’t really matter.</p>
<pre class="ts"><code>const err = 'whoops! something went wrong!';
const result = Result.tryOr(err, badFunction);</code></pre>
<p>The <code>result</code> value has the type <code>Result<number, string></code>. If <code>badFunction</code> threw an error, we have an <code>Err</code> with the value <code>'whoops! something went wrong!'</code> in it. If it <em>didn’t</em> throw an error, we have an <code>Ok</code> with the number returned from <code>badFunction</code> in it. Handy!</p>
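<p>Under the hood, the idea here is a thin wrapper around <code>try</code>/<code>catch</code>. Here’s a rough sketch of it, using a simplified <code>ResultIsh</code> stand-in type and a local <code>tryOr</code> function rather than True Myth’s actual implementation:</p>

```typescript
// Simplified stand-in for a Result type, for illustration only.
type ResultIsh<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Sketch of the idea: run the callback; if it throws, discard the exception
// and substitute the caller-provided error value.
function tryOr<T, E>(error: E, callback: () => T): ResultIsh<T, E> {
  try {
    return { ok: true, value: callback() };
  } catch {
    return { ok: false, error };
  }
}

const badFunction = (): number => {
  throw new Error('kaboom');
};

console.log(tryOr('whoops! something went wrong!', () => 42));
// { ok: true, value: 42 }
console.log(tryOr('whoops! something went wrong!', badFunction));
// { ok: false, error: 'whoops! something went wrong!' }
```

<p>Note that the callback is passed <em>uncalled</em>: if you invoked it at the call site, it would throw before the wrapper ever got a chance to catch anything.</p>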
<p>Of course, we often want to <em>do something</em> with the exception that gets thrown. For example, we might want to log an error to a bug-tracking service, or display a nice message to the user, or any number of other things. In that case, we can use the <code>Result.tryOrElse</code> function. Let’s imagine we have a function <code>throwsHelpfulErrors</code> which returns a <code>number</code> or does just what it says on the tin: it throws a bunch of different kinds of errors, which are helpfully distinct and carry around useful information with them. Note that the type of the error-handling callback we pass in is <code>(error: unknown) => E</code>, because JS functions can throw <em>anything</em> as their error.</p>
<pre class="ts"><code>const handleErr = (e: unknown): string => {
  if (e instanceof Error) {
    return e.message;
  } else if (typeof e === 'string') {
    return e;
  } else if (typeof e === 'number') {
    return `Status code: ${e}`;
  } else {
    return 'Unknown error';
  }
};

const result = Result.tryOrElse(handleErr, throwsHelpfulErrors);</code></pre>
<p>Here, <code>result</code> is once again a <code>Result<number, string></code>, but the error side has whatever explanatory information the exception provided to us, plus some massaging we did ourselves. This is particularly handy for converting exceptions to <code>Result</code>s when you have a library which uses exceptions extensively, but in a carefully structured way. (You could, in fact, just use an identity function to return whatever error the library throws—as long as you write your types carefully and accurately as a union of those error types for the <code>E</code> type parameter! However, doing that would require you to explicitly opt into the use of <code>any</code> to write it as a simple identity function, so I’m not sure I’d <em>recommend</em> it. If you go down that path, do it with care.)</p>
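<p><code>tryOrElse</code> has the same basic shape, except that the thrown value is handed to the callback instead of being discarded. A rough sketch, again with a simplified <code>ResultIsh</code> stand-in rather than the library’s real implementation:</p>

```typescript
// Simplified stand-in for a Result type, for illustration only.
type ResultIsh<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Sketch of the idea: the `onError` callback receives whatever was thrown
// (typed `unknown`, since JS code can throw anything) and maps it into `E`.
function tryOrElse<T, E>(
  onError: (thrown: unknown) => E,
  callback: () => T
): ResultIsh<T, E> {
  try {
    return { ok: true, value: callback() };
  } catch (thrown) {
    return { ok: false, error: onError(thrown) };
  }
}

const throwsHelpfulErrors = (): number => {
  throw new Error('the server is on fire');
};

const handleErr = (e: unknown): string =>
  e instanceof Error ? e.message : 'Unknown error';

console.log(tryOrElse(handleErr, throwsHelpfulErrors));
// { ok: false, error: 'the server is on fire' }
console.log(tryOrElse(handleErr, () => 7));
// { ok: true, value: 7 }
```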
<hr />
<p>And that’s it for True Myth 2.2! Enjoy, and of course please <a href="https://github.com/true-myth/true-myth/issues">open an issue</a> if you run into any bugs!</p>
<p>Thanks to <a href="https://github.com/bmakuh">Ben Makuh</a> for implementing <code>Result.tryOr</code> and <code>Result.tryOrElse</code>. Thanks to Ben and also <a href="https://github.com/tansongyang">Frank Tan</a> for helpful input on the <code>Maybe.get</code> and <code>Maybe.property</code> <abbr>API</abbr> design!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I published both <code>2.2.0</code> and <code>2.2.1</code>, because once again I missed something along the way. This time it was making sure all the new functions were optionally curried to support partial application.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Scales of Feedback Time in Software Development2018-10-22T21:15:00-04:002018-10-22T21:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-10-22:/2018/scales-of-feedback-time-in-software-development.htmlThere are rough order-of-magnitude differences between the feedback times for build-time errors, automated tests, manual testing, CI, staging, and production. This is useful when thinking about tradeoffs of where you want to catch certain failure classes.<p><i class=editorial><strong><a href="http://v4.chriskrycho.com/2018/assumed-audiences.html">Assumed Audience</a>:</strong> fans of compiled languages with expressive type systems. I’m not trying to persuade fans of dynamic languages they should use a compiler here; I’m trying to surface something that often goes unstated in discussions among fans of compiled languages with expressive type systems, but hopefully it’s interesting beyond that. If you don’t like compiled languages, just skip the build step bits; the rest all still applies.</i></p>
<p>There are basically six stages of the development of any given software component where you can receive feedback on what you build:<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<ol type="1">
<li>compilers, static analysis tools, and/or pair programming</li>
<li>automated test suites</li>
<li>manual local testing</li>
<li>continuous integration (<abbr>CI</abbr>) test results</li>
<li>deploying to staging (or a similar test environment) for manual testing</li>
<li>deploying to live, i.e. when production traffic is meaningfully different from what you can test on staging</li>
</ol>
<p>What’s interesting to note is that there are also, in my experience, roughly order-of-magnitude differences between each of those layers in terms of the <em>cycle time</em> between when you make a change and when you learn whether it is broken. That is, there seem to be rough factor-of-ten differences between the feedback you get from—</p>
<ol type="1">
<li><p>compilers, static analysis tools, and/or pair programming—all of which can show you feedback in near-real-time as you’re typing and saving your code, especially with a good language server or a fast compiler or a speedy linter</p></li>
<li><p>automated test suites, assuming they’re running on every build change and are reasonably speedy themselves, or scoped to the things impacted by the changes made</p></li>
<li><p>manual local testing, which you can repeat after every build, but which usually requires you to switch contexts to execute the program in some way</p></li>
<li><p><abbr>CI</abbr>, presumably doing the automated equivalent of what you do in both layers 2 and 3, but requiring a push to some central location and a remote build and execution of the test suite, and often a much larger integration test suite than you’d run locally</p></li>
<li><p>deploying to staging, and repeating the same kinds of manual testing you might do locally in layer 3 in a more production-like environment</p></li>
<li><p>deploying to live, and repeating the same kinds of manual testing you might do locally in layers 3 or 5, as well as getting feedback from observability or monitoring systems using your real traffic</p></li>
</ol>
<p>(Those last two <em>might</em> be comparable in the cycle time sense. However, the way most teams I’ve heard of work, any deploy to live is usually preceded by a deploy to staging. What’s more, with most changes that you can’t test until it’s live, it’s often the case that you’re not going to know if something is wrong until it has been live for at least a little while. Finally, some kinds of things you can really only test with production load and monitoring or observability systems, and those kinds of things are at least sometimes not visible immediately after deployment, but only in the system’s aggregate behavior or weird outliers that show up given enough scale.)</p>
<p>What all of this gets at is that stepping to a higher layer nearly always entails a <em>dramatic</em> increase in the <em>cycle time</em> for software development: that is, the amount of time between when I make a change and when I know whether it’s broken or not. If I can know that I have a problem because my compiler surfaces errors in my editor, that probably saves me a minute or two each day over only being able to see the same error in a test suite. By the same token, being able to surface an error in a test suite running on every build will likely save me anything from minutes to hours of cycle time compared to something I can only test in production.</p>
<p>At first blush, this looks like an argument for pushing everything to the lowest-numbered layer possible, and I think that’s <em>kind of</em> right. I (and probably many other people who end up in, say, Rust or Haskell or Elm or other languages with similarly rich type systems) tend to prefer putting as much as possible into layer 1 here precisely because we have so often been bitten by things that are at layer 2 in other languages or frameworks and take a lot of time to figure out why they broke at layer 2. This happened to me in a C<sup>♯</sup> server application just a couple weeks ago, and chasing it down was <em>not fun</em>.</p>
<p>However, my enthusiasm for rich type systems notwithstanding, I <em>don’t</em> think this observation about these layers of cycle time means we should put everything in the compiler all the time. Indeed, there are some things it is too expensive or difficult to test anywhere <em>but</em> production (all the way up at layer 6). What’s more–although this is often overlooked in these discussions–putting too much of this rich information in layer 1 can absolutely kill your compile times in many languages. In my experience, this is particularly true of many of the languages with rich enough type systems to make layer 1 handling genuinely viable in the first place!<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>I do think, though, that being aware of the cost in cycle time is useful, as is being explicit about <em>why</em> we think it’s worth slotting a particular set of feedback into layer 2 vs. layer 1 (or layers 3, 4, 5, or 6). That goes for library development, of course.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> It goes equally for application development, though! It can be really helpful to make explicit both which of these layers you’re landing in and (just as important) why you’ve landed there for any given bit of feedback you want or need to get–making the tradeoffs explicit along the way.</p>
<hr />
<p><i class=editorial>Thanks to my friend Ben Makuh for looking over an earlier draft of this piece and providing really helpful feedback on it! Thanks as well to Greg Vaughn for noting shortly after I published it that pair programming also sits at the “immediate feedback” layer.</i></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>There’s some ongoing work in the Rust web working group to build an exemplar web framework, <a href="https://rust-lang-nursery.github.io/wg-net/2018/09/11/tide.html">Tide</a>. The <a href="https://rust-lang-nursery.github.io/wg-net/2018/10/16/tide-routing.html">most recent post</a> tackled routing, and prompted <a href="https://internals.rust-lang.org/t/routing-and-extraction-in-tide-a-first-sketch/8587">an interesting discussion</a> on the <a href="https://internals.rust-lang.org/">Rust internals forum</a>. This post is a cleaned-up, better-articulated, more general version of <a href="https://internals.rust-lang.org/t/routing-and-extraction-in-tide-a-first-sketch/8587/36?u=chriskrycho">a post</a> I offered in that thread.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Right now I and a few others are trying to figure out why one particular type definition in the TypeScript definitions for Ember.js causes a build to take about 20× as long as the build without that type definition. It’s the difference between a 6.5-second build and a 2.5-<em>minute</em> build.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>as in the example of a web server’s <abbr>API</abbr> for route handling which originally prompted this post<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Why We Want Pattern-Matching in JavaScript2018-09-23T13:00:00-04:002018-09-24T18:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-09-23:/2018/why-we-want-pattern-matching-in-javascript.htmlA worked example of transforming if/else statements to the proposed pattern-matching syntax, showing how much pattern-matching can clarify (as well as shorten) complicated code.
<p>I’ve often noted how much I want the <a href="https://github.com/tc39/proposal-pattern-matching">JavaScript pattern-matching proposal</a> to land. I noted in conversation with some people recently, though, that it’s not always obvious <em>why</em> it will be so helpful. Similarly, <a href="https://twitter.com/littlecalculist">Dave Herman</a> recently noted to me that <a href="https://twitter.com/dhh">DHH</a>’s mantra of “Show me the code” is a really helpful tool for thinking about language design. (I tend to agree!) So with that in mind, here’s a piece of example code from the Ember app I work on today, very slightly modified to get at the pure essentials of this particular example.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>The context is a <abbr>UI</abbr> component which shows the user their current discount, if any, and provides some nice interactivity if they try to switch to a different discount.</p>
<p>First, some types that we’ll use in the example, which I use in the actual component to avoid the problems that inevitably come with using string values for these kinds of things. Linters like ESLint or type systems like TypeScript or Flow will catch typos this way, and you’ll also get better errors at runtime even if you’re not using a linter or a type system!<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<pre class="js"><code>const DiscountTypes = {
  Offer: 'Offer',
  Coupon: 'Coupon',
  None: 'None',
};

const Change = {
  OfferToOffer: 'OfferToOffer',
  OfferToCoupon: 'OfferToCoupon',
  CouponToCoupon: 'CouponToCoupon',
  CouponToOffer: 'CouponToOffer',
};</code></pre>
<p>Now, we set up a component which has a little bit of internal state to track the desired change before we submit it, which we display differently based on what the value of the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/get"><abbr>ES5</abbr> getter</a> for <code>change</code> is here:</p>
<pre class="js"><code>class DiscountComponent {
  constructor(currentDiscountType) {
    this.currentDiscountType = currentDiscountType;
    this.newDiscountType = null;
  }

  changeDiscount(newDiscountType) {
    this.newDiscountType = newDiscountType;
  }

  submitChange() {
    // logic for talking to the server
  }

  get change() {
    const { currentDiscountType, newDiscountType } = this;
    if (currentDiscountType === DiscountTypes.Offer) {
      if (newDiscountType === DiscountTypes.Offer) {
        return Change.OfferToOffer;
      } else if (newDiscountType === DiscountTypes.Coupon) {
        return Change.OfferToCoupon;
      } else if (newDiscountType === DiscountTypes.None) {
        return null;
      } else {
        assertInDev(
          `Missed a condition: ${currentDiscountType}, ${newDiscountType}`
        );
      }
    } else if (currentDiscountType === DiscountTypes.Coupon) {
      if (newDiscountType === DiscountTypes.Offer) {
        return Change.CouponToOffer;
      } else if (newDiscountType === DiscountTypes.Coupon) {
        return Change.CouponToCoupon;
      } else if (newDiscountType === DiscountTypes.None) {
        return null;
      } else {
        assertInDev(
          `Missed a condition: ${currentDiscountType}, ${newDiscountType}`
        );
      }
    } else if (currentDiscountType === DiscountTypes.None) {
      return null;
    } else {
      assertInDev(
        `Missed a condition: ${currentDiscountType}, ${newDiscountType}`
      );
    }
  }
}</code></pre>
<p>Here’s the <em>exact</em> same semantics for computing the <code>change</code> value we’re interested in, but with pattern matching:</p>
<pre class="js"><code>class DiscountComponent {
  // ...snip
  get change() {
    case ([this.currentDiscountType, this.newDiscountType]) {
      when [DiscountTypes.Offer, DiscountTypes.Offer] ->
        return Change.OfferToOffer;
      when [DiscountTypes.Offer, DiscountTypes.Coupon] ->
        return Change.OfferToCoupon;
      when [DiscountTypes.Coupon, DiscountTypes.Offer] ->
        return Change.CouponToOffer;
      when [DiscountTypes.Coupon, DiscountTypes.Coupon] ->
        return Change.CouponToCoupon;
      when [DiscountTypes.None, ...] || [..., DiscountTypes.None] ->
        return null;
      when [...] ->
        assertInDev(
          `Missed a condition: ${this.currentDiscountType}, ${this.newDiscountType}`
        );
    }
  }
}</code></pre>
<p>The difference is stark. It’s not just that there are fewer lines of code, it’s that the actual intent of the code is dramatically clearer. (And while I’ve formatted it for nice display here, those are all one-liners in my normal 100-characters-per-line formatting.)</p>
<p>My preference would be for pattern-matching to have expression semantics, so you wouldn’t need all the <code>return</code> statements in the mix—and it’s <em>possible</em>, depending on how a number of proposals in flight right now shake out, that it still will. Even if pattern matching doesn’t ultimately end up with an expression-based syntax, though, we can still get a lot of those niceties if the <code>do</code>-expression proposal lands:</p>
<pre class="js"><code>class DiscountComponent {
  // ...snip
  get change() {
    return do {
      case ([this.currentDiscountType, this.newDiscountType]) {
        when [DiscountTypes.Offer, DiscountTypes.Offer] ->
          Change.OfferToOffer;
        when [DiscountTypes.Offer, DiscountTypes.Coupon] ->
          Change.OfferToCoupon;
        when [DiscountTypes.Coupon, DiscountTypes.Offer] ->
          Change.CouponToOffer;
        when [DiscountTypes.Coupon, DiscountTypes.Coupon] ->
          Change.CouponToCoupon;
        when [DiscountTypes.None, ...] || [..., DiscountTypes.None] ->
          null;
        when [...] ->
          assertInDev(
            `Missed a condition: ${this.currentDiscountType}, ${this.newDiscountType}`
          );
      }
    }
  }
}</code></pre>
<p>Again, this is profoundly clearer about the intent of the code, and it’s far easier to be sure you haven’t missed a case.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p><strong>Edit:</strong> after some comments on Twitter, I thought I’d note how this is <em>even nicer</em> in pure functions. If we assume that it gets expression semantics (which, again, I’m hoping for), a pure functional version of the sample above would look like this:</p>
<pre class="js"><code>const change = (currentType, newType) =>
  case ([currentType, newType]) {
    when [DiscountTypes.Offer, DiscountTypes.Offer] ->
      Change.OfferToOffer;
    when [DiscountTypes.Offer, DiscountTypes.Coupon] ->
      Change.OfferToCoupon;
    when [DiscountTypes.Coupon, DiscountTypes.Offer] ->
      Change.CouponToOffer;
    when [DiscountTypes.Coupon, DiscountTypes.Coupon] ->
      Change.CouponToCoupon;
    when [DiscountTypes.None, ...] || [..., DiscountTypes.None] ->
      null;
    when [...] ->
      assertInDev(
        `Missed a condition: ${currentType}, ${newType}`
      );
  };</code></pre>
<p>This may not be <em>quite</em> as clear as the same thing in F<sup>♯</sup> or Elm or another language in that family… but it’s amazingly better than anything we’ve seen in JavaScript to date.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p><code>assertInDev</code> looks a little different; we’re actually using the <code>Maybe</code> type from my <a href="https://github.com/chriskrycho/true-myth">True Myth</a> library instead of returning <code>null</code>; it’s an Ember app; as such it uses a <code>@computed</code> decorator; and of course it’s all in TypeScript. I chose to write it with standard JavaScript to minimize the number of things you have to parse as a reader.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>In the actual TypeScript, these are defined with an <a href="http://www.typescriptlang.org/docs/handbook/enums.html"><code>enum</code></a>.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Fun fact: the original code actually <em>had</em> missed a number of cases, which I learned only because TypeScript’s <code>strictNullChecks</code> setting informed me.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
“Zuckerberg’s Blindness and Ours” (L. M. Sacasas)2018-09-17T08:20:00-04:002018-09-17T08:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-09-17:/2018/zuckerbergs-blindness-and-ours-l-m-sacasas.htmlSolutionism is a nasty besetting culture-level sin we barely recognize as such.
<p>Yesterday while talking with my wife as we drove to spend some time with extended family, I caught myself: tempted to describe a given <em>response</em> to a particular cultural ill as a <em>solution</em>. This is a turn of thinking that’s especially tempting for engineers—and perhaps the more so engineers with a physics background (like me!). In two of the fields to which I have applied myself (physics and software), knowledge often genuinely appears in the form of <em>solutions to problems</em>. But the extent to which science (and scientism) on the one hand and engineering disciplines (especially software) on the other have come to the fore in our culture—the degree to which they have achieved nearly unassailable authority for us—means that we now too often take solutions as coextant with knowledge more generally.</p>
<p>This is <em>solutionism</em>, and it is bad. I noted above that two of the fields to which I have applied myself share this feature of having solutions to problems as their predominant form of knowledge. But this is not so in two of the other fields I have studied in some depth: for neither theology nor music is a <em>solution</em> very often in demand. Very different modes of thought and of reasoning are in play in each of those, and appropriately so.</p>
<p>So it was with some particular appreciation that I read <a href="https://thefrailestthing.com/2018/09/16/zuckerbergs-blindness-and-ours/">this piece by L. M. Sacasas</a>, reflecting on the recent New Yorker profile of Mark Zuckerberg. Sacasas rightly highlights how mistaken this solutionist frame of knowledge is. From his conclusion (emphasis mine):</p>
<blockquote>
<p>Reducing knowledge to know-how and doing away with thought leaves us trapped by an impulse to see the world merely as a field of problems to be solved by the application of the proper tool or technique, and this impulse is also compulsive because it cannot abide inaction. We can call this an ideology or we can simply call it a frame of mind, but either way it seems that this is closer to the truth about the mindset of Silicon Valley.</p>
<p>It is not a matter of stupidity or education, formally understood, or any kind of personal turpitude. Indeed, by most accounts, Zuckerberg is both earnest and, in his own way, thoughtful. Rather it is the case that one’s intelligence and one’s education, even if it were deeply humanistic, and one’s moral outlook, otherwise exemplary and decent, are framed by something more fundamental: a distinctive way of perceiving the world. This way of seeing the world, including the human being, as a field of problems to be solved by the application of tools and techniques, bends all of our faculties to its own ends. <em>The solution is the truth, the solution is the good, the solution the beautiful. Nothing that is given is valued.</em></p>
<p><em>The trouble with this way of seeing the world is that it cannot quite imagine the possibility that some problems are not susceptible to merely technical solutions or, much less, that some problems are best abided.</em> It is also plagued by hubris—often of the worst sort, the hubris of the powerful and well-intentioned—and, consequently, it is incapable of perceiving its own limits. As in the Greek tragedies, hubris generates blindness, a blindness born precisely out of one’s distinctive way of seeing. And that’s not the worst of it. The worst of it is that we are all, to some degree, now tempted and prone to see the world in just this way too.</p>
</blockquote>
A Real Victory2018-09-05T21:45:00-04:002018-09-05T21:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-09-05:/2018/a-real-victory.htmlToday—just shy of two years since I started adding types to our Ember app—it fully type-checks.
<p>On September 29, 2016, I started working on adding (<a href="https://flow.org">Flow</a>) types to the <a href="https://emberjs.com">Ember</a> app I had been working on since the start of the year. Throughout the rest of the year I worked on adding some basic Flow types to our app and to Ember itself. For my last commit in 2016, I switched us to TypeScript and began the rest of the long journey to fully type-checking our app. In early 2018, we made it to “the app type-checks”… at the loosest strictness settings.</p>
<p>And as of 6pm today—September 5, 2018, almost two full years later, and 21 months after we switched from Flow to TypeScript (more on this below)—we have a fully type-checked TypeScript Ember application, with the strictness notches dialed as strict as they will go.</p>
<p>It took almost two full years for us to get there, and I’m incredibly proud of that work.</p>
<p>It took almost two full years because it was a lot of work, and slow work to do at that, and it was rare that I could block out any large chunks of time for that work—we had to sneak in improvements between features we were urgently working on for our clients and for our own internal goals. More, it wasn’t just the work of adding types to our application. It was also the work of writing types for Ember itself, and for the surrounding ecosystem—which thankfully I did not finish alone, but which I did have to start alone. It was the work of integrating (and reintegrating) TypeScript into Ember’s build pipeline.</p>
<p>Happily, I did <em>not</em> do most of that work alone, and even on our app I’ve had a ton of help getting the types in place. But it has been a massive task, and finishing it today was a real victory. It’s not perfect. We have 200-or-so instances of <code>any</code> in the application (most of them in semi-legitimate places, to be fair), and I wish it were more like 20. We have a number of places in the app with the <code>!</code> “I promise this isn’t <code>null</code> or <code>undefined</code> here” operator on some nullable field, with long comments explaining <em>why</em> it’s not possible for it to be null there.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
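<p>In sketch form, that pattern looks something like this (the <code>User</code> and <code>nickname</code> names are invented for illustration, not from our actual codebase):</p>

```typescript
interface User {
  // Optional at the type level: not every user has picked a nickname.
  nickname?: string;
}

function greeting(user: User): string {
  // SAFETY: this function is only reachable from the post-signup flow,
  // and signup requires choosing a nickname, so `nickname` cannot
  // actually be `undefined` here despite its optional type.
  return `Hello, ${user.nickname!.toUpperCase()}!`;
}
```

<p>The <code>!</code> silences the compiler, so the long comment has to carry the proof the type system no longer can.</p>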
<p>But it type-checks today, and type errors fail the builds, and that <em>is</em> a real victory.</p>
<hr />
<p>You can consider this “part 1” of my thoughts on what feels to me like a pretty significant achievement. I’ll hopefully follow this up with some backstory sometime in the next few weeks.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>See my <a href="https://v4.chriskrycho.com/2018/type-informed-design.html">recent post</a> on thinking a lot about design decisions I would have made differently with TypeScript’s strict null checking available from day one!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
True Myth 2.1.0 Released2018-09-02T16:25:00-04:002018-09-02T16:25:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-09-02:/2018/true-myth-210-released.htmlA bunch of neat new utility functions on Maybe for arrays and tuples.
<p>I’ve just released True Myth 2.1.0 (<a href="https://github.com/chriskrycho/true-myth/tree/v2.1.0">source</a>, <a href="https://true-myth.js.org">docs</a>), which includes a handful of new utility functions for use with the <code>Maybe</code> types and arrays or tuples. Note that to make use of these you’ll need to be on at least TypeScript 3.0: they take advantage of some of the shiny new features in the type system!</p>
<p><strong>Edit:</strong> and, five minutes later, versions 2.1.1 and 2.1.2 are out with bugfixes consisting of “I forgot to export two functions. Now they’re exported.” Because that’s how this <em>always</em> works, right?</p>
<p>Here’s what’s new:</p>
<ul>
<li><p><strong><code>Maybe.find</code>:</strong> for those times when you want to do <code>Array.prototype.find</code> and would love to not have to wrap up the result with a <code>Maybe</code> explicitly every time. As with most functions in True Myth, it’s curried so you can easily use it in a functional programming style.</p>
<pre class="ts"><code>import Maybe from 'true-myth/maybe';
let foundRegular = Maybe.find(n => n > 1, [1, 2, 3]);
console.log(foundRegular.toString()); // Just(2)
let notFound = Maybe.find(n => n < 1, [1, 2, 3]);
console.log(notFound.toString()); // Nothing
let findAtLeast5 = Maybe.find((n: number) => n > 5);
let foundCurried = findAtLeast5([2, 4, 6, 8, 10]);
console.log(foundCurried.toString()); // Just(6)</code></pre></li>
<li><p><strong><code>Maybe.head</code> (aliased as <code>Maybe.first</code>):</strong> for getting the first item of an array safely. Like lodash’s <code>_.head</code> (or <code>someArray[0]</code>) but it returns a <code>Maybe</code> instead of possibly giving you back <code>undefined</code>.</p>
<pre class="ts"><code>import Maybe from 'true-myth/maybe';
let empty = Maybe.head([]);
console.log(empty.toString()); // Nothing
let hasItems = Maybe.head([1, 2, 3]);
console.log(hasItems.toString()); // Just(1)</code></pre></li>
<li><p><strong><code>Maybe.last</code>:</strong> the same as <code>Maybe.head</code>, but for getting the <em>last</em> element in an array.</p>
<pre class="ts"><code>import Maybe from 'true-myth/maybe';
let empty = Maybe.last([]);
console.log(empty.toString()); // Nothing
let hasItems = Maybe.last([1, 2, 3]);
console.log(hasItems.toString()); // Just(3)</code></pre></li>
<li><p><strong><code>Maybe.all</code>:</strong> for converting an array of <code>Maybe</code>s to a <code>Maybe</code> of an array. If you have an array whose contents are all <code>Maybe</code>s, it’s sometimes useful to be able to flip that around: if every item is a <code>Just</code>, you get back a single <code>Just</code> wrapping an array of the unwrapped values; if any item is a <code>Nothing</code>, the whole result is a single <code>Nothing</code>. This works for both heterogeneous and homogeneous arrays, which is pretty cool. A code sample will make this a lot clearer:</p>
<pre class="ts"><code>import Maybe, { just, nothing } from 'true-myth/maybe';
let includesNothing = Maybe.all(just(2), nothing<string>());
console.log(includesNothing.toString()); // Nothing
let allJusts = Maybe.all(just(2), just('hi'), just([42]));
console.log(allJusts.toString()); // Just([2, 'hi', [42]]);</code></pre>
<p>The resulting type of both <code>includesNothing</code> and <code>allJusts</code> here is <code>Maybe<Array<string | number | Array<number>>></code>.</p></li>
<li><p><strong><code>Maybe.tuple</code>:</strong> just like <code>Maybe.all</code> except it works on tuples (preserving their types’ order) for up to five-item tuples. (As the docs I wrote say: if you’re doing a larger tuple than that I don’t want to know what you’re doing but I won’t help with it!)</p>
<pre class="ts"><code>import Maybe, { just, nothing } from 'true-myth/maybe';
type Tuple = [Maybe<number>, Maybe<string>, Maybe<number[]>];
let withNothing: Tuple = [just(2), nothing(), just([42])];
let withNothingResult = Maybe.tuple(withNothing);
console.log(withNothingResult.toString()); // Nothing
let allJusts: Tuple = [just(2), just('hi'), just([42])];
let allJustsResult = Maybe.tuple(allJusts);
console.log(allJustsResult.toString()); // Just([2, "hi", [42]])</code></pre>
<p>These have the same <em>output</em> (i.e. the same underlying representation) as the array output, but a different type. The resulting type of both <code>withNothingResult</code> and <code>allJustsResult</code> here is <code>Maybe<[number, string, Array<number>]></code>.</p></li>
</ul>
<p>Once TypeScript 3.1 is out, I should be able to collapse these into a single <code>all</code>, and <code>tuple</code> will just become an alias for it.</p>
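<p>A rough sketch of why that should work, using a minimal stand-in for <code>Maybe</code> rather than True Myth’s real implementation: TypeScript 3.1 makes mapped types over tuples preserve the tuple’s shape, so one signature can serve homogeneous arrays and typed tuples alike.</p>

```typescript
// Minimal stand-in for True Myth's Maybe, just enough to show the idea.
type Maybe<T> = { kind: 'just'; value: T } | { kind: 'nothing' };

const just = <T>(value: T): Maybe<T> => ({ kind: 'just', value });
const nothing = <T = never>(): Maybe<T> => ({ kind: 'nothing' });

// With TS 3.1+, a mapped type over a tuple preserves its shape, so
// Unwrapped<[Maybe<number>, Maybe<string>]> is [number, string].
type Unwrapped<T extends Maybe<unknown>[]> = {
  [K in keyof T]: T[K] extends Maybe<infer U> ? U : never;
};

// One `all` that works for homogeneous arrays *and* typed tuples.
function all<T extends Maybe<unknown>[]>(...maybes: T): Maybe<Unwrapped<T>> {
  const values: unknown[] = [];
  for (const maybe of maybes) {
    if (maybe.kind === 'nothing') {
      return { kind: 'nothing' };
    }
    values.push(maybe.value);
  }
  return { kind: 'just', value: values as unknown as Unwrapped<T> };
}
```

<p>Under a scheme like that, <code>tuple</code> really would just be another name for <code>all</code>.</p>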
Going Offline2018-09-01T09:00:00-04:002018-09-01T09:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-09-01:/2018/going-offline.htmlI signed out of Slack for most of the day a few times this week, and it was bliss. Chatting can genuinely be great, but it can also be very disruptive when there are large or hard tasks to be finished.
<p>Several times this week, I signed out of both my company Slack and the <a href="https://embercommunity.slack.com/">Ember Community Slack</a> and just worked in “solitude” for about six and a half of the eight hours I was working. It was, in a word, <em>bliss</em>.</p>
<p>Over the past few years, I’ve come to enjoy a lot of chatting during the day while at work, and the social interaction is very much a <em>must</em> in a lot of ways for someone who works remotely. The flip side, however, is that chat is a huge time-suck, and more than that it’s a constant drain on attention. I’m far from the first person to see this or to say it, of course, and I’ve experienced it before. But it struck me much more forcefully this week than it usually has in the past.</p>
<p>Some of that, I suspect, is the effect of the burnout I’ve been experiencing: I’m much more taxed by social interaction and by shifts in attention right now than I have been in the past. More and more, though, I’ve also become aware of the effect the constant distraction of chat has on me. I am much less effective as a software developer when I am constantly switching modes.</p>
<p>To be sure: some tasks are more mentally demanding than others. Some days I get done <em>plenty</em> while still switching contexts often—less, perhaps, than I might if I were not being interrupted, but also perhaps <em>more</em> in the sense that enabling others to finish <em>their</em> tasks is also important. When I need to dive deep on something hard, though—or even when I just need to get after a <em>large</em> task—the kind of mental silence that signing out of chat affords is very helpful. I plan to make this a regular habit.</p>
Once More Around The Wheel2018-08-31T18:30:00-04:002018-08-31T18:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-31:/2018/once-more-around-the-wheel.htmlI keep hoping a solution to my publishing needs will present itself instead of my having to build it myself. Such a solution does *not* appear forthcoming, though.
<p>I very much wish my publishing needs were not so… complicated. Academic writing—though that’s done for a while—poetry, code, and a strong preference for semantic HTML to be generated by it along with an equally strong preference for plain text authoring…</p>
<p>Nothing works for me.</p>
<p>The only Markdown parser out there which does the right thing with all of those is <a href="http://pandoc.org">Pandoc</a>. The options for using Pandoc directly are not great. Shelling out to it via <a href="https://getpelican.com">Pelican</a> (my current strategy) works but is <em>slow</em>. The implemented-in-Rust <a href="https://www.getgutenberg.io">Gutenberg</a> generator looks like <em>exactly</em> what I want performance-wise, but its <a href="https://github.com/google/pulldown-cmark" title="pulldown-cmark">underlying Markdown engine</a> doesn’t support citations <em>or</em> poetry.</p>
<p>I keep coming back to the conclusion that I’m basically going to have to build whatever I want myself, if I want my desired publishing flow to exist. I don’t really <em>want</em> to do that, though. It’s a boatload of work, even “just”—just!—to do something like (a) learn Haskell well enough to build on top of Pandoc directly or (b) build a good C-based API wrapper in Rust so that I can do it <em>that</em> way or (c) extend <a href="https://github.com/google/pulldown-cmark">pulldown-cmark</a> to support poetry and citation management.</p>
<p>For lots of reasons (c) is probably what I’ll ultimately end up doing; I want that for more than just blogging. More on that eventually, I hope.</p>
<p>But in the meantime I really just… want someone else to have the same weird needs I do and build this for me. I just know full well at this point that that’s not going to happen, and accordingly am basically resigned to muddling along with Pelican and Pandoc until such a time as I can actually buckle down and build what I want and need.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> <em>C’est la vie.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>As an aside: the prospect of buckling down and building things like that in my spare time is <em>much</em> less appealing given my current <a href="https://v4.chriskrycho.com/burnout/">struggles with burnout</a>, and as I’ll write about at some point in the future I feel a (perhaps-surprising to you, my reader) lack of confidence about my <em>ability</em> to accomplish those things.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Type-Informed Design2018-08-30T19:40:00-04:002018-08-30T19:40:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-30:/2018/type-informed-design.htmlRevisiting our app in TypeScript’s strict mode has me thinking about what we’d do differently if we had this input in the first place.<p>I’ve been working on getting the Ember app I work on fully type-checked in strict mode this week, and noticed something interesting along the way: there are a lot of design decisions—a few of them really core to the behavior of the app!—which we never, <em>ever</em> would have made if we had been using TypeScript in the first place.</p>
<p>One of these is pervasive references to certain optional properties that appear in services throughout our app—the basket, for example. These can indeed be unset, and at certain times they are. However, many of our components pull in this property from the service and simply assume it’s there to use. We’ve known for a while that this was a problem at times: <a href="https://raygun.com/">Raygun</a> has told us loud and clear. But it wasn’t obvious how pervasive this was—and how badly we were just assuming the presence of something that may well be absent <em>all over the app</em>!—until I saw the type errors from it. Dozens of them.</p>
<p>Some of them are places where we should have built the components differently: to take the item as an argument, for example, and to require it as an input, because the component just doesn’t make any sense without it, indeed lives in a part of the app such that it’s not even possible to render the component without it.</p>
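<p>A minimal sketch of the contrast (the <code>Basket</code> names are invented for illustration, and the real thing is an Ember service, not a plain interface):</p>

```typescript
interface Basket {
  items: string[];
}

interface BasketService {
  // Optional: there legitimately is no basket at certain times.
  basket?: Basket;
}

// What we had been doing: assuming the basket is present. Under
// `strictNullChecks` this is an error: object is possibly 'undefined'.
// const count = (service: BasketService) => service.basket.items.length;

// The better design: a component that makes no sense without a basket
// takes it as a required argument, so the check happens once, at the edge.
function itemCount(basket: Basket): number {
  return basket.items.length;
}

function render(service: BasketService): string {
  return service.basket !== undefined
    ? `${itemCount(service.basket)} item(s)`
    : 'No basket yet';
}
```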
<p>And sure, we could document that invariant and use TypeScript’s override tools to carry on. (What do you think I’m doing this week?)</p>
<p>But, and this is the thing that really caught my attention in thinking about all of this: it would be much better <em>not</em> to have to do that. Had we had TypeScript in place when we started, we simply would have designed large swaths of the app differently because we’d have seen these kinds of things when we were building it in the first place!</p>
<p>That’s a bit of wishing for the impossible in one sense: we literally couldn’t have done that when we started on the app, because TS didn’t have the necessary pieces to support typing the Ember libraries. My team helped <em>build</em> the TS and Ember story over the last 18 months! But at a minimum I have a pretty good idea how the process will be different next time around, with this tool available and providing this kind of helpful design feedback from the outset!</p>
RSS Triage is Just Like Email Triage2018-08-22T18:15:00-04:002018-08-22T18:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-22:/2018/rss-triage-is-just-like-email-triage.htmlMy strategy for keeping up with my RSS subscriptions: read it, send it to Pocket to read later, or mark it as read. Then move on.
<p>I’ve heard people say they find <abbr>RSS</abbr> overwhelming: the constant flow of new items. I find that managing <abbr>RSS</abbr> well is not actually all that hard though; it just takes the same kind of discipline as keeping an empty inbox does.</p>
<p>In short: keep my reading list restrained, and have a strategy to keep up to date on that list.</p>
<p>My strategy for keeping up to date is simple, too: for every item, either read it, send it to Pocket to read later (on my Kobo, via Kobo’s Pocket integration), or mark it as read. Then move on.</p>
<p>As for the Pocket items: clear <em>those</em> out after long enough, too: I go through my backlog every so often and if I haven’t read it yet and it has been in my to-read list for more than a month or two, I decide to read it <em>now</em> or just remove it. (At some point you just have to admit that you’re <em>not</em> going to read something, and that it’s okay. We all have limited time.)</p>
<p>With that simple strategy, <abbr>RSS</abbr> becomes more than merely “manageable” for me: I get to read a lot of things I actually <em>want</em> to read, and it’s focused in a way that Twitter or Apple News or whatever else isn’t.</p>
Level up your `.filter` game2018-08-18T10:00:00-04:002018-08-18T10:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-18:/2018/level-up-your-filter-game.html<p>Adam Giese’s <a href="https://css-tricks.com/level-up-your-filter-game/">“Level up your <code>.filter</code> game”</a> does something really interesting and helpful: it introduces a bunch of fairly sophisticated functional programming concepts without ever mentioning functional programming and without ever using any of the jargon associated with those terms.</p>
<p>“Level up your <code>.filter</code> game” gives you a reason …</p><p>Adam Giese’s <a href="https://css-tricks.com/level-up-your-filter-game/">“Level up your <code>.filter</code> game”</a> does something really interesting and helpful: it introduces a bunch of fairly sophisticated functional programming concepts without ever mentioning functional programming and without ever using any of the jargon associated with those terms.</p>
<p>“Level up your <code>.filter</code> game” gives you a reason to use some standard FP tools—currying, higher-order functions, composition—in your ordinary work. It’s pitched at working JS developers. It gives a real-world example of wanting to filter search results based on user input. It shows the utility of defining a bunch of small functions which can fit together like LEGO.</p>
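<p>To give the flavor (my own sketch, not code from the article): define small, named predicates and a combinator, then snap them together and hand the result to <code>.filter</code>:</p>

```typescript
type Predicate<T> = (value: T) => boolean;

// A tiny combinator: returns a new predicate true only when both are.
const both = <T>(a: Predicate<T>, b: Predicate<T>): Predicate<T> =>
  (value) => a(value) && b(value);

interface Result {
  name: string;
  inStock: boolean;
}

// Small, named building blocks…
const available: Predicate<Result> = (result) => result.inStock;
const matching = (query: string): Predicate<Result> =>
  (result) => result.name.toLowerCase().indexOf(query.toLowerCase()) !== -1;

// …snapped together like LEGO and passed to `.filter`.
const results: Result[] = [
  { name: 'Blue Widget', inStock: true },
  { name: 'Red Widget', inStock: false },
  { name: 'Gadget', inStock: true },
];
const hits = results.filter(both(available, matching('widget')));
// hits → [{ name: 'Blue Widget', inStock: true }]
```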
<blockquote>
<p>Filters are an essential part of JavaScript development. Whether you’re sorting out bad data from an API response or responding to user interactions, there are countless times when you would want a subset of an array’s values. I hope this overview helped with ways that you can manipulate predicates to write more readable and maintainable code.</p>
</blockquote>
<p>I commend the piece to you not so much for the explanation of how to use JavaScript’s <code>Array.prototype.filter</code> effectively (though it has some good suggestions that way!) but <em>primarily</em> as a great example of the kind of pedagogy we need a lot more of to demonstrate the value of functional programming in ordinary, day-to-day development work.</p>
Stable Libraries2018-08-14T19:45:00-04:002018-08-14T19:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-14:/2018/stable-libraries.htmlTrue Myth has changed very little since I first released it, and I do not expect it to change much in the future: because it is basically done. I wish more libraries took this approach; churn is not a virtue.
<p><a href="https://github.com/chriskrycho/true-myth">True Myth</a> has changed very little since I first released it, and although I have a few ideas for small additions I might make, I don’t really expect it to change much in the future. <em>That’s okay.</em></p>
<p>There’s a strange idea in some parts of the software development ecosystem—a way of thinking I also find myself falling into from time to time—which takes a lack of changes to a library as a sign that the library is <em>dead</em> and shouldn’t be used. I call this idea “strange” because if you take a step back, it’s actually not necessarily very healthy for certain kinds of libraries to be changing all the time.</p>
<p>But if you’re in an ecosystem where rapid change in libraries is normal, you end up assuming that something which <em>isn’t changing</em> is <em>unmaintained</em> or <em>not usable</em> when in fact the opposite may be true. If someone opens a pull request or an issue for True Myth, I generally get to it in under a day, often under an hour if it’s in my normal working time. (That’s easy enough for me to do because it’s a small, simple library; I don’t have the scale problems that larger projects do.) The project isn’t <em>dead</em>. It’s just mostly <em>done</em>.</p>
<p>One of the things I’d like to see in the front-end/JavaScript community in particular is a growing embrace of the idea that some libraries can genuinely be finished. They might need a tweak here or there to work with a new packaging solution, or to fix some corner case bug that has been found. But the “churn” we all feel to varying degrees would be much diminished if maintainers didn’t feel a constant push to be changing for the sake of, well… change. The burden on maintainers would be lower, too. Maybe we’d all get to spend less time on small changes that just keep us “up to date” and more on solving bigger problems.</p>
<p>Don’t get me wrong: sometimes changing perspective warrants a rewrite. But in libraries as in apps, just as often you’ll end up with a bad case of <a href="https://en.m.wikipedia.org/wiki/Second-system_effect">second system syndrome</a>; and rewrites are <em>rarely</em>—not never, but rarely—clean wins.</p>
“Free Speech”2018-08-11T10:35:00-04:002018-08-11T10:35:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-11:/2018/free-speech.htmlArguments about free speech on these private platforms are exercises in missing the point. The bigger problem is that we have abandoned our public discourse (and nearly everything else) to these companies.
<p>Every time there is a major controversy about large platforms blocking or delisting some controversial figure, something like the following exchange follows:<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<blockquote>
<p><strong>Person 1:</strong> But what about free speech? You’re censoring this party!</p>
<p><strong>Person 2:</strong> [Twitter/Facebook/Youtube/etc.] is a private platform! Free speech guarantees the right not to be jailed for what you say, <em>not</em> the right to have it on every platform you want.</p>
</blockquote>
<p>So far as it goes, this is true. XKCD’s explanation is completely right on the legal merits:</p>
<p><a href="https://xkcd.com/1357/"><img src="http://www.explainxkcd.com/wiki/images/a/ae/free_speech.png" title="XKCD 1357: Free Speech" /></a></p>
<p>But while this is all true in some sense, it also seems to me to be missing the larger and <em>much</em> more important point. Namely: the whole reason we have these arguments—and the reason people tend to think as they do about the “free speech” question in these situations, legally nonsensical or not—is that we have outsourced the vast majority of our public discourse to these private platforms.</p>
<p>Twitter and Facebook have become the <em>de facto</em> public fora of the 2010s, with Google’s search results and Wikipedia’s summaries taking similarly authoritative roles on what <em>exists</em> and what <em>is true</em>. Not that most people would put it that way, but it remains true: if something isn’t in Google search, it might as well not exist on the internet, and therefore for many people <em>at all</em>. Likewise with Wikipedia’s summaries: the admonitions of every college professor in the world notwithstanding, what Wikipedia says has an undeniable authority. And when someone is blacklisted from Twitter or Facebook, their ability to be heard at all by internet users as a bloc is <em>dramatically</em> curtailed.</p>
<p>This centralization of discussion and information into a few private platforms has a great many downsides. But perhaps chief among them is that we have ceded major aspects of our public and civic life to private platforms, and their interests are not the interests of the public good. They are driven almost entirely by the profit motive, or (possibly even worse at times) by nebulous and chimeric ideologies that treat “connecting people [digitally]” or “organizing the world’s information” as inherent and superlative goods. So when someone has their page removed from Facebook, or their website blacklisted from Google, there is a real sense in which they <em>have</em> been removed from public discourse and their speech “silenced”—even if not in an illegal sense.</p>
<p>For the purposes of this post, though, I could not care less what the major internet companies do or don’t show on their platforms. Instead, I worry about our practice both as individuals and also as communities-of-practice—churches, associations, and so on—of abdicating our responsibility to maintain real public and civic lives in our local places in favor of letting these corporate giants do the work for us. I worry about the costs of letting Google and Facebook replace genuine public fora in our lives. I worry about the long-term effect of letting supranational megacorporations driven by that toxic combination of profit motive and nonsensical ideologies set the terms of our lives. I worry about the whole set of underlying structural and systematic moves that have made delisting on one of those platforms seem like a violation of the ideal of free speech.</p>
<p>As I’ve said for many years in this space: we should work hard at reclaiming our lives from the tangle of the corporations. We should limit the way we both use and think about these platforms. We should read books, old and new, rather than simply rely on the Google results and Wikipedia summaries. We should have painful, awkward conversations and indeed arguments with neighbors and colleagues and family members rather than merely all-caps shouting at each other on Facebook or Twitter. We should carve out our own spaces on the internet, <a href="http://tumblr.austinkleon.com/post/37863874092">owning our own turf</a>; but more than that we should remember that even that is no substitute for the thicker (and yes, more painful, frustrating, and awkward!) communities and interactions of a church or a neighborhood or a town hall meeting.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Given the context in which I’m writing this, it’s probably helpful to say that I think InfoWars is a font of demonic lies. I’m a Christian; I mean that literally.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Building Things2018-08-06T21:15:00-04:002018-08-06T21:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-08-06:/2018/building-things.htmlOn “force multipliers” and being an individual contributor at heart—even if one who is good at teaching and who enjoys leading.
<section id="i." class="level2">
<h2>I.</h2>
<p>For almost three years, now, I have been more or less steadily—sometimes more, sometimes less!—putting out episodes of New Rustacean. It’s fairly popular. I’ve had really smart people tell me how helpful it was in getting them up to speed with the language. I have had the surprising and slightly weird (if also somewhat gratifying) experience of walking into a room and seeing people respond in recognition of my voice.</p>
<p>I’m grateful for the impact the podcast has had, and as I tell people often: this is far and away the most significant thing I could have done in the Rust ecosystem in the last three years. There are a lot of people better-equipped than I to write top-notch libraries and applications in the ecosystem. People well-equipped for podcasting by dint of already being active in the space, and well-equipped for teaching specifically by dint of background and training? There are a lot fewer of those. I don’t think there is anywhere at all I could have made a bigger dent in the same time for Rust.</p>
<p>And yet.</p>
<p>If I went and applied for a job today, where actual Rust <em>experience</em> was desired, the vast majority of my show’s listeners would have substantially more to show than me. A command line tool here, a little experiment there. My <a href="https://github.com/chriskrycho/lightning-rs" title="lightning (lx)">one real project</a> has been on hold almost since I started it. Another project, my original inspiration for learning Rust at all, I’ve never even started. My actual lines of Rust code written in the last three years top out somewhere under 3,000. It’s a pittance. As well as I know the language’s <em>ideas</em>, and indeed as well as I can explain them… I actually haven’t gotten to <em>build</em> much of anything with it.</p>
</section>
<section id="ii." class="level2">
<h2>II.</h2>
<p>The last few months at work, I’ve spent a lot of my time—and an increasingly large proportion of it—on mentoring, code reviews, and leading the team and effort I’m on. This is genuinely wonderful in a lot of ways. I <em>love</em> teaching, and it’s a pleasure to help shape the overall direction of a project and a codebase.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> In many ways, I’m right in line with the goals I set explicitly with my manager at the beginning of the year.</p>
<p>That’s really good, and really important. I recently saw someone tweet the pithy remark that the <em>definition</em> of a senior engineer is that they are mentoring a more junior engineer. I don’t think that’s quite right—there is a lot of room for really outstanding technical contributors who don’t have the gift of teaching, but whose technical chops mean they genuinely <em>are</em> senior people on the team.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> But the reasonable insight under the hyperbole is that enabling others can often be far more effective than merely doing work yourself.</p>
<p>And yet.</p>
<p>Over the last several months, the amount of code I have written myself has dropped substantially. Not to nothing, of course; I’m still doing the actual work of designing and implementing pieces of the application I work on a majority of the time. But I’m not sure how much more than 50% of my time it is on any given week at this point. As much as I’ve enjoyed helping drive this particular project forward,<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> I haven’t actually gotten to <em>build</em> as much during this phase of it.</p>
</section>
<section id="iii." class="level2">
<h2>III.</h2>
<p>These two things have a great deal in common, for all their superficial differences. Both are places where my most valuable contributions are not what I can build myself, but what I can enable <em>others</em> to build.</p>
<p>Thousands and thousands of people have listened to New Rustacean. For some non-trivial number of them, the podcast was an important part of their wrapping their heads around the language. I know this because they tell me, in emails and conversations and tweets that are genuinely my favorite parts of doing the show! I have done far, far more with the podcast than I possibly could have by building another library in Rust.</p>
<p>Similarly, albeit on a much smaller scale, my role in my team at Olo matters. I’ve been able to help set the overall technical direction of a number of our front-end initiatives at the company in important ways. I’ve been able to help more junior developers ramp up their skills. I have done far more in this kind of role than I could possibly have done by just quietly shipping features.</p>
<p>And yet.</p>
<p>Being a “force multiplier” (what a terrible phrase!) isn’t always what it’s cracked up to be. It can be both <em>worth it</em> and also <em>profoundly frustrating and boring</em> at times. I was drawn to software in no small part because of the joy of being able to make things—to start with nothing but an idea or a sketch and a few hours later have something people can interact with, that solves a problem for them. I still love that side of it, and it’s clear to me if nothing else that (for the foreseeable future, anyway) I have no desire whatsoever to go into management roles, “force multiplier” or not.</p>
<p>There’s a real trick here, because it’s not that I’m <em>not</em> building things in these roles. It’s just that building a team or a community is not quite the same thing—it does not scratch the same itch—as building a really elegant user interface component with an elegant and communicative animation. They’re both good; and they’re very, very different from each other.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I separate those on purpose: a project and a codebase are <em>related</em>, but they’re far from identical. A project can succeed—at least in the short term—with a terrible codebase; an excellent codebase is no guarantee of project success. Getting them aligned is rare, difficult, and rewarding.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>This seems like a typical overcorrection: against the idea that teaching is <em>unimportant</em>, it now comes into vogue to say that teaching is the <em>most</em> important. Imagine if we simply noted that teaching is some people’s gift and vocation, and not others; and that we can complement one another’s strengths by sharing our own—that it is not a zero-sum game but one in which <a href="https://www.esv.org/1+Corinthians+12+12/" title="1 Corinthians 12:12 and following">we are like hands and feet and elbows and ears, each one needing the other, none able to do without the others</a>.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>much of the time anyway; the <a href="https://v4.chriskrycho.com/2018/some-mild-burnout.html" title="Some Mild Burnout">burnout</a> I’m experiencing is related to some of the dynamics of this particular project<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Is Agile the Enemy of Good Design?2018-07-29T16:15:00-04:002018-07-29T16:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-07-29:/2018/is-agile-the-enemy-of-good-design.htmlThis is painfully accurate: “It is all about “ship, ship, ship”. We don’t pivot. We don’t refine. The product owner just wants to mark it done in Jira. The MVPs are an excuse to get crappy stuff out the door.”
<p><i class=editorial>In line with my recently stated <a href="https://v4.chriskrycho.com/2018/continuing-to-reflect-on-my-internet-presence.html" title="Continuing to Reflect on My Internet Presence">desire</a> to share out things I’m reading:</i></p>
<p>I just ran into a really excellent piece by John Cutler (who is also new to me), <a href="https://hackernoon.com/is-agile-the-enemy-of-good-design-14a35806cde7">Is Agile the Enemy (of Good Design)?</a>. The whole thing is worth your time, but a couple bits in particular stood out to me in light of some ongoing conversations with <a href="https://mobile.twitter.com/bmakuh">Ben Makuh</a> about wisdom and folly in startup culture.</p>
<p>In particular, these two bits from other designers Cutler cites sum up a <em>huge</em> amount of what’s wrong with a lot of what passes for “Agile” and indeed for “startup culture”:</p>
<blockquote>
<blockquote>
<p>The stuff you’re talking about rarely happens. It is all about “ship, ship, ship”. We don’t pivot. We don’t refine. The product owner just wants to mark it done in Jira. The MVPs are an excuse to get crappy stuff out the door. I guarantee that if I am methodical with my prototype testing, I can come up with something better because I will expose it to users. Not AS great as doing it the perfect Agile way, but better than nothing. I mean I struggle even to do usability testing. So you know…yes in theory all that is good, but it doesn’t happen.</p>
</blockquote>
</blockquote>
<!-- -->
<blockquote>
<blockquote>
<p>The enemy of both actual agilistas and the UX/design community in 2018 is, as John points out, short-term, output-centric thinking driven by a focus on short-term financial results, and all the cultural ramifications of this mindset.</p>
</blockquote>
</blockquote>
<p>These things are <em>antithetical</em> to the original ideas of the <a href="http://agilemanifesto.org">Manifesto for Agile Software Development</a>. But they’re also, well… pretty common. As Cutler puts it:</p>
<blockquote>
<p>…Agile — like many other things in cut-throat business — is often no match for the universal threats of output fetishism, success theater, and cutting corners. Trust me… these predated Agile.</p>
</blockquote>
<p>And this is some hot fire here:</p>
<blockquote>
<p>So where does this leave us? Designers have a right to be concerned. At least with waterfall no one prematurely yells “ship it” in the middle of the project. Designers have time to work instead of trying to jump on and off the sprint conveyor belt. And because the “thing” is built in a big batch, they have time to tackle the design problem holistically right from the beginning. “Good” waterfall beats abused Agile any day.</p>
</blockquote>
<p>He’s not wrong. <a href="https://hackernoon.com/is-agile-the-enemy-of-good-design-14a35806cde7">You should read the whole thing.</a></p>
Ember.js, TypeScript, and Class Properties2018-07-10T20:00:00-04:002018-07-10T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-07-10:/2018/ember-ts-class-properties.htmlI made an important mistake in my discussion of JavaScript and TypeScript class properties in relation to computed properties and injections in Ember earlier this year. Here's the fix you need.<p>A few months ago, I wrote a mostly-complete series describing the state of using <a href="https://typescriptlang.org">TypeScript</a> with <a href="https://emberjs.com">Ember</a> in 2018. I got one <em>very</em> important thing wrong in that series, and I’m back with the correction!<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>In that series, I showed an example of a component definition; it looked like this:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed, get } from '@ember/object';
import Computed, { alias, bool } from '@ember/object/computed';
import { inject as service } from '@ember/service';
import { assert } from '@ember/debug';
import { isNone } from '@ember/utils';

import Session from 'my-app/services/session';
import Person from 'my-app/models/person';

export default class AnExample extends Component {
  // -- Component arguments -- //
  model: Person; // required
  modifier?: string; // optional, thus the `?`

  // -- Injections -- //
  session: Computed<Session> = service();

  // -- Class properties -- //
  aString = 'this is fine';
  aCollection: string[] = [];

  // -- Computed properties -- //
  // TS correctly infers computed property types when the callback has a
  // return type annotation.
  fromModel = computed(
    'model.firstName',
    function(this: AnExample): string {
      return `My name is ${get(this.model, 'firstName')};`;
    }
  );

  aComputed = computed('aString', function(this: AnExample): number {
    return this.aString.length;
  });

  isLoggedIn = bool('session.user');
  savedUser: Computed<Person> = alias('session.user');

  actions = {
    addToCollection(this: AnExample, value: string) {
      const current = this.get('aCollection');
      this.set('aCollection', current.concat(value));
    }
  };

  constructor() {
    super();
    assert('`model` is required', !isNone(this.model));
    this.includeAhoy();
  }

  includeAhoy(this: AnExample) {
    if (!this.get('aCollection').includes('ahoy')) {
      this.set('aCollection', this.get('aCollection').concat('ahoy'));
    }
  }
}</code></pre>
<p>The problem here is all the computed property assignments and the actions hash assignments. The fact that this sample code ever worked at all was… an accident. It wasn’t <em>supposed</em> to work. I <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html#computed-properties">noted at the time</a> that this way of doing things had a performance tradeoff because computed properties ended up installed on every <em>instance</em> rather than on the <em>prototype</em>… and as it turns out, that was never intended to work. Only the prototype installation was supposed to work. And the <a href="https://github.com/emberjs/rfcs/blob/master/text/0281-es5-getters.md" title="RFC #0281"><abbr>ES5</abbr> getters implementation of computed properties</a> which landed in Ember 3.1 broke every computed property set up this way.</p>
<p>So if you can’t use class properties for this… how <em>do</em> you do it? There are two ways: the <code>.extend()</code> hack I mentioned <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html#computed-properties-1">previously</a>, and <a href="http://ember-decorators.github.io/ember-decorators/latest/">decorators</a>. (The Ember Decorators docs include a discussion of this topic as well—see <a href="http://ember-decorators.github.io/ember-decorators/latest/docs/class-fields">their discussion of class fields</a>.)</p>
<p>Note that throughout I’m assuming Ember 3.1+ and therefore <abbr>ES5</abbr> getter syntax (<code>this.property</code> instead of <code>this.get('property')</code>).</p>
<section id="extend" class="level2">
<h2><code>.extend()</code></h2>
<p>The first workaround uses <code>.extend()</code> in conjunction with a class definition. I originally wrote about this approach:</p>
<blockquote>
<p>If you need the absolute best performance, you can continue to install them on the prototype by doing this instead…</p>
</blockquote>
<p>As it turns out, it’s more like “If you want your app to work at all…”</p>
<p>Here’s how that would look with our full example from above. Note that there are three things which <em>must</em> go in the <code>.extend()</code> block with this approach: injections, computed properties, and the <code>actions</code> hash.</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed } from '@ember/object';
import { alias, bool } from '@ember/object/computed';
import { inject as service } from '@ember/service';
import { assert } from '@ember/debug';
import { isNone } from '@ember/utils';

import Person from 'my-app/models/person';

export default class AnExample extends Component.extend({
  // -- Injections -- //
  session: service('session'),

  // -- Computed properties -- //
  // TS correctly infers computed property types when the callback has a
  // return type annotation.
  fromModel: computed(
    'model.firstName',
    function(this: AnExample): string {
      return `My name is ${this.model.firstName};`;
    }
  ),

  aComputed: computed('aString', function(this: AnExample): number {
    return this.aString.length;
  }),

  isLoggedIn: bool('session.user'),
  savedUser: alias('session.user') as Person,

  actions: {
    addToCollection(this: AnExample, value: string) {
      this.set('aCollection', this.aCollection.concat(value));
    }
  },
}) {
  // -- Component arguments -- //
  model!: Person; // required
  modifier?: string; // optional, thus the `?`

  // -- Class properties -- //
  aString = 'this is fine';
  aCollection: string[] = [];

  constructor() {
    super();
    assert('`model` is required', !isNone(this.model));
    this.includeAhoy();
  }

  includeAhoy(this: AnExample) {
    if (!this.aCollection.includes('ahoy')) {
      this.set('aCollection', this.aCollection.concat('ahoy'));
    }
  }
}</code></pre>
<p>There are three main things to note here.</p>
<p>First, check out the <code>service('session')</code> injection. We need the name of the service being injected for TypeScript to be able to resolve the type correctly (which it does by using “type registries,” as discussed briefly <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html#fn1">in this footnote</a> in my series earlier this year). The alternative is writing <code>session: service() as Session</code>—a type cast—which is <em>fine</em> but isn’t particularly idiomatic TypeScript.</p>
<p>Second, notice that we do have to use a type cast, <code>as Person</code>, for the <code>savedUser</code> definition. While many computed property macros and the <code>computed</code> helper itself can properly infer the type of the resulting computed property, macros which accept nested keys do not and cannot. Thus, <code>bool</code> can resolve its type to a <code>boolean</code>, but <code>readOnly</code> or <code>alias</code> have to resolve their type as <code>any</code>. The value passed to them could be a strangely shaped string key on the local object (<code>['like.a.path']: true</code>) or an actual path through multiple objects. (This is the same limitation that means we cannot do nested <code>get</code> lookups.)</p>
<p>Third, as I noted even when we were doing this the <em>wrong</em> way, with class field assignment, we need to explicitly specify the type of <code>this</code> for callback passed in to define the computed properties. In the context of a <code>.extend()</code> invocation, though, this sometimes falls down. You’ll see an error like this:</p>
<blockquote>
<p>‘AnExample’ is referenced directly or indirectly in its own base expression.</p>
</blockquote>
<p>This doesn’t happen for <em>all</em> computed properties, but it happens often enough to be very annoying—and it <em>always</em> happens with Ember Concurrency tasks. (More on this <a href="#ember-concurrency">below</a>.) This problem was actually the original motivation for my experimentation with assigning computed properties to class fields.</p>
<p>This set of problems with defining computed properties and injections in an <code>.extend()</code> invocation is a major motivator for my team in eagerly adopting decorators.</p>
</section>
<section id="decorators" class="level2">
<h2>Decorators</h2>
<p>The cleaner, but currently still experimental, way to do this is to use Ember Decorators.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> To use these, you should run <code>ember install ember-decorators</code> and then set the <code>experimentalDecorators</code> compiler option to <code>true</code> in your <code>tsconfig.json</code>.</p>
<p>Once you’ve installed the decorators package, you can update your component. In general, the imports match exactly to the Ember module imports, just with <code>@ember-decorators</code> as the top-level package rather than <code>@ember</code>. Here’s how our component looks using decorators:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { assert } from '@ember/debug';
import { isNone } from '@ember/utils';

import { action, computed } from '@ember-decorators/object';
import { alias, bool } from '@ember-decorators/object/computed';
import { service } from '@ember-decorators/service';

import Session from 'my-app/services/session';
import Person from 'my-app/models/person';

export default class AnExample extends Component {
  // -- Component arguments -- //
  model!: Person; // required
  modifier?: string; // optional, thus the `?`

  // -- Injections -- //
  @service session: Session;

  // -- Class properties -- //
  aString = 'this is fine';
  aCollection: string[] = [];

  // -- Computed properties -- //
  // TS correctly infers computed property types when the callback has a
  // return type annotation.
  @computed('model.firstName')
  get fromModel(): string {
    return `My name is ${this.model.firstName}`;
  }

  @computed('aString')
  get aComputed(): number {
    return this.aString.length;
  }

  @bool('session.user') isLoggedIn: boolean;
  @alias('session.user') savedUser: Person;

  @action
  addToCollection(this: AnExample, value: string) {
    this.set('aCollection', this.aCollection.concat(value));
  }

  constructor() {
    super();
    assert('`model` is required', !isNone(this.model));
    this.includeAhoy();
  }

  includeAhoy(this: AnExample) {
    if (!this.aCollection.includes('ahoy')) {
      this.set('aCollection', this.aCollection.concat('ahoy'));
    }
  }
}</code></pre>
<p>First, notice that using decorators switches us to using actual <abbr>ES5</abbr> getters. This is <em>exactly</em> the same thing that <a href="https://github.com/emberjs/rfcs/blob/master/text/0281-es5-getters.md" title="RFC #0281"><abbr>RFC</abbr> #0281</a> specified, and which was implemented for Ember’s traditional computed property and injection functions in Ember 3.1. What’s extra nice, though, is that decorators are backwards compatible <a href="http://ember-decorators.github.io/ember-decorators/latest/docs/stability-and-support#ember-support">all the way to Ember 1.11</a>. (You won’t get the <abbr>ES5</abbr> getters on versions prior to Ember 3.1—there the decorators <em>just</em> install things on the prototype—but you will at least get the correct behavior.)</p>
<p>Second, note that we don’t get type inference for the computed property macros like <code>@bool</code> here. That’s because decorators are not currently allowed to modify the <em>type</em> of the thing they’re decorating from TypeScript’s perspective. Now, decorators can—and <em>do</em>!—modify the type of the thing they decorate at runtime; it’s just that <abbr>TS</abbr> doesn’t yet capture that. This means that <em>all</em> decorated fields will still require type annotations, not just a subset as in the <code>.extend()</code> world. It’s annoying—especially in the case of things like <code>@bool</code>, where it <em>really</em> seems like we ought to be able to just tell TypeScript that this means the thing is a boolean rather than writing <code>@bool('dependentKey') someProp: boolean</code>.</p>
<p>This leads us to our final point to notice: we also need the type annotations for service (or controller) injections—but we do <em>not</em> need the string keys for those injections.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> The net of this is that the two approaches to injection are roughly equally ergonomic.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<pre class="ts"><code>// the old way
session: service('session'),

// the new way
@service session: Session;</code></pre>
</section>
<section id="ember-concurrency" class="level2">
<h2>Ember Concurrency</h2>
<p>One other thing I need to draw your attention to here: I and a few others have taken a stab at writing type definitions for <a href="http://ember-concurrency.com/docs/introduction/">Ember Concurrency</a>. Unfortunately, typings that <em>type-check</em> run smack dab into the fact that as of 3.1 that style doesn’t <em>work</em>; and typings that <em>work</em> cannot be type-checked at present. You can’t even use decorators to push your way to a solution. Nor is there a lot of hope on the horizon for this reality to change.</p>
<p>You can see some of the discussion as to <em>why</em> <a href="https://github.com/machty/ember-concurrency/pull/209#issuecomment-403246551">starting here</a> in one pull request for them; it all gets back to the limitation I mentioned above: TypeScript doesn’t let you change the types of things with decorators. Unfortunately, there’s no reason to believe that will change anytime soon. This is a <em>fundamental</em> conflict between the Ember Object model and modern JavaScript—and specifically TypeScript’s understanding of it.</p>
<p>I am still mulling over solutions to that problem (as are others), and we’ll be continuing to work on this idea in <a href="https://embercommunity.slack.com/messages/C2F8Q3SK1">#-topic-typescript</a> in the Ember Community Slack (and publicizing any good ideas we come up with there in a searchable location, of course). For today, the best thing you can do is explicitly set the <code>this</code> type to <code>any</code> for the task property generator function callback, and use type casts internally if you look up services or other properties from the containing object.</p>
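<p>In practice, that workaround looks something like the following sketch. (Only the <code>task()</code>-wrapping-a-generator-function shape is Ember Concurrency’s actual API here; the <code>session</code> service and its <code>save()</code> method are hypothetical, for illustration only.)</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { task } from 'ember-concurrency';

import Session from 'my-app/services/session';

export default class AnExample extends Component.extend({
  // Setting `this: any` sidesteps the type-checking limitation; we recover
  // some safety by hand with a cast when looking up properties.
  saveUser: task(function*(this: any) {
    const session = this.get('session') as Session;
    yield session.save(); // hypothetical method
  }),
}) {}</code></pre>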
</section>
<section id="summary-mea-culpa" class="level2">
<h2>Summary: <em>mea culpa</em></h2>
<p>Sorry again to everyone I misled along the way with my earlier, very wrong advice! Hopefully this helps clear up the state of things and will help you keep from falling into this tar pit going forward!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I don’t feel too bad about having gotten it wrong: no one who read the posts noticed the problem at the time, and it was subtle and easy to miss… because, at the time, everything actually <em>worked</em>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>It’s experimental because decorators are still only at Stage 2 in the <abbr>TC39</abbr> process. They <em>may</em> advance at this month’s meeting.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>If you’re using a non-default name, like <code>specialSession</code>, for the name of the property, the usual rules apply for injections. In that case, you’d write the injection like this:</p>
<pre class="ts"><code>import Component from '@ember/component';
import { service } from '@ember-decorators/service';
import Session from 'my-app/services/session';

export default class AnExample extends Component {
  @service('session') specialSession: Session;
}</code></pre>
<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></li>
<li id="fn4" role="doc-endnote"><p>Files do get an extra import in the decorator version… but as it happens, I’m more than okay with that; I’d actually <em>prefer</em> explicit imports of dependencies personally.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Client-Side Ideas for Server-Side Apps2018-06-07T16:00:00-04:002018-06-07T16:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-06-07:/2018/client-side-ideas-for-server-side-apps.htmlIt turns out that a bunch of the tools we've built for front-end web development are really, really nice ways to build UI. Who could have guessed, from all the kvetching you hear about them?
<p><i class="editorial">A quick note: I drafted this back in June, but forgot to actually publish it!</i></p>
<p>I’ve been working on the design of a particular website I maintain (not this one; keep your eyes open), and besides the fact that I have learned a <em>lot</em> about web design in general in the years since I originally built that site, I discovered that I desperately want to use a component-driven model for developing sites, like the one I’m used to on the client.</p>
<p>In my day job, I’m used to breaking down my application into discrete components with their own responsibilities. I’ve gotten spoiled by the component-driven model that dominates the front-end web development world now. (My tool of choice is usually Ember, but you’d get the same with React or Vue or whatever else.) And on the server development side, I’m desperately missing those.</p>
<p>I’m using <a href="https://getpelican.com">Pelican</a> for this particular site because that’s what it’s been built on for the past few years and I have no desire to change it at the moment. And that means using <a href="http://jinja.pocoo.org">Jinja2</a> for templating. And Jinja2 has no notion of <em>components</em>. Partials, yes—with all the implicit context you have to carry around in your head. It has a few different ways you can sort of hack your way to something sort of vaguely component-like using some of its <a href="http://jinja.pocoo.org/docs/2.10/templates/#block-assignments">fancy features</a>. But without any kind of “argument” or “return value”/yielding (<em>a la</em> the ideas I discussed in <a href="https://v4.chriskrycho.com/2018/higher-order-components-in-emberjs.html" title="Higher-Order Components in Ember.js">this post</a>). All of the solutions available in <em>any</em> of these server-side frameworks for breaking up pages are <em>partial</em>-style: which means they’re basically just dumb string includes!</p>
<p>There’s nothing like the way I solve this problem in an Ember app every single day: <em>components</em>. There’s no particular reason that the same component-based approach that has flourished on the client <em>can’t</em> be done on the server side. It just… hasn’t, mostly. Which is kind of weird.</p>
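<p>To make the contrast concrete, here is a minimal sketch in TypeScript of what a server-side “component” can be: a plain function with explicit arguments that returns rendered output, composing by ordinary function calls instead of implicit-context string includes. (The names are hypothetical, and it renders Markdown-style text rather than <abbr>HTML</abbr> just to keep the snippet short.)</p>
<pre class="typescript"><code>// A server-side "component": explicit arguments in, rendered output out.
// There is no implicit template context to carry around in your head.
type CardArgs = { title: string; body: string };

function card({ title, body }: CardArgs): string {
  return `## ${title}\n${body}`;
}

// Components compose by ordinary function calls rather than string includes.
function page(cards: CardArgs[]): string {
  return cards.map(card).join('\n\n');
}</code></pre>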
<p>Until this week, projects like <a href="https://github.com/gatsbyjs/gatsby">Gatsby</a> in the React world made no sense to me at all. It seemed like using a sledgehammer to kill a spider. But after this week, I’m suddenly <em>very</em> interested in it—and I might in fact experiment with some server-side component-driven approaches to this at some point in the future—because a couple of days mucking with Jinja2 has me desperately wishing for a good old Ember or React component.</p>
<hr />
<p>As an aside: people talk about client-side development being overly complicated. I know some of what they mean, but the truth is that my experience hacking on this over the last week has actually served to remind me of just how <em>great</em> the tooling is in this world.</p>
<p>It’s true that there’s more complexity in many ways to building things with Ember or React or whatever other <abbr>JS</abbr>-powered client-side framework than with plain-old <abbr>HTML</abbr>. It’s more complex even than with something like Jinja2 or Liquid or whatever other server-side templating language you use. There’s good reason for that complexity, though: it comes with <em>more power</em> and <em>more expressiveness</em>. And the thing many critiquing the front-end seem to miss is that once you are used to having that power and expressiveness, it’s <em>really</em> painful to go back to not having it.</p>
Sum Type Constructors in TypeScript2018-05-31T07:00:00-04:002018-05-31T07:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-31:/2018/sum-type-constructors-in-typescript.htmlYou can build the same kind of sophisticated discriminated union types in TypeScript as you'd get in Elm or F♯. Kind of. With a lot of work. (Here’s how.)<p>A pretty common pattern I’ve seen is to have three basic states for some kind of <abbr>HTTP</abbr> request: <em>loading</em>, <em>failure</em>, and <em>success</em>. Since each of these has its own associated data, it’s a really good fit for a discriminated union or sum type. In a language like Elm (or F<sup>♯</sup> or Haskell or PureScript or…) you’d write that basically like this:</p>
<pre class="elm"><code>module Fetch exposing (State)

type alias HTTPStatusCode = Int

type alias ErrorData = { code : HTTPStatusCode, reason : String }

type State a
    = Loading
    | Failure ErrorData
    | Success a</code></pre>
<p>Because I find that pattern extremely helpful, I’ve at times gone out of my way to replicate it in TypeScript. And what you get is… verbose. It’s a necessary evil, given what TypeScript is doing (layering on top of JavaScript), and so much so that I wouldn’t actually recommend this unless you’re already doing this kind of programming a lot and find it pretty natural. If you are, though, here’s how you get the equivalent of those four lines of Elm in TypeScript:</p>
<pre class="typescript"><code>type HttpStatusCode = number;

export enum Type { Loading, Failure, Success }

export class Loading {
  readonly type: Type.Loading = Type.Loading;

  static new() {
    return new Loading();
  }
}

type ErrorData = { code: HttpStatusCode, reason: string };

export class Failure {
  readonly type: Type.Failure = Type.Failure;

  constructor(readonly value: ErrorData) {}

  static new(value: ErrorData) {
    return new Failure(value);
  }
}

export class Success<T> {
  readonly type: Type.Success = Type.Success;

  constructor(readonly value: T) {}

  static new<A>(value: A) {
    return new Success(value);
  }
}

export type FetchState<T> = Loading | Failure | Success<T>;

export const FetchState = {
  Type,
  Loading,
  Failure,
  Success,
};

export default FetchState;</code></pre>
<p>That’s a <em>lot</em> more code to do the same thing, and it would still be verbose even if you dropped the static constructors. You don’t really want to drop them, either: without them you can’t construct the values in a functional style, but <em>have</em> to write <code>new Loading()</code> or whatever to construct them.</p>
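<p>The payoff for all that ceremony is the same exhaustive matching you get in Elm. Here is a minimal, self-contained sketch of the idea (the types are re-declared inline in structural form so the snippet stands alone; with the real <code>FetchState</code> classes you would switch on <code>type</code> the same way):</p>
<pre class="typescript"><code>enum Type { Loading, Failure, Success }

type ErrorData = { code: number; reason: string };

type State<T> =
  | { type: Type.Loading }
  | { type: Type.Failure; value: ErrorData }
  | { type: Type.Success; value: T };

function describe<T>(state: State<T>): string {
  switch (state.type) {
    case Type.Loading:
      return 'loading';
    case Type.Failure:
      return `failed (${state.value.code}): ${state.value.reason}`;
    case Type.Success:
      return `loaded: ${state.value}`;
    default: {
      // If a variant is ever added to `State`, this assignment fails to
      // compile, forcing every `switch` like this one to be updated.
      const exhaustive: never = state;
      return exhaustive;
    }
  }
}</code></pre>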
<p>You can make this work. And I do. And honestly, it’s amazing that TypeScript can do this at all—a real testament to the sophistication of the TypeScript type system and the ingenuity that has gone into it.</p>
<p>But have I mentioned recently that I’d <em>really</em> prefer to be writing something like F<sup>♯</sup> or Elm than TypeScript?</p>
#EmberJS2018, Part 42018-05-29T07:45:00-04:002018-05-29T07:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-29:/2018/emberjs2018-part-4.htmlWe need to shift from a posture of defensiveness about Ember.js to one of embracing the ecosystem, and embracing our role in the ecosystem.<p>Following <a href="https://blog.rust-lang.org/2018/01/03/new-years-rust-a-call-for-community-blogposts.html">the example</a> of the Rust community, the <a href="https://emberjs.com">Ember.js</a> team has <a href="https://emberjs.com/blog/2018/05/02/ember-2018-roadmap-call-for-posts.html" title="Ember's 2018 Roadmap: A Call for Blog Posts">called for blog posts</a> as the first step in setting the 2018 roadmap (which will formally happen through the normal <a href="https://github.com/emberjs/rfcs"><abbr title="Request for Comments">RFC</abbr> process</a>). This is my contribution.</p>
<p>There are three major themes I think should characterize the Ember.js community and project for the rest of 2018:</p>
<ol type="1">
<li><a href="http://v4.chriskrycho.com/2018/emberjs2018-part-1.html"><strong>Finishing What We’ve Started</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-2.html"><strong>Doubling Down on Documentation</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-3.html"><strong>Defaulting to Public for Discussions</strong></a></li>
<li><strong>Embracing the Ecosystem</strong> (this post)</li>
</ol>
<hr />
<p>Over the last few weeks, I’ve talked about a few big ideas that I think the Ember.js community should go after in 2018 which will help the framework excel over the next few years. This last one (like Part 3 before it) is more a <em>culture shift</em> than a matter of <em>things to build</em>.</p>
<p>We need to shift from a posture of defensiveness about Ember.js to one of embracing the ecosystem, and embracing our role in the ecosystem.</p>
<p>It’s easy to end up in an us-vs.-them mentality when looking at different libraries and frameworks. It’s doubly easy to go there when you often hear “Isn’t Ember dead?” or variations on that theme. We should avoid that way of thinking anyway. And there are three big pieces to this: <em>contributing outwards</em>, <em>smoothing the paths into Ember</em> from other ecosystems, and <em>embracing the rest of the ecosystem</em>.</p>
<section id="contributing-outwards" class="level3">
<h3>Contributing outwards</h3>
<p>There is genuinely great stuff happening all over the place in the front-end, and many of the things we love about working with Ember today have come directly out of e.g. React—hello, “data-down-actions-up”! The same is true in reverse: Ember has contributed many important ideas to the broader front-end ecosystem, from its early emphasis on rigorously linking URLs and application state to helping pioneer and popularize the use of good command line tooling, to more recent emphasis on <em>compilation</em> as a way of solving certain classes of problems.</p>
<p>So as we build all of these things, one of the best things to do—and, I believe, one of the ways we help Ember grow!—is think about how our work can benefit the larger ecosystem. When you build a library, you should consider whether there are parts of it that <em>don’t</em> have to be Ember specific. For example, a colleague and I recently built out the foundation of a solution for well-rationalized form-handling.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> We built it in two pieces, though: a core library in TypeScript that will work as well in Vue or React as in Ember, and an Ember component library that consumes that core functionality.</p>
<p>The more we can take that tack in <em>general</em>, the better. It’s the first piece of making the gap between people’s experience in other parts of the front-end ecosystem and the Ember part smaller. Ember will seem much more interesting if people find themselves <em>often</em> getting value out of things we’ve built.</p>
</section>
<section id="smoothing-the-paths-in" class="level3">
<h3>Smoothing the paths in</h3>
<p>The flip side of this is figuring out ways to make it easier for people coming <em>into</em> Ember.js to map patterns from their existing experience onto the framework’s tools and patterns. The simple reality is that there are far, far more developers familiar with React, Angular, and Vue than with modern Ember.js. Ember genuinely has a lot to offer there, but we need to make it easier for people to see that value and to recognize how it’s a lot like the good parts of what they already know!</p>
<p>This is primarily a communications effort; it means changes to the docs and to the homepage, but also to what we do in blog posts and tutorials and talks as a community!</p>
<p>At the highest level, I cannot recommend strongly enough the model suggested by Chris Garrett in <a href="https://medium.com/@pzuraq/emberjs-2018-ember-as-a-component-service-framework-2e49492734f1">his #EmberJS2018 post</a>: treat Ember.js (both in the docs and also in our presentations and communications about it) as a <em>component-service</em> framework. This not only maps more easily to patterns people know from other communities, it has the really important effect of demystifying a lot of the “magic” that seems perplexing in the framework, especially around Ember Data—which is, after all, just a service you can inject!</p>
<p>When we write blog posts, we can accomplish a lot of this simply by being aware of the rest of the ecosystem and making analogies there. You can see an example of how I’ve started trying to do this in my recent blog post on <a href="http://v4.chriskrycho.com/2018/higher-order-components-in-emberjs.html">higher-order components in Ember.js</a>. It was just one little line:</p>
<blockquote>
<p>In React, the [higher-order components] pattern as a whole is often known as the <code>renderProps</code> pattern, for the way you most often accomplish it. It’s all the same idea, though!</p>
</blockquote>
<p>That’s not a lot of extra work, but it means that if someone searches for “renderProps Ember.js” there now exists a blog post which will help them map their existing knowledge over! I wasn’t writing a “how to do React renderProps in Ember” post—but I still smoothed the path in just a little bit. We should be doing that everywhere we can. It’s usually not a lot of effort to make those kinds of moves in talks or blog posts, but the yield is high: Ember stops being some super weird foreign entity and starts looking like a variation on a theme.</p>
<p>There is also a much larger effort we <em>do</em> need to undertake to make that story clearer on the home page and in the documentation—an effort that I know is already very much in consideration from chatting with the really amazing crew in <code>#-team-learning</code> on Slack. In the <strong>how you can help</strong> bucket: seriously please go into that channel and start chipping away at small tasks! There’s (<a href="https://m.youtube.com/watch?v=Abu2BNixXak" title="“Becoming a Contributor”, my Rust Belt Rust 2017 talk">always!</a>) way more work to be done than hands to do it.</p>
<p>I think this also means prioritizing technical work that eases this. The sooner we can land the Glimmer component model, the better. The sooner we can hash out a more cogent story on routes and controllers and components, the better. The sooner we can make “npm-install-your-way-to-Ember” an actually viable strategy, the better. Because each of those things makes Ember dramatically more accessible to people working in other ecosystems today; each lowers the barrier to entry in some substantial way; and the combination of them all makes it far more viable for someone to <em>try</em> Ember in an existing application.</p>
</section>
<section id="embracing-the-rest-of-the-ecosystem" class="level3">
<h3>Embracing the rest of the ecosystem</h3>
<p>The final piece of this is actively embracing the best parts of the rest of the ecosystem.</p>
<p>We as a community need to avoid defensiveness and recognize that there’s a <em>lot</em> of good in the rest of the front-end space. I understand how it can be easy to feel defensive. Being dismissed, having people be surprised that the project even still exists, etc. gets really old after a while. But however reasonable that defensiveness is, it’s ultimately counterproductive. It makes us hold onto things we don’t need to hold onto, and it makes us ignore things that might benefit us, and as a result it can make us <em>needlessly weird</em> technically.</p>
<p><em>Needless weirdness</em> is an important idea I’d love for us to keep in mind. Any time you’re willing to move more slowly, to let the “new shiny” bake for a while to see whether it’s genuinely worth investing in, you’re going to seem weird. Likewise when you strongly embrace stability, in a broader ecosystem which hasn’t. Likewise when you value convention over configuration, in a broader ecosystem which hasn’t. But it’s important to be able to distinguish between <em>needful</em> and <em>needless</em> weirdness.</p>
<p>We should have regular conversations as a community—through <abbr title="request for comments">RFC</abbr>s, through forum threads, through blog post arguments, etc.—about what’s <em>needful</em> weirdness, and what has become <em>needless</em> weirdness. (Because which weird things are needful change over time!) We should gleefully embrace the needful weirdness. But we should equally gleefully drop the needless weirdness.</p>
<p>What makes Ember special is, by and large, <em>not</em> the specific technical implementations we’ve landed on.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> What makes Ember valuable is having a coherent top-to-bottom story and a rich community with a commitment to aggressively seeking out shared solutions, and an even deeper commitment to providing good migration paths forward when we change things.</p>
<p>But here’s the thing: those values are increasingly (if slowly) being embraced <em>outside</em> the Ember ecosystem as well. Ember can contribute and even lead in many ways here—but only if we start actively embracing the good of other parts of the front-end ecosystem.</p>
<p>For example: I’ve heard more times than I can count over the last few years that our use of Broccoli.js is really important for Ember, and the reality is… that isn’t true. We could have built on top of just about <em>any</em> solution, and it would have been <em>fine</em>. Broccoli <em>does</em> have some advantages; it also has some real disadvantages (one of which is that we’re the only ones using it!), and we should forthrightly acknowledge those. By the same token, if Webpack is working well for many people, let’s neither trash it in discussion nor ignore it in implementation. Instead, let’s make it easy for people to integrate Webpack into the Ember world.</p>
<p>That doesn’t oblige us to chuck out our existing build tooling! It just means making our own build pipelines robust enough to interoperate well with other packaging systems. And that’s precisely what the Ember <abbr>CLI</abbr> team has been doing! This needs to be our pattern across the board going forward.</p>
<p>It’s truly well and good to have made a call a few years ago, and to be going out of our way to mitigate the costs of churn. At the same time, we need to communicate—to a degree that probably feels like <em>over</em>communicating to the people who already understand all these decisions!—so that both the original rationales and the current status are accessible to all the people who <em>weren’t</em> there when the decisions were made.</p>
<p>Insofar as Broccoli and Webpack actually solve meaningfully different problems—or at least <em>excel</em> at solving different problems—<em>explaining</em> that difference is one of the most important things we can do as well. Props to Chris Thoburn (<a href="https://twitter.com/runspired">@runspired</a>) for doing this in a few different contexts recently, but we need a lot more of it—because it’s one example I think most people both inside and outside the Ember community have just kind of scratched their heads at for a long time (me included).</p>
<p>Again: I take the Broccoli/Webpack example simply because it’s an obvious one. The broader point is that we need to find ways to embrace the shared solutions which emerge not only in the Ember community but in the front-end ecosystem as a whole, even as we also do the hard work to make our own shared solutions useful to the rest of the front-end ecosystem. That two-way exchange will benefit us, and smooth the paths in for newcomers, and benefit the rest of the ecosystem, too—and that’s a huge win. Because in a very real sense, we front-end developers are all in this together.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Keep your eyes open; you’ll see a blog post announcing that along with a full set of documentation for it sometime in the next month or so!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>To be clear: many, though certainly not all, of those specific implementations I like, but that’s beside the point.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Higher-Order Components in Ember.js2018-05-26T14:00:00-04:002018-05-28T06:50:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-26:/2018/higher-order-components-in-emberjs.htmlComponents as arguments! Components getting yielded! Components everywhere! A powerful way to make your Ember.js components more flexible and composeable.
<p>One of the most powerful patterns in programming is the idea of <em>higher-order functions</em>: functions which can take other functions as arguments or return them as their return values. If you’ve spent much time at all working in JavaScript, you’ve certainly encountered these—whether you’re using <code>Array.map</code> to transform the values in an array, or passing a function as an argument to an event handler.</p>
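<p>As a quick refresher, both directions of the pattern are everyday JavaScript/TypeScript (an illustrative sketch, not from any particular library):</p>

```typescript
// `map` is a higher-order function: it takes another function as an argument.
const doubled = [1, 2, 3].map((n) => n * 2);
console.log(doubled); // [ 2, 4, 6 ]

// Functions can also *return* functions:
const makeHandler = (name: string) => () => `clicked ${name}`;
const onSave = makeHandler("save");
console.log(onSave()); // clicked save
```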
<p>The same pattern is incredibly useful in building components, and most modern front-end frameworks support it—including Ember.js! (In React, the pattern as a whole is often known as the <code>renderProps</code> pattern, for the way you most often accomplish it. It’s all the same idea, though!)</p>
<p>In this little post, I’ll show you how to build a small “higher-order component” in Ember.js, hopefully demystifying that term a little bit along the way. (If you just want to see how the pieces fit together, you can see the finished app <a href="https://github.com/chriskrycho/ember-hoc-example">in this repo</a>.)</p>
<aside>
<p>I’m going to be using classes and decorators throughout. Both are very much ready-to-go in Ember, and I commend them to you! I’m also going to be using some of the new <a href="https://emberjs.com/blog/2018/04/13/ember-3-1-released.html#toc_introducing-optional-features-3-of-4">optional features</a> available in Ember 3.1+ to use template-only components!</p>
<p>Note that one of the most important consequences of this is that arguments have to be referenced as <code>@theArgumentName</code> rather than just <code>theArgumentName</code> in templates. The reason is precisely that there is no backing JavaScript component. In old-school Ember.js components, <code>{{theArgumentName}}</code> is implicitly turned into <code>{{this.theArgumentName}}</code>, which does a lookup on the backing component. In Glimmer-style components—of which these are the first part—arguments live on a designated <code>args</code> property and are accessible in templates via <code>@theArgumentName</code> instead.</p>
</aside>
<section id="higher-order-components-what-are-they" class="level2">
<h2>Higher-Order Components, What Are They</h2>
<p>Just like with a “higher-order function,” all we mean when we talk about a “higher-order component” is a component which takes other components as arguments, returns other components itself (in Ember’s case via <code>yield</code> in a template), or both.</p>
<p>The thing we’re actually going to build here is a “modal” which accepts an optional button as an argument, and which yields out a component for dividing the modal into sections visually so you can pass your own content in and have it look just right. This is closely based on a component my colleagues and I at Olo built recently, just with some of our specific details stripped away to get at the actually important bits. Here’s what it looks like in practice:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/hoc-rendered.png" alt="a modal with sectioned text and a close button" /><figcaption>a modal with sectioned text and a close button</figcaption>
</figure>
<p>The goal for the button argument is to let the modal render the button the caller passes in, while not being concerned with the <em>functionality</em> of the button. Otherwise, we’d have to tie the “API” of the modal to the details of button behavior, bind more actions into it, etc.</p>
<p>The goal for the yielded sectioning component is for whatever is rendering the modal itself to be able to pass content in and get it chunked up however the modal decides is appropriate—the modal can display its own styles, etc.—without having to worry about the details of applying classes or sectioning up the content itself.</p>
<p>In short, we want to <em>separate our concerns</em>: the modal knows how to lay out its contents and where to put buttons, but it doesn’t want to have to know <em>anything</em> about what the buttons do. The most complicated interaction in the world could be going on, and the modal won’t have to care. Likewise, things <em>using</em> the modal can pass content and buttons into it, and let the modal manage its own layout and so on without having to be concerned with the details of that. So what does that look like in practice?</p>
<p>The approach I use here builds on the “contextual components” pattern in Ember.js. The main new idea is that the <em>context</em> includes components!</p>
</section>
<section id="implementing-it" class="level2">
<h2>Implementing It</h2>
<p>We have three components here:</p>
<ul>
<li>a button</li>
<li>a modal</li>
<li>a modal section</li>
</ul>
<p>Since Ember.js still (for now!) requires component names to be at least two words separated by a dash, we’ll just call these <code>x-button</code>, <code>x-modal</code>, and <code>x-modal-section</code>.</p>
<section id="x-button" class="level3">
<h3><code>x-button</code></h3>
<p>The button component, we’ll keep pretty simple: it’s just a button element with a given label and an action bound to it:</p>
<pre class="handlebars"><code><button class={{@buttonClass}} type='button' {{action @onClick}}>
  {{@label}}
</button></code></pre>
</section>
<section id="x-modal" class="level3">
<h3><code>x-modal</code></h3>
<p>The <code>x-modal</code> has the meat of the implementation.</p>
<pre class="handlebars"><code><div class='modal-backdrop'></div>
<div class='modal'>
  <div class='modal-content'>
    {{yield (hash section=(component 'x-modal-section'))}}
  </div>

  {{#if @button}}
    {{component @button buttonClass='modal-button'}}
  {{/if}}
</div></code></pre>
<p>The two things to notice here are the <code>yield</code> and the <code>component</code> helper.</p>
<p>The <code>yield</code> statement yields a <a href="https://www.emberjs.com/api/ember/3.1/classes/Ember.Templates.helpers"><code>hash</code></a> with one property: <code>section</code>. Yielding a hash is a convenient pattern in general. Here, we’re doing it to make the <abbr>API</abbr> nicer for users of this component. It means that if we name the yielded value <code>|modal|</code> when we invoke this, we’ll be able to write <code>modal.section</code> to name this particular yielded item. (You’ll see exactly this below.)</p>
<p>We use the <code>component</code> helper twice: once as the value of the <code>section</code> key in the yielded hash, and once for the <code>button</code> below. In both cases, the helper does the same thing: invokes a component! While the most common way to render a component is with its name, inline—like <code>{{x-modal}}</code>—you can always render it with the <code>component</code> helper and the name as a string: <code>{{component 'x-modal'}}</code>. This lets you render different components dynamically!</p>
<p>Let’s remember our initial analogy: the same way you can pass different functions to a higher-order function like <code>Array.prototype.map</code>, you can pass different components to a higher-order component like our <code>x-modal</code> here. And just like you can <em>return</em> a function from a higher-order function, we can <em>yield</em> a component from a higher-order component. Just like higher-order functions, the function passed in or returned just has to have the right shape.</p>
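<p>In plain TypeScript terms, the analogy looks like this (a hypothetical sketch; the names here are mine, not part of Ember’s <abbr>API</abbr>):</p>

```typescript
// A higher-order function that, like x-modal, both *takes* a function
// (the way the modal takes @button) and *returns* one (the way the
// modal yields modal.section).
type Formatter = (value: number) => string;

function withPrefix(prefix: string, format: Formatter): Formatter {
  return (value) => `${prefix}${format(value)}`;
}

// Any function with the right shape can be passed in…
const usd = withPrefix("$", (n) => n.toFixed(2));

// …and the returned function can be invoked over and over.
console.log(usd(12.5)); // $12.50
console.log(usd(3));    // $3.00
```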
<p>For example, the argument to <code>Array.prototype.map</code> needs to be a function which performs an operation on a single item in the array (and maybe also the index) and hands back the result of that operation. Similarly, the <code>button</code> argument to our <code>x-modal</code> needs to accept a <code>buttonClass</code> argument so that the modal can apply some styling to it. The same thing holds for the component being yielded back out: it has an <abbr>API</abbr> you should use to invoke it, just like any other.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>All of this gets at something really important: you can think of components as just being <em>pure functions</em>: they take some input in the form of arguments, and give you the output of what they <em>render</em> and what they <em>yield</em>—and they always give you the same rendered <abbr>HTML</abbr> and the same yielded values for the same inputs. They’re just functions!</p>
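<p>Stretching that point just a bit further (a hypothetical sketch, not how Glimmer actually renders), you could model <code>x-modal-section</code> as a pure function from its input to its rendered output:</p>

```typescript
// A "component" as a pure function: same input, same rendered output.
function modalSection(yieldedContent: string): string {
  return `<div class='modal-section'>${yieldedContent}</div>`;
}

// Calling it twice with the same input always gives the same result:
const once = modalSection("Here is some content!");
const twice = modalSection("Here is some content!");
console.log(once === twice); // true
```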
</section>
<section id="x-modal-section" class="level3">
<h3><code>x-modal-section</code></h3>
<p>The <code>x-modal-section</code> component is the simplest of all of these: it has no behavior, just some styling to actually chunk up the content:</p>
<pre class="handlebars"><code><div class='modal-section'>
  {{yield}}
</div></code></pre>
</section>
<section id="application-controller-and-template" class="level3">
<h3>Application controller and template</h3>
<p>Now, let’s use it in the context of the application template, where we can see how the pieces all fit together. First, let’s see the application controller backing it—nothing unusual here, just a simple toggle to show or hide the modal.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<pre class="ts"><code>import Controller from "@ember/controller";
import { action } from "@ember-decorators/object";

export default class Application extends Controller {
  constructor() {
    super(...arguments);
    this.showModal = false;
  }

  @action
  showIt() {
    this.set("showModal", true);
  }

  @action
  hideIt() {
    this.set("showModal", false);
  }
}</code></pre>
<p>Now for the interesting bit—the template where we invoke <code>x-modal</code> and use its higher-order-component functionality:</p>
<pre class="handlebars"><code>{{#if showModal}}
  {{#x-modal
    button=(component 'x-button'
      label='Close modal!'
      onClick=(action 'hideIt')
    )
    as |modal|
  }}
    {{#modal.section}}
      Here is some content!
    {{/modal.section}}

    {{#modal.section}}
      Here is some other content.
    {{/modal.section}}

    {{#modal.section}}
      <p>The content can have its own sections, as you'd expect!</p>
      <p>Nothing crazy going on here. Just a normal template!</p>
    {{/modal.section}}
  {{/x-modal}}
{{/if}}

<button class='button' {{action 'showIt'}}>Show modal</button>

<!-- some other content on the page --></code></pre>
<p>We invoke the block form of <code>x-modal</code> just like we would any block component, and we get back the thing it yields with <code>as |modal|</code>. The differences: one of the arguments we pass in is itself a component, and <code>modal</code> is a <code>hash</code> (an object!) with a property named <code>section</code>, which is the <code>x-modal-section</code> component.</p>
<p>Again, you can think of this like calling a function with one function as an argument and getting another function back as its return value—that returned function being something we could call over and over again once we had it.</p>
<p>Here, we “call the function”—invoke the <code>x-modal</code> component—with <code>(component 'x-button')</code> as its argument, and the returned <code>modal.section</code> is a component we can invoke like a normal component.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> We could even pass it into some <em>other</em> component itself if we so desired.</p>
<p>And that’s really all there is to it!</p>
</section>
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>“Higher-order components” aren’t necessarily something you need all the time, but they’re really convenient and very powerful when you <em>do</em> need them. They’re also a lot less complicated than the name might seem! Components are just things you can pass around in the context of a component template—they’re the <em>functions</em> of Handlebars.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<p>Splitting things into components like this does increase complexity, and in particular it can increase the mental overhead of keeping track of how the pieces fit together. However, they also let us cleanly separate different pieces of functionality from each other. Doing it this way means that our modal can be concerned about <em>positioning</em> a button without needing to expose an <abbr>API</abbr> for all of the button’s own mechanics for handling clicks and performing whatever actions are necessary. That makes our modal <em>and</em> our button way more reusable across our application. The button can be used <em>wherever</em> a button is useful, and the modal doesn’t need to know or care anything about it. Likewise, the button has no need to know anything about the context where it’s being used; from the button component’s perspective, it just gets wired up to some actions as usual. The same thing goes for the modal sections: they let us abstract over how the DOM is laid out, what classes are applied to it, and so on—they chunk up the modal, but the modal itself maintains responsibility for how that chunking up happens. And the caller doesn’t even <em>have</em> to use that; it’s just a tool that’s available for that purpose.</p>
<p>To sum it all up, I’ll just reiterate my earlier description: components are just like pure functions: the same inputs give you the same outputs—and, just like functions, those inputs and outputs can be other functions, that is, other components.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If you want a good way to document the things a component <code>yield</code>s, check out <a href="https://ember-learn.github.io/ember-cli-addon-docs/latest/docs/api/components/docs-demo">ember-cli-addon-docs</a>, which can read an <code>@yield</code> JSDoc annotation.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>And it could just as well be a component; the top-level controller template is just where we put our main app functionality.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>We could also simplify this since we’re only returning one component, and if we had the full Glimmer component story, this could look <em>very</em> nice:</p>
<pre class="ts"><code><Modal @button={{component 'Button'}} as |Section|>
  <Section>
    Some content!
  </Section>

  <Section>
    Some more content!
  </Section>

  <Section>
    <p>The content can have its own sections, as you'd expect!</p>
    <p>Nothing crazy going on here. Just a normal template!</p>
  </Section>
</Modal></code></pre>
<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></li>
<li id="fn4" role="doc-endnote"><p>If you’re inclined to “well actually” me about <em>helpers</em> being the real functions of Handlebars templates: in the Glimmer <abbr>VM</abbr> world, helpers are just a kind of component.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
#EmberJS2018, Part 32018-05-23T07:30:00-04:002018-05-23T07:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-23:/2018/emberjs2018-part-3.htmlThere are often good reasons to have private discussions in any kind of core team—but they should not be the default. The default should be public.<p>Following <a href="https://blog.rust-lang.org/2018/01/03/new-years-rust-a-call-for-community-blogposts.html">the example</a> of the Rust community, the <a href="https://emberjs.com">Ember.js</a> team has <a href="https://emberjs.com/blog/2018/05/02/ember-2018-roadmap-call-for-posts.html" title="Ember's 2018 Roadmap: A Call for Blog Posts">called for blog posts</a> as the first step in setting the 2018 roadmap (which will formally happen through the normal <a href="https://github.com/emberjs/rfcs"><abbr title="Request for Comments">RFC</abbr> process</a>). This is my contribution.</p>
<p>There are three major themes I think should characterize the Ember.js community and project for the rest of 2018:</p>
<ol type="1">
<li><a href="http://v4.chriskrycho.com/2018/emberjs2018-part-1.html"><strong>Finishing What We’ve Started</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-2.html"><strong>Doubling Down on Documentation</strong></a></li>
<li><strong>Defaulting to Public for Discussions</strong> (this post)</li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-4.html"><strong>Embracing the Ecosystem</strong></a></li>
</ol>
<hr />
<p>One of the small changes I think would substantially improve the Ember.js ecosystem is: <strong>defaulting to public for discussions</strong> among the core team. Indeed: for any open-source project with community involvement like Ember.js has, that should be the default. Not the <em>only</em> option, just the default option.</p>
<p>There is plenty of value in having private channels for discussion in contexts like this. Sometimes you have to deal with something awkward or socially difficult. Sometimes you have already taken the community’s input and just have to come to a decision about what to do on something. Private channels are useful.</p>
<p>But: they shouldn’t be the default. They should be what you turn to when you’re in one of those particular kinds of situations which require it. The default should be public discussion and interaction.</p>
<p>Over the last year, the maintainer-ship (and therefore decision-making) of ember-cli-typescript and the surrounding TypeScript ecosystem has grown from being pretty much just me to being a small group of four of us: Derek Wickern, Dan Freeman, James Davis, and me. We have the “final say,” so to speak, on the things we’re doing with the addon and the typings and so on. (What that actually means in practice is mostly just we all try to shoulder the burden of staying on top of pull requests.) And we have a private channel for discussions as a “core team” for projects in the <a href="https://github.com/typed-ember">typed-ember</a> organization.</p>
<p>But: it’s not the default. It’s what we turn to when we’re in one of those particular kinds of situations which require it. The default is public discussion and interaction.</p>
<p>And this isn’t just an unspoken norm or something. As a team, we all explicitly agreed that we default to public. Pretty much the only times we chat in our private channel are if we’re figuring out how to defuse an awkward situation kindly, or if we’re adding someone else to the team. Otherwise, we try to have all our discussions in the GitHub issues for the projects or the <code>#topic-typescript</code> room in the Ember Community Slack.</p>
<p>This has a few major effects, as I see it:</p>
<ul>
<li><p>No one should feel left out or in the dark about what we’re up to. Even if we’re hashing out crazy-seeming ideas for how to move stuff forward, it’s all there for everyone to see. This includes neat things like Dan Freeman’s proof-of-concept on <a href="https://twitter.com/__dfreeman/status/994410180661170177">type-checked templates</a>, or our mad sprint (as a team!) to get some core improvements landed before I gave a workshop at EmberConf, or anything else we’re going after.</p></li>
<li><p>We’re obviously available for input on things as people have questions, because we’re interacting with <em>each other</em> in those public forums. And if we’d like to start moving some of the oft-repeated questions over to the <a href="https://discuss.emberjs.com">Ember Discourse</a> or to <a href="https://stackoverflow.com/questions/tagged/ember.js">Stack Overflow</a>, it’s still really helpful for people who <em>are</em> on the Slack to see that we’re there and available for help.</p></li>
<li><p>We get to see the regular pain points others run into. That often turns into issues, priorities, etc. for us as a group. The slowly growing issue <a href="https://github.com/typed-ember/ember-cli-typescript/issues/170">tracking things we need to document</a> is essentially a direct product of that constant cycle of interaction.</p></li>
<li><p>We get the benefit of input from others! If we’ve missed something, or simply failed to think of something, others in the community often haven’t. One prime example of this: the “registry” strategy we use for making things like Ember Data store, adapter, etc. lookups work came out of conversations with a community member (<a href="https://github.com/maerten">Maarten Veenstra</a>) which happened many months before we were in a spot where we could land that kind of thing—and initially I was pretty skeptical of it, but they were totally right, and it’s now core to how Ember’s typings work!</p></li>
</ul>
<p>I recommend—very strongly—that the Ember.js core team adopt the same strategy. Teams <em>do</em> need private channels sometimes. But they shouldn’t be the default. They should be for those particular circumstances which <em>require</em> it.</p>
<p>The biggest things I think could come out of this are:</p>
<ul>
<li><p>A greater confidence from within the Ember.js community about what the core team is up to and where we’re going. Technical leadership seems to me to be about 10% technical brilliance and 90% clear communication. We have loads of technical brilliance; we need more communication!</p></li>
<li><p>More confidence in the trajectory of Ember.js from <em>outside</em> its existing community. Seeing that there is active leadership is essential for people to have confidence that choosing Ember.js is a good choice both today and for the medium-to-long-term.</p></li>
</ul>
<p>And we need both of those—a lot—for Ember.js to continue to grow and thrive in the years ahead!</p>
How To Bundle TypeScript Type Definitions2018-05-21T07:00:00-04:002018-05-21T07:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-21:/2018/how-to-bundle-typescript-type-definitions.htmlCreate a custom build that puts the type definitions in the root of your package, instead of putting them alongside the compiled JavaScript files. Because if your consumers have to use compiler options, they will be very sad.
<p>One of the lessons that led to the True Myth 2.0.0 release was the difficulty of consuming the library under its original packaging strategy. There are a few things that are <em>not</em> obvious about how TypeScript type definitions get consumed when you’re first starting out, and a few things that seem like they should work <em>don’t</em>. This is my attempt to help <em>you</em> (and the people consuming your TypeScript libraries!) avoid the same pain I (and the people consuming mine) have felt.</p>
<section id="the-problem" class="level2">
<h2>The Problem</h2>
<p>The problem is the result of the ways TypeScript resolves type definitions, and the kinds of type definition files it can (and cannot) generate for you.</p>
<p>TypeScript only properly resolves two kinds of type definition distributions automatically:</p>
<ul>
<li>A single-file type definition, located anywhere in the package as long as <code>package.json</code> has a <code>types</code> key pointing to it.</li>
<li>Type definition module files in the <em>root</em> of the distributed package, mapping to the distributed modules of the package (wherever they live).</li>
</ul>
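<p>As a concrete illustration of the first case, a library shipping a single definition file can point at it from <code>package.json</code> (the names and paths here are hypothetical, not from any particular package):</p>

```json
{
  "name": "some-library",
  "main": "dist/index.js",
  "types": "dist/index.d.ts"
}
```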
<p>TypeScript will only generate a single-file type definition for the <abbr>AMD</abbr> and SystemJS standards—which <em>cannot</em> be imported with ES6 module imports. If you want to use an output mode which generates a JS file per originating TS file—Node, ES6, etc.—you will get individual TS module file type definitions as well. It is not that the type definition files themselves can’t be written to support Node or ES6-style module layouts in a single-file definition. To the contrary: hand-written definitions for libraries <em>often</em> do just that. It is just a matter of what the compiler supports generating.</p>
<p>The net of this is: if you want module type definitions to go with ES6 modules to import, they <em>must</em> live in the root of your distributed bundle.</p>
<p>However, most libraries I’m familiar with—because I work in the <em>browser</em> ecosystem, not the <em>Node</em> ecosystem—do not work with the root of their repository as the place where their source lives, or for the place where the output of their build process lives. It’s far more common to have a <code>src</code> directory and <code>dist</code> or <code>build</code> directory, the latter of which is where the build artifacts go.</p>
</section>
<section id="the-solution" class="level2">
<h2>The Solution</h2>
<p>The solution—which we shipped for ember-cli-typescript some time ago, and which I switched to this past week for True Myth—is to have separate build artifacts for the type definitions and the JavaScript output. Put the JavaScript output in the <code>dist</code> or <code>build</code> directory as usual, without type declarations. Then, put the type definitions in the root of the repository.</p>
<p>In the case of both ember-cli-typescript and True Myth, we’re doing the type generation step in the <code>prepublishOnly</code> hook and cleaning it up in the <code>postpublish</code> hook. Your <code>package.json</code> might look something like this, assuming your <code>tsconfig.json</code> is set to generate JavaScript artifacts in <code>dist</code> as your build directory.</p>
<pre class="json"><code>{
  "scripts": {
    "ts:js": "tsc",
    "ts:defs": "tsc --declaration --outDir . --emitDeclarationOnly",
    "prepublishOnly": "yarn ts:js && yarn ts:defs",
    "postpublish": "rm -r *.d.ts dist"
  }
}</code></pre>
<p>(If you have nested modules, your <code>postpublish</code> hook there should clean up the generated folders as well as the generated files.)</p>
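<p>For example, if your source included a nested <code>utils/</code> module (a hypothetical directory name, just for illustration), the declaration build would emit a matching <code>utils</code> folder in the package root, so the hook needs to remove it as well:</p>

```json
{
  "scripts": {
    "postpublish": "rm -r *.d.ts utils dist"
  }
}
```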
<p>You can see the full setup I built for True Myth—which generates type defs along these lines, as well as both CommonJS and ES6 modules—in the repository:</p>
<ul>
<li><a href="https://github.com/chriskrycho/true-myth/blob/v2.0.0/package.json"><code>package.json</code></a>—note especially the <a href="https://github.com/chriskrycho/true-myth/blob/v2.0.0/package.json#L32:L42"><code>"scripts"</code></a> configuration</li>
<li><a href="https://github.com/chriskrycho/true-myth/blob/v2.0.0/tsconfig.json">root <code>tsconfig.json</code></a>, with derived <a href="https://github.com/chriskrycho/true-myth/blob/v2.0.0/ts/cjs.tsconfig.json">CommonJS <code>tsconfig.json</code></a> and <a href="https://github.com/chriskrycho/true-myth/blob/v2.0.0/ts/es.tsconfig.json">ES6 <code>tsconfig.json</code></a> files.</li>
</ul>
<hr />
<p>This isn’t an especially complicated thing, but the scenario leading to the need for this is common enough, and the dance frustrating enough and easy enough to get wrong, that I really wish the TypeScript team would make it possible to generate single-file type definitions for <em>all</em> kinds of JavaScript module systems.</p>
</section>
Rust is Incredibly Productive for CLIs2018-05-20T08:35:00-04:002018-05-20T08:35:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-20:/2018/rust-is-incredibly-productive-for-clis.htmlI built a little tool in Rust to convert an Evernote export file to Markdown. It was impressively easy.
<p>There are <em>reasons</em> I’m a Rust fanboy. One of them is the kind of thing I proved out to myself today—again, because I’ve had this experience before, albeit not with anything quite this “complicated.”</p>
<p>I built <a href="https://github.com/chriskrycho/evernote2md">a little tool</a> in Rust to convert Evernote exports (in their custom <code>.enex</code> <abbr>XML</abbr> format) to Markdown files with <abbr>YAML</abbr> metadata headers—mostly just to see how quickly and effectively I could do it, because I’ve never actually had an excuse to use <a href="https://serde.rs">Serde</a> and I thought this might be a nice spot to try it.</p>
<p>There’s a lot this little library <em>doesn’t</em> do. (Like include the creation and modification timestamps in the header, for example.) But all of those things would be <em>very</em> straightforward to do. I built this functioning little “script” in about two hours. For context: I’ve taken multiple passes at this in Python—which in the way people normally think about these things should be way <em>easier</em>—and I’ve failed both times.</p>
<p>Rust’s compiler just helps you out <em>so much</em> along the way, not only with the type-checking but with the really amazing metaprogramming capabilities you get with it. Being able to slap <code>#[derive(Deserialize)]</code> on a struct and a couple attributes on struct fields and having it Just Work™ to deserialize XML into local types is mind-blowing. (The only thing I know of that’s playing the same game is F<sup>♯</sup> type-providers. I’d love to hear about similar capabilities in other languages!)</p>
<p>I’m basically at the point where if I need a small command-line tool, I write it in Rust, <em>not</em> in a conventional scripting language like Python, because the benefits I get more than outweigh whatever small extra amount of mental overhead there is. And there’s not much of that mental overhead anyway for this kind of thing! As you can see <a href="https://github.com/chriskrycho/evernote2md/blob/master/src/main.rs#L71">in the actual code</a>, I make free and liberal use of <a href="https://doc.rust-lang.org/1.26.0/std/option/enum.Option.html"><code>expect</code></a> for this kind of tool.</p>
<p>It’s also hard to oversell the ecosystem—even as relatively nascent as it is compared to some much older languages, the tools which exist are just really good. This project uses <a href="https://serde.rs">Serde</a> for deserializing from <abbr>XML</abbr> and serializing to <abbr>YAML</abbr>; <a href="https://github.com/rust-lang/regex">Regex</a>; <a href="https://clap.rs">Clap</a> for command line parsing; a nice little wrapper around <a href="https://pandoc.org">pandoc</a>; and, superpower even among superpowers, <a href="https://docs.rs/rayon/1.0.1/rayon/">Rayon</a>: free parallelization.</p>
<p>Rust is, in short, <em>very productive</em> for things in this space. Far more than you might expect from the billing. Yes, it’s a “systems programming language” and you can write operating systems with it. But it’s also just a really great tool for <em>all sorts</em> of domains, including little <abbr>CLI</abbr> tools like this one.</p>
Destructuring with True Myth 1.3+2018-05-19T12:20:00-04:002018-05-19T12:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-19:/2018/destructuring-with-true-myth-13.htmlMaking the value and error properties available means you can now use destructuring.
<p>I just realized a neat capability that <a href="#">True Myth 1.3+</a> unlocks: you can now use destructuring of the <code>value</code> property on <code>Just</code> and <code>Ok</code> instances and the <code>error</code> property on <code>Err</code> instances.</p>
<p>With <code>Maybe</code> instances:</p>
<pre class="ts"><code>import Maybe, { just, nothing, isJust } from 'true-myth/maybe';

const maybeStrings: Maybe<string>[] =
  [just('hello'), nothing(), just('bye'), nothing()];

const lengths = maybeStrings
  .filter(isJust)
  .map(({ value }) => value.length);</code></pre>
<p>With <code>Result</code> instances:</p>
<pre class="ts"><code>import Result, { ok, err } from 'true-myth/result';

const results: Result<number, string>[] =
  [ok(12), err('wat'), err('oh teh noes'), ok(42)];

const okDoubles = results
  .filter(Result.isOk)
  .map(({ value }) => value * 2);

const errLengths = results
  .filter(Result.isErr)
  .map(({ error }) => error.length);</code></pre>
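<p>What makes this work is ordinary TypeScript type narrowing: once a type guard narrows the array, the compiler knows the property exists, so parameter destructuring type-checks. The same pattern can be sketched with a plain discriminated union (the types below are simplified stand-ins for illustration, not True Myth’s actual definitions):</p>

```typescript
// Simplified stand-ins for Just/Nothing, purely to show why narrowing
// makes destructuring safe; True Myth's real types are richer.
type Just<T> = { variant: 'Just'; value: T };
type Nothing = { variant: 'Nothing' };
type Maybe<T> = Just<T> | Nothing;

const isJust = <T>(m: Maybe<T>): m is Just<T> => m.variant === 'Just';

const maybes: Maybe<string>[] = [
  { variant: 'Just', value: 'hello' },
  { variant: 'Nothing' },
  { variant: 'Just', value: 'bye' },
];

// After the type guard narrows the array to Just<string>[], the
// `value` property is known to exist, so destructuring it in the
// callback parameter is type-safe.
const lengths = maybes.filter(isJust).map(({ value }) => value.length);
// lengths is [5, 3]
```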
<p>None of this is especially novel or anything. It was just a neat thing to realize after the fact, because it wasn’t something I had in mind when I was making these changes!<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>This was a very strange experience. There’s nothing quite like learning something about a library <em>you wrote</em>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
The Chinese Room Argument2018-05-19T11:20:00-04:002018-05-19T11:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-19:/2018/the-chinese-room-argument.htmlIf “formal operations on symbols cannot produce thought,” what (if anything) does that say about today’s “strong AI” projects and the Turing test itself?
<p>I took a bunch of half days last week, because <em>goodness</em> but I was tired. Too long running at full-throttle, and <a href="http://v4.chriskrycho.com/2018/on-steam-specifically-running-out-of-it.html" title="On Steam (Specifically, Running Out Of It)">I’d been running out of steam</a> as a result. And what did I do instead, that ended up being so effective in <a href="https://v4.chriskrycho.com/2018/vacation-as-recharging.html" title="Vacation as Recharging">recharging</a>? Well, mostly… read literature reviews on interesting topics in philosophy, at least for the first few days. Dear reader, I am a nerd. But I thought I’d share a few of the thoughts I jotted down in my notebook from that reading.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<hr />
<section id="the-chinese-room-argument" class="level2">
<h2>“The Chinese Room Argument”</h2>
<p>This was an argument whose <em>influence</em> I’ve certainly encountered, but the actual content of which I was totally unfamiliar with.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>The argument, in <em>exceedingly</em> brief summary, is that “formal operations on symbols cannot produce thought”—that syntax is insufficient for conveying semantics. Searle made the argument by way of a thought experiment; a reference in a review on “Thought Experiments” I may post about later is how I found my way to the discussion. That thought experiment supposes a man being handed symbols (under a door, perhaps) which are questions in Chinese, who has a set of rules for constructing correct answers to those questions, also in Chinese—but the man himself, however much he gives every <em>appearance</em> of knowing Chinese to the person passing in questions by way of the answers he gives, does not in fact know Chinese. He simply has a set of rules that allow him to give the appearance of knowledge. The Chinese Room argument, in other words, is a(n attempted) refutation of the <a href="https://en.m.wikipedia.org/wiki/Turing_test">Turing Test</a> as a metric for evaluating intelligence.</p>
<p>The rejoinders to this are varied, of course, and I encourage you simply to follow the link above and read the summary—it’s good.</p>
<p>There were two particularly interesting points to me in reading this summary: the Churchland response, and the Other Minds response. To these I’ll add a quick note of my own.</p>
<section id="the-churchland-response" class="level3">
<h3>1: The Churchland response</h3>
<p>Searle’s argument specifically addressed an approach to <abbr title="artificial intelligence">AI</abbr> (and especially so-called “strong <abbr title="artificial intelligence">AI</abbr>,” i.e. <abbr title="artificial intelligence">AI</abbr> that is genuinely intelligent) that was very much in vogue when he wrote the article in the 1980s, but which is very much <em>out</em> of vogue now: rule-driven computation. One of the responses, which looks rather prescient in retrospect, was the Churchland reply that the brain is not a symbolic computation machine (i.e. a computer as we tend to think of it) but “a vector transformer”… which is a precise description of the “neural network”-based <abbr title="artificial intelligence">AI</abbr> that is now dominating research into e.g. self-driving cars and so on.</p>
<p>The main point of interest here is not so much whether the Churchlands were correct in their description of the brain’s behavior, but in their point that any hypothesis about neural networks is <em>not</em> defeated by Searle’s thought experiment. Why not? Because neural networks are not performing symbolic computation.</p>
</section>
<section id="the-other-minds-response" class="level3">
<h3>2: The Other Minds response</h3>
<p>The other, and perhaps the most challenging response for Searle’s argument, is the “other minds” argument. Whether in other humans, or in intelligent aliens should we encounter them, or—and this is the key—in a genuinely intelligent machine, we attribute the existence of other minds <em>intuitively</em>. Nor do we (in general) doubt our initial conclusion that another mind exists merely because we come to have a greater understanding of the underlying neuro-mechanics. We understand far more about human brains and their relationship to the human mind than we did a hundred years ago; we do not therefore doubt the reality of a human mind. (Most of us, anyway! There are not many hard determinists of the sort who think consciousness is merely an illusion; and there are not many solipsists who think only their own minds exist.)</p>
<p>But the “other minds” objection runs into other intuitive problems all its own: supposing that we learned an apparently-conscious thing we interacted with were but a cleverly-arranged set of waterworks, we would certainly revise our opinion. Which intuition is to be trusted? Either, neither, or in some strange way both?</p>
<p>And this gets again at the difficulty of using thought experiments to reason to truth. What a thought experiment can genuinely be said to show is complicated at best. Yet their utility—at least in raising problems, but also in making genuine advances in understanding the world—seems clear.</p>
</section>
<section id="lowered-standards" class="level3">
<h3>Lowered standards</h3>
<p>The other thing I think is worth noting in all these discussions is a point I first saw Alan Jacobs raise a few years ago, but which was only <em>alluded</em> to in this literature review. Jacobs <a href="http://text-patterns.thenewatlantis.com/2010/08/advice-from-jaron-lanier.html">cites</a> Jaron Lanier’s <cite>You Are Not A Gadget</cite>. (I don’t have a copy of the book, so I’ll reproduce Jacobs’ quotation here.)</p>
<blockquote>
<p>But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?</p>
</blockquote>
<p>This is one of the essential points often left aside. Is the test itself useful? Is “ability to fool a human into thinking you’re human” <em>actually</em> pointing at what it means to be intelligent? This is sort of the unspoken aspect of the “other minds” question. But it’s one we <em>ought</em> to speak when we’re talking about intelligence!</p>
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>For those of you following along at home: I wrote all but the last 100 or so words of this a week ago and just hadn’t gotten around to publishing it. It’s not the even more absurd contradiction to yesterday’s post on writing plans than it seems. Really. I promise.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>It’s occasionally frustrating to find that there is <em>so</em> much I’m unfamiliar with despite attempting to read broadly and, as best I can, deeply on subjects relevant to the things I’m talking about on Winning Slowly, in programming, etc. One of the great humility-drivers of the last few years is finding that, my best efforts to self-educate notwithstanding, I know <em>very little</em> even in the fields I care most about.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
#EmberJS2018, Part 22018-05-18T22:00:00-04:002018-05-18T22:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-18:/2018/emberjs2018-part-2.htmlA project is only as good as its documentation. Ember’s documentation has come a long way… but it still has a long way to go, and it's essential for helping Ember thrive.
<p>Following <a href="https://blog.rust-lang.org/2018/01/03/new-years-rust-a-call-for-community-blogposts.html">the example</a> of the Rust community, the <a href="https://emberjs.com">Ember.js</a> team has <a href="https://emberjs.com/blog/2018/05/02/ember-2018-roadmap-call-for-posts.html" title="Ember's 2018 Roadmap: A Call for Blog Posts">called for blog posts</a> as the first step in setting the 2018 roadmap (which will formally happen through the normal <a href="https://github.com/emberjs/rfcs"><abbr title="Request for Comments">RFC</abbr> process</a>). This is my contribution.</p>
<p>There are three major themes I think should characterize the Ember.js community and project for the rest of 2018:</p>
<ol type="1">
<li><a href="http://v4.chriskrycho.com/2018/emberjs2018-part-1.html"><strong>Finishing What We’ve Started</strong></a></li>
<li><strong>Doubling Down on Documentation</strong> (this post)</li>
<li><a href="http://v4.chriskrycho.com/2018/emberjs2018-part-3.html"><strong>Defaulting to Public for Discussions</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-4.html"><strong>Embracing the Ecosystem</strong></a></li>
</ol>
<hr />
<section id="part-2-double-down-on-docs" class="level2">
<h2>Part 2: Double down on docs</h2>
<p>The best project in the world is useless without documentation. As such, my <em>second</em> major goal for Ember.js this year is to see our documentation story improve dramatically across a number of fronts. This is not just the kind of thing that’s important in principle or because we care about doing the right thing, though those alone <em>are</em> sufficient motivation. It’s <em>also</em> absolutely necessary for Ember to grow and thrive in the ways it deserves to in the years ahead.</p>
<p>To be clear: Ember’s story around documentation is <em>pretty good</em> and it continues to improve all the time. A few years ago, the base documentation was a mess and even figuring out where to start was hard. Today, Ember.js itself has great guides along with versioned-and-searchable <abbr title="application programming interface">API</abbr> documentation. The gaps now are in the <em>surrounding ecosystem</em> and in the <em>framework internals</em>. That’s huge progress! But if we want Ember to excel, we need to go after both of these with gusto.</p>
<section id="the-surrounding-ecosystem" class="level3">
<h3>The surrounding ecosystem</h3>
<p>Ember Data, Ember Engines, and, perhaps most important, Ember <abbr title="command line interface">CLI</abbr> and its core dependency Broccoli all <em>desperately</em> need documentation work, even at the “how do you even use these things” level.</p>
<ul>
<li><p><strong>Broccoli.js</strong> in particular is core to pretty much everything in Ember’s ecosystem, and its docs today are in roughly the state Webpack’s were back in its sad 1.0 days. We should take a page out of our own history (and Webpack’s for that matter!) and make it easy for people to use Broccoli in whatever ways their apps need, and that mostly means documenting it!<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> Oli Griffith’s recent <a href="http://www.oligriffiths.com/broccolijs/">blog post series</a> is an incredibly valuable first step in that direction. But we need really solid documentation for <a href="http://broccolijs.com">Broccoli itself</a>, and also for the equally important <a href="https://www.npmjs.com/search?q=keywords:broccoli-plugin">plugin ecosystem</a> which is the primary way people interact with it.</p></li>
<li><p>The docs for <strong>Ember <abbr>CLI</abbr></strong> itself are <em>decent</em>, but they’re quite out of date and are about to be a lot more so because of the previously-mentioned packager bits. We need accurate and up-to-date guides and <abbr>API</abbr> docs for the <abbr>CLI</abbr>, and we also need clarity about the seams between Ember <abbr>CLI</abbr> and Broccoli—something I’ve only begun to become clear on after a year of hacking on <a href="https://github.com/typed-ember/ember-cli-typescript">ember-cli-typescript</a>! This includes a number of kinds of documentation:</p>
<ul>
<li>up-to-date guides</li>
<li>complete <abbr>API</abbr> documentation</li>
<li>a “cookbook” of common patterns to use</li>
</ul></li>
<li><p>The <strong>Ember Data</strong> docs need to be split into two parts: one for <em>users</em> of Ember Data, and one for people building Ember Data integrations and addons. Right now, all the docs are targeted squarely at implementors of Ember Data addons. This means that one of the pieces of the Ember ecosystem that’s in widest use (and is <em>most</em> distinct from the rest of the JS ecosystem!) is really, really hard to learn. This is the part of the framework I still struggle the most with, despite having worked full time on an Ember app for over two years now.</p></li>
<li><p><strong>Ember Engines</strong> are really useful for manually breaking up your app into discrete sections which can be worked on independently and even loaded dynamically as you need them, and they provide a different level of abstraction than route-splitting and other similar approaches. (Not necessarily better or worse, but different.) Unfortunately, most of the documentation hasn’t been touched in over a year. That means if you <em>want</em> to use Ember Engines, almost all of the information is in an example here and a Slack conversation there. We need to turn that sort of “tribal knowledge” into actual docs!</p></li>
</ul>
<p>To be clear, the Ember docs team is doing great work and is already going after a lot of these areas; but there’s an enormous amount of ground to cover. They could use your help! Because if Ember is going to flourish in the year(s) ahead, we need good docs. And users are the people best-placed in all the world to help write docs.</p>
<p>So <strong>how you can help:</strong></p>
<ul>
<li><p>Open issues about things you don’t understand.</p></li>
<li><p>If you see an error in the documentation, open a pull request to fix it.</p></li>
<li><p>Volunteer to proofread or edit as new materials are produced. Yes, seriously: proofreading is <em>incredibly</em> valuable.</p></li>
<li><p>Volunteer to write documentation of things you <em>do</em> understand where you see gaps.</p></li>
</ul>
</section>
<section id="framework-internals" class="level3">
<h3>Framework internals</h3>
<p>Every time I have started poking into Ember’s own codebase—to ship a fix for some small bug, or simply to understand the behavior of my own application—I have found myself stymied by a really serious issue. <em>Almost nothing is documented.</em> This is true of Ember proper, of Ember Data, of Ember <abbr>CLI</abbr>, of Broccoli’s internals… Everything I named above as being in need of <em>user</em>-facing documentation also desperately needs <em>developer</em>-facing documentation.</p>
<p>A lot of this happens naturally in projects developed organically by small teams. I’ve seen it in my own current job: the <em>vast</em> majority of our codebase is without any formal documentation, because it didn’t <em>require</em> it when we were a much smaller organization working on a much smaller codebase. But no project—whether private or open-source—can grow or thrive unless it becomes possible for new contributors to come in, understand the system as it exists, and start making changes effectively. “Tribal knowledge” is <em>not</em> a bad thing in some contexts, but it does not scale.</p>
<p>The Ember.js ecosystem needs developer documentation of several sorts, then:</p>
<ul>
<li><p><strong>Architecture documents:</strong> what are the pieces of the framework or library in question, and how do they fit together? This is often the hardest piece to maintain, simply because it changes organically over time, and unlike the next couple examples it doesn’t have an inherent attachment to the code. However, it’s also the piece that’s absolutely the most important, because it’s what gives anyone trying to dive in and contribute the orientation they need to be effective.</p></li>
<li><p><strong>“Why” comments:</strong> The internals of the core libraries very often have good reasons for doing things even in apparently odd ways. However, the reasons for those are <em>very</em> rarely written down anywhere. This is <em>precisely</em> what comments are for! If some implementation actually <em>can’t</em> be simplified in the way it looks like it can, write it down right there in a comment! This will save both you and other developers lots of wasted time with false starts and useless pull requests and so on.</p></li>
<li><p><strong>Documentation of private <abbr>API</abbr>:</strong> Much of the public-facing <abbr>API</abbr> for Ember is fairly clear (modulo caveats around completeness and accuracy). However, most internal <abbr>API</abbr> is essentially entirely undocumented. This makes it <em>extremely</em> difficult for someone to know how to use the internal <abbr>API</abbr>s when working on internal code!</p></li>
</ul>
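<p>A “why” comment doesn’t need to be elaborate. A hypothetical example (the function and scenario are invented, just to show the shape) of the kind of note that saves the next contributor a false start:</p>

```typescript
// Hypothetical example: the implementation looks like it could be
// "simplified" to sort in place, so the comment records why it must not.

// NOTE: we deliberately sort a *copy* here. Several call sites keep a
// reference to the original array and depend on its insertion order;
// sorting in place caused subtle rendering bugs.
function sortedCopy(items: number[]): number[] {
  return [...items].sort((a, b) => a - b);
}
```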
<p>All of these things came home to me pretty sharply as I started poking at the Glimmer VM project to see where and how I can pull together my knowledge of both TypeScript and Rust to drive some of those efforts forward. The core team folks I’ve interacted with have all been <em>extremely</em> helpful—and that’s always been true all along the way!—but they’re also busy, and taking the time to write down something <em>once</em> ends up being a major “force multiplier”. You can explain the same thing to multiple different people via multiple different conversations, or you can write it down <em>once</em> and make it a resource that anyone can use to start working effectively in the system!</p>
<p><strong>How you can help:</strong></p>
<ul>
<li><p>If you’re a current Ember developer in any part of the ecosystem: <em>start writing down what you know.</em> If a question comes up more than once, put it in a document somewhere. If nothing else, then you can link to it instead of typing it up one more time in Slack!</p></li>
<li><p>If you’re just getting started on developing core Ember functionality: <em>write down what you learn.</em> If you’re working through some section of the codebase, don’t understand it, and then come to understand it by way of asking questions, add documentation for that! You’ll help the next person coming along behind you!</p></li>
</ul>
<hr />
<p>In short: please write more things down! We need user-facing and developer-facing documentation; they need to be different and distinct from each other; and we need the whole range in both. That’s an <em>enormous</em> amount of work, and it’s very different from programming (and therefore harder for many of us).<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> But it’s also work that will pay equally enormous dividends in enabling the Ember community to grow in both the <em>number</em> and the <em>effectiveness</em> of its contributors—and that’s something we very much need!</p>
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Most of Webpack’s bad reputation is long-since undeserved: it <em>was</em> poorly documented… a few years ago. So was Ember!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>I’ll let you draw your own conclusions about my own relationship to writing given the absurd number of words I put out on this site.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
True Myth 1.3.0 and 2.0.02018-05-18T19:15:00-04:002018-05-18T19:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-18:/2018/true-myth-130-and-200.htmlGet `value` and `error` directly after type narrowing, make type definitions Just Work™, drop Flow types, and simplify the contents of the distributed build.
<p>Today I released two versions of <a href="https://github.com/chriskrycho/true-myth">True Myth</a>: <a href="https://github.com/chriskrycho/true-myth/releases/tag/v1.3.0">1.3.0</a> and <a href="https://github.com/chriskrycho/true-myth/releases/tag/v2.0.0">2.0.0</a>. You can read the <a href="https://v4.chriskrycho.com/2017/announcing-true-myth-10.html">1.0 announcement</a> from last November for an overview of the library and a discussion of why you might want to use the library in the first place!</p>
<p>Since its initial release last November, True Myth has gone through a number of small <a href="https://github.com/chriskrycho/true-myth/releases" title="True Myth releases on GitHub">feature and bug fix releases</a>, each of which is more interesting in its own right than 2.0 is—because 2.0 contains almost no new “features”, and the changes to the <em>functionality</em> are purely additive and could readily have gone in 1.3 instead.</p>
<p>In fact, the act of writing that sentence made me realize that there really <em>should</em> be a 1.3 which people can trivially upgrade to and then take on the changes in 2.0 later.</p>
<section id="section" class="level2">
<h2>– 1.3.0 –</h2>
<p>There are a few very small changes in 1.3 that are just nice ergonomic wins. (You may also be interested in looking back at the <a href="https://github.com/chriskrycho/true-myth/releases">list of other releases</a> to see what else has landed since 1.0.)</p>
<section id="expose-value-and-error" class="level3">
<h3>Expose <code>value</code> and <code>error</code></h3>
<p>The <code>value</code> property in <code>Maybe.Just</code> and <code>Result.Ok</code> instances, and the <code>error</code> property in <code>Result.Err</code> instances, are now <em>public, readonly properties</em> instead of <em>private properties</em>. I made those private in the initial implementation because I thought it made more sense to expose them via methods, but experience showed that this is a relatively common pattern in practice:</p>
<pre class="ts"><code>function dealsWithAMaybe(couldBeAString: Maybe<string>) {
  if (couldBeAString.isJust()) {
    console.log(`It was! ${couldBeAString.unsafelyUnwrap()}`);
  }
}</code></pre>
<p>This is a contrived example, of course, but my colleagues and I found that this scenario comes up relatively often in practice, <em>especially</em> when integrating with existing code rather than writing new code, since control flow patterns there tend to assume early-return-on-<code>null</code> or similar.</p>
<p>So I made a change (leaning on TypeScript’s notion of <a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#user-defined-type-guards" title="“User-Defined Type Guards” in the TypeScript handbook">“type narrowing”</a>) so that you don’t have to use <code>unsafelyUnwrap</code> in this scenario anymore! You can use the type-guard methods, the standalone functions, or direct matching against the variant, and then read the property directly:</p>
<pre class="ts"><code>import Maybe from 'true-myth/maybe';

function dealsWithAMaybe(maybe: Maybe<string>) {
  if (maybe.isJust()) {
    console.log(`It was! ${maybe.value}`);
  }
}</code></pre>
<p>In the <code>Result</code> case this is even nicer (notice that I’m using the variant, rather than a function, to discriminate between the two and narrow the types here):</p>
<pre class="ts"><code>import Result, { Variant } from 'true-myth/result';

function dealsWithAResult(result: Result<string, Error>) {
  if (result.variant === Variant.Ok) {
    console.log(`Huzzah: ${result.value}`);
  } else {
    console.log(`Alas: ${result.error.message}`);
  }
}</code></pre>
<p>Basically: you now have more options for handling these scenarios, a nicer <abbr title="application programming interface">API</abbr>, and—not that it should <em>usually</em> matter that much, but for whatever it’s worth—better performance by way of doing things with property lookups instead of function invocations in quite a few places.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
</section>
<section id="static-helper-methods" class="level3">
<h3>Static helper methods</h3>
<p>At my friend and collaborator <a href="https://mobile.twitter.com/bmakuh">Ben Makuh</a>’s suggestion, I built a couple static helper methods to go with those. These helpers just give you nice abstractions to drop into functional pipelines. For example, you can lean on the type-narrowing capabilities described above while working through a <em>list</em> of <code>Maybe</code>s to <em>know</em> that an item is a <code>Just</code> and use the new <code>Just.unwrap</code> static method in the pipeline:</p>
<pre class="ts"><code>import Maybe, { Just } from 'true-myth/maybe';

function justLengths(maybeStrings: Array<Maybe<string>>) {
  return maybeStrings
    .filter(Maybe.isJust)
    .map(Just.unwrap)
    .map(s => s.length);
}</code></pre>
<p>Analogous helpers exist for <code>Result</code> in the form of the <code>Ok.unwrap</code> and <code>Err.unwrapErr</code> methods. (<code>Nothing</code> has no analog for what I hope are obvious reasons!)</p>
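<p>To make that shape concrete without pulling in the library, here is a self-contained sketch of the same kind of pipeline for <code>Result</code>. Note that <code>isOk</code> and <code>unwrap</code> below are hypothetical stand-ins written for illustration; in True Myth itself you would reach for the library’s own statics (such as <code>Ok.unwrap</code>) instead:</p>
<pre class="ts"><code>// Illustrative stand-ins, not the actual True Myth implementations.
enum Variant { Ok = 'Ok', Err = 'Err' }
interface Ok<T> { readonly variant: Variant.Ok; readonly value: T; }
interface Err<E> { readonly variant: Variant.Err; readonly error: E; }
type Result<T, E> = Ok<T> | Err<E>;

// A user-defined type guard, so `filter` narrows `Result` down to `Ok`.
const isOk = <T, E>(result: Result<T, E>): result is Ok<T> =>
  result.variant === Variant.Ok;

// Safe because it only accepts values already narrowed to `Ok`.
const unwrap = <T>(ok: Ok<T>): T => ok.value;

function okLengths(results: Array<Result<string, Error>>): Array<number> {
  return results.filter(isOk).map(unwrap).map(s => s.length);
}</code></pre>
<p>The interesting bit is that <code>unwrap</code> never needs a runtime check: the type system guarantees it can only ever be handed an <code>Ok</code>.</p>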
</section>
<section id="tweaks-to-the-variant-properties" class="level3">
<h3>Tweaks to the <code>variant</code> properties</h3>
<p>The <code>variant</code> property on both <code>Maybe</code> and <code>Result</code> has changed in two ways:</p>
<ol type="1">
<li><p>It is now <code>readonly</code>. This was an implicit invariant previously—you would break <em>everything</em> in the library if you changed the <code>variant</code> value—and I’ve just made it explicit in the type system.</p></li>
<li><p>It is now properly constrained with a <em>literal type</em> on the concrete instances. That is, the type of <code>Just.variant</code> is no longer <code>Variant</code> but specifically <code>Variant.Just</code>. (This is what enables you to use the variant for narrowing as demonstrated above. I should have done this in 1.0, and just forgot to!)</p></li>
</ol>
<p>And that’s it for 1.3.0!</p>
</section>
</section>
<section id="section-1" class="level2">
<h2>– 2.0.0 –</h2>
<p>The 2.0 release is identical in <em>features</em> to the 1.3 release. However, it makes breaking changes to how consumers interact with the package: it requires updates to your <code>tsconfig.json</code> file and your bundler configuration, and it removes support for Flow types.</p>
<section id="configuration-file-updates" class="level3">
<h3>Configuration file updates</h3>
<p>Getting True Myth working nicely with consuming TypeScript packages has been a source of frustration for me <em>and</em> others. In short, requiring you to use the <code>"paths"</code> key in the <code>"compilerOptions"</code> section of the <code>tsconfig.json</code> made for an annoying amount of setup work, <em>and</em> it meant that using True Myth in a library <em>required</em> you to set it up in any consuming app. No good.</p>
<p>For type resolution to Just Work™, the types <em>must</em> be at the root of the distributed package.</p>
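<p>Concretely, that means the declaration files sit next to the manifest while the entry points stay in <code>dist/</code>, so TypeScript resolves an import like <code>true-myth/maybe</code> to a root-level <code>maybe.d.ts</code> with no extra configuration. A hypothetical sketch of the relevant fields (not the actual published manifest):</p>
<pre><code>{
  "name": "true-myth",
  "main": "dist/cjs/index.js",
  "module": "dist/es/index.js",
  "types": "index.d.ts"
}</code></pre>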
<p>As a result, I’ve stopped using <a href="https://github.com/tildeio/libkit">libkit</a>, which put the generated types in a reasonable-seeming but (in my experience) painful-to-use place, and have simplified the build layout substantially.</p>
<ul>
<li>The types themselves are generated only when publishing an update to npm. They go in the root at that point, and they get cleaned up after publishing. (This is pretty much identical to the solution we came up with in <a href="https://github.com/typed-ember/ember-cli-typescript">ember-cli-typescript</a>.)</li>
<li>The other build files no longer get dropped in a nested <code>src</code> directory.</li>
<li>Since I was already at it, I renamed the two build directories from <code>commonjs</code> to <code>cjs</code> and from <code>modules</code> to <code>es</code>.</li>
</ul>
<p>So the distributed build now looks something like this:</p>
<pre><code>/
  index.d.ts
  maybe.d.ts
  result.d.ts
  unit.d.ts
  utils.d.ts
  dist/
    cjs/
      index.js
      maybe.js
      result.js
      unit.js
      utils.js
    es/
      index.js
      maybe.js
      result.js
      unit.js
      utils.js</code></pre>
<p>You’ll just need to completely remove the <code>"paths"</code> mapping for True Myth from your <code>tsconfig.json</code> and, if you’ve done anything unusual with it, update your bundler configuration to point to the new build location, i.e. <code>dist/commonjs/src</code> should now just be <code>dist/cjs</code>. Bundlers which respect the <code>module</code> key in <code>package.json</code> will pick it up automatically, as will Ember <abbr>CLI</abbr>.</p>
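<p>For instance, if your <code>tsconfig.json</code> previously contained a mapping along these lines (the exact path here is illustrative; yours may differ), the whole entry can simply be deleted:</p>
<pre><code>{
  "compilerOptions": {
    "paths": {
      "true-myth/*": ["node_modules/true-myth/dist/types/src/*"]
    }
  }
}</code></pre>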
</section>
<section id="removing-flow-types" class="level3">
<h3>Removing Flow types</h3>
<p>To my knowledge, no one is actually using the Flow types for the library. When I first started on it, my collaborator <a href="https://github.com/bmakuh">Ben Makuh</a> <em>was</em> using Flow, but he has since migrated to TypeScript, and there are no other consumers I know of. I was always relatively unsure of the types’ correctness, I had no good way to validate them, <em>and</em> maintaining them meant updating them by hand on every release.</p>
<p>If you <em>do</em> use True Myth with Flow, and you’re missing the types, please let me know. I just can’t maintain them myself at this point!</p>
<hr />
<p>And that’s it! We’ve been using True Myth in production at Olo for quite some time, and it’s proved to be a really valuable tool. Give it a spin and let me know how these latest versions work for you!</p>
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I’ve made some changes under the hood to take advantage of this as well, so the library should be faster. Probably <em>trivially</em> faster, but my philosophy around library code is very much <em>be as fast as you can</em>; it’s a way of considering the people using your code—not just the developers, but the end users.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Aesthetics and Programming Languages2018-05-13T11:00:00-04:002018-05-13T11:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-13:/2018/aesthetics-and-programming-languages.htmlRust isn’t exactly prettier than C♯, but its aesthetics don’t drive me up the wall the same way. Why not?
<p>My distaste for the aesthetics of C<sup>♯</sup> is fairly well known to people I talk to about programming languages—perhaps as well known as my love of Rust. So much so that both are running jokes among some of my colleagues and friends. My hypersensitivity to aesthetics, both in general and specifically in programming languages and work environments, is <em>also</em> so well-known as to be a gag.</p>
<p>But I was writing a bunch of Rust this weekend, and looking at it and thinking about it and wondering why it is that C<sup>♯</sup> drives me so up the wall aesthetically and experientially, while Rust doesn’t. On the surface, they don’t actually look all that different.</p>
<p>Here’s <em>roughly</em> equivalent code in each:</p>
<pre class="cs"><code>public class Person {
  public string Name { get; set; } = "Chris";

  public void greet() {
    Console.WriteLine($"Hello, {Name}");
  }
}</code></pre>
<pre class="rust"><code>struct Person {
    name: String,
}

impl Person {
    pub fn new() -> Person {
        Person { name: String::from("Chris") }
    }

    pub fn greet(&self) {
        println!("Hello, {}", self.name);
    }
}</code></pre>
<p>When you start tossing in generics and lifetimes, Rust can actually end up looking a <em>lot</em> messier than C<sup>♯</sup>.</p>
<pre class="rust"><code>impl<'a, 'b, T, U> SomeTrait<'a, U> for SomeType<'b, U>
where
    T: SomeOtherTrait + YetAnotherTrait,
    U: OhWowSoManyTraits
{
    fn some_trait_method(&self) {
        // ...
    }
}</code></pre>
<p>Nothing about that is what I would call aesthetically beautiful in a general sense! There’s a <em>lot</em> of syntax.</p>
<p>What I’ve concluded so far, though, is that my difference in feelings comes down to the way that syntax maps back to the underlying semantics, and my feelings about those underlying semantics. The basic language design approach C<sup>♯</sup> takes—i.e. everything is a class; mutation is both encouraged and implicit; don’t bother with value types—drives me batty. I don’t love the syntax, not least because it ends up being <em>so</em> verbose and noisy (you can express the same things in F<sup>♯</sup> much more briefly)—but also because I actively dislike the programming models it encourages (I don’t like the C<sup>♯</sup> programming model when I see it in F<sup>♯</sup> either!).</p>
<p>Rust, by contrast, matches the way I <em>do</em> and <em>want to</em> think about the world. Mutability is allowed but neither actively encouraged nor actively discouraged; more to the point it’s <em>explicit</em>. Insofar as “shared mutable state is the root of all evil,” Rust has two legs up on C<sup>♯</sup>: it (a) doesn’t <em>allow</em> shared mutable state and (b) makes explicit where mutation <em>is</em> happening. It also separates data from behavior. It also has real value types. It also has sum types and pattern matching. In both cases, a lot of the syntactical noise is inessential, a holdover from the legacy of C; but in Rust’s case the way it maps onto a <em>programming model</em> that is more like OCaml than like C decreases the pain I feel from that noise.</p>
<p>This <em>could</em> be taken to validate the idea that syntax doesn’t matter, that the underlying semantics are everything, but that’s not the case. It’s not that I <em>love</em> Rust’s syntax. It’s that, although I dislike it at times, it doesn’t rise to the level of frustration I feel in C<sup>♯</sup> because it’s not coupled to a programming model that I loathe. The syntax matters; it’s just not the <em>only</em> thing that matters.</p>
<p>An interesting thing to consider: what Rust would look like in a world where it embraced its OCaml roots. (I don’t think Rust should have done this; spending its complexity budget on ideas instead of syntax was the right choice. But it’s still interesting.) The simplest level of translation might look something (very) roughly like this:</p>
<pre class="haskell"><code>impl 'a 'b T U SomeTrait 'a T for SomeType 'b U
    where T : SomeOtherTrait + YetAnotherTrait

    some_trait_method :: &self -> void
    some_trait_method self =
        -- ...</code></pre>
<p>This is obviously still a lot of syntax, but it’s all basically necessary given the things Rust is trying to express with lifetimes, ownership, etc.—and I did this off the top of my head with literally <em>no</em> consideration other than “what’s the most direct translation into roughly Haskell-ish syntax I can write?” It makes me genuinely curious where a language that aimed for Rust’s same kinds of guarantees but actively embracing the ML/Haskell family’s syntax might end up. I have a guess that I’d like it even better than I do Rust.</p>
#EmberJS2018, Part 12018-05-11T09:30:00-04:002018-05-11T20:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-11:/2018/emberjs2018-part-1.htmlWe don’t need more new features this year. We need to ship the things we already have in progress.<p>Following <a href="https://blog.rust-lang.org/2018/01/03/new-years-rust-a-call-for-community-blogposts.html">the example</a> of the Rust community, the <a href="https://emberjs.com">Ember.js</a> team has <a href="https://emberjs.com/blog/2018/05/02/ember-2018-roadmap-call-for-posts.html" title="Ember's 2018 Roadmap: A Call for Blog Posts">called for blog posts</a> as the first step in setting the 2018 roadmap (which will formally happen through the normal <a href="https://github.com/emberjs/rfcs"><abbr title="Request for Comments">RFC</abbr> process</a>). This is my contribution.</p>
<p>There are four major themes I think should characterize the Ember.js community and project for the rest of 2018:</p>
<ol type="1">
<li><strong>Finishing What We’ve Started</strong> (this post)</li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-2.html"><strong>Doubling Down on Docs</strong></a></li>
<li><a href="http://v4.chriskrycho.com/2018/emberjs2018-part-3.html"><strong>Defaulting to Public for Discussions</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/emberjs2018-part-4.html"><strong>Embracing the Ecosystem</strong></a></li>
</ol>
<hr />
<section id="finishing-what-weve-started" class="level2">
<h2>Finishing What We’ve Started</h2>
<p>What I want, more than any new feature anyone could come up with, is for this to be the year Ember.js commits to <em>finishing what we have started</em>. The last few years have seen the Ember team do a lot of really important exploratory work, including projects like <a href="https://glimmerjs.com">Glimmer.js</a>; and we have landed some of the initiatives we have started. But I think it’s fair to say that focus has not been our strong suit. It’s time for a year of <em>shipping</em>.</p>
<p>We need to land all the things we have in flight, and as much as possible avoid the temptation (much though I feel it myself!) to go haring off after interesting new ideas. As such, literally everything I list below is an effort <em>already in progress</em>. It’s just a matter of making concerted efforts as a community to land them.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>And that way of putting it is important: we have to make concerted efforts <em>as a community</em> to land these things. Very, very few people are paid to work on Ember.js full time—far too few to accomplish all of this! If these things matter to you and your company, find a way to carve out time for it. Even if it’s just a few hours a week, even if it’s “just” (and there’s no “just” about these!) helping out with triage of open issues or answering questions in Slack or Discourse or Stack Overflow, even if it doesn’t <em>feel</em> like a lot… it adds up.</p>
<p>To be very clear, before I go any further: none of this is a knock on everything that the Ember core team and community have done in the last couple years. A lot of things that have landed along the way—dropping in the Glimmer rendering engine midway through the 2.x series, landing ES5 getters just weeks ago in Ember 3.1, and so on—are genuinely great! <em>All</em> that I mean is, a year where we land and polish everything would make everything that much more awesome (and make Ember that much more competitive a choice in the client-side framework world).</p>
<p>So: what do we need to ship this year?</p>
<section id="land-glimmer-components-in-ember.js-proper" class="level3">
<h3>Land Glimmer <code><Component></code>s in Ember.js proper</h3>
<p>We’ve taken the first steps toward this already via a number of <abbr title="Request for Comments">RFC</abbr>s that were written late last year and merged since. We need to finish the implementation for these. That means getting the <a href="https://github.com/emberjs/ember.js/issues/16301">Glimmer Components in Ember</a> quest across the finish line.</p>
<p>The whole story here will make Ember <em>feel</em> much more modern in a variety of ways, as well as enabling some great performance and programming model wins: Immutable component arguments! Auto-tracked class properties! <code><AngleBracketComponent></code> invocation! Clear semantic distinctions between arguments and local context! So many good things. We just need to land it! <a href="https://github.com/emberjs/ember.js/issues/16301">The quest</a> needs to be moving forward, not stagnant.</p>
<p><strong>How you can help:</strong></p>
<ul>
<li>Show up and volunteer to go after pieces of the quest. There are people willing to mentor you through the work that needs to be done!</li>
<li>Test it as it lands! You don’t have to commit to <em>shipping</em> things in your app to <em>test</em> them in your app.</li>
</ul>
</section>
<section id="land-a-lot-of-ember-cli-efforts" class="level3">
<h3>Land a <em>lot</em> of Ember CLI efforts</h3>
<p>There are a great many Ember CLI efforts in flight. Every last one of them should be on stable and in use before the end of the year.</p>
<section id="module-unification" class="level4">
<h4>Module Unification</h4>
<p>The <a href="https://github.com/dgeb/rfcs/blob/module-unification/text/0000-module-unification.md">Module Unification <abbr title="Request for Comments">RFC</abbr></a> was opened in May 2016 and merged October 2016. There has been a lot of progress made, but we need to <em>ship it</em>—from where I stand, it’d be nice if it landed less than 2 years after we approved it! And we’re <a href="https://github.com/emberjs/ember.js/issues/16373">getting pretty close</a>; you can actually use the Module Unification blueprint in an Ember application today. Some stuff doesn’t work <em>quite</em> right yet, but it’s getting close.</p>
<p><strong>How you can help:</strong> try it out! Spin up new apps with the module unification blueprint flag, and try running the migrator codemod, and report back on what breaks.</p>
</section>
<section id="broccoli-1.0" class="level4">
<h4>Broccoli 1.0</h4>
<p>We’re <em>super</em> close on this one—Oli Griffiths has done some heroic work on this since EmberConf—but we need to finish it. Ember CLI, for historical reasons, has been using a fork of Broccoli.js for quite some time. This divergence has caused all manner of trouble, including compatibility issues between Broccoli plugins and an inability to take advantage of the best things that have landed in Broccoli since the fork happened.</p>
<p>Perhaps the single most important example of that is that Broccoli 1.0 supports the use of the system <code>tmp</code> directory. That single change will improve the performance of Ember CLI <em>dramatically</em>, especially on Windows. It will also flat-out eliminate a number of bugs and odd behaviors that appear when trying to integrate Ember CLI with other file watching tools (e.g. TypeScript’s <code>--watch</code> invocation).</p>
<p><strong>How you can help:</strong> once the Ember CLI team says it’s ready for testing, test your app and addons with it! Make sure that everything works as it should—specifically, that you’re not making any assumptions that depend on either the forked <abbr>API</abbr> or the location of the <code>tmp</code> directory used for intermediate build steps.</p>
</section>
<section id="the-new-packager-setup-with-tree-shaking-and-app-splitting" class="level4">
<h4>The new <code>Packager</code> setup, with tree-shaking and app-splitting</h4>
<p>One of the current major pain points with Ember’s build pipeline is that it’s hard to extend, and not really documented at all. (I’ll have a <em>lot</em> more to say on the question of documentation in the next post!) However, work is in progress to change that, too!</p>
<p>The accepted-and-actively-being-worked-on <a href="https://github.com/ember-cli/rfcs/blob/master/active/0051-packaging.md">Packaging Ember CLI <abbr title="Request for Comments">RFC</abbr></a> aims to fix both of these. Quoting from it:</p>
<blockquote>
<p>The current application build process merges and concatenates input broccoli trees. This behaviour is not well documented and is a tribal knowledge. While the simplicity of this approach is nice, it doesn’t allow for extension. We can refactor our build process and provide more flexibility when desired.</p>
</blockquote>
<p>A few of the things we can expect to be possible once that effort lands:</p>
<ul>
<li>tree-shaking – we can lean on Rollup.js to get <em>only</em> the code we actually need, cutting shipped file size dramatically</li>
<li>app-splitting – lots of different strategies to explore, including route-based or “section”-based, etc.</li>
<li>static-build-asset-splitting – no reason to cache-bust your <em>dependencies</em> every time the app releases!</li>
<li>distinct app builds – you could ship one build of your app for browsers which support ES Modules and one for browsers which don’t (heeeeey, IE11) – letting you minimize the payload size for the ones that do</li>
</ul>
<p><strong>How you can help:</strong></p>
<ul>
<li>If you know Ember CLI internals: pop into #-dev-ember-cli and ask how you can help land the features</li>
<li>If you don’t know Ember CLI internals: also pop into #-dev-ember-cli, but ask instead how you can <em>test</em> the changes</li>
<li>Help document those internals (see the next post in this series)</li>
</ul>
</section>
</section>
<section id="install-your-way-to-ember" class="level3">
<h3>Install-your-way-to-Ember</h3>
<p>We need to finish splitting apart the Ember source from its current, still fairly monolithic state and turn it into a true set of packages. The new Modules API which landed last year was a huge step toward this and made the experience on the developer side <em>look</em> like this should be possible—but that’s still a shim around the actual non-modularized Ember core code. The process of splitting it apart <em>is happening</em>, but we need to finish it.</p>
<p>The promise here is huge: Ember will be able to be the kind of thing you can progressively add to your existing applications and slowly convert them, rather than something that comes along only as one large bundle. It’s technically possible to do this today, but you cannot drop in <em>just the view layer</em>, for example, and that’s hugely valuable for people who want to try out the programming model or add it for just one feature in an existing application.</p>
<p>Making it possible for people to install Glimmer components, then the service layer, then the router, and so on as they need it will make adoption easier for people who are curious about the framework. But it will also be a huge boon to those of us already using Ember and wanting to migrate existing applications (often a tangled mix of server-side rendering and massive jQuery spaghetti files!) to Ember progressively. I’ve had multiple scenarios come up at my own job in just the last month where this would have been hugely useful.</p>
<p><strong>How you can help:</strong> make it known that you’re willing to help work on breaking apart Ember into its constituent pieces, and as that effort lands (hopefully over the rest of this year!) test it in your own apps and addons, and find the pain points in the install-your-way-to-the-framework process.</p>
</section>
<section id="make-typescript-great-everywhere" class="level3">
<h3>Make TypeScript <em>great</em> everywhere</h3>
<p>This one is near and dear to my heart… and it also really falls in no small part to me and the rest of the group working on ember-cli-typescript and type definitions for the Ember ecosystem!</p>
<p>There are two big wins we can land this year:</p>
<ol type="1">
<li>Built-in support in Ember.js itself.</li>
<li>Solid type definitions for the rest of the Ember.js ecosystem.</li>
</ol>
<p>If you don’t like TypeScript, don’t panic! The upshot here will actually be a better experience for <em>all</em> users of Ember.js.</p>
<section id="built-in-support-in-ember.js-itself" class="level4">
<h4>1. Built-in support in Ember.js itself</h4>
<p>One of my goals for this summer<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> is to finish an <abbr title="Request for Comments">RFC</abbr> making TypeScript a first-class citizen of the Ember.js ecosystem. To clarify what this will and won’t entail (assuming it’s accepted, assuming I ever manage to finish writing it!):</p>
<ul>
<li><p>Ember will <em>always</em> be JS-first, and it will <em>never</em> require type metadata reflected to runtime, unlike e.g. Angular. No one will ever have a <em>worse</em> experience because they prefer JS to TS. The idea will be to make TypeScript an <em>equally</em> good experience, and to include it for consideration when thinking about design choices for new features.</p></li>
<li><p>Ember users, both JS and TS, will get the <em>benefits</em> of having good types available right out of the box: many editors and IDEs can use TypeScript type definitions to enable better docs, autocompletion, etc.—and we may even be able to leverage it for <a href="https://twitter.com/__dfreeman/status/994410180661170177">better validation of Handlebars templates</a>!</p></li>
<li><p>We’ll have (because we’ll have to have!) a story on what we support in terms of backwards compatibility and SemVer for TypeScript and Ember and the type definitions. Necessarily, it has been the Wild West for the first year of concentrated effort here, trying to get our type definitions from “barely exist and not useful” to “full coverage and 99% right.” But as TypeScript becomes more widely used, we have to have a stability story, and we very soon will.</p></li>
</ul>
<p>There’s also ongoing work to convert Ember’s own internals to TypeScript, and landing that will help guarantee that the type definitions for Ember are actually <em>correct</em>, which in turn will make the experience for everyone better. (Bad type definitions are worse than <em>no</em> type definitions!)</p>
<p><strong>How you can help:</strong> engage in the <abbr title="Request for Comments">RFC</abbr> process once we get it started, and if you are up for it show up to help convert the Ember internals to TypeScript as well.</p>
</section>
<section id="solid-type-definitions-for-the-rest-of-the-ember.js-ecosystem" class="level4">
<h4>2. Solid type definitions for the rest of the Ember.js ecosystem</h4>
<p>Closely related to making TypeScript a first-class citizen for Ember.js itself is getting the pieces in place for the rest of the ecosystem as well. That means we need type definitions for addons—a <em>lot</em> of them! The ember-cli-typescript team will (hopefully late this month or in early June) be launching a quest issue to get type definitions for the whole Ember ecosystem in place—by helping convert addons to TS if their authors desire it, or by adding type definitions to the addons if they’re up for it, or by getting them up on DefinitelyTyped if they’re totally disinterested. (And, as I’ll note again in that quest issue, it’s totally fine for people <em>not</em> to be interested: there <em>is</em> a maintenance burden there!) The goal, again, is that when you’re using <em>any</em> part of the Ember ecosystem it’ll be easy to get all the benefits of TypeScript—and indeed that in many cases you’ll get a fair number of those benefits as a JS user.</p>
<p><strong>How you can help:</strong> participate in the quest issue once it’s live! We’ll help mentor you through the process of converting addons to TypeScript, writing type definitions and getting them well-validated, and so on!</p>
<hr />
<p>That’s a lot to do. More than enough all by itself, and a lot of moving parts. As such, I’ll reiterate what I said at the start: we don’t need new features this year. <strong>It’s time for a year of <em>shipping</em>.</strong></p>
</section>
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>To put it in the terms the Rust community used for their similar push at the end of 2017, and which we have often used to describe the ongoing efforts in Rust to land the “Rust 2018 edition”: this is an “impl period”—a play on the Rust <code>impl</code> keyword, used to describe the <em>implementation</em> of the behavior associated with a given data type. You can think of this as the same: it’s the implementation of the good ideas we have.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Confession: it was a goal for the spring but I found myself utterly exhausted after EmberConf… and had a full month with <em>another</em> major talk given for internal purposes afterwards. I’m worn out.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
A Humanist Frame2018-05-01T07:00:00-04:002018-05-01T07:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-05-01:/2018/a-humanist-frame.htmlA few thoughts this morning on technologies, community, and the need for a positive (as well as a negative) vision of technology, cities, and indeed liberty—taking the latest issue of Michael Sacasas' newsletter The Convivial Society as a jumping off point.
<p>A few thoughts this morning on technologies, community, and the need for a positive (as well as a negative) vision of technology, cities, and indeed liberty—taking <a href="https://tinyletter.com/lmsacasas/letters/the-convivial-society-no-4-community" title="No. 4: Community">the latest issue</a> of Michael Sacasas’ newsletter <a href="https://tinyletter.com/lmsacasas/archive">The Convivial Society</a> as a jumping off point.</p>
<aside>
I don’t expect to link quite so often to the same writer, so don’t worry: this isn’t about to become a secondary feed for Sacasas’ writing. I do commend the newsletter, and especially this issue of it, to you. This essay, which I quote in brief, does something I hope to be able to do in a piece of writing someday: it <em>hangs together</em> marvelously. There are standout paragraphs, but each one connects to those before and after it, and the essay is—in the best way—not excerpt-able. You should read the whole thing. If you have to choose, read that instead of this (seriously).
</aside>
<p>Early in the newsletter, Sacasas offers this note on technological visionaries stretching back to the telegraph (at least):</p>
<blockquote>
<p>It seems that none of these visionaries ever took into consideration the possibility that the moral frailties of human nature would only be amplified by their new technologies.</p>
</blockquote>
<p>He shortly thereafter suggests why that vision proved so alluring—the too-readily amplified frailties of human nature notwithstanding:</p>
<blockquote>
<p>The rise of communication technologies from the mid-19th century through today has roughly coincided with the dissolution and degradation of the traditional communities, broken and often cruel though they may have been, that provided individuals with a relatively integrated experience of place and self. In 1953, the sociologist Robert Nisbett could write of the “quest for community” as the “dominant social tendency of the twentieth century.” Framing a new technology as a source of community, in other words, trades on an unfulfilled desire for community.</p>
</blockquote>
<p>What strikes me as most interesting here is that Sacasas notes, even if only as an aside, one of the most important things that most critics of our current techno/cultural milieu seem entirely content to skip over: that the traditional communities <em>were</em> “broken and often cruel.” One of the reasons that the social revolutions of the last 150 years have had such force is precisely this: that the traditional communities so casually valorized today (though not by Sacasas himself) may have helped people have “an integrated experience of place and self”—but that experience was, often as not, one of <em>abuse</em>: of ethnic minorities, of women, of anyone outside the gentry…</p>
<p>Sacasas’ description—“broken and often cruel”—is more right than is usually granted in these discussions. If we want to escape the shackles of atomistic individualism, we had best be thinking of something other than the glorious past, because the past was not glorious.</p>
<p>Third, and closely related to the above considerations: Sacasas closes the newsletter with a quote from Willa Cather’s <em>O Pioneers!</em>, adding his own emphasis. I’ll reproduce the quotation in full here as he gave it (so: emphasis his) and then comment below.</p>
<blockquote>
<p>“You see,” he went on calmly, “measured by your standards here, I’m a failure. I couldn’t buy even one of your cornfields. I’ve enjoyed a great many things, but I’ve got nothing to show for it all.”</p>
<p>“But you show for it yourself, Carl. I’d rather have had your freedom than my land.”</p>
<p>Carl shook his head mournfully. “<strong>Freedom so often means that one isn’t needed anywhere.</strong> Here you are an individual, you have a background of your own, you would be missed. But off there in the cities there are thousands of rolling stones like me. We are all alike; we have no ties, we know nobody, we own nothing. When one of us dies, they scarcely know where to bury him. Our landlady and the delicatessen man are our mourners, and we leave nothing behind us but a frock-coat and a fiddle, or an easel, or a typewriter, or whatever tool we got our living by. All we have ever managed to do is to pay our rent, the exorbitant rent that one has to pay for a few square feet of space near the heart of things. We have no house, no place, no people of our own. We live in the streets, in the parks, in the theaters. We sit in restaurants and concert halls and look about at the hundreds of our own kind and shudder.”</p>
<p>Alexandra was silent. She sat looking at the silver spot the moon made on the surface of the pond down in the pasture. He knew that she understood what he meant. At last she said slowly, “And yet I would rather have Emil grow up like that than like his two brothers. We pay a high rent, too, though we pay differently. We grow hard and heavy here. We don’t move lightly and easily as you do, and our minds get stiff. If the world were no wider than my cornfields, if there were not something beside this, I wouldn’t feel that it was much worth while to work. No, I would rather have Emil like you than like them. I felt that as soon as you came.”</p>
</blockquote>
<p>Sacasas—with many others who are rightly critical of the social situation we have made for ourselves here in late modernity, including many of my friends over at <a href="https://mereorthodoxy.com/book-review-liberalism-failed-patrick-deneen/" title="Example: Jake Meador's sympathetic review of Deneen's Why Liberalism Failed">Mere Orthodoxy</a>—calls out the ways that our unrestrained freedom has come at a great cost to us. These critics are right to do so. But the bit that caught my attention as <em>equally</em> worthy of notice in the section from Cather is Alexandra’s response (emphasis mine):</p>
<blockquote>
<p>We pay a high rent, too, though we pay differently. We grow hard and heavy here. We don’t move lightly and easily as you do, and our minds get stiff. If the world were no wider than my cornfields, if there were not something beside this, I wouldn’t feel that it was much worth while to work.</p>
</blockquote>
<p>There is something in this exchange of the tension between agrarianism and urbanism that seems to come up every time Wendell Berry is mentioned. It is true that cities (and late modernity!) often offer a kind of freedom that itself is slavery. But it is also true that the slavery of freedom is not the only kind of slavery.</p>
<p>It remains one of my chief concerns that few who <em>are</em> taking seriously the problems we have made for ourselves in modernity seem interested in finding solutions that work <em>in cities</em>. It is one thing to have a healthy suspicion of the kind of city-centrism and techno-centrism and indeed techno-fundamentalism that is largely the order of the day. It is something else entirely, however, to fail to imagine either city or technological milieu as <em>possibly good</em>. (To be clear, this does not seem to be the tack that Sacasas is taking; and it is not so much that someone like my friend Jake Meador is <em>hostile</em> to cities as that his own sympathies run more to rural life.) This is, I think, part of what <a href="http://stephencarradini.com">Stephen Carradini</a> has been getting at in his own more optimistic view in especially our <a href="https://winningslowly.org/6.04/" title="6.04: Move Slowly and Fix Things">most recent Winning Slowly episode</a>.</p>
<p>We must reject techno-utopianism; we must repent of our worship of technique. But we <em>must not</em> make the usual conservative mistake and stop with the mere rejection of something bad. <a href="http://bib.ly/luke11.24-26" title="Luke 11:24–26">That tends not to go so well.</a> Instead, we need to consciously develop a frame that situates technology as properly subordinate to the humane, and which sets cities and farms and small towns and moon colonies not in opposition to each other but as complements.</p>
Exploring 4 Languages: Integrity and Consistency2018-03-24T22:00:00-04:002018-03-24T22:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-03-24:/2018/exploring-4-languages-integrity-and-consistency.htmlUsing the type systems of Rust, Elm, F♯, and ReasonML to not only model a domain but to make sure we keep our promises.
<p>In chapter 6, Wlaschin turns to one of the most important aspects of “domain modeling”: keeping it consistent. It’s all well and good to set up a domain model, but if you don’t have a way to make sure that model is reliable everywhere you use it, well… you’ve done a lot of extra work and you’re not going to see a lot of results for all that effort! But as Wlaschin points out, we can actually use the type systems, and the types we wrote up in the previous chapter, to help us enforce the business <em>rules</em> for our domain (as well as the business <em>shapes</em> in the domain).</p>
<p>An important note: you can see the latest version of this code (along with history indicating some of my travails in getting there!) in <a href="https://github.com/chriskrycho/dmmf">this public repository on GitHub</a>.</p>
<section id="a-simple-example-widgetcode" class="level2">
<h2>A simple example: <code>WidgetCode</code></h2>
<p>We’ll start with one of the simpler examples: validating that a <code>WidgetCode</code> is legitimate. A <code>WidgetCode</code>, in this domain, is valid if, and <em>only</em> if, it has a <code>W</code> followed by four digits.</p>
<p>The basic tack we’ll take, in all four languages, is to leverage the way the types work to make it so we have to use a function to create a valid instance of a <code>WidgetCode</code>. That’s a bit of extra work (though especially in the functional-first languages, it ends up not being a <em>lot</em> of extra work) but it lets us use <code>Result</code> types to handle invalid data up front.</p>
<p>The downside is that we can’t just get directly at the value inside our wrapper types using basic pattern matching. Instead, we need to provide a function for “unwrapping” it. Tradeoffs!</p>
<p>We’ll go at this using the most appropriate tool from each language, but in every case we’ll end up with a <code>create</code> function that takes a string and returns a <code>Result</code> with the successful option being a <code>WidgetCode</code> and the error option being a string describing the error; and a <code>value</code> function to unwrap a valid code. Throughout, I also assume an essentially-identical implementation of a related <code>GizmoCode</code> type; I pull both in to show how they end up being used side by side.</p>
<section id="rust" class="level3">
<h3>Rust</h3>
<p>We are using a tuple struct to wrap the string value here. Since there is no <code>pub</code> modifier on the wrapped <code>String</code>, it’s opaque from the perspective of the caller—and this is exactly what we want. We’ll pull in <a href="https://docs.rs/regex/0.2.10/regex/">the <code>regex</code> crate</a> and validate the code passed to us on creation.</p>
<pre class="rust"><code>use regex::Regex;
pub struct WidgetCode(String);
impl WidgetCode {
pub fn create(code: &str) -> Result<WidgetCode, String> {
let re = Regex::new(r"W\d{4}").expect(r"W\d{4} is a valid regex");
if re.is_match(code) {
Ok(WidgetCode(String::from(code)))
} else {
Err(String::from(
"`WidgetCode` must begin with a 'W' and be followed by 4 digits",
))
}
}
pub fn value(&self) -> &str {
&self.0
}
}</code></pre>
<p>This is fairly idiomatic Rust: we’re <em>borrowing</em> a <em>reference</em> to the code as a “string slice”, and creating a new, wrapped <code>String</code> instance to wrap up the code <em>or</em> return a new <code>String</code> as an error. When we get the value out, we return a reference to the string,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> with <code>&self.0</code>: <code>&</code> to indicate a reference, <code>.0</code> to indicate the first item of a tuple. Note as well that the final <code>if</code> block here is an expression. There’s no semicolon terminating it, and this whole <code>if</code> block ends up being the resulting value of the function.</p>
<p>One other point of interest here is that constructing the regex can itself fail: the pattern is just a string, so the compiler cannot check it for us. Instead, <code>Regex::new</code> returns a <code>Result</code>, and our <code>expect</code> turns an invalid pattern into an immediate, clearly-labeled panic rather than silent misbehavior.</p>
<p>This could also live in its own module, <code>ordering/widget_code.rs</code>, and in fact that’s how I would normally do this (and have in the repository where I’m working): every one of these small types would get its own module file within the containing <code>Ordering</code> module. It’s not <em>necessary</em>, but as the domain model grows, it becomes increasingly <em>convenient</em> in that you always know where to find things.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>Then we can import it and use it like this in <code>ordering/mod.rs</code>:</p>
<pre class="rust"><code>mod widget_code;
mod gizmo_code;
use widget_code::WidgetCode;
use gizmo_code::GizmoCode;
pub enum ProductCode {
Widget(WidgetCode),
Gizmo(GizmoCode),
}
fn demo_it() {
let valid = WidgetCode::create("W1234");
let invalid = WidgetCode::create("wat");
let unwrapped = match valid {
Ok(ref code) => code.value(),
Err(_) => "",
};
}</code></pre>
<p>Notice that in Rust, the <code>mod.rs</code> file declares all child modules. If you had a <code>widget_code.rs</code> on the file system but no <code>mod widget_code;</code>, Rust would just ignore the file entirely. We then <code>use</code> items from the module (here, <code>use widget_code::WidgetCode;</code>) to bring them into scope. The distinction between declaring and using a given module makes some sense: by the time all is said and done with this exercise, we won’t be doing much of anything in this <code>Ordering</code> module; it’ll exist primarily as a grouping construct for all the <em>other</em> modules.</p>
<p>In this case, we go ahead and import the <code>WidgetCode</code> type from the module. We only have the one type there, with no standalone functions: everything is attached to the type via the <code>impl</code> block; so we can just call everything directly off of the type. This ends up feeling <em>kind of</em> like the way we’d do things in a traditional OOP language, but also <em>really not</em>, because we still have a separation between the data type and the implementation of functionality attached to it. It’s not obvious <em>here</em>, but we could write <code>impl WidgetCode</code> in some <em>other</em> module in the crate, and as long as there’s no conflict between the implementations, it’s fine! And then we could call whatever function we defined in <em>that</em> block “on” <code>WidgetCode</code>. This is on the one hand <em>totally</em> unlike what we’ll see in the other languages, and on the other hand <em>weirdly analogous</em> to them.</p>
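<p>To make the “second <code>impl</code> block” point concrete, here is a minimal, self-contained sketch. The <code>reporting</code> module and its <code>describe</code> helper are hypothetical, and <code>create</code> skips the regex validation so the example needs no external crates:</p>

```rust
mod widget_code {
    // Same newtype shape as in the post; validation is elided here
    // so the sketch has no external dependencies.
    pub struct WidgetCode(String);

    impl WidgetCode {
        pub fn create(code: &str) -> WidgetCode {
            WidgetCode(String::from(code))
        }

        pub fn value(&self) -> &str {
            &self.0
        }
    }
}

mod reporting {
    use super::widget_code::WidgetCode;

    // A second inherent `impl` block for the same type, in a different
    // module of the same crate. It can only touch the type's public API,
    // since the wrapped String is private to `widget_code`.
    impl WidgetCode {
        pub fn describe(&self) -> String {
            format!("widget {}", self.value())
        }
    }
}

fn main() {
    let code = widget_code::WidgetCode::create("W1234");
    // The method defined over in `reporting` is callable here as if it
    // had been written alongside the type.
    assert_eq!(code.describe(), "widget W1234");
    println!("{}", code.describe());
}
```

The only hard rule is that inherent <code>impl</code> blocks must live in the same <em>crate</em> as the type; the module is up to you.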
<p>I’m going to pass over why we need <code>ref code</code> here, as it gets into details of Rust’s model of ownership and reference borrowing <em>and</em> it’s going to be unneeded because of improvements to Rust’s compiler fairly soon. The one thing to note here is that we get nice memory/allocation behavior, i.e. we’re not doing a bunch of separate heap string allocations here. This is one of the big upsides to Rust in general! It’s not quite as pretty as what we’ll see below, but the performance wins are awesome.</p>
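<p>(For the curious: the compiler improvement in question is most likely “match ergonomics,” stabilized in Rust 1.26. With it, you can match on a <em>reference</em> to the <code>Result</code> and let the compiler pick the binding mode, so the explicit <code>ref</code> disappears. A small sketch, with validation again elided for self-containedness:)</p>

```rust
pub struct WidgetCode(String);

impl WidgetCode {
    pub fn value(&self) -> &str {
        &self.0
    }
}

fn main() {
    let valid: Result<WidgetCode, String> =
        Ok(WidgetCode(String::from("W1234")));

    // Matching on `&valid` makes the compiler bind `code` as
    // `&WidgetCode` automatically: no explicit `ref`, and `valid`
    // is borrowed rather than moved.
    let unwrapped = match &valid {
        Ok(code) => code.value(),
        Err(_) => "",
    };

    assert_eq!(unwrapped, "W1234");
    println!("{}", unwrapped);
}
```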
</section>
<section id="elm" class="level3">
<h3>Elm</h3>
<p>Elm introduces us to a pattern we’ll see in each of the more traditional “functional” languages: the use of <em>modules</em> for this kind of structure. First the code, then some comments on it:</p>
<pre class="elm"><code>-- src/ordering/WidgetCode.elm
module Ordering.WidgetCode exposing (WidgetCode, create, value)
import Regex exposing (contains, regex)
type WidgetCode
= WidgetCode String
create : String -> Result String WidgetCode
create code =
if contains (regex "W\\d{4}") code then
Ok (WidgetCode code)
else
Err "`WidgetCode` must begin with a 'W' and be followed by 4 digits"
value : WidgetCode -> String
value (WidgetCode code) =
code</code></pre>
<p>Elm’s module system lets you choose exactly what to expose. In this case, we’re only exporting the type itself along with the <code>create</code> and <code>value</code> functions—but, importantly, <em>not</em> the normal type constructors for the type.</p>
<p>You can import the things exposed both as a module and as individual items. Assume we implemented <code>GizmoCode</code> the same way. We’d import and use them in <code>Ordering.elm</code> like this:</p>
<pre class="elm"><code>-- Ordering.elm
import Ordering.WidgetCode as WidgetCode exposing (WidgetCode)
import Ordering.GizmoCode as GizmoCode exposing (GizmoCode)
type ProductCode
= Widget WidgetCode
| Gizmo GizmoCode
valid =
WidgetCode.create "W1234"
invalid =
WidgetCode.create "wat"
unwrapped =
case valid of
Result.Ok code ->
WidgetCode.value (code)
Result.Err _ ->
""</code></pre>
<p>As with Rust, we can’t construct the type without using the provided function. As I’ve written the imports, you’d create a <code>WidgetCode</code> by writing <code>WidgetCode.create "W1234"</code>. You could also import it directly, but that would have its own problems once you had the <code>create</code> function imported for <code>GizmoCode</code> as well.</p>
<p>Finally, notice the way we aliased the module name here with <code>as</code> on the import: we don’t have to write out the fully qualified path this way. And there’s no conflict between the aliased module name and the type name – they live in their own namespaces (as it should be!). Importing the type name distinctly is handy because it means we don’t have to write the body of the union type out as <code>Widget WidgetCode.WidgetCode</code>.</p>
</section>
<section id="f" class="level3">
<h3>F<sup>♯</sup></h3>
<p>The F<sup>♯</sup> code looks a <em>lot</em> like the Elm code. The main differences here have to do with their module systems.</p>
<pre class="fsharp"><code>namespace Ordering
open System.Text.RegularExpressions
type WidgetCode = private WidgetCode of string
module WidgetCode =
let create code =
if Regex.IsMatch(code, @"W\d{4}") then
Ok (WidgetCode code)
else
Error "`WidgetCode` must begin with a 'W' and be followed by 4 digits"
let value (WidgetCode code) = code</code></pre>
<p>Here we declare that we’re in the <code>namespace Ordering</code>. Everything here will be publicly visible to everything <em>else</em> in the <code>namespace Ordering</code>. We could also make this a <code>module</code>, and in that case we’d need to explicitly open it in other modules. Because it’s part of the base namespace we’re using for <code>Ordering</code>, though, we get it for “free”. There’s a downside to this, though. More on that below.</p>
<p>Also notice that this means that we have yet one more “namespace” for names to live in: <code>namespace</code> names are different from <code>module</code> names, which are different from type names! So we declare a <code>module</code> within the namespace so that we can actually write code that <em>does something</em> in the file—<code>namespace</code>s can only contain type definitions (including <code>module</code> definitions).</p>
<pre class="fsharp"><code>namespace Ordering
type ProductCode =
| Widget of WidgetCode
| Gizmo of GizmoCode
module DemoIt =
let valid = WidgetCode.create "W1234"
let invalid = WidgetCode.create "wat"
let unwrapped =
match valid with
| Ok(code) -> WidgetCode.value code
| Error(_) -> ""</code></pre>
<p>The things to notice here as particularly different from the others:</p>
<ol type="1">
<li>We don’t have to explicitly import the module names, because we used the same namespace (<code>Ordering</code>) to group them. We could also have done <code>namespace Ordering.WidgetCode</code> and <code>open Ordering.WidgetCode</code>; that could work just as well in context. I <em>think</em> the namespace approach is probably more idiomatic, however, which is why I picked it.</li>
<li>Since we’re keeping the rest of the containing module in the same namespace, we <em>do</em> have to declare <code>module DemoIt</code> for functionality – not just types – to live in. This is true for both <code>Ordering.fs</code> and <code>WidgetCode.fs</code> and so on.</li>
</ol>
<p>This way of structuring things works really well, but it has one major downside compared to Elm and Rust: where any given name comes from is <em>not</em> obvious from any given text file. Using modules instead of namespaces and using more fully qualified names <em>could</em> help here, but the reality is simply that F<sup>♯</sup> (like C<sup>♯</sup>) basically hangs you out to dry here. My take is that this is basically what happens when you design a language <em>assuming</em> IDE-like tooling. But especially when looking at e.g. GitHub diff views, or just browsing source code in general, I strongly prefer the way Elm and Rust generally lead you to do explicit imports or fully qualified paths. (Both have an escape hatch: Rust’s <code>use path::to::module::*;</code> and Elm’s <code>import Path.To.Module exposing (..)</code>, but both are actively discouraged as bad practice in <em>most</em> situations.)</p>
</section>
<section id="reason" class="level3">
<h3>Reason</h3>
<p>Interestingly, Reason <em>looks</em> most like Rust but <em>behaves</em> most like F<sup>♯</sup>. The biggest difference is that I need a separate <em>interface file</em> for Reason to get the privacy benefits that I’m getting in all the other languages.</p>
<p>We put the definition file at <code>ordering/Ordering_WidgetCode.rei</code>. (I’ll comment on the long name in a moment.)</p>
<pre class="reason"><code>type widgetCode = pri | WidgetCode(string);
let create: string => Js.Result.t(widgetCode, string);
let value: widgetCode => string;</code></pre>
<p>With that module definition in place, we can separately supply the implementation, in <code>ordering/Ordering_WidgetCode.re</code>.</p>
<pre class="reason"><code>type widgetCode =
| WidgetCode(string);
let create = code => {
let isMatch =
Js.Re.fromString("W\\d{4}") |> Js.Re.exec(code) |> Js.Option.isSome;
if (isMatch) {
Js.Result.Ok(WidgetCode(code));
} else {
Js.Result.Error(
"`WidgetCode` must begin with a 'W' and be followed by 4 digits"
);
};
};
let value = (WidgetCode(code)) => code;</code></pre>
<p>Note that you could do the same thing with an interface file for F<sup>♯</sup>. We’re also doing something that’s similar in principle to the use of private types in F<sup>♯</sup>, but unlike in F<sup>♯</sup> we <em>have</em> to use the module interface to make it work as far as I can tell. The <em>interface</em> can declare the type private, but in the actual implementation, the type has to be non-private to be constructable. (If I’m wrong, please send me a note to let me know! But that’s what I gathered from reading OCaml docs, as well as from command line error messages as I played around.) Also, the fact that Reason has landed on the keyword <code>pri</code> instead of OCaml and F<sup>♯</sup>’s much saner <code>private</code> is super weird.</p>
<p>The interface file just defines the types, and has the <code>.rei</code> extension. <code>type widgetCode</code> here is an <em>abstract</em> type, which provides no information about what it contains. Note the function types are provided as well. Here I’m using specifically the <code>Js.Result</code> type; there is also a <code>Result</code> type in at least one of the OCaml standard libraries. This is one of the more complicated things about Reason compared to the others: there are… <em>several</em> standard libraries to choose from, which will or won’t work differently depending on what compile target you’re picking.</p>
<p>In any case, once we have both the module and the implementation defined, we can use it like this in <code>ordering.re</code>:</p>
<pre class="reason"><code>module WidgetCode = Ordering_WidgetCode;
module GizmoCode = Ordering_GizmoCode;
open WidgetCode;
open GizmoCode;
type productCode =
| Widget(widgetCode)
| Gizmo(gizmoCode);
let valid = WidgetCode.create("W1234");
let invalid = WidgetCode.create("wat");
let unwrapped =
switch valid {
| Js.Result.Ok(code) => WidgetCode.value(code)
| Js.Result.Error(_) => ""
};</code></pre>
<p>We do this mapping from <code>Ordering_WidgetCode</code> to <code>WidgetCode</code> here because OCaml (and therefore Reason) has only a single global namespace for its module names as defined by the file system. You can nest modules, but only <em>within</em> files. The workaround is, well… <code>Ordering_</code> and remapping the name as we have here. This lets you access the nested modules as <code>Ordering.WidgetCode</code> and so on elsewhere.</p>
<p>Then we <code>open WidgetCode</code> etc. so that we can write <code>widgetCode</code> instead of <code>WidgetCode.widgetCode</code> in the <code>productCode</code> definition. This is basically the same effect we get from just being in the same <code>namespace</code> in F<sup>♯</sup> (which, again, we could rewrite exactly this way), or from the kinds of imports we discussed above for Rust and Elm.</p>
</section>
</section>
<section id="numeric-validation-unitquantity" class="level2">
<h2>Numeric validation: <code>UnitQuantity</code></h2>
<p>So far, the showing tilts <em>heavily</em> in F<sup>♯</sup>’s and Elm’s favor in terms of expressiveness and elegance. However, there’s a lot of variation depending on exactly what you’re doing. If, for example, you want to validate a <em>range</em>, well… then Rust actually has a pretty good approach! Once again, you’ll note that these all have a lot in common; the difference mostly comes down to the degree of syntactical noise required to express the same basic thing.</p>
<p>In this section, I’m not really going to spend a lot of time discussing the details and differences; I’m just leaving it here to show an interesting example where the languages’ design decisions end up having slightly different ergonomic tradeoffs.</p>
<section id="rust-1" class="level3">
<h3>Rust</h3>
<pre class="rust"><code>// ordering/unit_quantity.rs
pub struct UnitQuantity(u32);
impl UnitQuantity {
pub fn create(qty: u32) -> Result<UnitQuantity, String> {
match qty {
0 => Err(String::from("`UnitQuantity` cannot be less than 1")),
1..=1000 => Ok(UnitQuantity(qty)),
_ => Err(String::from("`UnitQuantity` cannot be greater than 1000")),
}
}
pub fn value(&self) -> u32 {
self.0
}
pub fn minimum() -> UnitQuantity {
UnitQuantity(1)
}
}</code></pre>
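<p>A quick sketch (not from the book) of consuming this API, reproducing the type above so it runs standalone:</p>

```rust
pub struct UnitQuantity(u32);

impl UnitQuantity {
    // Smart constructor: only 1..=1000 is a valid quantity.
    pub fn create(qty: u32) -> Result<UnitQuantity, String> {
        match qty {
            0 => Err(String::from("`UnitQuantity` cannot be less than 1")),
            1..=1000 => Ok(UnitQuantity(qty)),
            _ => Err(String::from("`UnitQuantity` cannot be greater than 1000")),
        }
    }

    pub fn value(&self) -> u32 {
        self.0
    }
}

fn main() {
    // In-range values come back wrapped in `Ok`...
    let ok = UnitQuantity::create(500);
    assert_eq!(ok.map(|q| q.value()), Ok(500));

    // ...and out-of-range values are rejected up front, so any
    // `UnitQuantity` we hold later is known-valid.
    assert!(UnitQuantity::create(0).is_err());
    assert!(UnitQuantity::create(1001).is_err());
    println!("ok");
}
```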
</section>
<section id="elm-1" class="level3">
<h3>Elm</h3>
<pre class="elm"><code>-- ordering/UnitQuantity.elm
module Ordering.UnitQuantity exposing (UnitQuantity, create, value, minimum)
type UnitQuantity
= UnitQuantity Int
create : Int -> Result String UnitQuantity
create qty =
if qty < 1 then
Err "`UnitQuantity` cannot be less than 1"
else if qty > 1000 then
Err "`UnitQuantity` cannot be greater than 1000"
else
Ok (UnitQuantity qty)
value : UnitQuantity -> Int
value (UnitQuantity qty) =
qty
minimum : UnitQuantity
minimum = UnitQuantity 1</code></pre>
</section>
<section id="f-1" class="level3">
<h3>F<sup>♯</sup></h3>
<pre class="fsharp"><code>// ordering/UnitQuantity.fs
namespace Ordering
type UnitQuantity = private UnitQuantity of uint32
module UnitQuantity =
let create qty =
if qty < 1u then
Error "`UnitQuantity` cannot be less than 1"
else if qty > 1000u then
Error "`UnitQuantity` cannot be greater than 1000"
else
Ok (UnitQuantity qty)
let value (UnitQuantity qty) = qty
let minimum = UnitQuantity 1</code></pre>
</section>
<section id="reason-1" class="level3">
<h3>Reason</h3>
<pre class="reason"><code>/* ordering/Ordering_UnitQuantity.rei */
type unitQuantity = pri | UnitQuantity(int);
let create: int => Js.Result.t(unitQuantity, string);
let value: unitQuantity => int;
let minimum: unitQuantity;</code></pre>
<pre class="reason"><code>/* ordering/Ordering_UnitQuantity.re */
type unitQuantity =
| UnitQuantity(int);
let create = qty =>
if (qty < 1) {
Js.Result.Error("`UnitQuantity` cannot be less than 1");
} else if (qty > 1000) {
Js.Result.Error("`UnitQuantity` cannot be greater than 1000");
} else {
Js.Result.Ok(UnitQuantity(qty));
};
let value = (UnitQuantity(qty)) => qty;
let minimum = UnitQuantity(1);</code></pre>
</section>
</section>
<section id="aside-on-documentation" class="level2">
<h2>Aside: On Documentation</h2>
<p>One thing that became <em>extremely</em> clear in the course of working all of this out is that the documentation stories for these languages are in vastly, <em>vastly</em> different places.</p>
<p>Figuring out how to write this private <code>create</code>/<code>value</code> approach was <em>very</em> straightforward in Rust, because it’s literally just right there in how <code>impl</code> blocks and the <code>pub</code> keyword work: things default to private, including the contents of a struct, and you <em>always</em> define the related functionality with <code>pub fn</code> declarations in the related <code>impl</code> block.</p>
<p>Elm and F<sup>♯</sup> were both slightly harder, in that I had to poke around a bit to figure out the right way to do it. But not <em>that</em> much harder. Both use module-level isolation to accomplish this; the main difference there was that F<sup>♯</sup> just lets you do it inline and Elm explicitly ties modules to files.</p>
<p>Reason… was very, <em>very</em> difficult to get sorted out. This is just a function of the state of the ecosystem. Reason is <em>distinct syntax</em> for OCaml, but it also leans on BuckleScript. That means that if you want to figure out how to do anything, you probably need to search in the docs for all of those, and if your answer turns out to come from OCaml then you have to figure out how to translate it back into Reason and BuckleScript! Ultimately, I was able to figure it out and get the project layout to how you see it in the repository, but… it took a lot more digging than with any of the other projects!</p>
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>As with our <a href="http://v4.chriskrycho.com/2018/exploring-4-languages-starting-to-model-the-domain.html">previous foray</a>, we can see a ton of similarities across these languages. All lean heavily on pattern-matching for dealing with different scenarios; all let us make use of a <code>Result</code> type for handling success or failure; all make heavy use of expression-bodied-ness; and all supply <em>some</em> way to make types constructable only in safe/controlled ways.</p>
<p>For Rust, that’s a matter of leaving the internals of a <code>struct</code> private and making <code>pub fn</code> helpers to do the construction and value retrieval. For Elm, F<sup>♯</sup>, and Reason, that’s a matter of having the normal type <em>constructors</em> be private while exposing the types themselves normally. They do that in different ways (F<sup>♯</sup>’s <code>private type</code>, Elm’s <code>exposing</code>, and Reason’s <code>pri</code> annotation on the type variant in a module interface file), but the effect is essentially identical, and functionally equivalent to what we see in Rust.</p>
<p>The main differences we see across Elm, F<sup>♯</sup>, and Reason have to do with the nature of the various module systems. In a lot of ways, Reason’s is the least capable <em>for this specific purpose</em>, because it’s directly tied to OCaml’s module system, which substantially predates any of the others. (I say “in a lot of ways” because OCaml’s modules are surprisingly capable; they end up being their own kind of types and you can do some crazy things with them, all of which I’d like to actually come to understand… eventually.) Rust’s module system, meanwhile, has a lot of similarities to Elm’s in particular, but because we actually carry functions along with the types they <code>impl</code> (though they get defined separately, with all the power that entails), we have a bit less boilerplate we need to write just to get at the specific functions in play.</p>
<p>Next time (probably only a couple of weeks away because we’re working through the book at work in a book club!), I’ll be looking at Chapter 7: Modeling Workflows as Pipelines. I suspect this will be a place where the true functional orientation of Elm, F<sup>♯</sup>, and Reason will much more sharply differentiate them from the sometimes-functionalish-but-not-actually-functional way we write things in Rust.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>This reference will live and be valid as long as the underlying <code>WidgetCode</code> is. We could also return a <code>String</code> if we wanted that value to live independently of the <code>WidgetCode</code> instance backing it.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Putting it in its own module, whether in a separate file or not, <em>does</em> have implications for privacy, though we don’t much care about them in this case. Rust lets us set the privacy on <a href="https://doc.rust-lang.org/1.24.1/reference/visibility-and-privacy.html">a whole spectrum</a>, from “visible everywhere” to “only visible in this specific module.”<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Announcing ember-cli-typescript 1.1.02018-02-12T07:00:00-05:002018-02-12T07:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2018-02-12:/2018/announcing-ember-cli-typescript-110.htmlNow with generators, support for addons, and incremental compilation! A lot has changed in the last six months, and we’re ready to kick the ecosystem into high gear!<p>I’m delighted to announce the release of <a href="https://github.com/typed-ember/ember-cli-typescript/releases/tag/v1.1.0">ember-cli-typescript 1.1.0</a>. This first minor release since 1.0 includes the following shiny and awesome new features:</p>
<ul>
<li><a href="#generators">Generators</a></li>
<li><a href="#developing-addons">Support for developing addons in TypeScript</a></li>
<li><a href="#incremental-compilation">Incremental compilation (a.k.a. fast rebuilds in <code>ember serve</code> mode)</a></li>
</ul>
<section id="generators" class="level2">
<h2>Generators</h2>
<p>We’ve now added support for generating <em>all</em> standard Ember items as TypeScript files instead of JavaScript files. So now when you run <code>ember generate component user-profile</code> for example, you’ll get <code>user-profile.ts</code>, <code>user-profile-test.ts</code>, and <code>user-profile.hbs</code>. For most files, this is just a nicety—just two files you don’t have to rename!—but in the case of services, controllers, and Ember Data models, adapters, and serializers it will actually make a really big difference in your experience of using TypeScript in your app or addon.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>Those generators are <em>mostly</em> identical to the ones in Ember and Ember Data, just with <code>.ts</code> instead of <code>.js</code> for the extension. The only changes we have made are: (a) we’ve tweaked them to use classes where possible, and (b) we have customized the controller, service, and Ember Data model, adapter, and serializer generators so you get the most mileage out of TypeScript for the least effort we can manage today. So when you do <code>ember generate service session</code>, this is what you’ll see:</p>
<pre class="ts"><code>import Service from "@ember/service";

export default class Session extends Service.extend({
  // anything which *must* be merged on the prototype
}) {
  // normal class definition
}

// DO NOT DELETE: this is how TypeScript knows how to look up your services.
declare module "ember" {
  interface ServiceRegistry {
    session: Session;
  }
}</code></pre>
<p>Courtesy of these generators, you can now write <em>almost</em> exactly what you’d write in vanilla Ember and get full support for autocompletion of properties and methods on the <code>Session</code> service, as well as type-checking for how you use those. Service and controller injections just require you to explicitly name the service or controller being injected:</p>
<pre class="ts"><code>import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default class UserProfile extends Component {
  session = service("session");
  // note the string ^ naming the service explicitly
}</code></pre>
<p>So, for example, if your <code>session</code> service had a <code>login</code> method on it:</p>
<pre class="ts"><code>import Service from "@ember/service";
import RSVP from "rsvp";

export default class Session extends Service {
  login(email: string, password: string): RSVP.Promise<string> {
    // some API call to log in
  }
}

// DO NOT DELETE: this is how TypeScript knows how to look up your services.
declare module "ember" {
  interface ServiceRegistry {
    session: Session;
  }
}</code></pre>
<p>Then anywhere you injected and used it, you’ll get auto-complete suggestions and type checking:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/ts-autocomplete.png" alt="autocompletion" /><figcaption>autocompletion</figcaption>
</figure>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/ts-type-checking.png" alt="type-checking" /><figcaption>type-checking</figcaption>
</figure>
<p>(You’ll see the same kinds of things in other editors, from Vim to IntelliJ IDEA. Visual Studio Code is just my current editor of choice.)</p>
</section>
<section id="addon-development" class="level2">
<h2>Addon development</h2>
<p>As <a href="http://v4.chriskrycho.com/2017/announcing-ember-cli-typescript-100.html#the-roadmap">promised with the 1.0 release</a>, 1.1 (though arriving much later than I hoped it would) includes support for developing addons with TypeScript.</p>
<p>Strictly speaking, of course, you could <em>always</em> develop addons using TypeScript, but there were two problems with it: (1) dependency management and (2) manual work required to deal with the dependency management problems.</p>
<section id="dependency-management" class="level3">
<h3>1. Dependency management</h3>
<p>In the normal Ember CLI workflow, TypeScript had to be a <code>dependency</code>—not a <code>devDependency</code>—of the addon, because the normal pattern with Ember CLI is to ship the uncompiled files and have the consumer compile them all together at build time.</p>
<p>This makes a certain amount of sense for Babel given the Ember community’s shared reliance on Babel: it’s just assumed to be part of every app build. In that case, it gives consumers control over their compilation target. If an app only needs to target evergreen browsers, it can do that and ship a smaller payload, because an addon won’t have pre-compiled in things like generator support, etc.</p>
<p>In the case of TypeScript, however, this makes a lot less sense: many (probably <em>most</em>) consumers of addons written in TypeScript will still be normal JavaScript consumers. We did not want to burden normal consumers with a TypeScript compile step. We <em>also</em> didn’t want to burden any consumers with the reality that TypeScript is a <em>large</em> install. TypeScript 2.6.2 is 32MB on disk for me. Even with some degree of deduplication by npm or yarn, if addons used a variety of versions of TypeScript for development—as they surely would!—the install cost for consumers would quickly spiral into a nasty spot. And again: that’s bad enough for someone who <em>wants</em> to use TypeScript in their app; it’s far worse for someone who just wants to consume the compiled JavaScript.</p>
</section>
<section id="manual-workarounds" class="level3">
<h3>2. Manual workarounds</h3>
<p>You could work around all of that by building the JavaScript (and TypeScript definitions) yourself. But as part of that, you had to do all the work of making sure both the JavaScript files and the type definitions you generated ended up in the right place for distribution and consumption. That was always possible, but it was also always going to be a lot of work. In practice, as far as I know, <em>no one has done this</em>.</p>
</section>
<section id="solution" class="level3">
<h3>Solution</h3>
<p>We now support TypeScript as a <code>devDependency</code> and also manage the work of generating JavaScript and type definitions for you. All you have to do is install ember-cli-typescript into an addon, and then when you do your build step, we’ll automatically do the work (on prepublish) of generating TypeScript <code>.d.ts</code> files and JavaScript source for you.</p>
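<p>Under the hood, generating those <code>.d.ts</code> files relies on TypeScript’s standard declaration emit. Conceptually, it comes down to compiler options along these lines (an illustrative fragment, not the addon’s actual configuration):</p>

```json
{
  "compilerOptions": {
    "declaration": true,
    "outDir": "dist"
  }
}
```

<p>With <code>declaration</code> enabled, <code>tsc</code> writes a <code>.d.ts</code> file alongside each emitted JavaScript file, which is what lets plain-JavaScript consumers ignore TypeScript entirely while TypeScript consumers still get full types.</p>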
<p>Consumers of your addon, therefore, will (a) not know or care that the addon is written in TypeScript if they just want to consume it as normal JavaScript<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> or (b) will get the benefits of your having written the library in TypeScript without paying the penalty of having to have multiple versions of the TypeScript compiler downloaded to their own app.</p>
<p>One important caveat: we do <em>not</em> support TypeScript in an addon’s <code>app</code> directory. However, for most addons, we don’t think this should be a problem. It’s rare for addons to put actual implementation in the <code>app</code> directory; instead it has simply become conventional for the <code>app</code> directory to have re-exports for convenient access to the functionality supplied by the addon.</p>
<p>Also note that you can supply type definitions for your addon <em>without</em> developing the addon itself in TypeScript.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> You do <em>not</em> need ember-cli-typescript installed for that. You only need the addon if you actually want to take advantage of the opportunities TypeScript affords for developing your own addon.</p>
</section>
</section>
<section id="incremental-compilation" class="level2">
<h2>Incremental compilation</h2>
<p>Last but not least, we’ve managed—mostly through the hard work of both Dan Freeman (<a href="https://github.com/dfreeman">@dfreeman</a>) and Derek Wickern (<a href="https://github.com/dwickern">@dwickern</a>)—to get support for TypeScript’s <code>--watch</code> mode integrated.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> What this means in practice is: <em>way</em> faster iteration as you work.</p>
<p>Previously, every time you triggered <em>any</em> change in your app (even if it didn’t involve any TypeScript files at all), the TypeScript compiler would recompile <em>all</em> the TypeScript files in your application. We didn’t initially have a good way to make TypeScript and Broccoli (and therefore Ember CLI) communicate clearly about what had changed. Now, courtesy of Dan and Derek’s hard work (and my cheerleading, testing, and fixing a few corner pieces along the way), we do! So when you change a <code>.hbs</code> file or a <code>.js</code> file… the TypeScript compiler won’t do anything. And when you change a TypeScript file, the TypeScript compiler will <em>only</em> recompile that file.</p>
<p>On my own app (~35,000 lines of TypeScript across ~700 files), that’s the difference between rebuilds involving TypeScript taking 15–20 seconds and their taking 1–2 seconds. Literally an order of magnitude faster! Over the course of a day of development, that saves a <em>huge</em> amount of time.</p>
<p>The way we did it also solved an incredibly annoying problem we had in the previous pass: <em>any</em> change to your app was triggering <code>tsc</code> to rebuild the entire TypeScript tree of your app, even if you didn’t so much as look at a <code>.ts</code> file. This was particularly annoying when combined with the long rebuild times: change a CSS file and wait for your TypeScript files to rebuild? Ugh. But not anymore!</p>
</section>
<section id="credit-and-thanks" class="level2">
<h2>Credit and Thanks</h2>
<p>Massive credit goes to Dan Freeman (<a href="https://github.com/dfreeman">@dfreeman</a>) and Derek Wickern (<a href="https://github.com/dwickern">@dwickern</a>), who did most of the heavy lifting on the internals for this release, and together unlocked both incremental compilation and addon support. Derek also did the lion’s share of the work on writing the types for Ember and Ember Data.</p>
<p>Thanks to Maarten Veenstra (<a href="https://github.com/maerten">@maerten</a>) for the original inspiration (and a spike last summer) for using a type registry, and to Mike North (<a href="https://github.com/mike-north">@mike-north</a>) for some discussion and planning around the idea late in 2017. I may have implemented them, but the ideas came from the community!</p>
<p>Thanks to Frank Tan (<a href="https://github.com/tansongyang">@tansongyang</a>) for doing a lot of the work on porting the generators from the Ember and Ember Data repositories to ember-cli-typescript, as well as converting them to TypeScript and to use the new formats. He also contributed the type definitions for the new (<a href="https://github.com/emberjs/rfcs/pull/232/">RFC #232</a>) QUnit testing API.</p>
<p>Thanks to everyone who contributed to ember-cli-typescript or the Ember typings in any way since we released 1.0.0:</p>
<ul>
<li><p>ember-cli-typescript contributors (note that I intentionally include here everyone who opened issues on the repository: that is <em>not</em> a small thing and has helped us immensely):</p>
<ul>
<li>Bryan Crotaz (<a href="https://github.com/BryanCrotaz">@BryanCrotaz</a>)</li>
<li>Daniel Gratzl (<a href="https://github.com/danielgratzl">@danielgratzl</a>)</li>
<li>Guangda Zhang (<a href="https://github.com/inkless">@inkless</a>)</li>
<li><a href="https://github.com/guangda-prosperworks">@guangda-prosperworks</a></li>
<li>Krati Ahuja (<a href="https://github.com/kratiahuja">@kratiahuja</a>)</li>
<li>Martin Feckie (<a href="https://github.com/mfeckie">@mfeckie</a>)</li>
<li>Nikos Katsikanis (<a href="https://github.com/QuantumInformation">@QuantumInformation</a>)</li>
<li>Per Lundberg (<a href="https://github.com/perlun">@perlun</a>)</li>
<li>Prabhakar Poudel (<a href="https://github.com/prabhakar-poudel">@Prabhakar-Poudel</a>)</li>
<li>Ryan LaBouve (<a href="https://github.com/ryanlabouve">@ryanlabouve</a>)</li>
<li>Simon Ihmig (<a href="https://github.com/simonihmig">@simonihmig</a>)</li>
<li>Theron Cross (<a href="https://github.com/theroncross">@theroncross</a>)</li>
<li>Thomas Gossman (<a href="https://github.com/gossi">@gossi</a>)</li>
<li>Vince Cipriani (<a href="https://github.com/vcipriani">@vcipriani</a>)</li>
</ul></li>
<li><p>Ember typings contributors:</p>
<ul>
<li>Adnan Chowdhury (<a href="https://github.com/bttf">@bttf</a>)</li>
<li>Derek Wickern (<a href="https://github.com/dwickern">@dwickern</a>)</li>
<li>Frank Tan (<a href="https://github.com/tansongyang">@tansongyang</a>)</li>
<li>Guangda Zhang (<a href="https://github.com/inkless">@inkless</a>)</li>
<li>Ignacio Bona Piedrabuena (<a href="https://github.com/igbopie">@igbopie</a>)</li>
<li>Leonard Thieu (<a href="https://github.com/leonard-thieu">@leonard-thieu</a>)</li>
<li>Logan Tegman (<a href="https://github.com/ltegman">@ltegman</a>)</li>
<li>Martin Feckie (<a href="https://github.com/mfeckie">@mfeckie</a>)</li>
<li>Mike North (<a href="https://github.com/mike-north">@mike-north</a>)</li>
<li>Nathan Jacobson (<a href="https://github.com/natecj">@natecj</a>)</li>
<li>Per Lundberg (<a href="https://github.com/perlun">@perlun</a>)</li>
<li>Robin Ward (<a href="https://github.com/eviltrout">@eviltrout</a>)</li>
</ul></li>
</ul>
<p>Thanks to Rob Jackson (<a href="https://github.com/rwjblue">@rwjblue</a>) and Tobias Bieniek (<a href="https://github.com/Turbo87">@Turbo87</a> on GitHub, @tbieniek in the Ember Slack) for answering tons of questions and putting up with regular pestering about Ember CLI.</p>
<p>And last but not least, thanks to everyone who’s popped into #topic-typescript on the Ember Community Slack with questions, comments, problem reports, and the occasional word of encouragement. It really does help.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>For details on how this all works, see <a href="http://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html">TypeScript and Ember.js Update: Part 4</a>, where I discuss these changes in detail.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>although they may actually get some benefits in a number of modern editors, since e.g. VS Code and the JetBrains IDEs will leverage types if they exist!<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>More on that in a post to be released in the next couple weeks—one I promised <em>long</em> ago, but which we’re now in a place to actually do: a plan and a roadmap for typing the whole Ember ecosystem!<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>And of course, right as we finally landed our support for it, by hacking around the <code>--watch</code> invocation in a lot of really weird ways, Microsoft shipped API-level support for it. We hope to switch to using that under the hood, but that shouldn’t make any difference at all to you as a consumer of the addon, except that if/when we land it at some point, you’ll just have a nicer experience.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
TypeScript and Ember.js Update, Part 42018-02-08T07:30:00-05:002018-07-10T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-02-08:/2018/typing-your-ember-update-part-4.htmlUsing Ember Data effectively, and migrating to new (better, easier!) approaches for service and controller lookup while we’re at it.<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>In the previous posts in this series, I introduced the big picture of how the story around TypeScript and Ember.js has improved over the last several months, walked through some important background on class properties, and dug deep on computed properties, actions, and mixins.</p>
<p>In today’s post, we’ll look at how to write Ember Data models so they work correctly throughout your codebase, and see some improvements to how we can do <code>Service</code> and <code>Controller</code> injections even from a few weeks ago.</p>
<aside>
<p>If you’re interested in all of this and would like to learn more in person, I’m <a href="http://emberconf.com/speakers.html#chris-krycho">leading a workshop on it at EmberConf 2018</a>—I’d love to see you there!</p>
</aside>
<p>Here’s the outline of this update sequence:</p>
<ol type="1">
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-1.html">Overview, normal Ember objects, component arguments, and injections.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">Class properties—some notes on how things differ from the <code>Ember.Object</code> world.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html">Computed properties, actions, mixins, and class methods.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html"><strong>Using Ember Data, and service and controller injections improvements.</strong> (this post)</a></li>
<li>Mixins and proxies; or: the really hard-to-type-check bits.</li>
</ol>
<section id="ember-data" class="level2">
<h2>Ember Data</h2>
<p>There remains one significant challenge to using Ember Data effectively with TypeScript today: Ember Data, for reasons I haven’t yet dug into myself, does not play nicely with ES6 classes. However, we <em>need</em> named class exports for the sake of being able to use them as types elsewhere in our programs. The hack to work around this is much the same as anywhere else we need named exports but have to get things back into the prototype:</p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Model.extend({
  firstName: DS.attr("string"),
  lastName: DS.attr("string")
}) {}</code></pre>
<p>You can still define other items of the class normally, but attributes have to be prototypally bound or <em>you will have problems</em>. Note that this only applies (as far as I can tell) to Ember Data <code>Model</code>s specifically—<code>Adapter</code> and <code>Serializer</code> classes work just fine.</p>
<p>The other problem we’ve historically had was dealing with lookups—the situation was similar to the one I described in <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html">Part 3</a> for service injection. However, as of <em>this week</em>, we’re landing a solution that means you can drop the type coercions and just do a lookup like you would normally, and it will Just Work™️.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> Keep your eyes open for the ember-cli-typescript 1.1 release in the next couple days!</p>
<p>Once this release of both ember-cli-typescript and the updated typings land, when you generate an Ember Data model by doing <code>ember generate model person firstName:string lastName:string</code>, it will look like this:</p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Model.extend({
  firstName: DS.attr("string"),
  lastName: DS.attr("string")
}) {
  // normal class body definition here
}

// DO NOT DELETE: this is how TypeScript knows how to look up your models.
declare module "ember-data" {
  interface ModelRegistry {
    person: Person;
  }
}</code></pre>
<p>That module and interface declaration at the bottom <em>merges</em> the declaration for this model with the declarations for all the other models. You’ll see the same basic pattern for <code>DS.Adapter</code> and <code>DS.Serializer</code> instances. The result is that <em>using</em> a model will now look like this. In addition to the <code>Person</code> model definition just above, our adapter might be like this:</p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Adapter {
  update(changes: { firstName?: string; lastName?: string }) {
    fetch("the-url-to-change-it", {
      method: "POST",
      body: JSON.stringify(changes)
    });
  }
}

declare module "ember-data" {
  interface AdapterRegistry {
    person: Person;
  }
}</code></pre>
<p>Then putting the pieces together, our component definition will just look like this:</p>
<aside>
<p><strong>Note:</strong> please see the <a href="https://v4.chriskrycho.com/2018/ember-ts-class-properties.html">update about class properties published mid-2018</a>. The examples below are incorrect in several important ways.</p>
</aside>
<pre class="ts"><code>import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default class PersonCard extends Component {
  id: string | number;

  store = service("store");

  model = this.store.findRecord("person", this.id);

  actions = {
    savePerson(changes: { firstName?: string; lastName?: string }) {
      this.store.adapterFor("person").update(changes);
    }
  };
}</code></pre>
<p>The type of <code>model</code> here is now <code>Person & DS.PromiseObject<Person></code> (which is actually what Ember Data returns for these kinds of things!), and the <code>this.store.adapterFor</code> actually correctly returns the <code>Person</code> adapter as well, so the call to its <code>update</code> method type-checks as well (including guaranteeing that the arguments to it are correct). That also means you’ll get autocompletion for those, including for their types, if you’re using an editor configured for it. And, happily for everyone, if you mistype a string (<code>preson</code> instead of <code>person</code>, for example), you’ll get a compile-time error!</p>
<p>Notice as well that the service injection is much cleaner than it was in earlier examples in the series. That’s because we made the same “registry”-type changes—as I suggested we might back in <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-1.html">Part 1</a>!—for controller and service injections. Before, for this kind of thing:</p>
<pre class="ts"><code>export default class PersonCard extends Component {
  store: Computed<DS.Store> = service();
}</code></pre>
<p>Now:</p>
<pre class="ts"><code>export default class PersonCard extends Component {
  store = service("store");
}</code></pre>
<p>That’s not <em>quite</em> as minimalist as what you get in vanilla Ember (where the name of the property is used to do the lookup at runtime), but it’s pretty close, and a huge improvement! Not least since it’s <em>exactly</em> as type-checked, and therefore as friendly to autocomplete/IntelliSense/etc. as it was before.</p>
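<p>The mechanism behind these string-keyed lookups is an ordinary TypeScript pattern: a type parameter constrained to the keys of a registry interface. Here is a minimal standalone sketch of the idea; the registry contents and the <code>service</code> implementation are illustrative, not Ember’s actual internals:</p>

```typescript
// A registry interface maps string keys to types. In Ember this is built up
// by declaration merging; here it is just declared directly for illustration.
interface ServiceRegistry {
  session: { isAuthenticated: boolean };
  "external-logging": { log(message: string): string };
}

// A plain object standing in for the container's actual lookups.
const registry: ServiceRegistry = {
  session: { isAuthenticated: false },
  "external-logging": { log: (message) => `logged: ${message}` },
};

// Because K is constrained to the registry's keys, the return type is
// inferred from the string argument, and an invalid key is a compile error.
function service<K extends keyof ServiceRegistry>(name: K): ServiceRegistry[K] {
  return registry[name];
}

const session = service("session"); // inferred: { isAuthenticated: boolean }
const logger = service("external-logging"); // inferred: { log(...): string }
// service("sesion"); // compile error: not a key of ServiceRegistry
```

<p>The same shape is what lets <code>findRecord("person", …)</code> and <code>adapterFor("person")</code> return the right types from a plain string.</p>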
<section id="migrating-existing-items" class="level3">
<h3>Migrating existing items</h3>
<p>Your path forward for using the new approach is straightforward and fairly mechanical:</p>
<ol type="1">
<li>Add the module-and-interface declaration for each Ember Data <code>Model</code>, <code>Adapter</code>, and <code>Serializer</code>; and also each Ember <code>Service</code> and <code>Controller</code> you have defined.</li>
<li>Remove any type coercions you’ve written out already for these.</li>
</ol>
<section id="add-declaration" class="level4">
<h4>1. Add declaration</h4>
<section id="ds.model" class="level5">
<h5><code>DS.Model</code></h5>
<p><strong>Before:</strong></p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Model.extend({
  firstName: DS.attr("string"),
  lastName: DS.attr("string")
}) {}</code></pre>
<p><strong>Now:</strong></p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Model.extend({
  firstName: DS.attr("string"),
  lastName: DS.attr("string")
}) {}

declare module "ember-data" {
  interface ModelRegistry {
    person: Person;
  }
}</code></pre>
</section>
<section id="ds.adapter" class="level5">
<h5><code>DS.Adapter</code></h5>
<p><strong>Before:</strong></p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Adapter {
  // customization
}</code></pre>
<p><strong>Now:</strong></p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Adapter {
  // customization
}

declare module "ember-data" {
  interface AdapterRegistry {
    person: Person;
  }
}</code></pre>
</section>
<section id="ds.serializer" class="level5">
<h5><code>DS.Serializer</code></h5>
<p><strong>Before:</strong></p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Serializer {
  // customization
}</code></pre>
<p><strong>Now:</strong></p>
<pre class="ts"><code>import DS from "ember-data";

export default class Person extends DS.Serializer {
  // customization
}

declare module "ember-data" {
  interface SerializerRegistry {
    person: Person;
  }
}</code></pre>
</section>
<section id="service" class="level5">
<h5><code>Service</code></h5>
<p><strong>Before:</strong></p>
<pre class="ts"><code>import Service from "@ember/service";

export default class ExternalLogging extends Service {
  // implementation
}</code></pre>
<p><strong>Now:</strong></p>
<pre class="ts"><code>import Service from "@ember/service";

export default class ExternalLogging extends Service {
  // implementation
}

declare module "ember" {
  interface ServiceRegistry {
    "external-logging": ExternalLogging;
  }
}</code></pre>
</section>
<section id="controller" class="level5">
<h5><code>Controller</code></h5>
<p><strong>Before:</strong></p>
<pre class="ts"><code>import Controller from "@ember/controller";

export default class Profile extends Controller {
  // implementation
}</code></pre>
<p><strong>Now:</strong></p>
<pre class="ts"><code>import Controller from "@ember/controller";

export default class Profile extends Controller {
  // implementation
}

declare module "@ember/controller" {
  interface ControllerRegistry {
    profile: Profile;
  }
}</code></pre>
<p>If you <em>don’t</em> add the type registry declarations, here’s what you’ll get instead:</p>
<ul>
<li><p><em>compiler errors</em> for any use of a string key in your service and controller lookups</p></li>
<li><p><code>Service</code> and <code>Controller</code> (the top-level classes we inherit from) instead of the specific class you created if you use the no-argument version of the <code>inject</code> helpers</p></li>
<li><p><em>compiler errors</em> for <code>DS.Model</code>, <code>DS.Adapter</code>, and <code>DS.Serializer</code> lookups (since they always have a string key)</p></li>
</ul>
<p>If you’re looking to allow your existing code to all just continue working while you <em>slowly</em> migrate to TypeScript, you can add this as a fallback somewhere in your own project (adapted to whichever of the registries you need):</p>
<pre class="ts"><code>declare module "ember-data" {
  interface ModelRegistry {
    [key: string]: DS.Model;
  }
}</code></pre>
<p>This will lose you the type-checking if you type a key that doesn’t exist, but it means that models you haven’t yet added the type definition for won’t throw compile errors. (We’ve made this fallback opt-in because, if it were the default, you could never get compile-time errors for invalid keys at all.)</p>
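<p>To see that trade-off in isolation, here is a standalone sketch (with hypothetical names) of how a string index signature silences the invalid-key errors a strict registry would catch:</p>

```typescript
// A strict registry only admits the keys it declares.
interface StrictRegistry {
  person: { kind: "person" };
}

// Adding a string index signature makes *every* key acceptable.
interface LooseRegistry {
  person: { kind: "person" };
  [key: string]: { kind: string };
}

function strictKey<K extends keyof StrictRegistry>(key: K): K {
  return key;
}

function looseKey<K extends keyof LooseRegistry>(key: K): K {
  return key;
}

strictKey("person"); // OK
// strictKey("preson"); // compile error: the typo is caught
looseKey("preson"); // compiles fine: the index signature swallows the typo
```

<p>That is exactly why this fallback is opt-in: it trades away typo-catching in exchange for letting not-yet-typed models through.</p>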
</section>
</section>
<section id="remove-any-existing-coercions" class="level4">
<h4>2. Remove any existing coercions</h4>
<p>Now that we have the necessary updates to be able to do these lookups automatically in the compiler, we need to remove any existing type coercions.</p>
<section id="service-and-controller" class="level5">
<h5><code>Service</code> and <code>Controller</code></h5>
<p>This change is really straightforward (and actually just simplifies things a lot!) for <code>Service</code> and <code>Controller</code> injections.</p>
<pre class="diff"><code> import Component from '@ember/component';
 import { inject as service } from '@ember/service';
- import Computed from '@ember/object/computed';
-
- import ExternalLogging from 'my-app/services/external-logging';

 export default class UserProfile extends Component {
-  externalLogging: Computed<ExternalLogging> = service();
+  externalLogging = service('external-logging');

   // other implementation
 }</code></pre>
</section>
<section id="ember-data-1" class="level5">
<h5>Ember Data</h5>
<p>This looks <em>slightly</em> different for the Ember Data side.</p>
<p>If you’ve been using the type coercion forms we shipped as a stopgap, like this—</p>
<pre class="ts"><code>const person = this.store.findRecord<Person>("person", 123);</code></pre>
<p>—you’ll need to drop the type coercion on <code>findRecord<Person></code>, which will give you a type error:</p>
<blockquote>
<p>[ts] Type ‘Person’ does not satisfy the constraint ‘string’.</p>
</blockquote>
<p>This is because, behind the scenes, <code>findRecord</code> still takes a type parameter, but it’s now a string—the name of the model you’re looking up—<em>not</em> the model itself. As such, you should <em>never</em> supply that type parameter yourself; it’s taken care of automatically. As a result, your invocation should just be:</p>
<pre class="ts"><code>const person = this.store.findRecord("person", 123);</code></pre>
</section>
</section>
</section>
<section id="the-full-type-of-lookups" class="level3">
<h3>The full type of lookups</h3>
<p>One last note on Ember Data: calls like <code>findRecord('person', 123)</code> actually return the type <code>Person & DS.PromiseObject<Person></code> – i.e., a type that acts like both the model and a promise wrapping the model. This is, to be sure, <em>weird</em>, but it’s the reality, so that’s what our types give you.</p>
<p>If you find yourself needing to write out that type locally for some reason—e.g. because part of your app deals explicitly with the result of a lookup—you may find it convenient to define a global type alias like this:</p>
<pre class="ts"><code>type Loaded<T> = T & DS.PromiseObject<T>;
const person: Loaded<Person> = this.store.findRecord("person", 123);</code></pre>
<p>Given the new support for getting that type automatically, you shouldn’t <em>normally</em> need that, but it’s convenient if or when you <em>do</em> need it. For example, if a component is passed the result of a <code>Person</code> lookup and needs to be able to treat it as a promise <em>or</em> the model, you could write it like this:</p>
<pre class="ts"><code>import Component from "@ember/component";
export default class PersonDisplay extends Component {
model: Loaded<Person>; // instead of just `model: Person`
}</code></pre>
</section>
<section id="preview-mirage" class="level3">
<h3>Preview: Mirage</h3>
<p>As it turns out, Ember CLI Mirage’s approach is a lot like Ember Data’s (although it’s actually a lot more dynamic!), so I have a very similar approach working in our codebase for doing lookups with Mirage’s database. Sometime in February or March, we hope to get that completed and upstreamed into Mirage itself, so that you can get these exact same benefits when using Mirage to write your tests.</p>
</section>
</section>
<section id="conclusion" class="level2">
<h2>Conclusion</h2>
<p>And that’s pretty much a wrap on Ember Data! The <em>next</em> post you can expect in this series will be a break from nitty-gritty “how to use TS in Ember” posts for a very exciting, closely related announcement—probably tomorrow or Monday! The post after that will be a deep dive into (mostly the limitations of!) writing types for mixins and proxies.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If you’re curious about the mechanics, we’re basically setting up a “type registry” which maps the string keys to the correct model, so that the type of e.g. <code>store.createRecord('some-model', { ... })</code> will do a lookup in an interface which defines a mapping from model name, i.e. <code>some-model</code> here, to the model type, e.g. <code>export default class SomeModel extends DS.Model.extend({ ... }) {}</code>. I’ll write up a full blog post on the mechanics of that sometime soon.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
TypeScript and Ember.js Update, Part 32018-01-25T07:00:00-05:002018-07-10T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-01-25:/2018/typing-your-ember-update-part-3.htmlNow that we know a bit more about how computed properties work, we’ll talk about computed properties, actions, and mixins on the Ember.js side, along with the normal class methods.
<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p><strong>Note:</strong> if you’re following along with this <em>as I publish it</em> in late January 2018, please go back and read the end of <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">Part 2</a>, which I updated substantially yesterday evening to include more material I missed in the first version of that post, but which belonged there and not here.</p>
<p>In the previous posts in this series, I introduced the big picture of how the story around TypeScript and Ember.js has improved over the last several months and walked through some important background on class properties. In this post, I’ll build on that foundation to look closely at computed properties, actions, and mixins.</p>
<aside>
<p>If you’re interested in all of this and would like to learn more in person, I’m <a href="http://emberconf.com/speakers.html#chris-krycho">leading a workshop on it at EmberConf 2018</a>—I’d love to see you there!</p>
</aside>
<p>Here’s the outline of this update sequence:</p>
<ol type="1">
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-1.html">Overview, normal Ember objects, component arguments, and injections.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">Class properties—some notes on how things differ from the <code>Ember.Object</code> world.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html"><strong>Computed properties, actions, mixins, and class methods (this post).</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html">Using Ember Data, and service and controller injections improvements.</a></li>
<li>Mixins and proxies; or: the really hard-to-type-check bits.</li>
</ol>
<section id="a-detailed-example-contd.-computed-properties-mixins-actions-and-class-methods" class="level2">
<h2>A detailed example (cont’d.) – computed properties, mixins, actions, and class methods</h2>
<aside>
<p><strong>Note:</strong> please see the <a href="https://v4.chriskrycho.com/2018/ember-ts-class-properties.html">update about class properties published mid-2018</a>. The example below and in the following posts is incorrect in several important ways.</p>
</aside>
<p>Let’s start by recalling the example Component we’re working through:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed, get } from '@ember/object';
import Computed, { alias, bool } from '@ember/object/computed';
import { inject as service } from '@ember/service';
import { assert } from '@ember/debug';
import { isNone } from '@ember/utils';
import Session from 'my-app/services/session';
import Person from 'my-app/models/person';
export default class AnExample extends Component {
// -- Component arguments -- //
model: Person; // required
modifier?: string; // optional, thus the `?`
// -- Injections -- //
session: Computed<Session> = service();
// -- Class properties -- //
aString = 'this is fine';
aCollection: string[] = [];
// -- Computed properties -- //
// TS correctly infers computed property types when the callback has a
// return type annotation.
fromModel = computed(
'model.firstName',
function(this: AnExample): string {
return `My name is ${get(this.model, 'firstName')};`;
}
);
aComputed = computed('aString', function(this: AnExample): number {
return this.aString.length;
});
isLoggedIn = bool('session.user');
savedUser: Computed<Person> = alias('session.user');
actions = {
addToCollection(this: AnExample, value: string) {
const current = this.get('aCollection');
this.set('aCollection', current.concat(value));
}
};
constructor() {
super();
assert('`model` is required', !isNone(this.model));
this.includeAhoy();
}
includeAhoy(this: AnExample) {
if (!this.get('aCollection').includes('ahoy')) {
this.set('aCollection', this.get('aCollection').concat('ahoy'));
}
}
}</code></pre>
<section id="computed-properties" class="level3">
<h3>Computed properties</h3>
<p>We already covered component arguments and injections as well as basic class properties and the exceptions to normal class-property ways of doing things, in Parts <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-1.html">1</a> and <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">2</a>. With that background out of the way, we can now turn to computed properties. I’m including the component arguments in this code sample because they’re referenced in the computed property. Assume <code>Person</code> is a pretty standard “person” representation, with a <code>firstName</code> and a <code>lastName</code> and maybe a few other properties.</p>
<pre class="typescript"><code> // -- Component arguments -- //
model: Person; // required
modifier?: string; // optional, thus the `?`
// -- Computed properties -- //
// TS correctly infers computed property types when the callback has a
// return type annotation.
fromModel = computed(
'model.firstName',
function(this: AnExample): string {
return `My name is ${get(this.model, 'firstName')};`;
}
);
aComputed = computed('aString', function(this: AnExample): number {
return this.lookAString.length;
});
isLoggedIn = bool('session.user');
savedUser: Computed<Person> = alias('session.user');</code></pre>
<section id="computed-properties-1" class="level4">
<h4><code>computed</code> properties</h4>
<p>When using a computed property in the brave new world of ES6 classes, we normally just assign them as instance properties. As mentioned in <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">the previous post</a>, and in line with my comments above, this has some important tradeoffs around performance. If you need the absolute <em>best</em> performance, you can continue to install them on the prototype by doing this instead:</p>
<pre class="typescript"><code>export default class MyComponent extends Component.extend({
fromModel: computed(
'model.firstName',
function(this: MyComponent): string {
return `My name is ${get(this.model, 'firstName')};`;
}
),
}) {
// other properties
}</code></pre>
<p>Whichever way you do it, TypeScript will correctly infer the type of the computed property in question (here <code>fromModel</code>) as long as you explicitly annotate the return type of the callback passed to <code>computed</code>. Accordingly, in this case, the type of <code>fromModel</code> is <code>ComputedProperty<string></code>. The fact that it’s a <code>ComputedProperty</code> means if you try to treat it as a plain string, without using <code>Ember.get</code> to unwrap it, TypeScript will complain at you.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<pre class="typescript"><code>// type checking error:
this.fromModel.length;
// type checking valid:
this.get('fromModel').length;</code></pre>
<p>The other really important thing to note here is the use of <code>this: AnExample</code>. By doing this, we’re telling TypeScript explicitly that the type of <code>this</code> in this particular function is the class context. We have to do this here, because we don’t have any way to tell the <code>computed</code> helper itself that the function inside it will be bound to the <code>this</code> context of the containing class. Put another way: we don’t have any <em>other</em> way to tell TypeScript that one of the things <code>computed</code> does is bind <code>this</code> appropriately to the function passed into it; but gladly we do have <em>this</em> way—otherwise we’d be out of luck entirely! (You’ll see the same thing below when we look at actions.) The boilerplate is a bit annoying, admittedly—but it at least makes it type-check.</p>
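<p>If the <code>this</code>-parameter mechanics are unfamiliar, here’s a minimal, Ember-free sketch of the same trick in plain TypeScript:</p>

```typescript
// The interface stands in for the containing class.
interface HasAString {
  aString: string;
}

// `this: HasAString` is a compile-time-only annotation: it adds no runtime
// argument, but it tells the checker what the function will be bound to,
// so `this.aString` type-checks inside the body.
function aStringLength(this: HasAString): number {
  return this.aString.length;
}

// Called as a method, `this` is `host`, which satisfies `HasAString`.
const host = { aString: 'this is fine', aStringLength };
const length = host.aStringLength(); // 12
```

<p>Calling <code>aStringLength()</code> bare, with no qualifying object, would be a compile error, since there would be no <code>this</code> satisfying the annotation.</p>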
</section>
<section id="computed-property-macros" class="level4">
<h4>Computed property macros</h4>
<p>Beyond <code>computed</code>, there are a lot of other computed property tools we use all the time. Some of them can (and therefore <em>do</em>) infer the type of the resulting computed property correctly. But there are a bunch of idiomatic things that TypeScript does not and cannot validate – a number of the computed property macros are in this bucket, because they tend to be used for nested keys, and as noted above, TypeScript does not and <em>cannot</em> validate nested keys like that.</p>
<p>We have a representative of each of these scenarios:</p>
<pre class="typescript"><code> isLoggedIn = bool('session.user');
savedUser: Computed<Person> = alias('session.user');</code></pre>
<p>In the case of <code>isLoggedIn</code>, the <code>bool</code> helper only ever returns a boolean, so the type of <code>isLoggedIn</code> is <code>ComputedProperty<boolean></code>. In the case of <code>savedUser</code>, since TypeScript can’t figure out what the nested key means, we have to specify it explicitly, using <code>Computed<Person></code>.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> In these cases, you have to do the work yourself to check that the type you specify is the <em>correct</em> type. If you write down the wrong type here, TypeScript will believe you (it doesn’t have any other good option!) and you’ll be back to things blowing up unexpectedly at runtime.</p>
<p>The typings supply the concrete (non-<code>any</code>) return type for: <code>and</code>, <code>bool</code>, <code>equal</code>, <code>empty</code>, <code>gt</code>, <code>gte</code>, <code>lt</code>, <code>lte</code>, <code>match</code>, <code>map</code>, <code>max</code>, <code>min</code>, <code>notEmpty</code>, <code>none</code>, <code>not</code>, <code>or</code>, and <code>sum</code>.</p>
</section>
<section id="on-nested-keys" class="level4">
<h4>On nested keys</h4>
<p>As noted above, TypeScript cannot do a lookup for any place using nested keys—which means that <code>this.get('some.nested.key')</code> won’t type-check, sadly. This is an inherent limitation of the type system as it stands today, and for any future I can foresee. The problem is this: what exactly <em>is</em> <code>'some.nested.key'</code>? It <em>could</em> be what we use it for in the usual scenario in Ember, of course: a string representing a lookup on a property of a property of a property of whatever <code>this</code> is. But it could equally well be a key named <code>'some.nested.key'</code>. This is perfectly valid JavaScript, after all:</p>
<pre class="javascript"><code>const foo = {
['some.nested.key']: 'Well, this is weird, but it works',
};</code></pre>
<p>TypeScript does not today and presumably <em>never will</em> be able to do that lookup. The workaround is to do one of two things:</p>
<ol type="1">
<li><p>If you <em>know</em> you have a valid parent, you can do the (catastrophically ugly, but functional) nested <code>Ember.get</code> that now litters our codebase:</p>
<pre class="typescript"><code>import { get } from '@ember/object';
const value = get(get(get(anObject, 'some'), 'nested'), 'key');</code></pre>
<p>Yes, it’s a nightmare. But… it type-checks, and it works well <em>enough</em> in the interim until we get a decorators-based solution that lets us leverage <a href="https://github.com/emberjs/rfcs/pull/281">RFC #281</a>.</p></li>
<li><p>Use the <code>// @ts-ignore</code> to simply ignore the type-unsafety of the lookup. This approach is preferable when you don’t know if any of the keys might be missing. If, for example, either <code>some</code> or <code>nested</code> were <code>undefined</code> or <code>null</code>, the lookup example above in (1) would fail.</p>
<pre class="typescript"><code>import { get } from '@ember/object';
// @ts-ignore -- deep lookup with possibly missing parents
const value = get(anObject, 'some.nested.key');</code></pre></li>
</ol>
</section>
</section>
<section id="actions" class="level3">
<h3>Actions</h3>
<p>What about actions? As usual, these just become class instance properties in the current scheme.</p>
<pre class="typescript"><code> actions = {
addToCollection(this: AnExample, value: string) {
const current = this.get('aCollection');
this.set('aCollection', current.concat(value));
}
};</code></pre>
<p>As with computed properties, we need the <code>this</code> type declaration to tell TypeScript that this method is going to be automatically bound to the class instance. Otherwise, TypeScript thinks the <code>this</code> here is the <code>actions</code> hash, rather than the <code>AnExample</code> class.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>Happily, that’s really all there is to it for actions: they’re quite straightforward other than needing the <code>this</code> type specification.</p>
</section>
<section id="types-in-.extend...-blocks" class="level3">
<h3>Types in <code>.extend({...})</code> blocks</h3>
<p>By and large, you can get away with using the same <code>this: MyComponent</code> trick when hacking around prototypal extension problems, or performance problems, by putting computed properties in a <code>.extend({ ... })</code> block. However, you <em>will</em> sometimes see a type error indicating that the class is referenced in its own definition expression. In that case, you may need to judiciously apply <code>any</code>, if you can’t make it work by using normal class properties.</p>
</section>
<section id="constructor-and-class-methods" class="level3">
<h3><code>constructor</code> and class methods</h3>
<p>ES6 class constructors and class methods both work as you’d expect, though as we’ll see you’ll need an extra bit of boilerplate for methods, at least for now.</p>
<pre class="typescript"><code> constructor() {
super();
assert('`model` is required', !isNone(this.model));
this.includeAhoy();
}
includeAhoy(this: AnExample): void {
if (!this.get('aCollection').includes('ahoy')) {
this.set('aCollection', current.concat('ahoy'));
}
}</code></pre>
<p>For the most part, you can just switch to using normal ES6 class constructors instead of the Ember <code>init</code> method. You can, if you so desire, also move existing <code>init</code> functions passed to a <code>.extend({ ... })</code> hash to class methods, and they’ll work once you change <code>this._super(...arguments)</code> to <code>super.init(...arguments)</code>. It’s worth pausing to understand the relationship between the prototypal <code>init</code>, the class’s <code>init</code> method, and the <code>constructor</code>: an <code>init</code> in the <code>.extend({ ... })</code> hash runs first, then an <code>init</code> method on the class, then the normal <code>constructor</code>.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
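<p>You can see the shape of that ordering with a small simulation—this is plain TypeScript standing in for Ember’s actual machinery, which is more involved, so treat it as a sketch of the principle rather than Ember itself:</p>

```typescript
const order: string[] = [];

// Stand-in for Ember's base class, whose construction invokes `init`.
class CoreObject {
  constructor() {
    this.init();
  }
  init() {
    order.push('prototypal init');
  }
}

class MyComponent extends CoreObject {
  init() {
    super.init(); // like `super.init(...arguments)` in a real class
    order.push('class init method');
  }
  constructor() {
    super(); // runs the whole `init` chain before the body below
    order.push('constructor body');
  }
}

new MyComponent();
// order is now: ['prototypal init', 'class init method', 'constructor body']
```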
<p>Note that you do not need to (and cannot) annotate the <code>constructor</code> with <code>this: MyComponent</code>. Depending on the class you’re building, you may <em>occasionally</em> have type-checking problems that come up as a result of this. I’ve only ever seen that happen when using computed properties while defining a proxy,<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> but it does come up. In that case, you can fall back to using <code>init</code> as a method, and set <code>this: MyComponent</code> on <em>it</em>, and things will generally fall out as working correctly at that point. When it comes up, this seems to be just a limitation of what <code>this</code> is understood to be in a <code>constructor</code> given Ember’s rather more-complex-than-normal-classes view of what a given item being constructed is.</p>
<p>Other class methods do also need the <code>this</code> type specified if they touch computed properties. (Normal property access is fine without it.) That’s because the lookups for <code>ComputedProperty</code> instances (using <code>Ember.get</code> or <code>Ember.set</code>) need to know what <code>this</code> is where they should do the lookup, and the full <code>this</code> context isn’t inferred correctly at present. You can either write that on every invocation of <code>get</code> and <code>set</code>, like <code>(this as MyComponent).get(...)</code>, or you can do it once at the start of the method. Again, a bit boiler-platey, but it gets the job done and once you’re used to it it’s minimal hassle.<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a></p>
<p>One last note, which I didn’t include in the example: if you have a function (usually an action) passed into the component, you can define it most simply by just using <code>onSomeAction: Function;</code> in the class definition, right with other class arguments. However, it’s usually most helpful to define what the type should actually <em>be</em>, for your own sanity check if nothing else. As with e.g. <code>model</code> in this example, we don’t actually have a good way to type-check that what is passed is correct. We can, however, at least verify in the constructor that the caller passed in a function using <a href="https://emberjs.com/api/ember/2.18/classes/@ember%2Fdebug/methods/assert?anchor=assert"><code>assert</code></a>, just as with other arguments.</p>
</section>
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>So that’s a wrap on components (and controllers, which behave much the same way).</p>
<p>In the next post, I’ll look at the elephant in the room: Ember Data (and closely related concern Ember CLI Mirage). While you <em>can</em> make Ember Data stuff largely work today, it’s still a ways from <em>Just Works™️</em>, sadly, but we’ll cover how to work around the missing pieces—we’ve gotten there in our own codebase, so you can, too!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>As mentioned in <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">Part 2</a>, this problem doesn’t go away until we get decorators, unless you’re putting them on the prototype via <code>.extend()</code>—but see below for the problems with <em>that</em>. The short version is, we need decorators for this to actually be <em>nice</em>. Once we get decorators, we will be able to combine them with the work done for <a href="https://github.com/emberjs/rfcs/pull/281">RFC #281</a> and normal lookup will just work:</p>
<pre class="typescript"><code>@computed('model.firstName')
get fromModel() {
return `My name is ${this.model.firstName};`;
}</code></pre>
<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></li>
<li id="fn2" role="doc-endnote"><p>I’ve used <code>Computed<Person></code> and similar throughout here because it’s the most clear while still being reasonably concise. The actual type name in Ember’s own code is <code>ComputedProperty</code>, but <code>ComputedProperty<Person></code> is <em>long</em>, and it wouldn’t have added any real clarity here. In my own codebase, we use <code>CP</code> (for “<strong>C</strong>omputed <strong>P</strong>roperty”) for the sake of brevity—so here that would just be <code>CP<Person></code>.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>In the future, this problem will hopefully be solved neatly by decorators:</p>
<pre class="typescript"><code> @action
addToCollection(value: string) {
const current = this.get('aCollection');
this.set('aCollection', current.concat(value));
}</code></pre>
<p>For today, however, specifying a <code>this</code> type is where it’s at.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>You can see this for yourself in <a href="https://ember-twiddle.com/36844717dcc50d734139368edf2e87da">this Ember Twiddle</a>—just open your developer tools and note the sequence.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>Proxies, along with details of mixins, are a subject I’m leaving aside for Part 5, otherwise known as the “wow, this stuff is really weird to type” entry in the series.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>Not <em>no</em> hassle, though, and I look forward to a future where we can drop it, as Ember moves more and more toward modern JavaScript ways of solving these same problems!<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
TypeScript and Ember.js Update, Part 22018-01-24T07:00:00-05:002018-07-10T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-01-24:/2018/typing-your-ember-update-part-2.htmlFor years, you've been using Ember Object and .extend()—but the rules are different with classes.
<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>In the previous post in this series, I introduced the big picture of how the story around TypeScript and Ember.js has improved over the last several months. In this post, I’ll be pausing from TypeScript-specific to take a look at how things work with <em>class properties</em>, since they have some big implications for how we work, which then have ripple effects on computed properties, actions, etc.</p>
<aside>
<p>If you’re interested in all of this and would like to learn more in person, I’m <a href="http://emberconf.com/speakers.html#chris-krycho">leading a workshop on it at EmberConf 2018</a>—I’d love to see you there!</p>
</aside>
<p>Here’s the outline of this update sequence:</p>
<ol type="1">
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-1.html">Overview, normal Ember objects, component arguments, and injections.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html"><strong>Class properties—some notes on how things differ from the <code>Ember.Object</code> world (this post).</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html">Computed properties, actions, mixins, and class methods.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html">Using Ember Data, and service and controller injections improvements.</a></li>
<li>Mixins and proxies; or: the really hard-to-type-check bits.</li>
</ol>
<section id="a-detailed-example-contd.-class-properties" class="level2">
<h2>A detailed example (cont’d.) – class properties</h2>
<aside>
<p><strong>Note:</strong> please see the <a href="https://v4.chriskrycho.com/2018/ember-ts-class-properties.html">update about class properties published mid-2018</a>. The example below and in the following posts is incorrect in several important ways.</p>
</aside>
<p>Let’s start by recalling the example Component we’re working through:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed, get } from '@ember/object';
import Computed, { alias, bool } from '@ember/object/computed';
import { inject as service } from '@ember/service';
import { assert } from '@ember/debug';
import { isNone } from '@ember/utils';
import Session from 'my-app/services/session';
import Person from 'my-app/models/person';
export default class AnExample extends Component {
// -- Component arguments -- //
model: Person; // required
modifier?: string; // optional, thus the `?`
// -- Injections -- //
session: Computed<Session> = service();
// -- Class properties -- //
aString = 'this is fine';
aCollection: string[] = [];
// -- Computed properties -- //
// TS correctly infers computed property types when the callback has a
// return type annotation.
fromModel = computed(
'model.firstName',
function(this: AnExample): string {
return `My name is ${get(this.model, 'firstName')};`;
}
);
aComputed = computed('aString', function(this: AnExample): number {
return this.aString.length;
});
isLoggedIn = bool('session.user');
savedUser: Computed<Person> = alias('session.user');
actions = {
addToCollection(this: AnExample, value: string) {
const current = this.get('aCollection');
this.set('aCollection', current.concat(value));
}
};
constructor() {
super();
assert('`model` is required', !isNone(this.model));
this.includeAhoy();
}
includeAhoy(this: AnExample) {
if (!this.get('aCollection').includes('ahoy')) {
this.set('aCollection', this.get('aCollection').concat('ahoy'));
}
}
}</code></pre>
<p>Throughout, you’ll note that we’re using <em>assignment</em> to create these class properties—a big change from the key/value setup in the old <code>.extend({ ... })</code> model:</p>
<pre class="typescript"><code> // -- Class properties -- //
aString = 'this is fine';
aCollection: string[] = [];</code></pre>
<p>Class properties like this are <em>instance properties</em>. These are compiled to, because they are <em>equivalent to</em>, assigning a property in the constructor. That is, these two ways of writing class property initialization are equivalent—</p>
<p>At the property definition site:</p>
<pre class="typescript"><code>export default class AnExample extends Component {
// snip...
// -- Class properties -- //
aString = 'this is fine';
aCollection: string[] = [];
// snip..
constructor() {
super();
assert('`model` is required', !isNone(this.model));
this.includeAhoy();
}
// snip...
}</code></pre>
<p>In the constructor:</p>
<pre class="typescript"><code>export default class AnExample extends Component {
// snip...
// -- Class properties -- //
aString: string;
aCollection: string[];
constructor() {
super();
this.aString = 'this is fine';
this.aCollection = [];
assert('`model` is required', !isNone(this.model));
this.includeAhoy();
}
// snip...
}</code></pre>
<p>You can see why the first one is preferable: if you don’t need any input to the component to set the value, you can simply set the definition inline where the property is declared.</p>
<p>However, this is <em>quite</em> unlike using <code>.extend</code>, which installs the property on the prototype. Three very important differences from what you’re used to fall out of this, and <em>none of them are specific to TypeScript.</em><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<section id="default-values" class="level3">
<h3>1. Default values</h3>
<p>Since class property setup runs during the constructor, if you want the caller to be able to override it, you <em>must</em> give it an explicit fallback that references the value (if any) already set on the instance. Something like this:</p>
<pre class="typescript"><code>class AnyClass {
aDefaultProp = this.aDefaultProp || 0;
}</code></pre>
<p>Again, translated back into the constructor form:</p>
<pre class="typescript"><code>class AnyClass {
constructor() {
this.aDefaultProp = this.aDefaultProp || 0;
}
}</code></pre>
<p>Here, you can see that if something has <em>already set</em> the <code>aDefaultProp</code> value (before the class constructor is called), we’ll use that value; otherwise, we’ll use the default. You can think of this as being something like default arguments to a function. In our codebase, we have started using <a href="https://lodash.com/docs/4.17.4#defaultTo"><code>_.defaultTo</code></a>, which works quite nicely. In the old world of declaring props with their values in the <code>.extend({ ... })</code> hash, we got this behavior “for free”—but without a lot of other benefits of classes, so not <em>actually</em> for free.</p>
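<p>If you’re curious what <code>_.defaultTo</code> buys you over <code>||</code>, here’s a minimal stand-in (in the app you’d import the real thing from lodash): it only falls back for <code>undefined</code>, <code>null</code>, and <code>NaN</code>, so legitimate falsy values like <code>0</code> or <code>''</code> survive.</p>

```typescript
// Minimal stand-in for lodash's `defaultTo`: fall back only when the
// value is null, undefined, or NaN.
function defaultTo<T>(value: T | null | undefined, fallback: T): T {
  if (value === null || value === undefined) {
    return fallback;
  }
  if (typeof value === 'number' && Number.isNaN(value)) {
    return fallback;
  }
  return value;
}

const a = defaultTo(undefined, 0); // 0  — missing, so use the fallback
const b = defaultTo(0, 42);        // 0  — present but falsy; `0 || 42` would give 42
const c = defaultTo('', 'hi');     // '' — the empty string is kept, too
```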
</section>
<section id="no-more-shared-state" class="level3">
<h3>2. No more shared state</h3>
<p>Because these are instance properties, <em>not</em> assigned on the prototype, you do not have to worry about the problem—<a href="https://dockyard.com/blog/2014/04/17/ember-object-self-troll">well-known among experienced Ember.js developers, but prone to bite people new to the framework</a>—where you assign an array or object in the <code>.extend()</code> method and then find that it’s shared between instances.</p>
<pre class="typescript"><code>export default Component.extend({
  anArray: [], // <- this *will* be shared between instances
});</code></pre>
<p>We’ve long had to handle this by setting up those properties in our <code>init()</code> method instead, so that they are created during object instantiation, rather than on the prototype. This problem goes away entirely with classes, including in TypeScript.</p>
<pre class="typescript"><code>export default class MyComponent extends Component {
  anArray = []; // <- this will *not* be shared between instances
}</code></pre>
<p>(Note that here, we don’t have a type for the array, so it’s of type <code>any[]</code>; we <em>always</em> need type annotations for empty arrays if we want them to be a “narrower,” or more specific, type than that.)</p>
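<p>You can demonstrate the difference in a few lines of framework-free TypeScript; the direct prototype assignment below simulates what <code>.extend()</code> does:</p>
<pre class="typescript"><code>// Minimal sketch (plain TypeScript, no Ember) of the failure mode: a value
// assigned on the prototype is shared by every instance, while a class
// property is created fresh for each instance.
class SharedState {}
// Simulates `.extend({ anArray: [] })`, which puts the array on the prototype:
(SharedState.prototype as any).anArray = [];

const one = new SharedState() as any;
const two = new SharedState() as any;
one.anArray.push('leaks everywhere');
console.log(two.anArray.length); // 1 (the very same array!)

class PerInstanceState {
  anArray: string[] = []; // compiled to a per-instance assignment
}

const three = new PerInstanceState();
const four = new PerInstanceState();
three.anArray.push('stays put');
console.log(four.anArray.length); // 0 (independent arrays)</code></pre>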
</section>
<section id="performance-changes" class="level3">
<h3>3. Performance changes</h3>
<p>The flip-side of this is that the only way we currently have to create computed properties (until decorators stabilize) is <em>also</em> as instance properties, not prototype properties. I’ll look at computed properties (and their types) in more detail in the next post; for now, just note how the computed property is set up on the class: by assignment, <em>not</em> as a prototypal property.</p>
<pre class="typescript"><code>export default class MyComponent extends Component {
  aString = 'Hello, there!';

  itsLength = computed('aString', function(this: MyComponent): number {
    return this.aString.length;
  });
}</code></pre>
<p>This <em>does</em> have a performance cost, which will be negligible in the ordinary case but pretty nasty if you’re rendering hundreds to thousands of these items onto the page. You can use this workaround for these as well as for any other properties which need to be prototypal (more on <em>that</em> in the next post as well):<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<pre class="typescript"><code>export default class MyComponent extends Component.extend({
  itsLength: computed('aString', function(this: MyComponent): number {
    return this.aString.length;
  }),
}) {
  aString = 'Hello, there!';
}</code></pre>
<p>This <em>looks</em> really weird, but it works exactly as you’d expect.</p>
</section>
</section>
<section id="class-property-variants" class="level2">
<h2>Class property variants</h2>
<p>There are two times when things will look different from basic class properties. Both have to do with setting up the prototype to work the way other parts of the Ember object ecosystem expect.</p>
<section id="variant-1-prototypalmerged-properties" class="level3">
<h3>Variant 1: Prototypal/merged properties</h3>
<p>The first is when you’re using properties that need to be merged with properties in the prototype chain, e.g. <code>attributeBindings</code> or <code>classNameBindings</code>, or which (because of details of how components are constructed) have to be set on the prototype rather than as instance properties, e.g. <code>tagName</code>.</p>
<p>For those, we can just leverage <code>.extend</code> in conjunction with classes:</p>
<pre class="typescript"><code>import Component from '@ember/component';

export default class MyListItem extends Component.extend({
  tagName: 'li',
  classNameBindings: ['itemClass']
}) {
  itemClass = 'this-be-a-list';
  // etc.
}</code></pre>
<p>This is also how you’ll <em>use</em> mixins (on defining them, see below):</p>
<pre class="typescript"><code>import Component from '@ember/component';
import MyMixin from 'my-app/mixins/my-mixin';

export default class AnExample extends Component.extend(MyMixin) {
  // the rest of the definition.
}</code></pre>
<p>Note, however—and this is very important—that you cannot <code>.extend</code> an existing <code>class</code> implementation. As a result, deep inheritance hierarchies <em>may</em> make transitioning to classes in Ember painful. Most importantly: they may work <em>some</em> of the time in <em>some</em> ways, but will break when you least expect. So don’t do that! (This isn’t a TypeScript limitation; it’s a limitation of classes in Ember today.)<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
</section>
<section id="variant-2-mixins" class="level3">
<h3>Variant 2: Mixins</h3>
<p>The other time you’ll have to take a different tack—and this falls directly out of the need for prototypal merging—is with <code>Mixin</code>s, which don’t yet work properly with classes. Worse, it’s difficult (if not impossible) to get rigorous type-checking internally in <code>Mixin</code> definitions, because you cannot define them as classes: you <em>have</em> to use the old style throughout, because mixins are created with <code>.create()</code>, not <code>.extend()</code>.</p>
<aside>
<p>Note that if you’re writing <em>new</em> code in Ember.js—using TypeScript or not—I strongly encourage you to simply avoid using mixins at all. Instead, use services (or, occasionally, inheritance). This will require you to change how you write some of your code, but in my experience that change will make your codebase much easier to understand, and therefore much easier to maintain.</p>
</aside>
<p>I’ll have a lot more to say about these in part 5 of this series, including a detailed example of how to carefully type-annotate one and use it in another class. For now, suffice it to say that you’ll still need to incorporate <code>Mixin</code>s via <code>.extend()</code>:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import MyMixin from 'my-app/mixins/my-mixin';

export default class SomeNewComponent extends Component.extend(MyMixin) {
  // normal class properties
}</code></pre>
</section>
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>Those are the <em>biggest</em> differences from <code>Ember.Object</code> that you need to be aware of when working with class properties in Ember.js today, at least in my experience working with them day to day. These are not the only differences with <em>classes</em>, though, especially when dealing with TypeScript, so in my next entry we’ll take a look at how classes work (and work well!) with most things in Ember.js and TypeScript together.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>You can use this same feature on classes using Babel, with the <a href="https://babeljs.io/docs/plugins/transform-class-properties/">class properties transform</a>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Even when <a href="https://github.com/emberjs/rfcs/pull/281">Ember.js RFC #281</a> lands, this problem will not go away, at least under the current implementation, since <a href="https://github.com/emberjs/rfcs/pull/281#issuecomment-360023258"><em>these</em> will <em>not</em> be transformed into getters on the prototype</a>. We are waiting for decorators to solve this problem completely.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>In the future, we’ll (hopefully and presumably 🤞🏼) have an escape hatch for those merged or prototypally-set properties via decorators. That’ll look something like this:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { action } from 'ember-decorators/object';
import { className, tagName } from 'ember-decorators/component';

@tagName("li")
export default class MyListItem extends Component {
  @className itemClass = 'this-be-a-list';

  @action
  sendAMessage(contents: string): void {
  }

  // etc.
}</code></pre>
<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></li>
</ol>
</section>
TypeScript and Ember.js Update, Part 12018-01-22T07:10:00-05:002018-07-10T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2018-01-22:/2018/typing-your-ember-update-part-1.htmlA bunch has changed for the better in the TypeScript/Ember.js story over the last six months. Here’s an overview of the changes and a look at normal Ember objects, "arguments" to components (and controllers), and service (or controller) injections.<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>Back in July 2017, I wrote <a href="https://v4.chriskrycho.com/2017/typing-your-ember-part-3.html">a post</a> on how to use TypeScript in your Ember.js apps. At the time, we were still busy working on getting the typings more solid for Ember itself, and <code>class</code> syntax for Ember was apparently a long way away.</p>
<p>Things have gotten quite a bit better since then, so I thought I’d update that post with recommendations for using TypeScript in an app <em>now</em> with the updated typings, as well as with another six months of experience using TypeScript in our app at Olo (~20k lines of code in the app and another ~15k in tests).</p>
<aside>
<p>If you’re interested in all of this and would like to learn more in person, I’m <a href="http://emberconf.com/speakers.html#chris-krycho">leading a workshop on it at EmberConf 2018</a>—I’d love to see you there!</p>
</aside>
<p>Here’s how I expect this update series to go:</p>
<ol type="1">
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-1.html"><strong>Overview, normal Ember objects, component arguments, and injections (this post).</strong></a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">Class properties—some notes on how things differ from the <code>Ember.Object</code> world.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-3.html">Computed properties, actions, mixins, and class methods.</a></li>
<li><a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html">Using Ember Data, and service and controller injections improvements.</a></li>
<li>Mixins and proxies; or: the really hard-to-type-check bits.</li>
</ol>
<section id="normal-ember-objects" class="level2">
<h2>Normal Ember objects</h2>
<p>For normal Ember objects, things now <em>mostly</em> just work if you’re using class-based syntax, with a single (though very important) qualification I’ll get to in a minute. And you can use the class-based syntax <em>today</em> in Ember.js—all the way back to 1.13, as it turns out. If you want to learn more, you can read <a href="https://github.com/emberjs/rfcs/blob/master/text/0240-es-classes.md">this RFC</a> or <a href="https://medium.com/build-addepar/es-classes-in-ember-js-63e948e9d78e">this blog post</a>, both by <a href="https://github.com/pzuraq">@pzuraq (Chris Garrett)</a>, who did most of the legwork to research this and flesh out the constraints, and who has also been doing a lot of work on <a href="https://ember-decorators.github.io/ember-decorators/docs/index.html">Ember Decorators</a>.</p>
<p>Accordingly, I’m assuming the use of ES6 <code>class</code> syntax throughout. The big reason for this is that things mostly just <em>don’t work</em> without it. And we’ll see (in a later post) some hacks to deal with places where parts of Ember’s ecosystem don’t yet support classes properly. In general, however, if you see an error like <code>"Cannot use 'new' with an expression whose type lacks a call or construct signature."</code>, the reason is almost certainly that you’ve done <code>export default Component.extend({...})</code> rather than creating a class.</p>
</section>
<section id="a-detailed-example" class="level2">
<h2>A detailed example</h2>
<p>That means that every new bit of code I write today in our app looks roughly like this, with only the obvious modifications for services, routes, and controllers—I picked components because they’re far and away the most common things in our applications.</p>
<p>In order to explain all this clearly, I’m going to start by showing a whole component written in the new style. Then, over the rest of this post and the next post, I’ll zoom in on and explain specific parts of it.</p>
<aside>
<p><strong>Note:</strong> please see the <a href="https://v4.chriskrycho.com/2018/ember-ts-class-properties.html">update about class properties published mid-2018</a>. The example below and in the following posts is incorrect in several important ways.</p>
</aside>
<pre class="typescript"><code>import Component from "@ember/component";
import { computed, get } from "@ember/object";
import Computed, { alias, bool } from "@ember/object/computed";
import { inject as service } from "@ember/service";
import { assert } from "@ember/debug";
import { isNone } from "@ember/utils";

import Session from "my-app/services/session";
import Person from "my-app/models/person";

export default class AnExample extends Component {
  // -- Component arguments -- //
  model: Person; // required
  modifier?: string; // optional, thus the `?`

  // -- Injections -- //
  session: Computed<Session> = service();

  // -- Class properties -- //
  aString = "this is fine";
  aCollection: string[] = [];

  // -- Computed properties -- //
  // TS correctly infers computed property types when the callback has a
  // return type annotation.
  fromModel = computed("model.firstName", function(this: AnExample): string {
    return `My name is ${get(this.model, "firstName")}.`;
  });

  aComputed = computed("aString", function(this: AnExample): number {
    return this.aString.length;
  });

  isLoggedIn = bool("session.user");
  savedUser: Computed<Person> = alias("session.user");

  actions = {
    addToCollection(this: AnExample, value: string) {
      const current = this.get("aCollection");
      this.set("aCollection", current.concat(value));
    }
  };

  constructor() {
    super();
    assert("`model` is required", !isNone(this.model));
    this.includeAhoy();
  }

  includeAhoy(this: AnExample) {
    const current = this.get("aCollection");
    if (!current.includes("ahoy")) {
      this.set("aCollection", current.concat("ahoy"));
    }
  }
}</code></pre>
<section id="component-arguments" class="level3">
<h3>Component arguments</h3>
<pre class="typescript"><code>export default class AnExample extends Component {
  // Component arguments
  model: Person; // required
  modifier?: string; // optional, thus the `?`</code></pre>
<p>I always put these first so that the “interface” of the object is clear and obvious. You can do the same thing on a controller instance; in that case you would export a <code>Model</code> from the corresponding <code>Route</code> class and import it into the <code>Controller</code>. It’s a bit of boilerplate, to be sure, but it lets you communicate your interface clearly to consumers of the <code>Component</code> or <code>Controller</code>.</p>
<p>An important note about these kinds of arguments: you do <em>not</em> have to do <code>this.get(...)</code> (or, if you prefer, <code>get(this, ...)</code>) to access the properties themselves: they’re class instance properties. You can simply access them as normal properties: <code>this.model</code>, <code>this.modifier</code>, etc. That even goes for referencing them as computed properties, as we’ll see below.</p>
<p>For optional arguments, you use the <code>?</code> operator to indicate they may be <code>undefined</code>. To get the <em>most</em> mileage out of this, you’ll want to enable <code>strictNullChecks</code> in the compiler options.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> However, note that we don’t currently have any way to validate component arguments at the template invocation site. The way I’ve been working around this is using Ember’s debug <a href="https://emberjs.com/api/ember/2.18/classes/@ember%2Fdebug/methods/assert?anchor=assert"><code>assert</code></a> in the constructor:</p>
<pre class="typescript"><code>assert("`model` is required", !isNone(this.model));</code></pre>
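<p>To make the guard concrete, here is a framework-free sketch: the <code>assert</code> function is a hand-rolled approximation of Ember’s debug <code>assert</code>, and <code>construct</code> is a hypothetical stand-in for a component constructor receiving its arguments:</p>
<pre class="typescript"><code>// A hand-rolled approximation of Ember's debug `assert`: throw if the
// condition is falsy, so a missing required argument fails loudly in tests.
function assert(message: string, condition: unknown): void {
  if (!condition) {
    throw new Error(`Assertion Failed: ${message}`);
  }
}

interface Person {
  firstName: string;
}

// Hypothetical stand-in for a component constructor receiving its arguments.
function construct(model?: Person): Person {
  assert('`model` is required', model !== undefined);
  return model!; // we just asserted it is present
}

console.log(construct({ firstName: 'Chris' }).firstName); // "Chris"
// `construct()` with no argument throws: Assertion Failed: `model` is required</code></pre>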
<p>You can also wrap an optional argument in a <code>Maybe</code> type:</p>
<pre class="typescript"><code>import Component from "@ember/component";
import { Maybe } from "true-myth";

export default class MyComponent extends Component {
  optionalArg?: string;
  optionalProperty = Maybe.of(this.optionalArg);
}</code></pre>
<p>Then if you invoke the property without the argument, it’ll construct a <code>Nothing</code>; if you invoke it with the argument, it’ll be <code>Just</code> with the value.</p>
<p>(As for validating component arguments at the invocation site: a few of us have batted around some ideas for how to solve that particular problem, but <em>if</em> we manage those, it’ll probably be way, way later in 2018.)</p>
<p><strong>Edit, January 24, 2018:</strong> Starting in TypeScript 2.7, you can enable a flag, <code>--strictPropertyInitialization</code>, which requires that all declared, non-optional properties on a class be initialized in the constructor or with a class property assignment. (There’s more on class property assignment in <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-2.html">part 2</a> of this series.) If you do that, all <em>arguments</em> to a component should be defined with the <em>definite assignment assertion modifier</em>, a <code>!</code> after the name of the property, as on <code>model</code> here:</p>
<pre class="typescript"><code>export default class AnExample extends Component {
  // Component arguments
  model!: Person; // required
  modifier?: string; // optional, thus the `?`</code></pre>
<p>You should still combine that with use of <a href="https://emberjs.com/api/ember/2.18/classes/@ember%2Fdebug/methods/assert?anchor=assert"><code>assert</code></a> so that any misses in template invocation will get caught in your tests.</p>
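<p>Here is a small, self-contained illustration of the definite assignment assertion modifier; the <code>Person</code> shape is invented for the example, and the post-construction assignment simulates the framework passing the argument in:</p>
<pre class="typescript"><code>interface Person {
  firstName: string;
}

class AnExample {
  // `!` tells the compiler this will definitely be assigned (by the
  // framework) even though the constructor doesn't set it.
  model!: Person;
  // `?` marks a genuinely optional argument instead.
  modifier?: string;
}

const example = new AnExample();
// Simulates the framework setting the argument from the template invocation:
example.model = { firstName: 'Chris' };
console.log(example.model.firstName); // "Chris"</code></pre>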
</section>
<section id="injections" class="level3">
<h3>Injections</h3>
<pre class="typescript"><code>  // -- Injections -- //
  session: Computed<Session> = service();</code></pre>
<p>Here, the most important thing to note is the required type annotation. In principle, we could work around this by requiring you to explicitly name the service and using a “type registry” to look up what the service type is – more on that below in my discussion of using Ember Data – but I’m not yet persuaded that’s better than just writing the appropriate type annotation. Either way, there’s some duplication. 🤔 We (everyone working in the <a href="https://github.com/typed-ember">typed-ember</a> project) would welcome feedback here, because the one thing we <em>can’t</em> do is get the proper type <em>without</em> one or the other of these.</p>
<p><strong>Edit, February 5, 2018:</strong> see <a href="https://v4.chriskrycho.com/2018/typing-your-ember-update-part-4.html">Part 4</a> for some updates to this—I actually went ahead and built and implemented that approach, and everything is much nicer now.</p>
<pre class="typescript"><code>  // the current approach -- requires importing `Session` so you can define it
  // on the property here
  session: Computed<Session> = service();

  // the alternative approach I've considered -- requires writing boilerplate
  // elsewhere, similar to what you'll see below in the Ember Data section
  session = service('session');</code></pre>
<p>One other thing to notice here is that because TypeScript is a <em>structural</em> type system, it doesn’t matter if what is injected is the actual <code>Session</code> service; it just needs to be something that <em>matches the shape</em> of the service – so your normal behavior around dependency injection, etc. is all still as expected.</p>
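<p>A quick, framework-free sketch of that structural behavior; the <code>SessionLike</code> shape and the <code>fakeSession</code> object are invented for illustration:</p>
<pre class="typescript"><code>// Structural typing: any object matching the shape satisfies the type.
// No nominal relationship (class, `implements` clause) is required.
interface SessionLike {
  user: string | null;
  isAuthenticated(): boolean;
}

// Never declared as a SessionLike, but it matches the shape:
const fakeSession = {
  user: 'test-user',
  isAuthenticated() { return this.user !== null; },
};

function greet(session: SessionLike): string {
  return session.isAuthenticated() ? `Hello, ${session.user}!` : 'Hello, guest!';
}

console.log(greet(fakeSession)); // "Hello, test-user!"</code></pre>
<p>Nothing about <code>fakeSession</code> declares a relationship to <code>SessionLike</code>; matching the shape is enough, which is exactly why test doubles and alternate injections still type-check.</p>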
<p>That’s enough for one post, I think. In the next entry, we’ll pick up with how you handle class properties, including computed properties, and then talk about mixins as well. In the post after that, we’ll look at Ember Data and some related concerns.</p>
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>This isn’t my preferred way of handling optional types; <a href="https://true-myth.js.org">a <code>Maybe</code> type</a> is. And you can, if you like, use <code>Maybe</code> here:<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Exploring 4 Languages: Starting to Model the Domain2018-01-14T09:00:00-05:002018-01-14T09:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2018-01-14:/2018/exploring-4-languages-starting-to-model-the-domain.htmlUsing the type systems of Rust, Elm, F♯, and ReasonML to encode the elements of a domain model—and starting to get some idea how the languages are like and unlike each other.<p>In the first three chapters of <em>Domain Modeling Made Functional</em>, Wlaschin walks through the creation of a “domain model” for an order-taking system. (It’s well worth reading the book just for a bunch of the lessons in that section—I found them quite helpful!) Then, after spending a chapter introducing F<sup>♯</sup>’s type system, he introduces the ways you can <em>use</em> those type mechanics to express the domain. In today’s post, I’ll show the idiomatic implementations of these types in each of Rust, Elm, F<sup>♯</sup>, and ReasonML.</p>
<section id="simple-values" class="level2">
<h2>Simple values</h2>
<p>Simple wrapper types let you take simple types like strings, numbers, etc. and use types to represent part of the business domain you’re dealing with—the basic idea being that a Customer ID may be a number, but it’s not interchangeable with <em>other</em> numbers such as Order IDs.</p>
<p>Here’s the most ergonomic and effective (and automatically-formatted in line with the language standards, where applicable!) way to do that in each of the languages:</p>
<p>Rust:</p>
<pre class="rust"><code>struct CustomerId(i32);</code></pre>
<p>Elm:</p>
<pre class="elm"><code>type CustomerId
    = CustomerId Int</code></pre>
<p>F<sup>♯</sup>:</p>
<pre class="fsharp"><code>type CustomerId = CustomerId of int</code></pre>
<p>ReasonML:</p>
<pre class="reason"><code>type customerId =
  | CustomerId(int);</code></pre>
<p>Note how similar these all are! The Rust implementation is the <em>most</em> distinctive, though you can do it with the same kind of union type as the others. Here’s how that would look:</p>
<pre class="rust"><code>enum CustomerId {
    CustomerId(i32),
}</code></pre>
<p>For performance reasons, you might also choose to implement the F<sup>♯</sup> type as a struct:</p>
<pre class="fsharp"><code>[<Struct>]
type CustomerId = CustomerId of int</code></pre>
</section>
<section id="complex-data" class="level2">
<h2>Complex data</h2>
<p>Wlaschin then moves on to showing how to model more complex data structures: types that “and” or “or” together other data. We “and” data together using record or struct types, and “or” data together using “union” or “enum” types. (Assume we’ve defined <code>CustomerInfo</code>, <code>ShippingAddress</code>, etc. types for all of these.)</p>
<p>Rust:</p>
<pre class="rust"><code>// "and"
struct Order {
    customer_info: CustomerInfo,
    shipping_address: ShippingAddress,
    billing_address: BillingAddress,
    order_lines: Vec<OrderLine>,
    billing_amount: BillingAmount,
}

// "or"
enum ProductCode {
    Widget(WidgetCode),
    Gizmo(GizmoCode),
}</code></pre>
<p>Elm:</p>
<pre class="elm"><code>-- "and"
type alias Order =
    { customerInfo : CustomerInfo
    , shippingAddress : ShippingAddress
    , billingAddress : BillingAddress
    , orderLines : List OrderLine
    , billingAmount : BillingAmount
    }

-- "or"
type ProductCode
    = Widget WidgetCode
    | Gizmo GizmoCode</code></pre>
<p>F<sup>♯</sup>:</p>
<pre class="fsharp"><code>// "and"
type Order = {
    CustomerInfo : CustomerInfo
    ShippingAddress : ShippingAddress
    BillingAddress : BillingAddress
    OrderLines : OrderLine list
    AmountToBill : BillingAmount
}

// "or"
type ProductCode =
    | Widget of WidgetCode
    | Gizmo of GizmoCode</code></pre>
<p>ReasonML—note that since we’re assuming we’ve already defined the other types here, you can write this without duplicating the name and type declaration, just like you can with JavaScript object properties.</p>
<pre class="reason"><code>/* "and" */
type order = {
  customerInfo,
  shippingAddress,
  billingAddress,
  orderLine,
  billingAmount
};

/* "or" */
type productCode =
  | Widget(widgetCode)
  | Gizmo(gizmoCode);</code></pre>
<p>An interesting aside: in Rust in particular, unless you planned to reuse these types, you wouldn’t usually write a standalone type containing this many wrapper types (even if the compiler would often recognize that it could squash them down for you).<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> Instead, you’d normally write <em>only</em> the base enum type to start, and refactor out the <code>struct</code> wrapper later only if you found you needed it elsewhere:</p>
<pre class="diff"><code> enum ProductCode {
-    Widget(WidgetCode),
+    Widget(String),
-    Gizmo(GizmoCode),
+    Gizmo(String),
 }</code></pre>
<p>That said: given how the book is tackling things, and the fact that you might want to <em>validate</em> these types… having them as these low-cost wrappers is probably worth it. (In fact, having read a bit further than I’ve managed to write out yet, I can guarantee it.)</p>
<p>We work through the rest of the basic types this way. But what about the types where we don’t yet have a good idea how we want to handle them?</p>
<p>Each of these languages gives us an out (or more than one) for how to say “I don’t know what to put here yet.”</p>
<p>Rust (which does not have a built-in <code>Never</code> type… yet; see below):</p>
<pre class="rust"><code>// Make an empty enum (which you by definition cannot construct)
enum Never {}
// Use it throughout where we don't know the type yet. It will fail to compile
// anywhere we try to *use* this, because you can't construct it.
type OrderId = Never;</code></pre>
<p>Elm (which has a built-in <code>Never</code> type):</p>
<pre class="elm"><code>-- It will fail to compile anywhere we try to *use* this, because you cannot
-- construct `Never`.
type alias OrderId =
    Never</code></pre>
<p>F<sup>♯</sup> (which <em>sort</em> of does):</p>
<pre class="fsharp"><code>// Make a convenience type for the `exn`/`System.Exception` type
type Undefined = exn
type OrderId = Undefined</code></pre>
<p>Reason (which also <em>sort</em> of does—identically with F<sup>♯</sup>):</p>
<pre class="reason"><code>/* Make a convenience type for the `exn`/`System.Exception` type */
type undefined = exn;
/*
Use it throughout where we don't know the type yet. It will compile, but fail
to run anywhere we try to *use* this.
*/
type orderId = undefined;</code></pre>
<p>For both F<sup>♯</sup> and Reason, that’s following Wlaschin’s example. The main reason to do that is to make explicit that we’re not actually wanting an <em>exception</em> type in our domain model, but just something we haven’t <em>yet</em> defined. Anywhere we attempted to use it, we’d have to handle it like, well… an exception, instead of an actual type.</p>
<p>(And once Rust’s built-in never type, written <code>!</code>, stabilizes, the Rust version can simply be:)</p>
<pre class="rust"><code>type OrderId = !;</code></pre>
</section>
<section id="workflows-and-functions" class="level2">
<h2>Workflows and functions</h2>
<p>Once we have the basic types themselves in place, we need to write down the ways we transform between them. In a functional style, we’re not going to implement instance methods—though as we’ll see in the next post, what we do in Rust will have <em>some</em> similarities to class methods—we’re going to implement standalone functions which take types and return other types.</p>
<p>Again, you’ll note that despite the common lineage, there is a fair amount of variation here. (Note that we’d also have defined the <code>UnvalidatedOrder</code>, <code>ValidationError</code>, and <code>ValidatedOrder</code> types for all of this; I’m mostly interested in showing <em>new</em> differences here.)</p>
<p>Rust (using the <a href="https://github.com/alexcrichton/futures-rs">Futures</a> library to represent eventual computation):</p>
<pre class="rust"><code>type ValidationResponse<T> = Future<Item = T, Error = ValidationError>;

fn validate_order(unvalidated: UnvalidatedOrder) -> Box<ValidationResponse<ValidatedOrder>> {
    unimplemented!()
}</code></pre>
<p>Elm (using the built-in <code>Task</code> type for eventual computation; <code>Task</code>s encapsulate both eventuality and the possibility of failure):</p>
<pre class="elm"><code>type ValidationResponse a
    = Task (List ValidationError) a

type alias ValidateOrder =
    UnvalidatedOrder -> ValidationResponse ValidatedOrder</code></pre>
<p>F<sup>♯</sup> (using the built-in <code>Async</code> type for eventual computation):</p>
<pre class="fsharp"><code>type ValidationResponse<'a> = Async<Result<'a, ValidationError list>>

type ValidateOrder =
    UnvalidatedOrder -> ValidationResponse<ValidatedOrder></code></pre>
<p>Reason (using the built-in JavaScript-specific <code>Js.Promise</code> type—which is exactly what it sounds like—for eventual computation):</p>
<pre class="reason"><code>type validationResponse('a) =
  Js.Promise.t(Js.Result.t('a, list(validationError)));

type validateOrder = unvalidatedOrder => validationResponse(validatedOrder);</code></pre>
<p>Once again Rust is much <em>more</em> different here from the others than they are from each other. The biggest difference between Elm, F<sup>♯</sup>, and Reason is how they handle generics and type parameters.</p>
<p>You’ll note that in Elm, they just follow the name of the wrapping type. This is a kind of syntactic symmetry: the way you <em>name</em> a generic type like this is the same basic way you <em>construct</em> it. It’s quite elegant. And as it turns out, the same is true of Reason; it’s just that its authors have chosen to follow OCaml and use parentheses for them instead of following Haskell with spaces—a reasonable choice, given Reason is surface syntax for OCaml and not Haskell.</p>
<p>F<sup>♯</sup> uses angle brackets, I strongly suspect, because that’s what C<sup>#</sup> uses for generics, and keeping them syntactically aligned in things like this is very helpful. Rust similarly uses angle brackets for similarity with other languages which have similar surface syntax—especially C++ (with its templates).</p>
<p>The way you <em>name</em> generic parameters differs between the languages as well. Elm, following Haskell, uses lowercase letters to name its generics (usually called <em>type parameters</em> in Elm). F<sup>#</sup> and Reason both (unsurprisingly) follow OCaml in using lowercase letters preceded by an apostrophe to name generics—in F<sup>#</sup>, <code>TypeGenericOver<'a></code>; in Reason, <code>typeGenericOver('a)</code>. Rust follows the convention from languages like C++, Java, and C<sup>#</sup> and uses capital letters, <code>TypeGenericOver<T></code>. The use of specific letters is conventional, not mandated by the language (unlike the casing). The ML family usually starts with <code>a</code> and moves through the alphabet; Rust and the languages it follows usually start with <code>T</code> (for <em>type</em>) and move forward through the alphabet. (Sometimes you’ll also see different letters where it’s obviously a better fit for what’s contained.)</p>
<p>These languages also vary in the syntax for constructing a <em>list</em> of things. F<sup>#</sup> has convenience syntax for a few built-ins (the most common being the <code>List</code> and <code>Option</code> types), allowing you to write them <em>either</em> as e.g. <code>List<ConcreteType></code> or <code>ConcreteType list</code> (as here in the example). Elm, Reason, and Rust all just use the standard syntax for generic types—<code>List a</code>, <code>list('a)</code>, and <code>Vec<T></code> respectively.</p>
<p>Finally, you’ll also note that we haven’t written out a <em>type</em> declaration here for Rust; we’ve actually written out a stub of a function, with the <a href="https://doc.rust-lang.org/std/macro.unimplemented.html"><code>unimplemented!()</code></a> <a href="https://doc.rust-lang.org/1.17.0/reference/macros-by-example.html">macro</a>. If you invoke this function, you’ll get a clear crash with an explanation of which function isn’t implemented.</p>
<p>Now, Rust also <em>does</em> let us write out the type of these functions as type aliases if we want:</p>
<pre class="rust"><code>type ValidateOrder =
    dyn Fn(UnvalidatedOrder) -> Box<ValidationResponse<ValidatedOrder>>;</code></pre>
<p>You just don’t use these very often in idiomatic Rust; it’s much more conventional to simply write out what I did above. However, the one time you <em>might</em> use a type alias like this is when you’re defining the type of a closure and you don’t want to write it inline. This is a pretty sharp difference between Rust and the other languages on display here, and it goes to the difference in their approaches.</p>
<p>Rust is <em>not</em> a functional-first language in the way that each of the others is, though it certainly draws heavily on ideas from functional programming throughout and makes quite a few affordances for a functional style. Instead, it’s a programming language first and foremost interested in combining the most screaming performance possible with true safety, leaning on ideas from the ML family (among others!) as part of achieving that.</p>
<p>Among other things, this is why you don’t have currying or partial application in Rust: those essentially <em>require</em> invisible heap-allocation to be ergonomic. We <em>don’t</em> have that in Rust, as we do in Elm, Reason, and F<sup>♯</sup>. If we construct a function inside another function and want to pass it around, we have to explicitly wrap it in a pointer. (I won’t go into more of the details of this here; I’ve covered it some <a href="http://www.newrustacean.com/show_notes/e004/index.html">on New Rustacean</a> and some <a href="http://v4.chriskrycho.com/2015/rust-and-swift-viii.html">in my Rust and Swift comparison</a> a couple years ago.)</p>
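<p>A minimal sketch of what that looks like in practice (the <code>make_adder</code> name is mine, not from the book): something like partial application is still available, but the heap allocation is spelled out with <code>Box</code>.</p>

```rust
// Emulating partial application by explicitly boxing a closure. Elm,
// Reason, and F# would heap-allocate invisibly; Rust makes it explicit.
pub fn make_adder(x: i32) -> Box<dyn Fn(i32) -> i32> {
    // `move` transfers ownership of `x` into the closure; `Box::new`
    // heap-allocates the closure so it can be returned and passed around.
    Box::new(move |y| x + y)
}
```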
<p>That same underlying focus on performance and explicitness is the reason we have <code>Box<ValidationResponse<ValidatedOrder>></code> in the Rust case: we’re explicitly returning a <em>pointer</em> to the type here. In Elm, F<sup>♯</sup>, and Reason, that’s <em>always</em> the case. But in Rust, you can and often do return stack-allocated data and rely on “move” semantics to copy or alias it properly under the hood.</p>
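<p>For contrast, here’s a quick sketch of that stack-and-move case (the <code>OrderId</code> type is invented for illustration): the value is returned directly, with no pointer in sight.</p>

```rust
// Returning stack-allocated data by value: no `Box`, no explicit pointer.
// The struct is simply moved (or copied) to the caller.
#[derive(Debug, PartialEq)]
pub struct OrderId(pub u32);

pub fn next_order_id(current: &OrderId) -> OrderId {
    OrderId(current.0 + 1)
}
```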
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>So: lots of similarities here at first blush. The biggest differences that show up at this point are purely syntactical, other than some mildly sharper differences with Rust because of its focus on performance. The fact that these languages share a common lineage means it’s not hard to read any of them if you’re familiar with the others, and it’s actually quite easy to switch between them at the levels of both syntax and semantics.</p>
<p>As usual, when dealing with languages in a relatively similar family, it’s <em>most</em> difficult to learn the <em>library</em> differences. The most obvious example of that here is Reason’s <code>Js.Promise</code>, Elm’s <code>Task</code>, F<sup>♯</sup>’s <code>Async</code>, and Rust’s <code>Future</code> types: each of those has its own quirks, its own associated helper functions or methods, and its own ways of handling the same basic patterns.</p>
<p>Still, if you have played with any one of these, you could pretty easily pick up one of the others. It’s sort of like switching between Python and Ruby: there are some real differences there, but the similarities are greater than the differences. Indeed, if anything, these languages are <em>more</em> similar than those.</p>
<p>Next time I’ll dig into Wlaschin’s chapter on <em>validating</em> the domain model, and there some of the deeper-than-syntax differences between the languages will start to become apparent.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I can’t speak to what’s idiomatic this way in any of the non-Rust languages, because I just haven’t used them enough yet.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Exploring 4 Languages: Project Setup2018-01-01T13:00:00-05:002018-01-01T13:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2018-01-01:/2018/exploring-4-languages-project-setup.htmlGetting Rust, Elm, F♯, and ReasonML installed; their editor plugins configured; and the project ready for implementing the exercises in Scott Wlaschin’s Domain Modeling Made Functional.<p>In this post, I’m just going to briefly talk through the steps I needed to do to set up each of the languages and my editor setup for them. Happily, it was pretty simple. At the end, I’ll offer a note on my thoughts on the setup processes. (Note that this isn’t “How to do this for anyone ever”—it’s “how I did it, with some notes where it might be relevant to you.”)</p>
<p>For context, I’m running macOS and using <a href="https://code.visualstudio.com">VS Code</a> as my editor. Whenever I say “Install the VS Code extension,” you can do it either by opening the extension side panel and searching for <code><Extension Name></code>, or by typing <code>ext install <extension label></code>—I’ll write it like <code><Extension Name></code>/<code><extension label></code>.</p>
<p>The source code as of what I’m describing in this post is <a href="https://github.com/chriskrycho/dmmf/tree/project-setup">at the <code>project-setup</code> tag</a> in <a href="https://github.com/chriskrycho/dmmf/">the repo</a>.</p>
<section id="rust" class="level2">
<h2>Rust</h2>
<ul>
<li><strong>Language installation:</strong> Install <a href="https://rustup.rs"><em>rustup</em></a>: <code>curl https://sh.rustup.rs -sSf | sh</code><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a>.</li>
<li><strong>Editor setup:</strong> Installed the VS Code extension: <code>Rust (rls)</code>/<code>rust</code>.</li>
<li><strong>Project setup:</strong> In the root of <a href="https://github.com/chriskrycho/dmmf">my repo</a>, I ran <code>cargo new rust</code>.</li>
</ul>
</section>
<section id="elm" class="level2">
<h2>Elm</h2>
<ul>
<li><strong>Language installation</strong>: There are installers, but I just did <code>npm i -g elm</code>.</li>
<li><strong>Editor setup:</strong> Installed the VS Code Elm extension: <code>Elm</code>/<code>elm</code>.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></li>
<li><strong>Project setup:</strong>
<ul>
<li>Install the <code>create-elm-app</code> tool: <code>npm i -g create-elm-app</code></li>
<li>In the root of the project, I ran <code>create-elm-app elm</code>.</li>
</ul></li>
</ul>
</section>
<section id="f" class="level2">
<h2>F<sup>♯</sup></h2>
<ul>
<li><strong>Language installation</strong>: Install <a href="http://www.mono-project.com">mono</a>: <code>brew install mono</code> (note installation instructions <a href="option-5-install-f-with-mono-via-homebrew-64-bit">here</a>).</li>
<li><strong>Editor setup:</strong> Install the VS Code Ionide extension: <code>Ionide-fsharp</code>/<code>ionide-fsharp</code>. It’ll automatically install the associated Paket and FAKE extensions from the Ionide project as well, and those will install Paket and FAKE during installation.</li>
<li><strong>Project setup:</strong>
<ul>
<li>In the root of the repo, I created the <code>fsharp</code> directory.</li>
<li>Then I opened a VS Code instance to that directory, opened the command palette, and ran <code>F#: New Project</code>.
<ul>
<li>I chose <code>console</code></li>
<li>I left the directory blank</li>
<li>I named the project <code>dmmf</code> (for <em>D</em>omain <em>M</em>odeling <em>M</em>ade <em>F</em>unctional).</li>
<li>Since F<sup>♯</sup> (like C<sup>♯</sup>) prefers PascalCase names, I renamed the generated module <code>DMMF</code>.</li>
</ul></li>
</ul></li>
</ul>
</section>
<section id="reasonml" class="level2">
<h2>ReasonML</h2>
<ul>
<li><strong>Language installation</strong>: Following the setup instructions <a href="https://reasonml.github.io/guide/javascript/quickstart">here</a>, I ran <code>npm install -g bs-platform</code>.</li>
<li><strong>Editor setup:</strong> following <a href="https://reasonml.github.io/guide/editor-tools/global-installation">the official instructions</a>—
<ul>
<li>I ran <code>npm install -g https://github.com/reasonml/reason-cli/archive/3.0.4-bin-darwin.tar.gz</code> to install the dependencies for the editor configuration.</li>
<li>I installed the VS Code extension: <code>Reason</code>/<code>reasonml</code>.</li>
</ul></li>
<li><strong>Project setup:</strong> In the root of the repo, I ran <code>bsb -init reason -theme basic-reason</code>.</li>
</ul>
</section>
<section id="comments-on-the-setup-processes" class="level2">
<h2>Comments on the setup processes</h2>
<p>Most of the languages have <em>fairly</em> straightforward processes to get up and running with a good-to-excellent tooling experience.</p>
<p>The best of them is Rust, which is <em>extremely</em> easy to get up and running with.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> Elm is roughly in the middle—it’s less straightforward than Rust in that <code>create-elm-app</code> is <em>not</em> an officially supported approach, unlike <code>rustup</code> and <code>cargo</code>, so you’re going to have a much less awesome experience if you don’t know about it.</p>
<p>Reason and F<sup>♯</sup> both have slightly larger negatives.</p>
<p>Reason requires you to <code>npm install</code> a large, gzipped file with multiple dependencies all bundled, instead of having a dedicated installer <em>a la</em> <code>rustup</code>. It also has the possibility for a not-so-great first-run experience in the editor, which <a href="https://github.com/facebook/reason/issues/1729">I discovered</a> all too quickly.</p>
<p>F<sup>♯</sup> essentially requires you to use an editor extension to get the language set up with <a href="https://fsprojects.github.io/Paket/">Paket</a>, which is a <em>much</em> better choice of package manager than the default .NET package manager, NuGet. Command line tools exist and are improving rapidly, and you <em>can</em> <a href="https://fsprojects.github.io/Paket/paket-and-dotnet-cli.html">get them working</a>… but it’s harder than it needs to be. And that project setup wizard is <em>fine</em>, but it’s a lot noisier than just doing <code>create-elm-app</code> or especially <code>cargo new</code>.</p>
<p>In any case, though, I have them all up and running now! More soon!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If you’re uncomfortable with running that script, there are <a href="https://www.rust-lang.org/en-US/other-installers.html">other options</a> as well.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Note that the VS Code extension is <em>not</em> the best experience out there for Elm: the Atom extensions (<a href="https://atom.io/packages/language-elm">language-elm</a> and <a href="https://atom.io/packages/elmjutsu">elmjutsu</a>) are. I stuck with VS Code because it’s <em>good enough</em> and, more importantly, the Code extensions are arguably best in class for the <em>other</em> languages… and it’s what I use every day.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>I’m not just saying that because I’m a Rust fanboy, either! If Rust were hard to use, I’d be complaining <em>louder</em> because of my enthusiasm for the language.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Exploring 4 Languages2017-12-31T20:20:00-05:002017-12-31T20:20:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-12-31:/2017/exploring-4-languages.htmlI’m going to implement the exercises from Domain Modeling Made Functional in Rust, Elm, ReasonML, and F♯… because I think it'll be an interesting learning experience and a lot of fun!<p>Today, as I hit the first of the implementation chapters in <a href="https://pragprog.com/book/swdddf/domain-modeling-made-functional"><em>Domain Modeling Made Functional</em></a>, I started thinking about how I wanted to implement it. As I’ve noted <a href="https://twitter.com/chriskrycho/status/934170826718429184">elsewhere</a> in the past, very little of the book is <em>truly</em> specific to F<sup>♯</sup>, though that’s the language Wlaschin uses in the book—and Wlaschin himself <a href="https://twitter.com/ScottWlaschin/status/934177554331848705">agrees</a>:</p>
<blockquote>
<p>Thanks! Yes, it’s true that you could easily use #ElmLang, #RustLang, #Scala, or especially #OCaml to work through the book. I use hardly any F# specific features.</p>
</blockquote>
<p>So… I decided to try something a little bit bonkers. I’m going to implement these exercises in <em>four different languages</em>:</p>
<ul>
<li><a href="https://www.rust-lang.org">Rust</a></li>
<li><a href="http://elm-lang.org">Elm</a></li>
<li><a href="http://fsharp.org">F<sup>♯</sup></a></li>
<li><a href="https://reasonml.github.io">ReasonML</a></li>
</ul>
<p>These languages are all related: they’re descended from <a href="http://smlnj.org/sml.html">Standard ML</a>. ReasonML and F<sup>♯</sup> are like siblings: Reason is merely a custom syntax for OCaml; F<sup>♯</sup> is (originally) an implementation of OCaml on .NET (though the two languages have diverged since F<sup>♯</sup> came into existence). Elm and Rust are cousins of each other and of Reason and F<sup>♯</sup>, though they’re both drawing on other languages besides OCaml as well. I also have some familiarity with Rust, Elm, and F<sup>♯</sup> already, and have read the docs for Reason a couple times. So this is a <em>bit</em> less crazy than it might otherwise be.</p>
<p>Why, though? Mostly because I think it’ll be interesting to compare the implementations of the domain model from the book side by side. It’ll look just a bit different in each language, and I expect to learn a bit more of the <em>feel</em> of each language by doing this. (That side by side comparison is something I’ve <a href="http://v4.chriskrycho.com/rust-and-swift.html" title="Series: Rust and Swift">done before</a> and <a href="http://v4.chriskrycho.com/2015/rust-and-swift-v.html" title="Part V: The value (and challenge) of learning languages in parallel.">found very profitable</a>.) I’ll also turn it into blog posts, which hopefully will be interesting to others!</p>
<p>More to come, and soon.</p>
Types are Small2017-12-29T14:00:00-05:002017-12-29T14:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-12-29:/2017/types-are-small.htmlA really fabulous quote from Scott Wlaschin's book Domain Modeling Made Functional crystallized an important point for me: types (in typed functional programming) are small.
<p>I’ve been reading through <a href="https://fsharpforfunandprofit.com" title="F♯ for Fun and Profit">Scott Wlaschin</a>’s really excellent book <a href="https://pragprog.com/book/swdddf/domain-modeling-made-functional"><em>Domain Modeling Made Functional</em></a> and this quote (from his chapter introducing the idea of <em>types</em> in <em>typed functional programming</em>) crystallized something for me:</p>
<blockquote>
<p>A type in functional programming is not the same as a class in object-oriented programming. It is much simpler. In fact, a type is just the name given to the set of possible values that can be used as inputs or outputs of a function.</p>
</blockquote>
<p>A lot of times when I’m trying to explain how I use types in a typed functional programming style, this is a serious point of confusion—both for the Java or C♯ OOP programmer and for the programmers coming from dynamic languages. When people think of “types” they tend to think of <em>classes and interfaces and methods, oh my!</em><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>One is the big heavy class. The other is a nice little LEGO block. The difference is <em>huge</em> in my day to day experience, but I’ve never been able to express it so clearly as Wlaschin’s quote.</p>
<p>I suspect that when I’m talking to most people coming from dynamically typed languages <em>or</em> from the standard OOP languages, they hear “Write three interfaces and six classes” when I say “using types to help me with my program.” But what I mean is “Write three tiny little shapes, and then one more that shows how they snap together in a slightly bigger one.” Types aren’t big heavy things. They’re just the shapes I want to flow through my program, written down like documentation for later… that gets checked for me to make sure it stays up to date, and lets me know if I missed something in my description of the shape of the data, or tried to do something I didn’t mean to before.</p>
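<p>Here’s roughly what I mean, sketched in Rust (all the names here are invented for illustration): three tiny shapes, and one slightly bigger one that shows how they snap together.</p>

```rust
// Three tiny LEGO-block types...
pub struct CustomerId(pub u32);
pub struct EmailAddress(pub String);
pub struct OrderQuantity(pub u8);

// ...and one slightly bigger shape showing how they snap together.
pub struct OrderLine {
    pub customer: CustomerId,
    pub contact: EmailAddress,
    pub quantity: OrderQuantity,
}
```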
<p>You <em>can</em> write a language like F♯ or TypeScript or Elm the way you would write C♯, but it’s generally not going to be an especially <em>happy</em> experience (and it’ll be less happy the more “purely functional,” <em>a la</em> Elm, you go). But you don’t have to! Types are just tiny little descriptions of the shapes you plan to deal with in a particular spot—more concise and more dependable than writing a JSDoc or something like that.</p>
<p>Types are small. You can build big things with them, but <em>types are small</em>.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>In fact, nearly every “I’m just not into types” or even “I think types are worse for most things” talks I’ve seen—including <a href="https://www.youtube.com/watch?v=2V1FtfBDsLU">this recent and popular one by Rich Hickey</a>—tend to conflate <em>all</em> type systems together. But the experience of writing TypeScript is <em>very</em> different from the experience of writing C♯. (You’ll note that in that talk, for example, Hickey freely jumps back and forth between Java-style types and Haskell-style types when it suits his purposes, and he entirely skips past the structural type systems currently having something of a heyday.) In many cases, I <em>suspect</em> this is simply a lack of deep experience with the whole variety of type systems out there (though I’d not attribute that to any specific individual).<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
I Want JSON Decoders2017-12-25T19:20:00-05:002017-12-25T19:20:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-12-25:/2017/i-want-json-decoders.htmlParsing JSON well is a solved problem in lots of contexts. But it's time for JavaScript to take a page from Elm.
<p><i class=editorial>This post was originally published at <a href="https://www.dailydrip.com/blog/i-want-json-decoders.html">DailyDrip.com</a>. They’re doing really great work over there, so I encourage you to check out their content and consider subscribing!</i></p>
<hr/>
<p>The other day, I got a report about the Ember.js app I’m working on: when a customer applied a coupon in the basket, they’d see an indication that the coupon was applied, but the basket total would still display as if it hadn’t been updated. Orders were <em>placed</em> correctly, but they wouldn’t render right. I dug around for a bit, and then discovered that it was one of the (many) places where <code>undefined</code> was biting us.</p>
<p>How did this happen? It turned out it was a perfect storm: a confusingly-designed <abbr>API</abbr> combined with a reasonable (but in this case, very unhelpful) assumption in our data layer. When the total on a given basket dropped to zero, our <abbr>API</abbr> simply didn’t send back a value on the payload at all. Instead of <code>{ total: 0, ... }</code>, there was just, well, <code>{ ... }</code> – no <code>total</code> field at all. Meanwhile, our data layer was designed to let a server send back only the fields which <em>required</em> updating. That way, you can send back partial records to indicate only what has changed, instead of having to send back the whole of what might be a very large record, or a very large collection of records.</p>
<p>The combination was terrible, though: because the server didn’t send back the <code>total</code> field at all when it dropped to <code>0</code>, the client never updated the total it displayed to the user: as far as it was concerned, the server was saying “no change here!”</p>
<p>The first and most obvious solution here, of course, is the one we implemented: we had the <abbr>API</abbr> always send back a value, even if that value was <code>0</code>. But it seems like there should be a better way.</p>
<p>Lots of languages have fairly nice facilities for parsing <abbr>JSON</abbr>. Several languages even have tools for automatically constructing local, strongly-typed data structures from the structure of a <abbr>JSON</abbr> response on an <abbr>API</abbr>. F♯’s <a href="https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/type-providers/">type providers</a> are like this and <em>really fancy</em> in the way they’ll automatically derive the type for you so you don’t even have to write it out as you would in everything from Haskell to C#. But for the most part in JavaScript, you have at most a way to map data to a local record in your data store – certainly none of those type safe guarantees. In TypeScript, you can write the types you receive out carefully – though, as I discovered in this case, probably not carefully <em>enough</em> unless you model <em>everything</em> as an optional field, and then you’re back to checking for <code>null</code> or <code>undefined</code> everywhere, and <em>why isn’t this already a solved problem?</em></p>
<p>And it turns out, it <em>is</em> a solved problem – or at least, it is in Elm, <a href="https://guide.elm-lang.org/interop/json.html">via</a> those <a href="https://guide.elm-lang.org/interop/json.html"><abbr>JSON</abbr> Decoders</a>. I don’t get to write Elm at work right now (or any time in the foreseeable future) – but if I can’t write Elm, I can at least try to steal a bunch of its great ideas and push them back into my TypeScript.</p>
<p>So… what exactly are <abbr>JSON</abbr> Decoders and how would they have solved this problem? (And why, if you’re already familiar a little with Elm and possibly feeling frustrated with decoding, are they actually worth it?)</p>
<p>A <abbr>JSON</abbr> Decoder is just a way of guaranteeing that once you’re inside the boundary of your program, you <em>always</em> have a valid instance of the data type you’ve decoded it into, <em>or</em> an error which tells you why you <em>don’t</em> have a valid instance of the data. They’re composable, so you can stack them together and take smaller decoders to build bigger ones, so if you have a complex <abbr>JSON</abbr> structure, you can define repeated substructures in it, or decoders for dissimilar sibling items in it, and use them to put together a grand decoder for your whole final structure. The decoders use the <a href="http://package.elm-lang.org/packages/elm-lang/core/5.1.1/Result"><code>Result</code></a> type, and they hand back either <code>Ok</code> with the decoded value or <code>Err</code> with the reason for the failure – and if <em>any</em> piece of a decoded type doesn’t match with what you’ve specified, you’ll end up with an <code>Err</code>.</p>
<p>Now, initially that might sound like a recipe for disaster – <abbr>JSON</abbr> payloads can be formed in weird ways all the time! – but in fact it encourages you to think through the various ways your payloads can be formed and to account for them. <em>Sometimes</em>, if the payload doesn’t have what you expect, that really does mean something is wrong either in your request or in the server-side implementation. In that case, getting an <code>Err</code> is <em>exactly</em> what you want. Other times, the server might be perfectly legitimate in sending back a variety of shapes in its response, and your responsibility is to decide how to decode it to make sense in your app. Remember, the problem I had was that I received a payload which didn’t have the data. With Elm’s decoders, I would have had three choices:</p>
<ol type="1">
<li>I could have treated this as an error, and passed that along to be dealt with in some way.</li>
<li>I could have normalized it as a 0-value payload.</li>
<li>I could have treated it <em>explicitly</em> as a no-op, maintaining whatever previous state I had in the data store, i.e. the implicit behavior of my actual data store.</li>
</ol>
<p>What I <em>couldn’t</em> do, though, is do any one of those <em>accidentally</em>. I could still support incomplete payloads (via option 3), but I’d be explicitly opting into that, and there would be an obvious place where that was the case. This would be particularly helpful in a scenario where I wasn’t also in charge of the <abbr>API</abbr>: if I couldn’t just go change it so the <abbr>API</abbr> itself had a more sensible behavior, I could enforce whichever desired behavior on my own end. More than that, with something modeled on the Elm <abbr>JSON</abbr> Decoders, I would <em>have</em> to: there would be no implicit consumption of raw <abbr>JSON</abbr>.</p>
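<p>To make that concrete, here’s a minimal, hand-rolled sketch of the decoder idea in Rust (the toy <code>Json</code> type and all the names are mine; no real <abbr>JSON</abbr> library is involved): decoding yields either the value or an error saying why not, and the handling of a missing field is an explicit, visible choice.</p>

```rust
// A toy stand-in for a JSON field: a number, a string, or absent entirely.
#[derive(Debug, PartialEq)]
pub enum Json {
    Number(f64),
    Text(String),
    Missing,
}

// A decoder in miniature: it returns Ok with the decoded value or Err
// with the reason for the failure.
pub fn decode_total(field: &Json) -> Result<f64, String> {
    match field {
        Json::Number(n) => Ok(*n),
        Json::Text(s) => Err(format!("expected a number, got string {:?}", s)),
        // Choice 2 from the list above: a missing field is explicitly
        // normalized to zero, right here at the decoding boundary.
        Json::Missing => Ok(0.0),
    }
}
```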
<p>The first time I played with the Elm <abbr>JSON</abbr> Decoder approach, I thought it was a lot of work. I was used to just doing <code>JSON.parse()</code> in JS or <code>json.loads()</code> in Python. Now I needed to define a whole series of decode steps explicitly for every field in a response? Good grief! But it grew on me. More than that, I now actively miss it in my apps; I’d have been really happy not to have to spend a morning hunting down this particular bug.</p>
<p>Sometimes that explicitness can seem like quite a lot of boilerplate, and indeed it is: there’s a reason the Elm <a href="https://github.com/NoRedInk/elm-decode-pipeline">elm-decode-pipeline</a> project exists. But even given the <em>initial</em> nicety of something like F♯ type providers, I think the Elm approach has a slight edge in the long-term for <em>maintainability</em> specifically. It’s one thing to be able to just get to work right away and have a type definition you know to conform to a given <abbr>API</abbr> response. It’s something else entirely to be able to <em>know</em> that you’ve accounted for all the varieties of responses you might get (and without throwing an exception for failed <abbr>JSON</abbr> decoding at that!).</p>
<p>Given all of this, I’ve started mentally teasing out what such a <abbr>JSON</abbr> decoding library for Ember.js might look like in TypeScript. It’s a long way off, but it’s the kind of thing that I <em>really</em> want to experiment with, and that I think would make for a big win for the maintainability of our apps. Keep your eyes peeled, because I suspect this is another thing JS will steal from Elm, and that’s <em>great</em> in my book.</p>
Chrome is Not the Standard2017-12-21T07:10:00-05:002017-12-21T07:10:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-12-21:/2017/chrome-is-not-the-standard.htmlNo single browser vendor represents “the future of the web.” Each ships in line with its own business priorities. And that's a good thing.
<p><i class=editorial>This got an enormous amount of play around the web, and as a result people have ended up translating it to other languages. If you have a translation, I’ll be happy to link it here!</i></p>
<ul>
<li><a href="http://softdroid.net/chrome-ne-yavlyaetsya-standartom">Russian</a>, translated by Vlad Brown (<a href="http://softdroid.net" class="uri">http://softdroid.net</a>)</li>
<li><a href="http://getdrawings.com/uz-chrome-standart-emas">Uzbek</a>, translated by Alisher</li>
<li><a href="http://jeremylee.sh/various/chromp.html">French</a>, translated by Jeremy Lee Shields (<a href="http://jeremylee.sh" class="uri">http://jeremylee.sh</a>)</li>
</ul>
<hr />
<section id="the-post" class="level2">
<h2>The post</h2>
<p>Over the past few years, I’ve increasingly seen articles with headlines that run something like, “New Feature Coming To the Web”—followed by content which described how Chrome had implemented an experimental new feature. “You’ll be able to use this soon!” has been the promise.</p>
<p>The reality is a bit more complicated. Sometimes, ideas the Chrome team pioneers make their way out to the rest of the browsers and become tools we can all use. Sometimes… they get shelved because none of the other browsers decide to implement them.</p>
<p>Many times, when this latter tack happens, developers grouse about the other browser makers who are “holding the web back.” But there is a fundamental problem in this way of looking at things: <em>Chrome isn’t the standard.</em> The fact that Chrome proposes something, and even the fact that a bunch of developers like it, does not a standard make. Nor does it impose an obligation to other browsers to prioritize it, or even to ship it.</p>
<p>As web developers, it can be easy to become focused on interesting new features for the platform we work on. That’s no different than the excitement Android and iOS developers have when Google and Apple release new SDKs for developing on their platforms. It’s healthy to be excited about possible new features, things that might make our jobs easier or enable us to do things we couldn’t do before.</p>
<p>But there <em>is</em> an important difference between those platforms and the web. Those platforms are the domain of a single vendor. The web is a shared platform. This is its unique benefit, and its unique cost. It uniquely allows us to write software that can actually run, and run reasonably well, <em>everywhere</em>. But it also means that a minimum of four companies—the major browser vendors—get a say in whether a feature is a <em>feature</em> or whether it’s just an interesting idea one of the teams had.</p>
<p>Let’s get concrete about an example that’s been extremely high-profile for the last couple years—and, to be clear, one I think is a <em>good</em> idea from Google: <a href="https://developers.google.com/web/progressive-web-apps/" title="Google’s PWA page">progressive web apps</a> (hereafter <abbr title='Progressive Web App'>PWA</abbr>). They have been pitched by Google and other supporters as an unambiguous win for the user experience of complex web applications. And, as a web developer myself, I’m actually inclined to agree with that assessment! However, I have fairly regularly seen people getting angry at especially Apple for not prioritizing support for <abbr title='Progressive Web App'>PWA</abbr>s in (especially iOS) Safari—Apple is, in this view, “holding back the future of the web.”</p>
<p>Well… no. For any given idea Google pitches, Apple may or may not be sold on Google’s vision of the web, or they may even think it’s a good idea but not <em>more</em> important than other things they’re working on.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>And this is what it <em>means</em> to be part of the web platform. No single company gets to dominate the others in terms of setting the agenda for the web. Not Firefox, with its development and advocacy of <a href="http://webassembly.org/">WebAssembly</a>, dear to my heart though that is. Not Microsoft and the IE/Edge team, with its proposal of the CSS grid spec in <em>2011</em>, sad though I am that it languished for as long as it did. Not Apple, with its pitch for <a href="https://webkit.org/blog/7846/concurrent-javascript-it-can-work/" title="“Concurrent JavaScript: it can work!”">concurrent JavaScript</a>. And not—however good its developer relations team is—Chrome, with any of the many ideas it’s constantly trying out, including <abbr title='Progressive Web App'>PWA</abbr>s.</p>
<p>It’s also worth recognizing how these decisions aren’t, in almost any case, unalloyed pushes for “the future of the web.” They reflect <em>business</em> priorities, just like any other technical prioritization. Google cares about <abbr title='Progressive Web App'>PWA</abbr>s because Google makes its money from the web and wants people to spend more of their time on the web. Apple cares about things like the battery life implications and the sheer speed of its iOS JavaScript engine because it makes money from hardware and it wants people to be happy with their iPhones and iPads.</p>
<p>Does any one of those browsers’ commitments map cleanly to <em>all</em> users’ (or even all <em>developers’</em>) priorities? Of course not! This is and always has been the beauty of a competitive browser landscape. I’m a web developer who wants <abbr title='Progressive Web App'>PWA</abbr> support everywhere—so I want Apple supporting it. But I’m also a smartphone user who wants those applications to <em>scream</em> on my device—not to crawl, like they do on Chrome on Android, which is still years behind iOS in performance. As an end user, not just a developer, it matters to me that running Safari on my laptop instead of Chrome can dramatically increase my battery life.</p>
<p>These are tradeoffs, plain and simple. Chrome ships new features fast, but they’re not always stable and they often have performance costs. Safari ships new features on a much slower cadence, but they’re usually solid and always perform incredibly well. These are both engineering and business tradeoffs, and the companies behind the browsers are making them because of their own business and engineering priorities. Don’t valorize any of the browser vendors, and don’t act as if <em>any</em> of them is the standard, or a reliable predictor of the future. Instead, value what each brings to the table, but also value the interplay <em>at</em> the table, and the ways each of these vendors pushes the others and challenges the others’ assumptions of what is most important. That’s what makes the web so great, even when it makes things move more slowly. Sometimes—often, even!—moving more slowly not in the <em>experimental</em> phase but in the <em>finalizing</em> phase makes for a much better outcome overall.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>In this case, it seems to have been the latter, since yesterday’s release of Safari Tech Preview enabled Service Workers, one of the major pieces of the <abbr title='Progressive Web App'>PWA</abbr> push.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Vexing Ironies2017-12-17T20:50:00-05:002017-12-17T20:50:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-12-17:/2017/vexing-ironies.htmlOne of the most vexing problems in thinking and responding well to problems of ethics in information technology specifically is the way that so much of thinking and responding about information technology takes place within the context of, well, information technology. Two examples of that challenge caught my attention this evening, ironies I noticed in reading the very same article.
<p>One of the most vexing problems in thinking and responding well to problems of ethics in <em>information technology</em> specifically is the way that so much of thinking and responding about information technology takes place within the context of, well, information technology. Two examples of that challenge caught my attention this evening, ironies I noticed in reading the very same article.</p>
<p>In <a href="http://www.roughtype.com/?p=8248">“How smartphones hijack our minds”</a>, Nick Carr explores much of the evidence for ways that use of smartphones can have seriously negative effects on our thinking in ways that are both pernicious (because we usually do not notice them consciously) and pervasive (in that they happen simply by dint of the <em>presence</em> of the devices). It’s a good article, and I commend it to you as a helpful summary of a lot of the most current research on attention, smartphones, and the like; you should read it and think about how you use your phone.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>But the first irony was that I read that article… on my smartphone. As indeed I read <em>many</em> of Nick Carr’s articles. There’s something more than a little odd about considering the use of a smartphone by way of reading an article on a smartphone. But in a very real sense, there’s almost no way I <em>could</em> have read Carr’s article otherwise.</p>
<p>And accordingly, the second irony is that, although Carr, who has staked out a position as a popular-level writer tackling issues of how modern information technology affects us, <a href="https://t.alibris.com/The-Shallows-What-the-Internet-Is-Doing-to-Our-Brains-Nicholas-Carr/book/11882057" title="The Shallows">certainly <em>is</em> published in hard copy</a>, everything of his I’ve ever read has been in digital form. Indeed, although his books are important in their own ways, I think it’s fair to say that <em>most</em> people’s interaction with his ideas, including his critiques of the ways we use and indeed rely on the internet, has happened via and only because of the internet.</p>
<p>I’m not entirely sure what to make of these observations. I don’t fault Carr for publishing a blog, exactly; and though I am increasingly chastened about my own at-times-unwise use of a smartphone, I don’t fault myself for having an RSS reader there. (Better that than a Twitter app, to be sure!) But there is something at a minimum <em>odd</em> and perhaps even something <em>off</em> about the ways that we tend to use the very tools we are critiquing as the medium for advancing our critiques. We implicate ourselves.</p>
<p>But what is the alternative? On the one hand, Carr’s message—which is important!—has likely been heard and even internalized by a far broader audience because he has transmitted it digitally than it would have been had he conscientiously limited it to books (and perhaps print media articles). The efficacy of the medium for distribution is the internet’s greatest strength. Likewise, I would never have run into Carr’s writing in the first place apart from articles in my RSS feed which linked it; and I do a great deal of my RSS feed reading on my iPhone and iPad, both of which are much better <em>reading</em> environments than a laptop or a desktop computer. Indeed, much of what I find most helpful in my reading on technology and ethics I find my way to via articles in my RSS feed, and I often save items for reading later by simply tapping an interesting-looking link in an article I’m reading in that RSS feed.</p>
<p>What do we make of this tension, these ironies? Especially when our concerns begin to rise to the level not merely of prudential judgments (though that level alone is perhaps sufficient reason to do more than we let ourselves) but deep ethical worries—do we abandon the smartphone altogether, cease blogging for fear of how it only contributes to the Google-ified and Facebook-ified age we live in?</p>
<p>I don’t know.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>One of my concerns in my <a href="http://v4.chriskrycho.com/2017/why-do-i-need-a-research-tool.html">ongoing project</a> is to prompt people around me—friends and family, but perhaps also blog readers!—to consider how our use of technology <em>forms</em> us. More on that in a future post.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Becoming a Contributor2017-11-02T07:00:00-04:002017-11-02T07:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-11-02:/2017/becoming-a-contributor.htmlThe prepared script for my talk at Rust Belt Rust 2017, given October 27, 2017 in Columbus, Ohio.<p><i class=editorial>Here is the full text of the talk I gave at Rust Belt Rust, as it was prepared; headings correspond to individual slides. You can see the slides as they were presented <a href="/talks/rust-belt-rust/">here</a>. Note that I extemporize fairly freely when actually giving a talk, so this is <em>not</em> a word-for-word equivalent of the talk as delivered, but the gist is the same!</i></p>
<p><i class=editorial>I’ll update this post with the video once it’s available!</i></p>
<hr />
<figure>
<img src="/talks/rust-belt-rust/img/family.jpg" alt="family" /><figcaption>family</figcaption>
</figure>
<p>Hello, everyone! It’s good to see all of you. We only have half an hour, and even if that’s ten to fifteen minutes longer than a normal New Rustacean episode, that’s still not much time, so let’s jump right in! Our theme is “Becoming a Contributor.” There are two prongs to this talk, two big ideas I hope you all walk away with.</p>
<section id="introduction-the-big-ideas" class="level3">
<h3>Introduction: The Big Ideas</h3>
<p>The first thing I hope all of you take away is that <strong>there is no reason <em>you</em> cannot contribute meaningfully</strong> to the success of Rust – or indeed any open-source project you care about. Anyone can be a contributor. And not “even you” but perhaps “<em>especially</em> you”. The fact that you’re an outsider, or new to programming, or new to systems programming: sometimes that makes you a <em>better</em> contributor. Because you don’t necessarily share the biases of – you’re not wearing the same blinders that – someone who’s been writing systems-level code for 20 years has. So the first idea: <strong>you can contribute</strong>.</p>
<p>The second idea I hope you take away is <strong>just <em>how many</em> ways there are to contribute meaningfully</strong>. It has almost become a cliche in the Rust community to say “code isn’t the only thing that matters,” but I want to show you today just how true that is. And I want to make that point again more forcefully, because for all that we often say that, the idea that <em>shipping code</em> is what really matters is the kind of pernicious lie that can come back and bite any of us. It certainly gets to me at times! But it’s a lie, and we’re going to see that in detail. That’s the second big idea: <strong>there are an <em>astounding</em> number of ways you can contribute</strong>.</p>
</section>
<section id="introduction-why" class="level3">
<h3>Introduction: Why?</h3>
<p>There are a lot of things to be passionate about in the world of software development. But at the end of the day, I care about software because I care about <em>people</em>. To borrow a label from Scott Wlaschin – a developer I admire enormously, mostly working over in the F# community – I am a <em>humanist</em>, not a <em>technologist</em>. The technologies are interesting in themselves to a degree; but I mostly care about the ways that technologies can help us serve people more effectively. As software developers, that takes a lot of shapes. But today I want to zoom in on just these two ideas about open-source software:</p>
</section>
<section id="introduction-the-big-ideas-1" class="level3">
<h3>Introduction: The Big Ideas</h3>
<p>So: why these two ideas? For one thing, because I think they are among the most applicable to everyone here. We have an enormous open-source focus. But for another, because they can also serve as windows into the ways we can – and should – think about software more generally. So: let’s talk about how you become a <em>contributor</em>.</p>
</section>
<section id="introduction-outline" class="level3">
<h3>Introduction: Outline</h3>
<p>We’re going to take this on in the good old grammar-school fashion: <em>who</em>, <em>what</em>, <em>when</em>, <em>where</em>, <em>why</em>, and <em>how</em>. We’re not going to take them in that order though, and we might smash a few of them together.</p>
<ol type="1">
<li>Introduction</li>
<li>Why bother contributing? <!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>Who is a contributor? <!-- .element: class="fragment" data-fragment-index="2" --></li>
<li>What is a contribution? How can you contribute? <!-- .element: class="fragment" data-fragment-index="3" -->
<ul>
<li>…so many things they won’t fit on this slide. <!-- .element: class="fragment" data-fragment-index="4" --></li>
</ul></li>
<li>When and where to contribute? <!-- .element: class="fragment" data-fragment-index="5" --></li>
<li>Conclusion <!-- .element: class="fragment" data-fragment-index="6" --></li>
</ol>
</section>
<section id="why-bother-contributing" class="level2">
<h2>Why bother contributing?</h2>
<p>The first question we might be asking is: <em>why contribute at all</em>? Why should you be interested in becoming a contributor? And the best answer I can offer is: because there is more work than hands to do it. Always. Every open-source maintainer can tell you the truth of this.</p>
</section>
<section id="who-is-a-contributor" class="level2">
<h2>Who is a contributor?</h2>
<p>People define this differently, but I have a very simple definition: <strong>A contributor is <em>anyone</em> who improves a project.</strong></p>
<section id="who-is-a-contributor-examples" class="level3">
<h3>Who is a contributor? Examples</h3>
<p>For example:</p>
<ul>
<li>submit a patch to fix a typo <!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>add a small correction for a code sample in a project <!-- .element: class="fragment" data-fragment-index="2" --></li>
<li>file an issue instead of just suffering through a problem in silence <!-- .element: class="fragment" data-fragment-index="3" --></li>
<li>everything else we’re going to talk about today <!-- .element: class="fragment" data-fragment-index="4" --></li>
</ul>
</section>
<section id="who-is-a-contributor-me" class="level3">
<h3>Who is a contributor? Me!</h3>
<p>That might sound overblown, but it’s really not. I am literally standing on this stage in front of you today because I submitted some small typo and code sample improvements to “Rust by Example” a few years ago, and realized: I can make a difference in this community. And that gave me the motivation I needed to <em>keep</em> contributing.</p>
<p><img src="/talks/rust-belt-rust/img/first-commit.png" alt="my first Rust commit" /><!-- .element: class="fragment" data-fragment-index="1" --></p>
</section>
<section id="who-is-a-contributor-1" class="level3">
<h3>Who is a contributor?</h3>
<p>I don’t imagine the story is all that different for <em>most</em> people who are open-source contributors in this room. Something got them over the hump, and it was probably something small, insignificant-seeming at the time. They might be particularly skilled in this thing or that thing, but in fact a lot of them are in those roles just because they saw a need and stepped up to fill it. And then kept at it for a long time. But it made them a contributor. And that feeling – of helping build something bigger than you can build on your own – is a good one. I’d go so far as to say it’s part of what humans are <em>meant</em> for. It’s part of us in a deep, deep way.</p>
</section>
</section>
<section id="who-is-a-contributor-2" class="level2">
<h2>Who is a contributor?</h2>
<p>If you’re inclined to quibble with that definition, I challenge you to ask <em>why?</em> I think, most often, it’s because we feel defensive about wanting to project our own particular kinds of contribution as the most important, or the most valuable. But I’m more of the mindset that, as I read recently, “anyone who would be first… must be last of all, and servant of all.” We should stop worrying about our own prestige and turf-marking, and start rejoicing in the many different ways people are able to make our projects better.</p>
<p>There’s no magic that makes you qualified to be a contributor. There’s just a willingness to serve where you see a need.</p>
</section>
<section id="what-how-can-you-contribute" class="level2">
<h2>What & how can you contribute?</h2>
<p>And that takes us into the “what” of all of this, the <em>how</em>. (Yes, I’m combining those two). <strong><em>What</em> is a contribution? <em>How</em> can you contribute?</strong> Turns out, this is a <em>long</em> list.</p>
<section id="what-how-code" class="level3">
<h3>What & how: code</h3>
<p>Let’s get this right out of the way up front, because it’s the most obvious: you can write code. You can fix bugs or help implement new features. You can do that even if you’re not an expert – especially in the Rust community. Many Rust projects have gone out of their way to mark issues as good-first-issues, or easy-to-tackle, or mentorship-available. Maybe it’s your first contribution to an open-source project: that’s okay. You can take a stab at it, and the fact that it might not be good <em>is okay</em>. The whole point of these kinds of issues is that they give you a place where you can jump in safely.</p>
<p><img src="/talks/rust-belt-rust/img/good-first-issue.png" alt="good first issue" /> <img src="/talks/rust-belt-rust/img/mentored.png" alt="mentored" /> <img src="/talks/rust-belt-rust/img/easy.png" alt="easy" /></p>
<p>That goes equally for everything from the Rust compiler itself to many of the other projects in the ecosystem. Look at the repository, for example! And it’s not just this project. <em>Lots</em> of projects in the Rust ecosystem are like this.</p>
<section id="what-how-code-were-kind-here" class="level4">
<h4>What & how: code – we’re kind here</h4>
<p>And no one is going to swear at you or insult you for making a mistake here. Not even if you’re working on something important, and not even if you’ve been doing it for a while. That is not. how. we. roll. here. <em>Everyone</em> makes mistakes!</p>
<p>Instead, we <em>want</em> people to show up, knowing nothing: we’re happy to help. Remember: we want people to contribute! So: try opening a PR and let people help you learn how to do it well! In fact, if you haven’t ever opened a PR on a Rust project, find one that looks interesting to you and has an issue tagged that way, and submit a PR before the weekend is out! You can do it!</p>
<p><img src="/talks/rust-belt-rust/img/good-first-issue.png" alt="good first issue" /> <img src="/talks/rust-belt-rust/img/mentored.png" alt="mentored" /> <img src="/talks/rust-belt-rust/img/easy.png" alt="easy" /></p>
</section>
<section id="what-how-code-a-caveat" class="level4">
<h4>What & how: code – a caveat</h4>
<p>But code is not the only thing that makes you a contributor. I put it up front because I think it’s worth doing – but I also wanted to get it out of the way. In every software community, it’s easy to <em>over</em>-value the code. That might sound crazy, given that it’s open-source <em>software</em>, but the reality is that no one fails to value the code. We <em>do</em> often fail to value all the other things that make an open-source software project actually useful. It’s certainly true that there’s no project without the code. But it’s also the case that there’s no <em>useful</em> software without a lot of other things besides the code, and we often undervalue those.</p>
</section>
</section>
<section id="filing-bugs" class="level3">
<h3>Filing bugs</h3>
<p>So let’s take one step away from code, and talk about what is probably the single <em>easiest</em> way anyone can contribute. <em>File issues.</em> If you’re using a binary and it doesn’t work, open a ticket. If you’re integrating a library and it seems like the API doesn’t do what it should, or if it seems like it’s missing some functionality… well, you can suffer in silence, or you can open a bug ticket! Many times, the author of the software <em>doesn’t know there’s a problem</em>. The only way they can fix it is if they know about it!</p>
<figure>
<img src="/talks/rust-belt-rust/img/new-issue.png" alt="filing bugs" /><figcaption>filing bugs</figcaption>
</figure>
</section>
<section id="docs" class="level3">
<h3>Docs</h3>
<p>Perhaps the thing most of you will be most persuaded of the utility of is <em>documentation</em>. All of us have faced the difficulty of trying to figure out how to integrate some poorly-documented (or undocumented!) library into our own codebase. That experience, in a word, <em>sucks</em>.</p>
<p>So working on documentation is one of the highest-value areas you can contribute to any project. It’s also really hard, in a bunch of ways – harder, in some ways, than writing the code is!</p>
<section id="docs-who" class="level4">
<h4>Docs: who?</h4>
<p>One kind of documentation is <strong>explanation of how things work under the hood</strong>. The implementer is the most qualified there! That doesn’t mean they don’t still need help even with that, though! Some people are incredible implementors and terrible explainers; you can often do a great service by serving as an “interpreter” for them – taking their explanations and making the literary tweaks and cleanups and polish that they need.</p>
<p>Another kind of documentation, though, developers and maintainers are often really poorly equipped to write, and that’s <strong>introductory documentation</strong>. This is the problem of expertise: when you know exactly how something is <em>meant</em> to work, and especially when you’re the one who implemented it, there are things that seem obvious to you which simply aren’t obvious to someone approaching it for the first time. And as hard as you try, you <em>can’t</em> escape that entirely. You can imagine what it might be like not to know something, but there’s no substitute for actually not knowing something.</p>
</section>
<section id="docs-how" class="level4">
<h4>Docs – how?</h4>
<p>What that means is that one of the most valuable things you can do as you learn a new library is <em>write down the things you don’t understand from the docs as you go</em>. And when you figure them out, <em>write that down, too</em>. If nothing else, writing up that experience – filing it as an issue on the bug tracker, or otherwise getting it in the hands of the maintainers – can help them make important changes to things like the order various concepts are introduced, or adding little notes to help people feel comfortable with not knowing something until it <em>can</em> be introduced later, and other things like that. It can help them recognize gaps in their docs – things they simply assumed but which they didn’t realize they were assuming – and fill those in. At the most extreme, you might even help them realize that some parts of the docs need full rewrites… and the work you’ve done in writing things down might just be the foundation or the actual content of those new docs.</p>
<ol type="1">
<li>Write down the things you don’t understand from the docs as you go.<!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>When you figure them out, write that down, too.<!-- .element: class="fragment" data-fragment-index="2" --></li>
<li>Then: file an issue or write a PR to improve it!<!-- .element: class="fragment" data-fragment-index="3" --></li>
</ol>
</section>
<section id="docs-varieties" class="level4">
<h4>Docs: varieties</h4>
<p>So what kinds of things would we call <em>documentation</em>?</p>
<ul>
<li>API documentation<!-- .element class="fragment" data-fragment-index="1" --></li>
<li>READMEs<!-- .element class="fragment" data-fragment-index="2" --></li>
<li>Tutorials<!-- .element class="fragment" data-fragment-index="3" --></li>
<li>Books<!-- .element class="fragment" data-fragment-index="4" --></li>
<li>The Rust Reference<!-- .element class="fragment" data-fragment-index="5" --></li>
</ul>
<p>Okay, books are a <em>huge</em> undertaking, but they can genuinely serve as documentation. Especially for large projects. In fact, several of the most important pieces of “documentation” the Rust project itself has are books: “The Rust Programming Language”, “Rust by Example”, and “The Rustonomicon”. But there are also important but totally unofficial books like Daniel Keep’s “A Practical Intro to Macros in Rust 1.0” and “The Little Book of Rust Macros”, or Jorge Aparicio’s book on microcontrollers with Rust.</p>
<p>The Rust Reference: This is a special category, and one that’s especially important to me. The Rust Reference is supposed to be an exhaustive guide to the language, and the value of that being complete and accurate is hard to overstate. It’s also wildly out of date today. I wrote an RFC last year that said, basically, “We need to actually document everything! That includes updating the Reference!” The trick is: it’s a huge undertaking, and while I and a few others made a good start on it earlier this year, that effort got bogged down by life, and it needs to be resuscitated. And it’s not just Rust which could use investment in that area. Other languages and frameworks have the same issue. It’s <em>really</em> important that there be an answer other than “dive into the source and try to figure out what its intent is” – the more central the component is in the ecosystem, the more important that is.</p>
</section>
<section id="docs-translation" class="level4">
<h4>Docs: Translation</h4>
<p>Another huge place you can contribute to documentation is <em>translation</em>. For good or ill, English has become the sort of <em>primary</em> language of programming, but that doesn’t mean we should treat it as the <em>only</em> language, or as <em>more important</em> than other languages. Translating documentation is amazing and very needed work, and it’s work that not everyone is really capable of! I’m fluent in English and… ancient Hebrew and ancient Greek. For some reason, there’s not much demand for technical writing in Greek from the era when Plato was alive. So I’m not much use at translation.</p>
<figure>
<img src="/talks/rust-belt-rust/img/translation.png" alt="translation" /><figcaption>translation</figcaption>
</figure>
<p>But many of you out there <em>are</em> multilingual, and could take docs written in English and convert them for, say, Czech-speaking developers. Perhaps just as importantly, you can go the <em>other</em> direction, and help non-English-speaking maintainers reach a broader audience. Take an amazing project which only has documentation in Amharic (because its developers don’t feel comfortable enough in English to translate it themselves) and translate it to English: <em>use</em> the fact that English <em>is</em> the common language to increase the reach of non-Western developers!</p>
</section>
</section>
<section id="visual-design" class="level3">
<h3>Visual Design</h3>
<p>One of the areas where you could move the ball down the field fastest in the Rust community is with <strong><em>visual</em> design</strong>. (To be clear, the <em>language</em> design is great!) But our websites could sometimes use some work.</p>
<section id="visual-design-its-not-just-us" class="level4">
<h4>Visual design: it’s not just us</h4>
<p>Systems programming language types have historically <em>not</em> spent a lot of time on the <em>presentation</em> of their tools. In part this is just a matter of what these kinds of languages have been oriented towards: if you spend all day hacking on kernel code, you’re <em>likelier</em> to be a person for whom user interface and visual design is less interesting than, say, optimizing memory performance or minimizing the number of cache misses a given approach has. But presentation <em>does</em> matter, and it matters especially as we want to enable more and more people to be able to write this kind of code.</p>
<p>Speaking frankly, though I’ve spent a large chunk of my career to date writing in systems-level languages, I’ve found the way a lot of these tools are presented to be a huge turn-off, and at times a barrier even to getting them working for me locally. Perhaps the most egregious example of that was some of the “documentation” – I’m not sure I should even call it that! – for Fortran, when I was first getting started programming back in college. The presentation of the material was essentially hacker-ish in a <em>bad</em> way: no CSS, no attention to organization of the material, no structure to help you find your way through it.</p>
</section>
<section id="visual-design-how" class="level4">
<h4>Visual design: how</h4>
<p>If you’re an expert or just a talented amateur, please pitch in<!-- .element: class="fragment" data-fragment-index="1" --></p>
<p>You can help here even if you’re not especially comfortable with visual design, or even if you’re outright bad at it, as long as you’re willing to spend just a little time on it! For example, you can simply help a team adopt something like Bootstrap. Yes, it’ll look like many other open-source projects out there. But it won’t be horribly, catastrophically ugly and unreadable! Or you can use one of these simple starter kits:</p>
<ul>
<li><a href="http://usewing.ml">Wing</a></li>
<li><a href="https://purecss.io">Pure.css</a></li>
<li><a href="http://getskeleton.com">Skeleton</a></li>
</ul>
<p>So don’t think that just because you aren’t a design expert means you can’t help here.</p>
<p>Just as important as the <em>visual</em> design is thinking about and actively designing the <strong>information hierarchy</strong> of your content. What leads to what? Which pieces go together, and which pieces can be broken up into their own pages or sections within pages? Think about the content like an <em>outline</em>. Many sites don’t have any such structure to them; they’re kind of haphazardly cobbled together. If you can help the maintainers with the <em>structure</em> and <em>organization</em> of their content, that can make an enormous difference as well.</p>
</section>
</section>
<section id="blogging" class="level3">
<h3>Blogging</h3>
<p>One of the other big ways you can help a project may not even end up in the repository at all. You can <em>blog</em>.</p>
<p>I know blogging can seem intimidating, for many of the same reasons that writing documentation can. Technical writing is hard, and it’s a completely different skill from programming. But it doesn’t have to be amazing; it just has to get the information out there – and you’ll get better as you practice.</p>
<section id="blogging-easy-mode" class="level4">
<h4>Blogging: “Easy Mode”</h4>
<p>You can start on “easy mode”, too. I mentioned this earlier when talking about documentation, but “just write down what you’re learning” is an incredibly effective technique for generating content. If you look at a lot of the technical blogging I’ve done over the years, it has been nothing more complicated than “here is what I just learned.” And if you want a <em>superb</em> example of this which is <em>very</em> different from mine, take a look at the work that Julia Evans does on her blog! She regularly writes down, in an inimitable way, highly technical ideas she’s just learning. If you want someone to make arcane Linux command line tools seem amazing and approachable, her blog is your ticket.</p>
<blockquote>
<p>Just write down what you’re learning.<br/> —Me, just now</p>
</blockquote>
</section>
<section id="blogging-good-examples" class="level4">
<h4>Blogging: good examples</h4>
<p>But even beyond “what I just learned,” blogging is a superb way for teaching in general. Over the course of this year, for example, Vaidehi Joshi has been writing what is essentially a friendly introduction to computer science on her blog on Medium. This is a totally different style of <em>content</em> (as well as of presentation!) from the kind of “what I just learned” content that Julia Evans writes, but it’s also really effective, because she takes her knowledge and translates it into something others can pick up. That’s obviously more work than just writing down things you just learned, but it can also pay really high dividends as others are able to substantially deepen their knowledge.</p>
</section>
<section id="blogging-all-the-options" class="level4">
<h4>Blogging: all the options!</h4>
<p>In blogging, as in documentation, there is a whole spectrum of basic teaching content you can contribute! And communities need the whole spectrum, from simple introductions to extremely thorough, advanced tutorials.</p>
<p>But blog posts can also be much more versatile than traditional documentation.</p>
<ul>
<li><strong>They can be one-offs, or series.</strong> You can give a topic as much depth, or as little depth, as you <em>care about</em> or <em>think it deserves</em>. I wrote an 18-part series comparing Rust and Swift, and it could have been 30 parts if I hadn’t eventually gotten derailed. That’s not <em>documentation</em>, but there’s a lot people can learn from those kinds of things.</li>
<li><strong>They can introduce a technology, or dig deep into how to use it, or how it’s built.</strong> You’re not limited to just one particular tack when blogging. Is your interest in the specific implementation details of some corner of the compiler? Write about that! Is your interest in how a given Rust library solves a specific kind of problem you’ve run into with another library, or with a similar library in another language? Write about that! You get the idea.</li>
<li><strong>They can critique or highlight problems with specific pieces of the ecosystem!</strong> A careful, well-articulated, critical blog post can do wonders for showing the problems with a given approach and can even sometimes help suggest the right solutions to those problems. I’ve repeatedly watched, for example, as people have blogged about their struggles getting their heads around the Tokio tooling; the result has been a <em>lot</em> of work by the Tokio team to respond to those problems. The more thoughtful and careful you are in that kind of criticism, the better! Good criticism is <em>incredibly</em> valuable, because we all have blind spots, and someone else’s perspective can help jar us out of them.</li>
<li><strong>They can show how to <em>integrate</em> different parts of the ecosystem.</strong> For example, as part of the “Increasing Rust’s Reach” initiative, Ryan Blecher recently wrote up a detailed walk-through on how to use the Diesel ORM and the Rocket web framework together to build a small blogging engine. That’s <em>huge</em>! It makes it that much easier for someone who’s just starting out with Rust, coming in from something like Python or Ruby, to dive in and get that intensely rewarding feeling of <em>having built something</em> in a relatively small amount of time. That’s also helpful because (almost) no one is building something with <em>just</em> Diesel, or just <em>any</em> crate. A huge part of what every software developer does is about fitting together other pieces of software.</li>
<li><strong>They can invite feedback on your own projects.</strong> Talk about what you’re doing, what your stumbling blocks are, what you don’t understand. People will often show up and help you with comments and clarifications!</li>
</ul>
<p>And that’s just scratching the surface. Blogs are incredibly versatile, and you should lean on that.</p>
</section>
</section>
<section id="audio-and-video" class="level3">
<h3>Audio and Video</h3>
<p>Not just words! Noises and pictures, too!</p>
<section id="audio-podcasts" class="level4">
<h4>Audio: podcasts</h4>
<ul>
<li>Not everyone learns the same way.<!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>Lots of people have commutes.<!-- .element: class="fragment" data-fragment-index="2" --></li>
</ul>
</section>
<section id="audio-but-there-are-already-podcasts" class="level4">
<h4>Audio: but there are already podcasts</h4>
<p>Everything I’ve talked about so far has been in written form. But audio and video media can also be really helpful. Not everyone learns best by reading. And not everyone has tons of time to sit down and read a book every day. One of the reasons I started the New Rustacean podcast is that it gives people a way to get up to speed on the language while on a daily commute. But there’s still a <em>huge</em> need for more audio and video content in this space!</p>
<p>One podcast is not enough!</p>
<figure>
<img src="/talks/rust-belt-rust/img/newrustacean.png" alt="New Rustacean" /><figcaption>New Rustacean</figcaption>
</figure>
<p><em>Two</em> podcasts is not enough!</p>
<figure>
<img src="/talks/rust-belt-rust/img/rfe.png" alt="Request for Explanation" /><figcaption>Request for Explanation</figcaption>
</figure>
<p>Seriously, not even <em>three</em> podcasts is enough!</p>
<figure>
<img src="/talks/rust-belt-rust/img/rusty-spike.png" alt="Rusty Spike" /><figcaption>Rusty Spike</figcaption>
</figure>
<p>So I’m laying down another challenge: there’s plenty of room for more, and more kinds, of audio content in this ecosystem.</p>
</section>
<section id="video" class="level4">
<h4>Video</h4>
<p>Again: people have different learning styles!</p>
<p>There’s also a huge opening for people to produce good video content. I’ve heard often from people that things like RailsCasts were essential in helping them learn the Ruby on Rails ecosystem. We <em>need</em> video tutorials which might look kind of like that, or like the kinds of things I’m doing on the podcast. If you have any skill that way, and any interest in teaching, you should make Rust videos – there aren’t many out there.</p>
</section>
<section id="video-what" class="level4">
<h4>Video: what</h4>
<p>There are lots of options here—not just live streaming!</p>
<p>Another, totally different tack you can take with video is <em>live-streaming</em>. Sean Griffin has done this at times, and I’ve done it once myself: it’s a ton of fun, and it can be incredibly illuminating for other people to see how you work and how you solve problems. You can also do what I did and live-pair on something. It’s a pain to set up, but it’s also a lot of fun.</p>
<p>And no doubt there are more ideas you have—please just go do them!</p>
</section>
</section>
<section id="talk-to-people" class="level3">
<h3>Talk to people</h3>
<p>Just talking with people matters. And there are lots of places to do it:</p>
<ul>
<li>IRC/Gitter/Slack/Discourse</li>
<li>Meetups</li>
<li>Conferences</li>
</ul>
<p>You can also host or help with a local meet-up! For a lot of people, one of the major challenges of learning <em>any</em> new piece of technology is that – even with IRC and Gitter and Slack and so on – you can feel isolated and alone. And people can help you solve problems in person, and make you feel supported in person, in ways that even a great community can’t really manage online. So <em>go</em> to meet-ups, at a minimum. And help the organizers. And if there isn’t a meet-up in your community… you can start one! The #rust-community team has a ton of resources.</p>
<p>Physicality matters. Presence matters. (We know this! We’re at a conference!)</p>
</section>
<section id="being-inviting" class="level3">
<h3>Being inviting</h3>
<p>Last but not least in this list of <em>how</em> to be a contributor, I want to take a minute and talk about “being a contributor” to those of you who’ve been contributors for a long time. Some of you have been shipping open-source software for years – some of you even for decades. Much of what I’ve said so far is old hat for you. Maybe not the design bits quite so much! But you’ve been doing this for a long time, and you’re not trying to get over the hump of making your first contribution. You have other things to contribute here:</p>
<ul>
<li><p>The most important thing you can do is practice <strong>welcoming people.</strong> The Rust community does this well, in general, but it’s something we need to keep in front of us as a goal as the community grows. It’s easy to get frustrated with newcomers as your project grows, demands on your time increase, and your work as a maintainer seems less like fun and more like a second job. But continuing to actively welcome newcomers in is <em>incredibly</em> powerful. You can make it possible for people to go from zero to really making a difference. And remember: so were you, once. None of us started out as magical wizards of Rust and open-source.</p></li>
<li><p>The second big thing you can do is <strong>mentoring.</strong> As I mentioned, I’m now the maintainer of one of the core pieces necessary to make Ember.js and TypeScript play nicely together. But while I’ve done <em>some</em> writing-of-code with that, a much larger part of my current and future work there is about helping other people learn TypeScript well enough to start using it in their apps and add-ons. But the flip-side of that is: even a fair bit of the code I <em>have</em> written, I was able to write because someone more comfortable with some of the infrastructure mentored <em>me</em> through its quirks and oddities.</p></li>
</ul>
</section>
</section>
<section id="when-where-to-contribute" class="level2">
<h2>When & where to contribute</h2>
<p>The last thing I want to touch on is <em>when and where</em> to contribute. There are two things I’d suggest you should consider here:</p>
<section id="when-where-you" class="level3">
<h3>When & where: you</h3>
<p>Where are <em>you</em> in the process of becoming comfortable with contributing?</p>
<ul>
<li>Just getting started?<!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>Already comfortable?<!-- .element: class="fragment" data-fragment-index="2" --></li>
</ul>
<p>If you’ve never done any open-source work at all before, that’s different than if you’ve gotten pretty comfortable with it in a different ecosystem and are just figuring out where to make yourself useful in <em>this</em> ecosystem.</p>
<section id="when-where-if-youre-just-getting-started" class="level4">
<h4>When & where: if you’re just getting started</h4>
<p>If you’re just getting started, I’d pick a big project with lots of those “Help Wanted” and “Mentoring” and “Easy” tags on issues, and let the size of the project help you out. Those are projects that are <em>used to</em> helping people make their first contributions. Crazy as it seems, something like Servo can actually be an <em>easier</em> place to start out than a much smaller project. Sure, the technical lift is higher, but there are also a lot more people actively invested in your success there.</p>
<ol type="1">
<li><p>Look for these!<!-- .element: class="fragment" data-fragment-index="1" --></p>
<p class="fragment" data-fragment-index="1"><img src="/talks/rust-belt-rust/img/help-wanted.png" alt="help wanted" /> <img src="/talks/rust-belt-rust/img/easy.png" alt="easy" /></p></li>
<li><p>Pick big projects!<!-- .element: class="fragment" data-fragment-index="2" --></p></li>
</ol>
</section>
<section id="when-where-if-youre-experienced" class="level4">
<h4>When & where: if you’re experienced</h4>
<p>On the other hand, if you’re already comfortable contributing and have some idea what you’re best at, you might look around and find smaller projects with fewer contributors which look interesting and <em>could use the help</em>. Because again, there’s always more work to do than hands to do it.</p>
</section>
<section id="when-where-project-lifecycles" class="level4">
<h4>When & where: project lifecycles</h4>
<p>The second consideration dovetails nicely with that: <strong>where is a given project in its life-cycle?</strong> As enthusiastic as you might be about some project, if it’s a small project and it’s already in a “basically done” state, well… that’s probably a lot less useful a place to invest your time <em>if</em> you’re focusing on code. On the other hand, it’s often the case that projects are “done” in terms of code, but desperately need help with documentation, their web site, etc. Big projects, or projects just starting out, are often better places to dig in if you’re really looking to flex your coding muscles (but both of them <em>also</em> usually have huge needs in terms of all those non-code avenues we talked about).</p>
<p>Where is a given project in its life-cycle?</p>
<ul>
<li>small project, basically done?<!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>need docs?<!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>big project, a billion needs?<!-- .element: class="fragment" data-fragment-index="1" --></li>
<li>etc.<!-- .element: class="fragment" data-fragment-index="1" --></li>
</ul>
<p>Think about those, and then see if you can pick a project that’s a good fit for your current skillset and comfort level and jump in!</p>
</section>
</section>
</section>
<section id="conclusion" class="level2">
<h2>Conclusion</h2>
<p>And that’s a good place to wrap things up! I hope you’re feeling like <em>you can do this</em>. Because you can. Open-source a project of your own and see where it goes. Write a blog post. Add some docs. Open a PR. Record a podcast. Make some videos. Start a meet-up. Become a contributor! And remember:</p>
<ul>
<li>Anyone can contribute meaningfully.</li>
<li>People can contribute in a stunning variety of ways.</li>
</ul>
</section>
<section id="more-info" class="level2">
<h2>More info</h2>
<ul>
<li><a href="https://www.rust-lang.org/en-US/contribute.html" class="uri">https://www.rust-lang.org/en-US/contribute.html</a></li>
<li><a href="https://blog.rust-lang.org/2017/09/18-impl-future-for-rust.html" class="uri">https://blog.rust-lang.org/2017/09/18-impl-future-for-rust.html</a></li>
<li><a href="https://internals.rust-lang.org/" class="uri">https://internals.rust-lang.org/</a></li>
<li><code>#rust</code>, <code>#rust-community</code>, <code>#rust-internals</code>, etc. on irc.mozilla.org</li>
</ul>
</section>
Announcing True Myth 1.02017-11-01T08:40:00-04:002017-11-01T08:40:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-11-01:/2017/announcing-true-myth-10.html<p>I’m pleased to announce the release of <a href="https://github.com/chriskrycho/true-myth">True Myth 1.0</a>! True Myth is a library I’ve been working on over the last month or so, for saner programming in JavaScript, with first-class support for TypeScript (and Flow).</p>
<p>True Myth provides standard, type-safe wrappers and helper functions to …</p><p>I’m pleased to announce the release of <a href="https://github.com/chriskrycho/true-myth">True Myth 1.0</a>! True Myth is a library I’ve been working on over the last month or so, for saner programming in JavaScript, with first-class support for TypeScript (and Flow).</p>
<p>True Myth provides standard, type-safe wrappers and helper functions to help you with two <em>extremely</em> common cases in programming:</p>
<ul>
<li>not having a value—which it solves with a <code>Maybe</code> type and associated helper functions and methods</li>
<li>having a <em>result</em> where you need to deal with either success or failure—which it solves with a <code>Result</code> type and associated helper functions and methods</li>
</ul>
<p>You could implement all of these yourself – it’s not hard! – but it’s much easier to just have one extremely well-tested library you can use everywhere to solve this problem once and for all.</p>
<p>Even better to get one of these with no runtime overhead for using it other than the very small cost of some little container objects—which we get by leaning hard on the type systems in TypeScript or Flow!</p>
<p><strong>Aside:</strong> If you’re familiar with <a href="http://folktale.origamitower.com">Folktale</a> or <a href="https://sanctuary.js.org">Sanctuary</a>, this has a lot in common with them—its main differences are:</p>
<ul>
<li>True Myth has a much smaller API surface than they do</li>
<li>True Myth aims to be much more approachable for people who aren’t already super familiar with functional programming concepts and jargon</li>
<li>True Myth does <em>no</em> runtime checking of your types, whereas both those libraries do by default—it relies on TypeScript or Flow instead</li>
</ul>
<p>I really like both of those libraries, though, so you might check them out as well!</p>
<section id="maybe" class="level2">
<h2><code>Maybe</code></h2>
<p>Sometimes you don’t have a value. In JavaScript land, we usually represent that with either <code>null</code> or <code>undefined</code>, and then trying to program defensively in the places we <em>think</em> we might get <code>null</code> or <code>undefined</code> as arguments to our functions. For example, imagine an endpoint which returns a JSON payload shaped like this:</p>
<pre class="json"><code>{
  "hopefullyAString": "Hello!"
}</code></pre>
<p>But sometimes it might come over like this:</p>
<pre class="json"><code>{
  "hopefullyAString": null
}</code></pre>
<p>Or even like this:</p>
<pre class="json"><code>{}</code></pre>
<p>Assume we were doing something simple, like logging the length of whatever string was there or logging a default value if it was absent. In normal JavaScript we’d write something like this:</p>
<pre class="javascript"><code>function logThatValue(thePayload) {
  const length = !!thePayload.hopefullyAString
    ? thePayload.hopefullyAString.length
    : 0;

  console.log(length);
}

fetch(someUrl)
  .then(response => response.json())
  .then(logThatValue);</code></pre>
<p>This isn’t a big deal right here… but—and this <em>is</em> a big deal—we have to remember to do this <em>everywhere</em> we interact with this payload. <code>hopefullyAString</code> can <em>always</em> be <code>undefined</code> or <code>null</code> everywhere we interact with it, anywhere in our program. 😬</p>
<p><code>Maybe</code> is our escape hatch. If, instead of just naively interacting with the payload, we do a <em>very small</em> amount of work up front to normalize the data and use a <code>Maybe</code> instead of passing around <code>null</code> or <code>undefined</code> values, we can operate safely on the data throughout our application. If we have something, we get an instance called <code>Just</code>—as in, “What’s in this field? Just a string” or “Just the string ‘hello’”. If there’s nothing there, we have an instance called <code>Nothing</code>. <code>Just</code> is a wrapper type that holds the actual value in it. <code>Nothing</code> is a wrapper type which has no value in it. But both of them are concrete types and you’ll never get an <code>undefined is not an object</code> error when trying to use them!</p>
<p>Both of them have all the same methods available on them, and the same static functions to work on them. And, importantly, you can do a bunch of neat things with a <code>Maybe</code> instance without checking whether it’s a <code>Nothing</code> or a <code>Just</code>. For example, if you want to double a number if it’s present and do nothing if it isn’t, you can use the <code>Maybe.map</code> function:</p>
<pre class="typescript"><code>import Maybe from 'true-myth/maybe';
const hereIsANumber = Maybe.just(12); // Just(12)
const noNumberHere = Maybe.nothing<number>(); // Nothing
const double = (n: number) => n * 2;
hereIsANumber.map(double); // Just(24)
noNumberHere.map(double); // Nothing</code></pre>
<p>There are <a href="https://true-myth.js.org/modules/_maybe_.html">a <em>lot</em></a> of those helper functions and methods! Just about any way you would need to interact with a <code>Maybe</code> is there.</p>
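<p>If it helps to see the mechanics, here is a drastically simplified, hand-rolled sketch of what helpers like <code>map</code> and <code>mapOr</code> are doing under the hood – this is an illustration of the pattern, <em>not</em> True Myth’s actual implementation or API surface:</p>

```typescript
// A hand-rolled stand-in for the idea behind `Maybe` -- NOT the real
// True Myth implementation, which has a much richer API.
type Maybe<T> = { variant: 'Just'; value: T } | { variant: 'Nothing' };

const just = <T>(value: T): Maybe<T> => ({ variant: 'Just', value });
const nothing = <T>(): Maybe<T> => ({ variant: 'Nothing' });

// `map` applies the function if a value is present and does nothing otherwise.
function map<T, U>(maybe: Maybe<T>, fn: (t: T) => U): Maybe<U> {
  return maybe.variant === 'Just' ? just(fn(maybe.value)) : nothing<U>();
}

// `mapOr` is `map` plus a fallback, collapsing back down to a plain value.
function mapOr<T, U>(maybe: Maybe<T>, fallback: U, fn: (t: T) => U): U {
  return maybe.variant === 'Just' ? fn(maybe.value) : fallback;
}

const double = (n: number) => n * 2;
mapOr(just(12), 0, double); // 24
mapOr(nothing<number>(), 0, double); // 0
```

The real library hangs these on the instances as methods, but the behavior is the same: the “is it there?” check lives in one place instead of being scattered across your codebase.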
<p>So now that we have a little idea what <code>Maybe</code> is for and how to use it, here’s that same example, but rewritten to normalize the payload using a <code>Maybe</code> instance. We’re using TypeScript, so we will get a compiler error if we don’t handle any of these cases right—or if we try to use the value at <code>hopefullyAString</code> directly after we’ve normalized it!</p>
<p>(Note that <code>Maybe.of</code> will construct either a <code>Maybe.Just</code> if the string is present, or <code>Maybe.Nothing</code> if the value supplied to it is <code>null</code> or <code>undefined</code>.)</p>
<pre class="typescript"><code>import Maybe from 'true-myth/maybe';

type Payload = { hopefullyAString?: string };
type NormalizedPayload = { hopefullyAString: Maybe<string> };

function normalize(payload: Payload): NormalizedPayload {
  return {
    hopefullyAString: Maybe.of(payload.hopefullyAString)
  };
}

function logThatValue(payload: NormalizedPayload) {
  const length = payload.hopefullyAString.mapOr(0, s => s.length);
  console.log(length);
}

fetch(someUrl)
  .then(response => response.json())
  .then(normalize)
  .then(logThatValue);</code></pre>
<p>Now, you might be thinking, <em>Sure, but we could get the same effect by just supplying a default value when we deserialize the data.</em> That’s true, you could! Here, for example, you could just normalize it to an empty string. And of course, if just supplying a default value at the API boundary is the right move, you can still do that. <code>Maybe</code> is another tool in your toolbox, not something you’re <em>obligated</em> to use everywhere you can.</p>
<p>However, sometimes there isn’t a single correct default value to use at the API boundary. You might need to handle that missing data in a variety of ways throughout your application. For example, what if you need to treat “no value” distinctly from “there’s a value present, and it’s an empty string”? <em>That’s</em> where <code>Maybe</code> comes in handy.</p>
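<p>For example, using a tiny hand-rolled stand-in for <code>Maybe</code> (again, purely an illustration, <em>not</em> the real library), “no value at all” stays distinct from “an empty string is present”:</p>

```typescript
// Hand-rolled stand-in for `Maybe`, just to illustrate -- NOT True Myth's API.
type Maybe<T> = { variant: 'Just'; value: T } | { variant: 'Nothing' };

// Like `Maybe.of`: `null`/`undefined` become `Nothing`; anything else is `Just`.
const of = <T>(value: T | null | undefined): Maybe<T> =>
  value == null ? { variant: 'Nothing' } : { variant: 'Just', value };

// "Missing" and "present but empty" get handled as genuinely different cases:
function describe(field: Maybe<string>): string {
  switch (field.variant) {
    case 'Nothing':
      return 'field was missing';
    case 'Just':
      return field.value === ''
        ? 'field was an empty string'
        : `field was "${field.value}"`;
  }
}

describe(of<string>(undefined)); // 'field was missing'
describe(of('')); // 'field was an empty string'
```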
</section>
<section id="result" class="level2">
<h2><code>Result</code></h2>
<p>Another common scenario we find ourselves in is dealing with operations which might fail. There are a couple patterns we often use to deal with this: <em>callbacks</em> and <em>exceptions</em>. There are major problems with both, especially around reusability and composability.</p>
<p>The callback pattern (as in e.g. Node) encourages a style where literally every function starts with the exact same code:</p>
<pre class="js"><code>function getMeAValue(err, data) {
  if (err) {
    return handleErr(err);
  }

  // do whatever the *actual* point of the function is
}</code></pre>
<p>There are two major problems with this:</p>
<ol type="1">
<li><p>It’s incredibly repetitive – the very opposite of “Don’t Repeat Yourself”. We wouldn’t do this with <em>anything</em> else in our codebase!</p></li>
<li><p>It puts the error-handling right up front and <em>not in a good way.</em> While we want to have a failure case in mind when designing the behavior of our functions, it’s not usually the <em>point</em> of most functions – things like <code>handleErr</code> in the above example being the exception and not the rule. The actual meat of the function is always after the error handling.</p></li>
</ol>
<p>But if we’re not using some similar kind of callback pattern, we usually resort to exceptions. Exceptions are unpredictable, though: as the caller of a function, you can’t know until runtime whether a given invocation is going to throw. That’s no big deal if it’s a small application and one person wrote all the code, but with even a few thousand lines of code or two developers, it’s very easy to miss that. And then this happens:</p>
<pre class="js"><code>// in one part of the codebase
function getMeAValue(url) {
  if (isMalformed(url)) {
    throw new Error(`The url '${url}' is malformed!`);
  }

  // do something else to load data from the URL
  return data;
}

function render(toRender) {
  // if toRender can't generate valid HTML, throw Error("invalid HTML");
  // if it can, return theRenderedHTML;
}

function setDom(html) {
  /* magic to render into DOM */
}

// somewhere else in the codebase -- throws an exception
const badUrl = 'http:/www.google.com'; // missing a slash
const response = getMeAValue(badUrl); // throws here

// we never get here, but it could throw too
const htmlForPage = render(response);

// so we definitely can't get here safely
setDom(htmlForPage);</code></pre>
<p>Notice: there’s no way for the caller to know that the function will throw. Perhaps you’re very disciplined and write good docstrings for every function – <em>and</em> moreover, perhaps everyone’s editor shows it to them <em>and</em> they pay attention to that briefly-available popover. More likely, though, this exception throws at runtime and probably as a result of user-entered data – and then you’re chasing down the problem through error logs.</p>
<p>More, if you <em>do</em> want to account for the reality that any function anywhere in JavaScript might actually throw, you’re going to write something like this:</p>
<pre class="js"><code>try {
  const badUrl = 'http:/www.google.com'; // missing a slash
  const response = getMeAValue(badUrl); // throws here

  // we never get here, but it could throw too
  const htmlForPage = render(response);

  // so we definitely can't get here safely
  setDom(htmlForPage);
} catch (e) {
  handleErr(e); // ends up here
}</code></pre>
<p>This is like the Node example <em>but even worse</em> for repetition!</p>
<p>And TypeScript and Flow can’t help you here! They don’t have type signatures to say “This throws an exception!” (TypeScript’s <code>never</code> might come to mind, but it might mean lots of things, not just exception-throwing.)</p>
<p>Instead, we can use a <code>Result</code> to get us a container type, much like <code>Maybe</code>, to let us deal with this scenario. A <code>Result</code> is either an <code>Ok</code> wrapping around a value (like <code>Just</code> does) or an <code>Err</code> wrapping around some type defining what went wrong (<em>not</em> like <code>Nothing</code>, which has no contents). Both of them have the same sets of methods on them, and the same static functions which can operate on them.</p>
<pre class="typescript"><code>import Result from 'true-myth/result';

type Payload = { /* details of the payload... */ };

function getMeAValue(url: string): Result<Payload, string> {
  if (isMalformed(url)) {
    return Result.err(`The url '${url}' is malformed`);
  }

  // do something else to load data from the url
  return Result.ok(data);
}

function render(toRender: Payload): Result<HTMLElement, string> {
  // if toRender can't generate valid HTML, return Err("invalid HTML");
  // if it can, return Ok(theRenderedHTML);
}

function setDom(html: HTMLElement) {
  /* magic to render into DOM */
}

// somewhere else in the codebase -- no exception this time!
const badUrl = 'http:/www.google.com'; // missing a slash

// value = Err("The url 'http:/www.google.com' is malformed")
const value = getMeAValue(badUrl);

// htmlForPage = the same error! or, if `value` was Ok, could be a different
// `Err` (because of how `andThen` works).
const htmlForPage = value.andThen(render);

// we can't just invoke `setDom` because it doesn't take a `Result`.
htmlForPage.match({
  Ok: html => setDom(html),
  Err: reason => alert(`Something went seriously wrong here! ${reason}`),
});</code></pre>
<p>When we have a <code>Result</code> instance, we can perform tons of operations on it without checking whether it’s <code>Ok</code> or <code>Err</code>, just as we could with <code>Maybe.Just</code> and <code>Maybe.Nothing</code>, until we <em>need</em> the value. Maybe that’s right away. Maybe we don’t need it until somewhere else deep in our application! Either way, we can deal with it easily enough, and have type safety throughout!</p>
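<p>To make that chaining concrete, here is a small self-contained sketch built on a hand-rolled stand-in for <code>Result</code> (the real library’s API is much richer): an <code>andThen</code>-style pipeline keeps going while operations succeed and short-circuits on the first error, so error handling happens once, at the end.</p>

```typescript
// Hand-rolled stand-in for `Result`, just to illustrate -- NOT True Myth's API.
type Result<T, E> = { variant: 'Ok'; value: T } | { variant: 'Err'; error: E };

const ok = <T>(value: T): Result<T, never> => ({ variant: 'Ok', value });
const err = <E>(error: E): Result<never, E> => ({ variant: 'Err', error });

// `andThen` keeps chaining while things succeed and short-circuits on the
// first `Err`, passing it through unchanged.
function andThen<T, U, E>(r: Result<T, E>, fn: (t: T) => Result<U, E>): Result<U, E> {
  return r.variant === 'Ok' ? fn(r.value) : r;
}

const parse = (s: string): Result<number, string> => {
  const n = Number(s);
  return Number.isNaN(n) ? err(`not a number: ${s}`) : ok(n);
};

const checkPositive = (n: number): Result<number, string> =>
  n > 0 ? ok(n) : err(`not positive: ${n}`);

andThen(parse('42'), checkPositive); // Ok(42)
andThen(parse('oops'), checkPositive); // Err('not a number: oops')
```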
</section>
<section id="conclusion" class="level2">
<h2>Conclusion</h2>
<p>Give it a spin!</p>
<ul>
<li><code>yarn add true-myth</code></li>
<li><code>npm install true-myth</code></li>
<li>You can even just <code>ember install true-myth</code> and use it if you’re using Ember (in which case I encourage you to also use <a href="https://github.com/typed-ember/ember-cli-typescript">ember-cli-typescript</a>)</li>
</ul>
<p>Let me know what you think – if there’s stuff missing, <a href="https://github.com/chriskrycho/true-myth">open issues</a>! And if it’s just not to your taste, again, I encourage you to take a look at <a href="http://folktale.origamitower.com">Folktale</a> and <a href="https://sanctuary.js.org">Sanctuary</a>, which are both excellent and land in very different design spaces in many ways.</p>
</section>
Announcing ember-cli-typescript 1.0.02017-08-08T09:00:00-04:002017-08-08T09:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-08-08:/2017/announcing-ember-cli-typescript-100.htmlA stable foundation for building Ember.js apps with TypeScript, and a roadmap toward a flourishing, TypeScript-friendly ecosystem in the future!
<p>I’m extremely pleased to announce the release of <a href="https://github.com/typed-ember/ember-cli-typescript/releases/tag/v1.0.0">ember-cli-typescript 1.0.0</a>! You can get it the same way you do <em>any</em> Ember addon:</p>
<pre class="sh"><code>$ ember install ember-cli-typescript</code></pre>
<p>For a detailed walkthrough of adding TypeScript to your projects, see:</p>
<ul>
<li><a href="http://v4.chriskrycho.com/2017/typing-your-ember-part-1.html">Typing Your Ember, Part 1: Set your Ember.js project up to use TypeScript.</a></li>
<li><a href="http://v4.chriskrycho.com/2017/typing-your-ember-part-2.html">Typing Your Ember, Part 2: Adding TypeScript to an existing Ember.js project.</a></li>
</ul>
<p>So what are we shipping today, and what’s on the roadmap?</p>
<section id="whats-in-1.0" class="level2">
<h2>What’s In 1.0?</h2>
<p>This release is intentionally relatively minimal: the goal here is to provide a stable foundation for building Ember.js applications with TypeScript in the toolchain. This means that in any app you can install the add-on and just start <a href="http://v4.chriskrycho.com/2017/typing-your-ember-part-3.html">progressively converting your app over to TypeScript</a>. However, we don’t expect to change the way you <em>use</em> the addon at all in the foreseeable future.</p>
<p>I’ll give you fair warning that there is one <em>major</em> challenge you will find as you work with ember-cli-typescript today: the lack of type definitions for most projects, and the limits of the existing type definitions for Ember.js itself. That’s not as bad as it sounds, though:</p>
<ol type="1">
<li>See the <a href="#the-roadmap"><strong>Roadmap</strong></a> below—we’re working on that, and you can help!</li>
<li>I’ve been using TypeScript successfully in the app I work on at my day job for the last nine months or so. While the lack of (good or any) typings has had its frustrations, <a href="https://www.dailydrip.com/blog/domain-driven-design-and-typed-functional-programming-in-typescript">TypeScript has already added a <em>lot</em> of value for us</a>.</li>
</ol>
</section>
<section id="the-roadmap" class="level2">
<h2>The Roadmap</h2>
<p>We have a bunch of things we’re actively working on and which you can expect to land in the next few weeks to months.</p>
<ul>
<li><a href="#1-1-a-prepublish-build-process-for-addons">1.1: A prepublish build process for addons</a></li>
<li><a href="#community-driven-work-on-typings">Community-driven work on typings</a></li>
</ul>
<section id="a-prepublish-build-process-for-addons" class="level3">
<h3>1.1: A prepublish build process for addons</h3>
<p>The major priority for the 1.1 release is an npm prepublication step to generate JavaScript and typing files from add-ons which are using TypeScript. Currently, addons have to take TypeScript as a full dependency, not a dev dependency, because they just ship the <code>.ts</code> files up to npm, which then have to be compiled in your app at build time.</p>
<p>We really don’t want to make any app developer who is using your addon download either the TypeScript files or <em>especially</em> the TypeScript compiler if we can avoid it. There are three reasons for this:</p>
<ol type="1">
<li><p>The fact that an add-on is developed in TypeScript really shouldn’t affect app developers. If they’re writing a plain-old JavaScript app, the fact that your addon is originally written in TypeScript is irrelevant to them.</p></li>
<li><p>TypeScript is <em>large</em>. The v2.4 installation I have in the app I’m working on right now weighs 26MB. If I were using four add-ons which required TypeScript, my install cost could easily go up by a hundred megabytes. That’s not always a huge deal on a corporate network, but even where people <em>do</em> have good download speeds, it’s a hit to developer time. Every time someone has to reinstall all the dependencies, those 26MB have to come down again. If TypeScript becomes common, you might suddenly find yourself with addons using 2.4, 2.5, 2.6, etc.; it’s not hard to see that ballooning up the size of your installation in a really non-trivial way: 26MB × <em>n</em> versions of TypeScript = <em>do not want</em>.</p></li>
<li><p>The TypeScript compilation step takes time. Addons can do this <em>once</em> and save every consuming app build time. This isn’t the end of the world, but anything we can do to keep build times lower is a real win for developer productivity.</p></li>
</ol>
<p>Accordingly, the plan is to automatically add a build step which runs the TypeScript compiler on your addon and generates plain-old-JavaScript and the corresponding type definition files (<code>.d.ts</code>) prior to publishing to npm. That way, TypeScript can remain a dev dependency (rather than a full dependency) of each addon, and not be installed alongside the addon for consumers. Just-JavaScript consumers can just consume the normal JavaScript generated by the build. TypeScript consumers will get the full benefits of the types via the generated typing files.</p>
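<p>To illustrate the idea, here’s a hypothetical addon module (the name and function are invented for this example) alongside the kind of declaration file the compiler can emit for it, so that consumers get the types without ever running <code>tsc</code> themselves:</p>
<pre class="typescript"><code>// Hypothetical addon source, written in TypeScript:
export function greet(name: string): string {
  return `Hello, ${name}!`;
}

// Running tsc with --declaration would publish compiled JavaScript
// plus a greet.d.ts alongside it, roughly:
//
//   export declare function greet(name: string): string;
//
// so TypeScript consumers keep full type checking, and JavaScript
// consumers just get plain JavaScript.</code></pre>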
<p>This <em>should</em> hopefully land by late August or early September. Fingers crossed.</p>
</section>
<section id="community-driven-work-on-typings" class="level3">
<h3>Community-driven work on typings</h3>
<p>The process of getting type definitions in place for <em>all</em> of Ember.js and its ecosystem is way, <em>way</em> too big for any one person or even a small handful of people to manage alone. This is something we’re going to take on as a community.</p>
<section id="new-typings-for-ember.js-itself" class="level4">
<h4>New typings for Ember.js itself</h4>
<p>We’re actively working on type definitions for Ember which will give us actually-useful-and-correct type checking for Ember’s custom object model. Today, if you use <code>Ember.get</code> or <code>Ember.set</code>, you get <em>no</em> help from the type system. When we finish, those will be type-checked by the compiler and will error if you try to assign the wrong values!</p>
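<p>To make the goal concrete, here’s a sketch of the underlying technique: using <code>keyof</code> to tie a string key to the object’s actual type. This is an illustration of the approach, not the actual typings we’re shipping—Ember’s real <code>get</code> and <code>set</code> have to handle much more.</p>
<pre class="typescript"><code>// Illustrative only: keyof lets the compiler check string keys.
function get<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

function set<T, K extends keyof T>(obj: T, key: K, value: T[K]): T[K] {
  obj[key] = value;
  return value;
}

const point = { x: 1, y: 2 };
const x = get(point, 'x');     // inferred as number
set(point, 'y', 3);            // fine: number is assignable
// set(point, 'y', 'three');   // compile-time error: wrong value type
// get(point, 'z');            // compile-time error: no such key</code></pre>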
<p>Importantly, the typings we’re shipping will be backwards compatible with the existing Ember API, but will also include support for the <a href="https://github.com/emberjs/rfcs/pull/176">RFC #176 JavaScript Modules API</a>. TypeScript’s module definition system will let us support both in parallel, and we will. Backwards compatibility and <em>stability without stagnation</em> are things we value for this addon just as much as the rest of the Ember.js ecosystem does.</p>
<p>This effort, led by Derek Wickern (<a href="https://github.com/dwickern">@dwickern</a>), is ongoing in the <a href="https://github.com/typed-ember/ember-typings">typed-ember/ember-typings</a> repository. (If you’re wondering why we’re not just doing it in the DefinitelyTyped repository, see below.) We probably won’t be able to get to 100% of everything the Ember Object model does—Ember’s custom object model is <em>incredibly</em> sophisticated, and TypeScript actually <a href="https://github.com/Microsoft/TypeScript/issues/16699">still can’t</a> <em>totally</em> express it—but Derek already has most of it working. This will be a <em>huge</em> step forward.</p>
<p>To be clear, we’re not forking the way you get types. We’ll upstream all of this work to DefinitelyTyped as soon as we have them working, but the DefinitelyTyped repo is <em>huge</em> and very busy; it’s not a great place to do this kind of substantial rework of existing types. And we really don’t need to have all the <em>other</em> type definitions DefinitelyTyped supplies in our way as we’re working, either. Having a separate repo gives us a place we can work on types, try them out as a community, etc. before creating PRs on DefinitelyTyped and publishing them officially.</p>
</section>
<section id="addon-typings" class="level4">
<h4>Addon typings</h4>
<p>We need to get type definitions in place for the addons in the ecosystem! That way when you’re using, say, <a href="https://github.com/simplabs/ember-test-selectors">ember-test-selectors</a>, you’ll get an error if you try to use the functions it provides incorrectly. Right now, every addon out there is missing types entirely, so everything gets treated as taking the useless <code>any</code> type.</p>
<p>In a week or so, I’ll have a blog post with a fleshed-out <a href="https://github.com/typed-ember/ember-cli-typescript/issues/48">quest issue</a> for tackling it in detail, but here’s the short version: we’re going to try to get type definitions for all the top addons in the ecosystem so that it’s <em>easy</em> to use TypeScript in your Ember.js app. That blog post and quest issue will explain how to write good typings, and also how to contribute them to a project which may or may not be interested in using TypeScript itself.</p>
</section>
</section>
</section>
Typing Your Ember, Part 42017-07-31T19:30:00-04:002017-07-31T19:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-07-31:/2017/typing-your-ember-part-4.htmlIn the last post, I mentioned putting your business logic outside Ember's tools and treating it as plain-old TypeScript. Here's what that might look like.
<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>In the <a href="/2017/typing-your-ember-part-3">previous post</a> in this series, I noted that one of the most effective current strategies for using TypeScript effectively in an Ember app is to push as much of your logic as possible <em>out</em> of the Ember layer and into plain-old-TypeScript. Unsurprisingly, people had some questions about how to do this, so here’s a brief example.</p>
<p>As I suggested in that post, we now have a <code>lib</code> directory in our app, and all new business logic for the app lives there instead of directly on e.g. an <code>Ember.Service</code> instance. Our current directory structure looks like this:</p>
<pre><code>app/
  adapters/
  components/
  config/
  controllers/
  helpers/
  initializers/
  instance-initializers/
  lib/                <-- this is the one we care about
    billing/
    utilities/
      numeric.ts
  routes/
  serializers/
  services/
  templates/
  transforms/
  app.ts
  router.ts
tests/
package.json
bower.json
// etc.</code></pre>
<p>The main thing to notice here is that <code>lib</code> is just a directory in the app like any other, and its child directories likewise. This means that Ember <abbr title="command line interface">CLI</abbr> will resolve it just like normal, too—there’s no need to mess with the resolver or anything.</p>
<p>Say we had a set of numeric utilities in that <code>numeric.ts</code> file like this:</p>
<pre><code>// Make text out of numbers, like "1st", "2nd", "3rd", etc.
export const withEnding = (val: number): string => {
  // boring implementation details elided
};</code></pre>
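<p>If you’re curious, those boring details might look something like this: one possible implementation of an ordinal-suffix helper, where the only interesting bit is the special-casing of 11 through 13.</p>
<pre class="typescript"><code>// One possible implementation, for illustration only.
export const withEnding = (val: number): string => {
  const rem100 = Math.abs(val) % 100;
  if (rem100 >= 11 && rem100 <= 13) {
    return `${val}th`; // 11th, 12th, 13th, 111th, etc.
  }

  switch (Math.abs(val) % 10) {
    case 1: return `${val}st`;
    case 2: return `${val}nd`;
    case 3: return `${val}rd`;
    default: return `${val}th`;
  }
};

withEnding(1);   // '1st'
withEnding(12);  // '12th'
withEnding(23);  // '23rd'</code></pre>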
<p>Then using it in an Ember component might look like this (where <code>currentNumber</code> is passed into the component):</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { get, set } from '@ember/object';

import * as Num from '../lib/utilities/numeric';

export default Component.extend({
  init() {
    this._super(...arguments);
    const currentNumber = get(this, 'currentNumber');
    const displayNumber = Num.withEnding(currentNumber);
    set(this, 'displayNumber', displayNumber);
  },
});</code></pre>
<p>You might wonder why we’d do this instead of using an <code>Ember.Service</code>. In the above example, I could of course make <code>Num</code> a service and inject it…</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { getProperties, set } from '@ember/object';
import { inject } from '@ember/service';

export default Component.extend({
  num: inject(),

  init() {
    this._super(...arguments);
    const { currentNumber, num } =
      getProperties(this, 'currentNumber', 'num');
    const displayNumber = num.withEnding(currentNumber);
    set(this, 'displayNumber', displayNumber);
  },
});</code></pre>
<p>…but that doesn’t actually <em>gain</em> me anything—the service here is just a way of exposing a function, after all—and it actually makes everything a bit more verbose. It also decreases the overall analyzability of this for things like tree-shaking: that module dependency is now something that Ember itself has to manage, instead of being statically analyzable at build time. Taking this approach also diminishes the reusability of any numeric helpers I put in there. If we couple them to an <code>Ember.Service</code>, instead of using an ES6 module, they would stop being things we can easily reuse in non-Ember projects. Instead, by using modules, we leave ourselves the ability to easily extract those numeric helpers, and publish them for either internal or external consumption.</p>
<p>Along those lines, we actually have a module to support <abbr title="Block-Element-Modifier"><a href="https://en.bem.info/methodology/quick-start/">BEM</a></abbr> with Ember Components—and we plan to extract both the basic TypeScript library as well as a <code>BemComponent</code> Ember-specific wrapper as open-source libraries in the near future. Besides the Ember addon, <em>anyone</em> will be able to consume and use the underlying TypeScript library, whatever their framework or library of choice. Importantly, that includes us in our other codebases, which include lots of old jQuery and some new React, and might include some Glimmer.js in the future. Any or all of our utilities for these kinds of things become reusable if they’re just TypeScript.</p>
<p>Pragmatically, it’s also just easier to get good help from TypeScript this way. It also means that unit-testing requires <em>no</em> context from Ember whatsoever, which keeps those tests lighter and faster. Even though Ember’s unit tests are already super quick, when you have hundreds or thousands of unit tests, every little bit matters. It also, and probably even more importantly, means there are fewer places where you could mess things up when configuring tests—not that I have any experience messing up test configurations in Ember!</p>
<p>One important thing to note is that this all works best with Ember—by far—when your <code>lib</code> modules aren’t managing stateful objects, but rather defining data structures and functions which just transform those structures in some way. This approach is a great fit for us, because we’re increasingly writing a lot of our business and even <abbr title="user interface">UI</abbr> logic in terms of <a href="http://v4.chriskrycho.com/2016/what-is-functional-programming.html#pure-functions">pure functions</a> which transform simple “record” types. That keeps each controller, route, component, or service doing relatively little work: they are responsible for getting and passing around data in the application, and for triggering actions—but they’re not responsible for <em>understanding</em> or <em>manipulating</em> that data. Meanwhile the module code doesn’t do <em>any</em> stateful work; there’s no mutation—just boring, input-to-output functions.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> By contrast, if you’re dealing with stateful objects, you’re apt to end up running into places where you have lifecycle concerns, and that’s where Ember excels.</p>
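<p>Here’s a small sketch of that style. The domain names are made up for illustration; the point is the shape: immutable record types, plus pure functions which take a record in and hand a new one back.</p>
<pre class="typescript"><code>// Illustrative only: plain record types and pure transformations.
interface LineItem {
  readonly description: string;
  readonly cents: number;
}

interface Invoice {
  readonly items: ReadonlyArray<LineItem>;
}

// No mutation anywhere: each function is boring input-to-output.
const total = (invoice: Invoice): number =>
  invoice.items.reduce((sum, item) => sum + item.cents, 0);

const addItem = (invoice: Invoice, item: LineItem): Invoice => ({
  items: [...invoice.items, item],
});</code></pre>
<p>An Ember component, route, or service can call <code>total</code> and <code>addItem</code> to do its work, while the functions themselves remain trivially unit-testable with no Ember context at all.</p>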
<p><strong>In summary:</strong> in this model, Ember handles all the lifecycle and view management, and is responsible for sending data in and out of the application. Plain old modules handle defining what the core internal data types are, and for manipulating, transforming, and creating data.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If you’re wondering: we’re not using anything like Redux or Immutable.js yet, but both <a href="https://github.com/ember-redux/ember-redux">ember-redux</a> and <a href="https://github.com/rtfeldman/seamless-immutable">seamless-immutable</a> would be great fits for the way we’re building the app at this point, and it’s likely at least <a href="https://github.com/ember-redux/ember-redux">ember-redux</a> will become part of our stack in the relatively near future.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Typing Your Ember, Part 32017-07-28T12:00:00-04:002017-07-28T12:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-07-28:/2017/typing-your-ember-part-3.html<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything …</i></p><p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>In the <a href="/2017/typing-your-ember-part-1">first</a> of this series, I described how to set up a brand new Ember.js app to use TypeScript. In the <a href="/2017/typing-your-ember-part-2">second</a> part, I walked through adding TypeScript to an existing Ember.js app. In this part, I’m going to talk about using TypeScript effectively in a modern Ember.js app.</p>
<section id="heavy-lifting-so-so-results" class="level2">
<h2>Heavy lifting, so-so results</h2>
<p>Let’s get this out of the way up front: right now, using types in anything which extends <code>Ember.Object</code> is going to be a lot of work for a relatively low reward. <code>Ember.Object</code> laid the foundation for the modern JavaScript class system (and thus the TypeScript class system), but it has a huge downside: it’s string keys and references all the way down. This kind of thing is just normal Ember code—and note all the string keys:<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<pre class="javascript"><code>import Component from '@ember/component';
import { computed, get } from '@ember/object';

export default Component.extend({
  someProperty: 'with a string value',

  someOther: computed('someProperty', function() {
    const someProperty = get(this, 'someProperty');
    return someProperty + ' that you can append to';
  }),
});</code></pre>
<p>What this comes out to—even with a lot of the very helpful changes made to TypeScript itself in the 2.x series to help support object models like this one—is a lot of work adding types inline, and having to be really, really careful that your types are <em>correct</em>. If that property you’re <code>Ember.get</code>-ing can ever be <code>undefined</code> or <code>null</code>, you’d better write the type as <code>string | void</code> instead of just <code>string</code>. For example: this code is written with the correct types:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed, get } from '@ember/object';

export default Component.extend({
  someProperty: 'with a string value', // no type annotation

  someOther: computed('someProperty', function() {
    const someProperty: string = get(this, 'someProperty');
    return someProperty + ' that you can append to';
  }),
});</code></pre>
<p>Note two important things about it, however:</p>
<ol type="1">
<li>TypeScript does not (and, with the <em>current</em> typings for Ember, cannot) figure out the type of <code>someProperty</code> from this definition; <code>get</code> currently just hands back <code>any</code> as the type of these kinds of things. That type annotation is necessary for you to get any mileage out of TypeScript <em>at all</em> in a computed property like this.</li>
<li>If, anywhere in your code, you <em>set</em> the value of <code>someProperty</code>—including to <code>undefined</code> or <code>null</code>, or to <code>{ some: 'object' }</code>—this could fail.</li>
</ol>
<p>Unfortunately, this second point means that TypeScript actually <em>can’t</em> guarantee this the way we’d like. There’s hope coming for this in the future in several ways—more on that in a moment—but for now, I’ll summarize this by saying TypeScript is really helpful <em>within</em> a function, once you’ve correctly defined the types you’re using. That means that you have to continue to be <em>very</em> careful in what you’re doing in the context of any <code>Ember.Object</code> instance, including all the Ember types which descend from <code>Object</code>, and therefore also any types <em>you</em> define which extend those in turn.</p>
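<p>As a quick illustration of that within-a-function help (the function here is invented for the example): once you declare a value honestly as possibly absent, TypeScript makes you handle both cases before it will let you use it as a <code>string</code>.</p>
<pre class="typescript"><code>// Illustrative only: union types force handling the absent case.
const shout = (value: string | undefined): string => {
  if (value === undefined) {
    return '(nothing)';
  }

  // Narrowed to `string` here, so string operations are safe.
  return value.toUpperCase() + '!';
};

shout('hi');       // 'HI!'
shout(undefined);  // '(nothing)'</code></pre>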
</section>
<section id="future-niceties" class="level2">
<h2>Future niceties</h2>
<p>In the future, we’ll be able to get away from a lot of these difficulties by way of two changes coming down the line: Ember embracing ES6 classes to replace its current custom object system, and embracing decorators as a way of replacing the current approach to computed properties. Let’s take those in turn.</p>
<section id="class-syntax" class="level3">
<h3><code>class</code> syntax</h3>
<p>When Ember was birthed in the early 2010s (first as “SproutCore 2” and then “Amber.js” and finally “Ember.js”), the JavaScript world was a <em>remarkably</em> different place. The current pace of change year to year is nothing short of astounding for any language, but doubly so for one that sat languishing for so long. When Ember came around, something like today’s <code>class</code> syntax was unimaginable, and so essentially every framework had its own class system of some sort. Over the past few years, with the proposal and standardization of the <code>class</code> syntax as nice sugar for JavaScript’s prototypal inheritance, the need for a custom object and inheritance model has essentially gone away entirely. However, Ember doesn’t do breaking changes to its API just because; we as a community and the core team in particular have chosen to place a high priority on backwards compatibility. So any adoption of ES6 classes had to work in such a way that we got it <em>without</em> making everyone rewrite their code from scratch.</p>
<p>All of this impacts our story with TypeScript because, well, TypeScript for a long time couldn’t even begin to handle this kind of complexity (it’s a lot for a static type system to be able to express, given how <em>very</em> dynamic the types here can be). As of TS 2.3, it can express <em>most</em> of this object model, which is great… but it’s forever out of step with the rest of the JS/TS ecosystem, which is not so great. ES6 classes are first-class items in TypeScript and the support for getting types right within them is much, <em>much</em> stronger than the support for the mixin/extension style object model Ember currently uses. So moving over to ES6 classes will make it much easier for TS to do the work of telling you <em>you’re doing it wrong with that class</em>—and most importantly, it’ll be able to do that automatically, without needing the incredibly hairy type definition files that we’re still trying to write to get Ember’s current model represented. It Will Just Work. That means less maintenance work and fewer places for bugs to creep in.</p>
<p>Gladly, we’re getting there! Already today, in the most recent versions of Ember, you can write this, and it will work:</p>
<pre class="typescript"><code>import Component from '@ember/component';

export default class MyComponent extends Component {
  theAnswer = 42;
  andTheQuestionIs =
    "What is the meaning of life, the universe, and everything?";
}</code></pre>
<p>When I say “it will work,” I mean you can then turn around and write this in your <code>my-component.hbs</code> and it’ll be exactly what you would expect from the old <code>Ember.Component.extend()</code> approach:</p>
<pre class="hbs"><code>{{andTheQuestionIs}} {{theAnswer}}</code></pre>
<p>There is one serious limitation of that today: you can’t do that with a class you need to extend <em>further</em>. So if, for example, you do like we do and customize the application route instance and then reuse that in a couple places, you’ll still have to use the old syntax:</p>
<pre class="typescript"><code>import Route from '@ember/routing/route';

export default Route.extend({
  // your customizations...
});</code></pre>
<p>But everywhere you consume that, you can use the new declaration:</p>
<pre class="typescript"><code>import ApplicationRoute from 'my-app/routes/application';

export default class JustSomeRoute extends ApplicationRoute {
  model() {
    // etc.
  }
}</code></pre>
<p>There’s more work afoot here, too, to make it so that these restrictions can go away entirely… but those changes will undoubtedly be covered in considerable detail on <a href="http://www.emberjs.com/blog/">the official Ember blog</a> when they roll out.</p>
</section>
<section id="decorators" class="level3">
<h3>Decorators</h3>
<p>Now, that’s all well and good, but it doesn’t necessarily help with this scenario:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed, get } from '@ember/object';

export default class MyComponent extends Component {
  someProperty = 'is just a string';

  someOtherProperty = computed('someProperty', function() {
    const someProperty = get(this, 'someProperty');
    return someProperty + ' and now I have appended to it';
  });
}</code></pre>
<p>We’re back in the same spot of having unreliable types there. And again: some really careful work writing type definitions to make sure that <code>computed</code> and <code>get</code> both play nicely together with the class definition would help somewhat, but… well, it’d be nice if the types could just be determined automatically by TypeScript. (Also, there’s an <a href="https://github.com/Microsoft/TypeScript/issues/16699">open bug</a> on the TypeScript repository for trying to deal with <code>computed</code>; suffice it to say that computed as it currently stands is a sufficiently complicated thing that even with all the incredible type machinery TS 2.1, 2.2, and 2.3 have brought to bear on exactly these kinds of problems… it still can’t actually model <code>computed</code> correctly.)</p>
<p>For several years now, Rob Jackson has maintained a small library that lets you write computed properties with decorators. Until recently, those decorators were incompatible with TypeScript, because they worked in the context of object literals rather than classes—and TypeScript never supported that. However, as of about a month ago as I’m writing this, they’ve been updated and they <em>do</em> work with ES6 classes. So, given the class syntax discussed above, you can now <code>ember install ember-decorators</code> and then do this:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed } from 'ember-decorators/object';

export default class MyComponent extends Component {
  someProperty = 'with a string value';

  @computed('someProperty')
  someOther(someProperty: string) {
    return someProperty + ' that you can append to';
  }
}</code></pre>
<p>Here, we can provide a type on the parameter to <code>someOther</code>, which at a minimum makes this enormously cleaner and less repetitive syntactically. More interestingly, however, we <em>should</em> (though no one has done it just yet, to my knowledge) be able to write a type definition for <code>@computed</code> such that TypeScript will already know that <code>someProperty</code> here <em>is</em> a string, because it’ll have the context of the class in which it’s operating. So that example will be even simpler:</p>
<pre class="typescript"><code>import Component from '@ember/component';
import { computed } from 'ember-decorators/object';

export default class MyComponent extends Component {
  someProperty = 'with a string value';

  @computed('someProperty')
  someOther(someProperty) {
    return someProperty + ' that you can append to';
  }
}</code></pre>
<p>And in that imagined, wonderful future world, if we tried to do something that isn’t a valid string operation—say, we tried <code>someProperty / 3</code>—TypeScript would complain to us, loudly.</p>
<p>Although this is still a future plan, rather than a present reality, it’s not <em>that</em> far off. We just need someone to write that type definition for the decorators, and we’ll be off to the races wherever we’re using the new ES6 class approach instead of the existing <code>Ember.Object</code> approach. So: <em>soon</em>. I don’t know how soon, but soon.</p>
</section>
</section>
<section id="current-ameliorations" class="level2">
<h2>Current ameliorations</h2>
<p>In the meantime, of course, many of us are maintaining large codebases. I just checked, and our app (between the app itself and the tests) has around 850 files and 34,000 lines of code. Even as those new abilities land, we’re not going to be converting all of them all at once. And we want to get some real mileage out of TypeScript in the meantime. One of the best ways I’ve found to do this is to take a step back and think about the pieces of the puzzle which Ember is solving for you, and which it <em>isn’t</em>. That is, Ember is really concerned with managing application state and lifecycle, and with rendering the UI. And it’s <em>fabulous</em> about those things. What it’s not particularly concerned with (and what it shouldn’t be) is the particulars of how your business logic is implemented. And there’s no particular reason, <em>especially</em> if most of that business logic is implemented in terms of a bunch of pure, straightforward, input-to-output functions that operate on well-defined data types, for all of your business logic to live in <code>Ember.Object</code>-descended classes.</p>
<p>Instead, we have increasingly chosen to write our business logic in bog-standard TypeScript files. These days, our app has a <code>lib</code> directory in it, with packages like <code>utilities</code> for commonly used tools… but also like <code>billing</code>, where we implement <em>all</em> of our client-side billing business logic. The display logic goes in the <code>Ember.Controller</code> and <code>Ember.Component</code> classes, and the routing and state management goes in the <code>Ember.Route</code> and <code>Ember.Data</code> pieces as you’d expect. But none of the business logic lives there. That means that we’re entirely free of the aforementioned constraints for the majority of the time dealing with that data. If we do a good job making sure the data is good at the boundaries—route loads, for example, and when we send it back to the server—then we can effectively treat everything else as just boring old (new?) TypeScript.</p>
<p>So far we’ve only taken that approach with about a quarter of our app, but it’s all the latest pieces of our app, and it has been incredibly effective. Even once we’re able to take advantage of all those shiny new features, we’re going to keep leaning heavily on this approach, because it lets Ember do what Ember is best at, and keeps us from coupling our business logic to the application state management or view rendering details.</p>
</section>
<section id="conclusion" class="level2">
<h2>Conclusion</h2>
<p>So that’s the state of things in Ember with TypeScript today. Your best bet for getting real mileage out of TypeScript today is to use the new class syntax support and decorators wherever you can within Ember-specific code, and then to write as much of your business logic outside the Ember system as possible. Gladly, all of that points you right at the future (in the case of syntax) and just good practice (in the case of separating out your business logic). So: not too shabby overall. It’s working well for us, and I hope it does for you as well!</p>
<p>Next time: how we got here with the <code>ember-cli-typescript</code> compiler, and where we hope to go from here!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Note that here and throughout, I’m using the <a href="https://github.com/emberjs/rfcs/blob/master/text/0176-javascript-module-api.md#addendum-1---table-of-module-names-and-exports-by-global">RFC #176 Module API</a>, which you can use today via <a href="https://github.com/ember-cli/babel-plugin-ember-modules-api-polyfill">this polyfill</a>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
The Book of F♯2017-07-21T19:30:00-04:002017-07-21T19:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-07-21:/2017/the-book-of-f.htmlRecommended With Qualifications: This book is just okay, and at this point it’s a bit outdated—but if you're in its fairly narrow target audience, it’s a decent way to get up to speed on F♯.
<p><i class=editorial>I keep my book review ratings simple—they’re either <em>required</em>, <em>recommended</em>, <em>recommended with qualifications</em>, or <em>not recommended</em>. If you want the TL;DR, this is it:</i></p>
<p><strong>Recommended With Qualifications:</strong> This book is just okay, and at this point it’s a bit outdated—but if you’re in its fairly narrow target audience, it’s a decent way to get up to speed on F♯.</p>
<hr />
<p><em>The Book of F♯: Breaking Free With Managed Functional Programming</em> is a No Starch Press publication by Dave Fancher, published in 2014. I read it over the course of the last four or so months, just plugging away in my spare cycles. A couple qualifications on the short list of observations that follow:</p>
<ol type="1">
<li><p>I don’t have any experience whatsoever writing production F♯ (though I have <em>read</em> a fair bit of it). I am interested because it’s a functional programming language on the .NET stack—which isn’t my own personal favorite stack, but <em>is</em> the stack at Olo. If we’re going to ship functional code on the server, it’ll be in F♯.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p></li>
<li><p>I am also not a C♯ developer. As such, I’m <em>explicitly</em> not the audience of this book. As Fancher put it in the intro:</p>
<blockquote>
<p>I wrote this book for people like me: experienced .NET developers looking to break into functional programming while retaining the safety net of the tools and libraries they’re already using.</p>
</blockquote>
<p>The net of that is that a lot of what frustrated me about the book is just a result of my not being the target audience.</p></li>
</ol>
<p>Those qualifications aside, some assorted thoughts on the book:</p>
<p>First, as the intro and my above qualification suggest: this is <em>really</em> not interesting or useful as a general introduction to F♯. Throughout, it assumes a very high baseline of C♯ knowledge. In fact, the majority of the discussion of F♯, even in the section of the book which turns away from object oriented programming toward functional programming, focuses on comparing F♯ to C♯. This makes sense for the target audience, but this is <em>not</em> the book for you if you’re not a C♯ developer.</p>
<p>That said, if you <em>are</em> a C♯ developer, this could be a useful resource as you’re spinning up. It also might be a useful book to work through with a group of C♯ developers who want to learn F♯. The comparisons <em>do</em> generally work in F♯’s favor, even when doing exactly what you would be doing in the C♯, which makes it an easier “sell” in that regard.</p>
<p>Along the same lines, the book is structured as a <em>very gradual</em> introduction to functional programming ideas. Roughly the first half of the book emphasizes F♯’s object-oriented programming abilities, and only in the second half does Fancher turn to a functional style. Again, this is probably the right move given the audience, but it means the book spends a <em>lot</em> of time on kinds of F♯ you won’t actually be writing very often once you’re going. Idiomatic F♯ isn’t object-oriented. But as a way of helping someone make the transition, it’s not a bad plan: object-oriented F♯ is briefer and nicer in many ways than the exact same code in C♯. It meant that the first half of the book was completely uninteresting to <em>me</em>, though: I don’t want to write a line of object-oriented F♯.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>All of this had a pretty serious downside even for existing C♯ developers, though: the book often ends up seeming like it’s sort of apologizing for or defending F♯ against an expected audience of people asking “What’s wrong with C♯?” And even though there’s a real sense in which that’s true—that <em>is</em> what a lot of the audience is asking, no doubt—it became quite annoying rhetorically.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> It’s also unnecessary: if someone is picking up a book on F♯, you can assume that they’re already at least a little interested in the language and what it might offer! Along those lines, I much prefer the tack taken in what I’ve seen of Scott Wlaschin’s upcoming <em>Domain Modeling Made Functional: Tackle Software Complexity with Domain-Driven Design and F♯</em> (The Pragmatic Bookshelf, expected in fall 2017)—which shows not merely how to do the same things as in C♯ more briefly, but how to solve the same problems much more effectively.</p>
<p>Those problems aside, the book was… <em>fine</em>. I wouldn’t call it scintillating reading, but this kind of technical writing, especially at this length, is really hard work. Credit to Fancher for managing an introduction to an entire programming language in a relatively approachable way, and credit to him and his editors for making sure it remains lucid throughout. Still: I’d love to see the bar for programming books be higher. We need more books which are genuinely engaging in the world of programming language texts. These things are <em>interesting</em>; we don’t have to make them dry and dull! (And if you want a pretty good example of that: everything I’ve read of Edwin Brady’s <em>Type-Driven Development with Idris</em> hits the mark.)</p>
<hr />
<p>A few other observations about the language itself from reading the book.</p>
<p><strong>First,</strong> reading this highlighted a lot of strange things about F♯, all of which ultimately come down to the ways F♯’s development has been driven by concerns for interoperability with C♯. Worse, there are a lot of places where the influence of C♯ casts this shadow <em>entirely unnecessarily</em>. One particular expression of this which drove me crazy: F♯ far too often uses exceptions instead of <code>Option</code>s. It’s <a href="http://v4.chriskrycho.com/2017/better-off-using-exceptions.html">one thing</a> to make sure the language gracefully handles exceptions: you <em>will</em> have them coming from outside contexts. It is another entirely to design core parts of the language to throw exceptions where it doesn’t have to.</p>
<p>Perhaps the most prominent example is the <code>List.head</code> function. Its type signature is <code>'T list -> 'T</code>, where I would expect it to be <code>'T list -> 'T option</code>. If you call <code>List.head</code> on an empty list, you get an exception. It would make far more sense for it to return an <code>Option</code> and just give you <code>None</code> if there’s no item. Then you’re not worried about <code>try</code> expressions and the type system will actually help you! This is one of the most valuable parts of having a type system like F♯’s! I really don’t understand a lot of these decisions, not least since this isn’t for interop with C♯ collections—these are for native F♯ collections.</p>
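<p>To make the contrast concrete, here’s a tiny F♯ sketch of my own (not an example from the book). It’s worth noting that newer versions of the core library <em>do</em> ship an option-returning variant, <code>List.tryHead</code>, which behaves the way I’d want the default to:</p>

```fsharp
let empty : int list = []

// List.head : 'T list -> 'T
// On an empty list this throws System.ArgumentException at runtime:
// let first = List.head empty

// List.tryHead : 'T list -> 'T option (added in F# 4.0) makes the
// failure case part of the type, so the compiler makes you handle it:
match List.tryHead empty with
| Some x -> printfn "first element: %d" x
| None -> printfn "no first element, and no exception"
```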
<p><strong>Second,</strong> the use of things like computation expressions instead of type machinery has an interesting effect: it makes it simpler to read when you first encounter it, but harder to compose, build, etc.—and it’s more syntax to remember. Computation expressions just end up being a way to do “monadic” transformations, from what I can tell. But as I noted often in my discussion of <a href="http://v4.chriskrycho.com/rust-and-swift.html">Rust and Swift</a>, I profoundly prefer approaches that build on the same existing machinery—even in the surface syntax of the language—rather than constantly building new machinery. It makes it easier to deeply internalize new concepts and to <em>understand</em> the language (rather than just being able to <em>use</em> it). It also seems (from my admittedly limited vantage point) that computation expressions are, as a result, much less <em>composable</em> than actual type machinery of the sort available in other languages (Haskell, Idris, etc.).</p>
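<p>For a sense of what “monadic” means here, a computation expression builder can be sketched in a few lines; the names below are mine, not the book’s:</p>

```fsharp
// A minimal 'maybe' computation expression: Bind short-circuits on None.
type MaybeBuilder() =
    member this.Bind(m, f) = Option.bind f m
    member this.Return(x) = Some x

let maybe = MaybeBuilder()

// Each let! unwraps an option, or bails out of the whole block with None:
let sumOfHeads xs ys = maybe {
    let! x = List.tryHead xs
    let! y = List.tryHead ys
    return x + y
}

// sumOfHeads [1; 2] [10] evaluates to Some 11
// sumOfHeads []     [10] evaluates to None
```

<p>The <code>maybe { … }</code> block desugars into nested calls to <code>Bind</code> and <code>Return</code>, which is exactly the new-syntax-around-existing-ideas tradeoff at issue here.</p>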
<p>Now, the tradeoff there is that adding those adds a lot of complexity both to the compiler and to the libraries people are apt to write; there’s a reason Elm has totally eschewed that kind of type machinery to date. But Elm has also refused to just add syntax around ideas like this the way F♯ has here, and it makes for a much cleaner and frankly <em>nicer</em> language.</p>
<p>And that brings me to my <strong>third and final</strong> point: I’m really glad F♯ exists, and that it’s providing a pretty good experience of functional programming on the <abbr title='Common Language Runtime'>CLR</abbr>. But—and I fully grant that a fair bit of this kind of thing is almost entirely subjective—it doesn’t <em>feel</em> good in the same way that Elm or Rust do. There is something very difficult to nail down here, but I get a visceral experience of joy when writing some languages and not others. Again: that will vary person to person, but I think there are things that make it more or less likely. Things that make it more likely, at least for me, include everything from self-consistency and predictability at the semantic level to the way the code lays out and flows at the visual/syntactical level.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> Sadly, F♯ just doesn’t hit the right notes<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> for me. I’ll be much, much happier to write it than C♯ at work… but I really just want Elm and Rust and Idris to come save the day.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I am of course writing a <em>lot</em> of functional code in our JavaScript; JavaScript is a surprisingly good language for it.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>It’s not that OOP is <em>bad</em>, exactly; it’s just that what passes for OOP in languages like C♯, Java, and yes, F♯, is relatively low utility to me—and I think OOP ideas are much more interesting and useful when applied at a systems level, e.g. in an Actor system, than at the level of individual “actors” within the system. Compare Erlang/Elixir: functional components, organized in what is arguably an <em>incredibly</em> object-oriented way.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>The temptation extends beyond this book; O’Reilly’s <em>Programming Rust</em> (Jim Blandy and Jason Orendorff) reads as the same kind of defensive introduction to Rust for C++ developers.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>And yes, nerds, syntax <em>does</em> matter. Try reading this sentence, nicely punctuated, and with spaces and capitalization. Now: tryreadingthissentencewithoutpunctuationorspacesorcapitalization. There may be a point after which it becomes less important, and a range of things which are equally good in an absolute sense, but it matters.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>Pun not intended, but inevitable given the language names here.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Farewell, Dropbox2017-07-06T21:00:00-04:002017-07-06T21:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-07-06:/2017/farewell-dropbox.html<p>Over the last few years, I’ve grown increasingly annoyed with Dropbox. There have been a number of fairly high-profile misbehaviors on their part—most notably, <a href="http://applehelpwriter.com/2016/07/28/revealing-dropboxs-dirty-little-security-hack/">this one</a>—and then this past week, they started sending me notifications advertising Dropbox for Business.</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/bad-dropbox.png" alt="Notification ads are the worst." /><figcaption>Notification ads are the worst.</figcaption>
</figure>
<p>So, as I was with Google a few years ago <a href="http://v4.chriskrycho.com/2014/goodbye-chrome.html" title="Goodbye, Chrome: You're just too creepy now.">when they pushed me over the edge</a>—<em>also</em> with notifications!—I’m out.</p>
<p>I don’t mind Dropbox’s wanting to have a sustainable business. To the contrary: as I often note, I’m quite willing to pay for software I use, and I currently use a number of paid services where free alternatives exist because I’d rather do that than pay for ads.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I <em>do</em> mind when a company—<em>any</em> company—decides that building their business means mistreating their users and customers. And harassing me with notifications about a variant of their product I don’t care about certainly crosses that line. Combine that with the misbehavior <em>and</em> the fact that Dropbox has a tendency to hammer my system for no apparent reason, and, well, I’m out.</p>
<section id="transition-plans" class="level2">
<h2>Transition Plans</h2>
<section id="file-syncing" class="level3">
<h3>File syncing</h3>
<p>For basic storage and access to files across my devices, the shift will be pretty easy: I already have paid iCloud storage for backing up Photos (it’s far easier and comparably priced to all the other options, so that’s what we use). So everything I’ve <em>been</em> doing with Dropbox I’ll be doing with iCloud Drive instead. And I have <em>way</em> more headroom there with a 250GB plan than I do in my current 9GB of Dropbox storage.</p>
</section>
<section id="file-sharing" class="level3">
<h3>File sharing</h3>
<p>For things where I need to share files with other people, I’ll be using <a href="https://droplr.com">Droplr</a>. If or when I find a need to share something for a longer period of time, more often, or with more people, I’ll think about the Pro plan, but for right now the free plan will <em>more</em> than suffice for, say, sending an audio file to <a href="http://independentclauses.com">Stephen</a> for editing <a href="http://www.winningslowly.org">Winning Slowly</a> episodes. (Also, iOS 11’s Files app <a href="https://www.imore.com/files-app" title="iOS 11's Files app FAQ">will support</a> this sharing workflow natively.)</p>
</section>
<section id="my-writing-setup" class="level3">
<h3>My writing setup</h3>
<p>Probably the <em>most</em> vexing (or at least: vexing-seeming) change here will be to my writing workflow. For a long time, I’ve made an alias pointing from a folder in Dropbox on my main machine into the clone of the <a href="https://github.com/chriskrycho/chriskrycho.com">Git repository</a> on that machine where I manage my website.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> That has meant that I can edit the source version of a given post anywhere at any time, with any editor that has Dropbox integration. That was a winning combo for a long time, and it’s one thing I actually <em>can’t</em> do with iCloud Drive. (I tried, and it sort of works, for a little while; but iCloud Drive doesn’t seem to expect this scenario. In its defense, it’s a weird setup.) I realized in thinking it through this evening, though: it doesn’t actually matter to me with the ways my workflow has shifted—and, perhaps just as importantly, with the way that the iOS ecosystem has shifted.</p>
<p>For one thing, there are a <em>lot</em> of options for directly editing files from Git repositories on iOS now. I don’t need to have it in Dropbox to be able to open it in any one of several <em>great</em> iOS writing environments, whether to make a quick edit or to create a post from scratch. Both <a href="https://workingcopyapp.com/">Working Copy</a> and <a href="https://git2go.com">Git2Go</a> work <em>very</em> well. But for another thing, I currently can’t <em>generate</em> the site without logging into my home machine anyway.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> So if I need to make a tweak, well… <a href="http://www.blink.sh">Blink.sh</a> or <a href="https://www.panic.com/prompt/">Prompt</a> will let me log in remotely and do what I need to. And a little bit of Vim or Emacs will let me make any quick edits that way if I really feel I must.</p>
<p>And one side effect of realizing <em>that</em> is that I can easily enough just copy a file from iCloud storage to my site’s working directory after writing it in a writing folder in iCloud if I so desire. Sure, that’s a <em>little</em> finicky, but for the most part I won’t really need to mess with it: I can just <code>git push</code> from my iPad, <code>git pull</code> on my iMac and be ready to do whatever I need.</p>
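<p>That round trip can be sketched in the shell. Here a local bare repository stands in for the GitHub remote, and all the paths are made-up stand-ins:</p>

```shell
set -e
tmp=$(mktemp -d)

# A local bare repository plays the part of the GitHub remote.
git init -q --bare "$tmp/site.git"

# "iPad" side: clone the site, draft a post, commit, and push.
git clone -q "$tmp/site.git" "$tmp/ipad"
cd "$tmp/ipad"
git config user.email "[email protected]"
git config user.name "Example"
echo "A new draft" > new-post.md
git add new-post.md
git commit -qm "Draft: new post"
git push -q origin HEAD

# "iMac" side: pull the draft down, ready to regenerate the site.
git clone -q "$tmp/site.git" "$tmp/imac"
cat "$tmp/imac/new-post.md"
```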
</section>
<section id="other-apps" class="level3">
<h3>Other apps</h3>
<p>The last piece of the puzzle is the other “apps” that have made a home in my Dropbox. The reality, though, is that almost none of those actually matter to me. I don’t even look at the majority of that data, and other pieces of it—backups of GPS and heart-rate data from workouts, or copies of all my tweets from when I wanted to maintain a microblog on this site, for example—are really just needless at this point, as I have all of that data stored in <em>several</em> cloud platforms (in the case of workout data) and/or don’t care about being able to retrieve it (in the case of tweets). I can happily just shut those things down and call it a day.</p>
</section>
</section>
<section id="in-conclusion" class="level2">
<h2>In Conclusion</h2>
<p>So that’s it: goodbye Dropbox; hello other tools. (This post written from an iPad, and stored in iCloud Drive before publishing.) It’s been a long, and mostly just-fine ride, but I’m getting off here.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Full disclosure here: I am <em>not</em> a Dropbox paying customer—though that is the fault of their perhaps overly aggressive early customer acquisition strategy. I have never <em>needed</em> to pay for Dropbox, even though I have many gigabytes stored in it, because I earned so much free storage for inviting other users early on.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p><a href="https://stackoverflow.com/questions/19305033/why-is-putting-git-repositories-inside-of-a-dropbox-folder-not-recommended">You don’t want a Git repo sitting inside your Dropbox folder</a>, but a symlink like this works just fine: you don’t end up with the conflicts that can happen with a full repo in Dropbox.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>I’m hoping to change that a bit in two ways in the future, by having the generator live on a not-my-home-machine server and by making Lighting much easier to just drop in and use than my finicky Pelican setup currently is. But that depends on actually making Lightning, you know, <em>work</em>.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Set Up Mosh on macOS2017-06-29T08:10:00-04:002017-06-29T08:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-06-29:/2017/set-up-mosh-on-macos.html<p>Last night I bumped back into <a href="https://mosh.org">Mosh</a> (by way of <a href="https://medium.com/@searls/giving-the-ipad-a-full-time-job-3ae2440e1810">this post</a>), and decided to give it a whirl – I had seen it before, and in fact had even installed it, but had never gotten around to giving it a try.</p>
<p>If you’re not familiar with Mosh, it’s like SSH: a remote (terminal) connection to another machine. Unlike SSH, though, a single session can survive disconnects: it sets up a small server on the host machine and will reestablish the connection if it drops. It also responds immediately when you’re typing, even if there’s serious lag to the other server – it just gives you a nice visual signal (underlining) to let you know the other side hasn’t received what you’ve typed. This seems pretty nice, so I thought I’d set it up on my iMac so I could hit it from my iPad.</p>
<p>This isn’t complicated, but it also isn’t well-documented after the first step!</p>
<section id="steps" class="level2">
<h2>Steps</h2>
<ol type="1">
<li><p>Install mosh.</p>
<ul>
<li>via the <a href="https://mosh.org/#getting">binary</a> on their site</li>
<li>by running <code>brew install mosh</code></li>
</ul></li>
<li><p>Find the install location for the server from your Terminal:</p>
<pre class="sh"><code>$ which mosh-server</code></pre></li>
<li><p>Configure the firewall to allow the mosh server to accept incoming connections.</p>
<ol type="1">
<li>Open the <strong>Security and Privacy</strong> pane of the <strong>System Preferences</strong> app.</li>
<li>Choose the <strong>Firewall</strong> tab. Unlock it to make changes.</li>
<li>Click <strong>Firewall Options</strong>.</li>
<li>On the pane that opens, click the <strong>+</strong> button to add a new rule.</li>
<li>Navigate to the location you got in step 2 above. (One easy way to do this: hit <kbd>⌘ Cmd</kbd><kbd>⇧ Shift</kbd><kbd>G</kbd>, and paste in the output from the <code>which</code> command.) Click <strong>Add</strong>.</li>
<li>Find “mosh-server” in the list, and set it to <strong>Allow incoming connections</strong>.</li>
<li>Hit <strong>OK</strong>.</li>
</ol></li>
<li><p>Persuade macOS to reload its firewall rules. (This <em>may</em> not be necessary, but it was for me.) You can do one of the following:</p>
<ul>
<li><p>restart your machine</p></li>
<li><p>reload the normal rules manually:</p>
<pre class="sh"><code>$ sudo pfctl -f /etc/pf.conf</code></pre></li>
</ul></li>
<li><p>You may also need to open these ports on your router firewall. (By default, <code>mosh-server</code> picks a <abbr title="User Datagram Protocol">UDP</abbr> port in the range 60000–61000.) You should consider carefully whether you want that whole range of ports sitting open or whether you want to just use a specific port and then always target that specific port by running mosh with the <code>-p</code> option:</p>
<pre class="sh"><code>$ mosh -p 60000 [email protected]</code></pre>
<p>If you can connect locally but not remotely, this is probably what you need!</p></li>
</ol>
<p>That should be all you need!</p>
</section>
Write! app review2017-06-26T21:15:00-04:002017-06-26T21:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-06-26:/2017/write-app-review.htmlWrite! app is a distraction-free text editor—solid enough, but entering a very crowded field, at least on macOS.<p>As I’ve noted in the past, I’m always <a href="http://v4.chriskrycho.com/2016/ulysses-byword-and-just-right.html">on the lookout</a> for top-notch writing environments. I was recently contacted by the team behind <a href="https://writeapp.co">Write!</a> and asked if I would take a look at and review their app, and I was happy to oblige. I tested the app out fairly thoroughly by doing what I normally do with my writing apps: putting together a blog post or the like. I’ve written this review from start to finish in it, across my two Mac machines. I promised the authors an unbiased review, so here we go!</p>
<section id="overview" class="level2">
<h2>Overview</h2>
<p>Write!<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> describes itself as a distraction-free text editor. It enters the market in an interesting way: the Mac offerings here are numerous, varied, and excellent. Offerings on Windows are fewer and further between, and in my experience of much lower quality. Distraction-free text editors outside the world of <em>programming</em> text editors barely exist at all on Linux, as far as I can tell.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> Write! is cross-platform, targeting all three of these. And that, as we’ll see, <em>is</em> the story of this particular app—for good and for ill.</p>
</section>
<section id="the-good" class="level2">
<h2>The Good</h2>
<p>First, the good: the app seems to perform relatively well. Text entry, even on a fairly large document, is smooth and quick. (I imported the text of <a href="http://v4.chriskrycho.com/2016/realism-and-antirealism.html">this ~7200-word paper</a> to test it and it didn’t stutter a bit.) Especially given the time I’m going to spend on the not-so-good below, I want to take a moment to applaud the developers for getting that right. It’s one of the most important aspects of an app like this, and any number of apps I’ve used just fall down on large documents. Everything I’ve seen here makes it seem like Write! would handle much larger documents even than that paper with aplomb.</p>
<p>The app’s main writing area looks fairly nice, and the distraction-free/full-screen mode gets out of the way readily enough. The cloud sync that comes with the app is quick and seems reliable. I’ve worked on this document across the two Macs I use, with no sync issues whatsoever. The writing area also has a (toggleable) overview of the document on the right, <em>a la</em> Sublime Text. To the left is a toggleable outline view, which lets you drill down into the structure of your document if you have multiple heading levels. And within the writing area itself, you can expand and collapse sections demarcated by headings.</p>
<p>In general, the experience of writing in the app is <em>nice</em>. Not <em>amazing</em>, but genuinely nice.</p>
</section>
<section id="the-just-okay" class="level2">
<h2>The Just Okay</h2>
<p>There’s a bit of a delay before any open tabs are hidden in that fullscreen mode, but it’s otherwise fairly typical of most “distraction-free” writing environments in that regard. The colors chosen for the light and dark writing themes are fine, but not great. Much the same is true of the typography: it’s relatively pleasant, if bland. There are a number of built-in themes, but no apparent way to customize them to be more to your liking.</p>
<p>The app also features built-in autocomplete—but I’m not really sure who the target audience is for auto-complete in this kind of environment. It’s not <em>bad</em>, per se, to have it, but it doesn’t add a lot of value for <em>writing</em> (as opposed to, say, programming), and I turned it off fairly quickly in the process of writing this review.</p>
<section id="publishing" class="level3">
<h3>Publishing</h3>
<p>The app includes some “publishing” tools. Currently it supports writing to either Write’s own site, or to Medium. Medium publishing is nice—it’s certainly the hip tool <em>du jour</em>—but you’re out of luck if you use WordPress, much less something like Ghost.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>Publishing to Write! itself seems to be mostly a way of letting people see in-progress drafts. The links aren’t particularly friendly, and while they’d be easy enough to share to Facebook or Twitter or the like, they have serious downsides over any of the free blogging options out there for anything other than getting some early feedback—there’s no organizational or navigational structure available, and for that matter nothing that even ties it to your name! At a minimum, Write! should clarify what this is for.</p>
</section>
<section id="business-model" class="level3">
<h3>Business model</h3>
<p>The business model here is a curious mix: they’re selling the app at $19.95 (USD), with a year included of their custom sync solution. That sync solution is one of the things they advertise most heavily, and while I can attest that it works well, adding another, bespoke sync solution to my life is <em>not</em> on my list of things I’d like to do. It’s particularly an issue from where I stand because it doesn’t actually get me any benefits over a syncing solution using Dropbox or iCloud, both of which I’ve used extensively with other writing apps in the last few years, with no issues.</p>
<p>Add onto that the fact that the sync and future updates become an annual purchase—</p>
<blockquote>
<p>Starting one year after purchase, Cloud access and maintenance updates are $4.95/yr.</p>
</blockquote>
<p>—and any of the myriad other editors look much better: they all just use a sync engine I <em>already</em> use and like, and they <em>don’t</em> have annual fees for a service I don’t care about.</p>
<p>That goes double when you consider that I’ll often do different phases of drafting a given post in different editors, depending on the kind of content and what I’m doing with it. For example, I often use <a href="https://caret.io">Caret</a> for drafting technical blog posts, but at times I’ll switch over to using <a href="https://www.sublimetext.com">Sublime Text</a>, <a href="https://atom.io">Atom</a>, or <a href="https://code.visualstudio.com">VS Code</a> for working on the details of a given code snippet. If I’m using Write’s custom sync solution, my documents don’t exist in a normal folder on my machine, so they aren’t available for that kind of easy switching and editing. Double that <em>again</em> because it also means I don’t have access to the content on my iPad—where I often use <a href="https://www.ulyssesapp.com">Ulysses</a>, <a href="http://omz-software.com/editorial/">Editorial</a>, <a href="http://1writerapp.com">1Writer</a>, or <a href="https://bywordapp.com">Byword</a> to work on posts when I’m away from my Mac. There are no upsides for <em>me</em>, as far as I can tell, to using their sync system.</p>
<p>I put this in the “just okay” section, however, because I can imagine that it <em>might</em> be nice for someone who’s not already invested in an existing sync solution. Whether or not there are enough of those people out there to support the business model—I suspect not—is a separate question to whether it’s good or bad for users in a direct sense. Again: the custom sync system works well; I just don’t know whether it’s necessary (or worth the development time that had to be spent on it).</p>
<p>As for the business model on the whole: I’m not at all opposed to paying for good apps on an ongoing basis. To the contrary, I actually <em>embrace</em> it: as a software developer myself, I recognize that there are few (if any) other sustainable business models. However, the application needs to be pretty amazing to get me to buy it in the first place, still less to justify a recurring purchase.</p>
</section>
</section>
<section id="the-bad" class="level2">
<h2>The Bad</h2>
<p>Sad to say, from my perspective—to be clear, as a long-time Mac user with <a href="http://v4.chriskrycho.com/2016/ulysses-byword-and-just-right.html">very high standards for my writing tools</a>—this isn’t an amazing app. In fact, <em>on macOS</em>, it’s actually a bad app in many ways.</p>
<section id="non-native-ui" class="level3">
<h3>Non-native UI</h3>
<p>First, Write’s UI looks and behaves like a Windows app. It’s built on <a href="https://www.qt.io">Qt</a>, which does support native(-looking) widgets, but the developers chose not to use them – I assume in the interest of speed of development. If you’re on Windows, that’s fine. But this app will never look remotely native on macOS,<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> and given the plethora of other really high quality writing apps on macOS—some of them with their own publication options!—there’s just no reason why you would pick this over one of those at that most basic level.</p>
<p>Two examples should illustrate how painfully non-native this app is visually. First, note the window action buttons in the upper right:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/write-app-review/draft.png" alt="not native windows" /><figcaption>not native windows</figcaption>
</figure>
<p>These are Windows window action buttons; the normal Mac action buttons simply don’t exist! Similarly, there’s a slide-out menu that appears when you tap the hamburger in the top left:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/write-app-review/slide-out-menu.png" alt="slide out menu" /><figcaption>slide out menu</figcaption>
</figure>
<p>This is a reasonably nice, though not totally native-feeling, way of tackling the menu problem… on Windows. On Mac, it’s just duplicating the functionality of the normal menubar. And when I say duplicating, I mean it exactly: those menus are the same as the ones the app puts in the real menubar; there’s no reason for them to appear within the body of the app, other than that the app isn’t designed to work without them.</p>
<p>Right-click behavior is strange: instead of the normal Mac (or even Windows!) menu, they’ve supplied their own, and it’s actually its own little modal window, not a menu at all:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/write-app-review/right-click-modal.png" title="right click modal" alt="right-click modal window" /><figcaption>right-click modal window</figcaption>
</figure>
<p>I definitely see the utility of the little modal, but most other apps I’ve seen with similar approaches do it on highlighting some text. That way they can leave the normal right-click menu in place, which helps keep the user comfortable in their normal workflows. That’s going to be particularly annoying if you happen to make heavy use of macOS’s services menu—I don’t use it often, but when I want it, I <em>want</em> it.</p>
</section>
<section id="keyboard-shortcuts" class="level3">
<h3>Keyboard shortcuts</h3>
<p>Similarly, a number of standard keyboard shortcuts don’t work the same way, or don’t work at all, in Write! as they do in native Mac apps. Navigation controls aren’t quite right: <kbd>⌥</kbd><kbd>→</kbd> jumps to the start of the next word instead of the end of the current word; <kbd>⌥</kbd><kbd>←</kbd> doesn’t skip over punctuation; both stop on e.g. apostrophes in Write! (they skip over them natively). Other common shortcuts are bound to the wrong things: <kbd>Shift</kbd><kbd>⌥</kbd><kbd>-</kbd>, for example, increases heading size instead of inserting an em dash. <kbd>⌘</kbd><kbd>Delete</kbd> doesn’t do anything; neither do <kbd>^</kbd><kbd>⌘</kbd><kbd>Space</kbd> (normally used for bringing up the special-character selector) or my beloved <kbd>^</kbd><kbd>K</kbd> (“kill to end of line”) or <kbd>^</kbd><kbd>T</kbd> (“transpose characters around cursor”) combos.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> I imagine the list is longer; that’s just what I noticed in the course of writing this review!</p>
<p>Most egregiously, Write! steals the keyboard shortcut <kbd>⌘</kbd><kbd>`</kbd>, normally used to switch between windows on macOS, to focus itself. Failing to implement and indeed overriding text input commands is one (very bad) thing; this is another kind of failure entirely. Apps should <em>never</em> override core system behavior with their shortcuts! The fact that you can customize them doesn’t make this better; and the one time I <em>tried</em> to customize it (to turn off stealing the switch-window shortcut) it ended up overriding the <kbd>A</kbd> key’s behavior to create new documents instead of to, well, enter the letter “a”.</p>
<p>A lot of apps get some of those more obscure ones wrong, sadly, but proper use of <a href="https://developer.apple.com/documentation/coretext">Core Text</a> is a <em>must</em> for a native app in my book—and missing those super common ones is a big no-no. I simply won’t use an app long-term that gets these wrong, because I find the mismatch between the rest of the OS (and my muscle memory!) and what the app does too frustrating.</p>
</section>
<section id="markdown-support" class="level3">
<h3>Markdown support</h3>
<p>The app claims Markdown support, and it <em>sort of</em> has it. But the goal is clearly to have a rich-text editing experience which can translate Markdown into whatever the underlying format is on the fly, and then export it back out when desired—<em>not</em> to be a Markdown writing application. You can see direct evidence that this is their approach by writing in Markdown and e.g. creating italics with * characters. When you view the exported Markdown, it’ll be using _ characters instead. Other little things flag it up equally: Markdown items don’t get converted to their rich-text representations unless you add a space or some punctuation after typing them; if you go back and wrap words in link syntax, for example, or try to make them bold with pairs of *s, they won’t be converted at all. The export still works fine in that case,<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a> but it certainly doesn’t come off well for the writing experience in the app, inconsistent as it is.</p>
<p>It also doesn’t support Markdown itself fully or properly. Inline backtick characters (`) don’t generate inline code snippets. Instead, they generate standalone code blocks, as if you had used the four-space indent or triple-backtick markers that the actual Markdown spec (and every other app I know of) reserves for block-level code. Nor can I find a way to insert horizontal rules with triple stars (***) or triple dashes (---).</p>
</section>
<section id="other-nits" class="level3">
<h3>Other nits</h3>
<p>There are a few other small but significant problems as well. One is related to the business model: you actually have to sign in to start using the app. For all my positive comments about subscriptions above, it’s still the case that needing to sign in to a <em>writing</em> app (especially just to use the app for local documents!) is a non-starter for me. As with so many of the other negatives I noted, this is a compromise that I don’t need to make, because the other alternatives don’t force it on me.</p>
<p>There are also a number of other rough edges. Pasting with <kbd>⌘</kbd><kbd>V</kbd> does indeed paste the text… and scrolls you to the top of the current document every time. A number of times, selecting a given option simply failed: the choice wouldn’t stick. Other times, especially when selecting the default text theme, cursor selection seemed broken. I’m not sure whether those are problems with the Qt engine, the implementation, or some of both, but again: not a good look, especially in a crowded market. Right-clicking, beyond the problems mentioned above, also just wouldn’t work consistently. Sometimes the menu would close immediately after I right-clicked, so I couldn’t take any action in it at all—probably a result of using a modal instead of a normal menu there. Regardless of the reason, it was frustrating.</p>
<p>Last but not least, the app is unsigned, which means that it literally won’t open by default on macOS as of a few versions back. Users can certainly get around that, but they shouldn’t <em>have</em> to: there’s no excuse for not signing a paid app for macOS (or Windows! But I’m not sure what its status is there) in 2017.</p>
</section>
</section>
<section id="conclusion" class="level2">
<h2>Conclusion</h2>
<p>This is an interesting approach for an editor. Trying to build a truly cross-platform app, and especially one that isn’t using web technologies like <a href="https://electron.atom.io">Electron</a>, is an admirable goal—in fact, it’s one that I may dare to tackle myself at some point. Cross-platform UI is also a very hard problem, and unfortunately this app makes clear just how difficult it is by falling down so often on really important details. In reality, the only way to do it well is to write all your core business logic in a way you can share and then supply actually-native user interfaces. Anything else will inevitably feel out of place at best.</p>
<p>As a result, Write! is deeply compromised as a Mac app, to the extent that I simply cannot recommend it for Mac users. If you’re on a Mac, you should look instead at <a href="https://www.ulyssesapp.com">Ulysses</a>, <a href="https://bywordapp.com">Byword</a>, and <a href="https://caret.io">Caret</a>. Though they have different strengths and weaknesses, all of them are native (or, in Caret’s case, very effectively native-acting) apps, and all of them feel far more at home on the platform. That doesn’t mean Write! is <em>bad</em>; it just means it’s not worth your time (a) if you’re on a Mac or (b) if you really care about standard Markdown behaviors.</p>
<p>As noted, though, the developers got some important parts of this <em>very</em> right: the app performs well, it looks decent on Windows, and their sync engine seems incredibly solid. Accordingly, if you’re on Windows, and don’t already have a particular commitment to Markdown proper, I might even cautiously recommend it—as a replacement for something like the old <a href="https://www.microsoft.com/en-us/download/details.aspx?id=8621">LiveWriter</a> app, for example. The biggest hesitation I’d have there is the business model: as noted above, I’m not opposed in principle to subscription models for good apps, but I’m not really sure what the value proposition is here.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Yes, the app is named “Write!” – not “Write”. It’s not my favorite, not least because it means you have to type an exclamation point every time you write (!) it.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>There are many reasons for that, including things to do with many Linux users’ antipathy toward paid or non-open software, which makes it very difficult for not only developers but especially <em>designers</em> to make a living. Never mind the incredibly small size of the audience by comparison.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>And who knows if Medium will still be around in five years? But that’s for another post another time.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Or Linux, but then what exactly <em>is</em> native on Linux anyway? 😏 More seriously, this will look out of place on <em>any</em> Linux desktop environment.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>These latter ones are sadly too often the case for cross-platform tech; I’ve filed issues on <a href="https://code.visualstudio.com">VS Code</a> and <a href="https://atom.io">Atom</a> in the past that way.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>I’ll actually give Write! one point over Ulysses here: Ulysses does some similar conversions under the hood to make the writing experience seem snazzier, and things which don’t get turned into their custom “text objects” can end up exported <em>very</em> strangely.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
“Collection-Last Auto-Curried Functions”2017-06-24T17:35:00-04:002017-06-24T17:35:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-06-24:/2017/collection-last-auto-curried-functions.html<p>I’ve been using <a href="https://lodash.com">lodash</a> for a while at work, and I love having it in our toolbox. But, as I increasingly embrace <em>composition of smaller functions</em> as a helpful approach to building up the final version of an overall transformation of some piece of data, I’ve increasingly wanted …</p><p>I’ve been using <a href="https://lodash.com">lodash</a> for a while at work, and I love having it in our toolbox. But, as I increasingly embrace <em>composition of smaller functions</em> as a helpful approach to building up the final version of an overall transformation of some piece of data, I’ve increasingly wanted to be using <a href="https://github.com/lodash/lodash/wiki/FP-Guide">lodash-fp</a> instead—those “auto-curried… data-last methods” are <em>nice</em>.</p>
<p>I could belabor the difference with words, but a code sample will do better. Here’s how I would write the same basic transformation in both Lodash and lodash-fp.</p>
<pre class="javascript"><code>// Lodash
// (these aliases for the plain functions are assumed throughout)
const words = _.words, split = _.split, id = _.identity;

const breakfasts = ['pancakes', 'waffles', 'french toast'];
const uniqueLetters = _.flow([
  bs => _.map(bs, words),
  _.flatten,
  bs => _.map(bs, b => split(b, '')),
  _.flatten,
  _.uniq,
  ls => _.sortBy(ls, id),
]);
console.log(uniqueLetters(breakfasts));</code></pre>
<p>That gets the job done, but wouldn’t it be nice if we didn’t have to have all those anonymous functions (lambdas) throughout?</p>
<pre class="javascript"><code>// lodash-fp
// (the same aliases, now taken from lodash/fp)
const words = _.words, split = _.split, id = _.identity;

const uniqueLettersFp = _.flow([
  _.map(words),
  _.flatten,
  _.map(split('')),
  _.flatten,
  _.uniq,
  _.sortBy(id),
]);
const breakfasts = ['pancakes', 'waffles', 'french toast'];
console.log(uniqueLettersFp(breakfasts));</code></pre>
<p>Suddenly the intent is much clearer with the noise introduced by the lambdas gone. You get this because the lodash-fp functions are:</p>
<ul>
<li><strong>auto-curried:</strong> that is, even though <code>_.split</code> takes the splitter and then a string, you can just write <code>_.split('')</code> and get back a function which takes a string as an argument.</li>
<li><strong>data-last:</strong> because <code>_.split</code> takes the string to split <em>last</em>, it can be passed into an auto-curried function.</li>
</ul>
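<p>If it helps to see the mechanics, here is a minimal hand-rolled sketch of those two properties (illustrative helpers only, not lodash’s actual implementations):</p>

```javascript
// A data-last, auto-curried `map`: take the function first, then
// return a new function that waits for the collection.
const map = (fn) => (collection) => collection.map(fn);

// A minimal `flow`: pipe a value left-to-right through functions.
const flow = (fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

// Because `map(fn)` is already a one-argument function of the data,
// it slots straight into `flow` with no wrapping lambda.
const shout = flow([
  map((s) => s.toUpperCase()),
  (xs) => xs.join(' '),
]);

console.log(shout(['hello', 'world'])); // HELLO WORLD
```

<p>Flip either property (take the data first, or require both arguments at once) and you’re back to writing a wrapping lambda at every step.</p>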
<p>You need <em>both</em> to get that nice clean call to <code>_.flow</code>. But once you have both, it’s really, really hard ever to go back, because it’s so much nicer for building pipelines of functions.</p>
<p>…I need to see if I can help <a href="https://github.com/mike-north/ember-lodash/issues/21">do the work</a> to make lodash-fp available in Ember.js.</p>
Typing Your Ember, Part 22017-05-07T22:00:00-04:002017-05-07T22:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-05-07:/2017/typing-your-ember-part-2.htmlAdding TypeScript to your existing Ember.js app is easy!—here's how to do it, some of the current "gotchas," and a few tips to make the on-ramp a bit easier.
<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>In the <a href="/2017/typing-your-ember-part-1">first part</a> of this series, I described how to set up a brand new Ember.js app to use TypeScript. In this part, I’m going to talk about starting to use TypeScript in the context of an existing Ember.js app.</p>
<p>This is, in many ways, even simpler than setting up an app for the first time, because you already have almost everything you need. The steps here are exactly what you’d expect if you’re used to the Ember CLI ecosystem:</p>
<ol type="1">
<li><p>Install <code>ember-cli-typescript</code>:</p>
<pre class="sh"><code>ember install ember-cli-typescript</code></pre></li>
<li><p>Start using TypeScript wherever you want in your app.</p></li>
</ol>
<p>It really is that simple, for the most part. There are a couple qualifications, and a couple tips, though.</p>
<p>Let’s start with <strong>qualifications</strong>. There are open, unresolved <a href="https://github.com/emberwatch/ember-cli-typescript/issues/">issues</a> about using <code>ember-cli-typescript</code> in your app in certain contexts. For example: <a href="https://github.com/emberwatch/ember-cli-typescript/issues/8">using it with <code>ember-browserify</code></a>. While everything will <em>build</em> correctly in that case (even if the TypeScript compiler complains about being unable to resolve some things, the Ember CLI build pipeline will still work as expected), your editor integration won’t. There are a bunch of corners like this we’re still hammering out; those are the main things we need to get resolved before we can call this a “1.0.” We have the <em>main</em> stuff working, but, well… there’s more to do.</p>
<p>Along those same lines, you should take a close look at the <a href="https://github.com/emberwatch/ember-cli-typescript#not-yet-supported"><strong>Not yet supported</strong></a> section of the README. There are parts of Ember’s programming model which TypeScript certainly <em>can</em> support, but which we haven’t done the lifting to get the type declaration file to help with yet. (Looking for a place to pitch in and already comfortable doing some heavy lifting with some of TypeScript’s <a href="http://www.typescriptlang.org/docs/handbook/mixins.html">most advanced type features</a>? We could use the help.)</p>
<p>One other thing to be aware of is that your <code>tsconfig.json</code> settings will affect what kind of resolution your editor gives you. If you have <code>allowJs</code> set to <code>true</code> in your <code>tsconfig.json</code>, your editors will resolve JS modules. Otherwise, they’ll <em>only</em> resolve TS modules, which can be incredibly annoying at times. However, we haven’t yet nailed down what the default should be. (You can <a href="https://github.com/emberwatch/ember-cli-typescript/issues/">come tell us</a> on GitHub if you have thoughts or insights there!) And the fact that Microsoft has left this configurable is suggestive: different projects may have different preferences here.</p>
<p>Now, for the <strong>tips</strong>. Note that these are just a couple quick pointers; I’ll come back and talk about structuring your project and more sophisticated uses of TypeScript in the future.</p>
<ul>
<li><p>Don’t turn on <code>--strict</code> or the corresponding individual flags on day 1. Unless you have an extremely unusual and disciplined Ember.js codebase, you’ll have an incredible set of errors to deal with.</p></li>
<li><p>Don’t set the <code>noEmitOnError</code> flag to <code>true</code> in your <code>tsconfig.json</code>, for much the same reason. Since the state of type declaration files for Ember is best described as <em>nascent</em> at present, many of your files will have errors in them just from failed imports!</p></li>
<li><p>Don’t try to convert everything at once. Just pick the next feature or bug you’re working on, and start with the files you’re touching for that bug. Rename them to <code>.ts</code>, fix any major issues they flag up that you can—but stick as locally as possible. You’re apt to find a <em>lot</em> of small bugs as you start migrating, and some of them will affect your whole system because they touch central data types. It’s okay. You can come back to those later. For today, you can just be explicit about the weirdnesses.</p></li>
<li><p>As part of that: get comfortable—really, really comfortable—with <a href="http://www.typescriptlang.org/docs/handbook/advanced-types.html#union-types">union types</a>. They’ll make it much easier to express the kind of code you’ve <em>actually</em> written.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p></li>
<li><p>Don’t worry about adding explicit types to <em>everything.</em> In fact, depending on how comfortable you are already with typed languages, you should probably take a pretty different tack with this:</p>
<ul>
<li><p>If you’re just stepping into the world of typed programming languages, you might start adding types where they’re the <em>lowest risk</em>: some place like your automated tests. That’ll help you start to see how to take advantage of them, while not impacting the way you write your app code until you have a better idea how best to employ the types.</p></li>
<li><p>If you’re already really comfortable with typed programming languages, you might employ types where they’re <em>most helpful:</em> start with some types in the hairiest or trickiest spots of your app.</p></li>
</ul></li>
</ul>
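<p>Concretely, a lenient day-one <code>tsconfig.json</code> along the lines of those tips might look something like this (a sketch only; the <code>target</code>, <code>module</code>, and <code>moduleResolution</code> values are illustrative and should match your app’s actual setup):</p>

```json
{
  "compilerOptions": {
    // Day one: leave strictness off and tighten incrementally later.
    "strict": false,
    // Keep emitting even while missing declaration files cause errors.
    "noEmitOnError": false,
    // Whether editors should also resolve .js modules (see above).
    "allowJs": true,
    // Illustrative values; adjust to your app.
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node"
  }
}
```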
<p>There is plenty more I could say, but I think that’s a good start for now. I’ll have lots more to add in later posts about the details of how specifically to get the most mileage out of types within an Ember.js app today.</p>
<hr />
<ul>
<li><a href="/2017/typing-your-ember-part-1"><strong>Previous:</strong> Part 1 – Set your Ember.js project up to use TypeScript.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Also, I <em>strongly</em> encourage you to write types in terms of unions of types rather than in terms of <a href="http://www.typescriptlang.org/docs/handbook/interfaces.html#optional-properties">optional properties on types</a>. That might be surprising; I’ll explain it in more detail in a future post.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Typing Your Ember, Part 12017-05-05T00:10:00-04:002017-05-05T00:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-05-05:/2017/typing-your-ember-part-1.htmlIn this first post in the series, we're going to keep things simple and easy: we're going to get an Ember.js app configured to use TypeScript. Later posts will cover some of the other details.
<p><i class='series-overview'>You write <a href="https://emberjs.com">Ember.js</a> apps. You think <a href="http://www.typescriptlang.org">TypeScript</a> would be helpful in building a more robust app as it increases in size or has more people working on it. But you have questions about how to make it work.</i></p>
<p><i class='series-overview'>This is the series for you! I’ll talk through everything: from the very basics of how to set up your Ember.js app to use TypeScript to how you can get the most out of TypeScript today—and I’ll be pretty clear about the current tradeoffs and limitations, too.</i></p>
<p><i class='series-overview'><a href="/typing-your-ember.html">(See the rest of the series. →)</a></i></p>
<hr />
<p>In this first post in the series, we’re going to keep things simple and easy: we’re going to get an Ember.js app configured to use TypeScript. Later posts will cover some of the other details.</p>
<p>Because of the lovely <a href="https://ember-cli.com">Ember CLI</a> ecosystem, this is a pretty straightforward process. I’m going to start from <em>zero</em> so that even if you’ve never written an Ember app before, you can get this up and running by following these instructions. These instructions have also been tested and confirmed to work across platforms—you can do this equally on Windows, macOS, or Linux.</p>
<ol type="1">
<li><p>Make sure you have Ember’s prerequisites installed. Get <a href="https://nodejs.org/en/">Node</a> for your platform. Optionally (but highly recommended) install <a href="https://yarnpkg.com">Yarn</a> to manage your Node packages.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p></li>
<li><p>Install the Ember command-line tools globally:</p>
<pre class="sh"><code>yarn global add ember-cli</code></pre>
<p>or</p>
<pre class="sh"><code>npm install --global ember-cli</code></pre></li>
<li><p>Create an Ember app.</p>
<pre class="sh"><code>ember new my-ts-app --yarn</code></pre>
<p>(Using the <code>--yarn</code> flag will make it so your app uses <a href="https://yarnpkg.com"><code>yarn</code></a> and creates a <code>yarn.lock</code> file instead of using <code>npm</code> when it installs its dependencies.)</p></li>
<li><p>Now move to the root of the newly created app: this is where we’ll do everything else in the post.</p>
<pre class="sh"><code>cd my-ts-app</code></pre></li>
<li><p>Add the <a href="https://emberobserver.com/addons/ember-cli-typescript"><em>ember-cli-typescript</em> add-on</a>.</p>
<pre class="sh"><code>ember install ember-cli-typescript</code></pre></li>
<li><p>Generate your first UI component.</p>
<pre class="sh"><code>ember generate component some-input</code></pre></li>
<li><p>Rename the files it generated from <code>.js</code> to <code>.ts</code>:</p>
<ul>
<li><code>app/components/some-input.js</code> → <code>app/components/some-input.ts</code></li>
<li><code>tests/integration/components/some-input-test.js</code> → <code>tests/integration/components/some-input-test.ts</code></li>
</ul>
<p>(Eventually, we’ll make it so that you get TypeScript for all newly generated components when using <em>ember-cli-typescript</em>.)</p></li>
<li><p>Add some content to the files:</p>
<pre class="handlebars"><code>{{!-- some-input.hbs --}}
{{input value=theValue change=(mut theValue)}}
{{theValue}}</code></pre>
<pre class="typescript"><code>// some-input.ts
import Ember from 'ember';

export default Ember.Component.extend({
  theValue: '',
});</code></pre></li>
<li><p>Update your <code>application.hbs</code> file to remove the default <code>{{welcome}}</code> template and replace it with <code>{{some-input}}</code>.</p></li>
<li><p>Spin up the Ember application with Ember CLI’s development server:</p>
<pre class="sh"><code>ember serve</code></pre>
<p>You’ll likely note some warnings: the TypeScript compiler won’t be able to find some of the modules imported in your files. I’ll have more to say about this in a future post. For now, suffice it to say: don’t worry, Ember CLI is still resolving and compiling your modules just fine.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p></li>
<li><p>Load the application by going to <code>localhost:4200</code> in your browser. You should see a blank white screen with an input in it. Type in it, and see the input rendered to the page. Simple enough, but it’s using a TypeScript file compiled along the way!</p></li>
</ol>
<p>And that’s it: we’re done setting up an Ember.js app to use TypeScript! In the next post, I’ll talk a bit about strategies for migrating an existing app to TypeScript—not just the mechanics of it, but also where and how to start actually integrating types into your code.</p>
<ul>
<li><a href="/2017/typing-your-ember-part-2"><strong>Next:</strong> Part 2 – Adding TypeScript to an existing Ember.js project.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I strongly prefer to use <code>yarn</code> over <code>npm</code> because <code>yarn</code> installs are predictable and repeatable, and if there’s one thing I don’t need to spend time on when developing our Ember.js app at Olo, it’s chasing problems with transitive dependencies that are different in the build server than in my local development environment. Yarn’s lockfiles mean what ends up built on the server is <em>exactly</em> what ended up built on my machine.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>But if you’re curious, here’s a preview: we really need more <a href="http://www.typescriptlang.org/docs/handbook/declaration-files/introduction.html">type definitions</a> for the Ember ecosystem. I’ll be covering <em>how</em> we build those in much more detail in a future installment.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Why Elm Instead of TypeScript?2017-04-23T17:20:00-04:002017-04-23T17:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-04-23:/2017/why-elm-instead-of-typescript.html<p>A few weeks ago, an acquaintance asked in a Slack channel we’re both in:</p>
<blockquote>
<p>Can I ask a noob type Elm / JS question?</p>
<p>Why Elm instead of Typescript? The dev stack and functional programming?</p>
</blockquote>
<p>I responded as follows, with only light tweaks to clarify a couple things (and I …</p><p>A few weeks ago, an acquaintance asked in a Slack channel we’re both in:</p>
<blockquote>
<p>Can I ask a noob type Elm / JS question?</p>
<p>Why Elm instead of Typescript? The dev stack and functional programming?</p>
</blockquote>
<p>I responded as follows, with only light tweaks to clarify a couple things (and I’ll be reusing some of this material as the basis of an internal tech talk I’m giving on the same subject at Olo in a few weeks):</p>
<p>A couple things Elm gives you:</p>
<ol type="1">
<li><p>It’s not tied to JS directly, which means it’s free to just do what is the best fit for the language rather than needing to be able to express all the quirks and oddities of JS. That’s the single biggest thing I find all the time with TS (which I use every day and do quite like): as good as it is, and as both powerful and expressive as its type system is, at the end of the day it’s… still a superset of JavaScript, and that can mean some really nice things, but it also means a lot of <em>weird</em> things.</p></li>
<li><p>Elm’s type system is <em>sound</em>; TypeScript’s is not. At a practical level, that means that if an Elm program type-checks (and thus compiles), you can be <em>sure</em> – not mostly sure, 100% sure – that it is free of things like <code>undefined is not a function</code>. TypeScript does not (and by design cannot) give you that guarantee. And when I say “by design,” I mean that its designers believed from the outset that soundness was in tension with developer productivity, so they intentionally left a number of “soundness holes” in the type system—there’s still a lot of opportunity for <code>undefined is not a function</code>, sad to say. You can make it <em>less</em> than in JS… but not none. (That’s even still true in the TypeScript 2.x series, though the various soundness flags they added in 2.0 and the <code>--strict</code> option <a href="https://blogs.msdn.microsoft.com/typescript/2017/04/10/announcing-typescript-2-3-rc/">coming in 2.3</a> do get you closer.) In Elm, you can make it truly <em>none</em>. It’s just a sort of known fact at this point that Elm codebases tend to <em>have zero runtime errors</em>.</p></li>
<li><p>Elm’s language design is a huge win.</p>
<ol type="1">
<li><p>Elm is a <em>pure functional language</em>. Because non-pure things are offloaded to the Elm runtime, every single function <em>you</em> write is pure. Same input means the same output.</p></li>
<li><p>Elm supports first-class currying and partial application. This makes it much, much easier to do the kind of functional-building-block approach that is natural in FP and which is <em>attractive</em> in (but a lot more work in) JS or TS. Example code to show what I mean—</p>
<p>Javascript:</p>
<pre class="js"><code>const add = (a, b) => a + b;
const add2 = (c) => add(2, c);
const five = add2(3);</code></pre>
<p>Elm:</p>
<pre class="elm"><code>add a b = a + b
add2 = add 2
five = add2 3</code></pre></li>
<li><p>The combination of the above means that you can refactor and <em>always be sure you get everything</em>, which is truly magical. And the compiler errors are the best in the world (and that’s no exaggeration).</p></li>
</ol></li>
</ol>
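<p>For comparison, the closest plain JavaScript gets to Elm’s automatic currying is writing functions in curried form by hand:</p>

```javascript
// Hand-curried: each function takes one argument and returns a
// function awaiting the next, approximating Elm's `add a b = a + b`.
const add = (a) => (b) => a + b;

// Partial application then falls out for free, like Elm's `add 2`.
const add2 = add(2);

console.log(add2(3)); // 5
```

<p>The difference is that Elm gives you this for every function with no ceremony, while in JavaScript you have to opt in at each definition site (or lean on a library’s <code>curry</code> helper).</p>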
<p>The way I’d summarize it is to say that Elm makes it easy to do the right thing and hard or impossible to do the wrong thing. TypeScript makes it possible to do the right thing, and gives you a couple switches you can flip to make it harder to do the wrong things, but will ultimately let you do anything.</p>
Functions, Objects, and Destructuring in JavaScript2017-03-27T18:00:00-04:002017-03-27T18:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-03-27:/2017/functions-objects-and-destructuring-in-javascript.html<p><i class=editorial>A colleague just getting his feet wet with JavaScript, and coming from a background with much more C# than JavaScript, sent me a question on Slack the other day, and I realized the answer I’d written up was more generally helpful, so here you go!</i></p>
<p>I’m including the …</p><p><i class=editorial>A colleague just getting his feet wet with JavaScript, and coming from a background with much more C# than JavaScript, sent me a question on Slack the other day, and I realized the answer I’d written up was more generally helpful, so here you go!</i></p>
<p>I’m including the context of original question because I want to call out something really important: there are no dumb questions. When you’re just coming up to speed on <em>any</em> technology, stuff is going to be confusing. That goes double when making the jump as far as between something like C# and something like modern JS.</p>
<blockquote>
<p>Hey this may be a really dumb question</p>
<p>but I’m a JavaScript n00b, and I have no idea what’s going on here</p>
<p>I’m not used to this syntax</p>
<p>I have this program:</p>
<pre class="js"><code>function ab() {
  function fa() { console.log("A"); };
  function fb() { console.log("B"); };
  return {fa, fb};
};

let {fa, fb} = ab();
fa();
fb();</code></pre>
<p>and it outputs</p>
<pre><code>A
B</code></pre>
<p>(as expected)</p>
<p>What I don’t understand is the syntax for the <code>let</code> part (or maybe even the return from <code>ab()</code>)</p>
<ol type="A">
<li><p>What is <code>ab()</code> actually returning? An object with 2 function pointers?</p></li>
<li><p>What can’t I do a <code>let {a, b} = ab()</code> and then call <code>a()</code> and <code>b()</code>? I get syntax errors that <code>a</code> and <code>b</code> aren’t defined</p></li>
</ol>
<p><em>edit to show code that doesn’t work (definition of ab() remains the same):</em></p>
<pre class="js"><code>let {a, b} = ab();
a(); // will throw an error here
b();</code></pre>
<p>I don’t understand why the names for <code>fa</code> and <code>fb</code> have to be the same across all scopes/closures (? am I using those terms correctly? JavaScript is an odd dance partner at times)</p>
</blockquote>
<hr />
<p>First, your (A) is <em>basically</em> correct, but the phrase “function pointers” is one you should banish from your mind entirely in this context. In JavaScript, functions are just items like any other. From the language’s perspective, there’s no difference between these things other than what you can do with them:</p>
<pre class="js"><code>let foo = "a string";
function quux(blah) { console.log("blah is " + blah); }
let bar = quux;</code></pre>
<p>Both <code>foo</code> and <code>bar</code> are just variables. (<code>quux</code> is a variable, too, but it behaves a little differently; I’ll cover that in a minute.) They have different types, and therefore different things you can do with them. <code>foo</code> has the <code>length</code> property and a bunch of string-specific methods attached. <code>bar</code> is callable. But both of them are just <em>things</em> in the same way, and at the same level in the program.</p>
<p>So in your original <code>function ab() { ... }</code>, what you’re doing is declaring two functions, <code>fa</code> and <code>fb</code>, and returning them attached to an object.</p>
<p>For various reasons which aren’t especially interesting, functions can have <em>names</em>…</p>
<pre class="js"><code>function fa() { ... }</code></pre>
<p>…and can be <em>assigned to other variables</em>…</p>
<pre class="js"><code>let trulyISayToYou = function waffles() { console.log("are so tasty"); };</code></pre>
<p>…and in fact you can combine those: define the function <em>itself</em> anonymously, that is, without any name attached to the function declaration at all, and assign it to a variable:</p>
<pre class="js"><code>let lookMa = function() { console.log("no function name!"); };</code></pre>
<p>Doing <code>function ab() { ... }</code> simultaneously <em>declares</em> the function and <em>hoists</em> it, that is, it makes it available in that entire scope, regardless of where it is defined. So you can do this, even though it’s kind of insane most of the time and you shouldn’t:</p>
<pre class="js"><code>quux();
function quux() { console.log('SRSLY?'); }</code></pre>
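<p>To see the contrast, compare a hoisted declaration with a function expression bound via <code>let</code>; in this sketch (names invented) the declaration is callable before its own line, while touching the expression’s name early throws:</p>

```javascript
// Function *declarations* are hoisted, so calling early works:
early(); // prints "declared"
function early() { console.log("declared"); }

// Function *expressions* bound with `let` are not hoisted:
// touching the name before its line throws a ReferenceError.
let sawError = false;
try {
  late();
} catch (e) {
  sawError = true;
  console.log(e.name); // "ReferenceError"
}
let late = function () { console.log("expression"); };
late(); // prints "expression"
```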
<hr />
<p>Now, about returning <code>fa</code> and <code>fb</code> from the function.</p>
<p>First, note that you normally define objects in a long form, like so:</p>
<pre class="js"><code>let someObject = {
  a: true,
  b: 'some string'
};
console.log(someObject.a); // prints true
console.log(someObject.b); // prints "some string"</code></pre>
<p>However, very, <em>very</em> often, you find yourself doing something like this:</p>
<pre class="js"><code>// do some work to define what `a` and `b` should be, then...
let someObject = {
  a: a,
  b: b
};</code></pre>
<p>Because this is such a common pattern, the 2015 version of JS introduced a “shorthand,” which lets you just write that last assignment like this:</p>
<pre class="js"><code>let someObject = {
  a,
  b
};</code></pre>
<p>And of course, for convenience we often write that on one line:</p>
<pre class="js"><code>let someObject = { a, b };</code></pre>
<p>Then you can combine that with the fact that you declared two items (functions, but again: that <em>really</em> doesn’t matter, they could be anything) with the names <code>fa</code> and <code>fb</code>, and what you’re doing is returning an object containing those two items in it: <code>return {fa, fb}</code> is equivalent to this:</p>
<pre class="js"><code>let theFunctions = {
  fa: fa,
  fb: fb,
};
return theFunctions;</code></pre>
<hr />
<p>What about the <code>let</code> assignment?</p>
<p>JS has three kinds of name bindings: <code>var</code>, <code>let</code>, and <code>const</code>. <code>var</code> bindings act like <code>function</code>: the names you use get “hoisted”. So:</p>
<pre class="js"><code>console.log(neverDefined); // throws an error
console.log(definedLater); // prints undefined
var definedLater = "what";
console.log(definedLater); // prints "what"</code></pre>
<p><code>let</code> and <code>const</code> behave much more like you’d expect: they’re only valid <em>after</em> they’re defined, and they’re scoped to the blocks they appear in. (<code>var</code> will escape things like <code>if</code> blocks, too. It’s crazy-pants.) The difference between <code>let</code> and <code>const</code> is that they create <em>mutable</em> or <em>immutable</em> <em>bindings</em> to a name.</p>
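<p>A quick sketch of those scoping rules (the names here are invented for illustration):</p>

```javascript
// `var` ignores blocks: it is scoped to the whole function
// (or script), so it "escapes" this `if`. `let` does not.
if (true) {
  var escapes = "var leaks out of the block";
  let contained = "let stays inside it";
}
console.log(escapes);          // "var leaks out of the block"
console.log(typeof contained); // "undefined": not in scope out here

// And a `let` name cannot be touched before its own line:
let sawError = false;
try {
  console.log(tooSoon);
} catch (e) {
  sawError = true; // ReferenceError
}
let tooSoon = "now it exists";
console.log(sawError); // true
```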
<p>So <code>let a = true;</code> is just creating a name, <code>a</code>, and binding the value <code>true</code> to it. Likewise, with <code>const b = false;</code> it’s creating a name, <code>b</code>, and binding the value <code>false</code> to it. And those <em>won’t</em> be hoisted. Now, having done <code>let a = true;</code> we could on the next line write <code>a = false;</code> and that’s fine: <code>let</code> bindings are mutable; they can change. We’ll get an error if we try to do <code>b = true;</code> though, because <code>const</code> bindings are <em>not</em> mutable.</p>
<p>One thing to beware of with that: things like objects and arrays, being reference types, are not themselves created as immutable when you use <code>const</code>. Rather, the specific <em>instance</em> is immutably bound to the name. So:</p>
<pre class="js"><code>const foo = { a: true };
foo.b = 'I can add properties!'; // okay
delete foo.a; // okay
foo = { c: "assign a new object" }; // will error</code></pre>
<p>You can change the internals of the item bound to the name, but not assign a new item to the name. For value types (numbers, booleans, etc.), that makes them behave like <em>constants</em> in other languages. You have to use something like <code>Object.freeze</code> to get actually constant object types.</p>
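<p>A minimal sketch of <code>Object.freeze</code> in action (the object here is invented; note too that <code>freeze</code> is shallow, so any nested objects stay mutable):</p>

```javascript
const settings = Object.freeze({ retries: 3 });

// In strict mode these mutations throw a TypeError; in sloppy
// mode they fail silently. Either way the object is unchanged.
try { settings.retries = 99; } catch (e) { /* TypeError if strict */ }
try { settings.added = true; } catch (e) { /* likewise */ }
try { delete settings.retries; } catch (e) { /* likewise */ }

console.log(settings.retries);          // 3: unchanged
console.log("added" in settings);       // false
console.log(Object.isFrozen(settings)); // true
```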
<p>That was a long digression to explain what you’re seeing in a general sense with <code>let</code>.</p>
<hr />
<p>Finally, let’s come back around and talk about that assignment and why you need the names <code>fa</code> and <code>fb</code>.</p>
<p>As noted, <code>ab()</code> returns an object with two items attached, <code>fa</code> and <code>fb</code>. (And again: functions are <em>just</em> items in JS.) So you could also write that like this:</p>
<pre class="js"><code>let theFunctions = ab(); // theFunctions is now the object returned
theFunctions.fa(); // and it has the `fa` item on it
theFunctions.fb(); // and the `fb` item, too</code></pre>
<p>Of course, if your original <code>ab()</code> function had returned other properties, they’d be accessible there, too, in just the same way (though they wouldn’t be callable if they weren’t functions).</p>
<p>Again, this is a super common pattern: you want to immediately do something with the values returned on an object by some function, and you don’t necessarily want to type out the name of the object every time. So ES2015 introduced <em>destructuring</em> to help with this problem. I’ll do it without the function in the way to show how it works at the simplest level first.</p>
<pre class="js"><code>let someObject = {
  foo: 'what is a foo anyway',
  bar: 'hey, a place to drink *or* a thing to hit people with',
  quux: 'is this like a duck'
};</code></pre>
console.log(someObject.foo); // etc.</code></pre>
<p>Now, if we wanted to get at <code>foo</code>, <code>bar</code>, and <code>quux</code>, we could always do that with <code>someObject.quux</code> and so on. But, especially if we have some large object floating around, we often just want a couple properties from it—say, in this case, <code>foo</code> and <code>quux</code>. We could do that like this:</p>
<pre class="js"><code>let foo = someObject.foo;
let quux = someObject.quux;</code></pre>
<p>And of course those new names <em>don’t</em> have to match:</p>
<pre class="js"><code>let whatever = someObject.foo;
let weLike = someObject.quux;</code></pre>
<p>However, because wanting to snag just a couple items off of objects like this is so common, the shorthand is available. In the case of the shorthand for <em>destructuring</em>, just like the case of the shorthand for object creation, the names have to match: otherwise, it wouldn’t know what to match them with.</p>
<pre class="js"><code>let { foo, quux } = someObject;</code></pre>
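<p>One more wrinkle worth knowing, though the original question doesn’t use it: destructuring <em>does</em> let you rename as you go, with a <code>propertyName: newName</code> form. Reusing the object from above:</p>

```javascript
let someObject = {
  foo: 'what is a foo anyway',
  quux: 'is this like a duck'
};

// `foo: whatever` reads: take the `foo` property and bind it
// to a new name, `whatever`. Likewise `quux: weLike`.
let { foo: whatever, quux: weLike } = someObject;

console.log(whatever); // 'what is a foo anyway'
console.log(weLike);   // 'is this like a duck'
```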
<p>So, going back to your original example: <code>ab()</code> returns an object which has the items <code>fa</code> and <code>fb</code> on it. You’re using the destructuring assignment there to get just <code>fa</code> and <code>fb</code>. There’s no reason they <em>have</em> to be those names in the outer scope, other than that you’re using the destructuring assignment. You could also do this:</p>
<pre class="js"><code>let theFunctions = ab();
let oneOfThem = theFunctions.fa;
let theOtherOne = theFunctions.fb;
oneOfThem(); // does what fa() does
theOtherOne(); // does what fb() does</code></pre>
<hr />
<p>I <em>think</em> that covers everything your questions brought up; but please feel free to ask more!</p>
<p>The most important thing to take away is that even though yes, those are pointers to functions under the hood, in JS that’s <em>absolutely</em> no different than the fact that there are pointers to objects and arrays under the hood. Functions are just more items you can do things with. You can put them on objects, you can return them directly, you can take them as arguments, etc.</p>
<p>Hopefully that’s helpful!</p>
<hr />
<p>Bonus content: in ES2015 and later, you can also define anonymous functions like this:</p>
<pre class="js"><code>let someFunction = (someArg) => { console.log(someArg); };</code></pre>
<p>This has some interesting implications for the value of <code>this</code> in the body of the function you declare… but that’s for another time.</p>
Pick the Right Tool for the Job2017-03-17T22:00:00-04:002017-03-17T22:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-03-17:/2017/pick-the-right-tool-for-the-job.html<p>Over the past few years, I’ve been experimenting with publishing microblog posts here on my website as the “canonical” source for them, inspired by some of Manton Reece’s <a href="http://www.manton.org/2014/09/owning-the-microblog.html">early experiments</a> that way. I have also spent a considerable amount of time trying to come up with a good …</p><p>Over the past few years, I’ve been experimenting with publishing microblog posts here on my website as the “canonical” source for them, inspired by some of Manton Reece’s <a href="http://www.manton.org/2014/09/owning-the-microblog.html">early experiments</a> that way. I have also spent a considerable amount of time trying to come up with a good way to share links, and have been rather stymied by the limitations of the static site generator I use (<a href="https://blog.getpelican.com/">Pelican</a>): it does not support customizing the link target of RSS items.</p>
<p>Both of these desires, combined with the breadth of my interests, have been motivating factors in my desire to <a href="http://v4.chriskrycho.com/lightning-rs/">build my own CMS/site generator</a>.</p>
<p>But as of today, I think I am setting aside those two needs, at least for the present (though the underlying information architecture needs for my site are not thereby particularly diminished, so Lightning will still aim for roughly the same goals when I get back to it).</p>
<p>My reasoning here is two-fold.</p>
<ol type="1">
<li><p>The simple reality is that the vast majority of my microblogging has zero historical value. It is ephemeral; archiving it is essentially a useless gesture, a “because I can” or perhaps “the internet should be permanent” act of defiance. But in truth, if every one of my tweets vanished… it would not matter one whit. I have been considering this for some time, but it came home to me tonight while considering my second point:</p></li>
<li><p>I have been experimenting with <a href="https://pinboard.in">Pinboard</a> as a bookmark management service over the past few weeks, spurred on by once again needing to dig a particular post out of an email from three years ago. (You can see my public bookmarks <a href="https://pinboard.in/u:chriskrycho">here</a>. It’s a work in progress as far as organization goes.) One lovely Pinboard feature (and there are <a href="http://text-patterns.thenewatlantis.com/2016/07/happy-birthday-pinboard.html">many others,</a> including <a href="https://blog.pinboard.in/2016/07/pinboard_turns_seven/">having a simple, profitable business model</a>! Yes, that <em>is</em> a feature as far as I am concerned) is the option of public RSS feeds for publicly-bookmarked items by author, by tag, etc. This is actually what led me to Pinboard in the first place (thanks, <a href="http://text-patterns.thenewatlantis.com/2011/05/pinboard.html">ayjay</a>). My Pinboard RSS feed is <a href="http://feeds.pinboard.in/rss/u:chriskrycho/">http://feeds.pinboard.in/rss/u:chriskrycho/</a>, and if you want to follow along and see what I think is worth reading with occasional comments… that’s where it will be.</p></li>
</ol>
<p>In the course of writing this post, I also remembered that last night, App.net shut down. I downloaded my archive a few weeks ago… I think. Honestly, I don’t recall, and the truth is that I haven’t looked at old posts there in years, though I amassed some 15,000 in the years I was active there. ADN was beautiful and wonderful. But even the very good conversations I had there are <em>past</em> in much the same way a conversation when physically present with each other would be. We do not suppose we need audio/video recordings of conversations just in case we might want to search them later. I understand why someone might want to <a href="http://www.manton.org/2017/03/app-net-archive.html">archive all of ADN</a>. But I don’t feel that desire myself anymore.</p>
<p>Archival has value. But its value is not ultimate, and its value is not universal.</p>
<p>The somewhat-ephemeral things I care about archiving are, I find, <em>links</em>—not random thoughts or comments or even conversations, but articles and posts I want to be able to come back to later, or quickly find to share with someone.</p>
<p>So no more microblog posts here. If you want them, you can follow me on Twitter. (If I hear from enough people who would prefer to keep getting them via RSS, I will think about setting up some sort of automated RSS mirror of my stream.) But for my own part, I am content to let the ephemeral be ephemeral. And that is easier to countenance now that microblogging isn’t also a poor-man’s bookmarking tool for me.</p>
<p>This takes me around to the meta point I had in mind when I started the post: use tools for what they’re good at and don’t try to force them into roles they’re not well-suited for. Twitter is good for ephemera, bad for permanence, decent for finding content I wouldn’t encounter via RSS, horrible for conversation or substantive commentary. Pinboard is great for bookmarking things, for sharing links via RSS, and for seeing what bookmarks others are sharing; but it is not at all “social” in the modern sense, with no facilities for discussion or interaction other than reading others’ links and copying them to your own board. Twitter for ephemera and trivial conversations. Pinboard for links. Blog for longer content.</p>
Differences of Opinion2017-03-12T08:05:00-04:002017-03-12T08:05:00-04:00Chris Krychotag:v4.chriskrycho.com,2017-03-12:/2017/differences-of-opinion.html<p>I could not possibly agree more with <a href="http://www.pyret.org/pyret-code/">this view of teaching software/CS</a>.</p>
<blockquote>
<p>We are focused on introductory programming education at a high-school and collegiate level — what is often called “CS 1” and “CS 2” (roughly, the first year of college). Pyret is being actively used in everything from high-schools …</p></blockquote><p>I could not possibly agree more with <a href="http://www.pyret.org/pyret-code/">this view of teaching software/CS</a>.</p>
<blockquote>
<p>We are focused on introductory programming education at a high-school and collegiate level — what is often called “CS 1” and “CS 2” (roughly, the first year of college). Pyret is being actively used in everything from high-schools to upper-level collegiate courses, giving us a tight feedback loop.</p>
<p>Of course, even in that setting there are differences of opinion about what needs to be taught. Some believe inheritance is so important it should be taught early in the first semester. We utterly reject this belief (as someone once wisely said, “object-oriented programming does not scale down”: what is the point of teaching classes and inheritance when students have not yet done anything interesting enough to encapsulate or inherit from?). Some have gone so far as to start teaching with Turing Machines. Unsurprisingly, we reject this view as well.</p>
<p>What we do not take a dogmatic stance on is exactly how early state and types should be introduced. Pyret has the usual stateful operations. We discussed this at some length, but eventually decided an introduction to programming must teach state. Pyret also has optional annotations, so different instructors can, depending on their preference, introduce types at different times.</p>
</blockquote>
<p>I’m <em>delighted</em> to see work on languages like DrRacket and Pyret, and the more so because the teams behind both have been willing to set aside many of the dogmas of how CS has been taught and actually do <em>pedagogical research</em>. Also: OOP is a useful tool, but I’m with them: treating inheritance as a first-semester concept is… nutty.</p>
<p>The whole <a href="http://www.pyret.org/pyret-code/">“Why Pyret?”</a> page is worth reading if you have any interest in programming languages or teaching software development and computer science.</p>
Dear Tech CEOs: Yes, That Is Your Culture.2017-02-21T17:00:00-05:002017-02-21T17:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-02-21:/2017/dear-tech-ceos-yes-that-is-your-culture.html<p>It is common, when stories break about horrible company cultures in tech—as <a href="https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-one-very-strange-year-at-uber">one did this week about Uber</a>, and as <a href="https://www.nytimes.com/2015/08/16/technology/inside-amazon-wrestling-big-ideas-in-a-bruising-workplace.html?_r=1">one did last year about Amazon</a>—for the <abbr>CEO</abbr>s to say things like:</p>
<blockquote>
<p>What she describes is abhorrent and against everything Uber stands for and believes in. (Uber <abbr>CEO …</abbr></p></blockquote><p>It is common, when stories break about horrible company cultures in tech—as <a href="https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-one-very-strange-year-at-uber">one did this week about Uber</a>, and as <a href="https://www.nytimes.com/2015/08/16/technology/inside-amazon-wrestling-big-ideas-in-a-bruising-workplace.html?_r=1">one did last year about Amazon</a>—for the <abbr>CEO</abbr>s to say things like:</p>
<blockquote>
<p>What she describes is abhorrent and against everything Uber stands for and believes in. (Uber <abbr>CEO</abbr> Travis Kalanick’s statement quoted at <a href="http://www.recode.net/2017/2/19/14665076/ubers-travis-kalanick-susan-fowler-sexual-harassment-investigation">Recode</a>)</p>
</blockquote>
<p>Or:</p>
<blockquote>
<p>The article doesn’t describe the Amazon I know or the caring Amazonians I work with every day. (Amazon <abbr>CEO</abbr> Jeff Bezos’ statement quoted at <a href="http://www.geekwire.com/2015/full-memo-jeff-bezos-responds-to-cutting-nyt-expose-says-tolerance-for-lack-of-empathy-needs-to-be-zero/">Geekwire</a>)</p>
</blockquote>
<p>The problem with this is simple. You can say all day that these events don’t represent your culture. But they <em>do</em>. And if you’re not aware of it, that means you have two problems: the culture problem, and the fact that you’re so out of touch with what your company is actually like that you don’t know it has that culture problem.</p>
<p>And you have one other, even bigger problem. If that’s the culture of the company you’ve built, it’s <em>your fault</em>. You can’t foist it off on your underlings: you hired them. You can’t foist it off on the bureaucracy: you built it. You can’t foist it off on wrong priorities: you set those priorities. It’s on you.</p>
Better Off Using Exceptions?2017-02-20T12:00:00-05:002017-02-20T12:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-02-20:/2017/better-off-using-exceptions.html<p>I saw this post on error-handling in F<sup>♯</sup>, <a href="https://eiriktsarpalis.wordpress.com/2017/02/19/youre-better-off-using-exceptions/" title="You’re better off using Exceptions">“You’re better off using Exceptions”</a> making the rounds on Twitter:</p>
<blockquote>
<p>Exception handling is an error management paradigm that has often been met with criticism. Such criticisms typically revolve around scoping considerations, exceptions-as-control-flow abuse or even the assertion that exceptions are really …</p></blockquote><p>I saw this post on error-handling in F<sup>♯</sup>, <a href="https://eiriktsarpalis.wordpress.com/2017/02/19/youre-better-off-using-exceptions/" title="You’re better off using Exceptions">“You’re better off using Exceptions”</a> making the rounds on Twitter:</p>
<blockquote>
<p>Exception handling is an error management paradigm that has often been met with criticism. Such criticisms typically revolve around scoping considerations, exceptions-as-control-flow abuse or even the assertion that exceptions are really just a type safe version of goto. To an extent, these seem like valid concerns but it is not within the scope of this article to address those per se.</p>
<p>Such concerns resonate particularly well within FP communities, often taken to the extreme: we should reject exceptions…</p>
</blockquote>
<p>And I get the argument, and in the specific context of F<sup>♯</sup>—especially given how much C<sup>♯</sup>-interoperating and therefore exception-throwing-code-interoperating there is there—it’s reasonable.</p>
<p>But it still makes me sad. (To be clear: exceptions were and are a big win over what you get in languages like C. I’ll take them any day over <code>goto</code> or <code>segfault</code>.)</p>
<p>You need to embrace exceptions in F<sup>♯</sup> <em>because F<sup>♯</sup> has exceptions</em> and because <em>many of its libraries rely on exceptions</em>. But my experience with Rust and other non-exception-using languages is that you <em>don’t</em> need exceptions in the general case.</p>
<p>The questions are: whether your language has good support for things like flat-mapping, and whether you’re willing to commit to letting the compiler help you with these problems.</p>
<p>To be sure: there’s more work involved up front to deal with that. But that’s a tradeoff I’m <em>always</em> willing to make. I’d rather have the compiler tell me if I’m failing to account for something than learn because I saw a runtime error report come up in <a href="https://raygun.com">Raygun</a>, especially because that tends to mean an error that affects the user in some way.</p>
<p>Rust’s model gives you something like exceptions for truly unrecoverable errors, “panics.” A panic gives you all the context you’d get from an exception (one of the virtues of exceptions highlighted in that post), but you can only “catch” it at thread boundaries, and it otherwise just kills the program. Because it’s catastrophic, you only use it where you don’t have any way to recover in your immediate context. But where you can recover in your immediate context… using something like a highly descriptive enum (just as suggested at the end of <a href="https://eiriktsarpalis.wordpress.com/2017/02/19/youre-better-off-using-exceptions/" title="You’re better off using Exceptions">that original post</a>!) is a better option.</p>
<p>It’s well-understood in my circles that you shouldn’t use exceptions for things you can recover from; you should use them for things you <em>can’t</em> recover from. But in most languages which lean heavily on exceptions, you inevitably start using them for control flow. I say: if you can recover from an error… just recover from it! Account for recoverable errors as possible conditions in your program and carry on! If you can’t recover… don’t. Die and let some other part of your system kick things back off.</p>
<p>In summary: yes, if you’re in F<sup>♯</sup>, use exceptions. It <em>is</em> the right thing to do in many cases (and you don’t have a choice in many others). But I’m hopeful for a future where we handle recoverable errors locally, and <a href="http://elixir-lang.org/getting-started/mix-otp/supervisor-and-application.html">act like Erlang or Elixir otherwise</a>.</p>
On Blogging2017-01-04T23:25:00-05:002017-01-04T23:25:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-01-04:/2017/on-blogging.htmlA few thoughts on Medium, micro.blog, and the open web.<p>My wife is out of town and I had coffee at our church small group tonight, so I’m wide awake and up late, thinking about <em>blogging</em>. It’s been on my mind a lot lately. (And what follows is, appropriately, as you will see, blogging in the old style—which is to say: a bit rambly. I apologize. It’s the coffee.)</p>
<p>A friend at our small group meeting tonight mentioned his intent to start blogging this year. He had a lot of good reasons for jumping in, and I strongly encouraged it. Blogging is not for everyone—we’ll get to that—but blogging is <em>good</em>. This newish thing, writing-on-the-web-in-a-log, has been a part of my life for over a decade now. I wrote my first post on Xanga in the fall of 2005, and I have not gone more than a matter of weeks between posts since. It is not hyperbole to say I cannot imagine <em>not</em> blogging at this point. (The sheer number of words I <a href="http://v4.chriskrycho.com/2016/2016-review-2.html" title="So. many. words. I had no idea how many words.">published last year</a> should serve to drive home that point: even in a year which was full to the brim, I somehow ended up publishing almost 70,000 words.)</p>
<p>And yet, blogging is not at all like it was when I started in 2005. Both for good and for ill. The mid-2000s were in many ways the height of blogs’ power and reach. Individual sites still hosted all their own content; blogging networks were nascent; and the ability of small content providers to outdo the old guard was beginning to be felt at all levels. Bloggers made a difference in the 2004 election not <em>so</em> different than the way Twitter shaped all sorts of politics in 2016.</p>
<p>Twitter. That brings us to one of the things that has changed. Microblogging is new. And social media more generally has changed a great deal. Facebook was still the fledgling upstart nipping at MySpace’s heels when I published that first post on Xanga. Today, Facebook dominates the web, and Twitter—not even a product at all in 2005—has taken enormous chunks of the time and attention of the would-be writerly class (journalists especially).</p>
<p>I first took my own blogging <em>seriously</em> on <a href="http://blog.chriskrycho.com">the Blogger site</a> which first ran in parallel with and then displaced that Xanga. And Blogger, too, evokes a different time: when individuals setting up blogs was trendy, and when the competition between WordPress and Blogger could be called a competition. (Much-neglected Blogger trucks on still, but WordPress powers perhaps a quarter of the sites on the web.)</p>
<p>But for all that, some things haven’t changed. Business plans still matter—and Ev Williams, founder of Blogger, Twitter, and Medium, still hasn’t figured out something truly sustainable. Attention-driven advertising of the same sort that powered Blogger then and now, and which powers Twitter and Facebook equally, continues to be a race toward the bottom. Sustainable publishing on the web is a mirage for all but a few, because <a href="https://stratechery.com/2014/publishers-smiling-curve/">content is plentiful and distinguishing features few</a>. The <a href="http://daringfireball.net">Daring Fireball</a>s of the world are notable, these days, not least for how few of them there are.</p>
<p>In some ways, there is something real to mourn in the passing of the web of those early days when I started blogging. People <em>did</em> own their own content (at least, to a far greater degree than now). Blogs linked to each other, using <a href="https://en.support.wordpress.com/comments/pingbacks/">ping-backs</a> to let sites know when they’d been linked. Comment sections flourished.</p>
<p>But that era also required a level of technical knowledge that was simply too high a bar for most people. To be sure, anyone could set up a blog with enough grit, and WordPress and Blogger lowered the bar. But subscribing to another blog meant wrangling RSS and learning the arcana of managing Google Reader (which soon swallowed all competitors before its own too-delayed demise). Twitter’s “follow” button seems a revelation by comparison; it is no wonder at all that first Tumblr and then Medium embraced the idea of blogs-as-social-media, “following” and all. Being able not only to respond, and only if the author so allowed, but also to <em>initiate</em> with anyone else on the service… the first time you @-mentioned someone well-known in your circles, and they responded—that was (and is) a heady thing.</p>
<p>Centralization is often a function of convenience. Facebook and Twitter make it simple for you to “connect with” or “follow” whomever you like. No digging for RSS feeds, wondering if they have a non-standard symbol for it or hoping desperately that it’s at the root of the site + <code>feed.xml</code>, or (if you really know the secrets of the web) that they set it up as a <code><link></code> tag with a <code>rel='alternate'</code> tag so it could just be discovered automatically by a smart-enough feed reader…</p>
<p>You see? If you aren’t technical yourself, your eyes just glazed over in that paragraph, and that’s the point. The technical details make sense if you understand them. But understanding them is hard; and more to the point, they don’t matter for what people actually want to accomplish.</p>
<p>This is the fundamental mistake of Manton Reece’s new <a href="http://micro.blog">micro.blog</a> project (which I like in principle, and whose goals <a href="http://v4.chriskrycho.com/micro/">I clearly share</a>). People as a whole don’t even know there might be a reason to prefer the open web, where everyone owned their own content and there was no central clearing-house of information. Facebook offers real value to people: it shows them things they’re interested in, and keeps them coming back precisely by tailoring its algorithm to make sure they don’t see too many things they <em>don’t</em> want to see. (The polarization that helps foster may be dreadful, but it’s <a href="https://stratechery.com/2016/fake-news/">very good business</a>.) The same goes for Twitter, regardless of the structure of its timeline: people self-select into their lists of whom to follow. Manton’s project is a good one in many ways—but the problem it solves is a <a href="http://www.winningslowly.org/5.03/">Winning Slowly</a> kind of problem, and one that takes a lot of selling when the problem Facebook solves is obvious: <em>I want a news story and a picture of my cousin’s kid and a funny cat video.</em> Decentralizing, whatever its benefits (and again: note well my <em>bona fides</em> <a href="https://github.com/chriskrycho/chriskrycho.com">here</a>), makes those basic tasks <em>harder</em>. I’ve followed what is now the micro.blog project from a distance for years now—and I’ve always had this one, nagging but oh-so-important question. <em>How does this solve a <strong>user</strong> problem?</em></p>
<p>The answer, if there is one, is a decades-long play. It’s a hedge against technological oligarchy. But how do you get people to care? What’s the pitch? The technical problems are easy compared to that—and the technical problems <em>are not easy</em>; they remain almost untouched in the last decade, and micro.blog has no intent to address some of the core issues. Real-time interaction is what makes Twitter Twitter; ping-backs aren’t even close. And that’s just Twitter; Facebook outstrips it by far.</p>
<p>And Medium? Medium doesn’t know what it wants to be when it grows up (and it never has; the same as Twitter). As a second pass at Blogger, it has better aesthetics and something like a mission. Ev <a href="https://blog.medium.com/renewing-mediums-focus-98f374a960be">writes</a>:</p>
<blockquote>
<p>So, we are shifting our resources and attention to defining a new model for writers and creators to be rewarded, based on the value they’re creating for people. And toward building a transformational product for curious humans who want to get smarter about the world every day.</p>
</blockquote>
<p>That’s a lovely-sounding sentiment. It’s also—for today, at least—contentless business blather. “Building a transformational product for curious humans who want to get smarter about the world every day” sounds great, but it doesn’t mean anything. Medium is a beautiful product without a reason to exist. (How often do you see a founder basically admit: <em>We have no idea what we’re doing here</em>? But that’s roughly what Ev did.) That doesn’t mean it shouldn’t exist. It just means no one has thought of a <em>good</em> reason for it to exist just yet.</p>
<p>Medium as a centralized, social medium for longer-form writing is better than nothing. I’ll take Medium + Facebook + Google over just Facebook + Google any day. But is there something lost when every blog post looks the same, and when everyone is locked into one more centralized platform? Yes. Just as there is something <em>gained</em> by people having a place to look. The questions are: whether the costs are indeed higher than the benefits; and even if so whether people can be persuaded of those costs when they all take a decade to appear, and Medium is really pleasant to scroll through <em>right now</em>.</p>
<p>By whatever quirk of temperament, I’m old-school about blogs. I’d love for the open web to win out over all the centralizers. That’s not going to happen: <em>centralization provides too much value to users</em>. But we can hope the open web will flourish alongside centralized sources. And, far more importantly, we can work to that end.</p>
<p>We can show people why it matters, and teach them how to own for themselves even the things they publish on Facebook. We can make philosophies like <abbr title="Publish (on your) Own Site, Syndicate Everywhere"><a href="https://indieweb.org/POSSE">POSSE</a></abbr> easier to implement (because right now it’s just plain <em>hard</em>). We can let Twitter be secondary and our own blogs primary as a way of setting an example. We can do the hard technical work of figuring out something like real-time, decentralized, better-than-ping-back commenting-and-threading-and-responding for all sorts of content on the web.</p>
<p>But even solving those technical problems will require us to recognize that the bigger and more important problems are <em>human</em> problems. It’s going to require distinguishing, for example, between <a href="https://medium.com/matter/the-web-we-have-to-save-2eb1fe15a426">the web we have to save</a> and <a href="http://v4.chriskrycho.com/2016/12-31-0817.html">mere ephemera</a>. If there are goods of how-blogging-was-in-2008 that are worth keeping, what if anything do they have to do with whether the content of nearly any Twitter account is “owned” by the user who generated it? Will the user care, two decades from now? (Or two days?)</p>
<p>Put another way, we need to care about the open web not in some general or abstract sense, and certainly not just on its technical merits, but instead—and quite specifically—as one means of serving other people. If we cannot express it in those terms, we show that we do not understand the real problems at all. It wouldn’t be the first time a bunch of technically-oriented nerds missed the boat.</p>
TypeScript keyof Follow-Up2017-01-03T20:35:00-05:002017-01-08T17:47:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-01-03:/2017/typescript-keyof-follow-up.html<p>I recently wrote up some neat things you can do with <a href="http://v4.chriskrycho.com/2016/keyof-and-mapped-types-in-typescript-21.html"><code>keyof</code> and mapped types</a> in TypeScript 2.1. In playing further with those bits, I ran into some interesting variations on the approach I outlined there, so here we are.</p>
<p>In the previous post, I concluded with an example …</p><p>I recently wrote up some neat things you can do with <a href="http://v4.chriskrycho.com/2016/keyof-and-mapped-types-in-typescript-21.html"><code>keyof</code> and mapped types</a> in TypeScript 2.1. In playing further with those bits, I ran into some interesting variations on the approach I outlined there, so here we are.</p>
<p>In the previous post, I concluded with an example that looked like this:</p>
<pre class="typescript"><code>type UnionKeyToValue<U extends string> = {
  [K in U]: K
};

type State
  = 'Pending'
  | 'Started'
  | 'Completed';

// Use `State` as the type parameter to `UnionKeyToValue`.
const STATE: UnionKeyToValue<State> = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
}</code></pre>
<p>That <code>UnionKeyToValue<State></code> type constraint requires us to fill out the <code>STATE</code> object as expected. The whole point of this exercise was to give us the benefit of code completion with that <code>STATE</code> object, so we could use it and not be worried about the kinds of typos that too often bite us with stringly-typed arguments in JavaScript.</p>
<p>It turns out we don’t <em>need</em> that to get completion, though. All editors which use the TypeScript language service will give us the same degree of completion if we start typing a string and then trigger completion:</p>
<figure>
<img src="https://f001.backblazeb2.com/file/chriskrycho-com/images/more-ts.gif" title="screen capture of string completion in VS Code" alt="string completion with TypeScript 2.1" /><figcaption>string completion with TypeScript 2.1</figcaption>
</figure>
<p>Granted, you have to know this is a string (though the JetBrains <abbr title="integrated development environment">IDE</abbr>s will actually go a step further and suggest the right thing <em>without</em> needing the string key). But that’s roughly equivalent to knowing you need to import the object literal constant to get the completion that way. Six of one, half a dozen of the other, I think.</p>
<p>This makes it something of a wash with the original approach, as long as you’re dealing in a pure-TypeScript environment. The big advantage that the original approach still has, of course, is that it also plays nicely with a mixed TypeScript and JavaScript environment. If you’re just progressively adding TypeScript to an existing JavaScript codebase, that’s possibly reason enough to stick with it.</p>
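<p>To make that concrete, here is a minimal sketch (mine, not from the original post) using only the union type. Even without the object constant, typos in the string literals are still compile errors:</p>

```typescript
type State = 'Pending' | 'Started' | 'Completed';

// The compiler knows this switch is exhaustive over the union.
function nextState(state: State): State {
  switch (state) {
    case 'Pending': return 'Started';
    case 'Started': return 'Completed';
    case 'Completed': return 'Completed';
  }
}

const next = nextState('Pending'); // editors offer 'Pending' | 'Started' | 'Completed' here
// nextState('Pednign'); // compile error: not assignable to parameter of type 'State'
```

So the choice between the union alone and the union-plus-object really does come down to the mixed-codebase and renaming considerations discussed here.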
<p><strong>Edit</strong>: an additional reason to prefer my original solution:</p>
<blockquote class="twitter-tweet" data-lang="en">
<p lang="en" dir="ltr">
<a href="https://twitter.com/chriskrycho">@chriskrycho</a> <a href="https://twitter.com/typescriptlang">@typescriptlang</a> I think a benefit of your previous solution is that you can rename keys and all their usages.
</p>
— Timm (@timmpreetz) <a href="https://twitter.com/timmpreetz/status/816672215924097024">January 4, 2017</a>
</blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
On ebooks (again)2017-01-02T07:55:00-05:002017-01-02T07:55:00-05:00Chris Krychotag:v4.chriskrycho.com,2017-01-02:/2017/on-ebooks-again.html<p>Yesterday, I <a href="https://twitter.com/chriskrycho/status/815583525177004032">tweet-complained</a> about Lifeway’s ebook policy; posted <a href="http://v4.chriskrycho.com/2017/01-01-1056.html">here</a> on this site:</p>
<blockquote>
<p>For the record: <a href="https://twitter.com/LifeWay">@LifeWay</a> has the single most user-hostile ebook policy I’ve encountered anywhere. Shame on whoever made these decisions. <a href="https://twitter.com/chriskrycho/status/815583525177004032">∞</a></p>
</blockquote>
<p>They <a href="https://twitter.com/LifeWay/status/815665143388442624">invited feedback</a>:</p>
<blockquote>
<p><a href="https://twitter.com/chriskrycho">@chriskrycho</a> We’re sorry you feel this way. Please DM us the specifics …</p></blockquote><p>Yesterday, I <a href="https://twitter.com/chriskrycho/status/815583525177004032">tweet-complained</a> about Lifeway’s ebook policy; posted <a href="http://v4.chriskrycho.com/2017/01-01-1056.html">here</a> on this site:</p>
<blockquote>
<p>For the record: <a href="https://twitter.com/LifeWay">@LifeWay</a> has the single most user-hostile ebook policy I’ve encountered anywhere. Shame on whoever made these decisions. <a href="https://twitter.com/chriskrycho/status/815583525177004032">∞</a></p>
</blockquote>
<p>They <a href="https://twitter.com/LifeWay/status/815665143388442624">invited feedback</a>:</p>
<blockquote>
<p><a href="https://twitter.com/chriskrycho">@chriskrycho</a> We’re sorry you feel this way. Please DM us the specifics of your concern. We are always trying to improve. <a href="https://twitter.com/LifeWay/status/815665143388442624">∞</a></p>
</blockquote>
<p>So they got it. But since the content of my response is not specific to Lifeway—I’ve seen the same kind of user/customer-hostile behavior in many places, albeit not to the same degree—I thought I’d repost it publicly:</p>
<blockquote>
<p>Per your request, sending some info about ebooks. To be clear, I know whoever runs the social media account is <em>far</em> removed from this kind of decision, and I don’t hold anyone but the corporate decision-makers remotely responsible.</p>
<p>The current status on ebooks purchased at LifeWay is that you can read them only in the dedicated LifeWay app or in a web view (the latter of which is, frankly, terrible—both as a consumer of many such apps and as a web software developer).</p>
<p>Experientially, this makes for a massively more frustrating experience for the user. There is no reason why a customer should not be able to read the book on her Kindle, his Kobo, her Dell PC, his Mac, and so on, except for the idea that restricting readers this way will somehow “prevent piracy.”</p>
<p>Empirically, however, these kinds of moves <em>do not</em> prevent piracy. Any and all DRM methods can and will be cracked (and indeed have been).</p>
<p>The net result, then, is a massive inconvenience to customers, with no actual increase in sales for the distributor.</p>
<p>For some corroborating evidence in this direction, please see <a href="http://www.tor.com/2013/04/29/tor-books-uk-drm-free-one-year-later/">here</a>. Note especially this comment (emphasis mine):</p>
<blockquote>
<p>Protecting our author’s intellectual copyright will always be of a key concern to us and we have very stringent anti-piracy controls in place. But DRM-protected titles are still subject to piracy, and we believe a great majority of readers are just as against piracy as publishers are, understanding that piracy impacts on an author’s ability to earn an income from their creative work. <strong><em>As it is, we’ve seen no discernible increase in piracy on any of our titles, despite them being DRM-free for nearly a year.</em></strong></p>
</blockquote>
<p>Please respect your customers. Let us put the ebooks in whatever reading app or device we choose.</p>
<p>And please also understand that until you do so, I will go out of my way to warn people off of buying <em>any</em> ebook from Lifeway, because the current approach is so consumer-hostile.</p>
<p>Again, I understand entirely that the folks running social media have nothing to do with these policies; I hope you’re able to pass it along to those who can and do. Have a great 2017!</p>
</blockquote>
The Itch2016-12-19T21:45:00-05:002016-12-19T21:45:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-12-19:/2016/the-itch.htmlLearning functional programming has been simultaneously wildly new and deeply familiar. It's the answer to questions I've been asking for most of a decade.
<p>It took me until just a few weeks ago to put my finger on why typed functional programming, as a style and approach, has appealed to me so much as I started picking it up over the last year. For all its novelty, typed FP feels—over and over again—<em>familiar</em>. Strange to say, but it’s true.</p>
<p>This came home to me again when reading a <a href="https://medium.com/@dtinth/what-is-a-functor-dcf510b098b6#.e887yz63p">short post on functors</a>—i.e., <em>mappable</em> types. I’ve written a lot of JavaScript in the last few years, and it has been a source of constant frustration to me that <code>Array</code> implements the <code>map</code> method, but <code>Object</code> does not. Countless times, I have wanted to take an object shaped like <code>{ count: <number> }</code> and transform that <code>count</code>. I’m not alone in that. There’s a reason that libraries like <a href="http://underscorejs.org">Underscore</a>, <a href="https://lodash.com">Lodash</a>, and <a href="http://ramdajs.com">Ramda</a> all supply utilities to allow you to map over objects. There are also good reasons why it <em>isn’t</em> implemented on <code>Object.prototype</code>: the reality is that coming up with a predictable <em>and</em> useful API for <em>all</em> <code>Object</code> instances is difficult at best: Objects are used for everything from dictionaries to records and strange combinations of the two. But still: there’s something there.</p>
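<p>To make the idea concrete, here is a small sketch of such a helper. (<code>mapValues</code> is my own illustrative name here, in the spirit of the utilities those libraries provide, not a built-in.)</p>

```typescript
// A sketch of treating a plain object as something mappable over its values,
// along the lines of the helpers Underscore/Lodash/Ramda supply.
// `mapValues` is a hypothetical helper name, not a standard API.
function mapValues<T, U>(
  obj: Record<string, T>,
  fn: (value: T) => U
): Record<string, U> {
  const result: Record<string, U> = {};
  for (const key of Object.keys(obj)) {
    result[key] = fn(obj[key]);
  }
  return result;
}

const incremented = mapValues({ count: 2 }, (n) => n + 1);
// incremented is { count: 3 }
```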
<p>And reading this post on functors, it struck me what that “something” is: object types are, in principle, functors. Maybe it doesn’t make sense to have a single <code>map</code> implementation for every <code>Object</code> instance out there. But they’re perfectly mappable. I didn’t have a word for this before tonight, but now I do. Over and over again, this is my experience with functional programming.</p>
<p>There’s this familiar feeling of frustration I’m slowly coming to recognize—a mental sensation which is a little like the intellectual equivalent of an itch in a spot you can’t quite reach. You’re reaching for an abstraction to express an idea, but you don’t even know that there <em>is</em> an abstraction for it. You want to map over objects, and you don’t know why that seems so reasonable, but it does. And then someone explains functors to you. It scratches the itch.</p>
<p>Another example. Since I started programming eight and a half years ago, I’ve worked seriously with Fortran, C, C++, PHP, Python, and JavaScript. In each of those languages (and especially in the C-descended languages), I have found myself reaching for enums or things like them as a way of trying to represent types and states in my system in a more comprehensive way. I figured out that you should <a href="http://wiki.c2.com/?UseEnumsNotBooleans">use enums not booleans</a> a long time before I found the advice on the internet. I was encoding error types as enum values instead of just using <code>int</code>s almost as soon as I started, because it was obvious to me that <code>ErrorCode someFunction() { ... }</code> was far more meaningful than <code>int someFunction() { ... }</code> (even if the context of C meant that the latter often implied the former, and even if it was trivial to coerce one to the other).</p>
<p>Then I read <a href="https://gumroad.com/l/maybe-haskell/"><em>Maybe Haskell</em></a>, a book I’ve mentioned often on this blog because it was so revelatory for me. This is what I had been reaching for all those years—and then some. Handing data around with the constraints? Yes, please! I had played with unions, enums, structs with enums inside them, anything to try to get some type-level clarity and guarantees about what my code was doing. Haskell showed me the way; and since then Rust and Elm and F# have reinforced it many times over. <a href="https://guide.elm-lang.org/types/union_types.html">Tagged unions</a> are a joy. They let me say what I mean—finally.</p>
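<p>For readers who haven’t met the pattern, here is a minimal TypeScript sketch of a tagged union. (The <code>Shape</code> example is mine, not drawn from any of the books or languages above.)</p>

```typescript
// A tagged union: the `kind` field tells the compiler which variant you have,
// and the compiler narrows the type accordingly in each branch.
type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'rect'; width: number; height: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case 'circle':
      // here `shape` is narrowed to the circle variant
      return Math.PI * shape.radius * shape.radius;
    case 'rect':
      // here `shape` is narrowed to the rect variant
      return shape.width * shape.height;
  }
}
```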
<p>I can still feel that itch. It’s shifted a little, but it’s still there: reaching for higher abstractions to let me tell the machine more clearly what I intend. Half a dozen times this year, I’ve realized: <em>Here</em> is where dependent types would be useful. They’re far beyond me, but close enough now I can see. I’m sure a year from now, I’ll have found some tools to scratch <em>these</em> itches, only to discover a few more.</p>
keyof and Mapped Types In TypeScript 2.12016-12-17T23:25:00-05:002016-12-18T09:55:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-12-17:/2016/keyof-and-mapped-types-in-typescript-21.html<p>In the last few months, I’ve been playing with both <a href="https://flowtype.org">Flow</a> and <a href="http://www.typescriptlang.org">TypeScript</a> as tools for increasing the quality and reliability of the JavaScript I write at Olo. Both of these are syntax that sits on top of normal JavaScript to add type analysis—basically, a form of <a href="https://en.wikipedia.org/wiki/Gradual_typing">gradual …</a></p><p>In the last few months, I’ve been playing with both <a href="https://flowtype.org">Flow</a> and <a href="http://www.typescriptlang.org">TypeScript</a> as tools for increasing the quality and reliability of the JavaScript I write at Olo. Both of these are syntax that sits on top of normal JavaScript to add type analysis—basically, a form of <a href="https://en.wikipedia.org/wiki/Gradual_typing">gradual typing</a> for JS.</p>
<p>Although TypeScript’s tooling has been better all along,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I initially preferred Flow’s type system quite a bit: it has historically been much more focused on <a href="http://stackoverflow.com/questions/21437015/soundness-and-completeness-of-systems">soundness</a>, especially around the <em>many</em> problems caused by <code>null</code> and <code>undefined</code>, than TypeScript. And it had earlier support for <a href="https://flowtype.org/docs/disjoint-unions.html">tagged unions</a>, a tool I’ve come to find invaluable since picking them up from my time with Rust.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> But the 2.0 and 2.1 releases of TypeScript have changed the game substantially, and it’s now a <em>very</em> compelling language in its own right—not to mention a great tool for writing better JavaScript. So I thought I’d highlight how you can get a lot of the benefits you would get from the type systems of languages like Elm with some of those new TypeScript features: the <em><code>keyof</code> operator</em> and <em>mapped types</em>.</p>
<hr />
<p><i>Some readers may note that what I’m doing here is a <em>lot</em> of wrangling to cajole TypeScript into giving me the kinds of things you get for free in an ML-descended language. Yep. The point is that you <em>can</em> wrangle it into doing this.</i></p>
<hr />
<section id="plain-old-javascript" class="level3">
<h3>Plain old JavaScript</h3>
<p>Let’s say we want to write a little state machine in terms of a function to go from one state to the next, like this:</p>
<pre class="javascript"><code>function nextState(state) {
  switch (state) {
    case 'Pending': return 'Started';
    case 'Started': return 'Completed';
    case 'Completed': return 'Completed';
    default: throw new Error(`Bad state: ${state}`);
  }
}</code></pre>
<p>This will work, and it’ll even throw an error if you hand it the wrong thing. But you’ll find out at runtime if you accidentally typed <code>nextState('Pednign')</code> instead of <code>nextState('Pending')</code>—something I’ve done more than once in the past. You’d have a similar problem if you’d accidentally written <code>case 'Strated'</code> instead of <code>case 'Started'</code>.</p>
<p>There are many contexts like this one in JavaScript—perhaps the most obvious being <a href="http://redux.js.org/docs/basics/Actions.html">Redux actions</a>, but I get a lot of mileage out of the pattern in Ember, as well. In these contexts, I find it’s convenient to define types that are kind of like pseudo-enums or pseudo-simple-unions, like so:<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<pre class="javascript"><code>const STATE = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
};</code></pre>
<p>Once you’ve defined an object this way, instead of using strings directly in functions that take it as an argument, like <code>nextState('Started')</code>, you can use the object property: <code>nextState(STATE.Started)</code>. You can rewrite the function body to use the object definition instead as well:</p>
<pre class="javascript"><code>function nextState(state) {
  switch (state) {
    case STATE.Pending: return STATE.Started;
    case STATE.Started: return STATE.Completed;
    case STATE.Completed: return STATE.Completed;
    default: throw new Error(`Bad state: ${state}`);
  }
}</code></pre>
<p>Using the object and its keys instead gets you something like a namespaced constant. As a result, you can get more help with things like code completion from your editor, along with warnings or errors from your linter if you make a typo. You’ll also get <em>slightly</em> more meaningful error messages if you type the wrong thing. For example, if you type <code>STATE.Strated</code> instead of <code>STATE.Started</code>, any good editor will give you an error—especially if you’re using a linter. At Olo, we use <a href="http://eslint.org">ESLint</a>, and we have it <a href="https://github.com/ember-cli/ember-cli-eslint/">set up</a> so that this kind of typo/linter error fails our test suite (and we never merge changes that don’t pass our test suite!).</p>
<p>This is about as good a setup as you can get in plain-old JavaScript. As long as you’re disciplined and always use the object, you get some real benefits from using this pattern. But you <em>always</em> have to be disciplined. If someone who is unfamiliar with this pattern types <code>nextState('whifflebats')</code> somewhere, well, we’re back to blowing up at runtime. Hopefully your test suite catches that.</p>
</section>
<section id="typescript-to-the-rescue" class="level3">
<h3>TypeScript to the rescue</h3>
<p>TypeScript gives us the ability to <em>guarantee</em> that the contract is met (that we’re not passing the wrong value in). As of the latest release, it also lets us guarantee that the <code>STATE</code> object is set up the way we expect. And last but not least, we get some actual productivity boosts when writing the code, not just when debugging it.</p>
<p>Let’s say we decided to constrain our <code>nextState</code> function so that it had to both take and return some kind of <code>State</code>, representing one of the states we defined above. We’ll leave a <code>TODO</code> here indicating that we need to figure out how to write the type of <code>State</code>, but the function definition would look like this:</p>
<pre class="typescript"><code>// TODO: figure out how to define `State`
function nextState(state: State): State {
  // the same body...
}</code></pre>
<p>TypeScript has had <a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#union-types">union types</a> since the <a href="https://www.typescriptlang.org/docs/handbook/release-notes/typescript-1-4.html">1.4 release</a>, so they might seem like an obvious choice, and indeed we could easily write a type definition for the strings in <code>STATE</code> as a union:</p>
<pre class="typescript"><code>type State = 'Pending' | 'Started' | 'Completed';</code></pre>
<p>Unfortunately, you can’t write something like <code>State.Pending</code> somewhere; you have to write the plain string <code>'Pending'</code> instead. You still get some of the linting benefits you got with the approach outlined above via TypeScript’s actual type-checking, but you don’t get <em>any</em> help with autocompletion. Can we get the benefits of both?</p>
<p>Yes! (This would be a weird blog post if I just got this far and said, “Nope, sucks to be us; go use Elm instead.”)</p>
<p>As of the 2.1 release, TypeScript lets you define types in terms of keys, so you can write a type like this:<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<pre class="typescript"><code>const STATE = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
};

type StateFromKeys = keyof typeof STATE;</code></pre>
<p>Then you can use that type any place you need to constrain the type of a variable, or a return, or whatever:</p>
<pre class="typescript"><code>const goodState: StateFromKeys = STATE.Pending;

// error: type '"Blah"' is not assignable to type 'State'
const badState: StateFromKeys = 'Blah';

interface StateMachine {
  (state: StateFromKeys): StateFromKeys;
}

const nextState: StateMachine = (state) => {
  // ...
}</code></pre>
<p>The upside to this is that now you can guarantee that anywhere you’re supposed to be passing one of those strings, you <em>are</em> passing one of those strings. If you pass in <code>'Compelte'</code>, you’ll get an actual error—just like if we had used the union definition above. At a minimum, that will be helpful feedback in your editor. Maximally, depending on how you have your project configured, it may not even generate any JavaScript output.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> So that’s a significant step forward beyond what we had even with the best linting rules in pure JavaScript.</p>
</section>
<section id="going-in-circles" class="level3">
<h3>Going in circles</h3>
<p>But wait, we can do more! TypeScript 2.1 <em>also</em> came with a neat ability to define “mapped types,” which map one object type to another. They have a few <a href="http://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-1.html#mapped-types">interesting examples</a> which are worth reading. What’s interesting to us here is that you can write a type like this:</p>
<pre class="typescript"><code>type StateAsMap = {
  [K in keyof typeof STATE]: K
}</code></pre>
<p>And of course, you can simplify that using the type we defined above, since <code>StateFromKeys</code> was just <code>keyof typeof STATE</code>:</p>
<pre class="typescript"><code>type StateAsMap = {
  [K in StateFromKeys]: K
}</code></pre>
<p>We’ve now defined an object type whose <em>key</em> has to be one of the items in the <code>State</code> type.</p>
<p>Now, by itself, this isn’t all that useful. Above, we defined that as the keys on the <code>STATE</code> object, but if we tried to use that in conjunction with this new type definition, we’d just end up with a recursive type definition: <code>StateFromKeys</code> defined as the keys of <code>STATE</code>, <code>StateAsMap</code> defined in terms of the elements of <code>StateFromKeys</code>, and then <code>STATE</code> defined as a <code>StateAsMap</code>…</p>
<pre class="typescript"><code>const STATE: StateAsMap = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
}

type StateFromKeys = keyof typeof STATE;

type StateAsMap = {
  [K in StateFromKeys]: K
}</code></pre>
<p>You end up with multiple compiler errors here, because of the circular references. This approach won’t work. If we take a step back, though, we can work through this (and actually end up someplace better).</p>
</section>
<section id="join-forces" class="level3">
<h3>Join forces!</h3>
<p>First, let’s start by defining the mapping generically. After all, the idea here was to be able to use this concept all over the place—e.g. for <em>any</em> Redux action, not just one specific one. We don’t need this particular <code>State</code>; we just need a constrained set of strings (or numbers) to be used as the key of an object:</p>
<pre class="typescript"><code>type MapKeyAsValue<Key extends string> = {
  [K in Key]: K
};</code></pre>
<p>In principle, if we didn’t have to worry about the circular references, we could use that to constrain our definition of the original <code>STATE</code> itself:</p>
<pre class="typescript"><code>const STATE: MapKeyAsValue<State> = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
};</code></pre>
<p>So how to get around the problem of circular type definitions? Well, it turns out that the <code>K</code> values in these <code>StateObjectKeyToValue</code> and <code>StateUnionKeyToValue</code> types are equivalent:</p>
<pre class="typescript"><code>// Approach 1, using an object
const STATE = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
};

type StateFromKeys = keyof typeof STATE;

type StateObjectKeyToValue = {
  [K in StateFromKeys]: K // <- K is just the keys!
};

// Approach 2, using unions
type StateUnion = 'Pending' | 'Started' | 'Completed';

type StateUnionKeyToValue = {
  [K in StateUnion]: K // <- K is also just the keys!
};</code></pre>
<p>Notice that, unlike the <code>StateObjectKeyToValue</code> version, <code>StateUnionKeyToValue</code> doesn’t make any reference to the <code>STATE</code> object. So we can use <code>StateUnionKeyToValue</code> to constrain <code>STATE</code>, and then just use <code>StateUnion</code> to constrain all the places we want to <em>use</em> one of those states. Once we put it all together, that would look like this:</p>
<pre class="typescript"><code>type StateUnion = 'Pending' | 'Started' | 'Completed';

type StateUnionKeyToValue = {
  [K in StateUnion]: K
};

const STATE: StateUnionKeyToValue = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
};</code></pre>
<p>By doing this, we get two benefits. First, <code>STATE</code> now has to supply the key and value for <em>all</em> the union’s variants. Second, we know that the key and value are the same, and that they map to the union’s variants. These two facts mean that we can be 100% sure that wherever we define something as requiring a <code>State</code>, we can supply one of the items on <code>STATE</code> and it will be guaranteed to be correct. If we change the <code>State</code> union definition, everything else will need to be updated, too.</p>
<p>Now we can make this generic, so it works for types besides just this one set of states—so that it’ll work for <em>any</em> union type with string keys, in fact. (That string-key constraint is important because objects in TypeScript can currently only use strings or numbers as keys; whereas union types can be all sorts of things.) Apart from that constraint on the union, though, we can basically just substitute a generic type parameter <code>U</code>, for “union,” where we had <code>StateUnion</code> before.</p>
<pre class="typescript"><code>type UnionKeyToValue<U extends string> = {
  [K in U]: K
};</code></pre>
<p>Then any object we say conforms to this type will take a union as its type parameter, and every key on the object must have exactly the same value as the key name:</p>
<pre class="typescript"><code>type State = 'Pending' | 'Started' | 'Completed';

// Use `State` as the type parameter to `UnionKeyToValue`.
const STATE: UnionKeyToValue<State> = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
}</code></pre>
<p>If any of those don’t have <em>exactly</em> the same value as the key name, you’ll get an error. So, each of the following value assignments would fail to compile, albeit for different reasons (top to bottom: capitalization, misspelling, and missing a letter).</p>
<pre class="typescript"><code>const BAD_STATE: UnionKeyToValue<State> = {
  Pending: 'pending',    // look ma, no capitals
  Started: 'Strated',    // St-rated = whuh?
  Completed: 'Complete', // so tense
};</code></pre>
<p>You’ll see a compiler error that looks something like this:</p>
<blockquote>
<div class="line-block">[ts]<br />
Type ‘{ Pending: “pending”; Started: “Strated”; Completed: “Complete” }’ is not assignable to type ‘UnionKeyToValue<State>’.<br />
Types of property ‘Pending’ are incompatible.<br />
Type ‘“pending”’ is not assignable to type ‘“Pending”’.</div>
</blockquote>
<p>Since the key and the name don’t match, the compiler tells us we didn’t keep the constraint we defined on what these types should look like. Similarly, if you forget an item from the union, you’ll get an error. If you add an item that isn’t in the original union, you’ll get an error. Among other things, this means that you can be confident that if you add a value to the union, the rest of your code won’t compile until you include cases for it. You get all the power and utility of using union types, <em>and</em> you get the utility of being able to use the object as a namespace of sorts.<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a></p>
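<p>To restate those guarantees compactly (the commented-out cases are my own illustrations of the errors described above):</p>

```typescript
type State = 'Pending' | 'Started' | 'Completed';

type UnionKeyToValue<U extends string> = {
  [K in U]: K
};

// Compiles: every variant is present, and each value matches its key.
const STATE: UnionKeyToValue<State> = {
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
};

// Each of these would fail to compile:
// const MISSING: UnionKeyToValue<State> = {
//   Pending: 'Pending',
//   Started: 'Started',
// }; // error: property 'Completed' is missing
//
// const EXTRA: UnionKeyToValue<State> = {
//   Pending: 'Pending',
//   Started: 'Started',
//   Completed: 'Completed',
//   Cancelled: 'Cancelled',
// }; // error: 'Cancelled' does not exist in type 'UnionKeyToValue<State>'
```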
<p>And the TypeScript language service—which you can use from a <em>lot</em> of editors, including VS Code, Atom, Sublime Text, and the JetBrains IDEs—will actually give you the correct completion when you start defining a type. So imagine we were defining some other union type elsewhere in our program to handle events. Now we can use the same <code>UnionKeyToValue</code> type to construct this type, with immediate, <em>correct</em> feedback from the TypeScript language service:</p>
<figure>
<video autoplay=autoplay muted=muted playsinline=playsinline loop=loop>
<source type='video/mp4' src='https://f001.backblazeb2.com/file/chriskrycho-com/images/completion.mp4'>
</video>
<figcaption>
TypeScript live code completion of the mapped type
</figcaption>
</figure>
<p>By inverting our original approach of using <code>keyof</code> (itself powerful and worth using in quite a few circumstances) and instead using the new mapped types, we get a <em>ton</em> of mileage in terms of productivity when using these types—errors prevented, and speed of writing the code in the first place increased as well.</p>
<p>Yes, it’s a little verbose and it does require duplicating the strings whenever you define one of these types.<a href="#fn7" class="footnote-ref" id="fnref7" role="doc-noteref"><sup>7</sup></a> But, and this is what I find most important: there is only one <em>source</em> for those string keys, the union type, and it is definitive. If you change that central union type, everything else that references it, including the namespace-like object, will fail to compile until you make the same change there.</p>
<figure>
<video autoplay=autoplay muted=muted playsinline=playsinline loop=loop>
<source type='video/mp4' src='https://f001.backblazeb2.com/file/chriskrycho-com/images/change-union.mp4'>
</video>
<figcaption>
Updating a union
</figcaption>
</figure>
<p>So it’s a lot more work than it would be in, say, Elm. But it’s also a lot more guarantees than I’d get in plain-old-JavaScript, or even TypeScript two months ago.</p>
<p>I’ll call that a win.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>it’s no surprise that Microsoft’s developer tooling is stronger than Facebook’s<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>along with all the other ML-descended languages I’ve played with, including Haskell, F<sup>♯</sup>, PureScript, and Elm.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Aside: to be extra safe and prevent any confusion or mucking around, you should probably call <code>Object.freeze()</code> on the object literal, too:</p>
<pre class="javascript"><code>const STATE = Object.freeze({
  Pending: 'Pending',
  Started: 'Started',
  Completed: 'Completed',
});</code></pre>
<p>Both convention and linters make it unlikely you’ll modify something like this directly—but impossible is better than unlikely.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Flow has supported this feature for some time; you can write <code>$Keys<typeof STATE></code>—but the feature is entirely undocumented.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>Set your <code>"compilerOptions"</code> key in your <code>tsconfig.json</code> to include <code>"noEmitOnError": true,</code>.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>For namespacing in a more general sense, you should use… <a href="http://www.typescriptlang.org/docs/handbook/namespaces.html">namespaces</a>.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn7" role="doc-endnote"><p>It would be great if we could get these benefits without the duplication—maybe someday we’ll have better support in JS or TS natively.<a href="#fnref7" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust development using VS Code on OS X, debugging included2016-11-18T20:48:00-05:002016-11-18T20:48:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-11-18:/2016/rust-development-using-vs-code-on-os-x-debugging-included.html<p><a href="https://medium.com/@royalstream/rust-development-using-vs-code-on-os-x-debugging-included-bc10c9863777#.wgjbgie5a">Super handy guide for getting a debugging/IDE environment set up for Rust.</a></p>
Using Rust for ‘Scripting’2016-11-14T22:00:00-05:002016-11-15T09:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-11-14:/2016/using-rust-for-scripting.htmlWhy I might use Rust instead of Python, with walkthroughs for building a simple "script"-like program and a guide for cross-compiling Rust code to Windows from macOS.
<p><i class=editorial><strong>Edit</strong>: fixed some typos, cleaned up implementation a bit based on feedback around the internet.</i></p>
<p><i class=editorial>A lightly edited version of this post was syndicated in <a href="https://hackerbits.com/chris-krycho-using-rust-for-scripting/">Hacker Bits, Issue 13</a>.</i></p>
<hr />
<section id="i.-using-rust-instead-of-python" class="level2">
<h2>I. Using Rust Instead of Python</h2>
<p>A friend asked me today about writing a little script to do a simple conversion of the names of some files in a nested set of directories. Everything with one file extension needed to get another file extension. After asking if it was the kind of thing where he had time to and/or wanted to learn how to do it himself (always important when someone has expressed that interest more generally), I said, “Why don’t I do this in Rust?”</p>
<p>Now, given the description, you might think, <i class=thought>Wouldn’t it make more sense to do that in Python or Perl or even just a shell script?</i> And the answer would be: it depends—on what the target operating system is, for example, and what the person’s current setup is. I knew, for example, that my friend is running Windows, which means he doesn’t have Python or Perl installed. I’m not a huge fan of either batch scripts or PowerShell (and I don’t know either of them all that well, either).</p>
<p>I could have asked him to install Python. But, on reflection, I thought: <i class=thought>Why would I do that? I can write this in Rust.</i></p>
<p>Writing it in Rust means I can compile it and hand it to him, and he can run it. And that’s it. As wonderful as they are, the fact that languages like Python, Perl, Ruby, JavaScript, etc. require having the runtime bundled up with them makes just shipping a tool a lot harder—<em>especially</em> on systems which aren’t a Unix derivative and don’t have them installed by default. (Yes, I know that <em>mostly</em> means Windows, but it doesn’t <em>solely</em> mean Windows. And, more importantly: the vast majority of the desktop-type computers in the world <em>still run Windows</em>. So that’s a big reason all by itself.)</p>
<p>So there’s the justification for shipping a compiled binary. Why Rust specifically? Well, because I’m a fanboy. (But I’m a fanboy because Rust often gives you roughly the feel of using a high-level language like Python, but lets you ship standalone binaries. The same is true of a variety of other languages, too, like Haskell; but Rust is the one I know and like right now.)</p>
<p><i class=editorial><strong>Edit the second:</strong> this is getting a lot of views from Hacker News, and it’s worth noting: I’m not actually advocating that everyone stop using shell scripts for this kind of thing. I’m simply noting that it’s <em>possible</em> (and sometimes even <em>nice</em>) to be able to do this kind of thing in Rust, cross-compile it, and just ship it. And hey, types are nice when you’re trying to do more sophisticated things than I’m doing here! Also, for those worried about running untrusted binaries: I handed my friend the code, and would happily teach him how to build it.</i></p>
</section>
<section id="ii.-building-a-simple-script" class="level2">
<h2>II. Building a Simple “Script”</h2>
<p>Building a “script”-style tool in Rust is pretty easy, gladly. I’ll walk through exactly what I did to create this “script”-like tool for my friend. My goal here was to rename every file in a directory from <code>*.cha</code> to <code>*.txt</code>.</p>
<ol type="1">
<li><p><a href="https://www.rust-lang.org/en-US/downloads.html">Install Rust.</a></p></li>
<li><p>Create a new binary:</p>
<pre class="sh"><code>cargo new --bin rename-it</code></pre></li>
<li><p>Add the dependencies to the Cargo.toml file. I used the <a href="https://doc.rust-lang.org/glob/glob/index.html">glob</a> crate for finding all the <code>.cha</code> files and the <a href="https://clap.rs">clap</a> crate for argument parsing.</p>
<pre class="toml"><code>[package]
name = "rename-it"
version = "0.1.0"
authors = ["Chris Krycho <[email protected]>"]

[dependencies]
clap = "2.15.0"
glob = "0.2"</code></pre>
<li><p>Add the actual implementation to the <code>main.rs</code> file (iterating till you get it the way you want, of course).</p>
<pre class="rust"><code>extern crate clap;
extern crate glob;

use glob::glob;
use std::fs;

use clap::{Arg, App, AppSettings};

fn main() {
    let path_arg_name = "path";

    let args = App::new("cha-to-txt")
        .about("Rename .cha to .txt")
        .setting(AppSettings::ArgRequiredElseHelp)
        .arg(Arg::with_name(path_arg_name)
            .help("path to the top directory with .cha files"))
        .get_matches();

    let path = args.value_of(path_arg_name)
        .expect("You didn't supply a path");

    let search = String::from(path) + "/**/*.cha";
    let paths = glob(&search)
        .expect("Could not find paths in glob")
        .map(|p| p.expect("Bad individual path in glob"));

    for path in paths {
        match fs::rename(&path, &path.with_extension("txt")) {
            Ok(_) => (),
            Err(reason) => panic!("{}", reason),
        };
    }
}</code></pre>
<li><p>Compile it.</p>
<pre class="sh"><code>cargo build --release</code></pre></li>
<li><p>Copy the executable to hand to a friend.</p></li>
</ol>
<p>In my case, I actually added in the step of <em>recompiling</em> it on Windows after doing all the development on macOS. This is one of the real pieces of magic with Rust: you can <em>easily</em> write cross-platform code. The combination of Cargo and native-compiled-code makes it super easy to write this kind of thing—and, honestly, easier to do so in a cross-platform way than it would be with a traditional scripting language.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>But what’s really delightful is that we can do better. I don’t even need to install Rust on Windows to compile a Rust binary for Windows.</p>
</section>
<section id="iii.-cross-compiling-to-windows-from-macos" class="level2">
<h2>III. Cross-Compiling to Windows from macOS</h2>
<p>Once again, let’s do this step by step. Three notes: First, I got pretty much everything other than the first and last steps here from WindowsBunny on the <a href="https://botbot.me/mozilla/rust/">#rust</a> <abbr>IRC</abbr> channel. (If you’ve never hopped into #rust, you should: it’s amazing.) Second, you’ll need a Windows installation to make this work, as you’ll need some libraries. (That’s a pain, but it’s a one-time pain.) Third, this is the setup for doing it on macOS Sierra; steps may look a little different on an earlier version of macOS or on Linux.</p>
<ol type="1">
<li><p>Install the Windows compilation target with <code>rustup</code>.</p>
<pre class="sh"><code>rustup target add x86_64-pc-windows-msvc</code></pre></li>
<li><p>Install the required linker (<a href="http://lld.llvm.org"><code>lld</code></a>) by way of installing the LLVM toolchain.</p>
<pre class="sh"><code>brew install llvm</code></pre></li>
<li><p>Create a symlink somewhere on your <code>PATH</code> to the newly installed linker, specifically with the name <code>link.exe</code>. I have <code>~/bin</code> on my <code>PATH</code> for just this kind of thing, so I can do that like so:</p>
<pre class="sh"><code>ln -s /usr/local/opt/llvm/bin/lld-link ~/bin/link.exe</code></pre>
<p>(We have to do this because the Rust compiler <a href="https://github.com/rust-lang/rust/blob/master/src/librustc_trans/back/msvc/mod.rs#L300">specifically goes looking for <code>link.exe</code> on non-Windows targets</a>.)</p></li>
<li><p>Copy the target files for the Windows build to link against. Those are in these directories, where <code><something></code> will be a number like <code>10586.0</code> or similar (you should pick the highest one if there is more than one):</p>
<ul>
<li><code>C:\Program Files\Windows Kits\10\Lib\10.0.<something>\ucrt\x64</code></li>
<li><code>C:\Program Files\Windows Kits\10\Lib\10.0.<something>\um\x64</code></li>
<li><code>C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\lib\amd64</code></li>
</ul>
<p>Note that if you don’t already have <abbr>MSVC</abbr> installed, you’ll need to install it. If you don’t have Visual Studio installed on a Windows machine <em>at all</em>, you can do that by using the links <a href="http://landinghub.visualstudio.com/visual-cpp-build-tools">here</a>. Otherwise, on Windows, go to <strong>Add/Remove Programs</strong> and opt to Modify the Visual Studio installation. There, you can choose to add the C++ tools to the installation.</p>
<p>Note also that if you’re building for 32-bit Windows you’ll want to grab <em>those</em> libraries instead of the 64-bit libraries.</p></li>
<li><p>Set the <code>LIB</code> environment variable to include those paths and build the program. Let’s say you put them in something like <code>/Users/chris/lib/windows</code> (which is where I put mine). Your Cargo invocation will look like this:</p>
<pre class="sh"><code>env LIB="/Users/chris/lib/windows/ucrt/x64/;/Users/chris/lib/windows/um/x64/;/Users/chris/lib/windows/VC_lib/amd64/" \
    cargo build --release --target=x86_64-pc-windows-msvc</code></pre>
<p>Note that the final <code>/</code> on each path and the enclosing quotation marks are all important!</p></li>
<li><p>Copy the binary to hand to a friend, without ever having had to leave your Mac.</p></li>
</ol>
<p>To be sure, there was a little extra work involved in getting cross-compilation set up. (This is the kind of thing I’d love to see further automated with <code>rustup</code> in 2017!) But what we have at the end is pretty magical. Now we can just compile cross-platform code and hand it to our friends.</p>
<p>Given that, I expect not to be using Python for these kinds of tools much going forward.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Again: you can do similar with Haskell or OCaml or a number of other languages. And those are great options; they are in <em>some</em> ways easier than Rust—but Cargo is really magical for this.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
What is Functional Programming?2016-11-11T22:30:00-05:002016-11-14T07:30:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-11-11:/2016/what-is-functional-programming.htmlFunctional programming—though not a panacea—is a really great tool to have in our toolbelt. (And you don’t have to be a mathematician to use it.)
<p><i class='editorial'>The following is a script I wrote for a tech talk I gave on functional programming. The recording isn’t (and won’t be) publicly available; but a script is often easier to reference anyway!</i></p>
<p><i class='editorial'><strong>Edit:</strong> updated with corrected performance characteristics.</i></p>
<hr />
<p>Hello, everyone. Today, we are going to talk about functional programming—asking what it is, and why we should care.</p>
<section id="clearing-the-table-functional-programmings-reputation" class="level2">
<h2>Clearing the Table: Functional Programming’s Reputation</h2>
<p>Functional programming has something of a reputation: on the one hand, as incredibly difficult, dense, full of mathematical jargon, applicable only to certain fields like machine learning or massive data analysis; on the other hand, as a kind of panacea that solves all of your problems. The reality, I think, is a little bit of both.</p>
<p>The world of functional programming <em>does</em> include a lot of jargon from the math world, and there are good reasons for that, but there is also a lot we could do to make it more approachable to people who don’t have a background in, say, category theory. Category theory is useful, of course, and I think there are times when we might want to be able to draw on it. But gladly, functional programming doesn’t require you to know what an <em>applicative functor</em> is to be able to use it. (And, gladly, there’s a lot of increasingly-solid teaching material out there about functional programming which <em>doesn’t</em> lean on math concepts.)</p>
<p>On the other side, functional programming does give us some real and serious benefits, and that’s what I’m going to spend the first third or so of this talk looking at. But of course, it’s still just a tool, and even though it is a very helpful and very powerful tool, it can’t keep us from writing bugs. Still, every tool we can add to our belt for writing correct software is a win.</p>
<p>One more prefatory note before we get into the meat of this talk: unfamiliar terminology is not specific to functional programming. So, yes, when you see this list, it might seem a little out there:</p>
<ul>
<li>Functor</li>
<li>Applicative</li>
<li>Monoid</li>
<li>Monad</li>
</ul>
<p>And in truth, a number of those could have better names. <em>But</em> we have plenty of terminology we throw around in the world of imperative, object-oriented programming. To pick just one, obvious and easy example—what are the <abbr>SOLID</abbr> principles?</p>
<ul>
<li>Single responsibility</li>
<li>Open/closed</li>
<li>Liskov substitution</li>
<li>Interface segregation</li>
<li>Dependency inversion</li>
</ul>
<p>You may not remember what it felt like the first time you encountered <abbr>SOLID</abbr>, but suffice it to say: “Liskov substitution principle” isn’t any more intuitive or obvious than “Monad”. You’re just familiar with one of them. The same is true of “applicative” and “Visitor pattern”. And so on. Granted, again: it would be nice for some of these things to have easier names, but a <em>big</em> part of the pain here is just unfamiliarity.</p>
<p>So, with that out of the way, what <em>is</em> functional programming?</p>
</section>
<section id="what-is-functional-programming" class="level2">
<h2>What is functional programming?</h2>
<p>Functional programming is a style of programming that uses <em>pure functions</em> and <em>immutable data</em> for as many things as possible, and builds programs primarily out of <em>functions</em> rather than other abstractions. I’ll define all of those terms in a moment, but first…</p>
<section id="why-do-we-care" class="level3">
<h3>Why do we care?</h3>
<p>We care, frankly, because <em>we’re not that smart</em>. Let’s think about some of the kinds of things we’re doing with, say, restaurant software: clients, with locations, building baskets, composed of products with options and modifiers, which have a set of rules for what combinations are allowed both of products and of their elements as making up a basket, which turn into orders, which have associated payment schemes (sometimes a lot of them), which generate data to send to a point-of-sale as well as summaries for the customer who ordered it, and so on. There are a <em>lot</em> of moving pieces there. I’m sure I missed some non-trivial pieces, too. And if all of that is <em>stateful</em>, that’s a lot of state to hold in your head.</p>
<p>Let me be a bit provocative for a moment. Imagine you were reading a JavaScript module and it looked like this:</p>
<pre class="js"><code>var foo = 12;
var bar = 'blah';
var quux = { waffles: 'always' };

export function doSomething() {
  foo = 42;
}

export function getSomething() {
  bar = quux;
  quux.waffles = 'never';
  return bar;
}</code></pre>
<p>Everyone watching would presumably say, “No, that’s bad—don’t do that!” Why? Because there is <em>global state</em> being changed by those functions, and there’s nothing about the functions which tells you what’s going on. Global variables are bad. Bad bad bad. We all know this. Why is it bad? Because you have no idea when you call <code>doSomething()</code> or <code>getSomething()</code> what kinds of side effects it might have. And if <code>doSomething()</code> and <code>getSomething()</code> affect the same data, then the order you call them in matters.</p>
<p>In a previous job, I spent literally months chasing a bunch of bugs in a C codebase where all of the state was global. <em>We don’t do this anymore.</em></p>
<p>But really, what’s different about this?</p>
<pre class="js"><code>class AThing {
  constructor() {
    this.foo = 12;
    this.bar = 'blah';
    this.quux = { waffles: 'always' };
  }

  doSomething() {
    this.foo = 42;
  }

  getSomething() {
    this.bar = this.quux;
    this.quux.waffles = 'never';
    return this.bar;
  }
}</code></pre>
<p>We have some “internal” data, just like we had in the module up above. And we have some public methods which change that state. In terms of these internals, it’s the same. There are differences in terms of having <em>instances</em> and things like that, but in terms of understanding the behavior of the system—understanding the state involved—it’s the same. It’s global, mutable state. Now it’s not global like attaching something to the <code>window</code> object in JavaScript, and that’s good, but still: at the module or class level, it’s just global mutable state, with no guarantees about how anything works. And this is normal—endemic, even—in object-oriented code. We encapsulate our state, but we have <em>tons</em> of state, it’s all mutable, and as far as any given class method call is concerned, it’s all global to that class.</p>
<p>You have no idea, when you call a given object method, what it might do. The fact that you call it with an <code>Int</code> and get out a <code>String</code> tells you almost nothing. For all you know, it’s triggering a <abbr>JSON-RPC</abbr> call using the int as the <abbr>ID</abbr> for the endpoint, which in turn triggers an operation, responds with another <abbr>ID</abbr>, which you then use to query a database, and load a string from there, which you then set on some other member of the object instance, and then return. Should you write a method that does that? Probably not. But you can; nothing stops you.</p>
<p>When you call a method, you have no idea what it will do. JavaScript, TypeScript, C<sup>♯</sup>, it doesn’t matter. You have literally no idea. And that makes things <em>hard</em>.</p>
<ul>
<li>It often makes fixing bugs hard, because it means you have to figure out which particular <em>state</em> caused the issue, and find a way to reproduce that state. Which usually means calling methods in a particular order.</li>
<li>It makes testing hard. Again, it often entails calling methods in a particular order. It also means you often need mocks for all those outside-world things you’re trying to do.</li>
</ul>
<p>Functional programming is an out. An escape hatch. An acknowledgement, a recognition, that holding all of this in our heads is too much for us. No one is that smart. And our software, even at its best, is hard to hold in our heads, hard to make sure that our changes don’t break something seemingly unrelated, hard to see how the pieces fit together—hard, in a phrase you’ll often hear from functional programming fans, hard to reason about.</p>
<p>So, how do we solve these problems? With functional programming!</p>
</section>
<section id="what-is-functional-programming-1" class="level3">
<h3>What <em>is</em> functional programming?</h3>
<p>Functional programming is basically combining four big ideas:</p>
<ol type="1">
<li>First class functions</li>
<li>Higher-order functions</li>
<li>Pure functions</li>
<li>Immutable data</li>
</ol>
<p>The combination of these things leads us to a <em>very</em> different style of programming than traditional <abbr>OOP</abbr>. Let’s define them.</p>
<section id="first-class-functions-and-higher-order-functions" class="level4">
<h4>First class functions and higher-order functions</h4>
<p>We’ll start by looking at the things that are probably most familiar to you if you’re a JavaScript developer (even if you haven’t necessarily heard the names): first-class functions and higher-order functions.</p>
<p>When we talk about <em>first class functions,</em> we mean that functions are just data—they’re first-class items in the language just like any other type. As such, a function is just another thing you can hand around as an argument to other functions. There’s no distinction between a function and a number or a string or some complex data structure. This is essential because, when you combine it with higher-order functions, it allows for incredible <em>simplicity</em> and incredible <em>reusability</em>.</p>
<p>Higher-order functions, in turn, are functions which take other functions as parameters or return them as their values. We’ll see this in detail in a worked example in a few, but for right now, let’s just use a really simple example that will be familiar to anyone who’s done much JavaScript: using <code>map</code>.</p>
<p>If we have a collection like an array and we want to transform every piece of data in it, we could of course do it with a for loop, and with iterable types we could use <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of"><code>for ... of</code></a>. But with <code>map</code>, we can just leave the implementation details of <em>how</em> the items in the array are iterated through, and instead worry about what we want to change. We can do that because <code>map</code> takes functions as arguments.</p>
<pre class="js"><code>const initialValues = [1, 2, 3];
const doubledValues = initialValues.map(value => value * 2);</code></pre>
<p>We did it there with a function explicitly, but we could just as easily extract the function like this:</p>
<pre class="js"><code>const double = value => value * 2;
const initialValues = [1, 2, 3];
const doubledValues = initialValues.map(double);</code></pre>
<p>This is possible because <em>functions are just data</em>—they’re first-class members of the language—and therefore <em>functions can be arguments or return values</em>—the language supports higher-order functions.</p>
</section>
<section id="pure-functions" class="level4">
<h4>Pure functions</h4>
<p>What about <em>pure functions</em>? Pure functions are functions with <em>no effects</em>. The input directly translates to the output, every time. The examples we looked at just a moment ago with <code>map</code> are all pure functions (and it’s a really weird antipattern to use effectful functions with <code>map</code>! Don’t do that! Use <code>forEach</code> if you must have an effect). Here are a few more super simple examples:</p>
<pre class="js"><code>const add = (a, b) => a + b;
const toString = (number) => `The value is ${number}`;
const toLength = (list) => list.length;</code></pre>
<p>Here are some examples of straightforward functions which are <em>not</em> pure:</p>
<pre class="js"><code>const logDataFromEndpoint = (endpoint) => {
  fetch(endpoint).then(response => {
    console.log(response);
  });
};

let foo = 42;
const setFoo = (newValue) => {
  foo = newValue;
};
const getFoo = () => foo;</code></pre>
<p>So a pure function is one whose output is <em>solely</em> determined by its input. That means no talking to a database, no making <abbr>API</abbr> calls, no reading from or writing to disk.</p>
<p>And of course, you can’t do anything meaningful with <em>just</em> pure functions. We need user input, and we need to put the results of our computation somewhere. So the goal isn’t to write <em>only</em> pure functions. It’s to write <em>mostly</em> pure functions and to <em>isolate</em> all impure functions.</p>
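<p>A minimal sketch of that shape—the names here are invented for illustration—keeps all the logic in a pure core and pushes the effect out to a single edge:</p>

```typescript
// Pure core: all the logic lives here, and is trivially testable.
const formatGreeting = (name: string) => `Hello, ${name}!`;

// Impure shell: the one isolated place that touches the outside world.
const main = (): void => {
  console.log(formatGreeting('world'));
};

main();
```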
<p>What this gets us is two things:</p>
<ol type="1">
<li>A much smaller list of things to worry about when we’re looking at a given function.</li>
<li>The ability to <em>compose</em> functions together more easily.</li>
</ol>
<p>We have fewer things to keep in our heads when we look at any given pure function, because we don’t have to worry at all about whether something it touches has been changed by another function or not. We have inputs. We transform them into outputs. That’s it. Compare these two things in practice.</p>
<p>Here’s a traditional <abbr>OOP</abbr> approach:</p>
<pre class="js"><code>class Order {
  constructor() {
    this.subTotal = 0.0;
    this.taxRate = 0.01;
  }

  getTotal() {
    return this.subTotal * (1 + this.taxRate);
  }
}

const order = new Order();
order.subTotal = 42.00;
const total = order.getTotal();</code></pre>
<p>Note that the total is always dependent on what has happened in the object. If we write <code>order.subTotal = 43</code>, the value returned by <code>order.getTotal()</code> will change. So if we want to test how <code>getTotal()</code> behaves, or if there’s a bug in it, we need to make sure we’ve made all the appropriate transformations to the object ahead of time. That’s no big deal here; the <code>getTotal()</code> method is incredibly simple (and in fact, we’d normally just write it with a property getter). But still, we have to construct an order and make sure all the relevant properties are set to get the right value out of <code>getTotal()</code>. Things outside the method call itself affect what we get back. We have no way to test <code>getTotal()</code> by itself, and no way to debug it if there’s a bug without first doing some object setup.</p>
<p>Now, here’s a functional approach.</p>
<pre class="js"><code>const order = {
  subTotal: 42.0,
  taxRate: 0.01
};

const getTotal = (subTotal, taxRate) => subTotal * (1 + taxRate);
const total = getTotal(order.subTotal, order.taxRate);</code></pre>
<p>Note that the object is <em>just data</em>. It’s a <em>record</em>. And the function just takes a couple of arguments. If there needed to be a more complicated transformation internally, we could do that just as easily. Note that it also decouples the structure of the data from the actual computation (though we could pass in a record as well if we had a good reason to).</p>
<p>This makes it easily testable, for free. Want to make sure different tax rates get the correct output? Just… pass in a different tax rate. You don’t have to do any complicated work setting up an object instance first (which is especially important for more complex data types). It also makes it easier to chase down any bugs: the only thing you have to care about is that simple function body. There’s no other state to think about, because there’s no state at all here from the perspective of the function: just inputs and outputs.</p>
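<p>A test, then, is nothing more than calling the function with inputs chosen to make the expected output obvious (the values here are arbitrary, picked to avoid floating-point rounding surprises):</p>

```typescript
const getTotal = (subTotal: number, taxRate: number) =>
  subTotal * (1 + taxRate);

// No setup, no object construction—just inputs and expected outputs.
console.assert(getTotal(40, 0.25) === 50);
console.assert(getTotal(0, 0.25) === 0);
```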
<p>This has one other <em>really</em> important consequence, which goes by the name <strong>referential transparency</strong>. All that means is that anywhere you see a pure function, you can always substitute the value it produces, or vice versa. This is quite unlike the <code>Order::getTotal()</code> method, where (a) it’s attached to an object instance and (b) it’s dependent on other things about that object. You can’t just substitute it in, or freely move it around, when you’re doing a refactor. <em>Maybe</em> you can, but you’d better hope that all the other state is shuffled around with it correctly. Whereas, with the standalone <code>getTotal()</code> function, all you need is its arguments, and you’ll always get the same thing back.</p>
<p>This is just like math: if you say, <span class="math inline"><em>x</em> = 5</span> when solving an algebraic equation, you can put <span class="math inline">5</span> <em>anywhere you see <span class="math inline"><em>x</em></span></em>; or, if it’s useful for factoring the equation or something, you can just as easily put <span class="math inline"><em>x</em></span> anywhere you see <span class="math inline">5</span>. And in math, that’s true for <span class="math inline"><em>f</em>(<em>x</em>)</span> as well. When we use pure functions, it’s true for programming, too! That makes refactoring much easier.</p>
<p>As we’ll see in the example I walk through in a minute, it also lets us <em>compose</em> functions together far more easily. If all we have are inputs and outputs, then I can take the output from one function and use it as the input to the next.</p>
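<p>Here’s a tiny sketch of that kind of composition (the function names are invented for illustration):</p>

```typescript
const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;

// Each function's output depends only on its input, so results can be
// piped straight from one function into the next.
const doubleThenIncrement = (n: number) => increment(double(n));

console.assert(doubleThenIncrement(20) === 41);
```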
</section>
<section id="immutable-data" class="level4">
<h4>Immutable data</h4>
<p>Complementing the use of mostly pure functions is to use <em>immutable data</em>. Instead of having objects which we mutate, we create copies of the data as we transform it.</p>
<p>You’re probably wondering how in the world this can work (and also how you avoid it being incredibly computationally expensive). For the most part, we can rely on two things: smart compilers and runtimes, and the fact that we often don’t need to reuse the <em>exact</em> same data because we’re transforming it. However, as we’ll see below, in languages which don’t have native support for immutability, it can impose a performance penalty. Happily, there are ways to work around this!</p>
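<p>In JavaScript, the usual way to get this behavior is copy-on-write updates; a small illustrative sketch:</p>

```javascript
// Instead of pushing onto the existing array, build a new object whose
// updated parts are fresh copies. The original stays untouched.
const order = { condiments: ['ketchup'] };
const withMustard = {
  ...order,
  condiments: [...order.condiments, 'mustard'],
};

console.assert(order.condiments.length === 1);       // unchanged
console.assert(withMustard.condiments.length === 2); // extended copy
```
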
<hr />
</section>
</section>
</section>
<section id="a-worked-example" class="level2">
<h2>A Worked Example</h2>
<p>Let’s get down to a real example of these ideas. This is a ‘code kata’ I do every so often. In this particular kata, you get a list of burger orders which looks like this:</p>
<pre class="js"><code>[
{ condiments: ['ketchup', 'mustard', 'pickles'] },
{ condiments: ['tomatoes'] },
{ condiments: ['mustard', 'ketchup'] },
// etc...
]</code></pre>
<p>You’re supposed to take this list (of 10,000-some-odd burger variations!) and determine what the top ten most common orders (not just condiments, but orders) are. (The truth is, the list actually mostly looks like <code>condiments: ['ketchup']</code> over and over again.) So as a preliminary, you can assume that the data is being loaded like this:</p>
<pre class="js"><code>const getBurgers = () =>
fetch('http://files.example.com/burgers.json')
.then(request => request.json());</code></pre>
<p>And we’ll print our results (which will always end up in the same format) like this:</p>
<pre class="js"><code>const descAndCountToOutput = descAndCount => `${descAndCount[0]}: ${descAndCount[1]}`;</code></pre>
<p>This is actually a perfect case to demonstrate how functional programming ideas can help us solve a problem.</p>
<section id="imperative" class="level3">
<h3>Imperative</h3>
<p>First, let’s look at what I think is a <em>relatively</em> reasonable imperative approach. Our basic strategy will be:</p>
<ol type="1">
<li>Convert condiments to descriptions.
<ol type="1">
<li>Convert the objects to just their lists of condiments.</li>
<li>Sort those strings.</li>
<li>Turn them into descriptions by joining them with a comma.</li>
</ol></li>
<li>Build up a mapping from description to count.</li>
<li>Sort that by count.</li>
<li>Get the top 10.</li>
<li>Print out the results.</li>
</ol>
<pre class="js"><code>getBurgers().then(burgers => {
let totals = {};
// 2. Build up a mapping from description to count.
for (let burger of burgers) {
// 1. Convert condiments to descriptions.
// 1.1. Convert the objects to just their lists of condiments.
const condiments = burger.condiments;
// 1.2. Sort those strings.
condiments.sort();
// 1.3. Turn them into descriptions by joining them with a comma.
const description = condiments.join(', ');
// 2. Build up a mapping from description to count.
const previousCount = totals[description];
totals[description] = previousCount ? previousCount + 1 : 1;
}
// 3. Sort that by count.
const sortableCondiments = Object.entries(totals);
sortableCondiments.sort((a, b) => b[1] - a[1]);
// 4. Get the top 10.
const topTen = sortableCondiments.slice(0, 10);
// 5. Print out the results.
for (let descAndCount of topTen) {
console.log(descAndCountToOutput(descAndCount));
}
});</code></pre>
<p>That’s pretty well-factored. But it’s thoroughly wrapped up in the specific details of this problem, and there’s basically nothing here I could reuse. It’s also relatively hard to test. There aren’t many pieces we could break out into smaller functions if we wanted to figure out why something was broken. The way you’d end up fixing a bug here is probably by dropping <code>debugger</code> or <code>console.log()</code> statements in there to see what the values are at any given spot.</p>
<p>And this is where functional programming really does give us a better way.</p>
</section>
<section id="functional" class="level3">
<h3>Functional</h3>
<p>Instead of thinking about the specific details of <em>how</em> to get from A to B, let’s think about what we start with and what we finish with, and see if we can build up a pipeline of transformations that will get us there.</p>
<p>We start with a <em>list</em> of <em>objects</em> containing <em>arrays</em> of <em>strings</em>. We want to end up with a <em>list</em> of the <em>distinct combinations</em> and their <em>frequency</em>. How can we do this? Well, the basic idea is the same as what we did above:</p>
<ol type="1">
<li>Convert condiments to descriptions.
<ol type="1">
<li>Convert the objects to just their lists of condiments.</li>
<li>Sort those strings.</li>
<li>Turn them into descriptions by joining them with a comma.</li>
</ol></li>
<li>Build up a mapping from description to count.</li>
<li>Sort that by count.</li>
<li>Get the top 10.</li>
<li>Print out the results.</li>
</ol>
<p>To someone acquainted with functional programming, that looks like a bunch of <code>map</code>s, a <code>reduce</code>, and some <code>sort</code>s, each of them using just simple, pure functions. Let’s see what that might look like. First, what are our transformations?</p>
<p>The step 1 transformations are all quite straightforward:</p>
<pre class="js"><code>// 1. Convert condiments to descriptions.
// 1.1. Convert the objects to just their lists of condiments.
const toCondiments = burger => burger.condiments ? burger.condiments : [];
// 1.2. Sort those strings.
const toSortedCondiments = condiments => condiments.sort();
// 1.3. Turn them into descriptions by joining them with a comma.
const toDescriptions = condiments => condiments.join(', ');</code></pre>
<p>Step 2 is a little more involved: it builds up a new data structure (<code>totals</code>) from an old one. This function is a <em>reducer</em>: it will build up <code>totals</code> by updating it with each <code>description</code> from an array of them.</p>
<pre class="js"><code>// 2. Build up a mapping from description to count.
const toTotals = (totals, description) => {
const previousCount = totals[description];
const count = previousCount ? previousCount + 1 : 1;
totals[description] = count;
return totals;
};
// 3. Sort that by count.
const byCount = (a, b) => b[1] - a[1];</code></pre>
<p>We’ll see how to get just 10 in a moment; for now, let’s also wrap up the output:</p>
<pre class="js"><code>// 5. Print it out
const output = value => { console.log(value); };</code></pre>
<p>These are our base building blocks, and we’ll re-use them in each of the approaches I cover below. Note that we’ve now taken those same basic steps from our imperative approach and turned them into standalone, testable functions. They’re small and single-purpose, which always helps. But more importantly, all of those transformations (with two exceptions we’ll talk about in a minute) are <em>pure functions</em>: we know that we’ll get the same results every time we use them. If I want to make sure that burger condiments are converted correctly, I can test <em>just that function</em>.</p>
<pre class="js"><code>describe('toCondiments', () => {
it('returns an empty list when there is no `condiments`', () => {
toCondiments({}).should.deepEqual([]);
});
it('returns the list of condiments when it is passed', () => {
const condiments = ['ketchup', 'mustard'];
toCondiments({ condiments }).should.deepEqual(condiments);
});
});</code></pre>
<p>This is a trivial example, of course, but it gets the point across: all we have to do to test this is pass in an object. It doesn’t depend on anything else. It doesn’t have <em>any knowledge</em> of how we’re going to use it. It doesn’t know that it’s going to be used with data coming from an array. All it knows is that if you give it an object with a <code>condiments</code> property, it’ll hand you back the array attached to that property.</p>
<p>The result is that, with all of these functions, we don’t have to deal with mocks or stubs or anything like that to be testable. Input produces output. Pure functions are great for this. Now, some of you may be thinking, “That’s great, but what about <abbr>I/O</abbr>, or databases, or any other time we actually interact with the world? What about talking to a point-of-sale?” I actually have some tentative thoughts about a future tech talk to look at how to do that in some detail, but for today, just remember that the goal is to write as many pure functions as possible, and to isolate the rest of your code from knowing about that. And of course, that’s best practice anyway! We’re just codifying it. We’ll see what that looks like in practice in just a minute.</p>
<p>Now, while we’re on the topic of pure functions, some of you with quick eyes may have noticed that two of these little functions we laid out are actually <em>not</em> pure: JavaScript’s <code>Array.sort</code> method operates in-place, for performance reasons, and so does our <code>toTotals</code> function. So a truly pure version of the sorting function looks like this:</p>
<pre class="js"><code>const toSortedCondiments = condiments => condiments.concat().sort();</code></pre>
<p>Similarly, we <em>could</em> define the <code>toTotals</code> to return a new object every time, like this:</p>
<pre class="js"><code>const toTotals = (totals, description) => {
const previousCount = totals[description];
const count = previousCount ? previousCount + 1 : 1;
const update = { [description]: count };
return Object.assign({}, totals, update);
};</code></pre>
<p>Unfortunately, given the amount of data we’re dealing with, that’s prohibitively expensive. We end up spending a <em>lot</em> of time allocating objects and garbage-collecting them. As a result, it’s over a thousand times slower. Running it on my 4GHz iMac, the in-place version takes less than 40ms. Doing it the strictly pure way—returning copies every time—takes ~53s. And if you profile it, almost all of that time is spent in <code>assign</code> (52.95s).</p>
<p>This takes us to an important point, though: it’s actually not a particularly big deal to have this particular data changed in place, because we’re not going to do anything <em>else</em> with it. And in fact, under the hood, this is exactly what pure functional languages do with these kinds of transformations—precisely because it’s perfectly safe to do so, because we’re the only ones who have access to this data. We’re generating a <em>new</em> data structure from the data that we were originally handed, and the next function will make its own new data structure (whether a copy or something else).</p>
<p>In other words, when we’re talking about a <em>pure function</em>, we don’t really care about internal mutability (though of course, that can bite us if we’re not careful). We’re really concerned about <em>external</em> mutability. As long as the same inputs get the same outputs every time, the rest of the world doesn’t have to care how we got that result.</p>
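<p>A small sketch of that idea of internal mutation with external purity (the <code>countBy</code> helper here is illustrative, not part of the kata):</p>

```javascript
// countBy mutates only the object it allocated itself, so no caller can
// ever observe the mutation: same input, same output, every time.
const countBy = items => {
  const counts = {}; // local, freshly allocated on every call
  for (const item of items) {
    counts[item] = (counts[item] || 0) + 1; // safe in-place update
  }
  return counts;
};

console.log(countBy(['ketchup', 'ketchup', 'mustard']));
// { ketchup: 2, mustard: 1 }
```
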
<p>Now let’s see how we use these functions.</p>
<section id="pure-javascript" class="level4">
<h4>Pure JavaScript</h4>
<p>First, here’s a pure-JavaScript approach, but a more functional one instead of an imperative one:</p>
<pre class="js"><code>getBurgers().then(burgers => {
const descriptionToCount = burgers
.map(toCondiments)
.map(toSortedCondiments)
.map(toDescriptions)
    .reduce(toTotals, {});
const entries = Object.entries(descriptionToCount);
[...entries]
.sort(byCount)
.slice(0, 10) // 4. Get the top 10.
.map(descAndCountToOutput)
.forEach(output);
});</code></pre>
<p>First, the good: our transformation is no longer all jumbled together. In fact, our code reads a lot like our original description did. Also, notice that we just have a bunch of functions operating on data: none of the functions used here have any knowledge about where the data comes from that they operate on.</p>
<p>But then we also have a couple things that are a <em>little</em> bit clunky. The main thing that sticks out is that sudden stop in the chain in the middle.</p>
<p>When we’re dealing with the <code>Array</code> type, everything is fine, but once we reduce our data into a plain object, we no longer have those chainable methods, so we have to jump through some hoops to transform it back into a data type we can keep working with. We’re stuck whenever the type we have doesn’t supply the method we need. We’re kind of trying to mash together the imperative and functional styles, and it’s leaving us in a little bit of a weird spot.</p>
<p>There’s another issue here, though: the method-style calling convention obscures something important. When we call <em>most</em> of those methods, we’re doing something quite different from what methods usually do. Normally a method is an operation <em>on</em> an object. These methods—most of them—are operations that return <em>new</em> objects. So it’s nice from a syntax perspective, but if we’re not <em>already</em> familiar with the behavior of a given method, it won’t be clear at all that we’re actually generating a bunch of totally new data by calling those methods.</p>
<p>And… two of these methods (<code>sort</code> and <code>forEach</code>) are <em>not</em> doing that: they modify the array in place instead.</p>
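<p>The difference is easy to demonstrate with a few illustrative values:</p>

```javascript
const xs = [3, 1, 2];

const doubled = xs.map(n => n * 2); // returns a brand-new array
console.assert(doubled !== xs);
console.assert(xs[0] === 3);        // xs untouched so far

const sorted = xs.sort();           // mutates xs in place
console.assert(sorted === xs);      // and returns the very same array
console.assert(xs[0] === 1);        // original order is gone
```
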
</section>
<section id="lodash" class="level4">
<h4>Lodash</h4>
<p>The first step away from this problem is to use a tool like <a href="https://lodash.com">Lodash</a>.</p>
<pre class="js"><code>// More functional, with _:
// We tweak how a few of these work slightly to play nicely.
const _toDescriptions = condiments => _.join(condiments, ', ');
const _byCount = _.property(1);
getBurgers().then(burgers => {
const condiments = _.map(burgers, toCondiments);
const sortedCondiments = _.map(condiments, toSortedCondiments);
const descriptions = _.map(sortedCondiments, _toDescriptions);
const totals = _.reduce(descriptions, toTotals, {});
const totalPairs = _.toPairs(totals);
const sortedPairs = _.sortBy(totalPairs, _byCount);
const sortedPairsDescending = _.reverse(sortedPairs);
const topTen = _.take(sortedPairsDescending, 10);
const forOutput = _.map(topTen, descAndCountToOutput)
_.forEach(forOutput, output);
});</code></pre>
<p>But it seems like we lost something when we moved away from the object-oriented approach. Being able to chain things, so that each item worked with the previous item, was actually pretty nice. And needing all these intermediate variables is <em>not</em> so nice.</p>
<p>One way around this is to use Lodash’s <code>_.chain</code> method. That would have let us write it like this:</p>
<pre class="js"><code>getBurgers().then(burgers => {
_.chain(burgers)
.map(toCondiments)
.map(toSortedCondiments)
.map(_toDescriptions)
.reduce(toTotals, {})
.toPairs()
.sortBy(_byCount)
.reverse()
.take(10)
.map(descAndCountToOutput)
.value()
.forEach(output);
});</code></pre>
<p>And that <em>is</em> a win. But note what’s happening: nothing about the underlying <code>Array</code> type has changed. Lodash wraps our value in a special chaining object which supplies all of those methods—something JavaScript’s dynamism makes cheap to do. (You could build a similar wrapper in Java or C<sup>♯</sup>, but with a lot more ceremony!)</p>
<p>Perhaps just as importantly, it requires us to make sure that we do that <code>_.chain()</code> call on anything we want to tackle this way. So, can we get the benefits of this some <em>other</em> way? Well, obviously the answer is <em>yes</em> because I wouldn’t be asking otherwise.</p>
</section>
<section id="with-ramda." class="level4">
<h4>Ramda</h4>
<p>We can actually go a bit further, and end up in a spot where we don’t need to wrap our data in a special object at all. We can just do this with a series of standalone functions which don’t depend on being attached to <em>any</em> object. If we use the <a href="http://ramdajs.com">Ramda</a> library,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> we can tackle this with nothing but functions.</p>
<pre class="js"><code>const getTop10Burgers = R.pipe(
R.map(R.prop('condiments')),
R.map(R.sortBy(R.toString)),
R.map(R.join(', ')),
R.reduce(toTotals, {}),
R.toPairs,
R.sortBy(R.prop(1)), // will give us least to greatest
R.reverse,
R.take(10),
R.map(descAndCountToOutput)
);
getBurgers()
.then(getTop10Burgers)
.then(R.forEach(output));</code></pre>
<p>Notice the difference between here and even where we started with Lodash: we’re no longer dependent on a specific piece of data being present. Instead, we’ve created a standalone function which can operate on that data, simply by “piping” together—that is, <em>composing</em>—a bunch of other, smaller functions. The output from each one is used as the input for the next.</p>
<p>One of the many small niceties that falls out of this is that we can refactor just by pulling the pipeline apart into smaller acts of composition.</p>
<p>Here’s an example of how we might use that. We defined those simple transformations for the condiments as a set of three functions, which pulled the <code>condiments</code> arrays out of the objects, sorted them, and joined them into strings. Now, let’s build those into meaningful functions for each step:</p>
<pre class="js"><code>// 1. Convert condiments to descriptions.
const burgerRecordsToDescriptions = R.pipe(
R.map(R.prop('condiments')),
R.map(R.sortBy(R.toString)),
R.map(R.join(', ')),
);
// 2. Build up a mapping from description to count.
const descriptionsToUniqueCounts = R.pipe(
R.reduce(toTotals, {}),
R.toPairs,
);
// 3. Sort that by count.
const uniqueCountsToSortedPairs = R.pipe(
R.sortBy(R.prop(1)),
R.reverse,
);
// For (4), to get the top 10, we'll just use `R.take(10)`.
// We could also alias that, but it doesn't gain us much.
// 5. Print it out
const sortedPairsToConsole = R.pipe(
R.map(descAndCountToOutput),
R.forEach(output)
);</code></pre>
<p>Then we can put those together into another, top-level function to do <em>exactly</em> our steps.</p>
<pre class="js"><code>const getTop10Burgers = R.pipe(
burgerRecordsToDescriptions, // (1)
descriptionsToUniqueCounts, // (2)
uniqueCountsToSortedPairs, // (3)
R.take(10) // (4)
);
getBurgers()
.then(getTop10Burgers)
.then(sortedPairsToConsole); // (5)</code></pre>
<p>Notice that, because each step is just composing together functions, “refactoring” is easy. And, to be sure, you have to be mindful about what comes in and out of each function. But that’s true in the imperative approach, too: you always have to keep track of the state of the object you’re building up, but there you’re doing it in the middle of a loop, so you’re keeping track of a lot <em>more</em> state at any given time. Functions with simple inputs and outputs give us a more explicit way of specifying the structure and state of the data at any given time. That’s true even in JavaScript, but it goes double if we’re in a typed language like F<sup>♯</sup>, Elm, etc., where we can specify those types for the function as a way of designing the flow of the program. (That’s such a helpful way of solving problems, in fact, that I may also do a talk on type-driven design in the future!)</p>
<p>Note, as well, that we’ve now completely isolated our input and output from everything else. The middle there is a chain of pure functions, built out of other pure functions, which neither know nor care that the data came in from an <abbr>API</abbr> call, or that we’re going to print it to the console when we finish.</p>
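<p>That shape (impure edges injected around a pure core) can be sketched like this; the names here are hypothetical stand-ins, not the kata’s actual functions:</p>

```javascript
// A stand-in pure core: knows nothing about fetching or printing.
const transform = data => data.map(n => n * 2);

// The impure edges are passed in, so tests can supply fakes.
const run = (fetchData, print) =>
  fetchData()
    .then(transform)
    .then(results => {
      print(results);
      return results;
    });

// Production might look like: run(getBurgers, console.log);
// A test can use a canned promise and a no-op printer:
run(() => Promise.resolve([1, 2, 3]), () => {});
```
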
<hr />
<p>So this takes us back around to that first question: why do we care? At the end of the day, is this really a win over the imperative style? Is the final version, using Ramda, really better than the pure-JavaScript mostly-functional version we used at first?</p>
<p>Obviously, I think the answers there are yes. The Ramda version there at the end is <em>way</em> better than the imperative version, and substantially better than even the first “functional” JavaScript versions we wrote.</p>
<p>For me, at least, the big takeaway here is this: we just built a small but reasonable transformation of data out of a bunch of really small pieces. That has two big consequences—consequences we’ve talked about all along the way, but which you’ve now seen in practice:</p>
<ol type="1">
<li><p>Those pieces are easy to test. If something isn’t working, I can easily take those pieces apart and test them individually, or test the result of any combination of them. As a result, I can test any part of that pipe chain, and I can <em>fix</em> pieces independent of each other. No part depends on being in the middle of a loop where transformations are done to other parts.</p></li>
<li><p>Because they’re small and each does one simple thing, I can recombine those pieces any way I like. And you see that in the Ramda examples in particular: most of what we’re doing in those examples is not even something we wrote ourselves. They’re also <em>really</em> basic building blocks, available in basically every standard library.</p></li>
</ol>
<p>One last thing: if you’re curious about performance… you should know that it does matter for data at scale. In my tests (which are admittedly extremely unscientific; unfortunately, I couldn’t get JSPerf running nicely with this particular set of variations), I found that the time it took to run these varied depending on the approach <em>and</em> the library. With a ~10k-record data set:</p>
<ul>
<li>The imperative version, unsurprisingly, was the fastest, taking ~16–17ms.</li>
<li>After that, the chained lodash version and the pure-<abbr>JS</abbr> version were comparable, at ~32–36ms, or around twice as long to finish as the imperative version.</li>
<li>The plain lodash version was consistently a <em>little</em> slower yet, at ~38–43ms.</li>
<li>Ramda is <em>slow</em>: both variations consistently took over 90ms to finish.</li>
</ul>
<p>Those differences added up on larger data sets: dealing with ~10,000,000 records, the times ranged from ~12s for the imperative version, to ~15s for the lodash and pure-<abbr>JS</abbr> variants, to ~50s for the Ramda version.</p>
<p>They were all pretty darn quick. Compilers, including JavaScript <abbr>JIT</abbr>s, are incredibly smart. Mostly you can just trust them; come back and profile before you even <em>think</em> about optimizing things. But you <em>should</em> know the performance characteristics of different libraries and consider the implications of what the language does well and what it doesn’t. Ramda is likely slower because of the way it curries every function—something that works well in languages with native support for it, e.g. F<sup>♯</sup> or Elm or Haskell, but imposes a penalty in languages which don’t… like JavaScript. That said, if you’re not in the habit of processing tens of thousands of records, you’re probably okay using any of them.</p>
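<p>For a feel of what currying costs in JavaScript, here’s a tiny illustrative sketch (a guess at the mechanism, not a profile of Ramda itself):</p>

```javascript
// A curried function is really a chain of functions: every partial
// application allocates a closure, where the uncurried form is one call.
const add = a => b => a + b;       // curried
const addPlain = (a, b) => a + b;  // uncurried

const add2 = add(2);               // allocates a closure capturing `a`
console.assert(add2(3) === addPlain(2, 3));
```
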
</section>
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>or <a href="https://github.com/lodash/lodash/wiki/FP-Guide">lodash-fp</a>, but Ramda is a bit better documented and I just like it a little better<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Why Everything is Broken2016-11-01T20:45:00-04:002016-11-01T20:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-11-01:/2016/why-everything-is-broken.html<p>It’s something of a joke among many of the software developers I know to wonder aloud how <em>anything</em> works. We’re all very painfully aware of how hard it is to write correct code, of how hard it is to account for all the corner cases that will arise …</p><p>It’s something of a joke among many of the software developers I know to wonder aloud how <em>anything</em> works. We’re all very painfully aware of how hard it is to write correct code, of how hard it is to account for all the corner cases that will arise, and of how hard it is to write user interfaces (of any sort) that make good sense to the user.</p>
<p>And our assumptions are broken in weird ways, but we don’t even realize it. Our paradigms for computing are built on decisions made 40–50 years ago, and in many cases there is no <em>good</em> reason to continue doing things that way in a vacuum. But we’re not in a vacuum, and we have incredible resources built on top of those existing paradigms, and rewriting everything from scratch in a saner way with the lessons we’ve learned in the intervening years seems impossible.</p>
<p>All of this came home to me again this evening in one of those startlingly painful moments of realization at how ridiculous this house of cards we’ve built really is.</p>
<p>I was helping a colleague, a designer who’s been learning HTML and CSS, figure out why his page wasn’t displaying properly on GitHub Pages. The site was loading, and the image assets were loading, but the style sheets weren’t. In fairly short order, I pulled up the site, noticed that the paths were to <code>/css/style.css</code>, glanced at the source and noted that the actual content was at <code>/CSS/style.css</code>, and said: “Oh, you just need to make the case of these match!” I explained: the URL proper (everything up through <code>.com</code> or whatever domain ending) is case-insensitive, but everything after that is case-sensitive.</p>
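<p>You can see that split directly with the standard <code>URL</code> API (available in modern browsers and in Node):</p>

```javascript
const url = new URL('https://Example.COM/CSS/style.css');
console.assert(url.hostname === 'example.com');    // host normalizes to lowercase
console.assert(url.pathname === '/CSS/style.css'); // path case is preserved
```
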
<p>There are reasons for that, some historical and some having to do with the fact that you can just serve a web page directly from a server, so the paths on your file system map to the paths on the web. And if your file system is case-sensitive, then the URL has to respect that.</p>
<p>That is, in a word, <em>dumb</em>. Don’t get me wrong: again, I see the perfectly defensible technical reasons why that is so. But it’s a leaky abstraction. And when you look closely at them, nearly <em>all</em> of our abstractions leak, and badly.</p>
<p>The pain of that moment was realizing, that like so many other things in tech, this particular thing is still broken for two reasons:</p>
<ol type="1">
<li>It’s too hard or too painful to change it. (That’s a big one here; the web has a pretty firm commitment to absolute backwards compatibility forever, <em>modulo</em> a few things like killing Flash.)</li>
<li>We get used to it, and just come to accept the ten thousand papercuts as normal, and eventually even forget about them until something comes up again and <em>forces</em> us to see them. Usually in the form of someone learning for the first time.</li>
</ol>
<p>We can’t necessarily do a lot about (1). We don’t have infinite time or money, and reinventing everything really is impossible. We can do wacky experiments, and iterate toward better solutions that can gradually replace what was there originally.</p>
<p>But (2) is the bigger one. We need to stop accepting the papercuts as just part of how things are—and especially, stop seeing our acclimation to them as a badge of honor to be earned—and start treating them as rough edges that ought to be sanded off over time wherever possible. Notice the things that trip up new learners, and if you can, <em>get rid of them</em>. If you can’t get rid of them, make note so that you are sure to cover it when you’re helping someone in the future. And explain the <em>whys</em> for those little edge cases: even worse than not knowing them, in some ways, is knowing them but not understanding them—having a bag of little tricks you can use but never being able to progress because you can’t see how they fit together.</p>
<p>Making our tech better starts, in many ways, with recognizing the problems we have. It requires us not to accept (much less embrace or revel in) the status quo, and always to push ourselves to do better. So iterate like mad to get away from (1) and to fix (2).</p>
Rust vs. React Native—What?2016-10-07T08:20:00-04:002016-10-07T08:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-10-07:/2016/rust-vs-react-native-what.html<p><i class=editorial>I was recently discussing some thoughts I’ve had on building a top-notch application experience in a Slack team I belong to, and noted that I believe that a Rust core with native UIs is a <em>massively</em> winning strategy. A friend in the group responded that he thinks “React + JS …</i></p><p><i class=editorial>I was recently discussing some thoughts I’ve had on building a top-notch application experience in a Slack team I belong to, and noted that I believe that a Rust core with native UIs is a <em>massively</em> winning strategy. A friend in the group responded that he thinks “React + JS is eating the world right now” and that “Rust is awesome if you want to write a JS vm, or something like that… or a compiler… anything involving lots of speed and stability.” What follows is my response, lightly edited to remove details specific to that friend and to add a few further thoughts.</i></p>
<hr />
<blockquote>
<p>Here’s the thing: I don’t <em>care</em> what’s eating the world today, for three reasons:</p>
<ol type="1">
<li>I just want to build the best stuff I can build, and native UIs are still massively better than React and even React Native<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> in innumerable ways. There are clear advantages to React Native + JavaScript, and times when you absolutely should take that approach. But there are also a lot of times and reasons why you shouldn’t. Heck, even if you just want killer performance <em>in browsers</em>, our future includes things like Rust-to-WebAssembly, and that’s a good thing.</li>
<li>What was eating the world five years ago? Ten? Is it still eating the world today? I don’t feel obliged to follow those trends (not least because, not being a consultancy, following those trends doesn’t buy me anything for the things I want to do; your tradeoffs and mine look way different).</li>
<li>I’m actually getting really tired of just treating as acceptable or normative the performance characteristics of browsers. Browsers are awesome. But we can (and should) do a <em>lot</em> better in terms of user experience, and I don’t see browsers catching up to what you can do with e.g. Cocoa (Touch). Sure, that doesn’t matter that much for building yet-another-storefront. (Again, there are different tradeoffs for every single app!) But why in the world are we in a spot now where one of the most popular text editors in the world is <em>slower</em> than any text editor of five years ago? That’s not a <em>necessary</em> decision, and you can (and should) go after the same degree of ease-of-extensibility that Atom has had—perhaps even using things like HTML and CSS for skinning!—while not tying yourself to the browser and its upsides and downsides for <em>everything</em>. We have <em>incredibly</em> powerful machines, and the user experience is often getting <em>slower</em>. I’m looking for ways to change that.</li>
</ol>
<p>Again, JS+React<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> may be <em>exactly</em> the right tradeoff for a lot of apps, and given what consultancies (like my friend’s!) are doing, I think doing that with React Native for apps is a <em>very</em> good move. It makes good sense business-wise, and it makes good sense in terms of the apps you’re likely to be delivering. Don’t hear me for a second saying Rust is the best for <em>everything</em>. I think it, or something like it, is a very good choice for <em>many</em> things, though, and it shouldn’t be dismissed simply because it’s a very different world from doing Ruby or Elixir or JavaScript.</p>
</blockquote>
<hr />
<p><i class=editorial>So much for my initial response. On reflection, I wanted to expand it a bit. So here’s another few hundred words!</i></p>
<p>Beyond this, I think there’s a bit of a false dichotomy here: the idea that “lots of speed and stability” <em>aren’t</em> values we should be seeking more aggressively for <em>all</em> our apps. Fully granted that not every app needs the same <em>degree</em> of each of those, and moreover that there are a lot of ways to get to those goals. Still: speed and stability are <em>core</em> user experience values. I don’t really care how you get at those goals, whether it’s with Rust, or Elixir or Clojure, or, yes, React with TypeScript or <a href="https://flowtype.org">Flow</a>. I <em>do</em> think that Rust is, for the moment at least, uniquely positioned to add real value in this space because it gives screaming performance but with so many niceties we’re used to when writing in languages like Python or Ruby and so much of the power you get in languages like OCaml or F♯.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> But at the end of the day, I think <em>all</em> apps should focus much more on speed and stability than they do today. We have supercomputers in our pockets, and we’re often shipping apps that are slower and more finicky.</p>
<p>But I have this dream of a world where apps aren’t needlessly power-hungry or memory-intensive, where every swipe or click or scroll results in buttery-smooth responses. We won’t get there by saying, “You know, Facebook is doing <em>x</em> so that’s good enough for me.”</p>
<p>Of course every developer, and any given product shop or consultancy, is going to have to make decisions about which stacks it invests in. If you’re primarily shipping web applications, investing in Elixir and React with React Native for your apps is a very sensible move. Most of your clients’ native apps may not <em>need</em> the degree of polished performance you might get from writing their iOS app in Swift and their Android app in Kotlin and the core in Rust (or even C++). That tradeoff is a <em>tradeoff</em>.</p>
<p>But let’s remember that there is real value there, and that some apps <em>do</em> deserve that investment. We should evaluate the tradeoffs at every turn, and our core considerations should enduringly include <em>speed and stability</em>. Don’t dismiss Rust (or Swift, or F♯) out of hand.</p>
<p>Equally importantly, we need to stop assuming that just because something is eating the world today, it must also be the future. Betting big on Flash in the mid-2000s wasn’t a <em>bad</em> move by a long shot. But its massive popularity then wasn’t a good predictor for its future. That goes double, frankly, for projects coming out of Facebook or Google or similar: big companies like that have the resources to drop everything and use a new language, or a new tool, as it suits them. If you don’t believe me, look at the actual open-source records of both of those companies! What’s hot today is far more relevant to a consultancy than to a product shop. And in both cases, choosing tech suitable for the job at hand is more important yet.</p>
<p>My friend gets that, for what it’s worth. He’s making the right moves for his business as the owner of a consultancy. I just want him—and lots of other people—to see where languages like Rust and Swift and F♯ might be worth considering. And speed and stability matter in a lot of places besides just compilers and VMs.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I’m aware that React Native ultimately binds down to native widgets. It’s still not quite the same.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>or, frankly, Ember or whatever else; React is great, but it is also overhyped.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Swift too, and honestly for a lot of things Swift is an easier experience for not <em>that</em> much less performance than Rust. But as of today you <em>can’t</em> ship core functionality in Swift for Android or Windows.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
32 Theses (and several more words) on Podcasting2016-08-09T10:30:00-04:002016-08-09T10:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-08-09:/2016/32-theses-and-several-more-words-on-podcasting.html<p>A month ago, Alan Jacobs asked about quality conservative Christian podcasts. <a href="https://mereorthodoxy.com/theses-on-podcasting/">Here’s</a> a big part of why there are so few (at Mere Orthodoxy):</p>
<blockquote>
<p>As a Christian in the world of podcasting—I have both a “two dudes talking” show (Winning Slowly) and also a “one dude talking with …</p></blockquote><p>A month ago, Alan Jacobs asked about quality conservative Christian podcasts. <a href="https://mereorthodoxy.com/theses-on-podcasting/">Here’s</a> a big part of why there are so few (at Mere Orthodoxy):</p>
<blockquote>
<p>As a Christian in the world of podcasting—I have both a “two dudes talking” show (Winning Slowly) and also a “one dude talking with maybe a brief musical intro and outro” show (New Rustacean)—I found much to agree with, but also much to clarify and a few things to disagree with…</p>
<p>…</p>
<p>First, a set of theses on podcasting as a medium. Some of these are obvious; none are intended to be tendentious. Some of them warrant further explanation—for which, see below….</p>
</blockquote>
<p>After which, <a href="https://mereorthodoxy.com/theses-on-podcasting/">32 theses (and another ~3,000 words)</a> on the constraints and challenges of podcasting as a medium.</p>
<p>Aside: the format of this particular piece is heavily inspired by Jacobs’ own <a href="http://iasc-culture.org/THR/channels/Infernal_Machine/2015/03/79-theses-on-technology-for-disputation/">“79 Theses on Technology. For Disputation.”</a></p>
Rust and Swift (xviii)2016-07-24T15:10:00-04:002016-07-24T15:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-07-24:/2016/rust-and-swift-xviii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="http://v4.chriskrycho.com/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<section id="part-i-ownership-semantics-vs.-reference-counting" class="level2">
<h2>Part I: Ownership Semantics vs. Reference Counting</h2>
<p>Perhaps unsurprisingly, the Swift book follows on from its discussion of initialization with a discussion of deinitialization, and here the differences between Rust and Swift are substantial, but (as has so often been the case) so are the analogies.</p>
<p>In Rust, memory is, by default, stack-allocated and -deallocated, but with a very impressive system for tracking the lifetime of that data and allowing it to be moved from one function to another. The Rust compiler tracks the <em>ownership</em> of every given item in the program as it is passed from one function to another, allowing other parts of the program to “borrow” the data safely, until a given piece of data goes out of scope entirely. At that point, Rust runs its destructors automatically. As part of its system for managing memory safely, Rust also tracks where and when a program attempts to access any given piece of data (whether directly or via reference), and will refuse to compile if you try to reference data in a place where it has already gone out of scope and been cleaned up (“dropped,” in Rust-speak).</p>
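<p>A minimal sketch (types and names my own, purely for illustration) makes the move semantics concrete:</p>
<pre class="rust"><code>fn main() {
    let original = String::from("lightsaber");

    // Ownership of the heap data *moves* from `original` to `new_owner`.
    let new_owner = original;

    // Uncommenting the next line is a compile error ("value moved"),
    // because `original` no longer owns anything:
    // println!("{}", original);

    println!("{}", new_owner);
} // `new_owner` goes out of scope here, and the data is dropped.</code></pre>
<p>The compiler rejects the commented-out line at compile time; nothing about this check survives into the running program.</p>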
<p>If this was a bit fuzzy, don’t worry: there’s a lot to say here. It’s arguably the most distinctive feature of the language, and it’s also the main thing that tends to trip up newcomers to the language. If you’re interested in further material on the topic, my own most succinct treatment of it is in <a href="http://www.newrustacean.com/show_notes/e002/index.html" title="e002: Something borrowed, something... moved?">an early episode</a> of New Rustacean, my Rust developer podcast, and <a href="https://doc.rust-lang.org/book/ownership.html">the official documentation</a> is <em>very</em> good. For now, suffice it to say: Rust does extremely rigorous <em>compile-time</em> checks to let you do C or C++-style memory management, but with absolute guarantees that you won’t have e.g. use-after-free bugs, with a default to handling everything on the stack.</p>
<p>It is of course impossible to handle <em>everything</em> on the stack, so there are heap-allocated types (e.g. vectors, a dynamically sized array-like type), which are fundamentally reference (or pointer) types. But those follow the same basic rules: Rust tracks the <em>pointers</em> throughout their uses, and when they go out of scope, Rust automatically tears down not only the pointer but also the data behind it. There are times, though, when you can’t comply with Rust’s normal rules for handling multiple-access to the same data. For those situations, it also supplies some “smart pointer” container types, <code>Rc</code> and <code>Arc</code>, the <em>reference-counted</em> (non-thread-safe) and <em>atomically reference-counted</em> (thread-safe) types. Both types just wrap up a type that you intend to put on the heap with reference-counters, which increment and decrement as various pieces of a program get access to them. Note that, unlike the compiler-level, <em>compile-time</em> checks mentioned earlier, these are <em>run-time</em> counts and they therefore incur a small but real runtime performance penalty. (The distinctions between the two types have to do with how they guarantee their memory safety and what kinds of guarantees are required for cross-thread safety, and they’re important for writing Rust but not so important for this comparison, so I’ll leave them aside.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a>)</p>
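<p>You can even watch those run-time counts move, which makes for a handy sanity check (a small sketch of my own using <code>Rc::strong_count</code>):</p>
<pre class="rust"><code>use std::rc::Rc;

fn main() {
    let shared = Rc::new("some heap data".to_string());
    assert_eq!(Rc::strong_count(&shared), 1);

    // Cloning an `Rc` doesn't copy the data; it just bumps the count.
    let another_handle = shared.clone();
    assert_eq!(Rc::strong_count(&shared), 2);

    // Dropping a handle decrements the count again...
    drop(another_handle);
    assert_eq!(Rc::strong_count(&shared), 1);

    // ...and only when the *last* handle goes away is the data freed.
    println!("still alive: {}", shared);
}</code></pre>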
<p>In Swift, all class instances (which are pass-by-reference types) are tracked with <em>automatic reference counting</em> and cleaned up automatically when there are no more references to them. Don’t confuse Rust’s “<em>atomically</em> reference-counted” type with Swift’s “<em>automatically</em> reference-counted” type. Unlike Rust’s behavior in having everything checked at compile-time, reference counting is a run-time check in Swift, just as it is with the <code>Rc</code> and <code>Arc</code> types in Rust.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> But it happens for all reference types all the time in Swift, not just when specified manually as in Rust. (Value types seem to be <em>always</em> passed by value, though the compiler has some smarts about that so it doesn’t get insanely expensive.) It’s <em>automatic</em> in that the compiler and runtime handle it “behind the scenes” from the developer’s perspective.</p>
<p>Swift’s approach here isn’t quite the same as having a full-on garbage-collected runtime like you’d see in Java, C<sup>#</sup>, Python, Ruby, JavaScript, etc. (and so doesn’t have the performance issues those often can). But it also isn’t like Rust’s default of having <em>no</em> runtime cost. It’s somewhere in the middle, with a goal of very good performance but good developer ergonomics. I think it achieves that latter goal: for the most part, it means that you don’t have to think about memory allocation and deallocation explicitly. Certainly there are times when you have to think about how your program handles those issues, but neither is it right up in your face like it is in Rust,<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> nor does it come with the costs of a heavier runtime (from startup, to GC pauses, to non-deterministic performance characteristics).<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<p>To make it concrete, the following snippets do <em>basically</em> the same thing—but note that the reference counting is explicit in Rust. We’ll start with Rust, doing it the normal way:</p>
<pre class="rust"><code>struct WouldBeJedi {
    name: String,
    rank: u8,
    description: String,
}

impl WouldBeJedi {
    fn new(name: &str, rank: u8, description: &str) -> WouldBeJedi {
        WouldBeJedi {
            name: name.to_string(),
            rank: rank,
            description: description.to_string()
        }
    }
}

fn main() {
    let trainee = WouldBeJedi::new(
        "Zayne Carrick", 1, "not very competent, but still a great hero");

    // When calling the function, we pass it a reference, and it
    // "borrows" access to the data. But the validity of that access
    // is checked at compile time. `main()` keeps the "ownership"
    // of the data.
    describe(&trainee);

    // When `main` ends, nothing owns the data anymore, so
    // Rust cleans it up. If something were still borrowing the
    // data (say, if we'd passed a reference into another thread),
    // this would actually be a compile error, because references
    // have to be guaranteed to live as long as the thing they
    // point back to. Rust has tools for managing that, as well,
    // its "lifetimes", but we can leave them aside for this example.
}

fn describe(trainee: &WouldBeJedi) {
    // Rust checks at compile time to make sure there are no
    // mutable "borrows" of the data, and therefore knows
    // that it is safe to reference the data here, because it can
    // be *sure* nothing will change it at the same time.
    // Under the covers, this macro will actually call a
    // function with the data we pass it, so Rust actually checks
    // the ownership and borrowing state here, too. Again, all
    // at compile time, and therefore with no runtime penalty.
    println!("{} (rank {}) is {}.",
             trainee.name,
             trainee.rank,
             trainee.description);

    // When we exit the function, Rust notes that it is no
    // longer "borrowing" the data.
}</code></pre>
<p>And here’s the Swift code—note as well that we use a <code>class</code> not a <code>struct</code> here:</p>
<pre class="swift"><code>class WouldBeJedi {
    let name: String
    let rank: UInt8
    let description: String

    init(name: String, rank: UInt8, description: String) {
        self.name = name
        self.rank = rank
        self.description = description
    }
}

func main() {
    let aTrainee = WouldBeJedi(
        name: "Zayne Carrick",
        rank: 1,
        description: "not very competent, but a great hero")

    // When calling the function, the reference count goes up
    // here, too, but it's implicit, rather than explicit.
    describe(aTrainee)

    // The implicit reference count Swift maintains for `aTrainee`
    // will go from 1 to 0 here, and Swift will do its cleanup of the
    // object data.
}

func describe(_ trainee: WouldBeJedi) {
    // When we enter this function, Swift bumps the reference
    // count, from 1 to 2. Both `main` and `describe` now have a
    // reference to the data.
    // No need for the unwrapping or any of that; Swift handles it
    // all automatically... thus the name of the technology!
    print("\(trainee.name) (rank \(trainee.rank)) is \(trainee.description).")

    // When we exit the function, Swift bumps the reference count
    // back down to 1 automatically.
}</code></pre>
<p>Finally, here is the (much longer, because all the reference counting is done explicitly) reference-counted Rust version:</p>
<pre class="rust"><code>use std::rc::Rc;

pub struct WouldBeJedi {
    name: String,
    rank: u8,
    description: String,
}

fn main() {
    let trainee = WouldBeJedi {
        name: "Zayne Carrick".to_string(),
        rank: 1,
        description: "not very competent, but a great hero".to_string()
    };

    let wrapped_trainee = Rc::new(trainee);

    // Start by calling `clone()` to get a *reference* to the
    // trainee. This increases the reference count by one.
    let ref_trainee = wrapped_trainee.clone();

    // Then pass the reference to the `describe()` function.
    // Note that we *move* the reference to the function, so
    // once the function returns, the reference will go out
    // of scope, and the reference count will decrement.
    describe(ref_trainee);

    // When `main` ends, several things will happen in order:
    //
    // 1. The reference count on the `wrapped_trainee` will
    //    go to zero. As a result, the `wrapped_trainee`
    //    pointer---the `Rc` type we created---will get
    //    cleaned up.
    // 2. Once `wrapped_trainee` has been cleaned up, Rust
    //    will notice that there are no more references
    //    anywhere to `trainee` and clean it up as well.
    //
    // (More on this below.)
}

fn describe(trainee: Rc<WouldBeJedi>) {
    // We now have a *reference* to the underlying data, and
    // therefore can freely access the underlying data.
    println!("{} (rank {}) is {}.",
             trainee.name,
             trainee.rank,
             trainee.description);

    // When we exit the function, Rust destroys this *owned*
    // clone of the reference, and that bumps the reference
    // count back down to 1 automatically.
}</code></pre>
<p>Note that if we strip out all the explanatory comments and details, the <em>normal</em> versions of the Rust and Swift code are pretty similar.</p>
<p>Rust—</p>
<pre class="rust"><code>struct WouldBeJedi {
    name: String,
    rank: u8,
    description: String,
}

impl WouldBeJedi {
    fn new(name: &str, rank: u8, description: &str) -> WouldBeJedi {
        WouldBeJedi {
            name: name.to_string(),
            rank: rank,
            description: description.to_string()
        }
    }
}

fn main() {
    let trainee = WouldBeJedi::new(
        "Zayne Carrick",
        1,
        "not very competent, but still a great hero");

    describe(&trainee);
}

fn describe(trainee: &WouldBeJedi) {
    println!("{} (rank {}) is {}.",
             trainee.name,
             trainee.rank,
             trainee.description);
}</code></pre>
<p>Swift (as usual, is <em>slightly</em> briefer than Rust)—</p>
<pre class="swift"><code>class WouldBeJedi {
    let name: String
    let rank: UInt8
    let description: String

    init(name: String, rank: UInt8, description: String) {
        self.name = name
        self.rank = rank
        self.description = description
    }
}

func main() {
    let aTrainee = WouldBeJedi(
        name: "Zayne Carrick",
        rank: 1,
        description: "not very competent, but a great hero")

    describe(aTrainee)
}

func describe(_ trainee: WouldBeJedi) {
    print("\(trainee.name) (rank \(trainee.rank)) is \(trainee.description).")
}</code></pre>
<p>Note that in both of these implementations, all the actual cleanup of the memory is handled behind the scenes—this feels much more like writing Python than writing C, <em>especially</em> for complex data types. Not least because this same kind of nice cleanup can happen for complex, heap-allocated types like dynamically-sized vectors/arrays, etc. Both languages just manage it automatically. (The same is true of modern C++, for the most part, but it has a more complicated story there because of its relationship with C, where <code>malloc</code> and <code>free</code> and friends run rampant and are quite necessary for writing a lot of kinds of code.) Most of the time, when you’re done using data, you just <em>stop using it</em>, and both Rust and Swift will clean it up for you. The feel of using either language is fairly similar, though the underlying semantics are quite different.</p>
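<p>One thing worth underlining about both languages: that cleanup is <em>deterministic</em>—it happens at a predictable point in the program, not whenever a collector gets around to it. A little sketch of my own (using the <code>Drop</code> trait discussed in the next section) makes the timing visible:</p>
<pre class="rust"><code>struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("cleaning up {}", self.0);
    }
}

fn main() {
    let _outer = Noisy("outer");

    {
        let _inner = Noisy("inner");
        println!("inner scope ends next");
    } // `_inner` is dropped right here, every time.

    println!("main ends next");
} // `_outer` is dropped here.</code></pre>
<p>The same program will print those lines in the same order every single run; a tracing garbage collector makes no such promise.</p>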
<hr />
</section>
<section id="part-2-deconstructiondeinitialization" class="level2">
<h2>Part II: Deconstruction/Deinitialization</h2>
<p>Both Rust and Swift recognize that, the ordinary case notwithstanding, there are many times when you <em>do</em> need to run some cleanup as part of tearing down an object. For example, if you had an open database connection attached to an object, you should return it to the connection pool before finishing tear-down of the object.</p>
<p>In Rust, this is accomplished by implementing the <code>Drop</code> trait and supplying the requisite <code>drop</code> method. Imagine we had defined a <code>WouldBeJedi</code> type, with a bunch of details about the Jedi’s lightsaber (including whether the Jedi even <em>has</em> a lightsaber). We know from the <em>Star Wars</em> movies that lightsabers turn off automatically when the Jedi dies, or even just drops it for that matter. We can implement <em>all</em> of this in Rust using just the <code>Drop</code> trait. Here’s a pretty full example.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> (Note that both of these implementations draw heavily on material I covered in <a href="http://v4.chriskrycho.com/rust-and-swift.html">previous posts</a>.)</p>
<pre class="rust"><code>#[derive(Debug)]
enum Color {
    Red,
    Blue,
    Green,
    Purple,
    Yellow
}

enum SaberState {
    On,
    Off,
}

struct Lightsaber {
    color: Color,
    blades: u8,
    state: SaberState
}

impl Lightsaber {
    pub fn new(color: Color, blades: u8) -> Lightsaber {
        if blades > 2 {
            panic!("That's just silly. Looking at you, Kylo.");
        }

        Lightsaber { color: color, blades: blades, state: SaberState::Off }
    }

    pub fn on(&mut self) {
        self.state = SaberState::On;
    }

    pub fn off(&mut self) {
        self.state = SaberState::Off;
    }
}

struct WouldBeJedi {
    name: String,
    lightsaber: Option<Lightsaber>,
}

impl WouldBeJedi {
    pub fn new(name: &str, lightsaber: Option<Lightsaber>) -> WouldBeJedi {
        WouldBeJedi { name: name.to_string(), lightsaber: lightsaber }
    }

    pub fn describe(&self) {
        let lightsaber = match self.lightsaber {
            Some(ref saber) =>
                format!("a {:?} lightsaber with {:} blades.", saber.color, saber.blades),
            None => "no lightsaber.".to_string()
        };

        println!("{} has {}", self.name, lightsaber)
    }
}

// Here's the actually important bit.
impl Drop for WouldBeJedi {
    fn drop(&mut self) {
        if let Some(ref mut lightsaber) = self.lightsaber {
            lightsaber.off();
        }
    }
}

fn main() {
    let saber = Lightsaber::new(Color::Yellow, 1);
    let a_jedi = WouldBeJedi::new("Zayne Carrick", Some(saber));
    a_jedi.describe();
}</code></pre>
<p>We can do much the same in Swift, using its deinitializers, which are fairly analogous to (but much simpler than) <a href="http://v4.chriskrycho.com/2016/rust-and-swift-xvii.html">its initializers</a>, and fulfill the same role as Rust’s <code>Drop</code> trait and <code>drop()</code> method.</p>
<pre class="swift"><code>enum Color {
    case red, blue, green, purple, yellow
}

enum SaberState {
    case on, off
}

struct Lightsaber {
    let color: Color
    let blades: UInt8
    var state: SaberState = .off

    init?(color: Color, blades: UInt8) {
        if blades > 2 {
            print("That's just silly. Looking at you, Kylo.")
            return nil
        }

        self.color = color
        self.blades = blades
    }

    mutating func on() {
        state = .on
    }

    mutating func off() {
        state = .off
    }
}

class WouldBeJedi {
    let name: String
    var lightsaber: Lightsaber?

    init(name: String, lightsaber: Lightsaber?) {
        self.name = name
        self.lightsaber = lightsaber
    }

    deinit {
        self.lightsaber?.off()
    }

    func describe() {
        let saberDescription: String
        if let saber = self.lightsaber {
            saberDescription = "a \(saber.color) lightsaber with \(saber.blades) blades."
        } else {
            saberDescription = "no lightsaber."
        }

        print("\(name) has \(saberDescription)")
    }
}

func main() {
    let saber = Lightsaber(color: .yellow, blades: 1)
    let aJedi = WouldBeJedi(name: "Zayne Carrick", lightsaber: saber)
    aJedi.describe()
}</code></pre>
<p>This is a bit briefer, but that’s mostly down to Swift’s shorthand for optionals (the <code>?</code> operator), which we’ll get to in a future post.</p>
<p>Curiously, <code>struct</code> and <code>enum</code> types <em>cannot</em> have deinitializers in Swift. I expect this has something to do with their being value types rather than reference types, but the book offers no comment. (If a reader knows the answer, I’d welcome clarification.)</p>
<p>Much as in the discussion of initializers, the usual patterns in Rust’s and Swift’s approaches come into play. Rust opts to build the pattern on the same basic language machinery (traits). Swift uses a bit of syntactical sugar dedicated to the purpose. It’s undeniable that the Swift version is a bit briefer.</p>
<p>However, there are a couple upsides to Rust’s approach. First, it is applicable to <em>all</em> types, where Swift’s applies only to classes. Second, there is no additional syntax to remember. <code>Drop</code> is just a trait like any other, and <code>drop</code> a method like any other. Third, then, this means that you can trigger it explicitly elsewhere if you need to, and as a result you can define whatever kind of custom deconstruction behavior you might need. (One caveat: Rust forbids calling the <code>Drop::drop</code> method directly—that would risk a double free—so instead you hand the value to <code>std::mem::drop</code>, available everywhere as plain <code>drop</code>, which takes ownership and runs the destructor for you.) If we’d created <code>a_jedi</code> above in Rust, we could simply write <code>drop(a_jedi)</code> anywhere:</p>
<pre class="rust"><code>fn prove_incompetent(a_jedi: WouldBeJedi) {
    // make some series of grievous mistakes which mean
    // you're no longer able to be a Jedi and as such,
    // among other things, lose your lightsaber...
    drop(a_jedi);

    // other stuff
}</code></pre>
<p>Or (going a bit more abstract) we could define a <code>daring_derring_do()</code> method which takes <code>self</code> by value and drops it itself:</p>
<pre class="rust"><code>impl WouldBeJedi {
    pub fn daring_derring_do(self) {
        // do some other operation, like freeing slaves from
        // a secret colony of slavers. But if it fails...
        drop(self);
    }
}</code></pre>
<p>Or, really, define <em>any</em> behavior which culminates in the value being dropped. That’s extremely powerful, and it’s the upside that comes with <code>Drop</code> just being a trait whose behavior we have to define ourselves.</p>
<p>That takes us back to one of the fundamental differences in design between the two languages. Rust goes out of its way to leave power in the hands of the user, at the cost of requiring the user to be a bit more explicit. Swift prioritizes brevity and productivity, but it gets there by taking some of the power out of the hands of the developer. Neither of these is wrong, <em>per se</em>. They’re just aiming for (and in this case, I think, fairly successfully landing in) somewhat different spots on a spectrum of tradeoffs.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xvii.html"><strong>Previous:</strong> More on initializers!</a></li>
</ul>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I did, however, cover them <a href="http://www.newrustacean.com/show_notes/e015/index.html" title="e015: Not dumb pointers">quite recently</a> on my podcast. Yes, this <em>is</em> another shameless plug.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Mostly, anyway. I believe the Swift compiler also does some degree of static analysis similar to that done by Rust—though to a <em>much</em> lesser extent and, speaking purely descriptively, much less rigorously (it just has different goals). Swift then uses that analysis to handle things at compile time rather than via reference counts if it’s able to determine that it can do so.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>We could, if we so desired, get this same basic behavior in Rust. We can easily imagine a world in which every type was automatically wrapped in <code>Rc</code> or <code>Arc</code>, and in fact, I’d be very interested to see just such a language—something which was only a thin layer over Rust, keeping all its semantics but wrapping some or all non-stack-allocated types in <code>Rc</code> or <code>Arc</code> as appropriate. (Something like <a href="http://manishearth.github.io/blog/2015/09/01/designing-a-gc-in-rust/">this</a>, but done behind the scenes rather than manually opted into.) You’d incur some performance costs, but with the benefit that you’d have an <em>extremely</em> ergonomic, practical, ML-descended language quite appropriate for slightly higher-level tasks, and without the radical shift required by switching to a lazily-evaluated, purely functional language like Haskell.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Notably, those tradeoffs are often entirely worth it, and high-performance VMs have astoundingly good characteristics in many ways: the JVM, the CLR, and the major JavaScript VMs all deliver excellent performance at this point.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>I <em>might</em> have gotten slightly carried away in the details here. I’m just a little bit of a nerd.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Consistency in User Interfaces2016-07-15T10:37:00-04:002016-07-15T10:37:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-07-15:/2016/consistency-in-user-interfaces.html<p>People sometimes ask what I mean when I say Git’s UI is maddeningly inconsistent. Here’s a concrete example: what are the commands to list tags, branches, and stashes?</p>
<ul>
<li><code>git tag --list</code></li>
<li><code>git branch --list</code></li>
<li><code>git stash list</code></li>
</ul>
<p>Follow that up by noticing the difference in meaning for the …</p><p>People sometimes ask what I mean when I say Git’s UI is maddeningly inconsistent. Here’s a concrete example: what are the commands to list tags, branches, and stashes?</p>
<ul>
<li><code>git tag --list</code></li>
<li><code>git branch --list</code></li>
<li><code>git stash list</code></li>
</ul>
<p>Follow that up by noticing the difference in meaning for the <code>-v</code> flag between the commands:</p>
<ul>
<li><code>git branch -v</code>: <em>verbose</em> mode: list the hash with an abbreviated commit summary</li>
<li><code>git tag -v</code>: <em>verify</em> a tag against its GPG signature</li>
<li><code>git stash list -v</code>: no-op, completely ignored</li>
</ul>
<p>This is <em>disastrously</em> bad user interface design, and there is literally no reason for it except that the developers of Git, led by Linus Torvalds, don’t care about designing for end users. They hack in whatever commands seem to make the most sense right here and right now, and call it good—and then imply or directly state that anyone who has a problem with it is stupid or lazy.</p>
<p>But users are neither stupid nor lazy, and it is not stupid or lazy to want a system to behave in a consistent way. Imagine if the buttons on your car’s media dashboard (a plastic one where the labels stay the same) did different things depending on whether you were in <em>Drive</em> or <em>Reverse</em>. Or if the light switches in your house behaved differently if you were using your toaster than if you were vacuuming, “on” and “off” labels notwithstanding.</p>
<p>Good user interface design is no less applicable to a command-line utility than to a pretty iOS app. Don’t let Linus Torvalds or anyone else tell you otherwise.</p>
Rust and Swift (xvii)2016-06-30T23:00:00-04:002016-07-04T10:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-06-30:/2016/rust-and-swift-xvii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="http://v4.chriskrycho.com/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>In the <a href="http://v4.chriskrycho.com/2016/rust-and-swift-xvi.html">last part</a>, I talked through the first chunk of the Swift book’s material on initializers. But it’s a long section, and I definitely didn’t cover everything. (I also got a few bits incorrect, and thankfully got great feedback to tighten it up from Twitter, so if you read it right after I posted it, you might skim back through and find the places where I added “<strong>Edit:</strong> …”)</p>
<p>Picking up from where we left off, then. Swift has a number of further initializer types, some of which map rather directly to the way initializers work in Rust, and some of which have no <em>direct</em> analog at all.</p>
<p>In the first category are the memberwise initializers Swift supplies by default for <em>all</em> types. The most basic <code>init</code> method just uses the names of the members of any given <code>struct</code> or <code>class</code> type in Swift (as in the previous section, I’m going to use the types the Swift book uses for simplicity):</p>
<pre class="swift"><code>struct Size {
    var height = 0.0, width = 0.0
}

let someSize = Size(height: 1.0, width: 2.0)</code></pre>
<p>This actually looks almost exactly like the normal way we construct types in Rust, where the same basic pattern would look like this:</p>
<pre class="rust"><code>struct Size {
    height: f64,
    width: f64,
}

let some_size = Size { height: 1.0, width: 2.0 };</code></pre>
<p>There are two big differences between the languages here. The first, and most immediately apparent, is syntactical: in this case, Rust doesn’t have a function-call syntax for creating instances, and Swift does. Swift’s syntax is similar to one of the several C++ constructor patterns, or especially to Python’s initializer calls (if we made a point to be explicit about the keyword arguments):</p>
<pre class="python"><code>class Size:
    def __init__(self, height=0.0, width=0.0):
        self.height = height
        self.width = width

someSize = Size(height=1.0, width=2.0)  # unnecessarily explicit</code></pre>
<p>The second, and more significant, is that the default, memberwise initializer in Swift is only available <em>if you have not defined any other initializers</em>. This is very, <em>very</em> different from Rust, where there’s not really any such thing as a dedicated initializer—just methods. If we defined <code>Size::new</code> or <code>Size::default</code> or <code>Size::any_other_funky_initializer</code>, it wouldn’t make a whit of difference in our ability to define the type this way.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> However, and this is important: because Rust has field-level public vs. private considerations, we cannot always do memberwise initialization of any given <code>struct</code> type there, either; it is just that the reasons are different. So:<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<pre class="rust"><code>mod shapes {
    pub struct Rectangle {
        pub height: f64,
        pub width: f64,
        area: f64,
    }
}

fn main() {
    // This won't work: we haven't constructed `Rectangle::area`, and as
    // we noted last time, you cannot partially initialize a struct.
    let some_size = shapes::Rectangle { height: 1.0, width: 2.0 };

    // But neither will this, because `area` isn't public:
    let some_other_size = shapes::Rectangle { height: 1.0, width: 2.0, area: 2.0 };
}</code></pre>
<p>Swift lets you refer to <em>other</em> initializers on the same type (reinforcing that <code>init()</code> is basically a kind of method, albeit one with some special rules and some special sugar). You do that by calling <code>self.init()</code>, and—very importantly—you can only call it from within another initializer. No funky reinitializations or anything like that. The net result is that if you have a couple different variations on ways you might initialize a type, you still get the benefit of reusability; you don’t have to reimplement the same initialization function over and over again. Do whatever <em>additional</em> setup is required in any given instance, and then call a common base initializer.</p>
<p>With Rust, again, we just have methods, so you <em>could</em> of course call them wherever you like. However, those methods are distinguished as being type-level or instance-level methods by their signatures, rather than by keyword. If the first argument is (some variant on) <code>self</code>, it’s an instance method, otherwise, a type-level method. This eliminates any potential confusion around the initializers:</p>
<pre class="rust"><code>struct Foo {
    pub a: i32,
}

impl Foo {
    pub fn new(a: i32) -> Foo {
        Foo { a: a }
    }

    pub fn bar(&self) {
        // yes:
        let another_foo = Foo::new(42);

        // no (won't even compile):
        // let self_foo = self.new(42);
    }
}</code></pre>
<p>You can (of course!) build up a type through multiple layers of methods which are useful to compose an instance <em>together</em>. This is what the <a href="http://doc.rust-lang.org/stable/style/ownership/builders.html"><em>builder pattern</em></a> is all about. There are definitely times when you want to be able to tweak how your initialization plays out, and being able to do that without just passing in some hairy set of options in a special data type is nice.</p>
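<p>A minimal sketch of that pattern (the <code>Request</code>/<code>RequestBuilder</code> names are my own illustration, not taken from the linked style guide): each builder method consumes and returns the builder, so calls chain, and <code>build</code> produces the finished value.</p>

```rust
// Illustrative builder-pattern sketch: defaults live in `new`, and the
// chainable methods tweak them before `build` assembles the final type.
#[derive(Debug, PartialEq)]
struct Request {
    url: String,
    timeout_secs: u64,
    retries: u32,
}

struct RequestBuilder {
    url: String,
    timeout_secs: u64,
    retries: u32,
}

impl RequestBuilder {
    fn new(url: &str) -> RequestBuilder {
        // Sensible defaults; the methods below override them as needed.
        RequestBuilder { url: url.to_string(), timeout_secs: 30, retries: 0 }
    }

    fn timeout(mut self, secs: u64) -> RequestBuilder {
        self.timeout_secs = secs;
        self
    }

    fn retries(mut self, n: u32) -> RequestBuilder {
        self.retries = n;
        self
    }

    fn build(self) -> Request {
        Request { url: self.url, timeout_secs: self.timeout_secs, retries: self.retries }
    }
}

fn main() {
    let req = RequestBuilder::new("https://example.com")
        .timeout(10)
        .retries(3)
        .build();
    println!("{:?}", req);
}
```

<p>The nice property is exactly the one described above: callers tweak only the pieces of initialization they care about, without a hairy options type.</p>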
<p>One other important qualification on the Swift initializers: those default, memberwise constructors you get for free? You <em>only</em> get them for free if you don’t define your own initializers. (The closest analogy to this in Rust is that you’ll have issues if you try to both <code>#[derive(Default)]</code> <em>and</em> <code>impl Default for Foo</code>, since both will give you an implementation of <code>Foo::default()</code>.) You can get around this in Swift by using an <em>extension</em>. We’ll come back to that in a future post.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> You can also get around it by supplying a parameter-less, body-less initializer in addition to any other initializers you supply, so: <code>init() {}</code>. (This, frankly, seems like a hack to me. It’s a <em>useful</em> hack, given the other constraints, but these kinds of things pile up.) Similarly, you can just reimplement member-wise initializers yourself if you have a reason to (say, if you’ve implemented any <em>others</em> and therefore the defaults no longer exist).</p>
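<p>To sketch the Rust side of that contrast (with illustrative names): deriving <code>Default</code> and defining your own constructors coexist happily, and neither one disables struct-literal construction.</p>

```rust
// Deriving `Default` gives us `Size::default()`; adding our own
// constructor takes nothing away, unlike Swift's memberwise rule.
#[derive(Default, Debug)]
struct Size {
    height: f64,
    width: f64,
}

impl Size {
    // An extra constructor; `Size::default()` and struct literals
    // both keep working no matter how many of these we add.
    fn square(side: f64) -> Size {
        Size { height: side, width: side }
    }
}

fn main() {
    let a = Size::default();                  // { 0.0, 0.0 }
    let b = Size { height: 1.0, width: 2.0 }; // memberwise, still available
    let c = Size::square(3.0);
    println!("{:?} {:?} {:?}", a, b, c);
}
```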
<p>Now things take a turn into Swift-only territory <a href="http://v4.chriskrycho.com/2016/rust-and-swift-xv.html">again</a> as we look at initialization in the context of inheritance. (As mentioned last time: Rust will eventually get inheritance-like behavior, but it’s coming much later, and is not going to be <em>exactly</em> like classical inheritance. Rust <em>strongly</em> favors composition over inheritance, where Swift <em>lightly</em> does but still supports the latter.)</p>
<p>Swift has two kinds of initializers for classes. The first, a <em>designated initializer</em>, is required: it must fully initialize every property on a class, and call the superclass initializer (assuming there is one). Designated initializers can be inherited, but again: every class must have one.</p>
<p>There are also <em>convenience initializers</em>, which provide variant APIs for setting up any given class. These (by definition, given what we said a moment ago) <em>must</em> call a designated initializer along the way. These could be useful in a lot of different scenarios: setting up variants on the class (as in our temperature examples from before), doing alternate setup depending on initial conditions, etc.</p>
<p>The only difference between the two syntactically is that <em>convenience</em> initializers get the <code>convenience</code> keyword in front of the <code>init</code> declaration, so:</p>
<pre class="swift"><code>class Foo {
    var bar: Int
    let quux: String

    // designated
    init(_ bar: Int, _ quux: String) {
        self.bar = bar
        self.quux = quux
    }

    // A convenience initializer which only takes the string.
    convenience init(_ quux: String) {
        self.init(0, quux)
    }
}</code></pre>
<p>The Swift book gives a set of rules about how these designated and convenience initializers must behave. The short version is that convenience initializers (eventually) have to call a designated initializer from <em>their own</em> class, and designated initializers have to call a designated initializer from the <em>superclass</em>. This is an implementation detail, though: from the perspective of a <em>user</em> of the class, it doesn’t matter which initializer is called.</p>
<p>The other important bit about Swift <em>class</em> initialization is that it is a two-phase process, which you might think of as “primary initialization” and “customization.” The primary initialization sets up the properties on a class <em>as defined by the class which introduced them</em>. The following sample should illustrate how it plays out:</p>
<pre class="swift"><code>class Foo {
    let plainTruth = "Doug Adams was good at what he did."
    var answer = 0
    var baz = 0

    init() {
        baz = answer / 2
    }
}

// Bar inherits from Foo
class Bar: Foo {
    var question = "What is the meaning of life, the universe, and everything?"

    override init() {
        super.init()  // calls Foo.init()
        answer = 42
    }

    convenience init(newQuestion question: String, newAnswer answer: Int) {
        self.init()  // calls own `init()` first…
        self.question = question  // …then customizes
        self.answer = answer
    }
}</code></pre>
<p>When building a <code>Bar</code> via either the designated or convenience initializer, <code>plainTruth</code>, <code>answer</code>, and <code>baz</code> will be set up from <code>Foo</code>, <code>question</code> will be set up in <code>Bar</code>, and <code>answer</code> will be reassigned in <code>Bar</code>’s designated initializer (which in turn calls the superclass designated initializer). If the convenience initializer is used, it first runs the designated initializer, and only then overrides those new defaults with the arguments passed by the caller. The machinery all makes good sense; I appreciate that there are no weird edge cases in the initialization <em>rules</em> here. (There <em>are</em> a bunch of special rules about which initializers get inherited; I’m just going to leave those aside at this point as they’re entirely irrelevant for a comparison between the languages. We’re already pretty far off into the weeds here.)</p>
<p>Obviously, none of this remotely applies to Rust at all. Not having inheritance <em>does</em> keep these things simpler (though of course it also means there’s a tool missing from your toolbox which you might miss). And of course, the rules around <em>method resolution</em> are not totally trivial there, especially now that <a href="https://github.com/rust-lang/rfcs/blob/master/text/1210-impl-specialization.md"><code>impl</code> specialization</a> is making its way <a href="https://github.com/rust-lang/rust/issues/31844">into the language</a>. But those don’t, strictly speaking, affect <em>initialization</em>.</p>
<p>To account for the case that initialization can fail, Swift lets you define <em>failable</em> initializers, written like <code>init?()</code>. Calling such an initializer produces an optional. You trigger the <code>nil</code>-valued optional state by writing <code>return nil</code> at some point in the body of the initializer. Quoting from the Swift book, though, “Strictly speaking, initializers do not return a value…. Although you write <code>return nil</code> to trigger an initialization failure, you do not use the <code>return</code> keyword to indicate initialization success.” These failable initializers get the same overall behavior and treatment as normal initializers in terms of delegating to other initializers within the same class, and inheriting them from superclasses.</p>
<pre class="swift"><code>class Foo {
    let bar: Int

    init?(succeed: Bool) {
        if !succeed {
            return nil
        }
        bar = 42
    }
}

let foo = Foo(succeed: true)
print("\(foo?.bar)")   // Optional(42)

let quux = Foo(succeed: false)
print("\(quux?.bar)")  // nil</code></pre>
<p>This is another of the places where Swift’s choice to treat initialization as a special case, not just another kind of method, ends up having some weird side effects. If <code>init</code> calls were <em>methods</em>, they would always just be <em>returning the type</em>. This is exactly what we see in Rust, of course. To be clear, there are reasons why the Swift team made that choice, and many of them we’ve already touched on incidentally; the long and short of it is that inheritance adds some wrinkles. These aren’t <em>constructors</em>, they’re <em>initializers</em>. The point, per the Swift book, is “to ensure that <code>self</code> is fully and correctly initialized by the time that initialization ends.” If you’re familiar with Python, you can think of Swift initializers as being quite analogous to <code>__init__(self)</code> methods, which similarly are responsible for <em>initialization</em> but not <em>construction</em>. When we build a type in Rust, by contrast, we’re doing something much more like calling Python <code>__new__(cls)</code> methods, which <em>do</em> construct the type.</p>
<p><em><strong>Edit:</strong> interestingly, I’m <a href="https://twitter.com/austinzheng/status/749831726122217473">informed via Twitter</a> that Swift initializers can also throw errors. (Thanks, Austin!) The Swift book doesn’t mention this because it hasn’t gotten to error-handling yet (and so, neither have we).<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></em></p>
<p>You can of course write failable constructors in Rust, too:</p>
<pre class="rust"><code>struct Foo {
    bar: i64,
}

impl Foo {
    pub fn optional_new(succeed: bool) -> Option<Foo> {
        if succeed { Some(Foo { bar: 0 }) }
        else { None }
    }
}

let foo = Foo::optional_new(true);
match foo {
    Some(f) => println!("{}", f.bar),
    None => println!("None"),
};</code></pre>
<p>There are conditions in both languages where you’d want to do this: places where an initialization <em>can</em> fail, e.g. trying to open a file, or open a websocket, or anything where the type represents something that is not guaranteed to return a valid result. It makes sense then that in both cases, returning an <em>optional</em> value is the outcome. Of course, Rust can equally well have an initializer return a <code>Result<T, E></code>:</p>
<pre class="rust"><code>struct Waffles {
    syrup: bool,
    butter: bool,
}

impl Waffles {
    fn properly(all_supplies: bool) -> Result<Waffles, String> {
        if all_supplies {
            Ok(Waffles { syrup: true, butter: true })
        } else {
            let msg = "Who makes waffles this way???";
            Err(msg.to_string())
        }
    }
}

let waffles = Waffles::properly(true);
match waffles {
    Ok(_) => println!("Got some waffles, yeah!"),
    Err(s) => println!("{}", s),
};</code></pre>
<p><del>This is simply not the kind of thing you can do in Swift, as far as I can tell. The upside to Swift’s approach is that there is one, standard path. The downside is that if you have a scenario where it makes sense to return an error—i.e., to indicate <em>why</em> a class failed to initialize and not merely <em>that</em> it failed—you’re going to have to jump through many more hoops.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a></del> <em><strong>Edit:</strong> See above; Swift <em>can</em> do this. Moreover, the underlying semantics aren’t especially different from Rust’s. However, it does introduce <em>yet more</em> syntax, rather than just being a normal return. But we’ll talk about that in more detail when we get to error-handling.</em><a href="#fn4"><sup>4</sup></a> The downside for Rust is that there’s no shorthand; everything is explicit. The upside is the flexibility to do as makes the most sense in a given context, including defining whatever types you need and returning them as you see fit. If you need a type like <code>PartialSuccessPossible<C, P, E></code> where <code>C</code> is a complete type, <code>P</code> a partial type, and <code>E</code> an error, you can do that. (I’m not saying that’s a good idea, for the record.) That in turn flows out of building even higher level language features on lower-level features and not introducing new syntax for the most part. Trade-offs!</p>
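<p>To make that concrete, here is a sketch of that (deliberately contrived) hypothetical type, with an equally hypothetical use:</p>

```rust
// Sketch of the hypothetical `PartialSuccessPossible` named above; the
// point is only that Rust is happy to let you define whatever result
// shape your domain calls for.
enum PartialSuccessPossible<C, P, E> {
    Complete(C),
    Partial(P),
    Failed(E),
}

// Hypothetical use: parse a batch of numbers, distinguishing total
// success, partial success, and outright failure.
fn parse_all(inputs: &[&str]) -> PartialSuccessPossible<Vec<i64>, Vec<i64>, String> {
    let parsed: Vec<i64> = inputs.iter().filter_map(|s| s.parse().ok()).collect();
    if parsed.len() == inputs.len() {
        PartialSuccessPossible::Complete(parsed)
    } else if !parsed.is_empty() {
        PartialSuccessPossible::Partial(parsed)
    } else {
        PartialSuccessPossible::Failed("nothing parsed".to_string())
    }
}

fn main() {
    match parse_all(&["1", "2", "nope"]) {
        PartialSuccessPossible::Complete(v) => println!("all {} parsed", v.len()),
        PartialSuccessPossible::Partial(v) => println!("only {} parsed", v.len()),
        PartialSuccessPossible::Failed(e) => println!("error: {}", e),
    }
}
```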
<p>And with that, we’re done talking about initializers. This was a <em>huge</em> topic—but it makes sense. If you don’t nail this down carefully, you’ll be in for a world of hurt later, and that goes whether you’re designing a language or just using it to build things.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xvi.html"><strong>Previous:</strong> Initialization: another area where Swift has a lot more going on than Rust.</a></li>
<li><strong>Next:</strong> Deinitialization: ownership semantics and automatic reference counting</li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Also recall that in Rust, we would set the default values either by using the <code>#[derive(Default)]</code> annotation or by implementing the <code>Default</code> trait ourselves.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>I’m including a module because of a quirk around the public/private rules: within the same module, <code>area</code> isn’t hidden and you can actually go ahead and initialize the object.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Depending on how you think about extensions, <em>either</em> Rust doesn’t have anything quite like them… <em>or</em> every type implementation is just an extension, because <code>impl</code> allows you to extend <em>any</em> data type in basically arbitrary ways (a few caveats of course). More on all of this when we get there.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Here’s a preview of what that would look like, though (fair warning, there’s a lot going on here we haven’t talked about!):</p>
<pre class="swift"><code>enum Setup {
    case succeed
    case error
    case fail
}

enum BarSetupError: ErrorProtocol {
    case argh
}

class Bar {
    let blah: Int

    init?(setup: Setup) throws {
        switch setup {
        case .succeed:
            blah = 42
        case .error:
            throw BarSetupError.argh
        case .fail:
            return nil
        }
    }
}

do {
    let bar = try Bar(setup: .succeed)
    print("\(bar!.blah)")

    let baz = try Bar(setup: .fail)
    print("\(baz?.blah)")

    let quux = try Bar(setup: .error)
    print("\(quux?.blah)")
} catch BarSetupError.argh {
    print("Oh teh noes!")
}</code></pre>
<p>The output from this would be <code>42</code>, <code>nil</code>, and <code>Oh teh noes!</code>.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p><del>It’s conceivable this is actually possible, but nothing in <em>The Swift Programming Language</em> even hints at it, if so.</del> See above!<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Y Combinators, how do they even work?2016-06-19T09:20:00-04:002016-06-19T09:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-06-19:/2016/y-combinators-how-do-they-even-work.html<p><i class=editorial>I was reading <a href="http://matt.might.net/articles/implementation-of-recursive-fixed-point-y-combinator-in-javascript-for-memoization/">a post</a> by <a href="http://matt.might.net">Matt Might</a>, a computer science professor at the University of Utah, about Y Combinators, and I was having a hard time tracking with some of it just by reading. The way I normally solve this problem is to write it out—and, optimally, to write it out in something roughly like <a href="https://wiki.haskell.org/Literate_programming">Literate Haskell</a> or <a href="http://coffeescript.org/#literate">Literate CoffeeScript</a>. That’s exactly what you’ll find below; this is basically <em>commentary</em> on Might’s original post.</i></p>
<p><i class=editorial>A few other prefatory notes:</i></p>
<ol type="1">
<li><i class=editorial>Since this is commentary, I’m not focusing on explaining combinators in general. For a very helpful explanation, though, both of what combinators are and why you’d ever want to use them, <a href="http://programmers.stackexchange.com/a/117575">read this</a>.</i></li>
<li><i class=editorial>The Y Combinator itself isn’t all that useful for ordinary programming. It <em>is</em> really useful as a way of thinking about how programming <em>works</em>, and that’s why I was reading about it and trying to figure out what was going on in Might’s original post.</i></li>
<li><i class=editorial>This didn’t actually all make sense to me until I also read Might’s post, <a href="http://matt.might.net/articles/python-church-y-combinator/">“Equational derivations of the Y combinator and Church encodings in Python”</a>. Which is a crazy post. But kind of fun. </i></li>
</ol>
<hr />
<p>Note for background (this was new to me today): <span class="math inline"><em>λ</em><em>v</em>.<em>e</em></span> is the function which maps v to e. In ECMAScript 2015 or later (hereafter just JS):</p>
<pre class="js"><code>const λv_e = v => e</code></pre>
<p>The Y Combinator is a higher-order functional: it is a function which takes a functional/higher-order function. Quoting from Might:</p>
<blockquote>
<p>The Y combinator takes a functional as input, and it returns the (unique) fixed point of that functional as its output. A functional is a function that takes a function for its input. Therefore, the fixed point of a functional is going to be a function.</p>
</blockquote>
<p>And a “fixed point” is an input to a function equal to the <em>output</em> of the function. (Not all functions have such.) A fixed point is where <span class="math inline"><em>f</em>(<em>x</em>) = <em>x</em></span>. He uses the example <span class="math inline"><em>x</em> = <em>x</em><sup>2</sup> − 1</span>, which has two solutions, two <em>fixed points</em>.</p>
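<p>A quick numeric check of that example (my aside, sketched in Rust): solving <span class="math inline"><em>x</em> = <em>x</em><sup>2</sup> − 1</span> means solving <span class="math inline"><em>x</em><sup>2</sup> − <em>x</em> − 1 = 0</span>, whose two roots are <span class="math inline">(1 ± √5)/2</span>, and <span class="math inline"><em>f</em></span> should map each root back to itself:</p>

```rust
// The function from Might's example…
fn f(x: f64) -> f64 {
    x * x - 1.0
}

// …and its two fixed points, (1 ± √5)/2, from the quadratic formula.
fn fixed_points() -> (f64, f64) {
    let root5 = 5.0_f64.sqrt();
    ((1.0 + root5) / 2.0, (1.0 - root5) / 2.0)
}

fn main() {
    let (a, b) = fixed_points();
    // Both hold up to floating-point error: f(a) ≈ a and f(b) ≈ b.
    println!("f({a}) = {}", f(a));
    println!("f({b}) = {}", f(b));
}
```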
<p>He starts out with the total recursion form—also known as the “crash all the things!” form—of the Y-combinator. (I’m using letters to denote the version of the combinator; this is Y-naive.)</p>
<pre class="js"><code>const Yn = (F) => F(Yn(F)) // all the recursing!</code></pre>
<p>“Crash all the things”… because of one pesky little detail: it calls itself immediately, and so recurses infinitely. Which is actually kind of a problem.</p>
<p>Might then asks: What if we transformed this a bit? He notes that we can <em>transform</em> with lambda calculus to expand what we’re doing, so:</p>
<figure>
<span class="math inline"><em>Y</em>(<em>F</em>) = <em>F</em>(<em>λ</em><em>x</em>.(<em>Y</em>(<em>F</em>))(<em>x</em>))</span>
</figure>
<p>(I haven’t done this kind of thing since undergraduate math work I did for physics, but as I was thinking about it, it made sense. I’m used to trying to <em>remove</em> extraneous variables when dealing with software, but in this case we’re using it as a tool for transforming the equation into a form that is <em>equivalent</em> but <em>expressed differently</em>.)</p>
<p>And <span class="math inline"><em>λ</em><em>x</em>.(<em>Y</em>(<em>F</em>))(<em>x</em>)</span> is equivalent to the fixed point. It’s the function which takes <span class="math inline"><em>x</em></span> as an argument and results in <span class="math inline"><em>Y</em>(<em>F</em>)(<em>x</em>)</span>; but <span class="math inline"><em>Y</em>(<em>F</em>)</span> is just another argument, so this looks just like our original <span class="math inline"><em>f</em>(<em>x</em>) = <em>x</em></span>, but with <span class="math inline"><em>Y</em>(<em>F</em>)</span> substituted for <span class="math inline"><em>f</em></span>. Can we write this in JS?</p>
<p>Here’s my implementation, using modern JS; note that it still recurses. (I’m calling this version “Y-transformed,” so <code>Yt</code>.)</p>
<pre class="js"><code>const Yt = (F) => F((x) => Yt(F)(x))</code></pre>
<p>His version:</p>
<pre class="js"><code>function Y(F) { return F(function(x) { return Y(F)(x); }); }</code></pre>
<p>Mine and his are equivalent; here’s his version transformed to modern JS:</p>
<pre class="js"><code>const Y = (F) => F((x) => Y(F)(x))</code></pre>
<p>Might then says:</p>
<blockquote>
<p>Using another construct called the U combinator, we can eliminate the recursive call inside the Y combinator, which, with a couple more transformations gets us to:</p>
</blockquote>
<p>I hated it when profs (or books!) did this when I was in college, and it frustrates me here, too. I want to <em>see</em> the transformation. I really wish Might didn’t skip how the U combinator works or what transformations he applies, because then he jumps to this form:</p>
<figure>
<span class="math inline"><em>Y</em> = (<em>λ</em><em>h</em>.<em>λ</em><em>F</em>.<em>F</em>(<em>λ</em><em>x</em>.((<em>h</em>(<em>h</em>))(<em>F</em>))(<em>x</em>)))(<em>λ</em><em>h</em>.<em>λ</em><em>F</em>.<em>F</em>(<em>λ</em><em>x</em>.((<em>h</em>(<em>h</em>))(<em>F</em>))(<em>x</em>)))</span>
</figure>
<p>Writing this out in JS is going to be a real bear. More to the point, I don’t know how he got to it; now I need to go look up the U Combinator it seems.</p>
<p>…which I’ve <a href="http://www.ucombinator.org">now done</a>. So:</p>
<blockquote>
<p>In the theory of programming languages, the U combinator, <span class="math inline"><em>U</em></span>, is the mathematical function that applies its argument to its argument; that is <span class="math inline"><em>U</em>(<em>f</em>) = <em>f</em>(<em>f</em>)</span>, or equivalently, <span class="math inline"><em>U</em> = <em>λ</em><em>f</em>.<em>f</em>(<em>f</em>)</span>.</p>
</blockquote>
<ul>
<li>That is, the U Combinator is the case where you apply a function to itself: <span class="math inline"><em>U</em>(<em>f</em>) = <em>f</em>(<em>f</em>)</span>—you can see that in the result there, where the first expression is the same as the argument handed to it (and both are functions). It’s also there in the <span class="math inline"><em>h</em>(<em>h</em>)</span> calls.</li>
<li>The transformations are just transforming from a function-declaration form to a lambda form, I think. The kind of thing where you go from <code>function a(b) { return c }</code> to <code>var a = function(b) { return c }</code> in JS. (Better, in <em>modern</em> JS, to <code>const a = (b) => c</code>.)</li>
</ul>
<p>I’ll return to that in a moment. First, writing up the JS. The innermost term is (repeated) <span class="math inline"><em>λ</em><em>x</em>.((<em>h</em>(<em>h</em>))(<em>F</em>))(<em>x</em>)</span>, so we’ll start by writing this out.</p>
<pre class="js"><code>const λ_inner = (x) => (h(h)(F))(x)</code></pre>
<p>We need the definition of <span class="math inline"><em>h</em></span> next; this comes from further out, the transformation <span class="math inline"><em>λ</em><em>h</em>.<em>λ</em><em>F</em>.<em>F</em>(<em>λ</em><sub><em>inner</em></sub>)</span> (where we’re substituting the <code>λ_inner</code> we just wrote to make this a bit easier to get our heads around).</p>
<p>Remembering that each “.” in the equation represents a mapping, i.e. a JS function call, here’s what I came up with as a fairly direct translation into JS (writing it with function definitions starting on new lines to clarify):</p>
<pre class="js"><code>const Y = (
    (h) =>
        (F) => F((x) => (h(h)(F))(x)) // substituting λ_inner from above
) (
    (h) =>
        (F) => F((x) => (h(h)(F))(x)) // substituting λ_inner from above
)</code></pre>
<p>His (note that things are aligned as they are so that it’s clear which functions match up):</p>
<pre class="js"><code>var Y = function (F) {
    return (function (x) {
               return F(function (y) { return (x(x))(y); });
           })
           (function (x) {
               return F(function (y) { return (x(x))(y); });
           });
};</code></pre>
<p>His transformed to modern JS:</p>
<pre class="js"><code>const Y = (F) => (
    (x) => F((y) => x(x)(y))
) (
    (x) => F((y) => x(x)(y))
)</code></pre>
<p>His and mine are not <em>quite</em> the same (though I know they’re equivalent because they both work). I really wish he’d explained how he got <em>this</em> substitution as well! More importantly, I wish he’d been consistent in his notation; changing variable names is… frustrating when you’re trying to follow someone’s work.</p>
<p><i class=editorial>When I get stuck on something like <em>this</em>, the way I figure it out is by writing out how the substitutions would work at each step. See below.</i></p>
<p>In any case, now that we have the Y combinator, we can use it with <code>FactGen</code>, a functional which, if you pass it the factorial function, passes back the factorial function. <code>FactGen</code> itself isn’t recursive. But with the Y Combinator, it builds a function which is <em>not</em> recursive; it doesn’t reference itself anywhere. It just needs the right kind of “factory”: a function which returns <em>another</em> function which itself behaves recursively. Here’s the factorial generator (identical to the one Might supplies, though modernized):</p>
<pre class="js"><code>const FactGen =
    (fact) =>
        (n) => n === 0 ? 1 : n * fact(n - 1)</code></pre>
<p>You call that like this:</p>
<pre class="js"><code>Y(FactGen)(5) // 120</code></pre>
<p>The <code>Y(FactGen)</code> call gets back a function which then runs on whatever input you hand it (a fairly standard pattern with curried arguments), so you could also write it like this:</p>
<pre class="js"><code>const factorial = Y(FactGen)
factorial(5) // 120</code></pre>
<p>But I’m still not sure how his and mine are equivalent.</p>
<p>A note: wrapping things in <code>(...)</code> in JS defines that wrapped content as a distinct <em>expression</em>. As long as the type of a given expression is a function, it can be called with an argument. So <code>(function() {})()</code> or <code>(() => {})()</code> takes a no-operation function and immediately executes it.</p>
<p>So in his Y combinator, the substitution goes like this:</p>
<pre class="js"><code>const Y = (F) => ( // F is FactGen
    // x is the identical function passed as argument below
    (x) =>
        // Run FactGen by taking the function below as its `fact`
        // argument.
        F(
            // `y` is the argument passed to the result of Y, e.g.
            // `fact(5)`. Recall that `x` is the function below; we
            // call it with itself. Calling x(x) will get the actual
            // factorial function returned by `FactGen`.
            (y) => x(x)(y)
        )
    // We close the *expression* which defines the outer function,
    // and call it with this next expression as an argument.
) (
    // and x here is the same function, passed as argument
    (x) =>
        // Again, run `FactGen` with this function as its argument.
        F(
            // `y`, again, will be the integer. `x(x)` again will be
            // the actual factorial function.
            (y) => x(x)(y)
        )
)</code></pre>
<p>This is pretty funky! But it works; the two anonymous functions call <em>each other</em> rather than recursing directly.</p>
<p>In mine, it goes like this, instead:</p>
<pre class="js"><code>const Ymine = (
    // Where in Might's example, the `x` function was where the
    // U Combinator was applied, here (because I followed the
    // original notation he gave) it's `h`. So it's `h` which is
    // the same function handed back and forth as argument
    // to itself.
    (h) =>
        // `h` takes a functional, which takes `FactGen` as its
        // parameter. This is similar to the outermost function in
        // Might's version.
        (F) =>
            // As in Might's version, we call `FactGen` here.
            F(
                // The form is *similar* but not identical to his,
                // because of the extra call structure. `h(h)(F)` is
                // the factorial function.
                //
                // Note that then he has `y` where I have `x`; my `x`
                // and his `y` are just the result of the computation
                // (in this case, the integer factorial).
                (x) => (h(h)(F))(x)
            )
) (
    // This is identical to the above; it's using the U Combinator.
    (h) => (F) => F((x) => (h(h)(F))(x))
)</code></pre>
<p>This is how his simplification worked: instead of generating the factorial function each time, it generated it just the once and then <em>used</em> it.</p>
<p>I still couldn’t <em>do</em> the simplification he did myself. It’ll take more practice using and thinking about combinators and combinatorial logic before I get there, but that’s okay. That’s how learning works.</p>
<p>And that’s enough playing with combinatorials for now. (Except that I’m kind of tempted to see if I can go implement the U or Y combinators—or both—in Rust.)</p>
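<p>As a down payment on that temptation, here is one sketch of mine, with a big caveat: Rust rejects the infinite type that a literal <code>h(h)</code> self-application needs, so the helper <code>fix</code> below cheats by recursing by name, which the lambda-calculus Y combinator cannot do. The generators passed to it, though, still never reference themselves, just like <code>FactGen</code>:</p>

```rust
// `fix` ties the recursive knot. NOTE: this is not a true Y combinator;
// `fix` itself recurses by name, which Rust makes easy and the lambda
// calculus forbids. The payoff is the same shape, though: generators
// never mention themselves.
fn fix<T, R>(f: &dyn Fn(&dyn Fn(T) -> R, T) -> R, x: T) -> R {
    f(&|y| fix(f, y), x)
}

// The analogue of `FactGen`: recursion happens only through the
// `fact` argument that `fix` hands in.
fn fact_gen(fact: &dyn Fn(u64) -> u64, n: u64) -> u64 {
    if n == 0 { 1 } else { n * fact(n - 1) }
}

fn main() {
    println!("{}", fix(&fact_gen, 5)); // 120
}
```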
<hr />
<p><i class=editorial>If you’re curious how I worked this out… I expanded the JS representations of the final forms (<a href="//v4.chriskrycho.com/extra/ycombinator.js">here’s the code</a>) and then stepped through the result in my JavaScript dev tools, watching how the function calls worked and what the values of each intermediate value were. It’s fascinating, and well worth your time.</i></p>
Vectors and Iterator Access in Rust2016-06-16T20:59:00-04:002016-06-16T20:59:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-06-16:/2016/vectors-and-iterator-access-in-rust.html<p><i class="editorial">In the midst of doing my reading and research for New Rustacean episode 15 (which will be out fairly soon after I post this), I bumped into this little tidbit. It doesn’t fit in the episode, so I thought I’d share it here.</i></p>
<p>When you’re dealing with …</p><p><i class="editorial">In the midst of doing my reading and research for New Rustacean episode 15 (which will be out fairly soon after I post this), I bumped into this little tidbit. It doesn’t fit in the episode, so I thought I’d share it here.</i></p>
<p>When you’re dealing with vectors in Rust, a common misstep when working with them via iterators is to <em>move</em> them when you only mean to <em>borrow</em> them. If you write <code>for i in x</code> where <code>x</code> is a vector, you’ll <em>move</em> the vector into the looping construct. Instead, you should nearly always write <code>for i in &x</code> to iterate over borrowed references to its contents, or <code>for i in &mut x</code> if you need mutable references.</p>
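<p>To make the difference concrete, here’s a small sketch (my own example, not from the episode):</p>
<pre class="rust"><code>fn main() {
    let v = vec![1, 2, 3];

    // Borrowing: `&v` hands out references to the elements, and the
    // vector remains usable after the loop.
    let mut sum = 0;
    for i in &v {
        sum += *i;
    }
    println!("sum = {}, len = {}", sum, v.len()); // `v` is still alive

    // Moving: plain `v` hands the whole vector to the loop, which
    // consumes it.
    for i in v {
        println!("{}", i);
    }

    // println!("{}", v.len()); // error[E0382]: `v` was moved above
}</code></pre>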
Testing Ember.js Mixins (and Helpers) With a Container2016-06-09T20:35:00-04:002017-04-20T07:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-06-09:/2016/testing-emberjs-mixins-with-a-container.html<p><i>Updated to note that the same concerns apply to helpers. You can always see the full revision history of this item <a href="https://github.com/chriskrycho/chriskrycho.com/commits/master/content/tech/ember-js-mixins-container.md">here</a>.</i></p>
<hr />
<p>Today I was working on an Ember.js <a href="http://emberjs.com/api/classes/Ember.Mixin.html#content">mixin</a> for the new mobile web application we’re shipping at Olo, and I ran into an interesting problem when …</p><p><i>Updated to note that the same concerns apply to helpers. You can always see the full revision history of this item <a href="https://github.com/chriskrycho/chriskrycho.com/commits/master/content/tech/ember-js-mixins-container.md">here</a>.</i></p>
<hr />
<p>Today I was working on an Ember.js <a href="http://emberjs.com/api/classes/Ember.Mixin.html#content">mixin</a> for the new mobile web application we’re shipping at Olo, and I ran into an interesting problem when trying to test it.</p>
<p>When you’re testing mixins (or helpers), you’re generally not working with the normal Ember container.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> In fact, the default test setup for mixins doesn’t have <em>any</em> container in play. It just looks like this (assuming you ran <code>ember generate mixin bar</code> in an app named <code>foo</code>):</p>
<pre class="js"><code>import Ember from 'ember';
import BarMixin from 'foo/mixins/bar';
import { module, test } from 'qunit';

module('Unit | Mixin | bar');

// Replace this with your real tests.
test('it works', function(assert) {
  let BarObject = Ember.Object.extend(BarMixin);
  let subject = BarObject.create();
  assert.ok(subject);
});</code></pre>
<p>Note two things:</p>
<ol type="1">
<li>It uses the basic QUnit <code>module</code> setup, not the ember-qunit <code>moduleFor</code> setup.</li>
<li>It assumes you’re generating a new object instance for every single test.</li>
</ol>
<p>Both of those assumptions are fine, <em>if you don’t need to interact with the container</em>. In many cases, that’s perfectly reasonable—I’d go so far as to say that most mixins and helpers probably <em>shouldn’t</em> have any dependency on the container.</p>
<p>In the specific case I was working on, however, the point of the mixin was to abstract some common behavior which included all the interactions with a <a href="https://guides.emberjs.com/v2.6.0/applications/services/">service</a>. This meant making sure the dependency injection worked in the unit test. This in turn meant dealing with the container. So let’s see what was involved in that. (You can generalize this approach to any place in the Ember ecosystem where you need to test something which doesn’t normally have the container set up.)</p>
<p>We start by switching from the basic <code>qunit</code> helpers to using the <code>ember-qunit</code> helpers.</p>
<pre class="js"><code>// Replace this...
import { module, test } from 'qunit';
module('Unit | Mixin | bar');
// with this:
import { moduleFor, test } from 'ember-qunit';
moduleFor('mixin:bar', 'Unit | Mixin | Bar');</code></pre>
<p>The <code>moduleFor()</code> helper has two things going for it—one of which we <em>need</em>, and one of which isn’t strictly <em>necessary</em> but has some nice functionality. Both will help us when registering things with the container. Those two features:</p>
<ol type="1">
<li>It does support the use of the container. In fact, it’s declaring how this mixin relates to the container in the first argument to the helper function: <code>'mixin:foo'</code> is the definition of the mixin for injection into the container.</li>
<li>Any functions we define on the options argument we can pass to the <code>moduleFor()</code> helper are available on the <code>this</code> of the test.</li>
</ol>
<p>Now, in the first version of this, I had set up a common <code>Ember.Object</code> which had mixed in the <code>BarMixin</code>, so:</p>
<pre class="js"><code>const BarObject = Ember.Object.extend(BarMixin);</code></pre>
<p>Then, in each test, I created instances of this to use:</p>
<pre class="js"><code>test('test some feature or another', function(assert) {
  const subject = BarObject.create();
  // ...do stuff and test it with `assert.ok()`, etc.
});</code></pre>
<p>The problem was that any of those tests which required a container injection always failed. Assume we have a service named <code>quux</code>, and that it’s injected into the mixin like this in <code>foo/app/mixins/bar.js</code>:</p>
<pre class="js"><code>import Ember from 'ember';

export default Ember.Mixin.create({
  quux: Ember.inject.service()
});</code></pre>
<p>Any test which actually tried to <em>use</em> <code>quux</code> would simply fail because of the missing container (even if you specified in the test setup that you needed the service):</p>
<pre class="js"><code>test('it uses quux somehow', function(assert) {
  const subject = BarObject.create();
  const quux = subject.get('quux'); // throws Error
});</code></pre>
<p>Specifically, you will see <code>Attempting to lookup an injected property on an object without a container</code> if you look in your console.</p>
<p>Taking advantage of the two <code>ember-qunit</code> features, though, we can handle all of this.</p>
<pre class="js"><code>import Ember from 'ember';
import BarMixin from 'foo/mixins/bar';
import { moduleFor, test } from 'ember-qunit';

const { getOwner } = Ember;

moduleFor('mixin:bar', 'Unit | Mixin | bar', {
  // The `needs` property in the options argument tells the test
  // framework that it needs to go find and instantiate the `quux`
  // service. (Note that if `quux` depends on other injected
  // services, you have to specify that here as well.)
  needs: ['service:quux'],

  // Again: any object we create in this options object will be
  // available on the `this` of every `test` function below. Here,
  // we want to get a "test subject" which is attached to the
  // Ember container, so that the container is available to the
  // test subject itself for retrieving the dependencies injected
  // into it (and defined above in `needs`).
  subject() {
    const BarObject = Ember.Object.extend(BarMixin);

    // This whole thing works because, since we're in a
    // `moduleFor()`, `this` has the relevant method we need to
    // attach items to the container: `register()`.
    this.register('test-container:bar-object', BarObject);

    // `Ember.getOwner` is the public API for getting the
    // container to do this kind of lookup. You can use it in lots
    // of places, including but not limited to tests. Note that
    // because of how the dependency injection works, what we
    // get back from the lookup is not `BarObject`, but an
    // instance of `BarObject`. That means that we don't need to
    // do `BarObject.create()` when we use this below; Ember
    // already did that for us.
    return getOwner(this).lookup('test-container:bar-object');
  }
});

test('the mixin+service does what it should', function(assert) {
  // We start by running the subject function defined above. We
  // now have an instance of an `Ember.Object` which has
  // `BarMixin` applied.
  const subject = this.subject();

  // Now, because we used a test helper that made the container
  // available, declared the dependencies of the mixin in `needs`,
  // and registered the object we're dealing with here, we don't
  // get an error anymore.
  const quux = subject.get('quux');
  assert.ok(quux);
});</code></pre>
<p>So, in summary:</p>
<ol type="1">
<li>Use the <code>ember-qunit</code> helpers if you need the container.</li>
<li>Define whatever dependencies you have in <code>needs</code>, just as you would in any other test.</li>
<li>Register the mixin-derived object (whether <code>Ember.Object</code>, <code>Ember.Route</code>, <code>Ember.Component</code>, or whatever else) in a method on the options argument for <code>moduleFor()</code>. Use that to get an instance of the object and you’re off to the races!</li>
</ol>
<p>One final consideration: while in this case it made good sense to use this approach and make the service injection available for the test, there’s a reason that the tests generated by Ember CLI don’t use <code>moduleFor()</code> by default. It’s a quiet but clear signal that you should reevaluate whether this <em>is</em> in fact the correct approach.</p>
<p>In general, mixins are best used for self-contained units of functionality. If you <em>need</em> dependency injection for them, it may mean that you should think about structuring things in a different way. Can all the functionality live on the service itself? Can all of it live in the mixin instead of requiring a service? Can the service calls be delegated to whatever type is using the mixin?</p>
<p>But if not, and you <em>do</em> need a mixin which injects a service, now you know how to do it!</p>
<hr />
<p><strong>Side note:</strong> The documentation around testing mixins is relatively weak, and in general the testing docs are the weak bits in the Ember guides right now.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> After a conversation with <a href="https://github.com/rwjblue">@rwjblue</a> on the <a href="https://ember-community-slackin.herokuapp.com">Ember Community Slack</a>, though, I was able to get a handle on the issue, and here we are. Since it stumped me, I’m guessing I’m not the only one.</p>
<p>When this happens, <em>write it up</em>. I’ve been guilty of this too often in the past few months: learning something new that I couldn’t find anywhere online, and then leaving it stored in my own head. It doesn’t take a particularly long time to write a blog post like this, and if you’re stuck, chances are <em>very</em> good someone else is too.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If you’re not familiar with the “container”, this is where all the various dependencies are registered, and where Ember looks them up to inject them when you use methods like <code>Ember.inject.service()</code>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Something I intend to help address in the next week or two via a pull request, so if you’re my Ember.js documentation team friend and you’re reading this… it’s coming. 😉<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (xvi)2016-06-07T23:30:00-04:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-06-07:/2016/rust-and-swift-xvi.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="http://v4.chriskrycho.com/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<p><i class="editorial">Thanks to ubsan, aatch, and niconii on the <a href="https://client00.chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust">#rust-lang IRC</a> for a fascinating discussion of the current status of Rust’s initialization analysis, as well as some very interesting comments on what might be possible to do in the future. Everything actually interesting about Rust in this post comes from the conversation I had with them on the evening of March 13.</i></p>
<hr />
<p>The rules various languages have around construction and destruction of objects are <em>extremely</em> important for programmer safety and ergonomics. I think it’s fair to say that both Swift and Rust are actively trying to avoid some of the mistakes made in e.g. C++ which hurt both its safety and its ease of use for developers, albeit in some superficially different ways. Both languages also support defining how types are destroyed, which we’ll come back to in a future discussion.</p>
<p>The basic aim both Rust and Swift have in this area seems to be the same: avoid <em>partially</em> initialized objects. (You don’t want partially initialized objects. Ask Objective-C developers.)</p>
<p>Swift does this via its rules around <em>initializers</em>. Rust does it by requiring that all the values of a type be initialized at its creation. So, for example, the following <em>looks</em> like it should work, but it doesn’t. You can initialize the variable piecemeal, but you cannot <em>use</em> it:</p>
<pre class="rust"><code>#[derive(Debug)] // to make it printable.
struct Foo {
    pub a: i32,
    pub b: f64,
}

fn main() {
    // This will compile, but `foo` will be useless.
    let mut foo: Foo;
    foo.a = 14;
    foo.b = 42.0;

    // This would actually fail to compile. Surprising? A bit!
    // println!("{:?}", foo);

    // This will work, though, because it fully constructs the type.
    let foo2 = Foo { a: 14, b: 42.0 };
    println!("{:?}", foo2);
}</code></pre>
<p>(The reasons why this is so are fairly complicated. See the addendum at the end for a brief discussion.)</p>
<p>In any case, this means that especially with more complex data types, providing standard constructor-style methods like <code>new</code> or <code>default</code> is conventional and helpful. (If the type has non-public members, it’s also strictly necessary.)</p>
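<p>For instance, a conventional constructor for a hypothetical <code>Point</code> type might look like this. (The type and names here are mine, purely for illustration; <code>new</code> is a convention, not special syntax.)</p>
<pre class="rust"><code>#[derive(Debug)]
pub struct Point {
    x: f64, // non-public fields: callers must go through a constructor
    y: f64,
}

impl Point {
    // Just an ordinary associated function which happens to be named `new`.
    pub fn new(x: f64, y: f64) -> Point {
        Point { x, y }
    }

    // Nothing stops us from supplying other ready-made starting points.
    pub fn origin() -> Point {
        Point::new(0.0, 0.0)
    }
}</code></pre>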
<p>Swift has a number of options for initializers, which in most cases correspond to things you can do in Rust, but in a very different way.</p>
<p>First, Swift allows you to overload the <code>init</code> method on a type, so that you can have different constructors for different starting conditions. (This is, to my recollection, the first time any kind of overloading has come up so far in the Swift book—but that could just be my memory failing me. Certainly I haven’t referenced it in any previous discussion, though.)</p>
<p>The example offered by the Swift book is illuminating for the different approaches the languages take, so we’ll run with it. Here’s a struct defining a Celsius type in Swift:</p>
<pre class="swift"><code>struct Celsius {
    let temp: Double

    init(fromFahrenheit f: Double) {
        temp = (f - 32.0) / 1.8
    }

    init(fromKelvin k: Double) {
        temp = k - 273.15
    }
}

// Create an instance each way. (Note: defining custom initializers
// means Swift no longer synthesizes the memberwise `init(temp:)`.)
let balmy = Celsius(fromFahrenheit: 75.0)
let absoluteZero = Celsius(fromKelvin: 0.0)</code></pre>
<p>Note the internal and external parameter names. This is a common idiom Swift keeps (albeit with some non-trivial modification, and with more to come in Swift 3’s naming changes). More on this below; first, the same basic functionality in Rust:</p>
<pre class="rust"><code>struct Celsius {
    temp: f64,
}

impl Celsius {
    fn from_fahrenheit(f: f64) -> Celsius {
        Celsius { temp: (f - 32.0) / 1.8 }
    }

    fn from_kelvin(k: f64) -> Celsius {
        Celsius { temp: k - 273.15 }
    }
}

// Create an instance each way
let freezing = Celsius { temp: 0.0 };
let balmy = Celsius::from_fahrenheit(75.0);
let absolute_zero = Celsius::from_kelvin(0.0);</code></pre>
<p>(Note that there might be other considerations in implementing such types, like using a <code>Temperature</code> base <code>trait</code> or <code>protocol</code>, or employing type aliases, but those are for later entries!)</p>
<p>You can see a point I made about Swift’s initializer syntax back in <a href="http://v4.chriskrycho.com/2016/rust-and-swift-x.html">part x</a>: the way Rust reuses normal struct methods while Swift has the special initializers. Neither is clearly the “winner” here. Rust gets to use existing language machinery, simplifying our mental model a bit by not adding more syntax. On the other hand, the addition of initializer syntax lets Swift use a fairly familiar type construction syntax even for special initializer cases, and leaves us with a bit less noise in the constructor method. Note, though, that initializers in Swift <em>are</em> special syntax; they’re not just a special kind of method (as the absence of the <code>func</code> keyword emphasizes)—unlike Rust, where initializers really are just normal struct or instance methods.</p>
<p>The Swift book notes this distinction:</p>
<blockquote>
<p>In its simplest form, an initializer is like an instance method with no parameters, written using the <code>init</code> keyword.</p>
</blockquote>
<p>The new keyword is the thing I could do without. Perhaps it’s just years of writing Python, but I really prefer it when constructors for types are just sugar and you can therefore reimplement them yourself, provide custom variations, etc. as it suits you. Introducing syntax instead of just picking a standard function to call at object instantiation means you lose that. At the same time, and in Swift’s defense, I’ve only rarely wanted or needed to use those facilities in work in Python. It’s a pragmatic decision—and it makes sense as such; it’s just not where my preference lies. The cost is a bit higher than I’d prefer relative to the gain in convenience.</p>
<p>Back to the initializers and the issue of overloading: the external parameter name (the <em>first</em> of the two names given for each parameter) is one of the main ways Swift tells the initializers apart. This is necessitated, of course, by the choice of a keyword for the initializer; Rust doesn’t have any <em>need</em> for this, and since Rust doesn’t have overloading, it also <em>can’t</em> do this. In Rust, different constructors/initializers will have different names, because they will simply be different methods.</p>
<p>[<i class='editorial'><strong>Edit:</strong> I’m leaving this here for posterity, but it’s incomplete. See below.</i>] One other important thing falls out of this: the external parameter names are <em>required</em> when initializing a type in Swift. Because those parameter names are used to tell apart the constructor, this is not just necessary for the compiler. It’s also an essential element of making the item readable for humans. Imagine if this were <em>not</em> the case—look again at the <code>Celsius</code> example:</p>
<pre class="swift"><code>struct Celsius {
    let temp: Double

    init(fromFahrenheit f: Double) {
        temp = (f - 32.0) / 1.8
    }

    init(fromKelvin k: Double) {
        temp = k - 273.15
    }
}

// Create an instance each way
let freezing = Celsius(0)
let balmy = Celsius(75.0) // our old fromFahrenheit example
let absoluteZero = Celsius(0.0) // our old fromKelvin example</code></pre>
<p>We as humans would have no idea what the constructors are supposed to do, and really at this point there would <em>necessarily</em> just be one constructor unless the later options took elements of another <em>type</em>. That would be fairly similar to how overloading works in C++, Java, or C<sup>♯</sup>, and while method overloading in those languages is very <em>powerful</em>, it can also make it incredibly difficult to figure out exactly what method is being called. That includes when the constructor is being called. Take a look at the <em>long</em> list of <a href="https://msdn.microsoft.com/en-us/library/system.datetime(v=vs.110)">C<sup>♯</sup> <code>DateTime</code> constructors</a>, for example: you have to either have this memorized, have the documentation open, or be able simply to infer from context what is going on.</p>
<p><em>Given</em> the choice of a keyword to mark initializers, then, Swift’s rule about external parameter name usage wherever there is more than one initializer is quite sensible.</p>
<p>[<i class='editorial'><strong>Edit:</strong> several readers, most notably including <a href="https://twitter.com/jckarter/status/740763363626586112">Joe Groff</a>, who works on Swift for Apple, pointed out that Swift <em>does</em> support overloading, including in <code>init()</code> calls, and uses types to distinguish them. Moreover, you can leave off the label for the parameter. My initial summary was simply incorrect. I think this is a function of my not having finished the chapter yet.</i>]</p>
<p>Second, both languages support supplying default values for a constructed type. Swift does this via default values defined at the site of the property definition itself, or simply set directly from within an initializer:</p>
<pre class="swift"><code>struct Kelvin {
    var temp: Double = 0.0 // zero kinetic energy!!!

    init() {
        temp = 273.15 // Change of plans: maybe just freezing is better
    }
}</code></pre>
<p>In Rust, you cannot supply default values directly on a property, but you can define any number of custom constructors:</p>
<pre class="rust"><code>struct Kelvin {
    temp: f64,
}

impl Kelvin {
    fn abs_zero() -> Kelvin {
        Kelvin { temp: 0.0 }
    }

    fn freezing() -> Kelvin {
        Kelvin { temp: 273.15 }
    }
}</code></pre>
<p>We could of course shorten each of those to one line, so:</p>
<pre class="rust"><code>fn abs_zero() -> Kelvin { Kelvin { temp: 0.0 } }</code></pre>
<p>The Rust is definitely a little noisier, and that is the downside of this tack. The upside is that these are just functions like any other. This is, in short, <em>exactly</em> the usual trade off we see in the languages.</p>
<p>Rust also has the <code>Default</code> trait and the <code>#[derive(Default)]</code> attribute for getting some basic defaults for a given value. You can either define a <code>Default</code> implementation yourself, or let Rust automatically do so if the underlying types have <code>Default</code> implemented:</p>
<pre class="rust"><code>struct Kelvin {
    temp: f64,
}

// Do it ourselves
impl Default for Kelvin {
    fn default() -> Kelvin {
        Kelvin { temp: 273.15 }
    }
}

// Let Rust do it for us: calling `Celsius::default()` will get us a default
// temp of 0.0, since that's what `f64::default()` returns.
#[derive(Default)]
struct Celsius {
    temp: f64,
}</code></pre>
<p>This doesn’t get you quite the same thing as Swift’s initializer values. It requires you to be slightly more explicit, but the tradeoff is that you also get a bit more control and flexibility.</p>
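<p>One place that flexibility pays off is struct update syntax, which lets you combine a derived default with explicit overrides. (The <code>Reading</code> type here is my own illustration, not from either book.)</p>
<pre class="rust"><code>#[derive(Debug, Default)]
struct Reading {
    temp: f64,    // `f64::default()` is 0.0
    station: u32, // `u32::default()` is 0
}

fn main() {
    // Every field from its `Default` impl:
    let blank = Reading::default();
    println!("{:?}", blank);

    // Override only the fields you care about; `..` fills in the rest
    // from the default.
    let partial = Reading { temp: 21.5, ..Reading::default() };
    println!("{:?}", partial);
}</code></pre>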
<p>There’s actually a lot more to say about initializers—there are <em>many</em> more pages in the Swift book about them—but this is already about 1,700 words long, and I’ve been slowly chipping away at it since March (!), so I’m going to split this chapter of the Swift book into multiple posts. More to come shortly!</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xv.html"><strong>Previous:</strong> Inheritance: a Swiftian specialty (for now).</a></li>
<li><a href="/2016/rust-and-swift-xvii.html"><strong>Next:</strong> More on initializers!</a></li>
</ul>
<hr />
<section id="addendum-no-late-initialization-in-rust" class="level2">
<h2>Addendum: No Late Initialization in Rust</h2>
<p>Returning to the first Rust example—</p>
<pre class="rust"><code>#[derive(Debug)] // to make it printable.
struct Foo {
    pub a: i32,
    pub b: f64,
}

fn main() {
    // This will compile, but `foo` will be useless.
    let mut foo: Foo;
    foo.a = 14;
    foo.b = 42.0;

    // This would actually fail to compile. Surprising? A bit!
    // println!("{:?}", foo);
}</code></pre>
<p>You can’t do anything with that data for a few reasons (most of this discussion coming from ubsan, aatch, and niconii on the <a href="https://client00.chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust">#rust-lang IRC</a> back in March):</p>
<ol type="1">
<li>Rust lets you “move” data out of a struct on a per-field basis. (Rust’s concept of “ownership” and “borrowing” is something we haven’t discussed a lot so far in this series; my <a href="http://www.newrustacean.com/show_notes/e002/index.html" title="New Rustacean e002: Something borrowed, something... moved?">podcast episode</a> about it is probably a good starting point.) The main takeaway here is that you could return <code>foo.a</code> distinctly from returning <code>foo</code>, and doing so would hand that data over without running the <code>foo</code> destructor mechanism. Likewise, you could pass <code>foo.b</code> to the function created by the <code>println!</code> macro.</li>
<li>Rust allows you to re-initialize moved variables. I haven’t dug enough to have an idea of what that would look like in practice.</li>
<li>Rust treats uninitialized variables the same as moved-from variables. This seems to be closely related to reason #2. The same “I’m not sure how to elaborate” qualification applies here.</li>
</ol>
<p>I’ll see if I can add some further comments on (2) and (3) as I hit the later points in the Swift initialization chapter.</p>
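<p>In the meantime, here’s a small sketch of (1) and (2). (It’s my own example; I’m using <code>String</code> fields because <code>i32</code> and <code>f64</code> are <code>Copy</code>, so they would be copied rather than moved.)</p>
<pre class="rust"><code>struct Foo {
    a: String,
    b: String,
}

fn main() {
    let mut foo = Foo {
        a: String::from("alpha"),
        b: String::from("beta"),
    };

    // (1) Move a single field out; `foo` is now only partially moved.
    let a = foo.a;
    println!("moved out: {}", a);

    // The untouched field is still usable...
    println!("still here: {}", foo.b);
    // ...but `foo` as a whole is not:
    // println!("{}", foo.a); // error[E0382]: use of moved value

    // (2) Re-initializing the moved-from field makes `foo` whole again.
    foo.a = String::from("alpha, again");
    println!("restored: {} and {}", foo.a, foo.b);
}</code></pre>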
</section>
Rust and C++ function definitions2016-06-03T18:01:00-04:002016-06-07T23:16:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-06-03:/2016/03-1801.html<p>I just put my finger on one of the (many) reasons Rust reads better than C++: the visual consistency of its function definitions. Compare—</p>
<p>Rust has:</p>
<pre class="rust"><code>fn foo() -> i32 { /* implementation */ }
fn bar() -> f32 { /* implementation */ }</code></pre>
<p>C++ has:</p>
<pre class="cpp"><code>int foo() { /* implementation */ }
double bar() { /* implementation */ }</code></pre>
<p>That consistency adds up over many lines of …</p><p>I just put my finger on one of the (many) reasons Rust reads better than C++: the visual consistency of its function definitions. Compare—</p>
<p>Rust has:</p>
<pre class="rust"><code>fn foo() -> i32 { /* implementation */ }
fn bar() -> f32 { /* implementation */ }</code></pre>
<p>C++ has:</p>
<pre class="cpp"><code>int foo() { /* implementation */ }
double bar() { /* implementation */ }</code></pre>
<p>That consistency adds up over many lines of code. There are many other such choices; the net effect is that Rust is <em>much</em> more pleasant to read than C++.</p>
<hr />
<p>Note: I’m aware that C++11 added the <code>auto foo() -> &lt;type&gt;</code> syntax. But this actually <em>worsens</em> the problem. A totally new codebase which uses that form exclusively (which may not always be possible, because the semantics aren’t the same) would have roughly the same visual consistency as Rust <em>in that particular category</em>. (Plenty of others would still be a mess.) But the vast majority of C++ codebases are <em>not</em> totally new. Adding the form means your codebase is more likely to look like this:</p>
<pre class="cpp"><code>int foo() { /* implementation */ }
auto quux() -> uint32_t { /* implementation */ }
double bar() { /* implementation */ }</code></pre>
<p>That is, for the record, <em>more</em> visual inconsistency—not less!</p>
Free Dynamic DNS for Remote Login via SSH2016-05-31T20:10:00-04:002016-05-31T20:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-05-31:/2016/free-dynamic-dns-for-remote-login-via-ssh.html<p>I recently set up a hostname and mapped it to a dynamic IP address for my home machine so that I can log into it via SSH<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> from <em>anywhere</em> without needing to know what the IP address is. This is handy because I need to do just that on …</p><p>I recently set up a hostname and mapped it to a dynamic IP address for my home machine so that I can log into it via SSH<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> from <em>anywhere</em> without needing to know what the IP address is. This is handy because I need to do just that on a semi-regular basis: I’ll be out with my work laptop at a coffee shop, and need something that’s on my personal machine at home, for example.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>A friend <a href="https://twitter.com/toddheitmann/status/728222459413958656">asked</a> me to describe it, so here I am. (Hi, Todd!) This was pretty straightforward for me, and it should be for you, too.</p>
<ol type="1">
<li>Pick one of the <a href="https://duckduckgo.com/?q=free+dynamic+dns+providers&t=osx&ia=web">many</a> free dynamic DNS providers. I picked <a href="http://www.noip.com/free">No-IP</a> after a very short bit of digging. In the future I may switch to a more full-featured solution, not least because I’m planning to separate out my DNS management from my hosting and my domain registrar later this year.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> For now, though, No-IP is good enough.
<ul>
<li>Register.</li>
<li>Pick a domain name.</li>
<li>Add your current IP address. (If you need to find out what it is, you can literally just ask the internet: <a href="http://www.whatsmyip.org">whatsmyip.org</a> will tell you.)</li>
</ul></li>
<li>Set up a local service to talk to the dynamic DNS provider, so that when your external IP address changes (and from time to time it will, if you’re not paying your ISP for a dedicated IP address), the provider gets the new address automatically. You can do this one of two ways:
<ul>
<li><strong>By installing a service on your main machine.</strong> No-IP and other large providers all have downloads where you can just install an app on your machine that goes out and talks to the service and keeps the IP address up to date.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></li>
<li><strong>By configuring your router.</strong> This is the route I took, because the router I have<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> fully supports dynamic DNS services right out of the box.<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a> Look for something like <em>Dynamic DNS</em> and follow the configuration instructions there to get it talking to your dynamic DNS service provider. Mine has a built-in list which included No-IP; I just added my username and password and the domain name I specified back in Step 1, checked an <em>Enable DDNS</em> box, and connected.</li>
</ul></li>
</ol>
<p>That’s it. Even if you’re not a huge networking geek (which, for all my nerdiness, I really am not), you can set it up. From that point forward, if you have <em>other</em> things configured locally on your machine for network access (e.g. enabling SSH by toggling <em>Remote Login</em> to <em>On</em> in the <strong>Sharing</strong> preferences pane on OS X), you can just use the new domain you configured instead of the IP address. If that domain was e.g. <code>chriskrycho.example.com</code>, you could just <code>ssh [email protected]</code> and be off to the races.</p>
<p>Have fun!<a href="#fn7" class="footnote-ref" id="fnref7" role="doc-noteref"><sup>7</sup></a></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>or <a href="https://mosh.mit.edu">mosh</a>, which I’m hoping to check out this week<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Or, when I was traveling and my Windows VM crashed while I was in the airport, and I was able to work from the VM on my home machine instead via SSH magic I’ll cover in a future blog post.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Having each of those in a separate place is nice: it means that if the others change, you only have to deal with <em>that</em> set of concerns. For example, if you move hosting providers, you don’t <em>also</em> have to migrate all your DNS settings—just tweak the couple that are relevant to the move.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p><a href="https://www.noip.com/download">Here’s the download page</a> for No-IP, for example.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16833704177&nm_mc=AFC-C8Junction&cm_mmc=AFC-C8Junction-Skimlinks-_-na-_-na-_-na&cm_sp=&AID=10446076&PID=5431261&SID=skim45704X1167592X2be13284148d669370b61074c119afc2">this one</a>, as <a href="http://thewirecutter.com/reviews/best-wi-fi-router/">recommended</a> by The Wirecutter<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>So will most open-source router firmwares, especially OpenWRT or DD-WRT, if they run on your router. I’ve done that in the past.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn7" role="doc-endnote"><p>This tiny post has a <em>hilarious</em> number of footnotes. I noticed this early on, and instead of reworking it… I just ran with it.<a href="#fnref7" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
SpaceX: "First-stage landing | Onboard camera"2016-05-28T20:58:00-04:002016-05-28T20:58:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-05-28:/2016/spacex-first-stage-landing-onboard-camera.html<p>OH WOW: <a href="http://www.spacex.com">SpaceX</a> first-stage landing footage… <a href="https://youtu.be/4jEz03Z8azc">from the onboard camera</a>. This is blow-your-mind incredible.</p>
<p>OH WOW: <a href="http://www.spacex.com">SpaceX</a> first-stage landing footage… <a href="https://youtu.be/4jEz03Z8azc">from the onboard camera</a>. This is blow-your-mind incredible.</p>
Ben Thompson: "Peter Thiel, Comic Book Hero"2016-05-28T20:50:00-04:002016-05-28T20:50:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-05-28:/2016/ben-thompson-peter-thiel-comic-book-hero.html<p>I always appreciate Ben Thompson’s takes, but <a href="https://stratechery.com/2016/peter-thiel-comic-book-hero/">this</a>—on the Thiel/Gawker imbroglio—is one of his best posts ever.</p>
<p>I always appreciate Ben Thompson’s takes, but <a href="https://stratechery.com/2016/peter-thiel-comic-book-hero/">this</a>—on the Thiel/Gawker imbroglio—is one of his best posts ever.</p>
Ember.js: "Introducing Subteams"2016-05-24T19:10:00-04:002016-05-24T19:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-05-24:/2016/emberjs-introducing-subteams.html<p>In which one tech I really like (<a href="http://emberjs.com">Ember.js</a>) steals a great idea from another tech I really like (<a href="https://www.rust-lang.org">Rust</a>).</p>
<p>In which one tech I really like (<a href="http://emberjs.com">Ember.js</a>) steals a great idea from another tech I really like (<a href="https://www.rust-lang.org">Rust</a>).</p>
2016-05-12 13:012016-05-12T13:01:00-04:002016-05-12T13:01:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-05-12:/2016/2016-05-12-1301.html<p>This bit from the <a href="http://fishshell.com">fish</a> <a href="http://fishshell.com/docs/current/design.html#ortho">design document</a> perfectly captures what git does wrong (emphasis mine):</p>
<blockquote>
<p>When designing a program, one should first think about how to make a intuitive and powerful program. Implementation issues should only be considered once a user interface has been designed.</p>
<p>Rationale:</p>
<p>This design rule is …</p></blockquote><p>This bit from the <a href="http://fishshell.com">fish</a> <a href="http://fishshell.com/docs/current/design.html#ortho">design document</a> perfectly captures what git does wrong (emphasis mine):</p>
<blockquote>
<p>When designing a program, one should first think about how to make a intuitive and powerful program. Implementation issues should only be considered once a user interface has been designed.</p>
<p>Rationale:</p>
<p>This design rule is different than the others, since it describes how one should go about designing new features, not what the features should be. <strong>The problem with focusing on what can be done, and what is easy to do, is that too much of the implementation is exposed. This means that the user must know a great deal about the underlying system to be able to guess how the shell works, it also means that the language will often be rather low-level.</strong></p>
</blockquote>
Ulysses, Byword, and “Just Right”2016-03-26T08:00:00-04:002016-03-26T08:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-03-26:/2016/ulysses-byword-and-just-right.html<p>I’m trying out Ulysses again, as it’s been updated substantially since I last used it. I think the main thing to say about it is that it’s gorgeous and a really great editor, and that there is nonetheless something about it which makes it feel not quite …</p><p>I’m trying out Ulysses again, as it’s been updated substantially since I last used it. I think the main thing to say about it is that it’s gorgeous and a really great editor, and that there is nonetheless something about it which makes it feel not quite as <em>fluid</em> as Byword always has.</p>
<p>Neither of them quite <em>nails</em> it for my purposes, though:</p>
<ul>
<li>Neither is quite there for text that includes a lot of code samples. (Basically: neither supports the GitHub variations on Markdown, which are incredibly important for <a href="http://v4.chriskrycho.com/rust-and-swift.html">a lot of my writing</a>.)</li>
<li>Neither has the ability to do things like autocompletion of citations from something like BibLatex. (No standalone app does, to my knowledge.)</li>
<li>Ulysses’ most powerful features only work in its iCloud bucket. And they’re not standard: rather than embracing <a href="http://criticmarkup.com">CriticMarkup</a> for comments, they have their own. The same is true of e.g. their code blocks.</li>
<li>Ulysses <em>converts</em> any other Markdown documents to its own custom variant when you open them. Had those documents formatted a way you liked (e.g. with specific kinds of link or footnote formatting)? Don’t expect them to still be that way.</li>
<li>Byword really does one thing well: opening and writing single documents. It does this extremely well, but it also has none of the library management that is useful for larger projects.</li>
</ul>
<p>Both of these apps are really wonderful in many ways, and I think it’s fair to say that they’re <em>perfect</em> for many writers. <a href="http://jaimiekrycho.com/">My wife</a>, for example, does nearly all her fiction writing in Ulysses; it works wonderfully for her. But for the kinds of writing I do—usually technical in one way or another—it is limited in its utility. That’s not really a critique of the apps. It’s more the recognition that I have some pretty unusual requirements of my writing apps.</p>
<p>That said, I don’t think I’m the only person out there who has these particular needs. I am, for example, hardly the only person working with citations and academic text, or writing Markdown with lots of code samples in it. And as much as you can bend general-purpose text editors like <a href="https://atom.io">Atom</a> to your will,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> it’s not the same as a dedicated writing app that focuses—in the ways that Ulysses and Byword both do—on just being a great tool for <em>writing</em>. Writing and writing <em>code</em> are not the same, after all. A tool that’s really well-optimized for the latter isn’t necessarily well-optimized for the former.</p>
<p>Keep your ears open. You might just be hearing more about this in the future.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Trust me, I have: I have Zen mode installed, a custom Byword-like theme I use when I just want to write, and even a citation autocompletion package integrated with it. It’s not bad. But I still don’t love it as a first-choice <em>writing</em> tool.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (xv)2016-03-12T14:45:00-05:002016-03-12T14:45:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-03-12:/2016/rust-and-swift-xv.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="http://v4.chriskrycho.com/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>The next chapter in the Swift book focuses on <em>inheritance</em>, a concept which does not yet exist in Rust.</p>
<p>Swift embraces classical inheritance for <code>class</code> data types. As noted <a href="http://v4.chriskrycho.com/2015/rust-and-swift-x.html">previously</a>, Rust’s <code>struct</code> covers much of the ground covered by Swift’s <code>struct</code> and <code>class</code> types together (value and reference types, etc.). However, what Swift’s <code>class</code> types bring to the table is inheritance-based (and not just composition-based) extension of types.</p>
<p>This is a bit of an interesting point: it is an area where, <em>as of today</em>, Swift can do something that is flat impossible in Rust—a rarity.</p>
<p>However, the <em>status quo</em> will be changing sometime in the next year or so, as an accepted <a href="https://github.com/rust-lang/rfcs/pull/1210">Rust RFC</a>, now in the process of being implemented, paves the way for inheritance. (Discussions are <a href="https://aturon.github.io/blog/2015/09/18/reuse/">ongoing</a> as to the best way to implement it for Rust. Classical inheritance with vtables as in Swift is probably <em>not</em> going to be the approach.)</p>
<p>The reason Rust’s core team chose to proceed without inheritance for the 1.0 release of the language last May is simple: at a philosophical level, they prefer (as most developers increasingly acknowledge we all should) composition over inheritance. <em>Prefer</em>, not <em>universally choose</em>, because there are situations in which inheritance is the correct choice. But there is a reason that programming with interfaces rather than via sub-classing is a “best practice” for many scenarios in languages like Java or C#.</p>
<p>Rust’s <code>trait</code> system gives you <em>composition</em> in some remarkably powerful ways, allowing you to do things that in C++, for example, have to be accomplished via a combination of inheritance and overloading. Swift, likewise, supplies a <code>protocol</code> system and allows extensions to define further behavior on top of existing data structures. From what I’ve gathered, those approaches are preferred over inheritance in Swift for the same reason Rust shipped 1.0 without it!</p>
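<p>To make that concrete, here’s a minimal sketch (the types are my own toy examples, not from either language’s documentation) of composition via a Rust trait with a default method, which covers much of the ground inheritance is usually reached for:</p>

```rust
// A trait supplies shared behavior via a default method, so any type
// implementing it gets `describe` "for free" -- composition, not inheritance.
trait Named {
    fn name(&self) -> String;

    // Default implementation; each implementing type may override it.
    fn describe(&self) -> String {
        format!("I am {}", self.name())
    }
}

struct Dog;
struct Robot;

impl Named for Dog {
    fn name(&self) -> String {
        "Rex".to_string()
    }
}

impl Named for Robot {
    fn name(&self) -> String {
        "R2".to_string()
    }

    // Overrides the default, much as a subclass would override a method.
    fn describe(&self) -> String {
        format!("UNIT {}", self.name())
    }
}

fn main() {
    assert_eq!(Dog.describe(), "I am Rex");
    assert_eq!(Robot.describe(), "UNIT R2");
}
```

<p>An overriding implementation simply replaces the default for that type; there is no superclass chain to traverse.</p>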
<p>But Swift does have inheritance, so it’s worth seeing how it works.</p>
<p>First, any <code>class</code> which doesn’t declare a parent from which to inherit is a base class. This is an important difference from, say, Python, where all classes inherit from <code>Object</code> (leaving aside custom metaclasses).</p>
<p>The syntax choices Swift has made around sub-class declarations are sensible and readable: <code>class SubClass: ParentClass</code> is eminently readable and doesn’t have any obvious points of overlap with other elements in the language.</p>
<p>Indeed, <em>many</em> of the choices made around classes are quite sensible. Overrides, for example, are made explicit via the <code>override</code> keyword. While I’ve sometimes poked fun at Swift’s tendency to add keywords everywhere, this seems like a reasonable place to have one, and it’s nice that overrides are explicit rather than implicit. The same is true of the use of <code>super</code> to refer to the superclass. I’m not sure of the implementation details, but <code>super</code> <em>appears</em> to act as just a special/reserved name for an object: all the syntax around it is normal object instance syntax, which is as it should be.</p>
<p>The limitations around overriding properties all make sense. You can override a read- or write-only parent property as both readable and writable, but you can’t override a readable or writable property <em>not</em> to be readable or writable respectively. Presumably this is because the method lookup for properties always checks up the inheritance chain for getters or setters, so if one is present, you can’t just get rid of it. (You could of course override with a no-op function that spews a warning or some such, but that would pretty clearly be an abuse of the parent API. There might be times you would do that with a third-party library parent class, but in your own code it should be avoided: it indicates a problem in your API design that you need to address instead.)</p>
<p>Finally, we have Swift’s <code>final</code> keyword—and yes, pun intended. It marks whatever block-level item it is attached to—whether class, method, or property—as non-overridable. Attempts to override an item marked final are compile-time failures. (The same kind of thing exists in Java and C#.) In and of itself, this isn’t especially interesting. It is interesting to ponder whether you should make classes subclass-able or not in your API design. There has been <a href="http://mjtsai.com/blog/2015/12/21/swift-proposal-for-default-final/">an active debate</a>, in fact, whether classes in Swift should become final <em>by default</em> in Swift 3.0, rather than open by default. The debate centers on the danger of unintended consequences of overriding, which ultimately takes us back around to the preference for composition, of course.</p>
<p>All of this, among other things, raises the very interesting question of what this will look like in Rust when, eventually, we get inheritance there. After all, we know it will be quite different in some ways:</p>
<ul>
<li><p>It presumably won’t involve a distinct data type constructor, <em>a la</em> Swift’s distinction between <code>struct</code> and <code>class</code>: there may be syntactic sugar involved, and there will definitely be new functionality present, but it will certainly be built on the existing language features as well. There’s a good chance it will basically <em>look</em> like just a special case of <code>impl SomeTrait for SomeStruct</code>, which would fit very well with the ways Rust solves so many other problems.</p></li>
<li><p>Rust doesn’t have many of the things which Swift takes care to special-case for overriding with <code>final</code>, but it will need to address that case for inherited methods and data in some way. (The proposal linked above uses a distinction between <code>default</code> and blanket implementations for trait specialization to pull this off; if those words don’t mean anything to you, don’t worry: I’ve read that post and RFC half a dozen times before I got a really solid handle on all the pieces involved.)</p></li>
<li><p>It will be a relative latecomer to the language, rather than baked in from the start, and therefore will likely seem a secondary way of solving problems, especially at first. (This is, I think, both intentional and good.)</p></li>
</ul>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xiv.html"><strong>Previous:</strong> Indexing and subscripts, or: traits vs. keywords again.</a></li>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xvi.html"><strong>Next:</strong> Initialization: another area where Swift has a lot more going on than Rust.</a></li>
</ul>
Rust and Swift (xiv)2016-03-10T21:25:00-05:002016-03-10T21:25:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-03-10:/2016/rust-and-swift-xiv.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>Rust and Swift both support defining subscript access to a given data type, like <code>SomeType[accessedByIndex]</code>. Unsurprisingly, given <a href="/rust-and-swift.html">everything we’ve seen so far</a>, Rust does this with traits, and Swift with a keyword.</p>
<p>In Rust, you can define subscript-style access to a type by implementing the <code>Index</code> and/or <code>IndexMut</code> traits, which allow <em>indexing</em> into a given location in a kind of type. The implementation simply requires one function, which is called when you use the <code>[]</code> operator. That function, <code>index</code> or <code>index_mut</code>, implements how to do the lookup for the specific type. The <code>impl</code> block indicates not only that <code>Index</code> or <code>IndexMut</code> is being implemented, but also the type of the <em>key</em> used: <code>impl Index<Bar> for Foo { ... }</code>, where access would look like <code>a_foo[some_bar]</code>.</p>
<p>The two traits and their corresponding methods define the behavior for immutable and mutable data types respectively, as their names suggest.</p>
<p>Since the trait is defined generically, you can implement whatever kinds of accessors you like on the same underlying data structure, including generic accessors with trait bounds.</p>
<p>It is perhaps telling that in Rust you just find these traits in the general <code>std::ops</code> module, where all the core language operations and associated operators are defined. Rust doesn’t do “operator overloading” so much as it simply provides operators as one more class of trait potentially applicable to your type. (The family resemblance to Haskell’s type classes and similar in other languages is obvious.)</p>
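<p>Here is the <code>Foo</code>/<code>Bar</code> sketch from above filled out (the fields and values are mine, purely for illustration). Note that alongside the <code>index</code> function, the trait also requires an associated <code>Output</code> type naming what the lookup returns:</p>

```rust
use std::ops::Index;

// A concrete version of the `impl Index<Bar> for Foo` sketch.
struct Bar(usize);

struct Foo {
    items: Vec<i32>,
}

impl Index<Bar> for Foo {
    // The type returned by the lookup.
    type Output = i32;

    // Called whenever you write `a_foo[some_bar]`.
    fn index(&self, key: Bar) -> &i32 {
        &self.items[key.0]
    }
}

fn main() {
    let a_foo = Foo { items: vec![10, 20, 30] };
    assert_eq!(a_foo[Bar(1)], 20);
}
```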
<p>In Swift, you define indexing behavior with the <code>subscript</code> keyword. Subscripts act very similarly to Swift’s <a href="http://v4.chriskrycho.com/2016/rust-and-swift-xii.html">computed properties</a>. They can be made read- or write-only by including or excluding <code>get</code> and <code>set</code> function definitions, just like computed properties.</p>
<p>The behavior is in fact so closely aligned with the computed property syntax and behavior that I initially wondered if it wasn’t just a special case. It is not (though I’m sure much of the parsing machinery can be shared). As the designation of <code>subscript</code> as a keyword strongly implies, and unlike in Rust, this is a separate language construct, not building on existing language machinery.</p>
<p>Swift, like Rust, allows you to define arbitrary accessors. However, since the behavior relies on the <code>subscript</code> construct rather than generics and protocols (Swift’s equivalent to Rust’s traits), you define different kinds of accessors via multiple <code>subscript</code> blocks. (Presumably these could take generic arguments, but I haven’t tested that to be sure.)</p>
<p>Both languages proceed to use these as ways of accessing types as makes sense—e.g. for not only arrays or vectors, but also dictionaries in Swift and <code>HashMap</code> types in Rust.</p>
<p>Since you can define the behavior yourself, you can also use complex types as keys. The languages approach this a bit differently, though. In Rust, if you wanted a compound key, you would need to define either a simple container <code>struct</code> or use a tuple as the argument. In Swift, because it uses the same basic syntax as computed properties, you can just define as many method arguments, of whatever type, as you want.</p>
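<p>In Rust, the compound-key version might look like the following sketch, with a hypothetical <code>Grid</code> type indexed by a <code>(row, column)</code> tuple:</p>

```rust
use std::ops::Index;

// A tuple works as a compound key, since `Index` is generic over the key type.
struct Grid {
    width: usize,
    cells: Vec<f64>,
}

impl Index<(usize, usize)> for Grid {
    type Output = f64;

    // Destructure the tuple key directly in the parameter list.
    fn index(&self, (row, col): (usize, usize)) -> &f64 {
        &self.cells[row * self.width + col]
    }
}

fn main() {
    let grid = Grid { width: 2, cells: vec![0.0, 1.0, 2.0, 3.0] };
    // Row 1, column 0 in a 2-wide grid is the third cell.
    assert_eq!(grid[(1, 0)], 2.0);
}
```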
<p>Takeaway: Rust uses traits; Swift uses a keyword. We probably could have guessed that when we started, at this point!</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xiii.html"><strong>Previous:</strong> Methods, instance and otherwise.</a></li>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xv.html"><strong>Next:</strong> Inheritance: a Swiftian specialty (for now).</a></li>
</ul>
The Future of JavaScript2016-03-02T12:30:00-05:002016-03-02T12:30:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-03-02:/2016/the-future-of-javascript.htmlJavaScript (ECMAScript) is in a state of substantial change. And nearly all of those changes make our software development safer and more ergonomic! A short talk covering some of the biggest changes.
<p>I gave a short tech talk at my new employer <a href="http://www.olo.com">Olo</a> today, covering a number of the changes current and forthcoming in ECMAScript 2015 and later. Alas, I ran out of time in preparation and didn’t get to cover everything I wanted—I would have liked very much to cover modules, and to cover fat-arrow-functions in more depth than I did. I’ll look forward to hopefully giving further tech talks at Olo in the future, and perhaps giving this one, expanded and finished out a bit, elsewhere. (If you’d like me to give a talk, including this one, just let me know!) In the meantime, you can take a look at the <a href="//v4.chriskrycho.com/talks/es-future-olo">slides</a>, which I think will be helpful and interesting!</p>
<p>And yes, there <em>were</em> a lot of really delightful <em>Doctor Who</em> references in this talk. Because <em>of course</em> there were!</p>
Static Site Generators and Podcasting2016-02-28T12:50:00-05:002016-02-28T12:50:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-02-28:/2016/static-site-generators-and-podcasting.html<p>Presently, I publish both <a href="http://www.winningslowly.org/">Winning Slowly</a> and <a href="http://www.newrustacean.com/">New Rustacean</a><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> using what is admittedly a bit of a quirky approach. It works well for me, and I think it’s worth documenting for other nerdy types out there, but if you’re just getting going with podcasting and you’re …</p><p>Presently, I publish both <a href="http://www.winningslowly.org/">Winning Slowly</a> and <a href="http://www.newrustacean.com/">New Rustacean</a><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> using what is admittedly a bit of a quirky approach. It works well for me, and I think it’s worth documenting for other nerdy types out there, but if you’re just getting going with podcasting and you’re looking for the easy way to do it, let me warn you: <em>this isn’t it</em>. Something like <a href="https://soundcloud.com/for/podcasting">SoundCloud</a> and a blog for show notes, or <a href="https://wordpress.org">WordPress</a> with <a href="https://wordpress.org/plugins/powerpress/">Blubrry PowerPress</a> is what you want instead. This approach works <em>extremely</em> well for statically-generated sites, however, and I imagine a few people out there might find it useful.</p>
<section id="the-short-version" class="level2">
<h2>The short version</h2>
<ul>
<li>Generate the feeds with <a href="http://reinventedsoftware.com/feeder/">Feeder</a>.</li>
<li>Generate the site statically with something else (and it <em>really</em> doesn’t matter what).</li>
<li>Copy the feed into the generated site.</li>
</ul>
</section>
<section id="the-long-version" class="level2">
<h2>The long version</h2>
<p>I generate the sites themselves with <a href="http://docs.getpelican.com/en/3.6.3/">Pelican</a> and <a href="http://www.newrustacean.com/show_notes/e001/index.html"><code>cargo doc</code></a>, respectively. I was already comfortable with Pelican because it’s what I use to generate <em>this</em> site (with a few <a href="https://github.com/chriskrycho/chriskrycho.com/blob/master/pelicanconf.py">tweaks</a> to the standard configuration, especially using <a href="http://pandoc.org/">Pandoc</a> rather than the Python Markdown implementation), so I ran with it for building the Winning Slowly site, and it has worked quite well for building the site itself. It just gets built locally and deployed via <a href="https://pages.github.com/">GitHub Pages</a>.</p>
<p>However, it does not have built-in support for generating <a href="https://en.wikipedia.org/wiki/RSS_enclosure">podcast feeds</a>, even just the general case with enclosures. <a href="https://itunespartner.apple.com/en/podcasts/overview">iTunes podcast support</a> would have taken a lot of work to add.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> Instead, I chose to build the RSS feed semi-manually. <em>Semi</em>-manually, because doing it totally manually is a recipe for making mistakes. XML is many things, but “easy to write correctly by hand” is not one of them. I use <a href="http://reinventedsoftware.com/feeder/">Feeder</a> to manage the feeds, and <em>it</em> makes sure that the enclosure and iTunes elements are set correctly.</p>
<p>The biggest upside to this is that I can use Pelican without modification to how it generates feeds (apart from optionally turning them off entirely). It just <a href="https://github.com/WinningSlowly/winningslowly.org/blob/master/pelicanconf.py#L99">copies</a> the feed I generate to the output file during its normal build process. As suggested above, I also <em>don’t</em> generate the other feeds which Pelican supports, as we have no need for them; we only care about the podcast feed.</p>
<p>This process works equally well, with very little modification, for New Rustacean. In that case, I’m generating the content by running Rust’s documentation tool, <code>cargo doc</code><a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> to render the “API docs” which serve as show notes. Notice the family resemblance between <a href="http://www.newrustacean.com/show_notes/">my “show notes”</a> and, say, the <a href="http://sgrif.github.io/diesel/diesel/index.html">Diesel docs</a>, which are both generated the same way. This is <em>not</em> a normal way of building a podcast website; you can hear me explain why I did it this way in <a href="http://www.newrustacean.com/show_notes/e001/index.html">New Rustacean e001: Document all the things!</a> In any case, I just take the show note-relevant parts of the documentation and put it in Feeder, generate the feed, and <a href="https://github.com/chriskrycho/newrustacean.com/blob/master/Makefile#L32">copy that as part of the build process</a>.</p>
<p>That’s it!</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>And, incidentally, <a href="http://www.sap-py.com">Sap.py</a> and my <a href="http://v4.chriskrycho.com/sermons.xml">sermons</a> feed.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>If I stick with Pelican long-term, I might look into adding it anyway, but honestly, I don’t love Pelican. The reasons have little to do with Pelican for itself, and a lot more to do with my particular and somewhat peculiar needs. That’s a post for another day. In any case, I’m likelier to use another generator—even one I write myself!—than to do the work to make Pelican do what I want.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Technically, Rust’s documentation tool is <code>rustdoc</code>, which <code>cargo doc</code> wraps around. I never actually use <code>rustdoc</code> directly, though.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (xiii)2016-02-28T11:15:00-05:002016-03-06T13:20:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-02-28:/2016/rust-and-swift-xiii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>Rust and Swift both have methods which are attached to given data types. However, whereas Rust takes its notion of separation of data and functions rather strictly, Swift implements them on the relevant data structures (classes, structs, or enums) directly. In other words, the implementation of a given type’s methods is within the body of the type definition itself in Swift, whereas in Rust it is in an <code>impl</code> block, usually but not always immediately adjacent in the code.</p>
<p>This goes to one of the philosophical differences between the two languages. As we’ve discussed often in the series, Rust reuses a smaller set of concepts—language-level primitives—to build up its functionality. So methods on a type and methods for a trait on a type are basically the same thing in Rust; they’re defined in almost exactly the same way (the latter includes <code>for SomeTrait</code> in the <code>impl</code> expression). In Swift, a method is defined differently from a protocol definition, which we’ll get to in the future. The point is simply this: the two take distinct approaches to the relationship between a given type definition and the implementations of any functions which may be attached to it.</p>
<p>Another important difference: access to other members of a given data type from within a method is <em>explicit</em> in Rust and <em>implicit</em> in Swift. In Rust, the first parameter to an instance method is always <code>self</code> or <code>&self</code> (or a mutable version of either of course), much as in Python. This explicitness distinction is by now exactly what we expect from the two languages.</p>
<p>Both use dot notation, in line with most other languages with a C-like syntax, for method calls: <code>instance.method()</code> looks the same in Swift and in Rust. In Rust, this is just syntactical sugar for <code>T::method(&instance)</code> or <code>T::method(instance)</code>, where <code>T</code> is the type of the instance (depending on whether the item is being borrowed or moved). Given Swift’s implicit knowledge of/access to instance-local data, and the distinctive behavior of Swift methods (see below), I don’t <em>think</em> the same is, or even could be, true of Swift.</p>
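<p>A small illustration of that desugaring in Rust (the <code>Point</code> type is my own example); the two calls below are equivalent:</p>

```rust
struct Point {
    x: f64,
    y: f64,
}

impl Point {
    // `&self` is explicit: the instance is just the first parameter.
    fn magnitude(&self) -> f64 {
        (self.x * self.x + self.y * self.y).sqrt()
    }
}

fn main() {
    let p = Point { x: 3.0, y: 4.0 };
    // Dot notation is sugar for calling through the type...
    assert_eq!(p.magnitude(), 5.0);
    // ...so this fully-qualified form is equivalent.
    assert_eq!(Point::magnitude(&p), 5.0);
}
```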
<p>All of Swift’s <a href="http://v4.chriskrycho.com/2015/rust-and-swift-viii.html">other behaviors around functions</a>—internal and external names, and all the distinctions that go with those—are equally applicable to methods. Similarly, with the sole change that the first parameter is always the instance being acted on, Rust methods follow all the same rules as ordinary Rust functions (which is why you can call the struct or enum method with an instance parameter as in the example above).</p>
<p>Swift does <em>have</em> a <code>self</code>—it is, of course, implicit. It’s useful at times for disambiguation—basically, when a parameter name shadows an instance name. This will look familiar to people coming from Ruby.</p>
<p>The strong distinction Swift makes <a href="http://v4.chriskrycho.com/2015/rust-and-swift-x.html">between reference and value types</a> comes into play on methods, as you might expect, as does its approach to mutability. Methods which change the values in value types (<code>struct</code> or <code>enum</code> instances) have to be declared <code>mutating func</code>. This kind of explicitness is good. As we discussed in <a href="http://v4.chriskrycho.com/2015/rust-and-swift-x.html">Part 10</a>, Rust approaches this entire problem differently: types are not value or reference types; they are either mutable and passed mutably (including as <code>mut self</code> or <code>&mut self</code>), or they are not. If an instance is mutable and passed mutably, a method is free to act on instance data. Both languages require that the instance in question not be immutable. Indeed, everything we said in Part 10 about both languages applies here, with the addendum that private properties are available to methods.</p>
<p>The distinction, you’ll note, is in where the indication that there’s a mutation happens. Swift has a special keyword combination (<code>mutating func</code>) for this. With Rust, it’s the same as every other function which mutates an argument. This makes Rust slightly more verbose, but it also means that in cases like this, the existing language tooling is perfectly capable of handling what has to be a special syntactical case in Swift.</p>
<p>Both Swift and Rust let you out-and-out change the instance by assigning to <code>self</code>, albeit in fairly different ways. In Swift, you’d write a mutating method which updates the instance proper like this:</p>
<pre class="swift"><code>struct Point {
    var x = 0.0, y = 0.0

    mutating func changeSelf(x: Double, y: Double) {
        self = Point(x: x, y: y)
    }
}</code></pre>
<p>In Rust, you’d need to explicitly pass a mutable reference and dereference it. (If you tried to pass <code>mut self</code> instead of <code>&mut self</code>, it would fail unless you returned the newly created object and assigned it outside.) Note that while the full implementation here is a couple lines longer, because of the data-vs.-method separation discussed earlier, the implementation of the method itself is roughly the same length.</p>
<pre class="rust"><code>pub struct Point {
    pub x: f64,
    pub y: f64,
}

impl Point {
    pub fn change_self(&mut self, x: f64, y: f64) {
        *self = Point { x: x, y: y };
    }
}</code></pre>
<p>Note that though you <em>can</em> do this, I’m not sure it’s particularly Rustic. My own instinct would be to get a <em>new</em> <code>Point</code> rather than mutate an existing one, in either language, and let the other be cleaned up “behind the scenes” as it were (with automatic memory management in Swift or the compiler’s automatic destruction of the type in Rust)—purer functions being my preference these days.</p>
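<p>A sketch of that more functional alternative in Rust (the method name is hypothetical): borrow the instance immutably and hand back a fresh value, letting the old one be dropped.</p>

```rust
#[derive(Debug, PartialEq)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

impl Point {
    // Borrow immutably and return a fresh value instead of mutating
    // through `&mut self`; the old instance is simply dropped when it
    // goes out of scope.
    pub fn moved_to(&self, x: f64, y: f64) -> Point {
        Point { x: x, y: y }
    }
}

fn main() {
    let origin = Point { x: 0.0, y: 0.0 };
    let elsewhere = origin.moved_to(3.0, 4.0);
    assert_eq!(elsewhere, Point { x: 3.0, y: 4.0 });
}
```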
<p>You can do this with <code>enum</code> types as well, which the Swift book illustrates with a three-state switch which updates itself to the next state each time its <code>next()</code> method is called. You can do the same in Rust, with the same reference/dereference approach as above.</p>
<p>Here’s a three-state switch in Swift:</p>
<pre class="swift"><code>enum ThreeState {
    case First, Second, Third

    mutating func next() {
        switch self {
        case First:
            self = Second
        case Second:
            self = Third
        case Third:
            self = First
        }
    }
}</code></pre>
<p>And the same in Rust:</p>
<pre class="rust"><code>enum ThreeState { First, Second, Third }

impl ThreeState {
    pub fn next(&mut self) {
        match *self {
            ThreeState::First => *self = ThreeState::Second,
            ThreeState::Second => *self = ThreeState::Third,
            ThreeState::Third => *self = ThreeState::First,
        }
    }
}</code></pre>
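<p>A usage sketch of the Rust version, showing the mutability requirement discussed above in action (derives are added here only so states can be compared; the binding itself must be declared <code>mut</code>):</p>

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum ThreeState { First, Second, Third }

impl ThreeState {
    pub fn next(&mut self) {
        match *self {
            ThreeState::First => *self = ThreeState::Second,
            ThreeState::Second => *self = ThreeState::Third,
            ThreeState::Third => *self = ThreeState::First,
        }
    }
}

fn main() {
    // The binding itself must be `mut`, or the `&mut self` call won't compile.
    let mut state = ThreeState::First;
    state.next();
    assert_eq!(state, ThreeState::Second);
    state.next();
    state.next();
    assert_eq!(state, ThreeState::First); // wrapped back around
}
```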
<p>Both languages also have what Swift calls “type methods”, and which you might think of as “static class methods” coming from a language like Java or C♯. In Swift, you define them by adding the <code>static</code> or <code>class</code> keywords to the <code>func</code> definition. The <code>class func</code> keyword combo is only applicable in <code>class</code> bodies, and indicates that sub-classes may override the method definition.</p>
<pre class="swift"><code>struct Bar {
    static func quux() { print("Seriously, what's a `quux`?") }
}

func main() {
    Bar.quux()
}</code></pre>
<p>In Rust, you simply drop <code>self</code> as a first parameter and call it with <code>::</code> syntax instead of <code>.</code> syntax:</p>
<pre class="rust"><code>struct Bar;

impl Bar {
    pub fn quux() { println!("Seriously, what's a `quux`?"); }
}

fn main() {
    Bar::quux();
}</code></pre>
<p>As usual, Rust chooses to use existing language machinery; Swift uses new (combinations of) keywords.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xii.html"><strong>Previous:</strong> Properties: type and instance, stored and computed.</a></li>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xiv.html"><strong>Next:</strong> Indexing and subscripts, or: traits vs. keywords again.</a></li>
</ul>
Rust and Swift (xii)2016-02-27T22:30:00-05:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2016-02-27:/2016/rust-and-swift-xii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<p><i class="editorial">A note on publication: I had this drafted in early January and simply forgot to publish it. Whoops!</i></p>
<hr />
<ul>
<li><p>As noted in <a href="http://v4.chriskrycho.com/2015/rust-and-swift-x.html">my discussion of the product types in Rust and Swift</a>, Swift distinguishes between classes and structs, with the former being reference types and the latter being value types. All structs are value types in Rust. (That you can wrap them in a pointer for heap-allocation with one of the smart pointer types, e.g. <code>Box</code> or <code>Rc</code> or <code>Arc</code>, doesn’t change this fundamental reality.) This underlying difference gives rise to one of the big differences between Swift classes and Rust structs: a constant <code>class</code> instance in Swift can still have its fields mutated; not so with a Rust <code>struct</code> instance. But also not so with a <em>Swift</em> <code>struct</code> instance, as it turns out! There isn’t a straightforward way to get that behavior with <code>Box&lt;T&gt;</code> in Rust; you <em>could</em> approximate it with the interior-mutability types, e.g. <code>Rc&lt;RefCell&lt;T&gt;&gt;</code> or <code>Arc&lt;Mutex&lt;T&gt;&gt;</code>.</p></li>
<li><p>Swift’s <code>lazy</code> keyword and its associated delayed initialization of properties have, as far as I know, no equivalent whatsoever in Rust. And while I can see the utility in principle, I’m hard-pressed to think of any time in my working experience where the behavior would actually be useful. Rather than having <code>lazy</code> properties, I would be far more inclined to separate the behavior which should be initialized at a later time into its own data structure, supplying it via <em>inversion of control</em> if it is necessary for actions taken by other data structures. (This seems—at first blush at least—to be a way of supporting the un- or partially-initialized data types possible in Objective-C?)</p></li>
<li><p>Swift has computed properties, a concept familiar to Python developers (and relatively recently introduced in JavaScript). These can be quite handy, as they let you define a property to be accessed like any other (<code>someInstance.theProperty</code>) while being defined with functions which compute the value dynamically. A common, trivial example: if you defined a <code>Person</code> with <code>firstName</code> and <code>lastName</code> members, you could define a computed property, <code>fullName</code>, which was built using the existing values.</p></li>
<li><p>Rust doesn’t have computed properties at all. This is because of its design decision to deeply separate <em>data</em> from <em>behavior</em>, essentially stealing a page from more pure-functional languages (Haskell etc.). This is (one reason) why you don’t define the implementation of a <code>struct</code> method in the same block as the members of the struct. See an excellent explanation <a href="https://www.reddit.com/r/rust/comments/2uvfic/why_doesnt_rust_have_properties/cocmunq">here</a>.</p></li>
<li><p>It’s also closely related to the way Rust favors composition over inheritance (by making the latter impossible, at least for now!). By separating <code>impl</code> from <code>struct</code> and <code>enum</code>, Rust makes it not only straightforward but <em>normal</em> to define new behavior for a given item separately from the data description. This, combined with the use of traits (like Swift’s protocols) as the primary way of sharing behavior between objects, means that you don’t have to worry about conforming to some interface when you define a given type; it can always<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> be defined later, even by entirely other modules or even other crates (packages).</p></li>
<li><p>In any case, the result is that it’s not at all Rustic<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> to have something like getters or setters or computed properties. It makes sense to have them in Swift, though, which has a more traditionally object-oriented type system (though with some neat additions in the form of its <code>protocol</code> type classes, which are analogous to Rust’s <code>trait</code>s—but we’ll come to those in a future post). This is a wash: it’s just a function of the slightly different approaches taken in object design in the two systems. If you have a Swift-style type system, you should have computed properties. If you have a Rust-like type system, you shouldn’t.</p></li>
<li><p>I’m shocked—utterly shocked!—to find that Swift provides a default <code>newValue</code> argument for setters for computed properties, and shorthand for defining read-only properties. By which I mean: I find this kind of thing entirely unsurprising at this point in Swift, but I don’t like it any better. Making so much implicit just rubs me the wrong way. Once you know the language, it’s fine of course: you’ll recognize all the patterns. It just seems, in an interesting way, to add cognitive load rather than reducing it. That may just be me, though!</p></li>
<li><p>Interestingly, Swift also allows you to set observers on given properties—functions called with the new or the old value whenever a stored property’s value changes. It has two of these built in: <code>willSet</code> and <code>didSet</code>. You can define these to get custom behavior when a normal property is about to change, or just after it has. (You can of course just implement the desired behavior yourself in the <code>set</code> method for a computed property.)</p></li>
<li><p>Since Rust doesn’t have properties, it doesn’t have anything analogous. I can’t think of a particularly straightforward way to implement it, either, though you might be able to do some chicanery with a trait. Of course you can always define a setter method which takes a value and optional callbacks for actions to take before and after setting the value; the thing that’s nice in Swift is that it gives you these as built-in capabilities within the language itself. (Now I’m wondering if or how you could implement an <code>Observable</code> trait, though! Might have to play with that idea more later.) It’s worth remembering, in any case, that Rust doesn’t have these <em>because it doesn’t have properties</em>.</p></li>
<li><p>Curiously, Swift provides the same functionality for “global” and “local” variables in a given context. In both cases, this is suggestive of the underlying object model for both modules and functions in Swift.</p></li>
<li><p>Now I’m curious what the representation of a module is in Swift; is it part of the general object system in some way?</p></li>
<li><p>This likewise gets me asking: what <em>is</em> a module in Rust? It’s a block item, clearly, and accordingly defines a scope (as do functions, if and match expressions, and so on). It’s <em>not</em> a compilation unit (as it is in C or C++). What other machinery is attached to it?</p></li>
<li><p>Both of these questions can be answered by reading the source code for the languages (<a href="https://github.com/rust-lang/rust">Rust</a>, <a href="https://github.com/apple/swift">Swift</a>), of course. Putting that on my to-do list.</p></li>
<li><p>Swift also has <em>type properties</em>: values common to all instances of a given type. These are directly analogous to <em>class properties</em> (or <em>class attributes</em>) in Python or prototype properties in JavaScript.</p></li>
<li><p>Rust doesn’t have anything like this to my knowledge. You could accomplish something similar using a module-level variable with a <code>'static</code> lifetime,<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> much as you could in C—but that wouldn’t be an item on the type itself, of course.</p></li>
<li><p>The <code>static</code> declaration of items in Swift suggests what a possible implementation might look like in Rust: defining a member like <code>a_static_long: 'static i64</code>. There might be some interesting challenges around that, though; I don’t know enough to comment meaningfully. At the least, it seems like it would be an odd fit with the rest of the memory management approach Rust takes, and it would make it a bit harder to reason correctly about the behavior of data in a given type. (There are certainly issues there around mutability guarantees and lifetime checking!)</p></li>
<li><p>Because of the differences in underlying approach to data types and implementation, this is one of the areas where the superficially (and sometimes actually) similar languages diverge <em>a lot</em>.</p></li>
</ul>
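<p>Tying a couple of the bullets above together, here is a hedged Rust sketch (all names are hypothetical) of the conventional stand-ins: a computed property becomes an ordinary method, and a <code>didSet</code>-style observer becomes a setter which takes a callback.</p>

```rust
// All names here are hypothetical, for illustration only.
struct Person {
    first_name: String,
    last_name: String,
}

impl Person {
    // The conventional Rust stand-in for a computed property:
    // an ordinary method which derives the value on demand.
    fn full_name(&self) -> String {
        format!("{} {}", self.first_name, self.last_name)
    }

    // A hand-rolled analogue of Swift's `didSet`: a setter which
    // invokes a callback with the old and new values.
    fn set_last_name<F: Fn(&str, &str)>(&mut self, new: String, did_set: F) {
        let old = std::mem::replace(&mut self.last_name, new);
        did_set(&old, &self.last_name);
    }
}

fn main() {
    let mut person = Person {
        first_name: "Grace".to_string(),
        last_name: "Hopper".to_string(),
    };
    assert_eq!(person.full_name(), "Grace Hopper");
    person.set_last_name("Murray".to_string(), |old, new| {
        println!("last_name changed: {} -> {}", old, new);
    });
    assert_eq!(person.full_name(), "Grace Murray");
}
```

<p>The point isn’t that this is as ergonomic as Swift’s built-in syntax—it isn’t—but that nothing beyond ordinary functions and closures is required.</p>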
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xi.html"><strong>Previous:</strong> Hopes for the next generation of systems programming.</a></li>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xiii.html"><strong>Next:</strong> Methods, instance and otherwise.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>leaving aside details about <code>trait</code> specialization <a href="https://github.com/aturon/rfcs/blob/impl-specialization/text/0000-impl-specialization.md">still being hashed out</a> in Rust<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>This is now my preferred term for “idiomatic Rust”—directly analogous to “Pythonic,” but with the upside of being an actual word, and one with pleasantly evocative connotations to boot.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>There’s nothing analogous to Rust’s concept of explicit lifetimes in Swift, as far as I can tell. The <code>static</code> keyword in Swift, like that in C, Objective-C, and C++, is <em>sort of</em> like Rust’s <code>'static</code> lifetime specifically, for variables at least—but Rust’s lifetime is substantially more sophisticated and complex than that analogy might suggest.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
“I Don't Know When I'd Use That”2016-01-17T10:00:00-05:002016-01-17T10:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-01-17:/2016/i-dont-know-when-id-use-that.html<p>I was reading an interesting Stack Overflow <a href="http://stackoverflow.com/questions/21170493/when-are-higher-kinded-types-useful">discussion</a> of the value of <a href="http://stackoverflow.com/questions/6246719/what-is-a-higher-kinded-type-in-scala">higher-kinded types</a> (hereafter <abbr>HKTs</abbr>), and noted someone repeatedly commenting, “But when would you use this in a <em>real app</em>?” To put it the way another <a href="https://m4rw3r.github.io/rust-and-monad-trait/">blog post</a> about <abbr>HKTs</abbr> (in Rust) does, they are “a feature people do not …</p><p>I was reading an interesting Stack Overflow <a href="http://stackoverflow.com/questions/21170493/when-are-higher-kinded-types-useful">discussion</a> of the value of <a href="http://stackoverflow.com/questions/6246719/what-is-a-higher-kinded-type-in-scala">higher-kinded types</a> (hereafter <abbr>HKTs</abbr>), and noted someone repeatedly commenting, “But when would you use this in a <em>real app</em>?” To put it the way another <a href="https://m4rw3r.github.io/rust-and-monad-trait/">blog post</a> about <abbr>HKTs</abbr> (in Rust) does, they are “a feature people do not really know what to do with.”</p>
<p>Don’t get me wrong: I’m sympathetic to that desire for concrete examples. I’m interested in these kinds of things not primarily for their intellectual value but for their pragmatic value (though I don’t think those two are as distinct as many people do). I’d <em>also</em> love to see some more real-world examples in those discussions. All too often, the discussions of types in Haskell end up being quite abstract and academic—no surprise, given the language’s origin. But I’m also aware that quite often it’s difficult to see how a given kind of abstraction is useful without jumping into a language which has that abstraction available and <em>using</em> it.</p>
<p>People often get turned off by Haskell (and other similarly high-abstraction languages like Scala) because of challenging terms like <em>monad</em>, <em>applicative</em>, <em>functor</em>, and so on. And again: I get that. To grok Haskell, you need to wrap your head around a lot of <em>math</em> ideas—mainly various properties of <em>sets</em>.</p>
<p>But I remember feeling the same way six years ago when I started playing with JavaScript and jQuery and every tutorial out there simply assumed existing familiarity and comfort with functions as arguments or return values. Coming from the world of Fortran and C, my head ached for weeks as I tried to make sense of what I was seeing. Even when I finally got it, <em>I didn’t like it</em>. Over the last several years, though, I’ve become increasingly comfortable and even reliant on closures, composition of functions to transform data, and so on as I worked regularly in Python and JavaScript.</p>
<p>That experience has taught me that my current inability to see the utility of a given abstraction means little about the abstraction. It’s primarily an indicator of my own inexperience.</p>
<p>To the question of the utility <abbr>HKTs</abbr> in general—in Haskell, Rust, or somewhere else—I don’t have the knowledge myself (yet) to supply a good answer. Heck, I can’t even <em>explain</em> them very well. (<a href="http://adriaanm.github.io/research/2010/10/06/new-in-scala-2.8-type-constructor-inference/">Other people can, though!</a>) But I can say that reading <a href="https://gumroad.com/l/maybe-haskell"><em>Maybe Haskell</em></a> showed me clearly that such things can be very useful. Even if I am not yet comfortable using that tool, I see how learning to use it would be profitable in the long-term. And like any good tool, even if you don’t need it every day… when you want it, you <em>really</em> want it.</p>
Women in Rust2016-01-10T15:25:00-05:002016-01-10T15:25:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-01-10:/2016/women-in-rust.html<p><i class=editorial>I posted these bullet points last night as a series of tweets on my <a href="https://www.twitter.com/chriskrycho">main account</a>.</i></p>
<ul>
<li><p><a href="https://twitter.com/chriskrycho/status/686007510147309568">∞ January 9, 2016 21:11</a></p>
<blockquote>
<p>A thing I’d really, really like to see change—this is from the <a href="http://www.newrustacean.com/">New Rustacean</a> Twitter data. Unsurprising, but awful:</p>
<figure>
<img src="//cdn.chriskrycho.com/images/new-rustacean-followers.png" alt="@newrustacean Twitter follower gender data" /><figcaption><a href="https://www.twitter.com/newrustacean">@newrustacean</a> Twitter follower gender data</figcaption>
</figure>
</blockquote></li>
<li><p><a href="https://twitter.com/chriskrycho/status/686007729371148289">∞ January 9, 2016 …</a></p></li></ul><p><i class=editorial>I posted these bullet points last night as a series of tweets on my <a href="https://www.twitter.com/chriskrycho">main account</a>.</i></p>
<ul>
<li><p><a href="https://twitter.com/chriskrycho/status/686007510147309568">∞ January 9, 2016 21:11</a></p>
<blockquote>
<p>A thing I’d really, really like to see change—this is from the <a href="http://www.newrustacean.com/">New Rustacean</a> Twitter data. Unsurprising, but awful:</p>
<figure>
<img src="//cdn.chriskrycho.com/images/new-rustacean-followers.png" alt="@newrustacean Twitter follower gender data" /><figcaption><a href="https://www.twitter.com/newrustacean">@newrustacean</a> Twitter follower gender data</figcaption>
</figure>
</blockquote></li>
<li><p><a href="https://twitter.com/chriskrycho/status/686007729371148289">∞ January 9, 2016 21:12</a></p>
<blockquote>
<p>Takeaway: the <a href="https://www.twitter.com/rustlang">@rustlang</a> community has many strengths, but like every tech community, we need to improve here—a lot.</p>
</blockquote></li>
<li><p><a href="https://twitter.com/chriskrycho/status/686008145752272896">∞ January 9, 2016 21:14</a></p>
<blockquote>
<p>Standing offer: if you’re a female <a href="https://www.twitter.com/rustlang">@rustlang</a> dev, I’d <em>love</em> to feature your experience learning Rust on the show.</p>
</blockquote></li>
<li><p><a href="https://twitter.com/chriskrycho/status/686008527937245185">∞ January 9, 2016, 21:15</a></p>
<blockquote>
<p>I’ll be doing some interview <a href="https://www.twitter.com/newrustacean">@newrustacean</a> episodes soon-ish—I want as many female voices in the mix as possible.</p>
</blockquote></li>
</ul>
Rust and Swift (xi)2016-01-10T10:00:00-05:002016-01-10T10:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2016-01-10:/2016/rust-and-swift-xi.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>I’ve still been (slowly) working through the Swift book and comparing Swift and Rust; I have another draft started which I’ll hopefully finish this week. And I still find the comparison deeply profitable. The two languages continue to evolve in interesting ways, and the comparison is all the more interesting <a href="https://github.com/apple/swift">now that Swift is open-source</a> and its future <a href="https://github.com/apple/swift-evolution">open for community input</a> (just as <a href="https://github.com/rust-lang/rfcs">Rust is</a>).</p>
<p>Something I’ve been thinking about for several months, and which the <a href="https://overcast.fm/+CdSzsTIY/1:16:42">brief discussion of Swift, Go, and Rust</a> at the end of the latest <a href="http://atp.fm/episodes/151">Accidental Tech Podcast</a> brought back to my mind, is the question of what the next generation of systems-level programming language should be. And my answer is: there shouldn’t be <em>just one</em>. The best possible thing for the space, in many ways, is for there to be a healthy diversity of options and lots of competition in the space. We don’t want to have <em>ten</em> different systems programming languages to deal with, I think—but three or four or five would be <em>much</em> preferable to having one or two (closely related) as we have in the decades of C and C++ dominance.</p>
<p>Don’t get me wrong: both languages (and perhaps especially C) do many things exceptionally well. For all that they are (justly) maligned for some of their problems, the longevity of both C and C++ has a great deal to do with how well they fit the problem domain, and how much they’ve empowered developers to accomplish within that space (which is very, <em>very</em> large).</p>
<p>The problem, though, at least as I see it, is that the existence of only two really serious systems programming languages for the last several decades has led a lot of developers to think that C and C++‘s ways of solving problems are the <em>only</em> way to solve problems. The languages we use shape the way we think about possible solutions, and when a given language doesn’t recognize entire classes of different approaches, that deeply limits developers’ ability to tackle certain issues. (See also the interesting CppCast <a href="http://cppcast.com/2015/10/andrei-alexandrescu/">interview with D’s Andrei Alexandrescu</a> in which he makes similar points.)</p>
<p>The most obvious thing missing from both is the ability to do truly functional-style programming. C of course is also lacking classes and thus is much more difficult to use for any sort of object-oriented programming.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> Neither has anything remotely like Rust’s traits or Swift’s extensions; C++ has only gotten lambdas recently.</p>
<p>All of this comes out to mean that the set of <em>tools</em> available to systems programmers has necessarily been missing any number of things available in languages outside that context. In some cases, this may be a necessary consequence of the kinds of programming being done: when you need totally deterministic memory and compiler behavior, dynamic typing and a non-trivial runtime are simply not options. But in many cases, they are simply a function of the languages’ history and development. Being an ALGOL descendant, and especially a C descendant, means there are some fundamental choices about the language which will differ from those made in a language descended from ML.</p>
<p>All of which is to say: C and C++ have been really useful tools in many ways, but having <em>only</em> C and C++ available for serious systems programming work over the last decades has left many developers blind to or simply unaware of the real advantages other paradigms might offer them.</p>
<p>So going forward, I don’t want there to be <em>a winner</em> in the systems programming space. I’d rather see D, Rust, Swift, Go, and maybe even a few other contenders all stay strong—finding their own niches and continually pushing each other and learning from each other. That will give us a space in which different languages are free to try out different approaches to the same problems, without being tied to the specific constraints faced by other languages. Built-in greenthreading? Go! Hindley-Milner types, memory safety, and zero runtime? Rust! Something in between, highly expressive and with different type systems and tradeoffs around memory management, etc.? Swift, or D!</p>
<p>Having a robust, thriving set of competitors in the market will be good for the languages themselves. But it will also be good for developers. It will take off some of the blinders that come from a single language (or a pair of very closely related languages) dominating the ecosystem. It will make it likelier that people will be more familiar with different programming paradigms. And that can only be a good thing, as far as I’m concerned.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-x.html"><strong>Previous:</strong> Classes and structs (product types), and reference and value types.</a></li>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xii.html"><strong>Next:</strong> Properties: type and instance, stored and computed.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>It is of course entirely possible to do non-classical OOP; the point is that C entirely lacks <em>language-level</em> facilities for OOP, inheritance, etc.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (x)2015-12-06T11:25:00-05:002015-12-22T13:30:00-05:00Chris Krychotag:v4.chriskrycho.com,2015-12-06:/2015/rust-and-swift-x.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<ul>
<li><p>Swift and Rust both have “product types” as well as the <code>enum</code> “sum types.” In Rust, these are <code>struct</code> types; Swift splits them into <code>class</code>es and <code>struct</code>s.</p></li>
<li><p>“Product types” will be much more familiar to programmers coming from a C-like background, or indeed most object-oriented programming languages: these are the same basic kind of thing as classes, structs, and objects in other languages. These include <em>all</em> the value types which compose them, unlike sum types—<code>enum</code>—which have <em>only one</em> of the value types which compose them.</p></li>
<li><p>Right off the bat, I note the Swift book’s somewhat amusing reticence to call out C and C-descended languages:</p>
<blockquote>
<p>Unlike other programming languages, Swift does not require you to create separate interface and implementation files for custom classes and structures.</p>
</blockquote>
<p>Because there’s such a long list of languages not directly descended from C which do that, right? 😉</p></li>
<li><p>Rust differs not only from Swift but from every other modern language I have used in not having a constructor <em>syntax</em> for its instantiations. Whereas C++ has <code>new NameOfType()</code> and Python and Swift both have <code>NameOfType()</code>, “constructors” for Rust <code>struct</code>s are just functions which return an instance constructed using literal syntax, by convention <code>NameOfType::new()</code>.</p></li>
<li><p>To make a <code>struct</code> defining a location in a plane, you might do this in Swift (leaving aside initializer values; I’ll come back to those later). The definitions look <em>very</em> similar in the two languages. Swift:</p>
<pre class="swift"><code>struct Point {
var x: Double var y: Double
}</code></pre>
<p>Rust:</p>
<pre class="rust"><code>struct Point {
    x: f64,
    y: f64,
}</code></pre></li>
<li><p>Creating the types looks a little different, though. Here’s a constructor in Swift:</p>
<pre class="swift"><code>let point = Point(x: 0, y: 0)</code></pre>
<p>And the two ways we could construct the type in Rust, a literal constructor (fairly similar to constructing <code>dict</code> literals in Python or object literals in JavaScript):</p>
<pre class="rust"><code>let point = Point { x: 0.0, y: 0.0 };</code></pre>
<p>Or a constructor method, <code>new</code>:</p>
<pre class="rust"><code>// "Constructor"
impl Point {
    fn new(x: f64, y: f64) -> Point {
        Point { x: x, y: y }
    }
}

let another_point = Point::new(0.0, 0.0);</code></pre>
<p>Observe: these two things in Rust are the same under the covers (though if <code>Point</code>s had non-public internals, they would be non-trivially different: you couldn’t construct it with its private members externally). As usual, Rust opts to keep the language relatively small in these core areas. Given the plethora of ways you can construct something in e.g. C++, I count that a big win.</p></li>
<li><p>Another difference: Swift has <em>syntax</em> for default values; Rust uses a <code>trait</code> instead. In Swift, you simply supply the default value in the definition of the <code>struct</code> or <code>class</code>:</p>
<pre class="swift"><code>struct Point {
    var x = 0.0
    var y = 0.0
}

let point = Point()</code></pre>
<p>In Rust, you use <code>std::default::Default</code>, which provides a standard value for a given type, and for simple types can be supplied by the compiler even for custom types. Here is the equivalent Rust code:</p>
<pre class="rust"><code>use std::default::Default;

#[derive(Default)]
struct Point {
    x: f64,
    y: f64,
}

let point = Point::default();</code></pre>
<p>This is reasonable enough, but we can also supply our own custom implementation if we so desire:</p>
<pre class="rust"><code>use std::default::Default;

struct Point {
    x: f64,
    y: f64,
}

impl Default for Point {
    fn default() -> Point {
        Point { x: 0.0, y: 0.0 }
    }
}

let point = Point::default();</code></pre>
<p>Of course, this is trivial for this type, but you can see how it could be useful for more complex types.</p></li>
<li><p>The tradeoffs here are our usual suspects: Rust’s re-use of an existing concept/tool within the language (<code>trait</code>) vs. Swift’s use of syntax. Rust is slightly more explicit, making it obvious that a default value is being created—but Swift is perfectly readable and the syntax is consistent with many other languages, and it <em>is</em> shorter.</p></li>
<li><p>Both languages use <code>.</code> syntax for member access. Swift:</p>
<pre class="swift"><code>print("The point is: \(point.x), \(point.y)")</code></pre>
<p>Rust:</p>
<pre class="rust"><code>println!("The point is {}, {}", point.x, point.y);</code></pre></li>
<li><p>Swift lets you define items <em>within</em> a struct as mutable or constant. So you can create a variable struct instance, with some of its items immutable:</p>
<pre class="swift"><code>struct PointOnZAxis {
    var x: Double
    var y: Double
    let z = 0.0
}

var point = PointOnZAxis(x: 4.0, y: 5.0)
point.x = 5.0
point.y = 6.0
// This wouldn't compile, though:
// point.z = 1.0</code></pre>
<p>This is pretty handy for a lot of object-oriented programming approaches.</p></li>
<li><p>And Rust doesn’t have it. There are ways to accomplish the same thing; this isn’t the end of the world. Still, it’s an interesting omission, and it’s very much by design. Rust <em>used</em> to have this feature, and dropped it—and for good reason. Say you had a mutable field in a mutable struct, and then an immutable reference to it; should the mutable field be mutable, or immutable, with that reference?</p></li>
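<li><p>To see how the current rules shake out (a small sketch of my own, in modern Rust): mutability belongs to the <em>binding</em> or <em>reference</em>, not to individual fields, so the question above can’t even arise.</p>
<pre class="rust"><code>struct Point {
    x: f64,
    y: f64,
}

fn main() {
    let fixed = Point { x: 0.0, y: 0.0 };
    // fixed.x = 1.0;            // error: `fixed` is not declared mutable

    let mut movable = Point { x: 0.0, y: 0.0 };
    movable.x = 1.0;             // fine: the *binding* is mutable, so every field is

    let r = &movable;            // immutable borrow
    // r.x = 2.0;                // error: cannot assign through a `&` reference
    println!("{}", r.x);
}</code></pre></li>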
<li><p>The Rusty way to do this is to differentiate between public and private data. The above examples don’t make the public/private distinction particularly clear, because they assume everything is within the same module. However, many times, this will not be the case.</p>
<pre class="rust"><code>mod geometry {
    pub struct Point {
        x: f64,
        pub y: f64,
    }

    impl Point {
        pub fn new() -> Point {
            Point { x: 0.0, y: 0.0 }
        }

        pub fn set_x(&mut self, x: f64) {
            self.x = x;
        }
    }
}

fn main() {
    // Won't compile: the `x` field is private.
    // let mut p = geometry::Point { x: 0.0, y: 0.0 };

    // Will compile: the `new` method is public.
    let mut p = geometry::Point::new();

    // Won't compile: `x` isn't public.
    // p.x = 4.0;

    // You can use the setter, though:
    p.set_x(4.0);

    // You *can* set `y` directly, though, because it's public.
    p.y = 14.0;

    // You can't set fields either way if the instance is immutable.
    let q = geometry::Point::new();

    // This fails because `set_x` requires a mutable reference, but `q` is
    // immutable.
    // q.set_x(4.0);

    // This fails because `q` is immutable, and so all its fields are, too.
    // q.y = 14.0;
}</code></pre></li>
<li><p>This is an interesting way of handling this issue. Rust takes the fairly standard use of information hiding (one of the basic principles of most object-oriented programming techniques) and combines it with the language’s normal mutability rules to make it so that the mutability of any given instance data is quite clear: all public members are just as mutable as the struct. If a member isn’t potentially publicly mutable, it isn’t publicly accessible. I really like this, though it took some mental readjustment.</p></li>
<li><p>There’s one other difference here, and it’s actually one of the areas Swift and Rust diverge substantially. Rust has <code>struct</code> for all product types; Swift splits them into <code>struct</code> types and <code>class</code> types.</p></li>
<li><p>Swift <code>class</code>es have inheritance; there is presently <em>no</em> inheritance in Rust.</p></li>
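<li><p>The closest Rust gets (a sketch of my own, not from the Swift book) is sharing behavior through <code>trait</code>s with default method implementations, which implementors pick up without any class hierarchy:</p>
<pre class="rust"><code>trait Greet {
    fn name(&self) -> String;

    // A default implementation, "inherited" by any implementor
    // that doesn't override it.
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name())
    }
}

struct Person;

impl Greet for Person {
    fn name(&self) -> String {
        "Chris".to_string()
    }
}

fn main() {
    println!("{}", Person.greet());
}</code></pre></li>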
<li><p>Additionally, whereas Rust determines whether to use pass-by-reference or pass-by-value depending on details of the type (whether it implements the <code>Copy</code> <code>trait</code>) and the expected arguments to a function, Swift makes that distinction between <code>class</code> (pass-by-reference) and <code>struct</code> (pass-by-value) types. Quirky.</p></li>
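<li><p>Concretely (my own sketch): assigning a <code>Copy</code> type duplicates it and leaves the original usable; assigning any other type <em>moves</em> it.</p>
<pre class="rust"><code>#[derive(Clone, Copy)]
struct Meters(f64);

struct Buffer {
    data: Vec<u8>,
}

fn main() {
    let m = Meters(1.5);
    let m2 = m;                       // copied: `m` is still usable
    println!("{} {}", m.0, m2.0);

    let b = Buffer { data: vec![1, 2, 3] };
    let b2 = b;                       // moved: `b` may no longer be used
    // println!("{}", b.data.len()); // error: value borrowed here after move
    println!("{}", b2.data.len());
}</code></pre></li>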
<li><p>Not bad, <em>per se</em>. But quirky.</p>
<p><strong>Edit:</strong> I recently bumped into some discussion of data types in C♯ along with C, C++, and Java (<a href="http://joeduffyblog.com/2015/12/19/safe-native-code/">here</a>) and discovered that Swift is stealing this idea from C♯, which <a href="https://msdn.microsoft.com/en-us/library/0taef578.aspx">makes the same copy/reference distinction</a> between <code>struct</code> and <code>class</code>.</p></li>
<li><p>One consequence of this: in Rust, you’re always rather explicit about whether you’re accessing things by value vs. by reference. Not so in Swift; you have to remember whether the item you’re touching is a <code>struct</code> type or a <code>class</code> type, so that you can <em>remember</em> whether a given assignment or function call results in a reference or a copy. This is necessary because Swift doesn’t let you make that explicit (trying to hide the memory management from you). And it’s not alone in that, of course; many other high-level languages obscure that for convenience but still require you to think about it in certain circumstances. I’ve been bitten in the past by the value/reference distinction when thinking through the behavior of Python objects, for example, so that’s not a critique of Swift. Moreover, having the distinction between <code>struct</code> and <code>class</code> types does let you be <em>more</em> explicit than you might in e.g. Python about how given data will be handled.</p></li>
<li><p>I won’t lie, though: I like Rust’s approach better. (Shocking, I know.)</p></li>
<li><p>All that nice initializer syntax for Swift <code>struct</code> types is absent for its <code>class</code> types, which seems strange to me.</p></li>
<li><p>Swift supplies some syntax for object identity, since it’s useful to know not only whether two <code>class</code> instances have the same data, but are in fact the same instance. You can use <code>===</code> and <code>!==</code>. Handy enough. To get at this kind of equivalence in Rust, you have to use raw pointers (which are often but not always <code>unsafe</code>; you can do this specific comparison <em>without</em> being <code>unsafe</code>, for example) to check whether the memory addresses are the same.</p></li>
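<li><p>A sketch of that in Rust (my own example): casting a reference to a raw pointer is safe; only <em>dereferencing</em> a raw pointer requires <code>unsafe</code>, so the identity check itself doesn’t.</p>
<pre class="rust"><code>fn main() {
    let a = String::from("hi");
    let b = String::from("hi");
    let r1 = &a;
    let r2 = &a;
    let r3 = &b;

    // Value equality, like Swift's `==`:
    assert!(a == b);

    // Identity, like Swift's `===`: compare the addresses themselves.
    assert!(r1 as *const String == r2 as *const String);
    assert!(r1 as *const String != r3 as *const String);
}</code></pre></li>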
</ul>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-ix.html"><strong>Previous:</strong> Sum types (<code>enum</code>s) and more on pattern matching.</a></li>
<li><a href="http://v4.chriskrycho.com/2016/rust-and-swift-xi.html"><strong>Next:</strong> Hopes for the next generation of systems programming.</a></li>
</ul>
Rust and Swift (ix)2015-11-09T22:20:00-05:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-11-09:/2015/rust-and-swift-ix.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<ul>
<li><p>Right off the bat when looking at the definitions for Swift’s and Rust’s <code>enum</code> types, a difference pops out: the use of the keyword <code>case</code> to introduce an enum member in Swift. In one sense, this overloads that keyword, but in another sense it’s fine: pattern matching and enums go hand in hand, so the use in both cases is fairly natural. Rust doesn’t have any special syntax to designate the elements of an enum; they’re just separated by commas.</p></li>
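<li><p>For reference, a trivial Rust sketch of that comma-separated form (my own example, not from either book):</p>
<pre class="rust"><code>// No `case` keyword: the variants are simply listed, comma-separated.
enum CompassPoint {
    North,
    South,
    East,
    West,
}</code></pre></li>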
<li><p>I am not at all shocked to find that Swift has a variant syntax for its unit type case declarations, where a single <code>case</code> keyword precedes a list of comma-separated cases defined on a single line. (At this point, I would be more surprised <em>not</em> to find a variant syntax for something in Swift!)</p></li>
<li><p>Something truly wonderful about both Rust and Swift: enumerated types aren’t just wrappers around integer values. They’re real types of their own. This is powerful.</p></li>
<li><p>Rust and Swift also share in having enumerated types that can hold values. The most prominent of these so far in the Swift book are optionals, the <code>Optional</code> enum type, corresponding very closely to Rust’s <code>Option</code> type. Having had these for a bit in playing with Rust, and having gotten familiar with the utility of types like these while reading <a href="https://gumroad.com/l/maybe-haskell"><em>Maybe Haskell</em></a>—a delightful book which introduces Haskell and functional programming using Haskell’s <code>Maybe</code> type—I now miss them profoundly in languages which don’t have them. (Which is to say: every language I use on a regular basis professionally: C, C++, Python, JavaScript, etc.).</p></li>
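<li><p>A quick sketch of why that’s so pleasant (my own example): <code>Option</code> is itself just an enum, and the compiler makes you deal with the <code>None</code> case; there is no null lurking to be forgotten.</p>
<pre class="rust"><code>fn main() {
    let nums = [1, 2, 3];

    // `find` returns `Option<&i32>`: either `Some(value)` or `None`.
    let found = nums.iter().find(|&&n| n > 2);

    match found {
        Some(n) => println!("found {}", n),
        None => println!("nothing greater than 2"),
    }
}</code></pre></li>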
<li><p>Swift’s enum types don’t have integer values <em>by default</em>—but they can have them if you define a type and assign a value to each enum case at the definition. These “raw values” are distinct from the “associated values” noted just above. I expect these exist primarily for ease of interoperation with Objective-C.</p></li>
<li><p><del>Rust doesn’t have anything like this, at least that I can think of. The main place it would be useful would be for foreign function interfaces (as in Swift), and this is one of several such gaps in Rust,</del> along with the lack of a straightforward way to map to C’s <code>union</code> types. <del>There are trade offs in terms of adding the functionality to the language, though, as it substantially increases the complexity of what an enum value can be, I think.</del></p>
<p><strong>Edit:</strong> This was incorrect. From the <a href="https://doc.rust-lang.org/reference.html">Rust Reference</a> section on <a href="https://doc.rust-lang.org/reference.html#enumerations">Enumerations</a>:</p>
<blockquote>
<p>Enums have a discriminant. You can assign them explicitly:</p>
<pre class="rust"><code>enum Foo {
    Bar = 123,
}</code></pre>
<p>If a discriminant isn’t assigned, they start at zero, and add one for each variant, in order.</p>
<p>You can cast an enum to get this value:</p>
<pre class="rust"><code>let x = Foo::Bar as u32; // x is now 123u32</code></pre>
<p>This only works as long as none of the variants have data attached. If it were <code>Bar(i32)</code>, this is disallowed.</p>
</blockquote></li>
<li><p>Initialization of Swift’s raw-valued enum type is quite similar, and pleasantly so, to Python’s initialization of enums.</p></li>
<li><p>In a surprising change from the usual, Swift’s syntax for binding variable names when pattern matching against an enum is <em>more</em> verbose than Rust’s, requiring the use of either a leading <code>let</code> on the <code>case</code> statement if all the elements are of the same type, or a <code>let</code> in front of each element otherwise:</p>
<pre class="swift"><code>var matchedValue: String
let matchee = 3.14159

switch matchee {
case 3.14159:
    matchedValue = "pi"
case _:
    matchedValue = "not pi"
}</code></pre>
<p>In Rust, a matched pattern can simply bind its value directly:</p>
<pre class="rust"><code>let matchee = 3.14159;
let matched_value = match matchee {
    3.14159 => "pi".to_string(),
    _ => "not pi".to_string(),
};</code></pre></li>
<li><p>Swift has the ability to do recursive enumerations with its <code>indirect</code> type. This is conceptually interesting, but off the top of my head I can’t think of a time when this would have been useful at any point since I started programming seven and a half years ago. The book’s example of a recursive function aliasing arithmetic expressions is fine, but not particularly illuminating to me. I suspect, though, that it might make more sense if I were more familiar with pure functional programming paradigms.</p>
<p><strong>Edit:</strong> a friend <a href="https://alpha.app.net/jws/post/65990633">points out</a>:</p>
<blockquote>
<p>Indirect enums are useful for recursive types in general. There are a lot of these: Lists, trees, and streams are the big ones that come to mind.</p>
</blockquote></li>
<li><p>Along those same lines: Rust does <em>not</em> have the ability to have recursive enumerations at present (or recursive <code>struct</code> types, for that matter), at least without heap-allocating with <code>Box</code> along the way. You <em>can</em> construct such a type, in other words, but you have to be explicit about how you’re handling the memory, and it can’t be stack-allocated.</p>
<ul>
<li><p>For an example of a recursive enumeration type (as well as an interesting/hilarious example of how you can easily confuse the compiler if you do this wrong), see <a href="https://users.rust-lang.org/t/recursive-enum-types/2938">this Rust forum post</a>.</p></li>
<li><p>For some discussion on stack- and heap-allocated memory in Rust, I’ll shamelessly promote my Rust podcast, <a href="http://www.newrustacean.com">New Rustacean</a>: take a listen to <a href="http://www.newrustacean.com/show_notes/e005/index.html">e005: Allocate it where?</a></p></li>
</ul></li>
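<li><p>To make the <code>Box</code> version concrete, here is the classic cons-list sketch (my own example, in modern Rust):</p>
<pre class="rust"><code>// The recursive field must be boxed so `List` has a known size.
enum List {
    Cons(i32, Box<List>),
    Nil,
}

fn sum(list: &List) -> i32 {
    match list {
        List::Cons(head, tail) => head + sum(tail),
        List::Nil => 0,
    }
}

fn main() {
    let list = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    println!("{}", sum(&list)); // 3
}</code></pre></li>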
</ul>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-viii.html"><strong>Previous:</strong> Functions, closures, and an awful lot of Swift syntax.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-x.html"><strong>Next:</strong> Classes and structs (product types), and reference and value types.</a></li>
</ul>
CSS Fallback for OpenType Small Caps2015-10-19T20:00:00-04:002015-10-19T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-10-19:/2015/css-fallback-for-opentype-small-caps.html<p><i class=editorial>I wrote this up as <a href="http://stackoverflow.com/questions/24846264/css-fallback-for-opentype-small-caps/25172932#25172932">a question on Stack Overflow</a> a bit over a year ago. It has continued to get a fair bit of traffic, so I’ve republished it here and cleaned it up a bit.</i></p>
<section id="the-problem" class="level2">
<h2>The Problem</h2>
<p>Over the last year, I’ve worked on <a href="//holybible.com">a site …</a></p></section><p><i class=editorial>I wrote this up as <a href="http://stackoverflow.com/questions/24846264/css-fallback-for-opentype-small-caps/25172932#25172932">a question on Stack Overflow</a> a bit over a year ago. It has continued to get a fair bit of traffic, so I’ve republished it here and cleaned it up a bit.</i></p>
<section id="the-problem" class="level2">
<h2>The Problem</h2>
<p>Over the last year, I’ve worked on <a href="//holybible.com">a site</a> where small caps are important: setting the text of the Bible. In the Old Testament the name of God is transliterated as <code>Lord</code> but in small caps—not “LORD” but <span class="divine-name">Lord</span> (RSS readers will want to click through and see this on my site). However, the state of OpenType small caps support at the moment is… less than optimal. Safari (even up through Safari 9 on El Capitan, from which I am typing this) still doesn’t support the <code>-webkit-font-feature-settings: 'smcp'</code> option, and a lot of the hits for this website will be coming from mobile.</p>
<p>Unfortunately, “graceful degradation” is problematic here: if you specify both <code>font-variant: small-caps</code> and <code>font-feature-settings: 'smcp'</code> in a browser that supports the latter (e.g. Chrome), the <code>font-variant</code> declaration overrides it, so the horribly ugly old-style version still comes into play. (Note: this is as it should be per the <a href="http://www.w3.org/TR/css-fonts-3/#feature-precedence">spec</a>: the <code>font-variant</code> declaration has a higher priority than the <code>font-feature-settings</code> declaration.) Given the current implementations of <code>font-variant: small-caps</code>, though—shrunken capitals rather than actual small capitals—the result is that using <code>font-variant: small-caps</code> results in not-so-gracefully degrading <em>everyone’s</em> reading experience.</p>
<p>In the past, I have exported the small caps as a distinct webfont and specified them directly; see <a href="http://v4.chriskrycho.com/2014/learning-qml-part-1.html">this post</a> for a simple example: the first line of each paragraph is specified that way.</p>
<p>While I <em>can</em> do the same thing here (and at least in theory could deliver a pretty small typeface, since I really only need three characters: <code>o</code>, <code>r</code>, and <code>d</code>), I’d prefer simply to enable sane fallbacks. As noted above, however, that’s not possible. I am <em>open to</em> but would very much prefer to avoid server-side solutions (browser detection, etc.) as a point of complexity that is better to minimize, especially given how rapidly browsers change. How else might one solve this problem, and especially are there existing solutions for it?</p>
<p>In the future, <code>font-variant: small-caps</code> will handle this nicely, as per <a href="http://www.w3.org/TR/css3-fonts/#small-caps">the spec</a> it should display a small-capitals-variant of the typeface if the typeface supplies it. However, at present, <em>no browser supports this</em> (at least, none that I can find!). This means that instead, they all render fake small capitals simply by scaling down actual capitals. The result is typographically unpleasant, and unacceptable on this project.</p>
</section>
<section id="the-solutions" class="level2">
<h2>The Solution(s)</h2>
<p>I spent a considerable amount of time researching this and wrestling with it. After digging around as best I could, the top solutions for now are:</p>
<section id="supports" class="level3">
<h3><code>@supports</code></h3>
<p>Take advantage of the <code>@supports</code> rule in browsers. This is what I initially opted to do on this project.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> You use the rule this way:</p>
<pre class="css"><code>.some-class {
    font-variant: small-caps;
}

@supports (font-feature-settings: 'smcp') {
    .some-class {
        font-variant: normal;
        font-feature-settings: 'smcp';
    }
}</code></pre>
<p>(I’ve simplified by leaving out the prefixed versions; you’ll need to add the <code>-webkit-</code> and <code>-moz-</code> prefixes to get this actually working.) This has the advantage that support for real small caps and support for the <code>@supports</code> rule are very similar:</p>
<ul>
<li><code>@supports</code>: <a href="http://caniuse.com/#feat=css-featurequeries">Can I Use Feature Queries?</a>: Chrome 31+, Firefox 29+, Opera 23+, Android 4.4+, Safari 9+, Edge 12+, Chrome for Android</li>
<li><code>font-feature-settings</code>: <a href="http://usabilitypost.com/2014/05/10/using-small-caps-and-text-figures-on-the-web/">Using Small Caps & Text Figures on the Web</a>: Chrome, Firefox, IE10+</li>
</ul>
<p>This isn’t perfect: since IE10/11 don’t implement <code>@supports</code>, you miss one browser—sort of. At this point, IE is a legacy browser, and Edge has had <code>@supports</code> available from the start. Thus, this gets you most of the way there, and it should be future-facing: this should progressively enhance the site nicely. The normal (bad, but functional) small caps are displayed in the meantime, and when browsers eventually get around to using OpenType small caps by default for <code>font-variant: small-caps</code>, this will continue to work just fine. It’s “progressive enhancement” and it’ll work nicely for most purposes.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
</section>
<section id="typeface-subsetting" class="level3">
<h3>Typeface subsetting</h3>
<p>As mentioned above, one can create a subset of the typeface that includes only small capitals. This is what I have done for the small caps on this site; see the example in the first paragraph.</p>
<p>To pull this off, you’ll need to start by subsetting the typeface. You can do this manually with a font tool, or (the simpler way) you can use FontSquirrel’s custom subsetting tool in their <a href="http://www.fontsquirrel.com/tools/webfont-generator">webfont generator</a>. (<strong><em>Note:</em></strong> You <em>must</em> check the license and confirm that the typeface in question allows this kind of modification. See below.) In the web font generator, first upload the file you wish to modify. Then choose the <strong>Expert</strong> radio button. Most of the settings you can leave as they are; they’re good sane defaults. Midway down the page you’ll see <strong>OpenType Flattening</strong> options. Here, select only “Small Caps”. Run the generator. The result will be a complete replacement of the normal lowercase letters with the small caps set.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>In that case, you can simply apply a style to the elements you want to have small capitals, e.g.:</p>
<pre class="css"><code>.divine-name {
    font-family: 'my_typeface_smcp', 'my_typeface', serif;
}</code></pre>
<p>The major advantage to this approach is consistency: that typeface is going to display on every browser out there, back to IE5.5, as long as you deliver it correctly using the various hooks required by <code>@font-face</code>.</p>
<p>There are a few disadvantages to this approach, though:</p>
<ol type="1">
<li><p>It means delivering another font file. In my case, this would be an acceptably low size (since I actually only need four characters), but it’s still something to consider in general. It is in any case another HTTP request, which is going to further slow the page load time or at least give you some flash of unstyled text when it reloads.</p></li>
<li><p>It may violate the licenses of the typefaces in question. For at least one of the fonts I used on this project, it <em>does</em>: the license explicitly forbids rebuilding the font using tools like FontSquirrel. (FontSquirrel was the tool I used for this approach before, and it works quite well.) This is a make-or-break issue for using a subset of a typeface to accomplish the goal. That being said, if you have a good reason to do it, you may be able to get support from the vendor (especially if they’re a small shop). For the project that prompted this question, I was able to do just that with a nice email—the designer is a great guy.</p></li>
</ol>
<p>The other major reason not to do it this way is that it has a significantly higher maintenance cost. If at any point you need to change or update the typeface, you have to go through the subsetting process all over again. By contrast, the first option will simply <em>work</em>, though admittedly not as pleasantly as one might hope, and will not only continue to work but will actually improve over time as browsers increase their implementation of the CSS3 standard.</p>
</section>
</section>
<section id="conclusion" class="level2">
<h2>Conclusion</h2>
<p>I opted for the second solution on HolyBible.com—typography was one of the driving differentiators for the site, so I prioritized it and did the necessary legwork for it. In general, though, the first option should work well for most sites. In any case, both ways work, though the first one is a <em>better</em> example of progressive enhancement. And we can all look forward to the day when true small-caps support is available on every browser, right?</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>For various reasons (especially see note 2 below), I actually opted for the second approach outlined here, which is the same approach I was trying to avoid. Alas.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Issues remain: even in the latest Chrome (46 as of the time of this post), using the <code>font-feature-settings: 'smcp'</code> approach has some issues. For example, if you turn on <code>letter-spacing</code> (a fairly common <a href="http://practicaltypography.com/letterspacing.html">recommendation</a> for small caps), the small caps will revert to normal lowercase letters.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>From the FontSquirrel blog post that introduced the feature:</p>
<blockquote>
<p>If you have a font with OpenType features, you can now flatten some of them into your webfont. For instance, some fonts have small caps built in, but they are completely inaccessible in a web browser. By selecting the “Small Cap” option, the Generator will replace all the lowercase glyphs with the small cap variants, giving you a small cap font. Please note that not all OpenType features are supported and if the font lacks OpenType features, using these options won’t create them.</p>
</blockquote>
<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></li>
</ol>
</section>
Rust and Swift (viii)2015-10-18T11:50:00-04:002015-10-19T20:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-10-18:/2015/rust-and-swift-viii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past few months. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<ul>
<li><p>Rust and Swift handle function definition fairly similarly, at least for basic function definitions. In fact, for most basic functions, the only difference between the two is the keyword used to indicate that you’re declaring a function: <code>fn</code> in Rust and <code>func</code> in Swift.</p></li>
<li><p>Likewise, both return an empty tuple, <code>()</code>, called the <em>unit type</em> in Rust or <code>Void</code> in Swift. Note, however, that this unit/<code>Void</code> type is <em>not</em> like C(++)’s <code>void</code> or Java’s <code>null</code>: you cannot coerce other types to it; it really is an empty tuple.</p></li>
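<li><p>A tiny sketch of that in Rust (my own example): a function with no declared return type returns <code>()</code>, and that’s a real value you can bind and compare, not a null.</p>
<pre class="rust"><code>fn log(msg: &str) {
    println!("{}", msg);
}

fn main() {
    let result: () = log("hello");
    assert_eq!(result, ()); // the unit value really is an empty tuple
}</code></pre></li>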
<li><p>Type declarations on functions are basically identical for simple cases, though they vary into the details as you get into generics and constraints in generics.</p></li>
<li><p>I have no idea why the Swift team chooses to represent function names like this: <code>function_name(_:second_param:third_param:<etc.>)</code>. Perhaps it’s a convention from other languages I’m simply unfamiliar with, but it seems both odd and unhelpful: eliding the first parameter name obscures important information. Also, why use colons for the delimiter?</p>
<p><strong>Edit:</strong> I’m informed via Twitter and App.net that this reflects how function names work in Objective C, and derives ultimately from Smalltalk.</p></li>
<li><p>Being able to name the items in a returned type in Swift is quite handy; it’s something I have often wanted and had to work around with dictionaries or other similar types in Python.</p></li>
<li><p>We’ll see how I feel once I’ve been writing both for a while, but initially I <em>strongly</em> prefer Rust’s more-obvious (if also somewhat longer) <code>-> Option<i32></code> to return an optional integer to Swift’s <code>-> Int?</code>. I am quite confident that I’ll miss that trailing <code>?</code> somewhere along the way.</p></li>
<li><p>I’m sure there’s a reason for Swift’s internal and external parameter names and the rules about using <code>_</code> to elide the need to use keyword arguments (but automatically eliding the first one) and so on… but I really can’t see the utility, overall. It seems like it would be better just to have Python-like args and keyword args.</p></li>
<li><p>That’s doubly so given that Swift’s rules for default-valued parameters map exactly to Python’s: they need to go at the end, after any parameters which don’t have default values.</p></li>
<li><p>Swift’s variadic parameters are nice—though of course limited, since if you have more than one, the compiler may not know how to resolve which destination parameter a given argument belongs with. (I imagine the compiler <em>could</em> be extended to be able to handle multiple variadic parameters as long as they were all of different types, but that’s probably not worth the work or the potential confusion it would introduce.) In any case, it’s a small nicety that I do wish Rust had.</p></li>
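<li><p>The conventional Rust stand-in (my sketch, not from the post) is to take a slice, which the caller fills with however many arguments it likes:</p>
<pre class="rust"><code>fn sum(numbers: &[i32]) -> i32 {
    numbers.iter().sum()
}

fn main() {
    // Not true variadics, but the call sites stay lightweight:
    println!("{}", sum(&[1, 2, 3])); // 6
    println!("{}", sum(&[]));        // 0
}</code></pre></li>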
<li><p>Swift’s variable parameters are… interesting. I can see the utility, sort of, but (probably from years of habit with C and Python and pass-by-reference types), it’s just not a pattern that makes a lot of sense to me right now. No doubt I’ll get used to them in idiomatic Swift, but while Rust doesn’t have a similar feature, I suspect I won’t miss it.</p></li>
<li><p>In/out parameters—that is, mutable pass-by-reference types—are available in both languages. The syntax is <em>very</em> different here, as are the semantics.</p>
<p>Swift has the <code>inout</code> keyword, supplied before a parameter definition:</p>
<pre class="swift"><code>func adds4ToInput(inout num: Int) {
    num += 4
}</code></pre>
<p>Rust has instead a variation on every other type definition, declaring the type in this case to be a mutable reference:</p>
<pre class="rust"><code>fn adds_4_to_input(num: &mut i32) {
    *num += 4;
}</code></pre>
<p>As usual, in other words, Swift opts to use new syntax (in this case, a dedicated keyword) while Rust opts to use the same syntax used everywhere else to denote a mutable reference. In fairness to Swift, though, this is something of a necessity there. From what I’ve seen so far, Swift generally doesn’t (and perhaps can’t?) do pointers or references explicitly (though of course it’s handling lots of things that way under the covers); arguments to functions are a special case, presumably present primarily for interoperability with Objective-C.</p></li>
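<li><p>One consequence of Rust’s approach worth noting: the explicitness shows up at the call site too, since the caller has to hand over a mutable reference. A small sketch (the function name is invented):</p>

```rust
// The `&mut` in the signature and at the call site both make the
// mutation visible to the reader.
fn adds_4(num: &mut i32) {
    *num += 4;
}

fn main() {
    let mut n = 10;
    adds_4(&mut n);
    assert_eq!(n, 14);
}
```
</li>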
<li><p>Swift’s function type definitions, as used in e.g. function parameter definitions, are quite nice, and reminiscent of Haskell in the best way. Rust’s are pretty similar, and add in its <code>trait</code> usage—because function types <em>are</em> <code>trait</code>s. Once again, I really appreciate how Rust builds more complicated pieces of functionality on lower-level constructs in the language. (Swift may be doing similar under the covers, but the Swift book doesn’t say.)</p></li>
<li><p>Again, though, the downside to Rust’s sophistication is that it sometimes bundles in some complexity. Returning a function in Swift is incredibly straightforward:</p>
<pre class="swift"><code>func getDoubler() -> (Int) -> Int {
    func doubler(number: Int) -> Int {
        return number * 2
    }
    return doubler
}

func main() {
    let doubler = getDoubler()
    print("\(doubler(14))") // -> 28
}</code></pre>
<p>Doing the same in Rust is a bit harder, because—as of the 1.3 stable/1.5 nightly timeframe—it requires you to explicitly heap-allocate the function. Swift just takes care of this for you.</p>
<pre class="rust"><code>fn get_doubler() -> Box<Fn(i32) -> i32> {
    fn doubler(number: i32) -> i32 {
        number * 2
    }
    Box::new(doubler)
}

fn main() {
    let doubler = get_doubler();
    println!("{}", doubler(14)); // -> 28
}</code></pre>
<p>If you understand what’s going on under the covers here, this makes sense: <code>Fn(i32) -> i32</code> is a trait, not a concrete type, so the compiler doesn’t know the size of the value being returned. Putting it behind a pointer with <code>Box::new</code> gives the return value a known size, at the cost of a heap allocation. Swift just takes care of all of that for you.</p></li>
<li><p>In both languages, closures and “ordinary” functions are variations on the same underlying functionality (as it should be). In Rust’s case, functions and closures both implement the <code>Fn</code> trait. In Swift’s case, named functions are a special case of closures.</p></li>
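<li><p>That equivalence is easy to demonstrate on the Rust side: a named function can be passed anywhere a closure of the same signature is expected. A sketch (the names are invented):</p>

```rust
fn double(n: i32) -> i32 {
    n * 2
}

fn main() {
    let nums = vec![1, 2, 3];
    // A named function and an equivalent closure are interchangeable
    // as arguments to `map`, since both satisfy the relevant Fn bound.
    let via_fn: Vec<i32> = nums.iter().cloned().map(double).collect();
    let via_closure: Vec<i32> = nums.iter().cloned().map(|n| n * 2).collect();
    assert_eq!(via_fn, via_closure);
}
```
</li>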
<li><p>The Swift syntax for a closure is, well, a bit odd to my eye. The basic form is like this (with the same “doubler” functionality as above):</p>
<pre class="swift"><code>{ (n: Int) -> Int in return n * 2 }</code></pre>
<p>For brevity, this can collapse down to a much shorter form: the types are inferred from context, the parentheses are dropped, and the <code>return</code> keyword is elided because the closure consists of a single expression (note that this is only valid in a context where the type of <code>n</code> can be inferred):</p>
<pre class="swift"><code>{ n in n * 2 }</code></pre>
<p>The simplicity here is nice, reminiscent in a good way of closures/lambdas in other languages.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> The fact that it’s a special case is less to my taste.</p></li>
<li><p>Rust’s closure syntax is fairly similar to Swift’s brief syntax. More importantly, there’s no special handling for closures’ final expressions. Remember: the final expression of <em>any</em> block is always returned in Rust.</p>
<pre class="rust"><code>|n| n * 2</code></pre>
<p>If we wanted to fully annotate the types, as in the first Swift example, it would be like so:</p>
<pre class="rust"><code>|n: i32| -> i32 { n * 2 }</code></pre></li>
<li><p>There are even <em>more</em> differences between the two, because of Rust’s ownership notion and the associated need to think about whether a given closure is being borrowed or moved (if the latter, explicitly using the <code>move</code> keyword).</p></li>
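<li><p>A minimal sketch of the <code>move</code> distinction (using today’s <code>impl Trait</code> return syntax, which postdates this post; the names are invented):</p>

```rust
// `move` transfers ownership of `name` into the closure, so the
// closure can safely outlive the scope that created it.
fn make_greeter(name: String) -> impl Fn() -> String {
    move || format!("Hello, {}!", name)
}

fn main() {
    let greet = make_greeter(String::from("Rust"));
    assert_eq!(greet(), "Hello, Rust!");
}
```
</li>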
<li><p>Swift has the notion of shorthand argument names for use with closures.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> The arguments to a closure get the default names <code>$0</code>, <code>$1</code>, etc. This gets you even <em>more</em> brevity, and is quite convenient in cases where closures get used a lot (<code>map</code>, <code>sort</code>, <code>fold</code>, <code>reduce</code>, etc.).</p>
<pre class="swift"><code>{ $0 * 2 }</code></pre></li>
<li><p>If that weren’t enough, Swift will go so far as to simply reuse operators (which are special syntax for functions) as closures. So a closure argument could simply be <code>+</code> for a function expecting a closure operating on two numbers; Swift will infer that it needs to map back to the relevant method definition on the appropriate type.</p></li>
<li><p>The upside to this is that the code can be incredibly brief, and—once you’re used to it, at least—still fairly clear. The downside to this is yet <em>more</em> syntax for Swift, and the ever-growing list of things to remember and ways to write the same thing I expect will lead to quite a bit of instability as the community sorts out some expectations for what is idiomatic in any given instance.</p></li>
<li><p>And if that weren’t enough, there is more than one way to supply the body of a closure to a Swift function that expects it: you can supply a block (<code>{ /* closure body */ }</code>) <em>after</em> the function which expects it. Yes, this can end up looking nearly identical to the form for declaring a function:</p>
<pre class="swift"><code>someFunctionExpectingAnIntegerClosure() { $0 * 2 }</code></pre>
<p>But you can also drop the parentheses if that’s the only argument.</p>
<pre class="swift"><code>someFunctionExpectingAnIntegerClosure { $0 * 2 }</code></pre></li>
<li><p>In terms of the <em>mechanics</em> of closures, and not just the syntax, the one significant difference between Rust and Swift is the same one we’ve seen in general between the two languages: Swift handles the memory issues automatically; Rust makes you be explicit about ownership. That is, as noted above about the closures themselves, in Rust you may have to <code>move</code> ownership to get the expected behavior. Both behave basically like closures in any other language, though; nothing surprising here. Both also automatically copy values, rather than using references, whenever it makes sense to do so.</p></li>
<li><p>Swift autoclosures allow for lazy evaluation, which is neat, but: <em>yet more syntax</em>! Seriously. But I think all its other closure syntaxes <em>also</em> allow for lazy evaluation. The only reason I can see to have the special attribute (<code>@autoclosure</code>) here is because they added this syntax. And this syntax exists so that you can call a function which takes a closure as if it took a plain expression: the argument is automatically wrapped in a closure whose evaluation is deferred. But of course, this leads the Swift book to include the following warning:</p>
<blockquote>
<p><strong>Note:</strong> Overusing autoclosures can make your code hard to understand. The context and function name should make it clear that the evaluation is being deferred.</p>
</blockquote>
<p>Yes, care needed indeed. (Or, perhaps, you could just avoid adding more special syntax that leads to unexpected behaviors?)</p></li>
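<li><p>For comparison: Rust has no autoclosures, so laziness is opt-in and visible in the signature, since an API that wants deferred evaluation simply takes a closure. A sketch using the standard library’s <code>unwrap_or</code>/<code>unwrap_or_else</code> pair:</p>

```rust
fn expensive_default() -> i32 {
    // Imagine a costly computation here.
    42
}

fn main() {
    let maybe: Option<i32> = None;
    // `unwrap_or` evaluates its argument eagerly; `unwrap_or_else`
    // takes a closure, so the default is only computed on demand.
    let eager = maybe.unwrap_or(expensive_default());
    let lazy = maybe.unwrap_or_else(expensive_default);
    assert_eq!(eager, 42);
    assert_eq!(lazy, 42);
}
```
</li>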
<li><p>Good grief. I’m tired now. That’s a half-dozen variants on <em>closure syntax</em> in Swift.</p></li>
<li><p>Remember: there’s still just one way to write and use a closure in Rust.</p></li>
<li><p>This takes me back to something I noticed <a href="/2015/rust-and-swift-ii.html">early on</a> in my analysis of the two languages. In Swift, there’s nearly always more than one way to do things. In Rust, there’s usually one way to do things. Swift prefers brevity. Rust prefers to be explicit. In other words, Swift borrows more of its philosophy from Perl; Rust more from Python.</p></li>
<li><p>I’m a Python guy, through and through. Perl drives me crazy every time I try to learn it. You could guess (even if you hadn’t already seen) where this lands me between Rust and Swift.</p></li>
</ul>
<p>This post is incredibly long, but I blame that on the (frankly incredible) number of variants Swift has on the same concept.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-vii.html"><strong>Previous:</strong> Pattern matching and the value of expression blocks.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-ix.html"><strong>Next:</strong> Sum types (<code>enum</code>s) and more on pattern matching.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Compare the closure syntaxes especially in Ruby and ES6+.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>For a similar example in another up-and-coming language, see <a href="http://elixir-lang.org/getting-started/modules.html#function-capturing">Elixir</a>, which does almost exactly the same but with <code>&</code> in place of <code>$</code>.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Jeb Bush on net neutrality2015-09-24T07:15:00-04:002015-09-24T07:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-24:/2015/jeb-bush-on-net-neutrality.html<p>Dear Republicans: your <a href="http://arstechnica.com/tech-policy/2015/09/if-elected-president-jeb-bush-will-get-rid-of-net-neutrality-rules/">opposition to net neutrality</a> might be justifiable as something other than kowtowing to megacorporations <em>if you ever got around to proposing something else</em>. As is, all you’re doing is propping up some of the nastiest, most anti-consumer companies in the country and sustaining monopolies and duopolies …</p><p>Dear Republicans: your <a href="http://arstechnica.com/tech-policy/2015/09/if-elected-president-jeb-bush-will-get-rid-of-net-neutrality-rules/">opposition to net neutrality</a> might be justifiable as something other than kowtowing to megacorporations <em>if you ever got around to proposing something else</em>. As is, all you’re doing is propping up some of the nastiest, most anti-consumer companies in the country and sustaining monopolies and duopolies, supposedly in the name of “free markets”.</p>
<p><strong>Stop it.</strong></p>
<hr />
<p>N.b. This isn’t intrinsically a partisan issue. It’s become one, but mostly because Republicans have felt compelled to do the bidding of the telecom industry for… reasons.</p>
<p>The only thing worse than a government monopoly is a <em>private</em> monopoly.</p>
<p>If Republicans wanted to push for <a href="https://en.wikipedia.org/wiki/Local-loop_unbundling">local loop unbundling</a> in place of net neutrality, <em>almost everyone</em> would be for it. (The exception: telecom companies.)</p>
Rust and Swift (vii)2015-09-19T15:00:00-04:002015-09-20T13:42:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-19:/2015/rust-and-swift-vii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<ul>
<li><p>Both Rust and Swift have <em>pattern-matching</em>, and with what appears to be fairly similar basic behavior. (I touched on this briefly in my <a href="/2015/rust-and-swift-i.html">first post in the series</a>.) In Rust this goes under the <code>match</code> construct, with matches specified like <code><pattern> => <expression|statement></code>, optionally with guards specified with <code>if</code> expressions. In Swift, patterns are matched using the <code>switch</code> construct, with matches specified like <code>case <pattern>: <expression|statement></code>, optionally with guards specified with <code>where</code> expressions. (<code>where</code> is also used in Rust, but for generic constraints, not pattern match guards.)</p></li>
<li><p>Both languages allow you to bind names to a matched pattern: Swift with <code>case let <name></code> and Rust simply by using the name in a normal destructuring expression as part of the match definition.</p>
<p><strong>Edit:</strong> that’s not <em>quite</em> right. In Rust, you use the <code>@</code> operator with the variable name you want to bind in the match.</p>
<p><strong>Edit the second:</strong> I was mixed up, because Rust actually has <em>both</em> of those options. You can either match directly, e.g. when getting the value of an <code>Option</code> type: <code>Some(value)</code> as the pattern will bind <code>value</code>. But if you need to bind a specific part of more complicated data structure, the <code>@</code> operator is present to let you do it in a fairly straightforward way.</p></li>
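<li><p>A small sketch of the <code>@</code> operator in action (the function is invented; note that it uses today’s inclusive-range pattern syntax <code>1..=5</code>, which in the Rust of this post’s era was written <code>1...5</code>):</p>

```rust
fn describe(n: i32) -> String {
    match n {
        // Bind the matched value to `m` while also constraining its range.
        m @ 1..=5 => format!("{} is small", m),
        other => format!("{} is something else", other),
    }
}

fn main() {
    assert_eq!(describe(3), "3 is small");
    assert_eq!(describe(9), "9 is something else");
}
```
</li>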
<li><p>Both languages allow for the use of <code>_</code> as a “wildcard” in match definitions. Since match definitions in Rust use the patterns directly, the equivalent of Swift’s C-like <code>default</code> is simply a wildcard match pattern (<code>_ => <expression|statement></code>).</p></li>
<li><p>One significant difference: like its <code>if</code> blocks, Rust’s <code>match</code> blocks are expressions, so they can be assigned. I.e., you can write this:</p>
<pre class="rust"><code>let test = 5u32;
let description = match test {
    0...9 => "less than ten",
    _ => "ten or greater",
};
println!("{}", description); // "less than ten"</code></pre>
<p>Swift doesn’t let you do this; the same thing there would be written like this:</p>
<pre class="swift"><code>let test: UInt32 = 5
var description: String
switch test {
case 0..<10:
    description = "less than ten"
default:
    description = "ten or greater"
}
print("\(description)")</code></pre></li>
<li><p>Both languages have <code>break</code> statements, but in Rust they’re only used in loop constructs, while Swift (like C) uses them to escape <code>case</code>s as well. The Swift book gives an example of one place they’re necessary in a <code>switch</code>: to match a case and do nothing there (e.g. <code>default: break</code>). In Rust, you would simply supply an empty block for that scenario (e.g. <code>_ => {}</code>).</p></li>
<li><p>Correctly, both languages force you to match exhaustively on relevant patterns. If you’re matching an enumerated type, for example, you must handle every enumerated value. You can of course do this with wildcard patterns or with Swift’s <code>default</code>, but the good thing is that both languages will refuse even to compile if a given pattern isn’t matched.</p></li>
<li><p>Swift’s default behavior around its <code>switch</code> statements is sane: it does <em>not</em> automatically fall through into the next statement. It does let you do this, without checking the condition on the next statement (as in C), using the <code>fallthrough</code> keyword. Rust, by contrast, simply doesn’t allow this at all.</p></li>
<li><p>Both languages supply named control statements (loops, etc.), with slightly different syntax for naming them. Rust’s, curiously, shares its syntax with lifetime definitions—more on those in a future post.</p></li>
<li><p>I don’t believe Rust has anything quite like Swift’s <code>guard</code>s, which allow you to leave normal or expected control flow in the main body of a block, with a secondary block for cases where the <code>guard</code> isn’t matched. This isn’t a huge deal, but it does fit as a nice convenience into the typical <code>if let</code> pattern in Swift. Basically, it just lets you elide an empty <code>if</code> block and supply only the <code>else</code> block.</p>
<p><strong>Edit:</strong> a friend <a href="https://alpha.app.net/jws/post/64804111">points out</a> that Swift <code>guard</code>s also require you to exit the current scope, so it’s unambiguous what you’re doing if you use them.</p></li>
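<li><p>The closest spelling of that pattern in Rust I know of is an <code>if let</code> with an early return: more ceremony than Swift’s <code>guard let</code>, but the same shape. A sketch (the names are invented):</p>

```rust
fn shout(maybe_name: Option<&str>) -> String {
    // The "guard": bail out early if the value is absent, then carry on
    // with the unwrapped name in the main body.
    let name = if let Some(n) = maybe_name {
        n
    } else {
        return String::from("who's there?");
    };
    format!("HELLO, {}!", name.to_uppercase())
}

fn main() {
    assert_eq!(shout(Some("chris")), "HELLO, CHRIS!");
    assert_eq!(shout(None), "who's there?");
}
```
</li>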
</ul>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-vi.html"><strong>Previous:</strong> Collection types and the difference between syntax and semantics.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-viii.html"><strong>Next:</strong> Functions, closures, and an awful lot of Swift syntax.</a></li>
</ul>
Rust and Swift (vi)2015-09-19T09:00:00-04:002015-09-19T09:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-19:/2015/rust-and-swift-vi.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>It kind of feels like this summarizes a <em>lot</em> of things about the overall design of Swift:</p>
<blockquote>
<p>Although the two forms are functionally identical, the shorthand form is preferred and is used throughout this guide when referring to the type of an array. —<em>The Swift Programming Language (Swift 2 Prerelease)</em></p>
</blockquote>
<p>The documentation for the various types in Rust’s <code>std::collections</code> module is hilarious and great. Highly recommended.</p>
<p>One thing that jumped out at me reading this chapter of the Swift book (though I don’t think it’s been explicitly discussed yet): Rust doesn’t have named parameters; Swift does. There are good reasons for that in both cases, but I suspect this is one of the small details I’ll miss the most in Rust. I’ve been spoiled by Python.</p>
<p>Swift’s <code>Array</code> type is analogous to Rust’s <code>Vec</code> type (usually created with the <code>vec!</code> macro), <em>not</em> its built-in array type (<code>[T; N]</code>). Rust <code>Vec</code>s and Swift <code>Array</code>s are dynamically sized and created on the heap, whereas Rust’s arrays are statically sized and created on the stack. Syntax for creating arrays in both languages is quite similar (though the results are different):</p>
<ul>
<li>Swift:
<ul>
<li>Fixed size: <code>let an_array: [Int] = [1, 2, 3]</code></li>
<li>Variable size: <code>var an_array = [1, 2, 3]</code></li>
</ul></li>
<li>Rust:
<ul>
<li>Array: <code>let an_array: [i32; 3] = [1, 2, 3];</code></li>
<li>Vector: <code>let a_vector: Vec<i32> = vec![1, 2, 3];</code></li>
</ul></li>
</ul>
<p>That’s the long form, of course; both languages have type inference, so you’d rarely write it like that. The usual form omits the explicit type in all of those cases:</p>
<ul>
<li>Swift:
<ul>
<li>Fixed size: <code>let an_array = [1, 2, 3]</code></li>
<li>Variable size: <code>var an_array = [1, 2, 3]</code></li>
</ul></li>
<li>Rust:
<ul>
<li>Array: <code>let an_array = [1, 2, 3];</code></li>
<li>Vector: <code>let a_vector = vec![1, 2, 3];</code></li>
</ul></li>
</ul>
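<p>To make the array/vector distinction concrete on the Rust side, a small sketch:</p>

```rust
fn main() {
    // Fixed-size array: its length is part of the type, [i32; 3].
    let an_array: [i32; 3] = [1, 2, 3];
    // Growable vector: heap-allocated, so it can be pushed to.
    let mut a_vector: Vec<i32> = vec![1, 2, 3];
    a_vector.push(4);
    assert_eq!(an_array.len(), 3);
    assert_eq!(a_vector.len(), 4);
}
```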
<p>Rust also adds the concept of “slices,” which provide access to segments of arrays. A slice is represented as a pointer to a given item in the array plus a length (the number of elements included); it borrows a view of the data rather than allocating anything new.</p>
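<p>A quick sketch of a slice in action:</p>

```rust
fn main() {
    let v = vec![10, 20, 30, 40];
    // A slice borrows a view into the vector: a pointer plus a length.
    let middle: &[i32] = &v[1..3];
    assert_eq!(middle, &[20, 30]);
}
```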
<p><code>Array</code> operations in Swift are all pretty reasonable, and surprisingly descriptive. They remind me in a good way of Python’s <code>list</code> methods.</p>
<p>There are a <em>lot</em> of <a href="http://doc.rust-lang.org/stable/std/vec/struct.Vec.html">ways to interact with <code>Vec</code>s in Rust</a>. (That’s not a bad thing.) A bit surprising to me was the absence of an <code>enumerate</code> method, on <code>Vec</code> itself, but then I discovered that it exists in the <code>IntoIter</code> struct in the same module, which fully implements the <code>Iterator</code> <code>trait</code>. As a result, it has an <code>enumerate</code> function returning an <code>Enumerate</code> <code>struct</code> instance. (Under the covers, I suspect Swift <code>Array</code>s just implement an <code>Iterable</code> <code>protocol</code>, which is similar to this approach in some ways.)</p>
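<p>In practice, getting at <code>enumerate</code> is painless, since it comes along with the iterator. A sketch:</p>

```rust
fn main() {
    let letters = vec!["a", "b", "c"];
    // `enumerate` is supplied by the Iterator trait, via the iterator
    // the vector hands back.
    for (i, letter) in letters.iter().enumerate() {
        println!("{}: {}", i, letter);
    }
}
```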
<p>This makes a point I’m coming back to fairly often: Rust doesn’t necessarily put everything on a single object definition, but rather into a set of related <code>struct</code> or <code>enum</code> types and <code>trait</code>s. This is really powerful, but it’s going to take some mental adjustment. In this way, Swift’s structure and semantics are much more like the languages I’m used to than Rust’s are (but even there, the use of <code>protocols</code> gives it considerable new flexibility).</p>
<p>Note that I said <em>semantics</em>, not syntax. Swift and Rust are a great example of how very similar syntax can mask differences in semantics. (For another such example, compare JavaScript’s syntax and semantics to Java’s: they’re superficially similar syntactically, and light years apart semantically.)</p>
<p>Swift’s <code>Set</code> type and Rust’s roughly analogous <code>HashSet</code> both have a <code>contains</code> method which behaves much like Python’s <code>in</code> keyword. Indeed, and perhaps unsurprisingly, the two types implement many of the same methods in general. This is perhaps to be expected given that the language around sets (as a mathematical concept being mapped down into a representation in a program) is quite standardized.</p>
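<p>The Rust version, for reference (a sketch):</p>

```rust
use std::collections::HashSet;

fn main() {
    let primes: HashSet<u32> = [2, 3, 5, 7].iter().cloned().collect();
    // `contains` plays the role of Python's `in` operator.
    assert!(primes.contains(&5));
    assert!(!primes.contains(&6));
}
```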
<p>Because of their stricter typing systems, both Rust and Swift require you to specify the types used in their mapping constructs (Rust has <code>HashMap</code> and Swift has <code>Dictionary</code>), though of course both can infer this as well in certain cases. At the most basic level, you can’t use arbitrary (hashable) types as keys in mixed fashion like you can in e.g. Python’s <code>dict</code> type, but in practice this shouldn’t matter, for two reasons:</p>
<ol type="1">
<li>It’s generally inadvisable to use different types for keys in the same dictionary anyway. To me, at least, that usually indicates the need to step back and think more carefully about the types and data structures I’m using.</li>
<li>For the occasional case where it <em>is</em> appropriate, I wonder if you could declare the type as generic in either Rust or Swift. I’m putting this down as a TODO item for myself to find out!</li>
</ol>
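<p>Here’s what Rust’s typed mapping construct looks like in practice (a sketch; the names are invented):</p>

```rust
use std::collections::HashMap;

// Key and value types are fixed by the signature (or inferred from use).
fn build_ages() -> HashMap<String, u32> {
    let mut ages = HashMap::new();
    ages.insert(String::from("chris"), 35);
    ages
}

fn main() {
    let ages = build_ages();
    assert_eq!(ages.get("chris"), Some(&35));
}
```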
<p>I really wish that Swift used the Python-style curly-brace delimited syntax (<code>{'key': 'value'}</code>) for its dictionary literal initializers. I can see, from a syntax reason, why it doesn’t: that would overload the block syntax (which Python can avoid because it’s white-space delimited). But it’s <em>really</em> convenient.</p>
<p>Along similar lines, I can see why the Swift designers chose to make all iterables have literal initializers using square brackets (<code>[...]</code>); it makes parsing fairly straightforward. However, the result is that it’s pretty difficult to see at first glance what you’re dealing with. It could quite easily be an <code>Array</code>, a <code>Set</code>, or a <code>Dictionary</code>.</p>
<p>This highlights a too-little-appreciated aspect of programming language design: <em>readability</em>. However much we programmers enjoy writing code, the reality is that we will all spend a great deal of our time—probably even a majority of it—reading it instead. Thus, while we should care about conveniences for writing code, and being overly verbose can be a pain, we should also concern ourselves with the ease of comprehending code when it is read, and the syntax and conventions a language embraces are a big part of this.</p>
<p>The <code>Dictionary</code> type in Swift is a pretty close analog to Python’s <code>dict</code>, right down to several of the method names. The same is true of Rust’s <code>HashMap</code>. That’s not a bad thing by any stretch of the imagination.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-v.html"><strong>Previous:</strong> The value (and challenge) of learning languages in parallel.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-vii.html"><strong>Next:</strong> Pattern matching and the value of expression blocks.</a></li>
</ul>
Rust and Swift (v)2015-09-12T13:45:00-04:002015-09-12T13:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-12:/2015/rust-and-swift-v.htmlI have been learning Rust and Swift in parallel. I wouldn’t normally recommend this course of action, but I’m finding it enormously profitable. You might, too, under the right circumstances.
<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>I’ve been working on learning Swift over the past couple weeks, and had spent the month prior to that doing a deep first dive on Rust. This kind of approach, learning two languages basically at the same time, is entirely new to me, and for good reason. Programming languages are not trivial to learn, and to learn them meaningfully one must practice with them a great deal.</p>
<p>I’m doing this largely of necessity. I’m hoping to build an application with a very capable, performant cross-platform core language (Rust), but planning to ship a native OS X app (first) when all is said and done. My desire to make the core libraries portable rules out Swift immediately. To be frank, so does the fact that it’s an Apple language: I am happy to use Apple’s tools on its platform, but I don’t want to shackle myself to their choices in the long run. Too, having good Rust experience is likely to be valuable in many other contexts.</p>
<p>So I need to learn both.</p>
<p>And, while I wouldn’t ordinarily recommend this course of action—indeed, unless you already have a fair bit of programming experience and already know several languages, I’d actively recommend against it—I’m finding it enormously profitable. The languages have been designed in roughly the same time frame, cite many of the same influences, and overlap substantially in terms of audience and goals. Yet they are, as this series has already highlighted, quite different languages in many ways.</p>
<p>Learning them in parallel is helping me see the trade-offs each one has made, and force me to think about <em>why</em> they differ in the ways they do. In particular, I think I have a much better idea what’s going on “under the covers” in each language and therefore know what to expect of them better. This, in turn, has dramatically deepened my grasp of the languages relative to the amount I’ve been looking at them, compared to previous language-learning efforts. (It also helps that I’ve already learned a number of languages, of course, and that I’ve been pushing my brain into the learning-programming-languages space via reading about Haskell, functional patterns in JavaScript, and so on this year.)</p>
<p>I have a long way to go in both languages, of course. Reading on nights and weekends, and the little bit of playing I’ve been able to do with each of them, is no replacement for just sinking my teeth into a project and finding the pain points. Nonetheless, I’m really glad to be learning these two languages <em>together</em>. If you’re up for a challenge, try it sometime! You’ll be surprised how much you learn.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-iv.html"><strong>Previous:</strong> Language design tradeoffs, highlighted by string manipulation.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-vi.html"><strong>Next:</strong> Collection types and the difference between syntax and semantics.</a></li>
</ul>
If-expressions in Rust2015-09-12T11:05:00-04:002015-09-12T11:10:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-12:/2015/if-expressions-in-rust.html<p>I love the fact that all <code>if</code> statements in Rust are expressions. It gives you a great deal of expressivity in the language.</p>
<p>Let’s contrast with Python (which I love, for the record). In Python, you can do something like this:</p>
<pre class="python"><code>some_condition = True
if some_condition:
    a_value = "Yeah!"
else:
    a_value …</code></pre><p>I love the fact that all <code>if</code> statements in Rust are expressions. It gives you a great deal of expressivity in the language.</p>
<p>Let’s contrast with Python (which I love, for the record). In Python, you can do something like this:</p>
<pre class="python"><code>some_condition = True
if some_condition:
    a_value = "Yeah!"
else:
    a_value = "Oh, sads."</code></pre>
<p>Those are <em>statements</em> in the body of the <code>if</code>/<code>else</code> block; you can’t assign the block itself to <code>a_value</code>. However, like C, C++, Java, etc., Python does provide an <em>expression</em>-type conditional, a ternary expression.</p>
<p>So you can also do this:</p>
<pre class="python"><code>some_condition = True
a_value = "Yeah" if some_condition else "Oh, sads."</code></pre>
<p>This expression form of the <code>if</code> block is what all Rust <code>if</code> blocks are. So in Rust, the normal long form is:</p>
<pre class="rust"><code>let some_condition = true;
let a_value = if some_condition {
    "Yeah!"
} else {
    "Oh, sads."
};</code></pre>
<p>(You could also write this with a <code>let mut a_value</code> and then set its value inside the conditional blocks, but that’s not at all good form in Rust.)</p>
<p>And of course, you can shorten that rather nicely where the expressions are brief enough:</p>
<pre class="rust"><code>let some_condition = true;
let a_value = if some_condition { "Yeah!" } else { "Oh, sads." };</code></pre>
<p>But this gets really nice when you have more complicated work to do in a Rust conditional. It doesn’t matter how many things going on inside an <code>if</code> expression; it’s still an expression. As such, you can also write this:<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<pre class="rust"><code>let some_condition = true;
let a_value = if some_condition {
    let the_answer = 42;
    let theme = "Take my love, take my land...";
    "Yeah!" // An expression!
} else {
    let the_question = "What do you get when you multiply six by nine?";
    let song = "You can't take the sky from me!";
    "Oh, sads." // An expression!
};</code></pre>
<p>Obviously this is totally contrived and silly; the point is that no matter what the internals are, <code>if</code> blocks are expressions, and their final expressions can be assigned like any other.</p>
<hr />
<p>As a note: I got here because I was originally thinking you couldn’t do a one-liner like you can in Python. As shown above, that’s totally false, and in fact the Rust version is much more capable than Python’s, because you don’t need a dedicated ternary when all <code>if</code> blocks are expressions. Rust used to have a C-style ternary (<code>&lt;condition&gt; ? &lt;value if true&gt; : &lt;value if false&gt;</code>) but it was <a href="https://github.com/rust-lang/rust/issues/1698">removed</a> during the lead-up to the 1.0 release—a decision I wholeheartedly affirm.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Note that the compiler will warn about all the unused names here; prefix them with underscores (e.g. <code>_theme</code>) to silence the warnings.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (iv)2015-09-10T21:05:00-04:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-10:/2015/rust-and-swift-iv.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>Both Swift and Rust directly address the issue of having to worry about memory allocation and safety. They do it in different ways, though: Swift by automatic reference counting, Rust by its concept of ownership. For a lot of day-to-day development, I can see the Swift approach being a win for the same reason a language like Python or Ruby is: having that all handled for you is <em>nice</em>. Having the power Rust gives you comes at the price of increased cognitive load from having to reason about ownership.</p>
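<p>To make that ownership trade-off concrete, here’s a minimal sketch (the function and variable names are mine, purely for illustration):</p>

```rust
fn consume(s: String) -> usize {
    s.len()
} // `s` goes out of scope here; its memory is freed deterministically

fn main() {
    let greeting = String::from("hello");
    let n = consume(greeting); // ownership moves into `consume`
    assert_eq!(n, 5);
    // `greeting` can no longer be used here; the compiler enforces it.
    // That enforcement is exactly the extra reasoning Swift's
    // reference counting spares you.
}
```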
<p>To put it another way: all programming languages have to make trade-offs. Although I like Rust’s better than Swift’s so far, I’ve no doubt I will find any number of things to appreciate about Swift over Rust. You can’t have everything.</p>
<p>This caught my attention in part because dealing with things like strings (or other pass-by-value types) in Swift is rather more straightforward than in Rust. The outcomes are much the same, but since <em>all</em> <code>String</code>s in Swift are passed by value (never by reference), you simply don’t have to think about modification—even safe modification!</p>
<p>Rust of course has the <code>Copy</code> trait which lets you do this, but the point is that the “ergonomics” are slightly nicer in Swift.</p>
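<p>A minimal sketch of that ergonomic difference (the <code>Point</code> type is mine, purely illustrative): deriving <code>Copy</code> gives a Rust type Swift-like pass-by-value behavior, but you have to opt in, and <code>String</code> itself can’t opt in because it owns heap memory.</p>

```rust
#[derive(Copy, Clone)]
struct Point {
    x: i32,
    y: i32,
}

fn sum_point(p: Point) -> i32 {
    p.x + p.y
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let sum = sum_point(p); // `p` is copied in, not moved
    assert_eq!(sum, 3);
    assert_eq!(p.x, 1); // `p` is still usable after the call
}
```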
<p>Also, the string interpolation Swift does is <em>nice</em>. That’s one thing I really wish Rust had. Its Python-style string-formatting macro is great, but being able to interpolate values (<code>"strings with \(variables)"</code> or even <code>"embedded expressions like \(2 + 4)"</code>) is very nice.</p>
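<p>For comparison, here’s what that looks like with Rust’s formatting macro (a sketch; the helper function is mine): the values can’t be embedded in the string itself, but any expression can go in the argument list.</p>

```rust
fn interpolate(noun: &str) -> String {
    // No in-string interpolation; `format!` fills `{}` placeholders
    // from its argument list instead.
    format!("strings with {}", noun)
}

fn main() {
    assert_eq!(interpolate("variables"), "strings with variables");
    // Arbitrary expressions work too; they just live outside the string:
    assert_eq!(format!("embedded expressions like {}", 2 + 4),
               "embedded expressions like 6");
}
```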
<p>Swift’s approach to strings in general seems well-thought-through and gives appropriate levels of attention to the details which make handling complex or non-Western languages much more manageable. As a typography geek, I appreciate this a great deal.</p>
<p>That said, since Swift’s strings <em>do</em> handle all those edge cases for Unicode, you lose some standard string access patterns and lose much (maybe all?) insight into the internal structure of the string. That may be good, and may be bad, depending on the circumstance. Like I said: trade-offs.</p>
<p>Actually, on reading further, the way Swift handles Unicode strings is pretty nice. It <em>does</em> give you insight into those, via specific methods for different representations. I particularly appreciate that it lets you deal with them as the standalone <code>String</code> type while also giving you direct access to the code points—and not just one encoding, but any of <abbr>UTF8</abbr>, <abbr>UTF16</abbr>, or <abbr>UTF32</abbr> (Unicode scalars). Trust Apple to pay close attention to text.</p>
<p>Rust’s strings are <em>good</em>, but not quite as sophisticated (presumably for simplicity around the memory mapping). All Rust <code>String</code> or <code>str</code> instances are composed of Unicode scalar values, encoded as <abbr>UTF8</abbr> byte sequences. It doesn’t have some of the convenience methods Swift does for getting any of the other representations. That said, I expect this should show up rarely if at all in my ordinary usage. Importantly, the fundamental storage is the same: both use scalars.</p>
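<p>A quick sketch of what that looks like in practice (the helper function is mine): Rust exposes the scalar values and the raw <abbr>UTF8</abbr> bytes as separate iterators over the same underlying storage.</p>

```rust
fn scalar_and_byte_counts(s: &str) -> (usize, usize) {
    // `chars()` yields Unicode scalar values; `bytes()` yields the raw UTF-8.
    (s.chars().count(), s.bytes().count())
}

fn main() {
    // 'é' is a single scalar value but two bytes in UTF-8:
    assert_eq!(scalar_and_byte_counts("héllo"), (5, 6));
}
```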
<p>This was the first section where it didn’t feel like Rust was just a clear overall “winner” over Swift. Some of the trade-offs between the language designs are more apparent here, and I do appreciate the “ergonomics” of Swift in a number of these things.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-iii.html"><strong>Previous:</strong> Operators, including overloading, and thoughts on brevity.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-v.html"><strong>Next:</strong> The value (and challenge) of learning languages in parallel.</a></li>
</ul>
Rust and Swift (iii)2015-09-07T11:55:00-04:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-07:/2015/rust-and-swift-iii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>I just hit operators in the Swift book. First question: are operators special syntax, or are they sugar for <code>protocol</code>s? (Every modern language I use or even have played with handles them as sugar for another language construct—Python, Ruby, Io, Elixir, and Rust, to name just a few ranging over a substantial variety of ages and styles.)</p>
<p>Oh. I did the requisite digging, and operators are functions (which is okay) defined in the <del>global namespace (<em>:sigh:</em>)</del> <code>Swift</code> module.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I say “okay” rather than good because the justification offered is that this is the only way to make the operators work as binary operators between existing instances of types. But that elides the fact that, if that’s the case, it is so because of other language design decisions. This seems like a perfect place to use a <code>protocol</code>, but perhaps (unlike Rust’s <code>trait</code>) they’re not sufficiently capable to handle this? That’s an open question; I have no idea about the answer.</p>
<p>Interestingly, Rust has several fewer operators than Swift, even apart from those mentioned in my <a href="http://v4.chriskrycho.com/2015/rust-and-swift-ii.html">previous post</a>. It drops the pre- and post-increment operators entirely (as does Python), since their results can always be accomplished in other ways with less potential for confusion. Swift keeps them, no doubt in part because most (Objective) C programmers are deeply familiar with them and with idioms associated with them.</p>
<p>I learned a few new things about Rust’s operators as well: the Boolean <code>||</code> and <code>&&</code> operators and its bitwise <code>|</code> and <code>&</code> operators differ not only in that the former are <em>short-circuit</em> operators and the latter are not. Obviously you can also do things like bit-wise flag operations with the latter, but the reference emphasizes the short-circuiting behavior. This makes perfect sense, but it wasn’t something I’d ever considered explicitly before.</p>
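<p>You can see the difference directly (a contrived sketch; the names are mine): with <code>||</code> the right-hand side never runs when the left side settles the answer, while with <code>|</code> it always does.</p>

```rust
fn tracked_true(called: &mut bool) -> bool {
    *called = true; // record that this side actually ran
    true
}

fn main() {
    let mut called = false;
    // `||` short-circuits: the left side is `true`, so the right never runs.
    let _ = true || tracked_true(&mut called);
    assert!(!called);
    // `|` does not short-circuit: both sides are evaluated.
    let _ = true | tracked_true(&mut called);
    assert!(called);
}
```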
<p>There is no ternary operator in Rust, because of how it handles the relationship between expressions and statements. Swift keeps it. That’s an interesting reflection of differences in design: Rust dropped it because <code>if</code> blocks are expressions, so it’s redundant, and they have had a goal of removing unnecessary features. (See the discussion on dropping the ternary operator—with an interesting aside from Brendan Eich on JavaScript—<a href="https://github.com/rust-lang/rust/issues/1698">here</a>). Note that this is not a criticism of Swift, just an observation, though I do really like Rust’s expression-driven approach.</p>
<p>The <code>??</code> “nil coalescing operator”, on the other hand, I actively dislike. This seems like shorthand for the sake of shorthand, partly necessitated by the existing drive toward shorthand with optional types in Swift. Sometimes brevity can lead to decreased clarity. Eliding too much, or subsuming it into shorthand, makes the language harder to hold in your head and requires you to slow down more for parsing each line.</p>
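<p>For what it’s worth, Rust handles the same case with an ordinary method rather than an operator, which I find easier to read (a sketch; the function is mine):</p>

```rust
fn greeting(maybe: Option<&str>) -> &str {
    // The explicit-method equivalent of Swift's `??`:
    maybe.unwrap_or("default")
}

fn main() {
    assert_eq!(greeting(None), "default");
    assert_eq!(greeting(Some("hi")), "hi");
}
```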
<p>Nothing surprising (or different) between the standard boolean operators in the two languages.</p>
<p>I wonder how many times the word “concise” (or synonyms of it) appear in the Swift book? It’s increasingly clear to me reading that brevity is one of the primary design goals. Maybe it’s just me, but that actually seems a little weird. Brevity is good so far as it goes, but <em>legibility</em> is much better.</p>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-ii.html"><strong>Previous:</strong> Basic types and the syntax around them.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-iv.html"><strong>Next:</strong> Language design tradeoffs, highlighted by string manipulation.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>See edit in discussion of functions and global namespace in <a href="http://v4.chriskrycho.com/2015/rust-and-swift-ii.html">part ii</a>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (ii)2015-09-06T10:20:00-04:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-06:/2015/rust-and-swift-ii.html<p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too …</i></p><p><i class="editorial">I am reading through the Swift book, and comparing it to Rust, which I have also been learning over the past month. As with the other posts in this series, these are off-the-cuff impressions, which may be inaccurate in various ways. I’d be happy to hear feedback! Note, too, that my preferences are just that: preferences. Your tastes may differ from mine. <a href="/rust-and-swift.html">(See all parts in the series.)</a></i></p>
<hr />
<p>At first blush, I find the extra syntax around optionals in Swift more confusing than helpful. I think this comes down to my preference for a more Python-like approach: “Explicit is better than implicit” and “There should be one– and preferably only one –obvious way to do it” both militate against the multiple different ways you can handle optional values in Swift. <code>Optional</code> types are created in one of two ways:</p>
<ul>
<li>with the <code>?</code> operator on a type definition, creating an explicitly wrapped type which must be checked in some way.</li>
<li>with the <code>!</code> operator on a type definition, creating an “implicitly unwrapped optional” by forcibly unwrapping it (and creating a runtime error if the optional is empty)</li>
</ul>
<p>After creating an optional, you can get at its contents by:</p>
<ul>
<li>using the <code>if let</code> or <code>while let</code> constructs to bind the optional value’s non-<code>nil</code> value for a block</li>
<li>using the <code>!</code> operator on a variable name, explicitly unwrapping it (and creating a runtime error if the optional is empty)</li>
</ul>
<p>By contrast, in Rust you always have to explicitly unwrap the item, using the <code>unwrap</code> method or pattern matching. There are no implicitly unwrapped types. Moreover, there is no special syntax around creating optional types in Rust: you just declare them with the <code>Option</code> type, which is an ordinary enum in the standard library. The “shortcut” behavior around error handling, <code>try!</code>, isn’t special syntax, but an application of another standard language construct (in this case, a macro).</p>
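<p>Here’s what those Rust forms look like side by side (a sketch; the names are mine):</p>

```rust
fn describe(maybe: Option<i32>) -> String {
    // Pattern matching forces both cases to be handled explicitly.
    match maybe {
        Some(n) => format!("got {}", n),
        None => String::from("got nothing"),
    }
}

fn main() {
    assert_eq!(describe(Some(42)), "got 42");
    assert_eq!(describe(None), "got nothing");

    let maybe_name: Option<&str> = Some("Rust");
    // `if let` binds the inner value for the block, much like Swift's `if let`.
    if let Some(name) = maybe_name {
        assert_eq!(name.len(), 4);
    }
    // `unwrap` is the explicit opt-in to a runtime panic on `None`.
    assert_eq!(maybe_name.unwrap(), "Rust");
}
```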
<p>The discussion of <code>assert</code> in the Swift book re-raises the question about the global namespace:</p>
<blockquote>
<p>“You write an assertion by calling the global <code>assert(_:_:)</code> function.”</p>
</blockquote>
<p>This continues to suggest strongly that Swift does in fact have a true global namespace, <em>not</em> an automatically-imported prelude. That can make a big difference for applications in certain spaces (e.g. systems programming), when you might have good reason to want to replace the standard library’s approach with a different one. (See Rust’s <a href="https://doc.rust-lang.org/book/no-stdlib.html"><code>#[no_std]</code></a> docs and the <a href="https://github.com/rust-lang/rfcs/blob/master/text/1184-stabilize-no_std.md">related RFC</a>.)</p>
<p><strong>Edit:</strong> “strongly suggests” or no, I have now been <a href="https://twitter.com/jckarter/status/708765262309228544" title="Tweet by one of the Swift developers">reliably informed</a> that I was mistaken—and am happy to have been wrong here. As in Haskell, these functions are implicitly imported and belong to the <code>Swift</code> module.</p>
<p>In Rust, <code>assert!</code> is a macro, not a function, which is an interesting but perhaps not <em>especially</em> important distinction in this particular case. (It might be, though; I’d have to see the implementation of each to see how they play out differently.)</p>
<p>In any case, this also highlights another large difference between the two: testing is <a href="https://doc.rust-lang.org/stable/book/testing.html">front and center</a> in Rust, and barely receives a mention so far in the Swift book (and isn’t in the table of contents). Having language-level support for testing is a big deal.</p>
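<p>To show what “front and center” means in practice, here’s a complete sketch (the function is mine): the test attribute is part of the language itself, with no external framework required.</p>

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Any function marked #[test] is compiled and run by `cargo test`
// (or `rustc --test`); in a normal build it is simply stripped out.
#[test]
fn addition_works() {
    assert_eq!(add(2, 2), 4);
}

fn main() {
    assert_eq!(add(40, 2), 42);
}
```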
<p>Language tour and first chapter of the language guide down, my sense is that Swift is a substantially better language than C or C++ (and presumably than Objective C, but since I don’t know that language I can’t speak to it) for app design, but that Rust is a better language yet. Both far more modern than their predecessors, but they approach the same problems in surprisingly different ways, relatively similar syntax notwithstanding. So far, I like the Rust approach better.</p>
<p>In particular, more syntax is not my preferred way to tackle these things. Providing good language constructs and primitives on which to build seems better in <em>many</em> ways:</p>
<ul>
<li>It substantially reduces the cognitive load for the developer, by keeping the number of constructs small and simply varying how they are applied.</li>
<li>It increases the quality of those primitives, because it forces the language designers to make sure they actually address the full problem space.</li>
<li>It lets developers approach the same problem in ways the language design team may not have anticipated, and over time the community may find shared conventions that improve on the <code>std</code> approach, and nothing has to change in the language spec (or the compiler!) to adopt those changes.</li>
<li>In general, then, it makes change much easier to manage, and change can be community-driven rather than requiring the language design team to manage it.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></li>
</ul>
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-i.html"><strong>Previous:</strong> Thoughts after reading the introduction to the Swift book.</a></li>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-iii.html"><strong>Next:</strong> Operators, including overloading, and thoughts on brevity.</a></li>
</ul>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>This may of course be intentional on Apple’s part with Swift. Maintaining tight control over its tooling is very typical of modern Apple.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Rust and Swift (i)2015-09-04T22:59:00-04:002019-06-22T10:55:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-09-04:/2015/rust-and-swift-i.html<p><i class=editorial>I started writing these responses in a Slack channel of developers I participate in as I worked through the <a href="https://developer.apple.com/swift/">Swift</a> <a href="https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/">book</a>. I realized after a bit that it would make a better blog post than chat room content, so here we are. This is all entirely off-the-cuff: me just thinking …</i></p><p><i class=editorial>I started writing these responses in a Slack channel of developers I participate in as I worked through the <a href="https://developer.apple.com/swift/">Swift</a> <a href="https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/">book</a>. I realized after a bit that it would make a better blog post than chat room content, so here we are. This is all entirely off-the-cuff: me just thinking out loud as I read; this is by no means expert opinion.</i></p>
<p><i class=editorial>I later turned this into the first part of a whole <a href="/rust-and-swift.html">series</a> comparing Rust and Swift!</i></p>
<hr />
<ul>
<li><p><code>..<</code> – seriously?</p>
<p>That has to be one of the most annoying operators I’ve ever seen. It ends up with cognitive noise because <code><name</code> initially processes as “starting a generic” and you have to re-parse it visually and mentally.</p></li>
<li><p>After the first chapter of the Swift book, my impression is “a poor man’s Rust”; my gut feel based on that first pass and everything I’ve seen and read about Swift over the past two years is that it’s roughly what you would get if you took Rust’s syntax and replaced Rust’s hard safety goals with the aim of mapping to ObjC semantics. (To be fair to Apple, that interoperability was probably necessary.)</p></li>
<li><p>An example that jumps out at me as immediately illustrative of the difference in approach the languages take is the way you pass structures by reference vs. copy. In Swift, that’s done via two completely distinct language constructs, <code>struct</code>s and <code>class</code>es respectively.</p>
<p>In Rust, there is just the <code>struct</code> type to handle both of those. They’re immutable unless you declare them with <code>mut</code>, and you can pass them via copy simply by implementing the <code>Copy</code> <code>trait</code> (which seems roughly analogous to Swift’s <code>protocol</code>, but I’ve not yet dug deeply enough to see how they differ). Those things aren’t baked into the language, but use simpler language building blocks to define behavior into the library.</p></li>
<li><p>I saw someone do a write up a while back arguing that Go isn’t a <em>bad</em> language, it just isn’t a <em>good</em> language. My first impression of Swift, after having spent the last month with Rust, is very much along those lines.</p></li>
<li><p>Huh. Here’s something that I appreciate about Rust, Haskell, and others now that I didn’t before: there’s a difference between implicitly/automatically importing a prelude or a given set of standard library functions, and having actually global functions. Does Swift actually have functions like <code>print</code> in a global namespace, as the book seems to imply, or are they being imported automatically <em>a la</em> Rust/Haskell/etc.?</p>
<p><strong>Edit:</strong> it appears Swift does likewise, but that you can’t access the relevant module directly. Which is halfway there.</p></li>
<li><p>Hmm. Why have <code>Double</code> <em>and</em> <code>Float</code>—just for ObjC interop, I guess?</p>
<p><strong>Edit:</strong> follow-up from a conversation with a friend: it’s because you have 32- and 64-bit architectures out there; sometimes you don’t want 64 bits of floating point precision for that reason. Note that Rust <em>also</em> has this distinction; you can declare things as <code>f32</code> or <code>f64</code>.</p></li>
<li><p>Extending the above note on <code>class</code>es and <code>struct</code>s and <code>protocol</code>s vs. Rust’s approach: the same thing is true about <code>extension</code>, which is a distinct concept from implementing a <code>protocol</code>; again, in Rust these are both just handled with a single language construct, <code>impl</code>. That’s not because <code>impl</code> is overloaded, but rather because the underlying language machinery is the same for the two things.</p></li>
<li><p>(I’ve a feeling learning Swift is going to turn me into even more of a Rust fanboy.)</p></li>
<li><p>Reading the two books in close sequence like this is proving really productive mentally for thinking about how the two handle the same issues. I’ve never done anything quite like this before, and it’s fascinating.</p></li>
<li><p>I have an increased appreciation for Rust’s use of semi-colons to turn expressions into statements, and thereby to distinguish clearly between the two (among other things, allowing for implicit return of anything that’s an expression).</p></li>
<li><p>Another interesting comparison: Rust’s <code>match</code> and Swift’s <code>switch</code> and <code>case</code> fill the same role of pattern matching. I’m curious to see how they differ. Does Swift do matching on arbitrary expressions?</p>
<p>Also, I see where the syntax choices came from in both, and while I slightly prefer Rust’s, I think both make reasonably good sense; Swift’s will understandably be more familiar to C and ObjC programmers, and that’s a perfectly defensible approach. Seen that way, it is expanding on the C-style construct (even if it’s actually doing something substantially more sophisticated than that under the hood by being a form of actual pattern matching).</p></li>
</ul>
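<p>To make the <code>impl</code> point above concrete, here’s a minimal sketch (the trait and type are mine, purely illustrative): the same construct covers both inherent methods (roughly Swift’s <code>extension</code>) and trait implementation (roughly protocol conformance).</p>

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct Person {
    name: String,
}

// Inherent methods: roughly what Swift does with `extension`.
impl Person {
    fn new(name: &str) -> Person {
        Person { name: name.to_string() }
    }
}

// Trait implementation: roughly Swift's protocol conformance.
impl Greet for Person {
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name)
    }
}

fn main() {
    assert_eq!(Person::new("Chris").greet(), "Hello, Chris!");
}
```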
<hr />
<ul>
<li><a href="http://v4.chriskrycho.com/2015/rust-and-swift-ii.html"><strong>Next:</strong> Basic types and the syntax around them.</a></li>
</ul>
On Editing Podcasts2015-08-24T20:16:00-04:002015-08-28T19:51:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-08-24:/2015/on-editing-podcasts.htmlMost podcasts are more like blog posts than magazine articles. (That doesn't mean you shouldn't edit them, though!)
<p>Last week, Alan Jacobs posted <a href="http://text-patterns.thenewatlantis.com/2015/08/podcasts.html">a few thoughts</a> on the overall quality of podcasts. While he’s <a href="http://text-patterns.thenewatlantis.com/2015/08/podcasts-redux.html">since acknowledged</a> that part of his challenge with podcasts is that his bar is extremely high, I think his original piece bears quoting and responding to briefly, including a few thoughts about how Stephen and I handle <a href="http://www.winningslowly.org">Winning Slowly</a>.</p>
<p>From his piece:</p>
<blockquote>
<p>Podcasts, overall, are</p>
<ol type="1">
<li><p>People struggling to articulate for you stuff you could find out by looking it up on Wikipedia (e.g. In Our Time);</p></li>
<li><p>People using old-timey radio tricks to fool you into thinking that a boring and inconsequential story is fascinating (e.g. Serial);</p></li>
<li><p>People leveraging their celebrity in a given field as permission to ramble incoherently about whatever happens to come to their minds (e.g. The Talk Show); or</p></li>
<li><p>People using pointless audio-production tricks to make a pedestrian story seem cutting-edge (e.g. Radiolab).</p></li>
</ol>
</blockquote>
<p>I actually happen to basically agree with those critiques. However, one category he left out is: <em>people podcasting the way people blog</em>. And this is where many of the most interesting podcasts I listen to come in. It’s also basically where Winning Slowly fits: you can think of our show like an audio version of a blog post. It’s not as carefully considered or edited as a long-form magazine piece (or, in its respective medium, a professionally produced radio show). But like blog posts, the fact that it’s a bit more off the cuff and that it’s <em>not</em> the incredibly tight work that you find in a magazine can actually be attractive at times. Many of my favorite podcasts are very conversational and not heavily produced.</p>
<p>But—and here I think Jacobs is absolutely correct—all of the shows I really enjoy make a point to edit their shows. They clean up the audio from artifacts, they cut segments that were off topic, they make sure the levels are good between the different members of the podcast, and so on. And while you don’t have to do those things to have a podcast, any more than you need to edit the things you write to have a blog, you do need to do them if you want to have a <em>good</em> show. Sadly, this is where a number of shows I otherwise might enjoy show themselves to the door.</p>
<p>There is a reason Stephen and I spent a whole <a href="http://www.winningslowly.org/season-0.html">“beta” season</a> of <a href="http://www.winningslowly.org">Winning Slowly</a><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> working not only on what we wanted the show to be about, but finding its voice and tone, the structure of the episodes, and the quality of our audio. We wrestled with the audio output from mediocre microphones and adopted seemingly silly practices like putting blankets over our heads and microphones and laptops while recording so that we can get better sound spaces. We have taken the time to learn about compression and limiting and other audio editing techniques, and work hard to get the mix between our intro and outro music and our own voices correct. And we cut things mercilessly.</p>
<p>For example, here is the blooper reel from <a href="http://www.winningslowly.org/3.05/">3.05</a>, which consists of only the <em>funny</em> parts of what I cut from the show (there was probably as much again that I just removed and didn’t include):</p>
<audio class="media-embed" title="3.05 Bloopers" controls preload="metadata">
<source src="http://www.podtrac.com/pts/redirect.m4a/cdn.winningslowly.org/3.05-bloopers.m4a">
<source src="http://www.podtrac.com/pts/redirect.mp3/cdn.winningslowly.org/3.05-bloopers.mp3">
</audio>
<p>That doesn’t begin to touch all the “umms” and long pauses and overly heavy breathing and do-overs we cut out (though, because this was a particularly rough episode, it does give you an idea). The result, as I think most of our listeners would agree, is a show that’s pretty tight as far as the audio goes.</p>
<p>In terms of content, different shows will have a different feel, of course. Some will require more planning. <a href="http://www.newrustacean.com/">New Rustacean</a>, a new show on learning Rust I’m hoping to launch later this week or early next week, requires a <em>lot</em> of planning. <a href="http://www.sap-py.com/">Sap.py</a>, the fun little show my wife and I are about to launch, about her adventures in learning Python, requires basically <em>no</em> planning. Winning Slowly doesn’t require a lot of formal planning, but it does require Stephen and me to keep a good eye on ongoing stories in our fields of technology, religion, ethics, and art, and to discuss big-picture ideas regularly and actively. Some episodes, we outline carefully (like the one we recorded today, which will come out next Tuesday). For others, we can basically just wing it (like the one we recorded a week ago and which comes out tomorrow). But if our podcast is good, and I really do think it is, it is because we take the time to work at making it good. Just like you have to do on a blog, or really anything else in life.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>13 published episodes, and one we dropped entirely!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>One big difference between a podcast and a blog is that it actually takes a lot <em>more</em> work to make a good podcast than a good blog post. Audio editing is much more involved than editing writing, and speaking intelligently for any length of time—whether off the cuff, with a detailed outline, or as an interviewer—is much harder to get right than writing, where you can polish to your heart’s content.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
High- and Low-Level Programming Languages2015-08-07T20:00:00-04:002015-08-07T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-08-07:/2015/high-and-low-level-programming-languages.html<p>It occurred to me while listening to <a href="https://edwinb.wordpress.com">Edwin Brady</a> talk about <a href="http://www.idris-lang.org">Idris</a> on the <a href="http://typetheorypodcast.com">Type Theory Podcast</a>,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> having just spent a few weeks starting to learn <a href="https://www.rust-lang.org">Rust</a>: “low-level” has at least two meanings in software. One is whether something has manual memory management or is garbage collected, reference counted …</p><p>It occurred to me while listening to <a href="https://edwinb.wordpress.com">Edwin Brady</a> talk about <a href="http://www.idris-lang.org">Idris</a> on the <a href="http://typetheorypodcast.com">Type Theory Podcast</a>,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> having just spent a few weeks starting to learn <a href="https://www.rust-lang.org">Rust</a>: “low-level” has at least two meanings in software. One is whether something has manual memory management or is garbage collected, reference counted, or otherwise manages memory itself. This is what people often mean when they talk about C, C++, etc. as being “low-level” and languages like Python or Ruby or C♯ being high-level.</p>
<p>But then you toss in a language like <a href="https://www.rust-lang.org">Rust</a>, and things start to get a little more complicated. Rust can do the same kind of direct memory management that makes C or C++ a good language for things like writing operating system kernels. [<a href="https://github.com/torvalds/linux">1</a>,<a href="https://en.wikipedia.org/wiki/Architecture_of_Windows_NT">2</a>,<a href="http://www.opensource.apple.com/source/xnu/xnu-2782.10.72/">3</a>] But it is also memory-safe, at least in ordinary usage. Like C♯, you have to be explicit about any unsafe code, with the <code>unsafe</code> keyword on any blocks that do memory management that isn’t safe. And the vast majority of Rust code <em>is</em> safe.</p>
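<p>The <code>unsafe</code> boundary is easy to see in a tiny sketch (the function is mine): creating a raw pointer is safe, but dereferencing it is not, and the compiler makes you say so explicitly.</p>

```rust
fn read_raw(x: &u32) -> u32 {
    let p = x as *const u32; // creating a raw pointer is safe...
    // ...but dereferencing one requires an explicit `unsafe` block.
    unsafe { *p }
}

fn main() {
    assert_eq!(read_raw(&42), 42);
}
```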
<p>More than that, though, Rust <em>feels</em> like a high-level language. It gives you higher-order functions, generics, trait-based composition of types, hygienic macros, and the implementation of many essential parts of the language in the library. If you need to patch something, or extend something, you can do that in a straightforward way. In short, it gives you lots of good abstractions like you would expect in a high-level language.</p>
<p>Rust is low-level in that you can write (and people are writing) systems-level programs in it. It is high-level in that it lets you express things in ways normally associated with languages like Haskell or OCaml or Python or Ruby. To put it simply: it’s <em>low-level</em> in its ability to address the computer, and <em>high-level</em> in the abstractions it hands to a programmer. That’s a powerful combination, and I hope more languages embrace it in the years to come.</p>
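<p>To make the contrast concrete, here is a minimal sketch (mine, not from the original post): a generic, trait-bounded function of the sort you would expect from a high-level language, next to an explicitly <code>unsafe</code> block doing raw-pointer work.</p>

```rust
use std::fmt::Display;

// High-level: generics with trait bounds, reminiscent of Haskell or OCaml.
fn describe<T: Display>(value: T) -> String {
    format!("value: {}", value)
}

fn main() {
    assert_eq!(describe(42), "value: 42");

    // Low-level: dereferencing a raw pointer is only allowed inside an
    // explicit `unsafe` block; everything outside stays compiler-checked.
    let x = 10u32;
    let p = &x as *const u32;
    let y = unsafe { *p };
    assert_eq!(y, 10);
}
```

<p>Both halves live comfortably in the same file, which is exactly the combination the post describes.</p>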
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Yes, I know that’s insanely nerdy. What did you expect?<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Reeder 3 for Mac Beta2015-07-30T09:26:00-04:002015-07-30T09:26:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-07-30:/2015/reeder-3-for-mac-beta.html<p>Ooh, look! A beta for <a href="http://reederapp.com/beta3/">Reeder 3</a>! Shiny!</p>
<p>Ooh, look! A beta for <a href="http://reederapp.com/beta3/">Reeder 3</a>! Shiny!</p>
SMuFL and MusicXML to W3C2015-07-28T12:29:00-04:002015-07-28T12:29:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-07-28:/2015/smufl-and-musicxml-to-w3c.html<p>Another one in the music industry—but in this case, companies taking the long view and advancing the <a href="http://www.sibeliusblog.com/news/makemusic-and-steinberg-transfer-development-of-musicxml-and-smufl-to-web-community-group/">good of the whole community</a>, rather than just their own bottom line. (Spreadbury, the guy behind SMuFL, was one of the team laid off in the <a href="%7Bfilename%7Dsibelius-8.md">aforementioned</a> layoff from the Sibelius team …</p><p>Another one in the music industry—but in this case, companies taking the long view and advancing the <a href="http://www.sibeliusblog.com/news/makemusic-and-steinberg-transfer-development-of-musicxml-and-smufl-to-web-community-group/">good of the whole community</a>, rather than just their own bottom line. (Spreadbury, the guy behind SMuFL, was one of the team laid off in the <a href="%7Bfilename%7Dsibelius-8.md">aforementioned</a> layoff from the Sibelius team, and now heads the product development for a new notation software tool from Steinberg.)</p>
Sibelius 82015-07-28T12:25:00-04:002015-07-28T12:25:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-07-28:/2015/sibelius-8.html<p>Avid: <a href="http://www.sibeliusblog.com/news/sibelius-8-is-here/">charging Sibelius users more money than ever for less value than ever</a>, after laying off their dev team a couple years ago just to maximize profits.</p>
<p>This is <em>not</em> <a href="http://www.winningslowly.org/">Winning Slowly</a> material here, folks. They lost me (and many other) customers along the way, and they’re headed further …</p><p>Avid: <a href="http://www.sibeliusblog.com/news/sibelius-8-is-here/">charging Sibelius users more money than ever for less value than ever</a>, after laying off their dev team a couple years ago just to maximize profits.</p>
<p>This is <em>not</em> <a href="http://www.winningslowly.org/">Winning Slowly</a> material here, folks. They lost me (and many other) customers along the way, and they’re headed further down that road here.</p>
<p>Subscription models for software can be valuable and reasonable—but the providers have to justify them with a product to match. Avid isn’t, and hasn’t been. I’ve no doubt they’re continuing to profit in the short term, but in the long term this will erode their market position and waste an amazing product. Greed destroys good things.</p>
Academic Markdown and Citations2015-07-26T13:50:00-04:002015-07-26T20:07:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-07-26:/2015/academic-markdown-and-citations.htmlManaging citations is painful—especially in plain text. But with a little setup, Pandoc and BibTeX can take a lot of the pain out of it, whether for Word documents or a static site generator.
<p>Much of my past few weeks were taken up with study for and writing and editing <a href="http://v4.chriskrycho.com/2015/not-exactly-a-millennium.html">a paper</a> for one of my classes at Southeastern. I’ve been writing all of my papers in Markdown ever since I got here, and haven’t regretted any part of that… except that managing references and footnotes has been painful at times.</p>
<p>Footnotes in Markdown look like this:</p>
<pre class="markdown"><code>Here is some text.[^fn]

[^fn]: And the footnote!</code></pre>
<p>This poses no problems at all for normal footnotes. Academic writing introduces a few wrinkles, though, which means that this has always been the main pain point of my use of Markdown for writing papers.</p>
<p>Many academic citation styles (including the Chicago Manual of Style, on which our seminary’s <a href="http://www.press.uchicago.edu/books/turabian/turabian_citationguide.html">style guide</a> is based) tend to have a long version of the footnote appear first, followed by short versions later. Nearly <em>all</em> academic citations styles make free use of the <a href="https://en.wikipedia.org/wiki/Ibid.">“ibid.”</a> abbreviation for repeated references to save space, time, and energy. Here is how that might look in manually-written footnotes, citing the very paper in which I sorted this all out:</p>
<pre class="markdown"><code>Some text in which I cite an author.[^fn1]
More text. Another citation.[^fn2]
What is this? Yet _another_ citation?[^fn3]

[^fn1]: So Chris Krycho, "Not Exactly a Millennium," chriskrycho.com, July 22,
    2015, http://v4.chriskrycho.com/2015/not-exactly-a-millennium.html
    (accessed July 25, 2015), ¶6.
[^fn2]: Contra Krycho, ¶15, who has everything _quite_ wrong.
[^fn3]: ibid.</code></pre>
<p>This seems straightforward enough, though it is a bit of work to get the format right for each different kind of citation (articles, books, ebooks, electronic references to articles…). Things <em>really</em> get complicated in the editing process, though. For example, what if I needed to flip the order of some of these notes because it became clear that the paragraphs needed to move around? This happens <em>frequently</em> during the editorial process. It becomes particularly painful when dealing with the “ibid.”-type references, because if I insert a new reference between two existing references, I have to go back in and manually rewrite all of that reference content myself.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>Enter Pandoc and <span class="tex">BibT<span class="texE">E</span>X</span>.</p>
<section id="managing-citations" class="level2">
<h2>Managing Citations</h2>
<p>The idea of plain-text solutions to academic writing is not especially new; only the application of Markdown to it is—and that, only relatively. People have been doing this, and <a href="http://kieranhealy.org/blog/archives/2014/01/23/plain-text/">documenting their approaches</a>, for quite a while. Moreover, tools for managing references and citations have existed for quite some time as well; the entire <a href="http://www.latex-project.org">L<span class="texA">A</span>T<span class="texE">E</span>X</a> toolchain is largely driven by the concerns of academic publishing, and as such there are tools in the <span class="tex">L<span class="texA">A</span>T<span class="texE">E</span>X</span> ecosystem which address many of these problems.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>One such is <span class="tex">BibT<span class="texE">E</span>X</span>, and the later (more capable) <span class="tex">BibL<span class="texA">A</span>T<span class="texE">E</span>X</span>: tools for managing bibliographies in <span class="tex">L<span class="texA">A</span>T<span class="texE">E</span>X</span> documents. The <span class="tex">BibT<span class="texE">E</span>X</span>/<span class="tex">BibL<span class="texA">A</span>T<span class="texE">E</span>X</span> approach to managing citations in a document is the use of the <code>\cite</code> command, with the use of “keys” which map to specific documents: <code>\cite{krycho:2015aa}</code>, for example.</p>
<p>This is not Markdown, of course. But other folks who have an interest in Markdown and academic writing have put their minds to the problem already. Folks such as John MacFarlane, the originator and lead developer of <a href="http://pandoc.org">Pandoc</a>, perhaps the single most capable text-conversion tool in existence. As it turns out, Pandoc Markdown supports a <a href="http://pandoc.org/README.html#citations">citation extension</a> to the basic markup. It’s just a variant on the <span class="tex">BibT<span class="texE">E</span>X</span> citation style that feels more at home in Markdown: a pair of brackets and an <code>@</code>, plus the citation key, like <code>[@krycho]</code>. Moreover, Pandoc knows how to use <span class="tex">BibT<span class="texE">E</span>X</span> libraries, as well as many others, and <a href="http://citationstyles.org">Citation Style Languages</a> (<abbr>CSL</abbr>s) to generate markup in <em>exactly</em> the format needed for any given citation style.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>Instead of writing out all those citations details by hand, then, I can just format my footnotes like this (assuming the citekey I had set up for the article was <code>krycho:revelation:2015</code>):</p>
<pre class="markdown"><code>Some text in which I cite an author.[^fn1]
More text. Another citation.[^fn2]
What is this? Yet _another_ citation?[^fn3]

[^fn1]: [@krycho:revelation:2015], ¶6.
[^fn2]: Contra [@krycho:revelation:2015], ¶15, who has everything _quite_ wrong.
[^fn3]: [@krycho:revelation:2015].</code></pre>
<p>This is much simpler and, importantly, has the exact same form for each citation. Pandoc will take care of making sure that the first reference is in the long form, later references are in the short form, and repeated references are in the “ibid.” form as appropriate. It even renders a properly sorted and structured Works Cited section.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<p>The slightly complex command I used to generate a Word document from a Markdown file with citations (using my own <span class="tex">BibT<span class="texE">E</span>X</span> library and the Chicago Manual of Style <abbr>CSL</abbr>) on the command line is:<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a></p>
<pre class="bash"><code>$ pandoc revelation.md --smart --standalone \
    --bibliography /Users/chris/Documents/writing/library.bib \
    --csl=/Users/chris/Documents/writing/chicago.csl -o revelation.docx</code></pre>
<p>To see an extended sample of this kind of usage in practice, take a look at the <a href="http://v4.chriskrycho.com/2015/not-exactly-a-millennium.txt">Markdown source</a> for the paper I wrote last week, using exactly this approach. Every footnote that references a specific source simply has a cite key of this variety. The header metadata includes a path to the bibliography file and a <abbr>CSL</abbr>. (These could be configured globally, as well, but I chose to specify them on a per-file basis so that if I want or need to use <em>different</em> styles or a separate library for another file at a later time, I can do so with a minimum of fuss. More on this below.)</p>
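<p>Concretely, that per-file header metadata can look something like this (a sketch: the paths and the file’s body are illustrative, but <code>bibliography</code> and <code>csl</code> are the metadata fields Pandoc actually reads):</p>

```markdown
---
title: Not Exactly a Millennium
bibliography: /Users/chris/Documents/writing/library.bib
csl: /Users/chris/Documents/writing/chicago.csl
---

Some text in which I cite an author.[^fn1]

[^fn1]: [@krycho:revelation:2015], ¶6.
```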
<p><a href="/downloads/revelation.docx">Here</a> is the rendered result. You can see that it automatically generated everything right down to the “ibid.”-style footnotes. I made a few, fairly minimal tweaks (replacing the search <abbr>URL</abbr> with an <abbr>ATLA</abbr> database catalog reference and inserting a section break before the Works Cited list), and turned the paper in—confident, for the first time since I started seminary, that all of the references were in the right order and the right format. With carefully formatted reference documents (with their own style sets),<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a> I was able to generate an actually <em>nice</em> <abbr><a href="/downloads/revelation-pretty.pdf">PDF</a></abbr> version of the paper from another Word document, as well.<a href="#fn7" class="footnote-ref" id="fnref7" role="doc-noteref"><sup>7</sup></a></p>
<p>And, better yet, you don’t even have to put citations in footnotes. As <a href="https://twitter.com/anjdunning">@anjdunning</a> pointed out in a <a href="https://twitter.com/anjdunning/status/625415216575197184">tweet</a> response to the original version of this post:</p>
<blockquote>
<p><a href="https://www.twitter.com/chriskrycho">@chriskrycho</a> Don’t put citekeys in a footnote: write everything as inline citations and it will also generate notes when asked by CSL def. <a href="https://twitter.com/anjdunning/status/625415216575197184">∞</a> July 26, 2015 17:19</p>
</blockquote>
<p>In my standard example from above, then, you could simply do this:</p>
<pre class="markdown"><code>Some text in which I cite an author.[@krycho:revelation:2015, ¶6]
More text. Another citation.[Contra @krycho:revelation:2015, ¶15, who has
everything *quite* wrong.]
What is this? Yet _another_ citation?[@krycho:revelation:2015]</code></pre>
<p>This will generate the same markup for my purposes here; and as <a href="https://twitter.com/anjdunning">@anjdunning</a> noted, it goes one step further and does what’s appropriate for the <abbr>CSL</abbr>. This might be handy if, for example, you wanted to use the Chicago notes-bibliography style in one format, but switch to a simpler parenthetical citation style for a different medium—or even if you had a paper to submit to different journals with different standards. Having the citations inline thus has many advantages.</p>
<p>Now, there are still times when you might want to split those out into distinct footnotes, of course. That second one is a good candidate, at least for the way I tend to structure my plain-text source. I find it useful in the case of <em>actual</em> footnote content—i.e. text that I’m intentionally leaving aside from the main text, even with reference to other authors—to split it out from the main flow of the paragraph, so that someone reading the plain text source gets a similar effect to someone reading the web or Word or <abbr>PDF</abbr> versions, with the text removed from the flow of thought. In any case, it’s quite nice that Pandoc has the power and flexibility such that you don’t <em>have</em> to.</p>
<p>Finally, you don’t actually <em>need</em> the brackets around the citekey, depending on how you’re using the reference. If you wanted to cite the relevant author inline, you can—and it will properly display both the inline name and a reference (footnote, parenthetical, etc.) in line with the <abbr>CSL</abbr> you’ve chosen. If I were going to quote myself in a paper, I would do something like this:</p>
<pre class="markdown"><code>As @krycho:revelation:2015 comments:

> This was a hard paper to write.</code></pre>
<p>This is <em>extremely</em> powerful, and while I didn’t take advantage of it in my first paper using these tools, you can bet I will be in every future paper I write.</p>
<section id="all-those-references" class="level3">
<h3>All those references</h3>
<p>Of course, as is probably apparent, managing a <span class="tex">BibT<span class="texE">E</span>X</span> library by hand is no joke. Entries tend to look like this:</p>
<pre class="tex"><code>@book{beale:revelation:2015,
    Date-Added = {2015-07-20 21:16:02 +0000},
    Date-Modified = {2015-07-20 21:21:05 +0000},
    Editor = {G. K. Beale and David H. Campbell},
    Publisher = {William B. Eerdmans Publishing Company},
    Title = {Revelation: A Shorter Commentary},
    Year = {2015}}</code></pre>
<p>While there is a lot of utility in having that data available in text, on disk, no one wants to <em>edit</em> that by hand.<a href="#fn8" class="footnote-ref" id="fnref8" role="doc-noteref"><sup>8</sup></a> Gladly, editing it by hand is not necessary. For this project, I used the freely available <a href="http://bibdesk.sourceforge.net">BibDesk</a> tool, which is a workable (albeit not very pretty and not <em>very</em> capable) manager for <span class="tex">BibT<span class="texE">E</span>X</span>:</p>
<figure>
<img src="//cdn.chriskrycho.com/images/bibdesk.png" title="Not very pretty, but it does work" alt="BibDesk – open to the library for my Revelation paper" /><figcaption>BibDesk – open to the library for my Revelation paper</figcaption>
</figure>
<p>Once I filled in the details for each item and set a citekey for it, I was ready to go: BibDesk just stores the files in a standard <code>.bib</code> file on the disk, which I specified per the Pandoc command above.</p>
<p>BibDesk gets the job done alright, but only alright. Using a citation and reference management tool was a big win, though, and I fully intend to use one for every remaining project while in seminary—and, quite possibly, for other projects as well. Whether that tool is BibDesk or something else is a different matter entirely. (More on this below.)</p>
</section>
</section>
<section id="to-the-web" class="level2">
<h2>To the web!</h2>
<p>I wanted something more out of this process, if I could get it. One of the reasons I use plain text as a source is because from it, I can generate Word documents, <abbr>PDF</abbr>s, and <em>this website</em> with equal ease. However, Python Markdown knows nothing of <span class="tex">BibT<span class="texE">E</span>X</span> or citekeys, to my knowledge—and since I render everything for school with Pandoc, I have long wanted to configure <a href="http://docs.getpelican.com/en/3.6.0/">Pelican</a> to use Pandoc as its Markdown engine instead of Python Markdown anyway.</p>
<p>As it happens, I actually set this up about a month ago. The process was pretty simple:<a href="#fn9" class="footnote-ref" id="fnref9" role="doc-noteref"><sup>9</sup></a></p>
<ol type="1">
<li>I installed the <a href="https://github.com/jstvz/pelican-pandoc-reader">pandoc-reader</a> Pelican extension.</li>
<li>I set the plugin path in my Pelican configuration file.</li>
<li>I specified the arguments to Pelican I wanted to use.</li>
</ol>
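<p>For reference, the relevant bits of the configuration look roughly like this (a sketch: the setting names follow the pandoc-reader plugin’s conventions, and the paths are illustrative, not my actual setup):</p>

```python
# pelicanconf.py (excerpt) -- hypothetical paths; setting names are the
# plugin's, not Pelican core's.

# Where Pelican should look for plugins, and which ones to load.
PLUGIN_PATHS = ['plugins']
PLUGINS = ['pandoc_reader']

# Extra arguments handed to Pandoc for every Markdown file.
PANDOC_ARGS = [
    '--smart',
    '--filter', 'pandoc-citeproc',
]
```

<p>With that in place, any post whose header metadata names a bibliography gets its citations processed automatically.</p>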
<p>The only additional tweak necessary to get citation support was calling it with the <code>--filter pandoc-citeproc</code> argument, which lets it process any bibliography data supplied in the header metadata for the files. Calling Pandoc with <code>--bibliography <path to bibliography></code> (as in my example above) is a <a href="http://pandoc.org/README.html#citation-rendering">shortcut</a> for calling it with <code>--metadata bibliography=<path to bibliography></code> <em>and</em> the <code>--filter pandoc-citeproc</code> argument. I could just supply the bibliography directly in the call from Pelican, but that would limit me to using a single bibliography file for <em>all</em> of my posts—something I’d rather avoid, since it might make sense to build up bibliographies around specific subjects, or even to have smaller bibliographies associated with each project (exported from the main bibliography), which could then be freely available along with the contents of the paper itself.<a href="#fn10" class="footnote-ref" id="fnref10" role="doc-noteref"><sup>10</sup></a> (On this idea, see a bit more below under <strong>The Future</strong>.)</p>
<p>One word of warning: Pandoc is much slower to generate <abbr>HTML</abbr> with <code>--filter pandoc-citeproc</code> than <em>without</em> the filter, and the larger your site, the more you will feel this. (The time to generate the site from scratch jumped from about 10s to about 30s for me, with 270 articles, 17 drafts, 2 pages, and 1 hidden page, according to Pelican.) Pandoc has to process <em>every</em> article to check for citations, and that’s no small task. However, if you have Pelican’s content caching turned on, this is a one-time event. After that, it will only be processing any new content with it; total generation time is back down where it was before for me: the effort is all in generating the large indexes I use to display the content for the landing pages and for category and tag archives.</p>
<p>And the result: that same paper, rendered to <abbr>HTML</abbr> <a href="http://v4.chriskrycho.com/2015/not-exactly-a-millennium.html">on my website</a>, with citations and works cited, generated automatically and beautifully.</p>
<section id="other-site-generators" class="level3">
<h3>Other site generators</h3>
<p>I don’t know the situation around using Pandoc itself in other generators, including Jekyll—I simply haven’t looked. I do know, however, that there <em>is</em> some tooling for Jekyll specifically to allow a similar workflow. If you’re using Jekyll, it looks like your best bet is to check out <a href="https://github.com/inukshuk/jekyll-scholar">jekyll-scholar</a> and the <a href="https://github.com/inukshuk/citeproc-ruby">citeproc-ruby</a> project, which (like pandoc-citeproc) enables you to embed citations and filter them through <abbr>CSL</abbr>s to generate references automatically. As a note: you should definitely be able to get those working on your own deployment sites, but I have no idea whether it’s possible to do them with the GitHub Pages variant of Jekyll. (If anyone who reads this knows the answer to that, let me know on Twitter or App.net, and I’ll update the post accordingly.)</p>
</section>
</section>
<section id="the-future" class="level2">
<h2>The future</h2>
<p>In addition to continuing to use <span class="tex">BibT<span class="texE">E</span>X</span> with BibDesk as a way of managing my citations in the short term, I’m thinking about other ways to improve this workflow. One possibility is integrating with <a href="http://scholdoc.scholarlymarkdown.com">Scholdoc</a> as it matures, instead of <a href="http://pandoc.org">pandoc</a>, and maybe (hopefully, albeit unlikely) even contributing to it somewhat. I’m also open to using other citation library tools, though my early explorations with Mendeley and Zotero did not particularly impress me.</p>
<p>There are substantial advantages for the applications (and thus for most users) in maintaining the data in an application-specific format (e.g. an SQLite database) rather than on the file system—but the latter has the advantage of making it much easier to integrate with other tools. However, Zotero and Mendeley both natively export to <span class="tex">BibT<span class="texE">E</span>X</span> format, and Mendeley natively supports <a href="http://blog.mendeley.com/tipstricks/howto-use-mendeley-to-create-citations-using-latex-and-bibtex/">sync</a> to a <span class="tex">BibT<span class="texE">E</span>X</span> library (Zotero can do the same, but via third-party <a href="https://zoteromusings.wordpress.com/tag/bibtex/">plugins</a>), so those remain viable options, which I may use for future projects.</p>
<p>I also want to look at making my library of resources available publicly, perhaps (a) as a standalone library associated with each project, so that anyone who wants to can download it along with the Markdown source to play with as an example and (b) as a general library covering my various reading and research interests, which will certainly be irrelevant to most people but might nonetheless provide some value to someone along the way. I’m a big fan of making this kind of data open wherever possible, because people come up with neat things to do with it that the original creators never expect. Not <em>everything</em> should be open—but lots of things should, and this might be among them.</p>
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>I’m pretty happy with the current state of affairs, the aforementioned interest in other reference managers notwithstanding:</p>
<ul>
<li>I can set up the citations <em>once</em>, in a tool designed to manage references, instead of multiple times in multiple places.</li>
<li>I can use Pandoc and a <abbr>CSL</abbr> to get the citations formatted correctly throughout a paper, including generating the bibliography automatically.</li>
<li>I can use the same tooling, integrated into my static site generator, to build a web version of the content—with no extra effort, once I configured it properly the first time.</li>
</ul>
<p>Perhaps most importantly, this helps me meet one of my major goals for all my writing: to have a single canonical <em>source</em> for the content, which I will be able to access in the future regardless of what operating system I am using or what publishing systems come and go. Simple plain text files—Markdown—get me there. Now I’ve put good tools around that process, and I love it even more.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Coming up with names for footnotes in Markdown can be painful in general for long documents. If you try to name them manually, like I do for posts on my website, you will very quickly end up wasting time on the names. If you try to number them, they will end up out of order in a hurry. My own <a href="http://2012-2013.chriskrycho.com/web/markdown-and-academic-writing/">previous solution</a> to this problem quickly became unwieldy for larger papers, and required a <em>lot</em> of hand-editing. Gladly, I no longer deal with that manually. Instead, I do all my drafting in <a href="http://www.ulyssesapp.com">Ulysses</a>, where you just type <code>(fn)</code> and it creates a footnote automatically, and will move that footnote <em>object</em> around transparently as you edit, handling all the number-setting, etc. on its own.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>The irony of a site for software which boasts that it is “a high-quality typesetting system” looking like <a href="http://www.latex-project.org"><em>this</em></a> is not lost on me…<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>If you used the installers on Pandoc’s website, <code>pandoc-citeproc</code> comes with it. If you installed it via a package manager (e.g. by running <code>brew install pandoc</code>), it may not have come along, so you’ll need to install it yourself (e.g. <code>brew install pandoc-citeproc</code>).<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>All of the content, including the rendered footnotes and the bibliography, has sensible content types set on it: headers are headers, body text is body text, etc. You can then customize to match the specifications of your style guide. I have a Chicago/Turabian style set set up with the formatting rules to match.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>Actually, it was even hairier than this, because I also had a <code>--reference-docx path/to/template.docx</code> specified. If you think it’s perhaps a bit too complex, well, I agree. I plan to turn that into a command line alias in pretty short order, because remembering it every time is just not going to happen.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>Using the <code>--reference-docx</code> argument to Pandoc, you can hand it a document that already uses your desired style set, so you don’t have to go in and apply it manually.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn7" role="doc-endnote"><p>I could have done that with Pandoc’s <span class="tex">L<span class="texA">A</span>T<span class="texE">E</span>X</span> <abbr>PDF</abbr> tools, as well, but didn’t really feel like taking the time to tweak the <span class="tex">L<span class="texA">A</span>T<span class="texE">E</span>X</span> template for it.<a href="#fnref7" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn8" role="doc-endnote"><p>Probably someone does, but not me, and not most people!<a href="#fnref8" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn9" role="doc-endnote"><p>If you’re using Pelican, you can take a look at my Pelican configuration file <a href="https://github.com/chriskrycho/chriskrycho.com/blob/ef3ecbca1765750392086355aeae026c1159d4b9/pelicanconf.py#L109">here</a> to see the full configuration for using Pandoc this way.<a href="#fnref9" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn10" role="doc-endnote"><p>Optimally, I’d really just prefer to be able to set <em>all</em> of these arguments at a per-file level—i.e., not use <code>--filter pandoc-citeproc</code> unless the file actually specifies a bibliography. And I could hack Pelican to do that; I’ve actually already <a href="https://github.com/liob/pandoc_reader/pull/5">messed around</a> with other, semi-related bits regarding Pelican and Pandoc’s shared handling of <abbr>YAML</abbr> metadata. But I’d prefer to keep my installation as “vanilla” as possible to minimize the cost of setting things up again on a new machine or after a crash, etc.<a href="#fnref10" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
HTML5 Location, <base>, and SVG2015-06-20T10:30:00-04:002015-07-02T22:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-06-20:/2015/html5-location-base-and-svg.htmlAngular requires <code><base></code> if you want to use HTML5's <code>location</code>… but if you get it wrong, SVG things can and will break under you.
<p>For quite some time, I have been frustrated by a bug in HolyBible.com: Firefox would not render SVGs using the <code><use xlink:href="#some-SVG-ID"></use></code> pattern. Today, I set aside my ongoing work on new user-facing functionality and dedicated what working time I had to hunting down the cause of this and fixing it at last.</p>
<p>I was surprised to find the culprit: the <code><base></code> tag. If you don’t know what the <code><base></code> tag is, you’re not alone. It is <em>not</em> used all that much in general, and I had never actually seen it on a site before starting on this project last year.</p>
<p>So what went wrong? How do these two things play together?</p>
<p>I am using (and reusing) SVG items throughout the HolyBible.com interface, taking advantage of the ability to define symbols and reference them with the <code><use></code> tag, like so:</p>
<pre class="html"><code><svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="display: none">
<symbol id="logo-shape" viewBox="0 0 256 256">
<title>Logo</title>
<desc>The HolyBible.com logo: sunrise breaking over an open book (the Bible).</desc>
<path id="logo-light" d="M172.1 116.3l5.1-4.1-12.5-.5 32-26.3-41.4 18.4 11-20.1L148 96l12.2-37.5L138.8 91l.1-36.2-10.3 34.4L114 36.1l4.3 54.9-22.2-34.9 13 39.9-18.3-12.4 11 20.1-42.5-19.2 32.8 26.9-10.4.8 4.4 3.9c13.1-1.6 27.4-2.7 42.4-2.7 15.4 0 30.1 1.2 43.6 2.9z"/>
<path id="logo-book" d="M199.9 219.9c-47.4-9.8-96.4-9.8-143.8 0-6-28.9-12-57.7-17.9-86.6 59.3-12.3 120.4-12.3 179.7 0-6 28.9-12 57.8-18 86.6z"/>
</symbol>
</svg>
<!-- somewhere else on the page -->
<svg>
<use xlink:href="#logo-shape"></use>
</svg></code></pre>
<p>Throughout all my early prototyping, this worked perfectly across all modern browsers. (For more, see <a href="https://css-tricks.com/svg-sprites-use-better-icon-fonts/">CSS Tricks</a>.) Now, when I started moving from the prototype phase into actually building the application in Angular last fall, I learned that you have to set the base URL for the application using the <code><base></code> tag to use the HTML5 Location API with Angular 1.x. If you want URL-based, rather than <code>#</code>-based navigation in an Angular app, you need this. Following the recommendation of whatever documentation and tutorials I found, I set it so:</p>
<pre class="html"><code><base href="/"></code></pre>
<p>Again, this was the recommendation I saw in every bit of documentation and every tutorial, so I assumed it would cause no problems. As it turns out, that’s not the case. (This is a <a href="http://v4.chriskrycho.com/2015/how-to-build-a-single-page-app-api-right.html">recurring theme</a> in my experience with Angular.) In Chrome, Safari, and IE9+, this works exactly as expected. In Firefox, however, it does <em>not</em>. The presence of the <code><base></code> tag changes the behavior of <code>#</code>-based URLs on a page. Specifically, if you’re at a URL that <em>isn’t</em> the base route, anchor links don’t behave as expected. To make the <code><use></code> tag work as expected, we would have to reference the same URL as the base tag—meaning that every place we used the <code><use></code> tag would have to set that URL explicitly. Not exactly a good idea, given that it would entail an awful lot of changes if the base URL were ever changed.</p>
<p>What if, instead, we did this?</p>
<pre class="html"><code><script>document.write('<base href="' + document.location.origin + '" />');</script></code></pre>
<p>This way, when the page renders, it writes the document location based on the <em>current</em> location. The URL history still behaves as expected with Angular, but the relative URLs for IDs behave as expected in Firefox again, while not breaking the behavior in any other browsers.</p>
<p>But… then you’ll navigate to another page, and Firefox will be back to not working.</p>
<p>The <a href="https://github.com/angular/angular.js/issues/8934#issuecomment-56568466">solution</a>, it turns out, only came into being after I’d done the initial implementation, and I have no idea how much later it found its way into the Angular docs. However, even though it now <em>exists</em> in the docs, it’s by no means obvious why you should do it this way, and there is certainly no mention of SVG! This might not seem odd to you… but it should, given that the only reason Angular introduced this API change was to account for <em>exactly this issue</em>.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>As the Angular docs note, leaving out the <code><base></code> tag means all your URLs have to be absolute if you want to use HTML5 location and the <code>$locationProvider</code>. If you want to use SVGs with <code><use></code> in Firefox, though, that’s the price you have to pay (and therefore that’s what I’m doing).</p>
<p>Fun times, right?</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>The closest it gets is this reference:</p>
<blockquote>
<p>Links that only contain a hash fragment (e.g. <code><a href="#target"></code>) will only change <code>$location.hash()</code> and not modify the url otherwise. This is useful for scrolling to anchors on the same page without needing to know on which page the user currently is.</p>
</blockquote>
<p>Even this, however, only <em>hints</em> at the root of the SVG issue.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
How to Build a Single-Page App API Right2015-06-09T22:16:00-04:002015-06-09T22:16:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-06-09:/2015/how-to-build-a-single-page-app-api-right.htmlHow to write a single-page app API so that you get usable data on the first load *and* have a nice interface for your single-page application built in Ember/Angular/Knockout/Backbone/etc.
<p>When I was first working on HolyBible.com, I struggled for quite a while to wrap my head around the right way to structure its API—and in truth, I actually didn’t come up with what I would call the <em>right</em> solution. I came up with a <em>working</em> solution, and the site performs all right, most of the time. However, our goal as developers shouldn’t be “all right, most of the time.” It should be “really well, all the time.” A big part of what I did wrong came from the bad advice I found in reading up on the issue along the way. This is my shot at helping you, dear reader, avoid making the same mistake.</p>
<section id="the-challenge" class="level2">
<h2>The challenge</h2>
<p>When building a client-side application, we need to get the data for each view so that we can render it. In the case of HolyBible.com, that means everything from actual Bible text to study Bible notes, about pages, etc. The question is <em>how</em> to do this: we need to be able to load an actual page from our server, and we need a way to request data (rather than whole pages) from the server.</p>
<p>(More experienced developers already know where this is going: that last sentence there has the key to this whole thing. I know. But the internet <em>doesn’t.</em> I learned this the hard way.)</p>
<section id="the-mistake" class="level3">
<h3>The mistake</h3>
<p>Here’s the mistake I made: I built the Bible data API as (essentially) a <em>single</em> endpoint. When I went looking for advice on how to build this in Angular and Node/Express, every single tutorial or blog post I found outlined the same basic solution: routes for your data endpoints, and a catch-all route that returns the basic frame page for everything else. So, for HolyBible.com, that would mean route matchers for e.g. <code>/data/gen.1.1</code> and for any other specific routes needed (for other views, static resources, etc.), with a default behavior of just dropping a static, basically empty template at the catch-all <code>*</code> route. Then, once the application has loaded, it can inspect the URL and load the relevant data.</p>
<p>This works. It’s exactly what I did on HolyBible.com, in fact. But it’s <em>slow</em>.</p>
<p>Don’t get me wrong: the time until the initial page load is actually relatively quick (though I plan to improve it substantially over the next couple months). The real problem is that the initial page load <em>doesn’t include any content</em>.</p>
<p>I <em>hate</em> this. That’s why people are on the site: not to see my neat skills with JavaScript, just to read the Bible. And they have to wait, because once the page <em>does</em> load, Angular has to spin up the full application, see what content <em>should</em> have been loaded, and request it.</p>
</section>
<section id="the-solution" class="level3">
<h3>The solution</h3>
<p>Don’t write <em>one</em> API. Write <em>two</em>. They should be structured nearly identically, but one of them will be a <em>page</em> API endpoint, and one will be a <em>data</em> API endpoint. In the context of HolyBible.com, here’s how that would play out.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> One endpoint would be based purely on the standard URL, something like <code>holybible.com/jhn.3.16</code>. The other would be to retrieve a set of <em>data</em> associated with a given address, like <code>holybible.com/data/jhn.3.16</code>. This is only a little different from the approach suggested above, but that small difference matters—in fact, it matters a <em>lot</em>.</p>
<p>Instead of having the <code>/jhn.3.16</code> route get handled by a catch-all <code>*</code> route on the back end, it gets its own API endpoint, which looks for URLs of this shape and hands back a full page. That API endpoint is responsible for actually rendering the content of the page appropriately—in this case, with something like the whole chapter of John 3.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> <em>That</em> gets handed back to the browser, so the very first thing the user sees is not a blank page while the JavaScript framework spins up and requests data, but rather <em>the Bible text they asked for in the first place</em>.</p>
<p>Meanwhile, the JavaScript framework <em>can</em> spin up, load any required session data, and start managing the UI like normal. Once we get to this point, the framework can go ahead and request a data payload from the <code>/data/<reference></code> endpoint. So, for example, if there is a navigation control on the page (as on HolyBible.com and indeed most sites), clicking to navigate to Job 14 could, instead of requesting <code>/job.14.4</code>, fetch the data from the other endpoint by running an AJAX request to <code>/data/job.14.4</code>.</p>
<p>The backend thus supplies <em>both</em> a <code>/<resource></code> and a <code>/data/<resource></code> route. This might seem redundant, but we’ve just seen why it isn’t. Moreover, if you have any logic that needs to be in place—in our example here, a Bible reference parser to decide what content should be supplied—you can easily reuse it between the two routes. The difference is simply in the form of the data returned: is it a fully-rendered template, or just the data?</p>
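<p>To make the shape of this concrete, here is a minimal, framework-agnostic sketch of the two-endpoint pattern in Python. The <code>parse_reference</code> and <code>fetch_content</code> helpers are hypothetical stand-ins, not HolyBible.com’s actual code; the point is only that the two routes share their logic and differ in the form of what they return:</p>

```python
import json

def parse_reference(ref):
    """Shared logic: parse a reference like 'jhn.3.16' (illustrative only)."""
    book, chapter, verse = ref.split(".")
    return {"book": book, "chapter": int(chapter), "verse": int(verse)}

def fetch_content(parsed):
    """Stand-in for the real content lookup (imagine a database query here)."""
    return "Text for {book} {chapter}:{verse}".format(**parsed)

def data_route(ref):
    """GET /data/<reference>: hand back just the data, serialized as JSON."""
    parsed = parse_reference(ref)
    parsed["text"] = fetch_content(parsed)
    return json.dumps(parsed)

def page_route(ref):
    """GET /<reference>: hand back a fully rendered page, reusing the same logic."""
    parsed = parse_reference(ref)
    return "<html><body><p>{}</p></body></html>".format(fetch_content(parsed))
```

<p>A real implementation would register these as Express, Rails, or Django routes, but the division of labor is the same: one route renders, the other serializes.</p>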
</section>
</section>
<section id="so-what" class="level2">
<h2>So what?</h2>
<p>This approach has two big advantages over the catch-all approach that was frequently recommended in e.g. Angular SPA tutorials I read.</p>
<ol type="1">
<li><p>It’s <em>progressive enhancement</em>. If the JavaScript fails, or the user has it disabled, or it hasn’t arrived yet because it’s loaded asynchronously, the user still gets the page they asked for. Moreover, as long as the page content is built carefully (links built appropriately for other content, and so on), the entire application could continue to work even if the JavaScript <em>never</em> becomes available.</p></li>
<li><p>It’s <em>performant</em>. Loading the content this way will be <em>much</em> faster than the standard approach recommended for single-page apps. As noted above, it gets the content to the user immediately, then lets the JavaScript UI bits come into play. Since future page loads can take advantage of both caching and smaller data payloads, the whole thing can actually be faster than either a pure client-side <em>or</em> a pure server-side approach. That is, once the client-side application is running, it can just update its views with data delivered via AJAX, rather than reloading the whole page. But <em>before</em> that, the user doesn’t have to wait to see something useful until the JavaScript framework spins up.</p></li>
</ol>
<p>It’s not often that an approach gives you progressive enhancement <em>and</em> actually increases the performance of an application, but this one does. Better yet, you can apply this in just about any framework: it’s equally applicable to AngularJS with ExpressJS, Backbone with Rails, Ember with Django, Aurelia with Phoenix, or any other combination you come up with.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Note: this is <em>not</em> the actual API structure of HolyBible.com, or even particularly close to it. Remember, I learned everything I’m writing here by doing it <em>wrong</em>.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Or possibly a section which constitutes a semantic block of data. I have some thoughts on chunking Bible data semantically rather than by chapter and verse for this kind of thing. That’s another post for another day, though.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Corporate and Government Surveillance2015-06-02T22:43:00-04:002015-06-02T22:43:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-06-02:/2015/corporate-and-government-surveillance.htmlA response to Senator Sheldon Whitehouse's address to the NSA on Americans’ greater mistrust of government collection of data than corporations’.
<p><i class="editorial"><a href="https://witheredgrass.wordpress.com/">Brian Auten</a> shared <a href="http://www.lawfareblog.com/2015/06/why-americans-hate-government-surveillance-but-tolerate-corporate-data-aggregators/">this speech</a> by Sen. Sheldon Whitehouse on Facebook, and I wrote up what follows in response.</i></p>
<hr />
<ol type="1">
<li><p>I broadly agree with the critique of the libertarian/TP angle on government as essentially an appendage to business. I am <em>by no means</em> hostile to the government in general or in principle, nor even to <em>spying</em>, nor even to warranted (double entendre intended) use of data for law enforcement. The idea that all government is bad is woefully incorrect; it is better to speak of <em>abuses</em>, either of government or of business or indeed of any sphere exceeding its right domain or acting inappropriately within its domain.</p></li>
<li><p>There is a profound and important difference between corporate data collection and federal government data collection: one of them, people accede to directly (though see below); the other they accede to (at best!) indirectly through elected representatives, with whom they may profoundly disagree and against whom they have no recourse (unlike the case of, say, Google or Facebook—one <em>can</em> simply stop dealing with them). Whatever information I have granted to a corporation, I have chosen to grant them, and I can stop doing so with future information at any time. I <em>cannot</em> do so with the NSA, FBI, etc.</p></li>
<li><p>That distinction may be relatively meaningless for most people in practice, given that the terms, means, and consequences of the data collection carried out by corporations are often obscure to the point of incomprehensibility.</p></li>
<li><p>As such, a serious reformation ought to occur in the realm of business and the way that people’s information is handled. Treating information about customers as the primary point of transactional value has significantly deleterious costs on any number of things.</p></li>
<li><p>For this reason, I consistently advocate for and (where possible) choose to use services which are supported by direct payment, rather than by advertising, and so on. This is not always possible, but where it is, we should consider taking that path.</p></li>
<li><p>Nonetheless, because of the government’s power of coercion—a power not held by corporations, though to be sure they can exercise significant force of a certain sort through legal machinery/chicanery—the collection of metadata by the government does pose a more potent and long-term threat to liberty than that by corporations.</p></li>
<li><p>As such, people are <em>absolutely right</em> to be more tolerant of corporate data collection than of federal data collection. That they ought to be less tolerant of corporate data collection by no means suggests that their hostility to unwarranted governmental data collection should be diminished: quite the contrary.</p></li>
<li><p>Therefore, while some of the criticism of the government’s data collection may well be driven by the sorts of corporate interests he suggests, and while much of the opposition from companies like Facebook and Google is indeed hypocritical, the criticism is still warranted. The NSA has clearly and repeatedly overstepped even the extremely wide bounds granted it by the Patriot Act, and the Patriot Act itself licensed behavior that should be horrifying to people concerned with the long-term effects of mass surveillance on governance.</p></li>
</ol>
Python Enums, ctypes.Structures, and DLL exports2015-05-28T18:00:00-04:002015-05-28T18:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-05-28:/2015/ctypes-structures-and-dll-exports.htmlUnfortunately, the official docs for <code>ctypes</code> leaves a few things out—namely, the most basic use case with <code>from_param</code>! Here's a simple, working example from my own development work.
<p>For one of my contracts right now, I’m writing a <code>ctypes</code> Python interface to existing C code. I got stuck and confused for quite a while on getting the interface to a given function to build correctly, and along the way had to try to understand the <code>from_param</code> class method. The official docs are… fine… but the examples provided don’t cover the most common/basic use case: defining a simple, <em>non-ctypes</em> data type as an argument to a DLL-exported function.</p>
<p>Let’s say you have a C function exported from a DLL; for convenience we’ll make it something rather silly but easy to understand:</p>
<pre class="c"><code>/** my_exported.h */
#include "exports.h"

typedef enum {
    ZERO,
    ONE,
    TWO
} MyEnum;

MY_API int getAnEnumValue(MyEnum anEnum);</code></pre>
<p>The implementation just gives back the integer value of the function:</p>
<pre class="c"><code>int getAnEnumValue(MyEnum anEnum) {
    return (int)anEnum;
}</code></pre>
<p>As I said, a <em>very</em> silly example. Note that you don’t technically need the <code>(int)</code> cast there; I’ve just put it in to be explicit about what we’re doing.</p>
<p>How would we use this from Python? Assuming we have a DLL named <code>my_dll</code> which exports the <code>getAnEnumValue</code> function, we’d load it up roughly like this:<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<pre class="python"><code>import ctypes as c
my_dll = c.cdll.LoadLibrary('my_dll')</code></pre>
<p>Then, we bind to the function like this:</p>
<pre class="python"><code>get_an_enum_value = my_dll.getAnEnumValue</code></pre>
<p>Now, when you do this, you usually also supply the <code>argtypes</code> and <code>restype</code> values for these functions. If you’re like me, you’d think, “Oh, an enum—a perfect opportunity to use the <code>Enum</code> type in Python 3.4+!” and then you’d do something like this:</p>
<pre class="python"><code>import ctypes as c
from enum import IntEnum

class MyEnum(IntEnum):
    ZERO = 0
    ONE = 1
    TWO = 2

my_dll = c.cdll.LoadLibrary('my_dll')
get_an_enum_value = my_dll.getAnEnumValue
get_an_enum_value.argtypes = [MyEnum]
get_an_enum_value.restype = c.c_int</code></pre>
<p>That seems sensible enough, but as it is, it won’t work: you’ll get an error:</p>
<pre><code>TypeError: item 1 in _argtypes_ has no from_param method</code></pre>
<p>This is because <code>argtypes</code> values <em>have</em> to be existing <code>ctypes</code> types<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> or else supply either:</p>
<ul>
<li>a <code>from_param</code> classmethod, or</li>
<li>an <code>_as_parameter_</code> attribute</li>
</ul>
<p>You can use <code>ctypes.Structure</code> subclasses natively that way, because the <code>Structure</code> class supplies its <code>from_param</code> classmethod. The same is <em>not</em> true of our custom enum class, though. As the docs put it:</p>
<blockquote>
<p>If you have defined your own classes which you pass to function calls, you have to implement a <code>from_param()</code> class method for them to be able to use them in the argtypes sequence. The <code>from_param()</code> class method receives the Python object passed to the function call, it should do a typecheck or whatever is needed to make sure this object is acceptable, and then return the object itself, its <code>_as_parameter_</code> attribute, or whatever you want to pass as the C function argument in this case. Again, the result should be an integer, string, bytes, a <code>ctypes</code> instance, or an object with an <code>_as_parameter_</code> attribute.</p>
</blockquote>
<p>So, to make the enum type work, we need to add a <code>from_param</code> class method or an <code>_as_parameter_</code> attribute to it. Thus, either of these options will work:</p>
<pre class="python"><code>class MyEnum(IntEnum):
    ZERO = 0
    ONE = 1
    TWO = 2

    # Option 1: set the _as_parameter_ attribute at construction.
    def __init__(self, value):
        self._as_parameter_ = int(value)

    # Option 2: define the class method `from_param`.
    @classmethod
    def from_param(cls, obj):
        return int(obj)</code></pre>
<p>In the constructor-based option, the <code>value</code> argument to the constructor is the value of the <code>Enum</code> instance. Since the value of an <code>IntEnum</code> is always the same as the integer to which it is bound, we can simply assign <code>int(value)</code> to the <code>_as_parameter_</code> attribute.</p>
<p>The <code>from_param</code> approach works a little differently, but with the same results. The <code>obj</code> argument to the <code>from_param</code> method is the object instance, in this case the enumerated value itself. <em>Any</em> <code>Enum</code> with an integer value can be directly cast to <code>int</code> (though it is possible for <code>Enum</code> instances to have other values, so be careful), and since we have an <code>IntEnum</code> here, we can again just return <code>int(obj)</code> directly.</p>
<p>Now, let’s say we want to apply this pattern to more than a single <code>IntEnum</code> class, because our C code defines more than one enumeration. Extracting it to be common functionality is simple enough: just create a class that implements the class method, and inherit from it.</p>
<pre class="python"><code>class CtypesEnum(IntEnum):
    """A ctypes-compatible IntEnum superclass."""
    @classmethod
    def from_param(cls, obj):
        return int(obj)

class MyEnum(CtypesEnum):
    ZERO = 0
    ONE = 1
    TWO = 2</code></pre>
<p>Our final (working!) Python code, then, would be:</p>
<pre class="python"><code># Import the standard library dependencies
import ctypes as c
from enum import IntEnum

# Define the types we need.
class CtypesEnum(IntEnum):
    """A ctypes-compatible IntEnum superclass."""
    @classmethod
    def from_param(cls, obj):
        return int(obj)

class MyEnum(CtypesEnum):
    ZERO = 0
    ONE = 1
    TWO = 2

# Load the DLL and configure the function call.
my_dll = c.cdll.LoadLibrary('my_dll')
get_an_enum_value = my_dll.getAnEnumValue
get_an_enum_value.argtypes = [MyEnum]
get_an_enum_value.restype = c.c_int

# Demonstrate that it works.
print(get_an_enum_value(MyEnum.TWO))</code></pre>
<p>The output will be <code>2</code>, just as you’d expect!</p>
<p>An important note: The type definition we’ve provided here will work for <code>argtypes</code> or <code>restype</code> assignments, but <em>not</em> as one of the members of a custom <code>ctypes.Structure</code> type’s <code>_fields_</code> value. (Discussing how you’d go about doing that is beyond the scope of this post; the most direct approach is just to use a <code>ctypes.c_int</code> and note that it is intended to be used with a given <code>IntEnum</code>/<code>CtypesEnum</code> type.)</p>
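<p>For what it’s worth, that direct approach looks something like the following—a hedged sketch, where the <code>Config</code> structure and its <code>mode</code> field are purely hypothetical, chosen just to illustrate the pattern:</p>

```python
import ctypes as c
from enum import IntEnum

class MyEnum(IntEnum):
    ZERO = 0
    ONE = 1
    TWO = 2

class Config(c.Structure):
    # MyEnum can't appear in _fields_ directly; store a plain c_int and
    # document that it is intended to hold a MyEnum value.
    _fields_ = [("mode", c.c_int)]

cfg = Config(mode=MyEnum.TWO)  # IntEnum values coerce to int on assignment
mode = MyEnum(cfg.mode)        # convert back to the enum when reading
```

<p>Since <code>IntEnum</code> subclasses <code>int</code>, the assignment direction needs no extra machinery; only the read side needs the explicit conversion back.</p>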
<hr />
<p>Thanks to <a href="https://alpha.app.net/oluseyi">@oluseyi</a> for being my <a href="http://en.wikipedia.org/wiki/Rubber_duck_debugging">rubber ducky</a> while I was working this out earlier this week!</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I’m leaving out the part where we build the DLL, and also the part where we locate the DLL, and only using the Windows convention. If you’re on a *nix system, you should use <code>'my_dll.so'</code> instead, and in any case you need to make sure the DLL is available in the search path.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>I <em>love</em> the redundancy of “<code>ctypes</code> types,” don’t you?<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Open Source is Neat2015-05-17T22:52:00-04:002015-05-17T22:52:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-05-17:/2015/open-source-is-neat.htmlLink: Andrew J. Camenga took advantage of the fact that my site design is under an open-source license, and adapted it. It is truly lovely!<p>I confess: my <em>first</em> response to seeing <a href="//andrewcamenga.com/">this page</a> was a flash of anger: <em>Hey, he didn’t just learn from my site configuration, he actually stole my site <strong>design</strong>!</em> And then I remembered: I open-sourced the design precisely so people could do that. This was just the first time I’ve ever actually had someone reuse something I did and shared like this. It was a strange (but ultimately wonderful) feeling. I hope to have it again many more times.</p>
<p>In any case, I rather like the tweaks Andrew Camenga made to my design to make it his own; <a href="//andrewcamenga.com/">go take a look</a>!</p>
A Modern Python Development Toolchain2015-05-16T22:40:00-04:002015-05-16T22:40:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-05-16:/2015/a-modern-python-development-toolchain.htmlUsing homebrew, pyenv, and pip to manage Python development environments and workspaces.<p>Most of my development time these days—and especially the majority of my happiest time!—is spent working in Python. As such, I’ve experimented off and on over the last few years with the best workflow, and have settled down with a set of tools that is <em>very</em> effective and efficient for me. I’m sure I’m not the only one who’s had to wrestle with some of the issues particular to this toolchain, and I know that information like this can be valuable especially for people just starting off, so I thought I would document it all in one place.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>Note: when talking about a given program, I will italicize it, like <em>brew</em> or <em>git</em> or <em>python</em>. When talking about things to type, I will make them a code block like <code>git clone <a repository></code>. For any extended samples, I will make them full-on code blocks:</p>
<pre class="python"><code>import re

def a_neat_function():
    my_string = "Isn't it cool?"
    if re.match(r"i\w+", my_string, flags=re.I):
        print(my_string)</code></pre>
<hr />
<p>The main tools I use are: a good text editor (I like all of <a href="//www.sublimetext.com">Sublime Text</a>, <a href="//atom.io">Atom</a>, <a href="//github.com/textmate/textmate">TextMate</a>, and <a href="//chocolatapp.com">Chocolat</a>; each has its own strengths and weaknesses) or sometimes <a href="https://www.jetbrains.com/pycharm/">a full IDE</a>, version control software (I appreciate and use both <a href="http://www.git-scm.com">Git</a> and <a href="http://mercurial.selenic.com">Mercurial</a>), and three dedicated tools to which the rest of this post is devoted: <em>pyenv</em>, <em>pip</em>, and virtual environments.</p>
<p>Everyone is going to have their own preferences for version control tools and an editor; but the recommendations I make regarding Python installations, package management, and workspaces/virtual environments should be fairly standard for anyone doing Python development on a Unix-like system in 2015.</p>
<section id="python-proper" class="level2">
<h2>Python Proper</h2>
<p>First up: Python itself. OS X ships with a built-in copy of Python 2; in the latest version of Yosemite, it’s running Python 2.7.6. The latest version of Python 2 is 2.7.9, so that isn’t <em>terribly</em> far behind—but it is still behind. Moreover, OS X does <em>not</em> ship with Python 3, and since I do all of my development in Python 3<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> I need to install it.</p>
<section id="homebrew" class="level3">
<h3>Homebrew</h3>
<p>For a long time, I managed all my Python installations with <a href="http://brew.sh"><em>homebrew</em></a>. If you’re not familiar with it, <em>homebrew</em> is a package manager that lets you install tools from the command line, similar to what you get from <em>aptitude</em> or <em>yum</em> on Ubuntu or Fedora respectively.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> If you’re not using <em>homebrew</em> yet, I highly recommend it for installing command-line tools. (If you’re not using command-line tools yet, then the rest of this post will either bore you to death, or prove extremely enlightening!) If you haven’t started yet, now’s a good time: <a href="http://brew.sh">go install it</a>!</p>
<p>While <em>homebrew</em> is great for installing and managing packages in general, I can’t say this loudly enough: <em>don’t manage Python with homebrew</em>. It’s finicky, and really isn’t meant for all the things you have to do to manage more than one version of Python at a time.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> (There’s a reason there’s a whole <a href="https://github.com/Homebrew/homebrew/blob/master/share/doc/homebrew/Homebrew-and-Python.md">troubleshooting section</a> devoted to it.) If you think it’s crazy that I might want more than one copy of Python installed at a time, well… let’s just say I suspect you’ll change your mind after doing a bit more development. (At the most basic, most people will end up wanting both Python 2 and 3 installed, and will want to upgrade them as bug fixes and the like come out.)</p>
</section>
<section id="pyenv" class="level3">
<h3>pyenv</h3>
<p>Instead of installing via <em>homebrew</em>, use it to install <a href="https://github.com/yyuu/pyenv"><em>pyenv</em></a>, and use that to manage your installations. <em>pyenv</em> is a dedicated tool for managing your “Python environment,” and it excels at that. If you were on a Mac with <em>homebrew</em> installed, your setup process to add the latest version of Python might look something like this:</p>
<pre class="shell"><code>$ brew install pyenv
$ echo 'eval "$(pyenv init -)"' >> ~/.profile
$ source ~/.profile
$ pyenv install 3.4.3</code></pre>
<p>Line by line, that (a) installs <em>pyenv</em>, (b) adds a hook to your shell profile,<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> (c) updates your current session using the updated profile, and (d) installs the latest version of Python (as of the time I’m writing this). Now you have a full version of Python 3.4.3 alongside the system install of Python 2.7.6. If you wanted to install 2.7.9, or 2.2.3, or the development version of PyPy3, you could easily do that as well.</p>
<p>In addition, <em>pyenv</em> lets you specify which version to use globally (<code>pyenv global <name></code>) and which version to use in a given directory structure (<code>pyenv local <name></code>). So if you prefer to use Python 3 in general, but need to use Python 2 on one project, you can just navigate to the root of that project and set it:</p>
<pre class="shell"><code>$ pyenv global 3.4.3
$ cd path/to/my/project
$ pyenv local 2.7.9</code></pre>
<p>This will create a simple plain text file, <code>.python-version</code>, whose contents will be just <code>2.7.9</code>—but for everything under <code>path/to/my/project</code>, typing <code>python</code> will launch Python 2.7.9, while typing it <em>outside</em> that folder will launch Python 3.4.3. (If you want, you can just create the <code>.python-version</code> file yourself manually and give it the name of a version. There’s nothing special about it at all; it’s just the place <code>pyenv</code> looks to know which Python version to use.)</p>
</section>
</section>
<section id="managing-python-packages" class="level2">
<h2>Managing Python Packages</h2>
<p>There are four basic approaches to managing Python packages:</p>
<ul>
<li>installing them manually</li>
<li>using a system-level package manager like <em>homebrew</em>, <em>yum</em>, or <em>aptitude</em></li>
<li>using <em>easy_install</em></li>
<li>using <em>pip</em></li>
</ul>
<p>The vast majority of the time, the right choice is using <em>pip</em>. Over the last few years, <em>pip</em> has become the default install tool for Python packages, and it now ships natively with Python on every platform. Suffice it to say: if you need to install a package, don’t install it with <em>homebrew</em> (or <em>aptitude</em> or <em>yum</em>); install it with <em>pip</em>. It integrates better with Python, it always has access both to the latest versions of Python packages (including those only available in e.g. development repositories on GitHub or Bitbucket or wherever else) and to all previously released versions, and it’s the community’s main tool for the job.</p>
<p>That said, occasionally it makes sense to install packages manually by downloading them and running <code>python setup.py install</code> or to use a system-level package manager. On the other hand, given <em>pip</em>’s ability to do everything <em>easy_install</em> does, and its ability to do quite a few more things as well, there really isn’t a time to use <em>easy_install</em>. Using the language-supplied tools keeps everything playing nicely together. Perhaps just as importantly, it is the only way to make sure everything behaves the way it should when you start using…</p>
</section>
<section id="virtual-environments" class="level2">
<h2>Virtual Environments</h2>
<p>When working with a variety of different clients, or simply on different projects, it is common not only to end up with different versions of Python but also with different sets of packages or—trickier still!—different versions of the same package required for different projects. Virtual environments provide a solution: they reuse the main Python executable (by creating links on the file system to it), but create isolated “workspaces” for the various packages you might install.</p>
<p>That way, in one workspace, you might have version 1.2 of a package installed, and in another you might have version 3.3 installed—because those are the required dependencies for something <em>else</em> you’re doing. This isn’t a hypothetical situation. For quite a while with one of my clients, we had pinned a particular version of the Python documentation package we use because an update broke our use case—but I still wanted to have the latest version of that tool in my <em>other</em> projects. Setting up virtual environments neatly solves that problem.</p>
<section id="venv-and-virtualenv" class="level3">
<h3>venv and virtualenv</h3>
<p>If you have Python 3.3 or later, you have a built-in tool for this called <a href="https://docs.python.org/3/library/venv.html"><em>pyvenv</em></a>; if you have Python 3.4 or later, it supports <em>pip</em> right out of the gate so you don’t have to install it yourself. If you’re on older versions, you can install <a href="https://virtualenv.pypa.io/en/latest/"><em>virtualenv</em></a> (<code>pip install virtualenv</code>) and get the same basic tooling: <em>pyvenv</em> was inspired by <em>virtualenv</em>. Then you can create virtual environments with the <code>pyvenv</code> or <code>virtualenv</code> commands, and use those to isolate different setups from each other. If you haven’t started using virtual environments yet, start now!</p>
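<p>Whichever tool you have, the workflow is the same. A minimal sketch—<code>my-env</code> is an arbitrary directory name, and <code>some-package</code> is a placeholder for whatever you actually need:</p>
<pre class="shell"><code>$ pyvenv my-env                 # or: virtualenv my-env, or: python3 -m venv my-env
$ source my-env/bin/activate    # packages now install into my-env only
(my-env) $ pip install some-package
(my-env) $ deactivate           # back to the system environment</code></pre>
<p>While the environment is active, <code>python</code> and <code>pip</code> both resolve to the copies inside <code>my-env</code>, so nothing you install there touches any other project.</p>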
</section>
<section id="pyenv-with-virtualenv" class="level3">
<h3>pyenv with virtualenv</h3>
<p>I know, the similarity of names for <em>pyenv</em> and <em>pyvenv</em> is unfortunate. If it helps, you can call the latter as <code>venv</code> rather than <code>pyvenv</code>. But, more importantly, one of the areas where <em>pyenv</em> is much better than <em>homebrew</em> is its support for managing virtual environments. Install <a href="https://github.com/yyuu/pyenv-virtualenv"><em>pyenv-virtualenv</em></a>:</p>
<pre class="shell"><code>$ brew install pyenv-virtualenv
$ echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.profile</code></pre>
<p>Now you’re off to the races: you’ll never have to type <code>pyvenv &lt;path to a virtual environment&gt;</code>, because instead you can just type <code>pyenv virtualenv &lt;version&gt; &lt;name&gt;</code> and <em>pyenv</em> will take care of setting it up for you. Even better: all the nice tricks I listed above about setting directory-specific and global preferences for which Python version to use work equally well with virtual environments managed via <em>pyenv</em>. In other words, you can do something like this:</p>
<pre class="shell"><code>$ pyenv install 2.7.9
$ pyenv install 3.4.3
$ pyenv global 3.4.3
$ pyenv virtualenv 2.7.9 my-virtual-environment
$ cd path/to/my/project
$ pyenv local my-virtual-environment</code></pre>
<p>The <code>.python-version</code> file will contain <code>my-virtual-environment</code>. The Python version will be 2.7.9. The environment will be isolated, just as if you had run <code>pyvenv</code> to set up a virtual environment. Everything works together beautifully! Moreover, you can easily reuse virtual environments this way, because you can set the <code>local</code> value in more than one place. For example, I use the same virtual environment for this site and <a href="//www.winningslowly.org/" title="A podcast: taking the long view on technology, religion, ethics, and art.">Winning Slowly</a>, because they have slightly different site configurations but all the same Python dependencies. Creating it was simple:</p>
<pre class="shell"><code>$ pyenv install 3.4.3
$ pyenv virtualenv 3.4.3 pelican
$ cd ~/Sites/chriskrycho.com
$ pyenv local pelican
$ cd ~/Sites/winningslowly.org
$ pyenv local pelican</code></pre>
<p>I named the virtual environment after <a href="//docs.getpelican.com/">the tool I use to generate the sites</a>, and reused it in both sites. Both now have a <code>.python-version</code> file that reads <code>pelican</code>. Now, anytime I’m working anywhere under <code>~/Sites/chriskrycho.com</code> <em>or</em> <code>~/Sites/winningslowly.org</code>, I have the same tooling in place.</p>
</section>
</section>
<section id="summary" class="level2">
<h2>Summary</h2>
<p>The combination of <em>pip</em>, <em>pyenv</em>, and virtual environments makes managing Python environments a simple, straightforward process these days:</p>
<ul>
<li>Install Python versions with <em>pyenv</em>.</li>
<li>Install Python packages with <em>pip</em>.</li>
<li>Set up virtual environments with <em>pyenv-virtualenv</em>.</li>
</ul>
<p>If you stick to those basic rules, Python itself shouldn’t give you any trouble at all.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>All the usual caveats apply, of course: this may or may not work well for you; it’s just what works for me, and I make no claim or warranty on the tools below—they’re working well for <em>me</em>, but I don’t maintain them, so if they break, please tell the people who maintain them! Also, because I do nearly all my development on a Mac (I test on Windows, but that’s it), the following is necessarily <em>fairly</em> specific to OS X. You can readily adapt most of it to Linux, though, or even to a <a href="https://www.cygwin.com">Cygwin</a> install on Windows—I do just that when I have cause. But my main tool is a Mac, so that’s what I’ve specialized for.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Lucky me, I know!<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Yes, I know that those are wrappers around Debian and Arch, and I know about <em>apt-get</em> and <em>rpm</em>. No, that information isn’t especially relevant for the rest of this post.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>For example, if you upgrade your Python installation using homebrew and then clean up the old version (e.g., by running the typical <code>brew update && brew upgrade && brew cleanup</code> sequence)—say, from 3.4.2 to 3.4.3—and you have virtual environments which depended on 3.4.2… well, you’re in a bad spot now. A <em>very</em> bad spot. Have fun getting back to a working state!<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>You can of course drop it directly in <code>.zshrc</code> or <code>.bash_profile</code> or wherever else. <a href="//github.com/chriskrycho/profile">My setup</a> puts all common handling in <code>.profile</code> and runs <code>source .profile</code> as the first action in any other shell configurations.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Tolle Lege!2015-05-01T10:30:00-04:002015-05-01T10:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-05-01:/2015/tolle-lege.htmlDesigning Readable Bibles with Digital Typography (BibleTech 2015 conference talk)
<p>I was delighted to be able to give a talk at <a href="http://bibletechconference.com/">BibleTech</a> this year. I spoke for almost exactly 40 minutes on the subject of digital typography, with a focus on some of the nitty-gritty details that make texts readable… or not. Here is the screen capture and audio from the talk!</p>
<div class="iframe-wrapper four-to-three">
<iframe src="https://player.vimeo.com/video/126655499" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen>
</iframe>
</div>
<p>You can also access the <a href="/talks/bibletech2015/">slides</a> whenever you like (though note that they were designed to be complements to the talk, <em>not</em> the content of the talk, and as such they elide a great deal of the content).</p>
Lessons Learned2015-04-12T13:49:00-04:002015-04-12T13:49:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-04-12:/2015/lessons-learned.html<p>Since mid July 2014, I have been working on a complete redesign and re-build of <a href="//holybible.com">HolyBible.com</a>. The good folks at <a href="//prts.edu">Puritan Reformed Theological Seminary</a> who own the site wanted to replace its previous content with a Bible reading tool. While there’s still a lot to wrap up, the project is <em>nearing</em> its conclusion, and I thought I’d note a few things I’ve learned (in some cases, learned <em>again</em>) along the way. I want to say up front, lest these be taken the wrong way: I’m extremely proud of the work I’ve done, and the application I’ve delivered <em>does</em> work to the specifications I was hired to meet. More than that, it does it well. But, of course, it could do it <em>better</em>. The following thoughts are therefore not, “How I failed” but rather “How I will do this <em>even better</em> next time around.”</p>
<ol type="1">
<li><p><em>Single page apps are great, but not always the right choice.</em> I made the decision, based on my expectations and understandings of what I would need, to develop the site as a single-page web application. This was a mistake. Not the worst mistake ever: it has its upsides, including performance <em>once the app spins up</em>, but for the kind of content I have here, I would take a different tack today. Better in this case to deliver static content and <em>update</em> it dynamically as appropriate than to try to load all the content dynamically every time.</p>
<p>At a technical level, that would probably mean supplementing standard HTML with <a href="//backbonejs.org">Backbone</a> instead of developing it as a single-page app in <a href="//angularjs.org">Angular</a>. For the backend, while I did it in Node.js and that would work fine, I’d probably do a straight Django app (especially with a few of the goals I learned about <em>after</em> the project was well along in development).</p></li>
<li><p><em>Progressive enhancement or graceful degradation are hard in web applications, but they still matter.</em> In the past, I’ve always taken a hard line on making sure things either degrade gracefully or are simply enhanced by JavaScript content. In the architecture decisions I made for this app, I failed to take that into account (largely because I thought it would just <em>need</em> to work as a web app, but see above). I regret that enormously at this point; it would be much better in this particular case to have content available even if the additional functionality doesn’t work. Even if you <em>are</em> doing something where you are building an <em>app</em>, finding ways to make it work on poor connections, older browsers, etc. matters. I’m still thinking a <em>lot</em> about the best way to do this in the future.</p></li>
<li><p><em>More popular doesn’t mean better.</em> Angular has a ton of traction and uptake, and that was deceptive early on. I won’t so easily be fooled in the future. Angular is so very popular in part because Google can put serious money behind its development—and its marketing. But it’s <em>not</em> the best for many applications; if you’re not in the business of developing your own custom framework, it’s not even <em>close</em> to the best. Use Ember or Knockout or any number of other full-stack frameworks rather than a meta-framework.</p>
<p>How to avoid making that mistake? Well, for my part since then, I’ve learned to look not just as the <em>quantity</em> of material in a given community, but its <em>quality</em>. For example, <a href="//emberjs.com">Ember</a> has <em>incredible</em> documentation (far better than Angular’s), and they also have a much clearer vision and a more dependable approach to development (strict semantic versioning, etc.). Had I taken the time to read <em>both</em> sets of docs more carefully and think through the consequences of their designs more thoroughly, I could have recognized this before starting. Next time, I will do just that.</p>
<p>I will also look at the way the community behaves. The Ember community is <em>far</em> friendlier for newcomers from what I’ve seen than the Angular community—no slam meant on the Angular crowd, but the Ember folks are just doing that really well. That matters, too. (I can’t speak for other communities, of course; these are just the groups I’ve watched the most.)</p>
<p>All in all, Ember would have been the better fit between these two (even though, as noted above, it also wouldn’t have been the <em>best</em> fit).</p></li>
<li><p><em>Unit tests really are the best.</em> I did a vast majority of this project with unit tests—the first time I’ve ever been able to do that for a whole project. In other projects, I’ve been able to do it for parts, but never this much. It saved my bacon a <em>lot</em>. Where I got in a hurry and felt like I didn’t have time to write the tests, I (inevitably and predictably!) ended up spending a lot of time chasing down hard-to-isolate bugs—time I could have avoided by writing well-tested (and therefore better-factored) code in the first place. Lesson learned <em>very</em> thoroughly. Server- and client-side unit tests are <em>really</em> good. They’re also sometimes <em>hard</em>; getting mocks set up correctly for dealing with databases, etc. can take a while. That difficulty pays for itself, though.</p></li>
<li><p><em>Unit tests <strong>really</strong> don’t replace API documentation.</em> I have seen people advocate test-driven-development as a way of obviating the need to do major documentation of an API. This is, in a word, ridiculous. Having to read unit tests if you want to remember how you structured an API call is a pain in the neck. Don’t believe it. Design your API and document it, <em>then</em> do test-driven development against that contract.</p></li>
<li><p><em>Sometimes ‘good enough’ is enough.</em> There is always more to be done, and inevitably you can see a thousand things that could be improved. But ‘good’ shipping code is far more valuable than ‘perfect’ code that never ships. You should never ship <em>bad</em> code, but sometimes you do have to recognize ‘good enough’ and push it out the door.</p></li>
<li><p><em>Full-stack development is fun, but it’s also really hard.</em> I wrote every scrap of code in HolyBible.com proper (though of course it relies on a lot of third-party code). It was very, very difficult to manage that all by myself; it’s a lot to hold in one’s head. (One of the reasons I chose Node was because keeping my implementation and testing all in one language helped reduce that load somewhat.) Would I do it again? Sure. But very much chastened about the difficulties involved. It has been enormously rewarding, and I <em>like</em> being a full-stack developer. But it’s a lot of work, and now I know more clearly just how much.</p></li>
</ol>
<p>I could say a great deal more about the technical side of things especially, but my biggest takeaway here is that a lot of the hardest and most important work in developing software has nothing to do with the code itself. Architecture and approach shape <em>far</em> more than the implementation details (even if those details still matter an awful lot). And popularity is not at all the same as either <em>quality</em> or (especially) <em>suitability for a given task</em>. In the future, I will be better equipped for the necessary kinds of evaluation, and will hopefully make still better decisions accordingly.</p>
The NSA wants tech companies to give it 'front door' access to encrypted data2015-04-12T13:16:00-04:002015-04-12T13:16:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-04-12:/2015/the-nsa-wants-tech-companies-to-give-it-front-door-access-to-encrypted-data.html<p>The Verge:</p>
<blockquote>
<p>“I don’t want a back door,” Rogers said. “I want a front door. And I want the front door to have multiple locks. Big locks….”</p>
<p>Rogers suggests the adoption of “front door” access will allow for essential security measures while keeping data safe from hackers or an outside attack. But opponents of the idea note that even broken into pieces, a master digital key creates security flaws. “There’s no way to do this where you don’t have unintentional vulnerabilities,” Donna Dodson, chief cybersecurity adviser at the Commerce Department’s National Institute of Standards and Technologies, told the Post.</p>
</blockquote>
<p>That last bit is absolutely true. The government basically wants to make sure it can spy on anyone, any time it wants. That’s a bad, bad plan.</p>
Unsurprisingly, In Flux2015-04-08T16:05:00-04:002015-08-28T19:50:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-04-08:/2015/unsurprisingly-in-flux.htmlThe state of JavaScript frameworks today is a scale, really, from not-at-all-monolithic to totally-monolithic, in roughly this order: Backbone – React & Angular – Ember – Meteor.
<p><i class="editorial">This started as a <a href="https://alpha.app.net/chriskrycho/post/57102562">series of posts</a> on App.net. I <a href="http://v4.chriskrycho.com/2014/a-few-theses-on-blogging.html">resolved</a> a while ago that if I was tempted to do that, I should just write a blog post instead. I failed at that resolution, but at a friend’s <a href="https://alpha.app.net/jws/post/57108281">suggestion</a>, am adapting it into a blog post anyway. You can see the posts that prompted it <a href="https://alpha.app.net/keita/post/57096585">here</a> and <a href="https://alpha.app.net/jws/post/57096838">here</a>.</i></p>
<hr />
<ul>
<li><p>The state of JavaScript frameworks today is a scale, really, from not-at-all-monolithic to totally-monolithic, in roughly this order: Backbone – React & Angular – Ember – Meteor.</p></li>
<li><p>Backbone and related library Underscore are really collections of common JS tools and patterns you can use to write apps, but they’re not <em>frameworks</em>, per se. You’ll write all your own boilerplate there.</p></li>
<li><p>React and Angular supply much <em>more</em> of the functionality, but Angular is a “meta-framework” that aims to do <em>some</em> boilerplate but let you construct your own custom app framework.</p></li>
<li><p>Angular is very powerful, but it’s kind of like Git: wires are exposed; you have to understand a <em>lot</em> about the internals to get it to do what you want. Its routing functionality is pretty limited out of the box, too—so much so that there’s a near-standard third-party router.</p></li>
<li><p>React, as I understand it, supplies a paradigm and associated tools oriented primarily at view state management, though with capabilities via extensions for routing, etc. These tools are <em>extremely</em> powerful for performance in particular. It’s not a full framework, and the docs expressly note that you can <em>just</em> use React for the view layer with other tools if you want.</p></li>
<li><p>In any case, Angular and React do <em>different</em> things from each other, but both do substantially more than Backbone.</p></li>
<li><p>Ember is a full framework, strongly emphasizing shared conventions (with a lot of common developers from Rails). It’s perhaps less adaptable than React or Angular, but is much more full-featured; you have very little boilerplate to do.</p></li>
<li><p>Meteor is like Ember, but does server-side Node as well as client-side stuff, with the goal being to minimize code duplication, sharing assets as much as possible.</p></li>
<li><p>Of all of those, Ember has easily (easily!) the best-explained roadmap, most articulate leadership, and best development path. They are also aggressively adopting the best features of other frameworks wherever it makes sense.</p></li>
<li><p>Angular is currently in flux, as Google has announced Angular 2.0 will be basically a completely different framework; there will be <em>no</em> direct migration path for Angular 1.x apps to Angular 2.0+. Total rewrite required.</p></li>
<li><p>Ember uses a steady 6-week release schedule with very careful regression testing and semantic versioning, with clear deprecation notices and upgrade paths, and is therefore both rapidly iterating <em>and</em> relatively stable for use.</p></li>
<li><p>If you just need a set of tools for enhancing functionality on otherwise relatively static pages, Backbone+Underscore is a great combo. If you already have a bunch of things in place but want a dedicated view layer, React is good.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p></li>
<li><p>If you’re writing a new, full-on web <em>application</em> (SPA, or organized in whatever other way), I think Ember is the very clear winner at this point. I have good confidence in their leadership and they’re firing on all cylinders.</p></li>
</ul>
<p>Regarding Angular, <a href="https://alpha.app.net/mikehoss">@mikehoss</a> <a href="https://alpha.app.net/mikehoss/post/57105656">posted</a>:</p>
<blockquote>
<p>For the record they are doing that to make it more mobile-friendly. The Ang1 has abysmal performance on mobile. Besides a time machine, this maybe the best option. And Miško is a bit of a jerk.</p>
</blockquote>
<p>I can’t speak to his comment about Miško (Miško Hevery, one of the leads on AngularJS), but I agree about Angular itself: the rewrite needs to happen. Angular 1.x is a mess—as are its docs. It’s just not a good time to be using 1.x for any new projects.</p>
<p>I’ll add to these points that I’ve used Angular for the last 9 months on HolyBible.com development. As I noted: the documentation is pretty rough, and in a lot of cases you really do have to understand what the framework is doing and how before you can get it to do the things you want. This is, in one sense, exactly the <em>opposite</em> of what I’m looking for in a framework—but it makes sense given Angular’s goal of being a meta-framework.</p>
<p>Rather like Git, though, which was originally going to be infrastructure for version control systems which would have their own interface, but eventually just had a “good enough” interface that we’re all now stuck with, Angular is being used <em>as</em> a framework, not just as a <em>meta-framework</em>, and it’s unsurprisingly not great for that.</p>
<hr />
<p><i class="editorial">Take this for what it’s worth: not the final word (by a long stretch) on JavaScript frameworks, but rather the perspective of one guy who notably <em>hasn’t used all of the frameworks</em>, but has spent some time looking at them. Moreover, I haven’t particularly edited this; it’s more a summary in the kind of short-form posts that I originally created than a detailed analysis. The only things I’ve done are expand some of the notes on Angular and React, and add the footnote on React.</i></p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I <em>really</em> don’t know a ton about React, but I do think a lot of what I do know about it is cool from a programming perspective. From a designer perspective, however, it’s a bit of a pain: React’s “JSX” domain-specific language is <em>much</em> less friendly to designers than standard HTML, and therefore than either Ember or Angular, both of which implement their templating via HTML templating languages. There’s a substantial tradeoff there: React’s model is interesting not only academically but in practice because of the performance results it produces. It’s worth noting, though, that others have recognized this and are adopting it to varying degrees; notably, Ember is incorporating the idea of minimizing changes to the DOM by keeping track of state and updating only differences, rather than refreshing the whole tree, in the new rendering engine (HTMLBars) it has been rolling out over the past several releases and will continue rolling out over the next several.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
The New Macbook2015-03-13T08:00:00-04:002015-03-13T08:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2015-03-13:/2015/the-new-macbook.html<p>I have seen and heard lots of discussion of the <a href="http://www.apple.com/macbook/">new Macbook</a> this week, and have been thinking about its appeal and Apple’s strategy a bit along the way. At first I was extremely skeptical of the only-one-port approach, but the more I’ve thought about it, the more sense it makes to me. Why? <em>Market segmentation.</em></p>
<p>This is a MacBook, not a MacBook Pro. <em>I</em> need more ports than this. But <a href="http://jaimiekrycho.com/">Jaimie</a>? I don’t remember the last time I saw her plug anything into the machine besides its power cord. This is a MacBook for ordinary users, not a machine for power-users. Now, I still think that the loss of MagSafe is a bit sad; it has saved us more than once (especially with young children in the house). But in terms of the needs of ordinary users, a single port that <em>can</em> double as video out or USB input really is perfect.</p>
<p>In the meantime, it lets Apple cleanly differentiate between its MacBook and MacBook Pro lines. If you need the ports for expandability—because you’re a power user—you get a Pro. If you don’t, you get the MacBook. The tradeoffs with CPU make sense here, too: a computer that performs about like a 2012 MacBook Air would not be my favorite for development work. But for the writing work that Jaimie does? Again, the performance levels there are perfectly reasonable. It’ll do everything she needs, and do it <em>well</em>. Throw in the retina screen, and it’ll be really nice for her purposes.</p>
<p>In fact, I fully expect that we’ll end up getting her a 2nd or 3rd generation machine when we need to replace her current (a 2010 white MacBook) sometime in 2016–17.</p>
<p>So: better done than I initially thought, Apple.</p>
The Tablet “Productivity” Problem2015-02-25T21:35:00-05:002015-02-25T21:35:00-05:00Chris Krychotag:v4.chriskrycho.com,2015-02-25:/2015/the-tablet-productivity-problem.html<p>I’m thinking this one through out loud. I rather hope that I can take these nascent thoughts and turn them into a more fully-fledged essay over the course of this year, so if you have thoughts, I’d <em>love</em> to hear them. Hit me up on <a href="https://twitter.com/chriskrycho">Twitter</a>, <a href="https://alpha.app.net/chriskrycho">ADN</a>, or via <a href="mailto:[email protected]">email</a>. In the meantime… consider this a rough draft of a larger idea I’m working out.</p>
<hr />
<p>I saw a <a href="https://jasonirwin.ca/2015/02/24/whats-a-tablet-for/">post</a> by internet acquaintance Jason Irwin (<a href="https://alpha.app.net/matigo">@matigo</a> on ADN) yesterday about how he doesn’t find tablets especially compelling. There were quite a few things he said in the piece that did <em>not</em> resonate with me (and even a few suspicions I think are out and out incorrect), but generally on technology things like this I simply say: to each his own. So what follows is not so much a response to Jason’s post as some thoughts inspired by it.</p>
<p>Jason hit on a meme that’s been extremely common about tablets in general and iPads in particular: that you cannot do real work on them, only “consumption”. What is meant, nearly always, in such discussions, is that it is harder to write, develop software, and do other keyboard-intensive activities using an iPad than using a traditional laptop or desktop form factor. This is certainly true of <em>those</em> activities. Even so, at a few of the other activities Jason mentions, iPads do <em>very</em> well.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> But there is another, more important issue here.</p>
<p>We (quite reasonably) tend to define productivity purely in terms of output. In that sense, there are many categories for which the iPad is <em>not</em> as capable as a laptop. It is true, for example, that I do not do a lot of writing or software development on my iPad (a retina Mini)—I’ll start drafts of blog posts (part of this was dictated on my iPhone!), and occasionally log into and do administrative work on a server via SSH using an iPad client. That doesn’t mean it isn’t a valuable device for me, though. It simply means that “valuable” and “productive” are not synonyms.</p>
<p>Less helpfully, however, we also tend to define “value” in terms of “productivity”. People say that iPads are not valuable to them because they do not specifically allow them to be <em>productive</em> in the sense outlined above… but then, there are a great many valuable things that do not involve producing content. I use my iPad daily for a wide array of things, and find it enormously preferable to a laptop for nearly all of them. True, many of them are “consumptive”—but since when did that become a bad thing?</p>
<p>I recognize that the answer may seem obvious against the backdrop of a consumerist culture against which many an anti-consumerism critique has rightly been levied. But think about what we mean by “consumption” in this case. Nearly every day I use my iPad both for reading and for displaying (and for learning) music. To be sure, I also watch the occasional YouTube video, interact on Twitter and App.net, and so on. But the vast majority of what I do with an iPad is best summed up as <em>learning</em>.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> Whether it is reading through a few carefully selected RSS feeds in <a href="http://supertop.co/unread/">Unread</a>, reading the news in <a href="http://cir.ca">Circa</a>,<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> working through reading for school in iBooks,<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> perusing the <a href="http://emberjs.com">EmberJS</a> docs in preparation for a major project I’ll be starting with the tool in a few months, or reading the Bible every morning, I do a <em>lot</em> of reading on my iPad. Add in the fact that I use it for music as I practice piano, and I get an awful lot of mileage out of it every day.</p>
<p>Now, none of this negates Jason’s post in particular. If he doesn’t get that kind of traction out of an iPad, that’s no skin off my back. But I do think that the criticism of devices which are primarily “consumptive”—perhaps implied in Jason’s post; certainly stated outright in many other responses to the iPad—is misplaced. Whether simply for entertainment (joy in the arts is good!) or in reading (joy in the arts <em>or</em> in self-betterment is good!) or in the myriad other ways that people put the iPad<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> to use that are not making something new, there is value in the kinds of consumption done with it.</p>
<p>Are there valid critiques to be offered of tablets, including that certain kinds of consumptive habits are problematic? Of course. But reducing things to their productive utility is ethically flawed, and reducing human pursuits to their productive output even more so. It is just fine if <a href="https://alpha.app.net/matigo">@matigo</a> isn’t the sort of guy who loves an iPad. It is <em>not</em> fine if tech pundits want to slam the iPad and other tablets because they have a misanthropic view of human flourishing—and make no mistake, the utilitarian calculus so often levied against tablets is just that. People are more than what they make; their time is valuable even (and sometimes especially) when not producing anything tangible at all.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Notably, his point about keyboards that differ for different applications has been addressed quite thoroughly in that market! Most music apps ship with music-oriented interfaces, <em>not</em> traditional QWERTY-style keyboards.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Yes, in theory I could do that on another, less expensive device—but I had a Nexus 7 and nothing I have seen about Android tablets since then convinces me the Android tablet ecosystem has meaningfully improved in the last couple years. The experience factor in using things really does matter to me, and iOS gives me an enormously better experience in every category, even with its foibles and flaws, and nowhere more so than in the massively better app ecosystem.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>An app my friend <a href="http://independentclauses.com/">Stephen Carradini</a> and I like so much that we did a <a href="http://www.winningslowly.org/2015/01/take-my-money-now/">whole episode</a> of <a href="http://www.winningslowly.org/">Winning Slowly</a> on it!<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>I like ePUB way better than Kindle’s proprietary format, and haven’t gotten around to finding a replacement for <a href="http://readmill.com">Readmill</a> yet.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>And yes, with plenty of other tablets, too! If you’re a Microsoft Surface person, that’s splendid as well.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Facebook's "Security" Requirements2015-02-21T12:35:00-05:002015-02-21T12:35:00-05:00Chris Krychotag:v4.chriskrycho.com,2015-02-21:/2015/facebooks-security-requirements.html<p>I went to set up 2-step login (AKA 2-factor authentication, or what Facebook calls “Login Approvals”) on Facebook yesterday morning, and was greeted with this lovely message when I clicked “enable”:</p>
<blockquote>
<p>Your current Firefox settings might make it hard to use Login Approvals. It’s probably because:</p>
<ul>
<li>You sometimes clear …</li></ul></blockquote><p>I went to set up 2-step login (AKA 2-factor authentication, or what Facebook calls “Login Approvals”) on Facebook yesterday morning, and was greeted with this lovely message when I clicked “enable”:</p>
<blockquote>
<p>Your current Firefox settings might make it hard to use Login Approvals. It’s probably because:</p>
<ul>
<li>You sometimes clear your cookies.</li>
<li>Your browser is set to automatically clear cookies whenever it closes.</li>
<li>You use your browser’s “private browsing” or “incognito” mode.</li>
<li>You’re using a new browser.</li>
</ul>
<p>It may take a few days after fixing these issues before you will be able to enable Login Approvals. You also may need to log out and then log in again after fixing these settings for the changes to take effect.</p>
<p>Visit the Help Center for step-by-step directions on how to fix these settings.</p>
</blockquote>
<p>I use Firefox for the social media access I do online—and because I don’t like being tracked, I tell Firefox not to remember history and to delete cookies as soon as I close the browser, and I run <a href="https://github.com/gorhill/uBlock">μBlock</a><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> and <a href="https://disconnect.me/">Disconnect</a>.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>When you attempt to enable 2-step login, Facebook checks your security policy… and <em>will not let you turn it on</em> if your settings are like mine. They supply the message above, with no option to proceed anyway. Of course, there is no technical issue with using 2-step login with a browser configured this way. I use it for GitHub, Google, my domain registrar, and every other service with 2-step login.</p>
<p>Facebook probably has two motives here. The better one is user experience: it <em>would</em> be frustrating if you are a non-tech-savvy user who doesn’t understand the consequences of setting this given the browser settings I have. But of course, if they were primarily just concerned with that, they could give the warning and then let users say, “Go ahead; I know what I’m getting into.” The second, less obvious but almost certainly more important motive from Facebook’s point of view, is to discourage people from using a browser the way I do. They want to be able to monetize my Facebook use better, and this means not just my time on Facebook, but my time all over the web. Facebook wants to know what I’m looking at any time I’m surfing <em>anywhere</em> so that they can tailor their ads to me.</p>
<p>I’m not interested in being tracked that way.</p>
<p>Apparently, Facebook isn’t interested in letting people have actual, modern security unless they’re willing to be tracked that way.</p>
<p>We have a problem here.</p>
<p>As it turns out, of course, people like me aren’t particularly valuable customers to Facebook anyway, so they probably don’t mind the fact that they’re losing more and more of our time. But losing that time they are. My use of Facebook is diminishing at an ever-increasing rate, for countless little reasons like this, where Facebook’s ad-driven motivations push them to treat me poorly. Too bad for them.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>If anyone tells you that blocking ads is “stealing”, they’re talking nonsense. The Internet is built in such a way that if nothing else you can always just request the plain text version of a website, and that’s extremely important for many reasons, including accessibility. I <em>choose</em> to leave ads on for any number of sites I want to support, but at the end of the day it’s every publisher’s choice how they want to make money. If a newspaper supports itself with ads and coupons, I have every right to throw them in the trash without a glance; the same is true online.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Yes, I know this isn’t foolproof and I’m still being tracked. It’s impossible <em>not</em> to be tracked to some degree or another. What I am doing here is <em>decreasing</em> the degree to which companies can track me.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Growing Up Together2014-11-15T00:30:00-05:002014-11-15T00:30:00-05:00Chris Krychotag:v4.chriskrycho.com,2014-11-15:/2014/growing-up-together.html<p>A few years ago, you might have caught me in a grumpy moment grousing about JavaScript. I distinctly did <em>not</em> like writing it. Every time I sat down to deal with it, I found myself in a tangled mess of plain JavaScript, jQuery, and DOM manipulations that inevitably left me …</p><p>A few years ago, you might have caught me in a grumpy moment grousing about JavaScript. I distinctly did <em>not</em> like writing it. Every time I sat down to deal with it, I found myself in a tangled mess of plain JavaScript, jQuery, and DOM manipulations that inevitably left me tearing my hair out.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I found it difficult to write in the first place, and even harder to maintain in the long run. I could not come up with good ways to organize it, especially because so much of what I was doing was so thoroughly <em>ad hoc</em> in nature. Cobble this together over here; scrounge together those things over there; hope nothing collides in the middle.</p>
<p>In the last four months, I have written several thousand lines of JavaScript, and I have <em>loved</em> it.</p>
<p>For my latest major project, relaunching <a href="https://holybible.com">HolyBible.com</a>, I wrote the front end in <a href="https://angularjs.org">AngularJS</a> and the back end as an <a href="http://expressjs.com">Express</a> app (the most popular <a href="http://nodejs.org">NodeJS</a> web framework). I’ve written gobs of tests in <a href="http://jasmine.github.io">Jasmine</a> (using <a href="https://github.com/mhevery/jasmine-node">jasmine-node</a> for server-side tests) and drawn on tons of other open-source packages.</p>
<p>And I have <em>loved</em> it.</p>
<p>A small example: a moment ago, looking up the link for Jasmine, I noted that the latest version released today. My response was, “Ooh—cool!”<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>What changed? Well, mostly I changed, but also JavaScript changed a bit. We both grew up over the last four years. On the JavaScript side of things, a lot of good design patterns and tools have come into play in that span. I’m sure there were plenty of good, disciplined web developers writing clear, careful, well-organized client-side JavaScript four years ago. But in the interval, that kind of JavaScript got a lot more prominent, in part because it has had help from the rapid rise of server-side JavaScript in the form of Node.js and its flourishing ecosystem of components and tools. Build tools like <a href="http://browserify.org">Browserify</a> and development tools like <a href="http://livereload.com">LiveReload</a> and <a href="https://incident57.com/codekit/">Codekit</a> have combined with best practices learned from those long years of jQuery/DOM-manipulation hell so that these days, good JavaScript is a lot like good programming in any other language: highly modular, carefully designed, and well-organized.</p>
<p>In the same period of time, I have matured enormously as a developer (just enough to see how far I still have to go, of course). At the point where I most hated JavaScript, I also really struggled to see the utility of callbacks. Frankly, it took me the better part of a month just to get my head around them—most of the tutorials out there just assumed you understood them already, and, well: I didn’t. Functions as first-class members of a language were new to me at that point. Fast-forward through several years of full-time Python development, lots of time spent reading about software development and some harder computer science concepts, and my perspective on JavaScript has shifted more than a little. Closures are beautiful, wonderful things now. Functions as arguments to other functions are delightful and extremely expressive. Prototypal inheritance—trip me up though it sometimes still does—is a fascinating variation on the idea of inheritance and one that I think I like rather better than classical inheritance.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>There are still things I don’t love about JavaScript. Its syntax owes far too much to the C family of languages to make me happy; I quite like the way that CoffeeScript borrows from Python (white-space-delimited blocks, use of equality words like <code>is</code> and boolean operators like <code>and</code> rather than <code>===</code> and <code>&&</code> respectively, etc.). And I am looking forward to a number of features coming in the next version of JavaScript—especially generators and the <code>const</code> and <code>let</code> keywords, which will allow for <em>much</em> saner patterns.</p>
<p>But all of that is simply to say that I am now starting to know JavaScript enough to know that its <em>real</em> issues aren’t the surface-level differences from the other languages with which I’m familiar. They’re not even the warts I noted here. They’re things like the mix of classical and prototypal inheritance in the way the language keywords and object instantiation work. But I don’t mind those. Every language has tradeoffs. Python’s support for lambdas is pretty minimal, despite the utility of anonymous functions, for example. But I <em>like</em> the tradeoffs JavaScript makes.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<p>In other words, I discovered the same thing so many other people have over the last few years: JavaScript isn’t just a good choice for utilitarian reasons. Beneath that messy exterior is a gem of a language. I’m having a lot of fun with it.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Thus the early balding starting by my temples.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>My wife’s bemused response: “Is that <em>another</em> language?” Take that as you will.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>The couple weeks I got to spend <a href="http://v4.chriskrycho.com/2014/a-little-crazy.html">playing</a> with <a href="http://iolanguage.org">Io</a> certainly helped! Io’s prototypal inheritance is semantically “purer” than JavaScript’s, which is quite an improvement in my view. JavaScript’s <code>new</code> keyword and the pseudo-classical object pattern it brings along can go rot in a bog.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Truth be told, I like them even better from the perspective of CoffeeScript, which hides a lot of the rough edges of JavaScript and, as noted above, brings in quite a few things I like from Python. For my part, I intend to write as much CoffeeScript as possible going forward.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
A Ridiculous Situation2014-11-07T21:00:00-05:002014-11-07T21:00:00-05:00Chris Krychotag:v4.chriskrycho.com,2014-11-07:/2014/a-ridiculous-situation.htmlAn example of just how deep the rabbit-hole can go.<p>One of the pieces of code I’m maintaining has an <em>absurd</em> situation in its build structure—honestly, I’m not sure how it ever compiled. For simplicity’s sake, let us assume the four following files:</p>
<ul>
<li><code>main.c</code></li>
<li><code>secondary.c</code></li>
<li><code>writer.h</code></li>
<li><code>calculator.h</code></li>
</ul>
<p>The project has many more files than this, of course, but these are the important ones for demonstrating this particular piece of insanity (which shows up <em>many</em> places in the codebase).</p>
<p>I’m reproducing here some dummy code representing an <em>actual set of relationships in the codebase</em>. The functions and module names have been changed; the relationships between the pieces of code have not.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> When I started trying to build the program that included what I am representing as <code>main.c</code> below, this is the basic structure I found:</p>
<section id="main.cpp" class="level3">
<h3><code>main.c</code></h3>
<p>This is the main module of the program. In the actual code in which I found this particular morass, it was actually code generated by the UI builder in Visual Studio 6<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> and then turned into an unholy mess by a developer whose idea of good programming involved coupling the various parts of the code as tightly as possible.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<pre class="c"><code>#include "calculator.h"
#include "secondary.h"

int a = 0, b = 0;

int addNumbers(int a, int b) {
    return a + b;
}

void doBadThingsWithGlobals(int *someNumber) {
    a = 6;
    *someNumber = 5;
}

#include "writer.h"

void main() {
    a = 3;
    doBadThingsWithGlobals(&b);
    addNumbers(a, b);
    doStuffWithNumbers(a, b);
    subtractNumbers(b, a);
}

// More insanity follows...</code></pre>
<p>Yes, the main function and the <code>doBadThingsWithGlobals</code> function are both modifying global state, and yes, there is an include statement midway down through the module. (Just wait till you see what it does.)</p>
</section>
<section id="secondary" class="level3">
<h3>“secondary”</h3>
<p>Here is a secondary module which has been somewhat cleaned up. It has normal relationships between header and source files, and includes all its dependency headers at the top of the file. It has a header which defines the public API for the module, and that even has inclusion guards on it.</p>
<section id="secondary.h" class="level4">
<h4><code>secondary.h</code></h4>
<pre class="c"><code>#ifndef SECONDARY_H
#define SECONDARY_H

int doStuffWithNumbers();

#endif /* SECONDARY_H */</code></pre>
</section>
<section id="secondary.c" class="level4">
<h4><code>secondary.c</code></h4>
<p>The <code>doStuffWithNumbers</code> function here calls <code>addNumbers</code>:</p>
<pre class="c"><code>#include "secondary.h"
#include "calculator.h"

int doStuffWithNumbers(int x, int y) {
    return addNumbers(x, y);
}</code></pre>
<p><em>But wait!</em> you say, <em>That function isn’t defined here!</em> Ah, and you would be right, except that it doesn’t refer to the <code>addNumbers</code> function in <code>main.c</code>. It refers to a function implementation in <code>calculator.h</code>.</p>
</section>
</section>
<section id="calculator.h" class="level3">
<h3><code>calculator.h</code></h3>
<pre class="c"><code>int addNumbers(int p, int q) {
    return p + q;
}

int subtractNumbers(int r, int s) {
    return r - s;
}</code></pre>
<p>Strangely, this <code>addNumbers</code> function is identical to the one in <code>main.c</code>. Even <em>more</em> strangely, it is defined—not merely declared, actually defined—in the header file! Nor is this the only such function. Look at the details of <code>writer.h</code>, which was mysteriously included above in the middle of the main module.</p>
</section>
<section id="writer.h" class="level3">
<h3><code>writer.h</code></h3>
<pre class="c"><code>void writeStuff() {
    fprintf(stdout, "a: %d, b: %d", a, b);
}</code></pre>
<p>Once again, we have a full-fledged implementation in the header file. Why, you ask? Presumably because the developer responsible for writing this code never quite got his head around how C’s build system works. The entirety of one of the central components of this software—an element that in any normal build would be a common library—was a single, approximately 2,000-line <em>header file</em>. (Say hello to <code>calculator.h</code> up there; that’s what I’m abstracting away for this example.)<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></p>
<p>Worse: it is printing the values of <code>a</code> and <code>b</code>, and no, I am not skipping some part of <code>writer.h</code>. It is getting those from <code>main.c</code>, because it was included after they were defined, and the build process essentially drops this header inline into <code>main.c</code> before compilation.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> So here we have a header file with the implementation of a given piece of code, included in a specific location and defined in such a way that if you change where it is included, it will no longer function properly (since the variables will not have been defined!).</p>
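<p>To make that order dependence concrete, here is a compilable sketch—using the same dummy names as above, with <code>secondary.h</code> and the duplicate <code>addNumbers</code> elided—of roughly what the compiler sees for <code>main.c</code> once the preprocessor has pasted the headers in:</p>

```c
#include <stdio.h>

/* A sketch (dummy names, not the real codebase) of roughly what the
   compiler sees for main.c after preprocessing: each #include is
   replaced by the literal text of the header, in source order. */

/* ...expanded from calculator.h, included at the top of main.c... */
int addNumbers(int p, int q) { return p + q; }
int subtractNumbers(int r, int s) { return r - s; }

/* main.c's own globals come next; note that main.c's own addNumbers
   would now be a second definition of that name in this same
   translation unit. */
int a = 0, b = 0;

/* ...expanded from writer.h, which main.c includes only *after* the
   globals exist -- the only reason writeStuff can "see" a and b... */
void writeStuff(void) {
    fprintf(stdout, "a: %d, b: %d\n", a, b);
}
```

<p>Move the <code>#include "writer.h"</code> line above the globals and the pasted-in <code>writeStuff</code> refers to names that do not exist yet, and the build breaks—exactly the fragility described above.</p>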
<p>Worse still, there are conflicting definitions for one of the functions used in <code>main.c</code>, and because of its dependency on <em>other</em> functions in <code>calculator.h</code> (e.g. <code>subtractNumbers</code> in this mock-up), it cannot be removed! Moreover, given the many places <code>calculator.h</code> is referenced throughout the code base, it is non-trivial to refactor it.<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a></p>
<p>If this sounds insane… that’s because it is.</p>
<p>If you’re curious how I dealt with it, well… I renamed the <code>addNumbers()</code> function in <code>main.c</code> to <code>_addNumbers()</code> and put a loud, angry <code>TODO</code> on it for the current release, because the only way to fix it is to refactor this whole giant mess.</p>
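<p>For reference, the conventional shape that refactor is aiming at—a sketch with the same dummy names, collapsed into one listing here, though in reality these would be two separate files—is declarations in the header, definitions in exactly one source file:</p>

```c
/* calculator.h -- declarations only, with a proper inclusion guard */
#ifndef CALCULATOR_H
#define CALCULATOR_H

int addNumbers(int p, int q);
int subtractNumbers(int r, int s);

#endif /* CALCULATOR_H */

/* calculator.c -- the one translation unit that defines them; every
   other file just #includes calculator.h and links against this
   object file, so each function has exactly one definition
   program-wide. */
int addNumbers(int p, int q) { return p + q; }
int subtractNumbers(int r, int s) { return r - s; }
```

<p>With that split, <code>main.c</code> and <code>secondary.c</code> can both include <code>calculator.h</code> freely, no definitions are duplicated, and the order of the include statements stops mattering.</p>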
<p>The takeaway of the story, if there is one, is that people will do crazier, weirder, worse things than you can possibly imagine when they don’t understand the tools they are using and just hack at them till they can make them work. The moral of the story? I’m not sure. Run away from crazy code like this? Be prepared to spend your life refactoring?</p>
<p>How about: try desperately <em>not</em> to leave this kind of thing for the person following you.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>That’s actually not <em>wholly</em> true, because these pieces of code are also duplicated in numerous places throughout the codebase. We’ve eliminated as many as possible at present… but not all of them, courtesy of the crazy dependency chains that exist. Toss in a dependency on Visual Studio 6 for some of those components, and, well… suffice it to say that we’re just happy there are only two versions floating around instead of the seven that were present when I started working with this codebase two and a half years ago.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Yes, <a href="http://en.wikipedia.org/wiki/Microsoft_Visual_Studio#Visual_Studio_6.0_.281998.29"><em>that</em></a> Visual Studio 6. The one from 1998. Yes, that’s insane. No, we haven’t managed to get rid of it yet, though we’re close. So close.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>I am not joking. Multi-thousand line functions constituting the entirety of a program are not just <em>normal</em>, they are pretty much the only way that programmer ever wrote. When you see the code samples below, you will see why: someone was lacking an understanding of C’s build system.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Also, that’s the piece of code of which I found seven different versions in various places when I started. Seven!<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>I once ran into some code working on a different project for an entirely different client where there had been a strict 1,000-line limit to C source files, as part of an attempt to enforce some discipline in modularizing the code. Instead of embracing modularity, the developers just got in the habit of splitting the source file and adding <code>#include</code> statements at the end of each file so that they could just keep writing their non-modular code.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>I have tried. Twice. I’m hoping that the third time <em>will</em> be the charm.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Nailed It2014-10-22T22:15:00-04:002014-10-22T22:15:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-10-22:/2014/nailed-it.htmlSomeone leaked a copy of the trailer for Avengers: Age of Ultron… and Marvel didn’t throw a fit. Quite the opposite, in fact.<p>Yesterday, something rather remarkable happened. Someone leaked a copy of the trailer for <em>Avengers: Age of Ultron</em>… and Marvel, rather than throwing a hissy fit, just tweeted:</p>
<blockquote>
<p>Dammit, Hydra. (<a href="https://twitter.com/Marvel/status/525071656306626560">October 22, 7:50 PM EST</a>)</p>
</blockquote>
<p>Pitch perfect. It’s self-aware, self-<em>referential</em> in a funny way without being too clever-seeming or coming off like it’s trying too hard, and just a generally good response. The team could have fought it (though ultimately that would have just made things worse), but instead Marvel played its hand perfectly. The response was funny <em>and</em> demonstrated that the folks who work there actually understand how the internet works.</p>
<p>That alone would have been good enough to put Marvel in a league of its own when it comes to managing things <em>not</em> going the way hoped for. But (after what I’m sure was considerable back-room wrangling), they followed it up an hour and a half later with another, equally fantastic tweet:</p>
<blockquote>
<p>Here it is! Watch the <a href="https://twitter.com/Avengers">@Avengers</a>: <a href="https://twitter.com/hashtag/AgeofUltron?src=hash">#AgeofUltron</a> Teaser Trailer right NOW: <a href="http://youtu.be/tmeOjFno6Do" class="uri">http://youtu.be/tmeOjFno6Do</a> <a href="https://twitter.com/hashtag/Avengers?src=hash">#Avengers</a> (<a href="https://twitter.com/Marvel/status/525093857772318720">October 22, 9:18 PM EST</a>)</p>
</blockquote>
<p>Your average old-media company these days would have thrown a fit and made a stink about the release of their media. They would have done everything in their power to get the video taken down. Many companies <em>have</em> done just that under similar circumstances, aiming to get the trailer, snippets of the movie, music, or the like removed from the internet. But that simply isn’t how the internet works: it famously “treats censorship like damage and routes around it” (<a href="http://www.chemie.fu-berlin.de/outerspace/internet-article.html">John Gilmore</a>). Once a video is online, it’s online. Someone, somewhere, still has a copy of it and can put it back up. So rather than fight it… Marvel just rolled with it and made the best of the situation. They cracked a joke, went ahead and put the trailer out themselves, and earned general approval from the internet. Again.</p>
<p>Despite being a decades-old company, Marvel is clearly a new media company through and through at this point. They managed to dodge the <a href="http://www.economist.com/blogs/economist-explains/2013/04/economist-explains-what-streisand-effect">Streisand effect</a> quite nicely, turning what could have been an opportunity for hostility all around into a PR coup and a win that they couldn’t have scored on their own.</p>
<p>Other old (and new!) media companies, take note. <em>This</em> is the way you play the game. You recognize when the cat is out of the bag and you run with it. Own it. Make it your own somehow. Don’t let it own you. The internet is a big, crazy, chaotic place, and you can never hope to control it—nor even the narrative about you and your stuff, whatever that may be—like you might have been able to do twenty-five years ago. But that’s okay. If you can roll with the punches, you can still come out ahead, and you’ll look a little more human doing it. I call that winning.</p>
<p>(Go Marvel.)</p>
The Next Generation of Version Control2014-10-16T21:45:00-04:002014-10-20T07:25:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-10-16:/2014/next-gen-vcs.htmlThe current state of affairs with version control is a mess. Things we can get right next time around.
<p>The current state of affairs in version control systems is a mess. To be sure, software development is <em>far</em> better with <em>any</em> of the distributed version control systems in play—the three big ones being <a href="http://git-scm.com">Git</a>, <a href="http://mercurial.selenic.com">Mercurial</a> (<code>hg</code>), and <a href="http://bazaar.canonical.com/en/">Bazaar</a> (<code>bzr</code>), with a few other names like <a href="http://www.fossil-scm.org">Fossil</a> floating around the periphery—than it ever was in a centralized version control system. There are definitely a few downsides for people converting over from some standard centralized version control systems, notably the increased number of steps in play to accomplish the same tasks.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> But on the whole, the advantages of being able to commit locally, have multiple complete copies of the repository, and share work without touching a centralized server far outweigh any downsides compared to the old centralized system.</p>
<p>That being so, my opening statement remains true, I think: <em>The current state of affairs in version control is a mess.</em> Here is what I mean: of those three major players (Git, Hg, and Bazaar), each has significant downsides relative to the others. Git is famously complex (even arcane), with a user interface design philosophy closely matching the UI sensibilities of Linus Torvalds—which is to say, all the wires are exposed, and it is about as user-hostile as it could be.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> It often outperforms Hg or Bazaar, but it has quirks, to say the very least. Hg and Bazaar both have <em>much</em> better designed user interfaces. They also have saner defaults (especially before the arrival of Git 2.0), and they have better branching models and approaches to history.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> They have substantially better documentation—perhaps especially so with Bazaar, but with either one a user can understand how to use the tool <em>without having to understand the mechanics of the tool</em>. This is simply not the case with Git, and while I <em>enjoy</em> knowing the mechanics of Git because I find them interesting, <em>having</em> to understand the mechanics of a tool to be able to use it is a problem.</p>
<p>But the other systems have their downsides relative to Git, too. (I will focus on Hg because I have never used Bazaar beyond playing with it, though I have read a good bit of the documentation.) Mutable history in Git is valuable and useful at times; I have rewritten whole sequences of commits when I realized I committed the wrong things but hadn’t yet pushed.<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a> Being able to commit chunks instead of having to commit whole files at a go is good; I feel the lack of this every time I use Hg.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> (Needing to understand the <em>file system</em> that Git invented to make sure you do not inadvertently destroy your repository is… not so good.) A staging area is nice<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a> (even if <em>having</em> to stage everything manually can be a pain in the neck<a href="#fn7" class="footnote-ref" id="fnref7" role="doc-noteref"><sup>7</sup></a>).</p>
<p>In short, then, there was no clear winner for this generation. Each of the tools has significant upsides and downsides relative to the others. Git has become the <em>de facto</em> standard, but <em>not</em> because of its own superiority over the alternatives. Rather, it won because of other forces in the community. Mostly I mean <a href="https://github.com">GitHub</a>, which is a <em>fantastic</em> piece of software and easily the most significant driving factor in the wider adoption of Git as a tool. The competitors (<a href="https://bitbucket.org">Bitbucket</a> and <a href="https://launchpad.net">Launchpad</a>) are nowhere near the same level of sophistication or elegance, and they certainly have not managed to foster the sorts of community that GitHub has. The result has been wide adoption of Git, and a degree of Stockholm Syndrome among developers who have adopted it and concluded that the way Git works is the way a distributed version control system <em>should</em> work.</p>
<p>It is not. Git is complicated to use and in need of tools for managing its complexity; the same is true of Hg and Bazaar, though perhaps to a slightly lesser extent because of their saner branching models. This is what has given rise to the <a href="http://nvie.com/posts/a-successful-git-branching-model/">plethora</a> of <a href="http://scottchacon.com/2011/08/31/github-flow.html">different</a> formal <a href="https://about.gitlab.com/2014/09/29/gitlab-flow/">workflows</a> representing various attempts to manage that complexity (which have been <a href="https://bitbucket.org/yujiewu/hgflow/wiki/Home">applied</a> to other systems <a href="https://andy.mehalick.com/2011/12/24/an-introduction-to-hgflow">as well</a>). Managing branching, linking that workflow to issues, and supplying associated documentation for projects have also cropped up as closely associated tasks—thus the popularity of GitHub issues and Bitbucket wikis, not to mention <a href="http://www.fossil-scm.org">Fossil’s</a> integration of both into the DVCS tool itself. None of the tools handle differences between file systems very elegantly (and indeed, it took <em>years</em> for Git even to be usable on Windows). All of them especially struggle to manage symlinks and executable flags.</p>
<p>So there is an enormous opportunity for the <em>next</em> generation of tools. Git, Hg, and so on are huge steps forward for developers from CVS, Visual SourceSafe, or SVN. But they still have major weaknesses, and there are many things that not only can but should be better. In brief, I would love for the next-generation version control system to be:</p>
<ul>
<li>distributed (this is now a non-negotiable);</li>
<li>fast;</li>
<li>well-documented—<em>at least</em> as well as Hg is, and preferably as well as Bazaar is;</li>
<li>well-designed, which is to say having a user interface that is actually a user interface (like Hg’s) and not an extremely leaky abstraction around the mechanics;<a href="#fn8" class="footnote-ref" id="fnref8" role="doc-noteref"><sup>8</sup></a></li>
<li>fast;</li>
<li>file-system oriented, <em>not</em> diff-oriented: this is one of Git’s great strengths and the reason for a lot of its performance advantages;</li>
<li>extensible, with a good public API so that it is straightforward to add functionality like wikis, documentation, social interaction, and issue tracking in a way that actually integrates the tool;<a href="#fn9" class="footnote-ref" id="fnref9" role="doc-noteref"><sup>9</sup></a></li>
<li>and last but not least, truly cross-platform.</li>
</ul>
<p>That is a non-trivial task, but the first DVCS that manages to hit even a sizeable majority of these desires will gain a lot of traction in a hurry. The second generation of distributed version control has been good for us. The third could be magical.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>A point that was highlighted for me in a conversation a few months ago with my father, a programmer who has been using SVN for a <em>long</em> time and found the transition to Git distinctly less than wonderful.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Anyone who feels like arguing with me on this point should go spend five minutes laughing at the <a href="http://git-man-page-generator.lokaltog.net">fake man pages</a> instead.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Few things are as hotly debated as the relative merits of the different systems’ branching models and approaches to history. At the least, I can say that Hg and Bazaar’s branching models are <em>more to my taste</em>.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Yes, there are extensions that let you do this with Hg, but they are fragile at best in my experience, and substantially less capable than Git’s.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>Yes, I know about Hg’s record extension. No, it is <em>not</em> quite the same, especially because given the way it is implemented major GUI tools cannot support it without major chicanery.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>Yes, I know about Hg’s queue extension, too. There is a reason it is not turned on by default, and using it is substantially more arcane than Git’s staging area. Think about that for a minute.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn7" role="doc-endnote"><p>Yes, there is the <code>-a</code> flag. No, I do not want to have to remember it for every commit.<a href="#fnref7" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn8" role="doc-endnote"><p>Let’s be honest: if Git’s abstraction were a boat, it would sink. It’s just that leaky.<a href="#fnref8" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn9" role="doc-endnote"><p>GitHub does all of this quite well… but they have had to write heaps and gobs of software <em>around</em> Git to make it work.<a href="#fnref9" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Pushing Into C's Corner Cases2014-08-12T09:00:00-04:002014-08-12T09:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-08-12:/2014/pushing-into-cs-corner-cases.html<p>I’m working on a project that is all in C because of its long history and legacy. We’re slowly modernizing the codebase and writing all our new code in Python (using NumPy, C extensions, and so on for performance where necessary). Occasionally, I just want to bang my …</p><p>I’m working on a project that is all in C because of its long history and legacy. We’re slowly modernizing the codebase and writing all our new code in Python (using NumPy, C extensions, and so on for performance where necessary). Occasionally, I just want to bang my head against the wall because there are things we can do so simply in any modern language that you just can’t do in any straightforward way in C. For example, I have file writers that all work <em>exactly</em> the same way, with the single exception that the format string and the data that you put into it vary for each file.</p>
<p>In Python, this would be straightforward to handle with the class machinery: you could simply specify the format string in each inheriting class, define the data points to be supplied at the top of an overriding function, call the parent function with <code>super()</code>, and be done.</p>
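<p>A minimal sketch of that pattern (the class and method names here are hypothetical, not taken from the actual codebase): the base class owns the shared writing logic, and each subclass supplies only its format string and its data points.</p>

```python
class FileWriter:
    """Shared writing logic; subclasses supply format_string and data_points()."""

    format_string = ""  # overridden by each inheriting class

    def __init__(self, path):
        self.path = path

    def data_points(self):
        raise NotImplementedError

    def write(self):
        with open(self.path, "w") as f:
            f.write(self.format_string.format(*self.data_points()))


class TemperatureWriter(FileWriter):
    format_string = "{:.2f} degrees at step {}\n"

    def write(self):
        # Any per-writer setup goes here before deferring to the parent.
        super().write()

    def data_points(self):
        return (98.6, 42)
```

<p>Each new file type is then a handful of lines—exactly the kind of deduplication the post laments is impractical in pure C.</p>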
<p>To do something similar in pure C is nearly impossible. You can supply a format string with each function (or module, or however you separate out the code), and if you feel especially clever you could convert all your data types to strings and pass them as a list to be printed by the standard function. The net result would be <em>longer</em> and <em>less maintainable</em> than simply having a set of essentially-duplicate functions, though.</p>
Don't Be Rude2014-07-12T15:30:00-04:002014-07-12T15:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-07-12:/2014/dont-be-rude.htmlCustomer service matters. In this post, I explain how I bid a company adieu because they talked down to me. Don‘t be like them.<section id="note" class="level6">
<h6>Note</h6>
<p>I have left the original post here as I wrote it, but there is an important <a href="#edit-and-addendum">addendum</a> at the bottom of the post that you should make sure to read (and note in particular the <a href="#further-addendum">further addendum</a>).</p>
<p>This post came off as pretty critical of MarketCircle, and that <em>really</em> wasn’t my point. I wanted to use a bad experience I had with MarketCircle to illustrate a general principle, <em>not</em> to poke at any particular company. I did that poorly in this particular piece; for some follow-up on that see <a href="http://v4.chriskrycho.com/2014/i-wrote-it-wrong.html">this post</a> which I wrote later that day, analyzing how and why this piece so spectacularly failed to accomplish my desired goals.</p>
<p>In any case, I do not want this piece to turn people off of using MarketCircle’s software. I leave the unedited version below because I believe in having the intellectual integrity to own one’s mistakes. This was one of mine.</p>
<hr />
<p>The quickest way to make me bid your company or product farewell is to patronize me. Don’t talk down to me. Never treat me like anything but an adult. The moment you do, I am gone.</p>
<p>Given which: farewell <a href="https://www.marketcircle.com">MarketCircle</a>, and adieu <a href="https://www.marketcircle.com/billingspro/">Billings</a>.</p>
<hr />
<p>A story: When I started working as a freelance software developer on the side a few years ago,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I looked at my options for tracking time and invoicing clients. I eventually settled on <a href="https://www.marketcircle.com/billingspro/">Billings</a>, by <a href="https://www.marketcircle.com">MarketCircle</a>. It’s solid software: it is reliable, works well, and does everything I need it to, including tracking different clients and projects easily and sending them estimates or invoices. Best of all, from my perspective, it was a local app. You <em>could</em> sync with a server out in the cloud somewhere via Billings Pro, but you did not have to, and you could use the Mac-native application, not some web app out there. Last but not least, it had a <em>great</em> menubar app. I was sold, and I gladly dropped $40 for a single-user license.</p>
<p>Fast forward to June 2013. MarketCircle, like a lot of software development companies, came to the conclusion that it is <em>really</em> hard to develop software as a series of discrete releases, for which you get people to pay over and over again. Perfectly sensibly, they <a href="https://www.marketcircle.com/blog/streamlining-the-billings-product-line/">discontinued development</a> on and support for their standalone software and provided <a href="https://www.marketcircle.com/billingspro/offer/">a (discounted!) migration path</a> for users to upgrade to the Pro (syncing, etc.) version of the software. Note that they did <em>not</em> do anything to disable functionality in existing Billings installations—just provided an upgrade path and stopped developing it. That is the right way to handle it. So far so good.</p>
<p>I am a software developer, and I have seen the pressures that exist in this industry. This move made good business sense to me, and I liked Billings as a product. I was quite willing to look at their Pro plan, and possibly even to invest in it, despite the fact that I did not <em>need</em> it, because I believe in supporting the developers of the software I use.</p>
<p>I emailed them a couple follow-up questions. One of them, and among the most important to me because of how I work for one particular client:</p>
<blockquote>
<p>I note that in Billings Pro, unlike in Billings, I can’t track multiple slips simultaneously. This is problematic for me, as I often do this to keep track of hours worked against a “Personal projects” bit so I can see my hourly variations. That’s a make-or-break kind of thing for me—any chance you guys will change that behavior?</p>
</blockquote>
<p>To elaborate: I like to track my total hours worked every week in a simple way, so I have a “Personal” timer going alongside the project timer for whatever I am doing. The fact that Billings let me do this was one of the selling points for me. Even so, I did not necessarily expect them to support the functionality going forward. The response I got started out reasonably enough:</p>
<blockquote>
<p>We allow you to have multiple active timers, but you can only time one task at a time in both applications. In Billings, there was a bug with this, however, this was corrected in Billings Pro.</p>
</blockquote>
<p>Thus far, fair enough: they saw this as a bug. I disagreed, but I understand.</p>
<p>Then this, though:</p>
<blockquote>
<p>While we all multi-task we cannot work on two billable items at once.</p>
</blockquote>
<p><em>Whoops.</em> You just talked down to me.</p>
<p>You also clearly didn’t read the original email, because you followed up by asking this:</p>
<blockquote>
<p>Can you explain a little more about what you track and how and I can see if there’s a different way to do this in Billings Pro that will give you the same result?</p>
</blockquote>
<p>Hmm. Let me get this straight: I told you what I track and how I use your software, and you thought the appropriate response was to instruct me on what I can and cannot do with it? Clearly not having even read the original question carefully?</p>
<p><em>Nope.</em></p>
<p>Let me explain: you don’t tell your customers that they can’t use your software in ways peculiar to them. You particularly do not do so as though explaining to a child that we simply cannot do certain things. If a user has a quirky way of using your software, you can of course say you don’t intend to support that quirky behavior—but you do not get to tell them that their unanticipated usage is <em>wrong</em>, and especially not in a condescending tone.</p>
<p>I cancelled my Billings Pro trial within five minutes of receiving that email. The original software I kept: I was at a busy time in the year, switching time-tracking software is non-trivial, and it wasn’t hurting me a bit to keep using the original software anyway. As I am evaluating time-tracking software again, not least because I do not know through how many OS X upgrades Billings will continue to perform properly, MarketCircle isn’t on the list. It only took one bad experience to leave a bad taste in my mouth and convince me to move on.</p>
<p>At this point, it looks like I’m headed to <a href="http://www.getharvest.com/">Harvest</a>. It turns out they don’t support multiple timers, either. But they haven’t talked down to me, and that makes all the difference in the world.</p>
<hr />
<p>There is a takeaway here for anyone paying attention. Namely: respect your customers. Do not talk down to them. Do not assume their uses for your software are wrong, or stupid, even if they are not what you intended. (If anything, that means your users have thought of use cases you didn’t.)</p>
<p>It is going too far to say that the customer is always right. Sometimes, the customer is wrong. Sometimes, <em>I</em> am wrong as a customer. But the customer <em>is</em> always someone to respect. The moment you stop treating your customer with respect is the moment you cross the line from being a company with which I want to do business to one I will avoid.</p>
<hr />
</section>
<section id="edit-and-addendum" class="level6">
<h6>Edit and Addendum</h6>
<p>When I posted this on App.net, a few thoughtful acquaintances <a href="https://alpha.app.net/chriskrycho/post/34459957">pushed back</a>, noting that the customer service interactions did <em>not</em> read as condescending to them. It is possible that I misread the original customer service rep’s tone in interacting with me. This is a constant danger in dealing with text-only communication. I take some responsibility for that—but I also note that the frustration had already built up over the course of a conversation that included a number of failures to address or respond to my questions and concerns.</p>
<p>I should also note that I didn’t mean this as a critique of MarketCircle in particular, though re-reading the post in light of the response, it is clear it comes off more that way than I intended. My interactions with MarketCircle were meant simply to illustrate the broader point: customer service matters, and even one bad customer experience can turn off a customer.</p>
<p>But the takeaway from this addendum is a bit different: I can sympathize with the difficulties facing the customer service rep. I failed at precisely the same task of communicating my intent in writing effectively. Now, whether that rep meant it the way I took it or the way others took it in reading the post, he certainly did not accomplish what he meant to with the exchange. My sympathies are with him. I am perfectly willing (though not perhaps <em>happy</em>; humility is rarely particularly pleasant) to say that I got it wrong here.</p>
<p>All that being said… I still have a bad taste in my mouth, and I am still leery of doing further business with MarketCircle. And that <em>does</em> make the original point in a way, because the emotional response from a bad experience, even one you did not intend, doesn’t fade quickly or at all, even in the face of reasonable articulations of alternative explanations for the bad experience. You have to work at a good customer experience continually.</p>
</section>
<section id="further-addendum" class="level6">
<h6>Further Addendum</h6>
<p>MarketCircle actually saw this piece—presumably via my link on Twitter—and got back to me, looking to fix this issue, which I really appreciated. In some sense, then, they <em>are</em> doing exactly what I advocated in this piece.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Early 2010, if you’re curious.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Goodbye, Notifications2014-07-11T18:50:00-04:002014-07-11T18:50:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-07-11:/2014/goodbye-notifications.htmlThe Accidental Tech Podcast guys inspired me to change how I handle notifications for social media—silencing them all. So far, I like it a lot.<p>In <a href="http://atp.fm/episodes/73" title="73: Notifications Duck">this week’s episode</a> of <a href="http://atp.fm">Accidental Tech Podcast</a>, hosts Casey Liss and John Siracusa mentioned that they have the sound aspect of notifications disabled on their iDevices (Liss’ iPhone, Siracusa’s iPod Touch). Strange though it might seem, the thought hadn’t occurred to me. I like getting the notice of things having happened on my social media accounts, but I’d concluded recently that I actively disliked having the interruption even of a buzz in my pocket: it forces a mental context shift which inevitably degrades my concentration on whatever task I am about.</p>
<p>I spent ten minutes this evening and went through my iPhone’s notification settings. The only things which have audible or vibrating notifications now are phone calls (including FaceTime) and text messages. Everything else I disabled. Now, I still have notifications on a number of other items: they can show up in Notification Center, and they can put markers on the home screen apps. After all: if I already have my phone out, it is almost certainly no problem to see a notification come in, and I definitely want to be able to glance at the app on my home screen and see that someone has interacted with me in some way. But when I <em>don’t</em> have my phone out? It is unhelpful. It is distracting.</p>
<p>I actually turned on app badges for a number of apps for which I had previously disabled them, because they had been extraneous when I was getting noises or buzzes for the apps and services in question. I also tweaked a number of other apps: some can show app badges but not appear in notification center. Most cannot show anything on the lock screen at all. If I want to check on notifications, I can look explicitly.</p>
<p>We will see how the experiment goes. Even just a few hours in, though, I can already say I like it. I did <em>not</em> get any buzzing in my pocket when a few people interacted with me on App.net, or Instagram, or anywhere else. And, social media being what it is, none of those interactions are temporally important (however much it might feel otherwise). They will still be there waiting when I get back.</p>
<p>Now, this does not automatically make me more productive. I still need self control to be most effective in using my time. It does take away a few of the most obvious distractions and interruptions that make it hard to focus, though, and that is <em>definitely</em> a win.</p>
Economies of Scale2014-07-11T10:35:00-04:002014-07-11T10:35:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-07-11:/2014/economies-of-scale.htmlIn which I strongly critique Robert Gates' years-old comments on the cost of the B-2, because I'm a pedant.<p>I was reading through an interesting Ars Technica <a href="">article</a> on the new Long Range Strike Bomber (LRS-B) proposal the Air Force is soliciting. It’s generally interesting to me in part because I’ve worked on a related project in the past, and we talked fairly often about how the LRS-B program might impact it. The article is worth your time. This quote from Robert Gates in the middle of the article, which touches on the program the LRS-B would replace, caught my attention, though:</p>
<blockquote>
<p>What we must not do is repeat what happened with our last manned bomber. By the time the research, development, and requirements processes ran their course, the aircraft, despite its great capability, turned out to be so expensive—$2 billion each in the case of the B-2—that less than one-sixth of the planned fleet of 132 was ever built.</p>
<p>Looking ahead, it makes little sense to pursue a future bomber—a prospective B-3, if you will—in a way that repeats this history. We must avoid a situation in which the loss of even one aircraft—by accident, or in combat—results in a loss of a significant portion of the fleet, a national disaster akin to the sinking of a capital ship. This scenario raises our costs of action and shrinks our strategic options, when we should be looking to the kind of weapons systems that limit the costs of action and expand our options.</p>
</blockquote>
<p>Now, in one sense, Gates was absolutely right. On the other hand, he seems to have committed a classic blunder in dealing with these kinds of costs: economies of scale matter. Part of the reason the per-unit price of the B-2 was so high was precisely that we only bought 20 of them. While the units were individually expensive to manufacture and maintain, because of unique materials used in their construction and so on, they were much <em>more</em> expensive to manufacture in small numbers than they would have been in large numbers. There are basically two reasons for this:</p>
<ol type="1">
<li>The manufacturing process couldn’t do what it does best (turn out large numbers of standardized parts and thereby reduce costs).</li>
<li>The costs of development—research, software development, etc.—were all distributed over a much smaller pool than they would have been had the government purchased more aircraft.</li>
</ol>
<p>This second point is incredibly important to understand. It is certainly true that the absolute cost of buying 132 B-2s would have been high, possibly astronomically and unaffordably high. What it would <em>not</em> have been is $264 billion. Even assuming that manufacture costs were fully half of the cost-per-plane (almost certainly not the case), it would have been barely over half that. Assume that the B-2 cost $1B per plane to build, and that the other $10B was research. Well, that’s still an expensive plan… but the total cost is something like $144B, not $264B. Those economies of scale matter.</p>
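<p>To make the arithmetic explicit, here is the back-of-the-envelope calculation, using the same rough assumptions as above (illustrative figures, not official program costs):</p>

```python
def program_cost(build_per_plane, fixed_research, fleet_size):
    """Total program cost when a fixed research bill is spread over the fleet.

    All figures in billions of dollars.
    """
    return build_per_plane * fleet_size + fixed_research

# 20 planes actually bought: $1B each to build, plus $10B of research.
small_fleet = program_cost(1, 10, 20)

# The originally planned fleet of 132, with the same fixed research bill.
full_fleet = program_cost(1, 10, 132)

print(small_fleet / 20)  # per-plane cost at 20 planes: 1.5
print(full_fleet)        # 142 -- roughly $144B, far below 132 * $2B = $264B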
<p>This same reality is a point made later in the article by another commentator, but I couldn’t let it go. Things like this drive me nuts, because they’re such a common failing in our political discourse. Ignorance of basic economics from the people making decisions with this kind of economic impact is profoundly unhelpful.</p>
Bundling!2014-05-13T14:35:00-04:002014-05-13T14:35:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-05-13:/2014/bundling.htmlThe world of ebook pricing is sometimes a bit crazy. A smart bundling strategy might make it a bit less so.<p><a href="http://www.digitalbookworld.com/2012/why-do-we-have-to-choose-between-print-and-digital/">“Why do we have to choose between print and digital?”</a> asked Richard Curtis at Digital Book World last week, before tackling the topic of bundling—getting ebooks at reduced cost or even free when buying a physical copy of the book. Drawing an analogy from music purchases that have moved in the same direction, he suggests that publishers <em>ought</em> to be bundling, and then poses the query: When you purchase a print book you should be able to get the e-book for…</p>
<ol type="1">
<li>the full combined retail prices of print and e-book editions</li>
<li>an additional 50% of the retail price of the print edition</li>
<li>an additional 25% of the retail price of the print edition</li>
<li>$1.00 more than the retail price of the print edition</li>
<li>free</li>
</ol>
<p>He suggests that this proves to be something of a conundrum for decision-makers in the publishing industry. With respect, and while recognizing that it probably <em>feels</em> like a conundrum to the publishers, I think the answer is really quite simple. Publishers can dramatically increase their profits, and do it in a way that readers will <em>love</em>. (This is the part where you call me crazy. Up next is the part where I show you why I’m not.)</p>
<section id="all-or-nothing" class="level2">
<h2>All or nothing</h2>
<p>First, we should note that while readers would always choose (e) and publishers would love it if they could get away with (a), the reality is that both of these leave one party out in the cold. Publishers need readers, and readers need publishers. Publishers need readers or they die. Readers need publishers as providers of quality content—not only as the gatekeepers but also as polishers who take good books and make them great. Any system that will pan out well must therefore respect <em>both</em> sides of that equation. Both (a) and (e) fail that test immediately.</p>
<p>In the case of (a), the consumer can rightly point out that the cost of distribution of a book is minimal, trivial even, in the grand scheme of book production. That goes double for ebooks: the cost of running a server is a pittance compared to the cost of writing, editing, and proofing a book. “So,” any smart reader says, “I’ve already paid for the book. Why should I have to pay <em>just as much again</em> for the ebook?”</p>
<p>In the case of (e), the consumer is getting something of real value—the ebook, with its associated portability, the ability to create communal interactions around the content through shared marginalia, and so forth—but without recognizing any infrastructure costs this poses to the publisher. As always, there is no free lunch, and that is as it should be.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> The worker deserves his wages, and that includes the editor who turns a manuscript into ebook form—especially for <em>good</em> ebooks, which entail a great deal of work beyond simply running the print manuscript through a conversion script. That involves real people’s time, and therefore costs real money.</p>
<p>Neither of these options, then, is ultimately good for the market. The readers will rightly reject paying the full price again for a book in a different form; they’ve been conditioned by too many interactions on the internet not to recognize that digital transmission of files the size of a book is, while not costless, not costly either. On the other hand, publishers still need to make money, and they do sink real time and money into the ebook—not at the distribution point, but in the infrastructure involved in the preparation of the manuscript and readying it for digital and physical publication.</p>
<p>Again: publishers need readers and readers need publishers.</p>
</section>
<section id="percentage-games" class="level2">
<h2>Percentage games</h2>
<p>Percentage-based cuts—like Curtis’ options (b) and (c)—are much more sensible and reasonable from the perspective of both the consumer and the publisher. In each of these cases, the publisher is granting that the customer has already made a purchase—perhaps a significant one, in the case of a hardcover fiction book. Indeed, when we move out into the realm of reference books or textbooks, the consumer has already given the publisher quite a lot of money. Thus, options (b) and (c) are much friendlier to the consumer than choice (a), while still affording the publisher some profits, unlike (e). This is clearly a step in the right direction.</p>
<p>The percentage option quickly runs into issues when we start thinking about how such a scheme would work in practice, though. Is it 25% of the hardcover but 50% of the paperback, so that the publisher can recoup more of the costs? In this scheme, it is difficult to match the actual cost of the ebook sale to its relative value compared to the physical copy. Moreover, it’s difficult to standardize. When purchasing a textbook at $150, should someone have to pay another $37.50 or $75 to have a digital copy? It seems unlikely that preparing an ebook of a textbook is really 5-6 times more costly than the preparation of a fiction ebook, which on a percentage basis would come out around $6.50 or $13 for the hardcover at those rates, or $2 or $4 for paperbacks.</p>
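<p>The arithmetic behind those figures can be sketched quickly (the list prices are the ones used elsewhere in this post; the point is only how wildly a percentage-based surcharge swings across the catalog):</p>

```python
def ebook_surcharge(print_price, rate):
    """Extra cost of the bundled ebook as a fraction of the print price."""
    return print_price * rate

# Surcharge at 25% and 50% for three representative list prices.
for label, price in [("paperback", 7.99), ("hardcover", 26.99), ("textbook", 150.00)]:
    low = ebook_surcharge(price, 0.25)
    high = ebook_surcharge(price, 0.50)
    print(f"{label}: ${low:.2f} at 25%, ${high:.2f} at 50%")
```

<p>The spread—a couple of dollars for a paperback versus $37.50 or more for a textbook—is exactly why a flat percentage cannot track the actual value of the ebook.</p>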
<p>Equally important: <em>will</em> people pay that much for a digital copy? Publishers may want to study the question in depth by testing the market, but this is a waste of time. The answer is obvious to anyone under the age of 30: <em>no</em>. The market simply won’t support those kinds of costs on the upper end of the spectrum.</p>
<p>Again, customers may recognize that they are subsidizing more than simply the cost of distribution, but the preparation and distribution of the ebook don’t justify an additional percentage on these scales beyond some point. I suspect that most customers are willing to pay extra to get the ebook in addition to the physical copy—just not, in most cases, <em>that</em> much extra.</p>
<section id="aside-on-reasonability-and-trained-markets" class="level3">
<h3>Aside: on reasonability and trained markets</h3>
<p>We must recognize that markets can be <em>trained</em>. People have come to see $0.99 as a reasonable price for individual songs. There was nothing inevitable about that outcome; it is a direct result of the success of the iTunes store. Had prices been set at $1.49 or $0.33, it’s likely we would have settled on that number as a reasonable price. Similarly, TV show episodes sell at $1.99,<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> and people seem to treat that as a reasonable price: the perceived value matches the cost well. They could have been $1 or $2.50, and consumers probably would have settled in with those numbers equally well.</p>
<p>Of course, had the price been too high, we would have rejected it entirely: markets can be trained, but they’re not capable of stretching into just any shape at all.</p>
<p>Admittedly, the music market remains volatile, but consumers on the whole don’t seem to balk at spending a dollar on a song. While the piracy rate remains high, iTunes and similar markets provide an outlet for those who are interested in purchasing their music legitimately.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<p>This outcome results from the combination of a trained market and a sensible cost/value relationship that allowed the training to occur in the first place. Book publishers should aim for the same outcome: profitability on the basis of perceived reasonability of their prices. This will require training the market, but that is possible so long as their expectations are reasonable.</p>
</section>
</section>
<section id="a-reasonable-target" class="level2">
<h2>A reasonable target</h2>
<section id="price-points" class="level3">
<h3>Price points</h3>
<p>Curtis’ final suggested price point is close to the mark, but I think some revision is in order. Remember: the aim is to buoy both customer satisfaction <em>and</em> publisher profitability. Here’s my proposed pricing scheme for fiction (which could be adapted to other parts of the market fairly straightforwardly):</p>
<ul>
<li>Standalone ebook: $4.99</li>
<li>Paperback:
<ul>
<li>Book: $7.99</li>
<li>Bundle: $9.99</li>
</ul></li>
<li>Trade paperback:<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a>
<ul>
<li>Book: $14.99</li>
<li>Bundle: $15.99</li>
</ul></li>
<li>Hardcover:
<ul>
<li>Book: $26.99</li>
<li>Bundle: $26.99</li>
</ul></li>
</ul>
<p>I’m basing these on the current pricing schemes in the market—these are the normal suggested retail prices for paperbacks, trade paperbacks, and hardcovers—and on the assumption that the publisher’s goal is to maximize revenue, while the consumer’s goal is to get the most content at a price he feels is reasonable. I’m also taking into account the existing profit curves for publishers: paperbacks are relatively low margin, while hardcovers are the major profit points, at least when they’re successful.</p>
</section>
<section id="rationale" class="level3">
<h3>Rationale</h3>
<p>First, and most importantly, I believe the market will support these price points. The standalone ebook is less expensive than the paperback, as it should be, since its distribution costs are much lower than the costs of printing and shipping paperbacks. At the same time, ebooks sales will still generate revenue for the publisher; $5 is not a meaningless amount of money.</p>
<p>For each tier upwards, the cost of the bundled ebook drops. The publisher thus acknowledges the increasing profitability of each tier as well as the increasing cost to the reader. At the same time, the lowered bundling cost incentivizes the user toward the higher profit items. In each case, the bundling cost is sufficiently low as to be in the “impulse purchase” range for most users.<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a></p>
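<p>The arithmetic implied by the tiers above can be made explicit. Here’s a quick sketch (the variable names are mine, purely for illustration) computing the ebook premium—what the reader pays on top of the print price—at each tier:</p>

```javascript
// Proposed prices from the list above; compute the implied ebook premium
// at each tier. Names here are illustrative, not from any real catalog.
const tiers = [
  { name: "paperback",       book: 7.99,  bundle: 9.99 },
  { name: "trade paperback", book: 14.99, bundle: 15.99 },
  { name: "hardcover",       book: 26.99, bundle: 26.99 },
];

// Round to cents to sidestep floating-point noise.
const premium = (t) => Math.round((t.bundle - t.book) * 100) / 100;

for (const t of tiers) {
  console.log(`${t.name}: ebook adds $${premium(t).toFixed(2)}`);
}
// paperback: ebook adds $2.00
// trade paperback: ebook adds $1.00
// hardcover: ebook adds $0.00
```

<p>The premium falls as the tier (and the publisher’s margin) rises, which is exactly the incentive structure described above.</p>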
<p>Readers will be far more likely to front the cost of a hardcover if an ebook comes bundled with it, because the value proposition is so much better. At the same time, this is unlikely to decrease the profits of the publisher, because the margins are much higher for hardcovers.</p>
<p>In fact, bundling at these rates will likely increase publisher profits from ebooks, as most readers currently choose between ebooks and physical books. The price of a hardcover is simply too high to allow for the purchase of both. (Even when this is not actually true, it <em>seems</em> true to consumers, which is equally important in determining their behavior.) With a sufficiently lower barrier to getting the additional content, the likelihood that the reader purchases both goes up substantially.<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a></p>
<p>This potential for increased profitability is compounded by the availability of the bundle at initial purchase time. A consumer who has already committed to spending $8 on a book is unlikely to balk at $10, and even less likely to balk at the transition from $15 to $16. In many cases, the publisher will earn more money from the book purchase than before, but the reader is still getting a good deal on the ebook. This is <em>exactly</em> the combination that makes for a flourishing market.</p>
<p>Finally, the simplicity of these numbers is extremely helpful. Standardizing these prices will decrease the friction inherent in making the purchase decision, which increases the likelihood that a purchase will be made. I’m not suggesting a cartel—price standardization is natural in this sort of market—and I believe the price points I’ve suggested are where the market will settle in the long run. The publishers who get there first will earn enormous goodwill from their readers in the short term, as well as demonstrating their leadership in the industry in ways that set them up for long term success.</p>
</section>
</section>
<section id="bundle-up" class="level2">
<h2>Bundle up</h2>
<p>A smart approach to bundling could be enormously beneficial to the publishing industry. In addition to the pure numerical profitability of the approach outlined above—no small detail in an industry struggling to adapt to the realities of the new economy—it establishes that the publishers are responsive to customers in a way that other large media have not seemed to be. Nothing is so helpful to a company’s long-term sustainability as for consumers to <em>like</em> it. Reasonable bundling prices would go a long way toward helping readers see publishers as friends, rather than enemies.</p>
<p>Obviously these numbers work best in the context of fiction. The value propositions are entirely different in other contexts; a cookbook is an entirely different thing than a copy of <em>The Hobbit</em>. Across the board, though, publishers should keep the same goals in mind: profitability by means of reasonability and approachability. Be allies of the readers, not their enemies. Make it easy and affordable for them to pay you for your work, and they will.</p>
<hr />
<p>My thanks to <a href="http://stephencarradini.com">Stephen Carradini</a> for invaluable contributions to this piece in two forms: many long conversations about this very topic, and a helpful edit of the actual content.</p>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Additionally, there is a signaling problem here: “free” suggests “low value” in a way that publishers rightly want to avoid. See <a href="http://informationarchitects.net/blog/ia-writer-on-prices-and-features/">“iA Writer: On Prices and Features”</a>, Section 2: Cost, by Oliver Reichenstein for a lengthy and sensible exploration of this issue. The issue of signaling value is also taken into account in my suggestions below.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>When they sell at all, of course. I’ve written before about this problem: piracy explodes when there is demand without supply. It also tends to grow at a higher rate when the cost is perceived as unreasonable. TV shows priced at $5/episode wouldn’t do well; they seem to sell quite briskly at $1.99. Publishers run the risk of fomenting piracy by setting their prices too high.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>I have never seen someone complain that a song is too expensive at a dollar who was willing to pay <em>anything</em>. A penny would be too pricey from the pirates’ point of view.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Trade paperbacks (TPBs) are similar in size to hardcovers, but have soft covers similar to those in a paperback. Fiction TPBs typically go for around $15. Over the last few years, publishers have started shifting away from the low-margin paperback market into these trade paperbacks, which provide a bit higher profit for them. Personally, I don’t mind, because these books tend to be higher quality paper and bindings. If I’m sitting down with a monster like one of the books in <cite>The Wheel of Time</cite>, this is far and away the best format for a physical copy.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn5" role="doc-endnote"><p>On the signaling issue: the price of the ebook is sufficiently high as to continue to signal real value here, I think. However, in the case of other kinds of books, this scheme should be revisited. A complex EPUB3 with embedded videos or interactive content should signal that it offers a higher value proposition than other ebooks with a higher price point; in some cases, if that content is sufficiently central to the value proposition of the book, it might be more expensive than the physical copies.</p>
<p>Similarly, a textbook might sell for $150, its ebook at $50, and the bundle at $165—because the cost of preparing a textbook ebook may be substantially higher than that of preparing a fiction ebook. Signaling matters, but overpricing is as much a risk here as underpricing.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn6" role="doc-endnote"><p>This has the added benefit of making the purchase of new books over used books more attractive to the consumer: if the coupon for ebook at reduced rate is only available at new book purchase, a $3 used book suddenly has a much lower value proposition relative to the original when the reader is interested in having an ebook copy as well, since the cost of having both is still $8.</p>
<p>Of course, this leads us to the question of ebook resale, which is currently a legally murky area at best, and requires considerable legal and intellectual development.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Why the Smart Reading Device of the Future May Be … Paper2014-05-03T10:45:00-04:002014-05-03T10:45:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-05-03:/2014/why-the-smart-reading-device-of-the-future-may-be-paper.htmlLink—I enjoy Kindle and iPad, but I still love books best. Turns out I'm not alone... and there might just be reason for it.<p>One thing I didn’t talk about in comparing reading experiences on a Kindle and on an iPad the other day is the elephant in the room: old-fashioned books. I enjoy Kindle and iPad, but I still love books best. Turns out I’m not alone… and there might just be reason for it.</p>
<p><a href="http://www.wired.com/2014/05/reading-on-screen-versus-paper/">Brandon Keim at Wired:</a></p>
<blockquote>
<p>Paper books were supposed to be dead by now. For years, information theorists, marketers, and early adopters have told us their demise was imminent. Ikea even redesigned a bookshelf to hold something other than books. Yet in a world of screen ubiquity, many people still prefer to do their serious reading on paper.</p>
<p>Count me among them. When I need to read deeply—when I want to lose myself in a story or an intellectual journey, when focus and comprehension are paramount—I still turn to paper. Something just feels fundamentally richer about reading on it. And researchers are starting to think there’s something to this feeling.</p>
</blockquote>
iPad vs. Kindle2014-04-30T21:20:00-04:002014-04-30T21:20:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-04-30:/2014/ipad-vs-kindle.html<p>I’ve been a happy owner of both a Kindle and an iPad Mini for the last several months, and it occurred to me tonight that I use them <em>very</em> similarly in some ways. Both are primarily reading devices for me. What is different is the kinds of material I …</p><p>I’ve been a happy owner of both a Kindle and an iPad Mini for the last several months, and it occurred to me tonight that I use them <em>very</em> similarly in some ways. Both are primarily reading devices for me. What is different is the kinds of material I read on each.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>My Kindle is a first generation Paperwhite, in fairly good condition. (It has one significant quirk in that it sometimes turns on without the power button being pushed. Alas.) I use it nearly every day right now. I have most of my school books on it, and several of my favorite novels. I’m rereading Patrick Rothfuss’ <em>The Wise Man’s Fear</em> right now, and so I spend a good half an hour a day on the Kindle for that alone. I also get a lot of my seminary reading done on the device.</p>
<p>On the iPad, on the other hand, I read a lot of web pages, nearly all via <a href="https://www.instapaper.com">Instapaper</a>. I had sometimes had Instapaper items delivered to my Kindle, and that worked <em>fairly</em> well, but I much prefer the experience of using the app on the iPad. I opt to do pretty much any technical reading on the device: its screen just works much better for dealing with things like code samples embedded in a blog post—not least because I can scroll easily if I need to! I also do basically all my Bible reading on the iPad. It is far easier to navigate to different parts of the text, switch translations (or original languages!) while keeping my place there on any of the top-tier iPad apps than on the Kindle. And I sometimes read comics on the iPad—something I would not try in a million years on the current Kindle screen!</p>
<p>A friend asked a few months ago if I thought one would obviate the other. Given the qualification that neither is in any sense truly a <em>necessity</em>—we could quite easily get along without either—my answer after several months with both is <em>no</em>. Though the devices are similar in a number of ways, they fit into very different niches. The things I actively enjoy on each are very different. The Kindle is good for much longer-form reading, and its lack of distractions is nice (though I often take advantage of the Do Not Disturb mode on the iPad when I actually want to accomplish things besides talking on social media). The iPad is better for anything with color, for technical documents, and for anything where navigation more complex than one-page-after-another is important. I would not particularly want to read a novel on it, though!</p>
<p>I will be curious to see if the devices converge at some point in the future.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> At present, no technology gives both the responsiveness and gorgeous color of the iPad <em>and</em> the low-contrast, pleasant long-form reading experience offered by the Kindle’s e-ink. If at some point we get a technology that does both, it will be pretty amazing. In the meantime… we still have pretty amazing pieces of technology, and I enjoy them both a lot.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I also use the iPad for a number of other things: App.net and Twitter and so on, <a href="http://www.fiftythree.com">Paper</a>, starting some ideas for blog posts, etc. But mainly I read on it!<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>No, Amazon’s Kindle Fire series of tablets is nothing like that convergence: the devices are <em>functionally</em> just poor-man’s-iPads hooked into Amazon’s ecosystem. Note that I’m not making a comment about the quality or lack thereof of the devices—only that they’re much reduced in capabilities compared to an iPad or an Android tablet (e.g. the Nexus 7).<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
A Little Crazy2014-04-29T19:30:00-04:002014-04-29T19:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-04-29:/2014/a-little-crazy.htmlI am going to write a static site generator in Io. Oh, and also the Markdown parser and HTML templating engine required to go with it.
<p>I’m going to do something a little crazy, I’ve decided. I’m going to go ahead and do like I wrote <a href="http://v4.chriskrycho.com/2014/doing-it-myself.html">a bit back</a>, and make <a href="http://step-stool.io">Step Stool</a> actually a thing over the course of the rest of the year. Not so crazy. What is a bit nuts is the way I’ve decided to go about that process. In short: as close to the hardest way possible as I can conceive.</p>
<hr />
<p>Over the last couple weeks, I’ve been spending a fair bit of time toying with <a href="http://iolanguage.org">Io</a>. It’s a neat little language, very different from the languages I’ve used previously in its approach to a <em>lot</em> of things. My programming language history is very focused on the “normal” languages. The vast majority of real-world code I’ve written has been in one of C, PHP, or Python. I’ve done a good bit of Javascript along the way, more Fortran than anyone my age has any business having done, and a little each of Java and Ruby. Like I said: the normal ones. With the exception of Javascript, all of those are either standard imperative, object-oriented, or mixed imperative and object-oriented languages. Python and Ruby both let you mix in a fair bit of functional-style programming, and Javascript does a <em>lot</em> of that and tosses in prototypal inheritance to boot.</p>
<p>But still: they’re all pretty mainstream, “normal” languages. Io isn’t like that at all. For one thing, it’s hardly popular in any sense at all. Well-known among the hackers<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> I know, perhaps, but not popular by any measure. It’s small. And it’s very <em>alien</em> in some ways. It uses <a href="http://en.wikipedia.org/wiki/Prototype-based_programming">prototypal inheritance</a>, not normal class-based inheritance. Courtesy of <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Inheritance_and_the_prototype_chain">Javascript</a>, I have a <em>little</em> familiarity with that, but it’s definitely still not my default way of thinking about inheritance. Python’s inheritance model (the one I use most frequently) is <em>essentially</em> the same as that in C++, Java, PHP, and so on—it’s normal class-driven inheritance. Io goes off and does full-blown prototypal inheritance; even just the little I’ve played with it has been fun.</p>
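<p>For readers who know JavaScript better than Io, the prototype model that MDN article describes can be sketched in a few lines—this is an analogy, not Io code, and all the names are mine:</p>

```javascript
// Prototypal inheritance: objects inherit directly from other objects.
// There is no class; `dog` simply delegates to `animal` for anything
// it doesn't define itself.
const animal = {
  describe() {
    return `a ${this.kind} that says "${this.sound}"`;
  },
};

// Object.create returns a new object whose prototype is `animal`.
const dog = Object.create(animal);
dog.kind = "dog";
dog.sound = "woof";

// Property lookup walks the prototype chain: `describe` lives on
// `animal`, but `this` is bound to `dog` at the call site.
console.log(dog.describe()); // a dog that says "woof"
```

<p>Io’s <code>clone</code> works in roughly the same way: a “child” object holds a reference to its prototype and forwards any message it can’t answer itself up the chain.</p>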
<p>Io also does a bunch of other things a <em>lot</em> different from the other languages I’ve used. First, there are no keywords or—formally speaking—even operators in the language. Every action (including ones like <code>+</code> or <code>for</code>) is simply a message. Every value is an object (so <code>1.0</code> is just as fully an object as an arbitrarily-defined <code>Person</code>). The combination means that writing <code>1 + 2</code> is actually just interpreted as the object <code>1</code> receiving the <code>+</code> message carrying as its “argument” the <code>2</code> object (really just the message contents). This is <em>completely</em> different at a deep paradigm level from the normal object-oriented approach with object methods, even in a language like Python where all elements are objects (including functions). The net result isn’t necessarily particularly different from calling methods on objects, but it is a <em>little</em> different, and has some interesting consequences. Notably (though trivially—or at least, so it seems to me at this point), you can pass a message to the null object without it being an error. More importantly, the paradigm shift is illuminating.</p>
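<p>To make the “everything is a message” idea concrete, here’s a toy model in JavaScript—again an analogy to illustrate the dispatch, not how Io is actually implemented:</p>

```javascript
// A toy "message send": instead of special operator syntax, evaluation
// hands the receiver a message name plus arguments.
function send(receiver, message, ...args) {
  const handler = receiver.messages[message];
  if (!handler) {
    throw new Error(`object does not respond to '${message}'`);
  }
  return handler(...args);
}

// Even numbers are full objects; "+" is just one more message they answer.
const num = (value) => ({
  value,
  messages: {
    "+": (other) => num(value + other.value),
  },
});

// `1 + 2` becomes: the object 1 receives the "+" message carrying 2.
const three = send(num(1), "+", num(2));
console.log(three.value); // 3
```

<p>Under this reading, an “operator” is nothing special: it’s just a message name the receiver may or may not understand, which is why the dispatch, rather than the syntax, is the interesting part.</p>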
<p>Io also has far more capabilities in terms of concurrency than any of the other languages with which I’m familiar, because it actively implements the <a href="http://en.wikipedia.org/wiki/Actor_model">Actor Model</a>, which means its messaging (in place of direct method calls) can behave in concurrent ways. (I’d say more if I understood it better. I don’t yet, which is one of the reasons I want to study the language. Concurrency is very powerful, but it’s also fairly foreign to me.) It’s also like Lisp in that its code can be inspected and modified at runtime. I’ve wanted to learn a Lisp for several years for this kind of mental challenge, but the syntax has always just annoyed me too much ever to get there. Io will give me a lot of its benefits with a much more pleasant syntax. It has coroutines, which are new to me, and also helpful for concurrency.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></p>
<p>The long and short of it is that the language has a ton of features not present in the languages I have used, and—more importantly—is <em>paradigmatically</em> different from them. Just getting familiar with it by writing a goodly amount of code in it would be a good way to learn in practice a bunch of computer science concepts I never had a chance to learn formally.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></p>
<hr />
<p>By now, as long as I’ve rambled about Io, you’ve probably figured out where I was going in that first paragraph. I’ve decided to stretch my brain a bit and write Step Stool in Io. There are bunches of static site generators out there in Python already, many of them quite mature. (This site is running on <a href="https://github.com/getpelican">one of them</a> as of the time I write this post—it’s quite solid, even if its quirks and limitations occasionally annoy me.) The point of Step Stool has always been twofold, though. First, I’ve wanted to get to a spot where I was really running my own software to manage my site, letting me do whatever I want with it and guaranteeing I always understand it well enough to make those kinds of changes. Second, I’ve just wanted to <em>learn</em> a whole bunch along the way. Third, it’s right there in the website link: <a href="http://step-stool.io">step-stool.io</a>! How could I pass up such an opportunity?</p>
<p>It is that second goal that has pushed me to do this crazy project this crazy way. It’s crazier than just teaching myself a language in order to do the static site generator itself, too, because there are a few other pieces missing that I’ll need to write to make this work… like a Markdown implementation and an HTML templating language. I’ve never written anything remotely like either before, so I’m going to take the chance to learn a <em>lot</em> of new things. For the Markdown implementation, rather than relying on regular expression parsing (as most Markdown implementations do), I’m going to use a Parsing Expression Grammar. That will certainly be more efficient and reliable, but—more importantly—it is also outside my experience. I have yet to start thinking through how to tackle the HTML templating language implementation (though I know I am going to make it an Io implementation of <a href="http://slim-lang.com">Slim</a>, which I quite like).</p>
<p>In any case, I’m going to be taking a good bit longer to get Step Stool finished. That is all right: I am going to learn a ton along the way, and I am quite sure I will have a blast doing it. And that is <em>exactly</em> what these kinds of projects are for.</p>
<p>I’ll post updates as I go, with the things I’m learning along the way. Hopefully they’ll be interesting (or at least entertaining).</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Hackers in the original sense of the world. Not “crackers”, but people who like hacking on code, figuring things out the hard way.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Python 3.5 is actually adding coroutines, and I’m excited about that. I’ll feel much more comfortable with them there having used them in Io, I’m sure!<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>I got here backwards, as it were—by way of an undergraduate degree in physics. I don’t regret that for a second: I got a much broader education than I could have managed while getting an engineering degree, and most importantly learned <em>how to learn</em>: easily the most important skill anyone gains from any engineering degree.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Learning QML, Part 12014-04-11T15:30:00-04:002014-04-11T15:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-04-11:/2014/learning-qml-part-1.html<p>For part of my work with Quest Consultants, I’ve been picking up Qt’s QML toolkit to use in building out the UI. The declarative syntax and ability to define one’s own model in non-C++- or Python-specific ways is quite nice. That said, the learning process has had …</p><p>For part of my work with Quest Consultants, I’ve been picking up Qt’s QML toolkit to use in building out the UI. The declarative syntax and ability to define one’s own model in non-C++- or Python-specific ways is quite nice. That said, the learning process has had more than a few bumps along the way. I decided to go ahead and write those up as I go, both for my own reference and in the hope that it may prove useful to others as I go.</p>
<p>QML is a <em>Javascript-like</em> language for <em>declarative programming</em> of a user interface. So it’s a Javascript-based language that sort of behaves like HTML. In fact, it behaves like Javascript in terms of how you define, access, and update properties, and you can embed full-featured (mostly) Javascript functions and objects in it.</p>
<p>But when you have nested QML Types, you end up with them behaving more like HTML.</p>
<p>The weirdest bit, and the thing that I’m having the hardest time adjusting to, is that you can only edit properties of root Types when you’re working with an instance of that Type. And those Types are defined by <em>documents</em>.</p>
<p>So, to give the simplest possible example, let’s say I defined a new type called <code>Monkey</code>, in the <code>Monkey.qml</code> file, like this:</p>
<pre><code>// Monkey.qml
import QtQuick 1.1

Item {
    id: monkey_root
    property int monkey_id: -1
    property string monkey_name: "I don't have a name!"

    Item {
        id: monkey_foot
        property string monkey_foot_desc: "The monkey has a foot!"
    }
}</code></pre>
<p>I can use that in another file. If they’re in the same directory, it’s automatically imported, so I can just do something like this:</p>
<pre><code>//main.qml
import QtQuick 1.1

// Rectangle is exactly what it sounds like. Here we can display things.
Rectangle {
    id: the_basic_shape
    height: 400
    width: 400
    color: "green"

    Monkey {
        id: monkey_instance
        monkey_id: 42
        monkey_name: "George" // he's kind of a curious little guy
    }

    Text {
        text: monkey_instance.monkey_name
        color: "red"
    }
}</code></pre>
<p>That creates a (really ugly) rectangle that prints the <code>Monkey</code>’s name in red text on a green background. It’s impossible to access directly the <code>monkey_foot</code> element, though, which means that composing more complex objects in reusable ways is difficult. In fact, I haven’t come up with a particularly good way to do it yet. At least, I should say that I haven’t come up with a good way to create high-level reusable components yet. I can see pretty easily how to create low-level reusable components, but once you start putting them together in any <em>specific</em> way, you can’t recompose them in other ways.</p>
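<p>One partial escape hatch worth knowing about: QML’s <code>property alias</code> lets the author of a component re-export an internal property at the root. It doesn’t make arbitrary internals reachable—you still have to anticipate each use when writing the component—but it helps. A sketch, adapting <code>Monkey.qml</code> above (the alias name is my own invention):</p>

```qml
// Monkey.qml — exposing the inner Item's property via an alias
import QtQuick 1.1

Item {
    id: monkey_root
    property int monkey_id: -1
    property string monkey_name: "I don't have a name!"
    // Make the nested property addressable from outside as `foot_desc`:
    property alias foot_desc: monkey_foot.monkey_foot_desc

    Item {
        id: monkey_foot
        property string monkey_foot_desc: "The monkey has a foot!"
    }
}
```

<p>With that in place, other documents can read or bind <code>monkey_instance.foot_desc</code> directly—but only because the component author thought to expose it.</p>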
<p>From what I’ve gotten my head around so far, this ends up being less flexible than either HTML templating languages (which are, or at least can be, completely declarative) or normal Javascript (which is obviously <em>not</em> declarative). Mind you, it’s all sorts of <em>interesting</em>, and I have a pretty decent idea what I’m going to do to implement our UI with it, but it’s taken me most of the day to get a good handle on that, and my head still feels a bit funny whenever I’m trying to see how best to create composable components.</p>
<p>Note, too, that this is the <em>only</em> way to create a new basic type of object in QML: it has to be the root-level object in a QML document. I would <em>really</em> like to be able to access internal declarations—to have named internal types/objects. Unfortunately, QML doesn’t let you do this. I suspect this has to do with how the QML type system works: it actually binds these types to C++ objects behind the scenes. This is a non-trivially helpful decision in terms of the performance of the application, but it certainly makes my brain a little bit twitchy.</p>
<p>There are two basic consequences of this structure. First, any types you need to be able to use in other QML objects have to be defined in their own QML documents. Second, it is (as near as I can see so far, at least) difficult to create good generic QML types of more complex structures that you can then use to implement specific variations. For example: if you want to create accordions, you can create a fair number of the low-level elements in generic ways that you can reuse, but once you get to the relationships between the actual model, delegate, and view elements, you will need to create them in custom forms for each distinct approach.</p>
<p>This is more like creating HTML documents than Javascript, which makes sense, <em>if</em> you remember that QML is Javascript-based but <em>declarative</em>. You just have to remember that while you can define some reusable components, the full-fledged elements are like full HTML pages with a templating system: you can include elements, but not override their internal contents. In QML, you can override <em>some</em> of their contents, which is nice—but that is not the primary way to go about it.</p>
Feels Right2014-04-04T21:30:00-04:002014-04-04T21:30:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-04-04:/2014/feels-right.htmlLittle details in how things work can make all the difference when it comes to the experience of using software. So be diligent, and do it right.<p>I had spent most of the last week and a half working on getting <a href="http://www.firebirdsql.org">FirebirdSQL</a> configured and ready to use for a project I’m working on with <a href="http://www.questconsult.com">Quest Consultants</a>. It was slow going. The tool is decent, but the documentation is spotty and it felt like everything was just a bit of a slog—to get it working correctly, to get it playing nicely with other pieces of the development puzzle, to get it working across platforms.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> Then, because I had done something a <em>little</em> bit silly in my eagerness to get up and going last week and written code without a testable configuration, I hit a wall today. The queries weren’t working. I had made a <a href="http://stackoverflow.com/questions/22865573/sqlalchemy-successful-insertion-but-then-raises-an-exception">bug</a>.</p>
<p>I spent a substantial part of the day chasing down that bug, and then a conversation with user <em>agronholm</em> on the <a href="http://docs.sqlalchemy.org/en/rel_0_9/">SQLAlchemy</a> IRC channel (<a href="irc://irc.freenode.net/sqlalchemy">freenode/#sqlalchemy</a>) got me thinking. The Firebird team describes one of their options as an “embedded” server, but <em>agronholm</em> pointed out that what they really mean is <em>portable</em>. It’s running a standalone server and client, but it’s not part of the same thread/process (like SQLite is). Then <em>agronholm</em> very helpfully asked—my having mentioned my preference for <a href="http://www.postgresql.org">PostgreSQL</a> earlier—“Does Postgres not have a portable version?” Two minutes later, we had both found <a href="http://sourceforge.net/projects/postgresqlportable/">PostgreSQL Portable</a>, and I rejoiced.</p>
<p>It took me less than half an hour to get it downloaded and set up and to confirm that it would work the way we need for this particular piece of software. (Firebird had taken me a good three hours, what with digging through badly organized and not terribly clear documentation.) It took me less than half an hour more to get PostgreSQL to the same point that I’d finally gotten Firebird to after multiple hours working with it. And I was so <em>very</em> happy. What had been an especially frustrating work day now had me quietly smiling to myself constantly for the last two and a half hours as I <a href="http://stackoverflow.com/questions/22865573/sqlalchemy-successful-insertion-but-then-raises-an-exception/22872598#22872598">finished</a> tracking down the bug that had set me on this path in the first place.</p>
<p>Several years ago, when I first started doing web development, I got my feet wet in database work with MySQL—probably the single most common starting point for anyone going that route, courtesy of the ubiquity of the standard Linux-Apache-MySQL-PHP stack.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> A year after that, I picked up some work that was already using PostgreSQL and fell in love almost immediately.<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> Something just felt <em>better</em> about running <code>psql</code> than running <code>mysql</code> on the command line. Postgres’ implementation of the SQL standard felt more natural. Even the tiniest little details, like the way tables display when you query them in <code>psql</code>, were nicer. In less than a week, I was sold and haven’t looked back. While I’ve used MySQL out of convenience on shared hosting from time to time, PostgreSQL is unquestionably my preferred database target.</p>
<p>Today’s experience brought that all home again. That grin on my face all afternoon felt a bit silly, but it highlights the difference that really good software design makes. I am not just talking about how it looks here—though, to be sure, PostgreSQL is prettier than FirebirdSQL—but about how it works. PostgreSQL feels responsive, its command set makes a lot of sense and is easy to use, and it is <em>extremely</em> well documented. In fact, I would go so far as to say that it is the best documented open source software I have ever used, as well as among the very most robust. (The only other open source software I find to be as incredibly rock-solid and reliable as PostgreSQL is the Linux kernel. I am by no means an expert on either, or on open source software in general, but the Linux kernel is an unarguably amazing piece of work. So is PostgreSQL.) All those tiny little details add up.</p>
<p>It’s a good reminder for me as I write software that yes, the things I care about—the small matters that would be so easy to overlook when customers express no interest in them—really do matter. People may not know that things like typography make a difference in their experience, but those subtle, often imperceptible things matter. They may not consciously notice the differences in your interface design (even a command line interface), but it will change their experience of the software. Do it poorly, or even in a just-good-enough-to-get-by fashion, and you’ll annoy or simply bore them. Do it well, and you might just delight them—even if they can’t tell you why.</p>
<hr />
<section id="examples" class="level2">
<h2>Examples</h2>
<p>To make my point a little more visible, I thought it might be useful to post samples of SQL to accomplish the same task in the two different database dialects.</p>
<section id="firebirdsql4" class="level3">
<h3>FirebirdSQL<a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></h3>
<pre><code>CREATE TABLE projects (
    id INT NOT NULL PRIMARY KEY,
    title VARCHAR(32) NOT NULL,
    file_name VARCHAR(32) NOT NULL,
    file_location VARCHAR(256) NOT NULL,
    CONSTRAINT unique_file UNIQUE (file_name, file_location)
);

CREATE SEQUENCE project_id_sequence;

SET TERM + ;
CREATE TRIGGER project_id_sequence_update
ACTIVE BEFORE INSERT OR UPDATE POSITION 0
ON projects
AS
BEGIN
    IF ((new.id IS NULL) OR (new.id = 0))
    THEN new.id = NEXT VALUE FOR project_id_sequence;
END+
SET TERM ; +</code></pre>
</section>
<section id="postgresql" class="level3">
<h3>PostgreSQL</h3>
<pre><code>CREATE TABLE projects (
    id SERIAL NOT NULL PRIMARY KEY,
    title VARCHAR(32) NOT NULL,
    file_name VARCHAR(32) NOT NULL,
    file_location VARCHAR(256) NOT NULL,
    CONSTRAINT unique_file UNIQUE (file_name, file_location)
);</code></pre>
<p>It is not just that the PostgreSQL example is shorter and clearer—it is shorter and clearer because its designers and developers have taken the time to make sure that the shorter, cleaner way works well, and have documented it so you can learn how to use that shorter, cleaner way without too much difficulty.</p>
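<p>Part of why the PostgreSQL version is shorter is that <code>SERIAL</code> is itself shorthand for the same sequence-plus-default machinery Firebird makes you write by hand. As a rough sketch (the sequence name here assumes PostgreSQL’s usual <code>table_column_seq</code> naming convention), the <code>SERIAL</code> column expands to something like this behind the scenes:</p>
<pre><code>CREATE SEQUENCE projects_id_seq;

CREATE TABLE projects (
    id INTEGER NOT NULL DEFAULT nextval('projects_id_seq') PRIMARY KEY,
    title VARCHAR(32) NOT NULL,
    file_name VARCHAR(32) NOT NULL,
    file_location VARCHAR(256) NOT NULL,
    CONSTRAINT unique_file UNIQUE (file_name, file_location)
);

-- Tie the sequence's lifetime to the column it serves.
ALTER SEQUENCE projects_id_seq OWNED BY projects.id;</code></pre>
<p>Either way, an <code>INSERT</code> that omits <code>id</code> picks up the next sequence value automatically; the difference is that Postgres does the wiring for you, where Firebird’s trigger has to supply it.</p>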
</section>
</section>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I do most of my development on a Mac, but do all the testing on the target platform (Windows) in a VM.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>At this point, I would only use one of those by default if I were building a web app: Linux. I’d use <a href="http://wiki.nginx.org/Main">nginx</a> instead of Apache, <a href="http://www.postgresql.org">PostgreSQL</a> instead of MySQL, and <a href="https://www.python.org">Python</a> (though <a href="https://www.ruby-lang.org/">Ruby</a>, Javascript via <a href="http://nodejs.org">node.js</a>, <a href="http://msdn.microsoft.com/en-us/vstudio/hh341490">C# and the .NET stack</a>, or just about anything <em>but</em> PHP would do fine).<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p><em>Almost</em> immediately because at that point configuration on OS X was a bit of a pain. That is <a href="http://postgresapp.com" title="Postgres.app">no longer the case</a>.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>To be perfectly fair to Firebird, it is improving. The upcoming 3.0 series release will make these two a lot more similar than they are at present, and clean up a number of other issues. What it won’t do is get the <em>feel</em> of using Firebird more like that of using Postgres, or make the installation procedure smoother or easier, or make the documentation more complete.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
FirebirdSQL and IntelliJ IDEA (etc.)2014-03-28T09:00:00-04:002014-03-28T09:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-03-28:/2014/firebirdsql-and-intellij-idea-etc.htmlConfiguration instructions for FirebirdSQL JDBC with JetBrains IntelliJ IDEA platform (including PyCharm, RubyMine, WebStorm, etc.).
<p>Setting up IntelliJ IDEA’s built-in database tools to work with FirebirdSQL requires a particular setup configuration, which I’m documenting here for public consumption.</p>
<p>These setup steps <em>should</em> be applicable to any of JetBrains’ other Java-based IDEs which include database support (e.g. PyCharm, RubyMine, WebStorm, etc.). <em>Note:</em> the following applies to IntelliJ IDEA 12 and the associated platforms, but <em>not</em> to the IDEA 13 platform, which made substantial changes to how databases are configured. The underlying details are consistent, but the interface has changed; I have tested on PyCharm 3.1 to confirm that.</p>
<p>This was all done on OS X 10.9, so I also make no guarantees that this works on other platforms, though the likelihood that it behaves the same on Linux is fairly good. I will update the post if and when I have confirmed that it does.</p>
<p>Here are the steps to configure a database correctly for use with IDEA and its siblings. Note that steps 1–3 are fairly obvious; the real points of interest are steps 4 and 5, which took me the longest to figure out.</p>
<ol type="1">
<li><p>Download the latest version of the Firebird <a href="http://www.firebirdsql.org/en/jdbc-driver/">Java drivers</a> for your operating system and your Java version. (You can check your Java version by running <code>java -version</code> at the command line.) Extract the downloaded zip file. The extracted folder should include a file named <code>jaybird-full-&lt;version&gt;.jar</code> (<code>&lt;version&gt;</code> is currently 2.2.4).</p></li>
<li><p>In IDEA, in the database view, add a new data source: in the Database view (accessible via a menu button on the right side of the screen), right click and choose <strong>New -> Data Source</strong>.</p></li>
<li><p>Under <strong>JDBC driver files</strong>, browse to the location where you extracted the Jaybird driver files and select <code>jaybird-full-&lt;version&gt;.jar</code>.</p></li>
<li><p>Under <strong>JDBC driver class</strong>, choose <code>org.firebirdsql.jdbc.FBDriver</code>.</p></li>
<li><p>Under <strong>Database URL</strong>, specify <code>jdbc:firebirdsql://localhost:3050/</code> followed by <em>either</em> the full path to the database in question or a corresponding alias.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> A full path might look like this on Windows:</p>
<pre><code>jdbc:firebirdsql://localhost:3050/C:/my_project/the_database.db</code></pre>
<p>With an alias, you would instead have:</p>
<pre><code>jdbc:firebirdsql://localhost:3050/the_alias</code></pre>
<p>Then specify valid values for the <strong>User</strong> and <strong>Password</strong> fields from your existing configuration of the database.</p></li>
<li><p>Click the <strong>Test Connection</strong> button and make sure the configuration works.</p></li>
</ol>
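<p>For reference, the alias mentioned in step 5 is just a name-to-path mapping in Firebird’s <code>aliases.conf</code>. A hypothetical entry matching the example URLs above might look like this (the path is illustrative only):</p>
<pre><code># in $FIREBIRD_HOME/aliases.conf
the_alias = C:/my_project/the_database.db</code></pre>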
<p>That should do it. Note that both the driver choice and the path configuration matter. On OS X, I found that only <code>FBDriver</code> worked successfully, and only with this path setup (or one other, older-style setup, which I therefore don’t recommend).</p>
<p>Observations, corrections, additional information, and miscellaneous comments welcomed on <a href="https://alpha.app.net/chriskrycho">App.net</a> or <a href="https://www.twitter.com/chriskrycho">Twitter</a>.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I strongly recommend configuring an alias in the aliases.conf file in the Firebird home directory (usually set as <code>$FIREBIRD_HOME</code> during installation on *nix systems). This lets you move the database around at will, update just the configuration file, and not have to update any references to the database file whatsoever.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
The End of Surfing2014-03-26T20:00:00-04:002014-03-26T20:00:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-03-26:/2014/the-end-of-surfing.htmlTabbed browsers killed "surfing." You only thought it was Facebook to blame.<p>Sometime in the last few months it occurred to me that I no longer “surf” the internet. I read, to be sure, and every once in a long while I even go on a spree where I follow links from one site to another (or just in a long trail on Wikipedia). In general, however, I no longer surf. I suspect I am not alone in this: if we took a straw poll I would venture that most of my friends offline and acquaintances online alike spend rather less time in “browsing” mode than they do reading Facebook or Twitter or Instagram. Motion from link to link has been replaced by individual hops out onto Buzzfeed or a viral cat picture website.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></p>
<p>The obvious explanation for all of this is already there in what I’ve written: Facebook and Twitter and all the rest of the social media web. To be sure, the advent of social media and the increasing degree to which social media have captured user attention on the web are a significant factor in the end of the old surfing/browsing behavior. This is a dream come true for those social media giants which have found ways to deliver ads to their many millions of users and thereby turn enormous profits.</p>
<p>At the same time, I think there is an oft-overlooked factor in the shifting nature of the web over the last decade: the browser. In fact, if there is any single cause behind the death of old-fashioned surfing, I would point to Firefox 1.0: the browser which popularized tabbed browsing to increasingly large sections of the internet-using public.<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> The open-source browser steadily ate away at Internet Explorer’s then-absurd levels of dominance, until Internet Explorer 8 included tabs itself. By the time that Chrome came on the scene, tabbed browsing had long since become a given.</p>
<p>So why do I think that <em>tabbed browsing</em> of all things contributed to the end of “browsing” and “surfing” as our dominant mode of reading the internet? Simply put: it broke linearity. Previously,<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a> one’s experience of the web was single-stranded, leaping from one point to another in a line that however contorted was always connected by the forward and backward buttons on the browser. The moment tabbed browsing came on the scene, that line was broken. Following a link might mean it opened in a new tab instead of moving the whole view forward to it.</p>
<p>Surfing as I remember it in the late ’90s and early ’00s was inherently the experience of getting lost along that timeline, finding myself dozens of links along the chain and wondering how I had ended up there, and then being able to trace my way back. With tabs, that traceability was gone. With it went the inherent tension that we faced with every link: to follow, or not? To get sucked down into <em>this</em> vortex or <em>that</em>? Because in all likelihood, we knew, we were not going to be coming back to this page. With tabs, though, I could open both of those pages without ever leaving this one. I could start new journeys without ending the old. But there was a hidden cost: that newly opened tab had no history. It was a clean slate; before that newly opened link there was only a blank page. If I closed the original from which I had opened it, there was no going back. If I closed this new tab, there was no going forward to it. The line was broken.</p>
<p>From there it was only a short step to the idea of a single site being the center from which one ventured out to other points on the web before returning: the Facebooks and Twitters of the world. In some sense, Facebook’s entire model is predicated on the idea that it is natural to open a new tab with that juicy Buzzfeed content while keeping Facebook itself open in a background tab. Would it work in that old linear model? Sort of. Would it feel natural? Never.</p>
<p>All of this because of tabs. Invention’s most significant results are rarely those the minds behind it expect. When we are designing things—whether a piece of furniture or a piece of the web—we have to remember that design decisions all have repercussions that we may not see. Technology is never neutral. Particular innovations may or may not be <em>morally</em> significant, but they always produce changes in people’s behavior. Design has consequences.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>For the record, lots of that hopping from link to link was on Buzzfeed-like and viral-cat-picture-like sites, too. I am not concerned with the <em>kind</em> of content being read here, so much as the way it is being read.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Note that I am not crediting Firefox 1.0 with <em>creating</em> the tabbed browser—only with popularizing it. That distinction matters.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn3" role="doc-endnote"><p>Excepting having multiple browser windows open, which I am sure people did—but to a much lesser extent.<a href="#fnref3" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn4" role="doc-endnote"><p>Yes, yes, browser history and re-open closed tab commands. But the <em>experience</em> of those is different, and that’s what we’re talking about here.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Doing It Myself2014-03-21T22:14:00-04:002014-03-21T22:14:00-04:00Chris Krychotag:v4.chriskrycho.com,2014-03-21:/2014/doing-it-myself.htmlWorking with Pelican—the static site generator I use for my blog currently—has reinforced my desire to write my own such software. Sometimes, you just have to do it yourself.<p>Last summer, I started work on a project I named <a href="http://step-stool.io">Step Stool</a>—aiming to make a static site generator that would tick off all the little boxes marking my desires for a website generator. In due time, the project got put on hold, as I started up classes again and needed to focus more on my family than on fun side projects.</p>
<p>Come the beginning of 2014, I was ready to bid WordPress farewell once and for all, though. While <a href="https://ghost.org">Ghost</a> looks interesting, I do all my writing in Markdown files, and there is something tempting about the canonical version of the documents being the version on my computer (and thus also on my iPad and iPhone and anywhere I have Dropbox and/or Git access). I did not have time at the beginning of the year to finish writing Step Stool, and I knew as much,<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> so instead I moved to <a href="http://docs.getpelican.com/en/3.3.0/">Pelican</a> as a stop-gap. There were lots of good reasons to pick Pelican: it has an active development community, fairly thorough documentation,<a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a> and it’s in Python and uses Jinja2 templates—the same basic approach I had taken with Step Stool, and the same toolset.</p>
<p>Unfortunately, while I have been glad to be away from WordPress, my experience with Pelican so far has only reinforced my desire to get Step Stool done. There are <em>lots</em> of little things that it does in ways that just annoy me. Many of them have to do with configuration and documentation. On the latter, while the documentation is <em>fairly</em> complete, there are quite a few holes and gaps. (Yes, yes, open source software and anyone can add to the docs. That’s great—it really is—but if I’m going to use someone else’s solution, it had better <em>just work</em>. Otherwise, I’d rather spend my time getting my own going.)</p>
<p>For example, if you want to see how the pagination actually works, good luck figuring it out from the documentation. You’ll need to go looking at the way the sample themes (yes, both of them) are implemented to start getting a feel for it. Along the same lines, many of the objects that get handed to the templates are not fully documented, so it is difficult to know what one can or cannot do. I do not particularly want to spend my time adding debug print statements to my templates just to figure out what options I have available.</p>
<p>The same kinds of things hold true for configuration options. Moreover, the configuration is done through a Python module. While that makes the module easier to integrate on the code side of things, it makes its actual content much less transparent than one might hope. Python is not really well optimized for writing configuration files—nor is any normal programming language. Configuration is inherently declarative, rather than imperative.</p>
<p>This is not to say that Pelican is bad software. It is not. It is, however, a fairly typical example of open source software implemented by committee. It has holes (some of them serious), bumps, and quirks. Here is the reality: so will Step Stool, though they will be the quirks that come from an individual developer’s approach rather than a group’s. But the one thing I can guarantee—and the reason I am increasingly motivated to get back to working on Step Stool—is that they will be quirks I understand and can fix myself. And yes, I do have a couple other projects on my plate as well—contributions to the Smartypants and Typogrify modules, my own <a href="https://bitbucket.org/chriskrycho/spacewell">Spacewell typography project</a>, and quite possibly a <a href="https://bitbucket.org/chriskrycho/markdown-poetry/">Markdown Poetry extension</a>. But I would like very much to just get back to doing this myself. There is freedom in rolling my own solution to things. I will not always have time to do these kinds of things; I figure I should do them when I can.</p>
<p>So here’s to <a href="http://step-stool.io">Step Stool</a>, and—more importantly—to writing your own software just to scratch that itch.</p>
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>I spent quite a bit of time tweaking my friend Vernon King’s <a href="http://www.vernonking.org">Jekyll-powered site</a>, I got Winning Slowly off the ground, including designing the site from scratch and implementing it (also in Pelican), and I did some substantial redesign work on this site. That was more than enough for my three week break—as evidenced by the fact that I didn’t get to the sort of 1.0 version of this site until just a week or so ago.<a href="#fnref1" class="footnote-back" role="doc-backlink">↩</a></p></li>
<li id="fn2" role="doc-endnote"><p>Emphasis on “fairly.” More on <em>that</em> in a moment as well.<a href="#fnref2" class="footnote-back" role="doc-backlink">↩</a></p></li>
</ol>
</section>
Why Is American Internet So Slow?2014-03-07T19:55:00-05:002014-03-07T19:55:00-05:00Chris Krychotag:v4.chriskrycho.com,2014-03-07:/2014/why-is-american-internet-so-slow.html<p>Pretty damning of the current (lack of a) regulatory regime, if you ask me:</p>
<blockquote>
<p>According to a recent study by Ookla Speedtest, the U.S. ranks a shocking 31st in the world in terms of average download speeds. The leaders in the world are Hong Kong at 72.49 Mbps and Singapore on 58.84 Mbps. And America? Averaging speeds of 20.77 Mbps, it falls behind countries like Estonia, Hungary, Slovakia, and Uruguay.</p>
</blockquote>
Goodbye, Chrome2014-02-24T15:20:00-05:002014-02-24T15:20:00-05:00Chris Krychotag:v4.chriskrycho.com,2014-02-24:/2014/goodbye-chrome.htmlOpting people into Google Now in the browser is bad enough. Doing it when they're signed out? Beyond creepy.
<p>Last week, Chrome crossed the line for me. I deleted it and cleaned out its many hooks into my system—I searched out every trace of it I could find—and will put it back only for testing websites. Why? Because it’s just too creepy now.</p>
<p>Here’s the story: two weekends ago, I was sitting at a coffee shop working on a friend’s website, when up popped a series of Google Now OS X desktop notifications from Chrome, informing me of the weather, a package having recently shipped, and so on.</p>
<p>There were just two problems with this:</p>
<ol type="1">
<li>I never gave Chrome permission to do anything of the sort.</li>
<li>I was not signed into Chrome or any Google service at the time.</li>
</ol>
<p>Number 1 is bothersome. Number 2 is so far beyond bothersome that I took the nuclear option. Let’s walk through them.</p>
<p>Google apparently decided to start opting people into Google Now on the Chrome 33 Beta. Opting people into anything new is nearly always a bad idea in my view; opting someone into something that actively integrates with email, calendar, etc. without asking them is just creepy. Now, full disclosure: I had previously granted Google access to some of this data for Google Now on my Android phone (though I have since moved to an iPhone). However, as is usual for Google these days, the company took that permission in one context and treated it as global permission in all contexts.</p>
<p>No doubt the box I checked when I gave them access to that data in the first place legally allowed them to continue touching it. That did not particularly bother me. Rather, it was the assumption that I wanted the same kind of interactions from the service in a different context. This is typical of Google—typically un-human-friendly, that is. People do different things with their phones than with their browsers, and have different expectations of what each will do. More importantly, though, even if we might <em>want</em> our browsers to start supplying us the kinds of sometimes-valuable information that we get from Google Now (or Apple or Microsoft’s similar tools), we generally want the opportunity to make that decision. Increasingly, Google is making that decision for its users, leaving them to opt out and turn it off if they so desire. That is not a policy I particularly like. So: strike one. Or rather: strike several dozen, of the sort that had me moving away from Google’s services for quite some time—but it probably still wouldn’t have pushed me across the line to this kind of hard kill-it-with-fire mentality.</p>
<p>What did? That would be the part where Chrome started sending me desktop Google Now notifications. Without asking me. In a browser to which I was not logged in, nor logged into any Google services.</p>
<p>I will say that again to be clear: I was not signed into Chrome. I was not signed into any Google services in the browser. I had not allowed the browser to create desktop notifications. And it started sending me Google Now notifications for my main Google account. Worse: nothing I could do with the browser itself changed that behavior. (Unsurprising: there was no way Chrome should have been able to do that in the first place, logged out of all Google services as I was.)</p>
<p>Goodbye, Chrome. You’re just too creepy.</p>
<p>This was not the first time I have seen Chrome engage in behavior that does not respect its users. I have repeatedly run into cases where clearing the cache and deleting cookies… doesn’t. Cookies sometimes still stick around. Private browsing sessions inherit cookies from the main window (and sometimes vice versa). Closing a private session and launching a new one would sometimes still include cookies and cache from a previous session (bank accounts still logged in, etc.). Chrome had thus long been untrustworthy to me. But this was a bridge too far. This was not just slightly unnerving. It was creepy.</p>
<p>Call it a bug if you like. It is likely that it was, in fact, a bug. So, most likely, were the other cases I saw above. But these are the kinds of bugs that make a browser fundamentally untrustworthy, and they are the kinds of bugs that are that much creepier coming from a company whose profit comes almost entirely from selling advertising—that is, from selling user information to advertising companies. The deal was that we trusted Google not to abuse that information. Unfortunately, that deal just keeps getting worse all the time. (Pray they do not alter it further.)</p>
<p>I will have a copy of the browser on my system for testing purposes, but for nothing else. Goodbye, Chrome. And for that matter: goodbye, Google services. Over the course of the rest of this year, I will be moving myself completely off of all Google services (mail, calendar, etc.), with the sole exception of (non- logged-in) search. You’re just too creepy now.</p>