As I reported recently, there seems to be a mutual acceptance of raku and perl in the wake of the name change and a bit of time passing:
This is a reproduction of the talk I gave today at the awesome London Perl & Raku Workshop. I met a couple of cool rakuteers and enjoyed the relaxed congregation of perl and raku folk. Seems like the anger has subsided not least thanks to the name change from perl6 to raku. Thanks to all the sponsors, organisers and attendees.
One of the questions I was asked at the workshop by some professional perl coders was “what is Raku and why should I use it?”. I was quite flustered by the question (despite having prepared for it the day before).
OK – I’m gonna unpack that last statement a bit. By best, I mean that raku is still not really used at scale or for large projects by large teams. There are many things to recommend raku for these kinds of use cases – concurrency, strong types (optional), role-based composition, unicode and so on. Stability, bug stomping and performance are improving all the time.
But, in addition to great features, any innovative language has to be proven before a real business will be happy to commit to it as a fundamental technical building block. Businesses are, rightly, risk averse. They will consider aspects such as availability of skilled staff, core team bus factor, ecosystem health, etc. So best, for me, means most practical, most likely to succeed, most added value when real-world constraints are applied.
I run a web design consultancy – we focus on WordPress and other PHP based technologies. Have I rewritten WordPress in raku? No indeed. But, since I am an all-in rakuteer, I have effectively used raku to streamline our business processes:

rawp setup && rawp launch && rawp renewal

This short chain of commands performs nginx install, database install, WordPress install, TLS certificate generation and cron setup for certificate renewal – yes, all that. And rawm migrate (with a suitable yaml config) performs a full site migration.

Why do I write and share these modules as FOSS?
I like to code in raku – it is truly a pleasure. When I assemble and prove out a set of CLI commands to do a task, I am thinking “this is cool, how can I capture this recipe and run it automatically every time” (i.e. a way to remember what works and refine it). And I hope that by sharing, others will be able to benefit from these potted scripts and may wish to extend and refine them in turn.
In common with the other raku modules listed above, this one works like this: it generates a perl script on the remote system and then uses

cat xxx.pl | perl

to run it.

CLI::Wordpress::Migrator is a script to migrate a WordPress site from one server to another. It performs export (backup), install, import (restore) and search-replace steps according to the configuration in ~/.rawm-config/rawm-config.yaml.

This module installs the raku rawm command (RAku WordPress Migrator) for use from the local client command line.
The process involves three systems:
- the local client, where the rawm command is run
- the from server, which is running the source site
- the to server, which is ready for the migrated site

Here’s the raku MAIN usage:
> rawm
Usage:
./rawm [--ts=<Str>] [--backup-only] [--download-only] [--upload-only] [--restore-only] [--cleanup-only] [--dry-run] <cmd>
<cmd> One of <connect export install import search-replace migrate>
--ts=<Str> Enter timestamp of files [Str] eg. --ts='20241025-17-02-42'
--backup-only Only perform remote backup
--download-only Only perform download (requires timestamp [--ts])
--upload-only Only perform upload (requires timestamp [--ts])
--restore-only Only perform restore (requires timestamp [--ts])
--cleanup-only Only perform remote cleanup
--dry-run Do not perform replace
Here’s the (sanitised) yaml:
from:
user: myusername
subdom: sdname
domain: mydomain.com
key-pub: kpname
port: 22
to:
user: myusername
subdom: sdname
domain: mydomain.com
key-pub: kpname
port: 22
wp-config:
locale: en_GB
db:
name: dbname
user: dbuser
pass: dbpass
prefix: wp_
title: My New WP Installation
url: mysite.com
admin:
user: aduser
pass: adpass
email: ademail
Here’s the raku method that generates the perl code to perform the remote backup:
method perl {
my $code = q:to/END/;
#!/usr/bin/perl
use strict;
use warnings;
print "Doing remote backup at %NOW%\n";
`wp db --path='../%WP-DIR%' export %BU-DB-FN%`;
`tar -czf %BU-FS-FN% ../%WP-DIR%/wp-content`;
END
$code ~~ s:g/'%NOW%' /{ $.timestamp}/;
$code ~~ s:g/'%BU-DB-FN%'/{ $.bu-db-fn }/;
$code ~~ s:g/'%BU-FS-FN%'/{ $.bu-fs-fn }/;
$code ~~ s:g/'%WP-DIR%' /{ $.server.wp-dir }/;
$code
}
And here’s the raku code that runs it:
method backup {
my $s := $.server;
my $proc = Proc::Async.new:
:w, qqw|ssh -p { $s.port } -tt -i { $s.key-path } { $s.login }|;
my $promise = $proc.start;
$proc.say("mkdir { $s.tp-dir }");
$proc.say("cd { $s.tp-dir }");
$proc.say("echo \'{ $.perl }\' > exporter.pl");
$proc.say('cat exporter.pl | perl');
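# fixed delay to give the remote backup script time to complete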
sleep 30;
$proc.say("exit");
await $promise;
}
This is a snippet of the source code at https://github.com/librasteve/raku-CLI-Wordpress-Migrator if you want to see the full story.
You are welcome to review it all, fork, PR any changes you would like and use it to manage your WordPress estate!
If you have been paying close attention to my raku module history, you will know that often I have the opportunity to install raku on the remote machine and to run that for various tasks (i.e. raws-ec2 setup). But in the case of Migrator, the idea is to backup and restore remote machines hosted by a third-party firm running cPanel, with tight restrictions on installing non-standard languages. Happily, as with pretty much all modern Linux installs, their hosted systems come preloaded with perl. So automating the process of logging on, saving a generated perl file on the remote system and running it is very widely applicable.
Meantime, driving this with raku is a very natural choice. Raku features that facilitate this include qqx.
Ultimately, technically, perl and raku are very complementary – combining the ubiquity of perl with the expressivity of raku to produce a practical outcome. And both have a familiar look and feel…
As usual, comments and feedback very welcome!
~librasteve
I published my new book: A Language a Day, which is a collection of brief overviews to 21 programming languages.
This book provides a concise overview of 21 different programming languages. Each language is introduced using the same approach: solving several programming problems to showcase its features and capabilities. Languages covered in the book: C++, Clojure, Crystal, D, Dart, Elixir, Factor, Go, Hack, Hy, Io, Julia, Kotlin, Lua, Mercury, Nim, OCaml, Raku, Rust, Scala, and TypeScript.
Each chapter covers the essentials of a different programming language. To make the content more consistent and comparable, I use the same structure for each language, focusing on the following mini projects:
Each language description follows—where applicable—this pattern:
You can find all the code examples in this book on GitHub: github.com/ash/a-language-a-day.
You can buy it on Amazon or LeanPub as an electronic or Kindle edition, or as a paper hardcover or paperback version. More information, with links to the shops, is available.
About 3 weeks ago I thought it was time to go through the outstanding Rakudo compiler issues (an implementation of the Raku Programming Language) to see how many of them would have been fixed by now with the new Raku grammar.
Why? Because we have reached 85+% of test coverage of the new Raku Grammar in roast, the official Raku test-suite.
Success is defined as the number of test files that completely pass without any test failures. Most other test files also have tests passing, but they're not 100% clean.
At that point there were 1312 open Rakudo issues, with the oldest being from 2017.
There are now 778 issues left open. So you could say that's 534 issues closed. Actually, it is a few more as during this 3 week period, a few new issues were posted.
In this blog post I'll be describing the types of issues that I've seen, and how I handled them.
Sadly, there's not a lot I could do about those specific issues, as my JVM foo is close to zero. There are currently 22 open issues for the JVM-backend. If you have any JVM chops, it would be greatly appreciated if you would apply them to solving these Rakudo issues!
When I started, the oldest open issue in the Rakudo repository was about 7.5 years old, and effectively lost in the mists of time. In any case, these oldest issues predated significant changes, such as the new dispatch mechanism. Not to mention that the Raku Programming Language still had a different name then.
If you're really interested in the mists of time, you could check out the old issues that were migrated from the venerable RT system, with the oldest open issue from 2010!
So going from the oldest issue to the newest, it was a matter of checking if the problem still existed. And if it appeared fixed, and if it was possible to write a test for it, write a test for it and close the issue. And sometimes there even was a PR for a test already, so that was even more a no-brainer.
Quite a few issues were actually marked as fixed, but were also marked as needing tests. Sadly, the original author of the issue had not done that, or didn't do that after the issue was fixed. In most cases it took just a few minutes to write a test, test it and commit it.
After 18 months of work in 2020 and 2021, the new dispatch mechanism became the default in the 2021.10 release of the Rakudo compiler. Most of the multi method dispatch related issues that were made before that time appeared fixed. So it was just a matter of writing the tests (if there weren't any yet) and committing them.
While testing all of these issues, I always also tested whether they were fixed in the new Raku grammar, based on the RakuAST project (which is why I started doing this bug hunting streak in the first place).
Running code with the new Raku grammar is as easy as prefixing RAKUDO_RAKUAST=1 to your call to raku. For instance,

raku -e '{ FIRST say "first" }'

does not output anything with the legacy grammar. But with

RAKUDO_RAKUAST=1 raku -e '{ FIRST say "first" }'

it will say "first", because the FIRST phaser fires for any block when it is first executed with the new Raku grammar.
And to my surprise, in many cases they were! For those cases a special test file is being reserved to which tests are added for issues that have been fixed with the new Raku grammar in RakuAST.
These issues are then marked as being fixed in RakuAST but left open, so that people are hopefully prevented from creating duplicate issues for problems that apparently haven't been fixed yet.
A large percentage of these issues appear fixed because they were essentially static optimizer issues, and the new Raku grammar doesn't have any of these compile-time optimisations yet. So it's important to be able to check for regressions of this type once optimizations are being added back in. In turn, these static optimizer issues were often caused by the optimizer not having enough or the correct information for doing optimizations. Which in turn was one of the reasons to start with RakuAST to begin with.
And then there were the issues that were simply still reporting an existing problem. Some of them, with the knowledge that I acquired over the years, looked easy to fix. So I put in some effort to fix them. A non-exhaustive list:
- $:F as placeholder variable with use isms
- Failure objects when numerically comparing non-numerical values
- +permutations(30)
- @*ARGS containing other Cool values (apart from strings)
- Rats with 0 denominator (e.g. <1/0> <=> <-1/0>)
- ⁰¹²³⁴⁵⁶⁷⁸⁹ superscript characters
- the --repl-mode=interactive CLI argument always forcing an interactive REPL
- any junctions in regex interpolation
- val() (such as ⅓)
- rlwrap as line editor in the REPL if no other modules are installed

About 50 of the outstanding issues look like they should be fixable without turning into large projects, so I will be looking at these in the coming days / weeks.
Some of the open issues were basically feature requests. Sometimes I felt that they could be easily implemented (such as several error message improvements) so I implemented them. Others I created a Pull Request for. And for still others I felt a problem-solving issue would be needed (which I then created). And some I closed, knowing almost 100% they would never be accepted.
If this was one of your issues, and you still feel that the feature should become part of the Raku Programming Language, please don't be discouraged! Looking at 60+ issues a day for 3 weeks in a row sometimes made me a bit grumpy at the end of the day. Please make a new problem-solving issue in that case!
Many issues looked like they would be more easily solvable in the new Raku grammar with RakuAST. There are now 289 of them. These will be next on my list.
It was an interesting ride through memory lane the past weeks. With about 200 commits, that's not bad at all!
Note that with these numbers of issues, if I had an error rate of only 1%, there are at least 5 issues that were closed when they shouldn't have been. If you feel that an issue has been closed incorrectly, please leave a comment and I'll re-open it if you cannot do that yourself.
Sadly, because of the additional tests that I wrote, the number of roast test-files passing has now dropped again below the 85% mark. Still, I do think this is progress, as the errors that they check for would have been encountered during the development of the Raku grammar sooner or later anyway.
Anyway, it was fun being able to close as many issues as I did! Want to join in the fun? There are still 778 issues open waiting for someone to close them!
If you like what I'm doing, committing to a small sponsorship would mean a great deal to me!
Late Edit – thanks to a genius suggestion from @wamba, I have made some late changes to improve the code – specifically, the concerns I mentioned in v1 about the when clause and the lisp-like (((( parens are now fixed. Excellent input!!
Regular followers of this blog will know that I am on a bit of a Functional tack … I’m highly motivated to improve my functional chops because I am giving a talk at the London Perl and Raku Conference shortly, entitled Raku HTML::Functional, and I just realised that the audience is going to be a bunch of deep perl and raku (linux) experts who know Functional coding inside out. Yikes … hope I don’t get any tough questions from folks who really know their stuff!
By contrast, I am a dabbler in Functional. I like the feel of using .map and .grep and so on, but I am on the learning curve. And I am resistant to languages that constantly get in my way, since largely I am trying to make code that works rather than to wrestle with compile errors. (And no, I do not work in large teams, since you ask.)
So when I saw a recent post on HN, written in F#, I felt challenged to work out what was going on and to try and relate it to Raku.
type Meat = Chicken | Beef | Pork | Fish | Veggie
type Ingredient =
| Cheese | Rice | Beans | Salsa | Guacamole | SourCream | Lettuce
| Tomato | Onion | Cilantro | PicoDeGallo
type Burrito = Meat option * Ingredient list
let (>>=) burrito f =
match burrito with
| Some meat, ingredients -> f (Some meat, ingredients)
| None, _ -> None, []
let returnBurrito (meat, ingredients) = meat, ingredients
let tortilla = returnBurrito (Some Veggie, [])
let addMeat meat (m, ingredients) = Some meat, ingredients
let addIngredient ingredient (meat, ingredients) =
meat, ingredient :: ingredients
let addMissionBurritoIngredients (meat, ingredients) =
meat, Cheese :: Rice :: Beans :: ingredients
let holdThe ingredient (meat, ingredients) =
meat, List.filter (fun i -> i <> ingredient) ingredients
let burrito =
tortilla
>>= addMeat Chicken
>>= addMissionBurritoIngredients
>>= holdThe Cheese
>>= addIngredient PicoDeGallo
>>= addIngredient Salsa
>>= addIngredient Guacamole
>>= addIngredient SourCream
printfn "%A" burrito
This is from the OP by William Cotton.
Since we are talking Monads, I realised that the raku Definitely module, written by masukomi, would come in handy. This module arose from a post I made here some time back, so it was a good time to revisit.
https://github.com/librasteve/raku-Burrito/blob/main/burrito-dm.raku
use Definitely;
enum Meat <Chicken Beef Pork Fish Veggie>;
enum Ingredient <Cheese Rice Beans Salsa Guacamole SourCream Lettuce
Tomato Onion Cilantro PicoDeGallo>;
sub returnBurrito($meat, @ingredients) {
$meat, @ingredients
}
sub tortilla {
returnBurrito(something(Veggie), [])
}
sub add-meat($meat, ($, @ingredients)) {
something($meat), @ingredients
}
sub add-ingredient($ingredient, ($meat, @ingredients)) {
$meat, [$ingredient, |@ingredients]
}
sub add-mission-burrito-ingredients(($meat, @ingredients)) {
$meat, [Cheese, Rice, Beans, |@ingredients]
}
sub hold-the($ingredient, ($meat, @ingredients)) {
($meat, [@ingredients.grep(* != $ingredient)]);
}
multi infix:«>>=»((None $, @), +@ ) is prec(prec => 'f=') {
nothing(),[]
}
multi infix:«>>=»($burrito, +(&f, *@args)) is prec(prec => 'f=') {
f( |@args, $burrito )
}
tortilla()
>>= (&add-meat, Beef)
>>= (&add-mission-burrito-ingredients)
>>= (&hold-the, Cheese)
>>= (&add-ingredient, PicoDeGallo)
>>= (&add-ingredient, Salsa)
>>= (&add-ingredient, Guacamole)
>>= (&add-ingredient, SourCream)
==> say();
I hope that you will agree that Raku does a generally solid job of handling the translation from F#.
There are a couple of raised eyebrows around the when {...} clauses (EDIT – NOW FIXED), the handling of the variadic arity of the passed-in function in the match, and the lisp-like (((((( parens in the application of the custom binder. Otherwise, it is pretty smooth.
The Definitely module works well here; I have also tried with Rawley Fowler's Monad::Result module, which was similarly successful.
In this self-study, I leaned on the excellent Wikipedia Monad page, which mentions that a true Monad implementation has three operations: a type constructor that wraps a value, a unit (return) operation, and a bind (>>=) operation. And it shows chaining of this halve function as an example of chaining with the bind operator in Haskell:
halve :: Int -> Maybe Int
halve x
| even x = Just (x `div` 2)
| odd x = Nothing
-- This code halves x twice. It evaluates to Nothing if x is not a multiple of 4
halve x >>= halve
So, to improve the Definitely module, I have added a binding operator to be used like this:
use Definitely;
sub halve(Int $x --> Maybe[Int]) {
given $x {
when * %% 2 { something( $x div 2 ) }
when ! * %% 2 { nothing(Int) }
}
}
say (halve 4) >>= &halve; #1
say (something 32) >>= &halve >>= &halve >>= &halve; #4
say (halve 3) ~~ None; #True
Note that the Monad::Result module already provides bind and map operators.
For now this is a PR; feel free to install directly from my fork here if you would like to try it. EDIT: now released to the zef package installer ecosystem…
zef install https://github.com/librasteve/Definitely.git
As usual all comments and feedback welcome!
~librasteve
Back in ’21 I asked the question Can Raku replace HTML? As expected that rather click-baity title got a lot of complaints. So I couldn’t resist repeating the meme.
If you are wondering, Raku can replace PHP literally…
"PHP".subst(/PHP/, 'Raku').say; #Raku
BUT that’s beside the point. Just my sense of -Ofun getting out of hand.
In recent posts, I have been digging into HTMX and Raku Cro…
And while in the web application frame of mind, I started to think maybe I can use Raku with WordPress, perhaps initially to just write some front end with Raku and HTMX served with Cro and to talk to the WP database backend. (This kind of combination is already a thing with WordPress and React).
And then that made me think: yeah, well, WordPress (and Laravel, OJS, etc.) continue to be popular and lend PHP a kind of ongoing zombie existence. PHP is not likely to suddenly bust out of its web language niche, so likely over time it will gradually fade away in popularity. And much of the gravity in web development is going to drag PHPers towards JavaScript. And, since I am a PHP coder in my day job, I realised that (like me) many PHP travellers would rather not get dragged into the JavaScript / React / Composer / Node black hole of complexity. And so maybe Raku and HTMX could one day become a good upgrade path from PHP, since Raku has roots in perl – the original web language – with a friendlier syntax (eg. for OO). Even the $ sigil for variables, the {} curlies and the ; semicolon make for a smooth transition from PHP. Maybe in this niche Raku can ultimately replace PHP…
Then I started to think about what made PHP the go-to language for web developers originally. How would Raku stack up?
Remember this:
<body>
<div class="container">
<h1>Welcome to My Simple PHP Page!</h1>
<p>
Today is:
<?php
// Get the current date and time
echo date("l, F j, Y, g:i a");
?>
</p>
<p>
Random number:
<?php
// Generate a random number between 1 and 100
echo rand(1, 100);
?>
</p>
</div>
</body>
</html>
The full source of this index.php file is in this gist … Simple PHP HTML Page
To serve this page, you can run a server like this:
php -S localhost:8000 -t /path/to/directory
Horrible though it is, this intertwining of PHP and HTML is what made PHP the go-to web language in its heyday. And that got me thinking: could this be done with Raku?
So, knowing Raku to be a very flexible language, I made a new module Cro::WebApp::Evaluate. Here’s the synopsis:
<body>
<div class="container">
<h1>Welcome to My Simple Raku Page!</h1>
<p>
Today is:
<?raku
#Get the current date and time
use DateTime::Format;
say strftime('%Y-%m-%d %H:%M:%S', DateTime.now);
?>
</p>
<p>
Random number:
<?raku
#Generate a random number between 1 and 100
say (^100).pick;
?>
</p>
</div>
</body>
And here’s how to serve this as index.raku using the Raku Cro web framework.
use Cro::HTTP::Router;
use Cro::WebApp::Template;
use Cro::WebApp::Evaluate;
sub routes() is export {
route {
evaluate-location 'evaluates';
get -> {
evaluate 'index.raku';
}
get -> *@path {
static 'static', @path;
}
}
}
I leave it as an exercise for the reader to show how to have Cro render and serve index.php files in a parallel directory and route structure – perhaps for an incremental migration effort.
Do I expect this new module to be embraced by the PHP community? No. In most cases, I think that hybrid PHP/HTML pages like this have been replaced by templating systems or web frameworks.
Am I a little ashamed to have made this module? Yes. Honestly, I would not encourage coders to start using Raku like this – Cro Templates would be a better solution for most projects.
Are there some point needs where this approach can be applied? Maybe. Since this was a seminal feature of early PHP, I expect that there are some point cases where embedding Raku in HTML will be the cleanest way to (re)package some code. For example, where a single dynamic page uses PHP for a database query, wraps the results as json and then passes the data into a JavaScript function for some client-side logic (eg. with the Google Maps API) and dynamic presentation.
Is this a module prerequisite for PHPers to migrate to Raku? Probably not. However, I think that the presence of this module can bring some comfort to PHP coders that anything that can be done in PHP can be (re)done in Raku.
As usual comments & feedback welcome!
~librasteve
In the past 2 years, module distributions in the Raku ecosystem (https://raku.org) that were being published through the original (but deprecated) "p6c" ecosystem were no longer being harvested by the default "zef" harvester. But they were still being harvested by the Raku Ecosystem Archive harvester, and thus any updates to these distributions remained visible.
But this harvesting stopped with the 0.0.26 release of Ecosystem::Archive::Update.
The reasoning for no longer supporting the "p6c" ecosystem is explained in the problem solving issue "Preparing the Raku Ecosystem for the Future".
This means that any updates to the 651 distributions still in the "p6c" ecosystem will not be noticed anymore in the live Raku ecosystem. To alert the authors / current maintainers of these distributions, 202 issues were generated, listing the distributions affected.
Now, one week later, many authors responded to the issue they found in one of their repositories. The reactions were generally positive to this effort. Some of the authors took this notice as an opportunity to update their distribution to the "fez" ecosystem. Kudos to these authors:
Other authors responded that they don't want to spend any time on these distributions anymore, but would like to have them transferred to the Raku Community Module Adoption Center. Kudos to these authors for spending their time and effort on these distributions so far and making them available for the future users of the Raku Programming Language:
Note that the distributions marked "in progress" still need some Tender Loving Care before they will be properly integrated into the Raku ecosystem again. Pull Requests for these distributions are very welcome!
A number of other authors responded that they thought that (some of) their distributions were not fit to be modernized or transferred to the Raku Module Adoption Center. Kudos to these authors nonetheless for their time and efforts in the past, even if that didn't result in something they thought was worth salvaging. These distributions have been removed from the "p6c" ecosystem, which thus contains 552 distributions now: a 15% reduction in one week!
Yours truly will actually look at all of these packages to see whether they warrant a transfer to the Raku Module Adoption Center. Beauty is in the eye of the beholder :-)
All in all a fruitful week in the Raku Ecosystem world.
Many authors that received an issue notice about this, have not responded yet. If you are one of them, please respond to the issue and tell us what you would like (us) to do.
Meanwhile the distributions marked "in progress" could use someone looking at the code and/or the tests to see what's stopping inclusion into the Raku ecosystem again. Many kudos in advance.
And if you're an author who has already transferred their modules to the "zef" ecosystem: thank you for your continued practical support of the Raku Programming Language, and the extension of https://raku.land!
The question has been raised how to get named arguments into sub EXPORT via a use-statement. The ever helpful raiph provided an answer, which in turn left me with the question why he didn’t just use a Capture to move the data around. Well, because that doesn’t work. The compiler actually evaluates the expression \(:1a, :2b) into (1, 2) before passing it on to EXPORT.
If it’s hard, do it functional!
# foo.raku
use v6.d;
constant &transporter = sub { \(:1a, :2b); }
use foo &transporter;
# lib/foo.rakumod
use v6.d;
proto sub EXPORT(|) { * }
multi sub EXPORT(&transporter) {
&EXPORT(|transporter);
}
multi sub EXPORT(:$a, :$b) {
dd $a, $b;
Map.new
}
The idea is to hand a function to use, to be called by EXPORT, and then redispatch the value that is produced by that function, to take advantage of Raku’s excellent signature binding. The proto and referring to sub EXPORT explicitly are needed because there is also a predefined (and in this case hidden) package called EXPORT.
I’m passing on named arguments to EXPORT, but all kinds of stuff could be returned by &transporter, so long as everything is known pretty early on at compile-time. The use-statement is truly an early bird.
According to Larry, laziness is a programmer's virtue. The best way to be lazy is having somebody else do it. By my request, SmokeMachine kindly did so. This is not fair: we both should have been lazy and offloaded the burden to the CORE-team.
Please consider the following code.
my @many-things = (1..10).List;
sub doing-one-thing-at-a-time($foo) { ... }
say doing-one-thing-at-a-time(@many-things.all);
Rakudo goes out of its way to create the illusion that sub doing-one-thing-at-a-time can deal with a Junction. It can’t; the dispatcher does all the work of running the code in parallel. There are tricks we can play to untangle a Junction, but there is no guarantee that all values are produced. Junctions are allowed to short-circuit.
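A minimal sketch of that auto-threading illusion (my illustration, not from the original post): the sub only ever sees one Int at a time; the dispatcher calls it once per eigenstate and assembles the results back into a Junction.

sub double-one-thing(Int $x) {
    say "called with $x";          # runs once per eigenstate
    $x * 2
}
say double-one-thing(1 | 2 | 3);   # any(2, 4, 6)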
This was bouncing around in my head for quite some time, until it collided with my thoughts about Range. We may be handling HyperSeq and RaceSeq wrong.
my @many-things = (1..10).List;
sub doing-one-thing-at-a-time($foo) { ... }
say doing-one-thing-at-a-time(@many-things.hyper(:degree<10>));
As with Junctions, where dispatch-magic makes them just work, moving the hyper/race handling to the dispatcher would move the decision from the callee to the caller and, as such, from the author of a module to the user. We can do that by hand already with .hyper.grep(*.foo) or other forms of boilerplate. In Raku-land we should be able to do better and provide a generalisation of transforming calls with the help of the dispatcher.
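For contrast, a sketch of the by-hand boilerplate that already works today, where the caller rather than the callee opts in to parallelism:

my @many-things = (1..10).List;
sub doing-one-thing-at-a-time($foo) { $foo * 2 }

# the caller parallelises the map; the sub still sees one value at a time
say @many-things.hyper(:degree(10)).map(&doing-one-thing-at-a-time).List;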
I now know what to ask Santa for this year.
My version of JSON::Class is now released. The previous post explains why this is worth a note.
Lately, some unhappiness has popped up about Range and its incomplete numericalness. Having just one blogpost about it is clearly not enough, given how big Ranges can be.
say (-∞..∞).elems;
# Cannot .elems a lazy list
in block <unit> at tmp/2021-03-08.raku line 2629
I don’t quite agree with Rakudo here. There are clearly ∞ elements in that lazy list. This could very well be special-cased.
The argument has been made that many operators in Raku tell you what type the returned value will have. Is that so? (This question is always silly or unnecessary.)
say (1 + 2&3).WHAT;
# (Junction)
Granted, Junction is quite special. But so are Ranges. Yet Raku covers the former everywhere, while the latter feels incomplete. Please consider the following code.
multi sub infix:<±>(Numeric \n, Numeric \variance --> Range) {
(n - variance) .. (n + variance)
}
say 2.6 > 2 ± 0.5;
# True
my @heavy-or-light = 25.6, 50.3, 75.4, 88.8;
@heavy-or-light.map({ $_ ≤ 75 ± 0.5 ?? „$_ is light“ !! „$_ is heavy“ }).say;
# (25.6 is heavy 50.3 is heavy 75.4 is heavy 88.8 is heavy)
To me that looks like it should DWIM. It doesn’t, because &infix:«≤» defaults to coercing to Real and then comparing numerically.
This could easily be fixed by adding a few more multis and I don’t think it would break any production code. We already provide quite a few good tools for scientists. And those scientists do love their error bars — which are ranges. I would love for them to have another reason to use Raku over … that other language.
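A minimal sketch of what one such multi might look like (shown with the ASCII spelling of the operator; the semantics – a value is less than or equal to a Range when it does not exceed the Range's upper bound – are my assumption, not a settled design):

# one possible extra candidate: compare a number against a Range
multi sub infix:«<=»(Real \a, Range:D \r) { a <= r.max }

# parens needed here: infix:<..> binds looser than the comparison
say 75.4 <= (74.5..75.5);   # True – 75.4 does not exceed 75.5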
This will be a short one. I have recently released a family of WWW::GCloud modules for accessing Google Cloud services. Their REST API is, apparently, JSON-based. So, I made use of the existing JSON::Class. Unfortunately, it was missing some features critically needed for my work project. I implemented a couple of workarounds, but still felt like it's not the way it has to be. Something akin to LibXML::Class would be great to have…
There was a big “but” in this. We already have XML::Class, LibXML::Class, and the current JSON::Class. All are responsible for doing basically the same thing: de-/serializing classes. If I wanted another JSON serializer, then I had to take into account that the name JSON::Class is already taken. There are three ways to deal with it; the last one would be to keep the JSON::Class name and re-implement it as a backward-incompatible version. The first two options didn’t appeal to me. The third one is now about to happen.
I expect it to be a stress-test for the Raku ecosystem as, to my knowledge, it's going to be the first case where two different modules share the same name but not the publisher.
As a little reminder: those who wish to keep using the original module would specify JSON::Class:auth<zef:jonathanstowe> in their dependencies and, perhaps, in their use statement; those who want to try my version would use JSON::Class:auth<zef:vrurg>.

There is still some time before I publish it because the documentation is not ready yet.
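In code, the pinning would look like this (a minimal sketch):

use JSON::Class:auth<zef:jonathanstowe>;   # stay with the original module
# or
use JSON::Class:auth<zef:vrurg>;           # opt in to the new implementation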
Let’s 🤞🏻.
I was always concerned about making things easier.
No, not this way. A technology must be easy to start with, but also be easy in accessing its advanced or fine-tunable features. Let’s have an example of the former.
This post is a quick hack, no proof-reading or error checking is done. Please, feel free to report any issue.
Part of my ongoing project is to deal with JSON data and deserialize it into Raku classes. This is certainly a task for JSON::Class. So far, so good.

The keys of JSON structures tend to use lower camel case, which is OK, but we like kebabing in Raku. Why not, there is JSON::Name. But using it…
There are roles. At the point when I came to the final solution, I was already doing something like this¹:
class SomeStructure does JSONRecord {...}
Then there is AttrX::Mooish, which is my lifevest on many occasions:
use AttrX::Mooish;
class Foo {
has $.foo is mooish(:alias<bar>);
}
my $obj = Foo.new: bar => "the answer";
say $obj.foo; # the answer
Apparently, this way it would still be a lot of manual interaction with aliasing, and that's what I was already doing for a while until I realized that there is a better way. But we'll be back to this later…
And, eventually, there are traits and MOP.
That’s the easiest part. What I want is to make makeThisName look like make-this-name. Ha, big deal!
unit module JSONRecord::Utils;
our sub kebabify-attr(Attribute:D $attr) {
if $attr.name ~~ /<.lower><.upper>/ {
my $alias = (S:g/<lower><upper>/$<lower>-$<upper>/).lc given $attr.name.substr(2);
...
}
}
I don’t export the sub because it’s mostly for internal use. Should somebody need it for other purposes, it’s a rare case where a long name like JSONRecord::Utils::kebabify-attr($attr) must not be an issue.

The sub is not optimal; it’s what I came up with while experimenting with the approach. The number of method calls and regexes can be reduced.
I’ll get back later to the yada-yada-yada up there.
Now we need a bit of MOP magic. To handle all attributes of a class we need to iterate over them and apply the aliasing. The first thing that comes to mind is to use the role body, because it is invoked at the early class composition times:
unit role JSONRecord;
for ::?CLASS.^attributes(:local) -> $attr {
# take care of it...
}
Note the word “early” I used above. It actually means that when the role’s body is executed, there are likely more roles waiting for their turn to be composed into the class. So, there are likely more attributes to be added to the class.
But we can override the Metamodel::ClassHOW method compose_attributes of our target ::?CLASS and rest assured that no attribute will be missed:
unit role JSONRecordHOW;
use JSONRecord::Utils;
method compose_attributes(Mu \obj, |) {
for self.attributes(obj, :local) -> $attr {
# Skip if it already has `is mooish` trait applied – we don't want to mess up with user's intentions.
next if $attr ~~ AttrX::Mooish::Attribute;
JSONRecord::Utils::kebabify-attr($attr);
}
nextsame
}
Basically, that’s all we currently need to finalize the solution. We can still use role’s body to implement the key elements of it:
unit role JSONRecord;
use JSONRecordHOW;
unless ::?CLASS.HOW ~~ JSONRecordHOW {
::?CLASS.HOW does JSONRecordHOW;
}
Job done! Don’t worry, I haven’t forgot about the yada-yada-yada above!
But…
The original record role name itself is even longer than JSONRecord, and it consists of three parts. I’m lazy. There are a lot of JSON structures and I want less typing for each. A trait? is jrecord?
unit role JSONRecord;
multi sub trait_mod:<is>(Mu:U \type, Bool:D :$jrecord) is export {
unless type.HOW ~~ JSONRecordHOW {
type.HOW does JSONRecordHOW;
type.^add_role(::?ROLE);
}
}
Now, instead of class SomeRecord does JSONRecord, I can use class SomeRecord is jrecord. In the original case the win is even bigger.
There is absolutely nothing funny about it. Just a common way to keep a reader interested!
Seriously.
The reason for the yada in that snippet is to avoid a distraction from the primary purpose of the example. Here is what is going on there:
I want AttrX::Mooish to do the dirty work for me. Eventually, what is needed is to apply the is mooish trait as shown above. But traits are just subs. Therefore all that is needed now is to call:

&trait_mod:<is>($attr, :mooish(:$alias));

because this is what Raku does internally when it encounters is mooish(:alias(...)). The final version of the kebabifying sub is:
our sub kebabify-attr(Attribute:D $attr) {
if $attr.name ~~ /<.lower><.upper>/ {
my $alias = (S:g/<lower><upper>/$<lower>-$<upper>/).lc given $attr.name.substr(2);
&trait_mod:<is>($attr, :mooish(:$alias));
}
}
Since the sub is used by the HOW above, we can say that &trait_mod:<is> would be called at compile time².
Now, it used to be:
class SomeRecord does JSONRecord {
has $.aLongAttrName is mooish(:alias<a-long-attr-name>);
has $.shortname;
}
Where, as you can see, I had to transfer JSON key names to attribute names, decide where aliasing was needed, add it, and make sure that no mistakes were made and no attributes were missed.
With the above rather simple tweaks:
class SomeRecord is jrecord {
has $.aLongAttrName;
has $.shortname;
}
Job done.
Before I came down to this solution I had already got 34 record classes implemented using the old approach. Some are little, some are quite big. But it most certainly would have taken much less time had I had the trait at my disposal back then…
I have managed to finish one more article in the Advanced Raku For Beginners series, this time about type and object composition in Raku.

It's likely to take a long time before I can write another.
Once, long ago, a few people coincidentally were asking the same question: how do I get a method object of a class?

Answers to the question would depend on the particular circumstances of the code where this functionality is needed. One would be about using MOP methods like .^lookup; the other is to use the method name and indirect resolution on the invocant: self."$method-name"(...). Both are the most useful, in my view. But sometimes declaring a method as our can be helpful too:
class Foo {
our method bar {}
}
say Foo::<&bar>.raku;
Just don’t forget that this way we always get the method of class Foo, even if a subclass overrides method bar.
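For completeness, here are minimal sketches (my illustrations, with a hypothetical class) of the first two approaches mentioned above:

class Greeter {
    method bar { 'bar called' }
}

# MOP lookup returns the Method object itself:
my $meth = Greeter.^lookup('bar');
say $meth.name;                   # bar

# indirect resolution calls a method by name on an invocant:
my $method-name = 'bar';
say Greeter.new."$method-name"(); # bar called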
In the earliest days of Raku, Damian Conway specified a documentation markup language to accompany it. Since it was modeled on Perl's POD, it was called <sound of trumpets and dramatic pause> POD6.
The Specification of POD6 (S26) was mostly incorporated without much extra explanation into the documentation suite. In this way, the description of POD6 was itself an illustration of many of the features it documented, and of some that it did not document.
Since Raku is defined by its test suite, and not its documentation, there were other details of POD6 in the tests that were not documented, even in S26.
Raku developed and morphed, but POD6 remained. The tooling for rendering the documentation sources needed updating, and the documentation site had to be modernised.
A project of mine was to upgrade the basic renderer that would transform POD6 to HTML, but allow developers to customise the templates for each type of POD6 block. (The first Pod::To::HTML renderer hard-coded representations of POD6 markup, eg. B<this is bold> became <strong>this is bold</strong> and could not be changed.)
It turned out that S26 allowed for much more than had been included in the first documentation sources, including custom blocks and custom markup.
The project to upgrade the original HTML renderer morphed into Raku::Pod::Render, and transforming a directory full of individual documentation sources into an interlinked and searchable set of documents required another layer of tooling: Collection. For example, collecting together all the pages that can be grouped as tutorials, or reference, or language, and creating a separate page for them automatically.
I covered these two projects in a presentation to RakuCon 2022.
Some of the original ideas in S26 had not been implemented, such as aliases and generic numbering. Other ideas had become outdated, such as a way to specify document encoding, which is now solved with Unicode.
In addition, RakuAST (see RakuAST for early adopters ) is on the horizon, which will radically change the speed of documentation processing.
There are also two implementations of POD6, one in Raku and one in Javascript, namely Alexandr Zahatski's Podlite.
This was an ideal time to revisit POD6 and recast it into Rakudoc – the new name for the markup language, with its new file extension ".rakudoc".
I was invited to the first Raku Core Summit and I put together a presentation about the changes I thought needed to be made based on my own experience, but also using comments from other developers.
We came to a number of consensus agreements about the minimal changes that were needed, and some extra functionality to handle new questions, such as documentation versioning.
It was also clear that Rakudoc (aka POD6) has two separate parts: components that interact closely with the program being documented, and components that will be rendered separately into HTML (or an ebook). The documentation file needs to make this clear.
I have now written the first draft of the revision and the documentation file that encapsulates it. An HTML version can be found at new-raku.finanalyst.org/language/rakudoc, alongside the old documentation file and the simple table implementation. I am planning future blogs to describe some of the proposed revisions.
However, none of the revisions will break existing POD6, so Rakudoc should be backwards compatible with POD6. The version at new-raku is a VERY early first draft, and it will go through several review stages.
The first Raku Core Summit was organised by Elizabeth Mattijsen and hosted by Elizabeth and Wendy at their home. It was a really good meeting and I am sincerely grateful for their generosity and hospitality. The summit was also supported by The Perl and Raku Foundation, Rootprompt, and Edument.
The first Raku Core Summit, a gathering of folks who work on “core” Raku things, was held on the first weekend of June, and I was one of those invited to attend. It’s certainly the case that I’ve been a lot less active in Raku things over the last 18 months, and I hesitated for a moment over whether to go. However, even if I’m not so involved day to day in Raku things at the moment, I’m still keen to see the language and its ecosystem move forward, and – having implemented no small amount of the compiler and runtime since getting involved in 2007 – I figured I’d find something useful to do there!
The area I was especially keen to help with is RakuAST, something I started, and that I’m glad I managed to bring far enough that others could see the potential and were excited enough to pick it up and run with it.
One tricky aspect of implementing Raku is the whole notion of BEGIN time (of course, this is also one of the things that makes Raku powerful and thus is widely used). In short, BEGIN time is about running code during the compile time, and in Raku there’s no separate meta-language; anything you can do at runtime, you can (in principle) do at compile time too. The problem at hand was what to do about references from code running at compile time to lexically scoped symbols in the surrounding scope. Of note, that lexical scope is still being compiled, so doesn’t really exist yet so far as the runtime is concerned. The current compiler deals with this by building up an entire flattened table of everything that is visible, and installing it as a fake outer scope while running the BEGIN-time code. This is rather costly, and the hope in RakuAST was to avoid this kind of approach in general.
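As a minimal illustration of BEGIN time (my example, not from the post), the right-hand side below runs while the surrounding code is still being compiled:

# the BEGIN prefix runs its expression at compile time;
# the resulting value is then fixed into the compiled program
my $compiled-at = BEGIN DateTime.now;
say "compiled at $compiled-at";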
A better solution seemed to be at hand by spotting such references during compilation, resolving them, and fixating them – that is, they get compiled as if they were lookups into a constant table. (This copies the suggested approach for quasiquoted code that references symbols in the lexical scope of where the quasiquoted code appears.) This seemed promising, but there’s a problem:
my $x = BEGIN %*ENV<DEBUG> ?? -> $x { note "Got $x"; foo($x) } !! -> $x { foo($x) };
It’s fine to post-declare subs, and so there’s no value to fixate. Thankfully, the generalized dispatch mechanism can ride to the rescue; we can:
When compiling Raku code, timing is everything. I knew this and tried to account for it in the RakuAST design from the start, but a couple of things in particular turned out a bit awkward.
I got a decent way into this restructuring work during the core summit, and hope to find time soon to get it a bit further along (I’ve been a mix of busy, tired, and had an eye infection to boot since getting back from the summit, so thus far there’s not been time for it).
I also took part in various other discussions and helped with some other things; those that are probably most worth mentioning are:
Thanks goes to Liz for organizing the summit, to Wendy for keeping everyone so well fed and watered, to the rest of attendees for many interesting discussions over the three days, to TPRF and Rootprompt for sponsoring the event, and to Edument for supporting my attendance.
I’d like to thank everyone who voted for me in the recent Raku Steering Council elections. By this point, I’ve been working on the language for well over a decade, first to help turn a language design I found fascinating into a working implementation, and since the Christmas release to make that implementation more robust and performant. Overall, it’s been as fun as it has been challenging – in a large part because I’ve found myself sharing the journey with a lot of really great people. I’ve also tried to do my bit to keep the community around the language kind and considerate. Receiving a vote from around 90% of those who participated in the Steering Council elections was humbling.
Alas, I’ve today submitted my resignation to the Steering Council, on personal health grounds. For the same reason, I’ll be taking a step back from Raku core development (Raku, MoarVM, language design, etc.) Please don’t worry too much; I’ll almost certainly be fine. It may be I’m ready to continue working on Raku things in a month or two. It may also be longer. Either way, I think Raku will be better off with a fully sized Steering Council in place, and I’ll be better off without the anxiety that I’m holding a role that I’m not in a place to fulfill.
I want to revive Carl Mäsak's Coding Contest as a crowd-sourced contest.
The contest will be in four phases:
For the first phase, development of tasks, I am looking for volunteers who come up with coding tasks collaboratively. Sadly, these volunteers, including myself, will be excluded from participating in the second phase.
I am looking for tasks that ...
This is non-trivial, so I'd like to have others to discuss things with, and to come up with some more tasks.
If you want to help with task creation, please send an email to [email protected], stating your intentions to help, and your freenode IRC handle (optional).
There are other ways to help too:
In these cases you can use the same email address to contact me, or use IRC (moritz on freenode) or twitter.
After a perilous drive up a steep, narrow, winding road from Lake Geneva we arrived at an attractive Alpine village (Villars-sur-Ollon) to meet with fellow Perl Mongers in a small restaurant. There followed much talk and a little clandestine drinking of exotic spirits including Swiss whisky. The following morning walking to the conference venue there was an amazing view of mountain ranges. On arrival I failed to operate the Nespresso machine which I later found was due to it simply being off. Clearly software engineers should never try to use hardware. At least after an evening of drinking.
Wendy’s stall was piled high with swag including new Bailador (Perl 6 dancer-like framework) stickers, a Shadowcat booklet about Perl 6 and the new O’Reilly “Think Perl 6”. Unfortunately she had sold out of Moritz’s book “Perl 6 Fundamentals” (although there was a sample display copy present). Thankfully later that morning I discovered I had a £3 credit on Google Play Books so I bought the ebook on my phone.
The conference started early with Damian Conway’s Three Little Words. These were “has”, “class” and “method” from Perl 6 which he liked so much that he had added them to Perl 5 with his “Dios” – “Declarative Inside-Out Syntax” module. PPI wasn’t fast enough so he had to replace it with a 50,000 character regex PPR. Practical everyday modules mentioned included Regexp::Optimizer and Test::Expr. If the video doesn’t appear shortly on youtube a version of his talk dating from a few weeks earlier is available at https://www.youtube.com/watch?v=ob6YHpcXmTg
Jonathan Worthington returned with his Perl 6 talk on “How does deoptimization help us go faster?” giving us insight into why Perl 6 was slow at the Virtual Machine level (specifically MoarVM). Even apparently simple and fast operations like indexing an array were slow due to powerful abstractions, late binding and many levels of Multiple Dispatch. In short the flexibility and power of such an extensible language also led to slowness due to the complexity of code paths. The AST optimizer helped with this at compile time but itself took time and it could be better to do this at a later compile time (like Just In Time). Even with a simple program reading lines from a file it was very hard to determine statically what types were used (even with type annotations) and whether it was worth optimizing (since the file could be very short).
The solution to these dynamic problems was also dynamic, but seeing what was happening needed cheap logging of execution, which was passed to another thread. This logging is made visible by setting the environment variable MVM_SPESH_LOG to a filename. Better tooling for this log would be a good project for someone.
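For instance, the log can be captured like this (my example of an invocation, using the perl6 binary of the day; the script name is hypothetical):

MVM_SPESH_LOG=/tmp/spesh.log perl6 your-script.p6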
For execution planning we look for hot (frequently called) code, long blocks of bytecode (slow to run) and consider how many types are used (avoiding “megamorphic” cases with many types, which need many versions of code). There is also analysis of the code flow between different code blocks and SSA. Mixins made the optimization particularly problematic.
MoarVM’s Spesh did statistical analysis of the code in order to rewrite it in faster, simpler ways. Guards (cheap checks for things like types) were placed to catch cases where it got it wrong, and if these were triggered (infrequently) it would deoptimize as well, hence the counterintuitive title, since “Deoptimization enables speculation”. The slides are at http://jnthn.net/papers/2017-spw-deopt.pdf with the video at https://www.youtube.com/watch?v=3umNn1KnlCY The older and more dull witted of us (including myself) might find the latter part of the video more comprehensible at 0.75 Youtube speed.
After a superb multi-course lunch (the food was probably the best I’d had at any Perl event) we returned promptly to hear Damian talk of “Everyday Perl 6”. He pointed out that it wasn’t necessary to code golf obfuscated extremes of Perl 6 and that the average Perl 5 programmer would see many things simpler in Perl 6. Also a rewrite from 5 to 6 might see something like 25% fewer lines of code since 6 was more expressive in syntax (as well as more consistent) although performance problems remained (and solutions in progress as the previous talk had reminded us).
Next Liz talked of a “gross” (in the numerical sense of 12 x 12 rather than the American teen sense) of Perl 6 Weeklies as she took us down memory lane to 2014 (just about when MoarVM was launched and when unicode support was poor!) with some selected highlights and memories of Perl 6 developers of the past (and hopefully future again!). Her talk was recorded at https://www.youtube.com/watch?v=418QCTXmvDU
Cal then spoke of Perl 6 maths which he thought was good with its Rats and FatRats but not quite good enough and his ideas of fixing it. On the following day he showed us he had started some TDD work on TrimRats. He also told us that Newton’s Method wasn’t very good but generated a pretty fractal. See https://www.youtube.com/watch?v=3na_Cx-anvw
Lee spoke about how to detect Perl 5 memory leaks with various CPAN modules and his examples are at https://github.com/leejo/Perl_memory_talk
The day finished with Lightning Talks and a barbecue at givengain — a main sponsor.
On the second day I noticed the robotic St Bernards dog in a tourist shop window had come to life.
Damian kicked off the talks with my favourite of his talks, “Standing on the Shoulders of Giants”, starting with the Countess of Lovelace and her Bernoulli number program. This generated a strange sequence with many zeros. The Perl 6 version since it used rational numbers not floating point got the zeros right whereas the Perl 5 version initially suffered from floating point rounding errors (which are fixable).
Among other things he showed us how to define a new infix operator in Perl 6. He also showed us a Perl 6 sort program that looked exactly like LISP even down to the Lots of Irritating Superfluous Parentheses. I think this was quicksort (he certainly showed us a picture of Sir Tony Hoare at some point). Also a very functional (Haskell-like) equivalent with heavy use of P6 Multiple Dispatch. Also included was demonstration of P6 “before” as a sort of typeless/multi-type comparison infix. Damian then returned to his old favourite of Quantum Computing.
My mind and notes got a bit jumbled at this point but I particularly liked the slide that explained how factorisation could work by observing the product of possible inputs since this led to a collapse that revealed the factors. To do this on RSA etc., of course, needs real hardware support which probably only the NSA and friends have (?). Damian’s code examples are at http://www.bit.do/Perl6SOG with an earlier version of his talk at https://www.youtube.com/watch?v=Nq2HkAYbG5o Around this point there was a road race of classic cars going on outside up the main road into the village and there were car noises in the background that strangely were more relaxing than annoying.
After Quantum Chaos Paul Johnson brought us all back down to ground with an excellent practical talk on modernising legacy Perl 5 applications based on his war stories. Hell, of course, is “Other People’s Code”, often dating from Perl’s early days and lacking documentation and sound engineering.
Often the original developers had long since departed or, in the worse cases, were still there. Adding tests and logging (with stack traces) were particularly useful. As was moving to git (although its steep learning curve meant mentoring was needed) and handling CPAN module versioning with pinto. Many talks had spoken of the Perl 6 future whereas this spoke of the Perl 5 past and present and the work many of us suffer to pay the bills. It’s at https://www.youtube.com/watch?v=4G5EaUNOhR0
Jonathan then spoke of reactive distributed software. A distributed system is an async one where “Is it working?” means “some of it is working but we don’t know which bits”. Good OO design is “tell don’t ask” — you tell remote service to do something for you and not parse the response and do it yourself thus breaking encapsulation. This is particularly important in building well designed distributed systems since otherwise the systems are less responsive and reliable. Reactive (async) works better for distributed software than interactive (blocking or sync).
We saw a table that used a Perl 6 promise for one value and a supply for many values for reactive (async) code and the equivalent (one value) and a Perl 6 Seq for interactive code. A Supply could be used for pub/sub and the Observer Pattern. A Supply could either be live (like broadcast TV) or, for most Perl 6 supplies, on-demand (like Netflix). Then samples of networking (socket) based code were discussed including a web client, web server and SSH::LibSSH (async client bindings often very useful in practical applications like port forwarding)
https://github.com/jnthn/p6-ssh-libssh
Much of the socket code had a pattern of react { whenever { blocks, with whenever as a sort of async loop. He then moved on from sockets to services (using a Supply pipeline) and amazed us by announcing the release of “cro”, a microservices library that even supports HTTP/2 and Websockets, at http://mi.cro.services/. This is installable using Perl 6 by “zef install --/test cro”.
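A minimal sketch of that react/whenever pattern (my illustration, not code from the talk): an async echo server where each whenever block reacts to a stream of events, and the whole react block runs until interrupted.

react {
    whenever IO::Socket::Async.listen('127.0.0.1', 3333) -> $conn {
        whenever $conn.Supply.lines -> $line {
            await $conn.print("echo: $line\n");
        }
    }
}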
Slides at http://jnthn.net/papers/2017-spw-sockets-services.pdf and video at https://www.youtube.com/watch?v=6CsBDnTUJ3A
Next Lee showed Burp Scanner which is payware but probably the best web vulnerabilities scanner. I wondered if anyone had dare run it on ACT or the hotel’s captive portal.
Wendy did some cheerleading in her “Changing Image of Perl”. An earlier version is at https://www.youtube.com/watch?v=Jl6iJIH7HdA
Sue’s talk was “Spiders, Gophers, Butterflies” although the latter were mostly noticeably absent. She promises me that a successor version of the talk will use them more extensively. Certainly any Perl 6 web spidering code is likely to fit better on one slide than the Go equivalent.
During the lightning talks Timo showed us a very pretty Perl 6 program using his SDL2::Raw to draw an animated square spiral with hypnotic colour-cycling patterns. There was also a talk by the author of https://bifax.org/bif/ -- a distributed bug tracking system (which works offline, like git).
Later in the final evening many of us ate and chatted in another restaurant where we witnessed a dog fight being narrowly averted and learnt that Wendy didn’t like Perl 5’s bless for both technical and philosophical reasons.
Time for some old man's reminiscence. Or so it feels when I realize that I've spent more than 10 years involved with the Perl 6 community.
It was February 2007.
I was bored. I had lots of free time (crazy to imagine that now...), and I spent some of that answering (Perl 5) questions on perlmonks. There was a category of questions where I routinely had no good answers, and those were related to threads. So I decided to play with threads, and got frustrated pretty quickly.
And then I remembered that a friend at school had told me (about four years earlier) that there was this Perl 6 project that wanted to do concurrency really well, and even automatically parallelize some stuff. That had been some time ago -- maybe they had gotten somewhere since?
So I searched the Internet, and found out about Pugs, a Perl 6 compiler written in Haskell. And I wanted to learn more, but some of the links to the presentations were dead. I joined the #perl6 IRC channel to report the broken link.
And within three minutes I got a "thank you" for the report, the broken links were gone, and I had an invitation for a commit bit to the underlying SVN repo.
I stayed.
Those were the wild young days of Perl 6 and Pugs. Audrey Tang was pushing Pugs (and Haskell) very hard, often implementing a feature within 20 minutes of somebody mentioning it. Things were unstable, broken often, and usually fixed quickly. No idea was too crazy to be considered or even implemented.
We had bots that evaluated Perl 6 and Haskell code and gave the result directly on IRC. There were lots of cool (and sometimes somewhat frightening) automations, for example for inviting others to the SVN repo or to the shared hosting system (called feather), for searching SVN logs, and so on. Since git was still obscure and rather unusable, people tried SVK, an attempt to implement a decentralized version control system on top of the SVN protocol.
Despite some half-hearted attempts, I didn't really make inroads into compiler development. Having worked with neither Haskell nor compilers before made for a pretty steep learning curve. Instead I focused on some early modules, documentation, tests, and asking and answering questions. When the IRC logger went offline for a while, I wrote my own, which is still in use today.
I felt at home in that IRC channel and the community. When the community asked for mentors for the Google Summer of Code project, I stepped up. The project was a revamp of the Perl 6 test suite, and to prepare for the mentoring task I decided to dive deeper. That made me the maintainer of the test suite.
I can't recount a full history of Perl 6 projects during that time range, but I want to reflect on some projects that I considered my pet projects, at least for some time.
It is not quite clear from this (very selective) timeline, but my Perl 6 related activity dropped around 2009 or 2010. This is when I started to work full time, moved in with my girlfriend (now my wife), and started to plan a family.
The technologies and ideas in Perl 6 are fascinating, but that's not what kept me. I came for the technology, but stayed for the community.
There were and are many great people in the Perl 6 community, some of whom I am happy to call my friends. Whenever I get the chance to attend a Perl conference, workshop or hackathon, I find a group of Perl 6 hackers to hang out and discuss with, and generally have a good time.
Four events stand out in my memory. In 2010 I was invited to the Open Source Days in Copenhagen. I missed most of the conference, but spent a day or two with (if memory serves right) Carl Mäsak, Patrick Michaud, Jonathan Worthington and Arne Skjærholt. We spent some fun time trying to wrap our minds around macros, the intricacies of human and computer language, and Japanese food. (OK, the last one was easy.) Later the same year, I attended my first YAPC::EU in Pisa, and met most of the same crowd again -- this time joined by Larry Wall, and over three or four days. I still fondly remember the Perl 6 hallway track from that conference. In 2012 I flew to Oslo for a Perl 6 hackathon, with a close-knit, fabulous group of Perl 6 hackers. Finally, the Perl Reunification Summit in the beautiful town of Perl in Germany brought together Perl 5 and Perl 6 hackers in a very relaxed atmosphere.
For three of these four events, different private sponsors from the Perl and Perl 6 community covered travel and/or hotel costs, with their only motivation being meeting folks they liked, and seeing the community and technology flourish.
The Perl 6 community has evolved a lot over the last ten years, but it is still a very friendly and welcoming place. There are lots of "new" folks (where "new" is everybody who joined after me, of course :D), and a surprising number of the old guard still hang around, some more involved, some less, all of them still very friendly and supportive.
I anticipate that my family and other projects will continue to occupy much of my time, and it is unlikely that I'll be writing another Perl 6 book (after the one about regexes) any time soon. But the Perl 6 community has become a second home for me, and I don't want to miss it.
In the future, I see myself supporting the Perl 6 community through infrastructure (community servers, IRC logs, running IRC bots etc.), answering questions, writing a blog article here and there, but mostly empowering the "new" guard to do whatever they deem best.
After about nine months of work, my book Perl 6 Fundamentals is now available for purchase on apress.com and springer.com.
The ebook can be purchased right now, and comes in the epub and PDF formats (with watermarks, but DRM free). The print form can be pre-ordered from Amazon, and will become ready for shipping in about a week or two.
I will make a copy of the ebook available for free for everybody who purchased an earlier version, "Perl 6 by Example", from LeanPub.
The book is aimed at people familiar with the basics of programming; prior Perl 5 or Perl 6 knowledge is not required. It features a practical example in most chapters (no mammal hierarchies or class Rectangle inheriting from class Shape), ranging from simple input/output and text formatting to plotting with Python's matplotlib library. Other examples include date and time conversion, a Unicode search tool and a directory size visualization.
I use these examples to explain a subset of Perl 6, with many pointers to more documentation where relevant. Perl 6 topics include the basic lexical structure, testing, input and output, multi dispatch, object orientation, regexes and grammars, usage of modules, functional programming and interaction with Python libraries through Inline::Python.
Let me finish with Larry Wall's description of this book, quoted from his foreword:
It's not just a reference, since you can always find such materials online. Nor is it just a cookbook. I like to think of it as an extended invitation, from a well-liked and well-informed member of our circle, to people like you who might want to join in on the fun. Because joy is what's fundamental to Perl. The essence of Perl is an invitation to love, and to be loved by, the Perl community. It's an invitation to be a participant of the gift economy, on both the receiving and the giving end.
The Perl 6 naming debate has started again. And I guess with good reason: teaching people that Perl 6 is a Perl, but not the Perl, requires too much effort. Two years ago, I didn't believe that. Now you're reading a tired man's words.
I'm glad that this time, we're not discussing giving up the "Perl" brand, which still has very positive connotations in my mind, and in many other minds as well.
And yet, I can't bring myself to like "Rakudo Perl 6" as a name. There are two very shallow reasons for that: one, going from two syllables ("Perl six") to five seems a step in the wrong direction; and two, I remember the days when the name was pretty young and people would misspell it all the time. That seems to have abated, though I don't know why.
But there's also a deeper reason, probably sentimental old man's reason. I remember the days when Pugs was actively developed, and formed the center of a vibrant community. When kp6 and SMOP and all those weird projects were around. And then, just when it looked like there was only a single compiler was around, Stefan O'Rear conjured up niecza, almost single-handedly, and out of thin air. Within months, it was a viable Perl 6 compiler, that people on #perl6 readily recommended.
All of this was born out of the vision that Perl 6 was a language with no single, preferred compiler. Changing the language name to include the compiler name means abandoning this vision. How can we claim to welcome alternative implementations when the commitment to one compiler is right in the language name?
However, I can't weigh this loss of vision against a potential gain in popularity. I can't decide if it's my long-term commitment to the name "Perl 6" that makes me resent the new name, or valid objections. The lack of vision mirrors my own state of mind pretty well.
I don't know where this leaves us. I guess I must apologize for wasting your time by publishing this incoherent mess.
At YAPC::EU 2010 in Pisa I received a business card with "Rakudo Star" and the date July 29, 2010, which was the date of the first release -- announced a week earlier with a countdown to 1200 UTC. I still have mine, although it has a tea stain on it, and I refreshed my memory over the holidays by listening again to Patrick Michaud speaking about the launch of Rakudo Star (R*):
https://www.youtube.com/watch?v=MVb6m345J-Q
R* was originally intended as the first of a number of distribution releases (as opposed to compiler releases) -- useable by early adopters but not initially production quality. Other names were considered at the time, like Rakudo Beta (rejected as sounding like "don't use this"!) and, amusingly, Rakudo Adventure Edition. Finally it became Rakudo Whatever and Rakudo Star (since * means "whatever"!). Well over six years later we never did come up with a better name, although there was at least one IRC conversation about it, and perhaps "Rakudo Star" is too well established as a brand at this point anyway. R* bundles the Rakudo compiler, the main docs, a module installer, some modules and some further docs.
However, one radical change is happening soon and that is a move from panda to
zef as the module installer. Panda has served us well for many years but zef is
both more featureful and more actively maintained. Zef can also install Perl
6 modules off CPAN although the CPAN-side support is in its early days. There
is a zef branch (pull requests welcome!) and a tarball at:
http://pl6anet.org/drop/rakudo-star-2016.12.zef-beta2.tar.gz
Panda has been patched to warn that it will be removed and to advise the use of
zef. Of course anyone who really wants to use panda can reinstall it using zef
anyway.
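For those new to zef, the basic commands mirror panda's (a quick sketch; check zef's own help for the current set):

    zef search HTTP              # search the ecosystem for matching modules
    zef install HTTP::UserAgent  # install a module and its dependencies
    zef install panda            # reinstalling panda, as described above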
The modules inside R* haven't changed much in a while. I am considering adding
DateTime::Format (shown by ecosystem stats to be widely used) and
HTTP::UserAgent (probably the best pure perl6 web client library right now).
Maybe some modules should also be removed (although this tends to be more
controversial!). I am also wondering about OpenSSL support (if the library is
available).
p6doc needs some more love as a command line utility, since most of the focus has been on the website docs, and some of those changes have adversely impacted command line use; e.g. under Windows cmd.exe "perl 6" is no longer correctly displayed by p6doc. I wonder if the website generation code should be decoupled from the pure docs and the p6doc command line (since R* has to ship any new modules used by the website). p6doc also needs a better and faster search (using sqlite?). R* also ships some tutorial docs, including a PDF generated from perl6intro.com. We only ship the English one; localisation to other languages could be useful.
Currently R* is released roughly every three months (unless significant breakage leads to a bug fix release). Problems tend to happen with the less widely used systems (Windows and the various BSDs) and also with the module installers and some modules. R* is useful in spotting these issues missed by roast. Rakudo itself is still in rapid development. At some point a less frequently updated distribution (Star LTS or MTS?) will be needed for Linux distribution packagers and those using R* in production. There are also some question marks over support for different language versions (6.c and 6.d).
Above all what R* (and Rakudo Perl 6 in general) needs is more people spending
more time working on it! JDFI! Hopefully this blog post might
encourage more people to get involved with github pull requests.
https://github.com/rakudo/star
Feedback, too, in the comments below is actively encouraged.
There is a Release Candidate for Rakudo Star 2016.11 (currently RC2) available at
http://pl6anet.org/drop/
This includes binary installers for Windows and Mac.
Usually Star is released about every three months, but last month's release didn't include a Windows installer, so there is an extra release this month.
I'm hoping to release the final version next weekend and would be grateful if people could try this out on as many systems as possible.
Any feedback: email steve *dot* mynott *at* gmail *dot* com
Full draft announce at
https://github.com/rakudo/star/blob/master/docs/announce/2016.11.md
We turned up in Cluj via Wizz Air to probably one of the best pre-YAPC parties ever, located over three levels on the rooftop of Evozon's plush city centre offices. We were well supplied with excellent wine, snacks and the local Ursus beer, and had many interesting conversations with old friends.
On the first day Tux spoke about his Text::CSV modules for both Perl 5 and 6, and I did a short talk later in the day on benchmarking Perl 6. Only Nicholas understood my trainspotter joke slide with the APT and the Deltic! Sadly my talk clashed with Lee J talking about Git, which I wanted to see, so I await the YouTube version! Jeff G then spoke about Perl 6 and parsing languages such as JavaScript. Sadly I missed Leon T's Perl 6 talk, which I also plan on watching on YouTube. Tina M gave an excellent talk on writing command line tools. She also started the lightning talks with an evangelical talk about how tmux was better than screen. Geoffrey A spoke about configuring sudo to run restricted commands in one directory, which seemed a useful technique to me. Dave C continued his conference tradition of dusting off his Perl Vogue cover and showing it again; the age of the image was emphasised by the amazingly young looking mst on it. And Stefan S ended with a call for Perl unification.
The main social event was in the courtyard of the main museum off the central square, with free food and beer all evening and an impressive light show on the slightly crumbling facade. There were some strange chairs which resembled cardboard origami but proved more comfortable than they looked when I was finally able to sit in one. The quality of the music improved as the evening progressed (or maybe the beer helped). I was amazed to see Perl Mongers actually dancing, apparently inspired by the younger Cluj.pm members.
Day Two started with Sawyer's State of the Velociraptor, which he had, sensibly, subcontracted to various leading lights of the Perl Monger community. Sue S (former London.pm leader) was up first with a short and sweet description of London.pm. Todd R talked about Houston.pm. Aaron Crane spoke about the new, improved, friendlier p5p. Tina spoke about Berlin.pm and the German Perl community site she had written back in the day. This new format worked very well, and it was obvious Perl Mongers groups could learn much from each other. Max M followed with a talk about using Perl and Elasticsearch to index websites and documents, and Job spoke about accessibility.
The 15:05 slot had, from the perspective of London.pm, one of the most unfortunate scheduling clashes at YAPC::EU ever, with three titans of London.pm (all former leaders) battling for audience share. I should perhaps tread carefully here lest bias become apparent, but the heavyweight Sue Spence was, perhaps treacherously, talking about Go in the big room while Dave Cross and Tom talked about Perl errors and HTML forms respectively in the other rooms. This momentous event should be reproducible by playing all three talks together in separate windows once they are available.
Domm did a great talk on Postgres which made me keen to use this technology again. André W described how he got Perl 6 running on his Sailfish OS phone, while Larry did a good impression of a microphone stand. I missed most of Lance Wicks' talk, but the bit I caught at the end made me eager to watch the whole thing.
Guinevere Nell gave a fascinating lightning talk about agent-based economic modelling. Lauren Rosenfield spoke of porting (with permission) a "Python for CS" book to Perl 6. Lukas Mai described his journey from Perl to Rust. Lee J talked about photography before Sue encouraged people to break the London.pm website. Outside the talk rooms, on their stall, Liz and Wendy had some highly cool stuffed toy Camelia butterflies produced by the Beverly Hills Teddy Bear Company, and some strange "Camel Balls" bubblegum. At the end of the day Sue cat-herded many Mongers to eat at the Enigma Steampunk Bar in central Cluj, with the cunning ploy of free beer money (recycled from the previous year's sherry money).
The third day started with Larry's keynote, in which photographs of an incredible American house, "Fallingwater", and Chinese characters (including "arse rice") featured heavily. Sweth C gave a fast and very useful introduction to Swift. Nicholas C then confused a room of people for an hour with a mixture of real Perl 5 and 6 and an alternative timeline, complete with T-shirts. The positive conclusion was that even if the past had been different, the present isn't likely to have been much better for the Perl language family than it is now! Tom spoke about code review and Sawyer about new features in Perl 5.24. Later I heard Ilya talk about running Perl on his Raspberry Pi Model B and increasing the speed of his application very significantly to compensate for the Pi's low speed! We finished with lightning talks, where we heard about the bug tracker OTRS (which was new to me), Job spoke about assistive tech, and Nine asked us to ask our bosses for money for Perl development, amongst several other talks. We clapped a lot in thanks, since this was clearly a particularly well organised YAPC::EU (due to Amalia and her team!), and left to eat pizza and fly away the next day. Some stayed to visit a salt mine (which looked most impressive from the pictures!) and some stayed longer due to Lufthansa cancelling their flights back!
The first night's meeting was in a large beer bar in the centre of Nuremberg. We went back to the Best Western to find a certain ex-Pumpkin already resident in the bar. Despite several of the well named Bitburgers we managed to arrive at the conference venue on time the following morning. Since my knowledge of German is limited to a C grade 'O' Level last century, my reviews will mostly be limited to the English talks. Apologies in advance to those who gave German talks (not unreasonable considering the country); hopefully other blog posts will cover these.
Masak spoke about the dialectic between planning (like physics) and chaos (like
biology) in software development.
http://masak.org/carl/gpw-2016-domain-modeling/talk.pdf
Tobias gave a good beginners guide to Perl 6 in German and I was able to follow
most of the slides since I knew more Perl 6 than German and even learnt a thing
or two.
After lunch Stefan told us he had been dancing around drunk and naked at the turn of the millennium, and also about communication between Perl 6 and Perl 5 and back again via his module Inline::Perl5 (from Perl 6) -- the most important takeaway being that "use Foo::Bar:from<Perl5>" can be used from Perl 6 and "use Inline::Perl6" from Perl 5. The modules build bridges like those in the old school computer game "Lemmings".
http://niner.name/talks/Perl%205%20and%20Perl%206%20-%20a%20great%20team/Perl%205%20and%20Perl%206%20-%20a%20great%20team.odp
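A minimal sketch of those two directions (Foo::Bar here is a placeholder for any Perl 5 module):

    # Perl 6 side: load a Perl 5 module through Inline::Perl5
    use Foo::Bar:from<Perl5>;

    # Perl 5 side: pull in Perl 6 support
    use Inline::Perl6;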
Max told us (in German) about his Dancer::SearchApp search engine, which is based on Elasticsearch, but I was able to follow along on the English version of his slides on the web.
http://corion.net/talks/dancer-searchapp/dancer-searchapp.en.html
Sue got excited about this. Tina showed us some slides in Vim and her module to add command line tab completion to script arguments using zsh and bash. I wondered whether some of her code could be repurposed to add fish-style man page parsing autocompletion to zsh. She also gave a good lightning talk about Ingy's command line utility for GitHub.
https://github.com/perlpunk/myslides/tree/master/app-spec
Second day started early with Moritz talking about Continuous Delivery which
could mean just delivering to a staging server. He was writing a book about it
at deploybook.com with slides at:
https://deploybook.com/talks/gpw2016-continuous-delivery.pdf
Salve wanted us to write elegant code as a reply to the "Perl Jam" guy at CCC, in a self-confessed "rant".
Sawyer described writing Ref::Util to optimise things like "ref $foo" in a hardcore Perl 5 XS/core talk, and Masak told us about his little 007 language, written in Perl 6 as a proof-of-concept playground for future Perl 6 extended macro support, and demonstrated code written over lunch in support of this.
Stefan gave a great talk about CURLI and explained the complexity of what was
intended.
I gave my talk on "Simple Perl 6 Fractals and Concurrency" on Friday. It started badly with AV issues on my side but seemed well received. It was useful speaking with people about it afterwards; I managed to speed things up *after* the talk and should have new material for a 2.0 version.
There were very good talks on extracting data from PDFs and writing JSON APIs. https://github.com/mickeyn/PONAPI looked very interesting and would have saved me much coding at a recent job.
There were some great lightning talks at the end of the day. Sawyer wanted people to have English slides and gave his talk in Hebrew to stress this.
Things ended Friday night with great food and beer in a local bar.
To me it seemed a particularly good FOSDEM for both Perl 5/6 and other talks, although very crowded as usual, and I didn't see the usual *BSD or Tor stalls. I was struck by the statistic that there were about 500 speakers out of many thousands of attendees, on the order of one speaker per ten attendees, which is very high.
Videos are already starting to appear at
On Saturday I started with Poettering and systemd, which was a keynote and perhaps a little disappointing, since he usually is a better speaker and the audio was a little indistinct. systemd had won, being used by all distros except Gentoo and Slackware. They were now working on a DNS resolver component which supported DNSSEC, although in practice validating signed zone files would slow down browsing, and currently only 2% of websites had it activated. He didn't mention the strong criticisms of systemd's security by crypto experts such as DJB.
The most amusing talk was Stark's retro running of Postgres on
NetBSD/VAX which exposed some obscure OS bugs and was livened up by a
man in an impressive Postgres Elephant costume appearing. We later
spoke to Mr Elephant who said he was both blind and very hot at the
time. I then went to the Microkernel room to hear about GNU/Hurd
progress from Thibault since this room is usually "OPEN" and he's an
excellent speaker. I noticed even this obscure room was quite crowded
as compared with previous years so I'd guess total attendees this year
were high. He stressed the advantages of running device drivers in
userspace as allowing more user "freedom" to mount fs etc. without
root and improving kernel stability since the drivers could crash and
restart without bringing down the kernel. In previous years he had
talked of his DDE patches allowing linux 2.6 hardware drivers on Hurd
and this year he was using the NetBSD Rump kernel under Hurd to add
sound support with USB support promised. His demo was RMS singing his
song on his Hurd laptop. The irony was he needed to use BSD code on a
GNU/BSD/Hurd system to do it! There had been some work on X86-64 Hurd
but it wasn't there yet since he needed more help from the community.
I then saw some lightning talks (actually 20 minutes long) including a good one on C refactoring.
The Perl dinner on Saturday night featured the usual good food and conversation, and the devroom was on Sunday. Ovid spoke about Perl 6 and its advantages (such as being able to perform maths on floats correctly). I had a Python guy sitting next to me who admitted he had never been to a Perl talk before, so that was a success in reaching
someone new. Will Braswell spoke next about his "RPerl" compiler, which translated his own quite restricted subset of Perl 5 (no regexps yet and no $_) line by line into C++ in order to run some of the language shootout benchmarks (a graphical animation of planetary motion) at increased speed. I'd not seen Will before and he was an excellent speaker who left me more impressed than I'd expected, and I hope he gets to YAPC::EU in the summer. I saw some non-Perl stuff next for variety, including a good talk on the Go debugger Delve, which was aware of Go's concurrency and could be used as a basic REPL. I returned to Perl to see Bart explain some surprisingly simple X86-64 assembly language to do addition and ROT13, which he interfaced with Perl 6 using NativeCall (although it struck me that the CPAN P5NCI module on Perl 5 would also have worked).
Again an excellent talk, and a good start to a run of some of the best Perl talks I'd ever seen. Stevan Little's talk was one of his most amusing ever, and perl wasn't really dead.
Sawyer also did an excellent promotion of Perl 5 targeted at the
people who maybe hadn't used it since the early 2000s explaining what
had changed. Liz finished with her autobiographical account of Perl
development and some nice short Perl 6 examples. We all ate again in
the evening together my only regrets being I'd missed the odd talk or
two (which I should be able to watch on video).
At FOSDEM 2015, Larry announced that there will likely be a Perl 6 release candidate in 2015, possibly around the September timeframe. What we’re aiming for is concurrent publication of a language specification that has been implemented and tested in at least one usable compilation environment — i.e., Rakudo Perl 6.
So, for the rest of 2015, we can expect the Rakudo development team to be highly focused on doing only those things needed to prepare for the Perl 6 release later in the year. And, from previous planning and discussion, we know that there are three major areas that need work prior to release: the Great List Refactor (GLR), Native Shaped Arrays (NSA), and Normalization Form Grapheme (NFG).
…which brings us to Parrot. Each of the above items is made significantly more complicated by Rakudo’s ongoing support for Parrot, either because Parrot lacks key features needed for implementation (NSA, NFG) or because a lot of special-case code is being used to maintain adequate performance (lists and GLR).
At present most of the current userbase has switched over to MoarVM as the backend, for a multitude of reasons. And more importantly, there currently aren’t any Rakudo or NQP developers on hand that are eager to tackle these problems for Parrot.
In order to better focus our limited resources on the tasks needed for a Perl 6 language release later in the year, we’re expecting to suspend Rakudo’s support for the Parrot backend sometime shortly after the 2015.02 release.
Unfortunately the changes that need to be made, especially for the GLR, make it impractical to simply leave existing Parrot support in place and have it continue to work at a “degraded” level. Many of the underlying assumptions will be changing. It will instead be more effective to (re)build the new systems without Parrot support and then re-establish Parrot as if it is a new backend VM for Rakudo, following the techniques that were used to create JVM, MoarVM, and other backends for Rakudo.
NQP will continue to support Parrot as before; none of the Rakudo refactorings require any changes to NQP.
If there are people that want to work on refactoring Rakudo's support for Parrot so that it's more consistent with the other VMs, we can certainly point them in the right direction. For the GLR this will mainly consist of migrating Parrot-specific code from Rakudo into NQP's APIs. For the NSA and NFG work, it will involve developing a lot of new code and feature capabilities that Parrot doesn't possess.
This past weekend I attended the 2014 Austrian Perl Workshop and Hackathon in Salzburg, which turned out to be an excellent way for me to catch up on recent changes to Perl 6 and Rakudo. I also wanted to participate directly in discussions about the Great List Refactor, which has been a longstanding topic in Rakudo development.
What exactly is the “Great List Refactor” (GLR)? For several years Rakudo developers and users have identified a number of problems with the existing implementation of list types — most notably performance. But we’ve also observed the need for user-facing changes in the design, especially in generating and flattening lists. So the term GLR now encompasses all of the list-related changes that seem to want to be made.
It’s a significant (“great”) refactor because our past experience has shown that small changes in the list implementation often have far-reaching effects. Almost any bit of rework of list fundamentals requires a fairly significant refactor throughout much of the codebase. This is because lists are so fundamental to how Perl 6 works internally, just like the object model. So, as the number of things that are desirable to fix or change has grown, so has the estimated size of the GLR effort, and the need to try to achieve it “all at once” rather than piecemeal.
The pressure to make progress on the GLR has been steadily increasing, and APW2014 was significant in that a lot of the key people needed for that would be in the same location. Everyone I’ve talked to agrees that APW2014 was a smashing success, and I believe that we’ve now resolved most of the remaining GLR design issues. The rest of this post will describe that.
This is an appropriate moment to recognize and thank the people behind the APW effort. The organizers did a great job. The Techno-Z and ncm.at venues were fantastic locations for our meetings and discussions, and I especially thank ncm.at, Techno-Z, yesterdigital, and vienna.pm for their generous support in providing venues and food at the event.
So, here’s my summary of GLR issues where we were able to reach significant progress and consensus.
(Be sure to visit our gift shop!)
Much of the GLR discussion at APW2014 concerned flattening list context in Perl 6. Over the past few months and years Perl 6 has slowly but steadily reduced the number of functions and operators that flatten by default. In fact, a very recent (and profound) change occurred within the last couple of months, when the .[] subscript operator for Parcels switched from flattening to non-flattening. To illustrate the difference, the expression (10,(11,12,13),(14,15)).[2] previously would flatten out the elements to return 12, but now no longer flattens and produces (14,15). As a related consequence, .elems no longer flattens either, changing from 6 to 3.
Unfortunately, this change created an inconsistency between Parcels and Lists, because .[] and .elems on Lists continued to flatten. Since programmers often don't know (or care) when they're working with a Parcel or a List, the inconsistency was becoming a significant pain point. Other inconsistencies were increasing as well: some methods like .sort, .pick, and .roll have become non-flattening, while other methods like .map, .grep, and .max continue to flatten. There's been no really good guideline to know or decide which should do which.
Flattening behavior is great when you want it, which is a lot of the time. After all, that’s what Perl 5 does, and it’s a pretty popular language. But once a list is flattened it’s hard to get the original structure if you wanted that — flattening discards information.
So, after many animated discussions, review of lots of code snippets, and seeking some level of consistency, the consensus on Perl 6 flattening behavior seems to be:

- Arrays and the [ ] array constructor are unchanged; they continue to flatten their input elements. (Arrays are naturally flat.)
- The for statement flattens: for @a,@b { ... } flattens @a,@b and applies the block to each element of @a followed by each element of @b. Note that flattening can easily be suppressed by itemization, thus for @a, $@b { ... } flattens @a but does all of @b in a single iteration.
- Methods like .map, .grep, and .first won't flatten their invocant… the programmer will have to use .flat.grep and .flat.first to flatten the list invocant. Notably, .map will no longer flatten its invocant — a significant change — but we're introducing .for as a shortcut for .flat.map to preserve a direct isomorphism with the for statement.
- There's ongoing conjecture about creating an operator or syntax for flattening, likely a postfix of some sort, so that something like .|grep would be a convenient alternative to .flat.grep, but it doesn't appear that decision needs to be made as part of the GLR itself.
, but it doesn’t appear that decision needs to be made as part of the GLR itself.((1,2), 3, (4,5)).map({...}) # iterates over three elements map {...}, ((1,2),3,(4,5)) # iterates over five elements (@a, @b, @c).pick(1) # picks one of three arrays pick 1, @a, @b, @c # flatten arrays and pick one element
As a result of improvements in flattening consistency and behavior, it appears that we can eliminate the Parcel type altogether. There was almost unanimous agreement and enthusiasm at this notion, as having both the Parcel and List types is quite confusing.
Parcel was originally conceived for Perl 6 as a “hidden type” that programmers would rarely encounter, but it didn’t work out that way in practice. It’s nice that we may be able to hide it again — by eliminating it altogether.
Thus infix:<,> will now create Lists directly. It's likely that comma-Lists will be immutable, at least in the initial implementation. Later we may relax that restriction, although immutability also provides some optimization benefits, and Jonathan points out that it may help to implement fixed-size Arrays.
Speaking of optimization, eliminating Parcel may be a big boost to performance, since Rakudo currently does a fair bit of converting Parcels to Lists and vice versa, much of which goes away if everything is a List.
During a dinner discussion Jonathan reminded me that Synopsis 4 has all of the looping constructs as list generators, but Rakudo really only implements for at the moment. He also pointed out that if the loop generators are implemented, many functions that currently use gather/take could potentially use a loop instead, and this could be much more performant. After thinking on it a bit, I think Jonathan is on to something. For example, the code for IO::Handle.lines() currently does something like:

    gather {
        until not $!PIO.eof {
            $!ins = $!ins + 1;
            take self.get;
        }
    }
With a lazy while generator, it could be written as

    (while not $!PIO.eof { $!ins++; self.get });

This is lazily processed, but doesn't involve any of the exception or continuation handling that gather/take requires. And since while might choose to not be strictly lazy, but lines() definitely should be, we may also use the lazy statement prefix:

    lazy while not $!PIO.eof { $!ins++; self.get };

The lazy prefix tells the list returned from the while that it's to generate as lazily as it possibly can, only returning the minimum number of elements needed to satisfy each request.
So as part of the GLR, we'll implement the lazy list forms of all of the looping constructs (for, while, until, repeat, loop). In the process I also plan to unify them under a single LoopIter type, which can avoid repetition and be heavily optimized.
This new loop iterator pattern should also make it possible to improve performance of for statements when performed in sink context. Currently for statements always generate calls to .map, passing the body of the loop as a closure. But in sink context the block of a for statement could potentially be inlined. This is the way blocks in most other loops are currently generated. Inlining the block of the body could greatly increase performance of for loops in sink context (which are quite common).
Many people are aware of the problem that constructs such as for and map aren't "consuming" their input during processing. In other words, if you're doing .map on a temporary list containing a million elements, the entire list stays around until all have been processed, which could eat up a lot of memory.
Naive solutions to this problem just don’t work — they carry lots of nasty side effects related to binding that led us to design immutable Iterators. We reviewed a few of them at the hackathon, and came back to the immutable Iterator we have now as the correct one. Part of the problem is that the current implementation is a little “leaky”, so that references to temporary objects hang around longer than we’d like and these keep the “processed” elements alive. The new implementation will plug some of the leaks, and then some judicious management of temporaries ought to take care of the rest.
In the past year much work has been done to improve sink context in Rakudo, but I've never felt the implementation we have now is what we really want. For one, the current approach bloats the codegen by adding a call to .sink after every sink-context statement (i.e., most of them). Also, this only handles sink for the object returned by a Routine — the Routine itself has no way of knowing it's being called in sink context such that it could optimize what it produces (and not bother to calculate or return a result).
We'd really like each Routine to know when it's being called in sink context. Perl 5 folks will instantly say "Hey, that's wantarray!", which we long ago determined isn't generally feasible in Perl 6.
However, although a generalized wantarray is still out of reach, we can provide it for the limited case of detecting the sink contexts that we're generating now, since those are all statically determined. This means a Routine can check if it's been called in sink context, and use that to select a different codepath or result. Jonathan speculates that the mechanism will be a flag in the callsite, and I further speculate that the Routine will have a macro-like keyword to check that flag.
Even with detecting context, we still want any objects returned by a Routine to have .sink invoked on them. Instead of generating code for this after each sink-level statement, we can do it as part of the general return handler for Routines; a Routine in sink context invokes .sink on the object it would've otherwise returned to the caller. This directly leads to other potential optimizations: we can avoid .sink on some objects altogether by checking their type, and the return handler probably doesn't need to do any decontainerizing on the return value.
As happy as I am to have discovered this way to pass sink context down into Routines, please don’t take this as opening an easy path to lots of other wantarray-like capabilities in Perl 6. There may be others, and we can look for them, but I believe sink context’s static nature (as well as the fact that a false negative generally isn’t harmful) makes it quite a special case.
One area that has always been ambiguous in the Synopses is determining when various contextualizing methods must return a copy or are allowed to return self. For example, if I invoke .values on a List object, can I just return self, or must I return a clone that can be modified without affecting the original? What about .list and .flat on an already-flattened list?
The ultra-safe answer here is probably to always return a copy… but that can leave us with a lot of (intermediate) copies being made and lying around. Always returning self leads to unwanted action-at-a-distance bugs.
After discussion with Larry and Jonathan, I've decided that true contextualizers like .list and .flat are allowed to return self, but other methods are generally obligated to return an independent object. This seems to work well for all of the methods I've considered thus far, and may be a general pattern that extends to contextualizers outside of the GLR.
(small matter of programming and documentation)
The synopses — especially Synopsis 7 — have always been problematic in describing how lists work in Perl 6. The details given for lists have often been conjectural ideas that quickly proved to be epic fails in practice. The last major list implementation was done in the summer of 2010, and Synopsis 7 was supposed to be updated to reflect that design. However, the ongoing inconsistencies (that have led to the GLR) really precluded any meaningful update to the synopses.
With the progress recently made at APW2014, I'm really comfortable about where the Great List Refactor is leading us. It won't be a trivial effort; there will be significant rewriting and refactoring of the current Rakudo codebase, most of which will have to be done in a branch. And of course we'll have to do a lot of testing, not only of the Perl 6 test suite but also of the impact on the module ecosystem. But now that most of the hard decisions have been made, we have a roadmap that I hope will enable most of the GLR to be complete and documented in the synopses by Thanksgiving 2014.
Stay tuned.
[This is a response to the Russian Perl Podcast transcribed by Peter Rabbitson and discussed at blogs.perl.org.]
I found this translation and podcast to be interesting and useful, thanks to all who put it together.
Since there seems to have been some disappointment that Perl 6 developers didn’t join in the discussions about “Perl 7” earlier this year, and in the podcast I’m specifically mentioned by name, I thought I’d go ahead and comment now and try to improve the record a bit.
While I can’t speak for the other Perl 6 developers, in my case I didn’t contribute to the discussion because nearly all the things I would’ve said were already being said better by others such as Larry, rjbs, mst, chromatic, etc. I think a “Perl 7” rebrand is the wrong approach, for exactly the reasons they give.
A couple of statements in the podcast refer to “hurting the feelings of Perl 6 developers” as being a problem resulting from a rebrand to Perl 7. I greatly appreciate that people are concerned with the possible impact of a Perl 5 rebrand on Perl 6 developers and our progress. I believe that Perl 6’s success or failure at this point will have little to do with the fact that “6 is larger than 5”. I don’t find the basic notion of “Perl 7” offensive or directly threatening to Perl 6.
But I fully agree with mst that “you can’t … have two successive numbers in two brands and not expect people to be confused.” We already have problems explaining “5” and “6” — adding more small integers to the explanation would just make an existing problem even worse, and wouldn’t do anything to address the fundamental problems Perl 6 was intended to resolve.
Since respected voices in the community were already saying the things I thought about the name “Perl 7”, I felt that adding my voice to that chorus could only be more distracting than helpful to the discussion. My involvement would inject speculations on the motivations of Perl 6 developers into what is properly a discussion about how to promote progress with Perl 5. I suspect that other Perl 6 developers independently arrived at similar conclusions and kept silent as well (Larry being a notable exception).
I’d also like to remark on a couple of @sharifulin’s comments in the podcast (acknowledging that the transcribed comments may be imprecise in the translation from Russian):
First, I’m absolutely not the “sole developer” of Perl 6 (13:23 in the podcast), or even the sole developer of Rakudo Perl 6. Frankly I think it’s hugely disrespectful to so flippantly ignore the contributions of others in the Perl 6 development community. Let’s put some actual facts into this discussion… in the past twelve months there have been over 6,500 commits from over 70 committers to the various Perl 6 related repositories (excluding module repositories), less than 4% (218) of those commits are from me. Take a look at the author lists from the Perl 6 commit logs and you may be a little surprised at some of the people you find listed there.
Second, there is not any sense in which I think that clicking “Like” on a Facebook posting could be considered “admitting defeat” (13:39 in the podcast). For one, my “Like” was actually liking rjbs’ reply to mst’s proposal, as correctly noted in the footnotes (thanks Peter!).
But more importantly, I just don’t believe that Perl 5 and Perl 6 are in a battle that requires there to be a conquerer, a vanquished, or an admission of defeat.
Pm